Server consolidation is fast becoming the strategy of choice for IT organisations faced with burgeoning costs from their distributed application environments. The success stories are piling up:

  • Gap reported saving $2 million annually by consolidating from 269 two-way servers to 10 eight-way servers, and reaping a 43 per cent improvement in ROI over two years.

  • Pella consolidated 16 Microsoft Exchange mail servers down to six, increasing reliability and cutting costs in half.

  • A third company consolidated 64 servers down to just three boxes, saving more than 60 per cent in staffing and maintenance costs.

According to IDC research, 79 per cent of US organisations are in the middle of server and storage consolidation projects, up from 52 per cent in 2000. "And the vast majority - 85 per cent to 90 per cent of the users we've surveyed - are happy with the way things have gone and plan to do more," says Matt Eastwood, IDC's worldwide research programme director.

But as with any IT project, there's a right way and a wrong way to approach server consolidation. The primary hurdles to success, users and analysts say, are not so much technical as they are organisational and political.

Get buy-in
When IT is decentralised, "getting other groups to buy in to the consolidation idea is not easy," says Devin York, IT director of financial systems for Continental Airlines.

Under York's guidance, the financial systems group spearheaded the consolidation of 50 application servers for six departments onto two 32-way HP 9000 Superdome servers running HP-UX, saving the airline $2 million off the bat and $1 million a year in maintenance costs.

But Continental's internal setup made the process difficult. At the airline, IT projects are departmentally driven. Over the years, this had helped each department better track and limit costs, but it had left the company's data centre with a hodgepodge of space-eating, difficult-to-maintain application servers.

"Every group had its own Web server, application tier and database server, so for every group there were at least three boxes that needed to be supported and maintained," York says. Some mission-critical departmental applications were distributed, as well. Some pieces of his group's revenue accounting package were housed on the mainframe, while other parts were on different servers across the company.

"The systems and processes were a little bit everywhere," York says. "And that's not a great idea." This architecture was especially problematic when York's group looked to upgrade the revenue accounting application because simply finding the various pieces and getting them all synched up at once was difficult.

The Superdome was a good fit for Continental because it supported hardware partitioning and HP-UX, on which most of the various departments' mission-critical applications already ran, York says. Plus, consolidating applications on the Superdome would help all the groups save money, while providing each with a better server environment, complete with failover and redundancy -- features none could afford on its own. Still, some groups were leery.

"Each group has pretty tough performance goals set by management, so people were hesitant to give up their hardware," he says. "They wanted to be able to control their own destinies when it came to meeting yearly goals."

The trick was proving to management that they could meet or beat their goals while saving money. "It came down to a financial argument. We showed them how they could get better availability and better performance for 15 per cent less than the annual cost of supporting their own environments," York says.
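
York's pitch reduces to simple arithmetic: price a department's share of the consolidated platform at a stated discount against what it already spends to run its own stack. The sketch below illustrates that comparison; the cost figures and helper functions are hypothetical placeholders, not Continental's actual numbers -- only the 15 per cent discount comes from the article.

```python
def standalone_cost(hardware, maintenance, staffing):
    """Annual cost for a department running its own server stack."""
    return hardware + maintenance + staffing

def consolidated_cost(standalone, discount=0.15):
    """Department's share of the consolidated platform, priced at a
    stated discount (here 15 per cent) against its standalone cost."""
    return standalone * (1 - discount)

# Illustrative per-department figures (placeholders, not real data)
dept = standalone_cost(hardware=120_000, maintenance=60_000, staffing=90_000)
shared = consolidated_cost(dept)

print(f"Standalone:   ${dept:,.0f}/yr")
print(f"Consolidated: ${shared:,.0f}/yr")
print(f"Savings:      ${dept - shared:,.0f}/yr")
```

Framing the share as "your current spend minus 15 per cent" rather than as an absolute price is what made the argument hard to refuse: every department sees a guaranteed saving against its own baseline.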

In the end, when it came time to justify costs to upper management, no department wanted to be the odd one out. "It becomes difficult for them to say 'No' because the next time they need to upgrade, their cost is going to be more than ours and someone in management will wonder why they didn't go along with our consolidation. You can't say 'No' to better performance at lower cost when you're at an airline that's struggling," he explains.

With the redesign of the revenue accounting system nearing its deadline, the financial systems group implemented the new Superdome architecture in 2002, bringing applications from the six other groups onboard at the same time. The financial systems group manages the hardware for the others, although each group retains responsibility at the application level.

Go slow
Many organisations find that starting with a non-critical application will help demonstrate the consolidation concept. "We see people targeting the low-hanging fruit first," IDC's Eastwood says. "They target the infrastructure -- the e-mail systems, the file and print servers, the networking-type servers -- because those are the assets that IT has full control over."

At Zurich Life, this meant starting with development and staging servers. Using VMware virtualisation software and HP ProLiant DL380 and DL580 servers, the insurance firm reduced the overall number of application servers from about 200 Windows-based machines to fewer than 80, says Mark Bradley, who led the project and is now an applications development analyst at a large financial company.
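
The consolidation ratio is the headline metric in projects like this. Zurich Life's figures (about 200 machines down to fewer than 80, from the article) work out as follows; the variable names are just for illustration:

```python
# Consolidation arithmetic for Zurich Life's figures (from the article)
before, after = 200, 80

retired = before - after              # physical machines eliminated
reduction = retired / before          # fraction of the estate retired
ratio = before / after                # old machines per remaining server

print(f"Servers retired: {retired} ({reduction:.0%})")
print(f"Consolidation ratio: {ratio:.1f} : 1")
```

A 2.5:1 ratio is conservative for virtualised development and staging workloads, which is partly why they make a safe first target.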

"To do any sort of change in the production environment, there are so many groups involved that getting approvals is difficult," he says. "But in a staging-and-development environment, you're under the radar, so getting a change in and out is easier." Once you've proved a concept in the development area, getting approvals and starting on the production servers is easier, he says.

At Endress & Hauser, a global process controls manufacturer, starting small meant targeting a Linux-based IT project management application running on several IBM AS/400 servers distributed throughout the company's 35 business units. "We started with a very small application," says E&H executive board member Jan Olaf. Because only IT used this Linux application, Olaf's group felt comfortable starting with it. "We were the only ones who were impacted by the consolidation, and we were able to get feedback directly from our own people," he says.

Endress & Hauser eventually consolidated its 19 production SAP systems, together with their associated DB2 database servers, onto just two IBM zSeries 990 mainframes configured with 48 zSeries processors and running a mix of Linux and z/OS. The production SAP systems, which support more than 3,500 users in 35 corporate locations worldwide, are distributed across 14 logical partitions, while the databases occupy six logical partitions on the zSeries 990. The mainframes are housed at Endress & Hauser's data centre in the headquarters city of Weil am Rhein, Germany.
