General Motors might be globalising its business, but in the area of servers, the auto-maker is thinking small. "The smallest physical-sized servers have a far better cost per unit of work than the large, 32-CPU-class boxes," says Mark Hillman, GM's director of global computing centres. The former will serve as building blocks as the company moves from a project-specific, application-centric focus to a standardised server utility model.
The effort started with engineering, but "now we're moving that to the more general-purpose business applications," says Hillman. The results so far have been promising. By shifting from vertically scaled servers to commodity systems, GM reduced operating costs by 65 per cent in its Toronto data centre. That project will continue through 2008, Hillman says.
GM isn't alone in planning major server projects for the year ahead. In sister publication Computerworld's most recent Vital Signs survey, servers topped the list of planned purchases for 2008, and nearly one in three respondents (32 per cent) said that server spending represents the single biggest increase in their IT budgets.
IT executives cite a litany of reasons for moving server-related projects to the front burner. Consolidation is a big one. Some organisations are also preparing for new versions of key software, such as Windows Server 2008, and servers that will come preloaded with an embedded VMware hypervisor to speed the setup of virtual machines. "That's one of the biggest things we'll see," says James Staten, an analyst at Forrester Research.
Then there are regular server refreshes. After the dot-com crash, many organisations began keeping equipment for four, five or even seven years rather than the usual three. Many of those systems are now being retired.
"Data centres today have a fairly old infrastructure. We find some very old boxes out there," says Klaus Schmelzeisen, vice president of HP's consulting and integration infrastructure practice.
Compared with the Pentium 4-class and older machines still in many data centres, the latest technology performs better, takes up less space and is far more energy efficient. New servers also use multi-core processors, which are better suited for the consolidation of the many Linux and Windows applications that continue to migrate onto virtual machines.
"Forty per cent of our servers are virtualised," and that consolidation will continue in 2008, says Jim Hull, group head of network and operations services for MasterCard Worldwide's global technology and operations group.
The Bank of New York Mellon also plans to "hit the virtualisation space more" this year, says Dennis Smith, first vice president in the bank's IT group. Having just completed a merger last July, the bank is moving rapidly to establish hardware and software standards as it consolidates data centres. "We are speeding up the refresh cycle because of the need to do something quickly," Smith says.
More mission-critical applications are also migrating onto commodity servers and virtual machines. "Unix platforms were more expensive but considered to be more robust," Hull says. Not so much anymore. MasterCard is migrating more of its business-critical applications onto Linux and Windows x86-class servers, which it considers to be more secure and reliable than they once were, he says.
Competition among server vendors is also making transitions more tempting. It's a buyer's market, says Hillman. "The market is much more mature and more of a commodity than it ever was."
The big push toward using larger, scaled-up servers a few years ago has ebbed. "With cheaper, smaller boxes, migrating or replacing that [Sun E10K or IBM System p] - scaling out vs. scaling up - is the smart thing to do," Hull says. "You can do many of the same functions with six or seven smaller boxes that are a lot cheaper." That wasn't true five years ago, he says, adding, "We'll be doing more of that" type of consolidation this year.
Not every application on a mid-range system is a candidate for commoditisation or scaling out, but the trend is also putting downward pressure on prices for those proprietary Unix systems, says Hillman. "The commoditisation of servers is affecting the more traditional Unix systems," he says. "They absolutely are coming down in price."
New hardware needs
Virtualisation is also redefining server hardware itself, says Forrester's Staten. For example, IBM is seeing a surge in sales of four- and eight-processor systems, and the average revenue per machine has increased.
"[Customers] are plugging in all of the processors now, and a lot more memory than they used to," says Jay Bretzman, manager of product marketing for IBM's System x server line.
Hillman says GM's estimates of the ratio of CPUs to memory aren't holding steady. "We're finding that we need a lot more memory" to support the 10 to 20 Windows virtual machines or the four to six Linux virtual machines that GM is loading onto each physical server, he says.
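The memory pressure Hillman describes can be sketched as a back-of-the-envelope sizing calculation. The VM densities (10 to 20 Windows VMs, four to six Linux VMs per host) come from his figures; the per-VM memory allocations, hypervisor overhead and headroom factor are illustrative assumptions, not GM's numbers.

```python
def host_memory_gb(vm_count, gb_per_vm, hypervisor_overhead_gb=2, headroom=0.2):
    """Estimate physical RAM needed for a given VM density.

    headroom reserves spare capacity for usage spikes and live migration;
    hypervisor_overhead_gb covers the host OS and virtualisation layer.
    """
    base = vm_count * gb_per_vm + hypervisor_overhead_gb
    return base * (1 + headroom)

# 20 Windows VMs at an assumed 2 GB each
print(round(host_memory_gb(20, 2)))   # -> 50 (GB)

# 6 Linux VMs at an assumed 4 GB each
print(round(host_memory_gb(6, 4)))    # -> 31 (GB)
```

Even with modest per-VM allocations, a host at the upper end of GM's Windows density needs far more RAM than a one-application server ever did, which is why the old CPU-to-memory ratios no longer hold.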
Hull looks for servers that are workload-optimised. "Today you have specialty engines for scale-out and scale-up," he says. Even on mainframes, Hull says, MasterCard is migrating onto specialised processors such as the zSeries Application Assist Processor for running Java. MasterCard isn't alone: Many businesses are asking to see benchmarks of performance under specific workloads, says Rakesh Kumar, an analyst at Gartner.
Smith says Bank of New York Mellon will continue to consolidate using virtualisation but is also "looking hard" at expanding its use of blade servers to allow more rapid provisioning of physical servers. Of his top 10 projects for the coming year, Smith says that "three or four" involve servers.

To a lesser extent, energy efficiency is also driving server purchasing decisions. "Our watts per square foot are four or five times what they were a few years back," and the cost of power has become "very significant" for engineering applications, says Hillman.
For data centres that can't bring in more power, virtualisation is delivering power savings. An application running on an old 400-watt server can be consolidated onto a physical server that may consume half the power, and that application could use just 10 per cent of the new machine's resources if it runs within a virtual machine on that server, says Staten.
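The arithmetic behind Staten's example is simple enough to work through. The 400-watt figure, the halved power draw and the 10 per cent utilisation come straight from his scenario; filling the new host with ten such applications is an illustrative assumption.

```python
# Consolidation power math for Staten's scenario.
OLD_SERVER_WATTS = 400
NEW_SERVER_WATTS = 200   # the new host draws "half the power"
VM_SHARE = 0.10          # the app uses ~10% of the new machine as a VM

# Power attributable to the single application once virtualised
vm_watts = NEW_SERVER_WATTS * VM_SHARE
print(vm_watts)          # -> 20.0 W, versus 400 W before

# Assumption: ten such apps share the host, so ten 400 W boxes
# collapse into one 200 W box
watts_saved = 10 * OLD_SERVER_WATTS - NEW_SERVER_WATTS
print(watts_saved)       # -> 3800 W saved
```

Under those assumptions, each consolidated application's attributable draw falls by a factor of 20, which is why virtualisation appeals to data centres that cannot bring in more power.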
"Whenever we do server bake-offs, [energy efficiency] is a question that comes up," says Smith. The bank plans to have a program for data centre efficiency in place early this year. Often, however, power is a secondary consideration. "We don't buy something based on its power," but it makes sense to use virtualisation to cut power consumption and carbon output, Hull says.
Some organisations are using server virtualisation technology for more than just consolidation. Sean Wieland, technology officer at The Henry J. Kaiser Family Foundation, plans to use virtualisation this year for business continuity. He says that his organisation plans to replicate virtual machine instances of Exchange Server, SharePoint, SQL Server and other key applications in its Menlo Park, Calif., and Washington offices and mirror the data between sites in "almost real time."
The non-profit expects to install new, redundant servers in both locations that will allow full recovery within minutes of a failure. "That's better than being a couple of days off," says Wieland, referring to the performance of his current tape backup systems.
Despite the advances, IT executives say there's one thing that's still missing: mature cross-platform management tools. "I would like the ability to manage multiple platforms via a single tool that allows me to see the whole environment versus islands of servers," says Hull. "That's pretty important if you have thousands of servers."
Smith sees consolidating server infrastructures as a precursor to full-blown utility computing. "You're starting to see capabilities that have existed in the mainframe world for years," he says. But those services - in areas such as priority queuing, resource management and security - still aren't mature, says Hull. His advice to server vendors in 2008: "Get more mainframe-like."