The enterprise data centre of the future will be a highly flexible and adaptable organism, responding quickly to changing needs because of technologies like virtualisation, a modular building approach, and an operating system that treats distributed resources as a single computing pool.

The move toward flexibility in all data centre processes, discussed extensively by analysts and IT professionals at Gartner's 27th annual data centre conference, comes after years of building monolithic data centres that react poorly to change.

"For years we spent a lot of money building out these data centres, and the second something changed it was: 'How are we going to be able to do that?'" says Brad Blake, director of IT at Boston Medical centre. "What we've built up is so specifically built for a particular function, if something changes we have no flexibility."

Rapidly changing business needs and new technologies that require extensive power and cooling are necessitating a makeover of data centres, which represent a significant chunk of an organisation's capital costs, Blake notes.

For example, he says, "when blade servers came out, that completely screwed up all of our metrics as far as the power we needed per square foot and the cooling we needed, because these things sucked up so much energy and put out so much heat."

Virtualisation of servers, storage, desktops and the network is the key to flexibility in Blake's mind, because hardware has long been tied too rigidly to specific applications and systems.

But the growing use of virtualisation is far from the only trend making data centres more flexible. Gartner expects to see today's blade servers replaced in the next few years with a more flexible type of server that treats memory, processors and I/O cards as shared resources that can be arranged and rearranged as often as necessary.

Instead of relying on vendors to decide what proportion of memory, processing and I/O connections are on each blade, enterprises will be able to buy whatever resources they need in any amount, a far more efficient approach.

For example, an IT shop could combine 32 processors and any number of memory modules to create one large server that appears to an operating system as a single, fixed computing unit. This approach also will increase utilisation rates by reducing the resources wasted because blade servers aren't configured optimally for the applications they serve.
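
In rough terms, the idea looks like this (a minimal Python sketch; the pool sizes, class and method names are illustrative assumptions, not any vendor's actual interface):

```python
from dataclasses import dataclass

# Hypothetical model of a disaggregated server chassis: processors, memory
# and I/O cards float in a shared pool instead of being fixed per blade.
# All names and capacities here are illustrative, not a real product API.

@dataclass
class ResourcePool:
    cpus: int = 64          # free processors
    memory_gb: int = 2048   # free memory, GB
    io_cards: int = 16      # free I/O cards

    def compose_server(self, cpus: int, memory_gb: int, io_cards: int) -> dict:
        """Carve a logical server out of the shared pool."""
        if cpus > self.cpus or memory_gb > self.memory_gb or io_cards > self.io_cards:
            raise ValueError("not enough free resources in the pool")
        self.cpus -= cpus
        self.memory_gb -= memory_gb
        self.io_cards -= io_cards
        # To the operating system this looks like one fixed server.
        return {"cpus": cpus, "memory_gb": memory_gb, "io_cards": io_cards}

    def release(self, server: dict) -> None:
        """Return a retired logical server's resources for rearrangement."""
        self.cpus += server["cpus"]
        self.memory_gb += server["memory_gb"]
        self.io_cards += server["io_cards"]

pool = ResourcePool()
big_box = pool.compose_server(cpus=32, memory_gb=1024, io_cards=4)  # the 32-way example
pool.release(big_box)  # resources go back into the pool for reuse
```

The point of the sketch is the lifecycle: a logical server is composed from the pool, presented to the operating system as fixed hardware, and later dissolved back into free capacity rather than sitting idle.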

Data centres will also become more flexible through a modular building approach that divides the facility into self-contained pods or zones, each with its own power feed and cooling.

The concept is similar to shipping container-based data centres, but data centre zones don't have to be enclosed. Because the facility is no longer treated as a homogeneous whole, it becomes easier to separate equipment into high, medium and low heat densities, and to devote expensive cooling only to the areas that really need it.

Additionally, this separation allows zones to be upgraded or repaired without causing other systems to go offline.

"Modularisation is a good thing. It gives you the ability to refresh continuously and have higher uptime," says Gartner analyst Carl Claunch.

This approach allows incremental build-outs: constructing a few zones now and leaving room for more when needed. Because each zone has its own power feed and cooling supply, no power is wasted; empty space is just that. This contrasts with long-used design principles, in which power is supplied to every square foot of a data centre even if it's not yet needed.

"Historical design principles for data centres were simple - figure out what you have now, estimate growth for 15 to 20 years, then build to suit," Gartner states. "Newly built data centres often opened with huge areas of pristine white floor space, fully powered and backed up by a UPS, water and air cooled, and mostly empty. With the cost of mechanical and electrical equipment, as well as the price of power, this model no longer works."

While the zone approach assumes that each section is self-contained, that doesn't mean the data centre of the future will be fragmented. Gartner predicts that corporate data centres will be operated as private "clouds," flexible computing networks which are modelled after public providers such as Google and Amazon yet are built and managed internally for an enterprise's own users.

By 2012, Gartner predicts that private clouds will account for at least 14 percent of the infrastructure at Fortune 1000 companies, which will benefit from service-oriented, scalable and elastic IT resources.

Private clouds will need a meta operating system to manage all of an enterprise's distributed resources as a single computing pool, Gartner analyst Thomas Bittman says, arguing that the server operating system relied upon so heavily today is undergoing a transition. Virtualisation became popular because of the failures of x86 server operating systems, which essentially limit each server to one application and waste tons of horsepower, he says. Now spinning up new virtual machines is easy, and they proliferate quickly.

"The concept of the operating system used to be about managing a box," Bittman said. "Do I really need a million copies of a general purpose operating system?"

IT needs server operating systems with smaller footprints, customised to specific types of applications, Bittman argued. With some functionality stripped out of the general purpose operating system, a meta operating system will be needed to manage the data centre as a whole.

The meta operating system is still evolving but is similar to VMware's new Virtual Datacentre Operating System. Gartner describes the concept as "a virtualisation layer between applications and distributed computing resources ... that utilises distributed computing resources to perform scheduling, loading, initiating, supervising applications and error handling."
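
A toy version of that description (in Python, with made-up host names and capacities, and a simple first-fit policy standing in for real scheduling; this is not VMware's API) shows the layer's core jobs of scheduling and error handling:

```python
# Toy sketch of the layer Gartner describes: it treats every host's spare
# capacity as one pool and handles scheduling, placement and error recovery.
# Host names, capacities and the first-fit policy are assumptions.

hosts = {"host-a": 16, "host-b": 8, "host-c": 24}   # free CPU cores per host
placements: dict[str, tuple[str, int]] = {}          # app -> (host, cores)

def schedule(app: str, cores: int) -> str:
    """Place an application on the first host with enough free cores."""
    for host, free in hosts.items():
        if free >= cores:
            hosts[host] -= cores
            placements[app] = (host, cores)
            return host
    raise RuntimeError(f"pool exhausted: cannot place {app}")

def reschedule_on_error(app: str) -> str:
    """Crude error handling: release the failed placement and try again."""
    host, cores = placements.pop(app)
    hosts[host] += cores           # hand capacity back to the pool
    return schedule(app, cores)    # a real layer would also reload app state

schedule("billing", 4)      # fits on host-a
schedule("analytics", 20)   # too big for host-a or host-b, lands on host-c
reschedule_on_error("billing")
```

The applications never name a machine; they ask the pool for capacity, which is the shift from managing a box to managing the data centre.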

All these new concepts and technologies - cloud computing, virtualisation, the meta operating system, building in zones and pods, and more customisable server architectures - are helping build toward a future in which IT can quickly provide the right level of service to users based on individual needs, without worrying about running out of space or power. The goal, Blake says, is to create data centre resources that can be easily manipulated and are ready for growth.

"It's all geared toward providing that flexibility because stuff changes," he says. "This is IT. Every 12 to 16 months there's something new out there new and we have to react."