Most analysts and observers agree that datacentres must become dynamic rather than static resources. But what does that mean?

As we head into a new year and look back at the last couple of years, a few key trends present themselves - but one over-arching theme stands out, and that's the need for more efficient datacentres.

Let's not kid ourselves that this is ideological spending on so-called 'greenness' - how likely is that? The trend is of course driven by ratcheting energy prices. One of the consequences is the need to reach out and drag as many applications and assets as possible back into the datacentre, where energy use and other attributes can be managed - and where devices are nowhere near as vulnerable to malware and user-induced glitches; we'll leave IT staff-induced issues out of the equation for the moment.

Additionally, virtualisation technology has now moved on from being simply a way of consolidating servers - although for most real-world users, as opposed to commentators, even that remains a goal yet to be achieved.

There's a growing awareness of virtualisation's ability to improve an organisation's disaster recovery schemes. Instead of needing an expensive, energy-guzzling second infrastructure humming away 24 hours a day, it's now possible to institute a standby system consisting of a fraction of the original hardware, loaded up with virtual images of the production datacentre's servers.
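To see why a fraction of the hardware can suffice, consider that standby images only need the resources the production servers actually use, not their full rated capacity. Below is a minimal sketch of that capacity arithmetic in Python - all server names, pool sizes and utilisation figures are hypothetical, and first-fit placement stands in for whatever logic a real product would use.

```python
# Hypothetical sketch: does a production estate's working footprint
# fit onto a smaller standby pool as virtual images? All figures and
# names below are invented for illustration, not from any vendor.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_ghz: float   # total CPU capacity of the standby host
    ram_gb: float    # total memory of the standby host
    cpu_free: float = 0.0
    ram_free: float = 0.0

    def __post_init__(self):
        self.cpu_free = self.cpu_ghz
        self.ram_free = self.ram_gb

# Each production server's *actual* average use - what its virtual
# image would need on standby: (name, cpu_ghz, ram_gb).
production_images = [
    ("web-01", 0.8, 2.0), ("web-02", 0.8, 2.0),
    ("app-01", 1.5, 4.0), ("app-02", 1.5, 4.0),
    ("db-01",  2.5, 8.0),
]

# A standby pool a fraction of the size of production:
# two hosts covering five physical servers.
standby_pool = [Host("standby-01", 8.0, 16.0),
                Host("standby-02", 8.0, 16.0)]

def place(images, hosts):
    """First-fit placement of virtual images onto standby hosts."""
    placement = {}
    for name, cpu, ram in images:
        for host in hosts:
            if host.cpu_free >= cpu and host.ram_free >= ram:
                host.cpu_free -= cpu
                host.ram_free -= ram
                placement[name] = host.name
                break
        else:
            raise RuntimeError(f"standby pool too small for {name}")
    return placement

if __name__ == "__main__":
    for image, host in place(production_images, standby_pool).items():
        print(f"{image} -> {host}")
```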

Long-time VMware ecosystem partner PlateSpin spotted this last year and in December launched a disaster recovery system in a box, consisting of an off-the-shelf Dell server and some smart software. Others are moving in a similar direction.

All these new capabilities being shouldered by the datacentre have consequences, however, and the main one is complexity. A fully virtualised and therefore complex datacentre needs to be managed and seen as a single entity.

Yet it's becoming increasingly difficult - if not impossible, according to some commentators - to manage so complex an entity with existing staff levels and expertise. Spending more on staff is simply not an option in most cases, so the fix is datacentre automation - or orchestration - as the arrival of new products and the willingness of marketing men to talk in such terms at VMworld last September demonstrated.

According to Gartner, a substantial minority - about a third - of the datacentre managers who attended its Data Centre conference last year have reached the same conclusion and are looking to implement some form of datacentre automation.

Similarly, analysts such as Dan Kusnetzky argue that datacentre orchestration or automation systems offer a number of benefits, including improved scalability, reliability and performance, plus greater agility and better utilisation of hardware, software and IT staff resources.

Interestingly, given that virtualisation technology can result in an explosion in the number of servers, physical and virtual, one of the first results of datacentre automation is to reduce the apparent number of servers.

As Fred Gedling of DataSynapse, vendor of the datacentre automation package FabricServer, says: "VMware is very much about the supply side of the equation - getting the correct mix of operating systems. What we do is to make many servers look like one, whereas VMware does the opposite."
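The contrast is easier to see in toy form. The sketch below is purely illustrative - the class names and methods are invented for this article and reflect neither FabricServer's nor VMware's actual interfaces - but it captures the two directions: aggregation makes many servers answer as one, while partitioning makes one host answer as many.

```python
# Hypothetical sketch contrasting the two directions Gedling
# describes. Nothing here reflects any vendor's real interfaces.

from itertools import cycle

class AggregatedServer:
    """Many servers presented as one: callers see a single submit()
    endpoint while work is spread across a pool of backends."""
    def __init__(self, backends):
        self._backends = cycle(backends)

    def submit(self, task):
        backend = next(self._backends)   # round-robin dispatch
        return f"{task} dispatched to {backend}"

class PartitionedServer:
    """One server presented as many: a single physical host carved
    into isolated virtual machines, each with its own identity."""
    def __init__(self, host, vm_names):
        self.vms = {name: f"{name}@{host}" for name in vm_names}

if __name__ == "__main__":
    grid = AggregatedServer(["node-01", "node-02", "node-03"])
    print(grid.submit("pricing-batch"))   # many look like one

    host = PartitionedServer("blade-07", ["dev", "test", "staging"])
    print(host.vms)                       # one looks like many
```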

Of course, the cynic would argue that the technology industry spends too much time rescuing customers from the mistakes committed by vendors of earlier-generation technology. Those in the datacentre orchestration market wouldn't put it like that, but the growth in the number of virtual servers does add up to a management headache. And for once, the leading culprit - VMware, of course - is also offering management software to help ease the problem, by allowing virtual servers to be moved around the datacentre according to resource requirements and availability.
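In outline, such software watches per-host load and migrates virtual servers to even it out. The following is a deliberately simplified sketch of one rebalancing pass - not VMware's actual algorithm, and the host and VM names are made up - showing the basic move-from-busiest-to-idlest logic.

```python
# Hypothetical sketch of load-driven VM movement: one rebalancing
# pass over a toy estate. Not any vendor's real algorithm or API.

def rebalance(hosts, threshold=0.2):
    """hosts: dict mapping host name -> {vm name: cpu share}.
    Moves one VM from the busiest host to the idlest when their
    loads differ by more than `threshold`; returns the move made."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    busiest = max(load, key=load.get)
    idlest = min(load, key=load.get)
    if load[busiest] - load[idlest] <= threshold or not hosts[busiest]:
        return None   # estate already balanced enough
    # Move the smallest VM: the cheapest migration that narrows the gap.
    vm = min(hosts[busiest], key=hosts[busiest].get)
    hosts[idlest][vm] = hosts[busiest].pop(vm)
    return (vm, busiest, idlest)

if __name__ == "__main__":
    estate = {
        "host-a": {"crm": 0.5, "mail": 0.3, "wiki": 0.1},
        "host-b": {"batch": 0.2},
    }
    print(rebalance(estate))   # ('wiki', 'host-a', 'host-b')
    print(estate)
```

A real system would run passes like this continuously, and would weigh migration cost, affinity rules and memory pressure as well as raw CPU load.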

Kusnetzky reckons his company "knows of many successful implementations of this type of virtualisation." It would appear that, as we said back in September 2007, you can expect to hear more about this trend in 2008.