Picking the low-hanging green fruit from your datacentre is not a trivial exercise.

In a previous article we saw that making changes to the air-conditioning, the electricity supply or the cooling arrangements is no trivial undertaking. Datacentre infrastructure changes are significant exercises. What about changing the IT kit inside the datacentre?

Let's categorise the kit into server boxes, storage boxes and network boxes. There are two approaches. First, we can identify under-utilised boxes and deal with the under-use. Second, we can make much better use of apparently well-utilised boxes and so eject some now-unneeded ones. That basically means consolidating workloads and/or virtualising them.

Servers

Server proliferation has been near-endemic in datacentres, as Windows' inability to multi-task well and the need for more storage have both encouraged a buy-another-server response to application slowdown or the introduction of a fresh application.

One hardware response has been to introduce blade servers: have many server blades mounted in one rack instead of in individual rack shelf units. This can increase the server density of a rack three or four times and thus free up space. But it introduces fresh problems of its own. The power needs of the racks increase and so do their cooling needs, which increases the power need yet more.

So blading servers can help solve a datacentre space problem but it adds to datacentre power and cooling issues. It has taken another way of dealing with servers, a software virtualisation route, to enable a relatively easy way to cut server proliferation down to size.

This is, of course, VMware, and its rise has shown just how very bad Windows is at multi-tasking. A single VMware server can replace five, ten or even more individual Windows servers, yet it is basically the same hardware, now working much more efficiently. Virtualising servers is proving to be an efficient way to reduce server space take-up in datacentres without adding to power and cooling needs.

Unix and Linux have their virtualisation products too, such as XenSource's Xen and Solaris Containers, with which you can also consolidate server hardware.

Buy VMware and you can apparently eject quite a high proportion of the Windows server boxes each running just one application. Customers have reported 60-80 percent utilisation rates for x86 servers, up from today's 5-15 percent. That means that from every set of ten servers operating at 10 percent utilisation you could throw eight away and have the remaining two operating at 50 percent each - a happy thought in terms of reduced power and cooling cost.
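That back-of-envelope arithmetic can be sketched in a few lines (illustrative only; real consolidation sizing must account for peak loads and headroom, not just average utilisation):

```python
import math

def consolidated_count(n_servers, util_pct, target_pct):
    """Servers needed to carry the same total work at a higher utilisation."""
    total_work = n_servers * util_pct          # work in server-percent units
    return math.ceil(total_work / target_pct)  # round up: no fractional boxes

# Ten servers ticking over at 10 percent, consolidated to run at 50 percent:
needed = consolidated_count(10, 10, 50)
print(needed, "kept,", 10 - needed, "ejected")   # 2 kept, 8 ejected
```

The same function shows why the savings shrink as starting utilisation rises: ten servers already at 40 percent would still need eight boxes at a 50 percent target.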

One problem though: each of the eight discarded servers had its own direct-attached storage. What do you do about that?

Storage

If you consolidate the storage into a Fibre Channel storage area network (SAN), you need to add a host bus adapter (HBA) to each physical server - preferably with virtualised features so it works with the VMware virtual machines - plus Fibre Channel cabling, SAN fabric switches, a set of Fibre Channel-connected drive arrays and SAN management software.

It all adds up in terms of box and device power cost, and in incremental admin expense and skills.

If you use an iSCSI SAN or a network-attached storage (NAS) approach then you can use Ethernet as the storage link and avoid Fibre Channel-related expense and skills.

Consolidated storage is a natural extension of the virtual server approach.

Virtualising the storage can drive up array disk utilisation rates. With it, applications think they have sole use of a logical unit number (LUN) of storage, but in fact the LUN is mapped by the virtualisation software to whatever drive array volume the storage admins set up, and can be moved at will.
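The indirection involved can be pictured as a simple lookup table. This is a minimal sketch, not any vendor's implementation; the class and volume names are invented, and a real array would copy the data in the background during a migration:

```python
class VirtualisedStorage:
    """Toy model: applications address a stable LUN id; admins remap backing volumes."""

    def __init__(self):
        self._map = {}                    # LUN id -> physical volume

    def present(self, lun, volume):
        self._map[lun] = volume           # admin sets up the initial mapping

    def migrate(self, lun, new_volume):
        # Data moves behind the scenes; the LUN id the application sees
        # never changes, so no host-side reconfiguration is needed.
        self._map[lun] = new_volume

    def resolve(self, lun):
        return self._map[lun]

pool = VirtualisedStorage()
pool.present("LUN-7", "array-A/vol-3")
pool.migrate("LUN-7", "array-B/vol-1")    # moved 'at will'
print(pool.resolve("LUN-7"))              # array-B/vol-1
```

The point of the design is that the application-facing name is decoupled from the physical placement, which is what lets admins pack volumes onto fewer, fuller arrays.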

Thin provisioning the storage is an even better idea. Traditionally, applications get allocated up front all the block storage they will need over a period of time, such as 12 or more months. But they don't physically need all those disk blocks straight away; they may write to just ten percent of them one month, another ten percent the next, and so on.

What thin provisioning does is tell the application it has, say, a 100GB LUN, but actually allocate only 10GB. When that is close to filling up, another tranche of actual storage is allocated. Suppliers such as 3PAR, HP and Hitachi Data Systems have this feature. EMC does not, yet.
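The mechanism can be sketched like this (a hedged illustration with an invented 10GB tranche size, tracking gigabytes rather than blocks; real arrays allocate extents on write at much finer granularity):

```python
class ThinLUN:
    """Toy thin-provisioned LUN: reports full logical size, backs it lazily."""

    EXTENT_GB = 10                        # tranche size - illustrative only

    def __init__(self, logical_gb):
        self.logical_gb = logical_gb      # what the application is told it has
        self.allocated_gb = self.EXTENT_GB  # what is physically backed so far
        self.written_gb = 0

    def write(self, gb):
        self.written_gb += gb
        # When writes approach the backed capacity, grab another tranche.
        while self.written_gb > self.allocated_gb:
            self.allocated_gb += self.EXTENT_GB

lun = ThinLUN(100)
lun.write(4)
print(lun.logical_gb, lun.allocated_gb)   # 100 10: app sees 100GB, 10GB backed
lun.write(9)                              # 13GB written now
print(lun.allocated_gb)                   # 20: a second tranche was allocated
```

Summed across every application LUN, the gap between logical and allocated capacity is exactly the disk purchase - and the power draw - that gets deferred.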

It means you can defer purchasing possibly huge amounts of disk across your entire set of server applications and thus defer drawing down the electricity needed to power and cool it.

Another promising idea is de-duplication. Where you back up data to disk, de-duplication can identify repeated character strings within and between files and replace them with pointers. This can produce de-dupe ratios of 10:1, 20:1, even 30:1. You literally store the information only once. Where you would have needed 30TB of disk for your disk-based backups, you can now make do with 5TB or less.
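The store-once-plus-pointers idea can be sketched with content-hashed chunks. This is a simplified illustration, not any supplier's algorithm - fixed-size chunks and SHA-256 references stand in for the variable-length fingerprinting real products use:

```python
import hashlib

class DedupeStore:
    """Toy de-dupe store: identical chunks stored once, repeats become references."""

    def __init__(self, chunk_size=8):
        self.chunk_size = chunk_size
        self.chunks = {}                  # hash -> chunk bytes, stored once

    def put(self, data):
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # only new chunks consume space
            refs.append(h)                    # repeats cost just a pointer
        return refs

    def get(self, refs):
        return b"".join(self.chunks[h] for h in refs)

store = DedupeStore()
backup = b"ABCDEFGH" * 30                 # a highly repetitive backup stream
refs = store.put(backup)
print(len(backup), "bytes in,", len(store.chunks) * 8, "bytes stored")  # 240 in, 8 stored
```

Backup streams de-dupe so well precisely because successive nightly backups repeat most of the previous night's data.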

Suppliers such as Diligent, EMC (with Avamar) and Sepaton offer this feature.

A further step you can take is to have secondary data - data that needs to be online but is not accessed frequently - stored on a MAID drive array, with MAID standing for Massive Array of Idle Disks. Most of the disks are spun down and not consuming energy. If data on an idle disk is needed, that disk is spun up.

Because most disks are not spinning, the drive array doesn't get so hot and more disks can be packed into the same space. Thus you save on power, cooling and space requirements.
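The behaviour is easy to model: disks idle by default, and only the one holding requested data spins up. A toy sketch with invented names (real MAID controllers also spin disks back down after a timeout and manage spin-up latency):

```python
class MaidDisk:
    """Toy MAID disk: spun down until its data is actually requested."""

    def __init__(self, name):
        self.name = name
        self.spinning = False             # idle disks draw (almost) no power

    def read(self):
        if not self.spinning:
            self.spinning = True          # spin-up adds seconds of latency
        return f"data from {self.name}"

array = [MaidDisk(f"disk{i}") for i in range(12)]
array[3].read()                           # one access request arrives
print(sum(d.spinning for d in array), "of", len(array), "disks spinning")
```

With only one disk in twelve spinning, the power and heat profile of the array is a fraction of a conventional always-on shelf - which is the whole appeal for rarely-read secondary data.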

MAID product suppliers include Copan, Nexsan and Fujitsu.

Lastly, you can keep on using tape or optical storage for tertiary data; data that must be kept but may not be accessed for weeks, if not months. Once data is written to the tape or optical media then that media can be taken off-line and left on a shelf or in a library slot. It consumes no power at all.

All in all, there are several incremental steps you can take to reduce the power and cooling needs of your storage and get rid of some drive arrays.

You may also choose to use newer, higher-capacity drives and thus reduce the physical number of drives you need. One 750GB serial ATA (SATA) disk can replace three 250GB SATA drives.

An ESG report, Power, Cooling, Space Efficient Storage, provides a very good description of storage efficiencies such as those listed above, plus a few more, such as relying on snapshots to 'copy' data sets. It is a very worthwhile read.

Network kit

Box reduction in networking means replacing several boxes with a few ports apiece with one box with many more ports: a port consolidation exercise. It is again a natural adjunct to server virtualisation. Instead of many discrete network links to individual server boxes, you have fewer physical links to fewer physical server boxes, which run many virtual servers all communicating over the same links.

Intuition says network link utilisation has rarely been as poor as server or storage utilisation. In fact, communications links tend to be choked by excess data rather than starved of it. The expectation is that network box reduction is not as practical or as potentially rewarding as either server consolidation and virtualisation or the various storage box reduction tactics listed above.

Conclusions

The simplest and most practical step you can take to reduce datacentre IT kit power, cooling and space requirements is to virtualise your servers. The next thing is to look for incremental savings in your storage by consolidating separate islands into a single pool for the virtualised servers. Then look to technologies such as thin provisioning and data de-duplication to further shrink your storage estate. MAID and offline archive storage can further limit physical disk array sprawl, albeit at the expense of slower data access.

Storage needs, in terms of the amount of data that must be electronically kept, are rising. Perhaps it is time to change the mindset of users and re-impose storage quotas to reduce the apparently remorseless flow of drive arrays into your datacentre. Otherwise, any greening you do undertake could bring you short-term wins only. In a few years' time you could be back at square one.