This year marks the 10th anniversary of the 1,200 square foot data centre at the Franklin W. Olin College of Engineering, three years past what CIO and vice president of operations Joanne Kossuth originally planned.

Despite needing more capacity and better connectivity, Kossuth has been forced to put those plans on the back burner because of the uncertain economy. "Demand has certainly increased over the years, pushing the data centre to its limits, but the recession has tabled revamp discussions," she says.

Like many of her peers, including leaders at Citigroup and Marriott International, Kossuth has had to get creative to eke more out of servers, storage and the facility itself. To do so, she's had to re-examine the lifecycle of data and applications, storage array layouts, rack architectures, server utilisation, orphaned devices and more.

Rakesh Kumar, research vice president at Gartner, says he's been bombarded by large organisations looking for ways to avoid the cost of a data centre upgrade, expansion or relocation. "Any data centre investment costs at minimum tens of millions, if not hundreds of millions, of dollars. With a typical data centre refresh rate of five to 10 years, that's a lot of money, so companies are looking for alternatives," he says.

While that outlook might seem gloomy, Kumar finds that many companies can extract an extra two to five years from their data centre by employing a combination of strategies, including consolidating and rationalising hardware and software usage, rolling out virtualisation and physically moving IT equipment around. Most companies don't optimise the components of their data centre and therefore bump up against its limitations faster than necessary, he says.
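Kumar doesn't spell out the arithmetic, but the back-of-the-envelope consolidation maths is easy to illustrate. The short Python sketch below estimates how many virtualisation hosts could absorb a fleet of lightly used physical servers; the server counts, utilisation figures and per-host ceiling are hypothetical assumptions for illustration, not Gartner's numbers.

```python
import math

def hosts_needed(physical_servers, avg_cpu_util, host_capacity_util=0.65):
    """Rough consolidation estimate: how many virtualisation hosts are
    needed to absorb a fleet of underused physical servers.

    physical_servers   -- number of existing physical boxes
    avg_cpu_util       -- their average CPU utilisation (0.0 - 1.0)
    host_capacity_util -- target utilisation ceiling per host (0.0 - 1.0)
    """
    # Total useful load, expressed in whole-server equivalents.
    total_load = physical_servers * avg_cpu_util
    # Each host is run only up to its target ceiling, never to 100%.
    return math.ceil(total_load / host_capacity_util)

# Hypothetical fleet: 350 servers idling at 10% average utilisation.
print(hosts_needed(350, 0.10))  # -> 54 hosts
```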

Here are some strategies that IT leaders and other experts suggest to push data centres further.

Relocate non-critical data

One of the first areas that drew the attention of Olin College's Kossuth was the cost of dealing with data. As one example, alumni, admissions staff and other groups take multiple CDs' worth of high-resolution photos at every event, using server, storage and bandwidth resources to edit, share and retain those large images over long periods.

To free the data centre from dealing with the almost 10 terabytes of data those photos require, Kossuth opened a corporate account on Flickr and moved all of the processes for managing those photos there. Not only did that save her the $40,000 (£25,000) she would otherwise have spent on a storage array, it also relieved the pressure of the resource-intensive work involved in handling high-resolution images.
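Kossuth doesn't describe the tooling behind the move, but an offload like this is straightforward to script. The sketch below is a minimal example that assumes the third-party Python flickrapi package, valid API credentials and a local photo archive; the key, secret and directory path are placeholders, not Olin College's.

```python
from pathlib import Path

import flickrapi  # third-party package: pip install flickrapi

API_KEY = "your-api-key"        # placeholder credentials
API_SECRET = "your-api-secret"

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET)
flickr.authenticate_via_browser(perms="write")  # one-time OAuth step

# Hypothetical local archive of event photos to move off primary storage.
photo_dir = Path("/srv/events/photos")

for photo in sorted(photo_dir.glob("*.jpg")):
    # Upload each image, keeping it private to the corporate account.
    flickr.upload(filename=str(photo), title=photo.stem, is_public=0)
    print(f"uploaded {photo.name}")
```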

"There is little risk in moving non-core data out of the data centre, and now we have storage space for mission critical projects," Kossuth says.

Take the pressure off high-value applications and infrastructure

Early on, Olin College purchased an $80,000 (£50,000) Tandberg videoconferencing system and supporting storage array. Rather than wearing out that investment through overuse, Kossuth now prioritises video capture and distribution, shifting lower-priority projects to less expensive videoconferencing tools and to YouTube for storage.

For example, most public relations videos are generated outside of the Tandberg system and are posted on the college's YouTube channel. "The data centre no longer has to supply dedicated bandwidth for streaming and dedicated hardware for retention," she says. More importantly, the Tandberg system is kept pristine for high-profile conferences and mission-critical distance learning.

Standardise servers and storage

Dan Blanchard, vice president of enterprise operations at hotel giant Marriott International, boasts that his main data centre is 22 years old and he intends to get 20 more years from it. He credits discipline on the part of IT as a reason for its long life, particularly in terms of standardisation.

Each year, the IT team settles on a handful of server and storage models to purchase. If a new project starts up or one of the 300 to 400 physical servers fails, machines are ready and waiting. Storage is handled similarly.

Even switches, though on a longer refresh cycle of about five years, are standardised. "Uniformity makes it much simpler to manage resources and predict capacity. If you have lots of unique hardware from numerous vendors, it's harder to plan," Blanchard says.
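One practical payoff of that uniformity is that capacity forecasting reduces to simple multiplication. The Python sketch below shows the idea with entirely made-up rack-space, power and per-server figures rather than Marriott's: with a single standard server model, working out how many more machines a facility can absorb is a simple calculation over the scarcest resource.

```python
def remaining_headroom(racks_free_u, kw_available, server_u=2, server_kw=0.5):
    """With one standard server model, remaining capacity is the minimum
    over the constrained resources (rack space and power).

    racks_free_u -- total free rack units across the facility
    kw_available -- unused power/cooling budget in kilowatts
    server_u     -- rack units per standard server (assumed)
    server_kw    -- power draw per standard server in kW (assumed)
    """
    by_space = racks_free_u // server_u
    by_power = int(kw_available // server_kw)
    return min(by_space, by_power)

# Hypothetical facility: 120U of free rack space, 40 kW of spare power.
print(remaining_headroom(120, 40))  # -> 60 servers (space-bound)
```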

He recommends working closely with vendors to understand their roadmaps and plan standardised refreshes accordingly. For instance, Marriott might delay a planned refresh if the feature sets a vendor has announced are worth waiting for.