Brian Burch knew the moment had arrived. Two of his data centre's key services, availability and business continuity, needed fast and dramatic improvement. Design and location limitations meant that his company's existing data centre couldn't be upgraded to the levels necessary to provide the required function and performance gains.
So Burch, senior worldwide infrastructure director of Kemet, a capacitor manufacturer, decided last year that it was time for his data centre to split.
Even in today's challenging economy, enterprises are facing rising internal and external demands for IT services. When an existing facility can no longer shoulder an enterprise's IT burden alone or when it becomes necessary to establish a secondary site to provide enhanced business continuity or regional network support, an important decision point has been reached.
For a number of enterprises, the obvious solution is to add another data centre, and for many of those it means partnering with a colocation facility. If you're considering this option, it doesn't just pay to do your homework, experts say. It's essential.
"You absolutely need to do the buy-vs-build analysis," says Jeff Paschke, senior analyst at Tier1 Research. That said, "I am a former enterprise data centre manager, and from what I know now, more should be using [colo] than they do," he added.
The number one reason to consider colocation comes down to financials. "Do you want to go to your board and ask for $50 million in capex [capital expenditures] for another data centre?" Paschke asks. "The alternative is to go to a provider and use opex [operating expenses] and not have to spend money upfront," he says.
Given the massive costs and time demands required to build a traditional data centre, "fewer organisations are deciding to build their own satellite data centres," says Lynda Stadtmueller, an analyst at technology research company Frost & Sullivan.
Especially among enterprises with latency-sensitive applications that require a local presence, there is a trend toward leasing space from a colo or hosting provider rather than building and managing their own data centres, she explains.
A Frost & Sullivan study conducted a year ago showed that total space used by enterprises will increase by almost 15% annually through 2013. Yet the percentage of that space that the enterprises own themselves, versus leasing from another provider, will decrease from 70% to 64% during that time. "A pretty hefty swing," Stadtmueller says.
Technology research firm Info-Tech Research Group backs that up. Some 64% of organisations engage in some form of colocation services, including hosting, but over 77% of them do not outsource the entire data centre, according to a survey of 78 customers conducted in late 2010.
Most organisations begin thinking about adding a data centre as soon as their existing facility starts maxing out its physical space and/or support resources, Stadtmueller says. "Once you see you're beginning to run out of space, run out of server capacity [or] when you're looking to add or upgrade an application, that's when you begin to look outside."
Sometimes the push comes in the form of a business need. A new direction may demand a lot of extra capacity quickly, or enough capacity to push the existing data centre past its power limits, for instance. Power is often the gating factor in older data centres these days, meaning that enterprises run out of power options long before they run out of space.
For a number of organisations, the idea of building out a second site often arises from a desire to create, enhance or cut the cost of an enterprise business continuity strategy. "With our new site, we really wanted to improve on the response time from any kind of a failure," Burch says. Kemet was also looking for a way to escape a costly relationship with a disaster recovery (DR) services provider, he adds.
Analysis showed that the new facility would trim recovery time from 72 hours or more to a range of five minutes to 18 hours, depending on the system category. The annualised cost of the new facility would be about the same as continuing the current DR contract.
Given all that evidence, Burch decided to go with colo. And in addition to the DR features, now the company has "a modern test and development environment with a three-year refresh cycle," Burch says. "Basically, we got a new data centre with new equipment and communications lines with zero change in budget."
"One month after go-live on the new data centre we conducted a test recovery of the systems previously covered under our DR contract," Burch explains. "We recovered all of the target systems in less than 10 hours." He notes that the dramatic improvement over the previous recovery target of 72 hours or more included "normal delays from recovering on new equipment in a new location and using new procedures."
To maximise the business continuity value, Burch and his team decided to place a significant amount of distance between the new facility and Kemet's headquarters. "We felt like we had to go at least 100 miles away to avoid the types of disasters that lead to electrical substation problems, large storms, those sorts of things," Burch says. The team ultimately fudged a little bit on its distance mandate and settled on a location some 90 miles away.
Beyond business continuity, Burch says the new data centre was designed to fulfil another key goal: to provide a test and development centre that would operate independently of the main facility. "Probably 95% of the hardware that's down there is being used for test and development instances of our applications," Burch says. "In the event of a disaster, it will just automatically convert from that role into running our production systems."