For years, Robert Wakefield and Dameon Rustin lived with the problems of keeping Snelling Staffing Service’s old, poorly designed datacentre up and running. Not only were the intricate cable runs and varied server makes and models difficult to keep straight, but the building itself tended to compound their management headaches.

“Our 15-ton air-conditioning unit was water-cooled, but the building [management] didn’t clean the cooling tower very often,” says Wakefield, vice president of IT at Snelling Staffing and Intrepid USA, a home healthcare firm also owned by Snelling’s parent firm, Patriarch Partners. “Muck would get in, clog up our strainers and shut down the AC unit to our datacentre. That was a big problem.”

In addition, the building owners would not give Snelling the OK to put in a diesel backup generator to power the datacentre. “Let’s just say they weren’t very helpful,” says Wakefield, who spoke about his datacentre project at the recent Network World IT Roadmap Conference and Expo in Dallas.

Things began to change quickly once Patriarch acquired Intrepid in 2006. Wakefield and Rustin, Snelling’s director of technology, were charged with building a brand-new datacentre that would not only solve Snelling’s existing problems, but also absorb Intrepid’s datacentre and be ready to support future growth.

“We had to build expandability into it because Patriarch is a private investment firm, and their goal is to buy more companies and roll them in,” Wakefield says. “We were told to give ourselves about 100 percent growth room.”

The downside? They needed to do all that with a budget of $800,000 and a window of only six months. “It was a challenge,” Wakefield says.

But it was a challenge they met head-on. Today, Snelling and Intrepid’s new 1,100-square-foot datacentre in Dallas efficiently houses a variety of equipment, including:

  • A total of 137 servers (45 for Intrepid and 92 for Snelling), 37 of which are new dual-core, dual-processor AMD Opteron-based Sun Fire X-series Unix servers.

  • Three EMC storage systems, including an EMC CX400, a CX3-20 iSCSI system and an old SC4500, as well as a Quantum tape library.

  • A variety of networking components, including shared virus scanners and Web surfing control appliances.

  • A Liebert 100kVA uninterruptible power supply (UPS).

  • Three Emerson glycol-based AC units (two 10-ton and one 15-ton).

And even with all of that, Wakefield says he still has room to add nine more server racks.

Getting there

Wakefield and Rustin first visited several datacentres to get an idea of what could and could not be done. They also looked at a number of different locations before deciding in January on the Dallas building. Then, the real planning began.

“Once we had the dimensions, everything else came from that,” Wakefield says. He and Rustin drew up 10 different floor plans and began calculating how many servers they’d need, and how much cabinet space. At that point, requirements began to fall into place. “High-density became a requirement; virtualisation became a requirement,” he says.

Although the new datacentre is only 150 square feet larger than the old one, it needed to support more than 40 additional servers, plus provide room for growth. Wakefield considered going the blade server route to save space, but soon learned they were prohibitively expensive.

“Blades were pretty high cost-wise, and we had bought some of the Sun X-series boxes in the past,” he says. “They are AMD-based, so they use less energy and put out less heat. And they’re dual-core, dual-processor with about 8GB of RAM, so we could set up [virtual machines] on a good chunk of them, and that saved us a lot of space too.”
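The consolidation arithmetic behind that decision is straightforward. The sketch below is purely illustrative; the per-VM memory footprint, hypervisor overhead and number of candidate workloads are assumptions rather than figures from Snelling, with only the 8GB of host RAM taken from the article.

```python
# Rough consolidation estimate for dual-processor, dual-core hosts with 8GB of RAM.
# All inputs below except host_ram_gb are illustrative assumptions, not Snelling's numbers.

host_ram_gb = 8               # RAM per Sun Fire X-series host (from the article)
ram_per_vm_gb = 1.5           # assumed average memory footprint per VM
hypervisor_overhead_gb = 1    # assumed reserve for the hypervisor itself

vms_per_host = int((host_ram_gb - hypervisor_overhead_gb) // ram_per_vm_gb)

workloads_to_virtualise = 40  # assumed number of light workloads to consolidate
hosts_needed = -(-workloads_to_virtualise // vms_per_host)  # ceiling division

print(f"{vms_per_host} VMs per host -> {hosts_needed} physical hosts "
      f"instead of {workloads_to_virtualise} standalone servers")
```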

Wakefield says space constraints also led him to purchase new Chatsworth CPI TeraFrame high-density racks, each of which can hold as many as 36 1U servers. “They’re vented at the top and handle air circulation really well,” he says. “We’re on a raised floor, so the cooling comes from below, it gets sucked in the front of the cabinet and then vented out the back and straight up the top. It’s very efficient.”
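Those figures make the rack maths easy to check. The sketch below uses only the numbers quoted in the article and assumes, for simplicity, that every server occupies 1U and that racks could be filled to their rated capacity, which cabling, patch panels and airflow planning would reduce in practice.

```python
# Back-of-the-envelope rack count using the figures quoted in the article.
# Assumes all servers are 1U and racks are filled to capacity.

servers = 137              # 45 Intrepid + 92 Snelling
servers_per_rack = 36      # Chatsworth TeraFrame capacity quoted above

racks_in_use = -(-servers // servers_per_rack)    # ceiling division -> 4 racks
spare_racks = 9                                   # headroom Wakefield cites
spare_capacity = spare_racks * servers_per_rack   # up to 324 more 1U servers

print(f"~{racks_in_use} full racks today, room for up to {spare_capacity} more 1U servers")
```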

He addressed the AC problems by purchasing the glycol-based units, which are completely self-contained. “Now, all of our cooling is independent of the building,” he says. “So if the building needs to shut down their water supply, it doesn’t shut down my datacentre.”

Wakefield has also planned for optimal power usage. A 600-amp power cabinet powers everything in the datacentre. “We have a UPS tied to that, and then we have a power distribution unit out on the floor in the datacentre that provides feeds to each cabinet,” Wakefield explains. “Each cabinet has the ability for a single box to plug four power supplies into it, and each of those power supplies is on a different circuit for redundancy.”
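As a rough sanity check on that power setup, a back-of-the-envelope budget against the 100kVA Liebert UPS might look like the sketch below. The power factor, average per-server draw and overhead reserve are all assumptions for illustration, not measurements from the Snelling datacentre.

```python
# Rough check of how many servers a 100kVA UPS could carry.
# Power factor, per-server draw and the overhead reserve are assumptions.

ups_kva = 100                  # Liebert UPS rating from the article
power_factor = 0.9             # assumed
usable_kw = ups_kva * power_factor

avg_server_watts = 350         # assumed average draw per 1U server
overhead_fraction = 0.2        # assumed reserve for network gear, storage and headroom

server_budget_kw = usable_kw * (1 - overhead_fraction)
supported_servers = int(server_budget_kw * 1000 // avg_server_watts)

print(f"~{usable_kw:.0f} kW usable; roughly {supported_servers} servers "
      f"at {avg_server_watts} W each after reserving {overhead_fraction:.0%} for other gear")
```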

And if that’s not enough, he’s also planning to install a generator soon. That will provide backup power not only for the datacentre, but also for critical business areas that support payroll and billing, so that Intrepid and Snelling can both stay open for business even during a power outage.

Wakefield says the new datacentre optimises efficiency by enabling Snelling and Intrepid to share as much equipment as possible. Snelling’s 43 locations are linked to the datacentre via an MPLS network, while Intrepid’s 115 locations use a mix of DSL, frame relay and MPLS, with most gradually moving to MPLS. Each company has its own router, but they share a 10Gbps core switch in the datacentre. “Everywhere we can, we try and put in a common platform to save both companies money,” he says. “We have a common core switch, as well as common e-mail, virus scanning and surf control for the Web.”

Future-proofing

All of the new datacentre’s cabinets are pre-wired, a move that was more expensive upfront, but will offer huge payback over time. Each cabinet has a 10G connection to a core switch. “If you need to put a new server in, you don’t have to pull a fibre run all the way back to the switch,” Wakefield says. “It’s all there already. We just drop the server in, connect in our patch panels and we’re ready to go.”

In addition to pre-wiring 10G and fibre, he also future-proofed by installing Category 6 cabling to support not only both companies’ data but also their voice via a new Cisco VoIP system. All of this means the new datacentre should comfortably serve the two companies (and any others that may be added) for five to seven years.

“Eventually, depending on new fibre technology, I may have to add some more fibre in, but in the grand scheme of things, it’s pretty solid for several years to come,” he says.

Doing it right

After many 80-plus-hour weeks for his staff, Wakefield says his team successfully cut over the Snelling side of the business in May and moved in the Intrepid side, from its old home in Edina, Minn., in July. They did it all, start to finish, in less than six months. “I wouldn’t recommend that timeframe,” he says.

But overall, Wakefield and Rustin are pleased with the results. “We spent years dealing with a poor setup,” Wakefield says. “In the old building, when we wanted to add a server, we were always having to trace runs out to determine where they went, or crawling up on ladders to pull cable. Over the years, it just drove us crazy. And Dameon and I always said, if we ever get to build our own, we know what we’re going to do. We’ll do it right. And I think we did.”