Mainframe for rent
On-demand services, a new data center promise, bring high-powered computing to the mainstream.
By Jennifer Mears, Network World, 08/22/05
Simulation software firm Exa has a customer list that reads like a who's who of the automotive industry: Audi, BMW, DaimlerChrysler and Ford, to name a few. The automakers rely on Exa for computer-aided design software that enables them to build the most aerodynamic vehicles possible.
And Exa relies on IBM to provide the supercomputing power that the carmakers need in fits and starts.
"The key that's turning us at this point is that with IBM we have almost unlimited capacity," says CEO Steve Remondi. "We have IBM's balance sheet at work, as we like to say here, and that has made an enormous difference. Now there is no size problem, or set of computational work, that we can't do."
Exa is one of a growing number of companies expanding computing oomph with hosted, on-demand offerings. While Exa uses IBM's Deep Computing on Demand centre, other companies opt for Big Blue's Linux virtual services on the mainframe or Sun's Sun Grid, for example. With these offerings, corporations get expansive computing power at their fingertips and pay only for what they use.
While the basic idea is nothing new -- time-sharing was the way many companies harnessed computing power in the 1960s and 1970s -- the prospect of being able to consume computing cycles on a grid or mainframe for specific, computationally intensive workloads is bringing more companies back around to the idea of renting instead of buying CPUs.
"What makes this interesting now is that businesses have a business reason to acquire resources in this manner," says Mike Kahn, managing director at The Clipper Group.
As an example, he cites pharmaceutical companies, which have huge surges in computing requirements at certain stages in their drug testing. These companies would likely find it makes better business sense to tap into the resources they need for peak demands, rather than spend the money to buy hardware that would sit idle much of the time, Kahn says.
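The buy-versus-rent tradeoff Kahn describes comes down to utilization arithmetic: hardware bought for peak demand sits idle most of the time, so the effective cost of each CPU-hour actually used climbs as utilization falls. The sketch below illustrates that break-even logic; every figure in it is a hypothetical assumption for illustration, not data from the article.

```python
# Illustrative break-even sketch: owning compute vs. renting it on demand.
# All dollar figures and rates are hypothetical assumptions.

OWNED_CLUSTER_COST = 2_000_000       # purchase plus 3 years of admin, dollars (assumed)
OWNED_CPU_HOURS = 3 * 365 * 24 * 64  # 64 CPUs available around the clock for 3 years
RENTED_RATE = 1.50                   # dollars per on-demand CPU-hour (assumed)

def cost_per_useful_hour(utilization: float) -> float:
    """Effective cost of each CPU-hour actually consumed on owned hardware."""
    used_hours = OWNED_CPU_HOURS * utilization
    return OWNED_CLUSTER_COST / used_hours

# A bursty workload (e.g. drug-testing surges) might keep an owned cluster
# only 10% busy; a steady workload might keep it 90% busy.
for util in (0.10, 0.50, 0.90):
    owned = cost_per_useful_hour(util)
    cheaper = "rent" if RENTED_RATE < owned else "own"
    print(f"utilization {util:.0%}: owned costs ${owned:.2f}/CPU-hr -> {cheaper}")
```

Under these assumed numbers, renting wins at low utilization and owning wins only once the machines stay busy most of the time, which is exactly the surge pattern Kahn points to.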
Clout for the little guy
At the same time, the ability to pay only for what you need and avoid upfront capital expenditures opens the door for smaller companies, such as Exa, to have computing power that makes them competitive with larger firms.
"What's cool about what IBM and Sun are doing is they're opening up technology that really was only available to a narrow subset of users," says Charles King, principal analyst at Pund-IT research.
Exa, one of the first customers to tap into IBM's Deep Computing on Demand centre, certainly has seen a shift in its capabilities. Before Deep Computing on Demand, "some [computations] just were not done," Remondi says. "People said, 'Well, we have fixed-capacity machines. I can't get all these computational simulations done.' So they would just go without the information," he explains. "Now there are whole classes of vehicle programs that we are supporting with this centre that we would not have been able to do any other way."
As many as 40 per cent of Exa's 100 customers tap into the IBM Deep Computing on Demand centre for hosted access to Exa's software, Remondi estimates. The rest run the software in-house. Capacity demands range from 1,000 to 5,000 CPU hours.
Having a name like IBM behind Exa's service gives the small company the clout it needs to address large customers' concerns, such as security, Remondi adds.
Security is the biggest hurdle for QuantumBio, a drug discovery software company, says Lance Westerhoff, the company's chief software engineer. QuantumBio has been tapping into IBM's Blue Gene on Demand service for a validation study aimed at showing proof of concept to its customers, he says.
"We'll use about 1,000 processors for two weeks," says Westerhoff, who is a principal investigator for a Department of Defense contract to use modelling for determining countermeasures for bio-terrorism. Once customers see the real-world results of being able to tap into such powerful computing muscle, they'll see the benefits outweigh the risks, he adds.
IBM designed Blue Gene, which is made up of specially designed Power-based nodes, to provide extremely dense processing power. A Blue Gene system at Lawrence Livermore National Laboratory has earned distinction for two years running as the fastest computer in the world on the Top500 supercomputer list.
The cost to bring a commercial version of Blue Gene in-house starts at around $2 million, Westerhoff notes. "Certainly, there is no way that QuantumBio, and indeed most of our customers, would like to put down that kind of cash, plus the costs of administration to run these jobs, when the CPU time can be bought on an as-needed basis at a much lower overall cost," he says. "They might do that in the future, but everybody right now is still at a point of trying different technologies."
New business opportunities
For PGA Tour, the ability to tap into an IBM mainframe, rather than buy one and house it onsite, enabled the organization's golf Web site to launch an interactive application through which users follow their favourite golfers in real time. PGA Tour unveiled its TourCast offering in 2003 after working with IBM to host the application on Linux partitions on a mainframe.
"The business-model component was the biggest challenge. This was our first subscription-based application so we didn't really have a great set of numbers we could count on to figure out how many people would subscribe," says Steve Evans, vice president of IS for PGA Tour. "The other option would have been to basically guess at how much server capacity we really needed."
By sharing the mainframe with other users and paying only for what it uses, PGA Tour expects to cut costs by 35 per cent over three years compared with what it would have spent had it brought in Intel servers to stand by in case of spikes in demand, Evans says. At its peak, Evans guesses that PGA Tour would have needed about 100 Intel servers to handle the TourCast demand.
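Evans's 35 per cent figure can't be reconstructed from the article, but the shape of the comparison -- a fixed fleet of standby servers versus shared, metered mainframe capacity -- can be sketched. Every number below is an illustrative assumption, not PGA Tour's actual costing.

```python
# Hypothetical comparison: 100 standby Intel servers vs. metered mainframe time.
# All figures are made-up assumptions for illustration only.

YEARS = 3
SERVER_COUNT = 100
SERVER_TOTAL_COST = 5_000     # per server: hardware, power, admin over 3 yrs (assumed)

METERED_RATE = 0.50           # dollars per unit of mainframe capacity (assumed)
PEAK_UNITS_PER_YEAR = 120_000 # usage concentrated around tournaments (assumed)

owned_cost = SERVER_COUNT * SERVER_TOTAL_COST
metered_cost = METERED_RATE * PEAK_UNITS_PER_YEAR * YEARS

savings = 1 - metered_cost / owned_cost
print(f"standby servers:    ${owned_cost:,}")
print(f"metered mainframe:  ${metered_cost:,.0f}")
print(f"savings over {YEARS} yrs: {savings:.0%}")
```

The key driver is the same one Evans describes: the standby fleet is sized for peak TourCast demand, while the metered bill tracks what is actually consumed.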
Regardless of whether the computing power comes from a mainframe or a grid or cluster of smaller systems, being able to tap into CPUs as needed stands to change the way companies think about their computing infrastructure.
"You're no longer limited by capacity or cost," says Exa's Remondi. "You can just pay for a single job, and you don't have shelfware and hardware lying around and excess capacity. You eliminate the need for big capital-equipment purchase decisions. If a job is valuable you do it; if it's not, you don't."