For much of its existence, Woodward Governor Co. has been building fuel nozzles and other parts for jet aircraft engines the old-fashioned way - carefully designing them, building physical prototypes out of raw materials, conducting tests and finally creating production parts.

Typically, it took the Fort Collins, Colorado, company weeks to produce workable plans for specialised new nozzle designs. Now, that time frame has been trimmed down to several hours by cutting out the steps needed to make physical prototypes.

Instead, Woodward is using powerful cloud-based supercomputers running complex simulation software to quickly create and evaluate designs that can go straight into production. Because the company no longer has to build physical models to prove its designs, nozzles can move from creation to production 80% faster, a change that's projected to save the company about half a million dollars each year.

"Woodward had some fairly robust computer workstations internally" to produce designs, but the company didn't have anything powerful enough to run large simulations, says Robert Graybill, an HPC consultant who worked with Woodward to help add on-demand supercomputer power to the company's production capabilities. Woodward's pilot project lasted 10 weeks, and additional testing will continue through October, followed by reviews of the project to determine if and how Woodward might use on-demand supercomputing in the future.

Graybill is the CEO and president of Nimbis Services Inc., an intermediary that helps match businesses that need more computing horsepower with HPC providers that can deliver it on an as-needed, pay-as-you-go basis. (Woodward IT representatives were not available for comment.)

Dan Olds, an analyst at Gabriel Consulting Group, says on-demand supercomputing is an idea that's blossoming. It's particularly useful for businesses that sometimes need processing help but can't afford to buy their own full-time, on-site HPC equipment.

"And it really opens the door for other users where this isn't even on their horizon yet," Olds says. "They could do these things on their own with the modest equipment they have, but it literally could take years." Alternatively, users could simplify the problems they're trying to solve so their projects could run on systems they can afford, "but that might not give them the results they need," Olds says. On-demand supercomputing helps companies avoid both those compromises without breaking their budgets.

The market for such services has been fairly limited in the past, says Charles King, principal analyst at Pund-IT. "But a weak, rocky economy makes on-demand services a lot more affordable and attractive than purchasing and maintaining a dedicated supercomputing cluster," he says in an email.

Check on software compatibility

The biggest challenges are more likely related to software than hardware, King says. Many companies that use HPC systems have developed their own applications or have tweaked packaged applications for their own use. So how providers of on-demand supercomputing services work with customers to run their existing software is a key issue.

For its part, Woodward got started with on-demand supercomputing when it was invited to participate in a pilot project through the University of Southern California's Information Sciences Institute (ISI), where Graybill is the director of innovation. The goal was to explore how manufacturers could use HPC cloud computing services to improve their industrial design and modelling processes. The work was done under a $3.67 million contract from the Defense Advanced Research Projects Agency (DARPA).

Woodward began accessing HPC services through an IBM Computing On Demand facility in Poughkeepsie, New York. It's a cluster of 256 System x3550 servers and 128 System x3450 servers with a total of 16GB of RAM, providing more than 19 trillion floating point operations per second (TFLOPS) of sustained performance, according to IBM. Woodward is able to use the extra computing power as needed, Graybill says.
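Taken at face value, IBM's figures imply a fairly modest per-node rate for that era of hardware. A quick back-of-envelope sketch (the per-server figure below is derived from the article's numbers, not stated in it):

```python
# Rough sanity check of the quoted cluster figures.
# The per-server rate is a derived estimate, not from the article.
servers = 256 + 128            # System x3550 plus System x3450 nodes
sustained_tflops = 19          # "more than 19 TFLOPS" sustained, per IBM

per_server_gflops = sustained_tflops * 1000 / servers
print(f"{per_server_gflops:.0f} GFLOPS per server")  # ≈ 49 GFLOPS each
```

Roughly 49 GFLOPS of sustained performance per server is plausible for quad-core Xeon machines of that generation.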

The Woodward pilot project was one of four similar projects that DARPA funded in the past couple of years. The three other projects involved virtual metal forming for ACE Clearwater Enterprises, chassis weight reduction for AlphaSTAR of Long Beach, California, and electromagnetic interference signature analysis for AltaSim Technologies.

Reducing waste

To date, the extra computing horsepower is helping Woodward produce new jet engine nozzle designs much more efficiently, while also reducing waste materials by 50%, according to figures from the pilot project. Using supercomputers to eliminate the need for physical modelling cut costs by $275,000 per engineer annually, adding up to savings of more than $500,000 per year, the study concluded.
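The per-engineer and total figures are consistent with a small design team. A back-of-envelope sketch (the engineer count is an assumption; the article gives only the per-engineer and total savings):

```python
# Back-of-envelope check of the pilot's savings figures.
# The engineer count is hypothetical: the smallest whole number
# consistent with the reported total.
PER_ENGINEER_SAVINGS = 275_000   # dollars per engineer per year, from the pilot
engineers = 2                    # assumption, not stated in the article

total_savings = PER_ENGINEER_SAVINGS * engineers
print(f"${total_savings:,} per year")  # → $550,000 per year
```

With just two engineers, the per-engineer figure already clears the "more than $500,000 per year" total the study reported.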

"The access to cloud-based supercomputing really does change how you design your products," because supercomputing expands your available technologies, tools and resources, Graybill says. "It impacts your time to market and your quality in good ways."

And because it buys HPC time only when it's needed, Woodward doesn't have to deploy expensive on-site HPC clusters, which would add energy costs and require more IT administrators to maintain them, he explains.

"It's sort of making a full circle from the days of buying time on a mainframe in the old days," he says. "This cloud computing is really driven by users that need additional computing capacity."

Similar kinds of public-private HPC projects have been undertaken through the Ohio Supercomputing Center (OSC), where businesses based in that state can get both computing capacity and expert advice to help improve their bottom lines.

"We have been helping with building economic development in Ohio, giving businesses from large to small access to supercomputing," says Ashok Krishnamurthy, the interim co-director of the center, which is based in Columbus. More than 25 businesses have directly participated in OSC's Blue Collar Computing program, which provides supercomputing power to companies that typically hadn't considered using it previously. More than 250 smaller companies have also used the services by accessing them through related organizations such as the Edison Welding Institute (EWI), he says.

Welding gains from HPC analysis, too

EWI, a member organization for companies that do welding, works with the OSC to allow member companies to log into a Web portal, the E-Weld Predictor, to simulate complex welds using supercomputers before doing the actual welding work, Krishnamurthy says. "This simulates a whole bunch of prototypes to cut down the time it takes to create a welding process from six months to two weeks."

"This is huge, because it reduces the time it takes to get new welding procedures in place," says Chris Conrardy, chief technology officer and vice president of technology and innovation at EWI. "It lets you optimise the weld and reduce the risks of bad welds."

What's really novel about this, Conrardy explains, is that the welding process is not linear. "It has lots of variables and historically has been difficult to model on computers. All sorts of things can happen to the properties of the materials depending on how you apply the heat," he says. And for particularly critical applications - a pipeline, say, or a pressurised vessel - "folks can spend an awful lot of time and money figuring out how to weld those things."

Typical workstations couldn't handle all of the calculations that are involved in this process, he says.