From the beginning, IT executives at Boscov's department store have had a mainframe bias. Today, as they think about evolving this retail chain's data centre into a more flexible, business-driven computing resource, little has changed: They consider the mainframe more important than ever. That might come as a surprise to those IT executives who consider the mainframe a dinosaur. But when considering which core computing platforms are best suited to support the new data centre, twists on the old become newly viable options.
"The mainframe will stay, but its role will be substantially different from what it is today," says Joe Poole, technical director for Boscov's. Mainframe workloads will shift from traditional batch jobs into a more fluid environment. For instance, Boscov's is merging Linux and the mainframe: the company deployed Linux on its IBM z900 mainframe in 2001 and began turning processes previously run on Windows NT servers into Linux instances. It has consolidated about 40 of its roughly 70 NT servers onto the mainframe. By using middleware such as IBM's MQSeries, transactions can flow from machine to machine, he says.
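The pattern middleware such as MQSeries enables can be sketched in a few lines. This is an illustrative, hypothetical sketch only: an in-process `queue.Queue` stands in for the real message broker, and the producer, consumer and transaction records are invented for the example. The point is that the sender and receiver share only a queue, not a platform.

```python
import queue
import threading

transaction_queue = queue.Queue()
processed = []

def point_of_sale_producer(transactions):
    """Stands in for an application on one platform emitting transactions."""
    for txn in transactions:
        transaction_queue.put(txn)
    transaction_queue.put(None)  # sentinel: no more work

def mainframe_consumer():
    """Stands in for a Linux instance on the mainframe draining the queue."""
    while True:
        txn = transaction_queue.get()
        if txn is None:
            break
        processed.append({"id": txn["id"], "status": "posted"})

producer = threading.Thread(target=point_of_sale_producer,
                            args=([{"id": 1}, {"id": 2}, {"id": 3}],))
consumer = threading.Thread(target=mainframe_consumer)
producer.start(); consumer.start()
producer.join(); consumer.join()

print(len(processed))  # 3 transactions posted
```

Neither side knows the other's hardware or operating system; swapping the NT box for a Linux instance on the mainframe changes nothing about the exchange.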
Letting business processes flow, regardless of hardware and operating system, is behind lofty vendor strategies for pooling computer resources that grow and shrink in response to demand. System vendors are busy promoting their on-demand programs - HP with its Adaptive Enterprise, IBM with eBusiness on Demand and Sun with N1 - but analysts say the concept won't be reality for many years. As a result, users today should focus on establishing the core computing platforms that will lay the foundation for that eventuality.
For its part, Boscov's is considering buying mainframe capacity on demand from IBM and virtualising its Windows servers. These technologies would reduce management headaches while ensuring that communication among all servers and the mainframe continues and that the infrastructure is used efficiently. Poole needs that assurance because Boscov's expects transaction volumes to jump significantly as it brings technologies such as radio frequency identification (RFID) and wireless to its 39 stores.
Little by little
Beyond open source operating systems, IT executives have numerous other core computing options for moving from the status quo to a new data centre that is easier to manage and that can support services-oriented and Web-enabled applications. These include industry-standard 64-bit server platforms, server clusters, blade servers, grid computing and server virtualisation.
Analysts and other industry observers suggest that IT executives attack such decisions one at a time, rolling out pilot projects to see what works where and then figuring out how components of the data centre can be integrated.
"You have to make this process decision tree that says, for example, 'Am I going to move away from [symmetric multiprocessing] to scale out, and, if so, where does Linux or clustering fit in, and where do some of the database capabilities running on a clustered environment fit in?' " says Vernon Turner, group vice president for global enterprise server solutions at analyst house IDC. "So you're starting to break down your data centre into the smallest manageable components. That's important because in the utility environment you have to be able to bill out in as small increments as possible."
IT executives at financial publishing firm Bowne & Co. knew they needed to address server inefficiencies. They decided to start fixing the problem one application at a time.
The company had built up enough capacity to handle spikes in demand from the printing of quarterly and annual financial statements, but that left the servers underutilised for most of the year. The publisher considered bringing in blade servers, but after carefully analysing application demands and infrastructure capabilities, determined that a grid architecture likely would be a better choice, says Ruth Harenchar, CIO at Bowne.
So the company decided to deploy a grid, starting small. Working with IBM and grid software maker DataSynapse, Bowne figured out that the statement-processing portion of its proprietary typesetting application would work best in a grid environment. It then determined which servers to use for the grid, based on application load and utilisation. "We had to find servers that had a similar configuration, the same operating system. In our opinion, we needed to have a minimal number of variables to work with in a pilot," Harenchar says.
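That selection step can be expressed as a simple filter. This is a hypothetical sketch only: the inventory records, field names and utilisation threshold are invented for illustration; the article specifies just the criteria, matching configuration, same operating system, and enough idle capacity.

```python
# Invented inventory records for illustration; only the selection
# criteria (same OS, similar configuration, spare capacity) come
# from the article.
inventory = [
    {"name": "app01", "os": "win2k", "cpus": 2, "avg_util": 0.15},
    {"name": "app02", "os": "win2k", "cpus": 2, "avg_util": 0.80},
    {"name": "db01",  "os": "aix",   "cpus": 4, "avg_util": 0.20},
    {"name": "app03", "os": "win2k", "cpus": 2, "avg_util": 0.10},
]

def grid_candidates(servers, os, cpus, max_util=0.25):
    """Keep servers matching the pilot's baseline that have headroom."""
    return [s["name"] for s in servers
            if s["os"] == os and s["cpus"] == cpus
            and s["avg_util"] <= max_util]

print(grid_candidates(inventory, os="win2k", cpus=2))  # ['app01', 'app03']
```

Holding configuration and operating system constant keeps the pilot's variables to a minimum, as Harenchar describes; the heavily loaded and mismatched boxes drop out.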
Since migrating that application from a Dell PowerEdge 1150 to a grid of two PowerEdge 2650 servers, processing time has dropped by 50 per cent, she says. Next, Bowne plans to spread the application across a grid of 10 servers, which should cut processing time by another 40 per cent, she adds.
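Because the second cut applies to the already-halved time, the two reductions compound. This back-of-the-envelope sketch makes the arithmetic explicit; the 100-unit baseline is an assumption for illustration, and only the two percentage figures come from the article.

```python
# Hypothetical baseline of 100 time units; only the ratios (50%,
# then a further 40%) come from the article.
baseline = 100.0
after_two_server_grid = baseline * (1 - 0.50)              # first cut: 50%
after_ten_server_grid = after_two_server_grid * (1 - 0.40) # a further 40%

total_reduction = 1 - after_ten_server_grid / baseline
print(f"{after_two_server_grid:.0f} -> {after_ten_server_grid:.0f} "
      f"({total_reduction:.0%} below the original)")
```

In other words, if both figures hold, the ten-server grid would bring processing time to roughly 30 per cent of the original single-server run.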
Before the grid, the statement processing application was running on a server at a very low utilisation rate and, when traffic spiked, performance took a hit. Harenchar says she is quite happy with the performance improvements from the grid and the ability to get more efficient use of her hardware. She plans to expand the use of grid technology within her data centre.
Harenchar attributes Bowne's success with the grid to a clear understanding of what it was trying to achieve. "Having set out our criteria and our objectives, we were able to pick the right application and the right servers, and things went quite smoothly," she says.
When it comes to choosing between platforms such as grid computing and blade servers, some analysts say integration and flexibility issues could lead a company to hold off on deploying the tiny servers.
A lack of standards in chassis design locks buyers into a specific vendor's products, they say, and the compact blades' heavy power demands can be troublesome. This lack of standards stands in the way of a truly adaptive infrastructure, but interoperability efforts are under way.
In December a new Distributed Management Task Force group, led by Dell, HP, IBM and Intel, began studying ways to manage heterogeneous servers, regardless of platform. This server management working group plans to deliver its first specifications by the beginning of July 2004.
Standardisation is one of the reasons why First Trust, an independent trust company, scrapped its 32-bit IBM Unix boxes and moved a transaction-processing database onto Itanium-based servers from HP, says Jeff Knight, the firm's vice president of technology and vendor relations. Use of standards-based 64-bit systems, which handle more memory and processing on each chip, has let First Trust improve performance and save on licensing costs.
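The memory claim is simple arithmetic. The sketch below shows the architectural ceilings only; real chips expose fewer physical address bits, so these are upper bounds rather than shipping configurations.

```python
# Architectural address-space ceilings: a 32-bit pointer can address
# 2^32 bytes (4 GiB); a 64-bit pointer can address 2^32 times more.
GIB = 2 ** 30

addressable_32bit = 2 ** 32   # bytes reachable with a 32-bit address
addressable_64bit = 2 ** 64   # bytes reachable with a 64-bit address

print(addressable_32bit // GIB)                 # 4 (GiB ceiling on 32-bit)
print(addressable_64bit // addressable_32bit)   # 4294967296x larger
```

That 4 GiB ceiling is what pushes memory-hungry transaction-processing databases, like First Trust's, toward 64-bit platforms.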
It also has enabled the streamlining of data centre operations. "It's given us a common architecture in development, testing and deployment," he says. "The fact that it's an industry standard product - the architecture is industry standard, the way the software is moving is industry standard - it really allows us to have a more cohesive data centre instead of having to have a specialty product for this business and a specialty product for that business."
But Knight cautions peers not to jump on new data centre technologies before they're well proven. "While you always want to lead with the ability to deliver great services," he says, "you want to make sure that there is going to be a world around you that can help you get there and then help you maintain it once you do get there."