Morphing the Mainframe
As distributed systems take on more mainframe-like qualities, the future of big iron hinges on its ability to adapt to the distributed computing revolution without being consumed by it.
Robert L. Mitchell, Computerworld
At Bank of New York, the mainframe is still king. Nearly three quarters of all transactions are processed on big iron, and 20 per cent to 25 per cent of the remaining transactions rely on the mainframe for at least some business processes. "The mainframe today is still the platform that we are able to drive to the highest level of utilisation," says Edward Mulligan, managing director of the technology services division at Bank of New York.
That's slowly changing. Like many companies, the bank conducts most software development projects on Windows or Unix servers. These distributed systems are more open, offer more-agile software architectures and are less costly to run and maintain than mainframes, Mulligan says. And distributed systems are increasingly offering traditional mainframe benefits, such as availability, scalability and server utilisation. Mainframe technologies, ranging from channel architectures to virtualisation, have migrated down to distributed systems and have begun to mature. "Most of the server solutions available today are morphing to become more like a mainframe," Mulligan says.
Mainframe goes distributed?
But the mainframe is also becoming more like distributed systems. Designs are evolving to incorporate technologies such as Fibre Channel, InfiniBand, Unix and Java. The success of those efforts will determine whether the mainframe will survive as a distinct platform or simply be absorbed into the world of distributed computing.
Robert DiAngelo, vice president and CIO at MIB Group, says he doesn't trust distributed systems with his high-end applications for insurance fraud detection. "I'm in an environment that's easy to maintain, very secure, highly reliable," he says of his IBM z890 midrange system. DiAngelo is redeploying his applications in a three-tier architecture that includes Java, WebSphere and DB2. But the entire architecture, plus his development and quality-assurance testing environments, are consolidated into a single logical partition on the mainframe. Everything fits into a cabinet in his data centre. "This is a lot easier to manage than 80, 90 or 200 servers that are spread out," says DiAngelo. MIB Group is a poster child for IBM's strategy of promoting the mainframe as a consolidation platform, although DiAngelo acknowledges that he's "out in front" of most organisations in taking this approach.
As mainframe technologies trickle down to distributed systems, those systems are getting better at hosting mainframe-class applications. Meanwhile, IBM, Unisys and others are moving to more open, industry-standard technologies. Distributed systems based on Unix and Windows are eroding the low end of the mainframe installed base. The mainframe still firmly holds its edge in complex environments. But the battle for the midrange—applications of up to 1,000 MIPS, where the majority of mainframe applications fall—has already begun.
Unless the relatively high costs of mainframe hardware and software become more competitive, and unless more-agile software architectures, such as .Net and J2EE, can be successfully deployed on mainframe systems at scale, the mainframe could eventually be eased out of corporate IT. "IBM mainframes are going to become marginalised to the high end if IBM can't significantly reduce the cost," says Gartner analyst Dale Vecchio.
Adoption of industry-standard technologies is key to the mainframe's survival. IBM has based its strategy on Java, WebSphere and Unix/Linux and positioned the zSeries mainframe as a consolidation platform. Last July, IBM also released its System z9, which reflects an investment of more than $1 billion and includes innovations such as an encryption processor and support for up to 54 processors and 60 logical partitions. "That's an enormously impressive technology. They doubled everything except the price," says Gary Barnett, an analyst at Ovum Ltd. in London. While mainframes are incorporating additional open architectures, they're also likely to continue to be technology leaders, says Chander Khanna, vice president and general manager at Blue Bell, Pa.-based Unisys. "They are at the top of the waterfall. I don't foresee that changing," he says.
At the hardware level, distributed systems have incorporated industry-standard versions of technologies with mainframe roots, such as Fibre Channel, InfiniBand and IBM's Chipkill error-correction technology, which is used in memory for high-availability systems. "Every big server now has dynamic partitioning, a channel architecture—things like InfiniBand—and they all have 64-bit support and large memory," says John Abbott, an analyst at The 451 Group in New York.
While IBM says proprietary channel architectures such as Ficon and Escon have advantages, Bank of New York's Mulligan would rather have standardised I/O. "An imaging application we have and a storage device we'd like to leverage are not supported cleanly by IBM's Ficon architecture," he says. "You end up buying these esoteric boxes that emulate the protocols."
But for MIB Group's I/O-intensive application, channel performance is more important than using open-standards hardware. Today, InfiniBand can't drive the number of concurrent channels DiAngelo needs. "We need that back-end channel capacity, and that's something the mainframe does very well," he says.
Mainframe remains proprietary
Still, proprietary I/O hardware is costly. "You pay a hell of a lot to get those channels in place," Abbott says. Most mainframe applications would do just as well with InfiniBand and off-the-shelf adapters, he adds.
That's the direction that IBM is moving in, says Guru Rao, an IBM fellow and chief engineer for the eServer line. While the mainframe is the system most capable of handling complex environments, he says, "a high-value system cannot provide only unique technologies. It has to be able to exploit and leverage high-volume capabilities in the industry." IBM already offers some support for Fibre Channel, and the next-generation mainframe will also support InfiniBand, Rao says. That evolution to support standards-based, commodity hardware architectures is necessary if IBM is to narrow the price gap separating the mainframe and distributed systems.
Mainframe vendors have also struggled with proprietary processor designs, which can't compete on price with high-volume Intel chips. The IBM plug-compatible mainframe market all but disappeared as the costs of keeping up soared. Bull and Unisys have both thrown in their lot with Intel (although Unisys says it will continue to offer some designs of its own), but IBM is taking a middle road. Its Power architecture is used in gaming systems, and IBM says it plans to leverage the economies of scale generated from those volume products to develop a more competitive, "higher value" version of the Power5 for the mainframe. "We are going to provide the same benefit to the mainframe as we have for the iSeries," Rao says. The zSeries processor will include "elements of the Power5 architecture," but the chip set will remain unique, he adds.
IBM's efforts are bringing costs down by about 20 per cent per year, says Abbott. However, the price/performance improvements for x86-based systems have been in the 30 per cent to 45 per cent range, he says.
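Those annual rates compound quickly. A back-of-the-envelope sketch (assuming the quoted figures mean cost per unit of performance falling by that percentage each year, and that the rates hold steady) shows how the gap widens over a five-year refresh horizon; the `relative_cost` helper here is purely illustrative:

```python
# Illustrative sketch: compound the annual price/performance improvement
# rates quoted above. Assumption: each rate is an annual decline in cost
# per unit of performance, held constant year over year.

def relative_cost(annual_improvement: float, years: int) -> float:
    """Cost per unit of work after `years`, normalised to 1.0 today."""
    return (1 - annual_improvement) ** years

years = 5
mainframe = relative_cost(0.20, years)   # ~20%/yr improvement
x86_low = relative_cost(0.30, years)     # low end of the x86 range
x86_high = relative_cost(0.45, years)    # high end of the x86 range

print(f"After {years} years (cost per unit of work, today = 1.0):")
print(f"  mainframe:    {mainframe:.2f}")
print(f"  x86 (30%/yr): {x86_low:.2f}")
print(f"  x86 (45%/yr): {x86_high:.2f}")
print(f"  mainframe premium vs x86: "
      f"{mainframe / x86_low:.1f}x to {mainframe / x86_high:.1f}x")
```

Even at the conservative end of the x86 range, a roughly 2x relative cost advantage opens up within five years, which is the arithmetic behind Vecchio's marginalisation warning.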
Virtualisation beats the mainframe?
The emergence of virtualisation technology on the x86 and Itanium architectures and the evolution of tools such as VMware are increasing utilisation levels for distributed systems, but they still fall short of the mainframe's capabilities. "To be realistic, we still have a ways to go before [distributed systems] can achieve the complete richness that the mainframe environment offers," says Barnett. But the gradual maturation of non-mainframe virtualisation technology could eventually make it possible to migrate larger workloads of 1,000 MIPS and higher off the mainframe, he says.
"Virtualisation is the key for bringing the mainframe and open-systems technologies together," says The 451 Group's Abbott. IBM offers its Virtualisation Engine, which Rao says will increasingly be used to optimise resources across systems. "In our view, the way to deal with customers' complexity is to use a virtualisation engine that will run not only on IBM platforms but also on other leading platforms," he says.
Another type of virtualisation—hardware emulation—is also making it easier to move mainframe application environments by abstracting them from the underlying hardware. Paris-based Bull offers virtualisation that enables its GCOS 8 mainframe operating system and the applications on top of it to run unchanged on its Intel-based NovaScale 9000 hardware, says Joe Alexander, Bull's director of strategy and planning. For now, however, high-end customers will have to wait for faster chip sets and performance improvements to the emulation software.
Similar technology for z/OS and OS/390 is available from Platform Solutions. The Fujitsu spin-off sells a system that, when used with the vendor's virtual I/O subsystem, can support z/OS, as well as Linux, Unix and Windows partitions, on one x86-based system. "We bring the characteristics of the mainframe and the execution of the operating system to commodity hardware," says Platform's CEO Michael Maulick.
As a method of balancing workloads, IBM's approach with Virtualisation Engine sounds a lot like grid computing—an activity that's largely being driven in the open-systems arena today. "Even if IBM has an edge, it's not going to last with so much activity going on elsewhere," Abbott predicts.
Software is key
The bottom line is that the hardware platform is becoming less and less relevant, says Unisys' Khanna. "It's more of what's in the operating environment and what's in the middleware," he says.
Mainframe operating systems, while proprietary, still hold key advantages in several areas. "The operating system provides the efficiency, isolation, the address spaces, the encryption, and supports an efficient clustering model," says Rao. The mainframe operating system is also the most trusted platform for doing key management, he says. Buffer overflows on z/OS are unheard of, says MIB Group's DiAngelo, adding that "it's a hell of a lot easier to secure one box."
But the biggest issue remains what to do with the more than 40 years of mainframe code—much of it tightly woven into the mainframe operating system and hardware architecture—that needs to play in a world of distributed computing and Web services. Mainframe users are sitting on more than a trillion dollars' worth of legacy mainframe code, says Rao. Bank of New York, says Mulligan, is "dealing with tens of millions of lines of code." And that amount of code couldn't be ported in his lifetime, he says.