This is a two-part article. Part Two appears tomorrow.
For 10 years, IT managers have heard the promises: cheap server farms will replace expensive mainframe systems, lowering costs and improving competitive advantage through modern applications. And for 10 years, it hasn’t happened.
Vendors are still feeding IT the same sales pitch. This time around, it’s coming from the likes of HP, Microsoft and Sun, all companies with vested interests in selling their own boxes and legacy migration tools. Can these modern solutions really deliver on promises first made a decade ago?
As it turns out, the answer is a qualified yes. Distributed server systems can, in fact, replace the mainframe at a lower cost, especially in organisations running lower-end mainframe systems that offer 500 or fewer MIPS of computational power, according to Forrester Research analyst Phil Murphy. As an organisation’s IT infrastructure scales up, however, the answer is less clear, notes IDC Research Director Steve Josselyn. Some large organisations find the mainframe to be a much more efficient and economical platform, whereas others realise dramatic cost savings by migrating away.
A migration triple play
According to Ted Venema, a consultant at legacy modernisation vendor BluePhoenix, when an organisation does decide to move applications from the mainframe, it typically faces three migration challenges: the hardware platform, the database system, and the application development language.
The Danish Commerce and Companies Agency (DCCA), which processes business registrations and shares data with the tax agency of Denmark, migrated all three aspects of its mainframe environment this year. DCCA’s legacy transaction system was based on the Adabas database and the Natural 4GL (fourth-generation language) from Software AG, running on an IBM System/390 mainframe under MVS. In a typical month, the agency would run some 800,000 transactions across approximately 2,700 applications.
Given the Adabas/Natural platform’s 35-year lineage and diminishing market share, DCCA was concerned that its legacy environment would have few support options as time wore on. According to project manager David Graff Nielsen, the agency already had to rely on an outside firm to manage its code. Moreover, DCCA wanted to move to a more Web-enabled transaction environment, which would allow businesses to register and update their information over the Internet -- something the Adabas/Natural platform did not easily support.
So the agency moved its applications onto 16-processor x86 application and database servers running Suse Linux and Oracle 9i Release 2. The new platform is at least 25 percent cheaper to operate and maintain, Nielsen says, freeing up money and staff to improve and Web-enable the agency’s services rather than merely cut costs.
DCCA hired BluePhoenix to translate roughly 1 million lines of Natural code into Java and to convert existing IBM JCL (Job Control Language) code to Korn shell scripts, using automated tools developed especially for this project. The translated code isn’t particularly object-oriented or well-formed, Nielsen says, but it functions well, and most importantly, it’s accessible for maintenance and fine-tuning. The new system uses the network more heavily, both because of increased traffic between servers and because some outside systems still connect via 3270 terminal emulation. Nielsen says the increase was manageable, however.
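To make the JCL conversion concrete, here is a minimal sketch of how one batch job step might look after such a translation. It is purely illustrative: the step, program, dataset, and file names are hypothetical rather than taken from DCCA’s code, and the mapping it shows -- datasets to file paths, the executed program to a translated Java application, condition codes to shell exit statuses -- is the general pattern for this kind of conversion, not BluePhoenix’s specific output.

    #!/bin/ksh
    # Hypothetical translation of a single JCL job step such as:
    #   //STEP1   EXEC PGM=REGUPDT
    #   //INFILE  DD DSN=DCCA.REG.DAILY,DISP=SHR
    #   //OUTFILE DD DSN=DCCA.REG.MASTER,DISP=OLD

    # DD statements become file paths handed to the program.
    INFILE=/data/dcca/reg/daily.dat        # was DSN=DCCA.REG.DAILY
    OUTFILE=/data/dcca/reg/master.dat      # was DSN=DCCA.REG.MASTER

    # EXEC PGM=REGUPDT becomes a call to the translated Java application.
    java -cp /opt/dcca/lib/regupdt.jar RegUpdt "$INFILE" "$OUTFILE"
    rc=$?

    # The step's condition-code check becomes an exit-status check.
    if [[ $rc -ne 0 ]]; then
        print -u2 "STEP1 ended with return code $rc"
        exit $rc
    fi

The point is that the batch workflow survives the move intact; only the plumbing underneath it changes.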
Swapping out mainframe hardware, code, and databases at the same time introduces a lot of risk. Nielsen says a key factor in ensuring that DCCA’s transition ran smoothly was avoiding translating the code and changing its functionality at the same time, a mistake made in a previous, failed migration effort. Changing both at once would introduce too many variables, he says, making it difficult to verify that the new code was correct. By doing the translation first, the agency could rework the translated applications later, optimising them and adding new functionality -- in the meantime, it could still run its business on the translated code.
Sabre pushes the limits
Sabre Holdings -- parent of the Travelocity online consumer booking service and the Sabre travel reservations and ticketing system, which handles about 40 percent of worldwide travel reservations -- is in the middle of one of the largest mainframe migrations. Todd Richmond, the company’s vice president of enterprise architecture, says Sabre has the world’s third-largest implementation of IBM TPF (Transaction Processing Facility) mainframes. In an effort that began almost six years ago, however, Sabre has migrated most of its domestic booking services to four-way, Opteron-based HP NonStop servers running 64-bit Red Hat Linux and the MySQL database.
Initially, Sabre’s IT managers thought they would have to migrate everything to the costly NonStop servers, but in the past year they discovered that they can use standard x86 servers for less-intensive work. Sabre will continue to use NonStop servers for database transactions because they more reliably handle Sabre’s 14,000 transactions per second across the large data sets typical of its environment.
Sabre’s road away from the mainframe has not been easy, and the project is still several years from completion. This year, the company encountered unexpected problems in managing its server farms. “It’s our No. 1 challenge,” Richmond says, adding that Sabre had to build a lot of middleware to replicate the mainframe’s end-to-end monitoring and self-management capabilities. “There are more hops now, so we have to be diligent about latency.”
Sabre still experiences periods when reliability isn’t the same as it was on the mainframe, Richmond says, but it has gained the advantage of much shorter development windows -- perhaps half as long -- owing to the combination of the move away from assembly language and the use of desktop development tools. Richmond’s staff has also been able to code functions such as calendar-based flight availability in C++ and Java, which he believes could not have been done using mainframe code.
Richmond says Sabre expects to transition its international and multi-route domestic services before the end of the year. That step will allow the company to retire one of its three TPF data centres, each of which contains about eight mainframes. During the next 18 months, Richmond expects to migrate Sabre’s core passenger itinerary service to the distributed system as well, eliminating a second TPF data centre. That will leave only the master transaction database. Richmond thinks he may need to stick with IBM TPF for that one, at least for a while, as HP isn’t yet certain it can deliver the TPF-level fault tolerance the database needs.
All told, this ambitious, multi-year migration effort consumes “a significant percentage” of Sabre’s annual $150 million IT budget, but Richmond says it’s well worth it. He says costs are already less than half of what they had been, mostly due to savings on the per-transaction charges levied for the TPF environment.
To be continued. Part Two of this article will be published tomorrow.