When you think about it, you have only two ways to travel. You can select a destination and make travel plans to get you there. Or you can focus on the journey and wander with no real end-point in mind. When it comes to migrating today's IT infrastructure to its next-generation form, the new data centre, network executives already on their way agree that the destination method is superior. Nevertheless, many say they feel forced into wandering.

"When I think of enterprise architecture, I think of our general direction on how we want to build and deploy systems. We plan about five years out, but we keep changing that plan about every month, it seems, to remain responsive to the business," says Dave Cogswell, director of technical services at Data-Tronics, the IT subsidiary of ABF Freight System. Being nimble lets IT be driven by business need but it makes the feasibility of accurate, long-range planning "very debatable," Cogswell contends.

Yet, when considering the enormous changes required to build the new data centre, a long-term architecture goal becomes a must, experts argue. Only with a destination in mind can network executives make intelligent decisions about the technologies they need to build a new data centre infrastructure.

"You can sit back, follow the herd and stay in the mainstream. [But] it's too hard to interpret the world in that kind of read-and-react approach. If you see a change that's dramatic coming down the pipe, it behoves you to have a plan and to have an architecture for where you are trying to get," says Geoffrey Moore, author and managing director of consulting firm TCG Advisors. Moore advocates an architectural plan that looks as far as 10 years out.

What is the new data centre?
The new data centre has moved from the conceptual idea it was a year ago to a production infrastructure that today's early adopters are testing and deploying. As the new data centre evolves, all agree that a long-range plan will be based on two ideas. First, the new data centre relies on a new business model, the extended enterprise, which is in itself the basic building block for yet another emerging business model - the global ecosystem. Second, the basis for the extended enterprise's (and, eventually, the global ecosystem's) IT infrastructure will be change management.

The network executive's objective when building and supporting the extended enterprise business model is to create an environment that offers equally safe, reliable and productive IT systems for all user constituents, be they employees, customers, suppliers or what have you. The challenge for network executives in executing on this promise will be providing a consistent level of service while being extremely adaptable - the only constant will be change. Vendors and analysts across the board are evangelising their own visions and terminology for this infrastructure, calling it adaptive (HP), grid-like (Oracle), on-demand and autonomic (IBM), or dynamic IT (IDC).

All of these visions share common core components: each sees the infrastructure as a system of componentised, automated and hot-swappable services glued together with best practices. That is a vastly different mindset from the traditional view of the infrastructure as a mix of wires, boxes and software.

Services not hardware
With the traditional viewpoint, the technology stars as the basis of the architecture vision. But because technology changes more rapidly than a large company can examine, test and deploy it, such a viewpoint has become problematic.

"Rather than being an inhibitor of change, IT should be an enabler. The fact that it isn't is ironic since flexibility was a key expectation of IT from the beginning," says Russ Daniels, CTO of software and adaptive enterprise at HP. "We think it is important that you approach [adaptive enterprise architecture] holistically, not as a technology concern. We don't think it is about moving from this messaging system to that messaging system or rewriting code from this to that. You will become adaptive as a result of thousands of architecture decisions you make over a period of years."

Daniels learned this while re-engineering HP's IT architecture after the Compaq merger. "At the time of the merger, we were spending 72 per cent of our IT budget on operations and 28 per cent on what we identified as innovation - new capabilities available to the business. The objective of HP is to be at more of a 50/50 split," he says.

To get there, Daniels realised, HP had to identify and automate the manual tasks in which human-caused errors were driving up maintenance costs. Automation can become an ideal lens through which to envision the entire new enterprise architecture, he says.

Users who have tried this approach agree. "The whole change management process - utility computing - is the way to view IT," says one IT executive from a Fortune 500 insurance company that recently completed an 18-month project to automate server and application management with data centre automation software from Opsware.

"Our soft spot was systems, where IT head counts per server were high. We wanted to increase the number of servers being supported by one person, and we looked at automation tools to help us automate processes," says the user, who asked not to be identified.

"Tools like Opsware will help us get where we want to go without re-engineering the whole infrastructure." He now is looking at how change management can become the foundation of his company's next-generation, utility computing-type architecture.

A new model
As network executives build out their extended enterprises into global ecosystems, they are learning that their industry will dictate much of their architecture needs.

"A lot of what happens in on-demand only comes to life in an industry context . . . formed and inspired by industry thinking," says John Lutz, global vice president for on-demand at IBM. To that end, vendors such as IBM, HP and Oracle have developed detailed architectural maps for most of the major industries. These maps are excellent starting points for long-range planning, users say

Many companies in the utility, automotive and electronics manufacturing industries already are shifting their architectures to the new data centre model and can serve as examples. Key indicators of how far along a particular industry is include the adoption rate of bellwether technologies such as virtualisation, XML, Web services and service-oriented architectures, says Frank Gens, senior vice president of research at IDC. Other factors include how volatile the market is and how information-intensive the industry is.

While the particular industry might dictate the specific architectural map, several constants are emerging as the basis for the new data centre. For instance, the infrastructure is becoming a series of services that can be built in-house, outsourced or both.

The generic model comprises five layers, plus two cross-layer elements:

  • The client layer. As always, the client includes computers, voice devices and combined voice/data devices. But a different kind of client emerges in the new data centre, one that will end up generating most of the traffic: the machine-to-machine automated device, such as anything sporting a radio frequency ID chip (appliances, automobiles and pallets of goods).

With so many more clients on the average network, the infrastructure will be handling a "quantum leap" of data, Gens says. This is one of the pressing reasons that a long-term move to the new data centre will be necessary. With the client/server-based architecture of today, "if you throw an order of magnitude or two more data on top of it, the thing breaks. Let's build a more solid foundation," he says.

  • The network data access services layer. This includes LANs and other private enterprise networks, leased-line WAN services, public access networks like the Internet or metropolitan-area networks and, eventually, business-class public networks, such as infranets.

  • The federated identity management services layer. This layer provides the security backbone. Through a centralised service, users and machines will gain access to specific business services. Automated roles-based provisioning will play a big part, quickly granting and revoking access based on job function or type of machine (a rough sketch of this idea follows the model description below).
  • The business process management layer. This is emerging as the heart of the infrastructure, composed of three pieces - management, analytics and the business processes.

In Daniels' view, network executives would be wise to make a distinction between business processes offered as services (say, a funds-transfer application available to all financial applications) and processes that ensure a service's availability (such as performance monitoring tools and network/storage/CPU capacity). This distinction will let network executives understand the costs involved in providing specific business services. With that, they can conduct ROI analysis or build an adaptive, utility computing structure complete with charge-backs (also sketched below). Analytics should be an element as well: IT systems that help the business side document performance and predict future needs.

  • The virtualised infrastructure services layer. This is the computational and data access foundation on which all else rests and is the layer that most companies currently are exploring or implementing.

Servers and storage become managed as a single pool of resources. Over time, grid computing will be added to many enterprise infrastructures and will boost CPU power for computationally intensive tasks.

In fact, the concept of virtualisation is being applied generally across the entire model - making network capacity available on demand, and purchasing enterprise applications such as sales force automation or enterprise resource management in application service provider form.

Two architectural beams will support all the layers: industry-standard best practices and procedures, such as the IT Infrastructure Library (ITIL), and systems and security management, for monitoring and performing QoS functions.
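As flagged in the identity-management layer above, roles-based provisioning can be reduced to a small illustration. This is a minimal sketch under invented assumptions: the role names, entitlements and IdentityService class below are hypothetical, not any vendor's product.

    # Illustrative sketch of roles-based provisioning: access is derived from a job
    # function (or machine type) rather than granted subject by subject, so a single
    # role change grants and revokes a whole set of entitlements at once.

    ROLE_ENTITLEMENTS = {
        "claims_adjuster": {"claims_app", "document_archive"},
        "supplier_portal_machine": {"inventory_feed"},
        "network_admin": {"monitoring_console", "provisioning_console"},
    }

    class IdentityService:
        def __init__(self):
            self.assignments = {}   # subject (user or device) -> role

        def assign_role(self, subject, role):
            self.assignments[subject] = role

        def entitlements(self, subject):
            return ROLE_ENTITLEMENTS.get(self.assignments.get(subject), set())

    ids = IdentityService()
    ids.assign_role("jsmith", "claims_adjuster")
    print(ids.entitlements("jsmith"))           # claims_app, document_archive
    ids.assign_role("jsmith", "network_admin")  # job change: old access is revoked
    print(ids.entitlements("jsmith"))           # monitoring_console, provisioning_console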
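The cost distinction Daniels draws in the business process management layer can be illustrated the same way: if each supporting infrastructure service is metered, the cost of a business service is simply the sum of what it consumes, which is the basis for ROI analysis or charge-backs. All of the rates, usage figures and service names below are invented for illustration.

    # Illustrative charge-back sketch: the monthly cost of a business service is the
    # metered sum of the infrastructure services that keep it available.

    UNIT_RATES = {"cpu_hour": 0.12, "gb_stored": 0.50, "monitored_node": 3.00}

    monthly_usage = {
        "funds_transfer":    {"cpu_hour": 4200, "gb_stored": 800,  "monitored_node": 12},
        "claims_processing": {"cpu_hour": 1500, "gb_stored": 2500, "monitored_node": 8},
    }

    def charge_back(usage, rates):
        """Return each business service's monthly cost for billing or ROI analysis."""
        return {service: round(sum(rates[item] * qty for item, qty in items.items()), 2)
                for service, items in usage.items()}

    print(charge_back(monthly_usage, UNIT_RATES))
    # {'funds_transfer': 940.0, 'claims_processing': 1454.0}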

Getting from here to there
Of course, envisioning an ultimate architecture is the easy part. Taking action to get the IT organisation there is another. The migration path requires testing out architecture planning for smaller time frames and in smaller chunks, says Scott Richert, director of network services for Sisters of Mercy Health System.

Richert is restructuring IT to standardise and automate server and application management and to implement identity management, among other overhaul projects. He creates three-year technology road maps for individual portions of his infrastructure using what he calls "life-cycle phasing." Technologies are investigated and used in pilot programmes during the "sunrise" phase, he says. Buying and implementation take place during the "mainstream" phase. During the "sunset" phase, IT supports the systems but does not procure new systems. After sunset, the technologies are retired.
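One way to picture life-cycle phasing is as a simple road-map record per technology, where the current date determines which phase applies and therefore what IT is allowed to do. The sketch below is hypothetical; the technologies and dates are invented, not Sisters of Mercy's actual road map.

    # Illustrative sketch of life-cycle phasing: each technology carries sunrise,
    # mainstream, sunset and retirement dates, and the current phase drives policy.

    from datetime import date

    ROADMAP = {
        # technology: (sunrise, mainstream, sunset, retirement)
        "blade servers":       (date(2004, 1, 1), date(2004, 9, 1), date(2006, 6, 1), date(2007, 6, 1)),
        "identity management": (date(2004, 6, 1), date(2005, 3, 1), date(2007, 3, 1), date(2008, 3, 1)),
    }

    def phase(technology, today):
        sunrise, mainstream, sunset, retirement = ROADMAP[technology]
        if today < sunrise:
            return "not yet on the map"
        if today < mainstream:
            return "sunrise: investigate and pilot only"
        if today < sunset:
            return "mainstream: buy and implement"
        if today < retirement:
            return "sunset: support only, no new procurement"
        return "retired"

    print(phase("blade servers", date(2005, 2, 15)))   # mainstream: buy and implement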

While Richert has yet to create a single, long-term life-cycle phasing plan for his entire network, he is inching toward one. "I'll do multiple parts to a map and I encourage my team to take the time to map and to take it seriously. The data centre infrastructure is being built around a business process framework and we need to plan for key metrics such as cost vs. benefit, redundancy and meaningful service-level metrics, tied to business processes. That's not easy to do, but it's important," he says.

Plus, as any traveller can tell you, the value of an accurate map cannot be overstated.