Of all the technology trends that have been important in the world of enterprise IT, there's one that stands out as the 800-pound gorilla - the elephant in the room, if you will - of the last few years: virtualisation.
Virtualisation has brought us new ways of doing things, from managing desktop operating systems to consolidating servers. What's also interesting is that virtualisation has become a conceptual issue - a way to deconstruct fixed and relatively inflexible architectures and reassemble them into dynamic, flexible and scalable infrastructures.
Most of the virtualisation systems we use today, such as VMware, Xen, Parallels, VirtualBox and Virtual Iron, apply the technology at the level of individual servers.
(Digression: We just stumbled across an interesting screenshot of VirtualBox running under Vista, which is running under VirtualBox, which is running under Windows XP. Unless they have overclocked some gonzo hardware, we're guessing performance is sluggish. Even so, it is very cool in the same way that compiling a compiler with itself is cool.)
To tie lots of individual servers together, most of these products have some kind of management console that provides a consolidated view, so that virtual machines can be run, monitored, moved and so on. This works well, but it isn't the ultimate model for enterprise virtualisation.
"What then," you may be wondering, "is the ultimate enterprise virtualisation model?" The answer is infrastructure virtualisation. You see, the existing management consoles we mentioned still present physical views of virtualised resources, which is like having to manage virtual machines on a physical server by assigning the specific memory addresses they use - wholly unnecessary overhead.
The same philosophy applies to managing multiple servers: Why do we need to know where the virtual machines are running? Why not just treat the entire multi-server infrastructure as a virtualised flat space of resources that are allocated to specific functional goals?
In this virtualised infrastructure the resources might be, for example, 20 processors and 40GB of RAM, and the functional goal could be to run, say, a load balancer front-ending three Web servers and a database server on four processors and 4GB of RAM.
In such a system we shouldn't have to worry about which processors are used and where they are; we should just be able to specify our configuration, launch it and monitor its performance.
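To make the idea concrete, here is a toy sketch of that "flat space of resources" notion - a hypothetical illustration of our own, not 3Tera's actual API. The caller asks the pool for capacity and names a functional goal; which physical processors end up doing the work is never specified.

```python
# Toy model of a flat, virtualised resource pool. Purely illustrative:
# the names and classes here are invented for this example.

class ResourcePool:
    """A grid of servers viewed as one undifferentiated pool of capacity."""

    def __init__(self, cpus, ram_gb):
        self.cpus = cpus        # total processors available across the grid
        self.ram_gb = ram_gb    # total RAM available across the grid

    def allocate(self, goal, cpus, ram_gb):
        """Reserve capacity for a functional goal - no server is named."""
        if cpus > self.cpus or ram_gb > self.ram_gb:
            raise RuntimeError("insufficient capacity in pool")
        self.cpus -= cpus
        self.ram_gb -= ram_gb
        return {"goal": goal, "cpus": cpus, "ram_gb": ram_gb}

# The example from the text: a 20-processor, 40GB grid...
pool = ResourcePool(cpus=20, ram_gb=40)

# ...running a load balancer, three Web servers and a database server
# on four processors and 4GB of RAM, drawn from the pool as a whole.
web_stack = pool.allocate("lb + 3 web servers + db", cpus=4, ram_gb=4)

print(pool.cpus, pool.ram_gb)  # 16 36 - remaining capacity, location unknown
```

The point of the sketch is what is *absent*: there is no server name, rack location or memory address anywhere in the request - exactly the detail a physical management console would force you to track.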
We've looked at a few products that address this vision of virtualised infrastructure, and the only one that delivers the goods is AppLogic from 3Tera.
AppLogic is a grid operating system designed specifically for infrastructure management, which sets it apart from traditional grid operating systems, which are mainly designed to aggregate and manage the resources required to solve large-scale computational tasks such as weather forecasting and finite element simulations.
Under the bonnet, AppLogic is a set of services installed onto the free, open source CentOS 4.3 operating system, a Linux distribution based on Red Hat Enterprise Linux. The services are the Distributed Kernel, which "abstracts and virtualises the grid hardware and provides core system services" and replaces the existing CentOS kernel; the Disposable Infrastructure Manager, which manages "applications"; and the Grid Controller, which provides the node management and monitoring services.
Once AppLogic is up and running, what you have is a flat, virtualised enterprise infrastructure that hides the underlying servers and network connections and turns service configurations (that is, 3Tera applications) into incredibly easily deployed, managed and expandable solutions.