Virtualisation has created a significant buzz in the IT marketplace. This isn't surprising, since it has the potential to cut across major data centre issues: utilisation, security and peak capacity, among others. It will also affect development, quality assurance, testing and provisioning.
What, exactly, is virtualisation? It's a familiar concept in operating systems. Virtual memory gives each running process on a computer the illusion of its own memory. That memory is strictly under the operating system's control; processes cannot access memory outside the range given by the operating system, and the operating system can even move virtual memory out of physical RAM and onto a disk if it wants to.
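This per-process illusion is easy to observe. The sketch below is POSIX-only and uses CPython's id() as a stand-in for an object's address: after a fork(), parent and child see the "same" virtual address for an object even though their contents diverge, because each process has its own private view of memory.

```python
import os

data = [0]
addr = id(data)            # CPython: the object's address in this process's view

pid = os.fork()            # duplicate this process's virtual address space
if pid == 0:               # child: same virtual address, but a private copy
    data[0] = 42
    assert id(data) == addr
    os._exit(0)

os.wait()                  # parent: the child's write is invisible here
print(addr == id(data), data[0])   # -> True 0
```

The child modified "its" list at the same virtual address, yet the parent still sees the original value: the operating system gave each process its own memory behind identical-looking addresses.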
Operating system virtualisation takes this a step further, allowing an entire hosted, or guest, operating system to run under the control of a supervising master program, usually termed a hypervisor or virtual machine monitor (VMM). Such virtualisation allows multiple separate virtual machines to run at the same time on a single computer and can even allow a running VM to migrate between host computers.
There are different ways to achieve this. Some rely on hardware support, others rely on cooperation from the virtualised operating system itself. In many cases, there is still a single privileged host operating system providing primary control over hardware devices. Depending on the implementation, this may or may not be distinct from the hypervisor. The guest operating system is the operating system and environment installed on each virtual machine.
There are typically three types of virtualisation seen on commodity hardware:
1. Full virtualisation (also known as transparent virtualisation, or FV). This model creates an entire virtual computer: from the guest operating system's perspective, the VM appears to be a generic physical system, and no modifications to the guest are required. This makes it a great model for supporting legacy systems, but on x86 it requires hardware support to perform well; because of complexities in the x86 architecture, such systems carry a performance penalty.
2. Para-virtualisation (PV). Similar to full virtualisation, but PV addresses the performance issues FV encounters. It requires the guest operating system to be modified in order to be virtualised: hardware operations that can't be virtualised efficiently are simply replaced by direct calls to the hypervisor. These modifications eliminate most of the overhead associated with FV, whose overhead is roughly two to four times that of PV.
3. Single kernel image (SKI). This model takes the traditional resource management provided by the operating system further; it is the logical conclusion of containers: a single OS provides multiple instances of its services. Each application running in a container appears to have its own distinct installation of the OS.
Full virtualisation (examples: VMware, Xen with hardware support)
* Complete system emulation enables bare-metal simulation.
* Host OS and guest OS have no dependency relationship.
* Significant impact on system performance from the virtualisation layer; hardware support reduces this but does not eliminate it.

Para-virtualisation
* Allows low-overhead virtualisation of system resources.
* Can provide direct hardware access in special cases (e.g., dedicated NICs for each guest OS).
* Allows hypervisor-assisted security mechanisms for the guest OS.
* Requires a para-virtualisation-enabled guest OS.

Single kernel image (examples: Solaris Zones, SWsoft Virtuozzo)
* Enables virtual environments to be deployed on a base system.
* Lightweight virtualisation.
* Partitions are limited to the host OS version.
* A kernel exploit exposes all partitions on the system.
* Virtual environments are tightly coupled with the OS patch level.
While the third model allows high-performance virtualisation, SKI has a number of issues. Migration of virtual systems is difficult: the same deployed operating system version and patch level must be available on the target. Most environments are disparate (even when they run the same operating system), making this requirement a significant challenge for most corporations. Nor is it possible to run different versions of the operating system at the same time. For the rest of this article, I will focus on virtualisation of entire operating system instances.
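As noted above, full virtualisation on x86 performs well only with hardware support (Intel VT-x or AMD-V). On Linux, a quick way to check whether the CPU advertises that support is to look for the vmx or svm flags in /proc/cpuinfo; here is a minimal sketch:

```python
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if neither is found."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags:
                        return "vmx"
                    if "svm" in flags:
                        return "svm"
    except OSError:   # not Linux, or /proc unavailable
        pass
    return None

print(hw_virt_support())
```

A None result doesn't always mean the hardware lacks the feature; on some machines it must first be enabled in the BIOS or firmware.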
There are some powerful developments in virtualisation: an existing VM can be run on different hardware with no re-engineering required. Simply provide another system with the same VMM, and all existing VMs can run in place. This decoupling of a system from bare-metal hardware is extremely powerful. For example, you can now run an OS that doesn't support ACPI (Advanced Configuration and Power Interface) on modern dual-core systems, fully virtualised. This can reduce hardware requirements, increase application performance and cut the cost of legacy maintenance and engineering.
Virtualisation enables completely separate installations to run on a single machine. One physical system can have distinct VMs, each with its own system administrators, users, applications and libraries, even different operating systems. Additional configuration information may be associated with each VM: machine state, global storage configuration and resource requirements.
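Such per-VM configuration is typically captured in a definition file that the VMM reads when it starts the guest. As an illustrative sketch (not an authoritative template), a libvirt-style domain definition for a hypothetical fully virtualised guest might look like this; the VM name and disk path are invented for the example:

```xml
<domain type='kvm'>
  <name>legacy-app-vm</name>          <!-- hypothetical VM name -->
  <memory unit='MiB'>1024</memory>    <!-- resource requirements -->
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>    <!-- 'hvm': fully virtualised guest -->
  </os>
  <devices>
    <disk type='file' device='disk'>  <!-- storage configuration -->
      <source file='/var/lib/libvirt/images/legacy-app.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Because the definition travels with the VM rather than the hardware, the same guest can be recreated on any host running a compatible VMM.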
In upcoming instalments, we'll take a look at how virtualisation can be used to address the common complaint in most data centres: under-utilised computing resources. Contributing factors to this under-utilisation include the following:
* Security. Existing security models do not easily allow applications from different business units to be run on a single server.
* Management complexity. Each application requires distinct middleware, libraries and environment configuration.
* Provisioning cost. Provisioning a system is expensive in terms of both time and risk.
* Peak capacity. Real-time peak capacity must be attainable, which can result in a high number of idle systems during off-peak hours.
Brian Stein is engineering manager for Red Hat's Emerging Technologies Group.