What happens when you take a blade chassis and remove the disks, NICs, HBAs, and state from the blades? You get the Egenera BladeFrame EX. At first glance, the BladeFrame EX appears to be a standard data centre rack populated with 1U servers and a few larger units near the middle. Take a peek around the back, however, and there’s no nest of Ethernet, FC (Fibre Channel), KVM, and power cables; only a few Ethernet and fibre-optic cables emerge from the front of the rack. So what gives?
Egenera has taken a unique approach to the concept of adaptive computing. Rather than pushing images around to physical servers to achieve server mobility, the BladeFrame EX is populated with blades that have no local disk and no server state. These blades are nothing more than CPUs, RAM, and cooling fans. All the disk and network I/O is handled by central switching modules, and the server instances that run on these stateless blades are managed by central controllers. Scalent’s Virtual Operating Environment is similar to the BladeFrame EX in function, but it’s a software-only solution.
Egenera’s history dates back to 2000, with the first-generation hardware released in 2001. In the meantime, the company has found significant traction in the banking, health care, and government markets, as well as installations in the private sector. If you’ve ever used MapQuest, you’ve used a BladeFrame, albeit from the other side.
The BladeFrame EX is built into a 42U chassis that holds a maximum of 24 1U blades, each available with two or four CPUs of AMD Opteron or Intel Xeon flavours. Maxed out, a single BladeFrame EX can house 96 CPUs and 768GB of RAM. Sitting in the middle are the Control Blades and the Switch Blades. The Control Blades are essentially 4U servers with multiple high-speed I/O connections: four 2 Gigabit FC SAN ports, four fibre-based Gigabit Ethernet ports, and four copper Gigabit Ethernet ports. These ports comprise all of the rack’s external I/O, and they are duplicated on the second Control Blade for redundancy. The Control Blades run in an active/active fail-over configuration, so a total of 16 gigabits of external network I/O and eight 2 Gigabit FC connections are available as long as both Control Blades are online.
The Switch Blades handle all internal I/O for the entire chassis, which includes internal virtual switches, connections to the SAN, and all backplane communications. The BladeFrame has no internal storage other than small disks residing in the Control Blades, so all storage is delivered from an external FC SAN.
Making it real
The key to making this solution work is Egenera’s PAN (Processor Area Network) concept, which encapsulates Egenera’s stateless approach to server hardware. Building servers on the Egenera platform resembles building normal servers: installation media is used to install an OS to a disk, and that disk is then booted to bring the server online. A few extra steps in the middle, however, are what make that server stateless.
Each server instance, called a pServer, is initially configured through the Egenera PAN Manager, a Java interface accessed via a Web browser. The SAN must be configured with a suitable number of LUNs (logical unit numbers) to serve as disks for the pServers. Building a pServer then becomes a matter of selecting an available LUN, the number of virtual NICs, any DVD-ROM drive mappings, and the single blade or pool of blades that should run the pServer. When this has been done, the OS can be installed.
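The resource model behind a pServer definition is simple enough to picture as a small record: a boot LUN, a count of virtual NICs, and the blade or pool eligible to run it. The sketch below is purely illustrative; PAN Manager is a Web GUI, and these names are my own, not an Egenera API.

```python
from dataclasses import dataclass

# Illustrative model only: PAN Manager is a Web GUI, and these field
# names are hypothetical rather than an actual Egenera API.
@dataclass
class PServer:
    name: str
    boot_lun: int                 # SAN LUN used as the boot disk
    virtual_nics: int = 1         # NICs presented to the OS over the backplane
    blade_pool: str = "default"   # pool of blades eligible to run this instance

    def describe(self) -> str:
        return (f"{self.name}: LUN {self.boot_lun}, "
                f"{self.virtual_nics} vNIC(s), pool '{self.blade_pool}'")

web01 = PServer("web01", boot_lun=12, virtual_nics=2, blade_pool="2p-pool")
print(web01.describe())
```

The point of the model is that nothing in it is tied to a physical blade; the same record can be instantiated on any member of the pool.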
Egenera’s methods are far different from those of normal server hardware, so special steps are required to install and boot any OS. For Windows, Egenera supplies code that combines a standard Windows Server 2003 installation with Egenera-specific drivers, producing an ISO image that is burned to a CD and used to install the OS. For Red Hat and Suse, Egenera supplies a boot image for each revision that includes the drivers necessary to boot the install image, then performs post-install tasks to complete the picture. After an OS has been installed on a LUN, it can be duplicated and used to build another pServer; Windows images require Sysprep first.
One downside is that new OS installations must be performed from a physical CD for Windows, and via CD or network install for Linux and Solaris. Installations from ISO images aren’t possible, which is an unfortunate limitation.
After it has been installed, the pServer can then be booted on the BladeFrame on any available blade. Blades can be cut up into logical pools -- for instance, a pool might consist of six 2P blades, while another pool consists of six 4P blades. A pServer instance assigned to either pool could be booted on any blade in that pool and, in the event of a blade failure, be booted on another blade in that pool -- or a designated fail-over pool.
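That fail-over policy boils down to: boot the pServer on any healthy blade in its home pool, and fall back to the designated fail-over pool only when the home pool is exhausted. A hypothetical sketch of the selection logic (the pool structure and names are my own invention, not PAN Manager internals):

```python
# Hypothetical sketch of PAN-style blade selection: prefer a healthy
# blade in the pServer's home pool, then fall back to its fail-over pool.
def pick_blade(home_pool, failover_pool, failed=()):
    for blade in home_pool + failover_pool:
        if blade not in failed:
            return blade
    return None  # no capacity left anywhere

pool_2p = ["blade1", "blade2", "blade3"]
pool_failover = ["blade7", "blade8"]

# Normal case: first healthy blade in the home pool wins.
print(pick_blade(pool_2p, pool_failover))
# Whole home pool down: selection spills into the fail-over pool.
print(pick_blade(pool_2p, pool_failover, failed={"blade1", "blade2", "blade3"}))
```

Because the pServer's disk lives on the SAN and its identity lives in the controller, whichever blade is picked simply boots the same LUN.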
An interesting feature introduced in the 5.0 release of PAN Manager is the ability to “warm boot” a blade. This basically pauses the blade’s boot cycle just after POST (Power On Self Test) but before an OS begins to load. Thus, a warm-booted blade boots far faster, reducing the time to bring up a pServer in the event of a blade failure.
Network I/O for each blade is delivered across the backplane and is represented to the OS as a standard NIC. In addition to any assigned NICs, the Egenera drivers and agents on the OS establish an internal network used for controller communications. With the NICs in place on the OS, each pServer is assigned to a virtual switch for intra-chassis communication -- and potentially to a virtual switch for external communication if required. Servers that don’t need to contact systems outside the rack, such as database servers, need not have any form of external access available to them. My throughput measurement tests put this internal switching interconnect at roughly gigabit speeds.
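The kind of measurement behind that figure can be approximated with a simple socket test: stream a known number of bytes through a TCP connection and divide by the elapsed time. A minimal loopback sketch (my own code, not Egenera tooling; pointed at a remote host instead of 127.0.0.1, it roughly gauges link throughput):

```python
import socket, threading, time

PAYLOAD = 32 * 1024 * 1024  # total bytes to stream (32 MB)
CHUNK = 64 * 1024           # per-send buffer size

def sink(server):
    # Accept one connection and discard everything it sends.
    conn, _ = server.accept()
    while conn.recv(CHUNK):
        pass
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port on loopback
server.listen(1)
threading.Thread(target=sink, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())
buf = b"\0" * CHUNK
start = time.monotonic()
sent = 0
while sent < PAYLOAD:
    client.sendall(buf)
    sent += len(buf)
client.close()
elapsed = time.monotonic() - start

mbps = sent * 8 / elapsed / 1e6
print(f"{mbps:.0f} Mb/s")
```

Over loopback this mostly measures the local network stack, so treat any single run as a rough floor rather than a precise benchmark.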
Disk is delivered in a similar fashion, with the Egenera drivers presenting each SAN LUN to a pServer as a native SCSI drive. Thus, no HBA drivers or WWNNs (World Wide Node Names) are required on any server; the controller handles all communications with the SAN. From the OS point of view, each connected disk is a local SCSI drive. On Linux OSes, these drives may be added and removed at any time.
Through the looking glass
Because the BladeFrame EX lacks any form of KVM, console access to the pServers is handled through standard serial communications. Each Linux pServer runs serial gettys to permit local access through the PAN Manager application and a Java terminal application. Windows servers are handled in much the same manner, with the Windows boot process driven by the serial console and Microsoft’s SAC (Special Administration Console) used to configure the OS. Further console interaction on Windows uses RDP for standard terminal services connections. Egenera’s modified Windows installer enables remote administration by default.
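On a SysV-style Linux distribution of that era, keeping a login prompt alive on the serial port typically comes down to a single respawning inittab entry along these lines (a generic example, not Egenera-specific configuration):

```
# /etc/inittab: respawn a login prompt on the first serial port,
# 9600 baud, vt100 terminal type (generic example, not Egenera's config)
S0:2345:respawn:/sbin/agetty -L ttyS0 9600 vt100
```

PAN Manager's terminal application then attaches to that serial stream, giving console access without any physical KVM hardware.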
Administration of the BladeFrame EX is straightforward, with root-level admins able to grant access to pools of resources to other users without allowing those users to access core-level configurations. This form of delegated administration makes the BladeFrame EX quite modular in terms of serving multiple masters without the overhead of constant IT involvement.
Also of note in PAN Manager is basic application virtualisation. This feature is limited to Linux and relies on the aforementioned hot-add SCSI disk capabilities and code that permits scripts to be executed when an application is moved from one server to another. The application and all supporting start/stop scripts must reside on a specific LUN that is moved from one running pServer to another and then is invoked automatically. For instance, a dedicated Apache installation with all configuration elements and Web application code could be moved between pServers in this fashion.
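The pattern is simple to sketch: the application and its lifecycle scripts live together on the moveable LUN, and whichever pServer receives the LUN invokes its start script. The sketch below mimics that with an ordinary directory standing in for the LUN; the layout and script names are my own invention, not Egenera's conventions.

```python
import os
import subprocess
import tempfile

# Simulate an "application LUN" as a directory holding the app plus its
# lifecycle scripts. In the real system this would be a SAN LUN hot-added
# to the receiving pServer; here a temp directory stands in for it.
lun = tempfile.mkdtemp(prefix="app-lun-")
script = os.path.join(lun, "start.sh")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho apache started from $PWD\n")
os.chmod(script, 0o755)

def attach_and_start(lun_path: str) -> str:
    """On the receiving pServer: once the LUN appears, run its start hook."""
    result = subprocess.run([os.path.join(lun_path, "start.sh")],
                            cwd=lun_path, capture_output=True, text=True)
    return result.stdout.strip()

print(attach_and_start(lun))
```

The corresponding stop script on the source pServer would run before the LUN is detached, so the application is only ever live on one side of the move.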
I did note some odd UI issues, such as the inability to name or comment specific LUNs, which forces admins to know exactly what's on each LUN by number rather than rely on a configurable name. Also, the relatively limited external network I/O may be a problem for installations that need more than 16 gigabits of throughput from 24 blades, and disk I/O performance is only as good as the external SAN behind the chassis. It would be nice to see the next generation leverage 10 Gigabit Ethernet to achieve greater external throughput.
Overall, the BladeFrame EX is an impressive piece of hardware engineering wrapped in well-crafted management tools. It provides a solid method to reduce data centre costs through the flexible allocation of processing, storage, and networking resources.
Egenera’s BladeFrame EX is a model of tightly coupled hardware engineering and great design concepts. The management UI is functional but lacks panache, and the overall solution lacks some small features you might expect to be present. Nevertheless, Egenera succeeds in delivering a modular, high-performance, and highly adaptive blade server system.