In today's M1000e, a brand-new set of chassis management tools offers many features suited for day-to-day operations, and the chassis-wide deployment and modification tools are simply fantastic. The downsides include limited visibility into chassis environmental parameters and the absence of multichassis management capabilities. Unless you put external management tools to use, each Dell chassis exists as an island.

Chassis and blades

The M1000e blade enclosure squeezes 16 half-height blades into a 10U chassis with six redundant hot-plug power supplies, nine hot-plug fan modules, and six I/O module slots. Those slots support Dell PowerConnect gigabit and 10G switches, three different Cisco modules (gigabit internal ports with 10G uplinks), a Brocade 8Gbps FC module, and both Ethernet and 4Gbps FC pass-through modules. If InfiniBand is your flavor, there's a 24-port Mellanox option as well.

On the front of the chassis is a 2-inch colour LCD panel and control pad that can be used to step through initial configuration and to perform chassis monitoring and simple management tasks.

The blades used in this test were Dell PowerEdge M610 units, each with two 2.93GHz Intel Westmere X5670 CPUs, 24GB of DDR3 RAM, and two Intel 10G interfaces connecting to two Dell PowerConnect 8024 10G switches in the I/O slots on the back of the chassis. If there's only a single switch in the back, only one port is active per blade, a limitation shared by all the chassis tested.

The blades themselves have a solid, compact feel. They slide easily in and out of the chassis and have a well-designed handle that doubles as a locking mechanism. The blades are fairly standard, offering two CPU sockets and 12 DIMM slots, two 2.5-inch SAS drive bays driven by a standard Dell PERC RAID controller, two USB 2.0 ports on the front, and a selection of mezzanine I/O cards at the rear to allow for gigabit, 10G, or InfiniBand interfaces. An internal SD card option permits flash booting of a diskless blade, which can come in handy when running embedded hypervisors such as VMware ESXi. There's also an SSD option for the local disk.

One drawback to the Dell solution compared to the HP blades is the relative lack of blade options. Dell offers several models, but they're all iterations of the same basic compute blade with different CPU and disk options; there are no storage blades or virtualisation-centric blades. Each blade offers two or four CPU sockets, a bank of DIMM slots, and two 2.5-inch drive bays.

Unlike the HP and IBM blade systems, Dell's setup doesn't virtualise network I/O. The 10G pipe to each blade is just that, a raw 10G interface without the four virtual interfaces provided by HP's Virtual Connect and IBM's Virtual Fabric. This means the onus of QoS, bandwidth limiting, and prioritisation falls on the OS running on the blade or on the QoS features of the PowerConnect 8024 10G modules. On one hand this is a drawback; on the other, it simplifies management, because the PowerConnect 8024 is really just a switch and can be configured as such. No specialised management structure is necessary, unlike with the HP and IBM solutions.
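
As a purely illustrative sketch of what handling that QoS at the OS level can look like, the following Linux tc commands carve a blade's raw 10G port into two rate-limited traffic classes. The interface name (eth0) and the rates are hypothetical, and the same end could be reached with the 8024's own QoS settings instead.

    # sketch only: assumes a Linux blade OS; interface name and rates are hypothetical
    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:10 htb rate 6gbit ceil 10gbit
    tc class add dev eth0 parent 1: classid 1:20 htb rate 2gbit ceil 10gbit

Traffic matched by filters attached to class 1:10 would be guaranteed 6Gbps and allowed to borrow up to line rate, while everything else falls into the 2Gbps default class.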

Management tools

One of the knocks on Dell's blade solution has always been the spartan management tools: functional, but short on features. That changes with this test, however, as Dell has introduced a completely rebuilt Chassis Management Controller (CMC) that offers a wide range of new features.

Leveraging a bit of AJAX magic, the new CMC is a highly functional and attractive management tool. A lot of thought has gone into making it simple to push actions to multiple blades at once; even demanding tasks such as BIOS updates and RAID controller firmware updates can be applied to groups of blades with a few clicks right from the CMC.