Ten-gigabit Ethernet was so last year.
Standards-based 40- and 100-gigabit Ethernet switches and routers are starting to show up in enterprise networks, following ratification of the IEEE 802.3ba specification in mid-2010.
It's easy to understand the motivation: Fast downlinks require even faster uplinks. The current solution, link aggregation of multiple 10-gigabit pipes, works well but only scales up to a point.
At the same time, servers for some high-performance applications now use 10-gigabit network interface cards, again requiring a faster uplink at the switch. It won't be long before 10-gigabit interfaces are a standard part of server motherboards, just as gigabit Ethernet comes standard today.
For network managers, migrating to "higher-speed Ethernet", as it's been dubbed by the Ethernet Alliance, will definitely require some changes. Most of these are at the physical layer (new cabling is required, for starters). Also, some monitoring and management gear may not be able to keep up with HSE rates.
On the plus side, HSE will help reduce prices for 10G Ethernet devices. "The real leverage with HSE is with pushing down the price point of 10-gigabit Ethernet, rather than the first-order effects of 100-gigabit deployment," says a senior architect at one of the largest ISPs in the US, who requested anonymity. "If bigger pipes are good, then bigger pipes that are affordable and create greater commoditization of 10-gigabit Ethernet are better."
Also, HSE is far more evolutionary than revolutionary. Network professionals who've worked with Ethernet will feel right at home with the 40G- and 100Gbps versions. Still, an understanding of what's new is essential.
No design changes
Migration to 40- and 100-gigabit Ethernet requires no modification to upper-layer network design, networking protocols, or applications. "It all looks the same to the upper layers," says Brandon Ross, Eastern US director of network engineering at Torrey Point Group, a network design consultancy. "There aren't any changes needed."
That means, for example, that a network using the rapid spanning tree protocol (RSTP) between switches and open shortest path first (OSPF) between routers can continue to run these protocols across HSE interfaces, without configuration changes.
Applications, databases, and server farms similarly won't be affected by the addition of HSE interfaces to enterprise networks. Lower latency and improved response time should be the only noticeable effects, although Ross cautions that adoption of faster networking technologies inevitably exposes bottlenecks that weren't previously visible. If, for instance, network latency previously masked a disk I/O bottleneck and HSE is now faster than the bottleneck, application performance won't improve as much as expected.
That raises a critical question when it comes to HSE adoption: Even if the protocols are ready, is the network infrastructure ready for higher speeds?
As with previous speed bumps, capturing and monitoring traffic at higher rates will impose higher horsepower requirements on network management and security systems. The old adage "you can't manage what you can't see" still holds true. In other words, the requirement to see all traffic doesn't go away just because the network got faster.
A migration to HSE may require upgrades to network management systems and security devices such as firewalls and intrusion detection/prevention systems. Network managers are well advised to check with vendors as to the maximum frame rates their devices can support with zero frame loss.
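When asking vendors about maximum supported frame rates, it helps to know the theoretical worst case. A quick back-of-the-envelope sketch in Python, using Ethernet's fixed per-frame overhead (8-byte preamble plus 12-byte interframe gap):

```python
def max_frame_rate(link_bps, frame_bytes):
    """Theoretical maximum frames per second at line rate.

    Every frame on the wire carries 20 extra bytes of overhead:
    an 8-byte preamble plus the 12-byte interframe gap.
    """
    OVERHEAD_BYTES = 8 + 12
    return link_bps / ((frame_bytes + OVERHEAD_BYTES) * 8)

# Worst case: a stream of minimum-size 64-byte frames.
print(f"10G:  {max_frame_rate(10e9, 64) / 1e6:.2f} Mfps")   # ~14.88
print(f"40G:  {max_frame_rate(40e9, 64) / 1e6:.2f} Mfps")   # ~59.52
print(f"100G: {max_frame_rate(100e9, 64) / 1e6:.2f} Mfps")  # ~148.81
```

A firewall or IDS that handles 10G Ethernet comfortably must cope with ten times as many frames per second at 100Gbps, which is why "zero frame loss at what rate?" is the question to put to vendors.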
Faster versions of Ethernet pose special challenges for security devices that use Secure Sockets Layer (SSL). Dedicated hardware already is needed to encrypt and decrypt SSL traffic at gigabit and 10-gigabit rates.
Performance penalties for SSL traffic at HSE rates are, if anything, likely to be even steeper. Network managers may want to consider deploying dedicated SSL decryption/encryption systems alongside existing security devices to mitigate SSL performance bottlenecks.
Another key observation point is the "span" or monitor port capability in many Ethernet switches. This feature allows traffic entering or leaving a switch port (or both) to be copied to another port, where it can be redirected to an external analyser. Some switches support multiple monitoring instances. In an HSE context, the key question is whether the switch can copy monitored traffic at 40- or 100-gigabit Ethernet line rates with no frame loss.
Even if the switch can support such a capability, there is then the question of whether the analyser will be able to capture frames in real time. Software-based protocol analysers are woefully underpowered for this task.
What's needed here is a hardware-based analyser capable of lossless traffic capture at 40G- or 100Gbps rates. The analyser also will need much larger storage capacity: capturing 1,518-byte frames at line rate, an analyser must store roughly 740GB of data per minute at 100Gbps. At 40Gbps rates, the storage requirement is "only" 296GB per minute, but keep in mind both numbers are for a single port. Hardware-based monitoring tools often are used to capture multiple simultaneous feeds, each with comparably large storage requirements.
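Those per-minute figures follow directly from the line rate and the per-frame overhead. A back-of-the-envelope sketch in Python, assuming the analyser stores the full 1,518-byte frame but not the 20 bytes of preamble and interframe gap each frame consumes on the wire:

```python
def capture_bytes_per_minute(link_bps, frame_bytes=1518):
    """Bytes an analyser must store per minute at line rate.

    Only the frame itself is stored; the 8-byte preamble and
    12-byte interframe gap use up bandwidth but are discarded.
    """
    frames_per_sec = link_bps / ((frame_bytes + 20) * 8)
    return frames_per_sec * frame_bytes * 60

print(f"40G:  {capture_bytes_per_minute(40e9) / 1e9:.0f} GB/min")   # ~296
print(f"100G: {capture_bytes_per_minute(100e9) / 1e9:.0f} GB/min")  # ~740
```

Smaller frames lower the stored volume slightly (more of the bandwidth goes to per-frame overhead) but raise the frame rate the capture hardware must sustain.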
A final item in the monitoring checklist is network taps - line splitters capable of HSE rates. Not all network devices have embedded monitoring capabilities, and there are situations in which monitoring can't be enabled for administrative or technical reasons.
In these cases, an external network tap is needed. These are available in copper and fibre versions with anywhere from two to dozens of ports. The key question with these devices, as with all gear monitoring HSE traffic, is whether they can keep up with 40G- and 100Gbps rates without dropping frames.
One standard, many flavors
At the link layer, the most striking feature of HSE is that it's exactly like previous versions of Ethernet in so many ways. Ethernet's well-known framing format is untouched. The interframe gap - the amount of space between frames - hasn't changed. Neither have the minimum and maximum basic frame lengths of 64 and 1,518 bytes respectively. (The IEEE did extend the maximum "envelope" size to 2,000 bytes a few years ago to accommodate tunneling and VLAN stacking mechanisms, but that's unrelated to HSE.) In terms of frame contents, nothing has changed from previous versions.
Also like earlier versions, there are multiple physical-layer definitions of HSE to accommodate various distance and medium requirements. These different flavors range from a version intended only for internal device backplanes, to copper- and fiber-based versions for use in the data centre, to a long-haul version of 100G Ethernet intended for spans of at least 40km.
For enterprise data centres and many campus networks, the short-reach versions with maximum spans of up to 125 metres will be the most interesting and cost-effective. For greater distances such as high-speed links between campuses or for metro-area deployments, long-haul versions with maximum spans of 10km or 40km may be more appropriate.
While HSE won't require changes in upper-layer protocols, it is different at the physical layer. Both 40G- and 100Gbps technologies will require not only new transceiver types on switches and routers, but also new cabling and new connectors and interfaces. Rolling out HSE isn't simply a matter of buying faster blades for switches and routers; network architects will need to upgrade cabling plants and test distance limits as well.
The concept of lanes drives many of the new requirements. In current gigabit and 10-gigabit Ethernet products, traffic uses a single path in each direction. That works well at 10Gbps rates, but modulating signals in high-speed optics proved difficult at higher speeds. Engineers instead adopted a parallel-optics approach in current versions of HSE products that splits traffic across multiple channels called lanes, each running at 10Gbps, and each using a dedicated fibre.
Thus, 40G Ethernet is today a four-lane technology, while current 100G Ethernet interfaces use 10 lanes in each direction.
Evolution toward single-lane HSE interfaces is likely to occur over time. The IEEE 802.3ba specification describes electrical interfaces that could be implemented with a single lane in each direction, but products that implement these aren't yet shipping. For the time being, both 40- and 100-gigabit Ethernet are multi-lane technologies - and that, in turn, introduces new cabling requirements, both for fibre and copper.
Bulking up with fibre
Unlike 10G Ethernet, where fibre-optic cables use only two fibre strands, optical cabling for 40G Ethernet currently uses 12 fibres: Two groups of four fibres send traffic in each direction, and four fibres between the two groups are left dark.
Multi-lane 100G Ethernet requires even denser optical cabling. Here, cables use 24 fibres: Two arrays of 10 fibres for traffic in each direction, with two fibres on each side of each array left dark.
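The lane and fibre counts can be tied together in a small sketch. This is a simplification of the actual IEEE 802.3ba physical-layer definitions, purely for illustration: one fibre per 10G lane per direction, plus the unused (dark) strands that pad out the cable:

```python
LANE_RATE_GBPS = 10  # lane rate in current multi-lane HSE products

def lanes_per_direction(link_gbps, lane_gbps=LANE_RATE_GBPS):
    """Number of parallel 10G lanes needed in each direction."""
    return link_gbps // lane_gbps

def cable_fibres(link_gbps, dark_fibres):
    """Total fibre strands in a parallel-optics cable:
    one fibre per lane per direction, plus dark (unused) strands."""
    return 2 * lanes_per_direction(link_gbps) + dark_fibres

print(cable_fibres(40, dark_fibres=4))   # 4+4 active + 4 dark = 12
print(cable_fibres(100, dark_fibres=4))  # 10+10 active + 4 dark = 24
```

The same arithmetic explains why future single-lane (or 25G-per-lane) optics would cut cable fibre counts dramatically: fewer lanes means fewer strands per link.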
Both 40G- and 100G flavors of HSE require "laser-optimised" fibre cables. For multimode fibre, this means using OM3-type cables for distances of up to 100 metres. For longer spans, OM4-type cables support runs of up to 125 metres.
OM3 and OM4 cables (which usually have aqua-coloured jackets, to distinguish them from older OM1 and OM2 cables with orange jackets) are already recommended for 10-gigabit Ethernet, but that doesn't mean OM3 and OM4 cables can be used interchangeably between 10G- and 40G interfaces.
HSE will require new cabling, but Torrey Point's Ross notes that a migration could end up reducing the numbers of interfaces and cables needed. "People today use multiple, aggregated 10-gigabit connections" for network uplinks, Ross says. "When they move to 100-gigabit Ethernet, they actually reduce fibre demand."
There are two significant differences when it comes to HSE optical cabling: Fibre count and transceiver types. As noted, fibre optic cabling for multi-lane HSE uses either 12 or 24 fibres, compared with two fibres with 10G Ethernet. Thus, network planners will need to specify whether 10G-, 40G-, or 100G versions of OM3 or OM4 cabling are needed.
The transceiver most commonly used for 40G Ethernet is called quad small form-factor pluggable plus (QSFP+). This connector, available for both fibre and copper cabling, resembles a slightly larger version of the SFP+ connectors used in 10G Ethernet, with the "quad" designation referring to its four pairs of transmit and receive lanes. Although QSFP+ transceivers only support 40G Ethernet today, they may support 100G Ethernet in the future when lane rates increase.
For now, 100G interfaces use C form-factor pluggable (CFP) transceivers, which are slightly larger than a smartphone. CFP transceivers support multimode optical cabling with 24 fibres, and can also support single-mode fibre and copper cabling, though QSFP+ is more common for copper. A smaller variation of CFP, called CXP, will support all the various HSE versions, but these are not yet widely available.
Requirements for copper cabling are somewhat simpler than those for fibre. Like fibre, HSE copper cabling uses multiple transmit and receive channels, each carrying one lane running at 10-gigabit rates.
Copper interfaces are defined for both 40G- and 100G versions of HSE, but only for short-reach applications such as connections within a rack or between adjacent racks in a data centre. Using the currently defined interfaces, HSE connections over copper can span at most seven metres.
QSFP+ transceivers are used for 40G Ethernet over copper. Some large enterprise data centres already use 10/40-gigabit top-of-rack switches. These typically support 48 SFP+ ports for 10-gigabit connections to servers, and four QSFP+ ports for 40-gigabit uplinks to other switches.
Conversion between 10- and 40-gigabit rates is an especially useful feature of QSFP+ ports, both for copper and fibre. In many top-of-rack switches, a QSFP+ port also can function as four 10G Ethernet ports. This involves the use of copper or fibre "breakout" cable with a QSFP+ connector on one end and four SFP+ connectors on the other. Both sides see the connection as four 10G Ethernet interfaces.
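An inventory or provisioning script might model this convertible behaviour along the following lines. The port names and data model here are hypothetical, for illustration only, and not taken from any particular vendor's API:

```python
from dataclasses import dataclass

@dataclass
class QsfpPort:
    """Hypothetical model of a convertible QSFP+ switch port."""
    name: str
    breakout: bool = False  # True when a 4x10G breakout cable is fitted

    def logical_interfaces(self):
        """Return (interface name, speed in Gbps) tuples."""
        if self.breakout:
            # One physical 40G port presents as four 10G interfaces.
            return [(f"{self.name}/{lane}", 10) for lane in range(1, 5)]
        return [(self.name, 40)]

uplink = QsfpPort("qsfp1", breakout=True)
print(uplink.logical_interfaces())
# [('qsfp1/1', 10), ('qsfp1/2', 10), ('qsfp1/3', 10), ('qsfp1/4', 10)]
```

The point of the sketch is the asymmetry it captures: the physical port count stays fixed, while the logical interface count (and per-interface speed) depends on the cable plugged in.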
This convertible feature offers a migration path for data centres that plan to deploy 40G Ethernet but aren't yet ready to do so. It's also cost-effective to buy and manage just one type of switch and then deploy either 10G- or 40G Ethernet ports as needed.
Direct attach copper (DAC) cables already are a common means of connecting servers and switches over short distances at 10-gigabit rates, and that's likely to continue with 40-gigabit DACs. The 40-gigabit units, which comprise two QSFP+ transceivers connected by a run of twinaxial cable, are typically three metres or less in length.
If longer spans are needed (for example, to connect racks more than seven metres away from a core switch), network managers will need one of the fibre-based options.
In addition to QSFP+ connectors for copper or fibre, there's another fibre-only connector type known as multi-fibre push-on (MPO). These connectors support the parallel-optics approach; using them requires MPO-terminated fibre cabling and MPO interfaces on switches and routers. The LC and SC fibre-optic connector types used in earlier versions of Ethernet aren't suitable for HSE because they terminate only two fibres.
Migrating to HSE will certainly require changes. The good news is that the changes are evolutionary in nature, and can be rolled out incrementally. Ethernet has undergone many changes over the years, but the traffic on HSE networks retains the same basic characteristics as previous generations. In the end, it really is "just Ethernet."