The one problem with LANs is that they're, well, local. A LAN doesn't traditionally extend beyond the physical boundaries of a data centre, or at most a corporate campus. For many applications and services this isn't a problem, and WAN connectivity between data centres and campuses does the job just fine. However, not all services are created equal, and certain functions simply can't be pushed through a traditional routed WAN. For instance, you can't migrate a running VM from one data centre to another and have it maintain network connectivity.

Or can you?

Last week, Cisco walked me through a demonstration of Cisco OTV (Overlay Transport Virtualisation), a novel approach to connecting remote data centres at layer two while skipping some of the pitfalls normally associated with such an endeavour. The tech is deceptively simple - elegant, in fact - but as with any cutting-edge technology, there are some gotchas.

At its core, Cisco OTV is simply a way for far-flung Cisco Nexus 7000 switches to share MAC address tables. Normally, if you have two or three data centres, for example, each exists as a layer-2 island with its own set of VLANs, spanning-tree topology, and so forth. Extending one of those networks into another data centre generally runs into broadcast storms, spanning-tree loops, and other problems that aren't generally an issue within a local switched LAN but can be disastrous if propagated across expensive and lower-bandwidth WAN links. In short, it's generally more trouble than it's worth. That's where OTV comes in.

No LAN is an island
The implementation is quite simple: A switch running OTV at each data centre has a trunked interface to the local switched LAN and carries all VLANs relevant to the data centre extension. On the other side is a link to the WAN transport connecting all of the other data centres. That WAN link could conceivably be any flavour, but it will need to be OC-12 or better to make good use of OTV.

With a few commands, an overlay pseudo-interface is created on the switch, and a multicast group address is specified. At that point, the switch begins receiving MAC table updates from the other participating switches and transmitting its own. It also then begins responding on the local LAN segment to requests for the remote MAC addresses it has learned, essentially proxying those addresses.
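
To make that exchange concrete, here's a toy sketch in Python - my illustration, not Cisco's implementation. Each edge switch advertises the MACs it learns locally and records its peers' advertisements, so every site knows which WAN address owns each remote MAC. All the names and addresses here are made up.

    # Toy model of the OTV control plane: each edge switch advertises
    # locally learned MACs and records peers' advertisements, mapping
    # each remote MAC to the WAN address of the site that owns it.
    class OtvEdge:
        def __init__(self, site, wan_ip):
            self.site = site
            self.wan_ip = wan_ip
            self.peers = []        # other participating edge switches
            self.local_macs = set()
            self.remote_macs = {}  # MAC -> WAN IP of the owning site

        def join_overlay(self, peers):
            self.peers = peers

        def learn_local(self, mac):
            # Learn a MAC on the local segment, then advertise it.
            self.local_macs.add(mac)
            for peer in self.peers:
                peer.receive_update(mac, self.wan_ip)

        def receive_update(self, mac, wan_ip):
            self.remote_macs[mac] = wan_ip

        def lookup(self, mac):
            # Answer for a remote MAC as if it were local (the proxy role).
            return self.remote_macs.get(mac)

    ny = OtvEdge("NYC", "10.0.0.1")
    dc = OtvEdge("DC", "10.0.0.2")
    ny.join_overlay([dc])
    dc.join_overlay([ny])
    ny.learn_local("00:50:56:aa:bb:cc")
    print(dc.lookup("00:50:56:aa:bb:cc"))  # -> 10.0.0.1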

When an OTV switch receives a frame destined for another data centre, it encapsulates it in a normal IP packet and transmits it over the WAN to the data centre where that destination MAC resides. On the receiving end, the local OTV switch strips the encapsulation and drops the frame on the appropriate VLAN as if nothing ever happened. The sending and receiving hosts never know that they are in different data centres, or that a WAN link was involved at all.
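
Here's the data-plane half of the story in the same toy style: wrap the Ethernet frame in an IP envelope, ship it across the WAN, and unwrap it at the far side. The real OTV header format is Cisco's own; the field layout below is purely illustrative.

    # Simplified MAC-in-IP encapsulation. The header format here is
    # made up for illustration; only the wrap/route/unwrap shape matters.
    def encapsulate(frame, src_wan_ip, dst_wan_ip, vlan_id):
        header = f"{src_wan_ip}|{dst_wan_ip}|{vlan_id}|".encode()
        return header + frame

    def decapsulate(packet):
        # Strip the envelope; recover the VLAN and the original frame.
        _src, _dst, vlan, frame = packet.split(b"|", 3)
        return int(vlan), frame

    frame = bytes.fromhex("005056aabbcc005056ddeeff0800")  # Ethernet header
    packet = encapsulate(frame, "10.0.0.1", "10.0.0.2", vlan_id=100)
    vlan, recovered = decapsulate(packet)
    assert recovered == frame  # the hosts never see the trip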

The underlying table information and routing transport for this scenario is a pretty neat adaptation of existing technology. Cisco is leveraging some of the capabilities of the IS-IS (Intermediate System to Intermediate System) routing protocol to make this happen, although the IS-IS configuration is completely under the covers. It really is only about five commands to add a data centre to the mix, although the necessary configuration of the Nexus 7000 switches might be a bit more involved.
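
For flavour, joining the overlay on a Nexus 7000 looks something like the sketch below. This is based on the command syntax Cisco has shown for OTV, not the demo's actual config; the interface name, VLAN ranges, and multicast groups are placeholders, and exact commands may vary by NX-OS release.

    ! Illustrative only: interface, VLANs, and group addresses are placeholders.
    feature otv
    otv site-vlan 99

    interface Overlay1
      otv join-interface Ethernet1/1
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 100-150
      no shutdown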

The upshot is that even though the overlay transport is transparent to the ends of the connection, there's no fear of spanning-tree looping: each site maintains a distinct spanning-tree topology, and BPDUs aren't forwarded across the WAN. The OTV switch functions as a gatekeeper, keeping local frames local while forwarding those that should be allowed to pass.
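
In toy form again, the gatekeeper role amounts to a per-frame decision - my sketch, not Cisco's actual forwarding logic: BPDUs stay home, frames for known remote MACs get encapsulated, and everything else is left to the local LAN rather than flooded across the WAN.

    # Toy gatekeeper at the overlay edge. Illustrative only; Cisco's
    # real forwarding rules are more involved.
    BPDU_DST = "01:80:c2:00:00:00"  # well-known STP multicast address

    def overlay_decision(dst_mac, remote_macs):
        if dst_mac == BPDU_DST:
            return "drop"  # keep each site's spanning tree local
        if dst_mac in remote_macs:
            return "encapsulate -> " + remote_macs[dst_mac]
        return "local only"  # don't flood unknowns across the WAN

    remote_macs = {"00:50:56:aa:bb:cc": "10.0.0.1"}
    print(overlay_decision(BPDU_DST, remote_macs))             # drop
    print(overlay_decision("00:50:56:aa:bb:cc", remote_macs))  # encapsulate
    print(overlay_decision("00:50:56:dd:ee:ff", remote_macs))  # local only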

When databases fly
In the demo I saw, Cisco used OTV to migrate a loaded SQL Server VM from one VMware ESX host to another over a simulated WAN, with the hosts residing at different data centres the equivalent of 400km apart (4ms latency). The VM migrated over in about 30 seconds without losing the connection with the client load... with one catch. Although the VM definitely moved, the virtual disk didn't. (An 8GB VMDK is 64Gb on the wire; even at OC-12's full 622Mbps, that's more than a minute and a half of transfer time, and such a trip isn't really feasible for a VM under load anyway.)

In the demo, Network Appliance's FlexCache technology bridged this gap, enabling the VM disk to remain in the original data centre while keeping the delta at the new data centre. Naturally, this isn't a scenario that lends itself to a permanent migration, but it might prove useful in some load-balancing and global distribution scenarios.

It's important to note that the established connections to migrated VMs continue along their original data paths. Even though the VM ends up running at the remote data centre, the existing TCP connections to that server must still pass through the initial data centre to maintain the consistency of the connection.

New connections could be rerouted to the remote data centre, but an existing connection cannot. This could add significant latency and bandwidth consumption to the WAN links if not monitored. It should also be noted that current technologies put a distance damper on any effort like OTV, since VMotion over links with greater than 4ms latency can get problematic really fast. Light in fibre covers roughly 200km per millisecond, so a 4ms round trip translates to about 400km of physical separation. This isn't a limitation of OTV, but it's still a constraint.

Cisco OTV will be available in April 2010, and existing Nexus 7000 customers can deploy it through a software upgrade for an additional licensing fee.

OUR VERDICT

Cisco Overlay Transport Virtualisation helps make multiple physical data centres look like one logical data centre. Needless to say, Cisco's OTV isn't a technology that many companies need. However, for those that do, it's quite compelling. OTV isn't immediately ready to handle intercontinental data centre linking, but it could certainly be used to connect data centres in New York City and Washington, DC, or anywhere within a 250-mile radius. Although those distance limitations are the result of current data transport technologies, the framework is there to support anything coming down the pike. Once it's feasible to achieve 4ms latencies across a 2,500-mile link, OTV will be ready. As such, it goes a long way toward allowing geographically disparate data centres to play in the same pool while greatly reducing the chance of layer-2 bogeymen compromising the network. It's an important step in localising remote computing resources.