When the University of Birmingham upgraded its core network a few years ago, it set in train a series of events that would lead to it becoming home to one of the UK's largest - and most secure - campus wireless LANs.
The university now provides its 30,000 students and 6,000 staff with both wired and wireless network access. It also provides wireless access to visitors and guests, and by the end of this year expects to have complete coverage over three sites, from offices and laboratories, through student bedrooms, and right across the playing fields.
The process began back in 2003, when the university put out a public tender for a new core network, says head of networks John Turnbull.
Its existing backbone used 3Com kit on an FDDI ring, and had grown to the point where the inter-switch links between the four core switches each comprised eight aggregated Gigabit connections. The IT services group realised that upgrading to 10G trunks would not only improve performance, but would also free up cross-campus fibres, some of them 2km or 2.5km long, and release a lot of LX GBICs for re-use.
"We had to go to EU procurement because the project was £6 million in tin, and another £6 million in fibre," Turnbull says. "The usual candidates bid for it, we produced a shortlist and asked for demo kit to put through serious testing."
Three companies obliged - Foundry, Extreme and Cisco, he adds. "Foundry had just released its Jetcore architecture [which became the FastIron range] and came out on top in performance, closely followed by Extreme. On ease of management it was the same, with Cisco way behind because it had three different operating systems."
But where evaluation turned into revelation was when the university IT team discovered flow monitoring: "We took a [Foundry] 4802 stackable and plugged it into our main network, and all of a sudden we could see what was going on - it was like we'd been blind before," Turnbull recalls. "We saw a lot of viruses, student downloads, and so on."
So one of the things that sealed the decision to build the new core around Foundry's FastIron switches was that Foundry offered sFlow monitoring on every port with no discernible performance impact on the switch. Cisco, at that point, had not yet built NetFlow into its ASICs, so flow monitoring on its switches came with a performance hit.
"We upgraded the network building by building," Turnbull says. "With sFlow in the core we had reduced the off-site network traffic by 50 percent within days, because 50 percent of the traffic was not legitimate - sFlow was brilliant, though we also educated the users so they knew the network was now being monitored."
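The kind of visibility Turnbull describes comes from aggregating the sampled flow records an sFlow collector decodes. A minimal sketch of that idea, with invented flow tuples standing in for decoded sFlow samples (the IPs, ports and byte counts are illustrative, not from the university's network):

```python
from collections import Counter

# Hypothetical flow records: (src_ip, dst_ip, dst_port, bytes) tuples,
# as a collector might decode them from sFlow samples.
flows = [
    ("10.0.1.5", "203.0.113.9", 6881, 48_000),  # P2P-style download
    ("10.0.1.5", "203.0.113.9", 6881, 52_000),
    ("10.0.2.7", "192.0.2.10", 80, 12_000),     # ordinary web traffic
    ("10.0.3.9", "198.51.100.4", 445, 30_000),  # worm-like SMB traffic
]

def top_talkers(flows, n=3):
    """Aggregate sampled bytes per source host and return the heaviest."""
    totals = Counter()
    for src, _dst, _port, nbytes in flows:
        totals[src] += nbytes
    return totals.most_common(n)

print(top_talkers(flows))
# → [('10.0.1.5', 100000), ('10.0.3.9', 30000), ('10.0.2.7', 12000)]
```

In practice a tool such as Inmon's traffic server does this continuously across every port; the point is simply that sampled flows make the heaviest - and least legitimate - sources stand out at a glance.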
The decision to go with Foundry produced other benefits too, he says. In particular, Foundry uses a single OS across its switches, so his administrators need less training.
"We have a team of 10, with two on the wireless side. That team covers everything - not many others like to hear that, especially other universities - but we can get away with a smaller team because it's all the same manufacturer's kit with the same OS," he explains.
The core upgrade also enabled the university to add wireless as a parallel infrastructure, instead of as an overlay, thanks to those spare fibres it freed up - although the start of work on the wireless side was delayed by concerns among the IT team that wireless security was still immature back then.
"We didn't want VLANs on the core from an operational point of view," Turnbull says. "We want a flat core with no routing, and all the VLANs are in the buildings where we separate out departments and it's easier to troubleshoot. It's parallel for security, but also for resilience - we have used it a couple of times as a fall-back."
Installation of the Wi-Fi was done in phases, beginning with a traditional set-up where access points (APs) on the non-interfering channels 1, 6 and 11 are interspersed to avoid having adjacent APs on the same channel.
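That traditional layout is essentially a graph-colouring exercise: give each AP one of the three non-overlapping 2.4GHz channels so that no two APs within earshot of each other share one. A greedy sketch, with invented AP names and adjacency:

```python
# The three non-overlapping 2.4GHz channels used in a traditional plan.
CHANNELS = (1, 6, 11)

neighbours = {            # which APs can hear each other (illustrative)
    "ap-1": ["ap-2", "ap-3"],
    "ap-2": ["ap-1", "ap-3"],
    "ap-3": ["ap-1", "ap-2", "ap-4"],
    "ap-4": ["ap-3"],
}

def plan_channels(neighbours):
    """Greedily assign each AP the first channel no neighbour is using."""
    assignment = {}
    for ap in sorted(neighbours):
        used = {assignment[n] for n in neighbours[ap] if n in assignment}
        assignment[ap] = next(c for c in CHANNELS if c not in used)
    return assignment

print(plan_channels(neighbours))
# → {'ap-1': 1, 'ap-2': 6, 'ap-3': 11, 'ap-4': 1}
```

With only three usable channels and walls, floors and neighbouring networks to account for, getting this right across a real campus is what makes the detailed site surveys mentioned later so time consuming.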
However, for phase 2, the university instead went with Foundry's IronPoint Mobility controllers and APs. These use virtual cell technology developed by Meru Networks, which puts all the APs onto the same channel.
That might sound counter-intuitive but it actually produces great benefits, says the university's senior network specialist, Chris Lea.
"The virtual cell approach is an overlay, with a controller to manage associations and disassociations," he says. "As clients move there's no re-association, the client just sees a single MAC address providing access.
"The mobility controller time-slices access to the AP so everyone gets a fair share. The traditional approach has clients contending, and you see a lot of scatter - some get good service and others get knocked back."
No more site surveys
A big benefit is that it has removed the need for detailed site surveys, adds Turnbull. Instead, you just put APs where they are needed.
"I would definitely advise people to use virtual cells," he says. "Surveying buildings is quite time consuming - we did it in phase 1, and phase 2 is quite a lot quicker."
It has mitigated some of the issues with competing devices, such as the Sports Sciences department having a wireless alarm on channel 6, because the virtual cell can simply be moved to a non-competing channel.
"It has automatic rogue AP detection built in too," Lea says. "It triangulates, and we pay them a visit, find out what they need it for, and convince them to use the official version - the rogues have mostly been legacy APs, from before we had complete coverage."
He adds that if he had to do it all over again, the only thing he'd do differently is to use fewer wireless controllers - perhaps two larger IronPoints, instead of the half-dozen mid-sized versions that the university currently has deployed.
Turnbull notes that there is one big difference between the wired and wireless networks, which mirrors their typical usage: devices on the 45,000 wired ports have to be registered, but not on the wireless - users just need their username and password.
That in turn means that the wireless side requires extra security. The site already used Juniper Netscreen 5200 firewalls, and Turnbull's team decided to add a Mirage NAC (network access control) device, although for now it concentrates on blocking bad behaviour rather than checking whether client systems are secure.
"The NAC scans for unusual activity and disconnects that client via the firewall - the user gets a warning page. For example, a port scan will get you cut off," says Chris Lea.
"We haven't implemented endpoint security yet, as we don't want to have to push clients out, especially to visitors, because of liability issues. As it stands, the ability to disconnect seems to be enough."
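The scan-then-disconnect behaviour Lea describes can be sketched as a simple threshold rule: a client that touches many distinct ports on one host in a short window is flagged for blocking at the firewall. The threshold, event format and addresses below are invented for illustration, not Mirage's actual rule logic:

```python
from collections import defaultdict

SCAN_THRESHOLD = 10  # distinct destination ports before we call it a scan

def find_scanners(events, threshold=SCAN_THRESHOLD):
    """events: iterable of (client_ip, dst_ip, dst_port) connection attempts.
    Returns clients that probed >= threshold ports on a single host."""
    ports_seen = defaultdict(set)
    for client, dst, port in events:
        ports_seen[(client, dst)].add(port)
    return sorted({client for (client, _dst), ports in ports_seen.items()
                   if len(ports) >= threshold})

# One client probing ports 1-20 on a single host, plus one normal web client.
events = [("10.0.5.5", "10.0.9.9", p) for p in range(1, 21)]
events += [("10.0.6.6", "192.0.2.1", 443)]

print(find_scanners(events))  # → ['10.0.5.5']
```

A real NAC would then tell the firewall to cut that client off and serve the warning page; the detection itself, as Turnbull notes below, needs surprisingly few rules because most attacks open with the same pattern.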
Unfortunately, the extra hardware did bring more operating systems into play, and required extra training for the admins, but it's still simpler than it could have been, notes Turnbull. He adds that the whole set-up is run via HP's OpenView management software, with Inmon's traffic server for sFlow monitoring.
A surprisingly simple NAC
He says he's been pleased how little work was needed in the Mirage device to provide the level of security that he wanted.
"We have 16 NAC rules in play, that covers every worm known," he says. "I was quite surprised you could do so much with so few rules, but most attacks kick off with the same activity."
If things do go badly wrong, the university has a central NOC with a map of the APs, so it's fairly straightforward to switch off an area. Turnbull adds that while there were fears that they might have to disable the wireless LAN during exams, for instance, these have so far proven unfounded.
Instead, the network is seeing more and more use: "Students need access to the student portal for course information and our virtual learning environment, for their lecture notes, exam results and so on," he says, adding that it's also been used in student elections, to support conferences, and for media coverage of sports events, for example.
Wi-Fi coverage of the university's three campuses should be complete by the end of this year, thanks to more than 650 wireless APs. The next plan - and one of the reasons for Wi-Fi-enabling the sports fields - is to use the network for IPTV, both incoming and outgoing.
"I'm talking to our neighbours at Warwick University, and we'd like to multicast live coverage of inter-university matches to Warwick over SuperJanet [the high-speed academic backbone] and vice versa," Turnbull says. "I think the main challenge will be getting from the camera to the network."
He adds, "We are releasing IPTV for learning this September as well - 2Mbit/s per channel into student bedrooms."
IPTV at 2Mbit/s over shared Wi-Fi on a single channel? Well, it certainly ought to work, according to Turnbull.
"There's no issue on the wired side, especially if it's multicast," he says, "but it will be interesting to see how it rolls out on wireless."
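A back-of-envelope calculation shows why the wired side is comfortable and the wireless side is the open question. Assuming roughly 20Mbit/s of real-world throughput per 802.11g cell (an assumption, not a measured figure):

```python
STREAM_MBPS = 2.0        # per-channel IPTV rate quoted by Turnbull
WIFI_USABLE_MBPS = 20.0  # rough assumed real-world 802.11g cell throughput

# Each unicast viewer costs a full stream, so one cell tops out at:
concurrent_streams = int(WIFI_USABLE_MBPS // STREAM_MBPS)
print(concurrent_streams)  # → 10
```

Ten concurrent unicast streams per cell is not much in a hall of residence, which is where multicast matters: one copy of each TV channel serves every viewer in the cell, so the limit shifts from the number of viewers to the number of distinct channels being watched.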