For network managers, it's not so much a case of New Year's Resolutions as New Year's Higher Resolution, argues Adam Powers, the CTO of network analysis specialist Lancope. He predicts that this will be the year when flow-based technologies finally overtake existing packet-based methods of network monitoring, giving managers a deeper insight into what's actually going on in the network.
Chief among these technologies will be the behaviour-based techniques developed by the likes of Arbor Networks, Lancope, Mazu Networks and Q1 Labs, he adds. Built on top of deep packet inspection (DPI) technology similar to that used in IDP (intrusion detection & prevention) systems, these systems look for anomalous behaviour on the network that could signal, for example, the presence of a hacker or Trojan, or a virus propagating itself.
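The core idea behind behaviour-based detection can be illustrated with a simple sketch: build a per-host baseline of flow activity over past intervals, then flag hosts that deviate sharply from it - a worm scanning the network, say, suddenly opens far more flows than its host's history would predict. This is only a minimal illustration of the principle, not the vendors' actual algorithms (Lancope alone runs some 140), and the function name and threshold are arbitrary choices.

```python
import statistics

def flag_anomalies(history, current, sigma=3.0):
    """Flag hosts whose latest flow count deviates sharply from their baseline.

    history: dict mapping host -> list of flow counts from past intervals
    current: dict mapping host -> flow count in the latest interval
    Returns the set of hosts whose count exceeds mean + sigma * stdev.
    """
    anomalous = set()
    for host, count in current.items():
        baseline = history.get(host, [])
        if len(baseline) < 2:
            continue  # not enough data yet to form a baseline
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        # Floor the deviation at 1.0 so a near-constant baseline
        # doesn't make the threshold trip on trivial changes.
        if count > mean + sigma * max(stdev, 1.0):
            anomalous.add(host)
    return anomalous
```

A host that normally opens around ten flows per interval and suddenly opens five hundred would be flagged; a host varying within its usual range would not.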
"Our technology came out of research by Dr John Copeland of Georgia Tech, who was looking for faster ways to move ATM cells. He developed flow analysis technology and discovered you could see patterns in flow data. We now have something like 140 algorithms looking for patterns," Powers says.
He adds that as well as using a probe to collect network traffic for DPI, systems such as his are also beginning to make use of flow data generated elsewhere in the network, in particular NetFlow and sFlow data.
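To give a sense of what that flow data looks like on the wire, the sketch below parses a NetFlow version 5 export datagram - the fixed-format version most widely deployed - into a list of flow summaries. The field layout follows Cisco's published v5 format; the function name and the choice of which fields to keep are this example's own.

```python
import socket
import struct

# NetFlow v5 fixed-format layouts (all fields big-endian).
V5_HEADER = struct.Struct(">HHIIIIBBH")             # 24-byte export header
V5_RECORD = struct.Struct(">IIIHHIIIIHHBBBBHHBBH")  # 48-byte flow record

def parse_netflow_v5(datagram):
    """Parse a NetFlow v5 export datagram into a list of flow dicts."""
    version, count, *_ = V5_HEADER.unpack_from(datagram, 0)
    if version != 5:
        raise ValueError(f"expected NetFlow v5, got version {version}")
    flows = []
    offset = V5_HEADER.size
    for _ in range(count):
        f = V5_RECORD.unpack_from(datagram, offset)
        flows.append({
            "src": socket.inet_ntoa(struct.pack(">I", f[0])),
            "dst": socket.inet_ntoa(struct.pack(">I", f[1])),
            "packets": f[5],   # dPkts
            "bytes": f[6],     # dOctets
            "src_port": f[9],
            "dst_port": f[10],
            "protocol": f[13], # IP protocol number, e.g. 6 for TCP
        })
        offset += V5_RECORD.size
    return flows
```

Each record summarises one unidirectional flow - who talked to whom, on which ports, and how much - which is exactly the raw material the behaviour-analysis systems correlate.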
So why are so many network performance specialists - Packeteer recently hooked in NetFlow, too - getting interested in flow data? Partly, it's the realisation that there is no point reinventing the wheel - there are already a lot of devices out there generating information that could be very useful if it were correlated.
NetFlow gets Flexible
That's been helped along by Cisco's development of Flexible NetFlow, and of switches and routers with enough processor power to run NetFlow without affecting their ability to do their real job, says Powers.
"Flexible NetFlow lets you put whatever information you want into NetFlow," he says. "We're getting to the point where, instead of being afraid of it, people will turn NetFlow on and use it. Moore's Law has helped there."
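As an illustration of the flexibility Powers describes, a Flexible NetFlow configuration on a Cisco IOS device lets the operator choose exactly which fields to match and collect, then attach that record to an exporter and an interface. The names and addresses below are arbitrary placeholders, not a recommended configuration:

```
flow record CUSTOM-RECORD
 match ipv4 source address
 match ipv4 destination address
 match transport destination-port
 collect counter bytes long
 collect counter packets long
!
flow exporter TO-COLLECTOR
 destination 192.0.2.10
 transport udp 2055
!
flow monitor CUSTOM-MONITOR
 record CUSTOM-RECORD
 exporter TO-COLLECTOR
!
interface GigabitEthernet0/1
 ip flow monitor CUSTOM-MONITOR input
```

This is what turns an existing switch or router into the "virtual probe" Powers mentions: the device you already own exports exactly the flow fields the analysis system needs.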
It can even save money, by removing the need to deploy monitoring hardware to remote networks, he adds. "In big distributed or meshed networks, you need to be able to see and track and audit communications between remote sites, and the only way to do that is to deploy a probe on each site. A really nice way to do that is to use NetFlow, so your switch or router becomes a virtual probe - after all, you already have the equipment there."
And what of sFlow, the flow reporting technology preferred by the likes of Extreme, Foundry and HP ProCurve? Powers says it works in a subtly different way and has a much smaller market, but it does have some notable advantages.
"With NetFlow, the router has to maintain a cache - though its overhead is less than a network engineer would expect - but sFlow has no memory consumption," he explains. "Most sFlow implementations use an ASIC on the linecard that only has to send the samples out in UDP datagrams, so it can be done at much higher speeds. It is easier to adjust as well: you just change the sample rate.
"sFlow works great for traffic analysis or DoS detection, but my opinion is if you need specific information on transactions - to audit a server, say - sFlow may not be the best."
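The trade-off Powers describes follows from how sampling works: sFlow reports one packet in N, so totals are estimated by scaling the samples back up. That estimate is statistically sound for aggregate traffic analysis, but any individual transaction may simply never be sampled, which is why it is a poor fit for auditing specific flows. A minimal sketch of the scaling arithmetic (function name and signature are this example's own):

```python
def estimate_traffic(samples, sampling_rate):
    """Scale sFlow packet samples up to an estimate of total traffic.

    samples: list of sampled packet lengths in bytes
    sampling_rate: N, where 1 in every N packets is sampled
    Returns (estimated_packets, estimated_bytes).
    """
    est_packets = len(samples) * sampling_rate
    est_bytes = sum(samples) * sampling_rate
    return est_packets, est_bytes
```

With a 1-in-512 sample rate, three sampled packets stand in for roughly 1,500 real ones - fine for spotting a DoS flood or a top-talkers report, useless for proving that one particular session did or didn't happen.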