A combination of congestion, latency and chatty or greedy applications often means poor end-user application performance over WANs, where bandwidth is at a premium and large latencies are far from unusual. Various companies have therefore developed technology to deal with this, among them Expand, Packeteer, FineGround and Peribit, each taking a different approach to the problem.

Peribit's stance is that intelligent edge devices can rein in the more extreme bandwidth demands to enable traffic prioritisation to take place. Tim Richards, its northern European technical manager, says that this approach addresses a wide range of application performance issues.

“For some customers it is a pure bandwidth issue: they simply cannot run the applications they want, or squeeze in the number of users they want,” he says. “But for others, some applications can be greedy, and tend to usurp all available bandwidth. The only way around that, other than an outrageously expensive network, is some sort of prioritisation to control and manage the bandwidth demand.”

Latency and error correction
Latency can be a big contributor to performance difficulties in the WAN, says Richards, especially for chatty applications. “Even a huge fat pipe can have a latency of hundreds of milliseconds and when it comes to applications, such as disaster recovery, backup and replication, and storage area networks, the limitations of TCP often mean tasks cannot be accomplished within the available backup window.” He adds that TCP termination [spoofing] can give some improvement in these circumstances.
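
The arithmetic behind that limit is straightforward: a single TCP flow can never move more than one window of data per round trip, however fat the pipe. The window size and round-trip times in the sketch below are illustrative assumptions, not figures quoted by Richards.

```python
# Illustrative only: the classic single-flow TCP ceiling of window / RTT.
# The 64 KB window and the RTT values are assumptions for the example.

def tcp_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-flow TCP throughput, in Mbit/s."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

WINDOW = 64 * 1024  # a typical default TCP receive window, in bytes

for rtt in (20, 100, 300):  # LAN-like versus WAN round-trip times, in ms
    print(f"RTT {rtt:>3} ms -> at most {tcp_ceiling_mbps(WINDOW, rtt):6.2f} Mbit/s")
```

On a 300 ms path that works out to under 2 Mbit/s per flow, regardless of how much bandwidth the link itself provides, which is why long backup and replication jobs so often overrun their window.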

Another common problem is simplistic error correction within applications, which may simply retry an entire message in the event of an error. If that message spans 50 TCP packets, resending the whole lot has a significant effect on performance and disrupts prioritisation efforts.
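
Some rough, assumed numbers (a 1,460-byte segment, a one per cent packet-loss rate and the 50-packet message from the example above) show why an application-level retry costs so much more than TCP's own per-segment recovery:

```python
# Rough arithmetic for the retry scenario above. The segment size, loss rate
# and message size are assumed figures for illustration, not measurements.

SEGMENT = 1460          # typical TCP payload per packet, in bytes
MESSAGE_SEGMENTS = 50   # the "50 TCP packets" message from the example
LOSS_RATE = 0.01        # assumed WAN packet-loss rate

msg_bytes = SEGMENT * MESSAGE_SEGMENTS

# TCP-level recovery: only the lost segments are resent.
tcp_resend = msg_bytes * LOSS_RATE

# Naive application-level recovery: any single error forces the whole message again.
p_msg_error = 1 - (1 - LOSS_RATE) ** MESSAGE_SEGMENTS
app_resend = msg_bytes * p_msg_error

print(f"message size            : {msg_bytes / 1024:5.1f} KB")
print(f"TCP-level retransmission: {tcp_resend / 1024:5.2f} KB extra, on average")
print(f"whole-message retry     : {app_resend / 1024:5.1f} KB extra, on average "
      f"(a {p_msg_error:.0%} chance of resending everything)")
```

Under those assumptions the naive retry pushes roughly forty times as much repeat traffic across the WAN as TCP's own recovery would, and every repeated packet competes with the traffic a prioritisation scheme is trying to protect.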

Error-correction mechanisms such as this, along with the peerings, re-conversions and multiple hops seen when operating VPNs over the Internet, can badly impact performance. “The net result is that the throughput experienced from point A to point B will be a lot less than the bandwidth connecting the end points,” says Richards.

Traffic volume and prioritisation
He believes that when faced with such a complex array of performance issues, prioritisation is of little use unless it is combined with other measures. “Obviously you need something to be able to reduce or compress data, and you need fairly high levels of compression or reduction or you’re not going to see any return on investment.”

Peribit’s approach to WAN performance enhancement therefore starts with its MSR (Molecular Sequence Reduction) technology, which is designed to reduce and compress TCP traffic. It cuts down on the number of conversations, reduces the overall traffic volume and reduces the number of packets, says Richards.
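
MSR itself is proprietary, but the general idea behind data reduction is easy to illustrate: chatty application traffic is full of repeated byte sequences, and removing the repetition shrinks it dramatically. The sketch below stands in for that idea with ordinary zlib compression applied to an invented sample payload; it is not a description of Peribit's algorithm.

```python
# A stand-in for the general data-reduction idea, not MSR itself.
# The sample "traffic" below is invented: many near-identical requests.
import zlib

record = b"GET /orders/status?id=%05d HTTP/1.1\r\nHost: erp.example.com\r\n\r\n"
stream = b"".join(record % i for i in range(500))

reduced = zlib.compress(stream, level=6)
print(f"original stream : {len(stream):>7,} bytes")
print(f"after reduction : {len(reduced):>7,} bytes "
      f"({100 * (1 - len(reduced) / len(stream)):.0f}% smaller)")
```

Fewer bytes on the wire means fewer packets and shorter queues on the congested link.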

Once that’s in place, “the network is in a state where you can apply QoS policies to the reduced traffic,” he adds. “First, you need a range of capabilities to integrate with existing prioritisation schemes, if any. But it’s also essential that the QoS scheme is adaptive and dynamic, and doesn’t tie up a fixed amount of bandwidth.”

He says that Peribit’s devices implement QoS policy from the top down, defining priorities and allocations by application and corporate location. These policies are then dynamically applied to ensure that each application receives an appropriate bandwidth allocation.
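
As a rough illustration of what “adaptive and dynamic” means here, the sketch below shares a link among whichever applications are active at that moment, in proportion to their policy weights, rather than tying up a fixed slice for each class. The application names, weights and link size are assumptions made for the example, not Peribit's policy model.

```python
# A minimal sketch of weight-based, adaptive bandwidth allocation.
# Applications, weights and link capacity are assumed for illustration.

LINK_KBPS = 2048  # e.g. an E1 circuit to a branch office

POLICY = {        # top-down priorities, by application
    "erp":    50,
    "email":  30,
    "backup": 20,
}

def allocate(active_apps: set) -> dict:
    """Split the link among currently active applications, by weight."""
    weights = {app: POLICY[app] for app in active_apps if app in POLICY}
    total = sum(weights.values())
    return {app: round(LINK_KBPS * w / total) for app, w in weights.items()}

print(allocate({"erp", "email", "backup"}))  # all three compete for the link
print(allocate({"erp", "backup"}))           # email idle: its share is redistributed
```

With all three applications active the backup class gets roughly 410 Kbit/s; the moment email goes quiet, that rises to about 585 Kbit/s instead of leaving the difference unused.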

Essential requirements
Richards suggests a list of key features for prospective purchasers to look for in any prioritisation scheme, not just those from Peribit:

- a device that will drop into and take advantage of the existing infrastructure, preserving the end-to-end flow so existing applications will continue to run unaffected;
- high levels of data reduction out of the box: a finicky configuration process will undermine the usability and value of any solution;
- automatic collection of subnet configuration information, using all the open and proprietary protocols likely to be necessary;
- forward error correction, to avoid the need for an application-level retry of a complete dialogue;
- automatic reversion to pass-through in the event of a device or power failure;
- data reduction even for encrypted traffic, such as Outlook Web Access, Citrix or Lotus Notes;
- scalability to work on the range of speeds and bandwidths that might be needed well into the future, including low end (such as branch offices on 64K ISDN) as well as high end;
- ease of management, with central management of all devices;
- ease of maintenance, with one version of firmware common to all sizes of devices.

And finally, introducing devices like these into the network mustn’t make matters any worse than they are already, either by adding latency or by talking to each other so much that they contribute a significant volume of traffic of their own.