Five times, 10 times, 20 times or more: The performance benefits from application acceleration are real - provided you understand what the technology can and can't do for your network. Here are best practices for deploying application-acceleration devices.

1. Define your goals

There's no one way to make an application go faster. For some users, reducing WAN bandwidth consumption and cutting monthly circuit costs may be the key goals; for others, speeding up bulk-data transfer matters most; for still others, the priority is improving the response time of interactive applications.

Where to deploy application acceleration is also a consideration. Acceleration devices typically work in pairs, with one unit in the data centre and the other at the far end of the WAN link; increasingly, they are also deployed as client software on telecommuters' or road warriors' machines. Identifying the biggest bottlenecks in your network will help you decide which parts of it can benefit most.

It's also worth considering whether application acceleration can complement other enterprise IT initiatives. For example, many organisations have server-consolidation plans underway and are moving remote servers into centralised data centres. Symmetrical WAN-link application-acceleration devices can reduce response time and WAN bandwidth use, giving remote users LAN-like performance. In a similar vein, application acceleration may help enterprise VoIP or video roll-outs by prioritising key flows and keeping latency and jitter low.

2. Classify before you accelerate

Many acceleration vendors recommend deploying their products initially in pass-through mode, meaning that devices can see and classify traffic but not accelerate it. This can be an eye-opening experience.

The adage "you can't manage what you can't see" applies here. It's fairly common for enterprises to deploy acceleration devices to improve the performance of two to three key protocols - only to discover their network carries five or six other types of traffic that also would benefit from acceleration. On the downside, it's also all too common for enterprises to find applications they didn't realise existed.

The reporting tools of acceleration devices can help. Most devices show which applications are most common in the LAN and WAN, and many present the data in pie charts or graphs that can be understood easily by non-technical management. Many devices also report on LAN and WAN bandwidth consumption per application - or per flow.

Understanding existing traffic patterns is critical before acceleration is enabled. Obtaining a baseline is a mandatory first step in measuring performance improvements from application acceleration.
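
As a simple illustration, a baseline can be as basic as timing a representative transaction before any acceleration is switched on. The Python sketch below does exactly that; the URL and sample count are placeholders for whatever transaction matters most to your users.

```python
# Minimal baseline sketch: time a representative HTTP transaction repeatedly
# and record simple statistics. The URL and sample count are placeholders for
# whatever transaction matters to your users.
import time
import statistics
import urllib.request

URL = "http://intranet.example.com/report"  # hypothetical internal application
SAMPLES = 20

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()  # pull the full payload, as a real user would
    timings.append(time.perf_counter() - start)

print(f"min {min(timings):.3f}s  median {statistics.median(timings):.3f}s  "
      f"max {max(timings):.3f}s over {SAMPLES} requests")
```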

For products that do some form of caching, a corollary to traffic classification is understanding the size of the data set. Many acceleration devices have object or byte caches or both, often with terabytes of storage capacity. Caching can deliver huge performance benefits, provided the data gets served from a cache. If you regularly move 3TB of repetitive data between sites, but your acceleration devices have only 1TB of cache capacity, caching obviously is of only limited benefit.
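
A rough sanity check is to compare the size of the repetitive data set with the cache capacity on offer. The sketch below assumes, purely for illustration, that repeat requests are spread evenly across that data set:

```python
# Back-of-the-envelope cache sizing check. Assumes, purely for illustration,
# that repeat requests are spread evenly across the working set, so the share
# of data that fits in cache bounds the best-case hit ratio.
working_set_tb = 3.0     # repetitive data moved between sites (from the example above)
cache_capacity_tb = 1.0

best_case_hit_ratio = min(1.0, cache_capacity_tb / working_set_tb)
print(f"Best-case hit ratio: {best_case_hit_ratio:.0%}")   # roughly 33% in this example
```

In practice access patterns are rarely uniform, so a smaller cache can still perform well if the hot subset of the data fits; the point is simply to compare the two numbers before counting on cache-level speedups.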

Even without deploying acceleration devices, it's still possible (and highly recommended) to measure application performance. Cisco NetFlow and similar tools, or the open sFlow standard, are implemented widely on routers, switches and firewalls, and many network management systems also classify application types.
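
Even a basic flow export can reveal where the bytes are going. The sketch below summarises exported flow records by destination port as a crude proxy for application type; the file name and column names are assumptions about the collector's export format rather than any particular product's.

```python
# Summarise exported flow records by destination port as a crude proxy for
# application type. The CSV file name and column names (dst_port, bytes) are
# assumptions about the collector's export format, not a specific product's.
import csv
from collections import Counter

bytes_per_port = Counter()
with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        bytes_per_port[row["dst_port"]] += int(row["bytes"])

for port, total in bytes_per_port.most_common(10):
    print(f"port {port}: {total / 1e9:.1f} GB")
```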

3. Choose between inline and off-path

If they're forced to choose between high availability and high performance, network architects inevitably opt for better availability. This is understandable - networks don't go very fast when they're down - and it has implications for the decision about which type of acceleration device to select.

WAN-acceleration devices are designed as inline or off-path. An inline device forwards traffic between interfaces, just as a switch or router would, optimising traffic before forwarding it. An off-path device may forward traffic between interfaces or it may just receive traffic from some other device, such as a router; in either case it sends traffic through a separate module for optimisation.

Because this module does not sit in the network path, it can be taken in and out of service without disrupting traffic flow.

Which design is better? There's no one right answer. Sites that put a premium on the highest possible uptime will prefer off-path operation. On the other hand, passing traffic to and from an off-path module can introduce additional delay, which may or may not be significant, depending on the application.

If minimal delay is a key requirement, inline operation is preferable to off-path. Some devices combine modes; for example, Cisco's Wide-Area Application Services appliances perform off-path optimisation of Windows file traffic but use the inline mode to speed other applications.
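
A rough way to judge whether off-path redirection matters for a given application is to multiply the extra delay per round trip by the number of round trips the application needs to complete a task. The figures in the sketch below are placeholders, not measurements:

```python
# Rough model of how redirect delay accumulates for a chatty application.
# Both numbers are illustrative placeholders, not vendor measurements.
redirect_delay_ms = 0.5      # assumed extra delay per round trip for off-path handling
round_trips_per_task = 200   # e.g. a chatty file-open sequence

added_delay_ms = redirect_delay_ms * round_trips_per_task
print(f"Added response time per task: {added_delay_ms:.0f} ms")  # 100 ms in this example
```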

Note that pass-through operation is different from inline or off-path mode. In case of a power loss, virtually all acceleration devices will go into pass-through mode and bridge traffic between interfaces. Devices in pass-through mode won't optimise traffic, but they won't cause network downtime either.

4. Choose transparent or tunnelled traffic

One of the most contentious debates in WAN-application acceleration is whether to set up encrypted tunnels between pairs of devices or to keep traffic visible to all other devices along the WAN path.

The answer depends on what other network devices need to inspect traffic between pairs of WAN-acceleration boxes.

Some vendors claim tunnelling benefits security because traffic can be authenticated, encrypted and protected from being altered en route.

That's true as far as it goes, but encrypted traffic can't be inspected - and that could be a problem for firewalls, bandwidth managers, QoS-enabled routers or other devices that sit between pairs of acceleration devices.

If traffic transparency is an issue, acceleration without tunnelling is the way to go.

On the other hand, if there are no firewalls or other content-inspecting devices sitting in the acceleration path, transparency is a non-issue.
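
To see why tunnelling gets in the way of intermediate devices, consider a firewall or QoS router that classifies traffic by TCP port: once flows are wrapped in a tunnel, every packet presents the tunnel's own header, and per-application rules have nothing left to match. The sketch below illustrates the effect with made-up port numbers:

```python
# Illustration of why per-application rules stop matching inside a tunnel.
# Port numbers are illustrative; 4500 stands in for a generic tunnel port.
flows = [
    {"app": "SQL",  "dst_port": 1433},
    {"app": "CIFS", "dst_port": 445},
    {"app": "HTTP", "dst_port": 80},
]

def classify(port):
    policies = {1433: "database policy", 445: "file-sharing policy", 80: "web policy"}
    return policies.get(port, "default policy")

print("Transparent:", [classify(f["dst_port"]) for f in flows])
# Tunnelled: every flow is re-wrapped with the tunnel's own port before it
# reaches the firewall, so the original ports are no longer visible.
print("Tunnelled:  ", [classify(4500) for _ in flows])
```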

5. Know your limits

Application acceleration is not a silver bullet. It's important to distinguish between problems that acceleration can and cannot solve. For example, acceleration won't help WAN circuits already suffering from high packet loss. The technology can help keep congested WAN circuits from becoming even more overloaded, but a far better approach would be to address the root causes of packet loss before rolling out acceleration devices.
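
One way to see why loss is a problem acceleration can't paper over is the well-known Mathis approximation, which caps steady-state TCP throughput at roughly MSS / (RTT × √loss). The sketch below applies it to a few example loss rates; the MSS and round-trip time are illustrative figures only:

```python
# Mathis approximation for the TCP throughput ceiling on a lossy path:
# throughput <= (MSS / RTT) * (C / sqrt(loss)), with C roughly 1.22.
# The MSS, RTT and loss rates below are example figures, not measurements.
from math import sqrt

MSS_BYTES = 1460
RTT_S = 0.080          # 80 ms round trip, typical of a long WAN path
C = 1.22

for loss in (0.0001, 0.001, 0.01):
    ceiling_bps = (MSS_BYTES * 8 / RTT_S) * (C / sqrt(loss))
    print(f"loss {loss:.2%}: ~{ceiling_bps / 1e6:.1f} Mbit/s ceiling per flow")
```

Even a fraction of a percent of loss caps a single flow's throughput well below what the circuit could otherwise carry, and no amount of compression or caching changes that arithmetic.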

Further, traffic based on some protocols is not a good candidate for acceleration. Some devices don't accelerate Network File System traffic, multimedia traffic or other types of traffic based on the User Datagram Protocol (UDP). Even devices that optimise UDP-based traffic may not handle VoIP traffic based on the Session Initiation Protocol (SIP), because the media streams SIP sets up use dynamically negotiated, ephemeral port numbers (this problem isn't limited to acceleration devices; some firewalls also have trouble with SIP).

Despite these limitations, application acceleration is a technology very much worth considering. Its performance benefits and cost savings can be significant, even when the few caveats discussed here are taken into account.

David Newman is president of Network Test, an independent test lab in California.