More than perhaps any other area except virtualisation, the term "application acceleration" can be defined and characterised in almost as many ways as there are products on the market. Still, if you take the right approach, it is relatively easy to build a taxonomy that will help you choose what is right for your needs.
At The Tolly Group, we've been testing application acceleration products since before the technology picked up that appellation. Over time we've had the opportunity to benchmark and dissect products from, among others, Allot Communications, Citrix Systems, Expand Networks, Ipanema Technologies, Packeteer and Riverbed Technology (and more are in the works).
In the beginning, acceleration typically was implemented using compression to make the oversubscribed WAN pipe appear larger. In a way, that was virtualisation as well - because a single T1 could function as if it were multiple T1 links bonded together.
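The effect is easy to see with any general-purpose compressor. This sketch (using Python's standard zlib, not any vendor's engine, and a made-up repetitive payload) shows how compression can make the same link carry a multiple of its nominal capacity:

```python
import zlib

# Hypothetical repetitive traffic, of the sort that crosses a branch-office WAN link.
payload = b"GET /inventory/status HTTP/1.1\r\nHost: example.internal\r\n" * 200

compressed = zlib.compress(payload, level=6)
ratio = len(payload) / len(compressed)

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"effective capacity multiplier: {ratio:.1f}x")
```

Real traffic compresses far less predictably than this contrived example, which is exactly why the multiplier a vendor quotes depends so heavily on the data mix.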
To this was usually added a QoS function that made sure packets belonging to latency-sensitive applications wouldn't sit behind a time-insensitive file transfer that might be shovelling thousands of large data packets across the link.
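At its core that QoS function is a scheduling decision. Here is a minimal strict-priority sketch (the class names and traffic classes are illustrative, not any product's API) in which a voice frame jumps ahead of bulk data already queued for the link:

```python
import heapq
import itertools

# Lower number = more latency-sensitive; illustrative classes only.
PRIORITY = {"voip": 0, "interactive": 1, "bulk": 2}

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue,
                       (PRIORITY[traffic_class], next(self._counter), packet))

    def dequeue(self):
        _, _, packet = heapq.heappop(self._queue)
        return packet

sched = PriorityScheduler()
sched.enqueue("bulk", "file-transfer-chunk-1")
sched.enqueue("bulk", "file-transfer-chunk-2")
sched.enqueue("voip", "voice-frame-1")

print(sched.dequeue())  # "voice-frame-1" - voice overtakes the queued transfer
```

Production devices layer rate limits and weighted fairness on top of this, since a pure strict-priority scheme can starve the bulk queue entirely.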
Over the years, however, the range of techniques employed to deliver acceleration expanded dramatically. In many technology areas, the new replaces the old; that was rarely the case in application acceleration. New technologies and techniques enhanced the old and often were layered on top.
Vendors dissected elements of the data stream and found, for example, that TCP acknowledgement packets flowing back and forth on the link could slow down user performance. Or they found that the same information was sent repeatedly across the link - again, a cost to performance that offered little benefit. As a result, application acceleration products began to tinker with the underlying protocols, reaching ever higher toward the ultimate target of acceleration: the application.
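The "information sent repeatedly" observation is the basis of data deduplication: chunks already seen on the far side are replaced with short references. A toy sketch of the idea (fixed-size chunks and hypothetical helper names; shipping products typically use variable, content-defined chunking):

```python
import hashlib

CHUNK = 64  # bytes per chunk; illustrative fixed size

def dedupe_stream(data, seen):
    """Replace chunks already sent across the link with short hash references."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()[:8]
        if digest in seen:
            out.append(("ref", digest))   # 8-byte reference instead of 64 bytes
        else:
            seen[digest] = chunk
            out.append(("raw", chunk))
    return out

seen = {}
first = dedupe_stream(bytes(range(256)), seen)   # all chunks new: sent raw
second = dedupe_stream(bytes(range(256)), seen)  # identical data: all references
print(sum(1 for kind, _ in second if kind == "ref"))  # 4
```

The second transfer crosses the link as four 8-byte references rather than 256 bytes of payload, which is how these products make a resent file nearly free.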
That brings us to the "divide and conquer" of the title. Somewhere along the way, some vendors realised that specific applications often had unique characteristics that, once understood, could be exploited to optimise them.
Thus, we see the split between vendors offering general-purpose acceleration and those targeting very specific applications that their products optimise. Make sure you are aware of the difference, because some vendors might not make it crystal clear. If a product's literature says "improves Microsoft SharePoint" or "better PeopleSoft response time", it may well be that the acceleration is tuned to those applications. Although the product might also improve general applications, realise that your mileage may vary: you might not get the dramatic benefits that will accrue to users of the applications the product specifically optimises.
In addition, with all these products it is critically important that the scenarios benchmarked are relevant to you. Labelled "real world" or not, the only important real world is your world. Because any application-acceleration benchmark, practically speaking, looks at only a very small slice of all network and application scenarios, prudent network architects must diligently attempt to understand the specifics of the optimisation being illustrated and, ultimately, its relevance to the corporate environment.
And just to make things more interesting, our friends at Microsoft now are making an impact on the world of application acceleration. As noted earlier, many application-acceleration products "fix" parts of protocols, such as TCP and Server Message Block (SMB), to improve WAN performance.
With the introduction of Vista and Longhorn, Microsoft also introduces the first major rewrite of its TCP stack in years: it implements RFCs that improve WAN performance. And SMB 1.0, which has been around forever (since about 1980, at least), finally is being replaced by SMB 2.0.
Will this stir the pot in application acceleration? Yes, but that's another story.