Secure Sockets Layer (SSL) is fast becoming the technology of choice for securing personal VPN connections and protecting your online transactions. But the protection it offers comes with processing overhead. Is there a better way to handle it than burning valuable CPU cycles on your servers, cycles that could be better utilised serving your users?

Performance impact
The cryptographic operations of SSL fall into two groups. During the data transfer phase, each SSL record is encrypted and carries a Message Authentication Code (MAC) to protect its integrity. In addition, public key cryptographic operations associated with the SSL handshake have to take place at the start of every SSL session. This cryptography is expensive as far as server performance is concerned, so doing it on your web servers slows them down to the detriment of the overall service they are providing.
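To put some rough numbers on that expense, the short Python sketch below (not a measurement from this article, and assuming the third-party cryptography package is installed) contrasts a per-handshake RSA private-key operation with the per-record MAC computed during the data transfer phase. Key size, record size and iteration counts are illustrative; absolute figures depend entirely on your hardware.

# Illustrative micro-benchmark: per-handshake public-key work vs per-record MAC.
import hashlib
import hmac
import os
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def per_op_micros(fn, iterations):
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations * 1e6

# Per-session cost: one RSA-2048 private-key operation (a signature here),
# standing in for the server's share of the SSL handshake.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
handshake_us = per_op_micros(
    lambda: key.sign(b"handshake transcript", padding.PKCS1v15(), hashes.SHA256()),
    iterations=50)

# Per-record cost: an HMAC over a 16 KB record, as in the data transfer phase.
mac_key, record = os.urandom(32), os.urandom(16 * 1024)
record_us = per_op_micros(
    lambda: hmac.new(mac_key, record, hashlib.sha256).digest(),
    iterations=5000)

print(f"RSA private-key op: {handshake_us:.0f} us, record HMAC: {record_us:.0f} us")

On typical server hardware the public-key operation comes out orders of magnitude more expensive than the per-record MAC, which is why the handshake is the main target for offload.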

Alternative SSL termination devices
So you can opt to offload the SSL operations onto a separate device, typically known as an SSL accelerator. This accelerator (you would have more than one, for resilience purposes, as we’ll detail below) acts as a central SSL termination device for all the servers in your data centre. There are several benefits to this approach.

There are two main ways of deploying an external SSL accelerator. In its most basic deployment, it is connected to your data centre LAN, where it terminates the SSL transactions and passes web requests on to the relevant server. Remember that this last leg of the data flow will therefore be unencrypted, so it is your responsibility to make sure that the internal network itself is secure.
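As a minimal illustration of that basic deployment, the Python sketch below terminates TLS in a standalone process and relays the decrypted bytes to a backend web server over the internal LAN. The certificate files, ports and backend address are illustrative assumptions, not details from any particular product.

# Minimal sketch of a standalone SSL/TLS termination point: decrypt on the
# public side, forward plaintext to an internal web server.
import socket
import ssl
import threading

BACKEND = ("10.0.0.10", 80)      # internal web server (plaintext)
LISTEN = ("0.0.0.0", 443)        # public-facing SSL/TLS endpoint

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(tls_conn):
    backend = socket.create_connection(BACKEND)
    # One thread per direction: decrypted client-to-server and server-to-client.
    threading.Thread(target=pump, args=(tls_conn, backend), daemon=True).start()
    pump(backend, tls_conn)

with socket.create_server(LISTEN) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            conn, addr = tls_listener.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()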

Typically, it will be deployed in a non-transparent, or proxy, mode, whereby the server sees the source address of each request as that of the SSL accelerator rather than the end client. This gives clients better privacy and improves scalability on the servers, which no longer use up memory tracking client addresses. Since client addresses are hidden from the end application, users have to be tracked via cookies instead; this was an issue with some older applications but should not cause problems now. The use of cookies also allows more user information to be gathered.
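The hypothetical helper below sketches what that cookie-based tracking looks like on the application side once every request appears to arrive from the accelerator’s address: a returning visitor is recognised by a session cookie, and a new visitor is issued an identifier to present on subsequent requests. The names are illustrative, not from the article.

# Hypothetical helper: identify users by cookie, since the source IP seen by
# the backend is always the SSL accelerator's in proxy mode.
import uuid
from http.cookies import SimpleCookie
from typing import Optional, Tuple

def identify_user(request_headers: dict) -> Tuple[str, Optional[str]]:
    """Return (user_id, Set-Cookie header value or None) for a request."""
    jar = SimpleCookie(request_headers.get("Cookie", ""))
    if "session_id" in jar:
        return jar["session_id"].value, None              # returning visitor
    user_id = uuid.uuid4().hex                            # brand-new visitor
    jar["session_id"] = user_id
    jar["session_id"]["path"] = "/"
    jar["session_id"]["httponly"] = True
    return user_id, jar["session_id"].OutputString()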

Designing for resilience
A more flexible and fault-tolerant implementation, however, is to integrate the SSL offload function with a content switch; indeed, many content switches offer embedded SSL functionality as an option. The content switch adds flexibility and resilience: a single virtual server address is presented to the user side, hiding numerous real servers within the data centre. The SSL hardware or software, if embedded, decrypts the traffic first and hands it to the content switching function, which then chooses which server to send a request to based on a number of parameters, such as the type of transaction, the content actually required, or the load on the servers.

If a server fails, the content switch can transfer the session across to another server. This allows for more optimal use of your servers, and lets you load balance traffic or provide priority services to specific users, directing customer traffic to higher-performance servers, for instance, if they are on your site to buy something rather than just make a general enquiry.
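The simplified Python sketch below illustrates the kind of decision logic involved behind a single virtual address: route purchase traffic to a higher-performance tier, balance the rest by load, and skip any server marked as failed. The server names, tiers and the "/checkout" rule are illustrative assumptions; a real content switch adds health checks, session persistence and much more.

# Simplified model of content-switch server selection with failover.
from dataclasses import dataclass, field

@dataclass
class RealServer:
    name: str
    tier: str                   # "premium" or "standard"
    healthy: bool = True
    active_sessions: int = 0

@dataclass
class ContentSwitch:
    servers: list = field(default_factory=list)

    def choose(self, path: str) -> RealServer:
        candidates = [s for s in self.servers if s.healthy]
        if not candidates:
            raise RuntimeError("no healthy real servers behind the virtual IP")
        # Content-based rule: buying traffic prefers the premium tier if any
        # premium server is still up; enquiries can use any healthy server.
        if path.startswith("/checkout"):
            premium = [s for s in candidates if s.tier == "premium"]
            candidates = premium or candidates
        # Load-based rule: least active sessions among the remaining candidates.
        chosen = min(candidates, key=lambda s: s.active_sessions)
        chosen.active_sessions += 1
        return chosen

switch = ContentSwitch([
    RealServer("web1", "premium"),
    RealServer("web2", "standard"),
    RealServer("web3", "standard"),
])
switch.servers[0].healthy = False               # simulate a server failure
print(switch.choose("/checkout/basket").name)   # fails over to a standard server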

With multiple SSL accelerators and content switches you can build a high-availability model that ensures your users have constant secure access to the servers they need. And using relatively few SSL offload devices makes certificate management easier: with only a couple of accelerators, rather than SSL functionality on dozens of individual servers, you’ll not have to pay for as many SSL licences either.