Every so often, you come across a security product that is a bit different, and InterDo is one such package. It's a firewall product aimed specifically at Web services. Instead of attempting to provide generic support for all types of service, the authors have concentrated on HTTP and its encrypted sibling, HTTPS. The package runs on Windows, Linux and Solaris servers; we tried it on a 450MHz Pentium III machine running Windows 2000 Server, standard edition.

There are two components to the system (three if you count the pile of PDF-format documentation). There's the server itself, which does the packet examination and blocking work, and then there's a Java-based management application that you can run either on the same machine as the firewall itself or on a separate machine. The server can be told where to accept management connections from, which aids security if (as will normally be the case) you have a separate machine running the management application.

Once you've installed the server application (which is a simple process), you fire up the management program. Although it's a Java application, the authors have made it look just like a Windows MMC snap-in, so finding your way around it is very simple. As you'd expect, you can manage a number of InterDo boxes from a single console, and you can group machines together to make the display comprehensible if you have loads of them. Although there's no way to define a single policy and then automatically deploy it to a number of servers, it's not far off – you can define a policy, copy it to another server, make a small number of minor changes (e.g. the IP addresses in the tunnel configuration) and then hit 'go'. Whenever you make a change to the configuration, you have to restart the system – but this isn't as bad as it seems, as it merely tells the security software to re-read its configuration file.
When you hit 'restart', the server stops answering requests, waits for active requests to finish processing, re-reads the config, then starts accepting requests again. This never took more than about five seconds in our tests.

Policies, policies…
Once you've added your InterDo server(s) to the list of machines the management program knows about, you pick one, drill down, and set up your security policies.

The first step to a policy is to define a 'tunnel'. This is rather like setting up a NAT mapping on a traditional firewall – you map a port/IP combination on the external 'untrusted' interface onto a port/IP combination on the internal 'trusted' interface. There are three types of tunnel you can define: HTTP, HTTPS and protocol-independent; the latter is useful if you want your KaVaDo installation to deal with Web traffic itself and allow other stuff through to another firewall for examination.

When you set up an HTTP (or HTTPS) tunnel, you tell the machine a number of things. You can tell it what Unicode codepage to use (it maps everything to Unicode for processing), and you can define connection limits (both active connections and queued connections). There's also some checking on key parameters of HTTP interactions, and you can define size limits for things like headers and page bodies.

The system checks some fundamental stuff for everything flowing through a tunnel. It'll look at character sets and make sure that encoded characters aren't there for nefarious purposes, as well as interpreting the URL to see whether (for example) the directory component is of the form '../../../' – i.e. it resolves to something above the Web server root directory.

Once you've set your tunnels up, you define your applications. Because a Web server comprises a series of directories, and different directories generally have different purposes (you might have one set aside for images, another for uploads, a set of directories for server-side includes, and so on), you define a permission set for each directory and tell the system what actions are permitted in it.
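InterDo's actual traversal logic isn't published, but the check described – resolving '..' segments (including percent-encoded ones) to see whether a URL climbs above the Web server root – can be sketched like this. A minimal illustration, not the product's code:

```python
from urllib.parse import unquote

def escapes_web_root(url_path: str) -> bool:
    """Return True if the path's '..' segments resolve above the root."""
    # Decode percent-encoded characters first: attackers often hide
    # traversal sequences as %2e%2e%2f ("../") to slip past naive checks.
    decoded = unquote(url_path)
    depth = 0
    for segment in decoded.split("/"):
        if segment in ("", "."):
            continue          # empty and "." segments don't change depth
        if segment == "..":
            depth -= 1
            if depth < 0:
                return True   # resolved to something above the Web root
        else:
            depth += 1
    return False
```

Note that simply rejecting any URL containing the literal string "../" isn't enough – the decode-then-resolve order matters, which is presumably why the product normalises everything to Unicode before inspection.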
Permissions for each directory are defined using a series of 'pipes', where each pipe has a specific purpose and a specific way of working. At present there are 10 different pipes, though you can choose to disable any or all should you so wish, and new ones can be added via a simple option in the management console as they are written by the manufacturer.

So you have a pipe for defining which HTTP methods are permitted to given directories (you'd allow only GET to your images folder, for instance), one for controlling file uploads, one for examining database-related commands passing in and out, one for verifying cookies (cookies being served are examined and a checksum attached, then when the client comes back with that cookie it can be examined to see if it's been changed), one for defining a 'white list' of permitted client addresses, and so on. There's one that allows you to define limits on specified parameters (so you can say things like 'userid must never exceed 20 characters', for instance), and another that, when provided with an appropriate WSDL file, will examine calls to Web Services and ensure they're both RFC-compliant and conformant with your WSDL specification.

The pipes listed here all work on 'technique blocking' – that is, they don't use AV-style 'signature files' to look for sequences of bytes in data streams, but instead look at the context of the data streams and use logical deduction to decide whether something should be blocked. There is one pipe, however, called the 'vulnerabilities pipe', which does examine byte streams (which might include uploaded files, of course) for known attack signatures.

Some of the pipes are either 'on' or 'off', but others (such as the one that examines cookies) can be placed in 'learn' mode, where traffic is logged but permitted, and the administrator can view the results and choose whether to permit or block that type of connection.
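KaVaDo doesn't document how the cookie pipe's checksum works, but the general technique – attach a keyed checksum when a cookie is served, recompute it when the cookie comes back, and block the request if it doesn't match – is standard. A minimal sketch, assuming an HMAC-SHA256 over the cookie value (the key name and format are ours, not the product's):

```python
import hashlib
import hmac

# Assumption: a secret key held only by the gateway, never sent to clients.
SECRET = b"firewall-local-key"

def sign_cookie(value: str) -> str:
    """Append a keyed MAC so client-side tampering can be detected later."""
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify_cookie(signed: str) -> bool:
    """Recompute the MAC on the returned cookie; a mismatch means tampering."""
    value, _, mac = signed.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

Because only the gateway knows the key, a client can't alter the cookie's value and forge a matching checksum – which is exactly the property the cookie pipe relies on.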
'Learn' mode is one of those things that should be used sparingly, but it's very useful when you first get going to give you an idea of what types of connection are taking place without you accidentally blocking something that should be let through.

Most of the configuration process is similar for both HTTP and HTTPS tunnels. There's a very neat feature, though, that maps between the two – that is, you can define that an incoming HTTPS connection should be decrypted and mapped to a plain HTTP connection for passing on to a Web server. (Obviously you need to install an appropriate Web server certificate on the InterDo server in order for it to know how to do the decryption.) This is an insanely neat idea, not least because it means you don't have to faff about with certificates on the Web servers themselves – all you have to do is run boring old HTTP on your Web servers, yet you can still present the site to the outside user as an SSL-encrypted session.

Logging is comprehensive and understandable, and there's a neat feature called 'quick click refinement' that allows you to double-click the entry for something that's been blocked and have the system define a 'permit' rule for it – an immensely useful feature if, like us, you set your 'include' folder to 'don't permit any access' and forgot that the style sheet for the entire Web site also lived in there.

We really like InterDo. It's straightforward to configure (though if you want a complicated set of rules, you can make it look complicated), it has some clever protection schemes, and it's clearly an extensible system (the company claims some more 'pipes' are under development and will be released soon).
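To make the HTTPS-to-HTTP tunnel mapping described earlier concrete: conceptually, the tunnel pairs an external TLS endpoint (using the site certificate installed on the InterDo box) with a plain-HTTP internal endpoint. The fragment below is our hypothetical illustration of that mapping – it is not InterDo's actual configuration syntax, and all the names, addresses and paths are invented:

```
tunnel "shop-frontend" {
    type        = https             # TLS is terminated at the firewall
    listen      = 203.0.113.10:443  # external 'untrusted' interface
    certificate = /etc/interdo/shop.example.com.pem  # hypothetical path
    forward     = http              # decrypted, passed on in the clear
    upstream    = 10.0.0.20:80      # internal 'trusted' Web server
}
```

The Web server at the internal address never touches a certificate; the outside user still sees an SSL-encrypted session, terminated at the firewall.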


Application-specific firewalls are designed to be used as part of a collection of security devices. They specialise in one field, and so if you have a number of different application types to protect, you'll need a collection of devices, each protecting its own particular range of connection types. This approach will be more expensive than a single ‘do it all’ box, but will probably be more secure.