If you have deployed numerous firewalls and Intrusion Detection Systems, chances are you’re getting swamped by alerts and log entries back at your management station. You have three choices:


  1. Ignore them until you have a problem, and then go back and see if you can figure out what caused it
  2. Spend all your time studying them, so you don’t have any time to actually look after the network
  3. Use correlation software to remove false alarms and highlight the things that really could be a potential threat.

The third of these is the most tempting, but if you don’t understand what that software is doing to make your alarm display look manageable, you run the risk of ending up back at option 1.

Normalisation
Correlation engines, such as netForensics’ Security Information Management Solution or Symantec’s Incident Manager, take alarm and log outputs from different security devices from different vendors. Security vendors do tend to offer management software for their devices that can do a level of aggregation, to reduce the number of false positives: alerts that look like alarms, but are actually benign. The trick is in amalgamating the outputs from different types of devices, such as firewalls and IDSs, at the same time. And doing it for different manufacturers, since it’s not uncommon for enterprises to install, say, firewalls from two vendors at different points in the network.

The process of taking all the different formats of alarms and turning them into one consistent format that can then be worked with is known as normalisation, and it is usually done by means of plug-in software agents that you buy for each vendor/device combination you need to support.
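The idea can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s actual agent: the two log formats, device names and field names below are all hypothetical, standing in for the per-vendor parsers that feed a common event schema.

```python
# Sketch of normalisation: per-vendor "agents" parse device-specific
# log lines into one common event format the correlation engine can use.
# The log formats and field names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class NormalisedEvent:
    source_device: str   # which device reported the event
    event_type: str      # shared vocabulary, e.g. "port_scan", "failed_login"
    src_ip: str
    timestamp: int       # seconds since epoch

def parse_vendor_a(line: str) -> NormalisedEvent:
    # Hypothetical firewall format: "timestamp|type|source-ip"
    ts, etype, src = line.split("|")
    return NormalisedEvent("fw-a", etype, src, int(ts))

def parse_vendor_b(line: str) -> NormalisedEvent:
    # Hypothetical IDS format: "source-ip type @timestamp"
    src, rest = line.split(" ", 1)
    etype, ts = rest.split("@")
    return NormalisedEvent("ids-b", etype.strip(), src, int(ts))

events = [
    parse_vendor_a("1000|port_scan|10.0.0.5"),
    parse_vendor_b("10.0.0.5 failed_login @1120"),
]
```

Once both devices’ output is in this shape, the engine can compare events from the firewall and the IDS as equals, which is what makes cross-device correlation possible at all.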

Correlation
If you have firewalls from Check Point and Cisco, and a handful of RealSecure IDSs from ISS, how do all the alarms tie up to tell you what’s going on in your network?

There are two main types of correlation that software can use to link events together, both to reduce alerts and to prioritise them according to the potential effect on your particular network. These are rules-based correlation and statistical correlation—though the latter goes by a variety of names, including algorithmic correlation, impact-based correlation, anomaly detection and so on, depending on the vendor.

Rules-based correlation works the way it sounds—it depends on configured rules that define what is and is not acceptable behaviour. These rules tend to have a time dependency, in that two events, which might not seem particularly untoward in themselves, may indicate a potentially serious security breach if seen together within a certain time period. It’s not unusual for a failed login attempt to be recorded, for instance, as someone mistypes a username or password. However, a failed login just a few minutes after a port scan may well be an indication of an attack attempt, and should be highlighted. This is where tuning may be needed—although rules will be configured to begin with, you’ll probably need to add more or change those that exist. Changing time periods can greatly affect the number of false positives reported versus the chances of missing real attacks.
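The port-scan-then-failed-login rule above can be sketched as follows. This is an illustrative simplification, assuming events have already been normalised into (timestamp, type, source IP) tuples in time order; the five-minute window is the kind of value you would tune.

```python
# Sketch of a time-windowed correlation rule: flag a failed login that
# follows a port scan from the same source within WINDOW seconds.
# Event tuples and the window value are illustrative assumptions.
WINDOW = 300  # five minutes; widening it catches slower attacks
              # but raises false positives

def correlate(events):
    alerts = []
    last_scan = {}  # src_ip -> timestamp of its most recent port scan
    for ts, etype, src in events:
        if etype == "port_scan":
            last_scan[src] = ts
        elif etype == "failed_login":
            if src in last_scan and ts - last_scan[src] <= WINDOW:
                alerts.append((src, ts))
    return alerts

events = [
    (1000, "port_scan", "10.0.0.5"),
    (1120, "failed_login", "10.0.0.5"),  # scan 2 minutes earlier -> alert
    (5000, "failed_login", "10.0.0.9"),  # no preceding scan -> stays quiet
]
```

Neither event on its own would be worth waking anyone up for; it is the pairing within the window that makes the first failed login interesting, while the second is treated as routine noise.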

Statistical correlation takes the type of threat, the importance of the destination under attack, and the vulnerability of that destination to that type of threat and creates a risk assessment score which translates to ‘how important is this potential attack to me’. For instance, while an IDS will report on every possible attack, the good correlation engine should determine that if it’s a Windows-specific attack, say, on a segment of your network that has only Unix hosts, then it’s probably not a huge immediate danger. You still need some indication that someone has gained access to your network, but it can be prioritised down in comparison with another attack on your email server, for instance.
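A crude version of that risk-assessment score might look like the sketch below. The asset table, weightings and the simple multiply-together formula are all hypothetical—real products use far richer models—but it shows how a Windows-specific attack on a Unix-only segment falls to the bottom of the pile while the same attack on the email server rises to the top.

```python
# Sketch of impact-based scoring: combine threat severity, the value of
# the target asset, and whether the target is actually vulnerable.
# The asset inventory and weightings below are hypothetical.
ASSETS = {
    "10.0.0.5": {"os": "unix",    "value": 2},   # low-value Unix host
    "10.0.0.9": {"os": "windows", "value": 10},  # the email server
}

def risk_score(target_ip: str, attack_os: str, severity: int) -> int:
    asset = ASSETS[target_ip]
    # A Windows-specific attack against a Unix host is unlikely to succeed,
    # so it is downgraded rather than discarded entirely.
    vulnerable = 1 if asset["os"] == attack_os else 0
    return severity * asset["value"] * vulnerable

low  = risk_score("10.0.0.5", "windows", 5)  # Windows attack, Unix target
high = risk_score("10.0.0.9", "windows", 5)  # same attack, email server
```

In practice you would not score the mismatched attack at zero—as the text notes, you still want some indication that someone is probing your network—but the relative ordering is the point: the email server alert gets looked at first.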

By virtue of the amount of processing it has to do and the complexity of its calculations, correlation software is neither cheap nor simple to set up. Most of it will work to some extent out of the box, but you’re going to have to baseline your network, configure rules and set up asset groups so that the software knows which are your most important devices, if you want it to operate effectively. In many cases it’s worth getting your vendor or systems integrator to do a lot of this for you as part of the installation package. Again, this will have a price tag associated with it, but they’ll do it quicker and without getting dragged off to do some other task. If you don’t set it up correctly from the start, you won’t gain the benefit from it. At best that makes the installation a waste of money; at worst you’re putting your network at risk by putting all your faith in a misconfigured system.