LogLogic’s range is a set of appliances that collate log data and provide a means to interrogate the resulting datasets both for short-term problem identification and long-term reference.

There are two product sets in the family: the LX and the ST. Both are Linux-based, and both ship as appliances. The LX is a data capture and analysis system, and has an on-board MySQL Server instance for storing data as it arrives; it’s designed to sit as close as possible to the systems generating log data and to capture the data both for analysis and for passing on to other devices (more about that later). There are three products in the range: the LX500 (capable of processing up to 200 incoming log messages per second), the LX1000 (1,500 mps) and the LX2000 (3,000 mps). The LX devices take incoming log messages and do two things: first, they process the content and derive metadata (basically pre-processed report data) which is then stored in the database for ad-hoc reporting; second, they pass on the data to ST devices if required.
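
LogLogic doesn’t publish the internals of that metadata step, but the principle is straightforward: parse each raw message into structured fields and keep summary counters that reports can later be run against. Here’s a rough Python sketch of the idea; the field layout and the counters are invented for illustration rather than being LogLogic’s actual schema.

    # Rough illustration of LX-style metadata derivation: parse a syslog line
    # into structured fields and keep per-host/per-program counters that a
    # "top ten" style report could later be run against.
    import re
    from collections import Counter

    SYSLOG_RE = re.compile(
        r'^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s'   # e.g. "Mar  3 14:07:12"
        r'(?P<host>\S+)\s'                           # originating host
        r'(?P<tag>[\w./-]+)(?:\[\d+\])?:\s'          # program name, optional PID
        r'(?P<message>.*)$'
    )

    events_per_host_tag = Counter()   # the stored "metadata"

    def ingest(line):
        """Derive metadata from one raw syslog line and update the counters."""
        match = SYSLOG_RE.match(line)
        if not match:
            return None
        meta = match.groupdict()
        events_per_host_tag[(meta['host'], meta['tag'])] += 1
        return meta

    ingest('Mar  3 14:07:12 fw01 sshd[4711]: Failed password for root from 10.1.2.3')
    print(events_per_host_tag.most_common(10))   # a crude "top ten" view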

The ST family, which is designed for long-term storage and quick retrieval of raw log information, doesn’t have the MySQL database and doesn’t store metadata as the LX does. Instead, it stores the raw log data in a proprietary full-text-indexed structure so you can search quickly and easily through the logs to dig out the detail behind what the overview reporting on the LX unit has told you. There are two products in the ST line: the ST2000, which has no on-board storage but instead leaves you to provide a data store via NAS; and the ST3000, which has a 2TB on-board data store.

All devices except the "baby" LX500 have dual NICs and dual power supplies, hold their operating systems on a mirrored pair of disks, and can cluster as redundant pairs in an active/passive arrangement (with a handy twist that we’ll come to later). The data store on the ST3000 is implemented as a RAID5 array with a hot-spare disk thrown in for good measure.

Data is captured in the LXs by pointing the logging settings of the devices on your network at the LX unit in question. The protocol of choice is Syslog – a standard that’s supported by all proper operating systems and the majority of network infrastructure devices in modern networks. The obvious exception is, of course, Windows systems, for which you’ll have to use a Windows-to-Syslog mechanism such as Snare or LogLogic’s own "Project Lasso".
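
To show how little the sending side has to do, here’s a minimal Python sketch of an application shipping events to a collector over standard UDP syslog using the standard library’s SysLogHandler; the address 192.0.2.10 is just a placeholder standing in for the LX unit.

    # Minimal example of pointing an application's logging at a syslog
    # collector. SysLogHandler uses UDP by default, which matches the
    # behaviour described above; 192.0.2.10:514 stands in for the LX unit.
    import logging
    import logging.handlers

    logger = logging.getLogger('demo-app')
    logger.setLevel(logging.INFO)

    handler = logging.handlers.SysLogHandler(
        address=('192.0.2.10', 514),                       # the LX unit
        facility=logging.handlers.SysLogHandler.LOG_AUTH,  # e.g. auth events
    )
    logger.addHandler(handler)

    logger.info('user fred.bloggs logged in from 10.1.2.3')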

Once data has been captured by an LX unit, it can be passed to a variety of other devices based on rules the administrator defines. First off, you can if you wish make it send stuff to non-LogLogic devices either en masse or based on a filter (so you could, for instance, make it send stuff relating to your PIX firewalls to your CiscoWorks system). More importantly, though, the LX will also forward the raw log data to one or more ST units via a TCP-based connection. The trick is to put the LX devices as close as possible to the machines generating the Syslog traffic, since the latter is UDP-based (which means delivery isn’t guaranteed, especially over congested WAN links) whereas the LX uploads to the ST via a TCP (and therefore guaranteed-delivery) link.
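
LogLogic doesn’t publish its LX-to-ST transfer format, but the general pattern (lossy UDP intake close to the source, a reliable TCP relay onwards, plus an optional filtered copy to a third-party console) can be sketched generically in Python; the addresses, ports and newline framing below are assumptions purely for illustration.

    # Generic sketch of the forwarding pattern: take UDP syslog in from
    # nearby devices, relay everything over a reliable TCP link, and send a
    # filtered copy of PIX firewall messages to another management system.
    import socket

    UDP_LISTEN = ('0.0.0.0', 514)        # local syslog intake
    ST_UNIT    = ('192.0.2.20', 4514)    # central ST (placeholder address/port)
    CISCOWORKS = ('192.0.2.30', 514)     # optional filtered feed (placeholder)

    udp_in = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_in.bind(UDP_LISTEN)

    tcp_out = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp_out.connect(ST_UNIT)             # guaranteed-delivery link to the ST

    filtered_out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        data, source = udp_in.recvfrom(8192)
        tcp_out.sendall(data.rstrip(b'\n') + b'\n')   # everything goes to the ST
        if b'%PIX' in data:                           # rule-based copy for PIX logs
            filtered_out.sendto(data, CISCOWORKS)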

A typical setup, then, would see LX units on distributed sites, collecting data predominantly via UDP-based Syslog signals, and passing this data over the WAN to a central ST unit (though there’s nothing to stop you having STs at some or all of your distributed sites too, if you want to give them on-site historical reporting facilities).

When compared to the LX’s operation, the ST’s data input approach is really quite straightforward; it accepts raw log data, writes it to its data store and indexes it to death so you can do text searches on it.
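
The ST’s on-disk structure is proprietary, but the principle it rests on is a full-text (inverted) index: map every token to the log entries that contain it, so an arbitrary search never has to scan the whole archive. A toy, in-memory version of the idea:

    # Toy inverted index over raw log lines: store each line verbatim and
    # record, per token, which lines it appears in; a search then intersects
    # the token sets instead of scanning every line.
    from collections import defaultdict

    log_lines = []               # raw log data, stored verbatim
    index = defaultdict(set)     # token -> set of line positions

    def store(line):
        position = len(log_lines)
        log_lines.append(line)
        for token in line.lower().split():
            index[token].add(position)

    def search(*terms):
        """Return the lines containing all of the given terms."""
        hits = set.intersection(*(index.get(t.lower(), set()) for t in terms))
        return [log_lines[i] for i in sorted(hits)]

    store('Mar  3 14:07:12 fw01 sshd[4711]: Failed password for root')
    store('Mar  3 14:07:15 fw01 sshd[4711]: Accepted password for fred')
    print(search('failed', 'password'))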

OK, so we’ve got the data; what about reporting on it? Reports are done, like everything on these devices, via a web-based GUI. The GUI’s not the most colourful in the world, but it’s very well done and it makes a pretty good job of addressing my main desire in this kind of product: most of the time, when you think: “I’d like to be able to click on this bit here to drill down and get to a screen that tells me X”, that’s precisely what it does. Your GUI session starts with a basic overview "dashboard" that tells you fundamental stuff like what kind of log message volumes you’re seeing, but then you can look at reports that address your particular needs.

The reporting capability on the LX extends to the ability to do statistical analysis on its internally-held metadata (which is limited to the last 90 days). So it’s on the LX that you do all your problem investigation (“tell me about instances where the proportion of failed logins was too high”, for example, along with good old "top ten" reports that address the usual MIS information desires). To do a report on an LX, you simply define the criteria you want to filter the data by and hit "Go". So you might select a particular set of servers, or all network devices of a particular type, or devices on a particular subnet, then dictate what protocols you’d like to filter in/out, define a date range, and so on. Filters can be applied both before a report is run and afterwards – which is nice, since it lets you see the basic results from your original query and then further tweak the criteria to filter out stuff you’re not bothered about for the moment. The way the LX reports work is quite clever, and is the whole point of them processing log information as it comes in. If you initiate a report on one unit that involves data on another, there’s a SOAP-based metadata interchange that lets the unit you’re running the report on ask the remote unit(s) to work out their bit of the report and pass the answer back to the master for collation into the final result set.
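
In effect it’s a scatter/gather arrangement: each remote LX evaluates the criteria against its own 90-day metadata store and hands back only a summary, which the originating unit merges. The real interchange is SOAP-based; the Python sketch below skips the transport altogether and just shows the collation step, with made-up unit names, criteria and canned partial results.

    # Scatter/gather collation: ask each unit for its partial answer, then
    # merge the summaries into one result set. All names and numbers here
    # are invented; the real LX-to-LX exchange happens over SOAP.
    from collections import Counter

    def partial_report_from(unit, criteria):
        # Stand-in for the remote call: each unit would evaluate the criteria
        # against its own metadata and return only the summarised answer.
        canned = {
            'lx-london':  Counter({'fred.bloggs': 14, 'svc-backup': 3}),
            'lx-newyork': Counter({'fred.bloggs': 2,  'admin': 9}),
        }
        return canned[unit]

    def run_report(units, criteria):
        combined = Counter()
        for unit in units:
            combined += partial_report_from(unit, criteria)   # gather and merge
        return combined

    criteria = {'event': 'failed_login', 'last_days': 7}
    print(run_report(['lx-london', 'lx-newyork'], criteria).most_common(10))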

The reporting on the LX does more than just collate "top ten login failures", though; the main emphasis of the product is not really on helping techies gather log data together, but instead revolves around compliance management. There’s quite a lot you can tell about the way your organisation works, compliance-wise. So, for example, if your systems log the fact that you’ve disabled a user, you can show your auditor that when Fred Bloggs was fired last month, his account was removed from the corporate directory and the VPN server within the timescale specified in your standards manual. Similarly, it lets you do stuff like “Where you’ve seen a user deletion on any system, tell me of any subsequent activity by that user no matter where in the organisation it took place”. Oh, and as well as running real-time reports (which really do seem fast – nothing took longer than a few seconds to run) you can also tell the LX to run them on a schedule and/or to send alerts to system management stations when administrator-defined criteria are met.
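
That "deleted user, subsequent activity" check boils down to a simple correlation over the event stream; on the LX it’s a built-in report/alert rather than something you script yourself, so the event layout below is invented purely to show the logic.

    # Once a user deletion is seen anywhere, flag any later event that still
    # refers to that user, whichever system it came from.
    events = [
        ('Mar  1 09:00:00', 'dc01',  'account fred.bloggs deleted'),
        ('Mar  1 09:05:00', 'vpn01', 'account fred.bloggs deleted'),
        ('Mar  2 11:30:00', 'srv07', 'login success for fred.bloggs'),
    ]

    deleted_users = set()
    violations = []

    for timestamp, host, message in events:         # assumed already in time order
        if 'deleted' in message:
            deleted_users.add(message.split()[1])    # crude username extraction
            continue
        if any(user in message for user in deleted_users):
            violations.append((timestamp, host, message))

    for violation in violations:
        print('post-deletion activity:', violation)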

When you report on an ST, you’re out of the comparative, statistical world and into arbitrary searching of log files. While the LX is for problem solving, statistical analysis/comparison and checking your compliance status, the ST is designed simply to hold raw log files and let you search them – thus addressing the increasing need these days to store shedloads of log data just because some compliance standard says you have to. Whereas the LX stores only 90 days’ metadata, the ST stores as much data as the available storage will allow. You can search the data pretty much arbitrarily, and if you drill down into a result it’ll show it to you in the context of what came before and after (so you can look at, say, a reboot event and see from the nearby stuff whether it was a controlled restart or a system crash).
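
That context view is essentially a search that returns a window of surrounding lines rather than just the matching one. A small Python sketch, with an illustrative log and window size:

    # Find lines matching a search term and return them together with the
    # lines either side, so you can tell (for example) a clean shutdown
    # from a crash by what immediately precedes the boot messages.
    def search_with_context(lines, needle, window=2):
        for position, line in enumerate(lines):
            if needle in line:
                start = max(0, position - window)
                yield lines[start:position + window + 1]

    raw_log = [
        'Mar  3 02:11:40 srv07 shutdown[1]: shutting down for system reboot',
        'Mar  3 02:11:41 srv07 init: switching to runlevel 6',
        'Mar  3 02:14:02 srv07 kernel: Linux version 2.6.18 booting',
        'Mar  3 02:14:05 srv07 sshd[812]: Server listening on 0.0.0.0 port 22',
    ]

    for context in search_with_context(raw_log, 'booting'):
        print('\n'.join(context))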

LogLogic’s products are a bit of an eye-opener, because they make you realise just how relevant your log files are even to non-techies, not least in relation to compliance with modern-day regulatory requirements such as Sarbanes-Oxley or Basel II. The architecture is cleverly designed, you can deploy LX and ST units in whatever combinations fit your purposes, and they’re easy to use, manage and integrate into your organisation’s network.

OUR VERDICT

Although the price tag will grow proportionately with the size of your company and its information storage requirements, you will at least see a huge reduction in the time and effort (and thus cost) of analysing your systems and/or dredging up historical log data.