The Portable Network Consultant (PNC) and Itheon QoS (formerly known as the iAM QoS – they've changed a few product names here and there) are aimed at organisations that want to monitor the performance levels of the applications running over their network, and to diagnose issues when the observed performance becomes unacceptable.

I'm writing about them in the same review because they're fundamentally the same software product: it's just that the QoS is a rack-mountable appliance and the PNC is a laptop-based package (ours came on a Dell Latitude D610) that you can carry about with you.

The devices work by monitoring the traffic on a network segment and breaking down the IP transmissions into their constituent parts. The software has two components: the detector, which does the actual data collection; and the central appliance, which collates everything.

Both the QoS and the PNC can run on-board detectors (referred to as "integral detectors" in the GUI), but you can host external detectors too. Of course, when you're connecting any detector to the network you need to make sure that the traffic you want to monitor is actually visible to the detector, so you'll need to use port mirroring or CAT5 splitters copiously if you're in a switched network.

Getting everything set up is a simple matter of plugging in and turning on. The GUI is web-based (so on the PNC you run up a browser window and point it at "localhost", or you can even connect to it over the LAN from your own desk) and it follows the usual two-pane arrangement of menus on the left, detail on the right. You tell it what detectors it has (rather usefully, particularly in the PNC, you can turn individual detectors on and off at will) and it sits and gathers data. Our PNC had two LAN ports, so you could have one doing the monitoring and one for remote access.

Data is stored on a session basis. For TCP connections, "session" basically means "connection", so for instance, a POP3 session lasts from the moment the client connects to the server to the moment it disconnects after downloading. The software measures the various timings of stuff happening on the network and uses this information in its reports. You can drill down into the data gathered by a detector from three viewpoints: by protocol (TCP, UDP, ICMP, etc); by application (HTTP, POP3, FTP, etc); or by IP address (if you're interested in a particular client or server).
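To give a flavour of what session-based bookkeeping involves, here's a minimal sketch (my own illustrative code, not Itheon's actual data model): sessions are keyed on the TCP four-tuple, and a session's duration runs from its first packet to the FIN.

```python
# Hypothetical session tracker keyed on the TCP four-tuple -- an
# illustration of the concept, not the product's implementation.
from dataclasses import dataclass

@dataclass
class Session:
    start: float            # time of first packet (e.g. the SYN)
    last_seen: float = 0.0  # time of most recent packet
    bytes_total: int = 0    # payload bytes observed in either direction

sessions: dict[tuple, Session] = {}

def observe(src, sport, dst, dport, ts, payload_len, fin=False):
    """Record one packet against its session; close the session on FIN
    and return the session duration (None while it is still open)."""
    key = (src, sport, dst, dport)
    s = sessions.setdefault(key, Session(start=ts))
    s.last_seen = ts
    s.bytes_total += payload_len
    if fin:
        duration = s.last_seen - s.start
        del sessions[key]
        return duration  # e.g. how long a POP3 download took
    return None
```

So a POP3 session that connects at t=1.0 and disconnects at t=3.5 would be reported with a duration of 2.5 seconds, which is exactly the kind of timing figure that feeds the reports.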

It took me a minute or two to realise that there are actually two actions when you click an item within a detector's scope. If you click on the text of the section (eg. Applications) the detail window lets you report on the total traffic for that section as a whole, but if you click on the little folder icon, it breaks down the menu into a list of individual applications/addresses/protocols, on which you can click in order to report just on that single entity. It's obvious once you’ve realised, of course.

When you drill into an item you're given the choice of a number of standard reports. No surprises here – it's your common set of top talkers, slowest responders, top packet re-senders and the like. Clicking on an item throws you straight into a graph pop-up, and when you're viewing the graph there's a set of options you can fiddle with in order to customise the view (eg. start and end time, or source/destination address). This is done well, as it means that you get something useful (in this case an overview graph) with a single click, but can then tweak it further if you wish. If you want to add your own custom graphs, you can do so by clicking the Quick Graph button and defining your criteria; you can do one-offs or save your settings to be used again later.

As well as providing the ability to record and report on data, the system can alert you if something goes pear-shaped, via triggers that you define for each detector. To set up a trigger, you define what type of data it will be based on (network response time, for instance, or an application failing to respond) and then tell it what criteria should cause the trigger to fire. An example might be the utilisation of a network segment exceeding a particular level for a certain number of seconds.
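That "threshold held for a duration" style of trigger can be sketched like this (the class and parameter names are invented for the example; this is not the product's configuration syntax):

```python
# Illustrative "utilisation above X% for N seconds" trigger -- names and
# structure are made up for the sketch, not taken from the product.
class UtilisationTrigger:
    def __init__(self, threshold_pct, hold_seconds):
        self.threshold = threshold_pct
        self.hold = hold_seconds
        self.breach_start = None  # when utilisation first went over

    def sample(self, ts, utilisation_pct):
        """Feed one utilisation sample; return True when the trigger fires."""
        if utilisation_pct < self.threshold:
            self.breach_start = None  # dipped back under: reset the clock
            return False
        if self.breach_start is None:
            self.breach_start = ts
        return (ts - self.breach_start) >= self.hold
```

The point of the reset is that a momentary spike doesn't fire the trigger; the segment has to stay over the threshold for the whole hold period.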

Triggers on the detectors are related to events on the central appliance, so when you define a trigger, you tell it what severity of event it should generate, and the appliance's event screen then displays it appropriately (you can define custom views that only show events above a particular severity level, for instance).
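A severity-filtered view of that kind boils down to something like the following (a toy sketch in the spirit of the appliance's custom views; the severity scale here is invented for the example):

```python
# Toy severity-filtered event view -- the five-level scale is an
# assumption for the example, not the product's actual scale.
SEVERITIES = ["info", "warning", "minor", "major", "critical"]

def view(events, min_severity):
    """Return only the events at or above the given severity level."""
    floor = SEVERITIES.index(min_severity)
    return [e for e in events if SEVERITIES.index(e["severity"]) >= floor]
```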

The PNC and QoS are refreshingly simple to use, and the ability to collate events from distributed detectors is a sensible way to make it scalable. The GUI is good, you can tweak the presentation to a surprisingly large extent, and I like the idea of selling the same software product as both a rack-mountable long-term logger and a portable problem-finding tool. Data capture was almost instant in our test – that is, it wasn't many seconds between the client having a session and that session appearing in the PNC – and the way you navigate into graphical reports means that you always get something informative with a minimum of clicks, but can further refine it if required.

There are plenty of performance monitoring tools on the market, so anyone wanting to compete in this area has to stand out. PNC/QoS stands out because someone has sat down and thought about how to build a flexible product and make it usable. That is, it doesn't feel like something a techie wrote in his spare time and which someone in marketing decided to tack a GUI onto and try to flog. Historically, I've only come across three vendors who give me this warm feeling: NetScout, Peribit (now Juniper) and NetBotz. I've now found a fourth.

You probably wouldn't buy both a QoS and a PNC. If you want ongoing reporting, buy a QoS and use remote detectors, then do your troubleshooting via a browser from the comfort of your desk. If you just want troubleshooting, buy a PNC.