We knew Wi-Fi testing was going to be a big issue. IT managers won't buy kit unless they know what it will do and can compare it against other kit, and they can't do that without proper tests. Besides, we've done enough wireless reviews ourselves to want a reliable way to compare data between them.

Which is why we are pleased to see the IEEE set up a group to look into Wi-Fi testing. Bob Mandeville, a leading figure behind the Wireless Performance Prediction (WPP) group, has a long history in wired network testing. He set up the European Network Laboratories in the 1990s, developed tests for wired Ethernet, and wrote two testing standards, the IETF's RFC 2285 and RFC 2889. Now he is president of US-based test lab Iometrix, and editor of test methods for the Metro Ethernet Forum.

We spoke to him about his hopes for the group and the problems wireless testing is going to bring up.

"The comparatively slow take off of wireless in the office space, is simply because of a lack of … is trust too strong a word?" he says. "The IT manager will ask questions at a deeper level than a SOHO user with a single access point."

If a SOHO access point doesn't work, the user can simply get a new one, he says: "If you are supporting 1000 users, who are used to wireline levels of performance, that is not an option."

"The IT manager also needs to know, if he invests in wireless, what applications can he trust to the wireless LAN?" says Mandeville. The problem is there are simply no figures to answer this kind of performance question.

Right now, the figures don't mean anything
The motivation of the group is clear: give us a way to compare the performance of Wi-Fi products. Mandeville takes the example of roaming time, the time taken for a device to disconnect from one access point and connect to another, where the lack of a clear definition means you can't tell what a vendor's claims mean: "One vendor reported a 57ms roaming time, while our tests showed 28.6 seconds, on an 802.11g network," he said. The problem isn't that vendors are lying, but simply that they don't have a defined way to measure performance, he says: "It depends on what you actually count as roaming time - whether you choose to include or exclude rate adaptation."

In this example, rate adaptation clearly should be included, says Mandeville, since the connection may technically be in operation in less than 60ms, but applications won't work properly for nearly 30 seconds, a delay that would be unacceptable in voice applications and, to be honest, pretty much all applications. "The test group will have to define the metric applied to roaming," said Mandeville.
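To see how much the definition matters, here is a minimal sketch (ours, not the WPP group's) of the two competing definitions, using the figures Mandeville quotes. The event names and timestamps are hypothetical, purely for illustration:

```python
# A sketch (ours, not the WPP group's) of two competing definitions of
# roaming time. All event names and timestamps are hypothetical.

from dataclasses import dataclass

@dataclass
class RoamEvents:
    t_disassoc: float   # when the client leaves the old access point (s)
    t_reassoc: float    # when 802.11 reassociation with the new AP completes (s)
    t_full_rate: float  # when rate adaptation finishes and the link is back at full speed (s)

def roaming_time_link_layer(e: RoamEvents) -> float:
    """Narrow definition: the connection is 'up' once reassociation completes."""
    return e.t_reassoc - e.t_disassoc

def roaming_time_application(e: RoamEvents) -> float:
    """Broad definition: include rate adaptation, i.e. when applications work again."""
    return e.t_full_rate - e.t_disassoc

# Numbers loosely modelled on the figures Mandeville quotes:
events = RoamEvents(t_disassoc=0.0, t_reassoc=0.057, t_full_rate=28.6)
print(f"link-layer roaming time:  {roaming_time_link_layer(events) * 1000:.0f} ms")  # ~57 ms
print(f"application roaming time: {roaming_time_application(events):.1f} s")         # ~28.6 s
```

Both numbers describe the same handover; the 500-fold gap comes entirely from where you stop the clock.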

Moving on to standards
For now, the group is a "study group", which cannot define standards. The first thing it will have to do is define terms. "For the next six months, the WPP group will be researching the scope of the problem, establishing metrics for issues such as interference," says Mandeville. "We will just be establishing the lingua franca."

However, he has very clear ideas for what is going to happen beyond that date. "This is a stepping stone to creating a working group [the IEEE's forum for creating standards]," he said. "The reason for the study group is to convince the larger IEEE body that it is worthwhile to create a standard."

WPP is going to focus on practical issues, and Mandeville hopes to pull in members of the user community. For now, the chair is Charles Wright, chief scientist of Azimuth, a test specialist that launched a major Wi-Fi test system in November.

Even before it gets the nod to create standards, it is going to look at the kind of standards that will be required, and how they will be tested. "After terminology, we will have to define methodology," he says. With definitions for things like roaming time and forwarding rate, the group can move on to the controlled environment required to test them.
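As a flavour of what those definitions buy you, here is a hedged sketch of a forwarding-rate metric, loosely modelled on the wired-Ethernet benchmarking ideas in Mandeville's RFCs; the function names and counters are our own illustration, not anything the WPP group has defined:

```python
# A hedged sketch of a precisely defined metric, loosely modelled on the
# forwarding-rate idea from wired-Ethernet benchmarking (RFC 2285/2889).
# The counters are hypothetical inputs a test rig might report; nothing
# here comes from the WPP group itself.

def forwarding_rate(frames_forwarded: int, duration_s: float) -> float:
    """Frames per second the device under test successfully forwarded."""
    if duration_s <= 0:
        raise ValueError("measurement duration must be positive")
    return frames_forwarded / duration_s

def frame_loss_ratio(frames_offered: int, frames_forwarded: int) -> float:
    """Fraction of offered frames the device failed to forward."""
    return 1.0 - frames_forwarded / frames_offered

# Example: 1,000,000 frames offered over 10 seconds, 950,000 forwarded.
print(forwarding_rate(950_000, 10.0))        # 95000.0 frames per second
print(frame_loss_ratio(1_000_000, 950_000))  # 0.05
```

Once a metric is pinned down this precisely, two labs can disagree about equipment but not about what the number means.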

The first danger: define methods, not test products
Any test should be repeatable, and that means a controlled testing environment, says Mandeville. "But what do we mean by a controlled testing environment? That will inevitably come up in the context of any discussion."

The group could find itself in deep waters when it starts defining the method that vendors should use to measure things. "It is very clear that the definition of test methods should be careful to steer clear of defining testing implementations," says Mandeville. "It has to be sufficiently generic to allow different test vendors to arrive at the same method, through different implementations."

For example, Wi-Fi equipment needs to be tested in a controlled environment, so that changes to the RF characteristics around it don't affect the signal strength. One obvious way to do that is to set up purpose-built rooms, but that is expensive. Azimuth, with the only purpose-built Wi-Fi test rig on the market (as far as we know), isolates the Wi-Fi components in a rack of shielded boxes that have specific RF characteristics.

If the standard defines things too closely, it could end up requiring a particular test rig. "The standard should not be the product specification of Azimuth, or anyone else," says Mandeville. "It should be the other way round. The Azimuth - or any other - test system should be an implementation that satisfies the standards to meet the metrics defined in the first phase."

The second danger: don't specify good performance
But there's another danger: "Lurking in the background is a slippery slope," warns Mandeville. "We will recommend best practices. But a standards body does not want to define good and bad performance. The work of the group should be restricted to how to measure performance, not defining what is good or bad."

"The group's terminology and methods should allow users and vendors to optimise wireless networks," he says. "That is the underlying objective. But this should not translate into the group specifying what is good or bad. Any test group should be extremely careful not to get mixed up in that kind of definition. It would be disastrous."

Is it like defining a speedometer for a car, we asked? "It is not the work of the standards body to determine that it is illegal to go more than 60 mph. That belongs to the police," he said. But, in fact, the standards body won't even define the speedometer itself, he pointed out, only the figures that go on it and what they mean.

It's that kind of precision thinking that has got Mandeville where he is today, and it's a good indication that the WPP group won't fall into any of the traps he mentions.

The rest of the group are no slouches, he reckons. "The people in the group have a lot of experience. Some of that comes from working within the IETF's benchmarking working group [BMWG]." Intel, Microsoft, Broadcom and Texas Instruments are all active members, with a good pedigree in testing.

Why go with the IEEE, when his previous test work has been in the IETF? "It seemed to many of us that the group that created 802.11 should be the home for the definition of the metrics. The IETF's competency is not in the wireless domain." Ethernet switch testing ended up at the IETF because Layer 3 involves routing, he explained.

The group launches on Monday, March 15. The wireless industry is going to be watching what it does very closely.