When you think about how network performance is typically ensured, one technique, overprovisioning, rises to the top. And it's relatively simple to implement - just make sure there's more capacity in a network than will be required.

For example, Skype works so well over the Internet largely because of overprovisioning: there is no class-of-service or quality-of-service mechanism in the Internet for Skype (or any other application seeking time-bounded performance) to take advantage of.

In other words, the goal is to provision enough excess capacity that sufficient throughput is available for all users and their applications, whatever they may be, optimising for productivity. The concept of overprovisioning, as it turns out, is easily applied to wireless LANs. As WLAN equipment has gotten cheaper, it makes sense to deploy more access points (APs), installing them closer together and thereby adding capacity - creating what we call a "dense deployment."

We used to think (and some still do) of WLAN coverage as being on the order of 50,000 square feet or so per AP, assuming a radius of coverage of roughly 125 feet. (Some have even used as much as 300 feet, which is most certainly not recommended!) But, as I've noted before, there is an inverse relationship between distance and throughput (and thus reliability and capacity) in most wireless systems, so it makes sense to keep the distance between endpoints as short as possible to maximise throughput.
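A quick back-of-the-envelope check of those coverage figures, assuming a simple circular cell (the function name here is mine, for illustration):

```python
import math

def coverage_area_sqft(radius_ft: float) -> float:
    """Approximate circular coverage area of one AP, in square feet."""
    return math.pi * radius_ft ** 2

# The traditional planning figure: roughly a 125 ft radius per AP
print(round(coverage_area_sqft(125)))   # 49087 - the "50,000 sq ft" rule of thumb
# The not-recommended 300 ft radius covers far more floor per AP...
print(round(coverage_area_sqft(300)))   # 282743
# ...but at that distance, throughput and reliability collapse.
```

The arithmetic confirms that the 50,000-square-foot rule of thumb is just a 125-foot radius; pushing the radius to 300 feet multiplies the area per AP by nearly six while leaving every client far from the radio.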

Range is no longer an issue
Another advantage of such an approach is that we can re-use frequencies over shorter distances and raise capacity by using the maximum number of non-overlapping channels in a given area. Note this is potentially a very large number - up to 27, if we use 802.11a, and you should. If you're concerned about the range of 802.11a (and, despite rumors to the contrary, it really is about the same as .11g for any given throughput over the entire 6 to 54 Mbit/s range supported by .11a), well, dense deployments put that problem to bed - range is simply no longer important.

Dense deployments are just that - think 3,000 to 5,000 square feet of coverage per channel. However, we've heard of cases where the coverage per cell was only 1,000 square feet, a maximum of roughly 18 feet between AP and client. There's really no problem in getting the maximum throughput available in this case. Multiplying the maximum throughput per AP by the number of APs deployed is thus the formula for getting the most capacity in any given installation. If you're thinking, by the way, that you have enough capacity, think again - you can never have too much throughput or overall capacity. Networks, wired or wireless, are never too fast.
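The geometry and the capacity formula above can both be checked with a few lines of arithmetic. The 25 Mbit/s per-AP figure below is purely illustrative, not a number from the article:

```python
import math

def dense_cell_radius_ft(area_sqft: float) -> float:
    """Radius of a circular cell with the given coverage area."""
    return math.sqrt(area_sqft / math.pi)

def aggregate_capacity_mbps(per_ap_throughput_mbps: float, num_aps: int) -> float:
    """The article's formula: max throughput per AP times number of APs."""
    return per_ap_throughput_mbps * num_aps

# A 1,000 sq ft cell implies an AP-to-client distance of roughly 18 ft:
print(round(dense_cell_radius_ft(1000)))      # 18

# Covering a 50,000 sq ft floor with 3,000 sq ft cells,
# assuming (hypothetically) 25 Mbit/s of real throughput per AP:
aps = math.ceil(50_000 / 3_000)               # 17 APs
print(aps, aggregate_capacity_mbps(25, aps))  # 17 425.0
```

Note how the same floor that one AP used to cover now holds 17, and aggregate capacity scales with the AP count - which is the whole point of deploying densely.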

And less range means fewer retries due to radio propagation errors, again improving capacity and overall responsiveness. And finally, capacity requirements, in terms of users, the applications they run and the throughput requirements of those applications, only grow over time. Dense deployments are thus a great way to plan ahead.

It's down to smart controllers
In terms of increasing capacity, just adding more APs isn't going to work by itself. Rather, the situation is one akin to that found in the operation of modern aircraft. Today, when the pilot operates the controls, what's really happening is that the pilot's intent is being communicated to a large number of processors onboard the aircraft. Software and computers then decide how to carry out the pilot's commands, with the result being improved safety, control and reliability. Modern aircraft just won't fly with the manual controls used in the past. There are simply too many variables for a human to consider in real time.

Similarly, modern WLANs require a controller of some form to set transmit power levels and channel assignments and adjust these and other key parameters in real time. Dense deployments couldn't be implemented via hand tuning; it would be like trying to fly a plane without the onboard processors. Given that the radio environment can change as the physical configuration of an enterprise changes (and as the number of users and their locations change), it's often necessary to reconfigure the WLAN from time to time, and the controller takes care of this automatically.
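To make the controller's job concrete, here is a deliberately toy sketch of one decision it automates: giving each AP the channel least used among its radio neighbours. Real controllers weigh measured interference, client load and transmit power as well, and continually re-run this kind of assignment; the channel list is a small subset of the 5 GHz channels, for illustration only.

```python
# A handful of non-overlapping 5 GHz channels, for illustration
NON_OVERLAPPING_CHANNELS = [36, 40, 44, 48]

def assign_channels(neighbours):
    """Greedy channel assignment: each AP takes the channel least used
    by the neighbours assigned so far (a toy model of what a controller does)."""
    assignment = {}
    for ap in neighbours:
        in_use = [assignment[n] for n in neighbours[ap] if n in assignment]
        assignment[ap] = min(NON_OVERLAPPING_CHANNELS, key=in_use.count)
    return assignment

# Four hypothetical APs; each entry lists the APs it can hear
aps = {"AP1": ["AP2", "AP3"], "AP2": ["AP1", "AP3"],
       "AP3": ["AP1", "AP2"], "AP4": ["AP3"]}
print(assign_channels(aps))
# {'AP1': 36, 'AP2': 40, 'AP3': 44, 'AP4': 36}
```

Notice that AP4 can re-use channel 36 because it only hears AP3 - exactly the frequency re-use over short distances that dense deployments exploit, and exactly the sort of bookkeeping that is hopeless to do by hand at scale.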

Configuring, tuning and reconfiguring basic radio parameters is part of what we call radio-frequency spectrum management (RFSM), a key component of a WLAN's control and management system that can also be used to monitor and even control elements of security, integrity and reliability. Again, a real-time control mechanism, implemented in the wireless switch or appliance, is essential to making dense deployments - and indeed all of this - work.

But, when it does, we can dramatically improve capacity and reliability via dense deployments. In fact, I expect that dense deployments will become the norm, likely via an incremental process as the need for more capacity is identified. I believe dense deployments are the key to the ultimate replacement of wired infrastructure by WLANs - something that was unthinkable a decade ago, but that is now clearly going to happen in enterprises of all sizes.

Craig J. Mathias is a principal at Farpoint Group, an advisory firm specialising in wireless networking and mobile computing. He can be reached at [email protected] This article appeared in Computerworld.