Reviewing products such as servers and workstations, whether for a magazine such as this one or for your enterprise, means measuring the kit against a given set of criteria. High on the list has always been upgradeability -- also called scalability and expandability by vendors. But what does that really mean? And does anyone actually care? We don't think you should.

Back in the Dark Ages of the PC -- around 20 years ago -- one of the elements of the design that got us all excited was the notion that the machine could be upgraded by adding this memory board or that graphics card. And a few intrepid souls did just that, and usually paid the price by having to spend the next week reworking the machine's configuration so that it could take advantage of the new hardware and revert to a notional standard of working properly. Most users didn't bother, and still don't. Hardware tends to be more reliable these days, though by no means perfect, but the situation hasn't really changed.

On servers it's the same -- only a tad more critical. It takes just one errant driver or one piece of hardware broken during an upgrade to kill the whole machine, taking dozens, hundreds or even thousands of users with it. Not a scenario any data centre manager is keen to experience.

Yet we still tend to review systems on the basis of how easy they are to upgrade, and vendors, of course, are more than happy to market products to do the job. One has to wonder, though, just how much incremental revenue those upgrades actually put in their pockets.

Hard facts are hard to come by, as vendors are -- naturally -- highly secretive about where exactly their revenues come from. Analysts reporting on markets often need to sign confidentiality agreements before major players will lift the veil shrouding the shape of their business.

This is why we've started a forum thread (see link below) so that we might nail this issue down with some real-world experience. The question is simple: do you upgrade your servers? If so, why?

Our assertion is that, contrary to popular belief, almost every server leaves the vendor carrying all the hardware it will ever contain, at least while it remains with its original owner. In other words, the whole idea of expandability is an urban myth, apart from the replacement of failed parts.

Who, for example, upgrades the CPU, or swaps out a working hard disk for a bigger one -- assuming, of course, that the server is a traditional design with storage devices inside the chassis? Very few, we suspect.

For the incremental performance that might result, the risk is huge. Not only do you have to take down a perfectly working system, you may then have to spend hours rebuilding it if you've installed new storage or a new, faster RAID controller. And a new CPU could have unforeseen consequences due to additional stresses placed on memory subsystems and other components. Exceptions are likely to be confined to large, multi-way chassis-based systems where components are designed to be swapped out on a regular basis.

Apart from the technical reasons why most servers aren't upgraded, upgrades carry a high price in downtime and techie time alike. And the budget from which that price is extracted is yours -- the IT department's. Planned capital spending, on the other hand, is pre-allocated on a regular basis, so it's far better for all concerned to buy a new, fully populated server than to fiddle with an old one, given the risks involved.

In other words, don't bother spending money on scalability: spend it instead on the best hardware for the job. Discount any expandability options from the spec sheet, since the odds are that you'll never make use of them.