Many still cling to the belief that there's little difference between an ordinary desktop PC and a file server. Both may be based on similar motherboards - especially in these days of multi-processor desktop PCs - and use similar processors, memory, disks and other subsystems. But a server is a shared system, and that makes certain attributes crucial: high performance, high capacity and, above all, reliability.
How much performance you need depends - as ever - on what you want the server for. Performance assessment starts with the processor. If the machine will just be serving files and printers in a branch office or small department, then a single, fairly cheap CPU will manage the task more than adequately. However, it's wise not to skimp where growth is expected since CPU scalability is not always simple.
Multi-processor servers, now quite affordable, come into their own for running applications such as email gateways and servers, security applications, Web servers and of course CPU-hungry applications such as Oracle or SQL Server. Whether Linux or Windows, your application server is likely to benefit from the fastest CPU or CPUs your budget can stretch to. It will also make the machine more flexible, should you need it to take on other tasks halfway through its allotted lifespan.
As for memory, today's OSes are memory-eaters: the more you can give them the better - RAM has been described as the cheapest upgrade you can buy. With 64-bit architectures now on sale at relatively low (and falling) prices, you can add considerably more than the 4GB limit which applied to 32-bit systems. Exactly how much RAM and CPU you need is a topic highly specific to your application requirements but manufacturers such as Dell provide online tools to help you get closer to a real world system.
However, one component a file server is bound to include is a 64-bit PCI bus or, if it's based on the latest technology, PCI Express. Now appearing in new server products, PCI Express offers a serial rather than a parallel interface for hardware components: above certain speeds, keeping the bits on a parallel bus or cable in sync becomes a harder problem than speeding up a serial data flow. As proof, serial ATA is in the process of overtaking the established, parallel ATA disk interface.
It goes (almost) without saying that reliability is the most critical attribute: without near-permanent uptime the server is worse than useless, costing time (and possibly money) while it's down, and money and time to fix it. Unplanned downtime is a major event in itself; even once the hardware is fixed, damaged data needs to be rebuilt and lost applications restored. As data sets grow, the time such exercises take grows - along with the costs, of course.
Server vendors therefore build resilience into the server hardware with the aim of ensuring that, if a hardware fault occurs, there's a backup unit ready and waiting to shoulder the task. The idea is to eliminate single points of failure. Called redundancy, this technique sits at the core of most failure protection systems used by not just servers but almost all IT systems. Buying twice as many components as you actually need may cost twice as much but it's still cheaper than the consequences of a major hardware failure, especially given the fall in hardware prices in recent years.
With no moving parts to wear out, processors and motherboards tend to be reliable. As long as they're supplied with a continuous and steady stream of power, they probably won't be your major concern. Guaranteeing that continuous supply of power should be, though, and an uninterruptible power supply (UPS) gets you to first base - again, online tools can help you specify a UPS. A redundant, or second, UPS offers additional insurance against a double-fault scenario.
Memory, however, is prone to glitches. ECC (Error Checking and Correcting) memory is essential on a server: it lets the memory subsystem detect and correct occasional single-bit errors on the fly, mitigating the risk of data corruption or, at worst, a system crash.
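To see what that single-bit correction involves, here's a toy Hamming(7,4) code in Python - a sketch only, since real ECC DIMMs apply wider SEC-DED codes to 64-bit words entirely in hardware:

```python
# Toy Hamming(7,4) encoder/corrector - a sketch of the single-bit error
# correction ECC memory performs in hardware. Real modules use wider
# SEC-DED codes over 64-bit words, not 4-bit nibbles.

def encode(d):                       # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # bit positions 1..7

def correct(c):                      # c: 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]  # recover the original data bits

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                         # simulate a stray bit flip
assert correct(sent) == word         # the error is transparently repaired
```

The three parity bits pinpoint which of the seven positions flipped, so one bad bit costs nothing but a correction - exactly the class of "occasional, small memory errors" ECC absorbs.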
Even when the power supply holds steady, one of the main by-products of all that current is heat. Computer components are designed to tolerate a certain amount of it but, even short of those operating limits, the cooler a system runs, the more reliable it's likely to be. Too much heat and the whole system is at risk.
Redundant cooling fans are the answer here. As moving parts, they have a high failure rate so a combination of hardware and software monitors to assure you that they are still working is essential; this allows you to replace failed fans before their absence becomes a problem. Most servers include temperature and voltage sensors along with appropriate management software.
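The logic of that monitoring software is straightforward; here's a sketch in Python. The thresholds and sensor names are illustrative assumptions - a real agent would read values over IPMI or, on Linux, from /sys/class/hwmon:

```python
# A sketch of the fan/temperature watchdog logic server management
# software implements. Thresholds and sensor names are illustrative
# assumptions; real agents pull readings from IPMI or /sys/class/hwmon.

TEMP_LIMIT_C = 70      # warn well before the component's hard limit
MIN_FAN_RPM = 1000     # below this, treat the fan as failed

def check_sensors(readings):
    """Return a list of alerts for out-of-range sensors."""
    alerts = []
    for name, value in readings.items():
        if name.startswith("temp") and value > TEMP_LIMIT_C:
            alerts.append(f"{name}: {value}C exceeds {TEMP_LIMIT_C}C limit")
        if name.startswith("fan") and value < MIN_FAN_RPM:
            alerts.append(f"{name}: {value} rpm - replace before it matters")
    return alerts

sample = {"temp_cpu0": 62, "temp_cpu1": 74, "fan1": 4200, "fan2": 300}
for alert in check_sensors(sample):
    print(alert)
```

The value of the alert is in its timing: with redundant fans, a failure flagged early is a scheduled part swap rather than an outage.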
Redundancy can be built into storage components too. The most basic technique is disk mirroring, which involves duplicating data on a second set of drives. However, this still leaves the disk controller as a single point of failure so duplicating the controllers as well as the disks - known as duplexing - adds another layer of protection. Of course deploying a RAID adds even greater redundancy and makes fixing disk faults easier. Outside the server itself, greater insurance can be had by mirroring the whole server.
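What mirroring means in practice can be sketched in a few lines. This is purely illustrative - dicts stand in for physical drives, and a real RAID 1 implementation lives in the controller or the OS block layer:

```python
# A sketch of disk mirroring (RAID 1) at write time: every block goes to
# both drives, and a read is satisfied by whichever copy survives.
# Dicts stand in for physical disks - an assumption for illustration only.

class Mirror:
    def __init__(self, disks):
        self.disks = disks                 # e.g. two dicts acting as drives

    def write(self, block_no, data):
        for disk in self.disks:            # the write isn't complete until
            disk[block_no] = data          # every copy has the block

    def read(self, block_no):
        for disk in self.disks:            # fall through to a surviving copy
            if block_no in disk:
                return disk[block_no]
        raise IOError("all copies lost")

primary, secondary = {}, {}
m = Mirror([primary, secondary])
m.write(0, b"payroll records")
primary.clear()                            # simulate the first drive dying
assert m.read(0) == b"payroll records"     # the mirror still serves the data
```

Duplexing extends the same idea one level up: each dict would sit behind its own controller, so losing a controller card no longer takes both copies with it.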
Only on a small network are you likely to want a 10/100Mbit/sec network connection. With Gigabit NICs now approaching commodity price levels, not installing a couple of these - two means redundancy, remember - leaves the server unable to expand as your needs change. And installing new components a year or two down the line will always be more fraught than achieving a stable configuration from the outset.
A server-oriented adapter will include some form of on-board processor or TCP/IP Offload Engine (TOE), which typically offloads TCP/IP packet processing from the host CPU. The idea is that packet processing on a high-speed network can add significantly to the CPU's workload, and a TOE can relieve it of most if not all of that load, depending on its design - although some have recently questioned whether this actually makes that big a difference. With gigabit Ethernet components now at commodity prices, it's worth running your own benchmarks to see what effect a TOE would have on your servers.
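The crudest possible version of "run your own benchmarks" is a throughput probe like the Python sketch below. It runs over loopback, which never touches the NIC, so treat it strictly as a harness skeleton: point the client at a second machine to measure real network (and TOE) behaviour. The transfer sizes are arbitrary assumptions:

```python
# A minimal TCP throughput probe. Over loopback it exercises only the
# network stack, not the NIC - run the client against a second machine
# to see real wire (and TOE) behaviour. Sizes are arbitrary assumptions.

import socket
import threading
import time

CHUNK = 64 * 1024
TOTAL = 64 * 1024 * 1024          # 64MB per run

def drain(listener):
    conn, _ = listener.accept()
    while conn.recv(CHUNK):       # read and discard until the client closes
        pass
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # pick any free port
listener.listen(1)
threading.Thread(target=drain, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
payload = b"x" * CHUNK
start = time.monotonic()
sent = 0
while sent < TOTAL:
    client.sendall(payload)
    sent += len(payload)
client.close()
elapsed = time.monotonic() - start
print(f"{sent / elapsed / 1e6:.0f} MB/s")
```

Run it with the TOE (or the OS's own offload features) enabled and disabled, and watch CPU load as well as throughput - the offload argument stands or falls on the difference between the two runs.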
Highly popular in today's server farms are blade servers. Packed into chassis that slot into standard racks, these allow you to cram multiple servers into the kinds of spaces that would previously only accommodate a single machine. Huge savings in space and therefore cost can be made, say vendors, through a high-density computer farm.
Physically, a blade chassis is designed like a chassis switch: you hot-plug a server module into the backplane, which provides shared services such as cooling, I/O and cabling. The main problem with blades is that there are no standards: you can't buy a blade from vendor A and expect it to work with vendor B's chassis, and there is no standard way of managing them. In time, though, expect these issues to be addressed - user pressure is likely to force vendors to work more closely together.
Perhaps more importantly, the problem of cooling is probably greater than many users at first appreciate. Blade servers increase the amount of heat generated per square metre, causing problems that may need the installation of specialised cooling units.
Perhaps the biggest change in recent years has been the separation of storage from the server. Smaller installations - whether in small offices or in branches or departments of larger organisations - are still likely to keep storage inside the server. But for larger setups, most now regard storage as separate from the server, even where the installation is a stand-alone machine or a small handful of them. The result is the storage area network (SAN), which you can read more about in Techworld's Storage Knowledge Centre.
Room to upgrade and add extra storage and other peripherals is important, but less so than the key attributes of performance and reliability. This is only a brief overview of what constitutes a server, but it should be clear that these machines - once just a desktop with perhaps an extra disk drive - are now far removed from the average desktop system.