Everyone loves to hate benchmarks. You say you don’t like them one bit, and you tell me they’re all meaningless, misleading, or skewed in the service of the vendors that sponsor them. I tell you that the benchmarks I run aren’t the least bit fun, and I’d just as soon be rid of them.

But all of us who groan about and deride benchmarks realise the truth: We need them. Without benchmarks we’d have nothing but hard numbers to guide us: clock speed, gigabytes per second, rpm, polygon counts, and so on.

How excited can one be about a 2.2GHz Opteron when there is a 3.6GHz Xeon within reach? If you can afford either, is it smarter to buy five 7,200-rpm Serial ATA drives or three 15,000-rpm SCSI drives? DDR2 (DDR, second generation) is faster than DDR, right? These read like word problems from your algebra textbook, but at least those had right answers. You can know every last detail about your hardware and still be more or less winging it when it comes to purchasing decisions and, later, allocating systems and storage to tasks.
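The drive question, at least, can be framed as a back-of-envelope calculation before any benchmark runs. A rough sketch in Python, with the caveat that the seek times below are assumed, illustrative figures rather than measured ones:

```python
# Back-of-envelope random-I/O estimate for the drive word problem.
# Seek times are assumptions for illustration, not measured specs.

def est_iops(rpm, avg_seek_ms):
    """Rough random-I/O rate for one drive: the reciprocal of
    average seek time plus average rotational latency (the time
    for half a revolution)."""
    avg_rot_ms = 60000.0 / rpm / 2.0   # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + avg_rot_ms)

# Five 7,200-rpm SATA drives vs. three 15,000-rpm SCSI drives.
sata_array = 5 * est_iops(7200, avg_seek_ms=8.5)    # assumed seek
scsi_array = 3 * est_iops(15000, avg_seek_ms=3.5)   # assumed seek
print(f"SATA array: ~{sata_array:.0f} IOPS")
print(f"SCSI array: ~{scsi_array:.0f} IOPS")
```

Even this crude model shows why the spec sheet alone misleads: under these assumptions the three fast drives win on random I/O, while the five slow drives win on raw capacity and likely on sequential throughput, and only a benchmark against your actual workload tells you which number matters.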

It’s no easy thing to balance and prioritise the needs of several groups within your organisation. Some hard numbers do make these decisions for you: I didn’t need to be convinced that Gigabit Ethernet was a smart move once switch prices fell below $30 per port, but you don’t get many shots at simple maths like that. When you’re spending serious money on systems and storage, you need to put more faith in squishy numbers. Benchmarks make good business partners if you choose them well, know where they fit in your strategy, and know when to listen and when to ignore them.

A benchmark’s primary function is to provide you with a necessary measure that hard numbers can’t give you: capacity. An enterprise is often more interested in how heavy a load its technology can shoulder than in how quickly it can move the load from here to there.
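The speed-versus-capacity distinction can be made concrete with a toy harness: hold the cost of a single operation fixed, sweep the concurrency, and watch aggregate throughput climb while individual operations get no faster. The workload here is a stand-in (a simulated 10ms operation), an assumption of mine rather than anything from a real benchmark suite:

```python
# Toy capacity sweep: aggregate throughput vs. concurrency for a
# fixed-cost operation. The sleep is a stand-in for real work.
import time
from concurrent.futures import ThreadPoolExecutor

def one_op():
    time.sleep(0.01)   # each operation still takes ~10 ms

def throughput(workers, ops=40):
    """Run `ops` operations across `workers` threads; return ops/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for f in [pool.submit(one_op) for _ in range(ops)]:
            f.result()
    return ops / (time.perf_counter() - start)

for w in (1, 4, 16):
    print(f"{w:2d} workers: ~{throughput(w):.0f} ops/sec")
```

Per-operation latency never improves, but the load the system can shoulder does, and that second number, capacity, is the one no clock-speed figure will give you.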

There really are good benchmarks out there. I know because I’ve been shacking up with one. I’ve been in touch with my lab-coated alter ego lately about climbing back into the SPEC (Standard Performance Evaluation Corporation) benchmark suites. I wrote a story about AMD’s new dual-core Opteron CPU and, although I’ve written that sort of story many times before, this time I couldn’t let it go without including some benchmarks. In the process, I learned and conveyed knowledge that couldn’t have been derived any other way.

It may be that the benchmarks I favour are unique. I have no use for packaged, one-click benchmarks built for hardware older than mine. I have no need for an ad hoc benchmark that lives predominantly on one platform and changes so often that its test results cannot be compared from year to year. I built 64-bit SPEC CPU2000 benchmarks using Intel 64-bit compilers on Microsoft 64-bit Windows on an AMD 64-bit Opteron system. I went into the process with lots of theoretical knowledge and left with the balance tilted toward the practical.

It’s those hard, steady, take-to-the-bank numbers that are too often arbitrary, misleading, and untrustworthy. I stand behind my squishy benchmark numbers because I know exactly where they came from. There’s power in that.