The statistics of mean time between failures (MTBF) and annualised failure rate (AFR) have drawn a lot of attention in the storage world lately, especially with the release of three much-discussed studies devoted to the topic in the past year. And for good reason: Vendor-stated MTBFs have risen into the 1 million-to-1.5 million-hour range, the equivalent of roughly 114 to 171 years, a life span that no one is seeing in the real world.
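For reference, those year figures are simply the quoted hours divided by the hours in a year of continuous operation. A minimal sketch of the arithmetic, assuming round-the-clock use and a 365-day year:

```python
# Back-of-the-envelope conversion of the quoted MTBF figures into years,
# assuming continuous (24/7) operation.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

for mtbf_hours in (1_000_000, 1_500_000):
    print(f"{mtbf_hours:,} hours MTBF ~= {mtbf_hours / HOURS_PER_YEAR:.0f} years")

# Output:
# 1,000,000 hours MTBF ~= 114 years
# 1,500,000 hours MTBF ~= 171 years
```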

The three studies are:

-- Google's "Failure Trends in a Large Disk Drive Population"
-- Carnegie Mellon University's "Disk Failures in the Real World"
-- University of Illinois' "Are Disks the Dominant Contributor for Storage Failures?"

"MTBF is a term that's in growing disrepute inside the industry because people don't understand what the numbers mean," says Robin Harris, an analyst at Data Mobility Group who also runs the StorageMojo blog. "Your average consumer and a lot of server administrators don't really get why vendors say a disk has a 1 million-hour MTBF, and yet it doesn't last that long."

Indeed, "how do these numbers help a person who wants to evaluate drives?" says Steve Smith, a former EMC employee and an independent management consultant. "I don't think they can."

Even storage system maker NetApp acknowledges in a response to an open letter on the StorageMojo blog that failure rates are several times higher than reported. "Most experienced storage array customers have learned to equate the accuracy of quoted drive-failure specs to the miles-per-gallon estimates reported by car manufacturers," the company says. "It's a classic case of 'Your mileage may vary' - and often will - if you deploy these disks in anything but the mildest of evaluation/demo lab environments."

Study results

The upshot of the recent studies can be summarised this way: Users and vendors live in very different worlds when it comes to disk reliability and failure rates.

Consider that MTBF is a figure that's reached through stress-testing and statistical extrapolation, Harris says. "When the vendor specs a 300,000-hour MTBF - which is common for consumer-level SATA drives - they're saying that for a large population of drives, half will fail in the first 300,000 hours of operation," he says on his blog. "MTBF, therefore, says nothing about how long any particular drive will last." In other words, MTBF does a very poor job communicating what the actual failure profile looks like, he says.
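One way to see the disconnect Harris describes is to turn an MTBF figure into the nominal annualised failure rate it implies. The sketch below assumes 24/7 operation and uses the simple approximation AFR ≈ operating hours per year ÷ MTBF; vendors' exact duty-cycle and environmental assumptions vary, so treat the figures as illustrative only.

```python
# Rough sketch: convert a quoted MTBF into the nominal annualised failure
# rate (AFR) it implies, assuming the drive runs 24 hours a day, 7 days a
# week. Vendor duty-cycle assumptions differ, so these are illustrative.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def nominal_afr(mtbf_hours: float) -> float:
    """Approximate annualised failure rate implied by an MTBF figure."""
    return HOURS_PER_YEAR / mtbf_hours

for mtbf in (300_000, 1_000_000, 1_500_000):
    print(f"MTBF {mtbf:>9,} h -> nominal AFR ~{nominal_afr(mtbf):.2%}")

# MTBF   300,000 h -> nominal AFR ~2.92%
# MTBF 1,000,000 h -> nominal AFR ~0.88%
# MTBF 1,500,000 h -> nominal AFR ~0.58%
```

Even taken at face value, a 300,000-hour MTBF implies a nominal failure rate of under 3% a year - and, as NetApp's response above concedes, field rates run several times higher.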

It's like quoting the average height of women in the US without showing the data used to derive that average, Smith says. "MTBF became the standard because it was perceived as a simpler answer to the question of reliability than showing the data of how they arrived at it," Smith says. "It's an honest-to-God simplification."
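To make that analogy concrete, here is a purely hypothetical sketch: if you assume one particular lifetime distribution - an exponential model with a 300,000-hour mean, chosen only for illustration and not how any vendor necessarily models drives - the lone average conceals that half the drives would already be gone by about 208,000 hours, and roughly 63% by the MTBF itself.

```python
# Illustration of Smith's point: a mean alone hides the distribution.
# Assume (purely for illustration) that drive lifetimes are exponentially
# distributed with a mean of 300,000 hours. Real drive populations don't
# necessarily follow this model.
import math

MEAN_LIFE = 300_000  # hours

median_life = math.log(2) * MEAN_LIFE  # ~207,944 hours
failed_by_mean = 1 - math.exp(-1)      # fraction failed by t = MTBF

print(f"Median lifetime: {median_life:,.0f} h")
print(f"Share failed by the MTBF itself: {failed_by_mean:.0%}")  # ~63%
```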

Stan Zaffos, an analyst at Gartner, agrees. While he believes MTBF accurately reflects what vendors are seeing with the technology they ship, he says it is difficult to translate into something meaningful for end users. "It's a very complex and tortuous route to undertake, requiring a lot of solid engineering experience and an understanding of probability and statistics," he says.