How long do disk drives last? (backblaze.com)
311 points by vayan on Nov 12, 2013 | 158 comments



There are extremely large differences in reliability between different drive makes and models. Here are a couple of numbers (we build storage servers, running 24/7 in various environments):

* my company installed about 4500 Seagate Barracuda ES.2 drives (500 GB, 750 GB and mostly 1 TB) between 2008 and 2010. These drives are utter crap: in 5 years we have had about 1500 failures, much worse than the previous generation (250 GB to 750 GB Barracuda ES).

* After replacing several hundred drives, we decided to jump ship in 2010 and went with Hitachi (nowadays HGST). Of the roughly 3000 Hitachi/HGST drives used in the past 3 years, we have had about 20 failures, and only one of the 200 Hitachi drives shipped between 2007 and 2009 failed. Most of the failed drives were 3 TB models, so the 3 TB HGST HUA drives are less reliable than the 2 TB, themselves less reliable than the 1 TB model (which is, by all measures, absolutely rock solid).

* Of the few WD drives we installed, we replaced about 10% in the past 3 years. Not exactly impressive, but the sample isn't large enough to be significant either.

* We replaced a number of Seagate Barracudas with Constellations, and these seem reliable so far; however, the numbers aren't significant yet (only about 120 in use over the past 2 years).

* About SSDs: SSDs are quite a hit-and-miss game. We started back in 2008 with M-Tron (now defunct). M-Tron drives were horribly expensive, but my main compilation server still runs on a bunch of them. Of all the M-Tron SSDs we had (from 16 GB to 128 GB), not one has ever failed. They are 5 years old now, and still fast.

We've tried some other brands: Intel, SuperTalent... Some SuperTalent SSDs had terrible firmware, and the drives would crash under heavy load! They disappeared from the bus when stressed, but came back fine after a power cycle. Oh my...

So far, unfortunately, SSDs seem to be about as reliable as spinning rust. The latest generations fare better, and may actually beat current hard drives (we'll see in a few years how they do in retrospect).


Yev from Backblaze -> We love Hitachi drives. They make us really happy. Unfortunately they are also more expensive than WD and Seagates, and since our #1 factor in drive purchasing is price, we don't get them very often :(


But aren't Hitachi drives now owned by WD?

http://venturebeat.com/2011/03/07/western-digital-buys-hitac...


While operating under the same corporate umbrella, the engineering divisions of WDC and HGST are being kept separate for now, as part of an agreement with Chinese regulators to get their approval for the acquisition. The upshot is that HGST drives will remain a completely separate product line for at least the next few years.


thanks


But isn't it cheaper in the long run if your drives fail less often?


Technically yes, especially from a manpower perspective (it takes folks to replace the drives), and it gets factored into our "what are we willing to pay" model. So far, cheaper drives still win out over longevity. Now, this is different from other people with large data farms; we operate a bit differently, but thus far cheaper drives make more business sense. If that ever changes, we'll switch to the good stuff :)


Depends on a lot of factors, including your internal rate of return.


I remember seeing somewhere that failure rates correspond strongly with the number of platters (something along the lines of doubling the platters doubling the chance of failure within 3 years).


Sysadmin here. My experience:

1. Infant mortality. Drives fail after a couple months of use.

2. The 3-year mark. This is where failures begin for typical workloads.

3. The 4-6 year mark. This is when you can expect the drives that haven't failed earlier to fail. By this point, we're looking at roughly a 33% failure rate.

Interesting that my experiences roughly match up with Chart 1.

My experience is with 10k and 15k RPM SAS drives. Slower 7200 RPM drives? No idea; I haven't used them in servers in a while. They seem more of a crapshoot to me. SSDs, thus far, are even more of a crapshoot, so we don't use them in servers and only hesitantly in desktops/laptops, and only Intel.


Agreed RE: SSD drives ...

It is very disappointing how flaky and unreliable SSD devices have been, when their promise, given the lack of moving parts, was just the opposite.

Back in 1999/2000 I had a habit of building some personal as well as commercial servers in datacenters with compact flash parts (plain old consumer CF drives) as boot devices with the goal of fault tolerance in mind. There was a price to be paid in that these devices needed to be mounted, and run, read-only.

But they ran forever. I never had one part fail. Just plain old CF drives mated directly to the IDE interface.

Now fast forward to 2013 and new servers we deploy for rsync.net have a boot mirror made of two SSDs ... things have gone well, but our general experience and anecdotal evidence from other parties gives us pause.

One thought: an SSD mirror, if it fails from some weird device bug or strange "wear" pattern, would fail entirely, since both members of the mirror get exactly the same treatment. For that reason, when we build SSD boot mirrors, we do so with two different parts - either one current-gen and one previous-gen Intel part, or one Intel part and one Samsung part. That way, if there is some strange behavior, defect, or wear issue, they won't both experience it.
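In ZFS terms the idea is just this (a minimal sketch; the pool name and device names are made up, with each half of the mirror coming from a different vendor):

  # mixed-vendor boot mirror: a firmware or wear bug is unlikely to hit both
  # halves at the same time
  zpool create bootpool mirror /dev/ada0 /dev/ada1   # ada0: Intel, ada1: Samsung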


They get the same writes but not the same reads, so depending on the source of the bug it may not hit both at the same time. The read pattern itself may affect how pending writes are committed to the flash (delaying or speeding them up), which can have a butterfly effect on the rest of the behavior and keep the disks from being in lockstep with regard to firmware bugs.

If you'd still follow up on your idea of using a read-only root, like you did with the CF cards, and find a safe place for the logs, you could use the SSDs in the same mode. Why not go that route?
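Something along these lines, for illustration (device name is a placeholder; logs and /tmp go to RAM instead of the SSD):

  # illustrative /etc/fstab entries
  /dev/sda1  /         ext4   ro,noatime   0 1
  tmpfs      /var/log  tmpfs  rw,size=64m  0 0
  tmpfs      /tmp      tmpfs  rw,size=64m  0 0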


Yes, depending on the bug source. But the bug source might be related to reads. Nobody knows. Splitting the risk across two different vendors / implementations seems to be good insurance.


I mostly handle server appliances, and a read-only boot disk is bread-and-butter for anything I do. Bonus points for using an initramfs and never hitting the boot disk after the initial boot has completed.

But if you stick to boot SSDs that are read and written to, using different makes sounds like a good strategy.


It would of course be hard to avoid read-only flash no matter what you did - both the BIOS and the PXE ROM on the Ethernet card are presumably flash today (writeable, but in practice only used for reading).


I did exactly the same thing with CF. We had a default config that would operate read-only and leave the machine in a reachable state no matter what. Once we got that far, we'd mount some spinning rust or NFS, pivot_root, and run a secondary init.

It was a huge win for uptime.
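For anyone who hasn't seen it, the hand-off is roughly the classic pivot_root pattern (a sketch only; the NFS path and mount points are made up):

  mount -t nfs 10.0.0.5:/srv/root /newroot
  cd /newroot
  pivot_root . oldroot                 # oldroot/ must already exist under /newroot
  exec chroot . /sbin/init <dev/console >dev/console 2>&1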


I've used mostly 7200 RPM SATA and nearline SAS drives and they are mostly fine. I haven't played enough with higher-RPM SAS yet, but so far I don't see a big difference between them and the 7200 RPM NL-SAS drives.

I'd echo the sentiment seen elsewhere in the comments about Seagate drives vs. Hitachi drives. Both for SATA and NL-SAS. Hitachi 1TB were rock solid compared to Seagate.


Completely anecdotal... but the 640 GB two-platter drives have seemed to be the most rock solid... YMMV though. With the much larger and, since the Thailand floods, much more expensive drives, who knows anymore... it's relatively anecdotal for anyone at this point, since after 3-5 years all the warranties have expired.


The hard disk drive quality has dropped over the last few years.

* Most consumer drives over 2 TB have extremely poor reliability. Just check any Amazon or Newegg review (DOA and early mortality show up with increasing frequency). Yes, I know reviews are not accurate, but since there is no public information on drive failure rates, there is not much else to go on.

* The reduction of manufacturer warranties since the Thailand floods. Surprise: they never changed them back to the original 3-year warranty.

If you have a large array of disks, there is nothing really to worry about. If you have a small set of drives, spend a little extra and get the "Black" or RE drives with a 5-year warranty. Avoid any "Green" drive.


I have to suspect it's because bleeding-edge drives have to be over-specified to compete. They have better margins, which should mean they could afford better quality, but they can barely make the drives at all; they want to ship before reliability is quite there, so top-end drives are sketchy.


Why avoid the Green drives? I assumed they were less power hungry and spin more slowly, and so would be more reliable. I've been ordering them for RAID5 arrays and haven't had too many issues yet.


Greens have had problems with aggressive head parking. If you have an idle set of them, you can go through their design limit of head parks in a couple of months and start to get failures shortly after. Done that.

Check your S.M.A.R.T. data. Look at the head-park number (Load_Cycle_Count, I think it is called; can't look it up now). If it is a six-digit number, you are in trouble. For a server you want it to be on the same order as the number of power-ups. Anything else and you have to explain to yourself "why?"

Edit: adding. The 1 TB and smaller Greens were disasters; I ruined a lot of them. I was told that the 2 TB and up Greens didn't have head-park issues, but I spent part of last week replacing a storage unit populated with 2 TB Greens when a spindle failed (>200 unrecoverable blocks), and found that some of the 2 TB Greens had load cycled into the 200,000 range while others hadn't run up at all. They were all identical models purchased at the same time. Maybe they had different firmware? I replaced them with Reds. They aren't supposed to park, and they won't try to recover a bad sector for more than a few seconds, so they don't hang your RAID when they hit bad sectors.
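For reference, something like this shows the relevant counters (device name is a placeholder); compare the load cycle count against the power cycle count:

  smartctl -A /dev/sdX | grep -E 'Load_Cycle_Count|Power_Cycle_Count|Start_Stop_Count'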


As someone who inherited 240 WD Greens running 24/7: http://idle3-tools.sourceforge.net/ works fine, but disabling the timer has a negative performance impact. 3000 seconds is fine though. You need a complete power cycle before the changes take effect, but then: no more parking. It does make a difference in longevity, in my not-very-scientific opinion.

I can second the >200 bad blocks. Sometimes they still work fine after running badblocks -w over them a few times and raising the timer.
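From memory, usage is roughly this (check the idle3-tools docs, the flags may differ; raw values above 128 are in 30-second steps, so ~228 is about 3000 seconds):

  idle3ctl -g /dev/sdX       # read the current idle3 timer
  idle3ctl -s 228 /dev/sdX   # raise the timer (takes effect after a full power cycle)
  idle3ctl -d /dev/sdX       # or disable parking entirely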


Good to know. I JUST bought a WD Green drive (still in transit from Amazon), so my future self thanks you.


I assume you mean Start_Stop_Count. A quick check on two servers, each with 4 Green drives in RAID6 and RAID5 setups, tells me this hasn't been a problem. Both have Start_Stop_Counts below 100 (on the order of the number of boots the servers have had). I don't see any other attribute that could be the head-park count.

The number I have been finding to be high is Hardware_ECC_Recovered (values between 1036555546 and 2699460003). Not sure if that's normal. I've also had two 1.5 TB drives end up with unrecoverable sectors. RAID recovers from that just fine, but I've been replacing them as it keeps recurring and is supposed to be a signal of failing disks. These 1.5 TB drives are 3+ years old and I've been thrashing them a bit lately. I'd have expected them to last longer though.


I may have spoken too soon. One of my servers has 2 Samsung Green and 2 WD Green drives in RAID6. Here's the SMART value you seem to be discussing:

  $ for dev in `ls /dev/sd?`; do echo $dev; sudo smartctl -a $dev | grep Load_Cycle_Count; done | cut -d " " -f 2,40
  /dev/sda
  Load_Cycle_Count 24
  /dev/sdb
  Load_Cycle_Count 24
  /dev/sdc
  Load_Cycle_Count 1947798
  /dev/sdd
  Load_Cycle_Count 1907706
sda and sdb are the Samsungs and sdc and sdd are the WDs. I also just replaced a failing Samsung Green drive in another machine with a WD, and it already has a Load_Cycle_Count in the 10000s. I guess I need to start avoiding WD Greens at least, maybe Greens altogether.


I'm curious, why run 4 disks in RAID6 instead of RAID10? You lose two disks' worth of capacity to RAID in either case, but with RAID6 there's also parity overhead, slower recovery, and slower performance, especially in degraded mode.


RAID10 only gives you room for one disk failure in some scenarios: if the second failure lands on the same mirror pair as the first, the array is gone.


Yep, that's it. The servers I run are all personal, and their main workload is keeping my files safe. Being able to survive 2 drive failures in all situations is important. I just discovered in this thread that 2 drives in that RAID6 array are actually suffering from excessive head parking. So using RAID6 and buying 2 different types of drives bought me some insurance against the simultaneous failure of those two drives. The performance is fine anyway.


And for anyone that doesn't think this is worthwhile:

We recently had 3 servers each have two drives fail within hours of each other, with about two weeks between each of the 3 servers. These were 3 out of 4 servers that had been configured at the same time, with drives from the same delivery - clearly something had gone wrong.

Usually we try to mix drive types, but we didn't have enough suitable drives when we had to bring these up. Thankfully we do have everything replicated multiple times, and we very specifically avoided replicating things from one of the new servers to another.

When we brought them back online we got a chance to juggle drives around, so now they're nicely mixed in case we get more failures.

For my private setup, I've gone with a mirror + rsync to a separate drive with an entirely different filesystem + CrashPlan. Setups like that seem paranoid until you suffer a catastrophic loss or a near loss...

My first big scare like that was a decade or so ago, when we had a 10 or 12 drive array of IBM Deskstars ("Deathstars") that started failing, one drive after another, about a week apart, and the array just barely managed to rebuild... Particularly because it slowed the array down so much during the rebuild that we were unable to complete a backup a day while running our service too, and taking downtime was unacceptable. So our backups lagged further and further behind while we waited for the next drive failure... Those were some tense weeks.


That's a great example of what I worry about. On my first server I bought 4 identical drives when I built it, and when I needed more space I again bought 4 identical larger drives. Since this was all on RAID5, the risk was actually pretty high. On my second server I bought two from each manufacturer and used RAID6, so now I can survive a whole batch going wrong at the same time. Next time I need to build one from scratch I may even go for 4 different drives (mixing Red/Green/etc).

What I am doing now, as soon as I get unrecoverable errors from a drive, is to replace it with whatever has the best cost/TB at the time. Once all the drives have been upgraded, I can resize the array to the new minimum drive size.


This might be anecdotal, but external WD (MyLife) drives are usually from the Green series, and I had 2 different ones fail on me after about 1 year of use. The same happened to 2 friends of mine. I blamed it on the constant head parking (it went idle after 10 minutes of inactivity).


I have 4 WD Greens, one from an external enclosure and the others internal drives, the oldest being maybe 3 years old now. All are still going fine. Also anecdotal.


My experience is that they spin down way too often for server usage, and eventually break much faster than others.

Some of them were also crippled in firmware so you couldn't use them in RAID1 arrays, but this might have changed.


Isn't spin down controlled by the host and not the drive?


The host can spin down a drive manually, but most often it's done autonomously by the drive.

‣  In Linux you can manually ...

• check the power state of your drive using: hdparm -C /dev/sda

• manually spin down the drive (standby) using: hdparm -y /dev/sda (it will immediately spin up at the first attempt to read a sector)

‣  Or you configure the automatic standby of the drive (which also does not involve the OS)...

• hdparm -S n /dev/sda will configure the timeout of the drive to a value encoding the time to spin-down on a non-linear scale, check the manpage

• hdparm -B n /dev/sda will configure another type of power management which doesn't specify a fixed timeout, but rather a vendor-defined set of power saving measures on a scale of 1..254 (1: most aggressive power saving, 254: least; n>128 allows spin down)

The latter two options are handled internally by the drive and (as far as I know) even stored non-volatile.

http://linux.die.net/man/8/hdparm

(Edit: fixed my broken English ;-) )


So, actually you got -B backwards. n < 128 allows spin-down.


Some drives like WD's Green series have variable RPM to save power (and therefore heat). I don't think the OS is involved at all with controlling how fast those disks spin (so long as they are spinning at all).


WD's marketroids tried very hard to give people that impression, but it's false.

http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771...

At the very bottom, in small, grey text, you'll find "IntelliPower" defined as:

"A fine-tuned balance of spin speed, transfer rate and caching algorithms designed to deliver both significant power savings and solid performance. For each WD Green drive model, WD may use a different, invariable RPM."


The Red drives you are referring to actually have some of the worst reviews around, even compared to normal consumer drives. (Source: the Newegg reviews you mentioned)


I'm not sure where you're getting that info:

http://www.newegg.com/Product/Product.aspx?Item=N82E16822236...

http://www.amazon.com/WD-Red-NAS-Hard-Drive/dp/B008JJLZ7G

I've been running 3 of these in a RAID-5 NAS, no issues so far (not that that's any kind of indicator on a system which idles as a backup all day)


About half of the Newegg reviews are 3-eggs or lower. And even some of the 5-egg reviews mention dead drives! They only got 5 eggs because of Newegg's easy return policy. The drives are clearly unreliable.


great :(


Managing hard drives, especially in redundant setups, can be helped in one small way if you're sure to:

1) select the make and model of drive you want

2) buy the same model of drive from multiple vendors, so that the units have different serial and build numbers; even if you're only buying two drives, buy each from a separate location or vendor.

3) mix up the drives so they don't all die at once. Place stickers with the purchase date and invoice number on each drive to keep them straight.

All this because when one drive goes due to a defect or hitting a similar MTBF, other drives with nearby serial or build numbers tend to die around the same time for similar reasons.

From owning hard drives over 8 or 9 generations of replacing and upgrading since the 90s, on all types of servers, desktops and laptops: the day you buy a new piece of equipment is the day you buy its death. Manage that death proactively, as it gets more and more tiring to deal with each time.


Backblaze may not know, because they are "a company that keeps more than 25,000 disk drives spinning all the time". After 3-5 years, you'd better have a backup of any drive you choose to spin down. Every drive I've lost (in the last 10-15 years, and ignoring two failed controllers subjected to a close lightning strike) failed to start back up when I powered the machine off for maintenance.


I've bought perhaps 50 drives in the past 20 years, and maybe 10 of them died; the others mostly became obsolete. I only started keeping serious logs about 6 years ago.

Drives have died for me both in 24/7-powered systems and through power cycles. Drives have reported intermittent failures for many months, but still lived for years without any actual data loss. The oldest drive I still have spinning is a 200 GB IDE disk containing the OS for my old OpenSolaris ZFS NAS; it must be getting on for 9 years.

I advise having a backup of every drive you own, preferably two. I built a new NAS last week, 12x 4 TB drives in a raidz2 configuration; with ZFS snapshots, it fulfills 2 of the 3 requirements for backup (redundancy and versioning), while I use CrashPlan for cloud backup (distribution, the third requirement). The nice thing about CrashPlan is that my PCs can back up to my NAS as well, so restores are nice and quick; pulling from the internet is a last resort.
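Roughly what that looks like, for the curious (a sketch; pool and device names are placeholders):

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
  zfs snapshot -r tank@weekly-2013-11-12     # cheap point-in-time versioning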


The one thing to know about cheap consumer cloud backup solutions like CrashPlan and Backblaze is that they only keep one copy of your data. So if the RAID array where your data is stored dies and cannot be rebuilt, it's all gone. You can Google for a few disaster stories about both companies.


Like I said, they are only one third of my backup strategy. My house burning down, or someone breaking in and going into my attic to steal my 30kg 4U server, should be the only two realistic scenarios in which I will need to rely on CrashPlan.


I do the same, but in reverse. I use cloud servers with versioned backups, but still beam additional backups back to the office, just to survive a total data-center-destruction disaster.


Backblaze doesn't use RAID, they use blob-level replication. I'm sure they can lose data, the question is how likely they are to lose data simultaneous with you losing your HD and local backup.


How many levels of backups for your backups do you need?


Ideally, in most circumstances, you should have at least one cold (non-RAID, non-connected) backup of all data, plus an offsite one. More is much better.


These numbers line up nicely with what I've experienced on much smaller scales (I've never personally cared for more than a few hundred spinning drives at once), which is that in a mix of old, middle-aged, and new drives, 5-10% go kaput each year.

Incidentally, about "consumer-grade drives": the last time I looked into this, I was led to believe that if it's SATA and 7200 RPM (or less), there's no hardware distinction. It's just firmware. Consumer drives try very hard to recover data from a bad sector, while enterprise/RAID drives have a recovery time limit to prevent them from being unnecessarily dropped from an array (which will have its own recovery mechanisms). That's it.
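For what it's worth, that recovery-time limit is usually exposed as SCT Error Recovery Control (what WD markets as TLER); on drives that support it you can inspect and set it with smartctl, e.g. (device name is a placeholder, times are in tenths of a second):

  smartctl -l scterc /dev/sdX            # show the current read/write recovery limits
  smartctl -l scterc,70,70 /dev/sdX      # cap recovery at 7 seconds each way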


Well, Intel tells a different story[1] that promises 1. more performance, 2. less vibration (which improves performance), 3. ECC memory.

There is a long feature list that mentions things like higher RPM, higher-quality components, larger magnets, air turbulence control, dual processors, etc.

I'm no specialist in hard drives; I just remember reading this stuff when trying to figure out whether I needed it. In the end, for my small-scale corporate file server, I chose ZFS raidz with consumer-grade disk drives.

[1] Enterprise-class versus Desktop-class Hard Drives: http://download.intel.com/support/motherboards/server/sb/ent...


A marketing team at Intel tells us a vague story about what is either a very vague or very specific set of drives, or may be about an entirely hypothetical set of drives. It's not clear.

They even admit to the problem themselves at the end:

"Some hard drive manufactures may differentiate enterprise from desktop drives by not testing certain enterprise-class features, validate the drives with different test criteria, or disable enterprise-class features on a desktop class hard drives so they can market and price them accordingly. Other manufacturers have different architectures and designs for the two drive classes. It can be difficult to get detailed information and specifications on different drive modes."

That PDF tells me nothing interesting. It's marketing crap for clueless executives, not a technical analysis. (Given their absurd obsession with "Higher RPM" as some sort of defining characteristic, it's not even relevant to the statement I made in the first place.)


I wonder how many will die before they age out of usefulness. If you're still spinning 250 GB hard disks that use the same power and space a 4 TB drive could be using, it might not actually be economically sensible to keep running them.

Certainly the old 9.1 GB SCSI disks that were so popular 10 years ago are well past being justifiable to power now.
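Back-of-the-envelope, with assumed figures (roughly 6 W per spinning drive, $0.10/kWh): sixteen 250 GB drives draw about 96 W, i.e. ~840 kWh or ~$84 a year, to hold the same 4 TB that a single ~6 W drive serves for about $5 a year, before even counting the rack space and ports.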


That may be true for Backblaze.

But these drives will still be useful. What about, say, shipping them to NGOs located in Africa?


That's a horrible idea! The logistics of collecting and shipping the old tech to Africa would probably cost more than just buying it in bulk from China and shipping it in one batch.

But there are other considerations:

* This would also result in a big pile of waste in Africa, as their recycling infrastructure is limited.

* They need food, shelter, stable politics and functional education before they can make any use of computers.

* They have limited energy supply. Low powered tablets / laptops are much more useful.



You would not believe the price they have to pay for a computer. It is easily 3x or 4x the price we pay for the same hardware. And they do need computers.

But it is possible that the logistics and shipping cost more than producing new hardware.

Also, cheaper more energy efficient hardware could be more cost effective.

Source: a friend of mine worked for an NGO in Burkina Faso


I do believe you. It's economies of scale: they don't have them.

While most of the western world has huge freight docks that can load, unload and ship relatively cheaply, Africa does not.

This means the primary way to ship stuff there is by airplane, which is notoriously weight constrained, which in turn makes shipping bulk goods expensive.

This makes shipping newer, lighter tech even more cost effective than old hardware.

p.s. This is probably true for anything they need to import, not just tech.


I'd be curious to hear more about your history in international shipping, particularly to countries in Africa.

Many countries on the continent appear to have rather robust capacity for their economies.

http://ports.com/browse/africa/map-view/


I think a lot of that price is actually import duties.


Maybe at such scale.

I don't think that's a horrible idea, because I routinely donate old computer tech to non-profits, which do make use of it, usually to assemble computers for children. Granted, this has negligible cost to me, as we are in the same country.

When one has access to cheap tech from the Best Buy just around the corner, it is difficult to imagine how expensive it can be in third-world countries. It may also be difficult to understand how incredibly old some of the computers that do exist there are.

I can't believe that it would be so expensive to ask people and companies to donate their old "junk", fill up a container and ship it.

> This would also result in a big pile of waste in Africa, as their recycling infrastructure is limited.

That may be so. But that much tech waste could also create the necessary conditions for a recycling industry to start, assuming one doesn't already exist.

> They need food, shelter, stable politics and functional education before they can make any use of computers.

Is that so? Can't computers help them achieve those goals?

> They have limited energy supply. Low powered tablets / laptops are much more useful.

Yes, many locations completely lack power, and ordinary desktop computers wouldn't work there. But I don't think that's true for all African countries.


Agreed. Better yet, keep them in a Google data center. It's far more efficient to make cloud storage available at a reasonable price. How about a diskless laptop that boots from cloud storage? That would be a sweet spot economically (depending on network costs, I guess).


>How about a diskless laptop that boots from cloud storage?

While this is a cool idea, how much bandwidth would you need to boot at roughly the same speed you do today? Some SSDs have 500 MB/s (~4 Gbit/s) read speeds. You'd need gigabit networking with almost zero latency for that to perform well.

I suppose a smaller OS like Chrome OS would be perfect for that. Even if this worked on fiber, how would you boot over a cellular network? Aside from costing you a ton of money, it would take forever to download.


Time is a fungible commodity. E.g. boot time may affect bottom line (rhyme!) in the EU; it can be different in Africa.


How reliable are the networks in Africa? Outside of the cities?


I'm thinking, because they got network late, maybe better than my house in Iowa. I'm still on copper.


I suspect the overheads of collection, shipping and distribution would dwarf the savings from not buying the optimal price/GB drive du jour.

Hard drive space per dollar grows exponentially, and drives are big, weighty things. The window of time in which reuse would be economical is short, and the value dubious.


Here's a tv programme about the e-waste dump in Agbogbloshie, Accra, Ghana. It's probably on the Internet somewhere. (http://www.bbc.co.uk/programmes/b00sch78)


Even with the energy used for shipping them end-to-end?


Microsoft did a meta-analysis of general hardware failure based on the error reports sent by literally millions of consumer PCs. Although the results weren't particularly surprising (hard drives fail the most, at rates consistent with what Backblaze observed in the posted link), I was impressed by the sheer volume of data available to the study.

http://research.microsoft.com/pubs/144888/eurosys84-nighting...


Thank you, this is a real gem. I'm very grateful for MS Research in general, as they do some very interesting things that are only possible when you're the size of Microsoft et al., but I really do wish more academic papers came out of huge institutions. This knowledge really is worth sharing!


tl;dr: Here's some statistically significant data on how 25,000 drives have fared over 4 years; please now comment on how the 3 drives you've owned died.


I enjoyed this comment.


You can use the default warranty to figure out a lower bound on useful life. Companies price the value of the warranty into the product and perform statistical QA to ensure that 95%-99% of all products released will work correctly for the length of the standard warranty. Also, extended warranties aren't worth the cost; just replace the product when it breaks.


Agreed. Pretty much, they also assume you won't return a failed drive. Since drives last about as long as the warranty, you have only a few weeks at best to remember to send one in for replacement.

I've been hit-and-miss, gotten a few drives replaced, had a few warranties expire. But pretty much every disk drive fails eventually.

Think about it - it's a commodity. If it lasted much longer than the warranty, they spent too much on robustness for the price.


If some of them live a long, long time, it makes it hard to compute the average. Also, a few outliers can throw off the average and make it less useful.

Proper statistical analysis would help you there.


> Proper statistical analysis would help you there.

Yes, if you know the probability distribution. If you don't know the distribution, you can not calculate your confidence, and thus can not do a proper statistical analysis.

And, guess what, nobody knows the probability distribution of hard drive failures. That's exactly what they are trying to find out.


There are actually many methods in survival analysis -- just as in the rest of statistics -- that do not impose strict distributional assumptions, and account for the fact that many drives are still operational. But as someone else mentioned, the median is also a good statistic to report :)
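For instance (an illustrative example, not something from the article), the Kaplan-Meier estimator handles the still-running ("right-censored") drives without assuming any distribution:

  S(t) = \prod_{t_i \le t} (1 - d_i / n_i)

where n_i is the number of drives still at risk just before time t_i and d_i is the number that failed at t_i.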


Simply using a median solves some of these problems pretty easily.


Articles like this one are the reason I went with Backblaze over Carbonite. It may not mean their tech is any better, but it does 1) increase my confidence in them and 2) teach me something interesting each time. Both of those are, in my book, good reasons for giving them my money.


And we love your money! Please tell more people to give us your money ;-) Seriously though, we're glad we can entertain you and help you back up. We figure being open about this stuff leads to awesome discussion and sometimes, like in the case of our storage pods, we learn a thing or two from the world at large! It's a win/win :)


I only wish you'd consider a (possibly different-brand) spin-off that targeted server backup and/or power users :-)

Amazon could use some competition in this space, IMNHO.


It's complicated. Here's a link to a paper modeling disk drive failure in data centers. tl;dr: it's about half a percent per 1000 hours of operation.

http://www.ssrc.ucsc.edu/Papers/xin-mascots05.pdf
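For scale, simple arithmetic on that rate: 0.5% per 1,000 hours × 8,760 hours/year ≈ 4.4% per year, in the same ballpark as the single-digit annual failure rates in the article.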


Approximately 40 PB of raw storage in our datacentres here, half of it Supermicro servers with whatever disks they came with, half HP ProLiant with $$$ HP enterprise-class disks, all < 5 years old, so quite comparable to the Backblaze situation.

80% of drives surviving after 5 years seems right; this is what we're seeing as well. The hardware is decommissioned faster than the drives fail.


Does ANYONE know what hard drives Google, Facebook, and Dropbox use at their datacenters? This 2006 article says Google buys Seagate: http://tech.fortune.cnn.com/2006/11/16/seagate-ceo-google-we....


There are only three disk vendors now, and I would assume that large customers buy from at least two of them. But knowing this won't help you, because a new model from company X may be a dud even if the company's previous models were reliable. By the time any model has accumulated enough reliability data to tell whether it's reliable, it's obsolete and you don't want to buy it anyway.


This is the only publicly published information that's even slightly relevant (AFAIK): http://research.google.com/archive/disk_failures.pdf. It doesn't mention any manufacturers by name, but you should be able to draw some inferences from the paper.

I'm not sure that the information would be all that valuable anyway. Google's data-center environments, workloads and requirements are likely pretty different than your environment and requirements, so I'm not sure how the information would be useful?


Part of the problem is the move by manufacturers to have consumers basically burn-in test their products for them as a cost reduction, shifting the expense to retailers.


Well, ultimately it's the consumer/user who does the work. I've considered buying second-hand disks just to get around this problem, although that usually comes with other issues...


Do they plan on sharing any data on which vendors and models have the highest failure rates?


Backblaze wrote about it here (2011): http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v... and here (2013): http://blog.backblaze.com/2013/02/20/180tb-of-good-vibration...

We are constantly looking at new hard drives, evaluating them for reliability and power consumption. The Hitachi 3TB drive (Hitachi Deskstar 5K3000 HDS5C3030ALA630) is our current favorite for both its low power demand and astounding reliability. The Western Digital and Seagate equivalents we tested saw much higher rates of popping out of RAID arrays and drive failure. Even the Western Digital Enterprise Hard Drives had the same high failure rates. The Hitachi drives, on the other hand, perform wonderfully.

In the second article they say that the WD Red is 2nd in reliability (the WD Red did not exist in 2011). I'm happy that I've got a cheap Hitachi Ultrastar. But who knows.

As a personal anecdote: WD Green failure rates are huge here. 24/7 desktop machines, 240 drives; I've replaced at least 20 drives in the last 12 months.


While WD has taken over Hitachi, they had to give Toshiba Hitachi's 3.5" desktop HDD operations as part of the deal for taking over Toshiba's (TSDT) Thailand factory. So if you want Hitachi's good ol' reliability, buy Toshiba desktop drives.


Yev from Backblaze | We actually hope to collect enough data to release a "per manufacturer" report on hard drive failure. We're currently collecting that data; as always, follow the Backblaze Blog for info on that type of stuff!


From what I understand, at this point all 3 major hard disk vendors have very similar failure rates. The thing to look out for is that certain runs of drives may be "contaminated" during the manufacturing process, but knowing which ones is pretty unpredictable.


Also, some combinations of devices can create problems. We had a RAID controller that would randomly rebuild mirrored sets because a drive was 'dead'. After the rebuild the drive seemed fine. We concluded there was some firmware delay that caused the controller to time out some operation and call the drive 'failed'.

We replaced the drives with a different brand and the 'failures' went away.


> We concluded there was some firmware delay that caused the controller to time out some operation and call the drive 'failed'.

There is indeed a firmware setting for how long the drive spends checking for errors after it detects one, during which the drive doesn't respond. Sometimes this is too long for the RAID controller, so it drops the drive.

It used to be that you could buy WD Caviar Black drives and tweak that timeout setting yourself to effectively get WD RE drives (the enterprise version of the WD Black). They removed that "feature" a few years ago.


Ha! Yes they were WD drives we replaced.


lol, my money is on Seagate. I thought it amusing how they rebadged their crap drives as Samsungs when they bought the name, given Samsung's very good reputation in the sector.


My oldest drive currently in use is the original 20MB disk on my IBM PS/2 Model 30.


Anybody knows how long floppy disks, diskettes and their respective drives last?

Odd question, but I've always wondered. These things just seem to last forever.


According to this article[0], the data from old floppy disks is pretty much gone.

[0] http://ascii.textfiles.com/archives/3191


I recently went through all the old 3.5" floppies I could find and most of them were still quite readable. This seems like a bit of hysteria to me. I didn't preserve as many of my 5.25" floppies, but I also don't have a drive for them. My 3.5" drive is an old USB one that came from a Toshiba laptop circa 2000; it was in a drive bay that was interchangeable with a CD-ROM drive.


Anecdotal data point: a few years ago I rescued the data off my 3.5" floppies from the early '90s (IBM PC and Atari ST formats) and they all seemed to be fine.

I don't have any hardware to read my pile of 5.25" Atari 800 disks.



Ugh. Backblaze is one of those companies with an extraordinarily poor design that they flaunt and "open source" as if anyone would follow their lead. Take a look at the physical design of their system and combine that with the published data. Consider that removing any hard drive from their setup requires physically removing a 4U rackmount storage pod from the rack. http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v...

Also, no hardware raid, battery, or cap.

Source: worked at Eye-Fi, built 2PB storage


Disclaimer: I work at Backblaze but I'm on the software side, I barely ever touch the storage pods anymore.

It is not true that the pod team must remove the 4U server from the rack. It slides out like a drawer (no tools required, maybe 10 seconds). The drive or motherboard is then replaced, and you slide the drawer back in. So the 4U server must slide 18 inches one way, but zero cables have to be unplugged or replugged. This takes only one technician and no "server lift"; the drawer supports all the weight.

I'm not defending this design, just correcting a mistake. Backblaze frankly "makes do" with this design because nobody will step up and make anything that fits our needs better. The number-one criterion is total system cost over the lifetime of the system, INCLUDING all the time spent on salaries of datacenter techs dealing with the pods. Raw I/O performance is not that important for backup, so trying to sell us an awesome EMC or NetApp that costs 10x as much and has 10x the raw I/O performance is not very compelling to us. But if you came up with a design that let our datacenter technicians replace a drive faster while not significantly increasing overall costs elsewhere, we would SURELY listen.


Thanks for the clarification. That the pods were on rails was never made clear to me. Still, I count that as "physically removing a 4U rackmount storage pod." Those suckers cannot be light. 10 seconds sounds rather fast; I don't imagine you could do it that fast for any of the upper pods.

While I don't recommend them outright, we settled on 3U boxes from SuperMicro. http://www.supermicro.com/products/chassis/3u/837/sc837e26-r...

We somewhat affectionately dubbed them "mullets" as in business in the front, party in the rear.

They make 4U devices as well. Cost was about $1000. We added LSI MegaRAID 9280 controllers, about another $1500, and ran mini-SAS back to a controller node responsible for 4 JBODs.


It's a different trade-off. The Supermicro boxes use drive trays, so swapping a hard drive requires a datacenter tech to handle the tray mounting and unmounting. The PODs just drop drives right in. They've traded off tray mounting work for chassis sliding.


Yev at Backblaze | One of our designs was for an aluminum pod..it made it..."lighter". :)


HW RAID is a PITA for the following reasons:

1. You have to muck around with more firmware and sometimes reboot in order for changes to take effect.

2. If a controller dies, you have to replace it with (almost) the exact same controller in order to read the data.

3. Datacenters rarely lose power; take the HW RAID money and instead put the servers on true A+B power feeds.

CPUs are so fast these days that they can easily handle in software all the "stuff" that HW RAID used to do.
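For anyone who hasn't tried it lately, the software path really is that simple these days; a rough sketch (device names are placeholders):

  mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
  mkfs.ext4 /dev/md0
  cat /proc/mdstat            # watch the initial sync and check array health

No controller firmware to fight with, and the array can be assembled on any box that has mdadm installed.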


They make a different tradeoff here. There is no need for hardware RAID if performance is not your main concern (and even if it is, hardware RAID is no panacea); if they save everything to disk before acknowledging it, they don't need a battery; and I'm not sure what you are referring to as a cap.

Their hardware design is specifically geared towards their use-case and I applaud them for knowing how to optimize for their use-case. I wouldn't use it for mine but only because it's not a good fit.

They can open-source the hardware because the real secret sauce is the software and the hardware open sourcing gives them a nice edge in marketing.


cap=capacitor. http://www.lsi.com/downloads/Public/MegaRAID%20SAS/MegaRAID%...

Edited to add: They've optimized for hardware purchase price and given up reliability (HW RAID, battery, cap), performance, and maintainability. The strange thing is that the overall cost of a storage system is driven by power, not purchase price. Smarter RAID controllers, like the one I link above, let you manage power by spinning down disks while they are unused, thereby reducing your power draw. I've never seen SW RAID do that. Take a look at Amazon Glacier, which I suspect is using this power-off strategy to drastically reduce their costs.


Their use case is mostly write-once: they fill the disks with data and never delete. The write rate is probably limited more by the upload speeds of their users than by the disk bandwidth and the multitude of port multipliers they use. Recovery is anyway mostly about copying all the user data to an external HDD and shipping that, which is a lot less performance critical, since shipping the HDD will take a lot longer than reading all the data from across their systems.

As for saving power by spinning down disks, it is likely to be useful to them and is completely possible even in SW RAID, though it requires some management to do effectively.

There isn't much here that is directly applicable to most other use cases, but if your data mostly sits idle and you only need occasional access to it, the Backblaze pod is a nice design. If you care about performance and do not deploy multiple pods with redundancy between them, you are not likely to be happy with the result.


> Recovery is anyway mostly about copying all the user data to an external HDD and shipping that, which is a lot less performance critical, since shipping the HDD will take a lot longer than reading all the data from across their systems.

I've restored just a few files from Backblaze. While it's an "offline" operation where you choose the file, then get a notification when it's ready to be downloaded, it took only a handful of minutes.

It's not why I signed up with them, but it was delightful that it worked.


Actually, Glacier is probably backed mostly by tape with a disk cache. Writing then becomes cheap, as you can grab the next blank tape, but restores take a while, as they have to pull the specific tapes needed from whatever storage system they're using.


Seriously doubt Amazon is using tape.


No one (outside Amazon) truly knows what Glacier runs on; likely it is a combination, and tape may play a role. That's why it's relatively inexpensive to house the data, while the costs to get it back are very high and meant for "emergency, everything else has failed" situations.


Asked what IT equipment Glacier uses, Amazon told ZDNet it does not run on tape. "Essentially you can see this as a replacement for tape," a company spokesman said via email.

http://www.zdnet.com/amazon-launches-glacier-cloud-storage-h...

See also: http://en.wikipedia.org/wiki/Amazon_Glacier#Storage


Just out of curiosity, I wonder what the savings would amount to if this company used something like SpinRite to fix / recover failed drives? Although I've never used it, from what I hear it's pretty good at saving drives...



It's actually not that good at saving drives; however, it sometimes saves your data. Recently I used it to recover a 250 GB drive that was part of a failed RAID-5 array, and miraculously got the data back.

So SpinRite may be handy, but throw the drive away after use.


It depends on the warranty period of the drive. Your hard drive manufacturer knows precisely how long the drive will last and sets their warranty period to expire right before your drive gives up the ghost.


Yeah, if that were true it would be completely against EU regulations, so you could sue the manufacturers and win millions. So if you have any proof that they are doing that, you are practically a millionaire; I don't know why you are not on your way to court yet.


You realize drives are made for specific markets, branded for specific markets, and warranty periods are specified on a per market basis?

Not everyone lives in the EU. In fact, the majority of people don't.

Outside your regulation happy haven, warranty periods aren't random and do indicate durability under normal use.


How so? That is how warranties work. They're designed to protect you against premature failure. Unless it's a necessary competitive advantage, no company is going to warranty something past its expected lifetime. And in that case it's going to be included in the price.


Designing a device to fail after a predetermined amount of time is very much against the law. Expected lifetime is different, but that is not what the previous poster was suggesting.


The manufacturer has some good information about the statistical likelihood of your drive failing within a certain time period and tunes the warranty accordingly. They do not know exactly when your specific drive will fail as this would require some form of time travel.


> Your hard drive manufacturer knows precisely how long the drive will last

Care to back that up with any real data instead of baseless consumer speculation relying on time travel?


They don't know exactly, but they have a very good idea. It's all based on statistics.


And a statistical average is very different from knowing exactly when your specific drive will fail.


We don't know, but Backblaze sure knows how to: 1. print Backblaze's brand as often as linguistically possible in the same article; 2. get to the top of HN over a 50-year-old technology's failure rate, without clustering by brand, spin speed, density, etc. (read: not much).

Am I unaware of new paid spots on the first page of HN? (It would make sense, I guess, from a business perspective.)

TIA to anyone who can be of help on this. Cheerio, and good luck to Backblaze: backblaze a path to backblazing success!


There have always been paid articles. This is not one of them. They get to the front page because they're usually well written, contain interesting information, and are relevant to the HN user base.

Don't like it? Don't vote for it.


Yev from Backblaze | One might say that we are "Backblazing a trail"? To my knowledge we've never paid for a top HN article. People tend to like our pod and storage-related stuff and we're always thrilled to see it on here, so we come and chat about it along with everyone. The discussions are awesome. I'll chat with the writing team to see if we can take one or two Backblazes out of future posts ;-)


Hey, it was tongue-in-cheek. Your service looks awesome, and as an awareness piece it was bang on. Although I don't recall hearing about backblaze before, the name has been ringing in my ears all day. Keep up the good work and good luck!


Damn, my last two hard drives have each failed at around 3 years exactly. Did I have bad luck, or am I being too mean to them? My computer is on most of the time and is often reading files throughout the night (for slow uploads, for example). Does it make a difference how often you read/write a drive, or does only the spun-up time matter? One died suddenly without any warning in the SMART data, and the other developed bad blocks and started struggling to read data.


The data presented in the article only makes sense if you buy a large quantity of hard drives; if you've only had a handful, then you were just unlucky.

I suspect the reason people do "burn in" tests on hard drives is to make drives that suffer from early failure ("infant mortality", as described in the article) fail early enough that you can RMA them with the manufacturer. Apart from that, I don't think there's much you can do to improve your chances.
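A typical burn-in, sketched (device name is a placeholder; note that -w is destructive, so only do this on a drive with no data on it yet):

  badblocks -wsv /dev/sdX        # full write/read-back pass over the disk
  smartctl -t long /dev/sdX      # then kick off a long SMART self-test
  smartctl -a /dev/sdX           # afterwards, check for reallocated/pending sectors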


Well, it's hard to say as two is not really an adequate sample size.

The article actually makes this point (about anecdote), but their data suggests that failure rates do rise substantially after three years.


Extremely useful info here. Most of my HDs have been running for years and still work fine, but you go online and all you read is that HDs are horrendously unreliable and all fail after a year or two (manufacturer propaganda, anyone?).

The optical drives I've had, on the other hand, are actually unreliable. They all seem to break down after about four years, and I don't use them all that often!


Your disks are doing fine; do you feel the urge to report that in the reviews on Amazon or Newegg or wherever you bought them?

There is an inherent bias in the reviews, which makes the Backblaze report interesting: it has less of a bias, though they don't report actual disk vendors and models, so you can only draw the general trend, not direct inferences.


I think the floods have probably been a factor in reduced reliability. It took forever for prices to come down to where they were, and the manufacturers are probably cutting corners everywhere to save costs. Why ramp up factories when the tech itself is on the way out?

I think this is most evident in the reduced warranty periods, compared to before, when 5 years was quite normal.


Yev from Backblaze | Yes, prices still haven't come back down to pre-flood levels, unfortunately. They used to drop about 4% per month (over time), but now they're going down more slowly.


One pure storage disk I use has been alive with top S.M.A.R.T. values for over 8 years now. The one with more regular reads and writes is 5 years old. And while I have local backups, I'm now finally in the process of uploading 800 GB of (semi-)important data to Backblaze. I'll probably be done in another month.


Since there seem to be Backblaze staff posting in the thread: is there any way, as a 'personal' (non-business) user, to have multiple PCs configured under one account? Think of something like a family plan.

I can't seem to find relevant information about this anywhere on the website.


It'd be very useful to have more detailed information about read/write volume and capacity, MTBF should vary a lot depending on those. Until then I'll keep being paranoid.


Has Backblaze budgeted for the cliff in failure rates that is coming?


Yev from Backblaze | Somewhat. Luckily, not all the drives fail at the same time. If a drive fails in a pod we replace it, so even if there is a large fall-off after 5-6 years, it won't be like we're losing a majority of our farm in one fell swoop. We buy our gear as we need it and have lots of drives to spare, so unless all drives automatically shut off after 6 years (which they don't), we should be OK ;-)


Based on the linear extrapolation they make from the third-year failure data into the future (to calculate an estimated half-life of roughly 6 years), I would say they probably have not.

A little statistics is a dangerous thing.


Well, as long as they IPO before that happens...


Yev from Backblaze | I know, right?


I'm just jaded, you can ignore me :)


I was going to say, I learned about this in class! Then I read the article and the link to "CMU’s study." ... My professor was one of the authors. Go Garth Gibson!


Spot on. In my experience, drives either fail quite soon or "never". I am still running one of my very early 40 GB hard disks; it must be about 8 years old.


Not long.


Surprisingly long, all things considered.


Wow! Great info in here!




