On WD Red NAS Drives (westerndigital.com)
313 points by hysan on April 21, 2020 | 248 comments


>SMR is tested and proven technology that enables us to keep up with the growing volume of data for personal and business use.

Absolute baloney. SMR is useful if it allows you to attain capacities otherwise unattainable, which isn't the case here. Note that the higher capacity Red drives don't use SMR. Older Red drives of these capacities, and non-Red drives of these capacities, don't use SMR.

Also, notice how now they're openly admitting that these are SMR drives. Nowhere in this post do they admit that they previously deliberately concealed this fact.

EDIT: Also note that they talk about these being "drive-managed SMR" drives, and they are, but they reportedly don't advertise themselves to the OS as being drive-managed SMR, as they're supposed to do as per the ATA specifications (unlike other WD DMSMR drives which do). They literally designed the drives themselves to lie about this.

>Additionally, some of you have recently shared that in certain, more data intensive, continuous read/write use cases, the WD Red HDD-powered NAS systems are not performing as you would expect.

They literally built a NAS drive which can't handle RAID, and are acting like this isn't their failing.


From the blog:

"The data intensity of typical small business/home NAS workloads is intermittent, leaving sufficient idle time for DMSMR drives to perform background data management tasks as needed and continue an optimal performance experience for users."

A NAS drive, which features RAID specific firmware no less, is _very_ likely to be put in a RAID, and rebuilding the RAID after replacing a disk is exactly the opposite of an intermittent workload.

What they should have done is this:

- Change the name slightly, "WD Red Lite" or something.

- Make sure the product information page (and firmware if applicable) says this is a DM-SMR drive.

- Say it is intended for single-disk or RAID-1 (mirror) setups, and is not recommended in RAID-5, RAID-6 or similar setups, with links to their Red Pro disks for that.

That would have allowed them to capitalize on the "WD Red" name as something consumers have heard about, while also not outright lying about a very important aspect of the product.


If you change the name to "lite" the jig is up and everyone knows it's a poorer quality product than what they're used to buying. By keeping the same name they can increase margins by duping unsuspecting buyers into buying a poorer quality product.


Maybe "Red Lite" is a poor name, maybe "Red Home" is better.

Either way, yeah, this was an attempt at a cash grab. Which was quite stupid, because there's no way they'd be able to hide the fact that it's SMR for long; the performance characteristics are just too different.

But it's also totally unnecessary, because SMR is perfectly fine for use cases with sparse random write workloads. Something most casual home users or small businesses with a 1- or 2-disk Synology or similar have.


They already created Red Pro for business use, and market it as "Designed and tested for RAID environments 8-16 Bay NAS", at a 37% markup relative to the Red in my market. My quick Google sleuthing indicates those are not SMR. The Red line is marketed as "Designed and tested for RAID environments 1-8 Bay NAS", which implies they're for more than just a 2-bay Synology box.


An ethical company, with ethical marketing, would have done that.

WD has proven they are anything but ethical with this move. I moved from Seagate to WD a while back for many of my personal drives; looks like it is time to move back to Seagate or Toshiba from now on.


I dropped WD after buying, and returning, their WD MyCloud Home device.

I expected, per their advertisements, a cheap barebones LAN visible NAS with SMB shares.

What I got was basically a self-hosted Google Drive that:

- doesn't work without internet, even on LAN

- can be accessed as an SMB share only with WD spyware installed!

- WD spyware crashed my graphics drivers

- WD spyware works only when there is an internet connection available

- can only be managed via the internet

- costs more than space on any cloud storage provider for a self-hosted solution

- you have an utter lack of privacy, as they scan and analyze each file on your drive as per their ToS, same thing with their application - hence I call it spyware.


Both Seagate and Toshiba do the same. You have nowhere to move.


Kinda... They do sell undisclosed SMR drives into the desktop and external space, but they both have come out to affirm that they do not do that in the NAS space.

https://arstechnica.com/information-technology/2020/04/seaga...


Seagate will happily brick your drives with bad firmware. Toshiba is it.


Yah, the idea that someone at WD thought it a good idea to put these kinds of drives in a RAID means that they don't technically understand RAID. Which is sort of scary, and makes you wonder what other gotchas they have lying about.

Even regular idle scrubbing is going to potentially keep drives 100% busy for extended periods of time. Combined with online backup/etc. It's crazy! Do they think that only enterprises run their NASes with backup, or workloads that provide a low level of near-constant background activity?


Doesn't seem like it would work in a RAID1 setup: a rebuild typically requires reading the entire source drive and writing it to the other drive in the mirror. Some filesystems like ZFS might understand where the used blocks are and only copy them, but others where RAID is a layer above the filesystem (Linux mdadm) will have to copy the whole drive, even if it's empty.
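
Rough numbers, assuming a hypothetical 6 TB drive with 2 TB actually allocated and an idealized ~180 MB/s sustained rate (ballpark assumptions, not measurements):

    # Back-of-the-envelope rebuild estimate. Throughput numbers are assumed;
    # a DM-SMR drive can fall far below them once its CMR staging area fills.
    def rebuild_hours(bytes_to_copy, mb_per_sec):
        return bytes_to_copy / (mb_per_sec * 1e6) / 3600

    DRIVE = 6e12   # hypothetical 6 TB drive
    USED  = 2e12   # hypothetical 2 TB of allocated data

    print(f"mdadm full copy : {rebuild_hours(DRIVE, 180):6.1f} h at 180 MB/s")
    print(f"ZFS resilver    : {rebuild_hours(USED, 180):6.1f} h at 180 MB/s")
    print(f"full copy @ SMR : {rebuild_hours(DRIVE, 10):6.1f} h at  10 MB/s")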


Pretty much exactly my thoughts. This is the diesel "cheat device" of magnetic disk recording.

This kind of behavior really annoys me and I hope the market brutally punishes them for it, but I'm not holding my breath.


I haven't been buying WD for my home projects for a long time. But they now own the HGST Ultrastar/Deskstar line and will likely make it suck.


Toshiba has the 3.5" HGST tech [0] so you can buy a toshiba drive if you are afraid WD will ruin the HGST technology they bought.

0: https://en.wikipedia.org/wiki/HGST


> Also note that they talk about these being "drive-managed SMR" drives, and they are, but they reportedly don't advertise themselves to the OS as being drive-managed SMR, as they're supposed to do as per the ATA specifications (unlike other WD DMSMR drives which do). They literally designed the drives themselves to lie about this.

I wonder if reporting to the OS that it is an SMR drive would kick in host-managed SMR code paths that conflict with how the drive-managed SMR works. While I'm not terribly inclined to give WD the benefit of the doubt, this seems certainly possible. It seems like it should be possible to test by modifying whatever part of the Linux kernel or userspace tools controls this and having it use the model number instead of what the drive reports as its SMR status. Granted, on a closed device like a Synology NAS, this isn't possible for a user to do, though Synology themselves could, and probably should, test this.


From what I've been reading the main difference is simply exposing the TRIM command, which can in no way be a detriment - the only reason to disable TRIM is to hide the fact that it's SMR


Interesting! I never would have thought of a spinning drive as needing a TRIM command, but it makes sense for drives where data is literally overlapping.

But why would failure to expose TRIM cause errors in rebuilding a RAID, in which case the old drives should be more or less read-only, and the replacement drives write-only? My understanding is that TRIM should affect re-writing sectors only.


TRIM isn't the cause. The problem with SMR is that disk failure in a RAID array causes read/write patterns during a rebuild that SMR simply was not designed for. I'm sure SMR-friendly RAID software will be available in the future, but this requires the software to know that the drive uses SMR. SMR also tends to have really awful performance for regular desktop usage.


Yah, by blocking foreground read/write activity until an entire SMR region has been filled. That will work well...


Sorry what I meant was the difference between host-managed and DMSMR. Having TRIM doesn't make any difference in the rebuild, it just means you can detect it's SMR and you can help the drive in normal usage. Either one will have trouble in a RAID rebuild. Being able to detect it's an SMR drive could probably let some RAID software handle it in a better way, but I don't think any existing software does (since nobody is crazy enough to use SMR in a RAID)


In theory, SMR drives shouldn't have issues with reading, and they should also write fine if you write entire stripes. With HMSMR it should be possible. With DMSMR - I don't know. Do DMSMR drives partially skip the staging area on large writes, and fully skip it if you align them?

I don't see why RAID rebuilds should take notably longer on SMR drives, if only they were properly managed - by both the RAID driver and drive FW. Though it's hard to properly utilize a DMSMR black box.


RAIDs are generally rebuilt online, which means that the normal read and write activity is still occurring. Rewriting a block of data in the middle of an SMR region requires reading the entire SMR region, updating the changed data, and then writing it back. So, normally a RAID controller will just stop updating a stripe (frequently much, much smaller than an SMR region) and perform whatever foreground activity is needed. The smarter ones rebuild the stripe with the new data along the way. This is going to really mess up rebuilding on SMR, because suddenly what you assume is probably a linear operation isn't, from the perspective of the drive.
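
A toy model of why that hurts so much - the zone and chunk sizes here are illustrative guesses, not the specs of any particular drive:

    # Rewriting a small RAID chunk inside a shingled zone forces the drive to
    # read, modify, and rewrite the whole zone once its CMR cache is exhausted.
    ZONE_MB  = 256   # assumed SMR zone size (order of magnitude only)
    CHUNK_KB = 64    # assumed RAID chunk size being updated

    amplification = (ZONE_MB * 1024) / CHUNK_KB
    print(f"{amplification:.0f}x write amplification")   # 4096x for these numbers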


TRIM is not involved when RAID is rebuilding. You are correct in that.


Cannot agree more. IMO, SMR-based drives should be used for archival purposes only. They are not fit for day-to-day usage, and don't even think of using them in a NAS/RAID.


But guess what: there are literally no other options when it comes to spinning rust; WD owns most of the brands.


"We've rigorously tested up to 8 drives" - bullshit. An 8-bay synology will fail a RAID rebuild every single time. If you're lucky, it'll work after the 3rd retry - which will be about 2 weeks later because of how slow it is.

This is a money grab, plain and simple. Had they substituted SMR drives at a lower price they might have a leg to stand on. But as others have mentioned, they didn't - they lied until they got caught and are now pretending like they were up-front. Talk about making a bad situation worse. The headline should've been "We're sorry, here's what we're doing to fix it" - instead it's "there's nothing wrong, it's YOUR fault you caught me cheating".


I think the worst part of this fiasco is that the drives may appear to operate normally until the moment when the raid is most vulnerable.


I wonder if this explains why a couple of brand new Red drives have brought down an 8-drive ZFS array for me.

Oh, well. I'll return them and keep returning them until they work right.


Companies pretending that nothing is wrong are the worst.


Wow, this is a horrendously bad response, no apology, no admission of guilt, probably written by the legal team.

I am a total layman w.r.t Hard Drives, but it seems pretty simple (please correct me if I'm wrong):

- They used a technology that impacts performance, that allows them to make higher capacity drives for cheaper.

- The technology used prohibits them from being used in certain application contexts (citation needed/general consensus?)

- They marketed the product as being suitable for those certain application contexts, when they aren't.

- They come out with a blog, telling consumers to buy their more expensive drives instead?


So it seems like ultimately, what they've done is to move what used to be "Red" into "Red Pro" and make "Red" a more budget oriented line (without clearly communicating this in advance).

The only reason to do this is to trick customers into buying the wrong product by accident! There is literally no good faith reading of this change. If they wanted to provide better value for the people who don't mind the SMR performance characteristics, that's great, but that would involve clearly marketing the line as such, not simply redefining what "WD Red" means.


It's brand exploitation. They build a good brand, then sacrifice it for short-term gain to rip off customers, while building a new brand to replace it. It's very common in the fashion industry (Gucci, etc.)


Sort of like how Pyrex used to mean borosilicate glass, which has a low coefficient of thermal expansion, and is thus oven-safe.

This was the case for sixty-some years.

Then some douchebag decided that Pyrex wasn't a material, it was a "brand", and started slapping the name on kitchenware made of soda-lime glass, which will straight-up explode if you treat it like borosilicate.


In the UK, and probably the rest of Europe, you can still buy borosilicate Pyrex made in France, but this single decision destroyed any trust I had in the brand. It's not even a new decision[0] and I am mostly unaffected by it, but any discussion about Pyrex in person will invariably lead to me butting in to emphasize that it's the borosilicate that makes the glass temperature resistant, not the brand. It shouldn't even be necessary, but occasionally import Pyrex makes its way onto places like Amazon.

[0] http://themainframe.ca/content/images/2015/09/Pyrex-infograp...


Given the enthusiasm for class action lawsuits in the US I am amazed this passed muster with management and the legal department.


war on drugs was their reasoning; not sure if it's true or acceptable and yeah I'm pretty amazed it flew in the US.

But then another brand by Corning is the same: they had a range of unbreakable products called Corelle, and the new stuff since it got sold off just isn't the same.


Old school Corelle is the best ceramics you could get. I have 40+ year old Corelle plates and bowls in the cabinets right now. Used daily and dropped hundreds of times over the years.


> war on drugs was their reasoning

Uh what ? What do you mean, I don’t see any relationship between Pyrex and the war on drugs at all and I’m really curious !


I'm not sure how their PR dept spun it, but borosilicate glass is incredibly resistant to sudden changes in temperature, as well as having better chemical and acid resistance, making it ideal for chemistry lab work. Presumably some news stories broke about 'amateur chemists' using Pyrex cookware for a different type of cooking. It's not like borosilicate is a controlled material though; the reason for the switch is likely that soda-lime glass is significantly cheaper to produce. They did suggest that their soda-lime glass was more mechanically durable, but then again borosilicate glass is also known for being mechanically durable.


This is exactly what Intel did to the Pentium line. Remember when that brand was synonymous with high-performance consumer CPUs?


They burned the brand name through incompetence, not malice.


To be fair they weren't using Pentium for high performance processors for a while. It's not like they put out 2c/4t parts and called them i9.


Which is what they have been doing with the i7/i9 line. Suddenly you can buy an i7/i9 laptop, despite it not having anywhere near the performance of the desktop-class processors, except maybe for a few seconds of burst activity.

A few years ago i7 also meant basically the high-end desktop market (socket 2011), which meant more RAM and I/O. But then they slapped it on the dual-channel desktop socket market.

So, yah, the brand becomes meaningless when it's just marketing for "this one will cost you more".


For anyone who's wondering, a bit of homework with the major parts suppliers here in the UK suggests a 50-60% price increase is typical moving from (new) Red to the Red Pro of the same size.


The Red Pros are 7200 RPM vs 5400 for the non-Pro Reds. That will affect heat output and noise.


Higher RPM drives also generally have a slightly lower linear density (the TPI should generally remain the same), meaning for the same capacity they require more platters. Frequently with little in the way of throughput increases (due to said lower density).

The one thing faster-spinning drives do buy is lower seek time, because rotational latency tends to be the largest contributor to average seek time.
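
For concreteness, average rotational latency is just the time for half a revolution, so the gain from spinning faster is easy to put numbers on:

    # Average rotational latency = time for half a revolution.
    def avg_rotational_latency_ms(rpm):
        return 60_000 / rpm / 2

    for rpm in (4200, 5400, 7200, 15000):
        print(f"{rpm:>5} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
    # 4200: 7.14 ms, 5400: 5.56 ms, 7200: 4.17 ms, 15000: 2.00 ms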

That is part of the reason you don't see a lot of drives spinning faster than 7200 RPM these days. SSDs ate the market that wanted speed over capacity.

I've been patiently waiting for 3600/4200 RPM 5.25" drives to make a comeback.. <chuckle>


I would agree. The old reds were far more along the lines of what Red Pro is now in terms of reliability.


Shrinkflation for tech, I guess.


> They used a technology that impacts performance

I feel like this is an understatement. In certain scenarios for which they were designed, because of this performance issue, they fail to work at all. Maybe that is covered by the following points but I feel like "stops responding" and "impacts performance" are very different things.


The worse performance is a major part, but what makes it worse is that these drives regularly fail on RAID rebuilds, when a lot of writing happens. That is exactly when your data is most vulnerable, increasing the risk of losing it.


> no omission of guilt

Freudian slip?


Ha, fixed. Thanks


This is pretty much correct, yes.


This is probably an archival-worthy example of corporate PR speak/spin around "we got caught". SMR pretty clearly does not fit the use cases they branded "Red" for.

Wishing them a VW level wave of karma.


They’re also upselling their Red Pro and Gold lines. Sleazy.


I read that as 'for critical applications, buy a competitor's drive.' Perhaps reading comprehension is limited on my part.

The key thing I want from a drive is reliability. I don't want to lose my data. That far beats size, noise, speed, cost, or just about anything else in what I will shop on. A drive that loses data will cost me thousands of dollars.

My perception of reliability comes down to trust and transparency. If I can't trust a drive maker, I won't buy from them.

WD just went into my don't-trust don't-buy pile.

I actually don't have any non-critical applications. If my mom's laptop hard drive fails, that's still thousands of dollars of my time helping her. It's no less important than enterprise (indeed, I'd argue more, since she doesn't have RAID).


> buy a competitor's drive

Unfortunately... The competitor(s) you want no longer exist! Here are some facts.

* Seagate - WD's main competitor - historically didn't have a reputation of high reliability in the industry [0][1]. Its ST3000DM001 [2] drive was terrible enough to have its own Wikipedia article, which is extremely unusual for a hard drive. In terms of reputation of reliability, WD was better, and HGST (Hitachi) had the best.

* Then WD bought HGST. It was kept as an independent operation for a while, since that was required by the regulators. But now it has been fully merged into WD. As part of the deal, Toshiba received some hard drive assets from WD; thus Seagate, WD, and Toshiba are the only three hard drive manufacturers, forming a worldwide oligopoly. Wikipedia has a great diagram showing the process of corporate consolidation: https://en.wikipedia.org/wiki/File:Diagram_of_Hard_Disk_Driv...

* Toshiba and Seagate are shipping slower SMR drives without disclosure, too [3].

So what can we do? Not much. Buy Toshiba exclusively to make WD and Seagate less powerful? Maybe. But none of them is honest in the SMR affair. Nevertheless, relatively speaking, I do find that WD's denial has made the other two look more honest.

[0] https://news.ycombinator.com/item?id=8355860

[1] https://news.ycombinator.com/item?id=11110902

[2] https://en.wikipedia.org/wiki/ST3000DM001

[3] https://news.ycombinator.com/item?id=22906959


There are quite a few other competitors. They make SSDs. It's a different price/performance curve (lower on the low-end, higher on the high-end), but as I said, reliability trumps all.

In the early days, I waited to adopt SSDs because of horror stories about wear-leveling algorithm bugs causing early failures (because SSD makers wanted to eke out a little bit of extra speed). Speed is nice to have, but fast, unreliable storage has negative value to me.

On the flip side, if I now can't trust HDDs, I won't buy them. 8TB is nice-to-have, but reliability is critical. 8TB of unreliable storage has negative value to me.

I mean, I was pretty productive in the days of 133MHz computers, 32MB of RAM, and 1.6GB disk space. I prefer gigahertz of multicore performance, 32GB of RAM, and terabytes of disk space, but it's not worth sacrificing reliability for. A computer failure can cost me a week of time.

If it all worked, I'd have a 1TB SSD RAID and an 8TB HDD RAID, each with two drives so if one fails, the other goes on.

That's not just me. A random computer buyer might not know better, but if you buy a computer and something breaks, whoever makes it devalues their brand. If the HDD makers collude to give untrustworthy storage, they won't have a market left.

I'll mention cloud storage is a competitor too.


Unfortunately NAND Flash is not an archival medium, as data can fade in as little as one year when powered off. I realize disk is the new tape, I didn't realize the hard drive makers would take it so literally...

As far as I am concerned, SMR drives are unfit for any purpose other than hyperscale archival storage. If you work for AWS Glacier, good for you, but those drives should never ever be sold to consumers.


> Unfortunately NAND Flash is not an archival medium, as data can fade in as little as one year when powered off.

A marginally related comment: curiously, HDDs are not totally immune to this problem in the long run. Many types of EEPROMs (basically used by everything that has a CPU/MCU inside) only have an official data retention rating of 10 years; beyond that date, there's no guarantee by the manufacturers that the firmware won't be lost. Yet surprisingly, most consumer electronics work fine after one or two decades, mainly because the EEPROM manufacturers are being conservative, not to mention that most electronics are stored at room temperature.

There are wild variations in rated lifetime across different microcontrollers; the most common rating is 10 years, but some are 40 years or even 100 years at 25 °C. This indicates that it's related to the semiconductor process, and it also implies EEPROMs have a lot of potential, but it's expensive and/or there are too many uncertainties to make any guarantee. How long today's technology will last in the wild is still largely untested in practice.

Long story short, I see EEPROMs as a time bomb of the digital (dark) age. Imagine when a future archeologist wants to download data from a HDD made in 2010, only to find that all firmware and parameters in the EEPROMs are gone, permanently bricking the electronics, even when the platters, motors and the head are good.

Conclusion: In terms of data archiving, SSDs are bad, HDDs are much better and suitable for most people, but tape drives are still the real archival medium.


> Long story short, I see EEPROMs as a time bomb of the digital (dark) age. Imagine when a future archeologist wants to download data from a HDD made in 2010, only to find that all firmware and parameters in the EEPROMs are gone, permanently bricking the electronics, even when the platters, motors and the head are good.

The future remake of Indiana Jones seems way less fun.


> In terms of data archiving, SSDs are bad, HDDs are much better and suitable for most people, but tape drives are still the real archival medium.

What do you think of optical media like BD-R?

I've heard that the inorganic substrate they use these days is decent for archival.


M Disc claims to sell DVD-R and BD-R discs that are good for a thousand years. Many modern disc burners support burning to M Discs already. They're a little pricey though.

https://en.m.wikipedia.org/wiki/M-DISC


I’ve heard that too. Fairly normal ones right? 25GB/disc sounds cumbersome to handle though.


I have a few types of data:

* Most data is reasonably small, and super-critical. I might have legal documents, personal emails, source code, etc. My business might have basic employee information, or sales transactions. Most of that, efficiently-stored, would fit in a box of 1.44MB floppies.

* I have medium-sized data like photos (product, family, etc).

* I have big data, like videos. A workplace might have surveillance videos, while I might have copies of movies I like, and ISOs of software I bought.

The above is a bit obfuscated to not reveal personal information (I used analogies to the types of data I and my business have).

In many cases, it doesn't make sense to have multiple storage locations and to manage all that yourself (management gets too expensive). It's cheaper to keep your big stuff on expensive media than to take time or to hire a software engineer.

At some point, I also start to either discard things explicitly (erase a file), or implicitly (photos/videos with lossy compression). A question is when that happens.

The answer, I think, just moved from "at 8TB, on a WD drive" to "at 1TB, on an SSD."


SSDs configured as RAID in a NAS is the exact opposite of an archival medium. People are concerned about the usecase of "I don't need the data immediately but it shouldn't take several minutes" and SMR completely fails at that.


I fairly recently went 100% SSD on my home build - the price is still high, but over the last year we had some historic NAND price lows - I've got a 1TB NVMe drive for boot/apps/games, a 1TB SATA drive for user profile and another 1TB SATA drive for large file storage.

I used to run 1 SSD + 4 drives in two RAID 1 arrays for this, but with how reliable SSDs have become, and with faster internet allowing for cheap offsite backup, those days are gone.


> There are quite a few other competitors. They make SSDs.

There are some overlaps, but it's a different market. If someone still buys HDDs, it certainly means SSDs are not suitable for the job, at least in terms of cost.

> I'll mention cloud storage is a competitor too.

Maybe for cold storage, syncing data and object storage. But still, there is no replacement for a box of spinning hard drives in many applications.


> SSDs are not suitable for the job, at least in term of cost

I think what just happened is that CMR drives just got more expensive, which may erase some of that cost advantage. Not all of it, not yet.

But principle is worth something, too. I expect a few folks who are sort of on the border will be so soured on spinning rust by this whole ordeal as to swear it off completely.


I think the only difference for most use cases is cost. As soon as the $/byte ratio lines up between SSDs and HDDs the market for HDDs will basically be long term archival storage only I suspect.


My workstation is 100% SSD, but I also have 100TB capacity of network storage, and 100TB of backup storage (about 48% used today); it is not economical for me to use SSD for that.

>I'll mention cloud storage is a competitor too.

It is, but today for my home system it costs me about $2-3/TB/month (maybe less; that number is about 3 years old, and drives have dropped in price recently) to build and maintain reliable storage, with backups. That includes electricity and amortized capital costs, along with replacement hardware every 5-7 years.

The absolute cheapest cloud storage out there right now is $5/TB/month, so it is getting close, and if drive manufacturers start playing shady games like this that increase the cost per TB, then that might be the tipping point.


I thought it was $5/TB/month too, but turns out you can get down to $1/TB/month with "Deep Glacier" etc., $2.5/TB for retrieval


Glacier is far more complicated than you might think. It only makes sense in case of disaster recovery where you lost everything including your non glacier backups.


That might be useful as backup storage, but archival storage can not be used as a replacement for general use NAS storage


For what it's worth the continued reporting on Blocks & Files at least left me with a better impression of Toshiba than the competition with respect to owning up to it.

When we asked Seagate about the Barracudas and the Desktop HDD using SMR technology, a spokesperson told us: “I confirm all four products listed use SMR technology.”

In a follow-up question, we asked why this information is not explicit in Seagate’s brochures, data sheets and product manuals – as it is for Exos and Archive disk drives?

Seagate’s spokesperson said: “We provide technical information consistent with the positioning and intended workload for each drive.”

( https://blocksandfiles.com/2020/04/15/seagate-2-4-and-8tb-ba... )

Whereas Toshiba's representative at least cooperated and did not just spout further delusional corporate doublespeak when questioned about the lying.

Blocks & Files asked Toshiba to confirm that the 4TB and 6TB P300 desktop drives use SMR and clarify which drives in its portfolio use SMR.

A company spokesperson told us: “The Toshiba P300 Desktop PC Hard Drive Series includes the P300 4TB and 6TB, which utilise the drive-managed SMR (the base drives are DT02 generation 3.5-inch SMR desktop HDD).

“Models based on our MQ04 2.5-inch mobile generation all utilise drive-managed SMR, and include the L200 Laptop PC Hard Drive Series, 9.5mm 2TB and 7mm 1TB branded models.

“Models based on our DT02 3.5-inch desktop generation all utilise drive managed SMR, and include the 4/6TB models of the P300 Series branded consumer drives.”

The company also told us which other desktop drives did and did not use SMR:

    MD07ACA – 7,200rpm – 12TB, 14TB CMR (base of X300 Performance Hard Drive Series branded models)
    MD04 – 7,200rpm – 2, 4, 5, 6TB CMR (base for X300 Performance Hard Drive Series branded models)
    DT02 – 5,400rpm – 4, 6TB SMR (base for P300 Desktop PC Hard Drive Series 4TB and 6TB branded models)
    DT01 – 7,200rpm – 500GB, 1, 2, 3TB CMR (base for P300 Desktop PC Hard Drive Series 1/2/3TB branded models)

( https://blocksandfiles.com/2020/04/16/toshiba-desktop-disk-d... )

And while the use of inferior technology should certainly have been announced in all these cases, watering down the product seems slightly less unforgivable for the class of slow Desktop brand drives, in my perception, as the expectations there shouldn't clash quite as badly with the performance issues.


Good point.


For what it's worth, I have had 5x Seagate ST3000NV000 drives in a Synology NAS since mid-2014 with zero failures. It's a different model drive (thankfully), but I wanted to put it out there that not all their drives are/have been terrible in my very anecdotal experience.


I have a ST3000DM001 and a ST3000NV000 (and some random WD disk) that, together, make a btrfs filesystem. It's been working perfectly fine for the last four years, and the only problem reported by SMART is on the WD disk. It's always possible to simply be lucky.


It’s specifically the ST3000DM001 made after the Thai flooding, which is where the precision mechanical factories are located.


WD bought out Hitachi's storage business in 2012, so those Ultrastars are still their own, albeit different branding.


Hitachi drives (not the NAS category) have been at the top of Backblaze’s annual hard drive reliability charts for several years even after the WD acquisition. So that division is still churning out drives that are better than the WD branded drives.


They're now all branded WD.


An Ultrastar failed on me a month ago. Unlucky.


WD also bought out SanDisk, apparently for the SSD market.


Yah, so they have come somewhat clean about SMR, but they have a lot of other bits they have been hiding. Some of it in plain sight.

"5400 RPM class" which means what exactly?

And of course the whole desktop vs. NAS differentiation was the early-return-on-error logic being hard-coded instead of configurable by the OS/RAID. Remove a feature, charge more.


Especially as the Red drives were already upsold from their Blue/Green drives for years as the "home power user" model.


This is exactly why WD's "selling for the use case instead of by specs" model is so frustrating. How am I supposed to know if my use case really falls within the boundaries of their specific definition or not?


Ambiguity is a primary feature of product differentiation from WD's perspective.


Presumably blue and green are now higher performance than red. I don’t imagine running windows update on an SMR drive would work very well.


Greens have been rebranded to Blue for a few years now, which leaves the Blue line a confusing mess. They've also apparently included SMR drives in that line too. So the Blue line includes standard drives (no power saving, normal cache, not SMR), green drives (aggressive power saving which results in higher latency for non-continuous usage, less cache) and SMR drives. It's very hard to know what you're getting.


Rumor has it that the next generation of Blue drives are just 5G modems transmitting data to the cloud.


>Having built this reputation, we understand that, at times, our drives may be used in system workloads far exceeding their intended uses.

A use case which seems to cause failure is inserting a drive into a RAID array. These drives are advertised as being for use in RAID arrays.


I’m wondering if we’re living on the same planet as these people. They literally put NAS in the name of the drive??

Do NAS drive arrays exist outside of a RAID? Is there such a thing in consumer, prosumer, or professional setups?


Single-drive NAS exists. Apple's now-discontinued Time Capsule is probably the most famous example.


That's true, but it isn't exactly the most common use case, or what most people would assume.

WD's actions are still inexcusable, especially because the non-SMR and SMR models cost pretty much the same. No savings being passed on to us. Pure greed.


Yep, though they are actually advertised as being for RAID, not just NAS (though NAS is in the name)


Yes. Take a look at UnRaid. It’s essentially a drive array with some number of parity drives (like a RAID), but no striping of files across drives.


There's plenty of cheap drives sold as NAS drives which have no RAID, or enclosures which default to JBOD, so their suggestion seems to be that the standard Red drives are now only suitable for "Build your own MyBook" use.


This is in NO WAY a defense of WD, just an answer to the question "Do NAS drive arrays exist outside of a RAID?"

RAID has several levels. Any RAID level that uses parity will have a problem with SMR, but mirrored and striped arrays probably will not.

A common use case is a 2-disk mirror NAS for SMB.

Further, NAS manufacturers like QNAP, and even WD, do make single-disk NAS appliances.

However, the WD Red line is branded for NAS up to 8 drives. Most likely that will be a RAID 5 or 6 array, making SMR not usable in that configuration.


My understanding is that SMR drives fail because the write load is continuous for a long period of time and overwhelms the background process organizing the data for efficient SMR writes. In a 2-disk mirror (RAID1) rebuild, all data from one drive has to be copied to the other drive, which seems likely to overwhelm an SMR drive. So it would seem to only work in a RAID0 array (striped drives - a bad idea) or JBOD NAS setup.


The only thing I wanted them to say was:

"These are the drives we currently sell which use SMR. When we use SMR in the future, we'll disclose it."

Had they done this, the whole snafu would have actually made me more likely to buy WD drives than before!

But they didn't do it. Oh well.


Same! And I still don't feel confident that the Red Pros are actually not SMR. Has any independent outlet confirmed that yet? Or am I stuck buying Toshiba next?


Toshiba does the same thing too: https://news.ycombinator.com/item?id=22906959


However, when asked, Toshiba was explicit about which models do and do not use SMR: https://blocksandfiles.com/2020/04/16/toshiba-desktop-disk-d...


In EU we have a conformity warranty which basically gives you the right to return the product (or demand a partial refund) if it does not meet the advertised specs. [1]

Does it apply in this case? IANAL but my guess is that it should apply, as it meets two of the prerequisites:

- are not fit for the specific purpose required by the consumer, of which he/she informed the seller at the moment of conclusion of the contract, and which the seller accepted;

- are not fit for the usual purpose of goods of the same type;

[1] https://www.europe-consommateurs.eu/en/consumer-topics/buyin...


I‘d guess that the German „Sachmangel" would apply here. However, this law applies to the seller - so every dealer would have to start arguing with WD about getting their money back. Sachmangel does not apply to the same extent for business customers...


IANAL but doesn't sachmangel require the seller to know about the drives being SMR when the purchase happens?


If the product lies by design it's far beyond "not meeting specs".


> If you are encountering performance that is not what you expected, please consider our products designed for intensive workloads.

It seems very strange to argue that customers should just know beforehand what they need, while also at the same time acknowledging the hiding (or at the very least, omission) of the information customers need in order to make that decision.


Yeah, that's terrible advice. They advertise these as high-end prosumer drives.

To the product manager at WD who is inevitably reading this: if your hardware doesn't live up to your own marketing, I'm not going to throw more money your way. I'm switching to your competition.


The WD product manager's response: "Bwahahaha!!! Seagate's drives are crap too, and we own Hitachi! Where are you going to go now? Bwahahahaha!"


Toshiba's still around


My reply:

I'll take my chances with ${not_wd}. At least they can survive a rebuild, so if a drive dies early I can replace it with something else.


We can always stop buying HDDs.


I've read that SSDs can't be used to replace HDDs for long-term archival use: if you leave them powered off for too long, the data degrades. I can store data long-term on a regular HDD and stick in a closet or safe-deposit box and then get it out after a few years, plug it in, and read it just fine.


If you’re suggesting switching to SSDs, WD owns SanDisk


"We're sorry you bought your drives for the clearly advertised use. Shame about them being kicked out of your RAID - sounds like you should have paid a little more! May we interest you in some more expensive drives? Don't worry, you can keep the ones you bought as paperweights or door-stoppers."


This is a pretty awful "I'm sorry you feel that way" response. Also, thanks to HN for highlighting this misleading setup in the first place (I wouldn't have heard about it otherwise).

I happen to have replaced a failed drive in my NAS box with one of these. I just checked the logs for the rebuild time, and while my array has 8T, I'm only using 3.3T, so it had to rebuild about 800 GB (this is Linux software RAID on an older 4-bay Synology DiskStation). It took about 7 hours end to end, which, while not amazing, is still ~30MB/second.
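
(For reference, the throughput math on those numbers:)

    rebuilt_gb, hours = 800, 7            # figures from the rebuild log above
    print(f"{rebuilt_gb * 1000 / (hours * 3600):.0f} MB/s")   # ~32 MB/s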

To be clear, this was on a Saturday night, and I don't use my NAS box for plex serving or anything (nor do I use ZFS, so I totally understand the rage of folks with random I/O rebuilds that have been screwed by this). Is everyone else assuming a rebuild of a very active NAS box?

To reiterate, I still disagree with WD on this, but I'm not holding my breath for a $5 after-lawyers-fees class-action settlement check :).


Meanwhile Seagate says unequivocally that SMR isn't useful for any NAS application...

https://arstechnica.com/information-technology/2020/04/seaga...

Seagate does seem to think SMR is applicable for desktop use, however. But that's a fight for another day...


Reasonable response. WD made an insane decision.


It seems quite unfortunate we're coming up the other side of the U-shaped price graph for HDD technology. Given the collusion between the three manufacturers and a consistent push towards flash, it's unlikely we'll see any further reductions in price per GB without major performance hits like SMR.

This wouldn’t be a problem if flash were cheap enough in the volumes required for archival level storage, but it isn’t. Flash storage is actually going up this year as demand continues to increase.

I suspect there’s no ready answer for those that hoard data now or in the future.


NAND is down to $.12/GB, and HDDs are at $.02 (source: checked newegg just now).

Hard disk RAID rebuild is basically an offline process for wide stripes and RAID 6. Assume 2x parity overhead (e.g. 2 RAID 1 disks, or a 2+2 setup) for hard disks, and close to zero for NAND (5+2), and SSDs are 3-4x more expensive than spinning disk.

Also, you can be more aggressive with compression and dedup with NAND, and it takes up less power, cooling and case space.

So, NAND is somewhere between 1x and 4x the price of disk for archival. At scale, to get to 4x you’re probably talking about nearline retrieval.
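
Spelling that estimate out with the numbers above (all prices and overheads are ballpark assumptions):

    hdd_raw, nand_raw = 0.02, 0.12    # $/GB raw, per the prices quoted above
    hdd_overhead  = 2.0               # mirrored or 2+2 parity for disks
    nand_overhead = 7 / 5             # 5+2 erasure coding for flash

    hdd_usable  = hdd_raw * hdd_overhead
    nand_usable = nand_raw * nand_overhead
    print(f"HDD : ${hdd_usable:.3f}/usable GB")
    print(f"NAND: ${nand_usable:.3f}/usable GB ({nand_usable / hdd_usable:.1f}x)")
    # ~4.2x before compression/dedup; data reduction on flash is what can
    # pull the effective ratio toward the 1x end of the range.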


Flash storage is lacking the longevity of HDDs though. They were certainly not suitable for archival the last time I checked.


Neither are hard drives: if you really care about it, you need multiple copies which are regularly accessed. Putting a box on the shelf for years is playing with fire.


True, but still, there was a significant difference between HDD and flash lifetimes.


Not enough that I’d trust it with anything I cared about. Drives have many mechanical failure modes, the higher density ones are more exposed to media degradation, and you have different environmental concerns – e.g. flash will survive water better.

The best answer is multiple copies, preferably on different media: hedge your bets and use both, and verify it periodically so you’d know before your “working” drive throws a ton of errors when you actually try to read the data.


Well, the only thing I trust is my RAIDed ZFS with weekly scrubbing. However, that's not a backup solution. Spinning rust doesn't degrade materially over time when powered off but Flash certainly does.

If tape was cheaper I'd consider that.


> Spinning rust doesn't degrade materially over time when powered off but Flash certainly does.

Magnetic media does degrade, moving parts freeze or fail in other ways, and the circuitry can also fail. It’s often possible recover that at some cost but it’s much more expensive than n > 1 copies.


Yes, true, but for a hard disk at _rest_ (in a controlled environment), I posit its failure distribution over time is dramatically favorable compared to flash (especially TLC and QLC). I feel pretty comfortable keeping a ZFS drive pair (= mirrored) on the shelf for a decade or two, whereas I'd abandon all hope for a modern SSD in the same conditions.

EDIT: typo


You sure about that? I remember reading comments like this when drives seemed to be "stuck" in the 300gb range, and again when 1-2tb was about as big as you could get for a while. Now we have 16TB drives; so is there really some issue for those who want to hoard?

Honestly, how big is your personal hoard anyway if a few 16TB drives won't cover it? Isn't it the case that at some point you will have amassed more media than you could possibly consume?


In my opinion, given the previous price-fixing scandals and no real competition between the few major suppliers, there's very little driving innovation in this space. Perhaps the major suppliers will get better deals or bigger drives, but there's no reason for the players in the market to make a product consumers can buy more appealing. We're such a small part of the market and have such little choice.


Blu-ray quality video is about 13GB/hr for 1080p or 33GB/hr for 4K. It's cost-prohibitive for people to accumulate a huge amount of this content legally, but torrents are still a thing. It adds up really fast if you aren't willing to prune your collection.

There isn't currently a ton of 4K content available, but there's lots of 1080p. If you rip/download enough 1080p Blu-ray to watch an hour every night, that's almost 5TB per year right there. If there were enough 4K content to support your habit, it would be more than 12TB a year.
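
The arithmetic, using those per-hour sizes:

    gb_per_hour = {"1080p": 13, "4K": 33}   # sizes quoted above
    for fmt, gb in gb_per_hour.items():
        print(f"{fmt}: {gb * 365 / 1000:.1f} TB/year for an hour a night")
    # 1080p: 4.7 TB/year, 4K: 12.0 TB/year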


Assume an average movie length of 90 minutes and a 4K Blu-ray price of $20 (I don't buy physical Blu-rays, but found some year-or-two-old Marvel movies cost about that much - new releases cost more, and typical box sets come out costing less per item).

So that's 50GB/$20, or $400/TB. So the content to fill those hard drives costs about 5x the drives themselves. So it's not infeasible that people could fill them with legal movies.

Not to mention TV shows, video games, and content that is much denser in GB/$.
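
Putting numbers on it (the 16 TB drive is just a hypothetical example):

    gb_per_movie, usd_per_movie = 50, 20      # ~90 min of 4K at the sizes above
    usd_per_tb = usd_per_movie / gb_per_movie * 1000
    drive_tb = 16                             # hypothetical hoard drive
    print(f"${usd_per_tb:.0f}/TB of content, "
          f"${usd_per_tb * drive_tb:,.0f} to fill {drive_tb} TB")
    # $400/TB, $6,400 of discs to fill one 16 TB drive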


When my kids were young and could ruin a DVD just by taking it out of the case, I put together a home theater PC and ripped all the kids DVDs to it. There's well over 100 movies on that disk, and I still have the original DVDs for backup.


My personal hoard is a 9x16TB RAID 6 yielding 101TiB usable.


"640K ought to be enough for anyone." - Unknown

It's likely that 16TB won't keep up with further technological advances.


They say they rigorously tested that the drives can handle 30 overwrites per year without going into a GC storm (180TB/6TB = 30) From those numbers, you won’t exceed the duty cycle as long as you throttle your raid rebuild to take 12 days, and don’t issue reads or writes during that time. More transparency is needed, whether or not the product is as bad as my back of the envelope calculation suggests. For example, what’s the maximum sustained 4kb random write iop rate for the drive? What’s the sustained sequential bandwidth?
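
Spelled out (the 180 TB/year workload rating and 6 TB capacity are the figures referenced above; the rest is arithmetic):

    workload_tb_per_year, capacity_tb = 180, 6
    overwrites_per_year = workload_tb_per_year / capacity_tb   # 30
    days_per_overwrite  = 365 / overwrites_per_year            # ~12.2
    print(f"{overwrites_per_year:.0f} overwrites/year -> one full rewrite "
          f"every {days_per_overwrite:.1f} days to stay inside the rating")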


> throttle your raid rebuild to take 12 days, and don’t issue reads or writes during that time

So, 12 days of downtime for the NAS. Doesn't sound like a very NAS-specific product to me. That's 3.3% downtime in 12 months from a single HDD failure. 12 days of sweating on another HDD not failing during the rebuild process.

Not fit for purpose.


When is the last time you contacted customer care and got useful advice for a product such as hard drives?

I am wary of so many differentiated products out there. Just in case I want to save 5 cents on an SSD so I can get a 50x increase in 95th-percentile latency. Reliability is most important, and the way you get there is volume. I'd rather see a good drive that is made in large numbers than 100 SKUs that can never really be tested.


>Reliability is most important and the way you get there is volume.

Is that really true? The likelihood of unrecoverable sector errors goes up as the number of sectors increases.

Reliability is only solid if you have a good backup/recovery plan being practiced. Multiple copies on multiple formats in multiple locations. Original content on SSDs? Make a back up on HDD and/or recordable disc formats. Keep a copy at a friend's/parent's house. Keep a copy in the cloud (Backblaze as an example). If the data is worthy of backing up, do it right.
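
To put a number on the first point, here's the usual back-of-the-envelope, assuming the common consumer spec of 1 unrecoverable read error per 1e14 bits read (vendor specs vary, and real-world rates are often better):

    import math

    def p_at_least_one_ure(capacity_tb, ure_per_bit=1e-14):
        bits = capacity_tb * 1e12 * 8
        return 1 - math.exp(-bits * ure_per_bit)   # Poisson approximation

    for tb in (2, 4, 8, 16):
        print(f"{tb:>2} TB full read: {p_at_least_one_ure(tb):.0%} chance of a URE")
    # 2 TB: ~15%, 4 TB: ~27%, 8 TB: ~47%, 16 TB: ~72%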


Volume of production, not of disk space.

As for backup, that is a vastly unmet need. You have the Carbonite scam advertised on the Rush Limbaugh show (unlimited, yeah right), the corporate backup clients that turn home working into not working, and tape drives that cost more than 10 times the price of a drive they can back up, not counting the cost of the tapes, the high probability that you pick a bad technology generation of LTO, the high probability that the restore fails, etc.

The computer press has been telling people to back up for years but effective solutions have never been there for the consumer.


Ah, customer support, the /dev/null of technical complaints.


I dunno. Often they are good at giving you an RMA for a failed drive.


Point. So a black hole with an occasional lever for replacement.


What else can they do at scale?

A hard drive or SSD is a complex device. Are you going to expect the customer service people help you dump the firmware out that JTAG port? People like that can be trained and equipped with simple diagnostics and remedy options, they can't be expected to get into the fine details of product selection.


I can imagine a non-throwaway world where drives had a standard diagnostics port and they could diagnose issues and repair.

But yeah, I understand why it doesn't exist.


I've been having issues with what I suspect is a faulty powerline adapter from TP-Link. I spent over an hour on the phone to them, factory resetting, for them to decide my issue was solved because the status lights were functioning correctly when I moved both adapters to the same extension lead.

There was no option to escalate, continue troubleshooting, nothing. I went back to the Netgear ones I was trying to "upgrade" from.


I didn't have any background on this and didn't know what SMR was. Apparently the background is: people try to add these things to RAID arrays, and when they do some rebalancing, the RAID software rejects the disk. The reason is because they use "SMR", which severely hurts random-write performance. The controversy is that it's pretty shady to sell a drive as a "NAS Drive" when the drive is too slow to be usable with any RAID software.

(I will spare you my rant on how bad RAID software is. But this is yet another interesting edge case that RAID software doesn't handle and reacts by just blowing away your data.)


> The reason is because they use "SMR", which severely hurts random-write performance.

It hurts random-write performance after a threshold, when an on-disk staging area becomes exhausted. For short bursts, the drive behaves well -- it probably would legitimately work in a RAID array if the array were initialized from a clean slate and not rebuilt.

> But this is yet another interesting edge case that RAID software doesn't handle and reacts by just blowing away your data.

It's not obvious how a RAID controller should "handle" this. The drives have no outward indication that they suffer from random write saturation. From the controller's perspective, the degraded performance looks very much like a drive failure.


> the degraded performance looks very much like a drive failure.

Sure, but given the choice between "During a rebuild, it looks like another drive isn't doing so well, so I should give up and trash the array"

and

"During a rebuild, it looks like another drive isn't doing so well, so I should notify the administrator and meanwhile try to maintain as much redundancy as I can"

which is the sensible choice?


There is no choice to be made. Once too many disks fail the entire array has to be taken offline and that's exactly what happens.


Disks "failing" is the problem. If you treat drive state as binary (flawless / eject), you can easily eject too many drives for errors on different 0.000001% of data, crashing the array along with the data.


A RAID array that can't rebuild is near 100% worthless (outside of RAID 0).


But if the reason it can't rebuild is that the drives are unfit for purpose, you can't really blame the controller.


Which is why the manufacturer shouldn't have branded them as NAS drives!


You're preaching to the choir. SMR drives are absolutely not NAS-ready, and it's insulting for WD to pretend otherwise.


> SMR drives are absolutely not NAS-ready

DM-SMR at the very least. If the RAID controller (hardware or software) is SMR-capable, then host-managed or host-aware can work.

For example, ZFS with the recent support for separate metadata devices[1] seems to be close to what you'd need for it to become SMR-capable.

[1]: https://github.com/openzfs/zfs/pull/5182


Fair distinction.


In this case there also seems to be a bug in the WD firmware that causes rebuilds to fail, not just be excruciatingly slow.


> I will spare you my rant on how bad RAID software is.

I am interested in your rant.


RAID software has one thing going for it; at least it's not RAID hardware :P


This is taking a bad situation, adding oil to it, and lighting it on fire, hoping that the problem will go away unnoticed.

I have a lot of WD(HGST) (more than 2 PB) in service as of now. I will never buy anything WD until they come clean on shit like this. They should talk to VW about Karma and how it bites your behind.


Is there a list of known not SMR WD RED drive model numbers? I have 4TB REDs in my Synology but they are a few years old; is there a way I can find out?


Here is a list of known SMR drives [1].

The newer WD##EFAX ones are SMR but if your drive is a few years old then it's probably a WD40EFRX which is NOT a SMR drive.

[1] https://www.ixsystems.com/community/resources/list-of-known-...


Except only the EFAX drives with 6TB or less capacity.

The 8TB and larger drives use the exact same WD##EFAX model number, but aren't SMR.


Looks like my RAID NAS at home is safe (from this at least) then. Thank you :)


Thanks, it's the EFRX.


There are various lists all over the interwebs. It will be interesting to see the manufacturers' responses in market segmentation and product lines in the future.

> The easiest way to detect whether it is a SMR drive is the cache size. Old drives (WDx0EFRX) had 64MB cache whereas new replaced drives (WDx0EFAX) feature 256MB.

This information will become less useful as newer drive models come out and cache sizes for non-SMR drives are bumped up.

Source: https://nascompares.com/answer/how-to-tell-a-difference-betw...


> the EFRX is the faster CMR drive, and the EFAX is the much slower SMR drive.

From https://arstechnica.com/gadgets/2020/04/caveat-emptor-smr-di...


This site has a filter for Recording process: https://skinflint.co.uk/?cat=hde7s

You could also enter your harddrive model number in the search bar and have a look at the recording process there.

Note: I have no idea how accurate the information is for every HDD out there, but it does seem accurate for the three HDDs I have (that are relatively recent). The models I looked at are: Seagate ST2000DM008 (SMR), Western Digital WD10EACS (CMR), Toshiba HDWD130UZSVA (CMR).


Based on the reports a week or so ago:

Earlier models (WDx0EFRX) were not SMR.

Newer ones (WDx0EFAX) are, up to 6TB.
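
If you want to script the check over a bunch of drives, a rule of thumb based on those reports (community-sourced, not an official WD statement - verify against the vendor lists):

    import re

    def red_is_probably_smr(model: str) -> bool:
        # EFAX Reds of 6 TB or less are the reported DM-SMR models;
        # EFRX and the 8 TB+ EFAX drives are reportedly CMR.
        m = re.fullmatch(r"WD(\d+)EFAX", model.upper())
        return bool(m) and int(m.group(1)) / 10 <= 6

    for model in ("WD40EFRX", "WD40EFAX", "WD60EFAX", "WD80EFAX"):
        print(model, "->", "likely SMR" if red_is_probably_smr(model) else "likely CMR")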


You probably have the WD40EFRX models, same as me.

AFAICT, we're safe.


Did the text get cut off? All I read was some corporate PR nonsense, did I miss the real content? Never buying WD drives again!


Agreed, it would have been better if they posted nothing.


> If you are encountering performance that is not what you expected, please consider ... our WD Red Pro or WD Gold drives, or perhaps an Ultrastar drive.

So, they're saying the new low end WD Red drives are unfit for purpose. And boy do they have a solution just ready and waiting to go.

Which presumably, uses the older CMR (eg normal) tech to do the same thing. They probably just rebadged the older drive models that work. ;)


As I'm WFH next to my Synology and look down at my ESD bag-wrapped spare WD Red, I can't help but be reminded of Peter Graves glancing down at his tray with a perfectly cleaned fishbone in "Airplane!"


"WD Red HDDs have for many years reliably powered home and small business NAS systems" Yes, the non SMR ones."

You are misleading the consumer and should have called it WD Pink, because the SMR drives are a totally different product.

"If you have purchased a WD Red drive, please call our customer care if you are experiencing performance or any other technical issues. We will have options for you. We are here to help."

Refunding or providing a replacement non-SMR drive at no extra cost to the consumer are the only options.


You just have to look at the Backblaze HDD reports to know WD has been out of the game for a long time. What really upset me was when they bought HGST out in 2012, but fortunately that management doesn't seem to have impacted the HGST quality as much as I feared, but time will tell.

Also, stop using RAID 5, people. The risk of cascading failures, given the rebuild times on such large drives, means it should be almost totally deprecated IMHO.


What should you use instead of RAID 5?


RAID-10 (stripe of mirrors) is also an alternative.

Not as cost efficient, but rebuild times should be much lower. Is also more flexible in some cases, like when using ZFS for example.


Raid-6+ (16, 60, 100, etc), Raid-z2+ (raid-z1 being the equiv of raid-5), Ceph, etc


OK, good to know. So the process of rebuilding after having lost a disk will take less time with those other solutions?


It's not about the process taking less time (even though aiming to reduce that is a valid approach, usually tackled by using SSD/flash and not over-provisioning). It's much more about being able to handle a second disk failure during rebuild. With RAID 5, if that happens, you are in a world of hurt (think $3k fees to send the array to a data recovery place).

I've learnt all this the hard way.


Ah, makes sense. Thanks!


RAID 6 (adds an extra parity disk) or generalized erasure coding (more flexible but not so easy to manage).


Nitpick: RAID-5 and RAID-6 don't use "parity disks". They stripe the parity across every disk, just like RAID-0 stripes data, so you lose a disk worth of capacity (or two disks worth of capacity in RAID-6), but the parity is on every disk. Having a disk dedicated to parity would be an incredibly burdensome write bottleneck, which is exactly what RAID-4 is, and why you probably haven't heard of it.
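
For anyone who hasn't seen it, the parity math itself is just XOR. A toy sketch (block-level striping and RAID-5's rotating parity placement are omitted, so this is closer to the RAID-4 layout mentioned above):

    # Illustrative only: three "disks" holding a few bytes each, plus XOR parity.
    from functools import reduce

    data_disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]

    def xor_blocks(blocks):
        # XOR corresponding bytes across all blocks.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    parity = xor_blocks(data_disks)

    # Simulate losing disk 1 and rebuilding it from the survivors plus parity.
    rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
    assert rebuilt == data_disks[1]
    print("rebuilt disk 1:", rebuilt.hex())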


I still use RAID 5 for small 3/4-disk arrays; I think the odds are in my favor then. But if I need 4x disk size, I go to RAID 6 (so I jump from a 4-disk RAID 5 to a 6-disk RAID 6.)


RAID-Z1 or RAID-Z2


RAID 6


This is exactly one of those cases where the ambulance-chasing class action lawsuit lawyers are useful. I would eagerly join a class action lawsuit against ALL the drive manufacturers on this issue.


So, you get your check for $3.50 or a coupon for $20 off a new WD Red "NAS" drive from their web store. And then what?

Most people bought multiple drives at a time, spending hundreds of dollars. This seems like a prime candidate for technology-assisted small claims court, for the full purchase price of the counterfeit drives.


You don't need to receive non-trivial compensation for the threat of a class-action lawsuit to deter behavior like this. It's the price the defendant pays that matters.


A coupon is a super bad deal because it doesn't punish the company. Trivial amounts of money are fine as long as it adds up to a huge number for the company involved. Think of it as more of a fine or a penalty to the company rather than getting your money back.

Even if all the money goes to lawyers, it's still a better deal for consumers than getting nothing.


European regulators usually have much more bite than class action suits. I'm hoping this will get their attention at some point.


I guess we are boycotting WD now, or at least voting with our wallets over how garbage a response this is. What's a good alternative to WD Reds?


Sadly, I've bought a lot of WD over the years. Seagate seems to be a minefield, HGST is owned by WD, leaving Toshiba as the only remaining option. Looks like the N300 is their line of NAS-optimized drives:

https://www.toshiba-storage.com/products/toshiba-internal-ha...

https://pcpartpicker.com/products/internal-hard-drive/#m=111...

They also have surveillance video storage drives and enterprise drives that might do well in the same applications.


According to this, Toshiba has been doing the same thing...

https://www.tomshardware.com/news/sneaky-marketing-toshiba-s...


But Toshiba did it on their desktop drives, instead of on NAS drives which you buy specifically for their reliability


I just hate this rebranding ... Red vs Red "Pro".

It used to be so simple

Green - slow and cheap; Blue - faster, pricier than Green; Black - fastest, pricier than Blue; Red - as fast as Blue, pricier, but good for NAS.


I will never purchase another Western Digital product unless they make a MAJOR about-face on this. Western Digital owns SanDisk, so that goes for them too.


They suggest moving to Red Pros if performance is a concern, but it's not clear from the post if the Red Pros also use SMR platters. I have a couple of new ones soon to be integrated into a NAS. Anyone know if they are SMR or CMR?


The problem is that they're believed to be CMR right now, but they could change that this afternoon and wouldn't tell you.


Yep. Given this response, I have no desire to buy any Western Digital product—of any type, for any purpose—ever again. They knowingly sold a drive that would obviously be inadequate for its advertised use-case. Who knows what other false advertising they might also be doing.


"We never said that it didn't have sandpaper on the heads! If you want a drive without sandpaper, consider upgrading to our Pro line."


"Perhaps your workload is more suited to our "WD Red Pro Elite Plus" lineup."

Western Digital, circa 2025


Thanks to all. That’s frankly terrifying. If it weren’t for COVID I would send them back post-haste.


For some reason I recall that most HDD customers are fairly technical people, so I wonder how much this is going to cost them long term.


Has WD also bamboozled NAS RAID enclosure sellers such as Synology? If you go into their compatibility lists you will find the WD Red with SMR (the WDxxxEFAX series) listed as compatible, but with a specific reference to their SMR KB page, which just warns not to mix SMR and PMR drives in the same RAID volume.

https://www.synology.com/en-global/knowledgebase/DSM/tutoria...


> bamboozled

Defrauded is the word you were searching for.


I had no idea why my lightly used NAS RAID array with 2 NAS-rated WD drives was failing every few months. Now I know why. Last year, I was researching a small hedge fund and found they had a heavy WD position; they cited metrics like $/GB and called WD a cost leader. Now I know why.


Oh wonderful. I have 4x12TB RED disks in my Synology. I wasn't aware of this at all until seeing this post. Guess I'll be looking to slowly replace them before they go belly-up. Unfortunately, it seems with each swap out I risk the very problem this is all about.


Fortunately, it's not used on 8TB+ disks yet, so you're safe.

The problem will be getting replacements for these, as it's not safe to just buy HDDs anymore.


SMR explained:

Be aware that this drive uses SMR technology (Shingled Magnetic Recording) to achieve such high density in a small package. If you don't know what that is, think of how shingles on a roof are laid out partially overlapping each other - that is how the data is laid out on the drive platters. While this allows significantly higher capacities, it adds complexity to writes. If the drive needs to write data in the middle of existing data, it can't just "place" it there like an HDD using PMR technology can, because other data overlaps it. What it has to do is put the data in a temporary location, then rewrite all the overlapping shingled data that follows, out to the next break in the shingled tracks (the end of the zone).
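
A deliberately oversimplified model of that read-modify-write behaviour - it ignores the persistent CMR cache area real drive-managed SMR disks use to defer the rewrite, so the numbers are purely illustrative:

    # Toy model: in a shingled zone, updating one sector forces every
    # already-written sector after it to be rewritten.
    class ShingledZone:
        def __init__(self, sectors: int):
            self.data = [b""] * sectors
            self.rewrites = 0  # sectors rewritten because of track overlap

        def write(self, lba: int, payload: bytes) -> None:
            self.data[lba] = payload
            # Everything "downhill" of the update overlaps it, so the drive
            # has to read that tail back and lay it down again.
            self.rewrites += sum(1 for s in self.data[lba + 1:] if s)

    zone = ShingledZone(256)
    for i in range(256):          # sequential fill: no overlap penalty
        zone.write(i, b"x")
    print(zone.rewrites)          # 0
    zone.write(10, b"y")          # one small random update near the start...
    print(zone.rewrites)          # ...costs 245 trailing sectors rewritten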


After reading this, my only question is, what do we do now? Western Digital already bought HGST and discontinued it. Maybe they felt safe enough to pull this because they know the alternate options are limited.


I thought it discontinued just the HGST brand name. HGST branded drives still seem to be available in the market (or some markets). Last year’s Backblaze hard drive reliability report also had some HGST drives near the top of the rankings.


Class action lawsuit in 3... 2... 1...


Who got the 3.5" part of HGST? Wikipedia says Toshiba, but WD is selling drives under the Ultrastar name, not Toshiba.

I mean, where do you go for reliable storage at this point?


HGST is owned by WD. They operated independently for a while; I'm unsure of the current state.


So basically don't buy WD Red for RAID and buy WD Red Pro for any real NAS application. It's a way to silently raise prices, I guess.


WD Red Pro drives are 7200 RPM, which means more power consumption and heat.


SMR is such a horrible, backwards "technology"; I wonder why vendors still push it aggressively on users after all these years.

1. Its performance is noticeably worse than CMR.

2. The density increase SMR provides compared to CMR is not that much. It's 25% at most.

3. SMR is not a separate evolutionary path that can be developed further to deliver ever-increasing density gains over CMR. It is more like a variation of the same technology that provides slightly more density in exchange for serious disadvantages. SMR and CMR both directly depend on, and benefit from, developments in platter densities. When a new, denser platter generation is introduced, CMR and SMR benefit from it equally.

4. A huge portion of the drive must be reserved as a non-SMR cache area to mitigate the performance penalty SMR brings. Notice that this is an extra area that does not exist on a CMR drive, so it eats into the density increase SMR brings (see the back-of-envelope sketch at the end of this comment).

5. SMR drives have a much larger DRAM cache to mitigate the performance penalty (64MB on a comparable CMR drive vs 256MB on SMR drives), increasing the cost of that part.

6. The logic and mechanics of writes are much more complicated on an SMR drive than on a CMR one. The drive is separated into zones: incoming data must first land in a cache zone, then be written into the permanent zones in an optimal manner in the background during idle time. Managing the different zones, the cache area, rewrites, background tasks, etc. makes the drive firmware much more complicated. Compared to SSD firmware it is actually the worst of both worlds: you get all the complexity of SSD firmware with worse performance than a regular hard drive.

7. What is even more worrying than those technical details is that vendors are sinking more and more resources into, and pushing very hard on, this technologically dead-end SMR technology. WD even wrote a PR piece called the "Zoned Storage initiative" trying to paint host-managed SMR as the future of storage, and developed a Linux filesystem called zonefs to back it up.

8. Despite all those shortcomings, they don't even sell SMR drives cheaper than CMR drives.

TL;DR: SMR is a harmful technological "drug" HDD vendors use to buy at most a few years of time in platter density, but with very serious side effects.
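
Back-of-envelope sketch for points 2 and 4 - the cache fraction here is a made-up number for illustration, not a measured figure for any particular drive:

    cmr_capacity_tb = 4.0
    smr_density_gain = 0.25      # the "25% at most" upper bound from point 2
    cache_fraction = 0.05        # hypothetical share of the platter kept as CMR cache

    raw_smr_tb = cmr_capacity_tb * (1 + smr_density_gain)
    usable_smr_tb = raw_smr_tb * (1 - cache_fraction)
    print(f"net capacity gain over CMR: {usable_smr_tb / cmr_capacity_tb - 1:.1%}")  # ~18.8%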


I wonder if they have a bunch of binned trash and are pushing it into the lower-end consumer space as a way to get rid of it.


I wish they were. They're actively pursuing it while discontinuing CMR. 5 years ago 1TB CMR laptop drives were abundant. Nowadays you'd be hard pressed to find a non-SMR one.


Western Digital has updated the blog post after this HN post went up and listed the technology used by their different hard drives in an update at the top of the post.

All the backlash seems to have worked.


Are those drives really safe for RAID-1 arrangements? I read somewhere they could not survive resyncs, so why wouldn't RAID-1 be affected, either after a resync or after a faulty drive swap?


As an owner of Red drives, I find it surprising WD would trot out a new acronym for in-device features to a group already literate in acronyms.

It just makes me wonder how easily they believe their customer base can be placated.

Had I known I was not receiving what I thought I had purchased, I would have likely purchased a different drive.

The least WD could do is unilaterally extend the warranty on these misrepresented drives to allay any fears, if they really believe in their products.

It will be interesting to see if the MTBF on these drives turns out to be noticeably shorter.


Extending the warranties on SMR drives does _nothing_ to solve the performance issues. The drives were marketed for use in NAS RAID systems, but the abysmal performance (due to the drives' SMR) in many cases will cause the NAS or RAID system to kick out the drive as a bad drive.

Given that RAID rebuild times increase as drive sizes increase, and SMR exacerbates those longer rebuild times, these drives should not be used in the NAS or RAID systems for which they were marketed.

By obfuscating the use of SMR in their WD Red line, WD willfully and intentionally harmed their customers.


I agree that it was completely misleading and that extending warranties solves nothing SMR related specifically for performance.

An advance-replacement program to swap out the SMR drives is what should happen.


Is anyone really surprised? HDD manufacturers have been misleading consumers since they all started advertising kilobytes as 1000 bytes.


It wasn’t just marketing. Historically, a kilobyte was 1024 bytes in memory contexts where everything was based on powers of two but nowhere else. This was obviously confusing because everywhere else used kilo as 1000 — a 420MB hard drive would have been measured in units of 1024 for capacity and 1000 for transfer speed.

When the SI units were standardized in 1998, it helped drive manufacturers advertise larger numbers but it also rectified this accident of history where a standard prefix had been used with a non-standard definition in only one part of computing.


A kilobyte is 1000 bytes. A kibibyte is 1024 bytes. Windows measures in binary units (KiB/GiB) but labels the result with decimal prefixes (KB/GB), making it look smaller than it is, i.e. if you connect a 1000 GB (931 GiB) drive to a Windows machine, it will tell you that it's 931 GB. It isn't. You're getting every byte you paid for.

Other operating systems don't lie to you.
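
The arithmetic, for anyone who wants to check:

    advertised_bytes = 1_000_000_000_000   # a "1 TB" / 1000 GB drive as sold
    gib = advertised_bytes / 2**30         # the same bytes counted in powers of two
    print(f"{gib:.0f} GiB")                # -> 931 GiB, the figure Windows labels "GB"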


This is the one thing that has rustled my jimmies every time I've had to deal with it. They could easily have used marketing jargon for the general consumer and still listed the real specification in generally accepted computing units.

This is the kind of stuff that happens when marketing and accounting take over and start running the engineering departments.


There really needs to be some kind of capital punishment for companies pushing bullshit like this and deliberately misleading customers. As a bonus we would also get rid of homeopathy and MLMs.

EDIT: Not capital punishment for the people involved obviously, but the companies should be severely sanctioned. Possibly even closed. The involved management should be prohibited from holding management positions in the future for 10 years or more. Oh, and all bonuses paid out since the fraud started should be reclaimed with personal fines.


Silly customers. It is your fault of course. Using NAS branded hard drives in common NAS configurations.


Jesus what a pile of drivel. Zero commitment to addressing the misleading advertising or making amends.


WD is far too large a company to have a soul so there is zero chance they're actually sorry.


It has nothing to do with a soul... it's just the nature of large business to revolve around marketing and accounting to maximize profits, while disregarding sound engineering.


TL;DR: We know we were concealing a material fact from our customers. Please don't sue us.


...except that they didn't say "please".


Has anyone seen a benchmark example of e.g. an array rebuild, and how large the difference is?


I think the issue is that with the bad drives, an array rebuild will never finish. One drive will degrade to the point where it simply stops responding.


I understood that to be mostly a problem with hardware controllers, which tend to be pickier? (Someone I knew tried a hardware RAID of WD Greens back in the day; that also blew up regularly because of response times.)

I'd have thought it would just slow down a lot when software tries continuous writing, and I'm curious by how much. (Ideally this is something WD should publish, since they claim it's fine, but ...)


Something I don’t understand - why does the drive slowing down cause a rebuild to abort?


Imagine your garbage collector runs for 30 seconds. It will cause timeouts.
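
Toy sketch of the failure mode (times shortened so it runs quickly; real controller and kernel timeouts vary, the point is only that the I/O doesn't come back before a fixed deadline):

    import concurrent.futures as cf
    import time

    CONTROLLER_TIMEOUT_S = 2     # illustrative stand-in for the controller's deadline
    GC_PAUSE_S = 5               # stand-in for the 30-second pause described above

    def read_from_busy_drive():
        time.sleep(GC_PAUSE_S)   # drive is internally reshuffling shingled zones
        return b"data"

    with cf.ThreadPoolExecutor(max_workers=1) as pool:
        try:
            pool.submit(read_from_busy_drive).result(timeout=CONTROLLER_TIMEOUT_S)
        except cf.TimeoutError:
            print("no reply in time -> drive gets marked failed, rebuild degrades or aborts")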


Ah, so various RAID systems hard-code a timeout that’s too short for shingled drives?


Is this why my 4-bay Synology has had 8 WD Reds in the last 4 years? I don't even use it much; it's just always powered on. It drives me insane that I have to pay 200 GBP a year just to replace drives that are supposed to last years.


This HDD is dead anyway



