
> Also note that they talk about these being "drive-managed SMR" drives, and they are, but they reportedly don't advertise themselves to the OS as being drive-managed SMR, as they're supposed to do as per the ATA specifications (unlike other WD DMSMR drives which do). They literally designed the drives themselves to lie about this.

I wonder if reporting to the OS that it is an SMR drive would kick in host-managed SMR code paths that conflict with how drive-managed SMR works. While I'm not terribly inclined to give WD the benefit of the doubt, this certainly seems possible. It seems like it should be possible to test by modifying whatever part of the Linux kernel or userspace tools controls this and having it use the model number instead of what the drive reports as its SMR status. Granted, on a closed device like a Synology NAS this isn't possible for a user to do, though Synology themselves could, and probably should, test this.
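For the userspace half of that test, you don't even need kernel changes just to flag the mismatch. Here's a rough sketch, assuming Linux's standard sysfs attributes; the model-prefix list is purely illustrative, not a real database of DMSMR drives:

    #!/usr/bin/env python3
    # Sketch: compare what the drive reports via Linux's zoned-block interface
    # against a (hypothetical, incomplete) list of known DMSMR model prefixes.
    from pathlib import Path

    # Illustrative only -- a real check would need a maintained model database.
    KNOWN_DMSMR_MODEL_PREFIXES = ("WDC WD20EFAX", "WDC WD40EFAX", "WDC WD60EFAX")

    def check(dev: str) -> None:
        base = Path("/sys/block") / dev
        zoned = (base / "queue" / "zoned").read_text().strip()   # "none", "host-aware", "host-managed"
        model = (base / "device" / "model").read_text().strip()
        suspect = any(model.startswith(p) for p in KNOWN_DMSMR_MODEL_PREFIXES)
        if zoned == "none" and suspect:
            print(f"{dev} ({model}): reports non-zoned but matches a known DMSMR model")
        else:
            print(f"{dev} ({model}): zoned={zoned}")

    if __name__ == "__main__":
        for dev in sorted(p.name for p in Path("/sys/block").iterdir()
                          if p.name.startswith("sd")):
            check(dev)

Making the kernel or mdadm actually treat the drive differently based on the model string would of course be a bigger job, but this is enough to see whether the drive is lying.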



From what I've been reading, the main difference is simply exposing the TRIM command, which can in no way be a detriment - the only reason to disable TRIM is to hide the fact that it's SMR.
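You can see which of your spinning drives advertise TRIM at all via the block layer's discard limits in sysfs - a minimal sketch (standard Linux paths, nothing WD-specific):

    #!/usr/bin/env python3
    # Sketch: infer whether the kernel sees TRIM/discard support on a drive.
    # A nonzero discard_max_bytes means the block layer will accept discard
    # requests for the device; zero means no TRIM is advertised.
    from pathlib import Path

    def trim_supported(dev: str) -> bool:
        path = Path("/sys/block") / dev / "queue" / "discard_max_bytes"
        return int(path.read_text().strip()) > 0

    if __name__ == "__main__":
        for dev in sorted(p.name for p in Path("/sys/block").iterdir()
                          if p.name.startswith("sd")):
            state = "advertises" if trim_supported(dev) else "does not advertise"
            print(f"{dev} {state} TRIM/discard")

A rotational drive that advertises discard is a pretty strong hint that it's doing SMR (or at least some kind of indirection) under the hood.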


Interesting! I never would have thought of a spinning drive as needing a TRIM command, but it makes sense for drives where data is literally overlapping.

But why would failure to expose TRIM cause errors in rebuilding a RAID, in which case the old drives should be more or less read-only and the replacement drive write-only? My understanding is that TRIM should affect re-writing sectors only.


TRIM isn't the cause. The problem with SMR is that a disk failure in a RAID array causes read/write patterns during a rebuild that SMR simply was not designed for. I'm sure SMR-friendly RAID software will be available in the future, but that requires the software to know that the drive uses SMR. SMR also tends to have really awful performance for regular desktop usage.
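The "awful performance" part is easy to see for yourself: DMSMR drives stage incoming writes in a small CMR cache region, and once that fills, sustained write throughput falls off a cliff. A rough sketch of such a test, assuming a scratch file on the drive under test (the cache size and exact behaviour vary by model; this is an illustration, not a benchmark):

    #!/usr/bin/env python3
    # Rough sketch: sustained sequential writes to a scratch file, reporting
    # throughput per chunk. On a DMSMR drive the numbers typically collapse
    # once the CMR staging cache fills.
    import os, sys, time

    CHUNK = 64 * 1024 * 1024          # 64 MiB per timed write
    TOTAL = 64 * 1024 ** 3            # 64 GiB in total (adjust to taste)

    def main(path: str) -> None:
        buf = os.urandom(CHUNK)       # incompressible data
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
        try:
            written = 0
            while written < TOTAL:
                t0 = time.monotonic()
                os.write(fd, buf)
                dt = time.monotonic() - t0
                written += CHUNK
                print(f"{written // (1024**2):>7} MiB  "
                      f"{CHUNK / dt / (1024**2):7.1f} MiB/s")
        finally:
            os.close(fd)
            os.unlink(path)

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "/mnt/testdrive/scratch.bin")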


Yah, by blocking foreground read/write activity until an entire SMR region has been filled. That will work well...


Sorry, what I meant was the difference between host-managed and DMSMR. Having TRIM doesn't make any difference in the rebuild; it just means you can detect that it's SMR and help the drive in normal usage. Either one will have trouble in a RAID rebuild. Being able to detect that it's an SMR drive could probably let some RAID software handle it in a better way, but I don't think any existing software does (since nobody is crazy enough to use SMR in a RAID).


In theory, SMR drives shouldn't have issues with reading, and they should also write fine if you write entire stripes. With HMSMR that should be possible. With DMSMR I don't know - do DMSMR drives partially skip the staging area on large writes, and skip it entirely if you align them?

I don't see why RAID rebuilds should take notably longer on SMR drives, if only they were properly managed - by both the RAID driver and drive FW. Though it's hard to properly utilize a DMSMR black box.
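For a sense of what "write entire stripes" has to line up with, here's a toy calculation. The zone size, chunk size, and disk count below are assumptions picked for illustration (256 MiB is a commonly cited SMR zone size; your array's geometry will differ):

    #!/usr/bin/env python3
    # Back-of-the-envelope sketch: would full-stripe writes land zone-aligned?
    ZONE_SIZE   = 256 * 1024 * 1024   # assumed SMR zone size in bytes
    CHUNK_SIZE  = 512 * 1024          # assumed RAID chunk size per disk
    DATA_DISKS  = 4                   # e.g. a 6-disk RAID-6 has 4 data disks

    stripe_width = CHUNK_SIZE * DATA_DISKS          # data written per full stripe
    stripes_per_zone, remainder = divmod(ZONE_SIZE, stripe_width)

    print(f"full stripe = {stripe_width // 1024} KiB, "
          f"{stripes_per_zone} stripes per zone, remainder {remainder} bytes")
    if remainder == 0:
        print("stripe width divides the zone: sequential full-stripe writes "
              "can fill zones exactly, no read-modify-write needed (HMSMR case)")
    else:
        print("stripes straddle zone boundaries: some writes will force the "
              "drive to rewrite a partially filled zone")

With HMSMR the RAID layer could actually know the zone geometry and plan around it; with a DMSMR black box you're just hoping the firmware notices the writes are sequential.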


RAIDs are generally rebuilt online, which means that normal read and write activity is still occurring. Rewriting a block of data in the middle of an SMR region requires reading the entire SMR region, updating the changed data, and then writing it back. So normally a RAID controller will just stop updating a stripe (frequently much, much smaller than an SMR region) and perform whatever foreground activity is needed. The smarter ones rebuild the stripe with the new data along the way. This is going to really mess up rebuilding on SMR, because what you assume is probably a linear operation suddenly isn't from the perspective of the drive.
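To put a toy number on that read-modify-write cost (all figures below are assumptions chosen for illustration, not measurements of any particular drive):

    #!/usr/bin/env python3
    # Toy illustration: updating a small stripe in the middle of an SMR zone
    # means reading and rewriting the whole zone.
    ZONE_SIZE   = 256 * 1024 * 1024    # assumed SMR zone size
    STRIPE_SIZE = 512 * 1024           # size of the stripe being updated
    THROUGHPUT  = 180 * 1024 * 1024    # assumed sequential throughput, bytes/s

    moved = 2 * ZONE_SIZE              # read the zone, then write it back
    amplification = moved / STRIPE_SIZE
    seconds = moved / THROUGHPUT

    print(f"updating {STRIPE_SIZE // 1024} KiB moves {moved // (1024**2)} MiB "
          f"({amplification:.0f}x amplification), ~{seconds:.1f} s of disk time")

That's a few seconds of disk time for a half-megabyte update, which is why a rebuild interleaved with foreground writes goes so badly.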


TRIM is not involved when RAID is rebuilding. You are correct in that.



