I'm curious about their methodology here. Just from a materials standpoint, an M.2 SSD has much less physical volume than a 3.5" HDD and fewer disparate materials (no moving parts = no coils, bearings, motors...).
Perhaps the embodied carbon of memory chips is higher than I expect. To be cynical, there's probably a lot of corporate spin - Seagate is an HDD company, after all.
The thing is that anything electricity-intensive is going to produce less CO2 every year as energy systems transition. It's the use of iron and other necessary fuel inputs that will become the CO2-intensive items over the next 5 to 10 years. The energy usage won't be the issue, and the CO2e for production of any item is going to drop quite drastically.
Is anyone using EUV for SSDs? I don't think it's economical for NAND manufacturing. I'm not sure if SSD controllers are using EUV yet; they usually aren't on leading-edge process nodes.
Not all energy is carbon intensive. Using a lot of power is fine as long as you get the power in a sustainable way. That isn't necessarily true for a lot of current production. But that's changing rapidly. Especially in the parts of the world where this stuff is produced.
OTOH, if you have solar panels and feed power to the grid, you can sell "credits" to large power users, who will use them to "offset" their less-green power sources so they appear to get their power in sustainable ways / be carbon-neutral. Kind of an "I'm using dirtier power, but I teamed up with someone who's using fully sustainable power, so it all comes out in the wash!"
It will certainly become less of an issue so long as the grid keeps transitioning to renewable energy sources, which almost all countries are doing. In the next decade the CO2e intensity of electrical energy usage should be considerably lower than it is now, and CO2e from necessary fuel usage will begin to dominate in industrial processes. Iron, I think, will be a big contributor to this, since at the moment electrically produced iron is not viable.
China, the largest producer of iron and steel, has started restricting coal-based furnaces in favour of electric-arc furnaces for steelmaking, so carbon use from iron and coal will probably come down as well.
It's just going to take a bit longer for iron and steel production to transition; they likely won't bother until we see the full reduction in electricity prices that comes with renewables. They do seem to have a process, it just needs some refinement. The main grid transition is happening first and faster.
This is part of the issue with Seagate's assessment: it's changing rapidly right now. In 5 years' time Taiwan will be on mostly renewable energy, and suddenly all of Seagate's iron usage will make them more expensive; then in 10 years' time they'll be using electrically produced iron and it's not an issue anymore.
The poster has highlighted a major issue that is often glossed over in terms of actual carbon footprint. Otherwise it's like saying a Chinese coal plant is the same as a Scottish hydro dam, which is very, very wrong.
Yes, nothing is perfect, but a dam running for many, many years in an area with little water silt has a very different carbon footprint from a coal plant run at 80% efficiency on strip-mined coal.
Again, look at Spain: not all power sources are equal, and just because it's all connected to one network doesn't magically mean the load is then spread between all sinks.
If your high school science teacher told you so, they were wrong. It's time to talk about proper grown-up electronics in this discussion.
No, electronics at scale isn't the same as a 9V battery and a filament bulb. There are real effects, not readily observable in a small setup, that need to be addressed.
It's like pretending all of compiling is a direct translation to a lower language all the way down. It's not: you make changes, adapt, and optimise for the language and system you're in, not just write Fortran in assembly.
I was including the weasel word 'fairly' in front of 'fungible' exactly because I foresaw such a discussion.
To be slightly more precise and wordy in what I wanted to express:
The original comment said:
> Not all energy is carbon intensive. Using a lot of power is fine as long as you get the power in a sustainable way.
And that's true. You can buy 'green electricity' from the grid. But that mostly just means that the people who don't care get allocated a larger fraction of 'non-green' electricity.
(That works until you lose enough of that buffer of people who don't care where their electricity is coming from that you have more demand for green electricity than is available on the grid. Then you actually need to spin up more green electricity, or raise prices for that 'colour'.)
Yes, there are different demand levels over time (throughout the day and throughout the year, etc.), and different consumers need different reliability levels. And all the power sources have different profiles for when they are available, and for how easy (or hard, or impossible) it is to turn them on or off at short notice.
> If your high school science teacher told you so, they were wrong. It's time to talk about proper grown-up electronics in this discussion.
An HDD has one controller chip, and that controller is relatively simple compared to the controller chip on an SSD; it's certainly a lot smaller and on a less advanced process, because it has to deal with ~200MB/s, not 15,000MB/s. On top of the controller chip, a 2280 NVMe SSD will have 4 or 8 flash chips as well, which are on a recent process and have many, many layers stacked together. Then the SSD will also often have a DRAM chip. An SSD uses substantially more silicon than an HDD does.
Not disputing your broader point necessarily, but don't hard drives also include memory for cache, while DRAM cache has become less and less common on SSDs? (To the point where it is now uncommon on new SSD devices.)
HDDs use much simpler electronics than SSDs. The controller in a hard drive mostly just moves the read/write head and spins the motor. It doesn’t need to do much processing. These chips are built on older manufacturing processes that are cheaper and less energy intensive.
SSDs, though, need a fast processor to manage the flash memory: wear leveling, error correction, keeping track of where everything is written, etc. (there's a toy sketch of that bookkeeping below). This requires more advanced chips, built on newer process nodes, which take a lot more energy and resources to make.
And SSDs also need extra chips like DRAM and obviously the flash memory itself.
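To make the mapping and wear-leveling point concrete, here's a toy Python sketch of what a flash translation layer conceptually does. It's purely illustrative (the class and names are made up for this comment), not how any real controller firmware works:

    # Toy flash translation layer: out-of-place writes plus naive wear
    # leveling. Purely illustrative; real FTLs work at page/erase-block
    # granularity with garbage collection and ECC on top.
    class ToyFTL:
        def __init__(self, num_blocks):
            self.mapping = {}                     # logical block -> physical block
            self.erase_counts = [0] * num_blocks  # wear per physical block
            self.free = set(range(num_blocks))

        def write(self, logical_block):
            # Write out of place: pick the least-worn free block,
            # remap the logical block, and retire the old copy.
            target = min(self.free, key=lambda b: self.erase_counts[b])
            self.free.remove(target)
            old = self.mapping.get(logical_block)
            if old is not None:
                self.erase_counts[old] += 1       # "erase" the stale copy
                self.free.add(old)
            self.mapping[logical_block] = target

    ftl = ToyFTL(num_blocks=8)
    for _ in range(100):
        ftl.write(logical_block=0)   # hammer one logical block
    print(ftl.erase_counts)          # wear ends up spread across all blocks

All of that bookkeeping (plus ECC and garbage collection) is exactly why the controller needs real compute, while an HDD just writes in place.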
Is the controller really going to be that much simpler on an HDD? On an HDD you still have things like command queuing, caching, checksumming, and bad-block remapping, just as with an SSD. Sure, you don't have the flash translation layer, but I'd expect processing the analog signal going to/from the heads to be a headache too. I wouldn't be surprised if they have to do temperature compensation and factory and/or runtime calibration of the analog path. Of course much of that will be done in fixed-function custom hardware blocks, but almost certainly so would an SSD, for e.g. ECC calculations.
An HDD controller also calculates and checks ECC, and encodes/decodes the user's 0/1 bits into the bits actually written on the platter, but at a much lower speed (0.3 GB/s vs. a 14 GB/s SSD max). An HDD has an extra servo controller, and an extra 512 MB - 1 GB of DRAM.
WD HDDs of 20 TB and up have some flash inside for metadata offload.
> and fewer disparate materials (no moving parts = no coils, bearings, motors...)
But even so, isn't the lifetime of an SSD less than an HDD given the same amount of writes? Especially when you try to have the same amount of storage available.
So say I want 16TB. I can either get one 16TB SATA drive, or something like four 4TB SSDs (or eight 2TB ones) to have the same storage; is the physical volume still less in that case? And if I write 16TB of data each month to each of these alternatives, which one breaks first and has to be replaced?
A typical consumer hard drive is going to last about 5 years, maybe more, but it will wear just from existing. If you wrote 16GB to it a month, that won't be a challenge for it.
The SSD, however, will be specified with a life of how many writes it can do. For example, a Samsung 990 Pro will have a specified life of 1200TB written. That is about 76,800 months of operation at 16GB/month; not only is this not a challenging workload for an SSD, but you have 8 of them, so 8 * 76,800 months of operation on average.
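As a back-of-the-envelope check (a sketch in Python; binary units assumed for the 1200TB rating, which is where 76,800 comes from):

    # SSD endurance arithmetic from the comment above (illustrative only)
    rated_endurance_gb = 1200 * 1024   # 1200 TB written, in binary GB
    monthly_writes_gb = 16             # the 16 GB/month workload
    months_one_drive = rated_endurance_gb / monthly_writes_gb
    print(months_one_drive)            # 76800.0 months for a single drive
    print(months_one_drive * 8)        # combined write budget across 8 drives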
I have SSDs that are quite old. My original Intel 80GB G1 still works and still has 80% of its life left, but it's utterly obsolete, as it's too small to be useful and 500MB/s on SATA is a bit dated. All the OCZ SSDs died long ago, but the Crucial M4 is still trucking along, and at 512GB it's still useful enough, as is the Western Digital 1TB. I have barely scratched the surface of their usage life: despite writing 35TB to them, they still have 95% life after 42,000 hours of operation.
So at 16GB of writes monthly they are going to last as long as the silicon and PCB last, which, if made well, could be decades. They will more than likely become obsolete in speed and size rather than dying.
> I have barely scratched the surface of their usage life: despite writing 35TB to them, they still have 95% life after 42,000 hours of operation.
What are you using these drives for, if I may ask? They seem to barely be used for anything, or I'm an outlier here. Here's an example of my main drive in my desktop/work computer:
    Model: Samsung SSD 980 PRO 2TB
    Power On Hours: 6,286
    Data Written: 117,916,388 [60.3 TB]
I'm just an average developer (I think), although I do bounce around a lot between languages and projects, manage VMs, and whatnot, and some languages (Rust) tend to write a lot of data to disk when compiling.
But the ratio difference seems absurd: you have ~0.0008 TB written per hour, while I have ~0.0096 TB per hour. That's a huge difference.
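For anyone checking the arithmetic, a quick sketch using the figures from the two comments above:

    # Write rate per power-on hour (figures taken from this thread)
    old_drives = 35 / 42_000        # ~0.0008 TB/hour
    samsung_980_pro = 60.3 / 6_286  # ~0.0096 TB/hour
    print(f"{old_drives:.4f} vs {samsung_980_pro:.4f} TB/hour")
    print(f"ratio: {samsung_980_pro / old_drives:.1f}x")  # ~11.5x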
Both of those are game drives, so their contents tend to sit for longer. My boot SSD is at 31TB in 6,183 hours, so quite a bit faster usage than those old drives, but about half your rate. How long SSDs last is very workload-dependent. 16GB a month is extremely slow, much slower than my usage of about 120GB a day. With SSDs you could write their entire contents every day and they would still be working 5 years later, but probably not 10.
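For scale, assuming the same 1200 TBW rating mentioned above, even 120GB a day is nowhere near the limit:

    # Days to exhaust a 1200 TBW rating at 120 GB/day (decimal units)
    days = (1200 * 1000) / 120   # 10,000 days
    print(days / 365)            # ~27 years, ignoring other wear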
My main OS disk (256 GB SATA) reports 40k power-on hours and 27.5 TBW. I've also got a 4 TB NVMe drive that's at 16.5k power-on hours and 5.7 TBW, and an 8 TB SATA drive that's at 18.5k hours and 25 TBW.
A server machine with multiple busy databases has 24.6 TiB written in 1349 hours. That’s on ZFS. As such, I’d say your usage patterns appear rather non-average.