The maximum capacity of today’s 3.5-inch hard drives is 3 terabytes (TB), at about 620 gigabits per square inch, while 2.5-inch drives top out at 750 gigabytes (GB), or roughly 500 gigabits per square inch. The first generation of HAMR drives, at just over 1 terabit per square inch, will likely more than double these capacities – to 6TB for 3.5-inch drives and 2TB for 2.5-inch models. The technology offers a scale of capacity growth never before possible, with a theoretical areal density limit ranging from 5 to 10 terabits per square inch – 30TB to 60TB for 3.5-inch drives and 10TB to 20TB for 2.5-inch drives.
>I see 60TB is the upper bound for 3.5 inch, so not much wrong with the title, is it?
Well the problem there is that it's a theoretical upper bound, which is another way of saying "we have not yet found a specific reason that 3.5 inch 60TB drives should be impossible with this technology". That is a far cry from "on their way".
This is similar to the announcements from Intel about advances in lithography. Intel announced the 32nm process technology in 2009 -- that process gives them a platform to refine and improve their technology.
So now, a top of the line Xeon is faster, has 10 multi-threaded cores, includes AES acceleration, and uses 20% less power than its predecessor of 3 years ago... thanks to that base technology.
If you're running a business that generates/consumes a lot of storage, this announcement means that you need to start thinking about how to deal with a petabyte of storage capacity with performance characteristics similar to what you have today (for 1/30th the capacity).
HDD transfer speeds do increase as storage densities increase. It might be more accurate to say 30x the space but only sqrt(30), about 5.5x, the transfer speed.
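Rough sketch of that scaling in Python (the 30x figure is just illustrative, not from the article):

    import math

    # If areal density goes up 30x, linear density along a track only goes up
    # by sqrt(30); at a fixed RPM, sequential transfer rate scales with linear
    # density, not areal density.
    areal_gain = 30                      # assumed areal-density increase
    linear_gain = math.sqrt(areal_gain)  # gain in bits per inch of track
    print(f"~{linear_gain:.1f}x sequential transfer speed "
          f"for {areal_gain}x capacity")
    # -> ~5.5x transfer speed for 30x capacity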
Random access performance will be an issue, unless they work some magic on that, too. Same for bus-speeds.
Today you can fit about 96T into 4U if you squeeze a little.
That's 48 spindles then, sharing the load.
With 60T drives that turns into 2.8P on the same 48 spindles.
At today's maximum SAS bus-speeds (roughly 10 GBit/s) it would take 33 days just to write such an array full (assuming all components can sustain that throughput).
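For what it's worth, that figure roughly works out if you assume the 10 GBit/s line rate carries 8b/10b encoding, i.e. about 8 GBit/s of usable payload (my assumption, not stated above):

    # Fill time for a 48 x 60TB array over one saturated link (decimal TB).
    spindles, drive_tb = 48, 60
    array_bits = spindles * drive_tb * 1e12 * 8   # total bits to write
    payload_bps = 10e9 * 8 / 10                   # 8b/10b -> ~8 GBit/s usable
    days = array_bits / payload_bps / 86400
    print(f"~{days:.0f} days")                    # -> ~33 days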
And let's not get started on IOPS...
Anyway, it will be a while before 60T drives hit the shelves. Until then at least the bus-speed issues will probably be sorted out.
The maximum capacity of today's 3.5-inch hard drives is 4TB, and has been since last year. I know it's a press release, but that's bending the truth a little too far for my taste.
The meaning of the title is that Seagate is claiming that after a few years of stuttering they've jump-started exponential growth for the HD once again.
I'd say it's a big deal that Seagate claims they will develop 60TB HDs with your usual exponential growth curve (bearing in mind we're starting at 2-6TB.) "On the way" doesn't mean tomorrow.
And I've got a 2.5" 9mm internal one, but I'm fairly sure that they're talking about density per platter and all 2.5" HDDs larger than 750 GB seem to use two platters.
I don't know about right now, but last I looked (~1 year ago) the 1TB 2.5" drives were not standard height. Some people were able to jam them into specific laptop models that had enough room (or flex), but that was it.
Another way to look at it is to see how many bare drives are being sold at those specs (1TB 2.5") vs. external drives. The bare drives won't sell as well if they are a non-standard height (at least I assume they wouldn't).
Samsung and WD are selling 9mm (standard height) 1TB drives, and have done so for almost a year. I think Toshiba or Hitachi recently released a 7mm (extra slim) 750GB drive. Seagate has been selling extra thick (15mm) 1.5TB drives, but only inside external enclosures, for some time now.
Many laptops will also accept the thick 12mm drives you're referring to.
Well, they are standard enough to fit into a MacBook Pro (even the 13"), so I think they were quite popular; I have one, and I know quite a few other people who do.
Since it requires a pulse from the laser to write, does this also mean data will be more durable and less vulnerable to damage from external electromagnetic fields?
Have you ever lost data on a hard drive because of external magnetic fields?
For sure, if you get a degausser or other very strong field you will cause damage. But there are two strong magnets inside the case near the platters, so I'm not sure what kind of magnetic field is available in common domestic or office settings that might be harmful.
Magnetic field strength does not follow any specific dropoff law because it's a dipole.
The dropoff depends on how far apart the poles are (i.e. from a distance the two poles blend together and average to zero), and it also depends on the orientation: are the poles parallel or perpendicular?
True, but for the common case where the distance from the magnet is substantially greater than the distance between the poles, inverse cube is plenty good enough as an approximation.
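A toy illustration of that: model the magnet as two opposite poles a fixed distance apart and compare the on-axis field with the inverse-cube far-field approximation (arbitrary units, nothing drive-specific):

    def two_pole_field(r, s=0.01):
        # On-axis field of two opposite poles separated by s (pole strength 1).
        return 1.0 / (r - s / 2) ** 2 - 1.0 / (r + s / 2) ** 2

    def inverse_cube_approx(r, s=0.01):
        # Leading far-field term, valid for r >> s.
        return 2.0 * s / r ** 3

    for r in (0.02, 0.05, 0.2, 1.0):   # distance in metres, s = 1 cm
        ratio = inverse_cube_approx(r) / two_pole_field(r)
        print(f"r = {r:4.2f} m: approx/exact = {ratio:.3f}")
    # The ratio approaches 1 once r is several times the pole separation.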
I really look forward to the day, when, as a general consumer and user of computers, I can just completely ignore hard drive capacity (cause it's just always bigger than I need). Even though it won't be, I will just perceive it as limitless. I'm almost there, but not quite. It's amazing how much useless busy work limited hard drive capacity generates on an individual and organizational level. Can't wait to be free of that.
Sadly, if you look at the mean-uncorrectable error rate you will notice that for a 2.5TB drive, the chance of getting an error when reading it all the way through is 100%. Or, basically you can't use 'mirror' RAID to protect your data because you can't reliably read the entire drive.
New and more interesting error correcting bits are going to be needed here. I could imagine a device with 60TB of raw capacity providing 20TB of data you could read reliably though.
WD lists their 2.5TB drive as having "Non-recoverable read errors per bits read" as "<1 in 10^14" and the drive has about 2 x 10^13 bits, so while the chance of an error in one whole disk read is non-trivial, it's not 100%.
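Sketch of that calculation, treating the spec-sheet "<1 in 10^14" figure as an actual per-bit rate (it's only an upper bound), and what the same rate would imply as drives grow:

    error_rate_per_bit = 1e-14
    for capacity_tb in (2.5, 60):
        bits = capacity_tb * 1e12 * 8
        p_clean = (1 - error_rate_per_bit) ** bits  # chance of a clean full read
        print(f"{capacity_tb:>4} TB: P(>=1 unrecoverable error) ~ {1 - p_clean:.0%}")
    # -> ~18% for 2.5 TB, ~99% for 60 TB at the same per-bit rate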
Mirroring will still work fine as long as the error is detected. The chance of read errors in the exact same sector of two disks should still be small. Silent errors were always a problem for mirroring.
At what point do we as consumers start considering drives to be defective? Is the underlying ECC suffering because of the need to hit greater storage numbers?
The day came for me back in about 1998. Ever since then, the smallest drive money could buy has been much, much larger than the sum total of all the data I owned.
I may be conservative, though. I mean, I also still instinctively gauge program size by how many floppies it would take up...
Depends on what you do. If you do video, HDDs are actually behind the tech. Every time they double the resolution (DVD -> 1080p -> 4k -> 8k), you need 4 times more space. That's not even considering lossless formats, which need even more space.
Same with photography. Each raw photo from my 5D II is 25MB. The megapixel race isn't over.
Now you've got me curious. Are you saying lossless video formats require more than four times the space to store four times the pixels? How can that possibly be?
I believe OP meant that the storage requirements for compressed video are already increasing at a huge rate, and it's exacerbated if you are working with uncompressed video.
By the end of this decade we will most likely have extremely high-resolution consumer displays pushing 10k x 6k (given that the iPad 3 just came out, it took us 6 years for 1080p to go mainstream, and ultra-high resolution is already in labs). Encoding video for that resolution will be on the order of hundreds of gigabytes per hour, even though lossy codecs let compressed size grow sub-linearly with resolution.
I don't see the rate at which we use space slowing down any time soon, but we have physical magnetic platter limits to worry about going forward.
The amount of storage you use and need grows over time, so this may be a hard dream to achieve. If I look at how much storage I require now compared to just 10 years ago, it has more than quadrupled. I expect that trend to continue.
Which makes them perfect for today's and tomorrow's computers - use SSDs as main drives and HDDs for data storage. But I think the write speed will be good enough to use these drives as main drives, as well...
As for reliability - I don't think it's going to be that much of an issue - I've had CD-RWs that I used as portable storage and wrote/erased/read dozens of times (with scratches on them from constant use). A fixed platter in a clean housing should last quite a while...
What the greater storage allows for me is to be more profligate in my usage of it. For example, I've re-encoded my CDs at 320kbps, and I've been scanning my old books at much higher resolutions, and am less picky about which ones to scan.
You seem to be in an entirely different universe from the point. Most workloads would take weeks, months, even years to write out 60TB of data. It will be unnecessary for the filesystems to garbage collect old data with any frequency, and you may very well not want them to -- when you have so much space, why not have an available record of every single I/O transaction back to the beginning of time?
> Most workloads would take weeks, months, even years to write out 60TB of data
Today, yes, but this isn't going to give us 60TB drives today, or even tomorrow. It's going to give us 6TB drives. It might give us 60TB drives at some point down the line, and it would be unwise to make any strong predictions about workloads at indeterminate points in the future.
60TB is 2+ years of HD video at typical bitrates. It's two months at a patently absurd 100mbps. You could write every bit of traffic passing through a 10gigabit link for 14 hours.
Assuming a 100-year lifespan, if every second of your life were recorded at 5mbps, you would need just 30 such drives.
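The arithmetic behind those figures, with decimal terabytes and constant bitrates assumed:

    drive_bits = 60e12 * 8
    for label, mbps in (("HD video @ 8 Mbit/s", 8),
                        ("'absurd' 100 Mbit/s", 100),
                        ("lifelog @ 5 Mbit/s", 5)):
        days = drive_bits / (mbps * 1e6) / 86400
        print(f"{label:>20}: {days:6.0f} days per 60TB drive")
    # 100 years of recording at 5 Mbit/s:
    lifetime_bits = 100 * 365.25 * 86400 * 5e6
    print(f"100-year archive: ~{lifetime_bits / drive_bits:.0f} drives")  # ~33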
Yeah, I'm pretty confident about my prediction as to most workloads.
I get 9 months at 20 MBit/s [1] which is kind of low for a blu-ray[2] or a high-end consumer camera, today.
I find it kind of difficult to discuss 60 TB since it is so far in the future. By then we will at least have 4k and probably a bitrate of ~100 MBit/s (since the cap is at 50 MBit/s today that is aiming low, but there will probably be a point where you say it is enough; then again, 3D imagery could potentially demand much more). A small drive (same density but ~2") would hold about two weeks of footage, which IMO sounds reasonable for a camera in an age where you don't have to compromise that much.
It will put some constraints on usage patterns, but I don't feel that 60 TB will be that much more data than 4 TB is today. Sure, for text and audio you basically will have an infinite amount of space (as a consumer), but that is the case even today. 60 TB is only 15 times more than we have today in a single drive, and it will take many, many years to get there. And I can barely fit my grandfather's VHS camera tapes on a 4 TB drive today, and since I want some redundancy 4 TB isn't enough for even that.
> I get 9 months at 20 MBit/s [1] which is kind of low for a blu-ray
Blu-Ray bitrates are bloated for various reasons, like older encoders, film transfer, and a lack of anything better to do with the space. You don't need 20mbps with modern H.264 High encoders, it's just a waste. Anything past 10 is at best questionable
> By then we will at least have 4k and probably a bitrate of ~100 MBit/s
Even if 50mbps were something that we would ever need at 1080p (it's not), 4k would not justify a doubling in compressed size; there is no reason to have 100mbps compressed video.
4k is also way beyond reasonable for home display. Just 3840x2160 on a 60" display would be something like 23 arc seconds at 3 meters; you physically cannot see that.
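That checks out, roughly, assuming a 16:9 panel and a 3 m viewing distance:

    import math

    diag_in, h_px, dist_m = 60, 3840, 3.0
    width_m = diag_in * 0.0254 * 16 / math.hypot(16, 9)  # panel width in metres
    pixel_m = width_m / h_px                             # one pixel's width
    arcsec = math.degrees(math.atan2(pixel_m, dist_m)) * 3600
    print(f"~{arcsec:.0f} arc seconds per pixel")        # -> ~24
    # Normal visual acuity is around 60 arc seconds, so individual pixels
    # are well below what the eye resolves at that distance.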
> 60 TB is only 15 times more than we have today in a single drive
4TB is already orders of magnitude more than most workloads will ever need. I was using video to demonstrate the absurdity of thinking typical workloads would actively use 60TB, because it's not a typical workload, it's a worst-case scenario.
> Blu-Ray bitrates are bloated for various reasons, like older encoders, film transfer, and a lack of anything better to do with the space. You don't need 20mbps with modern H.264 High encoders, it's just a waste. Anything past 10 is at best questionable
But most people won't want the hassle of re-encoding the data they receive. If it takes up twice as much storage, so what? A lot of people will buy that 60TB drive instead of a 30TB one.
> Even if 50mbps were something that we would ever need at 1080p (it's not), 4k would not justify a doubling in compressed size; there is no reason to have 100mbps compressed video.
So consider 4K + twice the frames to handle 3D + doubled framerate. That's 16 times the amount of uncompressed pixels. And I'm not convinced that's the upper bound of the amount of raw data we'll see.
It's not that long ago you'd have been called crazy if you suggested we'd need anything beyond 1080p, and a lot of people were questioning whether HD would ever get any traction.
> 4k is also way beyond reasonable for home display. Just 3840x2160 on a 60" display would be something like 23 arc seconds at 3 meters; you physically cannot see that.
I know people with projectors. 60" is by no means the upper bound on size people will want. 100"-150" maybe.
While I agree that there are diminishing returns when increasing the bitrate for modern encoders there certainly is a noticeable difference between 20 Mbit/s and 10 Mbit/s.
We would need those high-resolution displays if we ever want to be able to show off pictures decently, and we have (finally) just begun to experiment with high-resolution displays. 4k displays are already being demoed.
My point was that 60 TB vs. 4 TB, given the time difference, isn't such a big deal. The biggest hurdle is getting a stable COW filesystem. But, as is already the case today, SSDs will rule the workstation (and pretty much anywhere you value performance), and SSDs have much bigger technical challenges to get that big.
Regular spinning hard drives will thus only be used where you have an atypical workload requiring lots of data.
People won't buy drives much bigger than their workload can use. It's possible that 60TB drives would only be used in massive storage systems, not PCs.
People will, if it comes bundled with their computer.
Nearly every grandparent I've run across has a >100GB hard drive (if not >500GB). They're all using roughly 20GB of that drive, after a couple years.
With only very rare exceptions, people don't choose what goes into their computer. They buy what looks good for a price that they're willing to pay, for the simple reason that buying a computer is full of more buzz-words, acronyms, and useless numbers than any other purchase I've ever made. It's insane, and it gets people to buy more than they need, so the sellers won't stop doing it.
> "People will, if it comes bundled with their computer."
But what will a computer look like in 10 years?
What I've been noting, from the whole mobile explosion, is that people really don't care if their computer only has 64GB of storage. Of all the complaints that people have about tablets and phones replacing 'real' computing, disk space rarely enters the discussion.
So even if tablets and smartphones don't notably replace traditional PCs, their effect on consumer perception of the necessity of large disks can't be ignored.
Sure, it's easy to market 'more gigabytes than your old machine' to a regular person. But so is "faster boot" and "less waiting".
That's why laptop makers can't seem to ditch spinning disks fast enough.
And what motivation does a PC builder have, to continue bundling increasingly-capacious hard drives with little markup? Why not bundle increasingly-faster and increasingly-cheaper SSDs and save the spinning drives for higher-margin positions as add-ons and external storage?
I'd be incredibly surprised if the market for spinning disks doesn't shrink over the next 10 years. They'll certainly remain, even for consumers. But there's no good argument that they'll still be standard.
As far as I've seen, disk space hasn't been in computer-purchasing discussions at all for a few years now - salespeople just say "more than enough" and leave it at that. If you know otherwise, you're in a special class of consumers, and it still matters to you. Plus, if we get 60TB drives tomorrow, we'll have new ways to fill it within the week, and some people will do so, just as we've done to this point.
I totally agree that spinning disks are on the way out, except for high-storage purposes. And then only until flash storage meets / exceeds their density (solid state has a habit of doing this). And I totally agree that the HD size means nothing to most people buying computers. But it does drive sales over smaller numbers, so we'll keep seeing them go up as long as that's true. Maybe the tipping point is now, but I'm not putting any money on that.
As for 'boots faster / less waiting', it isn't something that's generally quantifiable because it comes with a wide range of caveats. Even if it were (I haven't seen any, but I haven't PC shopped for a while), most people I know are well aware that computers slow down over time. They may not know why, but they have seen it happen to every computer they've ever owned - "It wasn't this slow when I bought it" is a common complaint. On top of that, the vast majority of people I know simply don't shut down their computer until an update forces them to. Computers resume from sleep very quickly - my laptop is awake and responding by the time I can get my hands on the keyboard to punch in my password. Cutting that time in half gains nothing, it has reached the 'fast enough' point that it's not an incentive.
I love that video for some reason. When it came out I actually saved the FLV file from their website because my kids liked to watch it. Great marketing, IMHO.
This is cute.
And because it's a Flash vector animation and not a video, you can do a neat trick: zoom in on the page and it will scale up the animation smoothly. This clip is perfect quality at any resolution.
In the PC consumer's choice between SSD and traditional HD, I suspect that the decreasing marginal returns from this increase in storage space will not outweigh the benefits of SSDs. However, in the server market this increase will continue to favor traditional HDs.
Hybrid drives will also be a big deal. You'll have 16GB RAM, 512GB SSD, and 4TB HD. Plenty of space for a media collection (no, you will never have enough), and fast access to frequently-used stuff (like databases, docs, programs).
I don't disagree, but to play the devil's advocate...
How soon until Dell, HPs (etc) ship more desktops with SSDs than with mechanical drives? The average consumer doesn't understand the difference and doesn't want to pay extra (even though it's probably the best performance/$ upgrade possible).
On the flip side, I wouldn't be surprised if we saw significant adoption for servers that can live with the space limit (or that can design around it). Server-based applications are often I/O bound, established solutions are expensive, and the people responsible know and care.
IF the disk continues to rotate at the same frequency, then you get an I/O increase exactly proportionate to the increase in linear (not areal) density. A new technique which puts twice as many bits per inch of track should read twice as many bits per second.
Write techniques may or may not need to be adjusted (preheat a track? a sector? realtime per bit-domain?) but should eventually converge to the same speed-up as reading.
My home system has 3: Two in a RAID mirror and a third that I rsync stuff over to. Not that I'm paranoid or anything. Then there's my external backup...
This technology "Heat Assisted Magnetic Recording" (HAMR) is expected to be supplanted by a new technology to put even more bits into a hard drive "Krypton-Yittrium Joined-Electron Laser Light Injection" (K-Y JELLI).
So I can either trust my data to a big stack of transistors and hope they don't just randomly die, or to a pile of spinning rusty metal discs and hope that a motor doesn't fail or something scrapes the rust off the platters. :)
If you are trusting any one piece of infrastructure, you're doing it wrong. Keep your stuff backed up and there's no need for your data security to be based on trust and hope.