In case you didn’t notice, this isn’t an actual, usable product. Scientists in a lab created a technique to encode bits into very tiny spaces across 100 layers.
The proof-of-concept technique they created in a lab is so slow that it would take a full second to write 10 pixels on the disc.
The headline claiming that this is a new cousin of DVDs is clickbait. It’s a lab technique, but someone extrapolated it into the same area of a DVD and imagined how many bits it could theoretically hold.
A lot of IEEE Spectrum is clickbait, I find. Coupled with aggressive marketing and absurdly priced subscriptions, it means hype takes top billing on almost every story. The light isn't worth the candle.
> (writing) an energy per bit of information of 17 μJ [0]
It's interesting to think about data as μJ per bit. The article seems to hint that this is too high for practical use, but I'm honestly clueless how this would compare to other storage mediums. It also makes me wonder how much of an energy premium we're paying for remote data, and how much for wireless transmission.
Not exactly. "dual-beam volumetric nanoscale writing [...] contained two laser beams [...] writing laser beam [...] energy per bit of information of 17 μJ [...] and a [...] deactivating laser beam [...] energy per bit of information of 3.64 mJ".
That brings us to $800/TB to write.
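A quick sanity check on that figure, counting both beams from the quoted numbers and assuming electricity at $0.10/kWh (the price is my assumption, not the paper's):

```python
# Cost per TB written, counting both the writing beam (17 uJ/bit)
# and the deactivating beam (3.64 mJ/bit). Electricity price assumed.
j_per_bit = 17e-6 + 3.64e-3
bits_per_tb = 8e12
price_per_kwh = 0.10  # assumption

kwh_per_tb = j_per_bit * bits_per_tb / 3.6e6  # 3.6 MJ per kWh
cost_per_tb = kwh_per_tb * price_per_kwh
print(f"${cost_per_tb:,.0f}/TB")  # ~ $813/TB
```

The deactivating beam dominates: it's over 200x the energy of the writing beam per bit.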
Also, if pixel == bit (I doubt this, due to error coding there should be more than one pixel per bit), the following is also very relevant: "The pixel dwell time (that is, the light exposure time) was 100 ms, which involved 4.2 × 10^6 pulses [...]. [...] took 0.35 ms to move from one pixel to the next [...]"
This would require 25440 years to write 1TB.
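That figure follows directly from the quoted dwell and movement times, assuming one pixel per bit (which, as noted, is optimistic once error coding is added):

```python
# Time to write 1 TB at the paper's quoted per-pixel timings,
# assuming one pixel per bit (no error-coding overhead).
dwell_s = 100e-3   # light exposure time per pixel
move_s = 0.35e-3   # time to move from one pixel to the next
bits_per_tb = 8e12

seconds = bits_per_tb * (dwell_s + move_s)
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years to write 1 TB")  # ~ 25,440 years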
Scientists will need to find a way to succeed in writing a bit in less than 4.2 million "attempts" until this technology is commercially interesting. Still, this is an exciting development because it may produce a commercially viable medium if the density ambition is reduced from 200TB/disk to something like 20TB/disk or 2TB/disk.
37.8 kWh per TB, so about 10 kW at a write speed of ~75 MB/s (~3.7 hours for the TB).
That means you'd not only have to deal with the heat dissipation of ~five space heaters at full power, but you'd also need three-phase power to supply it.
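Those numbers check out if you count only the 17 μJ/bit writing beam (the 3.64 mJ/bit deactivating beam would make it far worse) and assume a hypothetical 75 MB/s write speed:

```python
# Power and energy to write 1 TB at 17 uJ/bit, assuming a
# (hypothetical) sustained write speed of 75 MB/s.
j_per_bit = 17e-6
bits_per_tb = 8e12
write_bps = 75e6 * 8  # bits per second

kwh_per_tb = j_per_bit * bits_per_tb / 3.6e6  # 3.6 MJ per kWh
watts = j_per_bit * write_bps
hours_per_tb = 1e12 / 75e6 / 3600
print(f"{kwh_per_tb:.1f} kWh/TB, {watts/1000:.1f} kW, {hours_per_tb:.1f} h/TB")
```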
Who says we need to fill it up in a reasonable time frame? If each disk costs a comparable amount to a dvd (cents) then the expensive part is the electricity (and maybe the equipment to write to them) and it's ok if most of the theoretical storage capacity goes unused. The important metric here for something like data backup is cost/byte to write and cost/byte/month to maintain.
No, I'm suggesting that it could take a decade and still given the numbers in the thread I replied to (which I agree appear not to reflect reality) be a viable product. It never needs to be 'full'.
I tried some numbers for writing to hard drives (it seems like SSDs and hard drives are not super far apart on energy per byte written).
And it seems like writing to HDDs costs between $.15 and $.56 per TB, depending on write speed and watts/time assumptions. It gets tricky to think about because there's the sunk cost of powering the drive when idle, plus the delta between idle and write power, but I think that range is representative for $/TB on an HDD.
It seems doubly weird, as Blu-rays and the other media they are comparing to aren't written, but pressed, when sold in large runs.
I get that a lot of uses may be as a hard drive, but imagine ginormous AI model files or some such, pressed and shipped as updates. The energy cost would be far different.
Or maybe Ultra Mega Super HD 4D video.
Or... or... blockchain history for bitcoin in a few years.
> The researchers note that the entire procedure used to create blank discs made using AIE-DDPR films is compatible with conventional DVD mass production and can be completed within 6 minutes. Gu says these new discs may therefore prove to be manufacturable at commercial scales.
Petabyte-sized models will be impractical for some time still. OTOH, 30 years should give us a million fold improvement in capacity, and, with in-memory computing (you really don’t want to move a petabyte) I’d say there is hope.
>All in all, a DVD-size version of the new disc has a capacity of up to 1.6 petabits—that is, 1.6 million gigabits. This is some 4,000 times greater data density than a Blu-ray disc and 24 times that of the currently most advanced hard disks.
1.6 petabits is 0.2PB, or 200TB. The highest-density shipping hard drive is 22TB or 30TB, depending on whether you are a consumer or corporate customer. But even against 22TB it is only 9 times, not 24 times.
The highest density hard drives have 10 platters[1], storing about 2.2TB each. So if they're talking density in terms of bytes per square cm, 24 is probably reasonable. HDD platters are certainly smaller than DVD-size, and the 24x claim would be true if they were ~25% the size of a DVD which seems about right.
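A rough check of that reasoning, assuming a 134 cm² DVD-sized disc holding 200 TB versus a 2.2 TB platter at ~25% of a DVD's area (both figures are assumptions, not from the article):

```python
# Areal density ratio: 200 TB over a DVD-sized area vs a 2.2 TB
# HDD platter assumed to be ~25% of a DVD's area.
dvd_area_cm2 = 134
disc_tb = 200
platter_tb = 2.2
platter_area_cm2 = 0.25 * dvd_area_cm2

disc_density = disc_tb / dvd_area_cm2
hdd_density = platter_tb / platter_area_cm2
print(f"{disc_density / hdd_density:.0f}x")  # ~ 23x, close to the claimed 24x
```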
Yes. 8PB per 1U, or nearly 1 exabyte in 3 racks. Unfortunately the 256TB was really a tech demo only. Last time I checked, even the 64TB EDSFF wasn't selling, due to most companies thinking 32TB was the sweet spot for redundancy, safety, etc.
A DVD's surface area is around 134cm2.
A microSD card area is around 1.65cm2. This means around 80 microSD cards in the surface area of a DVD. Multiplied by 2TB, the largest currently available microSD size, we get around 160TB, which is 1.28 Pb (Petabit).
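The tiling math, using the same figures:

```python
# Tile a DVD's surface (134 cm^2) with 2 TB microSD cards (1.65 cm^2 each).
dvd_area_cm2 = 134
card_area_cm2 = 1.65
card_tb = 2

cards = int(dvd_area_cm2 / card_area_cm2)
total_tb = cards * card_tb
total_pbit = total_tb * 8 / 1000  # 8 bits/byte, 1000 Tb per Pb
print(cards, total_tb, total_pbit)  # 81 cards, 162 TB, ~1.3 Pb
```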
I find the inverse calculation interesting too, with CPUs. I used to really enjoy NES games. Apparently it's based on a MOS Technology 6502, which has ~3,500 transistors on a 16.6 mm2 area.[1][2]
The most recent Intel CPU I can find is a Haswell-E with transistor data, at 2,600,000,000 transistors on a 356 mm2 area. [3]
I should be able to get an NES that fits in 22um x 22um in the modern era. Which, Wikipedia says, is approximately the size of fog droplets, hair, or, conveniently, the original transistors in the Intel 4004.[4] I could breathe in NES CPUs and barely notice. Fog droplets.
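The scaling works out like this, taking Haswell-E's transistor density as the baseline:

```python
# Scale the 6502's ~3,500 transistors down to Haswell-E density
# (2.6e9 transistors on 356 mm^2).
import math

haswell_density = 2.6e9 / 356        # transistors per mm^2
nes_area_mm2 = 3500 / haswell_density
side_um = math.sqrt(nes_area_mm2) * 1000
print(f"{side_um:.0f} um on a side")  # ~ 22 um
```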
My understanding is that SD cards are not good for even medium-term storage. They have to be powered on regularly or there's a good chance the data will be corrupted or lost entirely.
And most current optical disks rot: the layer written is usually in an organic coating, and beyond that the glue holding the data layer(s) can break down over time. Magnetic storage has been the most reliable stuff I personally have used. Granted, I haven't sought out the 100-year gold discs or anything.
Vinyls warp and scratch. Film degrades and becomes foggy. Books are damaged by too much moisture or not enough moisture. Most clay tablets have been smoothed by erosion.
Optical disk rot is an overstated problem. Every medium has longevity issues. I often deal with 25 year old CDs and old DVDs and they are fine if they were stored properly.
I bet there are a lot of people with an old digital camera or two lying around with some photos on it that they hope to look at one day. And if the same is true for solid-state drives, we will probably see the same behavior with old laptops in about 5-10 years.
Apparently Springer (the publisher behind Nature) has realized that researchers are getting fed up with giving away their work for free to publishers just to have it locked away behind paywalls that they don't profit from and that prevent them from sharing their own articles: https://www.springernature.com/gp/researchers/sharedit
I, for one, am not looking forward to another round of write-once optical storage media. Managing your data on DVD-Rs when they were bigger than your hard disk was a pain in the ass. I've gotten used to the fact that the largest storage media are not only rewritable, but are the same as the primary storage devices you use in your computer.
> like storing uncompressed/minimally compressed video footage, it would be fantastic.
ugh!! I would hate to see the stack of shiny round discs to handle the content from a 3 day shoot on something that doesn't record to crappy MP4 files. You'd get laughed out of the room for suggesting such a thing in a professional environment, much worse than I'm doing to your suggestion here
A "stack"? "Crappy MP4 files"? I don't have any idea what you're talking about.
I'm not talking about recording video straight to disks, but using them for long-term storage. And if a single disc stores a petabyte, then a single disk could handle weeks' worth of shooting a couple hours of uncompressed 8K footage a day.
The amount of data isn't the point. The point is the time it takes to write, and that it's a shite physical format for the purpose. Something susceptible to scratches and the other issues shiny round discs are prone to is just a bad idea.
We're talking about a hypothetical product anyways, so how would you know how much time it took to write?
When you're trying to figure out a format to archive all your source footage in once you're finished with a project, ultra-high capacity optical media could be fantastic -- it's not going to fail catastrophically like hard drives do, and the assumption is that it's far cheaper too. As well as taking up less space.
And for archiving stuff, who cares if it's a bit slower, as long as you can just leave a copy operation running overnight or something? And scratches are a non-issue for archival. Discs are kept in cases or caddies. And basic scratches can be fixed anyways. We're not talking about teenagers throwing loose CD's in their glove compartment here.
If a DVD gets scratched, that'll buff right out. If a drive platter gets scratched, goodbye data! If you don't want it scratched, put it in a caddy. Caddy-load disc drives have been around for years.
Your concerns about scratches are valid for really old technology, like CED, but I don't think those were ever made writeable.
I think we’ve reached a point where drives have grown substantially more than file sizes (a movie was 700mb when my harddrive was <400gb, then 4-12gb when I had 2-3tb; now they’re ~50gb if you want 4k, but a 16tb drive is a few hundred dollars). Aside from movies I have little need for large volumes of storage; music files, photos and documents are insignificant in comparison.
One thing I would like is a cheap backup mechanism capable of 1-20tb - if it’s cheap enough, immutable would be a good thing.
This reminds me of a recent talk by Nvidia's CEO Jensen Huang. He discusses video compression history a bit. He proposes in the future one would just send a very small amount of data & then the computer displaying the video would recreate everything based on prior knowledge of the person talking.
Aside from personal use, it's well known that the amount of data being created every day is growing exponentially. Based on the article these discs are not meant for personal use but for data centers.
A 90fps 8k UHD 10-bit movie that's 3 hours long is 120TB uncompressed. You might think that's too much, muh compression algorithms etc, but we have consumer devices that can use this kind of storage, and no economic means of transporting one-to-many, multi-TB files. I would absolutely welcome them, and use it to distribute the entirety of libgen (hypothetically, don't lock me up yet). Or really high resolution VR porn or documentaries. Either way there's good uses for it. It won't replace your hard drive but there's a market gap to fill.
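The uncompressed-size claim holds up, assuming 3 color channels with no chroma subsampling:

```python
# Uncompressed size of a 3-hour, 90 fps, 8K UHD, 10-bit movie
# (assuming 3 channels per pixel, no chroma subsampling).
width, height = 7680, 4320
bits_per_pixel = 10 * 3
fps, hours = 90, 3

total_bits = width * height * bits_per_pixel * fps * hours * 3600
print(f"{total_bits / 8e12:.0f} TB")  # ~ 121 TB
```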
>"The strategy the researchers used to write the data relies on a pair of lasers. The first, a green 515-nanometer laser, triggers spot formation, whereas the second, a red 639-nm laser, switches off the writing process.
By controlling the time between firing of the lasers, the scientists could produce spots smaller than the wavelengths of light used to create them."
Nice!
More impressive, perhaps, than a 100+ layer CD/DVD/Optical Disk storing 1.6 PB or more in the future (and let's not kid ourselves -- that's pretty impressive!) -- is the process that gets us there...
I wonder what other interesting applications this laser-based process could have in the future...
Perhaps it could be used for future IC lithography (chip manufacturing) of some sort...
There are legitimate reasons for storage to be specified in bits. Having to put everything in bytes is something you have to do for computer memory because it's only byte addressable. This is actually inconvenient and inefficient for a lot of real data. DNA and protein sequences come to mind. But storage isn't just about computers.
Internet and PC memory manufacturers have no excuse, though. It's just bigger numbers despite not being able to work with individual bits.
It's a petabit in raw data, the way they're writing it. After error correction, framing and whatever else is done to the data, the capacity that users care about will be a fair bit smaller.
Parent comment was likely hinting more at the bytes to bits switcheroo. Yes, technically the figure is correct but it's about as honest as a conversation about finance and switching from US dollars to Zimbabwean dollars.
This is exactly what I have been waiting for but development seems to have gone silent. I like the idea of no moving parts. It was supposed to have massive throughput and capacity though by today's standards 125GB is not that big. I would love to have a cartridge that inside has a 3D crystal like the crystals from the older superman movies. That idea to me seems less fragile than a spinning disk and has the potential for substantially higher throughput.
The abstract is talking about bacterial proteins specifically, rather than bacteria. Either way, to keep either stable for long periods of time you need to store them at -80C, unless you want to opt for liquid nitrogen at about -196C. Not sure either is going to be a household advancement any time soon.
It seems dishonest to say it has the same dimensions as a bluray when even that article somewhat says it will be sold as a cartridge, but Sony did make and sell that product.
Pretty cool: it uses 2 femtosecond laser wavelengths for reading and another 2 for writing, with a certain combination of beam shapes to localize the spot to ~50nm, and it can focus in 3D with 1 um spacing. Not clear how much z the spot takes, since they say 100um total with 1um spacing. I think the claim of storage per disc is with 100 layers per side. The "SharedIt" link to the paper reflows the text wrong on my laptop but looked okay on my phone. I want to go back to the Microsoft glass optical storage tech to see how this compares in technique. The comment section here really bums me out.
An LTO-9 tape cartridge can store 18 terabytes (they advertise 45 TB but that's with the built-in compression scheme and we want to compare raw bits to raw bits). One of these new 1.6 petabit discs would be just over 11 tapes, well under a station wagon.
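The tape math, comparing raw capacities:

```python
# How many raw-capacity LTO-9 tapes (18 TB native) one
# 1.6-petabit disc replaces.
disc_tb = 1.6e15 / 8 / 1e12  # 1.6 Pbit -> 200 TB
lto9_tb = 18
print(f"{disc_tb / lto9_tb:.1f} tapes")  # ~ 11.1 tapes
```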
> writing speed of about 100 milliseconds and an energy consumption of microjoules to millijoules
Since this is on an IEEE website, I'm gonna give them the benefit of the doubt and assume that the writer accidentally omitted the unit of data this applies to, but understands that the sentence is meaningless without it.
But the fact that apparently no one proofread the text still makes this a garbage-grade publication.
At least historically, the definition of "byte" has been "the smallest addressable unit" which on most of today's machines is 8 bits, but not all (I believe the SHARC floating point DSPs are still 32-bit bytes, and there are probably more). Using bits for things makes sense from this perspective as the number of bytes would depend on which processor it is connected to.
I know that these days people consider that a byte is 8 bits by definition, but I think that probably isn't a great linguistic move (we already have a word for that - octet)
Bytes are machine-specific though. We've (mostly) settled on 8-bit bytes for a while, but there's no guarantee that this will remain the case in the future.
Perhaps - convention is a powerful thing. I am very confident that any future byte would be a power of two, but I'm not sure that 8 will remain ascendant. A 32-bit byte might be practical: even English-language text is commonly no longer composed of 8-bit characters, so why bother with 8-bit addressing, especially when the majority of the world needs more than 8 bits per character? Memory is cheap, and a little bit of "wasted" space could reduce errors and simplify text handling.
That ship has largely sailed. UTF-8/UTF-16 will be around for a long time to come. It's encoded into data that is archived. It's built into practically everything we use today. It's reasonably space/transmission efficient. It's standardized across many locales. Of course, you can use all the bytes in memory that you want to. Some languages even do!
My mind was blown when I found out that there are invalid utf-8 sequences. I was then impressed to find out that some exploits started out on this premise against software that didn't understand/protect against this. What a mess indeed.