
It is interesting that even with falling HDD prices, S3 costs have remained the same for at least 8 years. There's just not enough competition to push them to reduce costs. But imagine the money this brings in for AWS.


Same with every other aspect of their offerings. Look at EC2, even with instances like m7a.medium: 1 vCPU (not a full core) and 4GB memory for ~$50 USD/month on demand, or ~$35/month reserved for 1 year. It isn't even close to being competitive outside of the other big cloud providers.

EDIT: clarity on monthly pricing.
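
Just as arithmetic on the numbers above, the monthly figure backs out to roughly this hourly on-demand rate (assuming ~730 hours in a month):

    # ~$50/month on demand, ~730 hours per month:
    print(50 / 730)  # ~$0.068/hour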


There is inflation, so it has effectively dropped in price. But your point is taken: inflation pushes prices up far more slowly than technology pushes costs down.


How much have HDD prices really fallen? AFAIK the once-incredible improvements in HDD price per byte have slowed so much that SSDs will eclipse them in a few years.


Flash went from within 2x the price of DRAM in 2012 or so to maybe 40-50x cheaper today, driven somewhat by shrinking feature sizes, but mostly by the shift from SLC (1 bit/cell) to TLC (3 bits) and QLC (4 bits) and from planar to 300+ layer 3D flash.

Flash is near the end of the “S-curve” of those technologies being rolled out.

During that time HDD technology was pretty stagnant, with a mere 2x increase due to higher platter count with the use of helium.

New HDD technologies (HAMR) are just starting their rollout, promising major improvements in $/GB over the next few years.

You can’t just look at a price curve on a graph and predict where it’s going to go. The actual technologies responsible for that curve matter.


> mostly by the shift from SLC (1 bit/cell) to TLC (3 bits) and QLC (4 bits) and from planar to 300+ layer 3D flash

That "and" is doing a lot of work.

In 2012 most flash was MLC.

In 2025 most flash is TLC.

> During that time HDD technology was pretty stagnant, with a mere 2x increase due to higher platter count with the use of helium.

They've advanced more slowly than SSDs, but it wasn't that slow. Between 2012 and 2025, excluding HAMR, capacities have improved from 4TB to 24TB, and prices at the low end have improved from $50/TB to $12/TB.
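
For a rough sense of pace, here's the annual rate implied by those low-end prices (just arithmetic on the numbers above):

    # $50/TB in 2012 to $12/TB in 2025 at the low end:
    years = 2025 - 2012
    rate = (50 / 12) ** (1 / years) - 1
    print(f"~{rate:.1%}/year")  # ~11.6%/year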


This is one of those times a downvote confuses me. I corrected some numbers. Was I accidentally rude? If I made a mistake on the numbers please give the right numbers.

If my first line was unclear: we might say the denser bits give us a 65% density improvement. And quick math shows that an 80-100x improvement is actually nine 65% improvements in a row. So the denser bits per cell aren't doing much; it's pretty much all process improvement.
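
To spell out the quick math (a throwaway Python check using the numbers above):

    # nine compounded 65% improvements vs. the overall 80-100x gain:
    print(1.65 ** 9)  # ~90.6, squarely in the 80-100x range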


It’s mostly 3D, not process.

3D flash is over 300 layers now. The size of a single 300-bit stack on the surface of the chip is bigger than an old planar cell, but that 300x does a lot more than make up for it.

3D NAND isn’t a “process improvement” - it’s a fundamental new architecture. It’s radically cheaper because it’s a set of really cheap steps to make all 300+ layers, not using any of the really expensive lithography systems in the fab, then a single (really complicated) set of steps to drill holes through the layers for the bit stacks and coat the insides of the holes. Chip cost basically = the depreciation of the fab investment during the time a chip spends in the fab, so 3D NAND is a huge win. (just stacking layers by running the chip through the process N times wouldn’t save any money, and would probably just decrease yields)

A total guess - 2x more expensive for the extra steps, bit stacks take 4x more area than planar cells, so a 300-layer part would have 300/(2*4) = 37.5x cheaper bits. (That 4x is pulling a lot of weight - for all I know it might be more like 8x, but the point stands)
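
The same back-of-the-envelope in Python, with every factor explicitly a guess:

    cost_penalty = 2   # guess: extra 3D process steps cost ~2x
    area_penalty = 4   # guess: a bit stack takes ~4x a planar cell's area
    layers = 300
    print(layers / (cost_penalty * area_penalty))  # 37.5x cheaper bits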


I was counting all the 3D manufacturing innovations as "process improvement". I'm not sure why you don't.

Anyway the point stands that bits per cell is barely doing anything compared to making the cells cheaper.


Because they made something different with the same process, instead of making the same thing with a different process. Feature size didn’t get any smaller. (or, rather, you get the order of magnitude improvement without it, and those gains were vastly more than the feature size improvements over that time period)

Also because “process improvement” usually refers to things where you get incremental improvements basically for free as each new generation of fab rolls out. Unless you can invent a 4D flash, this is a single (huge) improvement that’s mostly played out.


> with the same process

Same process node.

Node is part of process, but all the layering and etching techniques they figured out to make 3D cells are also process. At least that's how I see it.

Oh well, I don't want to argue definitions, I just want to clarify what I meant.


Oh, and no one has a solution to make HDDs faster. If anything, they may have gotten slower as they get optimized for capacity instead of speed.

(Well, peak data transfer rate keeps going up as bits get packed tighter, but capacity goes up linearly with areal bit density, while the rate at which bits pass under the head only goes up with its square root.)

(Well, sort of. For a while a lot of the progress came from making the bits skinnier but not much shorter, so transfer rates didn’t go up that much)
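
A tiny sketch of that scaling argument (illustrative only, treating linear bit density as the square root of areal density):

    # if areal density improves by k, capacity improves by k,
    # but the bit rate under the head only improves by sqrt(k):
    k = 4
    print(k, k ** 0.5)  # 4x capacity, only 2x peak transfer rate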


Magnetic hard drives are 100X cheaper per GB than when S3 launched, and about 3X cheaper than in 2016, when the price last dropped. Magnetic prices have actually ticked up recently due to supply chain issues, but HAMR is expected to cause a significant drop (50-75% in $/GB) in magnetic storage prices as it rolls out over the next few years. SSDs are ~$120/TB and magnetic drives are ~$18/TB; this hasn't changed much in the last 2 years.


Reducing costs is the wrong incentive. If you look at a modern vendor such as Splunk or CrowdStrike, they have huge estates in AWS. There are huge swaths of repeating data, both within and across tenants. Rather than pointing this out, it is simpler and more effective to charge the customer for this data/usage, and use simple techniques so that it isn't duplicative. Reducing costs would only incentivize and increase this asinine usage.
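
One such simple technique could be content-addressed storage - a minimal sketch of the idea, not anything these vendors actually document:

    import hashlib

    store = {}  # content hash -> blob, so repeated data is stored once

    def put(blob: bytes) -> str:
        key = hashlib.sha256(blob).hexdigest()
        store.setdefault(key, blob)  # duplicates across tenants dedupe here
        return key

The customer is still billed per byte ingested, while each unique blob is stored only once.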


Relevant paragraph: The species was discovered when a Malaysian amateur photographer, Hock Ping Guek, posted a picture of it to the online photo-sharing site Flickr. A California state entomologist saw it and was unable to identify the species; nor were any colleagues he shared the image with. Eventually, he contacted the photographer and was able to obtain a specimen. Further testing at the Natural History Museum in London confirmed that it was indeed a new species. Its discovery has been described as a triumph of citizen science.


Didn't see a mention of a very fast way of exiting Vim, so sharing:

'The iTerm way': map cmd-l to send the hex codes '0x6A 0x6B 0x3A 0x71 0x0D' (which translate to jk:q<enter>, where jk is my Vim binding to get out of insert mode)

For me, just like any other touch typist, the left thumb is always on cmd and the right ring finger is always on the 'l' key, so this arrangement takes the least amount of time for me to quit Vim.
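
You can check the byte-to-key translation yourself (a Python one-liner):

    # the five hex bytes decode to 'jk:q' plus a carriage return (enter):
    print(bytes([0x6A, 0x6B, 0x3A, 0x71, 0x0D]))  # b'jk:q\r'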


Similar concept: cheat.sh


I'll add it to the "alternatives" section


I remember there was a link on HN which had a list of sites similar to this that are greener in general (i.e. which plant trees, or which run only on green energy like solar, etc.). I've been searching for it but couldn't find it. If someone has it, can they post it? Thanks!



Thanks! Yes, this is what I was looking for.


I've had an instance where the author was reading his book himself, and his accent was so difficult that only then did I start to realize the effort professional narrators must put in to make a book listenable. I had to play that book at 0.9x to be able to follow it.


I've been listening to audiobooks continuously while commuting for the last couple of years or so. To increase retention, I've done a few things which have helped (I don't have enough time to go through the physical book or ebook - I tried a few times - so I can't comment on the comparison there):

1. Driving slowly, so as to still keep safety as the top priority, and so that you miss fewer moments when you have to take your mind off listening and onto the things happening on the road (talking about Bangalore's, i.e. India's, traffic here). The slight increase in commute time (I think around 10%, and definitely not more than 20%) is completely worth it. This actually made my driving safer in general too.

2. I have a 'repeat last 30 seconds' function at my fingertips. You'll invariably miss a portion, after which (sometimes) the story stops making sense. The attitude that worked for me: it's okay to spend a lot more time re-listening to a part multiple times than to let laziness take over and tell yourself it's an unimportant portion that's okay to miss.

3. Taking out 5-10 minutes after a commute to write notes about what I think are the important learnings that shouldn't be forgotten with time. I believe if the goal is to not let the commute time go to waste, then this note-writing time is also part of it; without it your learning is incomplete. It's way better than remembering only something like less than 2-5% of the book after a year. I use Microsoft OneNote (this trumped Vim because I can edit/read my notes on the go without a laptop). One area of improvement: I need some way to _remind_ myself to read those notes :)

4. Forcing myself to re-listen to books I found too useful to let my mind forget, instead of jumping to the next interesting book. I read mostly non-fiction of a specific category (scientific-study-oriented books about humans, their interactions, behaviour and flaws). Forcing re-learning is partly because after a while I felt I was reading less important books (i.e. I'm running out of extremely good books in this area), and partly due to forgetting to transfer the next new book to my phone.

5. I've lately realized that cramming all your free time with reading/learning isn't helpful either. You should have a reasonable amount of time to 'ruminate' each day, i.e. time when you're doing nothing (social networks, news, sports, or any screen time in general don't count).

Feel free to ask questions. Also, feel free to provide suggestions and book recommendations. I'm all ears :)


If the DynamoDB folks can come up with a mechanism similar to the EC2 CPU credits concept, that'd be a really nice feature to have. Of course it can't be as straightforward as CPU credits.
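
Something like the token-bucket logic behind CPU credits - a hypothetical sketch, not how DynamoDB actually works:

    import time

    class BurstCredits:
        """Hypothetical token bucket: unused provisioned capacity
        accrues as credits that can be spent to burst later."""
        def __init__(self, provisioned_per_sec, max_credits):
            self.rate = provisioned_per_sec
            self.credits = 0.0
            self.cap = max_credits
            self.last = time.monotonic()

        def try_consume(self, units):
            now = time.monotonic()
            # idle provisioned capacity accrues as credits, up to a cap
            self.credits = min(self.cap, self.credits + (now - self.last) * self.rate)
            self.last = now
            if units <= self.credits:
                self.credits -= units
                return True   # request allowed (possibly a burst)
            return False      # throttled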


I realized after reading the article that most people (including me, unfortunately) read it as 'Problems WE have with Python'. Maybe a line by the author at the top or bottom of the article, reiterating that these are the problems 'he' has with Python -- I know nobody would think such a second clarification should be necessary, but hey, we're humans! -- would help.


Yeah, I'll keep that in mind -- some people, even programmers, apparently don't like to read carefully. :)


CLIs are designed for automation and are not as human-friendly as they could be. I think we can make a CLI that rates very high on ease of use. Taking the AWS CLI as an example, I built a project which has convenience as the top priority.

