
According to the table in the article:

- 2011: 32nm

- 2012: 22nm

- 2014: 14nm

- 2018?: 10nm

I don't know much about foundry processes, but it seems that it's taking more and more time for lesser and lesser gains, right? At this rate, how long until we reach sub nanometer? What are the physical limits on these processes, and does it have any implications for end users? Will we be using 2nm CPUs for 50 years?

Would love to hear the thinking of anyone educated on the topic.

Edit: very intrigued by the sustained downvotes on this ¯\_(ツ)_/¯




The step from 14 to 10 nm is huge, both from a technological perspective on the manufacturing side and in its effect on the number of transistors on a die and the power those transistors consume. Remember that power consumption and transistor count are related to surface area, so there is a square factor in there: 14² = 196, 10² = 100, so that's almost a doubling of the number of transistors and approximately a halving of the power required per transistor for a given die area.
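The square-law arithmetic above can be sketched in a few lines (an idealized model; real shrinks never hit the theoretical figure exactly):

```python
# Idealized area scaling between process nodes: transistor count for a
# fixed die area scales roughly with the square of the linear feature
# size, and power per transistor with its inverse.
def scale_factor(old_nm: float, new_nm: float) -> float:
    """Ideal density multiplier when moving from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

print(scale_factor(14, 10))  # 1.96 -- almost 2x the transistors per mm^2
```

The same function gives 484/196 ≈ 2.47x for the earlier 22 nm to 14 nm step.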


Okay, so the node names are effectively useless at this point. They used to refer to gate length, but no longer, even for Intel. Oh, and Intel's 10nm will actually have lower performance than their 14nm.

Besides, it matters not; the bottlenecks today are in memory and interconnects.


> Oh, and Intel's 10nm will actually have lower performance than their 14nm.

Less than 14++, sure, but 10+ and 10++ will fix that.


Yes, but I was just pointing out that scaling is now neither a panacea nor free.


Performance compared to what? Same power, Same price, or just the performance of the fastest processor of the series?


At some point, maybe we'll start seeing RAM put onto the CPU itself.

I mean, more than it already is (with cache).


You will see stacked memory and silicon interposers, but you won't see main memory on the CPU die. DRAM is based on an array of what are called "trench capacitors." The fabrication process is sufficiently different that they don't even make these in the same facility, much less on the same die process. An array of trench capacitors will always be smaller than transistor-based memory (SRAM).


There are alternatives on the horizon, e.g. STT-RAM, MRAM, memristors, PCMs, ReRAM (same as memristors according to some...), spintronic devices, etc.

There are also other attempts such as FBRAM to replicate DRAM structures without the need for insane A/R trench capacitors.

I believe that such solutions are necessary to continue scaling.


It is not a big problem to make DRAM on pretty much any SOI process; it's just that power consumption and refresh rates will have to be quite high.

The problem with MRAM is unreliable reads. It's excellent for low-clock-speed devices, but as you go into the gigahertz range the signal quality of an MRAM cell begins to degrade, and you have to put a Darlington pair on top of it, or a BiCMOS transistor, thus negating its cell-size advantage.


For DRAM, are you talking about standard deep-trench-capacitor DRAM or FBRAM?

Agree with MRAM, but it is also a very immature technology, so there's hope at least. Unless you're talking about crossbar crosstalk, which can be solved with a diode.


About floating-body RAM and others that rely on the capacitance of the substrate itself rather than a dedicated capacitor.


Ah, I see. But it's not significantly higher than other DRAMs, right? At least last I checked the difference wasn't that big.


I believe the data published by people peddling embedded DRAM IP is their "best case scenario," and still requires significant alteration to the manufacturing process.


While true, the same math holds for going from 22 nm (22² = 484) to 14 nm (14² = 196).

Real world gains will never be as high as the math suggests, as you get into leakage currents etc.


Sure, I never meant to imply that previous process steps were much smaller, just that this one is still formidable in its own right. Real-world gains will not be 100%, but they're a very large fraction of that. Obviously any technological advance in a mature industry is going to show reduced return on investment at some point; it's rather surprising that the ROI on these process shrinks is still worth it, given that we are now well beyond what was thought possible not all that long ago.


Yeah, so the node names now apparently refer to the "smallest feature size," which is some random thing on the M0 metal layer. Source: a former Intel engineer of more than a decade.


So, not unlike when game consoles used to advertise how many "bits" they had: take whatever has the widest bus and advertise that as the number of "bits," or use tricks like the Atari Jaguar: 2x 32-bit CPUs = 64-bit, right? RIGHT?


That's how Intel defines it; other fabs have their own definitions.


There are various limits in play: https://www.extremetech.com/computing/97469-is-14nm-the-end-.... At sub-nanometer we're talking about features of 5-10 atoms across. At that scale you get effects like electrons quantum tunneling between transistors: https://www.theverge.com/circuitbreaker/2016/10/6/13187820/o.... We probably won't get there with existing silicon technology.


I read on HN, a couple months ago, that these numbers no longer represent the physical size of anything but are now just a marketing label, a sort of "performance equivalent to a theoretical size of."

Anyone know if there's any truth to this? Might try to find the comment later when I have time.


Think of them as relative indications of feature size and of the spacing between identical parts (arrays, if you want a software analogy). So even if an actual transistor is not 10 nm or 14 nm, the relative sizes on one axis will still relate as 10 to 14. Sticking to numbers from a single manufacturer will definitely aid the comparison.

There is a ton of black magic going on here, with layers being stacked vertically and masks bearing no obvious visual resemblance to the shape they project onto the silicon. That's due to the interaction between the photons / x-rays and the masks: the required resulting image is small relative to the wavelength of the particles used to project it.

There is a super interesting YouTube video floating around about this that I highly recommend; it's called 'Indistinguishable From Magic':

https://www.youtube.com/watch?v=NGFhc8R_uO4

It's up to date to 22 nm. Highly recommended.


Here is the talk a few years later: https://www.youtube.com/watch?v=KL-I3-C-KBk


I've seen that video several times now and it just doesn't cease to amaze me.

It's really a must-see for anyone interested in processor technology.

Oh, there's a new video! Awesome!


If you think that lithography is challenging, your brain is going to invert looking at the litho technology for 7nm and onwards :)


Yeah, 7nm doesn't mean anything now. Refer to them effectively as product names. Intel is really no better.

The "7nm/10nm" transistor fin pitches are something like ~50 n, and the length is something like 100-150nm


I don't have any direct experience with "deep submicron" stuff, but from what I've read you basically can't trust these numbers to be comparable. The various sizes/spacings don't scale together the way they did for larger feature sizes, so you could have e.g. a "14nm" process where the area of an SRAM cell, NAND gate etc. ends up the same size as another foundry's "20nm" process even though the actual transistors are smaller.


Intel's are actual measurements. TSMC's, GF's, and Samsung's are all marketing BS.


They're all marketing, Intel is no exception. At 40nm and over, the Intel node names were larger than the industry average, now it's the other way around.


> I don't know much about foundry processes, but it seems that it's taking more and more time for lesser and lesser gains, right? At this rate, how long until we reach sub nanometer? What are the physical limits on these processes, and does it have any implications for end users? Will we be using 2nm CPUs for 50 years?

The lattice constant of crystalline silicon is 0.543 nm, and since it's a diamond-cubic structure (two interpenetrating FCC lattices), the distance between neighboring atoms is about 0.235 nm. So with a hypothetical 2 nm CPU, some feature would be only roughly 8-9 atoms across, which leaves VERY little room for manufacturing tolerance. How would one manufacture a device containing billions of transistors with such small tolerances? I don't think we'll ever see such things, at least not with a lithography approach.

Heck, I think it's close to black magic that they are able to produce transistors on a 10nm process, but apparently experts say that down to 5nm (roughly 21 atoms!) might be possible.
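The atom-counting estimate can be sketched as follows. Note the ~0.235 nm nearest-neighbor spacing of diamond-cubic silicon is an assumption here; the count changes depending on which interatomic spacing you use:

```python
# Back-of-the-envelope: how many silicon atoms span a given feature size?
# Assumes the diamond-cubic nearest-neighbor distance of ~0.235 nm.
SI_NEAREST_NEIGHBOR_NM = 0.235

def atoms_across(feature_nm: float) -> float:
    """Approximate number of atoms spanning a feature of the given size."""
    return feature_nm / SI_NEAREST_NEIGHBOR_NM

for node in (10, 5, 2):
    print(f"{node} nm ~= {atoms_across(node):.0f} atoms across")
```

Whichever spacing you pick, the conclusion is the same: single-digit-nanometer features are only a handful of atoms wide.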


I think that beyond our current processes, there is the potential for different materials to take the place of XXnm silicon processes, which could fit more transistors in a smaller area.

Research like: http://news.stanford.edu/press-releases/2017/08/11/new-ultra...

"3D" processes which build multiple layers on top of one another may also see more investment as other methods become prohibitively expensive. And once you've cheaply gone to 2 layers in a big blob of epoxy, what's stopping you from doing 4? 8? 16? 32? [Heat dissipation, probably]

But whatever, people have been saying Moore's Law is dead since Moore's Law was invented. Who knows whether we'll technically hit one milestone or another. Things get faster, what the hell.


People are already stacking multiple layers today for memory, although the layers are always manufactured separately and then bonded together in a separate step.

I wouldn't be surprised to see more of that in the future, think caches on top of cores, but I doubt we'll ever see multiple layers of transistors produced in a single step. Technical challenges aside, the economics of trying to produce multiple layers at once are just always going to be worse: higher latency from a wafer entering a fab to finishing the wafer, and much higher rate of defects. (When you produce the layers separately, you can potentially test them separately before putting them together, which is a huge win for defect rate.)

It's possible that manufacturing multiple layers at once might eventually allow for a higher density of vertical interconnects, but I just don't see that becoming the deciding factor.
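The defect-rate argument above can be made concrete with a toy yield model (all numbers here are illustrative assumptions, not industry data):

```python
# Toy yield model contrasting monolithic multi-layer manufacturing with
# test-then-bond stacking. In the monolithic case a defect in any layer
# kills the whole stack; with known-good dies bonded together, only the
# bonding steps can still fail.
def monolithic_yield(per_layer: float, layers: int) -> float:
    """Yield when all layers are built in one uninterruptible process."""
    return per_layer ** layers

def bonded_yield(per_layer: float, layers: int, bond: float) -> float:
    """Yield when layers are tested before bonding; defective dies are
    screened out, so only the (layers - 1) bonding steps remain risky."""
    return bond ** (layers - 1)

p, n, bond = 0.85, 4, 0.98  # hypothetical per-layer and bonding yields
print(monolithic_yield(p, n))   # ~0.52 of stacks survive
print(bonded_yield(p, n, bond)) # ~0.94 of stacks survive
```

Even with generous per-layer yields, the monolithic approach decays exponentially in the layer count, which is the economic point the comment makes.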


It's widely assumed 5nm is the limit. I know I've seen others discuss some ideas around how close they can get, but I'm struggling to find the thread. In any case, this may help: https://en.wikipedia.org/wiki/5_nanometer

Found it: http://semiengineering.com/will-7nm-and-5nm-really-happen/ and the HN discussion from several years ago: https://news.ycombinator.com/item?id=7920108


Also: "Is 7nm The Last Major Node?" https://semiengineering.com/7nm-last-major-node/


Going from 14nm to 10nm increases the number of transistors per mm² from 37 million to 100 million. That's a huge difference!


This is a sensible question. Not sure either why the downvotes. I'm curious as to the answer myself. Although, I don't know if we'll ever see sub-nanometer. Maybe that's the reason for the downvotes, that sub-nanometer is not really in the realm of what's possible with current CPU architectures and the physics of silicon. Although, that's simply based on today's physics. Who truly knows what the future will bring.


Didn't 14nm arrive late 2015 or in 2016?


If you consider relative change it looks like

2011: 32nm

2012: 31% size reduction

2014: 36% size reduction

2018: 28.5% size reduction
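Those reductions can be recomputed directly from the node sizes (matching the roughly 31%, 36%, and 28.5% figures above):

```python
# Per-node linear feature-size reductions for the 32 -> 22 -> 14 -> 10 nm
# sequence listed above.
def reduction_pct(old_nm: float, new_nm: float) -> float:
    """Linear feature-size reduction between two nodes, in percent."""
    return (1 - new_nm / old_nm) * 100

nodes = [(2011, 32), (2012, 22), (2014, 14), (2018, 10)]
for (_, prev), (year, cur) in zip(nodes, nodes[1:]):
    print(f"{year}: {reduction_pct(prev, cur):.1f}% size reduction")
```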


Yes, but in 1, 2, and 4 years respectively.


In 8 years, in 2026, we expect 7nm.


All you get now is power reduction. Performance has been completely bottlenecked since 22nm. Peak computing!



