I don't know much about foundry processes, but it seems that it's taking more and more time for lesser and lesser gains, right? At this rate, how long until we reach sub nanometer? What are the physical limits on these processes, and does it have any implications for end users? Will we be using 2nm CPUs for 50 years?
Would love to hear the thinking of anyone educated on the topic.
Edit: very intrigued by the sustained downvotes on this ¯\_(ツ)_/¯
The step from 14 nm to 10 nm is huge, both from a technological perspective on the manufacturing side and in its effect on the number of transistors on a die and the power consumption of those transistors. Remember that transistor count and power consumption are related to surface area, so there is a square factor in there: 14² = 196 versus 10² = 100, which is almost a doubling of the number of transistors, and approximately a halving of the power required per transistor, for a given die area.
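The area arithmetic above can be sketched directly. This is an idealized calculation that treats node names as literal linear feature sizes, which (as other comments in this thread point out) they no longer are:

```python
# Idealized density scaling between two process nodes, treating the
# node names as literal linear feature sizes. Real-world gains are
# smaller, since node names are now mostly marketing labels.
def scaling_factor(old_nm: float, new_nm: float) -> float:
    """Ratio of transistor counts for a fixed die area, assuming
    density scales with the inverse square of the linear feature size."""
    return (old_nm / new_nm) ** 2

factor = scaling_factor(14, 10)
print(f"14nm -> 10nm ideal density factor: {factor:.2f}x")  # 1.96x
```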
Okay, so the node names are effectively useless at this point. They used to refer to gate length, but no longer, even for Intel. Oh, and Intel's 10nm will actually have lower performance than their 14nm.
Besides, it matters not, the bottlenecks today are in memory and interconnects.
You will see stacked memory and silicon interposers, but you won't see main memory on the CPU die. DRAM is based on an array of what are called "trench capacitors." The fabrication process is sufficiently different that they don't even make these in the same facility, much less on the same die process. An array of trench capacitors will always be smaller than transistor-based memory (SRAM).
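The density gap can be made concrete with the conventional cell-area figures: a DRAM cell is usually quoted at about 6F² (F = minimum feature size), while a 6-transistor SRAM cell is well over 100F². The ~150F² figure below is a rough industry ballpark, not a datasheet value:

```python
# Ballpark cell-area comparison between DRAM and SRAM.
# 6F^2 for DRAM is the conventional figure; ~150F^2 for a 6T SRAM
# cell is an approximate ballpark, not a value for any real process.
def cells_per_mm2(feature_nm: float, cell_area_f2: float) -> float:
    """Cells per square millimetre for feature size F (in nm) and a
    cell area expressed in units of F^2."""
    f_mm = feature_nm * 1e-6           # nm -> mm
    cell_mm2 = cell_area_f2 * f_mm**2  # cell area in mm^2
    return 1.0 / cell_mm2

dram = cells_per_mm2(20, 6)    # DRAM cell, 20 nm feature size
sram = cells_per_mm2(20, 150)  # 6T SRAM cell, same feature size
print(f"DRAM/SRAM density ratio: {dram / sram:.0f}x")  # 25x
```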
It is not a big problem to make DRAM on pretty much any SOI process; it's just that power consumption and refresh rates will have to be quite high.
The problem with MRAM is unreliable reads. It's excellent for low-clock-speed devices, but as you go into the gigahertz range, the signal quality of an MRAM cell begins to degrade, and you have to put a Darlington pair on top of it, or a BiCMOS transistor, thus negating its cell-size advantage.
For DRAM, are you talking about standard deep-trench capacitor DRAM or FBRAM?
Agreed on MRAM, but it is also a very immature technology, so there's hope at least. Unless you're talking about crossbar crosstalk, which can be solved with a diode.
I believe the data published by people peddling embedded DRAM IP is their best-case scenario, and still requires significant alteration to the manufacturing process.
Sure, I never meant to imply that previous process steps were much smaller, just that this one is still formidable in its own right. Real-world gains will not be 100%, but they're a very large fraction of that. Obviously any technological advance in a mature industry is going to show reduced return on investment at some point; it's rather surprising that the ROI on these process shrinks is still worth it, given that we are now well beyond what was thought possible not all that long ago.
Yeah, so the node names now apparently refer to the "smallest feature size," which is some fairly arbitrary dimension on the M0 metal layer. (Source: a former Intel engineer of more than a decade.)
So, nothing like when game consoles used to advertise how many "bits" they had: take whatever has the widest bus and advertise that as the number of "bits," or use tricks like the Atari Jaguar (2x 32-bit CPUs = "64-bit"). Right? RIGHT?
I read on HN a couple of months ago that these numbers no longer represent the physical size of anything, but are now just a marketing label, a sort of "performance equivalent to a theoretical size of".
Anyone know if there's any truth to this? Might try to find the comment later when I have time.
Think of them as relative indications of feature size and of the spacing between identical parts (arrays, if you want a software analogy). So even if an actual transistor is not 10 nm or 14 nm, the relative sizes on one axis will still relate as 10 to 14. Sticking to numbers from a single manufacturer will definitely aid the comparison.
There is a ton of black magic going on here, with layers being stacked vertically and masks bearing no obvious visual resemblance to the shape they project on the silicon. That's because of the interaction between the photons (or X-rays) and the masks: the required resulting image is small relative to the wavelength used to project it.
There is a super interesting YouTube video floating around about this that I highly recommend; it's called "Indistinguishable From Magic":
I don't have any direct experience with "deep submicron" stuff, but from what I've read you basically can't trust these numbers to be comparable. The various sizes/spacings don't scale together the way they did for larger feature sizes, so you could have e.g. a "14nm" process where the area of an SRAM cell, NAND gate etc. ends up the same size as another foundry's "20nm" process even though the actual transistors are smaller.
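That comparison can be phrased in terms of published SRAM cell areas rather than node names. The cell areas below are hypothetical illustrative values, not real foundry figures; the point is only that two differently named nodes can land at essentially the same density:

```python
# Why node names alone don't determine density: compare actual
# SRAM bitcell area (um^2) instead. The values below are
# HYPOTHETICAL, chosen to illustrate the point, not real data.
processes = {
    "Foundry A '14nm'": 0.080,  # um^2 per SRAM bitcell (made up)
    "Foundry B '20nm'": 0.081,  # um^2 per SRAM bitcell (made up)
}
for name, cell_um2 in processes.items():
    # 1 mm^2 = 1e6 um^2; report density in Mbit per mm^2
    mbit_per_mm2 = (1e6 / cell_um2) / 1e6
    print(f"{name}: {mbit_per_mm2:.1f} Mbit/mm^2")
```

Despite a "14 vs 20" name gap, the two hypothetical processes come out within about 1% of each other in bitcell density.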
They're all marketing, Intel is no exception. At 40nm and over, the Intel node names were larger than the industry average, now it's the other way around.
> I don't know much about foundry processes, but it seems that it's taking more and more time for lesser and lesser gains, right? At this rate, how long until we reach sub nanometer? What are the physical limits on these processes, and does it have any implications for end users? Will we be using 2nm CPUs for 50 years?
The lattice constant of crystalline silicon is 0.54 nm. Silicon has the diamond cubic structure (an FCC lattice with a two-atom basis); the bond length between neighboring atoms is about 0.235 nm, and atoms on the same sublattice sit about 0.38 nm apart. So in a hypothetical 2 nm CPU, some feature would be only around 5 to 8 atoms across, which leaves VERY little room for manufacturing tolerance. How would one manufacture a device containing billions of transistors with such small tolerances? I don't think we'll ever see such things, at least not with a lithography approach.
Heck, I think it's close to black magic that they are able to produce transistors on a 10nm process, but apparently experts say that up to 5nm (around 13 atoms at that 0.38 nm spacing!) might be possible.
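The atom-counting above is easy to reproduce. This uses the standard silicon figures (bond length ~0.235 nm, same-sublattice spacing ~0.384 nm from the 0.543 nm lattice constant); the feature sizes are taken at face value, which real node names no longer warrant:

```python
# How many silicon atoms span a given feature size?
# Bond length in crystalline silicon is ~0.235 nm; atoms on the same
# FCC sublattice are ~0.384 nm apart (lattice constant 0.543 nm).
BOND_NM = 0.235
SUBLATTICE_NM = 0.384

for feature_nm in (10, 5, 2):
    atoms = feature_nm / SUBLATTICE_NM  # same-sublattice spacings
    bonds = feature_nm / BOND_NM        # bond lengths
    print(f"{feature_nm} nm ~= {atoms:.0f} sublattice spacings "
          f"({bonds:.0f} bond lengths)")
```

At 0.38 nm spacing, 5 nm works out to roughly 13 atoms and 2 nm to roughly 5, matching the numbers quoted above.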
I think that beyond our current processes, there is the potential for different materials to take the place of XXnm silicon processes, which could fit more transistors in a smaller area.
"3D" processes which build multiple layers on top of one another may also see more investment as other methods become prohibitively expensive. And once you've cheaply gone to 2 layers in a big blob of epoxy, what's stopping you from doing 4? 8? 16? 32? [Heat dissipation, probably]
But whatever, people have been saying Moore's Law is dead since Moore's Law was invented. Who knows whether we'll technically hit one milestone or another. Things get faster, what the hell.
People are already stacking multiple layers today for memory, although the layers are always manufactured separately and then bonded together in a separate step.
I wouldn't be surprised to see more of that in the future, think caches on top of cores, but I doubt we'll ever see multiple layers of transistors produced in a single step. Technical challenges aside, the economics of trying to produce multiple layers at once are just always going to be worse: higher latency from a wafer entering a fab to finishing the wafer, and much higher rate of defects. (When you produce the layers separately, you can potentially test them separately before putting them together, which is a huge win for defect rate.)
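The defect-rate argument can be made concrete with the textbook Poisson yield approximation (yield = e^(-D·A) for defect density D and die area A); the numbers here are illustrative, not figures for any real fab. A monolithic n-layer part must have every layer defect-free on the same die, while separately tested, known-good layers only pay an (assumed) per-bond yield:

```python
import math

# Poisson yield model: yield = exp(-D * A) for defect density D
# (defects per cm^2) and die area A (cm^2). All numbers illustrative.
def layer_yield(defect_density: float, area_cm2: float) -> float:
    return math.exp(-defect_density * area_cm2)

D, A, LAYERS = 0.1, 1.0, 4
y = layer_yield(D, A)

# Monolithic: all layers must come out defect-free on one die.
monolithic = y ** LAYERS

# Bonded known-good layers: defective layers are screened out before
# assembly, so only an assumed per-bond yield is paid per stack step.
BOND_YIELD = 0.99  # assumption
bonded = BOND_YIELD ** (LAYERS - 1)

print(f"per-layer yield:          {y:.3f}")
print(f"monolithic 4-layer yield: {monolithic:.3f}")
print(f"bonded known-good yield:  {bonded:.3f}")
```

Under these toy numbers the monolithic stack loses roughly a third of its parts while the bonded stack loses only a few percent, which is the economic point made above.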
It's possible that manufacturing multiple layers at once might eventually allow for a higher density of vertical interconnects, but I just don't see that becoming the deciding factor.
It's widely assumed that 5nm is the limit. I know I've seen others discuss ideas about how close they can get, but I'm struggling to find the thread.
In any case, this may help:
https://en.wikipedia.org/wiki/5_nanometer
This is a sensible question, and I'm not sure why the downvotes either; I'm curious about the answer myself. I don't know if we'll ever see sub-nanometer, though. Maybe that's the reason for the downvotes: sub-nanometer is not really in the realm of what's possible with current CPU architectures and the physics of silicon. Then again, that's based on today's physics; who truly knows what the future will bring.
- 2011: 32nm
- 2012: 22nm
- 2014: 14nm
- 2018?: 10nm