That article is hogwash. Sure, the Cerebras "chip" is impressive. But the idea that it will accelerate Moore's law and usher in the singularity is nonsense. Nobody has yet made a serious effort to use deep learning for physical design, and even in theory its scope for improving designs is limited at best.
If this were aimed at solid-state physics and materials research, then maybe one could be cautiously optimistic about a genuine breakthrough via something like room-temperature, standard-pressure superconductivity. As it stands, I call blind hype.
Agreed. One of the things that fascinates me about technology is how the new thing is always treated as magic. I'm hoping we're almost out of that phase for ML, as the hype is exhausting.
Yah, it's BS. But it may be teaching TSMC a whole lot about making larger chips with good yield, and the cross-reticle interconnect technology is impressive too; it may find broader applicability (e.g. it sounds like something AMD might like).
Oh, for sure there are things to be learned from this. The responsibility for yield doesn't lie with TSMC here, though, but with the logic design: to make this kind of integration work, your design has to tolerate a fault essentially anywhere on the wafer surface.
This isn't magic, of course: keep in mind that we already build SRAM with spare capacity for fault tolerance, and binning multi-core chips by the number of functioning cores has been standard practice for a long time.
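To make the SRAM analogy concrete, here's a toy sketch of the spare-row idea (the class, names, and numbers are my illustration, not any vendor's actual repair scheme): rows found bad at wafer test get transparently remapped to spares, and a part with more bad rows than spares gets binned out.

```python
# Hypothetical illustration of SRAM spare-row repair.
class RepairableSram:
    def __init__(self, rows: int, spare_rows: int, bad_rows: set[int]):
        if len(bad_rows) > spare_rows:
            raise ValueError("not enough spares: part is binned out")
        spares = iter(range(rows, rows + spare_rows))
        # Remap table; on real silicon this is burned into fuses at test.
        self.remap = {bad: next(spares) for bad in bad_rows}
        self.cells = [0] * (rows + spare_rows)

    def _row(self, r: int) -> int:
        return self.remap.get(r, r)  # redirect bad rows to spares

    def write(self, r: int, value: int) -> None:
        self.cells[self._row(r)] = value

    def read(self, r: int) -> int:
        return self.cells[self._row(r)]

sram = RepairableSram(rows=1024, spare_rows=16, bad_rows={7, 300})
sram.write(7, 0xFF)          # transparently lands in a spare row
assert sram.read(7) == 0xFF  # software never sees the defect
```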
Designing to tolerate failures is only one variable in yield, though. Wafer-scale integration pushes both our ability to tolerate defects and our ability to minimize them to their limits.
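As a rough back-of-the-envelope illustration of why both levers matter, here's a toy Poisson yield model. All the numbers (defect density, areas, spare budget) are made up, and real foundry models are more sophisticated (e.g. negative binomial), but it shows the shape of the problem:

```python
# Toy first-order yield model, assuming Poisson-distributed killer defects.
from math import exp

def die_yield(area_cm2: float, d0: float) -> float:
    """P(zero killer defects on a die) under the classic Poisson model."""
    return exp(-area_cm2 * d0)

def redundant_yield(n_blocks: int, spares: int,
                    block_area_cm2: float, d0: float) -> float:
    """Yield when the design still works with up to `spares` dead blocks;
    the defective-block count is approximated as Poisson."""
    lam = n_blocks * (1.0 - die_yield(block_area_cm2, d0))
    term = total = exp(-lam)             # P(X = 0)
    for k in range(1, spares + 1):       # accumulate P(X <= spares)
        term *= lam / k
        total += term
    return total

D0 = 0.1       # killer defects per cm^2 -- illustrative only
WAFER = 462.0  # roughly WSE-sized silicon area, cm^2

print(f"one reticle-limit die (8 cm^2): {die_yield(8.0, D0):.3f}")
print(f"whole wafer, no redundancy:     {die_yield(WAFER, D0):.2e}")
print(f"400k cores, 1% spares:          "
      f"{redundant_yield(400_000, 4_000, WAFER / 400_000, D0):.3f}")
```

With no redundancy, a monolithic wafer-sized die would essentially never yield, while with fine-grained blocks and a small spare budget the yield is driven by the spare count rather than raw area. That's the whole trick, and it still only works if the fab keeps d0 low.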