
Free version of the research paper:

https://arxiv.org/abs/2503.19544v1

The memory cell is huge in comparison with semiconductor memories, but it is very fast, with a 40 GHz read/write speed.
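
Rough scale for that speed claim (my comparison figures, not the paper's): at one bit per cycle, a 40 GHz cell moves ~40 Gbit/s, several times the rate of a single DDR5-6400 data pin. A quick sketch in Python:

    # Scale check on the 40 GHz figure (assumption: one bit per cycle).
    photonic_cell_gbps = 40.0   # 40 GHz read/write -> ~40 Gbit/s per cell
    ddr5_pin_gbps = 6.4         # one DDR5-6400 data pin, for comparison
    print(f"~{photonic_cell_gbps / ddr5_pin_gbps:.0f}x a DDR5 pin")  # ~6x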

There are important applications for a very high-speed small memory, e.g. digital signal processing in radar and similar devices, but this will never replace general-purpose computer memory, where much higher bit densities are needed.
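
To see how far off the densities are, a back-of-the-envelope comparison (assuming one bit per ~330 µm × 290 µm cell, the figure cited downthread, and a modern DRAM cell of very roughly 0.002 µm², order of magnitude only):

    # Bit-density gap, photonic cell vs DRAM (rough assumed figures).
    photonic_cell_um2 = 330 * 290   # ~9.6e4 um^2 per bit, from downthread
    dram_cell_um2 = 0.002           # very rough modern DRAM cell area
    print(f"~{photonic_cell_um2 / dram_cell_um2:.0e}x less dense")  # ~5e7x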

This is a hugely important point. The de Broglie wavelength of the photon is hundreds to thousands of nm. There is no possibility of VLSI-style scale-up, a point conveniently omitted from hundreds of pitch decks backing at least $1B of investment. Photonic techniques will remain essentially part of the analog palette in system design.
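
The floor is easy to estimate: light can't be confined much below ~λ/2n. With my assumed numbers (λ = 1550 nm telecom light, silicon with n ≈ 3.5), that is a couple hundred nm, an order of magnitude above a ~20 nm DRAM half-pitch, before any real device overhead:

    # Diffraction-limit floor for photonic features (assumed numbers).
    lam_nm = 1550.0                 # telecom-band wavelength
    n_si = 3.5                      # refractive index of silicon
    floor_nm = lam_nm / (2 * n_si)  # ~221 nm minimum confinement scale
    print(f"~{floor_nm:.0f} nm vs ~20 nm DRAM half-pitch")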

Careful about "never"… individual transistors used to be large, heavy, power-hungry, and expensive.

That's not true. Transistors were commercialized a few years after their invention, and already the first generation vastly outperformed vacuum tubes in size, weight, and power. Optical computing has been pursued for decades now with very little progress.

(I was being a little facetious – vacuum tubes being the original "transistors".)

I might have done the math wrong, but is this really supposed to be 330 × 290 µm² × 128 GiB × 8 ≈ 10^5 m² big? And this is the RAM one expects per cluster node for current LLM AI, never mind future AGI.
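
Redoing that arithmetic (assuming one bit per 330 µm × 290 µm cell and zero overhead for addressing, waveguides, or packaging):

    # Floor area of 128 GiB at one bit per 330 um x 290 um cell.
    cell_um2 = 330 * 290              # ~9.57e4 um^2 per bit
    bits = 128 * 2**30 * 8            # 128 GiB = 2**40 bits
    area_m2 = cell_um2 * bits / 1e12  # 1 m^2 = 1e12 um^2
    print(f"~{area_m2:,.0f} m^2")     # ~105,000 m^2

So on the order of 10^5 m² of cells for a single 128 GiB node.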

NVLink Spine has 2 miles of wires.


