The memory cell is huge in comparison with semiconductor memories, but it is very fast, with a 40 GHz read/write speed.
There are important applications for a very high-speed small memory, e.g. for digital signal processing in radars and similar devices, but this will never replace general-purpose computer memory, where much higher bit densities are needed.
This is a hugely important point. The de Broglie wavelength of the photon is hundreds to thousands of nm. There is no possibility of VLSI scale-up, a point conveniently omitted in hundreds of decks and at least $1B in investment. Photonic techniques will remain essentially a part of the analog palette in system design.
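To put rough numbers on that scaling gap, here is a quick sketch. The wavelength, refractive index, and transistor pitch below are generic ballpark figures, not taken from the paper:

```python
# Rough comparison of optical feature sizes with modern transistor pitch.
# Diffraction limits photonic components to roughly lambda/2 in-material,
# while CMOS features sit at tens of nm.
telecom_nm = 1550                         # standard telecom wavelength, free space
n_silicon = 3.5                           # approximate refractive index of Si at 1550 nm
in_material_nm = telecom_nm / n_silicon   # ~443 nm in silicon
half_wave_nm = in_material_nm / 2         # ~221 nm, crude lower bound on feature size
transistor_pitch_nm = 50                  # order of magnitude for a modern logic node
ratio = half_wave_nm / transistor_pitch_nm
print(f"photonics ~{ratio:.1f}x coarser per dimension")
```

Even this best-case bound puts photonics several times coarser per dimension (so an order of magnitude or more worse in areal density), and real photonic devices are typically micrometers across, far larger than the diffraction limit.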
That's not true. Transistors were commercialized within a few years of their invention, and already the first generation vastly outperformed vacuum tubes in size, weight, and power. Optical computing has been pursued for decades now with very little progress.
I might have done the math wrong, but is this really supposed to be 330 × 290 µm² per bit? For 128 GiB × 8 = 2^40 bits that works out to roughly 10^5 m² of chip area. And that's the RAM one expects per cluster node for current LLM AI, never mind future AGI.
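For anyone checking the arithmetic, a quick sketch (cell dimensions taken from the comment above; the paper's actual figures may differ):

```python
# Back-of-the-envelope chip area for 128 GiB built from
# 330 um x 290 um memory cells, one bit per cell.
cell_area_m2 = 330e-6 * 290e-6      # ~9.57e-8 m^2 per bit
bits = 128 * 2**30 * 8              # 128 GiB = 2**40 bits
total_area_m2 = cell_area_m2 * bits # ~1.05e5 m^2
print(f"{total_area_m2:.3g} m^2")
```

That is on the order of 10^5 m², i.e. tens of soccer fields of die area for one node's worth of RAM.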
https://arxiv.org/abs/2503.19544v1