
Also, the O(1) random memory access assumption makes it easy to take memory for granted. Really it's something like O(n^(1/3)) when you're scaling the computer to the size of the problem, and you can see this in practice in datacenters.

I forget the name of the O(1) access model. Not UMA, something else.
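
Back-of-the-envelope sketch of that scaling (a toy model, all names and constants made up for illustration): pack n memory cells at fixed density into a cube and its side is ~n^(1/3), so the worst-case signal time to reach a cell grows the same way.

    # Toy model, illustrative only: n cells at fixed density fill a cube of
    # volume ~ n, so its side is ~ n**(1/3) and worst-case access latency
    # (signal time across the cube) scales as n**(1/3).
    def cubic_access_latency(n_cells, density=1.0, signal_speed=1.0):
        side = (n_cells / density) ** (1.0 / 3.0)  # linear extent of the memory
        return side / signal_speed                 # time to cross that extent

    for n in (10**6, 10**9, 10**12):
        print(n, cubic_access_latency(n))          # ~10x more latency per 1000x more memory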



O(n^(1/2)) really, since data centers are 2 dimensional, not 3 dimensional.

(Quite aside from the practical "we build on the surface of the earth" consideration, heat dissipation considerations limit you to a 2 dimensional circuit in 3-space.)
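
Same toy calculation flattened to a plane (again illustrative only, arbitrary units): area ~ n, so the farthest cell is ~sqrt(n) away.

    # Toy model, illustrative only: planar packing covers area ~ n, so the
    # farthest cell is ~ sqrt(n) away and worst-case latency scales as n**(1/2).
    def planar_access_latency(n_cells, density=1.0, signal_speed=1.0):
        side = (n_cells / density) ** 0.5          # linear extent of the layout
        return side / signal_speed

    for n in (10**6, 10**9, 10**12):
        print(n, planar_access_latency(n))         # ~31.6x more latency per 1000x more memory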


More fundamentally O(n^(1/2)) due to the holographic principle which states that the maximal amount of information encodable in a given region of space scales wrt its surface area, rather than its volume.

(Even more aside to your practical heat dissipation constraint)
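
For reference, the usual quantitative form of that bound is the Bekenstein-Hawking entropy of the region's bounding surface (quoting the standard formula, not deriving it here):

    S_{\max} = \frac{k_B c^3}{4 G \hbar} A = \frac{k_B A}{4 \ell_P^2}

i.e. information capacity proportional to the area A, measured in Planck areas.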


Just need to make sure all your computation is done in a region with infinite surface area and zero volume. Encoding problem solved. Now then, how hyperbolic can we make the geometry of spacetime before things get too weird?


Hmm, I'll go with that


If you have rows of racks of machines, isn't that 3 dimensions? A machine can be on top of, behind, or next to another that it's directly connected to. And the components inside have their own non-uniform memory access.

Or if you're saying heat dissipation scales with surface area and is therefore effectively 2D, I don't know. I'd think water cooling makes it more about volume, but I'm not an expert on that.


That example is still two-dimensional in the limit, since you can keep building outwards (add buildings) but not keep scaling upwards (add floors).


You can add floors though. Some datacenters are 8 stories with cross-floor network fabrics.


When you get to, say, 100,000 stories, you can't build more stories. At that point your computer costs more than the Earth's GDP for a century, so talking about theoretical scaling laws is irrelevant. Eventually you run out of the sun's power output, so you build a Dyson sphere and eventually use up all of that power too.


Oh right, so the height is practically a constant. Square root for sure then.


All algorithms are O(1) in this case


You pick what things are constant and what's variable. If you're scaling a supercomputer to fit a problem, the height is going to max out quickly and can be treated as constant, while the other dimensions are variable.
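
Quick sanity check of that (same kind of toy model as upthread, arbitrary units): cap one dimension at a fixed height and the worst-case distance goes back to growing like sqrt(n).

    # Toy model, illustrative only: n cells at unit density in a slab of fixed
    # height h. Footprint area ~ n/h, so the side is ~ sqrt(n/h) and the
    # worst-case (corner-to-corner) distance grows like sqrt(n) once n >> h**3.
    def slab_worst_case_distance(n_cells, height=100.0):
        side = (n_cells / height) ** 0.5           # side of the square footprint
        return (2 * side**2 + height**2) ** 0.5    # corner-to-corner diagonal

    for n in (10**6, 10**9, 10**12):
        print(n, slab_worst_case_distance(n))      # ratios approach sqrt(1000) ~ 31.6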


Spatial position has nothing (OK, only a little) to do with the topology of the connections.


On the other hand, actual computers can work in parallel when you scale the hardware, something that the TM formulation doesn't cover. It can be interesting to ask which algorithms work well with lots of computing power subject to data locality. (Brains being the classic example of this.)


Multitape TMs are pretty well studied
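
A minimal sketch of what that looks like (toy simulator written for illustration; the transition-table encoding is ad hoc, not from any particular textbook):

    BLANK = "_"

    # transitions maps (state, (sym_1, ..., sym_k)) ->
    # (new_state, (write_1, ..., write_k), (move_1, ..., move_k)), moves in {-1, 0, +1}.
    def run_multitape_tm(transitions, start, accept, input_string, n_tapes=2, max_steps=10_000):
        tapes = [list(input_string) or [BLANK]] + [[BLANK] for _ in range(n_tapes - 1)]
        heads = [0] * n_tapes
        state = start
        for _ in range(max_steps):
            if state == accept:
                return ["".join(t).strip(BLANK) for t in tapes]
            symbols = tuple(t[h] for t, h in zip(tapes, heads))
            if (state, symbols) not in transitions:
                raise RuntimeError("halted without accepting")
            state, writes, moves = transitions[(state, symbols)]
            for i in range(n_tapes):
                tapes[i][heads[i]] = writes[i]
                heads[i] += moves[i]
                if heads[i] < 0:                    # grow the tape to the left
                    tapes[i].insert(0, BLANK)
                    heads[i] = 0
                elif heads[i] == len(tapes[i]):     # grow the tape to the right
                    tapes[i].append(BLANK)
        raise RuntimeError("step limit exceeded")

    # Example: copy the input from tape 1 to tape 2 in one left-to-right pass,
    # something a single-tape machine can only do by shuttling back and forth.
    copy_tm = {}
    for c in "01":
        copy_tm[("copy", (c, BLANK))] = ("copy", (c, c), (1, 1))
    copy_tm[("copy", (BLANK, BLANK))] = ("done", (BLANK, BLANK), (0, 0))

    print(run_multitape_tm(copy_tm, start="copy", accept="done", input_string="010110"))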



