I wouldn't really agree with this, since those machines don't share address spaces or directly attached buses. It's better to say it's a warehouse-scale "service" provided by many machines that are aggregated in various ways.


I wonder, though... could you emulate a 20k-core VM with 100 terabytes of RAM across a whole datacenter?

Ethernet is fast; you might be able to get within range of DRAM access latency with an RDMA setup. Cache coherency would require some kind of crazy locking, but maybe you could do it with FPGAs attached to the RDMA controllers that implement something like Raft?
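As a rough sanity check on that latency gap, here's a back-of-envelope comparison; every figure below is an assumed ballpark number, not a measurement:

    /* Rough sanity check on "in range of DRAM access with RDMA".
     * All figures are assumed ballpark numbers, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        const double dram_access_ns = 100.0;   /* local DRAM access, ~100 ns            */
        const double rdma_read_ns   = 2000.0;  /* one-sided RDMA read to a nearby node  */
        const double fiber_ns_per_m = 5.0;     /* signal in fiber, roughly 5 ns/meter   */
        const double cable_run_m    = 300.0;   /* assumed cable run across a large hall */

        /* Propagation delay paid on top of NIC/switch latency for a far node. */
        double propagation_rtt_ns = 2.0 * cable_run_m * fiber_ns_per_m;
        double far_read_ns        = rdma_read_ns + propagation_rtt_ns;

        printf("nearby RDMA read : ~%.0fx local DRAM\n", rdma_read_ns / dram_access_ns);
        printf("cross-hall read  : ~%.0fx local DRAM (%.0f ns)\n",
               far_read_ns / dram_access_ns, far_read_ns);
        return 0;
    }

Even with optimistic numbers, each remote page touch costs tens of local DRAM accesses, so keeping data local would matter far more than raw link bandwidth.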

It'd be kind of pointless and would crash the second any machine in the cluster died, but it's a cool idea.

It'd be fun to see what Task Manager would make of it, if you could get it to last long enough to boot Windows.


I have fantasized about doing this as a startup: basically implementing cache coherency protocols at the page table level with RDMA. There are some academic systems that do something like this, but without the hypervisor part.

My joke fantasy startup is a cloud provider called one.computer, where you just have a slider for the number of cores on your single instance and it gives you a standard Linux system that appears to have 10k cores. Most multithreaded software would absolutely trash the cache-coherency protocols and have poor performance, but it might be useful for easily turning embarrassingly parallel threaded map-reduces into multi-machine ones.
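For the page-table-level angle, Linux's userfaultfd is the kind of hook such a system could be built on: it delivers page faults to user space, where a page-granularity coherence layer could fetch the page over RDMA from whichever node currently owns it. The sketch below only shows the fault-interception loop; the "remote fetch" is a local placeholder fill and error handling is omitted.

    /* Minimal sketch: intercepting page faults in user space with userfaultfd,
     * the mechanism a page-granularity DSM/coherence layer could sit on.
     * A real system would service each fault by pulling the page over RDMA
     * from the owning node; here we just fill it locally as a placeholder.
     * Linux-only. May require root or vm.unprivileged_userfaultfd=1 on
     * recent kernels. Build: gcc -pthread uffd_sketch.c -o uffd_sketch */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <poll.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long page_size;

    static void *fault_handler(void *arg)
    {
        int uffd = (int)(long)arg;
        /* Staging buffer whose contents get copied into the faulting page. */
        char *staging = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        for (;;) {
            struct pollfd pfd = { .fd = uffd, .events = POLLIN };
            poll(&pfd, 1, -1);

            struct uffd_msg msg;
            if (read(uffd, &msg, sizeof msg) <= 0 ||
                msg.event != UFFD_EVENT_PAGEFAULT)
                continue;

            /* In a DSM, this is where an RDMA read to the page's current
             * owner would go. Placeholder: fill the page with 'A'. */
            memset(staging, 'A', page_size);

            struct uffdio_copy copy = {
                .src = (unsigned long)staging,
                .dst = msg.arg.pagefault.address & ~((unsigned long)page_size - 1),
                .len = page_size,
            };
            ioctl(uffd, UFFDIO_COPY, &copy);
        }
        return NULL;
    }

    int main(void)
    {
        page_size = sysconf(_SC_PAGE_SIZE);
        size_t len = 16 * page_size;

        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        /* The region whose faults we want to intercept. */
        char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = len },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        pthread_t thr;
        pthread_create(&thr, NULL, fault_handler, (void *)(long)uffd);

        /* Every first touch below now round-trips through the handler. */
        for (size_t i = 0; i < len; i += page_size)
            printf("byte at offset %zu: %c\n", i, region[i]);
        return 0;
    }

The actual coherence part (write faults, invalidations, a directory of page owners) would need userfaultfd's write-protect mode (UFFDIO_WRITEPROTECT, kernel 5.7+) on top of this, and that's where the real complexity lives.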


You absolutely can, but the speed of light is still going to be a limiting factor for RTT latencies, acquiring and releasing locks, obtaining data from memory, etc.

It's relatively easy to make it work slowly (reduce the clock so its period is longer than the maximum latency), but it becomes very hard at higher frequencies.

Beowulf clusters can get you part of the way there, although you can always do better with specialized hardware and software (at which point you're building a supercomputer...).
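Putting a number on the "period longer than max latency" case, with assumed figures:

    /* What a fully synchronous design looks like if every cycle may have to
     * wait out a cross-datacenter round trip: the clock can't exceed 1/RTT.
     * The RTT figure is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        const double rtt_us = 5.0;                  /* assumed worst-case RTT across the DC */
        double max_clock_hz = 1.0 / (rtt_us * 1e-6);

        printf("max synchronous clock: %.0f kHz\n", max_clock_hz / 1e3);  /* ~200 kHz */
        printf("vs a 3 GHz core: ~%.0fx slower\n", 3e9 / max_clock_hz);   /* ~15000x  */
        return 0;
    }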




