
What they described early on in the article was basically how NUMA machines worked (e.g. SGI Altix or UV). Also, their claimed benefit was being able to parallelize work with multithreading over low-latency, huge RAM. Clustering came along as a low-cost alternative to $1+ million machines. There are similarities to persistence in AS/400, too, where apps just wrote to memory that got transparently mapped to disk.
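To make that last point concrete, here's a minimal sketch of the idea using plain Unix mmap. It's my own illustration, not AS/400's single-level store, and the file name "state.bin" is made up; the point is that the program just writes ordinary memory and the OS persists it to the backing file.

  /* Illustration only: mmap-backed "persistent" memory on Unix. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("state.bin", O_RDWR | O_CREAT, 0644);  /* hypothetical backing file */
      if (fd < 0) return 1;
      ftruncate(fd, sizeof(uint64_t));                      /* make sure backing store exists */

      uint64_t *counter = mmap(NULL, sizeof(uint64_t),
                               PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (counter == MAP_FAILED) return 1;

      (*counter)++;                                          /* plain memory write... */
      printf("run #%llu\n", (unsigned long long)*counter);

      msync(counter, sizeof(uint64_t), MS_SYNC);             /* ...flushed to disk by the kernel */
      munmap(counter, sizeof(uint64_t));
      close(fd);
      return 0;
  }

Run it a few times and the counter keeps climbing across runs, even though the code never calls read() or write() on the file.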

Now, with cheap hardware, they're going back in time to the benefits of clustered NUMA machines. They've improved on the model along the way. I did enjoy the article.

Another trick from the past was eliminating TCP/IP stacks from within clusters to knock out their overhead and issues. Solutions like Active Messages were a thin layer on top of the hardware. There are also designs for network routers that have strong consistency built into them. Quite a few things they could do.
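For anyone who hasn't seen Active Messages: the core idea is that each message names a handler, and the receiver dispatches it straight into that handler instead of going through a protocol stack. Below is a tiny sketch of that dispatch, purely my own illustration (not the Berkeley AM API), with made-up handler names and a simulated delivery instead of a real NIC receive path.

  /* Illustration of the Active Messages dispatch idea. */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  typedef void (*am_handler_t)(const void *payload, size_t len);

  /* Handler table, assumed identical on every node in the cluster. */
  static void h_put(const void *p, size_t len) { (void)p; printf("PUT %zu bytes\n", len); }
  static void h_ack(const void *p, size_t len) { (void)p; (void)len; printf("ACK\n"); }
  static am_handler_t handlers[] = { h_put, h_ack };

  /* Wire format: handler index plus a small inline payload. */
  struct am_msg {
      uint16_t handler;
      uint16_t len;
      uint8_t  payload[64];
  };

  /* On real hardware this would run directly in the NIC's receive path. */
  static void am_deliver(const struct am_msg *m) {
      handlers[m->handler](m->payload, m->len);
  }

  int main(void) {
      struct am_msg m = { .handler = 0, .len = 5 };
      memcpy(m.payload, "hello", 5);
      am_deliver(&m);   /* simulate a message arriving at the receiving node */
      return 0;
  }

No sockets, no kernel transitions in the critical path: the "protocol" is just an index into a table both sides agree on.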

If they get big, there are hardware opportunities. On the CPU side, SGI did two things. Their NUMA machines expanded the number of CPUs and the amount of RAM in one system. They also allowed FPGAs to plug directly into the memory bus to act as custom accelerators. Beyond that, some CompSci papers modified processor ISAs, networks on a chip, etc. to remove or reduce bottlenecks in multithreading. Also, chips like OpenPiton increase core counts (e.g. 32) with open, customizable cores.


