
The GPU should be the motherboard, and everything else should plug into it.



This is the GPU manufacturers' "wet dream" endgame.


So how would multi-GPU setups work?


Multi-GPU setups for gaming are dead.

Multi-GPU setups for computation could have two SKUs: one "motherboard" SKU and one connectorized SKU, with the connector NOT being PCIe (after a transition).

They already do multiple form factors: PCIe and OAM for AMD, PCIe and SXM for Nvidia.

Just drop PCIe: have a large motherboard SKU with a CPU slot and as many OAM/SXM connectors as a wall socket can supply in terms of wattage (so, like 1 or 2, lol).

Vestigial PCIe cards can hang off the CPU card, if they're even needed.

High-speed networking and storage are already moving to the DPU, so these big GPUs, unhindered by the PCIe form factor, could integrate a few DPU cores the way they already integrate RT cores, pulling high-speed networking and storage controllers into the GPU itself.

Home user? You get 1 or 2 DPU cores for NVMe and 10-gig Ethernet. Datacenter? 64 DPU cores for 100-gig and storage acceleration. Easy-peasy.
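A toy sketch of how that segmentation might look, in Python pseudo-config form. The SKU names and the pick_sku helper are made up for illustration; only the DPU core counts and Ethernet speeds come from the comment above:

    # Hypothetical SKU table for a "GPU-as-motherboard" part with on-package DPU cores.
    # Everything here is illustrative, not a real product spec.
    GPU_MOTHERBOARD_SKUS = {
        "home": {
            "dpu_cores": 2,             # enough for NVMe offload + 10GbE
            "ethernet_gbps": 10,
            "storage_acceleration": False,
        },
        "datacenter": {
            "dpu_cores": 64,            # networking plus storage acceleration
            "ethernet_gbps": 100,
            "storage_acceleration": True,
        },
    }

    def pick_sku(needs_100gbe: bool) -> str:
        """Toy selector: the only decision point here is the network tier."""
        return "datacenter" if needs_100gbe else "home"

    print(pick_sku(needs_100gbe=False))   # -> "home"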


> what a wall socket can supply in terms of wattage (so, like 1 or 2 lol).

So, you expect GPUs to take 1500W of power, each? (230V @ 16A)


No.

The motherboard and then 1 or 2 expansion sockets, for a total of 3.

3 x 450W for the GPUs (Nvidia says the 4090 draws 450W; I think they're lying and will wait for reviews to see the truth) and 500W for the rest of the system. Though that might be a bit low: the 11900K has been measured at what, 300W under load? And you'd need a budget for USB-C power delivery, multiple NVMe drives, fans, and whatnot. Maybe the spec would accommodate high-core-count enterprise processors, so 600W+ would be wiser.
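Back-of-the-envelope, using Nvidia's claimed 450W per card and my guessed 600W system budget:

    # Rough power budget for the hypothetical 3-GPU box described above.
    gpus = 3
    gpu_watts = 450        # Nvidia's stated 4090 board power
    system_watts = 600     # CPU, USB-C PD, NVMe, fans, etc. (my guess)

    total_watts = gpus * gpu_watts + system_watts
    print(total_watts)     # 1950 W, i.e. right around 2 kW at the wall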

Even in euroland, 2kW, which is what a maxed-out system would draw at the wall socket, is a bit much. They don't even allow vacuum cleaners over 900W to be sold.


2kW isn't actually that much: you can buy ATX power supplies rated up to 3.5kW for use in 230V countries. They're often used for mining, but they're also useful if you want to speed up your Blender renders by just throwing multiple GPUs at the task.


Honestly, just use chiplet technology for more cores and get rid of the multi-GPU concept entirely.


Multi-GPU isn’t just about single-application performance.

Multi-GPU is necessary if you need more display output ports: with limited exceptions, every GPU I've seen offers 3x DP + 1x HDMI or worse. While a single DP link can drive multiple monitors, it limits your maximum resolution, refresh rate, and color depth, so in practice, if you want to drive multiple 5K/6K/8K monitors or game at 120Hz+, you'll need two or more cards, with or without SLI etc.
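To put rough numbers on that, here's a sketch assuming a DP 1.4 HBR3 link (about 25.92 Gbit/s of payload after 8b/10b encoding) and a crude ~7% blanking overhead; the exact overhead depends on the timing standard, so treat the results as ballpark only:

    # Ballpark check: can one DP 1.4 link carry a given mode without DSC?
    DP14_PAYLOAD_GBPS = 25.92   # 4 lanes x 8.1 Gbit/s, minus 8b/10b overhead
    BLANKING = 1.07             # rough reduced-blanking estimate

    def link_gbps(width, height, hz, bits_per_pixel):
        return width * height * hz * bits_per_pixel * BLANKING / 1e9

    for name, mode in {
        "4K 120Hz 10-bit": (3840, 2160, 120, 30),
        "5K 60Hz 10-bit":  (5120, 2880, 60, 30),
    }.items():
        need = link_gbps(*mode)
        verdict = "fits" if need <= DP14_PAYLOAD_GBPS else "needs DSC or another link"
        print(f"{name}: {need:.1f} Gbit/s -> {verdict}")

Both of those modes already overflow a single DP 1.4 link before you even get to 6K/8K, which is why in practice you end up leaning on DSC or adding cards.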


You buy a motherboard with the number of GPUs you want on it.

Or maybe they could devise some sort of stacking system, with each GPU board separate and stacked.


> Or maybe they could devise some sort of stacking system, with each GPU board separate and stacked.

Yeah, I remember those stackable/modular computer concepts that industrial-design students loved to put in their portfolios from the late 1980s to the mid-1990s. I get the concept: component/modular PCs are kind of like the modular hi-fi systems of the 1980s, except those design students consistently failed to consider that coordinated aesthetics are the first thing to go out the window when the biz-dev people need the hardware folks to save costs or integrate with a third party, etc.

...it feels like a rare miracle that we at least have 19-inch racks to fall back on, but those are hardly beautiful (except the datacenter cable porn, of course):

https://www.reddit.com/r/RetroFuturism/comments/gioqrp/sovie...

https://en.wikipedia.org/wiki/Sphinx_%28home_automation_syst...



