
> Should we have GPU VRAM slots alongside CPU RAM slots? Is that even possible?

I chuckled a little at this because I used to wonder the same thing, until I had to actually bring up a GDDR6 interface. Basically, the reason GDDR6 is able to run so much faster is that we assume everything is soldered down, not socketed/slotted.
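
For a sense of scale, some back-of-the-envelope arithmetic (round numbers of my own, not from any datasheet): at 16 Gb/s per pin a GDDR6 bit is only ~62 ps wide, so even a few millimetres of connector stub reflects energy back a whole bit-time late, right into the middle of the next symbol's eye. A throwaway C sketch of the numbers:

    /* Ballpark of why GDDR6 has to be soldered: bit time vs. the
       reflection delay of a hypothetical socket stub.  All figures
       are rule-of-thumb assumptions, not from any spec. */
    #include <stdio.h>

    int main(void) {
        double gbps_per_pin = 16.0;          /* assumed GDDR6 per-pin rate */
        double ui_ps = 1e3 / gbps_per_pin;   /* unit interval, picoseconds */

        double ps_per_mm = 6.7;              /* FR4 propagation rule of thumb */
        double stub_mm = 5.0;                /* hypothetical connector stub */
        double reflect_ps = 2.0 * stub_mm * ps_per_mm;  /* round trip */

        printf("bit time:        %.1f ps\n", ui_ps);      /* 62.5 ps */
        printf("stub reflection: %.1f ps\n", reflect_ps); /* ~67 ps  */
        return 0;
    }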

Back when I worked for a GPU company, I occasionally had conversations with co-workers about how ridiculous it was that we put a giant heavy heatsink on the CPU, and a low-profile cooler on the GPU, which in this day and age produces way more heat! I'm of the opinion that we should make mini-ATX-shaped graphics cards so that you can bolt them behind your motherboard (though you'd need a different case with standoffs in both directions).




I had the misfortune of bringing up firmware for an overengineered piece of crap with DDR5, and even though we had the proper measuring equipment, instrumenting it properly was barely possible. It's designed with reflections in mind, and there's a loopback channel at a slower word rate that I think the controller uses to characterize the analog signal, because it's, like, barely digital.
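
If you squint, the training that sits behind that kind of loopback channel looks something like the following. This is only a sketch of the general idea, in C; phy_set_rx_delay and phy_loopback_pattern_ok are invented hooks, and real controllers are far hairier:

    /* Hypothetical sketch of a DDR read-training loop: sweep the
       per-lane delay taps, test a known pattern over loopback, and
       park the strobe in the centre of the passing window ("eye").
       Register/hook names are invented for illustration. */
    #include <stdbool.h>

    #define TAPS 64

    /* assumed board-support hooks, not a real API */
    extern void phy_set_rx_delay(int lane, int tap);
    extern bool phy_loopback_pattern_ok(int lane);

    int train_lane(int lane) {
        int first = -1, last = -1;
        for (int tap = 0; tap < TAPS; tap++) {
            phy_set_rx_delay(lane, tap);
            if (phy_loopback_pattern_ok(lane)) {
                if (first < 0) first = tap;
                last = tap;
            }
        }
        if (first < 0) return -1;          /* no eye at all: bad channel */
        int centre = (first + last) / 2;   /* middle of the passing window */
        phy_set_rx_delay(lane, centre);
        return last - first;               /* eye width, a health metric */
    }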


Did Synopsys design the DDR5 controller by any chance?


I don't remember, but I think no. It was a Renesas chip? I think?


The NUC Extreme line is basically that, which IMO is a really good form factor.

Memory and NVMe storage are dense enough to make anything larger obsolete for the practical needs of most users. Emphasis on ‘practical’, not strictly ‘affordable’.

The only advantages of larger form factors are the ability to add cheaper storage, additional PCIe cards, and aftermarket cosmetic changes such as lighting. IMO, each of these needs represents a specialist use case.


Pretty much, but I'd change the dimensions of the graphics card to match mini ITX, and so that it could be bolted to the case. This provides two benefits: It can support a bigger and heavier heatsink and it also allows you to spread the VRMs around the chip and DRAMs for more consistent power flow.


Okay, how about this: The PCI slot goes on the bottom of the mini-ITX motherboard, and extends out of the bottom of the case. The GPU is in its own enclosure, with a PCI edge connector on the top, and you stack one on top of the other.

I'd really like to find a way to daisy-chain them, but I know that's not how multi-gigabit interfaces work.

Raspberry Pi hats are cool. Why not big mini ITX hats? Yes, I just want to go back to the time of the Famicom Disk System, the Sega CD or the Satellaview.


> The NUC Extreme line is basically that, which IMO is a really good form factor.

Missed that one. I still mourn BTX.


I wish a PC form factor like NLX[0] had become popular, where the CPU and memory are on a board that's inserted into a backplane parallel to the add-in cards (similar to VME[1]). IIRC NLX was mostly intended for low-profile applications like corporate desktops and retail systems (point-of-sale), but it never caught on. I can see the edge connector and backplane potentially being an issue with old-school parallel PCI (that's a lot of pins), but the serial nature of modern PCIe and the advent of PCIe switches would significantly reduce the number of signal lines (some rough numbers below).

[0] https://www.halolinux.us/hardware/images/2249_56_22.jpg

[1] https://en.wikipedia.org/wiki/VMEbus
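
Rough numbers on the pin-count point, from memory and therefore approximate (sidebands and power omitted): classic 32-bit PCI needs on the order of 49 shared signals routed to every slot, while each PCIe lane is just four wires, which is what makes a compact backplane edge connector plausible.

    /* Approximate signal-count comparison: parallel PCI vs. serial
       PCIe.  Counts are from memory; treat them as ballpark only. */
    #include <stdio.h>

    int main(void) {
        int pci_signals    = 49; /* ~49 required signals on 32-bit/33 MHz PCI */
        int wires_per_lane = 4;  /* 1 TX diff pair + 1 RX diff pair per lane  */

        printf("PCI 32-bit: %d shared signals, all routed to every slot\n",
               pci_signals);
        for (int lanes = 1; lanes <= 16; lanes *= 2)
            printf("PCIe x%-2d:  %3d data wires (+ refclk pair)\n",
                   lanes, lanes * wires_per_lane);
        return 0;
    }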


> The NUC Extreme line is basically that, which IMO is a really good form factor.

Is there a case like this that isn't part of a prebuilt computer?


I've been thinking about picking up this mini-ITX case, the Cooler Master NR200P[0]. It's a similar form factor that would be completely adequate for a daily driver and could accommodate a sizeable GPU if required. The problem is that smaller builds are still niche, so you have to pay a premium for an appropriate motherboard and power supply.

[0]: https://pcpartpicker.com/product/29drxr/


Yes, there is a standard called PICMG used in the “portable/luggable computer” industry - oscilloscope-shaped PCs for field use. Kontron and Advantech seem to be the major suppliers there.


What if GPU makers made motherboards?


I vaguely recall Nvidia motherboards being junk in the late 2000s.


They made motherboard chipsets (nForce, IIRC?), not motherboards, unless I missed something.

I think by the late 2000s, though, their discrete GPUs in laptops were problematic because they got so hot they detached from the motherboards or fried themselves. In a lot of cases, these systems shipped with both these GPUs and Intel processors with integrated graphics.

This happened to a ThinkPad T400 I had a while ago: the Nvidia graphics stopped working and I had to disable it and enable the Intel GPU in the BIOS (maybe even blind).

IIRC this snafu was what soured relations between Apple and Nvidia.


> I think by the late 2000s, though, their discrete GPUs in laptops were problematic because they got so hot they detached from the motherboards or fried themselves.

That was indeed around 2008-2010, but the issue was not that the chips fried themselves or got too hot. The issue was the switch to lead-free solder [1]: thermal cycling led to the BGA balls cracking apart, as the lead-free solder formulation could not keep up with the mechanical stress. It hit the entire Nvidia lineup at the time; it was just orders of magnitude more apparent in laptops, as these typically underwent many more, and much more rapid, thermal cycles than desktop GPUs.

> Iirc this snafu was what soured relations between apple and nvidia.

There have been multiple issues [2]: the above-mentioned "bumpgate", patent fights regarding the iPhone, and finally the fact that Apple likes to do a lot of the driver and firmware development for their hardware themselves. Without that deep, holistic understanding of everything, Apple would stand no chance at achieving the kind of power efficiency and freedom from nasty bugs that they have compared to most Windows laptops.

[1] https://semiaccurate.com/2009/08/21/nvidia-finally-understan...

[2] https://www.pcbuildersclub.com/2019/01/zwischen-apple-und-nv...


I recall my HP laptop with an Nvidia GPU getting painfully hot to the touch while raiding in WoW: TBC; I probably could have cooked an egg on that sucker. It eventually gave up the ghost, and I have to assume it was due to the heat.


All the big ones do already.


Sorry, what I meant was: what if the GPU and the motherboard fused into one unit? So the CPU and main memory plug in, but the graphics card is fixed.

I guess the main problem is that we have this cycle where most people have a single graphics card with one or two monitors plugged in, and the power users have two of them to drive four screens, or have two cards drive one monitor by daisy-chaining them together.

But in the case of the latter two, it's usually a problem of a bottleneck between the card and the main bus, and of course, if you were making the motherboard, you'd have a lot more say in how that is constructed.
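
To put a rough number on that bottleneck (quick arithmetic; packet/protocol overhead beyond line encoding is ignored):

    /* Usable PCIe bandwidth per direction for an x16 slot, by
       generation.  Quick arithmetic only; packet overhead ignored. */
    #include <stdio.h>

    int main(void) {
        /* Gen3 runs 8 GT/s, Gen4 16 GT/s, both 128b/130b encoded */
        const double enc   = 128.0 / 130.0;
        const double gts[] = { 8.0, 16.0 };
        const char *name[] = { "Gen3", "Gen4" };

        for (int i = 0; i < 2; i++) {
            double gbps_lane = gts[i] * enc;          /* Gbit/s per lane */
            printf("PCIe %s x16: ~%.1f GB/s per direction\n",
                   name[i], gbps_lane * 16.0 / 8.0);  /* bits -> bytes */
        }
        return 0;   /* prints ~15.8 GB/s and ~31.5 GB/s */
    }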


What if we plug the motherboard in the GPU instead


What if the GPU was the motherboard instead?



