1. While I agree we're beginning to reach absurd proportions, let's really analyze the situation and think it through.
2. Are there any GPUs that have actually caused physical damage to a motherboard slot?
3. GPUs are already 2-wide by default, and some are 3-wide. 4-wide GPUs will have more support from the chassis. This seems like the simpler solution, especially since most people don't even have a second add-in card in their computers these days.
4. Perhaps the real issue is that PCIe extenders need to become a thing again, so GPUs can be anchored at a point elsewhere on the chassis. However, extending up to 4-wide GPUs seems more likely (because PCIe needs to get faster and faster: GPU-to-CPU communication is growing more and more important, so PCIe 5 and PCIe 6 lanes are going to be harder and harder to extend out).
For now, it's probably just an absurd look, but I'm not 100% convinced we have a real problem yet. For years, GPUs have drawn more power than the CPU and motherboard combined, because GPUs perform most of the work in video games (i.e., matrix multiplication to move the list of vertices to the right locations, and pixel shaders to calculate the angles of light and shadow).
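That vertex work is conceptually simple. Here's a toy illustration in plain Python of the core operation, a 4x4 transform matrix applied to a vertex in homogeneous coordinates (real pipelines do this in parallel across millions of vertices per frame, in shader code, not like this):

```python
# Toy sketch of GPU vertex work: multiply a 4x4 transform matrix
# by a vertex expressed in homogeneous coordinates (x, y, z, w).

def transform(matrix, vertex):
    """Return the 4x4 matrix times a 4-component vertex."""
    return [sum(matrix[row][k] * vertex[k] for k in range(4))
            for row in range(4)]

# A translation by (2, 3, 0): moves the vertex without rotating it.
translate = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

print(transform(translate, [1, 1, 0, 1]))  # -> [3, 4, 0, 1]
```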
> 2. Are there any GPUs that have actually caused physical damage to a motherboard slot?
I have an MSI RTX 3070 two-fan model. It hasn't damaged my PCI-E slot (I think), but its weight and sag cause enough bending that a fan bearing now makes a loud noise when spun up high.
My solution has been to turn my PC case so the motherboard is parallel to the ground and the GPU sticks straight up, eliminating the sag. Whisper quiet now.
If this is happening with my GPU, I shudder to imagine what it's like with other GPUs out there which are much bigger and heavier.
Yeah, I've done that for a long time. I had a few accidents where I panicked until it turned out the card was just sagging and the PCIe connection was cutting out.
After it happened the third time, I just cleared a little space and laid the PC on its side. Zero problems since then.
I've seen a lot of people not pay attention to which screws they're using to retain their GPU.
The screw should have plenty of surface area to properly secure the card. You'll still have _some_ sag, but my 3-pin 3090 doesn't sag half as badly as examples I've seen of much smaller cards.
I have an EVGA 3070 and also had the sag issue. My case came with a part to support the GPU, but I didn't realize that until I'd solved it another way: I just doubled up the plates that you screw the GPU into so there was no way it could pivot and sag.
> 2. Are there any GPUs that have actually caused physical damage to a motherboard slot?
Yes. I've seen both a heavy GPU and an HSM card damage a slot. One happened when a machine was shipped by a commercial shipper. The other happened when the machine was moved between residences. It doesn't occur to people that the mass of a card swinging around is a problem when the case is moved.
The HSM one was remarkable in that it was a full length card with a proper case support on both ends.
Also, this isn't just about damaging the PCI-E slot. Heavy cards bend PCBs (both the MB and the dangling card) and bending PCBs is a bad idea: surface mount devices can crack, especially MLCCs, and solder joints aren't flexible either. No telling how many unanalyzed failures happen due to this.
If you have a big GPU don't let it dangle. Support it somehow.
Another area where the conventional layout is struggling is NVMe. The drives keep getting hotter, so the heatsinks keep getting larger. Motherboard designers are squeezing NVMe slots in wherever they can, often where there is little to no airflow...
> One happened when a machine was shipped by a commercial shipper. The other happened when the machine was moved between residences. It doesn't occur to people that the mass of a card swinging around is a problem when the case is moved.
Huh. Good point. I'll be moving soon, and have kept the box my case came in as well as the foam inserts for either end of the case. I might just remove the GPU and put it back in its own box for the move, as well. Thanks for bringing that up.
2. We crossed that point generations ago. High-end GPU owners are advised to remove the GPU from their system before transporting it, and PC communities regularly see posts from people who suffered the consequences of not doing so. Over a longer term even a lighter card can deform a motherboard: I had a 290X that did this to an old Z87 motherboard over 5 years, with the result that the board was no longer flat enough to mount the backplate of a replacement CPU cooler.
4. Don't forget high-end GPUs are also getting longer, not just thicker, so the size increases both give and take away support advantages.
PCIe extenders are a thing already. Current PC case fashion trends have already influenced the inclusion of extenders and a place to vertically mount the GPU to the case off the motherboard.
GPU sag is also a bit of a silly problem for the motherboard to handle when $10 universal GPU support brackets already exist.
I have one of these for a much smaller card, mostly so that cold airflow from the floor intake fans actually has a path to get up to the RAM and VRMs. This is a workaround for a case that doesn't have front intake, which is preferable in my opinion.
It does look a little cool, but I always worry a little about the reliability of the cable itself. Does it REALLY meet the signal integrity specifications for PCI-E? Probably not. But no unexplained crashes or glitches so far, and this build is over 2 years old.
LTT has a video where they tried to see how many PCIe riser cables they could string together before it stopped working.[1] They got to several meters. Maybe you could argue that it's worse inside a PC case since there's more EMI, but it seems like your PCIe riser cable would have to be very out of spec before you'd notice anything.
I wonder if that benchmark actually loaded the PCIe bus to any significant degree after the initial startup, or just updated a couple of small parameters on a single scene and thus mainly tested local computation on the GPU.
You'd want to somehow monitor the PCIe bus error rate: with a marginal signal producing lots of errors and retries, something that loads the bus harder (loading new textures, etc.) could suffer far more.
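On Linux, one way to watch for this is the kernel's AER (Advanced Error Reporting) statistics in sysfs. A sketch, assuming a kernel with AER enabled; the device address `0000:01:00.0` is just a placeholder for wherever your GPU sits:

```python
# Sketch: read PCIe correctable-error counters from Linux's AER sysfs
# stats. A BadTLP/BadDLLP count that rises under load suggests a
# marginal link (e.g. a poor riser cable). Assumes AER is enabled.
from pathlib import Path

def parse_aer_stats(text):
    """Parse 'CounterName value' lines into a dict of counts."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.rpartition(" ")
        if name:
            stats[name.strip()] = int(value)
    return stats

def read_gpu_errors(bdf="0000:01:00.0"):  # placeholder GPU address
    path = Path(f"/sys/bus/pci/devices/{bdf}/aer_dev_correctable")
    return parse_aer_stats(path.read_text())

# Example of the file format the parser expects:
sample = "RxErr 0\nBadTLP 12\nBadDLLP 3\nTOTAL_ERR_COR 15"
print(parse_aer_stats(sample)["BadTLP"])  # -> 12
```

Sampling these counters before and after a texture-streaming workload would show whether the link is actually retrying.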
They do briefly show a different PCIe riser made out of generic ribbon cable [1, 3:27], and say that one failed after chaining only two ~200mm lengths. The quality of the riser cable certainly matters.
You need Steve for that kind of testing; LTT would be busy putting RGB on it and then (badly) water-cooling it so they could sell you a backpack with no warranty.
It's not clear whether they reached a limit of drive strength or latency (I doubt EMI is the factor, since he said those are shielded) but that's a good demonstration of the resiliency of self-clocked differential serial (and aggregated serial) buses. The technology is much closer to networking standards like Ethernet than traditional parallel PCI, with features like built-in checksums, error correction (newer versions), and automatic retransmit.
The 650 Ti is PCIe 3.0. PCIe 4.0 doubles the bandwidth, and PCIe 5.0 doubles it again. The RTX 40 series GPUs still use PCIe 4.0, for which conformant riser cables are commonly available. I suspect the story for PCIe 5.0 will be different.
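The doubling is easy to put in numbers. A back-of-the-envelope sketch (per lane, one direction; Gen 3 through 5 all use 128b/130b encoding, so usable bandwidth tracks the raw rate):

```python
# Per-lane, one-direction usable PCIe bandwidth by generation.
# Raw transfer rates double each generation; Gen 3/4/5 share
# 128b/130b encoding, so the encoding overhead is constant.
ENCODING = 128 / 130

def lane_gbps(gt_per_s):
    """Usable GB/s for one lane at a given raw rate in GT/s."""
    return gt_per_s * ENCODING / 8  # 8 bits per byte

for gen, rate in [("3.0", 8), ("4.0", 16), ("5.0", 32)]:
    print(f"PCIe {gen}: {lane_gbps(rate):.2f} GB/s/lane, "
          f"x16 = {16 * lane_gbps(rate):.1f} GB/s")
```

So a Gen 5 x16 slot is pushing roughly 63 GB/s each way, which is why the signal budget gets so unforgiving.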
My proposal isn’t too different. Move the one ultra-fast PCIe slot to the top of the motherboard. It would be mounted so the GPU plugs in parallel to the motherboard, on the same plane, above the motherboard on a tower case. The few other PCIe slots that exist can stay at the bottom.
Only downside is the case has to be taller. Not sure if that would be considered a problem or not.
This doesn’t really help dual GPU setups, but those have never been common. I don’t have a good solution there. I guess you’re back to some variation of the riser idea.
Not a bad idea; however, there are hard limits on how physically long PCIe lanes can be. We had problems making sure we hit the signal budget for a PCIe Gen 4 slot on an ATX motherboard, and the problem gets worse as speeds increase.
On a further note, why does it even have to be inside the case? Make a slit in the case on the top, so that the PCIe slot is sticking out. Stick a GPU in that slot, supported by the case. The GPUs these days look much cooler anyways.
Not necessarily. A flexible interconnect would allow the GPU to be planar with the MB; just bend it 180 degrees. Now your GPU and CPU can have coolers with good airflow instead of the farcical CPU (125mm+ tall heatsinks...) and GPU cooling designs (three fans blowing into the PCB and exhausting through little holes...) prevailing today.
My idea is to separate the whole CPU complex (the CPU and heatsink, RAM slots, VR, etc.) from everything else and use short, flexible interconnects to attach GPUs, NVMe and backplane slots to the CPU PCB, arranged in whatever scheme you wish.
I was kind of hoping doing it that way would let you put big CPU style coolers on the GPU parts with a lot more height than a 1x or 2x expansion slot.
If you “folded” the GPU over the CPU to save height I would think that would be worse than today for heat.
Maybe I’ve got this backwards. Give up on PCIe, or put it above the rest of the motherboard. The one GPU slot, planar to the motherboard, stays below. Basically my previous idea flipped vertically.
The other PCIe slots don’t need to run as fast and may be able to take the extra signal distance needed. The GPU could secure to the backplane (like my original idea) but would have tons of room for cooling like the CPU.
> If you “folded” the GPU over the CPU to save height I would think that would be worse than today for heat.
Why? Cooling would be far better: the CPU and GPU heatsinks would both face outward from the center and receive clean front-to-rear airflow. Thus, looking down from above:
The power supply and power connectors are on the bottom. Another PCB lies flat on top to host open PCI-E slots, NVMe, whatever, connected at 90 degrees to the CPU PCB with one PCI-E slot interconnect. All interconnects are short. Airflow is simple and linear. The CPU/GPU heatsinks are large and passive: you only need intake fans.
I've been refining this. I'm actually learning FreeCAD to knock out a realistic 3D model.
One obvious change: run the CPU/GPU interconnect across the bottom: existing GPU designs could be used unmodified (or enhanced with only a new heatsink) and the 16x PCI-E lanes for the GPU would be easier to route off the CPU PCB.
There are virtually no significant differences between the motherboard layout IBM promulgated with the original IBM PC (model 5150) in 1981 and what we have today. That machine had a 63W power supply and no heatsinks or fans outside the power supply. The solution to all the existing problems with full-featured, high-power desktop machines is replacing the obsolescent motherboard design with something that accommodates what people have actually been building for at least 20 years now (approximately since the Prescott era and the rise of high-power GPUs).
Yeah, I have a motherboard with a slot bent out of shape by the weight of the card in it. My current desktop has a sag of easily half an inch at the end of the card, and it's not even a particularly large one by current standards. The ATX spec obviously wasn't designed for GPUs this heavy and this power-hungry.
Historically, cases had a bracket at the front to support full-length cards. I even remember once having a reference AMD card with an odd extension so that it would be supported by the forward full-length brackets.
I have to admit I haven't seen that front bracket for a long time. Some server chassis have a bar across the top to support large cards, which would be great except that graphics card manufacturers like to exceed the PCI spec for height; that bar had to be removed on my last two builds. Nowadays I just put my case horizontal and pray.
I came here to mention the front support bracket. You'll find it on the larger AMD workstation cards more often than others, I first remember it on a FirePro reference card, and some searching turned up examples of it for the AMD FirePro V7900, and a few other models.
I've also had the vertical clearance issue, since I try to rack-mount all my gear now that I've got a soundproof rack. It's very annoying to need more than 4U of space just to fit a video card in a stable vertical orientation.
GPU sag is a big issue in gaming computers. I had a small (by comparison with more contemporary graphics cards) RX 480, and in 2021 I bought a cheap $10 graphics card brace to reduce its strain on the PCIe slot and its chances of failure during the shortage. I use the brace to hold up my new Ampere card now (which is maybe twice the length of the RX 480).
> Are there any GPUs that have actually caused physical damage to a motherboard slot?
It's quite common to suffer damage from movement, especially in shipping, to the point where integrated PC manufacturers have to go to great lengths to protect the GPU in transit.
I’m not sure how the author, who programs GPUs, doesn’t see that cooling is part of market segmentation. The 3090 Turbo is a two-slot solution, but NVIDIA forced vendors to discontinue it to prevent creep into datacenters.
And I’m sure the licensing bros will come out and shout about licensing or something irrelevant. My dudes, I operate 3090s in the data center; it saves boatloads of money upfront and on licensing and power, and therefore on TCO. And fuck NVIDIA.
Hard foam? Regardless, I'd recommend against shipping with a CPU cooler or GPU installed, but some SIs seem to get away with it using form-fitting hard foam packs.
That's not a reasonable answer. Many people want to buy assembled systems and lack the skills or inclination to build one themselves. Pre-built systems are a huge $1B+ market, and "kill the market entirely" is not an acceptable answer.
This reply doesn't make any sense. It is a viable business model. It has total revenues across all the major players of billions of dollars per year.
And how the hell do you not ship them? You're not making any sense here. There's no alternative to shipping them, unless you're planning on having system builders show up individually at clients' houses and assemble PCs on the spot. That business model is way less economically efficient than simply assembling PCs centrally and accepting some breakage in shipment.
Breakage is normal in any industry; what are you suggesting as an alternative? There are companies shipping nationally and internationally with very few issues.
Workstation cards are traditionally supported on 3 sides so don’t suffer this sag. In such systems there’s usually an extension bracket screwed into the card to increase the length, allowing it to reach a support slot to hold it steady.
The GPU is the most expensive component in gaming PCs these days, so it makes the least sense for it to be the hard-wired component, as it has the most price diversity in its options. I have definitely upgraded GPUs on several computers over the past decade, and I'm very thankful I didn't have to rip out the entire motherboard (and detach/reattach everything) to do so.
It's only the cheap components without a wide variety of options that make sense to build in, like WiFi, USB, and Ethernet.
Note this whole discussion is in the context of the 4090. If you're an enthusiast, soldering the GPU to the mobo forces you to spend $200-$700 more every time you upgrade your GPU because you also have to buy a new mobo and possibly a new CPU if the socket changed.
The GPU is also one of the easiest components to swap today. That's not something I want to give up unless I see performance improvements. Cooling doesn't count because I already have an external radiator attached to my open-frame "case".
I went through 3 GPUs before changing motherboards and I'm still bottlenecked on my 3090, not my 5800X3D. After I get a 4090, I expect to upgrade it at least once before getting a new mobo.
Having had a few GPUs go bad on me over the years, I would hate to have to disassemble the entire thing to extricate the mobo/gpu combo for RMA'ing, rather than just removing the GPU and sending it off.
The main reason is that CPUs age much more slowly than GPUs, but “current” socket generations change quickly. Another reason is the combinatorial explosion of motherboard x graphics card features, which is already hard to manage.
GPUs get old slowly too, but YouTube influencers convince gamers that their current card isn’t enough, often and effectively (I’m not excluding myself from this group).
Then you lose modularity, which is a huge benefit to PC gaming? Now if you want to upgrade to the newest graphics card, you also need to get a new motherboard. Which also could mean you need a new CPU, which also could mean you need new RAM.
Right now you can just switch the graphics card out for another and keep everything else the same.
This is already happening (as was noted in other comments on this article).
One of the most prominent examples is the entire Apple Silicon lineup, which has the GPU integrated as part of the SoC, and is powerful enough to drive relatively recent games. (No idea just what its limits are, but I'm quite happy with the games my M1 Max can play.)
With mini cube PCs growing in popularity, the future will probably be this: a mini PC where every part, whether RAM, GPU, or hard drive, plugs into the stock cube with USB-style modularity.