
I still don't really understand the context around the Apple fight and it's a huge bummer, since Apple hardware with Nvidia GPUs would be the best combination.

When I used Linux, the closed-source Nvidia drivers were better than anything else and easily available; the complaints about them seemed mostly ideological?

The price complaints seemed mostly about 'value' since the performance was still better than the competition in absolute terms.




Nvidia had some GPUs that ran hot and had an above-average failure rate, so Apple was unhappy because it made a couple of Mac models look bad. Nvidia also had enough revenue of their own that they didn't care to invent custom SKUs for Apple, the kind that would stop people comparing Macs directly to PC laptops.

The big issue with Nvidia GPUs on Linux these days is Wayland. There are standard graphics APIs that are the accepted way to create contexts, manage GPU buffers, and so on, but Nvidia went their own way, which would require compositors to carry driver-specific code.

Many smaller compositors (such as Sway, the most popular tiling one for Wayland) don't want to write and maintain one implementation for Intel/AMD and another for Nvidia, so they either don't support Nvidia at all or hide it behind snarky-sounding CLI options that come with no real support.
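
To make the split concrete: the vendor-neutral route goes through GBM, while the proprietary Nvidia driver (until it finally gained GBM support a couple of years ago) only offered the EGLDevice/EGLStream route. Here's a rough C sketch of what a compositor ends up juggling just to get an EGL display; it's heavily simplified (the device node path, the fallback order, and the missing EGLStream/KMS output plumbing are all placeholders, real compositors like wlroots do much more):

    /* Sketch only: the two init paths a Wayland compositor had to care about.
     * Build roughly as: cc sketch.c -lgbm -lEGL (flags vary by distro). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* Standard path (Mesa drivers: Intel, AMD, ...): wrap a DRM fd in a
     * GBM device and hand it to EGL. */
    static EGLDisplay init_gbm_path(int drm_fd) {
        struct gbm_device *gbm = gbm_create_device(drm_fd);
        if (!gbm)
            return EGL_NO_DISPLAY;
        return eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
    }

    /* Nvidia path (pre-GBM proprietary driver): enumerate EGLDevices and
     * create a display on one; buffers would then flow through EGLStreams
     * instead of GBM buffer objects. */
    static EGLDisplay init_egldevice_path(void) {
        PFNEGLQUERYDEVICESEXTPROC query_devices =
            (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
        EGLDeviceEXT devices[8];
        EGLint n = 0;
        if (!query_devices || !query_devices(8, devices, &n) || n == 0)
            return EGL_NO_DISPLAY;
        return eglGetPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
    }

    int main(void) {
        int drm_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        EGLDisplay dpy = (drm_fd >= 0) ? init_gbm_path(drm_fd) : EGL_NO_DISPLAY;
        if (dpy == EGL_NO_DISPLAY)
            dpy = init_egldevice_path(); /* the Nvidia-only fallback */
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL)) {
            fprintf(stderr, "no usable EGL display\n");
            return 1;
        }
        printf("EGL display initialized\n");
        return 0;
    }

Everything downstream of that (buffer allocation, presenting to KMS) diverges in the same way, which is the part compositor authors didn't want to maintain twice.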


Interesting - makes sense, thanks for the context.

I'd suspect part of the reason Nvidia went their own way is because their way is better? Is that the case - or is it more about just keeping things proprietary? Probably both?

If I had to guess, some mixture of ability to improve things faster with tighter integration at the expense of an open standard (pretty much what has generally happened across the industry in most domains).

Though this often leads to faster iteration and better products (at least in the short term, probably not long term).


I'd take a look at this post from a few years ago about why SwayWM does not support Nvidia GPUs using the proprietary driver (Nouveau works).

https://drewdevault.com/2017/10/26/Fuck-you-nvidia.html


"When people complain to me about the lack of Nvidia support in Sway, I get really pissed off. It is not my fucking problem to support Nvidia, it’s Nvidia’s fucking problem to support me."

I'd suggest his users asking for Nvidia support are evidence of this being wrong.

That aside though, it seems like Nvidia's proprietary driver doesn't support some kernel APIs that the other vendors (AMD, Intel) do?

I wonder why I've always had a better experience with basic OS usage on the Nvidia proprietary driver than on AMD in Linux. Maybe I just didn't use any applications relying on these APIs. Nouveau has never been good though.

Not really a surprise, given the tone of that blog post, that Nvidia doesn't want to collaborate with the OSS community.

Don't people rely on Nvidia for deep learning workflows? I thought that stuff ran on Linux? Maybe this is just about different dev priorities for what the driver supports?


> Don't people rely on Nvidia for deep learning workflows? I thought that stuff ran on Linux?

It all comes down to there always being two ways to do things when interacting with a GPU under Linux: The FOSS/vendor-neutral way, and the Nvidia way.

The machine learning crowd has largely gone the Nvidia way. Good luck getting your CUDA codebase working on any other GPU.

The desktop Linux crowd has largely gone the FOSS route. They have software that works well with AMD, Intel, VIA, and other manufacturers. Nvidia is the only one that wants vendor-specific code.
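
To put a little code behind the "two ways" point, here's a toy C sketch (purely illustrative, not how real ML frameworks do device discovery): the first half enumerates GPUs the vendor-neutral way via OpenCL, which will see AMD, Intel and Nvidia alike as long as an ICD is installed; the second half uses the CUDA driver API, which simply has no equivalent on non-Nvidia hardware:

    /* Sketch: vendor-neutral vs. Nvidia-only GPU enumeration.
     * Link roughly with -lOpenCL -lcuda (depends on your packaging). */
    #include <stdio.h>
    #include <CL/cl.h>  /* OpenCL: AMD, Intel, Nvidia, ... */
    #include <cuda.h>   /* CUDA driver API: Nvidia only */

    int main(void) {
        /* Vendor-neutral route. */
        cl_uint num_platforms = 0;
        if (clGetPlatformIDs(0, NULL, &num_platforms) == CL_SUCCESS)
            printf("OpenCL platforms found: %u\n", num_platforms);

        /* Nvidia-only route; this block is meaningless on AMD or Intel. */
        if (cuInit(0) == CUDA_SUCCESS) {
            int count = 0;
            cuDeviceGetCount(&count);
            printf("CUDA devices found: %d\n", count);
        } else {
            printf("no CUDA-capable (i.e. Nvidia) GPU or driver here\n");
        }
        return 0;
    }

The ML ecosystem standardized on the CUDA half of that, desktop graphics on the vendor-neutral half, and that's basically the whole disagreement.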


Thanks - makes sense.

> Nvidia is the only one that wants vendor-specific code.

Isn't that because CUDA is better and that tight software/hardware integration is more powerful?

If it wasn't then presumably people would be using AMD GPUs for their deep learning, but they're not.


Even when it comes to graphics and CUDA is not being used, Nvidia does not support the standard Linux APIs that every other GPU vendor supports.


While AMD has been great at open source and contributing to the kernel, they have also (from what I can remember) been subpar on reliability (with both the proprietary and the open-source drivers).

NVIDIA has been more or less good with desktop Linux + Xorg for the last 5-7 years (not accounting for the lack of support for hybrid graphics on Linux laptops).

I think you can use an NVIDIA GPU as a pure accelerator without it driving a display very easily.
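
For example, something like this (assuming the proprietary driver and CUDA toolkit are installed; device index 0 is just for illustration) runs fine over ssh on a headless box, with no X or Wayland involved at all:

    /* Sketch: bring up a CUDA compute context with no display server anywhere.
     * Link roughly with -lcuda. */
    #include <stdio.h>
    #include <cuda.h>

    int main(void) {
        CUdevice dev;
        CUcontext ctx;
        char name[128];

        if (cuInit(0) != CUDA_SUCCESS) {
            fprintf(stderr, "cuInit failed (no Nvidia driver?)\n");
            return 1;
        }
        cuDeviceGet(&dev, 0);
        cuDeviceGetName(name, sizeof(name), dev);
        cuCtxCreate(&ctx, 0, dev); /* compute context, no display attached */
        printf("compute context up on: %s\n", name);
        cuCtxDestroy(ctx);
        return 0;
    }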


> The price complaints seemed mostly about 'value' since the performance was still better than the competition in absolute terms.

Just because company A makes the fastest card overall does not imply that company A makes the faster card at every price point.


True, but in Nvidia's case I think they did?

Well - they charged more at each price point because they were faster.

At some price points I think the extra performance wasn't enough to justify the extra cost in price-to-performance terms, but that doesn't seem like a reason to think they're bad.

It's possible I'm a little out of date on this, I only keep up to date on the hardware when it's relevant for me to do a new build.


> Well - they charged more at each price point...

Are you joking? How is it the same price point when they are charging more?


If vendor A's cards are $180, $280 and $380 and vendor B's cards are $150, $250 and $350, it's common practice to group them into three price points of $150-199, $250-299 and $350-399 so that each card gets compared to its nearest in price.


The prices are way closer together than that, because both companies sell way more than three cards. There are three variants of the 2080 priced differently and two each of the 2070 and 2060. That's seven price points above $300 alone, without looking at the lower segment (two of those cards are EOL but still available a bit cheaper at some vendors). Nvidia and AMD have always had enough cards at the same MSRP.

E.g. the bottom of this page: https://www.anandtech.com/show/14618/the-amd-radeon-rx-5700-...

At the lower price point Nvidia's lineup is similarly crowded: https://www.anandtech.com/show/15206/the-amd-radeon-rx-5500-...

Either way, there would be no reason to group the $350 and the $400 card but not the $300 and the $350 card.

BTW, AMD definitely didn't always have a price/performance advantage; see e.g. the nice scatter plots from ten years ago (that I randomly found): https://techreport.com/review/19342/gpu-value-in-the-directx...



