> Nvidia have a good track record creating good products.
What incentive would Nvidia have to develop new ARM features that would assist competitors in competing with them?
Fujitsu's A64FX processor, for example, somewhat competes with Nvidia's datacenter GPUs. What incentive would Nvidia have to continue developing the Arm Scalable Vector Extension?
The only good thing from this sale would be the pivot to RISC-V.
Money? Like Samsung selling their chips to Apple. Nvidia has a lot of proprietary stuff that lets them get away even with inferior hardware. Nvidia will probably lose the AI datacenter market anyway; custom ASICs will just use RISC-V if Nvidia tries to stop them.
Nvidia doesn't care about secondary money. They're all about integrating stuff themselves and denying others the ability to do the same.
Nvidia is a castle because of the moats they've built. Not because they're leaps and bounds ahead in terms of silicon and technical wizardry. They shunned OpenCL, they'll probably shun Vulkan for GPGPU and make sure that people are locked to CUDA and its ecosystem. They have no intention to play fair.
We're going to see how Mellanox will evolve under their command.
wasn't OpenCL a mess even on AMD cards, and isn't it almost deprecated? And Nvidia's OpenGL drivers on Windows are way better than AMD's. I don't like Nvidia, especially after all the error 43 bullshit, but they are competent. Now that ARM is going public, won't that make things even worse?
it would be absolutely insane not to support either of those features. An absolute shit-ton of games use Vulkan these days, there is zero chance of that being deprecated in any timespan not measured in decades. Everyone is still supporting OpenGL after all, and that's like what, an early 90s API?
there is a larger point to be made here about some of the reflexively negative responses people have when the N-word is brought up. There are certainly criticisms you can make of NVIDIA, but people have this intense and reflexively negative emotional response that tends to slide these discussions into flamewar territory.
I know I'm about to hear about how Linus gave them the finger (cause he's a totally balanced and wholesome person), but providing a closed-source Linux driver and not supporting Wayland (which I think they did eventually anyway?) aren't the end of the world people make them out to be. Pick the product that fits your needs; if none does, move on. You don't need to become emotionally attached, and you can keep your positions fact-based.
> this comment is factually untrue and anyone can check it for themselves.
I agree that it's my mistake; however, there were gaps between those support periods and levels, and I've been burned by nVidia's own driver on an nVidia 800 series card. It's not out of spite or prejudice. I stand corrected, but my experience also stands. And my point about how nVidia treats technologies it doesn't like is still valid.
Also, being compatible doesn't mean these technologies are first-class citizens at the driver/hardware level, and they may not be allowed to use the card to its full potential.
> it would be absolutely insane not to support either of those features. An absolute shit-ton of games use Vulkan these days, there is zero chance of that being deprecated in any timespan not measured in decades. Everyone is still supporting OpenGL after all, and that's like what, an early 90s API?
There's support and there's support. I'm not talking about games and graphics stuff. I was doing some research on nVidia's "Vulkan Compute" support and saw this:
"Vulkan compute on AMD and Intel might be okay but we already know of ways Nvidia puts the clamp on Vulkan compute (doesn't make use of multiple DMAs for transfer) and given their history with OpenCL and the vested interest they have in developers preferring CUDA, they're really not to be trusted." [0][1]
The person writing that seems to know what he's talking about (reading through the history gives a lot of nice low-level detail about how these things work), and it's exactly what I was saying about "ditching Vulkan Compute in favor of CUDA, like OpenCL": "Yeah, it's supported, but it might not be fast, sorry."
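To make the "multiple DMAs" point concrete: in Vulkan, a GPU's copy/DMA engines surface as queue families that advertise only VK_QUEUE_TRANSFER_BIT. A small probe like the sketch below (plain Vulkan API calls, nothing vendor-specific; built against the Vulkan SDK) lists how many dedicated transfer queues each device exposes, which is one way to inspect that claim yourself. It's an illustration, not a benchmark.

```cpp
// Sketch: list each Vulkan device's queue families and flag the ones that are
// transfer-only, i.e. the queues that map onto dedicated copy/DMA engines.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "queue-probe";
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t devCount = 0;
    vkEnumeratePhysicalDevices(instance, &devCount, nullptr);
    std::vector<VkPhysicalDevice> devs(devCount);
    vkEnumeratePhysicalDevices(instance, &devCount, devs.data());

    for (VkPhysicalDevice dev : devs) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        std::printf("%s\n", props.deviceName);

        uint32_t famCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(dev, &famCount, nullptr);
        std::vector<VkQueueFamilyProperties> fams(famCount);
        vkGetPhysicalDeviceQueueFamilyProperties(dev, &famCount, fams.data());

        for (uint32_t i = 0; i < famCount; ++i) {
            bool transferOnly =
                (fams[i].queueFlags & VK_QUEUE_TRANSFER_BIT) &&
                !(fams[i].queueFlags & (VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT));
            std::printf("  family %u: flags 0x%x, %u queue(s)%s\n",
                        i, (unsigned)fams[i].queueFlags, fams[i].queueCount,
                        transferOnly ? "  <- dedicated transfer (copy/DMA) queues" : "");
        }
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Whether a driver then actually schedules buffer copies across those queues in parallel is a separate question, and that is exactly the complaint quoted above.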
> there is a larger point to be made here about some of the reflexively negative responses people have when the N-word is brought up. There are certainly criticisms you can make of NVIDIA, but people have this intense and reflexively negative emotional response that tends to slide these discussions into flamewar territory.
I'm personally no fan of flamewars. I also don't like knowingly spreading wrong information (see above). Also, I work in an HPC center where we use nVidia hardware, and I personally develop scientific software, so I'm not distant from either tier of nVidia hardware. So maybe not prejudging the person you're replying to the first time around is a good thing, no?
> I know I'm about to hear about how Linus gave them the finger (cause he's a totally balanced and wholesome person)...
No, you're not.
TL;DR: You can't write multi-vendor GPU acceleration code that uses every card to its highest potential, even though there are standards. You need to write at least two copies of the code, one of them being CUDA. Why? nVidia doesn't like other technologies working as fast on their cards. That's it.
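For illustration only, here is a minimal sketch of what that "two copies" situation looks like in practice. saxpy_cuda() and saxpy_vulkan() are hypothetical stand-ins (plain CPU loops here) for a real CUDA backend and a real Vulkan/OpenCL backend; the point is just that both copies, plus the dispatch layer, have to exist and be maintained.

```cpp
// Minimal sketch of the "at least two copies of the code" situation.
// The backend functions are placeholders, not real GPU kernels.
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// CUDA path: the copy you keep because it is the only way to reach the
// card's full potential on NVIDIA hardware.
void saxpy_cuda(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) y[i] = a * x[i] + y[i];  // placeholder body
}

// Portable path (Vulkan compute / OpenCL / SYCL): the copy you ship for everyone else.
void saxpy_vulkan(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) y[i] = a * x[i] + y[i];  // placeholder body
}

// The dispatch layer every multi-vendor project ends up writing.
void saxpy(const std::string& vendor, float a, const float* x, float* y, std::size_t n) {
    if (vendor == "NVIDIA") {
        saxpy_cuda(a, x, y, n);    // the tuned, vendor-specific copy
    } else {
        saxpy_vulkan(a, x, y, n);  // the standards-based copy
    }
}

int main() {
    std::vector<float> x(4, 1.0f), y(4, 2.0f);
    saxpy("NVIDIA", 2.0f, x.data(), y.data(), y.size());
    std::printf("%f\n", y[0]);  // 4.0
    return 0;
}
```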
> What incentive would Nvidia have to develop new ARM features that would assist competitors in competing with them?
> Fujitsu's A64FX processor, for example, somewhat competes with Nvidia's datacenter GPUs. What incentive would Nvidia have to continue developing the Arm Scalable Vector Extension?
"the enemy of my enemy is my friend".
the more R&D that goes into the ARM ecosystem, the tougher things get for x86. At the end of the day, all that R&D spending makes NVIDIA's position stronger; it's a force multiplier for their own efforts.
I fundamentally don't understand why anyone would think the ARM acquisition was about anything other than making Jensen kingmaker over one of the two most important processor IPs in the world - and he gains absolutely nothing by becoming king and then slaying all his subjects. That would be an amazingly shortsighted decision from one of the most far-sighted tech CEOs in the business.
Selling some more Tegras is peanuts in comparison and that bump would never last in the long term. The money is in Qualcomm and Fujitsu and IBM's R&D budgets working in synergy with your own, and in the ability to leverage the ARM licensing model to push CUDA into the last places it hasn't reached.
The fact that NVIDIA was even making this offer at all pretty much means they were looking at loosening up their IP licensing, IMO. As a black box, sure, but I don't see a world in which NVIDIA buys ARM and then either doesn't license GeForce as the graphics IP or chooses not to license ARM at all ("selling a few more Tegras is not worth $40B"). If you accept those two givens, then NVIDIA would have had to provide the GeForce IP as a black-box SIP license.
The A64FX has way worse performance per watt than an AMD EPYC + A100 system. The A64FX's density per U is higher, so you can get a lot more cores in the same datacenter, even though its FLOPS per watt is far worse.
For example, if you look at the TOP500, #1 (A64FX) gets about 14.78 GFLOPS/watt, while #5 (EPYC/A100) gets 27.37 GFLOPS/watt. Most of the A100-based systems have similar performance per watt. Nvidia is already beating ARM in power efficiency. This looks closer to the 3dfx acquisition from Nvidia's perspective.
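For anyone who wants to sanity-check those numbers, here is the back-of-the-envelope arithmetic using published TOP500 Rmax (TFLOP/s) and power (kW) figures from around that list. The figures are quoted from memory and the #5 attribution is my assumption, so treat them as approximate; dividing TFLOP/s by kW gives GFLOPS per watt directly.

```cpp
// Back-of-the-envelope check of the efficiency figures above.
// TFLOP/s divided by kW equals GFLOP/s per watt (1e12 / 1e3 = 1e9).
#include <cstdio>

int main() {
    // #1, Fugaku (A64FX): roughly 442,010 TFLOP/s at roughly 29,899 kW.
    double a64fx = 442010.0 / 29899.0;
    // #5, an EPYC + A100 system: figures back-filled to match the 27.37 quoted above.
    double epyc_a100 = 70870.0 / 2589.0;

    std::printf("A64FX system:      %.2f GFLOPS/W\n", a64fx);      // ~14.78
    std::printf("EPYC+A100 system:  %.2f GFLOPS/W\n", epyc_a100);  // ~27.37
    return 0;
}
```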