Nvidia's stake in Intel could have terrible consequences. First, it is in Nvidia's interest to kill Intel's Arc graphics, and that would be very bad because it is the only thing bringing GPU prices down for consumers. Second, the death of Intel graphics / Arc would be extremely bad for Linux, because Intel's approach to GPU drivers is the best for compatibility, whereas Nvidia is actively hostile to drivers on Linux. Third, Intel is the only company marketing consumer-grade graphics virtualization (SR-IOV), and the loss of that would make Nvidia's enterprise chips the only game in town, meaning the average consumer gets less performance, less flexibility, and less security on their computers.
Conclusion: Buy AMD. Excellent Linux support with in-tree drivers, for 15 years now! A bug is something that will be fixed.
Nvidia's GPUs are theoretically fast in initial benchmarks, but that is mostly because everyone else optimizes for Nvidia. That's it.
Everything Nvidia has done is a pain: closed-source drivers (old pain), out-of-tree drivers (new pain), ignoring (or actively harming) Wayland (everyone handles implicit sync well, except Nvidia, which required explicit sync[1]), and awkward driver bugs declared as “it is not a bug, it is a feature”. The infamous bug:
This extension provides a way for applications to discover when video
memory content has been lost, so that the application can re-populate
the video memory content as necessary.
This extension will soon be ten years old. At least they intend to fix it? They just haven't in the past nine years! Basically, video memory could be gone after Suspend/Resume, VT-Switch and so on. The good news is, after years someone figured that out and implemented a workaround. For X11 with GNOME:
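To illustrate what the extension actually asks applications to do, here is a minimal sketch in C (this is not the GNOME workaround referenced above, which isn't quoted here); it assumes a robustness-enabled GL context created with the GLX_NV_robustness_video_memory_purge attribute, and the re-upload helpers are hypothetical:

    /* Sketch only: detect that video memory was purged (e.g. after
     * suspend/resume or a VT switch) and re-create the lost resources. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    #ifndef GL_PURGED_CONTEXT_RESET_NV
    #define GL_PURGED_CONTEXT_RESET_NV 0x92BB /* from GL_NV_robustness_video_memory_purge */
    #endif

    extern void reupload_all_textures(void); /* hypothetical app helpers that     */
    extern void reupload_all_buffers(void);  /* re-populate video memory contents */

    void check_video_memory(void)
    {
        if (glGetGraphicsResetStatus() == GL_PURGED_CONTEXT_RESET_NV) {
            /* The context is still usable, but video memory contents are gone. */
            reupload_all_textures();
            reupload_all_buffers();
        }
    }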
AMD is not competing enough with NVIDIA, so they are not a solution.
What I mean is that whenever NVIDIA removed features from their "consumer" GPUs in order to reduce production costs and increase profits, AMD immediately followed them, instead of attempting to offer GPUs that have something that NVIDIA does not have.
Intel at least tries to be a real competitor, e.g. by offering much, much better FP64 performance or by offering more memory.
If Intel's discrete GPUs disappear, there will be no competition in consumer GPUs, as AMD tries to compete only in "datacenter" GPUs. I have ancient AMD GPUs that I cannot upgrade to newer AMD GPUs, because the newer GPUs are worse, not better (for computational applications; I do not care about games), while Intel offers acceptable substitutes, due to excellent performance per $.
Moreover, NVIDIA also had excellent Linux driver support for more than 2 decades, not only for games, but also for professional graphics applications (i.e. much better OpenGL support than AMD) and for GPU computing applications (i.e. CUDA). AMD gets bonus points for open-source drivers and much more complete documentation, but the quality of their drivers has been typically significantly worse.
NVIDIA always had good support even for FreeBSD, where I had to buy discrete NVIDIA GPU cards for computers with AMD APUs that were not supported on any OS other than Windows and Linux.
AMD "consumer" GPUs are a great choice for those who are interested only in games, but not for those interested in any other GPU applications. AMD "datacenter" GPUs are good, but they are far too expensive to be worthwhile for small businesses or for individuals.
I've found the amdgpu Linux driver to be fairly buggy running dual monitors with my Radeon VII, and found things like the fTPM to be highly buggy on Threadripper 2k/x399 to the point that I had to add a dTPM. They never got things truly working properly with those more-niche products before they just.. kind of... stopped working on them. And of course ROCm is widely regarded to be a mess.
On the other hand, my Steam Deck has been exceedingly stable.
So I guess I would say: Buy AMD but understand that they don't have the resources to truly support all of their hardware on any platform, so they have to prioritize.
Contrast that with the earlier R9 285 that I used for nearly 10 years until I was finally able to get a 9070XT that I'm very happy with. They are still refining support for that aged GCN 1.2 driver even today, even if things are a lower priority to backport.
Overall, the ONLY things I'm unhappy about this GPU generation are:
* Too damned expensive
* Not enough VRAM (and no ECC off of workstation cards?)
* Too hard for average consumers to just buy direct and cut out the scalpers
The only way I could get my hands on a card was to buy through a friend that lives within range of a Microcenter. The only true saints of computer hardware in the whole USA.
Both of which NVidia does a lot better in practice! I'm all for open-source in-tree drivers, but in practice, 15 years on, AMD is still buggy on Linux, whereas NVidia works well (not just on Linux but on FreeBSD too).
> I don’t judge whether implicit sync or explicit are better.
> Both of which NVidia does a lot better in practice!
Correction - if they care. And they don't care to do it on Linux, so you get them dragging their feet for decades on something like Wayland support, PRIME, you name it.
Basically, the result is that in practice they offer abysmally bad support, otherwise they'd have upstream kernel drivers and no userspace blobs. Linux users should never buy Nvidia.
I don't understand what you're saying here. I've used NVidia on Linux and FreeBSD a lot. They work great.
If your argument is they don't implement some particular feature that matters to you, fair enough. But that's not an argument that they don't offer stability or Linux support. They do.
Taking very long to implement stuff is a perfect argument of bad support for the platform. Timely support isn't any less important than support in general.
Are you a product manager? Or do you just not see the irony in your comment?
Long term support means my thing that has been working great continues to work great. New feature implementation has nothing to do with that and is arguably directly against long term support.
And Nvidia seems justified in this, since effectively no distro dropped X11 until Nvidia had support.
If you think taking decades is an acceptable rate while others do it in a timely manner, that's your own problem. For any normal user it's completely unacceptable and is the opposite of great (add to it that even after decades of dragging their feet they only offer half-cooked support and still can't even sort out upstreaming their mess). Garbage support is what it is.
AMD is notorious for not having ROCm support on currently sold, in-production GPUs, and for horrendous bugs that actually make the devices unusable.
I use AMD GPUs on Linux, and I generally regret not just buying an Nvidia GPU, purely because of AMD's lacklustre support for compute use cases in general.
Intel is still too new in the dGPU market to trust and on top of that there is so much uncertainty about whether that entire product line will disappear.
So at this point the CUDA moat makes it a non-issue; on top of that, what works works and keeps working, whereas with AMD I constantly wonder whether something will randomly not work after an update.
A timeline of decades for “features” your biggest consumers don’t care about is a reasonable tradeoff, even more so if actually pushing those features would reduce stability.
That's exactly the point. Nvidia might care about industrial use cases, but they don't care about desktop Linux usage, and as a result their support is garbage.
I've been using Nvidia gpus exclusively on debian linux for the past 20 years, using the binary Nvidia drivers. Rock solid stability and excellent performance. I don't care for Wayland as I plan to stay on Xorg + Openbox for as long as I can.
Wayland support hasn't been an issue since GLX was deprecated for EGLStream. I think the Nvidia backend has been "functional" for ~3 years and nearly flawless for the past year or so.
Both Mutter and KWin have really good Nvidia Wayland sessions nowadays.
It got better, but my point is how long it took to get better. That's the indicator of how much they care about Linux use cases in general. Which is way below acceptable level - it's simply not their priority (which is also exacerbated by their hostile approach to upstreaming).
I.e. if anything new needs something implemented tomorrow, Nvidia will make their users wait another decade again. Which I consider an unacceptable level of support and something that flies in the face of those who claim that Nvidia supports Linux well.
Buying AMD (for graphics) has been the only ethical choice for a long time. We must support the underdogs. Since regulation has flown the coop, we must take responsibility ourselves to fight monopolies. The short-term costs may be a bit higher, but the long-term payoff is the only option for our self-interest!
> Conclusion: Buy AMD. Excellent Linux support with in-tree drivers.
Funnily, AMD's in-tree drivers are kind of a pain in the ass. For up to a year after a new GPU is released, you have to deal with using mesa and kernel packages from outside your distro, while if you buy a brand new nVidia card, you just install the latest release of the proprietary drivers and it'll work.
Linux's driver model really is not kind to new hardware releases.
Of course, I still buy AMD because Nvidia's drivers really aren't very good. But that first half a year was not pleasant last time I got a relatively recently released (as in, released half a year earlier) AMD card.
A lot of people want to use Ubuntu or Ubuntu-based distros.
I have since switched from Ubuntu to Fedora, maybe Fedora ships mesa and kernel updates within a week or two from release, I don't know. But being unable to use the preferred distro is a serious downside for many people.
ATI/AMD open source linux support has been blowing hot and cold for over 25 years now.
They were one of the first to actually support open source drivers, with the r128 and original radeon (r100) drivers. Then went radio silence for the next few years, though the community used that as a baseline to support the next few generations (r100 to r500).
Then they reemerged with actually providing documentation for their Radeon HD series (r600 and r700), and some development resources but limited - and often at odds with the community-run equivalents at the time (lots of parallel development with things like the "radeonhd" driver and disagreements on how much they should rely on their "atombios" card firmware).
That "moderate" level of involvement continued for years, releasing documentation and some initial code for the GCN cards, but it felt like beyond the initial code drops most of the continuing work was more community-run.
Then only relatively recently (the last ~10 years) have they started putting actual engineering effort into things again, with AMDGPU and the majority of mesa changes now being paid for by AMD (or Valve, which is "AMD by proxy" really as you can guarantee every $ they spend on an engineer is $ less they pay to AMD).
So hopefully that's a trend you can actually rely on now, but I've been watching too long to think that can't change on a dime.
It is possible that at some point, maybe 15 years ago, AMD provided sufficient documentation to write drivers, but even 10 years ago, a lot of documentation was missing (without even mentioning that fact), which made trying to contribute rather frustrating. Not too bad, because as you said, they had a (smallish) number of employees working on the open drivers by then.
Those who want to run Linux seriously will buy AMD. Intel will be slowly phased out, and this will reduce maintenance and increase the quality of anything that previously had to support both Intel and AMD.
However, if Microsoft or Apple scoop up AMD, all hell will break loose. I don’t think either would have interest in Linux support.
Oh boy that strikes a nerve with the "Video memory could be gone after Suspend/Resume". Countless hours lost trying to fix a combination of drivers and systemd hooks for my laptop to be able to suspend/hibernate and wake up back again without issues... Which makes it even more complicated when using Wayland.
I have been looking at high-end laptops with dedicated AMD Graphics chip, but can't find many... So I will probably go with AMD+NVidia with MUX switch, let's see how it goes... Unless someone else has other suggestions?
> Basically, video memory could be gone after Suspend/Resume, VT-Switch and so on.
This actually makes sense: for example, a new task has swapped out the previous task's data, or a host and guest are sharing the GPU and pushing each other's data away. I don't understand why this is not a part of GPU-related standards.
As for a solution, wouldn't discarding all the GPU data after resume help? Or keeping a copy of the data in system RAM.
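For what it's worth, the "keep the data in system RAM" idea is roughly the shadow-copy pattern sketched below; the struct and function names are made up for illustration, and it just pairs every GPU upload with a CPU-side copy that can be replayed after resume:

    /* Sketch: keep a system-RAM shadow of a GPU buffer so its contents can
     * be re-uploaded if video memory is lost across suspend/resume. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        GLuint id;      /* GL buffer object name */
        void  *shadow;  /* CPU-side copy of the contents */
        size_t size;
    } ShadowedBuffer;

    void shadowed_upload(ShadowedBuffer *b, const void *data, size_t size)
    {
        b->shadow = realloc(b->shadow, size);
        memcpy(b->shadow, data, size);               /* keep the RAM copy */
        b->size = size;
        glBindBuffer(GL_ARRAY_BUFFER, b->id);
        glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
    }

    /* Call after resume (or after a purge notification) to restore the data. */
    void shadowed_restore(ShadowedBuffer *b)
    {
        glBindBuffer(GL_ARRAY_BUFFER, b->id);
        glBufferData(GL_ARRAY_BUFFER, b->size, b->shadow, GL_STATIC_DRAW);
    }

The obvious cost is doubling the memory footprint of anything you shadow, which is presumably why drivers and compositors are expected to handle this rather than every application.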
The last time I tried to file a bug for a crash in an AMD Windows driver, I had to go through an anonymous developer I found on Discord, and despite weeks of effort writing and sharing a test case, they chose to ignore the bug report in the end. The developer even asked not to be named as he might face repercussions for trying to help out.
I once had a mini PC with Nvidia. I got it for CUDA dev. One day support for it was dropped, so I was unable to update my system without it messing things up. So regardless of CUDA, I decided Nvidia is not for me.
However, doing research when buying a new PC, I've found that AMD kind of sucks too. ROCm isn't even supported on many of the systems I was looking into. Also, I've heard their Linux graphics drivers are poor.
So basically I just rock a potato with Intel integrated graphics now. GPUs cost too much to deal with that nonsense.
In your case maybe, but not according to some of the comments here in this very thread and also in some forums and YouTube videos back when I'd last checked.
FWIW, my experience gaming/web browsing/coding on a 3070 with modern drivers has been fine. Mutter and KWin both have very good Wayland sessions if you're running the new (>550-series) drivers.
Apparently it is 5% ownership. Does that give them enough leverage to tank Intel’s iGPUs?
That would seem weird to me. Intel’s iGPUs are an incredibly good solution for their (non-glamorous) niche.
Intel’s dGPUs might be in a risky spot, though. (So… what’s new?)
Messing up Intel’s iGPUs would be a huge practical loss for, like, everyday desktop Linux folks. Tossing out their dGPUs, I don’t know if it is such a huge loss.
> Tossing out their dGPUs, I don’t know if it is such a huge loss
It would be an enormous loss to the consumer/enthusiast GPU buyer, as a third major competitor is improving the market from what feels like years and years of dreadful price/perf ratio.
AMD is slow and steady. They were behind many times, and many times they surprised with amazing innovations overtaking Intel. They will do it again, for both CPU and GPU.
Intel's iGPUs don't seem very at risk because the market for low-power GPUs isn't very profitable to begin with. As long as Nvidia is able to sell basically any chip they want, why waste engineering hours and fab time on low-margin chips? The GT 1030 (Pascal) never got a successor, so that line is as good as dead.
Even before the Pascal GTs, most of the GT 7xx cards, which you would assume were Maxwell or Kepler from the numbering, were rebadged Fermi cards (4xx and 5xx)! That generation was just a dumping ground for all the old chips they had laying about, and given the prominence of halfway decent iGPUs by that point, I can't say I blame them for investing so little in the lineup.
That said, the dGPUs are definitely somewhat at risk, but I think the risk is only slightly elevated by this investment, given that it isn't exactly a cash cow and Intel has been doing all sorts of cost-cutting lately.
Aren't a lot of those cards sold for the audience that needs more display heads rather than necessarily performance?
This has been somewhat improved-- some mainboards will have HDMI and DisplayPort plumbed to the iGPU, but the classic "trader desk" with 4-6 screens hardly needs a 5090.
They could theoretically sell the same 7xx and 1030 chips indefinitely. I figure it's a static market like those strange 8/16Mb VGA chipsets that you sometimes see on server mainboards, just enough hardware to run diagnostics on a normally headless box.
Agree. Not only would there be no money in it to try to replace Iris graphics or whatever they call them now -- it would be ultra pointless because the only people buying integrated graphics are those where gaming, on-device AI, and cryptocurrency aren't even part of the equation. Now, that is like 80%+ of the PC market, but it's perfectly well served already.
I saw this move more as setting up a worthy competitor to Snapdragon X Elite, and it could also probably crush AMD APUs if these RTX things are powerful.
Intel sells discrete cards, and their next card was set up to do AI and games competently. They were poised to compete with the low to mid range Nvidia cards at HALF the cost.
It was definitely going to upset the market. Now I understand the radio silence on a card that was supposed to have been coming by Xmas.
Oh for sure. Arc is in jeopardy. Though tbh it was already, wasn't it? Can't you see an alternate universe where this story never happened, but Intel announced today "Sorry, because our business is dying in general and since Arc hasn't made us a ton of money yet anyway, we need to cut Arc to focus on our core blah blah blah".
I just meant their integrated GPUs are what's completely safe here.
It wasn't in jeopardy for being no good, it was in jeopardy because Intel is so troubled. Like the Bombardier C-Series jet: Everyone agreed it was a great design and very promising, but in the end they had no choice but to sell it to Airbus (who calls it the A220), I think because they didn't really have the money to scale up production. In like manner, Intel lacks the resources to make Arc the success it technically deserves to be, and without enough scale, they'll lose money on Arc, which Intel can hardly afford at this point.
Calling BS on "gaming not part of the equation". Several of my friends and I have exclusively gamed on integrated graphics. Sure, we don't play the most abusively unoptimized AAA games like RDR2. But we're here and we're gaming.
RDR2 is quite optimized. We spent a lot of time profiling before release, and while input latency can be a tad high, the rendering pipeline is absolutely highly optimized, as exhibited by the large number of benchmarks on the web.
Sorry, I'm happy for you, and I do play Minecraft on an iGPU. I just meant that about 80% of the PCs sold seem to be for "business use" or Chromebooks, and the people writing those POs aren't making their selections with gaming in mind.
(And also, I'm pretending Macs don't exist for this statement. They aren't even PCs anymore anyway, just giant iPhones, from a silicon perspective.)
That performance is not surprising, Arc seems pretty dope in general.
I hadn't realized that "Arc" and "Integrated" overlapped, I thought that brand and that level of power was only being used on discrete cards.
I do think that integrated Arc will probably be killed by this deal though, not for being bad as it's obviously great, rather for being a way for Intel to cut costs with no downsides for Intel. If they can make RTX iGPUs now, and the Nvidia and RTX brand being the strongest in the gaming space... Intel isn't going to invest the money in continuing to develop Arc, even if Nvidia made it clear that they don't care, it just doesn't make any business sense now.
That is a loss for the cause of gaming competition. Although having Nvidia prop up Intel may prove to be a win for competition in terms of silicon in general versus them being sold off in parts, which could be a real possibility it seems.
"Gaming" = "real-time-graphics-intensive application". You could be playing chess online, or emulated SNES games, but that's not what "gaming" refers to in a hardware context.
> Sure we don't play the most abusively unoptimized AAA games like RDR2.
Wait, RDR2 is badly optimized? When I played it on my Intel Arc B580 and Ryzen 7 5800X, it seemed to work pretty well! Way better than almost any UE5 title, like The Forever Winter (really cool concept, but couldn't get past 20-30 FPS, even dropping down to 10% render scale on a 1080p monitor). Or with the Borderlands 4 controversy, I thought there'd be way bigger fish to fry.
It would be amusing to see nVidia cores integrated into the chipset instead of the Intel GPU cores. I doubt that is in the cards unless Intel is looking to slash the workforce by firing all of their graphics guys.
I would guess Nvidia doesn't care at all about the iGPUs, so I agree they are probably not at risk. The dGPUs, though, I absolutely agree are in a risky spot. Perhaps Intel was planning to kill their more ambitious GPU goals anyway, but that seems extremely unhealthy for pretty much everyone except Nvidia.
We'd have to see their cap table approximation, but I've seen functional control over a company with just a hair over 10% ownership given the voting patterns of the other stock holders.
5% by about any accounting makes you a very, very influential stockholder in a publicly traded company with a widely distributed set of owners.
Intel was already dead; even money from the government didn't help them. It is an old, legacy, bad corp. I think NV just wants to help them and use them however it wants - Intel management will do anything they say.
Intel's GPUs are a better solution for almost all computing outside high-end gaming, AI, and a few other tasks. For most things a better GPU is overkill and wastes energy.
- The datacenter GPU market is 10x larger than the consumer GPU market for Nvidia (and it's still growing). Winning an extra few percentage points in consumer is not a priority anymore.
- Nvidia doesn't have a CPU offering for the datacenter market and they were blocked from acquiring ARM. It's in their interest to have a friend on the CPU side.
- Nvidia is fabless and has concentrated supplier and geopolitical risk with TSMC. Intel is one of the only other leading fabs onshoring, which significantly improves Nvidia's supplier negotiation position and hedges geopolitical risk.
> Nvidia doesn't have a CPU offering for the datacenter market and they were blocked from acquiring ARM. It's in their interest to have a friend on the CPU side.
Someone should tell nvidia that. They sure seem to think they have a datacenter CPU.
I wonder if this signals a lack of confidence in their CPU offerings going forward?
But there's always TSMC being a pretty hard bottleneck - maybe they just can't get enough (and can't charge close to their GPU offerings per wafer), and pairing with Intel themselves is preferable to just using Intel's Foundry services?
>Nvidia is fabless and has concentrated supplier and geopolitical risk with TSMC.
The East India Company conducted continental wars on its own. A modern company with a $4T valuation, country-GDP-size revenue, and possession of the key military technology of today's and tomorrow's wars - AI software and hardware, including robotics - can successfully wage such a continental war through a suitable proxy, say an oversized private military contractor (especially if it is massively armed with drones and robots), and in particular is capable of defending an island like Taiwan. (Or thinking backwards - an attack on Taiwan would cause a trillion or two drop in NVDA valuation. What options get on the table when there is a threat of a trillion-dollar loss... To compare - 20 years of Iraq cost 3 trillion, i.e. $150B/year buys you a lot of military hardware and action, and an efficient defense of Taiwan would cost much less than that.)
Not necessarily. Territorial war requires people. Defense from kinetic strikes on key objects concentrated on smallish territory requires mostly high-tech - radars and missiles - and that would be much easier for a very rich high-tech US corporation.
An example - the Starlink antenna, sub-$500, a phased array which is actually like a half or a third of such an array on a modern fighter jet, where it costs several million. Musk naturally couldn't go the way of a million per antenna, so he had to develop and source it on his own. The same with anti-missile defense - if/when NVDA gets to it to defend the TSMC fabs, NVDA would produce such defense systems orders of magnitude cheaper, and that defense would work much better than modern military systems.
> Taiwanese gov prevents them from doing it. Leading node has to be on Taiwanese soil
This is a bold claim. Do you have any public evidence to share? I have never once seen this mentioned in any newspaper articles that I have read about TSMC and their expansion in the US.
Nvidia's options are: fund your competition to keep the market dynamic, or let the government do it by breaking you apart.
So yes. That's how American competition works.
It isn't a zero sum game. We try to create a market environment that is competitive and dynamic.
Monopolies are a threat to both the company and a free, open, dynamic market. If Nvidia feels it could face an antitrust suit, which is reasonable, it is in its best interest to fund the future of Intel.
Will Nvidia continue to exist beyond the current administration? If yes, then would it be prudent to consider the future beyond the current administration?
Microsoft wasn't funding bankrupt Apple; Microsoft was settling a lawsuit with Jobs just on the cusp of the DOJ monopoly lawsuit. Microsoft was stealing and shipping Apple QuickTime source code.
> handwritten note by Fred Anderson, Apple's CFO, in which Anderson wrote that "the [QuickTime] patent dispute was resolved with cross-licence and significant payment to Apple." The payment was $150 million
One interesting parallel is Intel and AMD back in x86 1991, which is the reason AMD is today allowed to produce x86 at all without massive patent royalties to Intel. [Asianometry](https://youtu.be/5oOk_KXbw6c) had a nice summary of it.
Nvidia is leaning more into data centres but lacks a CPU architecture or expertise. Intel is struggling financially but has knowledge in iGPUs and a vast amount of patents.
They could have a lot to give one another, and it's a massive win if it keeps Intel afloat.
Yeah I think Nvidia were hostile to Linux when they saw no value in it. Now it's where the machine learning is. It's the OS powering the whole AI hype train. Then there is also Steamdeck making Linux gaming not a complete write off anymore.
The article hints at it, but my guess would be this investment is intended towards Intel foundry and getting it to a place where NVIDIA can eventually rely on them over TSMC — and the ownership largely to give them upside if/when Intel stock goes up on news of an NVIDIA contract etc. It isn’t that uncommon of an arrangement for enterprise deals of such a potential magnitude. Long-term, however, and without NVIDIA making the call that could definitely have the effect of leading to Intel divesting from directly competing in as many markets, ie Arc.
For context, I highly recommend the old Stratechery articles on the history of Intel foundry.
My first thought was also that this relates to Intel's foundry business. Even if only to be able to use it in price negotiations with TSMC (it's hard to threaten to go elsewhere when there is no elsewhere left).
Do we want Intel to fall and go bankrupt? Or do we want Intel to survive? I don't think most people are clear about what is happening here. This is it. The Margin Call moment.
Intel could either transform into Fabless company compete on design, and manufacture with TSMC. Or they continue to be a Foundry player, crucial to US strategic interest. You can only pick one, and competing on just one of them is already a monumental task.
The GPU business is burning money, with no short-term success in sight that could make it cash-flow positive in a 3-4 year time frame. I have been stating this since 2016 and we are now coming close to 2026; recent market share figures suggest Intel is at less than 1% of the discrete market. Especially given the strong roadmap Nvidia has.
This gives a perfect excuse for Intel to quit GPUs. Nvidia provides the cash flow to hopefully continue developing 18A and 14A; Intel manufactures Nvidia GPUs for them and slowly transitions itself to an x86 + Foundry-only model, or even solely manufactures for Nvidia. The US administration could further force Apple, Qualcomm and Broadcom to use Intel in some capacity. Assuming Intel can keep up with TSMC, which is probably a comparatively easier task than tackling the GPU market.
I am assuming the Intel board is happy with that direction, though. Because so far they have shown a complete lack of any strategic vision.
This seems like it could be a long term existential threat for AMD. AMD CPU + GPU combos are finally coming out strong, both with MI300+ series in the supercomputing space, Strix Halo in laptops, etc. They have the advantage of being able to run code already optimized for x86 (important for gamers and HPC code), which NVIDIA doesn't have. Imagine if Grace Blackwell had x86 chips instead of Arm. If NVIDIA can get Intel CPUs with its chip offerings, it could be poised to completely take over so many new portions of the market/consolidate its current position by using its already existing mindshare and market dominance.
This seems more like the deal where Microsoft invested in Apple. It’s basically charity and they will flip it in a few years when Intel gets back on their feet.
Using a fortune that falls into your lap to kill competition is a common practice of economics-oriented (vs. technology-oriented) organizations. That brings benefits only for the organization; for everyone else it brings damage and disappointment.
Yeah, Nvidia has trillions at stake, Intel a mere 100B. It's more in the interests of Nvidia to interfere with Intel's GPU business than to help it, and the only things they want from Intel are the fabs.
At this point Nvidia is just shooting themselves in the foot with hostility towards Linux - they are actively using Linux systems for DGX systems and the dependency on Linux is only going to grow internally.
Something about this reminds me of other industry gobbling purchases. None of them ever turned out better for the product, price or general well being of society.
As an Apple user (and even an Apple investor), I'd rather that Apple went out of business back then. If we could re-roll the invention of the (mainstream) smartphone, maybe we'd get something other than two monopolistic companies controlling everything.
For instance, maybe if there were 4 strong vendors making the devices with diverse operating systems, native apps wouldn't have ever become important, and the Web platform would have gotten better sooner to fill that gap.
Or maybe it'd have ended up the same or worse. But I just don't think Apple being this dominant has been good for the world.
Or... we could still be using blackberry-like devices without much in the way of active/touch interface development at all. Or worse, the Windows CE or Palm with the pen things.
Why? Was Steve Jobs literally the only human who was capable of seeing the massive unserved demand that existed back then?
Sidekick was amazing for its time, but only on one also-ran carrier. BlackBerry had great features like BBM (essentially iMessage) but underpowered for multimedia and more difficult to learn. If Apple was out of business, one or more companies would have made the billions on MP3 players that iPod made, and any of them could have branched into phones and made a splash the same way. Perhaps Sony, perhaps Microsoft. Microsoft eventually figured it out -- the only reason they failed was that they waited for both Apple and Android to become entrenched so in this timeline they could have been the second-mover, but unlike with Apple and Android, maybe neither MS nor Google would have automatically owned the US marketshare the way Apple does[1]. If that were the case, we may have competition, instead of the unhealthy thing we have where Apple just does whatever they want.
With all due respect, there's a simple answer to why Apple was destined to win the smartphone race - they had a 5 year lead over everyone else because they had the OS and touch interface tightly integrated. On top of that, they managed to scale up the production of the glass necessary for the touch to work and partnered with a network provider to overcome the control network providers had over handset producers.
They had such a lead that nobody was going to catch up and eat into their economic profits. Sure, Samsung et al have captured marketshare, but they have not eaten into Apple's economic profits.
Whether you like it or not, this hard work, effort and creativity deserves to be rewarded - in the form of monopoly/oligopoly profits.
Apple has shown itself to be very disciplined with its cash. That cannot be said for Google, who instead of taking on an endless stream of vanity projects, should return that cash back to shareholders.
BB10 was the shit. Fantastic OS and (some models) a great hardware keyboard. But it was already a response to the iPhone, wouldn't have happened without...
There's nothing supernatural about Apple that meant only they could do something better than that shitty generation of devices. Remember, the portable consumer electronics market would certainly have other huge players if Apple hadn't existed to make the iPod. BlackBerry, Microsoft, and Sony come to mind. iPhone, based mainly on Apple's popularity from the iPod era, got a huge jump from that, and then the rush for native apps, which encourages consolidation, smothered every other company's competing devices (such as WebOS, BlackBerry 10, Windows Mobile) before they had a chance to compete.
To be honest, Android may have met a similar fate if Apple had been able to negotiate a non-exclusive contract with Cingular/AT&T. My understanding though was that they had to give exclusivity as a bargaining chip to get all the then-unthinkable concessions, as yeah, every phone was full of garbage bloatware and festooned with logos.
Both. Also things like sound cards, network cards, peripherals in general.
My happiness and stability while using Linux has been well correlated with the number of devices with Intel in the name. Every single device without Intel invariably becomes a major pain in my ass every single time.
It's gotten to the point I assume it will just work if it's Intel.
> And does AMD not have excellent Linux support for their own CPUs and GPUs?
They're making a lot of progress but Intel is still years ahead of them.
Earlier this year I was researching PC parts for a build and discovered AMD was still working on merging functionality like on die temperature sensors into the kernel. It makes me think I won't have a full feature set on Linux if I buy one of their processors.
Well, AMD isn't going away yet, and they do seem to have finally realized the advantage of open-source drivers. But that's still very bad for competition and prices.
This is a death blow to the Intel GPU+AI efforts and should not be allowed by the regulators. It is clear that Intel needs the downstream, low-cost GPU market segment to have a portfolio of AI chips based on chiplets, where the most defective parts end up in the consumer-grade GPUs based on manufacturing yield. NVidia's interest is now for Intel to enter neither the GPU market nor the AI market - which Intel was preparing for with its GPU efforts in recent years.
The US government is itself a major shareholder in Intel, and has every incentive to push Intel stock over its competitors. It's almost a certainty that Nvidia was forced into this deal by the government as well. We are way beyond regulation here.
They want a source of chips for the wars they want to conduct that is not either controlled by the party they want to go war with, or way way closer to the party they want to go to war with than they are. Buying a chunk of Intel is a way of making sure they do the things the government wants that will lead to that outcome. Or at least so the theory goes; I've got my own cynicism on this matter and wouldn't dream of tamping down on anyone else's.
Right now if the US wants to go to war with China, or anyone China really really likes, they can expect with high probability to very quickly encounter major problems getting the best chips. AIUI the world has other fab capacity that isn't in Taiwan, and some of it is even in the US, but they're all on much older processes. Some things it's not a problem that maybe you end up with an older 500MHz processor, but some things it's just a non-starter, like high-end AI.
Sibling commenters discussing profits are on the wrong track. Intel's 2024 revenue, not profits, was $53.1 billion. The Federal Government in 2024 spent $6,800 billion. No entity doing $1.8 trillion in 2024 in deficit spending gives a rat's ass about "profits". The US Federal government just spends what it wants to spend, it doesn't have any need to generate any sort of "profits" first. Thinking the Federal government cares about profits is being nowhere near cynical enough.
This is generally true even setting side the "war with China" angle. Intel is a large domestic company employing hundreds of thousands in a very critical sector, and the government has every incentive to prevent it from failing. In the last two decades we've bailed out auto companies and banks and US Steel (kinda) for the same reason.
Concisely put. This is exactly the reasoning. The US is preparing for a potential war with China in 2026 or 2027, and this is how it is beginning preparations.
> Right now if the US wants to go to war with China
The US is desperate to not have that war, because they spent so long in denial about how sophisticated China has become that it would be a total humiliation. What you see as the US wanting war is them simply playing catch up.
I find it funny that people talk about a US/China war as a real possibility. You are aware that that would be the end of life on earth as we know it, right?
Unfortunately, "it would end life on Earth as we know it" is not, on its own terms, a thing that will stop it from happening. All it takes is the people who can make the decision deciding to do it because they think they will come out ahead, and not caring about what it may do to anyone else. And they don't even have to be right. They just have to think they will come out ahead.
Don't mistake talking about a thing as advocating for that thing. It leaves you completely unable to process international politics, and frankly, a lot of other news and discussion as well. If you can only think about things you approve of, your model of the world is worse than useless.
We haven’t really tested the idea of a geographically restricted war. During the Cold War there were some pretty transparent proxy wars, but the proxy still allowed for backing out and saving face.
I don’t think geographically restricting a war is even possible, really. The US’s typical game plan involves hitting the enemy’s decision-making capabilities faster than they can react. That goes out the window if we can’t hit each other’s mainlands. A war where we don’t get to use our strongest trick and China keeps their massive industrial base is an absurd losing one that the US would be totally nuts to sign up for.
Anyway, we and China can be perfectly good peaceful competitors.
Sure, but this is an interest independent of the government holding Intel stock.
The US government always ought to have the interest of US companies in mind, their job is to work in the interest of the voters and a lot of us work for US companies.
They can buy enough stock to shift the price, then use that as a lever to control their own investments prices (and thence profits). Like they've done with tariffs.
That sounds more like an abuse of government powers for individual gain than any legitimate government interest. If that was the plan it would make just as much sense to short a company and then announce a plan to put them under greater regulatory scrutiny.
Well, I wouldn’t be able to prove it if challenged. And anyway, it seems better overall to not start building the case that that’s just something we expect politicians to do.
A shocking surprise needs to be a surprise for it to work. Call it strategic naivety if you want.
Donald Trump's erratic tariff policies are surprising.
Donald announces tariffs and the markets react. He postpones tariffs and the markets react again. Only Donald and his friends know what he will announce next.
> Donald Trump's erratic tariff policies are surprising.
This feels like a misreading of what I wrote. The discovery that he is using tariffs to make a personal profit should be surprising.
> Donald announces tariffs and the markets react. He postpones tariffs and the markets react again. Only Donald and his friends know what he will announce next.
That wouldn’t surprise me at all, I just don’t think a hypothesis about how he could abuse his power will be very compelling to anybody who doesn’t already think he’s prone to corruption. If anything, I think it starts inoculating people to the idea.
Shouldn't be, yes. Isn't? Have you seen the rhetoric around tariffs? A lot of people thought they wanted the government run like a business, so welcome to the for-profit government society.
Well, the question I answered was "Why would anything that isn't Intel implode", and an AI winter and another dotcom boom-and-bust would do that to everyone not named Intel.
Well, the AI bubble will eventually pop, since none of the major AI chatbots are remotely profitable, even on OpenAI's eye-watering $200/month plan, which very few have been willing to pay for, and even on that OpenAI is still losing money. And when it pops, so will Nvidia's stock; it's only a matter of time.
The AI hype train was built on the premise that AI would progress linearly and eventually end up replacing a lot of well-paid white-collar work, but it has failed to deliver on that promise so far, and progress has flatlined or sometimes even gone backwards (see GPT-5 vs 4o).
FAANG companies can only absorb these losses for so long before shareholders pull out.
The AI bubble pop is probably not something NVIDIA is super looking forward to, but of anybody near the bubble they are the least likely to really get hurt by it.
They don’t make AI chips really, they make the best high-throughput, high-latency chips. When the AI bubble pops, there’ll be a next thing (unless we’re really screwed). They’ve got as good chance of owning that next thing as anybody else does. Even better odds if there are a bunch of unemployed CUDA programmers to work on it.
There will be a dramatic reduction in “demand” and Nvidia will be stuck with a massive “surplus”
There will undoubtedly still be a market for Nvidia chips but it won’t be enough to keep things going as they are.
A new market opening up with the same demand as AI just at the point that AI pops would be a miracle. Something like being an unsecured bond holder in 2010.
When you follow the progress in the last 12 months, it really isn't. Big AI companies spent "hella' stacks" of cash, but delivered next to no progress.
Progress has flatlined. The "rocket to the moon" phase has already passed us by now.
The white collar worker doesn't need to be replaced for the bots to be profitable. They just need to become dependent on the bots to increase their productivity to the point where they feel they cannot do their job without the chatbot's help. Then the white collar worker will be happy to fork over cash. We may already be there.
Also never forget that in technology moreso than any other industry showing a loss while actually secretly making a profit is a high art form. There is a lot of land grabbing happening right now, but even so it would be a bit silly to take the profit/loss public figures at face value.
Numbers prove we aren't. Sales figures show very few customers are willing to pay $200 per month for the top AI chatbots, and even at $200/month, OpenAI is still taking a loss on that plan, so they're losing money even on top-dollar customers.
I think you're unaware of just how unprofitable the big AI products are. This can only go on for so long. We're not in the ZIRP era anymore, where SV VC-funded unicorns could be unprofitable indefinitely and endlessly burn cash on the idea that once they eventually beat all competitors in the race to the bottom and become monopolies, they can finally turn a profit by squeezing users with higher real-world prices. That ship has sailed.
I don't think you can confidently say how it will pan out. Maybe OpenAI is only unprofitable at the 200/month tier because those users are using 20x more compute than the 20/month users. OpenAI claims that they would be profitable if they weren't spending on R&D [1], so they clearly can't be hemorrhaging money that badly on the service side if you take that statement as truthful.
"OpenAI claims that they would be profitable if they weren't spending on R&D "
Ermmm dude they are competing with Google. They have to keep reinvesting otherwise Google captures the users OAI currently has.
Free cash flows matter, not accounting earnings. On an FCFF basis they are largely in the red. Which means they have to keep raising money, and at some point somebody will turn around and ask the difficult questions. This cannot go on forever.
And before someone mentions Amazon... Amazon raised enough money to sustain their reinvestment before they eventually got to the place where their EBIT(1-t) was greater than reinvestment.
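(For readers who don't track the finance jargon: FCFF above is the standard free cash flow to the firm identity, and the "EBIT(1-t) greater than reinvestment" comparison is just this, written out:

    \mathrm{FCFF} \;=\; \mathrm{EBIT}(1-t) \;-\; \underbrace{\big[(\mathrm{CapEx} - \mathrm{D\&A}) + \Delta \mathrm{WC}\big]}_{\text{reinvestment}}

so FCFF turns positive exactly when after-tax operating income exceeds reinvestment.)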
> They just need to become dependent on the bots to increase their productivity to the point where they feel they cannot do their job without the chatbot's help
Correct, but said technology needs to be self sustaining commercially. The cost the white collar worker pays needs to be enough to cover the cost of running the AI + profit
It seems like we are a long way off that yet but maybe we expect an AI to solve that problem ala Kurzweil
Why are this and the first reply being downvoted? Perfectly legitimate thoughts.
Anyway, I'd just point out that users don't even need to depend on the bots for increase productivity, they just need to BELIEVE it increases their productivity. Exhibit A being the recent study which found that experienced programmers were actually less productive when they used an LLM, even though they self-reported productivity gains.
This may not be the first time the tech industry has tricked us into thinking it makes us more productive, when in reality it's just figuring out ways to consume more of our attention. In Deep Work, Cal Newport made the argument that interruptive "network tools" in general decrease focus and therefore productivity, while making you think that you're doing something valuable by staying constantly connected. There was a study on this one too. They looked at consultants who felt that replying as quickly as possible to their clients, even outside of work hours, was important to their job performance. But then when they took the interruptive technologies away, spent more time focusing on their real jobs, and replied to the clients less often, they started producing better work and client feedback scores actually went up.
Now personally I haven't stopped using an LLM when I code but I'm certainly thinking twice about how I use it these days. I actually have cut out most interruptive technology when I work, i.e. email notifications disabled, not keeping Slack open, phone on silent in a drawer, etc. and it has improved my focus and probably my work quality.
He didn’t say anything violent. Have you watched the monologue?
Even if he did (which he didn’t), I don’t see Fox shutting down anything when one of their presenters recently stated, on air, that we should euthanize our homeless population.
To be clear (not that I agree with this situation):
Fox News (where that presenter works) is a cable network, beholden to the cable providers but not a broadcaster. The FCC has relatively little leverage to regulate it, because it does not rely on broadcast licenses.
ABC is a broadcast network. It relies on a network of affiliates (largely owned by a few big companies) who selectively broadcast its programming both over the airwaves and to cable providers. Those affiliates have individual licenses for their radio broadcasting bandwidth which the FCC does have leverage over (and whose content the FCC has a long history of regulating, but not usually directly over politics, e.g. public interest requirements, profanity, and obscenity laws).
Let’s not pretend that the Trump admin would’ve done anything about it even if they did have leverage. They actively encourage and participate in violent rhetoric when it’s directed towards their perceived enemies. Which includes the homeless.
Of course I watched it, many times. I didn't say he said anything directly violent, but he spread hateful disinformation about someone's death, entirely against the FBI's findings and common sense, during a time of the highest temperatures in a while. Just to try to win the attention of people that'd rather not look in the mirror.
This is exactly the kind of disingenuous, dehumanizing behavior that radicalizes people like Tyler. And saying that right now would be like if Reagan got in to a spat about something personal during the cold war.
Firstly, being human about the death, then being transparent about the investigation are the most important things they could be doing, and they are doing that.
Idk how the antifa terror thing is going to go, but that really sounds like a loong time coming. Best by far would now be for the left to take some responsibility, not sink deeper into their "good, x right-winger next" kind of hate spiraling.
That is literally exactly what he did say. Absolutely disgraceful. Glad he got fired.
Quoting:
> We hit some new lows over the weekend with the MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them, and doing everything they can to score political points from it
The fact you’re taking this opportunity to state this clip was hard for you to watch says a lot about your ability to consider another’s perspective in this conversation. I’m out.
> We hit some new lows over the weekend with the MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them, and doing everything they can to score political points from it
Having seen the murder pretty much live. A father with "prove me wrong" written on the tent he was sitting in, taking non-prescreened questions and debating with everyone that dared to come up to the mic. Shot with kids next to him. The most direct attack on debate itself we've seen during our lifetimes; it wouldn't matter if it was Sanders or any other left-wing, centrist, or right-wing figure doing very reasonable debate there. And after the murder so many heartless people came out to MOCK the death, and celebrate it.
Kimmel is lying against not just common sense but what authorities and people around Tyler have said. Absolutely nothing points at the murderer being MAGA, but pretty much every detail points towards him having been radicalized by the left. Unless you're just going to ignore people that knew him saying he was a leftie, Bella Ciao carved into the bullets, and the very obvious fact of him having shot a right-winger who was managing to change people's minds, etc.
And after that, you expect me to take Kimmel's comment as level-headed? You are hate trolling. You were successful; I am actually angry. Regardless, I did watch everything and gave a fair response to it. The only people who would not be angered by Kimmel's comments, knowing all this, have no heart.
My comment wasn't removed, and don't try to put words in my mouth. I said he fanned the flames of violence; raising temperatures with easily disproven misinformation during an extraordinary time just a week from the murder.
I've seen the clip; now you go see what the authorities have figured out about the killer. Absolutely nothing points at the murderer being MAGA, but pretty much every detail points towards him having been radicalized by the left. Unless you're just going to ignore people that knew him saying he was a leftie, Bella Ciao carved into the bullets, and the very obvious fact of him having shot a right-winger who was managing to change people's minds, etc.
You might feel smug and superior posting this, but it's quite ironic to post on one specific comment in an otherwise guidelines-breaking, off-topic, political circlejerk subthread like the one we're responding to.
This subthread is indeed reddit-esque: Started by a pithy, barely-constructive comment; and followed by pithy witticisms that add nothing to the conversation about Intel and Nvidia, but instead echo popular sentiments about the current administration without saying anything substantive. Meanwhile, the one contrarian opinion was instantly flagged and hidden, despite being level-headed and non-combative.
Trump doesn't seem to hear a lot of questions - and has been reported to ramble on about all sorts of nonsense during state visits and high-level meetings.
Almost like a dementia patient.
...but that is just my opinion - even so... not clarifying an asked question is not, well, uh, a sign of overall "great leadership"...
Intel had an opportunity to differentiate themselves by offering more VRAM than Nvidia is willing to put in their consumer cards. It seemed like that was where Battlemage was going.
But now, are they really going to undermine this partnership for that? Their GPUs probably aren't going to become a cash cow anytime soon, but this thing probably will. The mindset among American business leaders of the past two decades has been to prioritize short-term profits above all else.
It may be that Nvidia doesn’t really see Intel as a competitor. Intel serve a part of the GPU market that Nvidia has no interest in. This reminds me a bit of Microsoft’s investment into Apple. Microsoft avoided the scorn of regulators by keeping Apple around as a competitor and if they succeed, great, they make money off of the deal.
I remember when I was studying for an MBA.. a professor was talking about the intangible value of a brand .. and finance.. and how they would reflect on each other ..
At some point we were decomposing the parts of a balance sheet and they asked if one could sell the goodwill to invest in something else .. and the answer was of course .. no… well.. America has proven us wrong .. the way you sell the goodwill is basically enshittification.. you quickly burn all your brand reputation by lowering your costs with shittier products .. your goodwill goes to 0 but your income increases so the stock goes up .. the CEO gets a fat bonus for it .. even tho the company itself is destroyed .. then the CEO quickly abandons ship and does the same at their next company .. rinse and repeat… infinite money!
We always called this “monetizing the brand” and it’s been annoying me since at least when Sperry went private equity and the shoes stopped being multi-year daily drivers.
I don’t follow how it’s a death knell to Intel's AI chips. Nvidia bought shares, not a board seat. Maybe that’s the plan, but if you take the example of Microsoft buying Apple shares, that only gave Apple a lifeline to build better. I do understand Nvidia wants the whole GPU market to themselves, but how will they do it?
I think the assumption there is that the strategic partnership that is part of the deal would in effect preclude Intel from aggressively competing with NVIDIA in that market, perhaps with the belief that the US government's financial stake in Intel would also lead to reduced anti-trust scrutiny of such an agreement not to compete.
They literally bought board seats - not today, but shares entitle you to vote on board members at the next shareholder meeting. And $5bn of shares buys you a lot of votes.
$5bn may not buy a huge amount of voting power, but if there are close votes on important things then it could be enough to affect the company. Keeping one's enemies closer, regardless of voting, can also help overall.
The likelihood intel AI was going to catch up with efforts like AWS Trainium, let alone Nvidia was already vanishingly small. This gives intel a chance at maintaining leading edge fab technologies.
I feel bad for gamers - I’ve been considering buying a B580 - but honestly the consumer welfare of that market is a complete sidenote.
I don’t agree. OneAPI gets a lot of things right that ROCM doesn’t, simply because ROCM is a 1:1 rip of what nvidia provides (warts and historical baggage included) whereas OneAPI was thoughtfully designed and did away with all of that. Intel has a strong history in networking, much stronger than Xilinx/AMD, and really was the best hope we had for an open standard to replace nvidia’s hellscape.
> This gives intel a chance at maintaining leading edge fab technologies.
I don't think so:
> The chip giant hasn’t disclosed whether it will use Intel Foundry to produce any of these products yet.
It seems pretty likely this is an x86 licensing strategy for nvidia. I doubt they're going to be manufacturing anything on intel fabs. I even wonder if this is a play to get an in with Trump by "supporting" his nationalizing intel strategy.
Nvidia doesn’t need x86; they’re moving forward on aarch64 and won’t look back. For example, one of the headlines from CUDA 13 is that SBSA can be targeted from all toolkits rather than as a separate download, which is important for making it easy to target Grace. They have C2C silicon on Grace for native host-side NVLink. They’re not looking back.
They're clearly looking back though, investing in Intel and announcing quite substantial partnerships. Maybe they're not looking back for technical reasons, but they are looking back.
I think literally just the cash is a big deal at this point. Additionally, this deal probably increases the chances that Nvidia at least uses some Intel Foundry technology (like packaging) and maybe, very far down the road, fabrication.
> The likelihood intel AI was going to catch up with efforts like AWS Trainium, let alone Nvidia
...and yet Nvidia is not gambling with the odds. Intel could have challenged Nvidia on performance-per-dollar or per watt, even if they failed to match performance in absolute terms (see AMD's Zen 1 vs Intel)
That was quite a long time ago! Intel going down the chutes now isn’t an effective punishment for how it behaved under Andy Grove and won’t deter others from Grove-like behaviour. Instead it’ll just mean even less restraint on any of the big players with market power now, like nVidia, AMD and TSMC.
This isn't about GPU competition, it's about fab competition (which is in far more dire of a situation).
Intel can no longer fund new process nodes by itself, and no customers want to take the business risk to build their product on a (very difficult) new node when tsmc exists. They're in a chicken and egg situation. (see also https://stratechery.com/2025/u-s-intel/ )
Consumer gpus are totally different products from the high end gpus now. Intel has failed on the gpu market and has effectively zero market share, so it is not actually clear there is an antitrust issue in that market. It would be nice if there was more competition but there are other players like AMD and a long tail of smaller ones
>Consumer gpus are totally different products from the high end gpus now. Intel has failed on the gpu market and has effectively zero market share, so it is not actually clear there is an antitrust issue in that market. It would be nice if there was more competition but there are other players like AMD and a long tail of smaller ones
I'm sorry that's just not correct. Intel is literally just getting started in the GPU market, and their last several releases have been nearly exactly what people are asking for. Saying "they've lost" when the newest cards have been on the market for less than a month is ridiculous.
If they are even mediocre at marketing, the Arc Pro B50 has a chance to be an absolute game changer for devs who don't have a large budget:
The latest Arc GPUs were doing well, and were absolutely an option for entry/mid-level gamers. I think lack of maturity was one of the main things keeping sales down.
Intel has been making GPUs for over 25 years. Claiming they are just getting started is absurd.
To that point, they've been "just getting started" in practically every chip market other than x86/x64 CPUs for over 20 years now, and have failed miserably every time.
If you think Nvidia is doing this because they're afraid of losing market share, you're way off base.
Sure, but claiming they have literally just started is completely inaccurate.
They've been making discrete GPUs on and off since the 80s, and this is at least their 3rd major attempt at it as a company, depending on how you define "major".
They haven't even just started on this iteration, as the Arc line has been out since 2022.
The main thing I learned from this submission is how much people hate Nvidia.
> The main thing I learned from this submission is how much people hate Nvidia.
I think there's a lot of frustration with Nvidia as of late. Their monopoly was mostly won on the merits of their technology but now that they are a monopoly they have shifted focus from building the best technology to building the most lucrative technology.
They've demonstrated that they're no longer interested in producing the best gaming GPUs because those might cannibalize their server technology. Instead they seem to focus on crypto and AI while shipping overpriced, kneecapped cards.
People are upset because they fear this deal will somehow influence Intel's GPU ambitions. Unfortunately I'm not sure these folks want to buy Intel GPUs, they just want Nvidia to be scared into competing again so they can buy a good Nvidia card.
People just need to draw a line in the sand and stop supporting Nvidia.
I love GPU differentiation, but this is one of those areas where Nvidia is justified in shipping less VRAM: spending the budget on a wider memory bus instead of more capacity lets you push higher bandwidth out of the same memory!
For instance, both the B50 and the RTX 2060 use GDDR6 memory. But the 2060 has a 192-bit memory bus, and enjoys ~336 GB/s bandwidth because of it.
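As a rough sanity check on that claim, peak bandwidth is just bus width times per-pin data rate. A minimal sketch in Python (the B50 line assumes a 128-bit bus and 14 Gbps GDDR6, which are my assumptions rather than numbers from this thread):

    # Peak theoretical bandwidth: (bus width in bits / 8) * per-pin data rate in Gbps = GB/s
    def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
        return bus_width_bits / 8 * data_rate_gbps

    print(bandwidth_gb_s(192, 14.0))  # RTX 2060: 192-bit @ 14 Gbps -> ~336 GB/s, matching the figure above
    print(bandwidth_gb_s(128, 14.0))  # Arc Pro B50 (assumed 128-bit @ 14 Gbps) -> ~224 GB/s

So the gap in that comparison comes entirely from bus width, not from the memory type.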
I don't know what anybody would do with such a weak card.
My RTX 5090 is about 10x faster (measured by FP32 TFLOPS) and I still don't find it to be fast enough. I can't imagine using something so slow for AI/ML. Only 2.2 tokens/sec on an 8B parameter Llama model? That's slower than someone typing.
I get that it's a budget card, but budget cards are supposed to at least win on pure price/performance ratio, even with a lower baseline performance. The 5090 is 10x faster but only 6-8x the price, depending on where in the $2,000-3,000 range you can find one.
> My RTX 5090 is about 10x faster (measured by FP32 TFLOPS) and I still don't find it to be fast enough. I can't imagine using something so slow for AI/ML. Only 2.2 tokens/sec on an 8B parameter Llama model? That's slower than someone typing.
It's also orders of magnitude slower than what I normally see cited by people using 5090s; heck, it's even much slower than I see on my own 3080 Ti laptop card for 8B models, though I usually won't use more than an 8bpw quant for that size model.
I feel as though you are measuring tokens/s wrong, or have a serious bottleneck somewhere. On my i5-10210u (no dedicated graphics, at standard clock speeds), I get ~6 tokens/s on phi4-mini, a 4b model. That means my laptop CPU with a power draw of 15 watts, that was released 6 years ago, is performing better than a 5090.
> The 5090 is 10x faster but only 6-8x the price
I don't buy into this argument. A B580 can be bought at MSRP for $250. An RTX 5090 from my local Microcenter is around $3,250. That puts it at around 1/13th the price.
Power costs can also be a significant factor if you choose to self-host, and I wouldn't want to risk system integrity for 3x the power draw, 13x the price, a melting connector, and Nvidia's terrible driver support.
EDIT: You can get an RTX 5090 for around $2,500. I doubt it will ever reach MSRP though.
I've been using Mistral 7B, and I can get 45 tokens/sec, which is PLENTY fast, but to save VRAM so I can game while doing inference (I run an IRC bot that allows people to talk to Mistral), I quantize to 8 bits, which then brings my inference speed down to ~8 tokens/sec.
For gaming, I absolutely love this card. I can play Cyberpunk 2077 with all the graphics settings set to the maximum and get 120+ fps. Though when playing a much more graphically intense game like that, I certainly need to kill the bot to free up the VRAM. But I can play something simpler like League of Legends and have inference happening while I play with zero impact on game performance.
I also have 128 GB of system RAM. I've thought about loading the model in both 8-bit and 16-bit into system RAM and just swap which one is in VRAM based on if I'm playing a game so that if I'm not playing something, the bot runs significantly faster.
Hold on, you're only getting 45 tokens/sec with Mistral 7B on a 5090 of all things? That gets ~240 tokens/sec with Llama 7B quantized to 4 bits on llama.cpp [1] and those models should be pretty similar architecturally.
I don't know exactly how the scaling works here but considering how LLM inference is memory bandwidth limited you should go beyond 100 tokens/sec with the same model and a 8 bit quantization.
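To make the bandwidth argument concrete, here's a minimal sketch of the usual upper-bound estimate: every generated token has to stream the full weight set from VRAM, so tokens/sec can't exceed bandwidth divided by model size. The ~1.8 TB/s bandwidth figure for the 5090 is my assumption for illustration, and this ignores KV-cache traffic and kernel overhead, so real numbers land below the ceiling:

    # Upper bound on single-stream decode speed:
    #   tokens/sec <= memory bandwidth / bytes of weights read per token
    def max_tokens_per_sec(bandwidth_gb_s: float, params_billions: float, bytes_per_weight: float) -> float:
        model_gb = params_billions * bytes_per_weight  # weights only; ignores KV cache
        return bandwidth_gb_s / model_gb

    for bytes_per_weight, label in [(2.0, "fp16"), (1.0, "8-bit"), (0.5, "4-bit")]:
        print(label, round(max_tokens_per_sec(1800, 7, bytes_per_weight)))  # ~129 / ~257 / ~514

Even the fp16 ceiling is far above 45 tokens/sec, which is why the numbers upthread look like a software or configuration problem rather than a hardware limit.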
My understanding is that quantizing lowers memory usage but increases compute usage because it still needs to convert the weights to fp16 on the fly at inference time.
Clearly I'm doing something wrong if it's a net loss in performance for me. I might have to look more into this.
Yes it increases compute usage but your 5090 has a hell of a lot of compute and the decompression algorithms are pretty simple. Memory is the bottleneck here and unless you have a strange GPU which has lots of fast memory but very weak compute a quantized model should always run faster.
If you're using llama.cpp run the benchmark in the link I posted earlier and see what you get; I think there's something like it for vllm as well.
Other than the market segmentation over RAM amounts, I don't see very much difference. There's some but there's been some for a long time. Isn't AMD re-unifying their architectures?
> There's some but there's been some for a long time. Isn't AMD re-unifying their architectures?
Yes.
> Other than the market segmentation over RAM amounts, I don't see very much difference.
The difference between CDNA and RDNA is pretty much FP64 throughput and SR-IOV support. Prior to RDNA, AMD GPUs were jacks of all trades with a compute bias, which made them bad for gaming unless the game was specifically written around async compute. For context, the Vega 64 has more FP64 compute than the 4080.
I think if AMD was able to get a solid market share of datacenter GPUs, they wouldn't have unified. This feels like CDNA team couldn't justify its existence.
The alternative is currently looking like cutting up of intel into piecemeal to make a quick buck just to stay afloat. The GPU division is not profitable and may be destroyed if overall financials don't improve.
Right now, in terms of US national interests, the biggest concern is that Intel continues to exist. Intel has been making crappy GPUs for 25 years. They weren’t going to start making great GPUs now.
Besides, who would actually use them if they don’t support CUDA?
Everyone designs better GPUs than Intel - Apple’s ARM GPUs had been outpacing Intel’s for a decade even before the M series.
Why does it matter if Intel exists if they can't compete? AMD exists. The only point of hoping they remain is to create an environment of competition as that drives development and progress.
Though fair and free markets are not at all what the current regime in the US believes in; instead it will be consolidation, leading to waste and little innovation or progress.
So you don’t see the difference in the threat level of China bombing and invading Taiwan - which they already claim they own - and China attacking the US directly?
So it's just an imagined subtext that China, which has been rabble-rousing about taking over Taiwan, is more likely to attack a tiny island nation right next to it than to attack the US?
As Taiwan should, it's their prerogative. People often think when global policy changes abruptly everything stops; in reality, the contrary is true: supply chains and demands shift.
For what it's worth, it's TSMC's expertise in semiconductor manufacturing that has been loaned to the US, not bought, settled, and forgotten.
Wait, what? Intel's GPU+AI efforts? People had to come together to fund the abandoned Intel SW development team. Intel GPUs are great at what they do, but they are no Nvidia. I don't even think that was on the roadmap. Also, you don't know what Nvidia wants. Maybe they want to flood the low end to destroy AMD, benefiting consumers. We just don't know.
> For personal computing, Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.
I don't think this is Intel trying to save itself, it's nVidia. Intel GPUs have been in 3rd place for a long time, but their integrated graphics are widely available and come in 2nd place because nVidia can't compete in the x86 space. Intel graphics have been closing the gap with AMD and are now within what? A factor of 2 or less (1.5?)
IMHO we will soon see more small/quiet PCs without a slot for a graphics card, relying on integrated graphics. nVidia has no place in that future. But now, by dropping $5B on Intel they can get into some of these SoCs and not become irrelevant.
The nice thing for Intel is that they might be able to claim graphics superiority in SoC land since they are currently lagging in CPU.
Way back in the mid-late 2000s Intel CPUs could be used with third party chipsets not manufactured by Intel. This had been going on forever but the space was particularly wild with Nvidia being the most popular chipset manufacturer for AMD and also making in-roads for Intel CPUs. It was an important enough market than when ALi introduced AMD chipsets that were better than Nvidia's they promptly bought the company and spun down operations.
This was all for naught as AMD purchased ATi, shutting out all other chipsets and Intel did the same. Things actually looked pretty grim for Nvidia at this point in time. AMD was making moves that suggested APUs were the future and Intel started releasing platforms with very little PCIe connectivity, prompting Nvidia to build things like the Ion platform that could operate over an anemic pcie 1x link. There were really were the beginnings of strategic moves to lock Nvidia out of their own market.
Fortunately, Nvidia won a lawsuit against Intel that required them to have pcie 16x connectivity on their main platforms for 10 years or so and AMD put out non-competitive offerings in the CPU space such that the APU take off never happened. If Intel had actually developed their integrated GPUs or won that lawsuit or if AMD had actually executed Nvidia might well be an also-ran right around now.
To their credit, Nvidia really took advantage of their competitors inability to press their huge strategic advantage during that time. I think we're in a different landscape at the moment. Neither AMD nor Intel can afford boot Nvidia since consumers would likely abandon them for whoever could still slot in an Nvidia card. High performance graphics is the domain of add-in boards now and will be for awhile. Process node shrinks aren't as easy and cooling solutions are getting crazy.
But Nvidia has been shut out of the new handheld market and haven't been a good total package for consoles as SoC both rule the day in those spaces so I'm not super surprised at the desire for this pairing. But I did think nvidia had given up these ambitions was planning to try to build an adjacent ARM based platform as a potential escape hatch.
> It was an important enough market than when ALi introduced AMD chipsets that were better than Nvidia's they promptly bought the company and spun down operations.
This feels like a 'brand new sentence' to me because I've never met an ALi chipset that I liked. Every one I ever used had some shitty quirk that made VIA or SiS somehow more palatable [0] [1].
> Intel started releasing platforms with very little PCIe connectivity,
This is also a semi-weird statement to me, in that it was nothing new; Intel already had an established history of chipsets like the i810, 845GV, 865GV, etc which all lacked AGP. [2]
[0] - Aladdin V with its AGP instabilities; MAGiK 1 with its poor handling of more than 2 or 3 'rows' of DDR (i.e. two double-sided sticks of DDR turned it into a shitshow no matter what you did to timings; 3 was usually 'ok-ish' and 2 was stable).
[1] - SIS 730 and 735 were great chipsets for the money and TBH the closest to the AMD760 for stability.
[2] - If I had a dollar for every time I got to break the news to someone that there was no real way to put a Geforce or 'Radon' [3] in their eMachine, I could have had a then-decent down payment for a car.
[3] - Although, in an odd sort of foreshadowing, most people who called it a 'Radon', would specifically call it an AMD Radon... and now here we are. Oddly prescient.
I'm thinking the era of "great ALI chipsets" was more after they became ULi in the Athlon 64 era.
I had a ULi M1695 board (ASRock 939SLI32-eSATA2) and it was unusual for the era in that it was a $90 motherboard with two full x16 slots. Even most of the nForce boards at the time had it set up as x8/x8. For like 10 minutes you could run SLI with it until nVidia deliberately crippled the GeForce drivers to not permit it, but I was using it with a pretty unambitious (but fanless-- remember fanless GPUs?) 7600GS.
They also did another chipset pairing that offered a PCI-Ex16 slot and a fairly compatible AGP-ish slot for people who had bought an expensive (which then meant $300 for a 256MB card) graphics card and wanted to carry it over. There were a few other boards using other chipsets (maybe VIA) that tried to glue together something like that, but the support was much more hit-or-miss.
OTOH, I did have an Aladdin IV ("TXpro") board back in the day, and it was nice because it supported 83MHz bus speeds when a "better" Intel TX board wouldn't. A K6-233 overclocked to 250 (3x83) was detectably faster than at 262 (3.5x75)
ALi was indeed pretty much on the avoid list for me for most of their history. It was only when they came out with the ULi M1695, made famous by the ASRock 939Dual-SATA2, that they became a contender out of nowhere. One of the coolest boards I ever owned, and it was rock solid for me even with all of the weird configs I ran on it. I kind of wish I hadn't sold it, even today!
I remember a lot of disappointed people on forums who couldn't upgrade their cheap PCs as well, but there were still motherboards available with AGP to slot Intel's best products into. Intel couldn't just remove it from the landscape altogether (assuming they wanted to) because they weren't the only company making Intel-supporting chipsets. IIRC Intel/AMD/Nvidia were not interested in making AGP+PCIe chipsets at all, but VIA/ALi and maybe SiS made them instead because it was still a free-for-all space. Once that went away, Nvidia couldn't control their own destiny.
Nvidia does build SoCs already - the AGX line and other offerings. I'm curious why they want Intel despite having the technical capability to build SoCs.
I realize the AGX is more of a low-power solution, and it's possible that Nvidia is still technically limited when building SoCs, but this is just speculation.
Does anybody know the actual ground-truth reasoning for why Nvidia is buying into Intel despite the fact that Nvidia can make their own SoCs?
Sometimes HN users appear to have absolutely zero sense of scale. Lifetime sales of those are equivalent to something like hours or days' worth of Switch 2 sales.
I think it is bad news for the GPU market (AMD has had a beachhead with their integrated solution here as they've lost out elsewhere) but good for x86 which I've worried would be greatly diminished as Intel became less competitive.
That was targeted at supporting more tightly integrated and performant MacBooks... it flopped because Apple came up with the M1, not because it was bad per se.
Remember when Microsoft invested in Apple when Apple was down in the dumps? This is giving similar vibes. That deal was arguably what saved Apple near its nadir. I’m not a fan of Intel’s past monopolistic practices, but for the sake of sustaining competition in the CPU/GPU market, I hope this deal works out for them even half as well as the MS deal did for Apple.
>Remember when Microsoft invested in Apple when Apple was down in the dumps? This is giving similar vibes.
Doesn't feel the same because the 1997 investment was arranged by Apple co-founder Steve Jobs. He had a long personal relationship with Bill Gates so could just call him to drop the outstanding lawsuits and get a commitment for future Office versions on the Mac. Basically, Steve Jobs at relatively young age of 42 was back at Apple in "founder mode" and made bold moves that the prior CEO Gil Amelio couldn't do.
Intel doesn't have the same type of leadership. Their new CEO is a career finance/investor instead of a "new products, new innovation" type of leader. This $5 billion investment feels more like the result of back-channel discussions with the US government where they "politely" ask NVIDIA to help out Intel in exchange for fewer restrictions on selling chips to China.
> This $5 billion investment feels more like the result of back-channel discussions with the US government where they "politely" ask NVIDIA to help out Intel in exchange for fewer restrictions on selling chips to China.
This style of classical fascism or economic fascism, or whatever the term is to differentiate it from the modern, unrelated usage of fascism, being used in the US is a bit unnerving, and it's crazy that it usually comes from the Republican party, who claim to espouse free markets.
It also happened under G. W. Bush with banks and auto manufacturers, but the worst offense was under Nixon with his nationalization of passenger rail.
At least with the bank and car manufacturer bailouts the government eventually sold off their stocks, and with the Intel investment the government has non-voting shares, but the government completely controls the National Railroad Passenger Corporation, (the NRPC aka Amtrak) with the board members being appointed by the president of the United States.
We lost 20 independent railroads overnight, and created a conglomerate that can barely function.
That's how post-WW2 France was actually rebuilt. You could also see big hints of that in the US WW2 economic effort, which couldn't have been done without the Government taking a direct hold of things and instituting central-ish planning.
You're speaking of what is referred to as neo-corporatism [0] and it's a tripartite, democratic process, not the fascist sort where everything is within and for the benefit of the state [1].
There was not that much democracy in the French post-WW2 technocratic establishment, but I agree that they were not technically fascist (nor otherwise).
There's a big difference between the government allocating taxpayer dollars by passing a bill and a president using their influence to force dealings between corporate entities that benefit the ruling party.
The parent comment is speculation. But yes, speculatively, a legislative act of investment would be less authoritarian than the whims of an executive that puts tariffs on your product constantly unless you do what he says.
Is the method by which it’s communicated what gives you negative feelings? Because this is an approach to handling the labor dumping that’s been allowed in nearly every industry since the 1980s, and it’s been used numerous times in the US and abroad. They typically only offer temporary relief, while domestic industries should be adjusting and better trade deals get negotiated. The last I checked, that’s been happening to some degree… but it also probably needs to be supported by the ability for companies to borrow money, which the Fed (until recently) seemed hell bent on preventing, while we continued to watch the job market burn to the ground. So cash flush businesses investing in each other to keep competition alive seems like a positive here. Maybe that’s just me?
Most regulation is effectively coercion. The difference is regulation isn’t easily rolled back, whereas the current approach to modifying behavior is (as we’ve seen, numerous times in the last few months even). One is more tolerant of failure than the other.
There is an extreme where policy cannot be modified, and there is an extreme where the whims of one person, and the precedent of having the US government defined as the whims and whiplashes of one person, is immensely harmful to our national credibility. It fucks with investment, immigration and education.
Microsoft also invested $100M in Borland at the same time.
Investing in Apple and Borland was a counter-antitrust legal move, keeping the competitors alive, but on life support. This way they could say to the government, "yes, there is competition."
Google does the same these days by keeping Firefox alive.
I don't think that's an apt comparison, given that Microsoft and Apple were more direct competitors than Intel and Nvidia; the latter have a more symbiotic relationship. I think the rationale is closer to the competitor of my competitor is my friend -- they face two threats by AMD growing larger in the CPU market:
- a bigger R&D budget for their main competitor in the GPU market
- since Nvidia doesn't have their own CPUs, they risk becoming more dependent on their main competitor for total system performance.
Required in that Nvidia would like to sell them to you. But customers seem to be hesitant and prefer x86-based DGX and similar systems. At least from what I've heard and seen.
This is a big ask for a shrinking market - with the pressure that the Chinese government is putting on their domestic companies to not buy H20s, I'm not sure how big this is going to be going forward. $5 billion (plus whatever it costs to build these products) is a lot for a market that is probably going to be closed soon.
> Remember when Microsoft invested in Apple when Apple was down in the dumps?
Had Apple failed, Microsoft would probably have been found to have a clear monopolistic position. And Microsoft was already in hot water due to Internet Explorer, IIRC.
That Microsoft-Apple deal was part lifeline, part strategic insurance. Intel clearly needs a win, and Nvidia needs more control over its ecosystem without being chained to TSMC forever
The reason why Nvidia is buying now does not have to do anything with Arc or GPU competition. There are mainly two reasons.
1) This year, Intel, TSMC, and Samsung announced their latest factories' yields. Intel was the earliest, with 18A, while Samsung was the most recent. TSMC yielded above 60%, Intel below 60%, and Samsung around 50% (but Samsung's tech is basically a generation ahead and technically more precise), and Samsung could improve their yields the most due to the way they set up their processes, where 70% is the target. Until last year, Samsung was in second place, and with the idea that Intel caught up so fast and is taking Samsung's position at least for this year, Nvidia bought Intel's stock, since it's been getting cheaper since COVID.
2) It's just generally good to diversify into your competitors. Every company does this, especially when the price is cheap.
I am curious where you get your information about Samsung being more “precise”.
I was recently looking into 2nm myself, and based on the Wikipedia article on 2nm, TSMC 2nm is about 50% more dense than the Samsung and Intel equivalents. They aren’t remotely the same thing. Samsung 2nm and Intel 18A are about as dense as TSMC 3nm, which has been in production for years.
> I was recently looking into 2nm myself, and based on wikipedia article on 2nm, TSMC 2nm is about 50% more dense than the samsung and intel equivalent.
I did the math on TSMC N2 vs Intel 18A, and the former is 30% denser according to TSMC
> Nvidia will also have Intel build custom x86 data center CPUs for its AI products for hyperscale and enterprise customers.
Hell has frozen over at Intel. Actually listening to people that want to buy your stuff, whatever next? Presumably someone over there doesn't want the AI wave to turn into a repeat of their famous success with mobile.
In the event Intel ever does get US-based fabrication semi-competitive again (and the national security motivation for doing so is intense), nVidia will likely have to be a major customer, so this does make sense. I remain doubtful that Intel can pull it off, and it will have to come from someone else.
If you were a big enough customer you could get a SKU of your own, too. E.g. hyperscalers have Xeons which are not available to any other customers at any price.
But what they've completely resisted so far is any non trivial modification.
They turned down Acorn about the 286, which led to Acorn creating the ARM; they turned down various console makers; they turned down Apple on the iPhone; and so on. In all cases they thought the opportunities were beneath them.
Intel has always been too much about what they want to sell you, not what you need. That worked for them when the two aligned over backwards compat.
Clearly the threat of an Arm or RISC-V finding itself fused to a GPU running AI inference workloads has woken someone up, at last.
Intel’s test for new business ideas has always been: will it make $1B in the first year?
It leads to mistakes like you mention, where a new market segment or new entrant is not a sure thing. And then it leads to mistakes like Larrabee and Optane where they talk themselves into overconfidence (“obviously this is a great product, we wouldn’t be doing it if it wasn’t guaranteed to make $1B in the first year”).
It is very hard to grow a business with zero risk appetite. You can’t take risky high return bets, and you can’t acknowledge the real risk in “safe” bets.
If Intel had a server SKU with fully integrated, competitive performance GPU cores that work with CUDA + unified memory, they’d sell billions worth in a day to the CSPs alone.
Sounds like they will someday soon.
There will always be giant, faraway GPU supercomputer clusters to train models. But the future of inference (where the model fits) is local to the CPU.
Larrabee could have grown into something very cool if they had not dropped it, and had instead made it available on the open market, donated it to universities, and so on. Transputer vibes.
I think Larrabee was Intel experimenting to find other markets for their Atom cores, and if there was a market for it they needed to have the tenacity to cultivate it. Similar to how Nvidia took huge amounts of time establishing GPGPU, CUDA, then machine learning, through to reaping the rewards over the past few years.
2010-2011 was also the time that AMD was starting to moan a bit about DX11 and the higher-level APIs not being sufficient to get the most out of GPUs, which led to Mantle/Vulkan/DX12 a few years down the road. Intel did a bit regarding massively parallel software rendering, with the flexibility to run on anything x86 and implement features as you liked; there were also AMD's efforts around 'Fusion' (APU+GPU, after recently acquiring ATi) and HSA, which I seem to recall was about dispatching different types of computing to the best-suited processor(s) in the system. However, I got the impression a lot of development effort was more interested in progressing on what it already had instead of starting in a new direction, and game studios want to ship a finished and stable/predictable product, which is where support from Intel would have helped.
Console makers only get trivial modifications. ASRock sold a cryptocurrency miner, the BC-250, with the PS5 APU, and it works just like any of their other APUs, albeit with limited driver support.
The BC250 does not use a PS5 APU, it uses another APU which has the same CPU core. By that measure the Cell in the PS3 and the Xenon of the XBox 360 were the same, or any AMD Jaguar device is a PS4.
This relates to the Intel problem because they see the world the way you just described, and completely failed to grasp the importance of SoC development where you are suddenly free to consider the world without the preexisting buses and peripherals of the PC universe and to imagine something better. CPU cores are a means to an end, and represent an ever shrinking part of modern systems.
There's almost no chance it isn't using rejected PS5 APU dies. It has fused off two of the eight CPU cores, as well as 12 of the 36 GPU compute units, but otherwise has the exact same specifications. The one customization Sony did get, the use of GDDR6 RAM, is still present. It also exhibits the same very short-lived mix of Zen 2 with RDNA 2 and has the same die size and aspect ratio.
The problem is, console manufacturers know precisely how much of their product they anticipate selling, and it's usually a lot. The PlayStation 5 is at 80 million units so far.
And at that scale, the console manufacturers want to squeeze every vendor as hard as they can... and Intel didn't see the need to engage in a bidding war with AMD that would have given them a sizable revenue but very little profit margin compared to selling Xeon CPUs to hyperscalers where Intel has much more leverage to command higher prices and thus higher margins.
> they turned down Apple on the iPhone
Intel just was (and frankly, still is) unable to compete on the power envelope with ARM, that's why you never saw x86 take off on Android as well despite quite a few attempts at it.
Apple only chose to go with Intel for its MacBook line because PowerPC was practically dead and offered no way to extract more performance, and they dropped Intel as soon as their own CPUs were competitive. To get Intel CPUs to the same level of power efficiency that M-series CPUs have would require a full rework of the entire CPU infrastructure and external stack, which would require money that even Intel at its best frankly did not have. And getting x86 to be power-efficient enough for a phone? Just forget it.
> Clearly the threat of an Arm or RISC-V finding itself fused to a GPU running AI inference workloads has woken someone up, at last.
Actually, that is surprising to me as well. NVIDIA's Tegra should easily be powerful enough to run the OS for training or inference workloads. If I were to guess, NVIDIA wants to avoid getting caught too hard on the "selling AI shovels" train.
Apple did not want their x86 chips; they wanted their XScale stuff. Apple went to Intel to get chips, and the power envelope was appealing to Apple. Intel was the one to say no.
Right. But of course, Intel was busy spinning off their XScale business to Marvell. If they had seriously invested in it, they could have owned the coming mobile revolution.
They did push hard on their UMPC x86 SoCs (Poulsbo and derivatives) to Sony, Nokia, etc. These were never competitive on heat or battery life.
> And at that scale, the console manufacturers want to squeeze every vendor as hard as they can... and Intel didn't see the need to engage in a bidding war with AMD that would have given them a sizable revenue but very little profit margin compared to selling Xeon CPUs to hyperscalers where Intel has much more leverage to command higher prices and thus higher margins.
And so that gave AMD an opening, and with that opening they got to experiment with designs, tailor a product, get experience and industrial marketshare, and they were able to continue to offer more and better products. Intel didn't just miss a mediocre business opportunity, they missed out on becoming a trusted partner for multiple generations, and they handed market to AMD that AMD used to be a better market competitor.
> and they handed market to AMD that AMD used to be a better market competitor.
AMD isn't precisely a market competitor. The server and business compute market is still firmly Intel and there isn't much evidence of that changing unless Apple drops M series SoCs to the wide open market which Apple won't do. Intel could probably release a raging dumpster fire and still go strong, oh wait, that's what they've been doing the last few years.
AMD is only a competitor in the lower end of the market, a market Intel has zero issue handing to AMD outright - partially because a viable AMD keeps the antitrust enforcers from breathing down their neck, but more because it drags down per-unit profit margins to engage in consoles and the lower rungs and niches.
> The server and business compute market is still firmly Intel and there isn't much evidence of that changing
This is not true anymore, as it IS changing, and very rapidly. AMD has shot up to 27.3% of the server market share, which they haven't had since the Opteron days 20 years ago. Five years ago their server market share was very small single digits. They're half of desktops, too. https://www.pcguide.com/news/no-amd-and-intel-arent-50-50-in...
>To get Intel CPUs to the same level of power efficiency that M-series CPUs have would require a full rework of the entire CPU infrastructure and external stack, that would require money that even Intel at its best frankly did not have
Intel, at one of its lowest lows, still came up with Lunar Lake, which is not as efficient as Apple's M series, but is still quite impressive.
I bet if they had focused on mobile when they were at their peak, they could have come up with something similar to Apple's M series.
Estimates are at 1M Xeons a month [1], so there have been more units of PS5 and thus CPUs sold to a single customer in the same timeframe than units of Xeon CPUs over all customers.
NVDA sold 153 million Tegra units to Nintendo in 8 years, so roughly 1.5M units a month. That's a comparable scale.
Spot on about the AI/mobile parallel. Intel sat out the smartphone wave while pretending it didn’t matter, and now they’re scrambling not to miss the AI train
Which makes you wonder, aren’t they already about a decade late? Sure, LLMs weren’t hyped a decade ago, but surely AI and deep learning were massively hyped back then.
It was Intel culture at one time - when I started, everyone got a card to wear with their badge listing Intel's values; there were only six, and 'customer orientation' was one. It definitely influenced my personal development, but it was clearly not adopted equally across the company.
Intel is a strategically important company for the United States. This smells like a token investment to appease the US government. Not saying it’s bad, but very much looks like that.
Why would any of these three be interested in buying a fab? The only other large player with its own fabs is Samsung, and Samsung has the same problem that Intel has, namely a fab that is nowhere near close to TSMC.
I agree that Intel would be better served to spin off its fab division, a potential buyer could be the US government for military and national security relevant projects.
Someone could be interested. It could also be Global Foundries. High risk big reward bet which the government is willing to help mitigate some of the risk with funding.
Intel has enormous 14nm capacity and the node has been fully depreciated for years now, I wouldn't be surprised to see them keep it around long past its time in the zeitgeist. I'd be willing to bet we're about a generation away from a deluge of demand for embedded chips made on that node. Several high-end microcontrollers are made on 18nm processes already.
I'm still rooting for them to separate the fabs from the IP, I just wouldn't be surprised if some of the fabs stick around longer than folks would expect.
Not an expert in the area, but I think the highest of the high-end chips is a big market, but not the biggest market as revenue for fabs. It is just the most profitable part of the market.
Maybe this changed with the AI race but there are plenty of people buying older chips by the millions for all sorts of products.
The key to getting (financial) value out of fabs is the time after they are overtaken by the next node. The ability to keep the order book full after you have a better node is what pays off the fab. So it's all the other chips - the chips for cars, for low-power internet-connected devices, etc. - that make the fab profitable. That is where TSMC's ability to work with different customers enables them to extract value from a fab in a way that pure-play CPU makers struggle with.
Ah, that makes sense. I guess Intel is stuck with making x86 CPUs for datacenters even on their old-node factories so they need to retool them for newer nodes more often/earlier because they don't have a foundry business.
Oh, the integrated players (now pretty much just Intel and Samsung, but in the past people like AMD as well) have other things they make on these older fabs- modems, OOB managers, USB controllers, all the other chips that go on a motherboard, but these are lower margin than a pure-play fab can get selling to all of those customers who don't need the latest node.
This is one reason Intel and Samsung are both hesitant about going to the next node- Intel has put out official statements that they are only going to 14A if they can get Foundry up with a significant partner, and Samsung is hedging their bets and being cagey about their own 1.4nm node (at least in English, I haven't seen any direct demand for a major foundry customer from Samsung, just statements saying that they were going to be delaying and might not be building it at all).
Being fabless is a huge strategic advantage for chip designers. Intel's biggest problem has been that they're stuck on shitty fabs. Nvidia, AMD, and Qualcomm do not want to be in that position.
It did cross my mind that it could just be a kind of validation investment, to validate the Trump "investment" in Intel as a smart move (i.e., "see, Nvidia also thinks intel is a good investment), but the announcement did seem a bit contrived for that.
I am also thinking that it may just be more of a "everyone scratching each other's backs." Intel avoids anti-trust/monopoly investigations, Intel is saved for all the institutional and political stakeholders and Nvidia floats an artificial competitor to make them look less like a monopoly, Intel stays alive, etc.
I wonder what this means for Intel's Arc lineup. It would be a bit crazy to have privileged access to a competitor's roadmap just through owning a chunk of them. I also have to admit I really hope they don't cancel them. A triopoly is at least better than a duopoly (or realistically, a monopoly, as AMD's competitiveness in GPUs is pretty questionable).
It probably kills any prospect of Intel releasing a market disrupter card that many were calling for - a 64GB or 92GB card with even middling performance for under $1k.
It's pretty clear AMD and Nvidia are gatekeeping memory so they can iterate over time and protect their datacenter cards.
That's what I think of, along with favour from their new investment sibling, the US government. AMD doesn't want to be super competitive, they like their margins and being second choice in a hypetastic market. Even though Arc has very low adoption, it was making signs of doing scrappy things, like enabling two 24GB GPUs on one card from third party vendors, which got the hobby/upstart community pretty excited. Ultimately it's not a real market giving the people what they want via competition, it's all contrived by politics and the biggest players.
They will not, because they need to preserve at least weak competition, or antitrust regulators will impose very heavy penalties on Nvidia.
This is the reason why Intel, in all previous decades, kept a tiny stripe of the market for competitors (AMD, sure, but also the likes of Cyrix or SiS), but immediately hit the brakes when some competitor became too competitive - to show regulators that the market is still competitive and not just a monopoly.
The size of that stripe for outsiders is not the right parameter here; what matters more is that outsiders will not ship bright products in the most important niches.
So the idea is that Arc will not die fast, but it will constantly lag, staying only second or third.
How could one clip a GPU's wings? First, delay the top products, for example by installing slow RAM and using overly conservative temperature margins, so the chip runs at a lower frequency than it could.
Second, from what I hear, Arc drivers are still not ideal and some games don't run smoothly.
Third, cut all long-term initiatives, like WebGPU.
ps. For other examples, you may have seen the strange behavior of IBM and Commodore/Atari, when they avoided implementing some very obvious things. What happened is that they were visited by regulators, warned that they were approaching a formal threshold, and after that visit hit the brakes and limited their products, to avoid becoming the next AT&T.
Which is arguably kind of weird because where is it actually competing with NVIDIA? A hypothetical future, I guess?
But also, does this amount of ownership even give them the ability to kill anything on Intel's roadmap without broad shareholder consensus (not that that's even how roadmaps are handled anyway)?
That is arguable. Regardless of everything else, currently Intel stock is up about 50% from recent averages. If investors were so hurt, they really should be selling right now, and there seems to be reason to do so because Intel's troubles have not gone away with this Nvidia stake that does not touch Intel's rotten underbelly.
There's a lot of concern in the comments here about what this means for ARC. The size of this investment while large isn't enough to warrant jeopardizing ARC though. Intel has a responsibility to all shareholders, and diminishing ARC would be a bad move for overall shareholder value.
If Nvidia did try to exert any pressure to scrap ARC, that would be both a huge financial and geopolitical scandal. It's in the best interest of the US to not only support Intel's local manufacturing, but also its GPU tech.
Intel was well on its way to be a considerable threat to NVIDIA with their Arc line of GPUs, which are getting better and cheaper with each generation. Perhaps not in the enterprise and AI markets yet, but certainly on the consumer side.
This news muddies this approach, and I see it as a misstep for both Intel and for consumers. Intel is only helping NVIDIA, which puts them further away from unseating them than they were before.
Competition is always a net positive for consumers, while mergers are always a net negative. This news will only benefit shareholders of both companies, and Intel shareholders only in the short-term. In the long-term, it's making NVIDIA more powerful.
I'm not convinced. The latest Battlemage benchmarks I've seen put the B580 at the same performance as the RTX 4060 (which is a two-year-old entry-level card) but with 50% more power consumption (80W vs 125W average). It's good to have more than one open-source-supporting graphics vendor, but I don't think Nvidia is losing any sleep over Intel's GPU offerings.
Battlemage had the best perf/$, and most of the driver issues from Alchemist had been ironed out. Another generation or two of steady progress and Intel will have a big winner on their hands.
Intel's foundry costs are probably competitive with nvidia too - nvidia has too much opportunity cost if nothing else.
The B580 was released in December 2024, and the 4060 in May 2023. So not quite a two year difference.
While it doesn't quite compete at performance and power consumption, it does at price/performance and overall value. It is a $250 card, compared to the $300 of the 4060 at launch. You can still get it at that price, if there's stock, while the 4060 hovers around $400 now. It's also a 12GB card vs the 8GB of the 4060.
So, sure, this is not competitive at the high-end segment, but it's remarkable what they've accomplished in just a few years, compared to the decades that AMD and NVIDIA have on them. It's definitely not far fetched to assume that the gap would only continue to close.
Besides, Intel is not only competing at GPUs, but APUs, and CPUs. Their APU products are more performant and efficient than AMD's (e.g. 140V vs 890M).
This is very short-sighted. The cards are improving, which can't really be said about AMD, the only other potential threat to Nvidia. It's also well known that Nvidia purposefully handicaps their consumer cards to avoid cannibalizing their enterprise cards. That means that the consumer market at least is not as efficient/optimal as it could be, so a competitor actually trying to compete (unlike AMD, apparently) should be able to do that without even having to out-innovate Nvidia or anything like that. Just get close on compute performance, but offer more VRAM or cheaper multi-gpu setups.
Nah, nobody cares about that. Even in their heyday, SLI and CrossFire barely made sense technologically. That market is basically non-existent. There's more people now wanting to run multiple GPUs for inference than there ever were who were interested in SLI, and those people can mix and match GPUs as they like.
Nvidia's margins are over 80% for datacenter products. If Intel can produce chips with enough VRAM and performance on par with Nvidia from 2 years ago at 30% margins, they'd steal a lot of business, if they can figure out the CUDA side of things.
I'm sure Larrabee will be superb any year now. The Xeon Phi will rise again. For supporting evidence, see the success of Aurora. Weren't the loss-leading Arc GPUs cancelled as well? Maybe that was only one generation of them; it does look like some are on the market now.
I think this partnership will damage nvidia. It might damage intel, but given they're circling the drain already, it's hard to make matters worse.
It's probably bad for consumers in every dimension.
Or to take the opposite, if nvidia rolled over intel and fired essentially everyone in the management chain and started trying to run the fabs themselves, good chance they'd turn the ship around and become even more powerful than they already are.
> It might damage intel, but given they're circling the drain already, it's hard to make matters worse.
How was Intel "circling the drain"?
They have a very competitive offering of CPUs, APUs, and GPUs, and the upcoming Panther Lake and Nova Lake architectures are very promising. Their products compete with AMD, NVIDIA, and ARM SoCs from the likes of Apple.
Intel may have been in a rut years ago, but they've recovered incredibly well.
This is why I'm puzzled by this decision, and as a consumer, I would rather use a fully Intel system than some bastardized version that also involves NVIDIA. We've seen how well that works with Optimus.
None of their products are competitive, they fired the CEO who was meant to save them, fired tens of thousands of their engineers, sold off massive chunks of the company, they're still bleeding money and begging for state support?
Also, their network cards no longer work properly, which is deeply aggravating, as that used to be something I could rely on; I just bought some Realtek ones to work around the Intel ones falling over.
Intel's 140V competes with and often outperforms AMD's 890M, at around half the power consumption.[1]
Intel's B580 competes with AMD's RX 7600 and NVIDIA's RTX 4060, at a fraction of the price of the 4060.[2]
They're not doing so well with desktop and laptop CPUs, although their Lunar Lake and Arrow Lake CPUs are still decent performers within their segments. The upcoming Panther Lake architecture is promising to improve this.
If these are not the signs of competitive products, and that they're far from "circling the drain", then I don't know what is.
FWIW, I'm not familiar with the health of their business, and what it takes to produce these products. But from a consumer's standpoint, Intel hasn't been this strong since... the early 00s?
No way, man. Peak consumer Intel was from Core 2 up to Skylake-ish. That was when they started coasting and handed the market to AMD. Right now they're losing market share to them on mobile, desktop, and server. If we ignore servers, most PCs have an AMD CPU inside.
The GPUs might be competitive on price, but that's about it. It's pretty much a hardware open beta.
Ah, I was thinking of Core 2, but was off by a couple of years. Although "peak" consumer Intel was undeniably in the 90s.
Like I said, Intel may not be market leader in some segments, but they certainly have very competitive products. The fact they've managed to penetrate the dGPU duopoly, while also making huge strides with their iGPUs, is remarkable on its own. They're not leaders on desktops and servers, but still have respectable offerings there.
None of this points to a company that's struggling, but to a healthy market where the consumer benefits. News of two rivals collaborating like this is not positive for consumers.
The 90s were easy mode for semiconductor manufacturers because of Moore's law, and because cranking the clocks was relatively easy. After 2000 was when the really advanced microarchitectures started coming out.
>a company that's struggling, but to a healthy market where the consumer benefits
I would argue that the market is only marginally healthier than, say, 2018. Intel is absolutely struggling. The 13th and 14th generation were marred by degradation issues and the 15th generation is just "eh", with no real reason to pick it over Zen. The tables have simply flipped compared to seven years ago; AMD at least is not forcing consumers to change motherboards every two years.
And Intel doesn't even seem to care too much that they're losing relevance. One thing they could do is enable ECC on consumer chips like AMD did for the entire Ryzen lineup, but instead they prefer to keep their shitty market segmentation. Granted, I don't think it would move too many units, but it would at least be a sign of good will to enthusiasts.
When your own most competitive products are being made by your competitor for you, while you still have the cost center of running your own production fabs incapable of producing your most competitive products, and receiving bailouts just to keep the lights on...
Mergers where one company is on the verge of failing can be a net positive for consumers. Most obviously this happens when banks fail and people’s bank cards still work etc and at least initially the branches stay open.
Intel isn’t at that point, but the companies trajectory isn’t looking good. I’d happily sacrifice ARC to keep a duopoly in CPU’s.
AMD has always closely followed NVIDIA in crippling their cheap GPUs for anything other than gaming.
After many years of continuously decreasing performance in "consumer" GPUs, only Intel, with the Battlemage GPUs, has offered FP64 performance comparable to what could easily be obtained 10 years ago but can no longer be had today.
Therefore, if the Intel GPUs disappear, then the choices in GPUs will certainly become much more restricted than today. AMD has almost never attempted to compete with NVIDIA in features, but whenever NVIDIA dropped some feature, so did AMD.
The only consumer GPUs ten years ago that offered decent FP64 performance were the GTX TITAN series. And they were beasts! It's a shame nothing quite like them exists anymore. But they were the highest of high-end cards, certainly not that common or cheap.
AMD Hawaii GPUs in their professional variant (FirePro), which were cheap, unlike the "datacenter" GPUs of today, and the more recent Radeon VII had much better FP64 performance per $ than GTX Titan.
Moreover, there were claims that the memory errors on GTX Titan were quite frequent. On graphics applications memory errors seldom matter, but if you have to do a computation twice to be certain that there were no memory errors affecting the results, that removes much of the performance advantage of a GPU.
Fair enough. I did not know about these. It's hard to find reliable MSRP for them today, though. Given the era, market segment, and the competition, I'd estimate $1500-2000. It's not clear to me they were on consumer store shelves, either, whereas the GTX Titan was.
A cheap GPU ten-plus years ago was $200-300. That GPU either had no FP64 units at all, or had them "crippled" just like today. What happened between then and now is that the $1k+ market segment became the $10k+ market segment (and the $200+ market segment became the $500+ market segment). That sucks, and nVidia and AMD are absolutely milking their customers for all they're worth, but nothing really got newly "crippled" along the way.
Might rather see it the other way around - Nvidia getting license to create products with x86(_64) CPUs integrated in the silicon. Nvidia are the big boy in this transaction and they'll get what they want out of it. But I can see the attraction for Intel.
I don't think they can, as AFAIK the x86_64 cross-license is void if Intel or AMD changes hands. AMD will surely fight this tooth and nail in the courts.
But with the state of the courts today... who knows..
Yes indeed. It's still a step in that direction that opens up a bunch of communication channels between the execs of the two companies. Things move slowly.
It's absolutely not; the ARC line is not a threat in any way to nVidia. It's to get its feet into the CPU market without the initial setup costs and research it would take to start from scratch.
They will be dominating AMD now on both fronts if things go smoothly for them.
> It is unclear if Intel will issue new stock for Nvidia to purchase
Erm, a rather important point to bury that far down the story. The first question on anyone's lips will be: is this $5bn to build new chip technology, or $5bn for employees to spend on yachts?
It’s the most important part of the story. It’s so gross that companies can just dilute and create stock out of thin air like this. Why hold stock in Intel if the only people that ever buy the real stock and create buy pressure are the plebs? Here is the previous time…
> Intel stock experienced dilution because the U.S. government converted CHIPS Act grants into an equity stake, acquiring a significant ownership percentage at a discounted price, which increased the total number of outstanding shares and reduced existing shareholders' ownership percentage, according to The Motley Fool and Investing.com. This led to roughly 11% dilution for existing shareholders
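To make the "roughly 11%" figure in that quote concrete: dilution for existing holders is just new shares divided by the post-issuance total. A minimal sketch with placeholder share counts (not Intel's actual figures):

    def dilution(existing_shares: float, new_shares: float) -> float:
        """Fraction of ownership existing holders give up when new shares are issued."""
        return new_shares / (existing_shares + new_shares)

    # Hypothetical numbers purely for illustration:
    existing = 4_000_000_000   # shares outstanding before the issuance
    issued   =   500_000_000   # newly created shares sold to the investor
    print(f"Dilution for existing holders: {dilution(existing, issued):.1%}")
    # -> about 11% with these placeholder figures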
> It’s so gross that companies can just dilute and create stock out of thin air like this.
To get money from the outside, you either have to take on debt or you have to give someone a share in the business. In this case, the board of directors concluded the latter is better. I don't understand why you think it is gross.
To get a share in the business, you can also just buy stock in the business like everyone else, not increasing the total share count or causing dilution. They chose not to do this because it would have been more expensive due to properly compensating existing shareholders. So it's spiritually just theft.
> To get a share in the business, you can also just buy stock
The business is looking for additional capital. You can only do that by either selling new shares or raising debt.
> in the business like everyone else, not increasing the total share count or causing dilution. They chose not to do this because it would have been more expensive due to properly compensating existing shareholders. So it's spiritually just theft.
Shareholder dilution isn't inherently theft. Specific circumstances, motivations, and terms of issuance have a bearing on whether the dilution is harmful or whether it is necessary for the business.
For instance, it can be harmful if: minority shareholders are oppressed, shares are issued at a deeply discounted price with no legitimate business need or to benefit insiders at the expense of other shareholders, or the raised capital isn't used effectively to grow the company.
Dilution can be beneficial, such as when the raised capital is used for growth, employee compensation via employee stock options, etc.
Intel has treasury shares - its own stock that it owns. It had enough to cover this $5B deal. Nvidia could have bought that without screwing over existing shareholders.
And it is already trending down today, I wonder how long it will stay up and how long it will take the average investor to figure out NVDA isn't buying INTC on the open market and driving up the price.
This really wasn't a surprise: nVidia has seemed to be itching for a meaningful entry into the CPU market, and when Intel's CEO started undoing any and all future investment in the company, it was clear everything was being set up for a sell-off.
5 billion is just a start, but this is a gift that positions nVidia to eventually acquire Intel.
I think if Nvidia wanted to acquire Intel, they would acquire Intel.
Intel has never been so cheap relative to the kinds of IP assets that Nvidia values, and it probably never will be again if this and other investments keep it afloat.
Trump's FTC would not block it.
You write their names with the proper historical casing, which suggests some historical knowledge of the two. They have been very close partners on CPU+GPU for decades, and this investment is not fundamentally changing that.
The current CEO is more like a CFO, cutting costs and eliminating waste. There are two exits from that: a sell-off, as you say, or re-investment in the products most likely to be profitable in the future. This could be a signal that the latter is the plan and that the competitive aspects of the nVidia-intel partnership will be sidelined for a while.
nVidia has also been licensing their GPU IP to MediaTek recently, who are now working on a 2nd generation of an SoC that combines their ARM cores with nVidia GPUs, catering to e.g. the automotive market.
Looks like using GPU IP to take over other brands' product lines is now officially an nVidia strategy.
I guess the obvious worry here is whether Intel will continue development of their own dGPUs, which have a lovely open driver stack.
I'd agree, but Intel has also halted dGPU development efforts before, cf. the canned Larrabee project, which was, however, more troubled on the technology side.
Seems Nvidia needs an alternative to MediaTek or wants to pressure MediaTek given the announcement of x86 Intel/Nvidia SoCs and the delay of DGX Spark, GB10 and N1X.
They wanted to launch DGX Spark in early summer and it's nowhere to be seen, while Strix Halo is shipping in 30+ SKUs from all major manufacturers.
I'm guessing NVidia didn't do this by choice. Propping up Intel doesn't seem in their best interests, nor does it do their shareholders any favors by diluting their rapid growth.
There's some case for self-interest, propping up another fab etc., but I do wonder how much of it is USG. (The economic case for Intel integrating Nvidia silicon on-chip doesn't make much sense to me: there's no growth potential in commodity/consumer x86, and maybe they can shove their new integrated Nvidia parts in front of enterprise buyers, but I'd be dubious re: ROI.)
Sure, it makes sense for NVidia to propagate proprietary interfaces like NVLink, as well as to do anything that helps drive their own GPU sales, but I'm not sure how you so confidently conclude from that that propping up Intel is in NVidia's best interest?
> I'm guessing NVidia didn't do this by choice. Propping up Intel doesn't seem in their best interests
In a top-down oligarchy, their best interests are served by focusing on the desires of the great leader, in contrast to a competitive bottom-up market economy, where they would focus on the desires of customers and shareholders.
Called it. I knew Nvidia had nowhere left to go, with that insanely high valuation, other than to start buying competitors and adjacent companies. I don't think this is the end, either.
I don't think I actually commented it anywhere; I think I said it to some friends IRL. It's just been my observation that, when a company's valuation suddenly spikes, the company usually tries to "lock in" those gains by diversifying.
Nvidia sees the forest for the trees. The consequence of the US government buying stakes in Intel is that there will be federal requirements for US companies to use Intel. This is entirely about the foundry business. Nvidia is at risk when 100% of the production of its intellectual property occurs in Taiwan. They're more interested than anyone else in diversifying their foundry options. Intel has just been a terrible partner and totally disregards its customers. It's only because of the new strategic need for the US to have a foundry business that the government is saving Intel. NVIDIA is understandably supportive of this.
Except that Nvidia, Inc. doesn't own any of that NVDA stock; other people do, and Nvidia cannot access that money. Nvidia, Inc. owns its net profit, which is orders of magnitude less than its market cap. Last year's net profit was just under $73 billion. ($5 billion is still very affordable, to be sure.)
I think you may have missed AMC and TSLA; for quite a while their best selling products and biggest revenue drivers were their stock. Reflexivity is a thing; NVDA could issue 5, 10 or 20B and I don't think the price would move very much in this market. (Note that could change tomorrow.)
> Except that Nvidia, Inc. doesn't own any of that NVDA stock; other people do, and Nvidia cannot access that money. Nvidia, Inc. owns its net profit, which is orders of magnitude less than its market cap. Last year's net profit was just under $73 billion. ($5 billion is still very affordable, to be sure.)
Not all deals are made in cash; they can borrow money against their market cap.
The difference is, AMD wasn't a competitor to ATi. One mostly built CPUs, while the other built GPUs.
These two, on the other hand, compete in several major product categories. Overall, not a good look.
NVIDIA is Jensen Huang's life, and he is probably the best CEO in the USA. But he should be careful: possible shareholder lawsuits come with discovery. NVIDIA's sales to CoreWeave, for example, a company NVIDIA holds shares in, are starting to look a lot like self-dealing.
Also, since this Intel deal makes no sense for NVIDIA, a good observer would notice that lately he seems to spend more time on Air Force One than with NVIDIA teams. The leak of any evidence showing this was an investment ordered by the White House would make his company hostage to future demands from the current corrupt administration. The timing is already incredibly suspicious.
We will know for sure he has become a hostage if the next NVIDIA investment is in World Liberty Financial.
In 2007, Jen-Hsun Huang had "No CPU plans; can't add much value," explaining that Nvidia "cannot add much value" in the CPU market and would focus on GPUs/chipsets.
In 2009 he said no to an Intel-compatible x86 CPU.
In 2022 he said that if x86 already exists, they would just use it or partner: "One of the rules of our company is not to squander resources on something that already exists. If something already exists, for example, an x86 CPU, we'll just use it… we'll partner with them."
The IFS bottleneck means NVIDIA buys time, Intel loses it, and Blackwell Ultra slips. RISC-V (Tenstorrent) + AMD ROCm keep the pipeline moving while NVIDIA polishes its hammer. Lesson: an adage (Old Norse inspired):
“Hamarr klofnaði, er vé fann eigi leið;
Spjót fló, en skjár var eigi bundinn.”
Translation back to English
“The hammer split, when craft found not its path;
The spear flew forth, yet screen was never bound.”
This mirrors the original iambic-pentameter adage, but cast in skaldic-styled compactness, tying the hammer (Mjölnir) to the broken driver/kernel path, and the spear (Gungnir) to the unpatched Wayland/X11 display stack. Something Snorri Sturluson might've slipped into the Prose Edda, except it's about GPUs and Linux drivers.
You mean AMD's unified architecture. They were a founder of the HSA Foundation that drove innovation in this space complete with Linux kernel investments and unified compute SDKs, and they had the first shipping hardware support.
AMD's actual commitment to open innovation over the past ~20 years has been game changing in a lot of segments. It is the aspect of AMD that makes it so much more appealing than intel from a hacker/consumer perspective.
It feels like the end is in sight for dedicated graphics chips in consumer devices. Phones, consoles, and now Apple silicon are proving that SoC designs with unified memory and focused thermals are a winning strategy for efficiency and speed. Nvidia may be happy enough to move the graphics strategy onto an SoC and keep discrete boards just for AI.
A weird kind of full-circle moment: Intel used to laugh off Nvidia, then tried Kaby Lake-G with AMD (RIP), and now they're handing over CPU real estate to the company that wiped the floor with their own GPU efforts
NVIDIA has started integrating NVLink with the x86 CPU ecosystem. It's possible that NVLink will overcome CXL in this competition, just as CUDA once defeated OpenCL.
The irony here is that Intel once tried to buy Nvidia for $20B, but Intel's board rejected the deal as "too expensive," deciding Intel should build its own GPUs instead.
SemiAccurate reported several months ago that NVidia had been dipping its toes into manufacturing its products using Intel's fabs; I'd assume that's related.
People have been talking about the concentration of risk in popular indices, and now large caps are buying stakes in each other? Intel's stock is up 27% today.
No way this doesn't get blocked by antitrust. This will make them way too large, and Intel is already trying to sell itself off (the US govt bought a $10B stake a couple of weeks ago).
That action may cease to exist soon, especially after Vance is POTUS and the courts stacked with Peter Thiel loyalists that back his vision of anti-competition. Bet on it.
Intel should never have existed in the first place. We should have gone the RISC route in the 80s and 90s and it took 30 years for the world to realize that with ARM. It’s like we’re continuously resuscitating a zombie to terrorize us, instead of just letting it die.
INTC is a strategically important company. They won't be allowed to fail. Of course, that doesn't mean the stock is a good investment. During the GFC, all the equity holders were wiped out while all the bond holders got their money back. Figure that one out.
Perhaps, but you do understand that the probability of every tranche regardless of seniority getting paid in full and the equity getting nothing is zero. It's mathematically impossible to pin the waterfall like this.
With good enough lawyers mathematically impossible is practically relative. Assume the game is rigged, play accordingly. If it doesn't make sense, it makes sense.
This has been an interesting 1.5 months for Intel on all fronts. I wonder how long this deal was in the making, since the timing is impeccable, looking at the current administration's involvement with Intel.
No idea what to think of this. I don't want Intel to die, but what will this do to the GPU business where they're competing with NVIDIA? And at worst this leads to even more consolidation.
I'm mixed on this, only because when they've done similar hybrid chips with AMD GPUs in the past the support has been poor and dropped off rather quickly.
I'm very pessimistic about this. Goodbye to those nice, budget-friendly intel GPUs. nGreedia is going to continue selling 8 gig cards to consumers forever.
> Nvidia announced that it will buy $5 billion in Intel common stock at $23.28 per share, representing a roughly 5% ownership stake in Intel. (Intel stock is now up 33% in premarket trading.)
Why/how is INTC up around 30% in premarket (from $24.90 to $32) when Nvidia is buying the stock at $23.28? Who is selling the stock?
I suppose the Intel board decided this? Why did they sell under the current market price? Didn't the Intel board have a fiduciary duty to get as good a price from Nvidia as possible? If Nvidia buying stock moves it up so much, it seems like a bad deal to sell the stock for so little.
It's typical in these situations that the price per share is negotiated, with the current SP as a starting point. It's fairly unusual, I think, for the company selling stock to get a price significantly higher than market price; it's more typical that there's a slight discount. At least that's been the case for every stock I owned where dilution has occurred. We also don't know yet when exactly this deal was negotiated and approved, so it's hard to say. Considering where INTC has been very recently (below $20), $23.28 seems very reasonable to me.
The reason the stock surged up past $30 is the general market's reaction to the news, and subsequent buying pressure, not the stock transaction itself. It seems likely that once the exuberance cools down, the SP will pull back, where to I can't say. Somewhere between $25 and $30 would be my bet, but this is not financial advice, I'm just spitballing here.
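As a rough sanity check on the figures quoted in the article (a back-of-the-envelope sketch, nothing more): $5B at $23.28 per share implies roughly 215 million new shares, and the "roughly 5%" stake then implies a post-deal share count around 4.3 billion.

    # Back-of-the-envelope check of the figures quoted in the article.
    investment = 5_000_000_000      # dollars
    price_per_share = 23.28         # negotiated price

    shares_bought = investment / price_per_share
    print(f"Shares issued to Nvidia: ~{shares_bought/1e6:.0f} million")

    # If that stake is "roughly 5%", the implied post-deal share count is:
    implied_total = shares_bought / 0.05
    print(f"Implied total shares outstanding: ~{implied_total/1e9:.1f} billion")

None of this depends on where the stock trades afterwards: Nvidia's entry price was fixed at $23.28 regardless of the premarket move.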
Smart move for Nvidia, as AMD is the true competitor; continuing to use AMD's CPUs would just help build up a competitor fast. This also helps Intel figure out its foundry business, which might work someday, and that also benefits Nvidia, since right now its only choice is TSMC.
Nowadays I always wonder to what extent such deals are actually driven by market considerations and to what extent it's catering to the Trump administration. Token investments into this state enterprise named Intel seem to be a practical way to curry goodwill with the autocrats.
The U.S. government won’t have a seat on the board and agreed to vote with Intel’s board on matters requiring shareholder approval “with limited exceptions.”
Why? That is an example of a bad engineering company being acquired and then poisoning the quality of the acquirer with its toxic, low-quality, corporate-politics-above-engineering culture.
There have been a lot of mergers where that has not happened.
There are two scenarios here. In one, the AI bubble bursts (so Nvidia is overpriced now) and almost any value stock deal is good for them. In the other, it doesn't, and this gives them a limited hedge against problems with their most critical strategic partner (TSMC).
It looks like a good deal either way and in any amount. But of course I am no expert.
I suppose the problem is Intel doesn't actually have the fab capacity anyway. They were building it, but that's all on ice now, and probably wasn't close to TSMC anyway, I'd guess.
This all ignores the near complete lack of product out of their advanced processes as well.
This is a technology forum first and foremost. I know it might not look that way given the recent flood of political activism articles. But, in the technology field, this is pretty big news. This stake makes Nvidia one of Intel's biggest shareholders.
It's a good deal for Nvidia, because custom x86 server CPUs have optimization potential for AI computing clusters, which matters now that Nvidia has competitors that it didn't have just 2 years ago. I think that the next several years of Nvidia will be ones of fending off growing competition.
They basically baked in a massive investment profit into the deal. When you factor in the stock jump since this announcement, Nvidia has already made billions.
Strategically this is good for the US and the West. Intel needs to survive because they have the only advanced fabs that aren't within reach of China.
But as a consumer, I hate this. Intel APUs have become quite good and are great for Linux users. I don't want Nvidia's bullshit infecting them. Jensen wants to be the Apple of chips, and we'll all be worse off if Nvidia SoCs become ubiquitous.
If you wanted to acquire Intel you'd do it now. Maybe Intel's future products are garbage and they do worse - but the upside seems pretty high otherwise. This seems like a bit of a firesale price to acquire an advanced fab and CPU maker. Sure, it's Intel and they haven't been doing great, but companies with solid reliable outlooks don't trade this cheaply.
Ofc I would kind of hope/expect antitrust to object, given that Intel makes both GPUs and CPUs, and Nvidia has been dipping its toes into CPU production as well.
Intel still has to go through a lot of reorg (i.e. massive cuts) to get to a happy place, and this is what their succession of CEOs have been procrastinating over.
I recall reading a reddit comment (a resounding source, I know) that claimed the reason Intel's e-cores are crushing it is that they actually synthesise them, while the P-cores are a bunch of bespoke circuits bodged together.
One wonders just how bad things must have been internally for that to be the state of one of their core IPs in this day and age...
Not even the government at this point. The oligarchs are now in full control of the US and are dividing up their kingdoms. The plans for gulags for detractors are also being laid.
> This needlessly divisive and devoid of any factual basis. No gulags will exist and you know it.
What about "Alligator Alcatraz", which has been called a "concentration camp" [1] (so comparable to a gulag), or wherever the Korean detainees from the raid on the Hyundai/LG plant ended up, alleging utterly horrible conditions [2]? And there are bound to be more places like the latter; that was most likely just the tip of the iceberg, and we only know about the conditions there because the South Korean government raised a huge stink and got the workers out of there.
Okay, Alcatraz 2.0 did get suspended in August to my knowledge, but that's only temporary. It's bound to get the legal issues cleaned up and then be re-opened - or the case makes its way through to the Supreme Court with the same result to be expected.
I do not agree with that. In some cases it is acceptable to detain non-citizens for immigration-related offenses, but only if they receive due process to establish that they indeed should be detained.
Any denial of due process to any person is a gross violation of our most important right. Without the guarantee of due process to everyone, no one has any rights because those in power can violate rights at a whim.
There have been reported cases where ICE just ignored people's legal residence status or that they also snatched up citizens who didn't have paperwork on them just for "walking while black".
ICE doesn't reliably make any distinction, not since they hired thugs off of the streets and issued arrest quotas. Doesn't matter if the arrested have to be released later on.
I for one am happy that our capitalist overlords have finally shredded the last remnants of the pretense, but it probably causes a lot of issues for economists who have to explain why this is all fine and definitely normal.