You linked to an article about Google renting an insignificant amount of additional capacity.

Google runs AI/HPC workloads on its own hardware and has been doing so for more than a decade. Google Gemini was trained on TPUs developed in-house; it does not run on Nvidia hardware.


And I believe Apple refuses to use Nvidia as well; they’re actually using Google’s TPUs.

https://www.tomshardware.com/tech-industry/artificial-intell...


Apple is definitely using Nvidia hardware.

That was for training. For inference, they reportedly use their own silicon.

A rumor emerged a few weeks ago that they broke down and placed an order with Nvidia:

https://finance.yahoo.com/news/apple-might-ai-game-1-1951003...

Before that, in a Wired article from about 10 years ago on Siri and AI, one of the Apple higher-ups was quoted bragging about having one of the baddest GPU farms around (paraphrasing).


Google, last month: “we’re doubling down on our partnership with NVIDIA”

https://blog.google/technology/ai/google-nvidia-gtc-ai/


That's Google Cloud Platform. Of course they will provide Nvidia hardware as demanded by external customers.

But their internal workloads and their frontier model (Gemini) run on TPUs.


Yes, but the cloud customers who supposedly "finance TPUs" have NO INTEREST in TPUs; they want Nvidia GPUs instead.

How does Google pay for TPUs internally? Through Google Search and Google Cloud, of course. Google Search runs on TPUs; Google Cloud, however, has far more non-TPU instances.

What people forget is that nobody wants to trade a CUDA dependency for a software dependency on Google/AWS/Azure. CUDA at least lets me run the same code on consumer hardware, on pro hardware, in the cloud, AND in an on-prem data center.
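To make the portability point concrete, here is a minimal CUDA sketch (my own illustration, not something from the article): the same source compiles unchanged with plain nvcc whether the target is a consumer GeForce card, a workstation GPU, a cloud instance, or an on-prem cluster; only the build flags change.

    // Vector add: identical source for any CUDA-capable GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        // Unified memory keeps the sketch portable across devices.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
        add<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();
        printf("c[0] = %.1f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Build with something like nvcc add.cu -o add; the same file works whether the box is a gaming PC or a rented cloud GPU, which is exactly the lock-in tradeoff being described.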

I'm really looking forward to Fortune 500 companies sending all their internal company data to Google to structure it to train custom AI models. Yeah, that will never happen.

What happens instead is that Fortune 500 companies will build up AI expertise to create their own custom AI models, and they will think hard about whether they want the training compute in-house or in a cloud. Nvidia has a huge business building on-premises data centers, which people totally overlook. NO CSP will ever compete there, because it's against their primary business model.

The Reliance India contract from 2023 alone calls for delivery of 2 million GPUs over a few years; at data-center GPU prices, that one contract is probably worth more than Nvidia's entire revenue last year, and that's just one large corporation in India.


That’s the fundamental premise of the article: the hyperscalers will consolidate GPU compute exactly as they consolidated every other form of compute, including highly sensitive workloads like product design and customer databases.

You can argue they won’t, but the “enterprises won’t put sensitive data in the cloud” ship sailed years ago.


> I'm really looking forward to Fortune 500 companies sending all their internal company data to Google

Their internal company data is already on cloud servers. They’re not going to waste money doing it all in-house. The executives will buy the AI service from Google/Azure/AWS, where the company data is already hosted, avoid the cost and risk of building it themselves, and collect their bonus.



