AWS and other hyperscalers will keep growing, no doubt. Public cloud adoption is at around 20%, so the new companies migrating into the cloud will keep the growth going. That doesn't change the fact that some might be repatriating, though, especially the ones that couldn't get the benefits out of the cloud.
One thing I've seen in every startup I've been in over the last decade is that cloud asset management is relatively poor. I'm not certain whether enterprise is better or worse, but when I think back 10+ years, resources were finite, and with that limitation came self-imposed policing of utilization.
Looking at cloud infrastructure today, it is very easy for organizations to lose sight of production vs. frivolous workloads. I happen to work for an automation company that has cloud infrastructure monitoring deployed, such that we get notified about the resources we've deployed and can terminate workloads via ChatOps. Even though everyone in the org is continuously nagged about these workloads, I still see tons of resources deployed that I know are doing nothing, or that could be commingled on a single instance. But since the cloud makes it easy to deploy, we seem to gravitate towards creating a separation of work efforts by just deploying more.
This is, and was, rampant with respect to cloud in every organization I've been a part of for the last decade. The percentage of actually required production workloads in a lot of these accounts is, I'd gather, less than 50% in many cases. So I really do wonder how many organizations are just paying the bill. I'd also gather the big cloud providers know this from utilization metrics, and I wonder how much cloud growth is actually stagnant workloads piling up.
The major corps I've worked in that did cloud migrations spent so much time on self-sabotage.
Asset management is always poor, but that's half because control over assets ends up being wrestled away from folks by "DevOps" or "SREs" making K8s operators that just completely fuck up the process.
The other half is because they also want "security controls" and ensure that none of the devs can see any billing information. How can I improve costs if I can't tell you the deltas between this month and last?
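The delta the parent is asking for is trivial to compute once devs can actually see the numbers. A minimal sketch (the service names and dollar figures here are made-up examples, not real billing data):

```python
# Month-over-month cost delta per service: the kind of visibility
# that gets locked away when billing data is hidden from devs.
# All figures below are hypothetical illustration data.
last_month = {"EC2": 4200.0, "S3": 310.0, "RDS": 980.0}
this_month = {"EC2": 5100.0, "S3": 305.0, "RDS": 1450.0}

# Delta for every service billed this month (0.0 if it's new).
deltas = {
    svc: this_month[svc] - last_month.get(svc, 0.0)
    for svc in this_month
}

# Sort by largest increase so the biggest regressions surface first.
for svc, delta in sorted(deltas.items(), key=lambda kv: -kv[1]):
    print(f"{svc}: {delta:+,.2f} USD")
```

With real data you'd pull the two months from your provider's billing export instead of hard-coding them; the comparison logic stays the same.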
I guess the idea wouldn't be to compete with AWS, but just to explore potential other revenue streams for Tesla owners. Clearly there will be limitations in terms of what can be done with such a computer.
There won't be a revenue stream for owners or for Tesla from this, because computing this way is far less efficient than simply doing it in a datacenter, with high-speed interconnects between machines.
Meaning that if he sells it at a profit, no one would buy it. And if he uses it himself, he'd be losing money compared to buying time in a datacenter or setting one up himself.
Basically. There's only one scenario in which I see this being useful.
Let's say there's some world crisis and most of our chip manufacturing capacity is bombed, destroyed, or otherwise unavailable, while our computing needs are rising exponentially due to some kind of AI warfare (economic or military).
In this case, every device with compute will become valuable, because compute is extremely scarce and datacenters won't be able to service the demand.
So you'll be able to sell compute from your phone (mostly when hooked to a charger), your laptop, your desktops, iPads, and your cars.
But there's nothing special about the cars themselves. And in a relatively normal world, what he proposes is a non-starter.
Even my scenario is a bit of a sci-fi thing. It's really inefficient to distribute compute. And while folding@home and seti@home ship small, independent units of data for analysis, AI inference tends to move high volumes of data and needs high-bandwidth interconnect between inference units. That makes it extra hard to distribute in a meaningful way compared to those well-known processing-donation projects.
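To put rough numbers on that interconnect gap (both link speeds below are ballpark assumptions for illustration, not measurements of any specific hardware):

```python
# Back-of-envelope comparison: datacenter GPU-to-GPU interconnect
# vs a residential uplink, for shipping inference data around.
# Both bandwidth figures are ballpark assumptions.
nvlink_bytes_per_s = 900e9          # ~900 GB/s, NVLink-class interconnect
home_uplink_bytes_per_s = 25e6 / 8  # ~25 Mbit/s home upload, in bytes/s

ratio = nvlink_bytes_per_s / home_uplink_bytes_per_s
print(f"Datacenter link is roughly {ratio:,.0f}x faster")

# Time to move a hypothetical 10 GB of activations between two units:
payload = 10e9  # bytes
print(f"datacenter: {payload / nvlink_bytes_per_s * 1e3:.1f} ms")
print(f"home link:  {payload / home_uplink_bytes_per_s / 60:.0f} minutes")
```

Five orders of magnitude of bandwidth difference is why shipping tightly coupled inference traffic over consumer links is a non-starter, while embarrassingly parallel workloads like folding@home survive it fine.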