> Data centers are one of the best demographics […]
Hardly. Data centers are a dying breed, and their number has been rapidly dwindling in recent years. «DC» (and the mythical «on-prem» by extension) has effectively become a dirty word. The crudest lift-and-shift approach (leaving aside the merits of doing so) is the most common: create a backup, spin up a cloud «VM», restore from the backup, and turn the server in the DC off forever. No one is going to even remotely consider a new hardware architecture, not even in the cloud.
Moreover, since servers do not exist in a vacuum and either run business apps or do something at least remotely useful, that entails migrating the software to the new platform. The adoption has to be force-pushed onto the app developers, otherwise they won't bother; and for them to convert/migrate an app onto a new architecture, they need desktops/laptops that run on the new ISA, and no viable server and desktop hardware exists in June 2023 – it will come along later, with «later» having no clear definition. Talking open source is a moot point, as most businesses out there run commercially procured business apps.
Data centers in general are NOT a dying breed; it's more a case of rapid growth, not dwindling. Perhaps you are referring to individual companies moving to the cloud, and colo-type activity dwindling (although institutions under strict regulation may still require a backup colo)?
However, the cloud resource providers are definitely growing (https://www.statista.com/outlook/tmo/data-center/worldwide#r...), and there is a huge push for more power- and heat-efficient architecture, whether on the server, network, or supporting infrastructure side.
This doesn’t seem to comport with Amazon’s experience, investment, and trajectory with Graviton, based on public reference customers and a few personal anecdotes.
They are, but they are not data centers in the traditional sense of the term. The GP was referring to traditional data centers, as far as I understand.
> You're paying a x10 markup to make accounting shenanigans easier,
Whilst cloud platforms do allow one to accrue an eye-watering cloud bill by virtue of shooting oneself with a double-barrelled gun, the fault is always on the user, and the «10x markup» is complete nonsense – a fiction.
As an isolated, random example: an AWS API Gateway serving 100 million requests of 32 kB each with an at least 99.95% SLA will cost US$100 a month. AWS EventBridge handling the same 100 million monthly events with at least 99.99% availability will also cost US$100 a month.
That is US$200 in total per month for a couple of the most critical components of a modern data processing backbone: they scale out nearly indefinitely, require no maintenance or manual supervision, are always patched security-wise, and are shielded from DDoS attacks. Compared with the same SLA, scalability and opex costs in a traditional data centre, they are a steal. Again, we are talking about at least 99.95% and 99.99% SLA for each service.
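For anyone checking the arithmetic, here is a rough sketch in Python. The per-million rates are my assumption, based on AWS's published pay-per-use pricing around mid-2023 (HTTP API Gateway requests and EventBridge custom events were both roughly US$1.00 per million); exact rates vary by region and tier:

    # Back-of-the-envelope monthly cost for the two managed services above.
    # Rates are assumed (~US$1.00 per million) -- verify current AWS pricing.
    requests_per_month = 100_000_000

    api_gateway_usd = requests_per_month / 1_000_000 * 1.00   # ~US$100
    eventbridge_usd = requests_per_month / 1_000_000 * 1.00   # ~US$100

    print(api_gateway_usd + eventbridge_usd)   # 200.0 -> US$200/month total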
If one uses the cloud to spin up cloud VMs and databases that run 24x7 at an average 10% monthly CPU utilisation, they are using the cloud wrong, they are wasting their own money, and they have only themselves to blame – the «10x markup» is a delusion born of ignorance.
> but the technology is exactly the same.
The underlying technology might be the same, but it is abstracted away from the user, who no longer needs to care about it: they use a service and pay only for the actual usage. The platform optimises resource utilisation and distribution automatically. That is the value proposition of the cloud today, not 15 years ago.
> Go compare prices of e.g. Hetzner or OVA and come back to me again with that "fiction".
I have given two real examples of two real and highly useful fully managed services, with made-up data volumes and their respective costs. Feel free to demonstrate which managed API gateway and pub/sub services Hetzner or OVA offer that come close or match them, functionality- and SLA-wise – then we can compare.
> That's only about 35 events per second.
Irrelevant. I am not running a NASDAQ clone, and most businesses do not come anywhere close to generating 35 events per second anyway. If I happen to have a higher event rate, the service will scale for me without me lifting a finger. Whereas if a server hosted in a data centre has been underprovisioned, it will require a full-time ops engineer to reprovision it, set it up, and potentially restore from a backup. That entails resource planning (a human must be available) and time spent doing it. None of that is free, especially operations.
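For context, the arithmetic behind that figure (a quick sketch, assuming the 100 million events spread uniformly over a 30-day month; real traffic is bursty, so peaks will be higher):

    # 100 million events spread evenly over a month is a modest rate.
    events_per_month = 100_000_000
    seconds_per_month = 30 * 24 * 60 * 60   # 2,592,000 s in a 30-day month

    print(events_per_month / seconds_per_month)   # ~38.6 events/second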
> […] Hosting over at Hetzner will cost you maybe $25 a month.
It is the «maybe» component that invalidates the claim. Other than «go and compare it yourself» and hand-waving, I have seen slightly less than zero evidence as a counter-argument so far.
Most importantly, I am not interested in hosting and daily operations; the business is interested in a working solution, and it wants it quickly. Hosting and tinkering with, you know, stuff and trinkets on a Linux box is the antithesis of fast delivery.
The vast majority of servers in data centers sit idle most of the time anyway, consuming electricity and generating pollution for no one's gain, so the argument is moot.
It isn't 1992 anymore; people don't "tinker", they have orchestration in 2023.
The orchestration tools for self-hosted setups are cheaper, more standard, and more reliable. (Because Amazon's and Google's stuff is actually built on top of standard stacks, just with extra corporate stupidity added.)
Regardless of whether you use something industry-standard or something proprietary, you will need an ops team that knows orchestration. (And an AWS orchestration team will be more expensive because, again, their stuff is non-standard and proprietary.)
There are reasons for using AWS, but cost or time to market is never one of them.