> In the context of AWS, the expenses associated with employing AWS administrators often exceed those of Linux on-premises server administrators. This represents an additional cost-saving benefit when shifting to bare metal. With today’s servers being both efficient and reliable, the need for “management” has significantly decreased.
I've also never seen an eng org where a substantial part of it didn't do useless projects that never amounted to anything.
I get the point that they tried to make, but this comparison between "AWS administrators" and "Linux on-premises server administrators" is beyond apples-and-oranges and is actually completely meaningless.
A team does not use AWS because it provides compute. AWS, even when using barebones EC2 instances, actually means on-demand provisioning of computational resources with the help of infrastructure-as-code services. A random developer logs into his AWS console, clicks a few buttons, and he's already running a fully instrumented service with logging and metrics a click away. He can click another button and delete or shut down everything. He can click yet another button and deploy the same application on multiple continents, with static files served through a global CDN and deployed with a dedicated pipeline. One more click and everything is shut down again.
How do you pull that off with "Linux on-premises server administrators"? You don't.
At most, you can get your Linux server administrators to manage their hardware with something like OpenStack, but then they would be playing the role of the AWS engineers that your "AWS administrators" don't even know exist. However, anyone who works with AWS works only on abstraction layers above the one a "Linux on-premises administrator" works on.
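To make that concrete, here's roughly what "a few clicks" looks like when it's written down as code. This is a minimal sketch in CDK/TypeScript with made-up names and a toy handler, not anyone's production setup; the point is that one `cdk deploy` brings up an HTTP service whose logs and metrics land in CloudWatch with no extra wiring, and one `cdk destroy` removes every trace of it.

```typescript
import { App, Duration, Stack, StackProps } from "aws-cdk-lib";
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as cloudwatch from "aws-cdk-lib/aws-cloudwatch";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

// A throwaway HTTP service. Lambda ships its logs to CloudWatch Logs and its
// invocation/error/duration metrics to CloudWatch without any extra wiring.
class DemoServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const handler = new lambda.Function(this, "Handler", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      timeout: Duration.seconds(10),
      code: lambda.Code.fromInline(
        "exports.handler = async () => ({ statusCode: 200, body: 'ok' });"
      ),
    });

    // Public HTTP endpoint in front of the function.
    new apigateway.LambdaRestApi(this, "Api", { handler });

    // One alarm on the error metric, standing in for "metrics a click away".
    new cloudwatch.Alarm(this, "Errors", {
      metric: handler.metricErrors(),
      threshold: 1,
      evaluationPeriods: 1,
    });
  }
}

const app = new App();
new DemoServiceStack(app, "DemoService");
```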
This is the voice of someone who has never actually ended up with a big AWS estate.
You don't click to start and stop. You start with someone negotiating credits and reserved instance costs with AWS. Then you have to keep up with spending commitments. Sometimes clicking stop will cost you more than leaving shit running.
It gets to the point where $50k a month is indistinguishable from the noise floor of spending.
> This is the voice of someone who has never actually ended up with a big AWS estate.
I worked on a web application provided by a FANG-like global corporation that is a household name and used by millions of users every day, and which can and did make the news rounds when it experienced issues. It is a high-availability multi-region deployment spread across about a dozen independent AWS accounts and managed around the clock by multiple teams.
Please tell me more how I "never actually ended up with a big AWS estate."
I love how people like you try to shoot down arguments with appeals to authority when you are this clueless about the topic and are this oblivious regarding everyone else's experience.
Hrm. I have worked for a global corporation that you have almost certainly heard of. Though it's not super sexy.
The parent you're replying to resonates with me. There's a lot of politics about how you spend and how you commit; it's almost as bad as the commitment terms for bare-metal providers (3, 6, 12, 24-month commits), except the base load is more expensive.
It depends a lot on your load, but for my workloads (which are fragile, dumb, but very vertical compute with wide geographic dispersion), the cost is so high that a few dozen thousand dollars has been missed numerous times, despite having in-house "fin-ops" folks casting their gaze upon our spend.
From the parent poster's comments, the developers could very well be putting together quick proofs of concept.
I’ve set up an “RnD” account where developers can go wild and click-ops away. I also set up a separate “development” account where they can test their IaC manually, then commit it and have it tested through a CI/CD pipeline. After that it goes through the standard pull request/review process.
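For the sake of illustration, a hedged sketch of how that split can look in CDK/TypeScript (account IDs, repo name, and stack contents are all placeholders, and it assumes the usual GitHub connection for CodePipeline is already configured): the pipeline is the only thing that deploys into the development account, while the RnD account stays free for manual click-ops.

```typescript
import { App, Stack, StackProps, Stage, StageProps } from "aws-cdk-lib";
import { CodePipeline, CodePipelineSource, ShellStep } from "aws-cdk-lib/pipelines";
import { Construct } from "constructs";

// Placeholder for the service the IaC actually describes.
class ServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // ... the application's resources go here
  }
}

// Everything that gets deployed together into one account/region.
class DevStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    new ServiceStack(this, "Service");
  }
}

// The pipeline itself lives in the development account; the RnD account
// is left alone for manual click-ops and ad-hoc deploys.
class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, "Pipeline", {
      synth: new ShellStep("Synth", {
        // Made-up repo; a GitHub token/connection is assumed to exist.
        input: CodePipelineSource.gitHub("example-org/service-iac", "main"),
        commands: ["npm ci", "npm run build", "npx cdk synth"],
      }),
    });

    // Only the pipeline deploys to "development"; humans go through the PR.
    pipeline.addStage(new DevStage(this, "Development", {
      env: { account: "222222222222", region: "eu-west-1" }, // made-up account
    }));
  }
}

const app = new App();
new PipelineStack(app, "ServicePipeline", {
  env: { account: "222222222222", region: "eu-west-1" },
});
```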
> A random developer logs into his AWS console, clicks a few buttons, and he's already running a fully instrumented service with logging and metrics a click away
In a dream. In the real world of a medium-to-large enterprise, a developer opens a ticket or uses some custom-built tool to bootstrap a new service, after writing a design doc and maybe going through a security review. They wait for the necessary approvals while they prepare the internal observability tools, and find out that there is an ongoing migration and their stack is not fully supported yet. In the meantime, they need permissions to edit the Terraform files to update routing rules and actually send traffic to their service. At no point do they, nor will they ever, have direct access to the AWS console. The tools mentioned are the full-time job of dozens of other engineers (and PMs, EMs, and managers). This process takes days to weeks to complete.
> A random developer logs into his AWS console, clicks a few buttons, and he's already running a fully instrumented service with logging and metrics a click away...
This only works that way for very small spend orgs that haven’t implemented SOC 2 or the like. If that’s what you’re doing then you probably should stay away from the datacenter, sure.
> This only works that way for very small spend orgs that (...)
No, not really. That's how basically all services deployed to AWS work once you get the relevant CloudFormation/CDK bits lined up. I've worked on applications designed with high availability in mind, including multi-region deployments, which I could deploy as sandboxed applications on personal AWS accounts in a matter of a couple of minutes.
What exactly are you doing horribly wrong to think that architecting services the right way is something that only "small spend orgs" would know how to do?
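As a concrete, hypothetical sketch of what "sandboxed, multi-region, in a couple of minutes" means in CDK terms (the account ID, regions, and stack body are placeholders):

```typescript
import { App, Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";

// Placeholder for whatever your application stack actually contains.
class ServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // ... queues, functions, load balancers, dashboards, etc.
  }
}

const app = new App();
const sandboxAccount = "333333333333"; // personal/sandbox account ID, made up

// Same stack, once per region; add or remove regions freely.
for (const region of ["us-east-1", "eu-west-1"]) {
  new ServiceStack(app, `Sandbox-${region}`, {
    env: { account: sandboxAccount, region },
  });
}
```

Global traffic routing (Route 53, CloudFront, etc.) would sit on top of that, but the per-region footprint is one `cdk deploy --all` up and one `cdk destroy --all` down.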
Your original comment gives an impression that you like AWS because anyone can click-ops themselves a stack, so that's why you got all these click-ops comments.
How is an army of "devops" implementing your CF/CDK stack any different from an army of (lower paid) sysadmins running proxmox/openstack/k8s/etc on your hw?
> Your original comment gives an impression that you like AWS (...)
My comment is really not about AWS. It's about the apples-to-oranges comparison between the job of a "Linux on-premises server administrator" (and the value added by managing on-premises servers) and the role of an "AWS administrator". Someone needs to be completely clueless about the realities of both job roles to assume they deliver the same value. They don't.
Someone with access to any of the cloud provider services on the market is able to whip out and scale up whole web applications with far more flexibility and speed than any conceivable on-premises setup managed with the same budget. This is not up for debate.
> How is an army of "devops" implementing your CF/CDK stack any different from an army of (lower paid) sysadmins running proxmox/openstack/k8s/etc on your hw?
Think about it for a second. With the exact same budget, how do you pull off a multi-region deployment with an on-premises setup managed by your on-premises linux admins? And even if your goal is providing a single deployment, how quickly can you put that setup together to test a prototype and then shut the service down afterwards?
> Someone with access to any of the cloud provider services on the market is able to whip out and scale up whole web applications with far more flexibility and speed than any conceivable on-premises setup managed with the same budget.
Bullshit. I've seen people spin their wheels for months or years deploying their cloud-native jank, and you should read the article - it's not nearly the same budget.
> Think about it for a second. With the exact same budget, how do you pull off a multi-region deployment with an on-premises setup managed by your on-premises linux admins?
You do realize things like site interconnects exist, right? And it will likely be cheaper than paying your cloud inter-region transfer fees. You're going to be testing a multi-regional prototype? Please.
Look, there's a very simple reason why folks have been chasing public clouds, and it has nothing to do with their marketing spiel of elastic compute, increased velocity, etc. That reason is simple - teams get control of their spend without having to ask anyone for permission (like the old-school infra team).
Not on HN, where everyone uses Rust and yet needs a billion-node web-scale mesh edge blah at a minimum, otherwise you're doing it wrong. Better to waste $100k per month on AWS because ‘if the clients come, downtime is expensive’ than to just run a $5 VPS and actually make a profit while there aren't many clients. It’s the rotten VC mindset. Good for us anyway; we don’t need to make $10B to make the investors happy. Freedom.
Yeah, that's part of it. The other part is that you can move stuff that is working, and working well, into on-prem (or colo) if it is designed well and portable. If everything is running in containers, and orchestration is already configured, and you aren't using AWS or cloud provider specific features, portability is not super painful (modulo the complexity of your app, and the volume of data you need to migrate). Clearly this team did the assessment, and the savings they achieved by moving to on-prem was worthwhile.
That doesn't preclude continuing to use AWS and other cloud services as a click-ops driven platform for experimentation, while requiring that anything targeting production be refactored to run in the bare-metal environment. At least two shops I worked at previously used that as a recurring model (one focusing on AWS, the other on GCP) for stuff that was in prototyping or development.
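One concrete version of "not using cloud provider specific features" is keeping the provider SDK behind a narrow interface, so production talks to S3 while the colo environment points the same code at MinIO or any other S3-compatible store. A rough sketch (the interface, class, bucket, and endpoint names are all made up) using the AWS SDK for JavaScript v3:

```typescript
import { GetObjectCommand, PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

// The only contract application code is allowed to see.
interface BlobStore {
  put(key: string, body: string): Promise<void>;
  get(key: string): Promise<string>;
}

// Works against S3 on AWS, or against MinIO/Ceph on-prem via a custom endpoint.
class S3BlobStore implements BlobStore {
  private client: S3Client;

  constructor(private bucket: string, endpoint?: string) {
    this.client = new S3Client({
      // Region and credentials come from the environment in both cases.
      ...(endpoint ? { endpoint, forcePathStyle: true } : {}),
    });
  }

  async put(key: string, body: string): Promise<void> {
    await this.client.send(
      new PutObjectCommand({ Bucket: this.bucket, Key: key, Body: body })
    );
  }

  async get(key: string): Promise<string> {
    const res = await this.client.send(
      new GetObjectCommand({ Bucket: this.bucket, Key: key })
    );
    // transformToString() is available on the SDK v3 response body in Node.js.
    return await res.Body!.transformToString();
  }
}

// AWS:  new S3BlobStore("my-bucket")
// Colo: new S3BlobStore("my-bucket", "https://minio.internal:9000")
```

Application code only ever sees `BlobStore`, so the migration question becomes "what endpoint do we pass in", not "how many call sites did we hard-code against S3".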
> Yeah, that's part of it. The other part is that you can move stuff that is working, and working well, into on-prem (or colo) if it is designed well and portable.
That's part of the apples-and-oranges problem I mentioned.
It's perfectly fine if a company decides to save up massive amounts of cash by running stable core services on-premises instead of paying small fortunes to a cloud provider for the equivalent service.
Except that that's not the value proposition of a cloud provider.
A team managing on-premises hardware barely covers a fraction of the value or flexibility provided by a cloud service. That team of Linux sysadmins does not, nor will it ever, provide the level of flexibility or cover the range of services that a single person with access to an AWS/GCP/Azure account can provide. It's like claiming that buying your own screwdriver is far better than renting a whole workshop. Sure, you have a point if all you plan on doing is tightening that screw. Except you don't pay for a workshop to tighten up screws; you use it to iterate over designs for your screws before you even know how much load they're expected to take.
Counterpoint: most shops do not need most of the bespoke cloud services they're using. If you actually do, you should know (or have someone on staff who knows) how to operate it, which negates most of the point of renting it from a cloud provider.
If you _actually need_ Kafka, for example – not just any messaging system – then your scale is such that you better know how to monitor it, tune it, and fix it when it breaks. If you can do that, then what's the difference from running it yourself? Build images with Packer, manage configs with Ansible or Puppet.
Cloud lets you iterate a lot faster because you don't have to know how any of this stuff works, but that ends up biting you once you do need to know.
> Counterpoint: most shops do not need most of the bespoke cloud services they're using. If you actually do, you should know (or have someone on staff who knows) how to operate it, which negates most of the point of renting it from a cloud provider.
Well said! At $LASTJOB, new management/leadership had blinders on [0][1] and were surrounded by sycophants & "sales engineers". They didn't listen to the staff that actually held the technical/empirical expertise, and still decided to go all in on cloud. Promises were made and not delivered, there was lots of downtime that affected _all areas of the organization_ [2] and could have been avoided (even post-migration), etc. Long story short, money & time were wasted on cloud endeavors for $STACKS that didn't need to be in the cloud to start with, and weren't designed to be cloud-based. The best part is that none of the management/leadership/sycophants/"sales engineers" had any shame at all about the decisions that were made.
Don't get me wrong, cloud does serve a purpose and serves that purpose well. But a lot of people willfully ignore the simple fact that cloud providers are themselves on-prem infrastructure run by teams of staff/administrators/engineers.
[0] Indoctrinated by buzz words
[1] We need to compete at "global scale"
[2] Higher education
> Yeah, that's part of it. The other part is that you can move stuff that is working, and working well, into on-prem (or colo) if it is designed well and portable. If everything is running in containers
Anyone who says that hasn’t done it at scale.
“Infrastructure has weight.” Dependencies always creep in, and any large-scale migration involves regression testing, security, dealing with the PMO, compliance, dealing with outside vendors who may have whitelisted certain IP addresses, training, vendor negotiations, data migrations, etc.
And then, even though you use MySQL, for instance, someone somewhere decided to use the AWS-specific “load data into S3” MySQL extension, and now they are going to have to write an ETL job. Someone else decided to store and serve static web assets from S3.
I mean, aside from my current role in Amazon, my last several roles have been at Mozilla, OpenDNS/Cisco, and Fastly; each of those used a combination of cloud, colo and on-prem services, depending on use cases. All of them worked at scale.
I specifically said "if it is designed well", and that phrase does a lot of heavy lifting in that sentence. It's not easy, and you don't always put your A-team on a project when the B or C team can get the job done.
The article outlines a case where a business saw a solid justification for moving to bare metal, and saved approximately 1-3 SDE salaries (depending on the market) in doing so.
That amount of money can be hugely meaningful in a bootstrapped business (for example, for one of the businesses my partner owns, saving that much money over the COVID shut-downs meant keeping the business afloat rather than shuttering it permanently).
I didn’t mean to imply that you haven't worked at scale, just that doing a migration at scale is never easy even if you try to stay “cloud agnostic”.
Source: former AWS Professional Services employee. I just “left” two months ago. I now work for a smaller shop. I mostly specialize in “application modernization”. But I have been involved in hairy migration projects.
Most folks aren't focused on portability. Almost every custom-built AWS app I've seen is using AWS-specific managed services, coded to S3, SQS, DynamoDB, etc. It's very convenient and productive to use those services. If you're just hosting VMs on EC2, what's the point?
I worked for a large telco where we hosted all our servers. Each server ran multiple services bare-metal - no virtualization - and it was easy to roll out new services without installing new servers. In my next job, using AWS, I missed the level of control over network elements and servers, the flexibility, and the ability to debug by taking network traces anywhere in the network.
> Our choice was to run a Microk8s cluster in a colocation facility
They go on to describe that they use Helm as well. There's no reason to assume that "a fully instrumented service with logging and metrics" still isn't a click and a keypress away.
Your points don't make a whole lot of sense in the context of what they actually migrated to.
Absolutely not my experience. I used to work for a Japanese company that was almost entirely self-funded. They wouldn't even go to the bank and get business loans.
Your description applies to a substantial number of business units in that company. They also had a "research institute" whose best result in the last decade was an inaccurate linear regression (not a euphemism for ML).
You've never had friends and colleagues working at big (local and international) established companies sharing their experience of projects being canned, and not just repurposed?
There's nothing about being bootstrapped vs. venture-backed that lets anyone know, a priori, whether a given project will be successful or not. Something like 80% of startups fail within the first two years.