I've been saying for a couple of years that whoever builds a tool to one-click deploy an AWS-alike on local bare metal has built the next billion-dollar-a-year company.
I know of a lot of big shops that are desperate to get out of the cloud due to 7-8 figure AWS bills, but their software and engineering are too tightly wound into AWS tooling.
But AWS is huge. They employ thousands of engineers. Granted, most of the complexity is due to its huge scale, but it still has too many features to do what you're asking. And then there are AWS exclusives like DynamoDB.
Yes, there are open source public/private cloud alternatives like OpenStack, but they won't solve the problem of lock-in, and you'd still have to change all your tooling.
Conflating lock-in to an open source tool with lock-in to a vendor you literally have to keep paying money to survive is a shitty tactic used by companies like Amazon.
Using OpenStack absolutely solves the problem of lock-in from a business perspective. You can't be cancelled, the price can't be changed, etc.
I legitimately wonder how many people feel free from lock-in after they finish paying $75k to Canonical for a 12-node OpenStack "private cloud build" (https://assets.ubuntu.com/v1/d4766546-Private-Cloud-Build_Da...), not to mention ongoing support bills and add-on billing "starting at $25k".
Just because something is open source does not mean your org will actually have the expertise to run it successfully yourself. That's how most "open source" projects like Kubernetes and OpenStack become so successful: the supporting orgs behind the project make lots of money charging companies for support, maintenance, and the issues that pop up with the setup.
There are many forms of lock-in. Making a big upfront investment that has to pay for itself over the next X years is a form of lock-in. Paying for expensive professional consulting support to build out the system and keep it up and running is a form of lock-in. Or even worse, hiring or training expensive dedicated employees to learn the open source system and become the internal professional consultants for the rest of the org - that is also lock-in.
> I legitimately wonder how many people feel free from lock-in after they finish paying $75k to Canonical for a 12-node OpenStack "private cloud build"
Here's the part where it's not lock-in: we stopped paying Red Hat for support when we got big enough to have our own SRE expertise to run our own OpenStack cloud.
You know what the impact was on our applications running on it? Absolutely nothing.
How much do you pay Canonical to use grep every month? Inexperienced developers have just been conditioned by cloud providers to think that an IaaS platform must cost something in payments per month to some tech company. It does not.
This is no different than Windows vs Linux on servers. I can’t wait until 20 years from now when all of the proprietary shit looks ridiculous in hindsight.
> to think that an IaaS platform must cost something in payments per month to some tech company. It does not.
I agree fully! As you explained, it can cost several full-time employees and their payrolls and their HR and their management!
Edit: And before someone misses the point, I'm not saying the math never makes sense for in-house, but to me the inexperienced take is thinking either approach should obsolete the other...
Sounds like you're pushing a hybrid cloud approach in your edit. I have been consulting on this topic for 7 years, and companies are still tied to their data center or the cloud; they can't readily pull their components apart, because they originally built them so intertwined with the cloud environment of the time.
Even if it doesn't cost you in monthly payments to a tech company, it still costs you in "our own SRE expertise to run our own OpenStack cloud" - so salaries, benefits, 401k contributions, etc.
> I can’t wait until 20 years from now when all of the proprietary shit looks ridiculous in hindsight.
I'd posit that in 20 years there will still be tech which is more efficiently managed by a central provider, rather than having each company hiring their own independent expertise.
I'm sure you can find other OpenStack professionals that are willing to manage your Ubuntu OpenStack deployment.
Of course every solution costs something. Lock-in means that there is no other choice, or that the cost to switch is so steep that it becomes prohibitive to do so.
ZeroStack sounded like a turnkey OpenStack solution akin to VMware, but I think the owners got stuck in the consulting grind and never geared up for growth.
I share your disdain(?) for OpenStack, but the 12-node count there is the minimum, I think (to deploy the control plane). You can likely add a reasonable number of hypervisors at no cost.
(I think I'm correct based on the table in the last page of your doc, but happy to be corrected).
Everybody is focusing on what happens on top of the CPU, but very few people talk about what happens beneath it.
Datacenters are fucking expensive to build and run, buying network and compute hardware is a pain in the ass, maintaining sufficient capacity to sustain growth while operating on a 3-6 month deployment lead time is atrociously expensive, and that's without all of the financial calculation around depreciation/taxes/etc.
The fact that cloud providers are able to ball all of that shit up into a flat opex rate for a server is unbelievably attractive.
Exactly - open source solves the problem of vendor lock-in in your tooling. It's more needed than ever now that the world is moving towards proprietary SaaS black boxes.
> The downloadable version of DynamoDB is intended for development and testing purposes only. By comparison, the DynamoDB web service is a managed service with scalability, availability, and durability features that make it ideal for production use.
Ah understood. I thought this comment thread was talking about replacing _production_ AWS use cases with a bare metal system that has a compatible API.
"...Whether you are testing complex CDK applications or Terraform configurations, or just beginning to learn about AWS services, LocalStack helps speed up and simplify your testing and development workflow..."
I think it would not be impossible for a talented, well-funded group to pick the top 20% of AWS's services and then implement them with the top 80% of features, with an API-compatible layer.
Even "just" to emulate a single region with 3 AZs would be useful for testing and for prod for many people. I know of several large organizations that to this day haven't scaled their CI beyond one region.
This makes sense; I would guess that 90% of AWS customers are only using the top 10% of available AWS features.
Same deal with Azure and GCP, although note that Azure has Azure Stack, which is a self-hosted version of Azure (I guess it's still expensive though, and I don't know how many services it includes).
OpenStack isn't a Red Hat project; it is contributed to by a large number of players. Red Hat, SUSE, and Canonical all provide their own OpenStack service subscriptions, but one can easily deploy OpenStack with a project like OpenStack Kolla. I run it at home and it is great.
OpenStack is an over-complicated mess. Most enterprise IT orgs already struggle to hire the people with skills to operate a public cloud account, nevermind having to build the underlying infrastructure as well.
I suppose it is complicated - I'm just a homelabber - but I don't think it's that bad. I deployed my cluster at home using Kolla, and I've had relatively few hiccups that non-sysadmin/non-coder me couldn't get around. I've had the same cluster going for a few OpenStack releases now. Kolla really makes it easy.
That said, before I found Kolla I did deploy OpenStack manually a few times, messing things up and having to restart.
I mean, the real value is implementing AWS's API surface area in a single control plane on top of the usual on-prem tooling. K8s is kinda getting there, but I think it's too narrow in scope for what people actually want.
For DynamoDB specifically, ScyllaDB has a DynamoDB-compatible mode (Alternator). There are several options (open source and otherwise) with S3-compatible APIs. I think it is more likely we'll end up with replacements for individual services, and maybe at some point it will be possible to glue them together as a replacement for a significant subset of AWS.
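To make "compatible API" concrete, here's a minimal sketch: the stock AWS SDK needs nothing but a different endpoint. The host/port, credentials, and `users` table below are assumptions for illustration (e.g., a ScyllaDB Alternator node listening locally):

```python
import boto3

# Sketch: the unmodified AWS SDK talking to a DynamoDB-compatible
# backend. The endpoint and "users" table are assumptions; compatible
# backends often ignore the credentials entirely.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # assumed Alternator endpoint
    region_name="us-east-1",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)
table = dynamodb.Table("users")  # assumes the table already exists
table.put_item(Item={"id": "42", "name": "alice"})
print(table.get_item(Key={"id": "42"})["Item"])
```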
What's so exclusive about a half-baked Dynamo implementation? Dynamo is just a worse Cassandra. DynamoDB itself doesn't even implement the Dynamo Paper.
I worked on a system called Quartz at Bank of America. It wasn't an AWS clone, but was targeting the same flexibility for the development and deployment of applications on 'programmable' infrastructure. The team that built it had done similar projects before at J. P. Morgan and Goldmans. It was a set of infrastructure and services all accessed through Python APIs, with YAML config, all stored in a single distributed version control system. It was a lot of fun to work on.
Wow, my brother works on Quartz. But I think that half of BofA does. Higgins and Kirat formed their own company to bring the Quartz/Athena paradigm to the masses.
It's very different - much more like AWS or Azure. The underlying infra is there; you just allocate resources using config and APIs. For example, there's a server farm for running server applications such as batch processing or web applications. To stand up a new application you develop the app on a local instance of the Quartz Python runtime, check it into version control, then deploy it by configuring a YAML file in version control that defines which server farm it runs on, the path to the code in VC, the schedule, and various resource limits. That's about it. Job done.
Need more details. Matching AWS service for service would be overkill, I imagine, so what services does this need to have? Is hardware virtualisation a must-have?
The way I made systems/applications before AWS existed was to create bootable images. I could already use a hypervisor, Xen, because the OS fully supported both guest and host in all virtualisation modes, before Linux did, and before AWS existed. But because I am not a hosting provider I saw little need for virtualisation. The OS I used also had "unikernel" capability before Docker, etc. existed. As it happened, this inexpensive self-determination was not to be the future; instead we got "the cloud". Sharing servers with other "tenants". Less expensive for the hosting provider, but more expensive and more limited for the subscriber. No argument, the "limited" part has improved since then, but it is still expensive (for continuous use, that is; spot pricing was a neat benefit of "the cloud").
Anyway, I "deploy" images to "local bare metal", which is a laptop, RPi, or some other smaller form factor. I can use the unikernel capability to run some kernel drivers in userspace. A basic filesystem is embedded in the kernel, and the larger filesystem is on the USB stick in compressed format. Updates are easy enough: I put two kernels on the USB stick, one the running kernel and the other the update kernel. Same for the larger filesystem, which may contain servers and configuration files. I can update, or go back to the last working kernel/configuration, by selecting one or the other in the boot menu / renaming the larger compressed filesystem.
Here is someone running a search engine out of his living room. AFAIK, his setup survived a sustained HN thundering-herd hug of death without a hiccup.
The expense of AWS is an obvious point of discussion, but another one not mentioned here is control. When I create images for bare metal I do not need to jump through any hoops as I would in order to create an image that will run on AWS. Nor do I need to fiddle with all the AWS knobs. There are no silly marketing names for every program I run. I know the system I am creating as well as I know the OS and the software I choose. That is much better than how well I know every aspect of AWS, which just gets more and more complex every year. AWS documentation is as cringeworthy as it is voluminous. The ever-increasing complexity of AWS, including the "tooling", is, IMO, How To Create Lock-In 101.
> The ever-increasing complexity of AWS, including the "tooling", is, IMO, How To Create Lock-In 101
That is an interesting point: complexity creates lock-in. Why? Because when you are interacting with a complex system, you depend on it working in the complex ways it does. It is unlikely that anybody else could duplicate those features of AWS your application depends on.
This all runs counter to the idea of "encapsulation". You should be able to use a system via a well-defined interface. Once the interface is well-defined, other providers can provide their own implementation of the same interface.
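A minimal sketch of that argument in code, assuming a local S3-compatible server (e.g., MinIO on its default port 9000): the application code depends only on the interface, and the implementation is chosen at construction time:

```python
import boto3

def make_store(endpoint_url=None):
    # Code written against the S3 interface; which implementation backs
    # it is decided purely by the endpoint configuration.
    return boto3.client("s3", endpoint_url=endpoint_url,
                        region_name="us-east-1")

aws_s3 = make_store()                           # real AWS (needs credentials)
local_s3 = make_store("http://localhost:9000")  # assumed S3-compatible server
print(local_s3.list_buckets()["Buckets"])       # same calls either way
```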
So, AWS is basically bad software engineering, lacking encapsulation?
I would absolutely love to read you blogging about this in detail. It's a lost art, and one I've never been good at. I've been wanting to lately, but people like you are always undercover and don't document their knowledge. :(
I think this misunderstands the hard part of running AWS. You're paying for the software, sure. What you're really paying for is the hundreds of oncall engineers supporting you at all times, and the culture of ensuring that those services keep working.
Because minutes after a metric drop indicates the possibility of an issue, an engineer who probably wrote the code - or has it checked out on their laptop and is intimately familiar with it - is working on the problem. No waiting for a support team to triage, find the owner, and get hold of them.
Agreed. Although it's a big ask. I have something that's slowly going in that direction but it'll be a while before it gets the interest I'm looking for.
Nutanix is a strong contender for this use case. We see a lot of workloads moving from the cloud to on-prem on the Nutanix stack.
P.S. I work on the engineering side at Nutanix.
I built this a few times. The most recent implementation is called CloudSeed, though I am trying to fundraise for an open source version (with a Red Hat-like business model of support).
CloudSeed has been a key part of several >$1bn contracts for KnightPoint/Perspecta/Peraton, most recently DHS DCCO ($2.7bn).
Nutanix is getting pretty good at this. My enterprise-style company switched from AWS to Nutanix this year; we bought three all-NVMe/SSD nodes with 1TB of memory in each.
We were heavy EC2/RDS users; now we run their NX whitebox nodes with their KVM-based hypervisor and their database management platform, "Era".
Everything is one click for deploying and upgrading things, down to stuff like upgrading the SSD firmware, NIC firmware, etc. It hot-migrates the VMs around automatically, one node at a time. They also have a Kubernetes platform called Karbon that seems to be pretty good.
But why? Because they can run an on-prem production "AWS"? Or because it's easier for testing? If the former, well, sure, but why stop there? Why not an easy on-prem Heroku clone? If the latter, I don't think that goes far enough. Simply emulating AWS (as LocalStack does) is barely usable for testing. You really would want a GUI that lets you see everything in the LocalStack config and data stores. Running a fake DynamoDB is okay, but it's painful if you need to use DynamoDB queries to see the data.
So are you saying that the tooling needs to deploy to their datacenters or AWS, or just deploying to their own datacenters needs to be as simple as using AWS? If it’s both, use Kubernetes. If it’s deploying to their own datacenters, then maybe a new product could disrupt that space. You would still need to have a team competent in datacenter operations and hardware procurement though.
Agreed, that's really stunning - I've seen the analogy of cloud as the "modern mainframe" come up more and more recently.
I guess the (business) reason is that the centralized computing model enables enormous economies of scale. (Makes perfect sense from an operations point of view.)
What it seems to miss in the current incarnation/iteration, though, is the "power of distribution" - leveraging the fact that local compute capabilities (especially during development/integration) can help reduce the cognitive load, increase the efficiency, and thereby reduce overall costs.
There seems to be a tendency to focus mostly on the operational aspects, rather than the overall end-to-end developer journey. What remains to be seen is whether future iterations of "the cloud" will do a better job of embracing the power of distributed and/or hybrid.
The reason why AWS got popular everywhere I worked was that you didn't need to get buying new hardware past ops. You just spun up your own, then when it was supporting half the business you pointed at it and said "Gee wouldn't it be nice if we had a box in our own data center to run it?" then there'd be a fire lit under the ass of ops and you'd get your computers in a week instead of next financial year.
Now it's exactly the same thing in AWS. My next guess is that you're going to be running production code on fleets of devs' computers, because you don't have to get extra budget for AWS next financial year to afford spinning up another instance.
LocalStack Pro looks quite nice and comprehensive, but the CI pricing seems to really discourage continuous integration best practices. At 10 cents per run you're encouraging developers to run their CI pipelines as little as possible, while ideally you'd run CI on each commit and you'd commit often, probably tens of times a day. (Even a modest team of 10 developers committing 10 times a day is 100 runs - $10/day, or roughly $200 per working month.) In that case LocalStack Pro for CI would get costly very quickly.
Why not have CI pricing per project/repo, regardless of how often it runs?
Thanks for the feedback, great points. We're going to put a strong focus on our CI offering (both from a feature and pricing point of view) over the next couple of weeks/months.
For example, we're working on a new feature that will make it easy to snapshot the state of the LocalStack instance during and after each CI run, making the state "browsable" in the UI, and providing advanced insights and analytics into how the application stack and tests evolved over time.
We're also looking into alternative options for pricing - your point is well taken and will be considered for our roadmap. We definitely want to encourage testing frequently and on every commit, but I'd like to emphasize that our current CI offering is just a starting point, with lots more exciting features to come soon. Thanks!
Great question - we're working very closely with moto; in fact our team actively contributes to the moto core on a regular basis.
The main differentiator is that moto is a Python library that mocks out / implements the AWS APIs, whereas we see LocalStack as a platform that aims to support the end-to-end development lifecycle for your cloud apps.
A lot of the work we do in LocalStack is to create a seamless dev experience in your local environment - providing local DNS integrations, persistence features, Lambda code mounting, CI integrations, transparent injection of "localhost" endpoints into AWS SDKs (e.g., for your Lambda functions), and much more. Also, today we already provide some fairly sophisticated integrations - e.g., our Athena API which allows you to run your SQL-like big data queries natively over the local S3 filesystem. This is out of scope for moto, but a big focus area for us.
Our aspiration is that you can take any (AWS) cloud app and deploy it natively to LocalStack - which is already the case in many scenarios, and improving on a daily basis.
Btw, we're currently working on revamping/polishing our docs with lots of more content and details - we'll push out an update to https://localstack.cloud/docs in the next couple of days!
I wish AWS - and to be honest, most cloud providers - put more effort into the development experience.
They have local DynamoDB, but I believe that's the only official appliance they provide.
I think it's especially necessary for the proprietary products - DynamoDB being a great example.
I'll take a moment to call out Snowflake for this as well - it's really frustrating not to be able to quickly (within milliseconds) and easily (I shouldn't need to get credentials from elsewhere in my org) set up and tear down databases while under test.
There is also Step Functions Local (https://docs.aws.amazon.com/step-functions/latest/dg/sfn-loc..., it is also integrated with LocalStack). I built it as a side project when I was at AWS, motivated by the same frustrations. The best way to get these prioritized is to request it through your account manager/SA/etc. AWS does listen closely to customer feature requests.
It frustrates me how they always go for the lowest common denominator - the lowest level shared by everyone, but hardly what professional users use: don't make me download a jar file, please give me Maven coordinates. Don't tell me how to click through the console to get an EC2 instance, please give me an example Terraform snippet. Don't just show me how the AWS CLI returns a blob of JSON when I assume a role, please tell me how to use aws-mfa properly. I understand and respect their hands-off approach to these more opinionated tools. But as a result, I tend to skip over the Amazon properties when finding what I need in the search results!
PS: Localstack is awesome! Also in combination with Testcontainers.
Or put the API definitions into some database (SQLite, datascript + Node.js, or babashka or nbb); then you can decide which HTTP library or mocked-HTTP solution you'd like to use, e.g. https://github.com/oliyh/martian
Most of the code is just a thin veneer over that information...
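You can see this for yourself from Python: botocore ships those JSON service definitions and exposes a loader for them. A quick peek at the model (real botocore APIs):

```python
from botocore.loaders import Loader

# botocore bundles the JSON API definitions that the AWS SDKs (and
# mocking tools built on them) are generated from.
loader = Loader()
s3_model = loader.load_service_model("s3", "service-2")
print(len(s3_model["operations"]))         # how many S3 operations exist
print(sorted(s3_model["operations"])[:3])  # a few operation names
```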
I think cloud providers have little incentive to help you develop and test everything locally, since they benefit more from onboarding you quickly onto their infrastructure, which they can monetize much better. It seems to me that a completely local cloud stack would also be threatening to cloud providers in that you can then start running non-critical applications on-prem at no cost.
I don't think that's a real threat - I don't think development environments are likely a huge income source for them, either.
I think they'd be better served by letting me develop more quickly, so I can help my company grow, and presumably need (and have the budget for!) more resources in our production environment.
If I'm mistaken, I think it's reasonable to have some sort of built-in time limit on how long the appliance will run, or some other way of preventing it being used for non-development purposes.
In general I do agree, but OTOH I'm not sure local appliances are the solution. I believe AWS's vision is that development happens against real cloud resources, which seems neat on paper, but for that to really work, provisioning resources needs to be super smooth so you can easily create developer sandboxes. CloudFormation definitely is not it.
In addition to local dynamo, they also have local AWS Lambda execution via AWS SAM (Serverless Application Model).
You invoke it via the command line
`sam local invoke [OPTIONS] [FUNCTION_LOGICAL_ID]`
Behind the scenes it runs an attached Docker container and feeds you the output. I found that it actually works very well and mimics the cloud invocation environment perfectly.
The problem is that it stubs out any other services you run. So if your Lambda adds something to SQS or SNS or SES or S3 or anything else, it simply stubs those out, and advanced functions do get difficult to test locally. But this LocalStack emulator may solve those problems (though I haven't used it yet).
I've used LocalStack to mock things like DynamoDB, S3 and Step Functions for dev/integration testing of Lambdas built with SAM, with great results. I actually didn't know that SAM stubbed those out by default - I haven't used it for a while - but I found that if you point your AWS library at localstack:4566 then it'll run it all against LocalStack correctly.
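In Python, for example, that's one extra argument when constructing the client (4566 is LocalStack's default edge port, and it accepts dummy credentials):

```python
import boto3

# Route SDK calls to LocalStack instead of real AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",       # dummy credentials are fine locally
    aws_secret_access_key="test",
)
s3.create_bucket(Bucket="my-local-bucket")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```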
It's impressive what LocalStack has been able to do to keep up with AWS.
However for my own projects, we consider local development like this an anti-pattern. IaC tools like Terraform/CDK/SST make it easy to spin up environments for each developer in the cloud. This may sound crazy at first but would recommend giving it a shot.
If you're fully taking advantage of managed AWS services the best development environment is running everything in AWS. There's definitely some friction but you reduce the "was working on my machine but not in production" situations.
> However for my own projects, we consider local development like this an anti-pattern. IaC tools like Terraform/CDK/SST make it easy to spin up environments for each developer in the cloud.
But what do you test your IaC against? If you consider using something like Terratest, then combining it with LocalStack means you can run Terratest nearly instantly and test your IaC.
Plus, developers being able to run "AWS" locally means they, too, benefit from the instant feedback loop. That alone is a good enough reason to use LocalStack, but there are other massive positives: no credentials to a real AWS account to leak, no developers spinning up some new AI tool 'cos it looks cool, and more.
The credentials are the API keys and secrets, not the login and passwords.
One improper setting in IAM and an accidental pastebin or GitHub means half the automated hacks have access to the same thing the IAM user does.
I did this once by accident, hitting Ctrl-V instead of Ctrl-C and overwriting my censored version before pasting into GitHub so other people could save the 4 hours of tedious scripting I had to do. GitHub sends an email within minutes, but I was already miles away from a computer when I got it...
It's not really that impressive, all things considered. LocalStack is effectively Python's moto library (an AWS mocking tool) running as a server.
In turn, all of the AWS APIs are defined by a kind of JSON schema document, so there’s not really any reverse engineering going on.
I consider LocalStack somewhat toxic in most projects because it shows the developer didn't understand how to contract test.
Furthermore, doing anything more than contract testing against AWS is a fool's errand. The problem you face with LocalStack is that it gives you an inaccurate model of how AWS actually works. As a general rule, AWS APIs are asynchronous, in that their side effects are not atomically bound to the request you're making (save for certain S3 actions), while LocalStack is invariably synchronous.
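A concrete example of that asynchrony, using standard boto3 APIs: against real AWS, the put_item below fails without the waiter, because CreateTable returns while the table is still CREATING - whereas a synchronous emulator happily accepts it either way, so the test passes while production breaks:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Against real AWS, CreateTable returns while the table is still in
# status CREATING; its side effect is not bound to the request.
dynamodb.create_table(
    TableName="events",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# The waiter polls until the table actually exists - exactly the step
# a synchronous emulator lets you forget to write.
dynamodb.get_waiter("table_exists").wait(TableName="events")
dynamodb.put_item(TableName="events", Item={"pk": {"S": "1"}})
```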
I'd like to better understand how you define local development specifically as an "anti-pattern". What are your criteria for using that term, and what other examples might also fall into a similar category?
I'm not sure I agree with the linked article, are there any other (accessible, and existent in the present) resources you used to arrive at this decision?
For one, the article assumes the tools you use to replicate your production environment locally aren't accurate; with LocalStack, they are. It's seamless, truly. I recommend you give it a try instead of spending the extra cash to have your developers deploy into the cloud constantly.
Another thing to note is that this seems only true for a specific architecture, if you're writing serverless functions. Not everyone is or should be doing that, so I'm not sure "you should always test in the cloud" applies as a "context free" rule.
Finally, it sounds like the author had a lot of trouble getting his local environment to work. Maybe folks who have a bit more experience with the technologies involved in their services don't have that problem, and it's a little arrogant to assume that, just because you had a problem, it's a problem for everyone else, too.
Overall, I just don't think commenting "This service is nice, but using it is an anti-pattern" was as productive a comment as you meant it to be.
I'm happy that this exists, but it really should be a product supported and maintained by AWS rather than the user community, to ensure consistency with the real AWS services.
LocalStack is a super useful tool for developers during local development and e2e testing. We have been using it for one year, and it has really helped us debug and find issues without deploying to a real AWS environment.
Interesting! Staying lightweight and zero-dependency is certainly a good argument. Out of curiosity - which programming language(s) are you / your team developing in? Do you have a notion of how much (%) of your time, approximately, goes into maintaining the mocks - does it scale / pay off over time?
We use TypeScript and Jest for the mocks. Probably about 10%; they're very simple mocks - all they do is return success, and in a few unit tests we make them return failure and then test how our app responds.
I can't really say if it pays off, but it has caught some issues. I'd like to expand them more and implement more logic in them to check arguments etc., but haven't been able to yet.
We also have a pipeline that does e2e tests and canary releases to hopefully limit the blast radius. If we encounter >10 errors in a 1-minute period we roll back and the team is alerted.
I do. But I also primarily use the JVM, which means either high-quality mocks are already available, or it's easy to mock things out using Mockito or to create a fake implementation if I want.
Hey - LocalStack founder here. Thanks for the feedback.
We're currently investing some time and resources on refactoring the code base to improve the performance. We have some improvements in the pipeline which we should be able to roll out within the next 1-2 weeks.
Performance (as well as general UX) is definitely high up on our priority list, and we'll continue to invest heavily in this area. Stay tuned!
Happy for any additional suggestions and questions you may have.
LocalStack is great, but I wish it wasn't needed. Engineers have become too comfortable stringing cloud services together with minimal code in a way that makes them impossible to run locally. This is also a hard problem to fix after the fact: if you make a mock of the cloud service, how do you know you captured all of the functionality? How do you know your mock won't pass all of the tests while the real cloud service behaves differently and breaks production? In a greenfield project you can anticipate this and explicitly test all behavior locally.
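One partial answer (a sketch, not a full fix): botocore's built-in Stubber validates both the stubbed response and the expected request parameters against the real service model, so a hand-written mock that drifts from the actual API shape fails the test instead of silently passing:

```python
import boto3
from botocore.stub import Stubber

s3 = boto3.client("s3", region_name="us-east-1")
stubber = Stubber(s3)

# add_response checks the stubbed response against botocore's service
# model, and expected_params are verified on each call.
stubber.add_response(
    "head_object",
    {"ContentLength": 11, "ETag": '"abc123"'},
    expected_params={"Bucket": "my-bucket", "Key": "hello.txt"},
)
with stubber:
    resp = s3.head_object(Bucket="my-bucket", Key="hello.txt")
    assert resp["ContentLength"] == 11
```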
LocalStack is our favorite for developing our application for the AWS environment. It saves us from redeploys and AWS-environment-specific issues by simulating the services locally.
One of the main differentiators we see is that frameworks like SAM or Serverless are pretty opinionated about how your app should be defined/structured/implemented. These frameworks work absolutely great, as long as you 100% buy in to their way of doing things. This creates a certain lock-in effect, and may make it harder to switch to a different framework/approach later on.
LocalStack takes a different approach by working on the API emulation layer - hence making it easier to switch and integrate with various different application development frameworks.
Btw - there's also a Serverless plugin, as well as a "samlocal" command line - in case you'd like to give it a try:
Can I use it in conjunction with something like CDK? I'm trying to learn the AWS CDK, but I would feel much better if I could run all these orchestrations in a local Docker container instead of a real AWS account.
…And just trust that I destroy them before they incur cost.
Absolutely - CDK under the covers generates CloudFormation templates, which deploy natively on LocalStack. Certainly a great use case for local dev&test - especially if you want quick feedback cycles, ability to destroy the stack immediately, etc.
Yes, great point - you can define and deploy your resources to LocalStack using CloudFormation (either as YAML or JSON files). This makes it very easy to maintain and exchange stacks in a platform-neutral way.
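As a sketch of what that looks like from plain boto3 (assuming LocalStack's default edge port 4566), the same CloudFormation calls you'd make against AWS work against the emulator:

```python
import boto3

cfn = boto3.client(
    "cloudformation",
    endpoint_url="http://localhost:4566",  # LocalStack edge port
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

TEMPLATE = """
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cfn.create_stack(StackName="demo", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="demo")
print(cfn.describe_stacks(StackName="demo")["Stacks"][0]["StackStatus"])
```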
We'll soon publish some more details on the TF integration in the LocalStack docs...
Side note - similar to the *.tfstate file maintained by Terraform (storing metadata about resource deployment states), we're increasingly making use of persistence files (called "local cloud pods") that allow you to easily store the state of your LocalStack instance, share it with team members, keep track of changes over time, etc. Really nifty feature...
Hi, we do have Azure on our roadmap - recently launched a beta program with a few selected customers. We'd love to learn more about your use case - if you're interested, please shoot us an email to info@localstack.cloud . Thanks!