As someone who first used AWS in mid-2009 (just a few weeks before VPCs were announced!), but hasn't used an EC2-Classic-enabled AWS account since around 2012, it's hard to remember just how far the service has come.
There were only a few instance types, and they were all slow and small (by today's standards). Everyone's EC2 instances were mostly publicly pingable/ssh-able from the internet. EBS was horribly, horribly slow (our DBAs set up a super convoluted RAID 0+1 configuration for our MySQL databases, and even then we needed massive sharding to keep up with growth). EC2 instances were, in general, very unreliable (I recall something like 1 in 500 instances failing _per week_), and especially so in the leadup to Christmas (where the rumor was AWS kept the best instances for themselves).
This is pretty much the first time I've heard of AWS really deprecating something, so I have a feeling it will _hugely_ simplify things on their end. From reading the post I also get the idea that it won't be _that_ hard of an operation from their side. I bet few people (by AWS standards) are still using EC2 classic heavily.
I worked for AWS, specifically on a backend component specific to EC2 Classic, until late last year, and this will definitely simplify things on Amazon's end. There were, as of last year, still some large customers using Classic. It's not just a drop in the bucket compared to VPC, though definitely quite a bit smaller.
It depends on the service, but mainly a whole bunch of custom x86 machines. If you search you'll find nuggets about all 4 hyperscalers and what they're doing.
I know Amazon went the route of building their own smart NICs with both network and I/O capabilities early on (the same thing Nvidia has been doing and calling revolutionary since last year).
They rack more servers per day than I could afford in a lifetime.
> This is pretty much the first time I've heard of AWS really deprecating something
Amazon has deprecated several things; they tend not to actually retire them (they just reduce their public profile so people aren't tempted to build new instances, while supporting the existing users indefinitely).
Do they publicly deprecate them? OpsWorks seems to have been silently deprecated for years, but I sure wish they would have told us that a while ago. Our sales contact could never give us a straight answer on that.
Interesting. Looks like you're right. I guess they changed their mind on depreciation. Six years ago at Netflix we had to move all of our SimpleDB domains to DynamoDB because they told us they were shutting down SDB. They also removed it from the console. But apparently you can still use it from the command line.
S3 torrent support isn't enabled in new regions, but it still works in every region that it was launched in. In other words, AWS goes out of its way to not break its customers' applications.
> Torrent support in S3 is being killed off with little notice.
Did anyone ever seriously use it? It never made much sense to me -- if you're trying to optimize for performance, S3 is already pretty good and a CDN can make that even better; if you're trying to optimize for cost, you wouldn't be using S3 in the first place.
At Airbnb, we considered using it for downloading deployment artifacts like app tarballs and search indexes within our cluster, back when we did deploys to stateful instances. The complexity was never worth it, though, and in the end we moved to Kubernetes anyways.
If you have a paid service that requires guaranteed delivery but want to offset a portion of the cost, it made sense. The problem is that BitTorrent is not trivial to integrate into non-torrent applications.
EC2 Classic was essentially deprecated back in 2013, whether it was said as such at the time or not. 8+ years is an incredibly long sunset period, especially compared to AWS’s competitors.
yeah, it's not technically a sunset period if they haven't announced the service will be ending. but they've been sending some pretty strong clues - afaik you haven't been able to create new EC2 classic instances for many years now.
> All AWS accounts created after December 4, 2013 are already VPC-only, unless EC2-Classic was enabled as a result of a support request.
There were announcements made at the time, but I don't remember if they explicitly called it a deprecated product at the time (hence why I hedged my wording in GP comment).
You’re generally right but it’s definitely not a hard rule.
Amazon/AWS has deprecated and then put to the grave multiple payment services. I made that same comment when SimpleDB was deprecated and someone said the same thing. More recently, torrent support in S3 is being killed off with little notice.
The thing is that historically, even though Amazon would "depreciate" a service, they kept it running and supported.
"Amazon SimpleDB (N. Virginia) Service is operating normally" is their current status page, but their marketing materials don't show any SDB stuff. So if you had some old workload ticking over, you could usually just leave it.
I actually used SimpleDB in a light use case wildly past the point at which it disappeared from all marketing. I kept on expecting an email like the EC2-Classic one, but it never came, even though SimpleDB didn't show up anywhere really.
My favorite similar mixup is ordinance vs ordnance. Though I suppose they're not mutually exclusive. I'm sure that printed copies of some bylaws are hefty enough to cause serious damage if you lob them at the enemy.
This was hilarious when I saw local businesses put up signs requiring masks due to local "ordnance". I had to wonder what kind of masks they expected us to have
Interestingly, I think this was correct or at least colloquially correct a few centuries ago. I've seen that spelling used in text from the 1700s and 1800s by educated people. Probably a holdover from before spelling got really standardized.
not only can deprecate cause depreciation, it can be a synonym for depreciate (in the financial meaning). but depreciate can't be a synonym for deprecate (in the end-of-life meaning).
just one of those fun things that gets thrown in to make sure nobody ever fully understands the english language.
As a Swede reading OT comments on HN I can assure you that deprecate and depreciate are not words I would mix up unless autocorrect gets me. They look similar after all.
Ordnance and ordinance I don't even know the difference between though.
Sounds like a good reason to integrate with Authorize.net or another vendor that provides platform agnostic APIs that let you pick your own payment vendor (at much lower cost than Amazon's payment offerings).
It was never our primary payment method, but we had higher conversion rates with it. We had Authorize.net at the time, then added PayPal PayFlow and Stripe. I actually authored a payment abstraction library that would try providers in order of priority and fall back gracefully, avoiding double charges and even abstracting over payment status, refunds, etc.
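To make the idea concrete, a minimal sketch of that fallback pattern (the provider objects and their charge() signature are hypothetical placeholders, not the actual library we used):

    class PaymentError(Exception):
        pass

    def charge_with_fallback(providers, amount_cents, token):
        """Try providers in priority order; return the first successful charge.

        Raises only if every provider fails, so a single outage never blocks
        checkout and the customer is charged at most once.
        """
        errors = []
        for provider in providers:
            try:
                return provider.charge(amount_cents, token)  # first success wins
            except PaymentError as exc:
                errors.append((provider.name, exc))          # record and fall through
        raise PaymentError(f"all providers failed: {errors}")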
SimpleDB is still supported though. You can still make new SimpleDB domains.
That's part of what people love about AWS compared to Google Cloud. Google is quick to kill off services it finds no longer convenient for it to run. AWS has historically gone out of its way to keep old services running even as it releases new, better alternatives.
I work in the AWS Mobile org. The predecessor to our premier service, Amplify, was called Mobile Hub (https://aws.amazon.com/blogs/aws/aws-mobile-hub-build-test-a...). While I haven't personally worked on either service, I've heard and seen enough to know that they've taken customer obsession over keeping the lights on for any existing workloads farther than most reasonable businesses would. There is no sentiment of "force them to move to Amplify" in any conversations or design discussions, but rather "accommodate their existing workloads" as long as they exist.
Update: it looks like we're executing a migration plan this year by building seemingly-full parity between Mobile Hub and Amplify (https://docs.aws.amazon.com/aws-mobile/latest/developerguide...) such that "if you don't migrate your project to Amplify, your app will continue to function, and all your related cloud resources will continue to be available". This seems like a great solution for existing Mobile Hub customers.
Google has been doing its best to make my life miserable. It recently killed Google Voice in the free edition, so my younger child can no longer get a phone number. With zero warning.
I've been through this each time I've (tried) to use Google for anything critical, so I've stopped.
I mentioned a family example since B2B would be confidential. But I can count at least a half-dozen instances of Google discontinuing something an employer has relied on, leading to a world of pain.
My general cloud policy is AWS, Azure, or anything-but-Google. I have a similar anti-Oracle policy too. Once you've been burned a few times, you find that some companies are too expensive to do business with.
The only concrete example of a sunset in the rant is support for Python 2.7 (released in 2010) in the gcloud CLI tool. Seems like a pretty bad place to start.
> I know I haven’t gone into a lot of specific details about GCP’s deprecations. I can tell you that virtually everything I’ve used, from networking (legacy to VPC) to storage (Cloud SQL v1 to v2) to Firebase (now Firestore with a totally different API) to App Engine (don’t even get me started) to Cloud Endpoints to… I dunno, everything, has forced me to rewrite it all after at most 2–3 years, and they never automate it for you, and often there is no documented migration path at all. It’s just crickets.
> is this one of the first actual depreciation and get off service things AWS has done?
The transition from Amazon Linux to Amazon Linux 2 on Elastic Beanstalk was pretty rough. The migration took a full week, and there was really only a six month window where it could be done.
EB is kind of a mixed bag. Is there much production on EB? It's great for quick-and-dirty stuff where you don't want to get into specifics, but I couldn't imagine explaining my architecture with a big "EB does what EB wants to do here" bubble.
Notion (my employer) was 100% Elastic Beanstalk until about a month ago, now we're 100% ECS. We migrated once our 1200 box cluster started stalling deploys randomly while saying everything was healthy in the API. At that many instances, some pages in the dashboard would hang the browser.
One bad day, it took like 18 hours from deploy attempt started to AWS resolving the situation by fiddling knobs on their side.
Did the migration go smoothly? We are currently running all our production services through EB as well, but we have only had smaller issues (e.g. deployments stalling, health checks not working correctly, etc.).
Would you recommend investing time in learning how ECS works, or did your company hire people who managed that?
ECS and some ECS Fargate work great - I really like that combo. Seems pretty low overhead, and ECS Anywhere I've had some luck with, just playing around (you can do a local box on a 1Gig link with tons of RAM / Storage etc).
My one complaint is around fallback to on-premise providers / capacity provider support, etc. - it doesn't seem fully fleshed out across ECS/Fargate/ECS Anywhere, but I may not have read the docs properly yet.
EB is neat when it works, but it's only as good as its weakest link. I constantly get environments stuck in unavailable statuses etc. I don't recommend it for anyone except maybe that R&D stuff. Much better to go ECS/Fargate/EKS IMHO.
Same for EKS versions. I think versions of software in general they don't really necessarily keep around long, though these are the only two I can think of.
Combining "enterprise" with "vague promise of longevity" is quite the oxymoron. Google should start celling cell service if they want to continue upleveling their doublespeak abilities.
Google Fi already sells cell service, it's essentially T-Mobile service but without most domestic roaming and with higher prices than Mint, T-Mobile Connect and their other MVNOs.
From what I hear MMS stopped working a few months ago for iPhones on Google Fi.
> In order to fully migrate from EC2-Classic to VPC...
Whenever I read anything about networking on AWS, I feel glad I switched to GCP. On Google Cloud, you can put a project into production without having to fumble with networking at all (of course, the options are there if you need them).
I feel more productive by only having to split my cloud resources by projects - which is a high-level concept, and a good abstraction - instead of security groups - which is a low-level implementation detail.
Indeed. In a couple places we ended up replicating the GCP project feature in AWS by creating multiple different AWS accounts (not IAM accounts, root accounts). It provided isolation between completely different apps. It was quite heavyweight but the isolation benefit was judged worthwhile.
At this point AWS desperately needs higher level abstractions with sane/safe defaults. They seem to be heading in that direction, for example with Amplify, but they still have a long way to go.
I suspect AWS has now gotten so huge that there is no one PM to take a holistic look and build something to ease the pain of developers. I think a few startups are trying to fill this void.
In AWS, accounts within an AWS Organization are roughly analogous to projects within GCP. Most organisations running AWS in production will have tens or hundreds of accounts managed within their Organization. Segregating by security groups hasn't been the recommended approach for years, for this reason amongst others.
The accounts and Organization model is a slightly clunkier abstraction than projects, but on the other hand the security boundaries between accounts are harder than those between projects, which has its benefits.
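For illustration, creating a member account inside an Organization looks roughly like this with boto3 (the email and account name are placeholders, and create_account is asynchronous, so you poll for completion):

    import boto3

    org = boto3.client("organizations")

    resp = org.create_account(
        Email="team-foo-prod@example.com",   # must be unique per account
        AccountName="team-foo-prod",
    )
    # The call only kicks off account creation; check the request status.
    status = org.describe_create_account_status(
        CreateAccountRequestId=resp["CreateAccountStatus"]["Id"]
    )
    print(status["CreateAccountStatus"]["State"])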
> In order to fully migrate from EC2-Classic to VPC, you need to find, examine, and migrate all of the following resources:
> <LIST OF AWS BILLABLE RESOURCES>
I'm not sure if this was unintentional or done as a tongue-in-cheek joke, but "you yourself must FIND what you're using in our services" indicates to me that they're fully aware of how hard it is to easily see what exactly you're paying for when using AWS.
Whenever I see this kind of deprecation at a company not normally known for deprecating things, I'd tend to guess it's being removed to make way for something else. I look forward to new network functionality being unlocked or optimized or simplified by not having to worry about how it interacts with non-VPCs.
They don't seem to be going away. The linked document says:
> Option 4: Migrate manually to a Classic Load Balancer in a VPC
> The following information provides general instructions for manually creating a new Classic Load Balancer in a VPC based on a Classic Load Balancer in EC2-Classic. You can migrate using the AWS Management Console, the AWS CLI, or an AWS SDK. For more information, see Tutorial: Create a Classic Load Balancer in the User Guide for Classic Load Balancers.
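For the SDK route, a rough boto3 sketch of what manually recreating a Classic Load Balancer inside a VPC might look like (the name, subnet id, and security group id are placeholders; a real migration would also copy listeners, health checks, and instance attachments from the old EC2-Classic load balancer):

    import boto3

    elb = boto3.client("elb")  # the classic ELB API

    elb.create_load_balancer(
        LoadBalancerName="my-migrated-clb",
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        Subnets=["subnet-0123456789abcdef0"],    # VPC subnets instead of AZs
        SecurityGroups=["sg-0123456789abcdef0"], # VPC security group
    )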
A decade without interruption is better than most enterprise IT departments manage despite considerably higher costs. I still have a couple of instances which have been upgraded a number of times over the years but aren't quite ready to turn off yet.
Six months ago I left the following comment about this.
> I miss EC2 Classic :/. It always feels like the entire world of VPCs must have come from the armies of network engineers who felt like if the world didn't support all of the complexity they had designed to fix a problem EC2 no longer had--the tyranny of cables and hubs and devices acting as routers--that maybe they would be out of a job or something, and so rather than design hierarchical security groups Amazon just brought back in every feature of network administration I had been happily prepared to never have to think about ever again :(.
Honestly and likely overly-frankly, I have absolutely nothing positive to say about VPC or any of the engineers who worked on or with it: it seems like it is uninspired and creates complexity out of whole cloth with absolutely no benefits I have ever heard of to redeem its existence. For a long time, instances could not have multiple security groups, which limited the mechanism... but that was fixed long ago; security groups should simply have been made hierarchical instead of forcing everyone to think about network layout and address space limitations as part of manually laid-out networking in what should be a purely cloud resource capable of infinite extension... all of that networking equipment and subnet numbering exists in the real world to solve problems virtual hardware does not and should not have. EC2 Classic was "fun" to work with and yet had no limits... VPC is "work" and offers nothing in return.
Creating completely private networks in a public cloud, and also being able to link these networks across the WAN to other private networks in different regions seems the opposite of nothing to offer IMO.
This is a common error of conflating the name of something with a property that that name typically has.
The "RFC 1918 private range" is "private".
A publicly routeable range that is firewalled off is "private".
There is no practical difference in the level of privacy. There's a difference in naming only.
And of course, there is one other difference: The RFC1918 range is worse, because it can never be routed. You have no choice in the matter, it's not an option.
So you have two kinds of "private networking":
- Private by choice.
- Private with no choice.
Which do you prefer? To have choices, or to have those choices taken away from you?
The default VPC setup in each region just has a two-public-subnet configuration. You boot up an EC2 instance and it has a public IP address reachable from the internet (if you open up the security group) and a private IP address.
If you want a totally private subnet to put your back end app servers on so they are not routable from the world as an extra layer of safety, then you can do that too. Or, if you'd like to boot up EC2 instances with only non-routable IP addresses for security, yet be able to have the instances reach out to the world, you can create private subnets and then route the traffic out of NAT Gateways.
VPCs offer the best of both worlds, rather easily too, once you wrap your head around how all of the VPC objects and software-defined networking work.
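For anyone who hasn't set this up, a minimal boto3 sketch of the private-subnet-behind-a-NAT-Gateway pattern described above (all resource ids are placeholders, and in practice you'd wait for the gateway to become available before relying on the route):

    import boto3

    ec2 = boto3.client("ec2")

    # The NAT Gateway itself lives in a public subnet and uses an Elastic IP.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-public-0123456789",
        AllocationId="eipalloc-0123456789",
    )["NatGateway"]

    # Point the private subnet's default route at the NAT Gateway so instances
    # with only private addresses can still reach out to the internet.
    ec2.create_route(
        RouteTableId="rtb-private-0123456789",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGatewayId"],
    )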
Which is AWS's way of printing money. Straight up, no lie.
Instead, with the advent of IPv6 and everything getting a publicly routable IP address anyway, you can no longer rely on a machine having a "private" IP address.
I recently stood up infrastructure where each machine in the VPC got a public IPv4 and IPv6, and used security groups to set up permissions for what systems can access what other systems.
This way I protect the instances, and don't pay the NAT Gateway fee because the public IP is a 1:1 NAT and doesn't cost anything.
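Roughly what that launch looks like with boto3, assuming placeholder AMI, subnet, and security group ids:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        NetworkInterfaces=[{
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",
            "AssociatePublicIpAddress": True,    # 1:1 NAT public IPv4, no NAT Gateway fee
            "Ipv6AddressCount": 1,               # auto-assign one IPv6 address
            "Groups": ["sg-0123456789abcdef0"],  # access controlled by the SG
        }],
    )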
You don't need nat gateways and I suspect most folks don't use them or are even aware they exist. The default setup gives you hosts with public ips directly reachable over the public internet, and no nat gateways or fees required.
It's not really any safer though. If I want to say that server A and B can connect to each other and connect out to the outside world, and the outside world can connect in to A but not to B, I should be able to just do that, without having to give each server multiple addresses. Addressing should be decoupled from access control.
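For what it's worth, security groups can express exactly that policy without touching addressing, by referencing other groups; a sketch with placeholder group ids and ports:

    import boto3

    ec2 = boto3.client("ec2")

    # A: open to the internet on port 443.
    ec2.authorize_security_group_ingress(
        GroupId="sg-aaaa",
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

    # B: only reachable from instances in A's group, regardless of which
    # public or private addresses any of them happen to have.
    ec2.authorize_security_group_ingress(
        GroupId="sg-bbbb",
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": "sg-aaaa"}],
        }],
    )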
Every time you write “subnet” you’re affirming saurik’s original post about the (needless) complexity of network hardware administration being brought to the cloud.
Just because the network is virtual and in a data center you don't own doesn't mean standard networking principles go out the window; you still have to set up the network as you see fit.
Maybe folks are just annoyed at the complexity and want something more plug and play, which is understandable. The default VPC is usually fine for most everyone out of the box.
A company where I used to work still had those as of early 2020. The reason was technical debt, system being treated as a "black box" since the people who built it left many years ago, etc.
In addition, that product didn't exactly make big bucks for the company so there was not much incentive to improve things.
Hi, we have some customers left running things on our EC2 Classic shard at Heroku. Mostly because migrating them cleanly depends on the customers themselves doing things.
How is that dependent on your customers – doesn't Heroku, like, fully abstract the EC2 backend so that the Heroku customer doesn't even have to know it's on EC2?
We just launched EC2 generational upgrades on Vantage, which autodetect opportunities for you to upgrade from older generation EC2 instances to get both lower costs and better performance.
You essentially get a summary of all your older generation EC2 instances that are candidates for upgrades and what the associated savings will be.
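The same idea can be roughed out against the EC2 API directly; a hedged boto3 sketch (the list of "older" type prefixes below is illustrative, not exhaustive, and not our actual detection logic):

    import boto3
    from collections import Counter

    OLDER_PREFIXES = ("m1.", "m3.", "c1.", "c3.", "r3.", "t1.", "i2.")

    ec2 = boto3.client("ec2")
    counts = Counter()
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance["InstanceType"].startswith(OLDER_PREFIXES):
                    counts[instance["InstanceType"]] += 1

    for itype, n in counts.items():
        print(f"{n:4d} x {itype}  (candidate for a newer generation)")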
The fact that we have NATting is why you still have IPv4 addresses at all. Just look at the other thread about IPv6 that's going on in HN right now. The fact that even a small percentage of the population doesn't have an IPv6 address means that no company can use it exclusively yet.
Well not having to dual-stack is still just nice. No more DHCPv4, and so on – e.g. you can have SLAAC as the one and only very simple way of auto-assigning addresses (in a client setting, obviously not what you'll use with servers that have their addresses listed in DNS :D)
It would be nice if NAT64 was embraced everywhere, I love typing IP addresses because I'm lazy, but we need IPv6 right now.
There are still too many rough edges going V6 only though, like if I set my own DNS servers, will they resolve A records to NAT64 AAAA records? And how will the regular Windows sysadmin deal with registering DNS records?
DJB's (sketch of a) solution seems more like 6to4, where every IPv4 address automatically gets a /48 IPv6 prefix. It was deprecated due to unpredictable reliability.
It doesn't make sense to have NAT64 on every router, because NAT64 is stateful and needs to be properly engineered into a network. There are also alternatives like DS-Lite and MAP, with different design tradeoffs.
Everyone who wants to is already doing v4 NAT along with IPv6 in a dual-stack setup. In this NAT64 alternate reality the discussed AWS configuration would be "v6-only vpc with NAT64 disabled".
(And mandating NAT in routers would be a pretty radical departure from the current internet architecture).
Far less radical than having two different versions of IP on the Internet at the same time.
When there was only IPv4 there was no reason for backwards compatibility. Caring about backwards compatibility doesn't become radical simply because it becomes necessary.
Except you don't need NAT. You don't even have to get external ipv4. Use ipv6 for external ip and internet access.
Think of it as Classic with a customizable internal network.
VPC is a superset of classic.
At $0.045/hour, NAT gateways are expensive. This sets up perverse incentives, as it's cheaper to keep wasting public IPv4 addresses. If anyone from AWS is watching, I suggest that you make the base fixed price of NAT gateways way cheaper, and maybe add a small extra charge for EC2 instances and Fargate tasks with public IPs.
You need one of those per AZ, and multiples of that if you split up projects across VPCs or AWS accounts.
AWS accounts and VPCs are free, so NAT gateways can form a significant part of the per-account/VPC base cost, which can be a significant part of your total cost for a small project/environment.
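A rough back-of-the-envelope, assuming three AZs and ~730 hours in a month, and ignoring the per-GB data processing charge:

    3 AZs x $0.045/hr x 730 hr ≈ $98.55/month

of fixed cost per VPC before a single byte is processed, which multiplies again across accounts and environments and can easily dwarf the compute bill for a small project.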
The only problem with NAT gateways is that they are single-AZ. So if you're setting up multi-AZ in a VPC, you need one per AZ and you need the routing to be AZ specific.
If the NAT gateway was a service, then it could be multi-AZ transparent to the VPC.
If VPC isn't important to you, wouldn't you have made one catch-all VPC for your account a long time ago with good-enough general settings? It shouldn't come up that often unless you are in a position to benefit from the feature.
Why would you think its 1990s NAT? Launch an instance with a public IP and there's no central-point-of-failure. The "Internet Gateway" isn't an actual physical device.
Sure. It's not the NAT gw. But it is NAT. Op was complaining about having to use 1990s NAT, and I was responding to that. NAT gw isn't really 1990s NAT either, since it autoscales. I assume the sentiment was the complaint about having a "public" subnet and a "private" subnet, and using a NAT to route traffic for the "private" subnet. It's been a while since I used AWS, but I was at a large company and that's simply how IT Security demanded it. So, of course, AWS offers a solution for that market.
But if you use IGW, then your "public" subnet is still actually a private subnet: all networking to hosts inside the VPC occurs with private IPs. The public IPs are 1:1 NATed by the IGW. Your instances never see packets with their public IP. And you can launch instances in the "public" subnet without a NAT mapping if you want. For IPv6, you can have an egress only IGW.
So you can do "traditional" NAT if you want, or you can do "modern cloud" NAT using IGW. It is really your choice. I'm not saying one is better than the other. I'm just letting OP know that there is a non-1990s option. =)
Except with IPv4... it hasn't been solved because there just aren't enough addresses. And most organisations I have worked with on AWS are still predominantly IPv4.
Just think of all those people putting off ever migrating off EC2-Classic that now have to reboot databases that nobody knows or remembers how they were configured. The time has come. Good luck.
Very interesting that mainframes were really only "dead" for about 10-15 years or so: [0] from the early/mid 90's to EC2 in 2006. Yes, the cloud is off-site, but I'm using mainframe in the sense of centralized non-desktop computing resources.
[0] Not that they were ever actually dead. There's mainframes around from decades ago, and HPC continued to be required for a variety of applications, specifically research. But in terms of day-to-day computation needs, things shifted from mainframes to desktops, and centralized systems tended to be servers dedicated to a specific purpose instead of general compute needs.
Accounts haven't had the ability to spin up Classic unless you were grandfathered in, iirc, and I also believe people have been getting emails about this all for quite some time. Most likely, if you're using EC2, you're not using Classic.