
I'm a developer deploying code to JVMs running in a PaaS (Google App Engine). I don't know or care what the architecture is.


Indeed you might not, but the person who wrote your JVM does, and the person who wrote the system it runs on does, and the person who wrote GAE does...

That single instance you're running on already passed through half a dozen or so other systems developers (and more) before it got to you, so in your example you're in the minority.

It's because of the work they've done that you don't have to care about the architecture you're running on, not in spite of them.


Sure, but every time you move down the stack a level, you shrink the network effect by several orders of magnitude.

Linus' argument is that x86 stays on top because everyone is developing on x86 at home. It's much less convincing to argue that x86 will stay on top because the people writing JVMs use x86 at home. There just aren't that many of them, and if they get paid to write for ARM, they write for ARM.


Indeed, but by "at home" he means in the office too (he says as much), and I don't see offices doing this unless they have a real incentive to throw out the hardware they've invested in. Perhaps in the not-so-near future, when they inevitably have to replace it all due to failure, the ARM stuff will have a chance to take some share.


Likewise... most of my code runs on Lambda JVMs now.

If AWS switched to running JVMs on ARM and passed the cost savings on to me, I'd be in no position to argue.




Well, if ARM servers are cheaper for Amazon to run, they're going to want to incentivize customers to switch to ARM. Either by passing on some of the cost savings (even if it's only 5%), or by making the x86 option more expensive.

In the second case, Amazon is still "passing on the cost savings" in a sense; it's just that now they take a higher profit regardless.


As the spot price history charts show, AWS pricing continues to drop.

To break through any floor requires a disruptive change in architecture (CPU or otherwise).

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-sp...
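If you want to eyeball that trend yourself, here's a minimal sketch using boto3 (Python); the instance type, region, and time window are placeholder assumptions, not anything from the docs above:

    # Pull recent EC2 spot price history; instance type, region,
    # and window below are arbitrary examples.
    from datetime import datetime, timedelta

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_spot_price_history(
        InstanceTypes=["m5.large"],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.utcnow() - timedelta(days=90),
        EndTime=datetime.utcnow(),
        MaxResults=100,
    )
    for p in resp["SpotPriceHistory"]:
        print(p["Timestamp"], p["AvailabilityZone"], p["SpotPrice"])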


AWS is still 10x-100x more expensive than renting bare-metal unmetered servers and running everything yourself, so I don't think the actual hardware factors too much into their pricing.


> more expensive than...running everything yourself

Only if you value your time at zero.


This is so utterly untrue and directly related to what Linus was talking about.

Those bare-metal servers are basically 1:1 with what you're developing on.

I can install an instance of my application on them in minutes.

It's AWS that takes significantly more time to set up and learn.

Most people using AWS are spending big bucks on an 'automatically scaling' architecture (that never just works) that will cost them many thousands of dollars a month, for a workload that would fit comfortably on a $30/month dedicated server.

You can pay a dedicated system administrator to run your server (let's not kid ourselves, you probably just need one server) and still save money compared to AWS.

With AWS you're not only paying Amazon, you're probably also paying someone who will spend most of their time just making sure your application fits into that thing.

Take my use case for example: I run my entire site on about 8 dedicated servers plus some other stuff, which costs me ~600-700 euros a month.

Those just work month after month (I rarely have to do anything).

Just my >400TB of traffic would cost me 16,000 bucks / month on AWS. I could scale to the whole US population for that money if I spent it on dedicated servers instead and just ran them myself.
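For the curious, the back-of-envelope behind a number like that looks something like this; the tier boundaries and rates are my assumptions based on AWS's public data-transfer-out pricing around this time, so treat them as illustrative only:

    # Rough AWS egress cost for ~400 TB/month.
    # Tier sizes and $/GB rates are assumed from AWS's public
    # price list; check the current pricing page for real numbers.
    TIERS = [                    # (tier size in GB, price per GB)
        (10_000, 0.09),          # first 10 TB
        (40_000, 0.085),         # next 40 TB
        (100_000, 0.07),         # next 100 TB
        (float("inf"), 0.05),    # everything beyond 150 TB
    ]

    def egress_cost(gb):
        total = 0.0
        for size, rate in TIERS:
            used = min(gb, size)
            total += used * rate
            gb -= used
            if gb <= 0:
                break
        return total

    print(f"${egress_cost(400_000):,.0f}/month")  # ~$23,800 at these rates

At these list rates, 400 TB actually lands above the 16,000 figure (negotiated or blended rates would bring it down), and either way it dwarfs a flat 600-700 euros a month.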


The fixed capex of 8 servers isn't comparable to the opex of renting capacity sized for an 8-server peak.

If bandwidth is your highest cost, that's a completely separate problem that likely requires a CDN. Neither x86 nor ARM is going to reduce that cost.


My situation is similar to what chmod775 describes.

We serve 200+ TB/month, and no, we didn't just forget to use a CDN ◔_◔ Those cost money, too.

For us, cloud is about double the cost ($10k/month more) of dedicated boxes in a data center. I've run the same system in both types of environments for years at a time.

For us, cloud is basically burning money for a little extra safety net and easy access to additional services that don't offer enough advantage over the basics to be worth taking on the extra service dependency. It's also occasional failures behind black boxes that you can't even begin to diagnose without a standing support contract that costs hundreds of dollars or more a month. Super fun.

High bandwidth and steady compute needs are generally not a good fit for cloud services.


Most CDNs are more expensive at 400TB/month than just serving content yourself.

And no, Cloudflare's cheap plans are not an option; they'll kick you out.
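As a hypothetical comparison at that volume, with rates that are assumptions rather than any vendor's actual price list:

    # Hypothetical 400 TB/month comparison; both rates are assumptions.
    gb = 400 * 1_000                 # 400 TB in GB
    cdn_rate = 0.02                  # assumed volume CDN rate, $/GB
    cdn_cost = gb * cdn_rate         # $8,000/month at this rate
    dedicated_cost = 8 * 85          # ~8 unmetered boxes at ~$85/month each
    print(f"CDN: ${cdn_cost:,.0f}/mo vs dedicated: ${dedicated_cost:,.0f}/mo")

Even at an aggressive per-GB rate, the CDN bill comes out an order of magnitude above the unmetered boxes described upthread.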



