Hacker News

The solution to saving costs is to go with cheaper cloud providers, or to run k8s on VPSs or colocated physical hardware.

I'm not sure where the idea came from that this is hard to do.

To answer your questions in order:

1) People warned aggressively about lock-in from cloud providers' proprietary extensions, and you must have chosen not to listen, so I have little sympathy.

I'm not saying it's black and white, but I hope you got the velocity you needed to hit the market faster and made more than you spent, because this is the price you paid. Now, to get out, you'll have to invest a little time cleaning house; that's the reality of lock-in.

2) S3 is pretty easy, as there are "S3-compatible" FOSS projects: MinIO comes to mind, or Ceph with its RADOS Gateway (RGW), or Riak CS... there's also s3proxy, with a multitude of backends.
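To sketch what "S3-compatible" means in practice: the stock AWS CLI can be pointed at a self-hosted MinIO just by overriding the endpoint. The address and credentials below are assumptions for a hypothetical local test instance:

```shell
# Assumes a local MinIO listening on :9000 with these test credentials
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin

# The only change from "real" S3 is the --endpoint-url flag
aws --endpoint-url http://localhost:9000 s3 mb s3://backups
aws --endpoint-url http://localhost:9000 s3 cp ./dump.sql s3://backups/dump.sql
aws --endpoint-url http://localhost:9000 s3 ls s3://backups
```

Application code using an S3 SDK can usually be repointed the same way, via an endpoint override, without touching the rest of the code.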

3) Aurora Serverless is replaced by Knative

4) Beanstalk is just classical servers with an auto-scaler component. Auto-scaling depends greatly on your provider, so I can't say how easy or hard it will be; if you're using Kubernetes, then understanding your load should at least be easy.
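For what it's worth, the Beanstalk-style "servers plus auto-scaler" setup maps fairly directly onto a Kubernetes HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named `web` already exists and the cluster has a metrics server:

```shell
# Scale the (assumed) "web" Deployment between 2 and 10 replicas,
# targeting 80% average CPU utilisation -- roughly what a Beanstalk
# CPU-based scaling trigger does.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa web
```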



Now you have five more things to research, learn, keep track of, debug, etc. Plus, in my experience, anyone who says Kubernetes is easy to run (cloud or native) has never had to run Kubernetes in production.


Sure, then pay?

I don't understand the argument: "I want to outsource understanding, but I want to save costs."

You can think of things as a spider-graph of three points:

Quality -- Low-Cost -- Knowledge-Required.

No solution can reasonably score high on all three.

Anyway, things are a bit skewed because I'm an infrastructure type, and people in my profession really do think of systems-administration tasks as "very easy": done right, they soak up nearly no time at all. Developers don't like hearing that, because sysadmins are "old world".

I don't really care whether you're paying someone else's sysadmins or your own; the fact remains that you're going to spend something in that area, and if you balk at the cost of cloud, then maybe taking ownership of what they do can help optimise costs.

Obviously they put a premium on their own time in these areas.


I disagree. Once you increase your knowledge of AWS and associated systems, you can decrease the cost of what you're doing through tips like the ones in this article. I don't quite get the point about optimizing cloud costs solely by switching to self-hosted. Of course you could do that, but you could also optimize costs by doing what the article says.

Full Disclosure: I work at AWS building tools to help customers do cost optimization.


Someone made a good reply to this and as I was replying to that they deleted it, so I'll copy it here:

----

> Have you actually done a cost-benefit analysis on some of these solutions?

Yes, I even gave a talk at Google in Stockholm about it.

For my use-case, hybrid was best, with no cloud lock-in aside from Google Cloud Storage buckets (which can be replaced), but I went into detail about that in the talk.

> Take your Riak / S3 plugin. What do your servers cost to run that cluster?

Depends a lot, don't you think?

> How much time do you spend managing it?

Depends again; if it's anything like my Elasticsearch clusters, then about 2 man-hours/mo.

> How do you test your backups?

Continuously, and with alerting... and you should be doing this anyway.

> Are you going to target the same SLOs for durability that S3 offers?

Depends on the business; the whole point of an SLO is that you pay in proportion to what it's worth to the business.

> Do you run multi-data center for high availability?

Depends on SLO.

> In many cases the cheaper or self-hosted solutions have costs that you aren't accounting for.

Yes, physical machines often need some hand-holding, and VPSs can have brownouts, but this is true of AWS's EC2 anyway.

Ultimately, this is where the cost increase will be... but defining it is important. I've deployed cloud and physical (as stated), and it's true that physical machines are not as problem-free as our GCE ones, but we pay about 50% less than the equivalent GCE instances, so it's "worth" spending time automating the unpredictable.
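To make that concrete with the figures from this thread (~50% cheaper machines, ~2 man-hours/month of care) plus an illustrative managed bill and admin rate, both of which are assumptions, the break-even arithmetic looks like:

```shell
#!/bin/sh
# Illustrative break-even arithmetic; the bill and rate are assumptions.
managed_cost=1000          # hypothetical monthly GCE-equivalent bill
self_hosted_cost=500       # ~50% less, per the comment above
admin_hours=2              # man-hours/month, per the Elasticsearch figure
hourly_rate=100            # hypothetical fully-loaded admin rate

admin_cost=$((admin_hours * hourly_rate))
net_savings=$((managed_cost - self_hosted_cost - admin_cost))
echo "Net monthly savings: \$${net_savings}"
# prints: Net monthly savings: $300
```

If the admin time blows past a few hours a month, the savings evaporate, which is exactly why automating the unpredictable bits matters.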

> Sometimes that's fine, but "just run it yourself" is as worthless as saying "just ship it to AWS" unless you actually think through the impact.

This is kind of the main point I always make... understand your trade-offs; don't buy into proprietary tech. Cloud is a fantastic way to prototype and bootstrap, but it's /usually/ better to have a migration plan to optimise costs in the future.

If you fail to take that into account, then I don't have sympathy for you, because you put the project at risk. Financial inviability is a risk.


That was me ;) I deleted it because I read your second post and realized I'd completely misread you and actually agree almost entirely.

> understand your trade-offs; don't buy into proprietary tech. Cloud is a fantastic way to prototype and bootstrap, but it's /usually/ better to have a migration plan to optimise costs in the future.

Preach.



