
8x is close to the top end; in my experience the multiplier lands somewhere between 3x and 10x depending on a number of factors, mostly scale.

For the record, when costing on premise I factor in:

  - Hardware depreciation (36 mo)
  - Power
  - People
  - DC rent + power
  - Software licences / support
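The factors above can be sketched as a simple monthly cost model. All the numbers below are illustrative placeholders, not real quotes, and the function name is just made up for this sketch:

```python
def onprem_monthly_cost(
    hardware_capex: float,     # total server purchase price
    depreciation_months: int,  # e.g. 36-month straight-line depreciation
    power_monthly: float,      # electricity for the fleet
    people_monthly: float,     # fully loaded staff cost attributed to it
    dc_rent_monthly: float,    # colocation rent + metered DC power
    licences_monthly: float,   # software licences / support contracts
) -> float:
    """Straight-line hardware depreciation plus recurring opex."""
    return (hardware_capex / depreciation_months
            + power_monthly + people_monthly
            + dc_rent_monthly + licences_monthly)

# Placeholder figures:
onprem = onprem_monthly_cost(
    hardware_capex=360_000, depreciation_months=36,
    power_monthly=2_000, people_monthly=15_000,
    dc_rent_monthly=5_000, licences_monthly=3_000,
)
print(f"on-prem: ${onprem:,.0f}/mo")
```

The point of writing it out: the people and rent lines often dwarf the depreciated hardware line, which is why the cloud-vs-on-prem multiplier moves so much with scale.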

What we see now is that on-prem compute is dropping in price every year and density is improving. AMD Rome offers incredible bang for your buck when buying at significant scale.

But it's not an apples-to-apples comparison.

It's virtually impossible to accurately factor in the opportunity cost of doing all this yourself, but you can potentially hire a bunch of engineers with the savings of going on prem. YMMV.

You can never recreate the cloud developer experience on prem, regardless of your scale; you can only judge whether what you get on prem is good enough.

It's difficult to put a value on being able to pay as you go, or to suddenly start serving workloads out of a geo close to your users; in the cloud that's immediate, whereas on prem there is always a lead time.

Finally, whilst the developer experience is better in the cloud, it takes a while to adjust to a new set of challenges there: outages outside your control, unpredictable performance, poor support, and no access to your hardware.

TL;DR: Cost is hard to define, and this isn't a zero-sum game.



