
This. There is no alternative for high-performance object storage outside of AWS S3 and Google Cloud Storage. Even Azure's offering is wonky.

Lots of providers claim to offer object storage, but try hitting them from a couple thousand cores and they all tend to fall over immediately.



There are options outside SaaS object storage though, e.g. running your own MinIO or Ceph, plus DAS (or a SAN), at your colo. Yes, it's some hassle, but if you have significant SaaS expenses it might be worth checking out.
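For anyone unfamiliar: MinIO speaks the S3 API, so most existing S3 tooling works against a self-hosted deployment with little more than an endpoint change. A minimal sketch using Python's boto3 (the endpoint, bucket name, and credentials below are placeholders, not real values):

    # Point a standard S3 client at a self-hosted MinIO endpoint in your colo.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://minio.internal.example:9000",  # placeholder endpoint
        aws_access_key_id="MINIO_ACCESS_KEY",                # placeholder credentials
        aws_secret_access_key="MINIO_SECRET_KEY",
    )

    # Same calls you would make against AWS S3.
    s3.upload_file("build.tar.gz", "releases", "v1.2.3/build.tar.gz")
    obj = s3.get_object(Bucket="releases", Key="v1.2.3/build.tar.gz")
    print(obj["ContentLength"])

The same client code runs unchanged against AWS S3 or Ceph's RADOS Gateway, which makes it straightforward to benchmark the self-hosted option against what you're currently paying for.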


AWS S3 has a limit of 5 Gbps of access to a single bucket.

Unless that limitation has changed in the last couple of years, I can easily build a system that beats that, if that's the requirement... and ultimately it does come down to understanding requirements. :\

I think people generally forget that the cloud is just computers too; it's really not anything special. Amazon/Google/Microsoft are solving the general case (and doing so well, actually), but that comes at a high premium.
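To make the "beat that" claim concrete, here is a sketch (my illustration, not the commenter's actual setup) of the standard way to push toward whatever the per-connection ceiling is: split a large object into parallel ranged GETs instead of one stream. boto3's transfer manager does this for you; the bucket and key names are placeholders:

    # Download one large object as many concurrent ranged GETs.
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,  # split anything over 64 MiB
        multipart_chunksize=64 * 1024 * 1024,  # 64 MiB per range request
        max_concurrency=32,                    # 32 parallel connections
    )

    s3.download_file("my-bucket", "datasets/big.bin", "/tmp/big.bin", Config=config)

Throughput then scales with concurrency and with how many instances/NICs you spread the work across, until you hit whatever limit S3 (or your own hardware) imposes.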


It's 25Gbps now[1]. I have a rule of thumb: if I haven't refreshed something I think I know about AWS for more than half a year, it's probably out of date.

[1]: https://aws.amazon.com/premiumsupport/knowledge-center/s3-ma...


That's the throughput available to a single EC2 instance. S3 as a whole will do hundreds of Gbps, no problem.


That's not a very common use case though. Most companies don't need to DDoS their cloud provider.


25 Gbps is 250 cable Internet users downloading the release of your new software. What you call a DDoS is actually a woefully underwhelming instantaneous transfer rate.
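Back-of-envelope for that figure, assuming roughly 100 Mbit/s per cable subscriber (my assumption, not stated above):

    # 25 Gbit/s divided by ~100 Mbit/s per user = 250 simultaneous downloaders.
    link_gbps = 25
    per_user_mbps = 100  # assumed cable download speed
    print(link_gbps * 1000 / per_user_mbps)  # 250.0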


If you're distributing releases (or anything, really), users should not have to go directly to your object store...


You should put your object storage behind a CDN if you expect that many downloads. That's much cheaper and faster.



