
Exactly, so why would the lower-level service aimed at big slow transfers be more expensive than a full-featured product?


There are a few reasons:

* Dropbox doesn't guarantee 11 9s of durability like this service does - when I'm backing critical data up, I want to make sure it's _there_.

* Dropbox likely wouldn't take kindly to me storing 10PB, whereas that is what this service is designed for.

* I've got SLAs and guaranteed speeds with this; Dropbox isn't designed for me to suddenly download 10PB very quickly.


Where are you seeing these durability claims? I can't find them. And what would 12 9s durability even mean? You lose one byte out of every TB?


It's 11, not 12, I misspoke.

Those claims are from Glacier (http://aws.amazon.com/glacier/faqs/), which is priced the same per GB of data.

In this case, the durability refers to the loss of data/objects stored per year - if you're sending multiple PBs of data off to Glacier, you want to be able to retrieve them many years later. Even 5 9s would mean that 1 object out of 100,000 is lost every year, which is quite poor.
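
To make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python (the object count is made up; durability is treated as the probability that a single object survives one year):

    # Expected annual object loss at a given durability level.
    # The object count below is hypothetical.
    def expected_annual_loss(num_objects, durability):
        return num_objects * (1 - durability)

    objects = 1_000_000  # say, a million archived objects
    for nines, durability in [(5, 0.99999), (11, 0.99999999999)]:
        lost = expected_annual_loss(objects, durability)
        print(f"{nines} nines: ~{lost:g} objects lost per year out of {objects:,}")

    # 5 nines  -> ~10 expected losses per year
    # 11 nines -> ~0.00001 expected losses per year (one loss every ~100,000 years)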


I'm thinking (unrecoverable data loss / total data stored) in their system per year.


12 9s is new to me too. I haven't worked on anything better than 5 9s (about 5 minutes of downtime/year), and Wikipedia only goes to 9. 9 9s comes to roughly 32ms of downtime a year. I can't think of anything that needs more than 5 9s, let alone 9.

http://en.wikipedia.org/wiki/High_availability#Percentage_ca...
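
For the availability reading, the conversion from nines to downtime is just arithmetic; a quick sketch (Python, assuming a 365.25-day year):

    # Convert "number of nines" of availability into allowed downtime per year.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

    def downtime_seconds_per_year(nines):
        return SECONDS_PER_YEAR * 10 ** -nines

    print(f"5 nines: ~{downtime_seconds_per_year(5) / 60:.1f} minutes/year")  # ~5.3 minutes
    print(f"9 nines: ~{downtime_seconds_per_year(9) * 1000:.0f} ms/year")     # ~32 ms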


This is a durability metric, not an availability one. This typically tells you the likelihood of losing a given object in a year. (5 9's would be quite poor for this, implying a loss of 1 object out of 100,000 every year)


It was 11, not 12. I misspoke: http://aws.amazon.com/glacier/faqs/

In this case, it's not so much service uptime as it is data retention. If you're storing 5+ PB of data, even 5 9s of durability implies a measurable amount of data lost per year.


Do you have any information to suggest that the Google Nearline service can handle 10PB?


If you'd like to put 10PB in storage, let's talk offline :)

source: SE at Google Cloud


It's safe to assume this is going to be just like Glacier and S3/Google Storage, i.e. unlimited.

Also, retrieval speeds increase with your data set size. From the docs: "You should expect 4 MB/s of throughput per TB of data stored as Nearline Storage. This throughput scales linearly with increased storage consumption. For example, storing 3 TB of data would guarantee 12 MB/s of throughput, while storing 100 TB of data would provide users with 400 MB/s of throughput."
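
A quick sanity check on those numbers (Python; the 4 MB/s-per-TB figure is from the quoted docs, the data sizes are just examples):

    # Nearline read throughput per the quoted docs: 4 MB/s per TB stored,
    # scaling linearly with stored data.
    MBPS_PER_TB = 4

    def nearline_throughput_mbps(stored_tb):
        return MBPS_PER_TB * stored_tb

    for tb in (3, 100, 10_000):  # 10,000 TB = 10 PB
        mbps = nearline_throughput_mbps(tb)
        hours_to_read_all = tb * 1_000_000 / mbps / 3600  # TB -> MB, seconds -> hours
        print(f"{tb:>6} TB stored -> ~{mbps:,} MB/s, ~{hours_to_read_all:.0f}h to read it all back")

One consequence of the linear scaling: reading your entire data set back always takes roughly the same ~70 hours no matter how much you store, assuming the scaling really holds all the way up to PB sizes.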


I was just curious about what these storage systems are capable of supporting. While I'm confident they could all store 10 TB of data (that's just barely Tier-2 of 6 for Amazon), I'm wondering if they have the back-end capability to store 10 PB of data.


The issue with all the 9s is that it will take 3 hours to find out there's a problem, and another 3 hours to make another request. I bet Dropbox could fix their shit in less time.


The full-featured product is $10/month regardless of how much storage you're using. If my requirement is 1GB, then my price here is $0.01 a month.
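
As a rough comparison of the two pricing models in this thread ($10/month flat vs. $0.01/GB/month, ignoring retrieval and operation charges, and ignoring the flat plan's storage cap):

    # Simplified monthly storage cost comparison from the figures in this thread.
    def flat_rate_cost(gb):
        return 10.00          # $10/month regardless of usage (up to the plan limit)

    def per_gb_cost(gb):
        return 0.01 * gb      # $0.01 per GB per month

    for gb in (1, 100, 1_000, 10_000):
        print(f"{gb:>6} GB: flat ${flat_rate_cost(gb):.2f}  vs  per-GB ${per_gb_cost(gb):.2f}")

    # Break-even is around 1,000 GB: below that, per-GB pricing is cheaper;
    # above it, the flat rate wins (until you hit the plan's storage limit).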


Hm yeah, I must admit I have a hard time understanding the high pricing on cloud storage. I always end up comparing it to something like OVH storage servers, and cloud storage seems way overpriced.


Based on the prices I can see on OVH's website, you're not going to be able to hit 1c/gigabyte with many nines of durability. And that's ignoring the cost of operating the service yourself.


SLAs on availability/durability & guaranteed throughput.



