
For a more in-depth analysis of these kinds of effects, see the Copysets paper (Cidon et al., USENIX ATC '13):

http://web.stanford.edu/~skatti/pubs/usenix13-copysets.pdf

The authors basically try to answer the question: "if all data is replicated to N nodes, how can we minimize the probability of data loss when a random N nodes go down at once?" There are some really interesting tradeoffs that arise in the process of answering that question.
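
To make the tradeoff concrete, here's a rough back-of-the-envelope simulation (mine, not the paper's; the cluster size, replication factor, and chunk/trial counts are all made up). It compares fully random replica placement against the extreme copyset-style placement where the cluster is partitioned into disjoint replica groups, and estimates the chance that a simultaneous failure of N random nodes loses at least one chunk:

    # Rough simulation (not from the paper); all parameters illustrative.
    import random

    NODES = 54        # cluster size (hypothetical)
    R = 3             # replication factor (the "N" above)
    CHUNKS = 10_000   # data chunks to place
    TRIALS = 5_000    # simultaneous-failure experiments

    def random_placement():
        # Each chunk's replicas land on R nodes chosen uniformly at
        # random, so almost every R-node combination becomes a copyset.
        return [frozenset(random.sample(range(NODES), R))
                for _ in range(CHUNKS)]

    def copyset_placement():
        # Constrain placement: partition the nodes into NODES/R fixed
        # copysets and put each chunk's replicas on exactly one of them.
        nodes = list(range(NODES))
        random.shuffle(nodes)
        groups = [frozenset(nodes[i:i + R]) for i in range(0, NODES, R)]
        return [random.choice(groups) for _ in range(CHUNKS)]

    def loss_probability(placement):
        # Data is lost iff the R failed nodes exactly cover some chunk's
        # replica set (both sets have size R here).
        copysets = set(placement)
        losses = sum(frozenset(random.sample(range(NODES), R)) in copysets
                     for _ in range(TRIALS))
        return losses / TRIALS

    print("random placement: ", loss_probability(random_placement()))
    print("copyset placement:", loss_probability(copyset_placement()))

With random placement nearly every 3-node combination ends up being some chunk's copyset, so a simultaneous 3-node failure almost always loses something; with the partitioned scheme, loss events are far rarer, but each one wipes out every chunk on that copyset, and the low scatter width slows recovery. That's the tradeoff the paper's "scatter width" parameter lets you tune between.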


