Hacker News

Wow, three years ago? Balls of steel!


People make it seem as if Docker is some bleeding-edge magical technology, but in reality its most useful features are just thin wrappers around stable Linux kernel features and some nice automation.
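To see what I mean, you can poke at those kernel primitives without Docker installed at all. A minimal sketch, assuming a Linux box with /proc mounted and util-linux's `unshare` available (these are standard kernel interfaces, nothing Docker-specific):

```shell
# Namespaces: every process already belongs to a set of them.
# A container is "just" a process placed in fresh ones.
ls /proc/self/ns

# Cgroups: the kernel's resource-accounting/limiting half of containers.
head -n 3 /proc/self/cgroup

# Unprivileged demo: enter a new user + PID + mount namespace, the same
# syscalls Docker drives. Inside, `ps ax` sees only this tiny PID tree.
# (Some distros disable unprivileged user namespaces, hence the fallback.)
unshare --user --map-root-user --pid --fork --mount-proc sh -c 'ps ax' \
  || echo "unprivileged user namespaces disabled on this kernel"
```

Docker's image format, layering, and registry tooling are the real value-add on top; the isolation itself is plain kernel machinery.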

We have also been running databases in Docker (though on the TB scale) for around three years. We've had the odd issue here and there, but nothing terrible, and certainly nothing fundamental or resulting in data loss.

If your data is corrupted by a single process dying in an unclean fashion, then you have other operational problems.


> People make it seem as if Docker is some bleeding-edge magical technology, but in reality its most useful features are just thin wrappers around stable Linux kernel features and some nice automation.

That's one of the things to dislike: the company and the community are trying to sell it as the best thing since sliced bread, and they usually forget to credit the kernel developers.

On top of that, its 180k lines of code are a lot for a "thin" layer.


Well, obviously it has a bunch more features beyond being that thin layer, but Docker really is the best thing since sliced bread.


This is one of the biggest problems with people hating on Docker... they don't understand what it is and what it actually does. Great comment!


Oh hell no, I'm not hating on Docker! I love Docker (I'm the Docker multiplier in my company, teaching it and giving presentations).

But putting PB of data in Docker three years ago is just insane.

There are still things that aren't clear right now, things that are unstable: not unstable as in data-destroying, but unstable in the sense that if something goes wrong with your data, you need to dig deep to find out what's going on. And if stuff changes every few months, you will have a hard time.

Also, enterprise concerns like:

Will there be a Docker standard from Google and Facebook? Will AUFS make it to version 4 or be scrapped again? Will the Compose format stay? How do you manage runtime upgrades, i.e. synchronize 1000 Docker hosts on one version update, then another 1000 with other software, so that one update depends on another? Etc., etc.

Three years ago it just wasn't there. Two years ago I would have said "good enough to play around with." One year ago it started to get really interesting for the enterprise.

That's why I say: you handled PB of data, with Docker, three years ago?

Balls of steel.


The data wasn't petabytes three years ago; it grew to that amount over time.

But with that said, if you have replication and operational workflows set up, you minimize the chance of issues.

If you have mission-critical data, you should set up your database to handle failovers of any type, Docker aside.


Didn't mean you, Bombthecat :-) I was referring to the article.



