
> Why is a process running within a container failing over more frequently than a process running directly on bare metal

Because the way to change anything in a container is to kill it and restart it. That's a fundamental difference from managing and maintaining a database that isn't in a container.



Unless you've written very poorly behaved software, you kill it by sending it SIGTERM and waiting for it to exit. This is true of software both inside and outside of containers.
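To make that concrete, here's a minimal sketch (not from the thread) of what "well-behaved" means in practice: install a SIGTERM handler that triggers a clean shutdown instead of letting the process die mid-write. The self-signal at the bottom just simulates what a supervisor or `docker stop` would do.

```python
import os
import signal
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # In a real daemon: stop accepting work, flush state,
    # close connections, then exit. Here we just set a flag.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate a supervisor sending us SIGTERM.
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)
print("clean shutdown" if shutting_down else "still running")
```

A database that does this (as essentially all serious ones do) shuts down the same way whether or not it's in a container.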

The fact that `docker kill` defaults to SIGKILL rather than SIGTERM is unfortunate, and something to be aware of before deploying a process with Docker, but again, it does not make a process running in a container inherently less reliable.

edit: Looks like `docker stop` does the right-ish thing -- sends a SIGTERM, then only resorts to SIGKILL after a timeout has expired.


Also worth noting that the timeout is configurable via `docker stop --time` (the default is 10 seconds).
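The `docker stop` behavior described above (SIGTERM first, SIGKILL only after a timeout) is easy to sketch outside of Docker. This is an illustrative reimplementation of the pattern, not Docker's actual code; it runs two hypothetical children, one well-behaved and one that ignores SIGTERM:

```python
import signal
import subprocess
import sys
import time

def stop_process(proc, timeout):
    """Mimic `docker stop`: SIGTERM first, SIGKILL after a timeout."""
    proc.send_signal(signal.SIGTERM)
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL as a last resort
        proc.wait()
    return proc.returncode

# Well-behaved child: default SIGTERM disposition, exits immediately.
good = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
rc_good = stop_process(good, timeout=5)

# Ill-behaved child: ignores SIGTERM, so only SIGKILL stops it.
bad = subprocess.Popen(
    [sys.executable, "-c",
     "import signal, time;"
     "signal.signal(signal.SIGTERM, signal.SIG_IGN);"
     "time.sleep(60)"]
)
time.sleep(1)  # let the child install its handler first
rc_bad = stop_process(bad, timeout=1)
```

On POSIX, `returncode` is the negated signal number, so the first child reports `-SIGTERM` and the second `-SIGKILL`: exactly the two outcomes `docker stop` can produce.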


You don't have to operate your container like that. If you need to push configuration to it, you can make it writable.



