You mean creating a different container that is exactly equal to the previous one?
It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.
It's useful if you want to bring up a containerized service, optionally update the OS, run tests, and, if everything is good, copy that instance a bunch of times rather than starting fresh (see the sketch below).
It lets you scale out a batch of VMs remarkably quickly, while leaving the original available for OS/patch updates.
If I'm willing to pay the cost of keeping an idle VM around, subsequent launches are probably an order of magnitude faster than docker hello-world.
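In Docker terms, that workflow is roughly the following. This is a hedged sketch with the docker-py SDK ("pip install docker"); the container name "svc-golden" and the "myapp" repository are made up, and note that commit copies disk state, not running processes:

    import docker

    client = docker.from_env()

    # The instance we brought up, patched, and tested...
    golden = client.containers.get("svc-golden")

    # ...snapshotted as an image. Commit captures the filesystem, not
    # live process memory -- a true live fork would need something like
    # CRIU ("docker checkpoint").
    image = golden.commit(repository="myapp", tag="tested")

    # ...then stamped out a bunch of times instead of starting fresh.
    clones = [
        client.containers.run(image.id, detach=True, name=f"svc-clone-{i}")
        for i in range(5)
    ]

Since the clones all share the committed image's layers, each launch is cheap, which is the same win as keeping a warm VM around.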
Your cloud provider may be doing it for you. Ops informed me one day that AWS was pushing out a critical security update to their host OS. So of course I asked whether that meant I needed to redeploy our cluster, and they said no; in fact, they had already pushed it.
Our cluster keeps stats on when processes start, partly so we can alert on crashes, and partly because new processes (cold JIT) can skew the response numbers and mark inflection points for analyzing performance improvements or regressions. There were no restarts that morning. So they pulled the tablecloth out from under us. TIL.
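For what it's worth, the kernel of that monitoring is tiny. A minimal sketch of the idea, not our actual stack: track when each process starts so restarts (possible crashes) page someone, and flag a warm-up window so cold-JIT samples can be excluded from the latency stats. The 300-second window is a made-up number.

    import time

    WARMUP_SECONDS = 300  # hypothetical cold-JIT warm-up window

    process_starts: dict[str, float] = {}

    def alert(msg: str) -> None:
        print("ALERT:", msg)  # stand-in for a real pager/notifier

    def record_start(host: str) -> None:
        # A start time we didn't expect means the old process died.
        if host in process_starts:
            alert(f"process restarted on {host}")
        process_starts[host] = time.time()

    def in_warmup(host: str) -> bool:
        # Responses measured here would skew the latency numbers.
        return time.time() - process_starts.get(host, 0.0) < WARMUP_SECONDS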
None of this makes live-forking a container desirable to me; I'm not a cloud hosting company (and if I were, I'd be happy to provide a VPS as a VM rather than a container).
For the VM case, I may well have benefited from it, if DigitalOcean has been able to patch something live without restarting my VPS. Great. But it's nothing I need to care about, so I have never cared about live-forking a VM. It hasn't come up in my use of VMs.
It's not a feature I miss in containers, is what I'm saying.
That's another reason they're so infuriating. Containers are intended to make things faster and easier. But the allure of virtualization has made most work much, much slower and much, much worse.
If you're running infra at Google, of course containers and orchestration make sense.
If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.
The contexts in which they are appropriate and actually improve anything at all are vanishingly small.
I have wasted enough time caressing Linux servers to accommodate different PHP versions that I know what good containers can do. An application gets tested, built, and bundled with all its system dependencies in CI, then pushed to the registry and deployed to the server. All automatic. Zero downtime. No manual software installation on the server. No server-update downtime. No subtle environment mismatches. No forgotten dependencies.
I fail to see the churn and destruction. Done well, you even decouple the node from the application and end up with raw compute that you can run multiple apps on.
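The deploy step of that pipeline is about this simple. A rough sketch with the docker-py SDK, assuming CI already built, tested, and pushed the image; the registry URL, names, and tags are hypothetical, and a real setup would have a reverse proxy or load balancer in front so the swap is actually zero-downtime:

    import docker

    client = docker.from_env()

    # Pull what CI pushed -- no manual installs on the server.
    image = client.images.pull("registry.example.com/myapp", tag="v42")

    # Start the new version alongside the old one.
    new = client.containers.run(image.id, detach=True, name="myapp-v42")

    # Naive health check; point the proxy at the new container here.
    new.reload()
    assert new.status == "running"

    # Only then retire the old version.
    old = client.containers.get("myapp-v41")
    old.stop()
    old.remove()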
Part of why I adopted containers fairly early was the time we decided to make VMs for QA with our software on them. They kept fucking up installs and reporting ghost bugs caused by a bad install, or running an older version and claiming the bugs we'd fixed weren't fixed.
Building disk images was a giant pain in the ass but less disruptive to flow than having QA cry wolf a couple times a week.