> it doesn't even solve the problem because dependencies are just downloaded from the package manager.
The advantage of Docker is that you can verify the container works locally as part of the build process rather than finding out it is broken due to some missing dep after a deployment. If you can verify that the image works then the mechanism for fetching the deps can be as scrappy as you like. Docker moves the dependency challenge from deployment-time to build-time.
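Concretely, that verification loop can be as small as build-then-run. A sketch (the image tag and the smoke-test flag are made up for illustration):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run shells out and fails loudly, so a broken image kills the build.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	// Build: a missing dependency fails here, at build time.
	run("docker", "build", "-t", "myapp:ci", ".")
	// Smoke test: actually run the image before it ever ships.
	run("docker", "run", "--rm", "myapp:ci", "--version")
}
```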
Does container mean something different to y’all than it does to me?
I ask because I read your comment as saying “the advantage of Docker is that it uses (explanation of what containers are)” and the parent comment as saying “all I want from Docker is (explanation of what containers are).” So I am confused about (a) why y’all are not just saying “containers” but rather “the part of Docker that packages up my network of scripts so I can think about it like a statically linked binary,” and (b) why you think this is a competitive advantage over the other things you might have recommended here instead (Buildah, Makisu, BuildKit, img, Bazel, FTL, Ansible Container, Metaparticle... I am sure there are at least a dozen) to satisfy the parent comment’s needs.
Is there really any container ecosystem which has write-an-image-but-you-can’t-run-it-locally semantics? How do you finally run that image?
Docker is too general, too much of a Swiss army knife, for this particular problem. The problem I am talking about is a C++ program with all of its dependencies vendored into the source tree. When you run Make, everything, including the dependencies, builds at the same time. All you need is a chroot, namespaces, cgroups, btrfs, squashfs (plain old Linux APIs) to make sure the compiler has a consistent view of the system. Assuming the compiler and filesystem are well behaved (e.g., don't insert timestamps), you should be able to take a consistent sha256sum of the build. And maybe even ZIP it up like a JAR and pass around a lightweight, source-only file that can compile and run (without a network connection) on other computers with the same kernel version.
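Roughly, the kernel side of that is a handful of syscalls. A minimal sketch (the build-root and source paths are made up; it needs root and a pre-populated rootfs):

```go
//go:build linux

package main

import (
	"log"
	"syscall"
)

func main() {
	// New mount namespace, so nothing we do leaks back to the host.
	if err := syscall.Unshare(syscall.CLONE_NEWNS); err != nil {
		log.Fatal("unshare: ", err)
	}
	// Make the whole mount tree private to this namespace.
	if err := syscall.Mount("", "/", "", syscall.MS_REC|syscall.MS_PRIVATE, ""); err != nil {
		log.Fatal("mount: ", err)
	}
	// Enter the prepared root: the compiler sees only what we put there.
	if err := syscall.Chroot("/srv/buildroot"); err != nil {
		log.Fatal("chroot: ", err)
	}
	if err := syscall.Chdir("/"); err != nil {
		log.Fatal("chdir: ", err)
	}
	// Scrubbed environment; SOURCE_DATE_EPOCH is the reproducible-builds
	// convention for pinning embedded timestamps, which is what lets the
	// output sha256sum come out the same everywhere.
	env := []string{"PATH=/usr/bin:/bin", "SOURCE_DATE_EPOCH=0"}
	err := syscall.Exec("/usr/bin/make", []string{"make", "-C", "/src"}, env)
	log.Fatal("exec: ", err) // Exec only returns on error.
}
```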
Again, Bazel is basically this already. But it would be nice to have something like OP's tool to integrate into other build systems.
I could just make a Dockerfile and say that's my build system. But then I'm stuck with Docker. The only way to run my program would be through Docker. Docker doesn't have a monopoly on the idea of a fully-realized chroot.
For some scenarios, most (all?) of these container ecosystems have write-an-image-but-you-can’t-run-it-locally semantics.
My build server is x64, but the target output is ARM. I can't exactly just run that locally super easily. Perhaps somebody has created a container runtime that detects this, automatically spins up a qemu container running an ARM host image, and forwards my container run request (and image) to that emulated system, but I haven't heard of such a feature. (Not that I actually looked for it.)
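Edit: apparently qemu user-mode emulation registered through binfmt_misc gets pretty close; Docker's --platform flag will run a foreign-architecture image through it. Rough sketch (image name made up; assumes the ARM binfmt handlers are already installed):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Run an ARM64 image on an x64 host; the kernel hands the ARM
	// binaries to the registered qemu interpreter transparently.
	cmd := exec.Command("docker", "run", "--rm",
		"--platform", "linux/arm64", "myapp:arm64", "uname", "-m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```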
At my current company we deploy almost all code as Docker images (with the exception of Lambda functions). Having talked to multiple developers: no one uses Docker for local development, except maybe to spin up another service the app interacts with, and even that isn't preferred, mainly because unless you're running Linux, Docker is quite expensive on resources due to running under a VM.