> I thought devcontainers were merely a way of telling VSCode to host your dev environment in a Docker container.
Right, which is gross. It gets worse when you start talking about using them practically in an enterprise-ish environment. There, they end up being a less effective Xen-style programming interface. It's too bloated for most cases. The distinction I make is building with docker (for cross compilation or whatever) vs hosting your entire dev environment in a container.
But why is it "gross"? I'd have thought it would be especially useful in C development where headers and other development packages are typically installed globally on a machine - it would allow you to have multiple isolated environments, the correct packages (and versions) in each of those environments, and your editor/LSP/IDE would be able to interact with that isolated environment pretty much out of the box.
I don't really see the difference between just building via docker, and doing static analysis, incremental builds, running tests, etc. inside docker. Surely the goal in all these scenarios is the same: a reproducible environment for every developer on the project?
There's a higher friction to working with that isolated environment.
It's hard to take stuff out of a running container, and it's hard to access files/programs on the host from within the container.
There may be advantages to running inside a container, but when there are easier ways to quickly make programs available to the host (e.g. virtual environments, or the asdf version manager), most tools seem to aim for those first.
Honest question: have you used dev containers? Because these seem like solved problems.
Bind mounts let you easily move files in/out of the container (and are already set up by devcontainers). And the whole point is to _not_ access programs on the host, you want that isolation so that the environment is reproducible and everything you need to build is defined in the dev container.
It just needs your build toolchain and libs; you don't need to use the container's shell to run random Unix utils or curl, for instance.
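To make the bind-mount point concrete, here's a minimal `.devcontainer/devcontainer.json` sketch. The image name and the extra cache mount are illustrative, and the workspace mount shown is just the default spelled out:

```jsonc
// .devcontainer/devcontainer.json (the format allows comments)
{
  "name": "c-dev",
  // illustrative image; anything carrying your toolchain works
  "image": "mcr.microsoft.com/devcontainers/cpp:debian",
  // devcontainers set up this workspace bind mount by default;
  // spelled out here to show where files cross the host/container boundary
  "workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind",
  "workspaceFolder": "/workspace",
  // extra mounts work the same way, e.g. sharing a host cache directory
  "mounts": [
    "source=${localEnv:HOME}/.cache,target=/root/.cache,type=bind"
  ]
}
```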
Any C app (or even Python app, since Python libs like to depend on C libraries) with non-trivial dependencies gets very annoying to configure across a range of distros (even worse if you include macOS and/or Windows).
`sudo apt install libpng-dev` vs `sudo dnf install libpng-devel` etc.
Rather than document and test all those different configs, devcontainers are a really easy way to avoid this pain for example applications or ones that will only ever ship to one distro/OS. And if you're running on Linux at least, there's literally no overhead (containers are just processes tagged with cgroups, after all).
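For instance, the apt/dnf split above collapses into a few lines written down once. A sketch of a `.devcontainer/Dockerfile` (base image and package list are illustrative):

```dockerfile
# illustrative base image; pin whatever your project actually needs
FROM debian:bookworm
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        pkg-config \
        libpng-dev \
    && rm -rf /var/lib/apt/lists/*
```

Every developer gets the same headers and library versions, regardless of what their host distro calls the packages.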
I'm ignorant about C development and its practices, but installing development dependencies using the distro's package manager has always seemed very wrong to me.
Doing it inside a container solves the portability problem, but you're still using a Linux distribution's package manager to get the dependencies of your project, which makes no sense to me at a fundamental level, even if it "works" in practice.
Is vendoring the only somewhat sane way of doing dependency management in C?
About 10 years ago, when I wrote C++ for a living, vendoring was the solution. When you look at Flatpak, Snap, etc., that's effectively what they do: vendor all their libs.
I would hope that tools like conan and vcpkg solve this now on the developer side? I don't have much experience with them though.
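From what I've seen (hedging here, since as I said I haven't used them much), vcpkg's manifest mode amounts to something like this minimal `vcpkg.json`, with versions resolved by the tool:

```json
{
  "name": "example-app",
  "version": "0.1.0",
  "dependencies": [ "libpng" ]
}
```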
You still have to deal with libc though, which means you likely need to containerize to avoid issues with old versions or distros that use something other than glibc (musl or bionic, for example).
It's a lot more complex to build fully static binaries with C/C++ than it is in something like Rust or Go.
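As a rough sketch of the musl route (the Alpine tag and file names are illustrative):

```sh
# same source, but linked fully static against musl inside an Alpine container
docker run --rm -v "$PWD":/src -w /src alpine:3.20 sh -c '
  apk add --no-cache build-base &&
  gcc -static -O2 -o hello hello.c &&
  ldd ./hello || true   # musl ldd rejects static binaries, confirming the link
'
```

glibc accepts `-static` too, but parts of it (NSS name lookups) still `dlopen` shared objects at runtime, which is part of why musl keeps coming up in this context.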
I too am a pedant when it comes to using the word "literally" :)
IMO I'm using it correctly here, though; let me explain.
"Overhead" was originally a business term referring to an ongoing cost. Yes, there is a small amount of code in the kernel that has to run, but my naive understanding is that this code also runs for processes that are not in a container (the kernel still needs to check whether the process is in a namespace or not). Additionally, I've never seen a benchmark that shows a workload performing worse when cgroups are applied. I'm happy to be proven wrong here, but if that's the case, then there is no ongoing cost (and thus no overhead).
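The kind of check I mean, for anyone who wants to prove me wrong; this assumes a CPU-bound `./bench` binary that runs both on the host and in the container (names and image are illustrative):

```sh
# same binary, same kernel: bare process vs. namespaces + cgroup via a container
time ./bench
docker run --rm -v "$PWD":/w -w /w debian:bookworm sh -c 'time ./bench'
```

If cgroup accounting carried a real ongoing cost, the second number should be consistently worse.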
Why is it gross? Performance issues? It works well* for creating a reliable environment for all developers involved in a project.
* granted I did just spend half a day last week figuring out that WSL environment variables are not correctly applied to the containerEnv, but otherwise they've been solid