Any C app (or even Python app, since Python libs like to depend on C libraries) with non-trivial dependencies gets very annoying to configure across a range of distros (even worse if you include macOS and/or Windows).
`sudo apt install libpng-dev` vs `sudo dnf install libpng-devel`, etc.
Rather than document and test all those different configs, devcontainers are a really easy way to avoid this pain for example applications, or for ones that will only ever ship to one distro/OS. And if you're running on Linux at least, there's literally no overhead (containers are just processes tagged with cgroups, after all).
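For what it's worth, a minimal sketch of that setup (the file names are the devcontainer defaults; the Debian base image and libpng are just placeholder choices):

```jsonc
// .devcontainer/devcontainer.json -- parsed as JSONC, so comments are allowed
{
  "name": "c-dev",
  "build": { "dockerfile": "Dockerfile" }
}
```

```dockerfile
# .devcontainer/Dockerfile -- one image, so only one set of package names to document
FROM debian:bookworm
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential libpng-dev \
 && rm -rf /var/lib/apt/lists/*
```

Everyone opens the repo in the container and gets the same `libpng-dev`, regardless of host distro.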
I'm ignorant about C development and its practices, but installing development dependencies using the distro's package manager has always seemed very wrong to me.
Doing it inside a container solves the portability problem, but you're still using a Linux distribution's package manager to get the dependencies of your project, which makes no sense to me at a fundamental level, even if it "works" in practice.
Is vendoring the only somewhat sane way of doing dependency management in C?
About 10 years ago, when I wrote C++ for a living, vendoring was the solution. When you look at Flatpak, Snap, etc., that's effectively what they do: vendor all their libs.
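Concretely, vendoring in a C/C++ tree often looks something like this, with libpng as a stand-in dependency (the exact CMake target name depends on the libpng version):

```sh
# pull the dependency's source into your own tree, pinned to a commit
git submodule add https://github.com/pnggroup/libpng third_party/libpng
```

```cmake
# CMakeLists.txt: build the vendored copy as part of your own build
add_subdirectory(third_party/libpng)
target_link_libraries(app PRIVATE png_static)  # target name varies by libpng version
```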
I would hope that tools like Conan and vcpkg solve this now on the developer side? I don't have much experience with them, though.
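From what I've seen of vcpkg's manifest mode, the per-project setup is roughly this (assuming `VCPKG_ROOT` points at a vcpkg checkout; `example-app` is a placeholder name):

```json
{
  "name": "example-app",
  "dependencies": [ "libpng" ]
}
```

```sh
# point CMake at vcpkg's toolchain; manifest deps are built at configure time
cmake -B build -S . -DCMAKE_TOOLCHAIN_FILE="$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake"
cmake --build build
```

`find_package(PNG REQUIRED)` then resolves against the vcpkg-built libpng rather than whatever the distro ships.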
You still have to deal with libc though, which means you likely need to containerize to avoid issues with old versions or distros that use something other than glibc (musl or bionic, for example).
It's a lot more complex in C/C++ to build fully static binaries than it is in something like Rust or Go.
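To illustrate: on a glibc distro, `gcc -static` will link, but the toolchain warns that NSS functions like `getaddrinfo` still load shared libraries at runtime. The usual escape hatch is a musl toolchain, e.g. in an Alpine container (a sketch; package names from memory, so double-check them):

```sh
# inside an Alpine (musl) container, where -static actually means static
apk add build-base pkgconf libpng-dev libpng-static zlib-static
gcc -static -o app app.c $(pkg-config --cflags --libs --static libpng)
file app   # should report "statically linked"
```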
I too am a pedant when it comes to using the word "literally" :)
IMO I'm using it correctly here, though; let me explain.
Overhead is originally a business term that refers to an ongoing cost. Yes, there is a small amount of code in the kernel that has to run, but my naive understanding is that this code also runs for processes that are not in a container (the kernel still needs to check whether a process is in a namespace or not). Additionally, I've never seen a benchmark that shows a workload performing worse when cgroups are applied. I'm happy to be proven wrong here, but if there's no measurable penalty, then there is no ongoing cost (and thus no overhead).
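If anyone wants to poke at the claim themselves, a crude first pass, using systemd's transient scopes as a stand-in for a container's cgroup (`/tmp/bigfile` is a placeholder for any large file):

```sh
# every process on a modern Linux box is already in *some* cgroup
cat /proc/self/cgroup

# same CPU-bound workload, outside and then inside a freshly created cgroup scope
time sha256sum /tmp/bigfile
time systemd-run --user --scope --quiet sha256sum /tmp/bigfile
```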
Bloated how? Which cases?