Not bad, except for VS Code devcontainers. Absolutely gross solution to a problem I've never faced in over a decade of professional development. I can see docker in some cases (rarely) for C programming. But devcontainers? That's like reaching around your head to touch your nose. I know plenty of people that use them, and every time I watch them, trying to glean some insight, I'm left more confused than when I started. It seems so stilted. I always get "it's a better experience!" but never any explanation why. Like everyone who uses them watched the same Microsoft-sponsored youtube video selling them.
None of this solves C's only REAL problem (in my opinion) which is the lack of dependency management. Most everything else can be done with a makefile and a half decent editor. No need to step up into vscode if you don't want to. Clang LSPs are basically everywhere and just fine.
> C's only REAL problem (in my opinion) which is the lack of dependency management. Most everything else can be done with a makefile and a half decent editor.
Care to hear about our lord and saviour Meson?
The two things you quoted are mutually incompatible: dependency management isn't the job of the compiler, it's a job for the build or host system. If you want to keep writing makefiles, be prepared to write your own `wget` and `git` invocations to download subprojects.
Meanwhile, Meson solves the dependency management problem in a way that makes both developers and system integrators/distributions happy. It forces you to make a project that doesn't have broken inter-file or header dependency chains, and it cleans up all the clutter and cruft of a makefile written for any non-trivial project, while making it trivial to integrate other Meson projects into your build, letting other people integrate your project into theirs, and providing all of the toggles and environment variables distribution developers need to package your library properly. You can really have your cake and eat it too.
Devcontainers are the main reason I use vscode. Being able to git pull a project on a new starter's laptop with only docker installed and have everything there, shared across everyone's workstation, is great. It helps reduce developers' local computer settings causing issues. It's not as complicated to teach as Nix. I've never heard someone dislike it before tbh!
I love them even though I don't use them myself. They let less technical members of my teams like content and interaction designers run apps in two clicks. Often they don't have admin rights on their computers to install the necessary tools so it eliminates that problem too.
I didn't even know about the admin rights (because I work in startups most people have local admin so I don't even think about it). That's great to find out!
Yeah I work in a team with a mix of contractors who BYOD and perm staff who use laptops provided to them.
The contractors have a huge productivity advantage out of the gate - modern hardware, any software, no mandatory VPN that blocks genuinely useful websites (MDN!).
> Absolutely gross solution to a problem I've never faced in over a decade of professional development. .... None of this solves C's only REAL problem (in my opinion) which is the lack of dependency management.
I'd juxtapose these two sentences.
As I understand things, those who know Nix swear by it. You can declare a development environment which will provide the toolchain and the libraries you need to build your software.
Some things do seem inelegant about Docker containers, e.g. building the images with Dockerfiles feels fragile, and running containers means high friction in accessing the build environment from the host machine.
Those downsides aside, AFAIU the VSCode devcontainers aim to provide that "wow it just works" experience that the Nix people love, without having to pay the steep cost of learning Nix.
> None of this solves C's only REAL problem (in my opinion) which is the lack of dependency management.
I thought this too for a long time, but the more I'm exposed to languages with "proper" dependency management the more I appreciate the C way of just copying external library sources into the project (and I only consider libraries which make this easy; e.g. if they come with their own complex build system files they've already lost: just give me a bunch of headers and source files, and maybe a readme with a list of configuration defines).
> the more I'm exposed to languages with "proper" dependency management the more I appreciate the C way of just copying external library sources into the project
What's cumbersome about copying a couple of files and adding them to your build scripts? At least that way you have complete control over the entire update and build process, you can use the build system of your choice, and you know exactly that there are no 'surprises' lurking in your dependency tree. It also nudges you to avoid libraries that come with complex build requirements (which will always pay off long term).
Updating third party dependencies becomes much more cumbersome, as copy pasting isn’t really the most reliable way to update stuff. You lose all traceability with upstream, and their code becomes much harder to distinguish from your own. It also increases the size of your repositories by a lot.
For me, Guix failed by simply being too slow to use, but that was years ago.
Nix, on the other hand: I loved the concept and idea, but it was just too much of a 'stop sign' followed by 'we don't do that kind of user activity here' to be usable. I tried for a few weeks to bend our wills together, but the system's will won and I walked away.
I suspect my experience with Guix would have been remarkably similar if I hadn't been put off by the speed - I love Scheme, but Guile has always seemed the second slowest implementation ever, and I suspected that was the cause of it taking so long to do anything in Guix.
Guix still isn't particularly blazing, but Guile is definitely not the bottleneck these days. Guile is actually a very fast Scheme implementation now. The problem with Guix's speed right now is that the binary substitute servers have slow networking, so your options end up being either to compile software from source, or crawl through a slow download.
This is usually only a problem if you want to download something large or do a large update, though. If you're just downloading a small program or you update your system frequently, it's quite reasonable to use.
As far as the "stop sign", I have never really run into that with Guix, and I did run into that with Nix. The fact that Scheme - unlike the Nix language - is not purely functional, I think encourages users to do what they want, even if it goes against the spirit of the functional package manager.
I'm not sure what you mean by "slow". Can you please clarify?
Please don't take this as "you're wrong" or to somehow invalidate your experience. In the two years of running Guix as my main system, the only time I've thought something was slow was in building Chromium from source (which took all night to compile). Everything else I never noticed. Certainly not downloads. Those were fine for me both in the US and in Europe. Your experience contrasts so sharply with mine, I wonder what the difference is?
I don't have exact numbers on hand, and it's not always, but sometimes when I'm downloading binary substitutes it will crawl at 100-300 kb/s. I know my experience isn't unique because I also see people complaining about it on the mailing lists.
Sometimes downloads are fine though. I think it's an issue of load on the servers that will cause downloads to be slow sometimes.
I am not sure what kind of dependency management you mean.
For several decades, I have never compiled any C project without auto-generated dependencies between files, which are produced by the compiler and used by make.
I never write dependencies for a makefile by hand. (I also never write the list of source files by hand; I let make search for them and choose the appropriate compiler.)
It is true that many open-source projects have horrible makefiles, but I have never understood why anyone writes makefiles like that, when the GNU make manual explains the right way and that explanation has existed for at least a quarter of a century.
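For anyone who hasn't seen the pattern, a minimal GNU make sketch of what's being described: sources discovered by make, header dependencies emitted by the compiler and re-read on the next run (file and variable names are just illustrative):

    # sources found by make, not listed by hand
    SRCS := $(wildcard src/*.c)
    OBJS := $(SRCS:.c=.o)

    # -MMD writes a .d dependency file next to each object as a side effect,
    # -MP adds phony targets so deleted headers don't break the build
    CFLAGS += -MMD -MP

    prog: $(OBJS)
    	$(CC) $(LDFLAGS) -o $@ $^

    # pull in last build's dependency info; silently ignored if missing
    -include $(OBJS:.o=.d)

(Recipe lines need real tabs; the built-in %.o: %.c rule does the actual compiling.)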
Devcontainers are a HUGE problem solver when you have a large organization. You have no idea how much time I have wasted doing support for devs who have inconsistent dev environments. We're talking like 30% of the engineering org's time. Absolutely insane.
They are in addition to a simple Docker or Docker compose setup, not an alternative or replacement. If you want to, you can still use the simple Docker setup in a project for yourself even if it has a devcontainer configuration.
I.e., you can point a .devcontainer at a Dockerfile that you've already got working for your environment, and specify standard configurations for environment variables, vscode extensions, and other bootstrap settings. That way the startup friction for a new developer working on the code is reduced to the minimum, and the manual steps they need to complete to get started are documented and standardised in the devcontainer format - but you don't lose anything by doing so.
The docker setup is just one part of it. There are sometimes extra considerations - let's call them environmental factors - that steadily add complexity until they become a significant source of problems. You can try to avoid them, ignore them, or make your own solution. But once you face them all, and measure how much productivity and reliability is lost dealing with them, suddenly Devcontainers make sense.
First environmental category: your dev environment itself.
What platform are you using? What architecture are you using? Do you need a VM? How will you mount filesystems? How will you handle networking - through the VM or on the native host? How will you handle HTTPS proxies - on the docker client, on the docker server, in the VM? Do you need a custom DNS resolver? Do you need additional applications for your platform, like AWS or GCP CLI tools and credentials? Do you need more applications, some of which may have specific version requirements (for example, Homebrew famously does not support old versions of software)?
Second environmental category: CI/CD.
How should I do a build, test, and deploy from my laptop? How should I do a build, test, and deploy from a CI/CD server? How can I make sure I get the same result from either? How can I reduce the number of dependencies in the deployed form, without getting different results at runtime? How can I re-use things from my local machine in the CI/CD so that I don't have to maintain two different ways of doing the same thing?
- In the best case: you have no problems at all, and everything works automatically.
- In the worst case: one of a hundred different things can break your dev environment. You end up spending an inordinate amount of time just "maintaining your dev environment" rather than writing application code. And the more developers there are, the more time is spent on "environment work" rather than on "making the product".
- Besides all of that dev environment work, you end up replicating your whole process in a completely different CI/CD system, and now you're maintaining two separate yet identical things.
A solution to all of these environmental factors combined is Devcontainers: https://containers.dev/overview Because it's one standard, all sorts of tools can now take advantage of it. This means different devs can use different tools to solve the same problem with the same config file.
My solution: git submodules + every project stores its main sources in src/ + Makefile that recursively searches for "src/*.c" files and compiles them to a big ol object collection.
It's obviously not infinitely scalable, because you could end up with name conflicts for object files, or some modules might require custom build settings. But it works well enough to nest my own projects 3-4 layers deep without issue.
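Roughly the kind of Makefile being described, as a sketch (paths and the target name are placeholders):

    # gather every src/*.c in this project and in nested submodules
    SRCS := $(shell find . -path '*/src/*.c')
    OBJS := $(SRCS:.c=.o)

    # one big object collection the parent project can link against
    libbundle.a: $(OBJS)
    	$(AR) rcs $@ $^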
git submodules have ruined my life on multiple teams enough to never want them. They're good enough for a dev team of 2-3 but don't scale very well. git subtree can be a bit better. But ultimately you probably want a working package manager.
Using git subtree for dependencies is completely unworkable for two interrelated reasons:
(1) There's no recursive option; each dependency has to be pulled/pushed/etc. manually.
(2) Each manual pull/push/etc. of a subtree has to be explicitly path-specified and remote-specified.
There's just no easy way to `git subtree pull --prefix=specificdependencysubdir https://github.com/whatever/gadgets main --squash` for each of your dependencies, and each of their dependencies, etc. It's a tedious manual process for everything.
Best case scenario, you make a .PHONY target of `updatesubtrees` and manually keep it up to date with the directory structure and remote URLs of each dependency.
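A sketch of what that target ends up looking like (prefixes and URLs are placeholders; the gadgets URL reuses the example above):

    .PHONY: updatesubtrees
    updatesubtrees:
    	git subtree pull --prefix=deps/gadgets https://github.com/whatever/gadgets main --squash
    	git subtree pull --prefix=deps/widgets https://github.com/whatever/widgets main --squash
    	# ...one line per dependency, and per transitive dependency, maintained by hand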
Why don't they scale?
I used them for some years for a team of ten or so people who all first had to learn to understand how they worked so might that be the problem?
The configuration of ssh vs https can break a lot of workflows, the inability to (reliably) track a branch from a submodule can make maintenance a pain, I've never really figured out how to reasonably do a rebase in the presence of .gitmodules changes on both branches, GitHub Actions had broken support for submodules for quite a while... just to name a few.
There's not one giant issue with submodules. There are dozens of small annoyances and footguns. I have yet to see a problem that is better solved by git submodules that can't be solved another way.
Agreed. Compiler toolchains belong in version control. This doesn't make total sense for "libraries" on GitHub. But for anything corporate and closed you should 1000% commit full toolchains in source control. Builds shouldn't use anything from the local environment.
Of course Git kinda sucks and isn't good at hosting toolchains. So I'm putting a heavy emphasis on "should".
> I can see docker in some cases (rarely) for C programming. But devcontainers? That's like reaching around your head to touch your nose.
I thought devcontainers were merely a way of telling VSCode to host your dev environment in a Docker container. I'm confused about what distinction you are drawing here.
> I thought devcontainers were merely a way of telling VSCode to host your dev environment in a Docker container.
Right, which is gross. It gets worse when you start talking about using them practically in an enterprise-ish environment. There, they end up being a less effective Xen-style programming interface. It's too bloated for most cases. The distinction I make is building with docker (for cross compilation or whatever) vs hosting your entire dev environment in a container.
But why is it "gross"? I'd have thought it would be especially useful in C development where headers and other development packages are typically installed globally on a machine - it would allow you to have multiple isolated environments, the correct packages (and versions) in each of those environments, and your editor/LSP/IDE would be able to interact with that isolated environment pretty much out of the box.
I don't really see the difference between just building via docker, and doing static analysis, incremental builds, running tests, etc inside docker. Surely the goal in all these scenarios is the same: a reproducible environment for every developer on the project?
There's a higher friction to working with that isolated environment.
It's hard to take stuff out of a running container; and it's hard to access files/programs on the host from within the container.
There may be advantages to running inside a container, but it largely feels like, when there are easier ways to quickly make programs available on the host (e.g. virtual environments, or the asdf version manager), most tools aim for those first.
Honest question.. have you used dev containers? Because these seem like solved problems.
Bind mounts let you easily move files in/out of the container (and are already set up by devcontainers). And the whole point is to _not_ access programs on the host, you want that isolation so that the environment is reproducible and everything you need to build is defined in the dev container.
It just needs your build toolchain and libs.. you don't need to use the shell from the container to run random unix utils or curl for instance.
Any C app (or even Python app, since Python libs like to depend on C libraries) with non-trivial dependencies gets very annoying to configure across a range of distros (even worse if you include macOS and/or Windows).
`sudo apt install libpng-dev` vs `sudo dnf install libpng-devel` etc.
Rather than document and test all those different configs, devcontainers are a really easy way to avoid this pain for example applications or ones that will only ever ship to one distro/OS. And if you're running on Linux at least, there's literally no overhead (containers are just processes tagged with cgroups, after all).
I'm ignorant about C development and its practices, but installing development dependencies using the distro's package manager has always seemed very wrong to me.
Doing it inside a container solves the portability problem, but you're still using a Linux distribution's package manager to get the dependencies of your project, which makes no sense to me at a fundamental level, even if it "works" in practice.
Is vendoring the only somewhat sane way of doing dependency management in C?
About 10 years ago when I wrote C++ for a living, vendoring was the solution. When you look at flatpak, snap etc. that's effectively what they do.. vendor all their libs.
I would hope that tools like conan and vcpkg solve this now on the developer side? I don't have much experience with them though.
You still have to deal with libc though, which means you likely need to containerize to avoid issues with old versions or distros that use something other than glibc (musl or bionic, for example).
It's a lot more complex in C/C++ to build fully static binaries than it is in something like Rust or Go.
I too am a pedant when it comes to using the word "literally" :)
IMO I'm using it correctly here though, let me explain.
Overhead is originally a business term that refers to an ongoing cost. Yes there is a small amount of code in the kernel that has to run, but my naive understanding is that this code also runs for processes that are not in a container (the kernel still needs to check whether the process is in a namespace or not). Additionally, I've never seen a benchmark that shows a workload performing worse when cgroups are applied. I'm happy to be proven wrong here but if this is the case, then there is no ongoing cost (and thus no overhead).
Why is it gross? Performance issues? It works well* for creating a reliable environment for all developers involved in a project.
* granted I did just spend half a day last week figuring out that WSL environment variables are not correctly applied to the containerEnv, but otherwise they've been solid
I agree about devcontainers. Now you are pushing everyone in the team to use vscode which is bad on its own. I think docker is fine, but I mostly try to stay away from any project that even mentions vscode (an editor should not be part of any project IMO).
I don't get this. If a project has a devcontainer configuration, you don't have to use it - it's just there if you want to use it. Also the devcontainer format considers vscode an extension, it's not mandatory - it's just that vscode is about the only thing to fully support devcontainers, so it's the natural choice (for now).
It really depends on the audience. I find having an opinionated but very easy to get started with setup (like vscode + devcontainers) really handy for juniors, or folks that rarely contribute (they might not if setup is painful). The more senior devs, or those with strong opinions, can still use whatever they want.
Hi, I'm the author of the article. It seems I've confused a lot of people here about what I mean with "C environment". My goal was not to "solve C/C++ dependency problems" as hinted by the use of a plain CMakeLists.txt and not even talking about the selected build environment. There are a million solutions out there and whatever people use is highly opinionated.
Docker vs. devcontainers: You got my point. It isn't about trying to force people to use devcontainers and vscode, it's about maintaining and sharing a development environment. You could just open a shell in this container and swap the base image out for whatever you need (also pointed out in the article). I myself also don't use vscode devcontainers but just exec into the running container and, e.g., use docker compose (or podman or whatever works best) :)
EDIT: The reason for showcasing devcontainers is that you get an IDE with code completion, format on save and all the other goodies "for free", whereas any plain docker or nix setup requires that you do this on your own. In my career, I've seen way too many people editing code in notepad or notepad++, making tons of mistakes that can be avoided by whatever IDE. I'm not saying vscode is best, I'm saying right now it is the easiest IDE to set up.
It is recommended at work now, where I also can use "the old way" which is a series of Makefiles. I've never had a problem with the Makefiles, but they've been hand-tuned for a few decades, and errors would be reported immediately by the build server. They check if you have the right compilers and then just work. Contrast this with Docker, where you always have the right compilers and they only work sometimes.
While using Docker, if even one source file changes while the compilation is happening, the VM will simply hang and refuse to compile or accept any break keystroke. Outside of Docker, a file change sometimes works without error and sometimes you'll cause a minor problem that a new iteration of "make <whatever>" fixes in a few seconds. Worst case? "make clean", "make <whatever>", no reboot required.
After force quitting the Docker-based build, I try "machine stop" and "machine start" but often encounter errors and need to reboot my machine and/or reinstall Docker, such as this gem from last week, "No connection could be made because the target machine actively refused it."
I don't know if Docker has an equivalent to "make extraclean" or "make distclean" or "sudo cut the bullshit && let me use you again", so I'm already uncomfortable restarting a build after rebooting/reinstalling. (That you need to manually delete container directories after uninstalling is also troubling.)

I know some data are stored in the container in a mixed-persistence way that uses the existing filesystem. It's confusing to me why some generated output (.pdb, .exe, .dll) are accessible outside the container while others (.o) are evidently not used if you decide later to rebuild without Docker. Thus, the difference when switching is often recompiling 5 files in a few seconds vs 500+ in a few minutes, so there's a short-term incentive to try to keep using Docker, where I then waste time trying to solve Docker instead of just recompiling with the old method. Any semblance of "Flow" I have after getting a Docker error gets annihilated.
If our local container server goes down (it's not often but has happened), I am unable to build because some of our dependencies aren't FOSS/widely available.
I've had issues related to services, prerequisites, permissions, mounts, certificates, massive network login delays... When I hear the constant praise for containers, it gives me pangs of imposter syndrome, because my experience has been awful and mentally taxing. I find I'm constantly fixing tools instead of using them.
I can't necessarily blame Docker, but twice in the past 3 years I've needed to install a fresh OS within a week after installing it because of the problems above.
Docker's main selling point? It fixes a problem we don't really have. Every developer at my company has essentially the same computer---an overpowered Ryzen/Quadro combo in a black tower with a small collection of preinstalled essentials like MSVC bintools, VCS software, network drives preconfigured---unless they specifically request another for build purposes, i.e. Mac or Sun. In the latter case, that developer (or small group of developers) is in charge of builds on that host OS. In the one place it matters, we have one version of a compiler expected in a specific directory, and this is well-documented and changes once every few years.
It's possible I completely misunderstand Docker or have a misconfigured system (whatever THAT means, I thought the whole point was to eliminate problems caused by local customizations), but for anyone to tout it as fool-proof would mean they have severely underestimated the technical (in)abilities of fools.
Docker, to me, often feels like bringing in and using a CNC machine when all I usually need is a sharpening stone and chisel.
-----
Also, I would be quite uncomfortable using GitHub Actions to compile C for a microcontroller. There's simply too long of a delay between changing a config YAML and getting an error, fixing it, waiting for the next error. Plus, despite the "low overhead" of Docker, if you do this long enough on GitHub (especially with a non-Linux host OS) you will run into server fees.
And after you set up a GitHub Action, are you writing in VS Code, waiting for the remote computer to compile, downloading the artifact, syncing that with VS Code so you can debug, flashing to the target with your own set of (locally installed) tools, and finally debugging?
Contrast that with a local install... Edit. Save. "make", "make flash", gdb. In 20% of the time, with no server costs.
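For what it's worth, a hedged sketch of what a "make flash" target can look like, assuming an OpenOCD-compatible probe (the board config and file names are placeholders; substitute your vendor's flashing tool):

    flash: firmware.elf
    	openocd -f board/st_nucleo_f4.cfg -c "program firmware.elf verify reset exit"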
Having spent a decade building embedded C, it's refreshing to see that I'm not the only one who sees the value in doing things well.
Every single one of the steps here, though they may seem excessive, provides immense value - not at first, but as the project grows, matures, and needs to exist for decades.
I am rather surprised at all the negativity here. I assume that either few of the people commenting have worked on embedded systems, or the few that have don't have an appreciation for maintaining a large project for the long term.
Attitudes like I'm seeing expressed here are reinforcing my desire to get away from embedded, or at least work with people that don't wish to stay stuck in the stone age.
I'm not intending to argue here, just surprised and a bit sad at the reactions, and happy that some people, at least at memfault share my values.
> Attitudes like I'm seeing expressed here are reinforcing my desire to get away from embedded, or at least work with people that don't wish to stay stuck in the stone age.
I hate to break it to you, but there’s the same problems on the SWE side too. The problem is never the stack, it’s the culture. If you’re tired of embedded, that’s fine, but remember to ask culture questions for your next position.
> I will definitely take that under advisement. Any places you recommend I look into?
Ask yourself: what do you want? What do you want in a leader? What do you want in a company? Be picky, be patient, and if you can, maybe start something yourself. I wish I could tell you what would be the right fit for you, but I can't. All I can say is don't lose hope, but at the same time don't expect too much either. We're all just trying to figure out this thing called life, whether we're embedded engineers, SWEs, or even sales for that matter.
I think the article is great too. There's a complete dearth of decent guides for how to set up a C/C++ environment with dependencies, including some sort of build system, and it's often been my biggest stumbling block. Everybody assumes you know how all this works, and most guides are either way too basic ("just do make", like the comment below) or highly technical documentation that overwhelms anybody who isn't already familiar with C development.
> There's a complete dearth of decent guides for how to set up a C/C++ environment with dependencies, including some sort of build system, and it's often been my biggest stumbling block.
Disagree. I had to learn make for my first job. I found two ways to manage dependencies that are probably the most common, and the limitations. There really isn't a method that is perfect; I don't know if dependencies can be done (even outside of C) without occasionally rebuilding.
The methods that I've seen rely on using the compiler to generate .h dependencies via the preprocess command. This can be done with gcc / cygwin for cross compilation, or with the native compiler. I'll explain.
Method 1: the "better" method. Generate dependencies for your header files at make invocation, before the compile step. This is more robust, but outside of very small projects, this can add a ton of time to the build.
Method 2: the faster, more common method. Generate dependencies as a side effect of the build. When you compile, you use the previous dependencies. A new dependency file is generated for next time.
They both have limitations. Method 2 can detect changes in the headers included directly from C files, but not always changes in headers included from other headers. It can detect changes further down the include chain, but not all the time - I think it's that when a header's own includes change, those aren't picked up until the next build, but it's been a while since I've read the paper.
Anyway neither method is that hard. You use gcc to generate dependencies for each file, then combine into one file (or you may create .d files, but for some reason a single file is common). That's it.
Otherwise, add a step to compilation (some compilers have a built-in "byproduct" option). Every time you compile a .c, a .d is also generated. When you compile, you use the previous *.d file.
Yes you need to know make, and this is an intermediate-to-advanced make task, but this is true for any build system! Entry level is a user that usually just adapts a build from the engineer experienced with the build system.
I've seen very good guides, I wish I had one on hand. I saw both of the examples first hand and assumed these were the "main" ways to get the job done.
For a very small project, the generate-dependencies-first method was used. It's more complete. It still added an annoying amount of time to the build, ~20-60 sec.
For our main project (which also might be considered fairly small, relatively), this method ends up being too time consuming. So the generate-dependencies-as-a-byproduct-of-the-build method is what's typically employed: it's used by a completely separate team and by a competent supplier, and it's recommended by our compiler company.
Honestly, once you know, you're golden:
The generate-ahead method can be done with gcc (I think g++ is typically employed) via the preprocess command. You create a rule to generate all the *.d files first, and combine them into one file (this may not be necessary).
To generate as a byproduct, you simply add a command to your compile rule.
That's it. Can be described in one line. I can't recall offhand, but you will need to figure out how to include the dependency files and the formats may need fine tuning. This isn't asking very much.
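Both methods really are about one line each; a hedged GNU make / gcc sketch (flag spellings per GCC, other compilers differ; SRCS is assumed to hold the .c files):

    # method 1: generate .d files up front via the preprocessor
    %.d: %.c
    	$(CC) $(CFLAGS) -MM -MF $@ $<

    # method 2: emit a .d file as a byproduct of each compile
    %.o: %.c
    	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

    # either way, pull in whatever dependency files already exist
    -include $(SRCS:.c=.d)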
I saw a really good document out there, and interestingly our supplier copied it almost verbatim. Our build ended up being somewhat similar. So that sweet, sweet doc is out there, you just gotta find it.
I spent more than a decade building embedded C/C++ and never felt the need for any of that. But I was always able to keep things under my control: that is, a GCC-based cross-compilation toolchain, my own makefiles to build stuff, and an editor I understand inside and out. When you are forced to use a proprietary IDE/SDK/compiler ball of mud, that's certainly a different story.
I'd say if one has the sort of problems TFA says it addresses, then go for their solution. But if one does not see any problem a priori, then they should go for the simplest thing that could possibly work. One can always reconsider things later; but if one goes for the more sophisticated solution first and then realizes that it was actually not needed, they've wasted their time - and perhaps introduced problems that would never have happened otherwise.
Don't be fooled by "modern" anything - too often it introduces new problems while solving old problems. "Stone age" can also mean battle-tested and rock-solid solutions.
> appreciation for maintaining a large project for the long term.
Nothing the author describes is applicable to a large project or a long-term project, or both.
He chose one of the worst editors available, and decided to incorporate it into the environment setup. Any long-term or large project will have multiple people working on it. A fraction will find the crappy editor the author decided to use useful, but most will want something better. Most people with development experience will want to set up their environment in the way that's comfortable for them; this setup is asking you to jump through too many hoops, none of which bring any value.
Same goes for CMake -- I've worked on multiple large C projects. None used an off-the-shelf build tool like CMake. These tools are inadequate for large-scale projects. (But let's give the author credit here: he never claimed that his project was large or long-term.) Still, I have never found CMake to be useful, neither for small nor for big projects. Whenever I had to work on a project that used CMake, it was a major pain.
It is common to test stuff in containers during development. But it's also more common the lazier and less insightful the programmer is / intends to be. In my experience, better programmers usually set up their environment in such a way that they don't have to deal with the container nonsense, as it gets in the way of debugging and of a bunch of other tools useful for interacting with and understanding the program being tested.
So... maybe containers for a smoke test. But, if you plan on going long term... that's just not going to cut it. You need a proper environment where you have comfortable access to your program.
Having bounced around very different things a lot, the NodeJS community feels the nicest. It's not about flexing your tooling, flexing your lack of tooling, or dunking on other languages, it's about getting stuff done by whatever means necessary.
At the risk of being featured in a future episode of "Hacker News reacts to (Dropbox|iPhone)"...
...a Makefile and vim (or emacs, or even nano, I'm not going to judge your kink) are fine. If they are not fine, then C is probably not the right language for the project.
I did not read this as gatekeeping, rather I read it as an opinion on the suitability of C for kinds of project. If vim + make isn't enough, then C is probably not the appropriate language. Not that you shouldn't use C if you don't use vim + make.
We always talk about how good engineers "choose the right tool for the job". I don't think expressing an opinion on what the right kind of job is for a particular tool should be out of bounds. (Setting aside whether the opinion is correct or not.)
Editor preference was not intended to be the focus of my comment, it was more me being incredulous that someone would build something in C which necessitated Docker.
If someone wants to do that anyway, I am going to be perplexed that C was the language choice rather than golang or rust, and perhaps worried about C being a footgun WRT security, but whatever.
Yeah, the Docker part makes sense. You don't need it, but it can be nice. Don't want mundane differences between devs' setups to get in the way, especially when they're not all the same OS.
That's usually not the case. You work on existing code base and you want your editor to be able to explore it efficiently. Changing language is rarely an option unless you happen to enter a project at the very start.
It is gatekeeping. The poster was not offering advice, they were being actively discouraging. You can argue shades of grey, but at the end of the day this is an unsolicited discouragement, and an arbitrary out-grouping.
The fact that you don't see this as problematic, is problematic.
There is no gatekeeping. There is a suggestion that C is best used for projects that have builds no more complex than a Makefile configuration. There is explicitly no ~judgement~ limit on what ~kink~ text editor one may use to edit the files.
the fact that they don't see it as problematic means that they didn't misinterpret the comment the way you did, which is because they know things you don't
thus hegemonic institutions reinscribe the kyriarchy generation after generation by perpetuating a hierarchical opposition between "knowledge" and so-called "ignorance", which is socially constructed to be inferior, less than, but which serves the purpose of distinguishing the graduates of elite institutions (to which access is mediated by the financial privilege of the bourgeoisie) from the subaltern underclass; thus "knowledge" serves the interests of capitalism by reproducing oppressive class divisions, enabling the continued exploitation of the working class
what could be more problematic than that?
but if you're interested in learning something and not just playing word games to gain status by putting others down, i problematically mansplained what you're missing in https://news.ycombinator.com/item?id=37083233
The proposed solution of a text editor and makefile is a lot easier to setup than the what's proposed in the blog post, I would call this the opposite of gatekeeping.
I get that, but even figuratively, it's a relatively harmless opinion that can only achieve as much by way of influence or control as readers are willing to allow the anonymous commenter to hold over them. There's more to learn if we confront the opinion by its premises (as shown by other responses to the comment) than by subjective moral grounds.
The term has become common on the web to refer to enthusiasts trying to control how people enjoy/use a term or participate in an activity - for instance "real fans only like the stuff from before $album" or "only filthy casuals play the game that way" (or "you shouldn't use C if you want modern, good tooling"). This might be worth reading through:
If vim and make are spooking somebody, they need to turn tail and run the fuck away from C. I love it to death, but it's a frustrating experience from another era. vim and make are the least of your worries when dealing with it.
> but it's a frustrating experience from another era
Are you using C99 features? I find the "new" features extremely enjoyable, it feels like a different language compared to C89 or the common C/C++ subset.
Started coding in 1986, used my first UNIX (Xenix) in 1993; the only thing I care about with vi is knowing enough to rescue me when there isn't anything else installed to edit files.
Yep, plenty of really good windows/macos programmers have never touched vi or emacs. I'm not sure why users of those are so elitist. I tend to prefer vim editor keystrokes in my editors but that's just because I got used to it from college and terminal editing.
they're saying that you don't need to be comfortable with ides and fancy debuggers and cmake and language servers and game development engines and ci pipelines and all kinds of complicated stuff to write c successfully
a bare-bones text editor and the most minimal build system are plenty
and if they aren't plenty, they're not saying that's a problem with you, but with c
i don't agree (ctags, valgrind, git, and gdb go a long way towards making c usable, and evidently c is the best language for a lot of things even ctags and gdb struggle with, like linux kernel drivers, and cmake evidently helps a lot if you care about ms-windows) but that's what they said, and you totally misunderstood them because you somehow got the idea that vim and make are some kind of super advanced tools rather than relics from the 01980s
they're maybe a bit unprepossessing at first glance but mostly what they are is simple and primitive
think of using a hammer rather than a cam-driven turret lathe
you can go lower tech than make too now that cpus and c compilers are so fast
    while sleep 1; do # ci pipeline
      gcc -Wall -funny -mtune=i69 *.c -lm -liberty -letmypeoplego -o proggie && # build system
      ./proggie --run-tests # test runner
    done
c compiles fast enough that this scales to several thousand lines of code, c++ very much does not
of course you need a testing framework
    if (!strcmp(argv[1], "--run-tests")) return run_tests();
now i'm not saying you shouldn't write a test runner in unity and distribute your ci pipeline with zmq and mqtt and whatever the fuck. better ux is worth my weight in gold, and i have programmer gut. also zmq is metal as fuck
what i am saying is that the difference between no test runner and an infinite loop in bash is much bigger than the difference between the bash loop and circleci or gitlab pipelines. so don't be intimidated by articles like this which make it sound like you need a team of phds to set up a test runner. writing tests and running them is what helps, not so much stylishness
except for version control. a build script in shell is a serviceable alternative to make, but cp -a proggie/src snapshot.$(date +%Y-%m-%d) is not a serviceable alternative to git
also if 3d test runners with inotify and particle systems with custom shaders mean that people write more tests and see the tests fail sooner after they break shit, that could make a real difference
I'm not a he (some other folks used "they" which is fine), but this is otherwise a pretty good interpretation - it absolutely was a dig at C, not elitism.
If you're going to learn a bunch of modern tooling and start a green field project that justifies that complexity, C is generally a poor choice. Learn a modern language.
I use C somewhat regularly, including for kernel stuff, embedded, and legacy code.
Mostly, though, when I use C, it's because I'm doing a small thing that I need to be very fast, and I haven't yet bothered to get comfortable with Rust.
oops, i'm sorry i misgendered you. i think i have fixed it now, but now the editing window has closed
on your other points i mostly agree, except that if i write a library in any popular 'modern' language, it can only be called from that language, which seems like a missed opportunity
and when i went back and compared development time logs, the development speed advantages of modern languages seem to be only a factor of 2 or 3 over c, once i get beyond a few hundred lines
which i guess is why linux, firefox, cpython, gcc, apache, poppler, libvte, and so on are written in c or occasionally c++. it's not because the authors didn't know about common lisp, scheme, ml, smalltalk, and so on, or couldn't figure out how to write a garbage collector
rust and some other unpopular modern languages look like they might change that situation (nim, zig, koka, a couple of others i can't think of right now)
You can write C however you want; that doesn't mean it's the right language for the project. My team uses C++ a lot like Java, and it turns out Java was the better tool for our use case.
They don't, because the speed at which we have to implement new features means cutting corners. We write backends, none of which are performance-critical. A lot of faster algos that would take too long to implement in C++ get skipped, so in the end it's slower than partner teams' Java code that does similar things, but again it doesn't matter as long as nobody notices the latency.
This is not a matter of taste. VSCode is an alien in the C world. It's clumsy, demanding and offers plenty of useless features, whereas the simple stuff is hard / impossible to get to work.
If you work on a C project, you need to be ready to edit source code, at least minimally, from an environment that doesn't have a GUI. You will have to interact with pagers, man readers, and readline a lot -- it's a lot more convenient if your editor works the same way as those tools.
You are just creating unnecessary problems when you use VSCode or a similar editor. And I cannot think of a single benefit that would come from using it. Among other things, it's just a crappy text editor... the only thing it has going for it is that it's a Web browser application... which is kinda worthless when it comes to a typical C project.
If you're using vsc then C is absolutely not the right language. If your code can't fit in your head when you're writing it then you have no hope of debugging it. Vsc very much encourages your code to grow as large as the machine you're writing it on can handle.
So, let's say we have a project. For whatever technical (or social, historical, religious...) reason, C is chosen as the right language.
Why does that imply that the programmers on that project should therefore write that C code in a terminal, with no linter, code formatter, static analysis, test runner, etc.?
The point that I think everyone else is missing is that C is actually hard to work with and write correctly. If your project is sophisticated enough that you can't get by with basic tools, then maybe C isn't the best language for your project.
I think this idea of "basic tools" is highly debatable, especially with developers who are young enough that heavy IDEs like Visual Studio were their intro to coding (or, similarly, people who enjoy programming but aren't invested enough in the entire software stack to really care about knowing their environment particularly well).
E.g. while I enjoy understanding all the layers of software in a computer and am perfectly comfortable with vim and a makefile (although I prefer vscode and cmake), most of the other developers I work with are just as competent with C, but can't really function if they can't use Visual Studio. To them Visual Studio is the basic tool and vim+make are advanced tools.
I meant "basic" in the sense that it does not have a lot of helpful features for developing complicated applications. Whether or not that's easy or hard for any individual to use isn't really relevant.
That said, Vim has some IDE-like features, moreso for C (by default) than some other languages.
Again, the point is that C is complicated and hard to work with as you scale up into a bigger project, not that my tools are better than your tools.
more gatekeeping. If I'm using C I want to use modern tools like every other language. If my environment doesn't have a debugger, tools like valgrind, linter, etc I don't have any use for it.
20 years ago, I worked on a C and C++ project that supported a billion dollar business. Almost everyone used vi. They used none of those other tools, and there were maybe a half dozen unit tests in the whole project. I sure hope times have changed.
At the time, I was a junior engineer, barely out of school. This company was run, technically, by people with 30+ years of experience. They preferred manual testing.
Also, I'll add that automated testing was definitely not as much of a thing in 2003 as it is in 2023.
A project manager, probably. "Advocating for XYZ" costs emotional capital that I'm not necessarily willing to devote to a particular project, or to my employer at all.
Weird strawman. Those tools all run on terminals. IDEs use tools that run on terminals.
Teams can use a combination of different IDEs and run the tools at different levels or their pipelines, which necessitate that they be automatable (and not GUI only).
You're missing the point, they aren't saying "your development process should never involve a terminal" but rather "you don't necessarily have to use this bare-bones setup which happens to be terminal based"
Docker is a godsend when you're a solo embedded consultant and a former customer comes to you five years later with an urgent firmware feature request and cash in hand.
If you don't have the exact machine any more, then it's "Good luck getting the toolchain and build environment from the vendor!"
Docker has saved my bacon at least twice in this manner.
I'd consider a VM, and I've used them before, but I've found Docker gives the advantage of forcing me to be thorough in my toolchain docs, and portable to CI pipelines.
chroot sounds like nothing but a headache.
QEMU VM, I don't even understand the potential advantage of lol
QEMU for emulating different hardware I guess. Actually I was wondering how you test firmware repeatably without hardware emu; never dealt in firmware myself.
Would a five year old Docker image still work? I mean, probably, but for long term maintenance of a build environment, I would feel significantly better with a VM.
I have Docker images older than that I use regularly.
For things where kernel changes matter, sure a VM may provide better isolation, but should you ever run into problems running your older Docker images, you can easily solve that by spinning up a VM to run Docker in as a worst case fallback.
Docker strikes a nice balance of ease of integration and sufficient isolation for most situations.
I write a lot of pure Win32 C, and I don't even need a Makefile for the majority of my projects because they're just compiled as a single file and quickly enough that I don't bother with anything more than necessary.
IMHO the "tool fetish" that a lot of (mostly newer) programmers seem to have is an artifact of a mentality that favours complexity and novelty over simplicity and efficiency. They will waste tons of time and resources configuring and debugging (often with little understanding) huge complex monstrosities of "development environments", and end up feeling more productive, but in reality aren't.
This article just further confirms what I'd already expected with the word "modern".
For a lot of small and even medium-size C projects, one file is enough. IMHO having dozens of tiny files with only a few dozen lines each is an antipattern.
How many LOC is "medium" in your view? Most devs I work with would only put a thousand lines of code in a single file (obviously subjective, depends on density, comments etc). I'd still consider that a toy app, as opposed to my medium mark at about 100k LOC. At that stage you certainly benefit from all the tooling you seem to eschew. "Large" projects (the likes of Linux and Chromium) have so much mass they tend to have fully customized build systems.
I wrote my first Java code in Notepad and compiled it from command.com (as Windows 9x still called it). It was... fine, I guess? For the era? Not what I would want to do but far from unmanageable.
I used to do that for years, and then I discovered gdb and IDE integrations with it (code blocks, codelite) and later visual studio. I have no idea why anyone would subject themselves to developing without breakpoints in 2023 other than to larp as an 80s MIT hacker
I started writing in C with an IDE then realized much later on that I don't really need breakpoints, I just log stuff. Whatever I'm testing often won't easily work in the debugger (or won't reproduce the bug there due to timing), and even when it does, it's not significantly easier than logging.
Also, every new job I've had, I've watched my coworkers spend like a week trying to figure out how to make the IDE work with whatever environment, which then changes later... I just skip that.
Debuggers have logpoints as well as breakpoints. Learning to use a debugger grants you access to this kind of basic "log-debugging" with the option to use more advanced techniques (traditional breakpoints, break on value change, etc) at your fingertips.
> Also, every new job I've had, I've watched my coworkers spend like a week trying to figure out how to make the IDE work with whatever environment, which then changes later... I just skip that.
A single week to achieve better productivity? That's a sweet deal. It's about as much time as you need to figure out how the work email works, learn how to get to the cafeteria from your desk quickly, and so on. It's absurd to think that you'll be 100% productive that first week anyway, so why not use the opportunity to familiarize yourself with the tooling as well?
If it were a week to actually get better productivity, and that setup never broke later, then sure. It doesn't really help because of how few things are actually runnable in the debugger (basically just unit tests). Nobody else on my team even uses the debugger, they only use IDEs out of comfort, and so far they've had to switch IDEs 3 times in 3 years.
Past jobs had similar caveats. The only time I've ever been able to use a debugger consistently was in school.
Personally, with Visual Studio and vcpkg, I never have issues with setting up the environment. Vcpkg in particular makes it easy to not have to do manual linking, and it handles x86 vs x64 automatically as well.
But in all seriousness. Using a debugger can be useful, and even though I've given it a try numerous times, I mostly avoid it because it doesn't fit the way I think.
Debugging is about searching for the source of the problem. With print debugging I'm always leaving behind breadcrumbs which I can inspect all at once at the end. If I'm going down a wrong path I delete the wrong ones and add new ones until I find the issue.
With a debugger I have to mouse around, put breakpoints, run the code, inspect where I'm at, step through, decide that this breakpoint is useless, and have to have all this state in my mind. If I get lost, I have to start from scratch.
Print debugging matches my way of thinking much more.
Logpoints! They're like breakpoints, but they don't stop execution. And they don't require whatever you're writing C for to have any sort of console output. Of course you do need to have a debug port enabled, which often isn't the case for production hardware so you get stuck printf debugging by blinking an LED and probing it with a logic analyzer. Royal PITA.
There are small projects, hobby projects, and large industrial projects. For hobby projects in C, I am myself very happy with a shell, make, a text editor, git and valgrind.
Imagine developing an embedded device for medical or aerospace hardware: you are going to put a lot of effort into testing. You are going to work with teams of people with varied abilities and experience: it's going to get a little bureaucratic, there are going to be rules and guidelines. Enforcing those with tools removes part of the friction, if done well.
Avionics code isn't written by hand any more. We (society) can't write sufficiently reliable C such that pilots and passengers can rely on it. Writing C by hand these days is hubris, in my opinion.
Most avionics these days are done in Simulink, and then you hit the Autocode button.
A big usecase for C is embedded systems projects where wrangling cross-compilers, debug dongles, etc. can become a big headache especially when trying to keep multiple developers' local toolchains in sync or when managing multiple projects with different requirements for the development environment.
PlatformIO is a dream for this, running build on a project automatically downloads the required toolchains for the target and orchestrates the build process for you.
That reads almost like a disparagement of GCC/Clang; the reality is you can just build on pretty much any host system directly and it will work. Having clean and repeatable builds is extremely useful, and Docker is a reasonable way to do that, but that shouldn't mask the importance of understanding the native runtime dependencies of the program once it's used outside of that one container. A concrete example: don't compile a C program in an Alpine Linux Docker image with dynamic linking, or you won't be able to run the resultant binary on a RHEL machine. This is in contrast to languages which run atop a virtual machine or interpreter, where the details of the build environment rarely matter on the runtime machine.
It's a disparagement of the fact that they're dated, rather than of the software itself.
GCC searches by default in /usr/include and friends, (and a similar set of paths for library paths), meaning that one random library you downloaded 11 years ago is now perpetually on your search path.
I write a decent amount of go, and it's not wildly useful there either but I still use them as the rest of the tooling is good enough that it works for me.
> If they are not fine, then C is probably not the right language for the project.
It's for C/C++ as author says, not just for C. And even if you're using Qt and write mostly QML you still need some C++ and it's much easier with code completion. I'd rather use VSCode than Qt Creator for that and I'm certainly not going back to vim.
I once had to compile cmake from source (the cmake version of something I needed to build was lower than the OS version of cmake). Not a fun afternoon.
IMHO any proposed solution (for any programming language) should also include a step-debugger that's directly integrated into the edit-compile-test workflow, otherwise people don't know what they are missing out on.
I write a bunch of C for various projects of mine and i do it in an editor with some C understanding (it varies by which, but the latest combo is Kate with a Clang-based LSP, however i've also used "plain" text editors with barely if at all any understanding like Notepad++ and Geany, both of which give you at best word completion with words they picked up from the buffer without actually knowing what they mean).
I also write Free Pascal using Lazarus, an IDE which actually understands the language at a deep level.
The experience writing the latter is way *WAY* better than the former and my opinion to that isn't "just write more in Free Pascal" but "i want an IDE for C that is at least as good as Lazarus is for Free Pascal".
I want a C IDE that, among others:
1. Can understand the code so that when i, e.g., want to rename the identifier "foo" it doesn't try to rename it as a keyword but knows if it refers to a local variable, global variable or struct member and only update references to that.
2. Can understand the code so that if i use a symbol defined in a header file that the current module doesn't include by itself it can put it in the includes section automatically (or at least ask me to) instead of waiting for the compiler to fail. If there is some conflict (e.g. because the symbol definition depends on macros or whatever) it should be able to understand that.
3. Can understand the code so that it can move structs, functions, etc around in modules and update header files and their uses in other modules automatically.
4. Can understand the code so that it can convert a struct defined in a header file to an opaque struct and vice versa, for the former being able to tell me which existing code would break and offer fixes (e.g. automatically creating accessor functions for the members).
5. Can understand the code so that it can expand macros in place visually, allow me to edit the macros expanded in-place while actually modifying the header file (or wherever the macro is defined) and also quickly tell me where the macro is used in other places.
6. Can understand the code so that it can convert code to functions, expand functions inline (with any local variables placed at a decent position and any conflicts handled - for example, if the inlined code uses "for (i=1; i<10;" and the existing code also has an "int i", it can ask whether to reuse the "int i" or rename it, with heuristics that provide decent defaults), add/remove arguments (with automatic updates wherever they are used), etc.
7. Can understand the code so that one can create queries like "replace all string literals to non-static functions whose name matches the '^foo_.*$' regex with calls to macro 'TXT' and the same string literal as the parameter to macro".
8. Has a debugger that works with all the basics (breakpoints, step in/through/out, watches, etc) and...
9. The debugger can modify a variable while the program is running.
10. The debugger can make function calls while the program is running.
11. The debugger can modify a function while the program is running (any new calls are done to that function).
12. The debugger can watch data over time and be able to display values in various means like various graph types.
13. The debugger can tell you where (in code) some use-after-free was originally allocated and then where it was freed and where it was used, all with nicely shown arrows, hyperlinks and graphics directly inside the editor instead of you having to manually parse (in your head) some callstack. Similar for other types of errors like accessing invalid pointers, array data out of range (should tell you both the range and the accessed index where that can be statically inferred), etc.
14. The debugger can put conditional breakpoints which:
14a. Can call functions defined in the program.
14b. Can check if the breakpoint comes from a specific callstack (e.g. break if function "foo" is called from "bar" but not from "baz").
14c. Can access local variables and arguments up the callstack (obviously the breakpoint will only break where that is possible).
15. There is a profiler that works and...
16. The profiler can create call-based profiles (think gprof) and statistics-based profiles (think perf).
17. The profiler can record the full callstack instead of just the function name.
18. The profiler can create full call diagrams (see Luke Stackwalker[0] as an example) as well as flamegraphs (see some perl script for perf i don't remember).
19. The profiler can keep track not only of *where* but also *when* a sample was taken so it can create profile timelines. Actually, just have it do everything a profiler i wrote some time ago for Free Pascal[1] did, including being able to be instrumented by the running program, filtering by thread, call stack depth, etc.
20. The profiler can call functions in the profiled program to create additional profile (e.g. if you want to count how many files are opened or how many rays your raytracer is casting or whatever so you can use the full profiler functionality instead of hacking up your own)
21. There is a static analysis tool that works and...
22. ...works like whatever Xcode's integration with Clang's analyzer is, i do not have much experience with those but i remember using it a couple of times years ago and thinking it was neat. I haven't seen any other graphical integration of a code analysis tool, pretty much every other analyzer feels like compiler warnings++. More of the graphical approach and less of the warnings++ approach please.
23. There is some form of project management / build tool / whatever that the IDE uses and...
24. The IDE can let you setup various "configurations" for build options (including preprocessor defines, which files/objects/libraries are to be included, etc) that can be mixed and matched at will (e.g. a "lightweight" and "full" configuration set could be mixed with a "win32" and "linux" configuration set with the latter relying on the "unix" configuration and none of those would need you to duplicate any information).
25. The IDE can know about libraries, be able to locate them as well as place them in appropriate locations when you are building a library. Libraries should be able to be built as both statically and dynamically linked if that is needed. If a library A relies on another library B and a program uses library A it should not need to also specify library B too - in the exceptional case where that is needed (e.g. there are alternative versions of library B) it should allow that but it should not be necessary.
26. Setting up the above should not need to be done via arcane text files but via a nice GUI that doesn't hate the user - e.g. using a library should not need you to type it's name but allow you to check next to its name in a listbox with checkboxes (with name filtering and categories). It should also know about different system libraries for the same thing (e.g. OpenGL in Windows, Linux and Mac is accessed via different ways). This should be configurable, not hardcoded - a custom library should also be able to use that functionality. Note that all of this can be stored in text files (e.g. for easier VCS support) but not needed.
27. More of a #26b but i think it deserves its own point: in addition the IDE should have deep knowledge of what a library offers and like #2 if it knows a symbol being offered by a library it should automatically add it to your project's requirements.
28. Again more of a #27b but also a #2b: it should be able to clean up any unused stuff (either automatic or explicitly). In total i should, e.g, be able to use OpenGL by typing "gl", pressing the auto-complete key (ctrl+space or whatever) and have the IDE suggest all the "gl" prefixed functions like "glClear", then once i select "glClear" (or whatever), the IDE adds the #include <GL/gl.h> header and the libgl library in the dependencies. If i press Ctrl-Z to undo that, the header files and library dependencies (if added) will be removed. If i press Ctrl+S to save or some other key to cleanup the project (depending on the configuration), any unused headers and libraries should be removed.
I could write more but basically i'd like an IDE (i haven't touched topics like VCS support and how i'd like to be able to see and use different versions of the code from inside the IDE - like e.g. go back in time to a different function while the debugger is running - or anything that has to do with GUIs) that is smart and helps me get rid of all the manual drudgery.
(also FWIW little of the above is provided by Lazarus and i'd also like Lazarus to do all that where it makes sense but it still does more than pretty much any C IDE i've used)
I wrote some simple C a while ago and my go to IDE (which ostensibly says it supports C/C++ out of the box) wasn’t working, and since this was just a dabble, I was really not interested in fighting the IDE.
And, mind, I haven’t touched C in 20 years.
So I fired up Xcode. And, boy, that was easy. For my silly thing, I fat fingered and left thumbed my way to success.
No doubt Xcode has its critics and limits, but for my 3 hour project, I got to focus on my code and not the IDE.
I have another, more substantial pure C, not Mac specific, project I’m thinking of starting, and I’ll go with Xcode until it fails me.
>No doubt Xcode has its critics and limits, but for my 3 hour project, I got to focus on my code and not the IDE.
Xcode has to carry a lot on its shoulders these days - C, C++, Objective-C, Swift and a lot of GUI editors, so it's impressive for what it is.
But it can easily support simple C/C++ projects; I've actually developed apps there and then run them on Linux, sometimes with the odd #ifdef to make them compile on Linux, but the dev experience in Xcode is better for me than anything I can get on Linux.
Really. I'm not kidding. It's all there. Not for everybody but /clearly/ very much for you. (Unless you move the incredibly complex goalpost you've set up here).
It has a learning curve to configure and to use but if the above 28 is what you want as you say you /can/ have them all.
All of them? Including things like modifying a function while the application is running, either directly or using an old version from VCS, and things like knowing all the available libraries, the symbols they export and what header files to include to use them, and then being able to do that "type gl, press ctrl+space (or whatever), have glClear (or whatever) show up and have the IDE automatically add #include <GL/gl.h> at the top and 'libgl' as a required library on Linux, 'opengl32' on Windows and 'OpenGL framework' on Mac OS X with Ctrl+Z undoing all that stuff"?
If so i'd be impressed - and TBH ~17 years ago i did use Eclipse CDT and found it superior for C and even C++ code compared to what i saw people praise MSVC for (though MSVC's debugger was better than the GDB front-end functionality Eclipse CDT provided), so i don't think it is unlikely. I do remember it having some neat integration with Trac back then too. I mainly stopped using it because the computer i had at the time struggled to run both Eclipse and Mozilla at the same time :-P. But ~17 years ago i always praised it, with the only annoyance (resource usage aside) being that weird "workspace" approach, but that was minor.
I didn't think much about it because it seemed to have fallen out of fashion since then so i thought it is just limping along.
I think i'll be trying it out then :-) perhaps it is time to go back to Eclipse CDT if that is the case.
EDIT: eh, tried it, not impressed. First of all Eclipse crashes or hangs almost every time i try to edit the settings.
Then i tried to make a test library, "librola" (random name) to be used by a program "rolathing". First i made the library using the managed C project with the "Linux GCC" toolchain that was already available. Typed some methods "int rola_init" (dummy, sets some static initialized variable to 1), "int rola_gimme" (returns 42) and "void rola_shutdown" (sets the initialized variable to 0).
1. I forgot to mention it in the above but Lazarus and other IDEs already have this anyway and it seems Eclipse CDT also has it, except it doesn't work: i tried to have the IDE generate the bodies for these functions in a C file (that didn't exist yet) using the "Implement methods" command that seemed the most appropriate (and i couldn't find anything else anyway) but that didn't work. I thought it was because there wasn't a C file so i made an empty one with the same name as the header file but again it didn't work - even after adding an #include in the C file for the header file it didn't seem to match the two. So i typed these by hand even though i believe it should either use the C file with the same name as the H file (a common use) or at least ask me.
2. I made a C program with managed C project and added a new "main.c" file to it. I typed "rol" and pressed ctrl+space to see any suggestions and there were none, so it basically failed #2 and #28 above. I typed the code manually, used printf to print out the return value of "rola_gimme" and even after saving there were no errors or warnings at all even though the C file had no include files at all.
3. Tried to compile and as expected, it failed. There were no quick-fixes or anything else i could do from inside the IDE - there was an "info" message that i should include stdio.h to get the printf definition but the IDE wouldn't help there by adding it itself, i had to manually type the code. And TBH most likely that message came from the compiler and the IDE was oblivious to it anyway. Obviously no message about which file to include to get the rola_init, rola_gimme and rola_shutdown definitions, let alone doing those for me.
I lost interest at that point because it not only was unstable but also failed at the first thing i tried that would show the IDE really understood the source code. Note that this whole "automatically add header files" isn't even something i came up with, i remember a trial version of C++ Builder i tried years ago to do that and while i didn't check to see if it only does it for the header files that come with it or also for your own libraries, the latter would be an obvious thing to do anyway as the library was already part of the workspace and thus known to the IDE - and Eclipse didn't do either anyway.
But just in case, i tried to at least try the debugger so i thought to add the reference manually - which i did but Eclipse crashed every time i pressed the "Apply and close" button. However restarting it seemed to have the setting applied anyway so i could at least try the debugger. And at the end i couldn't figure out how to convince Eclipse to use the shared library project next to the program project, no matter what i tried (i'm sure there is a way but IMO all i should have to do is, as i wrote in #26, check the library's name in a listbox with all the available libraries - and FWIW there was such a thing but it did absolutely nothing, Eclipse only started realizing there is a library somewhere when i started adding in paths, etc, manually but i failed to make it build).
So at least i wrote a "int foo() { return 42; }" (in more than one line :-P) in main.c, put an endless loop in "main" and ran the program under the debugger. The program wrote a bunch of "42"s to stdout so i put a breakpoint on "return 42;" and modified it to "return 48;" - then continued and... nothing happened, it kept outputting "42". I tried to rebuild the "current source file" and even "rebuild all" but it didn't change, i had to restart the program, thus failing (as far as i can tell) #11.
On a semi-positive side i made a "int lala(int x, int y) { return x*y; }" and the debugger was able to run this function, but then when i tried it with something a little more practical like implementing a very simple strlen "int lala(char* s){ int l = 0; for (;*s;++s,++l); return l; }" it failed with some weird errors.
Well I missed that. No you can't modify running code in C and hope it's going to end well unless your C program is carefully designed to allow that.
You'll have to set paths on the installation or project so it knows where the relevant libraries and headers are. You won't configure it all correctly in the first week.
If you do give it a go, use a makefile project. If you're using something else to build, just have the call to that in the Makefile and nothing much else. Maybe it does cmake etc now? Haven't tried. I only use it occasionally nowadays. Brilliant for any refactor.
You have to want it to succeed and do some work to make it function the way you want. It does work if you care to put some time into it when you're /learning/ how it all works. I did, works great for me but isn't my nirvana. Just another useful tool.
>So, eh, i don't think Eclipse CDT is it :-P.
Don't confuse, "I didn't get it to work first time with very little effort" "does not work" Also note that like any complex tool, once you've worked it out, which takes time, you don't have to do that again. Like the people who say vi is garbage because you can't quit.
If you don't spend the time and give up early, that's fine too and shows you how much you do or don't want these things and /any/ answer there is by definition correct.
Edit: Editing a post above a response with wholly new info is a pain, don't do that. Just hit reply.
> Don't confuse, "I didn't get it to work first time with very little effort" "does not work" Also note that like any complex tool, once you've worked it out, which takes time, you don't have to do that again.
I don't confuse the two, the program didn't do what i explicitly wrote in my post. If you think it can do what i asked, please explain how, because right now from what i can see it fails to do what i want from it. I didn't try everything i mentioned because it failed at pretty much everything i tried from the very first thing, so it didn't give me any confidence to keep on trying.
Also you seem to misunderstand something important even from my initial post: almost everything i wrote can be done by stringing together various tools and/or has various workarounds, even the "modify code while it runs" part could be done by using one of the libraries or tools mentioned elsewhere.
But the point is having the IDE actually do all that in a single place and with good UX (for example i didn't mention the part about checking a library's name in a listbox in a project's settings for fun, i mentioned it because i expect at least that sort of user experience and convenience for the user! Eclipse CDT fails at that part because it needs you to fill in all sorts of forms in various places before it sees the library - my requirement won't be met by ignoring the requirement, the entire point of it was to request good user experience).
It is all about having a good user experience, if the program fails at that, it fails at the core of what i asked.
> Edit: Editing a post above a response with wholly new info is a pain, don't do that. Just hit reply.
When i edited the post there wasn't any reply, you replied while i was typing the edit.
>It is all about having a good user experience, if the program fails at that, it fails at the core of what i asked.
You don't want what you said you wanted because you won't do the work to get it. It's not even that much work tbh. That's fine. It does work for me and the "user experience" is fine. I'm not associated with the project, nor pushing anything. You can pretend it's because I'm so much smarter or this is my con or something else to explain that but I doubt these lines of reasoning.
> In other words: hot code reloading / swapping [1].
Indeed, though without the use of external libraries, for C, and integrated seamlessly into the IDE - and it needs to work on Linux since that is what i'm using :-) (i've used MSVC's edit-and-run or whatever it is called in the past, though it was kinda finicky about whether it worked or not).
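For what it's worth, the usual DIY version of this on Linux, with no IDE involved, is dlopen-based reloading - a rough sketch, with the library and symbol names made up; an IDE would have to automate the rebuild-and-reopen cycle:

    /* hotreload.c - rebuild libgame.so in another shell; link with -ldl on older glibc */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef int (*tick_fn)(int);

    int main(void) {
        for (;;) {
            void *h = dlopen("./libgame.so", RTLD_NOW);
            if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
            tick_fn tick = (tick_fn)dlsym(h, "game_tick");
            if (tick) printf("tick -> %d\n", tick(42));
            dlclose(h);   /* the next iteration picks up a freshly built library */
            sleep(1);
        }
    }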
You seem to know what you want. Have you actively looked for an IDE to use with C? Why don't you think any of the available options is better than your current setup?
I don't know of any IDE that provides even half of what i want TBH.
On Linux (which is my main OS these days) pretty much all options involve a text editor with some plugin/addon to somehow understand C plus the ability to call a makefile or some other command. Of all those Kate (KDE's "advanced" text editor) with an LSP plugin and a Clang-based LSP seems to be good enough.
Though it is really more of a "least bad" than "actually good" situation.
> I don't know of any IDE that provides even half of what i want TBH.
> On Linux (which is my main OS these days) pretty much all options involve a text editor
You obviously haven't looked hard enough.
CLion fulfills a decent amount of what you're looking for.
https://www.jetbrains.com/clion/
A lot more limited, but still much more capable as an IDE than text editors is KDevelop, for example your request here :
> (i haven't touched topics like VCS support and how i'd like to be able to see and use different version of the code from inside the IDE - like e.g. go back in time to a different function while the debugger is running - or anything that has to do with GUIs)
> An especially useful feature is the Annotate border, which shows who last changed a line and when. Showing the diff which introduced this change is just one click away!
Also example snippet of the debugging integration :
> You can also hover the mouse over a symbol in your code, e.g. a variable; KDevelop will then show the current value of that symbol and offer to stop the program during execution the next time this variable's value changes.
> You obviously haven't looked hard enough. CLion fulfills a decent amount of what you're looking for. https://www.jetbrains.com/clion/
CLion has the problem of being a proprietary program that requires online validation, which is a big hard no for me.
Also IMO the fact that you point out KDevelop being able to show a per-line commit and the current value of a variable, both being among the minimum you can expect from an IDE, tells me you didn't read what i wrote that i wanted, so by extension i doubt CLion does anything close to what i wrote either.
I don't "just" expect "some" debugger integration or "some" VCS integration, i explicitly wrote things like the debugger being able to call a function in the running program or modify a function while the program is called or being able to replace the current function with an older version of the function taken out of VCS. Among a ton of other things.
What you show KDevelop to do aren't anything special, even non-IDE "programmers' text editors" can do them.
You can still try CLion (the trial version) and see if it’s what you’re after. You don’t have to use it but IMHO complaining so much about there not being a decent IDE without even trying the proprietary ones isn’t very fruitful. CLion is what Borland was back in the 90s.
The thing is even if CLion does what i want (TBH i doubt it[0]), i wouldn't use it anyway so i don't see a reason to spend money on it. Note that my issue wasn't so much that it was proprietary (though it is an issue[0]) but that it requires an internet connection to function.
After all i still have and can use Borland C++ 5, Delphi 2 and C++ Builder 1 to this day without requiring any sort of connection or having Borland's permission to use the software i bought. I can't do the same with CLion.
[0] CLion relies a lot more than Borland did on external tools like GCC, CMake, GDB, etc, meaning that not only does it most likely not provide all the integrated functionality i mentioned, but there is also a very high chance it will stop working at some point as its dependencies change, so you do need to rely on JetBrains to keep it up to date without having access to the source code.
Yes, though i don't remember why i decided against KDevelop, i remember installing it at least a couple of times, finding some annoyance and removing it. I do remember liking QtCreator more but that was in comparison to KDevelop.
> ...a Makefile and vim (or emacs, or even nano, I'm not going to judge your kink) are fine. If they are not fine, then C is probably not the right language for the project.
Sorry buddy, you might believe Makefiles are fine only if you are not aware of the most basic requirements of a build system. CMake does stuff like running sanity checks on libraries, configuring them for you with minimal effort, and even adding platform-specific configuration easily. Did you know that cmake started as a Makefile generator? Why do you think people need that?
Makefiles alone were never enough, as the development of tools such as the autotools family demonstrated decades ago. Claiming otherwise just seems like naive flexing from someone with no real-world experience whatsoever.
> Did you know that cmake started as a Makefile generator? Why do you think people need that?
I like make myself, but I’m honest enough to acknowledge that the whole autotools suite (autoconf/automake) was born to, essentially, generate makefiles.
Which is not 100% make’s complexity’s fault though… the toolchains have their own complexity (even more so when a project must build across platforms)…
> since the container is executed in a VM, the I/O performance is significantly worse compared to a container that is run natively. For compiled languages or for any process that creates a lot of files, this impact can be significant since the overhead can be up to 100x of what you're experiencing natively
Uhhhhhhh. I don't know what VM you're using... but if the I/O in your VM is 100x slower than the host, you can fix that. It should not take a minute and a half to write a single file.
I'm guessing OP is on Docker for Mac. This isn't so much that the VM is slow, it's that file IO on Docker for Mac on mounted volumes is dog slow. (It's not exactly ripping fast on Windows either.)
If you don't mount the intermediate directory, it'll be way faster. Alternatively, use OrbStack (I feel like I'm shilling this a lot recently, I'm just a happy user), and the problem goes away.
And for everyone who has these problems and isn't, like, someone who needs access to Photoshop, it's "native performance" on Linux cuz there's no VM layer.
It could be that the VSCode remote container stuff also gets around this problem somewhat by working "in container", effectively opting out of the Mac->Docker FS stack issues (especially with file watching....).
The fact that we have innumerable tools that write files and somehow ended up with a bunch of stuff built on file watching instead of signaling a build daemon on file save/checkout is... It feels cleaner but in practice is a mess of downstream problems.
More like: writing to an overlay filesystem mounted on top of several more overlay filesystems running in a Linux virtual machine started by Docker Desktop on a Mac.
Having seen run times change by an order of magnitude just by removing the Mac from the equation, it will do so even if there's no overlayfs on the Linux VM side.
Corporate insistence on providing ill-fitting developer hardware (Macs) for software that was being developed for Linux (and in fact only runnable there) is why the single EC2 instance I used just to get reasonable compile times probably cost them as much as a few more laptops would have.
And I wasn't even using the host filesystem for any of the busy path - it was just compile definitions and storage for built tarballs and images (if it had to schlep actual source files and temporary files over, I might have had to wait a week for a single build to finish...)
I've had jobs where we used a containerized dev environment like this. I've had others where we just installed all the dependencies on our dev machines. The container environment is hands down the worse experience of the two. Similar to this tutorial, if you don't want to use the blessed editor, you end up maintaining your own docker container. Similarly if you want to use tools not installed into the official container (say, ripgrep).
I'll take updating a dependency on my dev machine every now and then (which could be largely eliminated if we had used something like conan) over maintaining my own docker image any day. You can also largely eliminate relying on system headers through --sysroot, which cmake supports.
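A minimal sketch of what that looks like with cmake, assuming a hypothetical armhf sysroot and cross compiler (the paths and triplet are made up); it's passed in with -DCMAKE_TOOLCHAIN_FILE=toolchain-armhf.cmake:

    set(CMAKE_SYSTEM_NAME Linux)
    set(CMAKE_SYSTEM_PROCESSOR arm)
    set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
    set(CMAKE_SYSROOT /opt/sysroots/armhf)
    set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
    set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)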
Another issue we had, which this tutorial doesn't appear to address, is that these containers run as root, so files created in them on a mounted folder easily end up owned by root. Which can create all sorts of mayhem. The only reliable solution I have seen is to dynamically change the container user's uid and gid on login, but this often doesn't seem to get implemented.
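One common workaround (different from the uid-remap-at-login approach mentioned above) is simply running the container as the host user - a sketch with an illustrative image name; note the user then has no matching /etc/passwd entry inside the container, which some tools dislike:

    docker run --rm -it \
      --user "$(id -u):$(id -g)" \
      -v "$PWD":/workspace -w /workspace \
      my-dev-image:latest bash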
I hope beginners don't get the impression that all this is needed for developing in C. If you want a free (as in beer) and 'good enough' multi-platform solution without tinkering, just use VSCode with 2 extensions:
This gives you a cmake-based IDE-like setup that works across Linux, macOS and Windows (and also allows to build and debug UI applications with the native OS APIs).
I was hoping one can setup a "modern" C development environment without resorting to Docker.
Using Docker to set up a C development environment indicates that there are too many moving parts, that the development environment is essentially very complex, and that there's nothing one can do about it.
I wish more people would write such guides with the aim of reducing the development environment to its essentials that can be installed system-wide without being disruptive and thus not needing "Docker".
I have a C project template for VSCode that I just copy whenever I start a new C-project.
Inspiration for it was mostly because I don't want to rewrite the same CMake code constantly.
But I think there are a lot of these kind of projects floating around out there on github.
I use mingsys though. But theoretically it should be no trouble to change the compiler.
Obviously it is goal dependent, but if you eventually want your code to run on a variety of machines and systems, I find docker to actually be a barrier. Having a different environment in CI than local can be annoying, but it’s also the first time you are forced to confront the “but it runs on my laptop!” problem
This should be titled "A Modern -OPEN SOURCE- C Development Environment".
If you work in the embedded space for a large OEM, ODM, or integration house, you won't see any of this ... you will see all commercial environments with big price tags for seats e.g. compilers will be ARMC, Green Hills, IAR; for DAST you'll see tools from Synopsys or Cadence (same for virtual prototyping); lots of ISO compliance tools from hard-to-remember small companies that do that for a living and charge a cool mil to setup and audit ... for CI/CD you will likely see GitLab if not home grown suites. Gnu tools are some of the worst. Containers? 30+ years at this with 10 major contracts with big companies (and dozens of smaller ones) and I've only seen one company use containers and that was for virtual prototyping. C environments move at 1/100th the speed of webdev because product cycles happen in 6-12 months: literally no time to bring up a new system that breaks everything (and for no real benefit).
A million dollars?!?!? I think we pay like $60-150k per head. That doesn't all go to one company.
Perhaps if you have a 20+ person team it's possible that a supplier charges "a cool mil", but I'm pretty skeptical. If you have a big team anything can be expensive. That would be like implying laptops cost $20k, because you bought for a whole department.
This shit is expensive, but not THAT expensive. Support is like a grand for lessons, $150-300/hr. However, these are very normal rates for any engineering labor.
> If you work in the embedded space for a large OEM, ODM, or integration house
> Containers? 30+ years at this with 10 major contracts with big companies (and dozens of smaller ones) and I've only seen one company use containers and that was for virtual prototyping.
There are _a_lot_ of small and medium size businesses working with embedded devices with limited on-site resources, and those will usually use external contractors. In that case containers are very useful to share an Automated Test environment. The alternative is a long back and forth or spending time on-site supporting teams with wildly varying skill levels.
Also as someone else mentioned when an old client asks for a new feature, the ability to take a snapshot in time is a huge time saver, rather than trying to replicate that build environment from scratch.
I'm highly skeptical of the benefits of containers in the embedded space. The inertia required to set them up and maintain them, and train customers to use them is massive. I'm sure some people benefit somewhere, but in my experience people want a zip file of your code base, platform support package version, and compiler version, ... that's 99% of it. Working with ARC/MSP430/ATMEL/PIC/Cortex-M devices for decades I have rarely seen a codebase over 10 megabytes of customer C code.
I don't find any of the things the author is using to be useful, nor do I ever use them in my C development process.
I don't have a need for Docker or any containers for that matter, unless the goal is to deploy in containers.
I would never touch VSCode with a ten-foot pole. Same goes for CMake -- unless forced to by external circumstances, I will never use it.
Even if this environment was set up by someone else for me, I would've found it painfully difficult and uncomfortable to use... so, I don't know why would anyone else want this.
Docker is a handy way to keep a hold on toolchains that might require a bit of arcane configuration, or the chaining together of a bunch of different tools.
The author's example isn't really the best at illustrating this as he's just apt-get'ing things like gcc. There are plenty of cases where you'll need some specific version of gcc or some other weird tool like openocd or gdb that's hard to hunt down, or is required due to some weird development ephemera you found.
Docker makes it a bit easier to compile these requirements and hang on to them painlessly - even across a few years' time lag.
I've used VMs, and they work - when you have the same computer. When you upgrade equipment and need to repro an environment, Docker proves to be the lighter lift.
Yeah, as soon as the article started talking about setting up a Docker instance to compile some C code I was out. It's like they went out of their way to make a C compile take as long as a modern language.
For all IDEs I’ve used, this meant a dead simple makefile, nothing to mess with. When I had to, I had to mess with these fancy envs even more, or they just couldn’t do it.
But if you meant that learning make is harder than using an IDE, then I agree. But why specifically XCode then? Lots of IDEs know C/C++ off the shelf.
This post has very little to do with C development and far more to do with using Docker to have quasi-reproducible development environments.
Quasi because when your provisioning automation is doing things like `apt update` and grabbing the latest and greatest toolchains from third party repos, you're still producing an entirely unpredictable result.
I'm a big containers fan, they work really well (assuming you're not developing with MSVC). They give you a repeatable build environment, which is a boon for something like C where you're implicitly depending on system include paths for versioning.
I said this elsewhere, but you'll get a pretty decent perf boost if you set your build intermediate directory outside your mounted volume.
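A sketch of one way to do that with a named volume (the image and volume names are illustrative), so the bind mount only carries sources while intermediates stay on the Docker-native filesystem:

    docker volume create buildcache
    docker run --rm -v "$PWD":/src -v buildcache:/build my-dev-image \
      sh -c 'cmake -S /src -B /build && cmake --build /build'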
Instead of rm -rf /var/lib/apt/lists/*, running `apt-get clean` as part of that layer will help more.
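The usual pattern is to do the install and the cleanup in the same RUN instruction so the layer never contains the cached lists/archives - a sketch with illustrative package names:

    RUN apt-get update \
     && apt-get install -y --no-install-recommends gcc make cmake \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*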
Wrapping docker commands with makefiles is sad times. Use https://magefile.org/ or a task runner instead. Try to avoid making it a scripting language dependency.
Installing ruby to install a build system when you're using cmake in the container is a bit bonkers, and the cause of the majority of the "bloat" here. I'd replace it with calls to cmake and unity directly. (And honestly, if you're going as far as using cmake, I'd ditch C altogether and use a subset of C++ with gtest.)
But honestly, that's about all I can complain about. This is a neat, modern workflow.
> I'm a big containers fan, they work really well (assuming you're not developing with MSVC). They give you a repeatable build environment, which is a boon for something like C where you're implicitly depending on system include paths for versioning.
For development, I actually don't like "repeatable build environments". I like using different, continuously updated systems; I consider having a different environment for CI/CD and for development to be a good thing. It is one of the best ways to test for compatibility problems, a kind of dogfooding.
Plus, it is nice working on a system you are comfortable with, and without the performance penalties of virtualization/containerization.
For releasing and testing however, containers are great.
The cost in learning curve and weird edge cases on NixOS / flakes is debatable for any language ecosystem with some kind of Python-style virtual env or Rust-style (correctly) build the world.
In C/C++ land? Nix is the virtual env. It’s the only sane choice for that as a user land stack.
> Dockerfiles are not stable. A Dockerfile that built just fine yesterday might fail to build today. There are simply too many external dependencies.
> Docker is not platform-independent. Especially if you’re running a container on other CPU architectures, e.g., Apple ARM, you’ll notice that some things don’t run. We’ll see this later.
And it keeps going on about how Docker doesn't really solve the problem.
You can do all of this, but aside from some oddball project setups, virtually no one else in the C or C++ ecosystem does this. Everyone uses CMake (C++), or even just Makefiles (C).
You are working against yourself, because eventually you will need to learn CMake and Make when you have to interop with other projects.
In day-to-day use, I find that there is very little that is actually modern about C or C++, and that's OK. Just focus on getting things done, and build up a working knowledge of all of the practical stupid things you have to do when working with C and C++ projects.
Like, it's bewildering that there are no cross-platform CMake recipes for building an app. Totally wild. But everyone slogs through this stupid nonsense while other platforms hand it to you on a platter. Just deal with it. There are other hills to die on that are wildly more important. Help others that struggle with arcane CMake b.s.
I wouldn't use Debian or Ubuntu as a Docker base for this since they always ship heavily outdated software. Alpine as a base offers a slimmer image but also updated tools and libraries, including GCC and Clang. The article speaks about GCC 10, which is prehistoric by today's standards.
`apt install` is just installing a whole lot of utilities without specifying any version. This environment will only last a couple of years (and that's generous because the kind of stuff you're installing, like cmake, gcc, curl, maybe ruby(?) tend to be stable, but at some point one of them will break something you were using).
It's the old dilemma everyone faces: pin every dependency version to guarantee you have a stable environment that will work in 20 years as long as things can still be downloaded? Or use the latest of everything, making a hugely unstable environment you have to keep fixing every few months, but at least get the latest "security patches" and other improvements (as well as new bugs)?
Yeah, not pinning versions in a container build is bad practice imo.
Though, I haven't used Debian-based distros in a while, but does apt actually serve really old versions of its packages? I vaguely seem to remember that you could realistically only `apt install` the last few versions of a package.
You can install specific versions of libs with apt[1] but only if that happens to be compatible with your system deps... it's possible to run into trouble where one utility you install requires openssl 3 and another requires the latest version, and then you can't have both libs together easily. But normally, a distro is meant to keep a bunch of utilities with compatible versions for you. I just think that this way of doing things may not be appropriate for building software, and if you look at how Nix does it, for example, you can see that they break up with the traditional Linux distro system and let you have multiple environments with different libraries installed - and you can totally pin everything to make sure it will work forever (or until the sources/binaries can be fetched).
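For completeness, a sketch of what pinning looks like in a Dockerfile - the versions below are made up, `apt-cache madison <pkg>` shows what's actually available for your base image:

    FROM debian:12.5
    RUN apt-get update \
     && apt-get install -y gcc-12=12.2.0-14 cmake=3.25.1-1 \
     && rm -rf /var/lib/apt/lists/*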
Yeah but I assume that you'd be pinning the base image, in addition to whatever you apt install. If I install, let's say Debian 6, can I use `apt install package=version ` to install the 2011 versions of most packages?
Other than the complete lack of writing CMake, Makefiles, autoconf, or any number of other end-user-configuration complicated systems as such, and other than the trivial statically-linked cross-compilation support, sure, I guess it's "just" a wrapper around clang, if you squint.
Out of curiosity, what DO you write instead of Makefiles in this case?
I’m genuinely curious if it’s actually easier, but I’m assuming you’re just using a different tool, maybe without as much historical baggage (and incompatible implementations)…
If you aren't supporting more than one compiler, aren't needing to compile anything in parallel, aren't needing to find and link shared libraries on the system, aren't needing to deal with any number of real complexities that happen when building C software, then sure, build.sh calling clang a few times manually is absolutely reasonable. (And again: cross-compilation is a real concern: shipping for amd64 only is not enough in 2023). And to be clear - there are plenty of small-scale projects that fit this description! But to simply hand-wave away `zig build` or other modernized build systems and say "just use clang directly" seems a bit dishonest or incomplete to me.
Note that I am not advocating for just having a few shell scripts that invoke clang. The context is someone saying to use `zig build`, and linking a blog post where all they do is compile redis from scratch, including all dependencies from source except for libc. In that context, `zig build` is just a wrapper around clang. Nowhere in their comment or linked blog post is the issue of these real complexities you allude to addressed at all.
Now I personally would rather not use a pre 1.0 release of an entirely different programming language to compile my C projects instead of a cross-platform C compiler, but people can do whatever they want.
to be fair - my go-to move on a smallish project when autoconf barfs is 'cc *.c -o foo'. it works pretty damn often, sometimes you need to throw in some -I action
Personal attacks are pretty unnecessary here, thanks. This kind of comment is what gives (some) C developers the reputation they have in some circles, I guess.
> Bigger cmake builds are doing dependency resolution,
    find_package(foo)
    #...
    target_link_libraries(myapp foo::Static)
> configuration tests,
I don't know what you mean by configuration tests. I don't think I ever added any test resembling that description other than sanity checks in cmake find modules and sanity checks on projects just for convenience.
> and configuration for development or release builds
Those are not handled by cmake other than setting a flag that's used in the code.
> /installs.
You don't need to do anything other than setting the install target.
Ok. How does find_package work? Hint: you need to understand how cmake module paths are discovered and their precedence, and you'll probably see a cmake folder in a build with the actual logic behind it as a .cmake file for each dependency. Depending on what package manager you're using you may not need this, or if you support many you'll need to own it.
> I don't know what you mean by configuration tests.
When you run cmake -S <my project> -B <build folder> you'll see a bunch of output. What this output corresponds to may include a number of tests that either succeed or fail to indicate whether the project will build. For example, the aforementioned find_package logic, finding the compiler toolchain (especially if cross compiling), testing for random shit like endianness, etc.
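For a concrete idea of what those checks look like in a CMakeLists.txt, here are a few of the stock modules (nothing project-specific about these):

    include(CheckIncludeFile)
    check_include_file("unistd.h" HAVE_UNISTD_H)

    include(CheckSymbolExists)
    check_symbol_exists(clock_gettime "time.h" HAVE_CLOCK_GETTIME)

    include(TestBigEndian)
    test_big_endian(IS_BIG_ENDIAN)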
> Those are not handled by cmake other than setting a flag that's used in the code.
Something like `if(CMAKE_BUILD_TYPE STREQUAL "Release") ... endif()` is ridiculously common, with the "..." filled in by various things like setting optimization flags, paths within the build directory, etc.
> You don't need to do anything other than setting the install target.
No but you should probably understand what RPATH is and why it's set in release builds but not installed artifacts or why doing something that seems obvious like checking the checksum of your built object is the same as the installed one might fail.
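A tiny sketch of the kind of rpath policy a project ends up declaring (the install path is illustrative); cmake rewrites the rpath at install time, which is exactly why the built and installed binaries can differ byte-for-byte:

    set(CMAKE_BUILD_RPATH_USE_ORIGIN ON)        # relative rpaths in the build tree
    set(CMAKE_INSTALL_RPATH "$ORIGIN/../lib")   # what installed binaries should carry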
The point is that cmake does a lot of work and it's not just declaring targets. Builds are hard.
The C language is like 'an elegant weapon for a more civilized age.'
This article reminds me of an idea I once had: to write a memory manager/garbage collector for C. The challenge is knowing the scope of a dynamically managed chunk of memory. Using that, the memory can be automatically garbage-collected, but this is difficult since C wasn't designed with this in mind.
I'm curious if anyone has any other experiences they can share.
I did this once - a compacting collector for C. It was a really trivial mark/sweep - and as I recall you had to write functions for each type to perform the object graph traversal. For artificial memory-intensive workloads it was around 8x the performance of Boehm.
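A minimal sketch of the "traversal function per type" idea described above (just the mark phase; names and layout are made up, not the parent's actual code):

    typedef struct gc_obj gc_obj;
    typedef void (*trace_fn)(gc_obj *self, void (*visit)(gc_obj *child));

    struct gc_obj {
        unsigned marked;
        trace_fn trace;                 /* how to reach this object's children */
    };

    typedef struct node {
        gc_obj hdr;                     /* header must come first */
        struct node *left, *right;
    } node;

    /* per-type traversal: tell the collector which pointers this type owns */
    static void node_trace(gc_obj *self, void (*visit)(gc_obj *child)) {
        node *n = (node *)self;
        if (n->left)  visit(&n->left->hdr);
        if (n->right) visit(&n->right->hdr);
    }

    /* mark phase: depth-first from each root; allocation would set hdr.trace = node_trace */
    static void mark(gc_obj *o) {
        if (!o || o->marked) return;
        o->marked = 1;
        if (o->trace) o->trace(o, mark);
    }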
I switched from fighting VSCode and plug-in hell to Nova with Seadragon a couple of months ago. At the end of the trial window, I had no problem ponying up money for Nova. I don’t even know where to start on the myriad little ways I like Nova better.
I have a side job as a teacher at a university; sometimes I have to teach C fundamentals. I was looking into giving my students a standard setup. This looks awesome.
The setup described is incredibly involved ... I'm doing some ESP32 experiments and plan to investigate PlatformIO which seems to provide solutions to most of the problems described in the OP. PIO supports a bunch of platforms and also provides a way to create your own if necessary [1].
huh? the correct answer to this question should be "get a time machine and go back to 2003 or something". why people insist on carrying on with C in the face of its glaring issues is beyond me. I am not advocating for any specific language, because several other languages are around that might be a better fit: Rust, Zig, D, Nim, Go. just please let C die already.
No. I just started a mega project in my company in C11 3 months ago and we already have insane velocity with zero problems. I intend to start many more in C in coming months. Sorry.
nope. they all are. all depends on the use case. if your use case is "must do 100% of what C can do, no exceptions" then of course that leaves one option. but for many programmers the trade offs today are not in favor of C, and haven't been for some years.
Programmers don't define requirements; the task defines the requirements.
If you need to target very tiny boards, none from that list are suitable C replacements, _maybe_ zig and D with betterC since they both provide an inline assembler
Otherwise we'll end up with companies using Raspberry Pi 4 for their fleet of scooters because "i can run nodejs on it" (hyperbole, but you get the idea) https://news.ycombinator.com/item?id=37016842
Yeah. I don't understand the choice of Rust in this context. If you can write all the CPU instructions on a whiteboard that also has the literal memory map you don't need a high level language like Rust.
I don't see this as a place where Rust can't replace C (or where Zig is a better replacement, or whatever) because why are we writing any high level language at all?
I guess it can make sense if there's a device family and this is the smallest of a range.
> why people insist on carrying on with C in the face of its glaring issues is beyond me.
Most people and companies can't delay new features and bug fixes for years with the excuse of “we’re rewriting the thing in $currently_trending_language”.
That would be more like 30 years since C was truly trendy, and maybe 20 in some areas like embedded software engineering and games development. C was the top of the pile for decades, and I think when you compare it strictly to other languages that were available during that time it becomes clear why (not that it is necessarily better than those other languages, but why it was so terrifically popular).