This is a mostly theoretical argument in my experience. It's quite easy to close the things that you need to close (if there are very many, something is wrong).
RAII can improve "scripting" speed when putting a lot of automatic variables on the stack. However, a lot of those variables need to be moved to the heap when proofing out the code, and that consolidation doesn't go well, because RAII doesn't work with generic code (void pointers, memcpy, etc.). You need to go all-in with RAII containers, which causes a lot of boilerplate and increases compilation times.
You appear to be using "generic" to mean "C-ish". That is not what the word means, in context, and there is no reason to do any of it anyway. "Proofing the code"? Does that mean anything at all?
For RAII on heap objects, you have std::unique_ptr, and no boilerplate. Anyone failing to lean into RAII to manage resources is just choosing to write unreliable code. Or C, but I repeat myself.
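To make that concrete, here's a minimal sketch of the "no boilerplate" claim (Widget and describe are made-up names, not from any real library):

```cpp
#include <memory>
#include <string>

// Hypothetical resource-owning type: nothing special is declared,
// because std::unique_ptr handles the cleanup.
struct Widget {
    std::string name;
};

// The heap allocation is released automatically when `w` goes out
// of scope -- on normal return or on an exception. No delete, no
// goto-cleanup, no boilerplate.
std::string describe(const std::string& n) {
    auto w = std::make_unique<Widget>(Widget{n});
    return w->name + " (heap-managed)";
}
```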
Due to language limitations, a lot of C programmers make all their variables shared pointers, which means they don't have to figure out how to do value semantics, moving and copying objects, especially in multithreaded environments.
Interestingly, it seems Rust programmers do the same too, because otherwise programming in Rust is too hard.
> Due to language limitations, a lot of C programmers make all their variables shared pointers, which means they don't have to figure out how to do value semantics,
I wouldn't approve. Make shared (i.e. ref-counted) pointers when actually needed.
Writing C is best when not mimicking bigger languages (i.e. with GC), but when figuring out how a system can work optimally.
Your theory is that Rust programmers "make all their variables shared pointers" ?
Even if I generously assume you're thinking of C++ "shared pointers", and so you've concluded that Rust's Rc<T> (a reference counted T) is basically the same thing, I don't see where you'd come to the conclusion that Rust programmers do this for "all their variables" as this seems wildly unlikely.
Just look at the code in cargo. (A)rc is all over the place.
It's essentially a cop out to not have to think about ownership and lifetime, and not have to worry about the borrow checker.
The fact that this is so pervasive throughout Rust code is, for me, a sign that Rust failed to deliver on its initial promise. The borrow checker was meant to be its main value-add. But heh, people still find value in Rust otherwise.
Hmm. So the code I happened to have open was Aria's "Cargo mommy" which has neither Arc nor Rc anywhere, but is only a toy.
So I did go look at Cargo itself. Unlike you, although I do see some use of Rc and fewer of Arc, it was scarcely "all over the place" when I looked, and in the cases I spent any time actually thinking about, it's a shared value; so, yeah, that makes sense.
I randomly looked at cargo/core/profiles.rs and cargo/core/registry.rs and cargo/core/compiler/compilation.rs without finding either Rc or Arc used in those files at all. Searching across the whole repository I found some places which do use Rc, and fewer using Arc (implying this data is shared between threads) -- but this doesn't really support your original claim does it?
> It's essentially a cop out to not have to think about ownership and lifetime
Not a cop out. Something very much like Rc<> is needed whenever an object's lifecycle might be independently extended by multiple referencing "owners", none of which is subsumed by the others. That's a matter of broad high-level design that can't generally be avoided, least of all in a multi-threaded context (as shown by the use of Arc<>).
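In C++ terms (std::shared_ptr being the rough analogue of Rc<>/Arc<>), the design I mean looks like this sketch — Config, Logger, and Network are made-up names for illustration:

```cpp
#include <memory>

// Two independent subsystems hold the same config, and neither owns
// the other. The config lives until the *last* holder drops it --
// exactly the lifecycle that shared (ref-counted) ownership models.
struct Config { int verbosity = 0; };

struct Logger  { std::shared_ptr<const Config> cfg; };
struct Network { std::shared_ptr<const Config> cfg; };

// How many owners currently keep the config alive.
long holders(const std::shared_ptr<const Config>& c) {
    return c.use_count();
}
```

No single owner subsumes the others, so neither a borrow nor a unique owner can express this; a reference count can.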
> "Proofing the code"? Does that mean anything at all?
Since you're already schooling me on the meaning of terms, you probably have enough experience to have realized that the typical code begins life in a stubbed out form that is barely functional enough to deliver first results. It is then iteratively enhanced and made more widely applicable, more robust, more refined, better specified, more performant, and so on. In the process, the code is undergoing many revisions, and moving data structures from automatic storage (stack) to the heap is very typical.
Another word I'm thinking of for this process is "consolidation", which I have also used. My apologies if I don't speak in your terms / in the most precise terms.
I don't have experience of this "moving from ... stack to the heap", in C++. Correct code is the same wherever the objects live, whether stack, member, or heap. Changing it costs time and adds faults.
This proofing process is another example of what makes coding C more costly and buggy than coding C++. I did it for years, and miss it not at all.
If writing systems-level code solving for non-trivial requirements, the majority of the state has a lifetime that outlives any particular function call. That's when you need to move that state to the heap.
There's no clear scope anymore, ownership has to be moved from function scopes to RAII managed container structures. Things are quickly getting more complicated and expensive (software complexity/bloat, compile times, binary size...) when you have to nest your structures in templated containers that don't know what you're nesting in them...
This is where all the complexity with 17 constructor types and 99 move semantics with const, non-const, r-value references and abstract virtual base classes came from, and there might be fewer than a dozen people left on the planet who really understand it all...
With RAII, it is trivial to move things from stack to heap, or to a member of something that might be on the stack or heap or in a container, without changing any of the code that implements the object. Most usually you don't need to declare or define any constructors, assignment operators, or a destructor; the compiler provides them, and guarantees they are correct. (Sometimes you provide one constructor as a convenience for users.)
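A sketch of that claim, with a made-up Record type: no destructor, copy, or move operations are declared, and the same unchanged type works on the stack, on the heap, and inside a container.

```cpp
#include <memory>
#include <string>
#include <vector>

// Rule of zero: the members manage their own resources, so the
// compiler-generated copy/move/destructor are all correct.
struct Record {
    std::string key;
    std::vector<int> values;
};

// Same type, three homes -- none of Record's code changes.
Record on_stack() { return Record{"a", {1, 2}}; }

std::unique_ptr<Record> on_heap() {
    return std::make_unique<Record>(Record{"b", {3}});
}

std::vector<Record> in_container() { return {Record{"c", {}}}; }
```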
It has been many years since I found a use for an abstract virtual base class.
Literally millions of people understand it all, and use it every day with no difficulty. You don't need to invent difficulties, or waste days "proofing" code that may be written exactly once and never touched again.
To clarify, I think by “move things from stack to heap” you mean “move values from stack to heap”, where, e.g., `std::vector<int>` is a value. The vector’s data is still on the heap (or in the allocator’s pool for `std::pmr::vector<int>`) but the value in the sense of value-semantics is moved from stack to heap.
I guess what they mean is moving to static memory allocation, not to the heap. That is something done rather frequently when proofing embedded C code, because it's simply too easy to mess up manual memory management. The only reason to move from stack to heap would be huge data structures, to avoid stack overflow, something that can be hard to test for.
"It's quite easy to close the things that you need to close"
Very true. It is easy. Not difficult. That doesn't seem to stop enormous amounts of C++ code being created in which this is simply got wrong. Just because it's easy doesn't mean that huge numbers of programmers won't still get it wrong. They do. Regularly. If they adopt RAII conventions, they screw it up far less frequently. Those are the facts.
I prefer reducing complexity instead of hiding it in language cleverness (because hiding complexity doesn't make it go away). Try adopting ZII (zero is initialization) and context managers (pooling, chunking, freeing in batches, etc.); this can reduce the amount of boilerplate to a minimum. Writing generic code like that (down to the binary level) also helps improve many other metrics, like executable size.
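As a rough sketch of the batch-freeing idea (written in C-style C++ here; all the names are made up, and it ignores alignment for brevity):

```cpp
#include <cstddef>
#include <cstdlib>

// Minimal arena: allocations are bump-pointer cheap, storage is
// zeroed up front (ZII), and everything is released in one batch
// when the arena is destroyed -- no per-object free calls.
typedef struct Arena {
    unsigned char* base;
    size_t cap;
    size_t used;
} Arena;

Arena arena_create(size_t cap) {
    Arena a;
    a.base = (unsigned char*)std::calloc(1, cap); // zero-initialized
    a.cap = cap;
    a.used = 0;
    return a;
}

void* arena_alloc(Arena* a, size_t n) {
    if (a->used + n > a->cap) return NULL; // out of space
    void* p = a->base + a->used;           // sketch: no alignment fixup
    a->used += n;
    return p;
}

void arena_destroy(Arena* a) { // frees every allocation at once
    std::free(a->base);
    a->base = NULL;
    a->used = a->cap = 0;
}
```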
> Because hiding complexity doesn't make it go away). Try adopting [..] context managers (pooling, chunking, freeing in batches etc.), this can reduce the amount of boilerplate to a minimum
"don't hide complexity in small, single-purpose, composable containers, instead hide complexity in hulking behemoths which can consume months of refactorings & iterations" ?
That's not simplifying your code to be generic. That's adopting a particular framework & coding style. Which yes, you have to do in C, but you're doing that because of language issues. You're not saving yourself from "language cleverness"
That's nice, and I'm sure your code is just lovely, but there's just one of you and millions of people writing C++ who can't do that but can just about manage some simple RAII.
One can do lots of things with RAII. I once wrote some classes to generate xml. I used constructors to write opening tags and destructors to write the corresponding closing tags....
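The pattern might look roughly like this (a sketch; XmlTag is a made-up name, not the poster's actual class):

```cpp
#include <ostream>
#include <sstream>
#include <string>

// Constructor writes the opening tag, destructor the closing one,
// so nesting of C++ scopes mirrors nesting of XML elements.
class XmlTag {
    std::ostream& out_;
    std::string name_;
public:
    XmlTag(std::ostream& out, std::string name)
        : out_(out), name_(std::move(name)) {
        out_ << "<" << name_ << ">";
    }
    ~XmlTag() { out_ << "</" << name_ << ">"; }
};

std::string demo() {
    std::ostringstream out;
    {
        XmlTag html(out, "html");
        XmlTag body(out, "body");
        out << "hi";
    } // destructors run in reverse order: </body>, then </html>
    return out.str();
}
```

Mismatched or forgotten closing tags become impossible by construction, which is the whole trick.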
I used to do `if (errno) goto err;` a lot. I eventually realized that nested functions did the job much nicer:
    void err() { printf("oops, we failed"); }
    ...
    if (errno) return err();
This cleaned up a lot of my code. (The optimizer would of course inline the function, and the common tail merging would automatically convert it to the goto version in the optimizer. So this was cost-free.)
Why C doesn't officially have nested functions is, well, that's C!
Cleanup goto-style is hard to describe as a "rat's nest", since all the gotos in the same function go to the same label at the end that just does all the cleanup before returning. There's nothing clever about it, and many codebases wrap it into macros and such to make the syntax more concise and the intent explicit.
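For anyone who hasn't seen it, the shape is roughly this (a made-up example; written as C++ here, but the C version is identical minus the std:: qualifiers):

```cpp
#include <cstdio>

// Classic single-label cleanup: every failure path jumps to the same
// `done` label at the end, which releases whatever was acquired so far.
int copy_first_byte(const char* src_path, const char* dst_path) {
    int rc = -1;
    std::FILE* src = std::fopen(src_path, "rb");
    std::FILE* dst = NULL;
    if (!src) goto done;
    dst = std::fopen(dst_path, "wb");
    if (!dst) goto done;
    {
        int c = std::fgetc(src);
        if (c == EOF) goto done;
        if (std::fputc(c, dst) == EOF) goto done;
    }
    rc = 0;   // success falls through to the same cleanup
done:         // single cleanup point, in reverse acquisition order
    if (dst) std::fclose(dst);
    if (src) std::fclose(src);
    return rc;
}
```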
In general, any random C compiler is likely to not support that feature, because the way it interacts with function pointers makes it unnecessarily complicated.
As for function pointers, a simple solution is to not allow them unless the function is declared `static`. `static` functions don't have a hidden static link.
The problem isn't doing it as such; ALGOL 60 could do nested functions just fine.
The problem is doing it in an ABI-compatible way when you already have an ABI. The gcc implementation of nested functions does that - they are compatible with regular function pointers - but at the cost of requiring executable stack on at least some platforms.
And for C compilers that aren't gcc, the question becomes: why partially implement a non-standard gcc feature?
Yeah, stopped reading after that. How wrong does he want to be?
It is hard to tell what the point of this article is. Yes, the C-ish subset of C++ differs from ISO C, in trivial details. That is irrelevant to C++ users, who may consult ISO C++ to discover the actual subset. They interact with any C only when they #include a header for a 3rd party C library, and then the relevant subset is what, exactly, appears in that header.
There is really no legitimate reason to code C anymore: A C++ compiler is available for any modern development target. Confining yourself to any C subset (with or without ISO C marginalia) amounts to crawling when you can fly.
To be more specific: nothing that can be done using C is unavailable to the C++ coder, but enormous power is available to the C++ coder and wholly unavailable in C.
For me the legitimate reason to code C is to get stuff done, period.
There is hardly a need for more (to do what I do -- systems in the broadest sense) and there's a lot of time saved by not thinking about 21 "safe" and "ergonomic" ways to write everything in C++ (that very often end up being not that safe, and unintelligible).
Yes, there is the "subset" argument, only use what you need. But I don't buy that, I'm not like that, it doesn't work for me. Constraining myself makes things easier.
I started in C, and am still in it for systems/OS development at work.
In UI development, we had older C toolkits, but are replacing them with newer toolkits that are C++. I started off trying to just do C business logic with some C++ glue code for the UI. Sticking to what's familiar. In the end, it was just easier to do everything properly. All C in areas that are C. All C++ in areas that are C++. Same with C#. Trying to mix them is just more work than necessary.
At least I get to add C++ to my resume. There is a little joke at work, w.r.t. our existing codebases and code choices. They are career-driven choices, not technical choices. The result is a mess of a codebase, but I'm fully embracing that now.
> For me the legitimate reason to code C is to get stuff done, period.
If I want to just get stuff done, C++ is great for that. Don’t underestimate how much you gain by having string, vector, and map when you just want to crank out code.
Nobody is forcing you to overdesign stuff. You can always just crank out the code you want.
Yeah, just having the standard library container types (or similar generic third party libraries if you don’t like std ones, eg I tend to use phmap maps and sets instead of std::unordered_map/set). Templated containers just make using them so much easier than generic containers in C. Ergonomics matter when you just want to get things done. Plus the C++ containers make it easier to manage memory IMHO.
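To make that concrete, the kind of throwaway code those containers enable — a quick sketch, nothing library-specific:

```cpp
#include <map>
#include <sstream>
#include <string>

// A handful of lines to tokenize and count words -- the
// "just crank it out" style that string and map make cheap.
std::map<std::string, int> word_counts(const std::string& text) {
    std::map<std::string, int> counts;
    std::istringstream in(text);
    std::string word;
    while (in >> word) ++counts[word];  // default-constructs to 0
    return counts;
}
```

The equivalent in C means hand-rolling (or vendoring) a hash table and managing every string's lifetime yourself before you get to the actual problem.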
The main reason I still sometimes use C is compiler support for microcontrollers where everything tends to have a C compiler but not everything has a C++ one (or if it does, not necessarily an up to date one).
I don’t use either in my day to day really but I do have sone personal projects. Ultimately I foresee doing these in Rust where I can, but for stuff like hobbyist game development, I’m currently too bought into some of the C++ libraries, primarily EnTT, but even there I think I’ll eventually just end up using Rust with Bevy. For now though, I actually quite enjoy using C++17 (and maybe 20 soon if I find the free time).
Even something like Turbo C++ for MS-DOS is preferable to raw C, the only embedded toolchains that don't offer even that are usually stuff like PIC and similar.
Which, in any case, have companies like Mikroe selling Basic and Pascal compilers.
> There is really no legitimate reason to code C anymore: A C++ compiler is available for any modern development target. Confining yourself to any C subset (with or without ISO C marginalia) amounts to crawling when you can fly.
>
> To be more specific: nothing that can be done using C is unavailable to the C++ coder, but enormous power is available to the C++ coder and wholly unavailable in C.
As a counterargument, sometimes constraints can be valuable in their own right rather than just being requirements for some larger goal. The most prominent example of this in programming design is type checking; enforcing type checking at compile time strictly reduces the number of programs you can express, so dynamic languages can do everything that static languages can and more, and yet it turns out to still be pretty useful, specifically because we sometimes want to avoid the programs that a dynamic language would allow but a static language would reject.

At the risk of getting too philosophical, imagine framing programming as an exercise in trying to write rules to specify one program out of the set of all possible programs, where each bug you write that changes the semantics moves you one step further from your goal. The compiler can act as a "filter" on the set of programs you can potentially select, by refusing to process your rules if they would specify one of the forbidden programs, which reduces the chance of selecting the wrong program by mistake. In this scenario, the theoretically ideal compiler would be the one that rejects all programs other than the exact one you want! This obviously isn't possible in the general case, but the point here is that as long as a compiler doesn't reject the exact program you want, a compiler that rejects more programs is _more_ useful.
I'm not making any claim about whether it is in fact worth it to code C anymore, but I disagree with the logic that C++ letting you do anything C can do and more is proof that C++ should always be used over C. Maybe C is crawling compared to C++'s flying, but if my goal is to get a pen that falls under my desk, it's a lot more useful for me to not use an airplane to do it.
I think the gp was talking about anti-features. Austral's announcement included a list of features it avoided on purpose, including several from C++: operator overloading, increment operators, and even implicit order of operations were listed as features that made codebases worse, even if no one has to use them.
Anyone who says C is not important hasn't worked on embedded systems or operating systems.
I work on embedded systems. There are roughly 3 choices for languages[1]: C, Ada, and C++. Except the version of C++ is often vendor-specific and may lack features that are "standard C++", and never has a "full" STL. In addition, if you're not super careful with C++, what you thought was a copy or declaration can execute additional constructor or assignment logic. It might just interfere with your estimate of how long something can take (which is super important in interrupt handlers), or worst case, smash your stack (which can be as small as 1-2k). With C, you pretty much know what's going to happen and what's getting called. The fact that C doesn't require a runtime (nor does Ada) means you can use it to write kernels. You can't do the same with a 'full' version of C++, so you're back to a cut down C++.
I also spend time debugging compiled, optimized code by tracing instructions. This is hard enough in C, where I don't magically jump to a constructor function or an assignment function. I can visualize in my head how the C code could look like, given the instructions the optimizer produced. So it may not look like my code, but it is a version of the code that I can at least logically infer.
Ada is nice, but outside of some super-safety critical realms, it hasn't had the uptake. Private industry prefers not having to make the investment and the government let everyone waive out of it.
1. I know people are going to say 'what about Rust, Go, or embedded something or other?' They are not as mature in their development lifecycle. So there's no validated RTOS for Rust (there are some experimental ones). Go is even further behind. Basically, you need a vendor supported RTOS that's validated for the safety and operational requirements of your use case, before you can be a serious choice for a lot of embedded work.
> Anyone who says C is not important hasn't worked on [..] operating systems.
Most OS code is written in C++ or Objective-C, not C. Between Android, macOS/iOS, and Windows, there's not a lot of C code. The kernels are most of it. The NT kernel has C++, but it's hard to get a source on how much is still C or not. But the kernel is far from being most of the OS regardless.
> The fact that C doesn't require a runtime (nor does Ada), means you can use it to write kernels. You can't do the same with a 'full' version of C++, so you're back to a cut down C++.
For a huge number of users, "full" C++ is C++ with -fno-exceptions and -fno-rtti anyway, at which point the 'runtime' is like 2 functions and it's absolutely perfectly fine to use in a kernel. But regardless neither of those features are inherently incompatible with being in a kernel. You just have to implement a runtime to do that, just like kernels written in C have to implement a libc replacement.
Unless you're talking about the standard library (even though nearly all of it is completely kernel-compatible out of the box), but then you'd have to include libc & friends in the "C runtime" category and then it's equally impossible to use in a kernel.
Sorry, I didn’t mean the userland stuff above the kernel, which, along with the boot loader, is the part that is really restricted in any sense. With the standard library, there are large parts that are not kernel safe. That’s because they rely on services like memory allocation, which are different in the kernel. Or they rely on kernel services, like files. And what I work on has no kernel, just an RTOS.
And C in the kernel isn't really a libc replacement. They often have different behaviors because they need to run without allocating memory, or be interrupt safe. For example, in some situations I use a function call instruction that specifically does not create a new stack frame.
Very little of the standard C++ library is not "kernel safe". If a bit of it might need to allocate, you may provide it a kernel-grade allocator to use; but you probably use custom kernel containers, for other reasons. If you don't want any filebufs, you do not make them.
I love how usually not having support for full ISO C++ or it not being up to date is an issue, while having to deal with a cut down version of C, freestanding, a custom library and compiler extensions is a plus.
It's been decades since I worked with Ada, so I could be wrong, but you could build it with an embedded runtime that allows you to do tasking and (I think I remember) exceptions. But it also has a language-standard way to limit certain features. The difference is that C++ is a grab bag, depending on the vendor, platform, RTOS, etc.
Vendor compilers are common, if not typical, for embedded and safety-critical systems. It's not masochism since they work, and in my experience they work quickly to address any compiler bugs you may discover and report. This is also true for Ada, C++, and Fortran in that domain.
If the hardware is exotic I guess you’d have no choice.
But for security critical don’t you run the risk of relying on obscurity rather than security due to the niche-ness of your stack?
What does a vendor compiler do or do better than a compatible generic one?
When you get a critical system certified for fielding you aren't just certifying the source code, but the actual executable and build process and test process and other things. This requires reproducibility for years to come. Choosing generic compilers may work in dev and parts of test, but not for actual deployment as a consequence (or it doesn't work well). Suppose you picked clang 11 several years back. Now you need to do an update to the system, you can still use clang 11, but not clang 14 at least not without doing a comprehensive recertification process. Also, if an issue is discovered in clang 11 it's likely been fixed in clang 14, but again you have to get your system recertified with clang 14. And that's if the issue is fixed, it may still exist.
With a vendor supplied compiler you can say, "We're using version 11.2". A year or two later an issue is discovered, the vendor will backport a fix to 11.2 giving you 11.2.1 which is much less effort for recertification. You aren't depending on the kindness of strangers (a terrible strategy) because you're actually paying someone to do the work.
Some vendors base their compilers on GCC or Clang. But if you're not using their provided compilers, you either need to 1) be willing to shoulder the expense of making changes to GCC or Clang for whatever you're adding to the silicon in terms of instructions, and 2) be willing to get your combination of RTOS and compiler re-validated for the safety or operational standard on which you need to deliver.
But even if you do use GCC or Clang, it doesn't change the mechanics of C++ as a poor language for embedded or operating system work. It just means you have the same choices (e.g. limited support for containers and strings, no exceptions, limited smart pointers, etc.) and you're making that choice based on GCC or Clang's limits.
It's not impossible. Other people have done it. Some have done it with a minimal kernel or micro-kernel in C and then services on top of that in C++. Just doing a quick perusal of their code base, the low level stuff is functions and structs. So, yes, it is a .cpp but not that different from the code you'd write if it were a .c. And it appears they're turning off exceptions and the standard library (which is completely reasonable). They probably have some additional coding standard (internally) like you can't use anything with a constructor in certain contexts, etc.
And on a validated RTOS - it might be in C++ under the covers. The examples that come to my mind aren't. But the world is a big and magical place, so I can't say for certain on every RTOS. The F35 uses C++ and a set of very restrictive internal coding standards built on https://en.wikipedia.org/wiki/MISRA_C.
And no one will die if their copy of Serenity OS crashes.
The rewrite-in-C++ movement lived in Usenet flamewars, and we were on a good path with all the userspace C++ frameworks on OS/2, MS-DOS, Windows, Apple and BeOS, until the GNU Manifesto came around and urged everyone to write FOSS code in C.
"Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C."
Yes but back then you could compile C with a C++ compiler and get the same type checking as in actual C++. There were a few pitfalls but a shared "Clean C" coding style was viable and that's what most of the rewriting effort was focused on.
C++ did add stronger modularity with its public and private member specifiers, namespacing etc. but that was mostly useful on larger projects.
Stuff written about C++ before there was an ISO Standard obviously does not apply after. And, we should know by now how much Richard Stallman's prejudices are worth.
By all means, GNU Coding standards from 2006, 8 years after C++98.
"When you want to use a language that gets compiled and runs at high speed, the best language to use is C. Using another language is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. For example, if you write your program in C++, people will have to install the GNU C++ compiler in order to compile your program.
C has one other advantage over C++ and other compiled languages: more people know C, so more people will find it easy to read and modify the program if it is written in C.
So in general it is much better to use C, rather than the comparable alternatives."
You emphasize my point so clearly, I need add nothing.
But I will note that GNU Gcc and Gdb are both C++ projects now. Gold, the current GNU linker, started out C++. Are there any other still-relevant GNU projects?
That is now, we were talking about when GNU/Linux started to be relevant and what triggered language choice, versus what was happening in the desktop PC world.
The latest version of the GNU coding standard is much more permissive.
It is still far from clear whether anybody will be learning Rust in ten years, instead of whatever the new hotness is then. Hiring a domain expert who also knows Rust is generally impossible today, and will be for at least a long time. So starting a new project in Rust is OK for a project you know won't matter, but absurdly risky for one that will.
C++ is mature. None of the above is a concern, for C++.
In terms of adoption by pre-existing large players in the industry, Rust is already ahead of where Ruby was at its hype peak. It's just not quite as shiny because that code is not running some web app that is immediately demo-able, but some boring OS infrastructure stuff.
Given the money already invested into internal tooling and infrastructure specifically for Rust, I just don't see those large companies suddenly dropping it. Regardless of how good the language itself is, the sunk cost alone makes it hard to turn ship. And the language is good, so it'll use that time to entrench itself further.
They won't drop the language: whatever products depend on it will continue to depend on it. Unless hiring gets difficult; then they will transcribe it to a more mainstream language. In the meantime, language choice for new projects will be driven by experience hiring into these and other projects.
Other, newer languages will come up continually, siphoning off the most mobile who have begun to find Rust familiar and pedestrian.
Domain experts are focused on their domain. Becoming a beginner again is a recipe for radically reduced productivity. The people eager to learn a new language on the job are generally those who have not invested much in anything.
Only if you ignore things like iostreams, thread locks, etc.