I'm probably going to make a few enemies with this opinion, but I think modern C++ is just an utterly broken mess of a language. They should have just stopped extending it after C++11.
When I look at C++14 and later, I can't help but throw my hands up, laugh, and think: who, except for a small circle of language academics, actually believes that all this new template crap syntax helps developers?
Personally I judge code quality by a) Functionality (does it work, is it safe?), b) Readability, c) Conciseness, d) Performance, and e) Extendibility, in that order, and I don't see how these new features meaningfully move any of these in the right direction.
I know the intentions are good, and the argument is that "it's intended for library developers", but what percentage of developers is that, compared to regular app/backend devs? In reality, what's going to happen is that inside every organization a group of developers with good intentions, a lack of experience, and too much time will learn it all and then feel the urge to "put their new knowledge to use improving the codebase", which generally just puts everyone else in pain and accomplishes exactly nothing.
Meanwhile it's 2021 and C++ coders are still
- Waiting for Cross-Platform standardized SIMD vector datatypes
- Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
- Debugging cross-platform code using couts, cerrs and printfs
- Forced to use boost for even quite elementary operations on std::strings.
Yes, some of these things are hard to fix and require collaboration among real people and real companies. And yes, it's a lot easier to bury your head in the soft academic sand and come up with some new interesting toy feature. It's like the committee has given up.
> - Waiting for Cross-Platform standardized SIMD vector datatypes
Which language has standardized SIMD vector datatypes? Most languages don't even have any ability to express SIMD, while in C++ I can just use Vc (https://github.com/VcDevel/Vc), nsimd (https://github.com/agenium-scale/nsimd), or one of the ton of other alternatives, and have stuff that Just Works(TM) on more architectures than most languages even support.
- Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
What are the other native languages with a standardized memory model for atomics? And what's the problem with using libraries? It's not like you're going to use C#'s or Java's built-in threadpools if you are doing any serious work, no? Do they even have something as easy to use as taskflow (https://github.com/taskflow/taskflow)?
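For what it's worth, the standardized memory model the parent refers to is directly usable from the standard library. A minimal sketch of release/acquire publication (contrived for illustration):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// The C++11 memory model in miniature: the release store "publishes"
// the plain write to `data`, and the acquire load guarantees the
// reader observes it once `ready` is seen as true.
int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                    // plain, non-atomic write
    ready.store(true, std::memory_order_release); // publish
}

int consumer() {
    while (!ready.load(std::memory_order_acquire)) {} // spin until published
    return data; // guaranteed to see 42 by the acquire/release pairing
}
```

Without the `memory_order_release`/`memory_order_acquire` pairing (or the default sequentially consistent ordering), the reader observing `ready == true` would not imply it also sees the write to `data`.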
- Debugging cross-platform code using couts, cerrs and printfs
Because people never use console.log in JS or System.out.println in Java, maybe?
- Forced to use boost for even quite elementary operations on std::strings.
Can you point to non-trivial Java projects that do not use Apache Commons? Also, the boost string algorithms are header-only, so you end up with exactly the same binaries as if they lived in some std::string_algorithms namespace.
Most of what you said is a fair retort, but boost isn't quite as rosy as you make it seem. It's great but it has serious pitfalls which is why many C++ developers really hate it:
A) Boost supports an enormous number of compilers & platforms. Implementing this support requires an enormous amount of expensive preprocessor machinery that slows down the build & makes it hard to debug.
B) Boost is inordinately template-heavy (often even worse than the STL). This is paid for at compile time, and sometimes at runtime and/or in binary size if the library maintainers don't do a good job structuring their templates so that the inlined template API calls a non-templated implementation. The first C++ talk I remember discussing this problem was about 5-7 years ago, & I doubt boost has been cleaned up across the board in its wake.
C) Library quality is highly variable. It's all under the boost umbrella but boost networking is different from boost filesystem, different from boost string algorithms, different from boost preprocessor, boost spirit, etc. Each library has its own unique cost impact on build, run, & code size that's hard to evaluate a priori.
Boost is like the STL on steroids but that has its own pitfalls that shouldn't be papered over. Maybe things will get better with modules. That's certainly the hope anyway.
It's actually a bit impressive how many languages have it at this point.
> what are the other native languages with a standardized memory model for atomics
Rust, C, Go?
> It's not like you're going to use C# or Java's built-in threadpools if you are doing any serious work, no?
Define "serious". By most metrics JVM apps run at 1-2x the speed of C++, which is really not terribly slow for a managed language. On top of that, there are a lot of places where Java can outperform C++ (e.g. high heap allocation rates). Java's threadpools and concurrency model are, IMO, superior to C++'s.
> Do they even have something as easy to use as taskflow
Several internal and external libs do. Java's completable futures, kotlin's/C#'s (and several other languages) async/await. I really don't see anything special about taskflow.
> can you point to non-trivial java projects that do not use Apache Commons
Yes? It's a fairly dated lib at this point as the JDK has pulled in a lot of the functionality there and from guava. We've got a lot of internal apps that don't have Apache commons as a dependency. I think you are behind the times in where Java as an ecosystem is now.
... I just checked your link and wouldn't say that any of these languages have SIMD more than C++ has it currently:
- Java: incubation stage (how is that different from https://github.com/VcDevel/std-simd?). Also, Java is only getting it soonish for... amd64 and aarch64??
- Rust: those seem to be just the normal intrinsics which are available in every C++ compiler?
- Dart: seems to not go beyond SSE2 atm? But it looks like the most "officially supported" of the bunch.
- Javascript: seems to be some Intel-specific stuff which isn't available in any of my JS environments?
- The Go one does not seem to support acquire-release semantics, which makes it quite removed from e.g. ARM and NVidia hardware, from what I can read here: https://golang.org/pkg/sync/atomic/
That's quite well thought out; without compile-time checks that the operations exist, you end up with code that either targets the very small subset of operations that are widely supported, or something that is not really cross-platform. I've seen too much of the following in theoretically portable code, because a software fallback will typically be an order of magnitude worse than switching to a different set of datatypes and operators:
#if defined(__NEON__)
"portable" SIMD goes here
#elif defined(__ALTIVEC__)
different "portable" SIMD goes here
...
#endif
There was some discussion about what to do with vector types and operations that weren't supported by the hardware. We decided on compiler error instead of emulation, because the emulation would be terribly slow and the user may be unaware that he's getting emulation.
With a compiler error, the user unambiguously knows if the SIMD hardware is being used or not.
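A toy sketch of that "error, not emulation" policy (the native width here is made up purely for illustration):

```cpp
#include <cassert>
#include <cstddef>

// A hypothetical fixed-width vector wrapper that refuses to instantiate
// widths the target can't support natively, instead of silently falling
// back to slow scalar emulation.
template <std::size_t Width>
struct simd_f32 {
    static_assert(Width == 4, // pretend 4 lanes is the only native width
                  "no native SIMD for this width; refusing to emulate");
    float lanes[Width];
};

simd_f32<4> ok{};        // fine: native width
// simd_f32<64> nope{};  // loud compile error instead of 10x-slower emulation
```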
I hope they keep going down this path and make it into a real mess of a language, so that people can finally stop pretending C++ is the solution to any problem, when it is in fact the cause of a lot of your problems.
I began C++ coding over 20 years ago as well, and it required reading thick books even then. I remember my classmates at uni really hated software development, all because of C++. It was way too hard as a beginner's language, even 20 years ago.
I look at all these new features, and I am like: How on earth are you going to teach all this crap to students?
They have painted themselves into a corner. It becomes a language only for those who have already programmed it for 10-20 years.
This idea that it is only for library developers is a bunch of crap. A lot of learning a language is really about reading the code for the standard library. That was one of the beauties of writing Go code. You regularly look at standard library code and are even encouraged to do so. It teaches you a lot about good style.
Same deal when I program in Julia. Looking at library code is totally normal and common.
Except in C++. I avoided looking at library code like the plague. And I suppose, now it will only get worse.
The worst part of this is that this isn't just a problem for C++ developers but for everybody else too. So many key pieces of software rely on C++ code. It becomes ever harder to migrate that code or interface with it as C++ complexity grows.
That was the beauty of a language like Objective-C. Unlike C++ it is a fairly simple language which you can interface easily with. The result was that porting to Swift was really easy. When porting iOS apps to Swift I could pick individual functions and rewrite them to Swift.
There is no hope doing anything like that with C++.
> I look at all these new features, and I am like: How on earth are you going to teach all this crap to students?
You don't. You teach "A Tour of C++" (2nd edition)[0], which presents a clean, smaller subset of the language that people can wrap their minds around, with everything someone new to modern C++ needs to know to be effective. And you supplement this with the "C++ Core Guidelines"[1], which can be enforced by code analysis and provide examples of common mistakes or questions people might have.
You do not need to know all the details of the language or know every single feature. And you wouldn't teach everything to a student.
But it's true that there is some overhead due to the complexity of the language.
> I'm probably going to make a few enemies with this opinion, but I think modern C++ is just an utterly broken mess of a language. They should have just stopped extending it after C++11.
This is the popular refrain of the day, so I don't know why you couch this as if you're saying something controversial.
The popular refrain has more to do with the lack of memory security features in the language, although I'm sure they will bolt a borrow checker or something on to the language.
There are currently enclaves of developers who know varying versions of C++. There's a good chance that a 20-year C++ veteran would have to consult the documentation for syntax. That's concerning. Defining what something isn't is nearly always more important than defining what it is, and C++ is seemingly trying to be everything.
This is a common saying because it is a common occurrence.
People who use the language effectively know all about the complaints. Those people live with their complaints knowing no other language even comes close to meeting their needs. No language on the horizon is even trying to meet their needs.
C++ usage is still growing by leaps and bounds. Attendance at ISO Standard meetings is soaring; until Covid-19 killed face-to-face meetings, each one had more attendees than any meeting before. The same goes for conventions; even the number of C++ conventions held grows every year, with new national ones arising all the time.
Rust is having a go at part of the problem space, and making some headway. But more people pick up C++ for the first time in any given week than the total who have ever tried Rust. It is still way too early to tell whether that will ever not be true.
So the HN trend is very much an echo-chamber phenomenon, with no analog in the wider world.
> This is a common saying because it is a common occurrence.
Ha ha. This is not applicable to software, and, I assume, not to some craftsmen either.
What's the percentage of software developers that actually get to choose their tools? 40%? 60% at best? Though most likely it's just 20%.
Most projects are pre-existing, it's only natural. You can't create more projects than those already in existence, once a field matures a bit. Which means that you have to use what's already there.
Plenty of people are forced to use bad tools. And they can for sure blame them.
Many craftsmen do not get the tools they could wish for.
Your craft is your personal responsibility; you use your tools, they don't use you. So, your product is the result of what you do, not what your tools do. Limitations of your tools leave you with greater responsibility to ensure results that satisfy whatever standard you work to.
Blaming your tools for bad results tells people much more about you than about the tools.
First of all, we are not craftsmen. We are more like factory workers. Ford factory worker #515 had no say in the 1000 ton machine just installed in the factory. He just had to make his part of the car.
We delude ourselves into thinking we're all Picassos when we're just house painters, at best.
It'd be more accurate to say not many of us are craftsmen. (Craftspeople?) There are still some ways to make money through creative, open-ended development; they've just always been on the rare side.
Trillions of lines of existing code are also a strong argument for why C++ is going to stay for a while. Lots of good C++ programmers I know would be really excited to use Rust, but the interop with legacy systems is not worth it for many use cases.
True, but there's plenty of Stockholm Syndrome as well. C++ is a mess, and there's people that will defend that mess to the end of times. Those people managed to get pretty good and have a deep understanding of all of its quirks, but lack the ability to take a seat back and admit that yes, nobody without masochistic tendencies would get into C++20, unless they're already familiar with it.
I'm sorry but can we stop hating on "academics"? No one in research matches your description. The intersection of academia and C++ contains only practitioners (like in the industry), who just want their code to work; and maybe some verification people who'd rather wish C++ was smaller because it is a hell of a beast to do static analysis on. Both these categories are real people having real use cases. The programming language crowd is generally more interested in stuff like dependent types or effect systems, not templates.
If you replace 'academic' with the secondary definition -- "not of practical relevance; of only theoretical interest" -- it is probably true though. Having known some of the C++ standard contributors, they strongly defend themselves against the "not of practical relevance" part with "look what I wrote". Sure it's clever, but adding language features just to say "look what I wrote, it's clever" is no excuse for building a language that's become a train wreck.
(I have been coding in C++ on and off professionally since 1985, and I do like some of the C++11 and C++14 features. The pointer improvements are great, but the template stuff is a complete joke on us.)
> Sure it's clever, but adding language features just to say "look what I wrote, it's clever" is no excuse for building a language that's become a train wreck.
Actually, the rationale behind the language features you're criticizing is that people in the real world were already using some techniques in C++ in a needlessly complex and convoluted way, and these new additions not only simplify these implementations but also allow the compilers to output helpful, user-friendlier messages.
Take concepts, for example. You may not like template metaprogramming, but like it or not it is used extensively in the real world, at the very least in the form of the STL and Eigen. Template metaprogramming is a central feature of C++, consumed by practically each and every C++ developer despite most of them rarely writing such code themselves. Does it make any sense at all to criticize work to improve a key feature that benefits every C++ programmer, even those who never have to write code with it?
And no one of sane mind would argue in favour of #include and #ifndef/#define guards over a proper module system.
Just because you aren't familiar or well-versed with some C++ features, or aware of how extensively they are used, it doesn't mean they are not used or that the stuff you don't know automatically qualifies as a trainwreck.
If you had really done any serious work writing template metaprogramming code, or were aware of what happens under the hood in libraries that were developed with it, you wouldn't be dismissing the recent contributions that improve its UX, for both developers and library/module consumers, as a trainwreck.
> When I look at C++14 and later I can't help but throw my hands up,
Why C++14? The changes were very minor and mostly about being able to declare lambda functions with auto, which is extremely useful.
> Waiting for Cross-Platform standardized SIMD vector datatypes
I only know of ISPC having this, but there are also lots of SIMD libraries for C++ that are small and have minimal dependencies.
> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
std::thread, atomics, and mutexes were added in C++11 and work extremely well. OpenMP is supported by the top four compilers if someone wants super-easy fork-join parallelism. What other languages make C++ look archaic here?
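For example, a plain C++11 sketch of fork-join work using nothing but std::thread and std::mutex (the chunking scheme here is arbitrary, not a recommendation):

```cpp
#include <algorithm>
#include <cassert>
#include <mutex>
#include <numeric>
#include <thread>
#include <vector>

// Sum a vector across several threads; the mutex guards only the
// final accumulation, so contention stays negligible.
long parallel_sum(const std::vector<int>& v, unsigned nthreads = 4) {
    long total = 0;
    std::mutex m;
    std::vector<std::thread> workers;
    const std::size_t chunk = (v.size() + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = std::min(v.size(), t * chunk);
            const std::size_t end   = std::min(v.size(), begin + chunk);
            const long local = std::accumulate(v.begin() + begin, v.begin() + end, 0L);
            std::lock_guard<std::mutex> lock(m); // serialize only the final add
            total += local;
        });
    }
    for (auto& w : workers) w.join();
    return total;
}
```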
> Debugging cross-platform code using couts, cerrs and printfs
Both Visual Studio and Qt Creator have made this unnecessary for a long time (if you can do step-through debugging). What other language are you thinking of that makes C++ look archaic here?
> Forced to use boost for even quite elementary operations on std::strings.
That's completely ridiculous. It is easy to avoid boost these days (thank god), and this is DEFINITELY not worth pulling boost in for. For example, you can use https://github.com/imageworks/pystring on top of what C++ already has, combined with regular expressions.
I don't think anything you listed is actually a problem. If you had talked about not having a standard networking library or standard serialization it might have made more sense.
> a) Functionality (does it work, is it safe?), b) Readability c) Conciseness d) Performance and e) Extendibility
I use a lot of library features after C++11. Variant, span, and string_view are the most important ones. As to language features, structured bindings and variable templates come to mind. They pretty much hit all of your code quality points. I don't think these are for "a small circle of language academics" either (I'm definitely not in that "small circle"). Syntax-wise, meta programming can get ugly yes. Even Stroustrup himself doesn't like it. I guess at this point it's just for "historical reasons".
> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
I think this one comes down to the fact that there is a vast range of parallel computing models out there, and C++ wants generality. I used to write a lot of MPI programs targeting supercomputers. I don’t think any language would want to include that in the standard…
> Debugging cross-platform code using couts, cerrs and printfs
What’s wrong with printing? I even debug JavaScript programs with console.log(). It’s convenient.
If you just do local dev, debuggers work pretty well; you can debug however you want. I was unfortunate enough to have pretty much always worked on platforms where it is hard to get a good remote debugging session, due to hardware capacity, legacy toolchains, or even ssh-ing onto the host being hard enough due to security. But that's hardly C++'s fault.
> Forced to use boost for even quite elementary operations on std::strings
It’d be great if std::string had more features. But I don’t think it’s a big deal. Personally I don’t like linking boost into my programs, so I just write my own libraries for that. It’s just elementary operations anyway.
But that's the point. Metaprogramming has gotten significantly better since c++11, and c++17 metaprogramming is extremely clean. Are we getting mad at them for improving things?
So... You're arguing against it by pointing out an excellent library for the language? Was someone forcing you to use std::thread? Of course it won't have as many features as tbb; it's meant to help pthreads users.
Not exactly. I am reminded of n3557. The ability to write a library like TBB is a positive. But much richer libraries are just barely over the ridge. std::thread is not much more interesting than the abstractions provided by Boost in the early 00's.
Part of the job of the library is to be boring. It's the reason third party libraries exist in the first place. They give you the basic starting points to get the job done.
Look how many people are complaining on here about how complicated c++ is. If something like tbb was integrated they would be all over it.
These things are specific to CPU architectures, but other than that they’re cross-platform and de-facto standards set by Intel and ARM. The same source code builds with all mainstream compilers, regardless of the target OS.
> nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores
OpenMP is not part of C++ standard, but it’s still a standard in the sense they have a complete specification: https://www.openmp.org/specifications/ Mainstream compilers are reasonably good at implementing these specs.
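For instance, a minimal fork-join loop under that spec (illustrative only; without -fopenmp the pragma is ignored and the loop simply runs serially, producing the same result):

```cpp
#include <cassert>
#include <vector>

// Dot product with OpenMP's fork-join model: the pragma splits the
// iterations across threads and the reduction clause combines the
// per-thread partial sums.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(a.size()); ++i)
        sum += a[i] * b[i];
    return sum;
}
```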
> Debugging cross-platform code using couts, cerrs and printfs
Debugging story is not great outside MSVC, but it’s not terrible either. When I needed that, gdb worked OK for me.
> Forced to use boost for even quite elementary operations on std::strings
I agree the ergonomics can be better, but I’m not using boost, and I see improvements, e.g. std::string_view in C++17 helped.
I'm not sure cross-platform SIMD vector data types are practical, at least not ones that don't force you to understand the implementation details on every microarchitecture you target.
If you actually care about performance, and presumably anyone that wants to use SIMD vector types does, you need to fit the higher-level data structures to the nuances of the microarchitecture you are targeting. Compilers don't do optimization at that level, you have to write the code yourself. Thin wrappers on compiler intrinsics is actually the right level of abstraction if you want to exploit those capabilities.
Similarly, how code is parallelized is completely dependent on what you are trying to do, the software architecture, and the silicon microarchitecture; there is no way to usefully standardize it outside of use cases so narrow they probably don't belong in C++. Parallelization in practice happens at a higher level of abstraction than the programming language.
And FWIW, I use many of these new C++ language features in real software every day because they provide immediate and compelling value. I am not an academic.
Code quality can also be judged by the quality of compiler output. C++ has many language features that allow compilers to generate efficient code. Unfortunately it also features incredibly complex abstractions that lead to insane binary interfaces.
Binary interface complexity is actually a huge reason why people rewrite stuff in C. When you write in C, you get symbols and simple calling conventions. Makes it easy to interoperate.
> C++ has many language features that allow compilers to generate efficient code.
It does, but it also has the ability to generate inefficient code. Sure, it's often the developer's fault, but I feel like it's much easier to shoot yourself in the foot performance-wise in C++ than in other compiled languages.
Some real-life examples for me:
* Missing a '&' for a function parameter resulting in that object being copied for each function invocation
* Adding a couple extra chars to an error message string in an inlined function which caused that function to then be 'too large' to inline according to the compiler
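The first pitfall can be made visible with a copy-counting type (a contrived example, not from any real codebase):

```cpp
#include <cassert>

// A type that counts how often it is copied, to show what a missing
// '&' on a parameter costs on every call.
struct Tracked {
    static int copies;
    Tracked() = default;
    Tracked(const Tracked&) { ++copies; }
};
int Tracked::copies = 0;

void by_value(Tracked) {}        // copies its argument on every invocation
void by_ref(const Tracked&) {}   // no copy at all
```

With a cheap struct this is invisible; with a type holding a large vector or string, the silent per-call copy shows up directly in profiles.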
> When I look at C++14 and later I can't help but throw my hands up, laugh and think who, except for a small circle of language academics, actually believes that all this new template crap syntax actually helps developers?
I do. There are a lot of features introduced since C++11 that make my life much easier. Sure, it's always scary to have to learn new things, but once you get over that hump, you start to see the benefits. Concepts and constexpr cut down on the template boilerplate crap a lot. Being able to use the auto keyword in more contexts means less repetition. Modules get rid of the ugly hack that is the preprocessor. std::span means I don't constantly have to pass around a pointer and length, or create a dedicated struct to encapsulate pointer+length. Sure, there are some more obscure features whose usefulness are questionable, but for a design-by-committee language, they're doing a slow but sure job of moving past the language's old warts.
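As a small illustration of two of those conveniences (std::string_view and structured bindings, both C++17), here is a hypothetical helper; the names and the prefix-matching task are made up for the example:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <string_view>

// string_view lets us compare prefixes without allocating or copying.
bool starts_with(std::string_view s, std::string_view prefix) {
    return s.substr(0, prefix.size()) == prefix;
}

// Structured bindings replace the old it->first / it->second dance.
int count_matching(const std::map<std::string, int>& m, std::string_view prefix) {
    int n = 0;
    for (const auto& [key, value] : m)
        if (starts_with(key, prefix)) n += value;
    return n;
}
```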
> In reality what's going to happen is that inside every organization a group of developers with good intentions, a lack of experience and too much time will learn it all and then feel the urge to now "put their new knowledge to improve the codebase", which generally just puts everyone else in pain and accomplishes exactly nothing.
Feature adoption doesn't happen overnight. Remember, we're talking about a decades-old language burdened by backwards compatibility - it took a long time for people to migrate from supporting C++03 to dropping it in favor of C++11. Give it five or ten years, and I reckon you'll see people make use of C++17 and C++20 in much greater numbers.
> Waiting for Cross-Platform standardized SIMD vector datatypes
No argument there. That said, all mainstream compilers already have "immintrin.h" for x64 and "arm_neon.h" for ARM, and using them isn't particularly difficult.
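For example, a four-lane float add via immintrin.h is about as thin as wrappers get. This is a sketch, not tuned code; it assumes an x86 target for the intrinsic path and falls back to scalar code elsewhere:

```cpp
#include <cstddef>

#if defined(__SSE__) || defined(__x86_64__)
#include <immintrin.h>
// One SSE instruction adds four floats at once; the unaligned
// load/store intrinsics avoid any alignment requirements.
void add4(const float* a, const float* b, float* out) {
    _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
}
#else
// Scalar fallback for non-x86 targets (e.g. use arm_neon.h there).
void add4(const float* a, const float* b, float* out) {
    for (std::size_t i = 0; i < 4; ++i) out[i] = a[i] + b[i];
}
#endif
```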
> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
Are you aware that std::thread has existed since C++11, and std::jthread and coroutines are in C++20?
> Debugging cross-platform code using couts, cerrs and printfs
This is a programmer problem, not a language problem. gdb exists, lldb exists, the Visual Studio debugger exists, and they're not particularly hard to pick up and use - if you're still using print statements to figure out why your application is crashing, that's on you.
> Forced to use boost for even quite elementary operations on std::strings
std::string is an RAII-managed bag of bytes. What kind of operations are you looking for? Stuff like concatenation and replacement can already be done in C++11 with std::string and std::regex. If you want to do lexical operations, like case conversion or glyph counting, then an encoding-aware library is a better solution.
On top of that, one can use strings as a normal "sequenced container of characters" and just use <algorithm>s on them. This is one of my favorite ways to write concise code in those interview questions (e.g. "That's just a rotate, then a partition").
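For instance, both replacement and case conversion are doable with nothing beyond the standard library (illustrative helpers, and deliberately ASCII-only; as noted above, real lexical work wants an encoding-aware library):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <regex>
#include <string>

// Replacement via <regex>, available since C++11.
std::string censor(const std::string& s) {
    return std::regex_replace(s, std::regex("cat"), "dog");
}

// Treating the string as a plain container of chars with <algorithm>.
std::string upper(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return static_cast<char>(std::toupper(c)); });
    return s;
}
```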
> Sure, but very low level. It'd be great to have a standard for something like TBB or OpenMP.
The answer here is modules. Improve the story on shipping C++ libraries, and then who cares if it's in the "standard library" or not? It's not like anyone in JS land for example cares if something is native to the language or in a library since adding a library is trivial & easy.
Modules have nothing to do with shipping libraries (or dependency management), they are purely about encapsulation of interface and (API) implementation.
It should be std::string’s job to store strings. If people want to perform operations on them, that’s what free functions are for, right? Nobody wants std::string to have hundreds of methods.
> Personally I judge code quality by a) Functionality (does it work, is it safe?), b) Readability c) Conciseness d) Performance and e) Extendibility, in this order, and I don't see how these new features in reality help move any of these meaningfully in the right direction.
I don't understand... How do these features not address those points?
> a) Functionality (does it work, is it safe?)
constinit, consteval and all the remaining constexpr improvements are a massive step for ensuring the "compile-time-ness" of code.
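A minimal illustration of that compile-time-ness, using plain constexpr so it works pre-C++20 (consteval, which is C++20, goes further and *forces* compile-time evaluation):

```cpp
#include <cassert>

// constexpr lets the compiler evaluate this at compile time...
constexpr long factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// ...and static_assert proves that it actually did: this check runs
// during compilation, producing no code and no runtime cost.
static_assert(factorial(5) == 120, "evaluated at compile time");
```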
There are several more sharp edges being removed too. It's of course not going to tackle the fundamental safety concerns the way Rust does, but that would take a new language (like Rust is) anyway.
> b) Readability
requires is infinitely more readable than the SFINAE we had to write so far.
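For comparison, here is the older enable_if-based SFINAE spelling of a simple constraint (a toy example), with the C++20 requires equivalent shown in a comment so the block stays compilable under C++17:

```cpp
#include <cassert>
#include <type_traits>

// Pre-C++20: constrain a template to integral types via std::enable_if.
// The constraint is buried in an unnamed default template argument.
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
T twice(T x) { return x + x; }

// The C++20 spelling of the same constraint reads almost like prose:
//
//   template <typename T> requires std::integral<T>
//   T twice(T x) { return x + x; }
```

Calling `twice(1.5)` fails to compile under either spelling, but the requires version also produces a far clearer diagnostic.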
Besides that elephant in the room, most of these changes involve making the code either simpler to read/write (too many to name) or more explicit (consteval/constinit, attributes, etc.).
> c) Conciseness
Half the features contribute to this in one way or another (e.g. see the previous point, or the spaceship operator), but there's also a whole list of syntactic sugar being added.
True, C++20 also comes with great library additions that are not the subject of this blog post but affect (possibly to an even greater degree) the code attributes in question.
It's definitely tricky. I think if you stick to modern C++ and avoid anything advanced unless necessitated, it's a big improvement to your code. But as we know, developers with the discipline not to take advantage of every available feature to be "clever" are rare. And I agree, the standard library is still very much lacking. This is one thing I really like about working with C#: the vast majority of what I'm doing is available and simplified through the standard library.
It is not really about the language at all. He got older, and does not want to learn new things. Other people who stopped learning earlier say, "better C" instead.
The language has gotten continuously more powerful since 2011, albeit in smaller increments until C++20 when several big features landed.
Good C++11 looks practically nothing like C++98, and good C++20 looks as little like C++11.
It is really getting more fun all the time, as old crud falls away, and you can just say more and more just what you mean. Improved type-inference capabilities are doing a great deal of the heavy lifting.
> - Waiting for Cross-Platform standardized SIMD vector datatypes
> - Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
SIMD computation and multithreaded parallel computations were largely solved with execution policies. C++17 added multithreaded and multithreaded+SIMD execution policies, C++20 added single threaded SIMD execution policy.
I would argue that standardizing SIMD vector extension datatypes is an anti-feature for all cross platform programming languages. Writing AVX512 code is very different from writing NEON code. If the compiler autovectorizer doesn't generate good enough code for you, you have no choice but to use the non-cross platform vendor specific intrinsics anyway. If a SIMD datatype and the operations you could perform on it were standardized, it would necessarily have to be a very low common denominator. I don't even know what the lowest common denominator between MMX, SSE2, AVX2, AVX512, NEON and Altivec (to name a few) even is.
Note that the autovectorizers in GCC and Clang (not MSVC) are very, very good. If you structure your data in the way it would have to be structured if one were going to write hand-vectorized code anyway, GCC and Clang will, with a high probability, vectorize it correctly.
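A sketch of the kind of structure-of-arrays loop the GCC and Clang autovectorizers handle reliably at -O2/-O3: independent iterations over contiguous arrays, no branches, no aliasing surprises (the function and its coefficients are made up for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A fused multiply-add over parallel arrays. Because each array is
// contiguous and iterations are independent, the compiler can emit
// SSE/AVX/NEON instructions without any intrinsics in the source.
void scale_add(const std::vector<float>& x, const std::vector<float>& y,
               std::vector<float>& out, float a) {
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = a * x[i] + y[i];
}
```

The same computation over an array-of-structs (one struct per element, fields interleaved) typically defeats the vectorizer, which is the data-layout point made above.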
I don't know what a standardized language feature for execution on different processors than the CPU would even look like. What languages have this, and what does it even look like? Can you give a code sample?
> - Debugging cross-platform code using couts, cerrs and printfs
I don't think I understand what you're suggesting. On second thought I definitely don't understand what you're suggesting.
Are you suggesting that the C++ standards committee should standardize a _debugger_? You'd have to standardize the ABI first. There's no way to do that; 32 bit x86 with its 8 registers must necessarily have different calling conventions than ARM with its 32 (I think? it's been a while) registers.
If you're suggesting that the committee standardizes a UI, there's no way you're going to get the Visual Studio team and the GDB team to agree on what a debugger ought to look like. I don't even know where a mediator would even begin to start suggesting anything.
If you're suggesting that current debugger offerings such as the Visual Studio debugger and GDB aren't good enough, I dunno what to tell you. They work for me.
> - Forced to use boost for even quite elementary operations on std::strings.
Can you give an example? The big thing I used boost string stuff for was boost::format, but now that there's std::format I don't need that anymore.
C++ is a broken mess and I'm completely fine with that because it couldn't be any other way. It started as C with classes and they've kept it moving into the 21st century. Rust is here now and should be used for new projects, but at least old projects get to use these new features, ugly as they are. I've also noticed that most people complaining about "new" features do not understand them.
Started coding C++ when I was 14 -- 22 years ago.