I'm probably going to make a few enemies with this opinion, but I think modern C++ is just an utterly broken mess of a language. They should have just stopped extending it after C++11.
When I look at C++14 and later I can't help but throw my hands up, laugh, and think: who, except for a small circle of language academics, actually believes that all this new template crap syntax actually helps developers?
Personally I judge code quality by a) Functionality (does it work, is it safe?), b) Readability c) Conciseness d) Performance and e) Extendibility, in this order, and I don't see how these new features in reality help move any of these meaningfully in the right direction.
I know the intentions are good, and the argument is that "it's intended for library developers" but how much of a percentage is that vs. just regular app/backend devs? In reality what's going to happen is that inside every organization a group of developers with good intentions, a lack of experience and too much time will learn it all and then feel the urge to now "put their new knowledge to improve the codebase", which generally just puts everyone else in pain and accomplishes exactly nothing.
Meanwhile it's 2021 and C++ coders are still
- Waiting for Cross-Platform standardized SIMD vector datatypes
- Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
- Debugging cross-platform code using couts, cerrs and printfs
- Forced to use boost for even quite elementary operations on std::strings.
Yes, some of these things are hard to fix and require collaboration among real people and real companies. And yes, it's a lot easier to bury your head in the soft academic sand and come up with some new interesting toy feature. It's like the committee has given up.
> - Waiting for Cross-Platform standardized SIMD vector datatypes
which language has standardized SIMD vector datatypes? Most languages don't even have any ability to express SIMD, while in C++ I can just use Vc (https://github.com/VcDevel/Vc), nsimd (https://github.com/agenium-scale/nsimd) or one of a ton of other alternatives, and have stuff that JustWorksTM on more architectures than most languages even support
- Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
what are the other native languages with a standardized memory model for atomics? And what's the problem with using libraries? It's not like you're going to use C#'s or Java's built-in threadpools if you are doing any serious work, no? Do they even have something as easy to use as https://github.com/taskflow/taskflow ?
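For reference, the kind of thing I mean -- a rough sketch along the lines of taskflow's README example (from memory, so treat the exact API as approximate):

    #include <taskflow/taskflow.hpp>
    #include <iostream>

    int main() {
        tf::Executor executor;
        tf::Taskflow taskflow;

        // Four tasks: B and C run in parallel after A, D runs last.
        auto [A, B, C, D] = taskflow.emplace(
            [] { std::cout << "A\n"; },
            [] { std::cout << "B\n"; },
            [] { std::cout << "C\n"; },
            [] { std::cout << "D\n"; });
        A.precede(B, C);
        D.succeed(B, C);

        executor.run(taskflow).wait();
    }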
- Debugging cross-platform code using couts, cerrs and printfs
because people never use console.log in JS or Console.WriteLine in C#, maybe?
- Forced to use boost for even quite elementary operations on std::strings.
can you point to non-trivial Java projects that do not use Apache Commons? Also, the boost string algorithms are header-only, so you will end up with exactly the same binaries as if they lived in some std::string_algorithms namespace:
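Something along these lines, as a quick illustrative sketch of the header-only boost string algorithms:

    #include <boost/algorithm/string.hpp>
    #include <string>
    #include <vector>

    int main() {
        std::string s = "  Hello, Boost World  ";
        boost::trim(s);                                  // in-place trim
        std::string upper = boost::to_upper_copy(s);     // "HELLO, BOOST WORLD"

        std::vector<std::string> parts;
        boost::split(parts, s, boost::is_any_of(", "),
                     boost::token_compress_on);
        // parts == {"Hello", "Boost", "World"}
    }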
Most of what you said is a fair retort, but boost isn't quite as rosy as you make it seem. It's great but it has serious pitfalls which is why many C++ developers really hate it:
A) Boost supports an enormous number of compilers & platforms. Implementing that support takes an enormous amount of expensive preprocessor machinery that slows down the build & makes it hard to debug.
B) Boost is inordinately template heavy (often even worse than the STL). This is paid for at compile time, and sometimes at runtime and/or in binary size if the library maintainers don't do a good job structuring their templates so that the inlined template API calls a non-templated implementation (sketched after this list). The first C++ talk I remember covering this problem was about 5-7 years ago & I doubt boost has been cleaned up in its wake across the board.
C) Library quality is highly variable. It's all under the boost umbrella but boost networking is different from boost filesystem, different from boost string algorithms, different from boost preprocessor, boost spirit, etc. Each library has its own unique cost impact on build, run, & code size that's hard to evaluate a priori.
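For what it's worth, the usual mitigation looks roughly like this -- a generic sketch, not boost's actual code:

    #include <string>
    #include <string_view>

    // Non-templated core: compiled once in a .cpp file, never duplicated
    // per instantiation.
    void write_line(std::string_view message);

    // Thin inline template wrapper: only the cheap per-type conversion is
    // instantiated and inlined; the heavy lifting stays in write_line().
    template <typename T>
    void log(const T& value) {
        write_line(std::to_string(value));  // std::string converts to string_view
    }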
Boost is like the STL on steroids but that has its own pitfalls that shouldn't be papered over. Maybe things will get better with modules. That's certainly the hope anyway.
It's actually a bit impressive how many languages have it at this point.
> what are the other native languages with a standardized memory model for atomics
Rust, C, Go?
> It's not like you're going to use C# or Java's built-in threadpools if you are doing any serious work, no ?
Define "serious". By most metrics JVM apps run at 1->2x the speed of C++, that's really not terribly slow for a managed language. On top of that, there are a lot of places java can outperform C++ (high heap memory allocation rates). Java's threadpools and concurrency model is, IMO, superior to C++'s.
> Do they even have something as easy to use as taskflow
Several internal and external libs do: Java's CompletableFuture, Kotlin's/C#'s (and several other languages') async/await. I really don't see anything special about taskflow.
> can you point to non-trivial java projects that do not use Apache Commons
Yes? It's a fairly dated lib at this point as the JDK has pulled in a lot of the functionality there and from guava. We've got a lot of internal apps that don't have Apache commons as a dependency. I think you are behind the times in where Java as an ecosystem is now.
... I just checked your link and I wouldn't say that any of these languages has SIMD any more than C++ currently has it:
- Java: incubation stage (how is that different from https://github.com/VcDevel/std-simd). Also Java is only getting it soonish, and only for... amd64 and aarch64??
- Rust: those seem to be just the normal intrinsics which are available in every C++ compiler?
- Dart: seems to not go beyond SSE2 at the moment? But it looks like the most "officially supported" of the bunch
- Javascript: seems to be some Intel-specific stuff which isn't available in any of my JS environments?
- The Go one does not seem to support acquire-release semantics, which makes it quite removed from e.g. ARM and NVidia hardware from what I can read here: https://golang.org/pkg/sync/atomic/
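For comparison, the C++ side of that -- a minimal sketch of release/acquire publication with the standard memory orderings:

    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;
    std::atomic<bool> ready{false};

    void producer() {
        data = 42;
        ready.store(true, std::memory_order_release);   // publish
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {}  // wait for publish
        assert(data == 42);  // guaranteed to observe the write to data
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join(); t2.join();
    }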
That's quite well thought out; without compile-time checks that operations exist, you end up with code that either targets a very small subset of widely supported operations or isn't really cross-platform. I've seen too much of the following in theoretically portable code, because a software fallback will typically be an order of magnitude worse than switching to a different set of datatypes and operators:
    #if defined(__NEON__)
        // "portable" SIMD goes here
    #elif defined(__ALTIVEC__)
        // different "portable" SIMD goes here
    // ... and so on, one branch per target ...
    #endif
There was some discussion about what to do with vector types and operations that weren't supported by the hardware. We decided on compiler error instead of emulation, because the emulation would be terribly slow and the user may be unaware that he's getting emulation.
With a compiler error, the user unambiguously knows if the SIMD hardware is being used or not.
I hope they keep going down this path and make it into a real mess of a language, so that people can finally stop pretending C++ is the solution to any problem, when it is in fact the cause of a lot of your problems.
I began C++ coding over 20 years ago as well, and it required reading thick books even then. I remember my classmates at uni really hated software development, all because of C++. It was way too hard as a beginner's language, even 20 years ago.
I look at all these new features, and I am like: How on earth are you going to teach all this crap to students?
They have painted themselves into a corner. It becomes a language only for those who have already programmed it for 10-20 years.
This idea that it is only for library developers is a bunch of crap. A lot of learning a language is really about reading the code of the standard library. That was one of the beauties of writing Go code. You regularly look at standard library code and are even encouraged to do so. It teaches you a lot about good style.
Same deal when I program in Julia. Looking at library code is totally normal and common.
Except in C++. I avoided looking at library code like the plague. And I suppose, now it will only get worse.
The worst part of this is that this isn't just a problem for C++ developers but also for everybody else. So many key pieces of software rely on C++ code. It becomes ever harder to migrate that code or interface with it as C++ complexity grows.
That was the beauty of a language like Objective-C. Unlike C++ it is a fairly simple language which you can interface easily with. The result was that porting to Swift was really easy. When porting iOS apps to Swift I could pick individual functions and rewrite them to Swift.
There is no hope doing anything like that with C++.
> I look at all these new features, and I am like: How on earth are you going to teach all this crap to students?
You don't. You teach "A tour of C++ 2nd edition"[0] which presents a clean and smaller subset of the language people can wrap their mind around, with everything someone new to modern C++ needs to know to be effective. And you supplement this with "C++ Core Guidelines"[1] which can be enforced by code analysis and provide some examples of common mistakes or questions people might have.
You do not need to know all the details of the language or know every single feature. And you wouldn't teach everything to a student.
But it's true that there is some overhead due to the complexity of the language.
> I'm probably going to make a few enemies with this opinion, but I think modern C++ is just an utterly broken mess of a language. They should have just stopped extending it after C++11.
This is the popular refrain of the day, so I don't know why you couch this as if you're saying something controversial.
The popular refrain has more to do with the lack of memory security features in the language, although I'm sure they will bolt a borrow checker or something on to the language.
There are currently enclaves of developers who know varying versions of C++. There's a good chance that a 20-year C++ veteran would have to consult the documentation for syntax. That's concerning. Defining what something isn't is nearly always more important than defining what it is, and C++ is seemingly trying to be everything.
This is a common saying because it is a common occurrence.
People who use the language effectively know all about the complaints. Those people live with their complaints knowing no other language even comes close to meeting their needs. No language on the horizon is even trying to meet their needs.
C++ usage is still growing by leaps and bounds. Attendance at ISO Standard meetings is soaring; until Covid19 killed f2f meetings, each had more than any meeting before; similarly, at conventions. Even the number of C++ conventions held grows every year, with new national ones arising all the time.
Rust is having a go at part of the problem space, and making some headway. But more people pick up C++ for the first time in any given week than the total who have ever tried Rust. It is still way too early to tell whether that will ever not be true.
So the HN trend is very much an echo-chamber phenomenon, with no analog in the wider world.
> This is a common saying because it is a common occurrence.
Ha ha. This is not applicable to software, and, I assume, not for some craftsmen either.
What's the percentage of software developers that actually get to choose their tools? 40%? 60% at best? Though most likely it's just 20%.
Most projects are pre-existing, it's only natural. You can't create more projects than those already in existence, once a field matures a bit. Which means that you have to use what's already there.
Plenty of people are forced to use bad tools. And they can for sure blame them.
Many craftsmen do not get the tools they could wish for.
Your craft is your personal responsibility; you use your tools, they don't use you. So, your product is the result of what you do, not what your tools do. Limitations of your tools leave you with greater responsibility to ensure results that satisfy whatever standard you work to.
Blaming your tools for bad results tells people much more about you than about the tools.
First of all, we are not craftsmen. We are more like factory workers. Ford factory worker #515 had no say in the 1000 ton machine just installed in the factory. He just had to make his part of the car.
We delude ourselves into thinking we're all Picassos when we're just house painters, at best.
It'd be more accurate to say not many of us are craftsmen. (Craftspeople?) There are still some ways to make money through creative, open-ended development; they've just always been on the rare side.
Trillions of lines of existing code are also a strong argument for why C++ is going to stay for a while. Lots of good C++ programmers I know would be really excited to use Rust, but the interop with legacy systems is not worth it for many use cases.
True, but there's plenty of Stockholm Syndrome as well. C++ is a mess, and there are people who will defend that mess to the end of times. Those people managed to get pretty good and have a deep understanding of all of its quirks, but lack the ability to take a step back and admit that yes, nobody without masochistic tendencies would get into C++20 unless they're already familiar with it.
I'm sorry but can we stop hating on "academics"? No one in research matches your description. The intersection of academia and C++ contains only practitioners (like in the industry), who just want their code to work; and maybe some verification people who'd rather wish C++ was smaller because it is a hell of a beast to do static analysis on. Both these categories are real people having real use cases. The programming language crowd is generally more interested in stuff like dependent types or effect systems, not templates.
If you replace 'academic' with the secondary definition, "not of practical relevance; of only theoretical interest", it is probably true though. Having known some of the C++ standard contributors, they strongly defend themselves against the "not of practical relevance" part with "look what I wrote". Sure it's clever, but adding language features just to say "look what I wrote, it's clever" is no excuse for building a language that's become a train wreck.
(I have been coding in C++ on and off professionally since 1985 and I do like some of the C++11 and C++14 features. The pointer improvements are great, but the template stuff is a complete joke on us.)
> Sure it's clever, but adding language features just to say "look what I wrote, it's clever" is no excuse for building a language that's become a train wreck.
Actually, the rationale behind the language features you're criticizing is that people in the real world were already using some techniques in C++ in a needlessly complex and convoluted way, and these new additions not only simplify these implementations but also allow the compilers to output helpful, user-friendlier messages.
Take concepts, for example. You may not like template metaprogramming, but like it or not it is used extensively in the real world, at the very least in the form of the STL and Eigen. Template metaprogramming is a central feature of C++ consumed by practically every single C++ developer, even though most of them rarely write such code themselves. Does it make any sense at all to criticize work to improve a key feature that benefits every C++ programmer, even those who never have to write code with it?
And no one of sane mind would argue in favour of clinging to #include and #ifndef/#define guards to the detriment of a proper module system.
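For anyone who hasn't looked at modules yet, the contrast is roughly this (a simplified sketch, with the three pieces living in separate files):

    // Old way: header with include guards, re-parsed by every translation unit.
    #ifndef MATH_UTILS_H
    #define MATH_UTILS_H
    int add(int a, int b);
    #endif

    // C++20 module interface: compiled once, no macro leakage.
    export module math_utils;
    export int add(int a, int b) { return a + b; }

    // Consumer translation unit:
    import math_utils;
    int x = add(1, 2);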
Just because you aren't familiar or well-versed with some C++ features, or aware of how extensively they are used, it doesn't mean they are not used or that the stuff you don't know automatically qualifies as a trainwreck.
If you had really done any serious work writing template metaprogramming code, or were aware of what happens under the hood in libraries that were developed with it, you wouldn't be dismissing recent contributions to improve its UX, for both developers and library/module consumers, as a trainwreck.
> When I look at C++14 and later I can't help but throw my hands up,
Why C++14? The changes were very minor and mostly about being able to declare lambda parameters with auto (generic lambdas), which is extremely useful.
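A trivial sketch of what that buys you:

    #include <algorithm>
    #include <vector>

    void sort_descending(std::vector<int>& v) {
        // auto parameters (C++14): one lambda works for any comparable element type
        std::sort(v.begin(), v.end(), [](auto a, auto b) { return a > b; });
    }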
> Waiting for Cross-Platform standardized SIMD vector datatypes
I only know of ISPC having this, but there are also lots of SIMD libraries for C++ that are small and have minimal dependencies.
> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
std::thread, atomics, and mutexes were added in C++11 and work extremely well. OpenMP is in the top four compilers if someone wants super easy fork-join parallelism. What other languages make C++ look archaic here?
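To be concrete, the standard building blocks since C++11 look like this (a minimal sketch):

    #include <atomic>
    #include <thread>
    #include <vector>

    // Count work items across N worker threads using nothing but the standard library.
    long parallel_count(int num_threads, long items_per_thread) {
        std::atomic<long> total{0};
        std::vector<std::thread> workers;
        for (int i = 0; i < num_threads; ++i)
            workers.emplace_back([&] {
                for (long j = 0; j < items_per_thread; ++j)
                    total.fetch_add(1, std::memory_order_relaxed);
            });
        for (auto& t : workers) t.join();
        return total.load();
    }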
> Debugging cross-platform code using couts, cerrs and printfs
Both visual studio and Qt Creator have made this unnecessary for a long time (if you can do step through debugging). What other language are you thinking of that makes C++ look archaic here?
> Forced to use boost for even quite elementary operations on std::strings.
That's completely ridiculous. It is easy to avoid boost these days (thank god), and this is DEFINITELY not worth pulling in boost for. For instance, you can use https://github.com/imageworks/pystring on top of what C++ already has, combined with regular expressions.
I don't think anything you listed is actually a problem. If you had talked about not having a standard networking library or standard serialization it might have made more sense.
> a) Functionality (does it work, is it safe?), b) Readability c) Conciseness d) Performance and e) Extendibility
I use a lot of library features after C++11. Variant, span, and string_view are the most important ones. As to language features, structured bindings and variable templates come to mind. They pretty much hit all of your code quality points. I don't think these are for "a small circle of language academics" either (I'm definitely not in that "small circle"). Syntax-wise, meta programming can get ugly yes. Even Stroustrup himself doesn't like it. I guess at this point it's just for "historical reasons".
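As a small illustration of the kind of everyday win I mean (a sketch, not from any particular codebase):

    #include <iostream>
    #include <map>
    #include <string>
    #include <string_view>

    // string_view: accept any string-ish argument without copying.
    bool starts_with_error(std::string_view line) {
        return line.substr(0, 5) == "ERROR";
    }

    void dump(const std::map<std::string, int>& counts) {
        // structured bindings: no more it->first / it->second
        for (const auto& [word, count] : counts)
            std::cout << word << ": " << count << '\n';
    }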
> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
I think this one comes down to that there are a vast range of parallel computing models out there, and C++ wants to have generality. I used to write a lot of MPI programs targeting the super computers. I don’t think any language would want to include that in the standard…
> Debugging cross-platform code using couts, cerrs and printfs
What’s wrong with printing? I even debug JavaScript programs with console.log(). It’s convenient.
If you just do local dev, debuggers work pretty well; you can debug however you want. I was unfortunate enough to have pretty much always worked on platforms where it is hard to get a good remote debugging session, due to hardware capacity, legacy toolchains, or even ssh-ing onto the host being hard enough due to security. But that's hardly C++'s fault.
> Forced to use boost for even quite elementary operations on std::strings
It’d be great if std::string has more features. But I don’t think it a big deal. Personally I don’t like linking boost to my programs, so I just write my own libraries for that. It’s just elementary operations anyway.
But that's the point. Metaprogramming has gotten significantly better since c++11, and c++17 metaprogramming is extremely clean. Are we getting mad at them for improving things?
So... You're arguing against it by pointing out an excellent library for the language? Was someone forcing you to use std::thread? Of course it won't have as many features as tbb; it's meant to help pthreads users.
Not exactly. I am reminded of n3557. The ability to write a library like TBB is a positive. But much richer libraries are just barely over the ridge. std::thread is not much more interesting than the abstractions provided by Boost in the early 00's.
Part of the job of the library is to be boring. It's the reason third party libraries exist in the first place. They give you the basic starting points to get the job done.
Look how many people are complaining on here about how complicated c++ is. If something like tbb was integrated they would be all over it.
These things are specific to CPU architectures, but other than that they're cross-platform and de facto standards set by Intel and ARM. The same source code builds with all mainstream compilers, regardless of the target OS.
> nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores
OpenMP is not part of the C++ standard, but it's still a standard in the sense that there is a complete specification: https://www.openmp.org/specifications/ Mainstream compilers are reasonably good at implementing these specs.
> Debugging cross-platform code using couts, cerrs and printfs
Debugging story is not great outside MSVC, but it’s not terrible either. When I needed that, gdb worked OK for me.
> Forced to use boost for even quite elementary operations on std::strings
I agree the ergonomics could be better, but I'm not using boost, and I see improvements, e.g. std::string_view in C++17 helped.
I'm not sure cross-platform SIMD vector data types are practical, at least not ones that don't force you to understand the implementation details on every microarchitecture you target.
If you actually care about performance, and presumably anyone that wants to use SIMD vector types does, you need to fit the higher-level data structures to the nuances of the microarchitecture you are targeting. Compilers don't do optimization at that level, you have to write the code yourself. Thin wrappers on compiler intrinsics is actually the right level of abstraction if you want to exploit those capabilities.
Similarly, how code is parallelized is completely dependent on what you are trying to do, the software architecture, and the silicon microarchitecture; there is no way to usefully standardize it outside of use cases so narrow they probably don't belong in C++. Parallelization in practice happens at a higher level of abstraction than the programming language.
And FWIW, I use many of these new C++ language features in real software every day because they provide immediate and compelling value. I am not an academic.
Code quality can also be judged by the quality of compiler output. C++ has many language features that allow compilers to generate efficient code. Unfortunately it also features incredibly complex abstractions that lead to insane binary interfaces.
Binary interface complexity is actually a huge reason why people rewrite stuff in C. When you write in C, you get symbols and simple calling conventions. Makes it easy to interoperate.
> C++ has many language features that allow compilers to generate efficient code.
It does, but it also has the ability to generate inefficient code. Sure, it's often the developer's fault, but I feel like it's much easier to shoot yourself in the foot in terms of performance in C++ compared to other compiled languages.
Some real-life examples for me:
* Missing a '&' on a function parameter, resulting in that object being copied on each function invocation (see the sketch after this list)
* Adding a couple of extra chars to an error message string in an inlined function, which caused that function to become 'too large' to inline according to the compiler
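The first one in code, for anyone who hasn't been bitten by it yet (a hypothetical example, names invented for illustration):

    #include <string>
    #include <vector>

    // Oops: pass-by-value, the whole vector is copied on every call.
    long count_chars_slow(std::vector<std::string> lines);

    // Intended: one missing '&' away.
    long count_chars(const std::vector<std::string>& lines);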
> When I look at C++14 and later I can't help but throw my hands up, laugh and think who, except for a small circle of language academics, actually believes that all this new template crap syntax actually helps developers?
I do. There are a lot of features introduced since C++11 that make my life much easier. Sure, it's always scary to have to learn new things, but once you get over that hump, you start to see the benefits. Concepts and constexpr cut down on the template boilerplate crap a lot. Being able to use the auto keyword in more contexts means less repetition. Modules get rid of the ugly hack that is the preprocessor. std::span means I don't constantly have to pass around a pointer and length, or create a dedicated struct to encapsulate pointer+length. Sure, there are some more obscure features whose usefulness are questionable, but for a design-by-committee language, they're doing a slow but sure job of moving past the language's old warts.
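The std::span point in particular, as a tiny sketch:

    #include <cstddef>
    #include <span>

    // Before: pointer + length travel together by convention only.
    double sum(const double* data, std::size_t n);

    // After: one self-describing view over any contiguous buffer (C++20).
    double sum(std::span<const double> data);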
> In reality what's going to happen is that inside every organization a group of developers with good intentions, a lack of experience and too much time will learn it all and then feel the urge to now "put their new knowledge to improve the codebase", which generally just puts everyone else in pain and accomplishes exactly nothing.
Feature adoption doesn't happen overnight. Remember, we're talking about a decades-old language burdened by backwards compatibility - it took a long time for people to migrate from supporting C++03 to dropping it in favor of C++11. Give it five or ten years, and I reckon you'll see people make use of C++17 and C++20 in much greater numbers.
> Waiting for Cross-Platform standardized SIMD vector datatypes
No argument there. That said, all mainstream compilers already have "immintrin.h" for x64 and "arm_neon.h" for ARM, and using them isn't particularly difficult.
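e.g. an AVX add is just this (a toy sketch, assuming the buffer length is a multiple of 8 floats):

    #include <immintrin.h>

    // Add two float arrays 8 lanes at a time; n must be a multiple of 8 here.
    void add_f32(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
    }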
> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
Are you aware that std::thread has existed since C++11, and std::jthread and coroutines are in C++20?
> Debugging cross-platform code using couts, cerrs and printfs
This is a programmer problem, not a language problem. gdb exists, lldb exists, the Visual Studio debugger exists, and they're not particularly hard to pick up and use - if you're still using print statements to figure out why your application is crashing, that's on you.
> Forced to use boost for even quite elementary operations on std::strings
std::string is an RAII-managed bag of bytes. What kind of operations are you looking for? Stuff like concatenation and replacement can already be done in C++11 with std::string and std::regex. If you want to do lexical operations, like case conversion or glyph counting, then an encoding-aware library is a better solution.
On top of that, one can use strings as a normal "sequenced container of characters" and just use <algorithm>s on them. This is one of my favorite ways to write concise code in those interview questions (e.g. "That's just a rotate, then a partition").
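For instance, a toy sketch of that rotate-then-partition style:

    #include <algorithm>
    #include <cctype>
    #include <cstddef>
    #include <string>

    // Move the first k characters to the back, then group letters before everything else.
    void shuffle_about(std::string& s, std::size_t k) {
        std::rotate(s.begin(), s.begin() + k, s.end());
        std::partition(s.begin(), s.end(),
                       [](unsigned char c) { return std::isalpha(c); });
    }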
> Sure, but very low level. It'd be great to have a standard for something like TBB or OpenMP.
The answer here is modules. Improve the story on shipping C++ libraries, and then who cares if it's in the "standard library" or not? It's not like anyone in JS land for example cares if something is native to the language or in a library since adding a library is trivial & easy.
Modules have nothing to do with shipping libraries (or dependency management), they are purely about encapsulation of interface and (API) implementation.
It should be std::string’s job to store strings. If people want to perform operations on them, that’s what free functions are for, right? Nobody wants std::string to have hundreds of methods.
> Personally I judge code quality by a) Functionality (does it work, is it safe?), b) Readability c) Conciseness d) Performance and e) Extendibility, in this order, and I don't see how these new features in reality help move any of these meaningfully in the right direction.
I don't understand... How do these features not address those points?
> a) Functionality (does it work, is it safe?)
constinit, consteval and all the remaining constexpr improvements are a massive step for ensuring the "compile-time-ness" of code:
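A minimal sketch of what I mean:

    // consteval: this *must* be evaluated at compile time, or the build fails.
    consteval int table_size(int bits) { return 1 << bits; }

    // constinit: guaranteed static initialization, no "static init order fiasco".
    constinit int g_buffer_size = table_size(10);

    int runtime_bits = 8;
    // int bad = table_size(runtime_bits);  // error: not a constant expression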
There's several more sharp edges being removed too. It's of course not going to tackle the fundamental safety concerns the way Rust is doing, but that would be a new language (like Rust is) anyway.
> b) Readability
requires is infinitely more readable than the SFINAE we had to write so far:
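Compare, roughly:

    #include <concepts>
    #include <type_traits>

    // C++17 and earlier: SFINAE via enable_if
    template <typename T,
              typename = std::enable_if_t<std::is_integral_v<T>>>
    T twice_old(T x) { return x * 2; }

    // C++20: say what you mean
    template <std::integral T>
    T twice(T x) { return x * 2; }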
Besides that elephant in the room, most of these changes involve making the code either simpler to read/write (too many to name) or more explicit (consteval/constinit, attributes, ...).
> c) Conciseness
Half the features contribute to this in one way or another (e.g. see previous point, or the spaceship operator), but there's also a whole list of syntactic sugar being added:
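The spaceship operator alone deletes a pile of boilerplate (small sketch):

    #include <compare>
    #include <string>

    struct Version {
        int major_v, minor_v, patch;
        std::string tag;
        // One defaulted operator gives you ==, !=, <, <=, >, >= for free.
        auto operator<=>(const Version&) const = default;
    };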
True, C++20 also comes with great library additions that are not the subject of this blog post but affect (possibly to an even greater degree) the code attributes in question.
It's definitely tricky. I think if you just stick to modern C++ and avoid anything advanced unless necessitated, it's a big improvement on your code. But as we know, developers with the discipline not to take advantage of every feature available to be "clever" are rare. And I agree, the standard library is still very much lacking. This is one thing I really like about working with C#: the vast majority of what I'm doing is available and simplified through the standard library.
It is not really about the language at all. He got older, and does not want to learn new things. Other people who stopped learning earlier say, "better C" instead.
The language has gotten continuously more powerful since 2011, albeit in smaller increments until C++20 when several big features landed.
Good C++11 looks practically nothing like C++98, and good C++20 looks as little like C++11.
It is really getting more fun all the time, as old crud falls away, and you can just say more and more just what you mean. Improved type-inference capabilities are doing a great deal of the heavy lifting.
> - Waiting for Cross-Platform standardized SIMD vector datatypes
> - Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU
SIMD computation and multithreaded parallel computations were largely solved with execution policies. C++17 added multithreaded and multithreaded+SIMD execution policies, C++20 added single threaded SIMD execution policy.
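In other words, the difference is one argument (a minimal sketch):

    #include <algorithm>
    #include <execution>
    #include <vector>

    void sort_fast(std::vector<double>& v) {
        // seq / par / par_unseq / unseq: serial, multithreaded,
        // multithreaded + vectorized, or vectorized-only execution.
        std::sort(std::execution::par_unseq, v.begin(), v.end());
    }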
I would argue that standardizing SIMD vector extension datatypes is an anti-feature for all cross platform programming languages. Writing AVX512 code is very different from writing NEON code. If the compiler autovectorizer doesn't generate good enough code for you, you have no choice but to use the non-cross platform vendor specific intrinsics anyway. If a SIMD datatype and the operations you could perform on it were standardized, it would necessarily have to be a very low common denominator. I don't even know what the lowest common denominator between MMX, SSE2, AVX2, AVX512, NEON and Altivec (to name a few) even is.
Note that the autovectorizers in GCC and Clang (not MSVC) are very, very good. If you structure your data in the way it would have to be structured if one were going to write hand-vectorized code anyway, GCC and Clang will, with a high probability, vectorize it correctly.
I don't know what a standardized language feature for execution on different processors than the CPU would even look like. What languages have this, and what does it even look like? Can you give a code sample?
> - Debugging cross-platform code using couts, cerrs and printfs
I don't think I understand what you're suggesting. On second thought I definitely don't understand what you're suggesting.
Are you suggesting that the C++ standards committee should standardize a _debugger_? You'd have to standardize the ABI first. There's no way to do that; 32 bit x86 with its 8 registers must necessarily have different calling conventions than ARM with its 32 (I think? it's been a while) registers.
If you're suggesting that the committee standardizes a UI, there's no way you're going to get the Visual Studio team and the GDB team to agree on what a debugger ought to look like. I don't even know where a mediator would even begin to start suggesting anything.
If you're suggesting that current debugger offerings such as the Visual Studio debugger and GDB aren't good enough, I dunno what to tell you. They work for me.
> - Forced to use boost for even quite elementary operations on std::strings.
Can you give an example? The big thing I used boost string stuff for was boost::format, but now that there's std::format I don't need that anymore.
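For what it's worth, std::format now covers the common boost::format use (a tiny sketch):

    #include <format>
    #include <string>

    std::string describe(const std::string& name, int errors) {
        // Type-safe, printf-like formatting in the standard library (C++20).
        return std::format("{} finished with {} error(s)", name, errors);
    }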
C++ is a broken mess and I'm completely fine with that because it couldn't be any other way. It started as C with classes and they've kept it moving into the 21st century. Rust is here now and should be used for new projects, but at least old projects get to use these new features, ugly as they are. I've also noticed that most people complaining about "new" features do not understand them.
The #1 feature I currently want is the ability to do an implicit lambda capture of a structured binding, at least by reference. I appreciate there are interesting corner cases of like, bindings to bitfields: I simply don't need those corner cases solved... if it just supported a handful of the most obvious cases I would be so so so happy, and then they can spend the next decade arguing about how to solve it 99% (which I say as we know it won't be 100%... this is C++, where everything is some ridiculous epicycle over a previous failed feature :/).
(edit:) OMG, I found this feature in the list!! (It was in the set of structured bindings changes instead of with the changes to lambda expressions, which I had immediately clanged through.) I need to figure out now what version of clang I need to use it (later edit: I don't think clang has it yet; but maybe soon?)... this is seriously going to change my life.
However a full destructuring bind, à la Lisp, hasn't. You can't do `for (auto& [a, [b, c]] : some_container_of_structs)` which is handy for taking apart all sorts of things.
Relatedly there's no "ignore" though it exists in function declaration syntax: you can write `void foo (char the_char, int, long a_long);`. But you can't ignore the parts of a destructure you don't need: `auto& [a, , c]`. This capability is sometimes useful in the function declaration case but is quite handy, say, when a function returns multiple values but you only need one (consider error code and explanation).
And variadic destructuring...well I could go on.
I haven't attended a C++ committee meeting in 25 years (and didn't do a lot when I did) so I have no reason to complain.
Destructuring that lets you ignore parts of the object is usually found in the form of pattern matching.
Lisp destructuring comes directly from macros: CL's destructuring lambda lists and macro lambda lists are closely related cousins.
Macros usually care about all their arguments. Reason being, they are designed to cater to those arguments; an unnecessary element in the syntax of a macro will just be left out of its design, rather than incorporated as a piece of structure that gets ignored. (The exceptions to this are rare enough that it's acceptable to just capture a variable here and there and ignore it.)
Yeah: 100% to these complaints; I do run into the full destructuring issue occasionally, but it isn't blocking my ability to do composition of features in the same way this lambda capture issue is ;P.
One day we will get it. I believe the intention is to support full destructuring but it is hard to get a feature added to the standard. Sometimes functionality is cut just to increase the probability that it will be voted in.
For example lambdas were added in C++11, but generic lambdas were cut out and only added in C++14.
I constantly use both lambdas and structured bindings; without this feature, I am having to constantly redeclare every single not-a-variable I use in every lambda level and then maintain these lists every time I add (or remove, due to warnings I get) a usage. Here is one of my lambdas:
And like, at least there I am able to redeclare them in a "natural" way... I also tend to hide lambdas inside of macros to let me build new scope constructs, and if a structured binding happens to float across one of those boundaries I am just screwed and have to declare adapter references in the enclosing scope (which is the same number of name repetitions, but I can't reuse the original name and it uses more boilerplate).
It's kind of weird that structured bindings were not capturable with [=](){} before, actually. I'm still stuck at C++11 for most of my work so I cannot use structured bindings at all, but I would not have expected to have to write that kind of monstrosity in C++17.
I work on a "streaming" probabilistic nanopayments system that is used for (initially) multihop VPN-like service from randomly selected providers; it is called Orchid.
I dunno... Brian Fox (the developer of bash) got involved, and he tapped me (someone he has worked with before) as a combination networking and security expert? FWIW, if you describe anything with the technical precision I just did, almost anything will sound "esoteric" ;P.
Yeah... I did know gcc allowed it, but I didn't know it was because the spec now allows it rather than them just doing it anyway. Sadly, I am heavily, heavily using coroutines (even coroutine lambdas... with captured structured bindings ;P -- don't try to auto template them though: that crashes the compiler), which clang has much better support for.
I hope one day we can get a widely adopted C and C++ package manager. The friction involved in acquiring and using dependencies with odd build systems, etc. is one of the things I dislike about the language. I’m aware on Linux things are a bit easier, but if it were as easy as “npm install skia”, etc. everywhere, I think many people would use the language more.
Rust has package management, but not the ecosystem yet. On the other hand, C/C++ has the ecosystem, but no standard way to easily draw from it.
Widely adopted source code manager requires a widely adopted build system. CMake is certainly a contender but the ecosystem is too fragmented even then & you have to do a lot to try to link disparate build systems together. Also C++ is a transitive dependency hell nightmare & any attempt to solve that (like Rust has) would break every ABI out there. Given how bumpy such breakages have been in the past, I don't think there's any compiler maintainer eager for it (even MSVC has decided to largely ossify their STL runtime ABI).
Conan is certainly a laudable attempt at something like this. Without access to their metrics though, it's hard to tell if they're continuing to gain meaningful traction or if their growth curve has plateaued. It's certainly not in use in any project at medium to bigger size companies I've worked at. By comparison, Cocoapods was pretty successful in the iOS ecosystem precisely because Xcode was the de facto build/project system.
I'm a longtime CMake user, but I think even within the CMake world, the solution is quite a bit more complicated than just "everything needs to be CMake", with a lot of hassles that arise when multiple generations of the tooling is involved, when you're trying to pass down transitive dependencies, when package X has a bunch of custom find modules with magic to try to locate system versions of dependencies but silently fall back to vendored ones.
The higher up the stack you get, the worse and worse these problems get, with high-level packages like Tensorflow being completely intractable:
Yup. 100% agree. I totally overlooked the shitshow you'll have managing the different versions of CMake a build might require. Somehow Bazel manages to escape that mess. I think that might be a better foundation, but getting everyone to port to that... it's a tall ask & there's many vocal people who are against improving the build system they work with (hell, I've met many engineers who grumble and strongly prefer Makefiles).
I'm obviously pretty biased having come from 10 years of doing infrastructure in the ROS world, but having spent a lot of time integrating this or that random library into a singular product-versioned build, I do quite like the approach of colcon:
Basically it has plugins to discover/build various package types (autotools, cmake, bazel, setuptools, cargo), and the "interface" between packages is just the output of whatever the standard install target is for a given package. This makes it totally transparent whether your dependency is built-from-source in your workspace, coming from /usr/local via a sudo-make-install workflow, or coming from /usr via a system package.
Under this model, you never pull a dependency as a "subproject" with invocations like include or add_subdirectory; it's always using a standard find_package invocation, where basically the only requirement on participating packages is that they cooperate with long-existing standards like CMAKE_PREFIX_PATH and CMAKE_INSTALL_PREFIX. Vendoring a library is then not making a copy of it in your project tree, but rather as sibling project within the shared workspace that colcon builds.
> Without access to their metrics though, it's hard to tell if they're continuing to gain meaningful traction or if their growth curve has plateaued.
Some public data that could be used as proxy for traction:
- Some companies using Conan in production can be seen in the committee for Conan 2.0 called the tribe: https://conan.io/tribe.html. That includes companies like Nasa, Bose, TomTom, Apple, Bosch, Continental, Ansys...
- With +1600 subscribers the #conan channel in the CppLang slack is consistently ranked in the most active channels every month: https://cpplang.slack.com/stats#channels
C++ is considered the industry leading language in many fields. I'm not sure how many more you would want (given that those fields that don't use C++ ARE probably better served with some other language).
I agree the build is painful, but large orgs have for this reason specifically implemented build systems using nugets, conan/cmake or whatnot.
In personal projects I just download the prebuilt binaries of component libraries and drag and drop them to visual studio, minimizing hassle.
If you discard finesse and scalability as requirements you can actually jury rig a C++ project in a jiffy. You just need to let go of the idea that it must be "industry standard setup".
C++ used to be the industry leading language in many more fields, but it lost ground to other languages. Not a bad thing--"know thyself" and all that. But Rust seems like a credible threat to C++'s remaining niches (bury your head in the sand if you want), and C++ will need to evolve if it is to not lose further market-/mindshare. And it is evolving, as this article points out, but a huge glaring pain point in C++ development remains the build and package management tooling. The aforementioned build systems that large organizations operate aren't nearly as nice as, say, Cargo and I think a lot of greenfield projects who have to choose between cobbling together their own build tool to work with C++ and using Rust + Cargo off the shelf will choose the latter (other factors notwithstanding).
I will get worried when NVidia releases CUDA-Rust, and changes their GPGPUs from C++ memory model to Rust, Microsoft decides to rewrite WinUI in Rust, Apple moves Metal from C++ into Rust, or Unreal/Unity get rewritten in Rust.
> You write as if Rust vs. C++ was some sort of competition.
Competition exists all around us, all the time, whether we like it or not.
And the competition between C++ and Rust is very clear. I, for example, would likely be spending more time / effort on learning the latest C++ standards if Rust didn't exist. And likely hate my life a little bit, unless I could exclusively stick to C++ "the good parts" if such a subset exists and I didn't need to interface as much with existing C++ code.
My job is writing C++ and I love it. I've been working with C++ since the mid 90's and have grown very fond of the language and I'm pretty productive in it.
If C++ use declines, then there are fewer opportunities for me. So you can count me as a member of team C++.
I also write C++ for a living but would feel no threat if I had to suddenly start writing C, C#, Java, Python, Rust, F#, Scala, or what have you. Sure, it would need learning a thing or two, but basically they are all driving the same ARM or x86 based compute stack with exactly the same constraints due to computer architecture.
My focus has been to brand myself as "domain expert" in few algorithmic domains rather than "C++" expert so this may affect my point of view, though.
> feel no threat if I had to suddenly start writing C, C#, Java, Python, Rust, F#, Scala
It isn't necessarily a threat. I'm pretty comfortable in a bunch of different languages but just enjoy C++ more than the others. To use a car analogy, I have no problem driving automatic but I really like driving stick.
If you don't keep up with all this C++20 material, there will likewise be fewer opportunities for you. They will throw it at you in an interview, to check that you aren't some stubborn C++98 gunslinger.
I do my best! I really enjoy working in C++ and it's a pretty exciting time for me when a new standard rolls around and my compiler gets updated to support it.
Years. Some things are obviously useful right away and other things take me a lot longer to grok. For example, rvalue references have been around for a long time now and I still have to slow down when I see && in code.
> If use of C++ declines then I don't understand how that would make the language a lesser tool.
The hypothesis is that C++ will decline because it becomes the lesser tool (where "lesser tool" means it excels only in increasingly small niches) if it doesn't adapt. That said, the C++ community seems to want to adapt and remain relevant, as indicated by its significant progress over the last decade.
As for why someone might care about the usage of a programming language: because "ease of finding developers" and "quality and breadth of ecosystem" are major factors in deciding on new projects. I.e., "the best tool for the job" is often the one with the broader ecosystem and more developers, all else equal. So these factors feed back on each other.
> Why would it matter and to whom if C++ use would decline?
If the use declines to zero, then all the effort someone put into developing C++ compilers and related tooling would have been for naught, as would C++ development skills.
Given the prevalence of C++ it is very hard for me to imagine a situation where the use would decline to zero.
The notion of how to write a specific language such as C++ is immaterial compared to the capability to design and implement complex software systems and those skills are quite portable between languages. The best employers tend to recognize this.
>I hope one day we can get a widely adopted C and C++ package manager. [...] , but if it were as easy as “npm install skia”, etc. everywhere,
It's not just the package manager (the command line tool) ... it's the canonical website source that the tool pulls from.
C++ probably won't have a package manager with the same breadth of newer language ecosystems like npm/Nodejs and crates.io/Rust because for 20+ years C++ was developed by fragmented independent communities before a canonical repo website funded by a corporation or non-profit was created. There is no C++ institution or entity with industry-wide influence that's analogous to Joyent (Nodejs & npm) or Mozilla (Crates.io & cargo)
- C++'s 20+ years of isolated and fragmented development groups created legacy codebases --> then, decades later, try to create a package manager (vcpkg? Conan? cppget?) that tries to attract those disparate groups --> thus "herding cats" is an uphill challenge
- npm and crates.io exist at the beginning of language adoption allowing the ecosystem to grow around those package tools and view them as canonical
Go has a perfectly good package manager that works with sources hosted on GitHub and other sites -- there isn't any centralized place for people to publish sources, unlike the other package managers you mentioned.
Go's package manager also came years after the language became widely used, and it is now very widely adopted according to the most recent survey[0].
I think C++ could have a good, unified package management story. It would just require the major stakeholders to all care enough to make it happen, which seems to be the missing piece here.
Go has a small dedicated team that develops and designs the language. They take some input from the broader community but are still the one who decides how things evolve. They decided at some point that go modules was the way to go and everybody followed, because they are the authority who decides how Go evolves.
C++ does not have an equivalent, it's completely decentralized which results in more messy situation. As a result you have an open market where different people try to build different tools and approaches for their own problems, then try to get others to use them (similar to what Go had before go modules, we had lot of package managers to chose from at the time).
Instead of a top down decision it's a negotiation between the various actors. But the last thing we need is for the C++ standards committee to standardize a package manager. That would take forever to do, would result in a messy tool that tries to compromise with all the actors in some ways, make it very hard and slow to evolve over time and would likely result in a lot of pain, etc.
> C++ does not have an equivalent, it's completely decentralized which results in more messy situation.
That is the role of the ISO C++ committee, is it not? They are the major stakeholders. They would just have to care enough. They cared enough to release C++20, didn't they? It's not like they never get anything done, which seems to be the implication a lot of people make in this discussion.
> Instead of a top down decision it's a negotiation between the various actors.
My understanding is that the various committee members represent the disparate interests of the broader C++ community. I agree it would be very much like a negotiation.
That doesn't mean that it can't be done. This whole thread is discussing things that have been done by the C++ committee: C++20.
> But the last thing we need is for the C++ standards committee to standardize a package manager. That would take forever to do, would result in a messy tool that tries to compromise with all the actors in some ways, make it very hard and slow to evolve over time and would likely result in a lot of pain, etc.
You just summarized my feelings about C++ in general. I would much rather people use Rust or Go or any number of other languages instead of C++, depending on project needs. Such opinions are rarely taken well in threads like this, though, so...
I've been trying to be optimistic and point out that C++ could get package management. If the C++ committee process works well, then the package manager should also end up turning out well.
I'll leave the reader to decide how well they think the long term direction and guidance of the C++ standard has been going and apply that to their feelings of a hypothetical future package manager.
It's design by agreement vs design by decision. In the former you need people to agree. In a committee setting it means that not only do you have to make MSFT, Google, & Apple happy, it's also the various other people that happen to be part of that standard body (the group is large). You definitely pull from a larger group of experts, but it's mired in indecision hell & compromise. Often times a decision that solves 90% of problems is better than a decision that is perfect, but the way ISO is set up, decisions kind of have to be perfect.
That being said, the C++ standards body (at least under Herb?) has done a decent job modernizing their process to fight some of the gravitational issues they were having. They've formalized deprecation rules & tried to get over disagreements. The design by committee issues haven't gone away though - the mess with coroutines, modules, & concepts is a great example of that. The ISO process of language papers precludes even simple additions to the STL where you not only have to navigate standardese, but also manage the review process (that's why you have to find a champion on the standards body to help guide your review through the rigamarole).
My experience contributing to the Rust standard library by comparison was much easier - put up a drive-by diff adding a new (admittedly minor) API, some minor review comments, done & shipped. The whole process took 1-2 weeks, no standardese, no arguing with a large committee on the exact wording, etc.
> My experience contributing to the Rust standard library by comparison was much easier - put up a drive-by diff adding a new (admittedly minor) API, some minor review comments, done & shipped. The whole process took 1-2 weeks, no standardese, no arguing with a large committee on the exact wording, etc.
this is of course all great until two people working in different parts of the language do things slightly differently. Either works alone, but the whole of the language becomes inconsistent and hard to learn.
C++ has enough inconsistent parts already and so tries to be careful to make new things consistent with itself as best as possible. Even that has failed despite all the review of people looking for places to make things consistent. It is a hard problem to design a large language.
So, a key difference here is that in Rust, the standard library and the language designers are two different teams, with two different standards. The parent is talking about the standard library; there's a reasonably low barrier to entry to add something, but it is added unstably, and the bar to getting it to stabilize is higher. The language does not accept additions by "drive-by PR", the barrier to getting something to land, even in an unstable way, is much, much higher.
The whole language team has to sign off on these stabilizations and language additions, which is what keeps up that consistency you're talking about.
Yeah sorry. Should have been clear that I was talking about the standard library. I've got the chops to contribute standard library code - would never even think about trying to tackle implementing language changes. I don't have the time nor energy to deal with C++ standardese since the spec is an ancillary artifact describing the thing rather than the thing itself (the thing itself being the implementation & documentation).
Granted, this isn't necessarily everyone's experience in std as the change I implemented is well-worn/adopted by any condition variable implementation. Something more controversial/exploratory may have been pushed off into a crate first. I'm still impressed that it only took ~2 weeks to get [1] reviewed & into unstable. I wasn't even involved in the stabilization work/cosmetic renaming that it took to close out [2] which was driven by a community ask & the std maintainers doing a pass to make things consistent. Rust's velocity seems to be that they can deliver changes a full 1 year faster than C++ can (& likely faster if the community really asks for it). In my book it's largely owing to having 1 compiler & 1 standard library & the latter having a much more streamlined RFC process.
No sorry needed! I think you were clear that you were, just worth re-iterating that in Rust, these are two separate groups with similar but slightly different processes, and in C++, the standard contains both language + standard library. (Obviously C++ has working groups... point is that the two languages are similar, but different.)
Unlike C++, Rust has pretty strong conventions around formatting and naming, and these conventions are followed in almost all major libraries. Furthermore, most such small APIs tend to bake in nightly for a while before they are stabilized, and so get two rounds of review: once during the initial commit, and once during the push for stabilization.
Right. C++ has a chicken-and-egg problem in that neither its build nor its packaging ecosystem has even de facto standards. GitHub URLs don't solve either problem.
I think everyone agrees that a common build system is a necessary step if any of this is going to work.
Thinking about how much or how little would be required beyond a common build system in order to get a working package management system is still a valid thing to do.
I expect it is necessary to keep build configuration portable with respect to packaging and environment configuration. In other words, I expect downloading from URLs in Makefiles and CMakeLists to be a local maximum.
Keeping a parallel set of instructions or metadata that includes specific URLs and such might work, though. As long as you can skip all that when a system package, filesystem path, or git submodule is more appropriate.
>Go's package manager also came years after the language became widely used, and it is now very widely adopted according to the most recent survey[0].
Are you talking about "pkg.go.dev" and the "go get" command? Isn't there some path dependence in the history of events that's not comparable to C++? Consider:
- Go language: created by Google Inc
- "go get" syntax for package download designed and created by Google Inc
- "pkg.go.dev" funded by Google Inc and highlighted on "golang.org" website that's also run by Google Inc.
There is no business entity or institution in the C++ world that's analogous to Google's influence for Go + golang.org + "go get" + pkg.go.dev.
>It would just require the major stakeholders to all _care_ enough to make it happen,
But it's easier to care if there was an influential C++ behemoth that captured everyone's mindshare to move the entire ecosystem forward. C++ has no such "industry leader" that dictates (or heavily influences) technical direction from the top down.
> Are you talking about "pkg.go.dev" and the "go get" command?
No. I'm not talking about either of those. Your whole comment is, unfortunately, irrelevant.
pkg.go.dev is not a package repo. It's just a place for documentation to be rendered. It renders documentation from third party hosted code, such as on GitHub or elsewhere.
"go get" predates Go Modules, which is the current package management system. The whole original design of "go get" was to simply download code from somewhere on the internet, and place it in the right spot of the $GOPATH. This has nothing to do with a proper versioned package manager like Go Modules.
AFAIK, "go get" was also never really designed for Google's internal use cases. They use a monorepo that was perfectly content with $GOPATH, and all their code was developed in the monorepo to begin with. There was nothing for them to "go get", except for the rare outside dependency that they were embedding into their monorepo, I would imagine. I've never worked for Google, these are just things I hear about.
Go Modules was also not designed for Google. It was designed for the community, based on findings from community developed package managers for Go. Google has no real use for it — again, they use a monorepo.
Nowadays, "go get" can be used with Go modules, but in practice, it feels like it almost never is. Maybe someone would use that command to upgrade an existing dependency, instead of editing the `go.mod` file to change the version there?
So, your comment just shows that you haven't researched this enough. Yes, Go Modules was still guided by Googlers, who were even more in control of the language direction back then than they are now. Yes, change always causes some drama. But, I'm not really here to explain the history of Go package management...
I'm just saying that C++ could have a nice, distributed package management system, it would just require the major stakeholders to all care and work together on it. The ISO C++ language committee is a finite number of people. They are the major stakeholders, as far as the language direction is concerned.
If they didn't have the power to enact major language changes, we wouldn't be here talking about C++20.
The stakeholders for Go were able to develop a package manager that is distributed (an idea compatible with how all C++ code is scattered across the web these days), and that achieved broad adoption, and this was some years after the language went into wide use.
It’s an extremely relevant analogue for C++ to study, if the committee members wanted a package manager badly enough.
> But it's easier to care if there was an influential C++ behemoth that captured everyone's mindshare to move the entire ecosystem forward. C++ has no such "industry leader" that dictates (or heavily influences) technical direction from the top down.
You edited this in while I was replying, but I agree entirely. Getting the committee to agree to a package management solution would be much more difficult than having a single behemoth guide the decision. Does that mean it is impossible and therefore no one could do it? Everyone here talks like it is impossible, but it doesn't really seem to be.
>Yes, Go Modules was still guided by Googlers, who were even more in control of the language direction back then than they are now. Yes, change always causes some drama. But, I'm not really here to explain the history of Go package management...
>I'm just saying that C++ could have a nice, distributed package management system, it would just require the major stakeholders to all care and work together on it. The ISO C++ language committee is a finite number of people. They are the major stakeholders, as far as the language direction is concerned.
The ISO C++ committee can't learn from the history of Go modules community acceptance because they don't have the same power as Google. You seem to misunderstand what the C++ committee _is_. Yes, they have representatives from Microsoft/Google/Apple/Intel but the org is designed to review proposals from submitted papers. They are more like an ongoing academic conference rather than a devops team that runs websites.
We seem to be discussing 2 different abstractions of making a "package manager". With your emphasis on Modules, you seem to be only focusing on the tool. To repeat my gp comment, I'm also focusing on the canonical package repository (or index, or discovery engine).
>The Go team is providing the following services run by Google: a module mirror for accelerating Go module downloads, an index for discovering new modules, and a global go.sum database for authenticating module content.
>As of Go 1.13, the go command by default downloads and authenticates modules using the Go module mirror and Go checksum database.
You misunderstood my cite of "pkg.go.dev" run by Google Inc, but this is the part of your survey that I was referring to:
>The package discovery site pkg.go.dev is new to the list this year and was a top resource for 32% of respondents. Respondents who use pkg.go.dev are more likely to agree they are able to quickly find Go packages / libraries they need: 91% for pkg.go.dev users vs. 82% for everyone else.
The ISO C++ committee is not set up to implement a new website to make the above Go-specific paragraphs be a similar reality for C++ with a search & replace "s/Go/C++/g". Think about _who_ funds and provides paid people to actually run the "proxy.golang.org". It's Google Inc. The C++ committee doesn't have an equivalent situation.
Yes, the C++ committee can receive a proposal for a new language or library feature such as "std::unique", and after some back & forth commentary and debate they say "approved", and then it's up to each C++ compiler vendor to go and independently implement it on their own timeline. In contrast, if someone proposes "C++ should have a package manager", exactly _who_ will implement and maintain the canonical repo mirror? These are not independent lines of work that GCC, Clang, Microsoft, and Intel can do on their own.

Even if we hypothetically extend the website "isocpp.org" to actually start hosting the canonical C++ repos instead of just blog posts about syntax proposals, _who_ is paying for it? Again, there is no single entity like Joyent/Mozilla/GoogleInc that raises their hand and says, "We'll set it up". I suppose we could imagine that the major players like MS+Google+Apple all contribute to a shared fund to pay for the repo mirror -- and the salaries for devops to remove malicious uploads -- but notice that no other major language package manager (JavaScript/Rust/Go) had to do it that way. So we have the friction of coordinating multiple corporations.

Even if that website were set up, many existing C++ library writers (who existed for decades before any C++ package manager) wouldn't bother uploading their code to it. So that's another friction. E.g. Conan is supposedly the current winner of C++ package manager mindshare, and ffmpeg is not on it.
I think the disagreement is rooted in how we compare the ISO C++ committee vs Google Inc. To me, releasing a C++20 language specification does not say anything about implementing a canonical repo so that a command line tool magically works the way people expect.
EDIT to reply:
>You can have a package manager without having a discovery tool or a central repo.
This means your conversation is focusing on the tool which isn't the abstraction I'm emphasizing.
>Package discovery tools are not very relevant to the discussion.
It's relevant if the particular person wondering "why C++ doesn't have a package manager?!?" uses a mental model of how npm and cargo work. They don't have to know if it's github vs gitlab vs somewhere else. The tool just works without thinking about the location. That's what a canonical repo as a default convention for the client tool provides.
> This means your conversation is focusing on the tool which isn't the abstraction I'm emphasizing.
Your edit implies that I'm talking about a useless tool that can't do anything in the absence of Google, which simply isn't true.
The Go Modules tooling does not depend on any central resource to work. Google could shut down tomorrow, and nothing would change for existing projects. The go CLI tools would still be able to find, download, and verify the dependencies. I would still be able to add new dependencies, and the tooling would be able to fetch those.
What are you talking about, if not a functional package management system? Google's websites are nice, but they're not required for everything to Just Work.
Anyone in the C++ community could stand those websites up at any time after the package management tooling came into existence. They're not required for the functionality of the package manager.
> It's relevant if the particular person wondering "why C++ doesn't have a package manager?!?" uses a mental model of how npm and cargo work. They don't have to know if it's github vs gitlab vs somewhere else. The tool just works without thinking about the location. That's what a canonical repo as a default convention for the client tool provides.
Go's CLI tooling literally doesn't provide any way to search for packages at all. You may think it's a requirement, but it's really not! Go requires you to know where the dependency is located, because Go sure doesn't unless you tell it!
It feels like I'm really awful at explaining things.
>Go's CLI tooling literally doesn't provide any way to search for packages at all. You may think it's a requirement, but it's really not!
You're still misunderstanding the level of abstraction I'm emphasizing for what a "package manager" means to many people.
Let's dissect the following command from the Javascript ecosystem:
npm install react
Notice that the end user does not need to know whether React is hosted on GitHub or GitLab or Facebook's own servers. He doesn't even have to do a google search. The npm command just "magically" gets the React library.
Exactly _how_ does npm do that? From _where_ does npm fetch? The _how_ & _where_ is what the majority of my comment is about. All your explanations of Go not working that way does not address that mental model at all.
So you have 2 concepts in a "package manager":
(1) npm, the client command line tool
(2) the canonical default repo that npm tool points to -- and it's a virtuous cycle of easy use and trust because almost everybody publishes to it. It has grabbed mindshare.
You keep saying Go doesn't need (2) but I'm saying that doesn't change the fact that many mentally include (2) of what a comprehensive package manager _is_.
I don't understand what's confusing about this... it's literally specified in the name of the package.
Russ Cox is hosting his packages at rsc.io, which is a personal domain name he owns. If you visit it with a normal browser, he just kicks you over to pkg.go.dev, because he doesn't want to put in the effort to make a website for your human consumption. He's just hosting some packages there.
I really, really feel like you need to spend some time with Go Modules. You don't really seem to be getting the decentralized nature of it. But it works, and it works well!
In this case, Russ Cox has a meta tag there that tells the Go tool to download it from GitHub: <meta name="go-import" content="rsc.io/quote git https://github.com/rsc/quote">
But there's nothing stopping him from actually exposing a git repo at https://rsc.io/quote, instead of just exposing a redirect.
By telling people to use that package URL, he has the flexibility to change how and where he hosts the package in the future.
Sure, someone would complain. Just because some people would complain about the absence of one feature that you say is impractical to implement doesn't mean you should avoid implementing the rest of the thing. That's the classic "throwing the baby out with the bathwater" thing. The benefits of a standardized package manager seem worth a few people complaining. I'm sure someone somewhere would probably even complain that they would rather be writing JavaScript or another, non-C++ language, no matter how good the C++ package manager is.
Go has proven that a good package manager can work without that feature. You say that feature is something the C++ Committee could never tackle. My whole statement has been "fine, learn from Go!" Instead, you keep harping on this nice-to-have feature and saying it can't be done.
Package management is solvable in a way that suits C++. It seems inevitable that standardized package management will eventually happen for C++.
I understand your perspective now, but I just don't think I agree with it.
> In contrast, if someone proposes "C++ should have a package manager", exactly _who_ will implement and maintain the canonical repo mirror? This is not independent lines of work that GCC, Clang, Microsoft, and Intel can do on their own. Even if we hypothetically extend the website "isocpp.org" to actually start hosting the canonical C++ repos instead of just blog posts about syntax proposals, _who_ is paying for it? Again, there is no single entity like Joyent/Mozilla/GoogleInc that raises their hand and says, "We'll set it up".
Literally no one is required to do any of that. That is the answer. Plain and simple.
> I think the disagreement is rooted in how we compare ISO C++ committee vs Google Inc. To me, releasing a C++20 language specification* does not say anything about implementing a canonical repo* so that a command line tool magically works the way people expect.
But I'm saying that's not how Go works at all. The dependencies are hosted on GitHub, GitLab, or wherever else.
There is no central package repo. There is no "canonical repo".
Package discovery tools are not very relevant to the discussion. You can have a package manager without having a discovery tool or a central repo. Searching GitHub to find a C++ package, then adding that repo as a dependency of your current project seems like it would be entirely reasonable, if C++ had a standard package manager that worked. Some community members might build a website to help you find popular packages... but that discovery tool doesn't interact directly with the packages at all.
proxy.golang.org is a proxy. No one publishes packages to it, and you don't have to use that proxy. You can use no proxy at all, which was the default once upon a time, or your company can host a proxy, or you can potentially find some random third party proxy online. The proxy isn't where packages are hosted -- it's just a means of accelerating downloads, if GitHub were slow, for example.
C++ code is hosted in a myriad of locations. The Go approach is to specify the GitHub repository that you're depending on, and that repo will be cloned by the package manager in your terminal. The `go.sum` file contains hashsums to verify that the dependency you downloaded is untampered with since the last time you fetched it, and those hashes can also be used by any proxy that happens to be used.
Go's package management system is truly distributed. It isn't centralized at all. Yet it still supports SemVer, downloading the correct, exact version of a dependency, checking the integrity of dependencies, recursively collecting dependencies of your dependencies, etc. All the features you would expect out of a package manager.
Unlike Cargo in Rust, someone can delete one of these repos from GitHub and cause a real mess. `go mod vendor` is an option for anyone who prefers to vendor their dependencies.
Google has certainly provided some nice web tooling around the Go Modules system, but none of it is integral to the type of package manager that I'm proposing would suit the C++ dependency model. Go's package manager is very distinct from what you were discussing with Cargo, NPM, and others. It's much more attuned to the problems that C++ faces, and it walked a similar path to what C++ will inevitably have to do.
Which honestly mostly works well, except when it doesn't. There are at least a few module/version combos where people have shifted a version tag on their repo over time, leading to the (unfortunate) reality that you end up with different modules depending on whether you fetch it through a proxy or directly. Not entirely surprisingly, this can (and will) cause build failures.
Source: I build lots of Go modules for fun (well, specifically, I have some automation to do it for me) and notice these things, when I get more failures than expected. http://github.com/vatine/gochecker for anyone wanting to play along from home (the 1.16 release report is on hiatus, as there was a LOT of things that didn't work smoothly this time).
Then give me a package manager that can handle cross-compilation well.
Currently, it seems almost nobody is taking that into account when packaging their wares for consumption by CMake, or distribution by Conan. Or if they do give it some thought, it always ends up making dubious assumptions, like "Clang == Linux", or "MSVC == Windows".
I have one at work that does okay. If your project is based on CMake I can create the package very quickly, though most projects still don't create a CMake configuration file. If your project isn't CMake - well, at best a day, and often I will spend weeks fighting your half-baked build system that doesn't understand the common things I want to do. (In practice autotools projects have the options to cross-compile, but it doesn't actually work...)
I actually extremely dislike language-specific package managers. I'm on Linux; the packages should be in my package manager. I don't want to maintain multiple package managers. npm is actually the worst here.
> I actually extremely dislike language-specific package managers. I'm on Linux; the packages should be in my package manager. I don't want to maintain multiple package managers. npm is actually the worst here.
As a user of software that doesn't care how it's built, sure. But system package managers are not a solution for general development with C++, or any other language.
If I want to use C or C++ to create software, how do I use libraries that aren't available in a system package manager? What if I need a version of a library that's not available in my system package manager? There are answers here but they aren't good answers (build from source, using whichever of N build tools the project happens to use, or hope there are prebuilt libs hosted somewhere).
Relying on system package managers to contain dependent libraries makes cross-platform development a complete PITA (more than it already is). Now you need the specific versions of all your libraries in package managers on all platforms, which is a complete non-solution for real development.
Sorry, but I'm really not sure what you mean by "solved".
Nix is yet another (language agnostic) package manager with certain tradeoffs. But, if there is not an available Nix package for a specific version of a library I need to use - I'm out of luck.
Nix is not a build tool designed to work with arbitrary or latest development versions of libraries, for example. And, it will never solve that problem even _if_ it is technically capable of doing so - because there is no force in the world that would get all projects in all languages to use it.
> Nix is not a build tool designed to work with arbitrary or latest development versions of libraries, for example.
No, that's exactly what it's designed for and exactly how we use it. And it's not a build tool (it just calls your existing build tools under the hood), it's a system for keeping different versions of dependencies installed at the same time without errors.
> because there is no force in the world that would get all projects in all languages to use it.
You can write your own wrappers for the projects that are missing. It's a simple and idiomatic process.
> Nix is not a build tool designed to work with arbitrary or latest development versions of libraries, for example
Well it sort of is and it sort of isn't. The great thing about Nix is how the provided packages remain malleable. You can usually quite easily make a small override to a provided package to make it build from a specific revision of the source you desire, or add a custom patch, and Nix will just build it all for you then & there. Then you can go and rebuild bits of the rest of the distribution that depend on that using your custom version. If you so want.
Also, I'm terrified of this idea of "library manager downloads code from the internet and runs it on this machine", without all the tests and QA of individual dependencies like we have in Linux packages.
Also, I've seen so many times people adding dependencies to projects because they did not know the standard library already had what they needed. I get it, it is easier to "pip install foo" than to look for "foo" in the docs. I don't think any sane person can learn everything that is available in the standard library, but searching the docs is always insightful.
The problem is that system-specific package managers are an obstacle to making portable programs.
Even within Linux and BSD there are many flavours of package managers with slightly different naming schemes for their packages.
This fragmentation makes it impossible to have dependencies that just work. You need to either make users install things manually or every author has to probe multiple package names/locations using multiple tools.
Language-specific managers support all of the OSes and just work, especially for users of macOS and Windows (telling people their OS sucks may state a true fact, but doesn't solve portability problems).
The Linux model of package management doesn't work for newer languages. In particular it is heavily reliant on dynamic linking, which tends not to work when you have (a) an unstable ABI (b) generics (c) a culture of static linking.
It works fine, you just ship the static libraries. With static linking your binaries won't have dependencies anyway.
That's not to say the static linking craze is a good thing. We'd be far better off finding a way to dynamically link templates, so you get the security benefits of automatically updated dependencies that dynamic linking gives you.
Code library managers don't belong inside OS package managers (because you want hermetic builds), unless maybe you have some Nix-live multi-manager that can provide many environments.
> Rust has package management, but not the ecosystem yet. On the other hand, C/C++ has the ecosystem, but no standard way to easily draw from it.
I take your point, and I share your desire for a canonical, Cargo-like package manager and build tool for C++ (it's one of the reasons I pivoted out of C++ development); however, I don't think C/C++ "has the ecosystem" these days. It certainly has an ecosystem--C/C++ dominates its own niches, but there's a big world outside those niches and there aren't good packages for much of it. Meanwhile, Rust is growing like a weed both inside and outside of the C/C++ niches, and the package manager largely enables that rapid growth. Also, Rust has a good interop story for C/C++, allowing it to leverage the existing C/C++ ecosystem. Anyway, I hope this doesn't read as contrarianism--I just thought it was an interesting distinction.
Most of the places where C++ doesn't have good libraries I wouldn't want to use Rust anyway, that is the domain of managed languages, GUIs, distributed computing, Web development.
And for the stuff I use C++ for (COM/UWP, Android NDK, GPGPU shaders, Unreal/Unity), Rust tooling is still WIP or requires leaving the comfort of the existing C++ frameworks and IDE integrations.
What’s wrong with using Rust for “GUIs, distributed computing, web development”?
In the case of a GUI, I’d expect a modern Rust GUI toolkit binding to look like any other GUI toolkit binding: an FFI-like abstraction that parses its own declarative view format, and exposes handles from those parsed views for native controller methods to bind to. Y’know, QML, NIBs, XAML, those things. This kind of GUI toolkit doesn’t exactly have high requirements of the language it’s bound to. (And I don’t believe many people want the other, procedurally-driven kind of GUI toolkit in the year 2021.)
Re: distributed computing — I can see the argument for Rust being the antithesis of easy network “rolling upgrade” (e.g. via being able to recognize known subsets of unknown messages, ala Erlang); but pretty much all languages that support distribution are very nearly as bad in that respect. (Only the languages that have distribution that nobody else actually uses — e.g. Ruby, Python, etc. — are on Erlang’s side of the spectrum in this regard.) But in terms of pre-planned typed-message version migrations, Rust can do this more idiomatically and smoothly than many other languages, e.g. Go, Haskell, etc.
Re: web development — there’s actually a lot of activity in building web frontend SPAs using Rust compiled to WASM. Started with games, but has expanded from there. Not sure about web backends, but the argument is similar to distribution: you need to do it differently in a static compiled language, but of static compiled languages, Rust is really a pretty good option.
The productivity hit produced by having to deal with the borrow checker and the design constraints it imposes on application architecture.
I won't take a GUI framework without a graphical designer, or a component ecosystem from companies selling GUI widgets, in the 21st century.
Distributed computing, again: when thinking about distributed calls a la Akka, Orleans, SQL distributed transactions, I'd rather have the productivity of a GC.
Web development with Rust is nowhere close to the stack provided by JEE, Spring, ASP.NET, Adobe Experience Manager, Sitecore, LifeRay, Umbraco, enterprise RDBMS connectors, ...
Rust's best place is for kernels, drivers and absolute no-GC deployment scenarios.
> I won't take a GUI framework without a graphical designer, or a component ecosystem from companies selling GUI widgets, in the 21st century.
Well, yeah, what I’m saying with “these types of modern frameworks don’t impose very many constraints on the language” is that there’s no reason that Qt, UWP, Interface Builder, etc. can’t support Rust (or most other languages, really), because in the end the tooling is just generating/editing data in a declarative markup language, that the language’s toolkit binding parses. You don’t have to modify the tooling in order to get it working with a new language; you just need a new toolkit binding. Just like you don’t need to modify an HTML editor to get it to support a web browser written in a new language. Qt et al, like HTML, is renderer-implementation-language agnostic.
> Distributed computing, again when thinking about distributed calls a la Akka, Orleans, SQL distributed transactions, I rather have the productivity of a GC.
I think I agree re: the productivity multiplier of special-purpose distributed-computing frameworks. I don’t think I agree that it’s a GC that enables these frameworks to be productive. IMHO, it’s the framework itself that is productive, and the language being GCed is incidental.
But, either way—whether it’s easy or hard—you could still have one of these frameworks in Rust. Akka wasn’t exactly easy to impose on top of the JVM, but they did it anyway, and introduced a lot of non-JVM-y stuff in the process. (I’d expect that a distributed-computing framework for Rust would impose Objective-C-like auto-release-pools for GC.)
> Web development with Rust is nowhere close[...]
Web development with Rust isn’t near there yet, but unlike distributed computing, I don’t see anything about web development that fundamentally is made harder by borrow-checking / made easier by garbage-collection; rather the opposite. I fully expect Rust to eventually have a vibrant web-server-backend component ecosystem equivalent to Java’s.
> Rust best place is for kernels, drivers and absolute no GC deployment scenarios.
Those are good use-cases, but IMHO, the best place for Rust is embedding “hot kernels” of native code within managed runtimes. I.e. any library that’s native for speed, but embedded in an ecosystem package in a language like Ruby/Python/Erlang/etc., where it gets loaded through FFI and wrapped in an HLL-native interface. Such native libraries can and should be written in Rust instead: you want the speed of a native [compiled, WPO-able] language; but you also want/need safety, to protect your HLL runtime from your library’s code that you’re forcing to run inside it; and you also want an extremely “thin” (i.e. C-compatible) FFI, such that you’re not paying too much in FFI overhead for calls from your managed code into the native code. Rust gives you all three. (I see this being an increasingly popular choice lately. Most new native Elixir NIF libraries that I know of are written using https://github.com/rusterlium/rustler.)
I would use Rust for distributed computing and GUIs, and I wouldn't be surprised if it begins to break into the graphics/gamedev world in the next 5 years. Agreed that Rust is still immature in those areas today, but it seems to be on a pretty aggressive trajectory and it's only a matter of time before Rust begins chipping away in those domains.
I did some real-time embedded development (including distributed embedded) in a past life in C and C++, and I really expect Rust to break through in that domain in a big way even though it's incredibly conservative (C++ is still the new kid on the block). It will take some time and it's never going to "kill" C or C++ in that domain (especially considering all the hardware that exists that LLVM doesn't yet target), but I think Rust will carve out a swathe of the embedded space for itself.
Sure, if you like to do stuff by hand. I'd rather use visual design tooling (think Qt Designer, Microsoft Blend), and I have bigger fish to fry in distributed network calls than who is owning what, instead of using Akka, Orleans or Erlang.
> I'd rather use visual design tooling (think Qt Designer,
Oof, I did professional Qt development and Qt designer was basically a joke. Not sure if it improved, but I've never experienced a visual design tool that saved me time. Not that they can't exist, just that the implementation is usually too buggy to justify itself. I don't enjoy debugging XML that gets compiled to C++ (I think it's compiled, anyway--maybe it's parsed at runtime... I forget). In whatever case, if you build a visual design tool for C++, you can build one for Rust as well.
> bigger fish to fry in distributed network calls than who is owning what
Agreed that I don't think distributed is the sweet spot for Rust, but there are certain niches (high performance, low level, etc) where Rust would be valuable. Previously I worked in automotive which is basically a bunch of distributed embedded computers talking to each other over a CAN network, and Rust would have saved a lot of time and money. On the other end of the spectrum, you have high frequency trading where performance is so important that C++'s myriad problems are worthwhile, so certainly Rust could add value here as well.
Imagine something like Swift UI for Rust, including the live preview.
Now imagine how to implement such designer in a way that supports component libraries, without having the burden of using Rc<RefCell<>> everywhere, while allowing the user to rearrange the component widgets in any random ordering.
It's a tool to manage C++ dependencies (using CMake), created and maintained by Microsoft. A lot of open source projects are supported (you can see part of the list here: https://github.com/microsoft/vcpkg/tree/master/ports).
I did, about a year ago. The usability was questionable.
Their main workflow appears to be, all developers use that thing, everyone building packages from source code and using their own binaries. For large dependencies that’s a large waste of time if more than 1 person is working on the software. It’s possible to export built libraries as nuget packages, but these are tricky to consume.
Another thing, these ports (where they applying patches to third party open-source code to squeeze them into vcpkg) are fragile. I remember cases when packages didn’t build, either at all, or subject to conditions (half of what I tried was broken when I only wanted release configurations).
Without a standard ABI, having c++ binary packages is a huge pain, requiring multiple artifacts for every permutation of compiler, os, and platform. It's less painful today than in the past, simply due to fewer compilers, OSes, and platforms, but it is still a problem.
A common ABI doesn't save anything, as we still need to build for ARM and x86 (MIPS and RISC-V are also out there and may be important to you). Those processors all have different generations; it might be worth having a build for each variant of your CPUs. Once you take care of that, different ABIs are just a trivial extension. RPM and .deb have been able to handle this for years.
All developers on that team used 1 compiler (VC++ 2017 at that time), one OS (Windows 10), one target platform (AMD64). Compiler/linker settings are shared across developers.
I wanted vcpkg to export just the required artifacts (headers, DLLs, static libraries, and debug symbols), so only 1 person on the team (me) is wasting time building these third-party libraries. The team is remote, upload/download size needs to be reasonable.
Maybe a difference is that C++ can be very, very slow to build. And C++20 will likely result in even longer build times now that you have concepts and can have exceptions and allocations in a constexpr context.
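For what it's worth, here's a minimal sketch of the constexpr-allocation part (the function name `sum_to` is made up for illustration, and this assumes a standard library that already ships a constexpr-capable std::vector, which not every toolchain does yet):

```cpp
#include <vector>

// C++20 allows allocation during constant evaluation, provided everything
// allocated is freed before the evaluation ends.
constexpr int sum_to(int n) {
    std::vector<int> v;                            // allocates at compile time
    for (int i = 1; i <= n; ++i) v.push_back(i);
    int s = 0;
    for (int x : v) s += x;
    return s;                                      // v is destroyed before the
}                                                  // constant evaluation ends

static_assert(sum_to(4) == 10);                    // evaluated by the compiler

int main() { return sum_to(3); }                   // also callable at run time
```

Every such static_assert is more work the compiler has to do per translation unit, which is where the build-time worry comes from.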
Rust's package management is actually a downside to my adoption. I have a lot of C++, a home-grown package manager, and a large cmake based build system. Rust wants to replace all this, but that means shelling out to Rust's build system, which is a bit of a messy situation and means I need to learn a new build system with the language (not hard, but another thing to learn). Our home-grown package manager means we have our own local copy of everything - I have a hard requirement to be able to rebuild software for the next 15 years (there is a closet someplace with a Windows XP Service Pack 1 PC so we can build some old software - God only knows if it will boot anymore). In the embedded space we need to support our old code for years, and you can't always upgrade to the latest.
"cargo vendor" enables you to embed your entire dependency tree into your repo instantly, and compilation will work from that. Over half of your comment seems to have been predicated on the assumption that this either wasn't possible or wasn't easy... so I think your perceptions of Rust's package management system are more of an impediment to you than the actual package management system.
While I will admit ignorance about the details of Rust, what you described doesn't solve my problem. We do not believe in mono-repo here, and have broken our system down into lots of small repos with custom tools to manage that. Checking in a copy of the dependency tree into each repo is not the right answer. I'm sure I can make this all work, but everything I've heard about Cargo is it will fight the way we have setup our system. We are not changing, while there are things I'd do different (use Conan - but that didn't exist until just after we rolled our system out, and is just enough different that it will be hard to switch), the system works for our needs.
Cargo will allow you to solve this problem half a dozen ways. I’m certain at least one of those ways would fit the patterns you’re describing.
But you keep coming back to this idea of how Rust has to fit your workflow perfectly and you’re unwilling to make any changes to have things work better with Rust...
If you’re unwilling to change anything at all, then it’s laughable to imply that you would use an entirely different programming language for anything, even if it fits your workflow exactly.
So, I just don’t see the purpose of this discussion. You’re basically saying that you’re not going to use Rust, no matter what Rust does or does not do. That’s neat?
It sounds like you're implying git submodules are actually a good thing... I think you're the first person who has implied such a thing to me before. Everyone I actually know agrees that submodules are basically never the right solution or a pleasant solution.
But, to your question, no. Where would the submodules even point? The dependency source code artifacts are stored "immutably" (except for takedown notices or extreme abuse cases) on https://crates.io. They aren't git repos, and there's nowhere for git to point.
Yeah: using submodules makes maintaining vendor patches (which, FWIW, I pretty much don't do and will move mountains to avoid... but like, I totally appreciate why people do them) really natural and easy. Like, you don't just want a copy of the code: you want to be able to participate in the development of the upstream code with the same level of tooling that they have, and submodules does that.
The approach here would be to declare the dependency on the git repo directly. Vendoring is still going to copy the stuff you're building into your project, but you'd keep those patches in the repository of the dependency, not on your vendored copy.
The key thing here is being able to do it through multiple levels of dependency, for which I see someone else provided me an answer that is actionable! \o/
People definitely have strong opinions on submodules, but it is nowhere near so one-sided: a ton of people hate them, and a ton of people swear by them. FWIW, all of the Rust libraries I use are available as git repositories. With many other package managers, I can tell them "don't use the upstream copy from the package repo: use the copy I have in this folder" in a trivial manner. I therefore don't really want "automation" around either downloading the code for me to mirror or for the submodules I want: I want to set it up and then configure it so it is all "honored"... and I could totally see the feature you're talking about somehow only working one way (with automatic copies) instead of being flexible.
Yes, it can pick dependencies from checked out submodules, or git URLs directly. It has ways to patch individual dependencies anywhere in the dependency tree, and multiple ways to mirror or replace the whole crates.io index. It's pretty flexible in this regard.
I’m not here to argue one way or another on dependency vendoring. The person I replied to was making an inordinately big deal about how they keep code around forever and it compiles decades later, as if Rust dependencies were some ephemeral thing that would break your code by next Monday!
If they want to reproduce their workflow using Rust, cargo allows vendoring and many other solutions.
I think the barrier to entry in the problem domain for C++ is much higher than something like nodejs. Installing dependencies is the least of one’s worries there.
Also, how many dependencies are we talking about? Node apps have a million dependencies for, I think, stupid simple stuff that should just be reinvented in a given codebase. In a C++ app too many dependencies invites incompatible stylistic choices which I think will turn to a Frankenstein codebase.
In Go this isn’t a problem because of “go fmt” plus a simple language at its core.
No matter the implemented package manager solution, it has to deal with different package types: from a single-class library (a single hpp file) up to monster libraries like ffmpeg.
In the case of ffmpeg, what should the package manager do? Download the sources and all their dependencies and build from scratch? That is very difficult and time-consuming.
Because right now the alternative is going to the ffmpeg website, downloading and including the DLL (and .lib) or .so and a couple of .h files in your project. And that's pretty simple to me.
It's not that the package manager fixes the problem, it's that having 1 or maybe 2 or 3 canonical or popular package managers gets the implementer to fix the problem.
The implementer, who has extensive knowledge of their own build system, runs that aspect and creates a package that conforms to a universally expected output.
It's an incredible difference going from C++, where you end up in the details of all kinds of repos and build systems, to something like C# with Nuget packages where it's a simple command or single click to start using someone else's code.
Consider that C/C++, being highly portable, has support for many platforms and architectures, including the possibility of cross-compiling.
I guess if a package manager works on all those architectures and platforms, then the implementer would have to support all of them, and that's not always the main objective.
> I guess if a package manager works on all those architectures and platforms, then the implementer would have to support all of them,
Other "highly portable" languages handle this by simply having the developer include a manifest of the platforms their library works for. The package manager only shows compatible packages for the targeted platform.
"FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much anything that humans and machines have created."
After that:
"It contains libavcodec, libavutil, libavformat, libavfilter, libavdevice, libswscale and libswresample which can be used by applications. As well as ffmpeg, ffplay and ffprobe which can be used by end users for transcoding and playing"
Conan is probably the flagship C++ package manager and supports multiple build systems including cmake. Nuget/vcpkg is also usable but does not come with build system integration.
Vcpkg has great CMake integration. Further, Conan's model of distributing a bunch of binaries honestly seems like the wrong approach for C++, where you have to juggle all different sorts of compilers, triples, and ABIs. We use a completely custom toolchain, which pretty much rules out Conan.
The one annoying thing about vcpkg, though, is that all packages are described in the vcpkg source tree. There are no “repositories”. Customizing or adding custom packages requires using the somewhat annoying to use overlay system.
I’d prefer some sort of hybrid between the two, with packages distributed as source code but pulled from a repository. I believe this is how Rust’s Cargo works.
We'd also need the one operating system on the one architecture. Perhaps the central planning committee can make that a goal for their next five year plan?
Python and Ruby only work on the Python and Ruby interpreters, respectively, and those require an OS-specific way to install them. They work on as many OSes and architectures as JavaScript or HTML do.
Go and Rust work only on a very very limited set of OSes and architectures. That's fine if you're targeting one of those, but it turns out the vast majority of computers in the world are not vanilla rice-pudding desktop systems or vanilla rice-pudding desktop systems adapted for the server room. The argument that some other tool solves a limited set of problems with your tool in a limited and limiting way is a poor one if you're trying to promote a universal solution.
Java runs on many platforms (and billions of devices as Sun used to love to point out), yet packaging and dependency management are pretty much solved problems.
Where there's a will, there's a way. In the C/C++ community there's no will. It's time they admit that to themselves and everyone else.
But it's not necessarily centralized, Java package management tools can use many third party repositories, if needed, and can also use proxy/cache/mirror systems where for example a company can point all their package manager just to their official company repo and everything goes through it.
BTW, Java's not interpreted, it's compiled. Just not to native code.
Some years ago I would have thought all this would be really cool. But who are they kidding? What sort of people will be able to keep this whole language in their heads?
C++ books were thick bricks already 20 years ago, and students struggled hard to learn it. Now the language is like 3x as complex. Students are going to need a separate bag just for their C++ material.
Sure you can write in a subset of C++ that is easy to get. But when did that ever work? Who has worked in a company and seen people able to stick to a minimal C++ subset?
No, people get tempted and they start using all the new stuff. Short term it is a real gain. But once you hire a junior developer who has to read this code, they suddenly have 3x as many concepts to learn and understand.
I predict a serious recruitment problem with C++ down the road. Old timers today will start using all the new features. When management start trying to add new team members they start realizing that it is really hard to get quality C++ developers.
Anyway, anyone who tries Go, Rust, Swift, Nim, D or some other modern/semi-modern language is going to ask themselves why on Earth they would want to torture themselves with C++.
It is easy to know why the world's highest-paid programmers, coding for the world's most demanding applications, use C++ and nothing but C++: nothing else is even trying to be useful in those applications.
C++ has sharp edges and pitfalls to stay clear of, so users ... do stay clear of them.
A usable, better language would gain users. But nothing is even on the horizon.
Rust is closest, but its designers have consciously chosen not to support the most powerful of C++ features, to try to keep the language more approachable. Yet, Rust complexity is already beginning to rival C++. Some of that complexity is in how to work around the language's deliberate limitations. As Rust matures it will suffer from unfortunate early choices in precisely the way C++ has, and will only get more complex.
Every choice in the C++ design has been to provide better ability to capture semantics in libraries, so that independent libraries integrate cleanly with each other and the core language. People can use libraries with confidence that they are giving up no performance vs. open-coding the same feature.
Access to the most powerful libraries depends on language features no other language implements. Thus, the best libraries will only ever be callable from C++ programs. With (literally!) billions of lines of code in production use, abandoning interoperability is not a choice to take lightly.
When you start a big project, you never know what it may come to need. If your language "won't go there", your program won't, either, and you will be stuck with unpleasant choices. This is the concept of a language's "dynamic range", a more meaningful measure than "high" or "low" alone: how high can it reach, how low can it reach, how far can it reach, at once? C++ is king of dynamic range. Nothing else comes close, or is really even trying.
> It is easy to know why the world's highest-paid programmers, coding for the world's most demanding applications, use C++ and nothing but C++: nothing else is even trying to be useful in those applications.
There's no proof of this. The world's highest-paid programmers tend to work for FAANGs and a few other categories of businesses, and they might or might not work in C++, and they tend to move up the ranks by being able to scale humans (other devs), not raw tech.
It's a myth that being an über-geek is well paying, by the way.
I will be sure to pass that fact along to all the well-paid über-geeks I know (who will be quite surprised at their misapprehension).
But there is no necessary relationship between "the world's highest-paid", and your notion of "well paying". You could be simply wrong, or your measure of "well paying" could exceed what the actual "highest-paid programmers" cited get.
Dan Luu did a good essay about programmer compensation a few years back.
Could you provide some examples of C++ features that the Rust team has consciously chosen to not support, to try to keep the language more approachable?
Could you show some examples of how you need to work around these deliberate limitations?
Operator overloading. Standard library user-provided allocators. Move constructors. Inheritance. Certain kinds of specialization. SFINAE. Somebody who knows Rust better, and C++, will be able to supply a longer list.
There is a corresponding list of features C++ doesn't have yet, and others it is precluded from having. That user-programmed move constructors can fail sucks. That moved-from objects still exist sucks.
Providing examples here would be more work than I am prepared for just now. (I am not happy to say so.)
* Standard library user-provided allocators: in nightly, on their way to stable
* Move constructors: not in for technical reasons and performance reasons, not for approachability
* Inheritance: not in for technical reasons combined with a lack of demonstrated need rather than just desire, not for approachability
* Certain kinds of specialization: you're hedging with "certain", but specialization is in nightly, and used in the standard library.
* SFINAE: Rust doesn't use templates, so this as a direct feature doesn't make sense. I'm not aware of any proposal to include something similar in Rust, the team has never said that this wouldn't be in for approachability
> Somebody who knows Rust better, and C++, will be able to supply a longer list.
I don't think your thesis is accurate, so I don't think so. And if this is so obvious, as you claim, then you should be able to provide examples!
There are good alternatives to C++: Rust and D. There are a number of languages with not quite as high but still decent performance and varying expressive power: Java, OCaml, Go, even Fortran for numerical stuff (not a joke; modern Fortran is quite advanced, and most likely runs faster than C++).
I see rather few reasons to start a new project in C++ in 2021, even though in some niches nothing else is viable, sadly.
This one-page format using "concept" -> "example" -> "reasoning" is fantastic for people like me who used C++ a lot in the past, and haven't touched it* in decades but still want to keep up to date.
It probably helps that the author understands this enough to ELI5. So thanks, Oleksandrikvl, whoever you are.
* And by "touched it" I mean used its deeper features, not just STL containers and simple classes (and for/auto). (I still use it for TFLiteMicro, but generally I see that most users are topical C++ programmers, like me.)
But the actual implementation seems like a syntactic and (partially) semantic mess to me.
Obscure syntax (`requires requires`), soooo many different ways to specify things, mangling together with `auto`, mixing of function signature and type property requirements (`&& sizeof(T) == 4`), etc etc.
This reeks of design by committee without a coherent vision, and blows way past the complexity budget I would have expected to be spent.
Rust (traits), Haskell (type classes) and even the Nim/D metaprogramming capabilities seem simple and elegant in comparison.
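To make the critique concrete, here's a rough sketch (the names `SmallAddable` and `sum1`..`sum4` are invented for illustration) of how many different spellings C++20 gives you for what is essentially the same constraint:

```cpp
#include <concepts>

// One concept, mixing a raw type property with an expression requirement,
// mirroring the `&& sizeof(T) == 4` example above.
template <typename T>
concept SmallAddable = sizeof(T) == 4 && requires(T a, T b) {
    { a + b } -> std::same_as<T>;
};

template <SmallAddable T>                       // 1) constrained template parameter
T sum1(T a, T b) { return a + b; }

template <typename T>                           // 2) trailing requires-clause
T sum2(T a, T b) requires SmallAddable<T> { return a + b; }

SmallAddable auto sum3(SmallAddable auto a,     // 3) abbreviated template with
                       SmallAddable auto b) {   //    constrained auto
    return a + b;
}

template <typename T>                           // 4) ad-hoc `requires requires`,
    requires requires(T a, T b) { a + b; } && (sizeof(T) == 4)
T sum4(T a, T b) { return a + b; }              //    no named concept at all

int main() {
    // Assumes a platform where sizeof(int) == 4.
    return sum1(1, 2) + sum2(3, 4) + sum3(5, 6) + sum4(7, 8);
}
```

All four functions accept and reject exactly the same types; the variety is purely in the spelling.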
The original C++0x concept proposal had proper type signatures and was based, I think, on more traditional type theory. But it had to be continually tweaked as it did not work well in practice so it grew in complexity a lot. Additionally the only implementation was extremely slow to compile.
It was taken out of the standard, and the new version (aka concept-lite) is actually much simpler, although expression based. We lost the ability to type check template definitions though.
Far from being a design by committee, I think it is for the most part the brainchild of a single author. The 'auto' thing is definitely a committee addition, as many vetoed "implicit" templates, and requiring auto after the concept name in the shorthand form was the compromise that pleased no one [1].
[1]: this is an obvious manifestation of Stroustrup's Rule
I haven't been following C++ for quite a while but when I did, I wanted modules. And now it looks like they're here and they've done it wrong. Or at least missed an opportunity to do it really right.
They've done the equivalent of * imports in languages like Java and Python. And style guides in those languages universally recommend against doing that.
Why? With named imports, if you see a symbol anywhere in the codebase, its declaration is somewhere within the file itself. If you see a call to foo(), it's going to be either a local function or a declared import. With C++ modules (as with C++ includes) it could come from any of the imports, so you have to look outside of the file to figure out where it came from.
Sure, IDEs help paper this over somewhat. But it just seems sloppy for a post-1980s language feature to throw all imports into the global namespace.
That's because modules and namespaces are the same thing in languages like Python, whereas they are separated in C++. The code in the imported module will go into whatever namespace it is in within that module, not the global namespace.
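A minimal sketch of that distinction, assuming a compiler with working C++20 modules support (the file names, the `widgets` module and the `ui::draw` function are made up, and building module interface units also needs compiler-specific flags not shown here):

```cpp
// widgets.cppm -- a module interface unit, built once by the compiler.
// (The file extension is a convention, not part of the standard.)
export module widgets;

namespace ui {
    export void draw() {}   // exported, but it stays inside namespace ui
}
```

```cpp
// main.cpp -- no #include, no textual re-parsing of the interface.
import widgets;

int main() {
    ui::draw();   // the import makes the name reachable, but it is still
                  // qualified by its namespace, not dumped at global scope
    return 0;
}
```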
I found the cppcon video on c++ 20 features to be very informative and I am honestly excited to use ranges and other items mentioned, unlike the other guys on here who hate progress.
As somebody who uses C++ daily and spends a lot of time compiling, I am really excited about modules. Unfortunately, even the latest, unreleased g++ v11 and clang v8 only say they support them partially. Does anybody have any experience trying them out? Do they work, and are they ready for production use?
I tried to evaluate whether they would help build times at all; this is the only reason I want them. Based on some research, they don't if you have parallel builds (since a module must be built before its users, not in parallel). So I think this lessened excitement from users/compiler devs/build tool makers, and it was enough to convince me to just buy more cores to parallelize builds more, rather than put in work for half-baked modules.
There are already precompiled headers. They are a big speedup as far as I know, but semantically problematic. Maybe if you use them, you will not see a speedup.
Why is it cursed? Package managers are the wrong way to go for C++, IMHO. Just look at the bugs and bloat that every package manager is suffering right now. Rust isn't far behind. Keep C++ away from this.
EDIT: I just learned module systems are NOT package managers.
1) Headers cause programmers to take dependencies without realizing it. (Especially when unity builds are set up.)
2) They are fragile because they involve putting file paths into the code.
3) They cause build information to be in the code rather than with the other build information. To find out what is really being consumed I have to search every code file.
4) They cause all kinds of issues for beginners such as multiply defined errors.
5) They cause the compiler to revisit code hundreds, even thousands, of times, bloating build times. (This is such an extensive problem that a small industry has sprung up to address it, e.g. precompiled headers, unity builds, fastbuild, etc.)
6) They introduce confusing bugs (someone modifies a header in a dependent library but not the dependency -- I literally had to fix this for a 10-year+ game programmer at a studio you would know. Turns out adding virtual functions in a header will cause an off-by-1 vtable lookup and hilarity ensues; a sketch of the mismatch follows this list.)
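Here's a rough sketch of the scenario in 6), with a made-up `Widget` class and a hypothetical NEW_HEADER macro standing in for "the header as it looks after the edit"; the point is only that a prebuilt library and the application can end up seeing different vtable layouts:

```cpp
// widget.h -- imagine two snapshots of this header over time; NEW_HEADER is a
// made-up marker used here only to show both snapshots in one file.
#pragma once

struct Widget {
    virtual ~Widget() = default;
    virtual void draw();
#ifdef NEW_HEADER
    virtual void focus();   // added later: shifts every following vtable slot
#endif
    virtual void resize();
};

// If a library was built against the old snapshot and the application against
// the new one, virtual calls through Widget can land one slot off -- the
// off-by-one vtable lookup described above.
```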
> Package managers are the wrong way to go for C++
I didn't say anything about a package manager. Modules don't require package managers. In .NET you can use NuGet or not, but the compiler understands how to take source and an assembly and hook them up.
I just want a sane way to tell the compiler to build one thing and then use that when it builds the next thing. Rather than this weird concept that every TU has to stand completely on its own.
Getting rid of the pre-processor would be a radical evolutionary step for C++. You would have a devil of a time interfacing with older code, esp. C code. At that point, just start using a different language altogether.
> Getting rid of the pre-processor would be a radical evolutionary step for C++.
I said nothing of the kind! Obviously, that is not tenable at this time. But it's entirely possible to add a module system by which new code can take dependencies without the cumbersome #include mechanism.
Alternatively, C++ is an extremely mature language that has evolved through a painstakingly well considered process involving some of the brightest minds in computer science across multiple decades.
It continues to deliver on the promise of providing the structure you want, without any undue runtime cost.
My criticism stems more from C++'s steadfast refusal to drop backwards compatibility, in any way, for anyone, ever -- while also adding new features. What this means is that new features can't provide the guarantees they can in other environments leading to "ruined fresco" [1] syndrome.
Concrete example: std::move. Move constructors can copy, and `std::move` doesn't move. Naturally, it just casts your T to `std::remove_reference_t<T>&&`. Because why not. It also leaves your source object in an undefined but totally accessible state -- whose validity is up to the implementor's convention! I think std:: collections are totally usable after they've been moved (correct me if I'm wrong), but your own types may just explode or fail silently. Talk about a giant footgun.
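A small sketch of both points (for standard library types the moved-from state is "valid but unspecified"; for your own types it's whatever your move constructor left behind):

    #include <string>
    #include <utility>

    int main() {
        std::string a = "hello";
        std::string b = std::move(a);  // std::move only casts `a` to std::string&&;
                                       // the actual move happens in b's move constructor
        // `a` is still accessible here: valid but unspecified for std types,
        // whatever-your-code-did for your own types
        a = "reassigned";              // giving it a fresh value is always fine
        return static_cast<int>(b.size());
    }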
This approach leads to poor imitations of features from other languages getting stacked on top of the rickety footbridge that is K&R C.
It's specifically the evolutionary design philosophy that I take issue with.
The language has become borderline impossible to reason about. Quickly, what's the difference between a glvalue, prvalue, xvalue, lvalue, and an rvalue?
And the compiler, in the name of backwards compatibility, sets out not to help you because adding a new warning might be a breaking change. I've got easily 15 years of experience with C++ - granted, not daily or anything. To figure out what's actually happening, you need to understand 30 years of caveats and edge cases.
> My criticism stems more from C++'s steadfast refusal to drop backwards compatibility, in any way, for anyone, ever -- while also adding new features.
Languages that break backwards compatibility tend to have very slow uptake of the new versions. Python 3.0 was released in 2008 and took at least a decade to become the main version. And the changes made to Python were minor compared to what would need to be done with C++.
> The language has become borderline impossible to reason about.
I agree with this, but mostly it doesn't affect casual users of the language. I drop into C++ every 5 years or so and I don't find it difficult to understand or be productive. I have no idea what the difference between glvalue, prvalue, xvalue, lvalue, and rvalue is, but it's mostly not a concern for me.
As someone who creates production code in assembly, C, C#, and Java (among others), but who doesn't have that much experience with C++:
C++ certainly seems like a fragmented language from the outside. Lots of features added over the years to address problems with safety and provide additional "zero overhead" abstractions. The style and idioms of code written in this language seems to have changed pretty significantly over its lifespan. So breaking backwards compatibility to throw out old standards and force programmers to utilize new ones seems to make sense. However, it raises a few questions.
1) Who decides which parts of the language to throw out and which to keep? How do they decide this? Would the goal be to keep the multi-paradigm concepts, or re-focus the language? Which of the "zero overhead" abstractions should be kept?
2) Has this already been tried before in essence? There are certainly a number of languages out there that seem to strive to be "a better C/C++". What benefit is there to attempting to create a C++ 2.0 instead of using one of them?
3) Do the benefits of breaking backwards compatibility really outweigh the loss of all of the accumulated libraries and all the software of the past 30+ years? Even with ideal management of the new language, would it be enough to bring people to a new version?
4) Do you continue adding to this new version as you did with the previous one... surely that would eventually lead to the same fragmentation seen in the current version.
5) What happens to C++ 1.0 in this case? Do you continue to support and expand it? For how long? I suppose one could look at what happened with Python, but I'm not so sure it's that comparable.
If it wasn't backwards compatible then it would be a new language.
Compilers keep adding new warnings all the time. If someone's build is broken by -Werror then they should disable that warning, if it isn't relevant to them.
Certainly the C committee considers standardizing warnings to be a breaking change so that's not always true. [1]
Re: backwards compatibility, that's not really true. ABI compatibility is different than source-level compatibility. If a library or module is built to one language standard, so long as the ABI remains compatible, I think it's fair game to change syntax and semantics when compiling with a newer language release - especially when there's clear and obvious deficiencies in the existing. Obviously, the committee and I disagree on this.
However, my point remains that if you value backwards compatibility above all else, and it's that backwards compatibility that actually prevents you from adding features in a complete and honest way, maybe don't add the feature. Like, if `std::move` is the best you can muster, don't add it! It's not a move! I don't know what it is, but it's definitely not what the label on the tin says.
Backwards compatibility is the reason why C++ became what it is today, and why it prevailed over other (similar/better?) languages designed at the time. Herb Sutter himself discusses this in the talk here: https://herbsutter.com/2020/07/30/c-on-sea-video-posted-brid...
> but your own types may just explode or fail silently.
I mean, that's the point of them being "your own types". If you couldn't do anything that you want, including putting `assert(1 == 2)` in any method of your own type in C++... then people would be quick to design a Cwhatever language where you can, because it's a useful subspace of the design space of programming languages
Language discussions are always somewhat like debating the merits of various sports teams. Because the underlying computer architecture is the same, the how of expressing intent to the compiler for what you want it to do is always a mix of functional capabilities and aesthetics.
The C++ changes to templates are great, but they are only great if you like templates. Just like with enough cheese on them I'll eat brussels sprouts but I'm not a fan. Similarly with the other new features, that folks who love C++ are really excited about.
If I were advising graduate students I might have them evaluate the text to code ratio of various programming environments. I think it would make for some interesting insights into the ability to express execution as language. Then if you added 'total time to implement' from idea, and errors per thousand lines of code in the various choices you might be able to derive some metrics for how "effective" these languages were.
That said, I really appreciate someone putting down examples of all the changes. That is much easier for someone like me to internalize than the change text in the standard!
As an academic, it's really surprising to see people describe the C++ language cabal as academia. Like, no one is getting tenure for dumping more language features into that witch's brew.
Is there a /r/nottheonion but for programming? You can't make this stuff up.
"In C++20 stateless lambdas are default constructible and assignable which allows to use a type of a lambda to construct/assign it later. With Lambdas in unevaluated contexts we can get a type of a lambda with decltype() and create a variable of that type later."
"Sometimes generic lambdas are too generic. C++20 allows to use familiar template function syntax to introduce type names directly."
Unless you are doing it mostly for intellectual curiosity, don't just learn a language for the sake of it. Pick an area you are interested in and learn whatever language is the most used in that domain.
But why? C++ has some pretty good upgrades over C - like classes, smart pointers and a better type system. Even if you're not writing idiomatic C++20 code, having a better structure alone is worth it IMO.
If you don't need classes, maybe you should look into Rust - it's quite a bit more restrictive, but it allows a nice, functional style with similar performance while avoiding most of the pitfalls and footguns of C.
It's a coding pattern that has its specific use, but that's definitely not something that's missing from C or a natural extension to how things are generally done in it.
If you are porting Python to something else, then smart pointers are a very useful upgrade that save you from a ton of analysis that Python just did for you. Garbage collection does almost the same thing, but you need to deal with python's "with" somehow then.
We usually prefer the adjective "deterministic" over "stupid". Simplicity, zero overhead, and "user pays" are virtues.
People who complain about C++ smart pointers tend to be the ones who reflexively reach for shared_ptr. (We call that habit "Java Disease".) Most good C++ programs have zero uses of shared_ptr. Also, passing around shared_ptr or unique_ptr in public interfaces puts you in a state of sin.
If they are passed by value or move, you are doing complicated things with ownership that deserve to be encapsulated in a named type. If you are passing them by const reference, you would usually better pass a regular pointer or reference to the pointed-to type. If you are passing them by non-const reference, you have bigger problems.
In practice, the new type usually starts out as a PIMPL class with just a std::unique_ptr<impl> member. But often, once the type is named, you discover it is a good place to park more operations.
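A minimal sketch of that starting point (Widget and its members are made up):

    #include <memory>

    // widget.h
    class Widget {
    public:
        Widget();
        ~Widget();                    // defined below, where impl is complete
        void draw();
    private:
        struct impl;                  // defined only in the .cpp file
        std::unique_ptr<impl> pimpl_; // the lone smart-pointer member
    };

    // widget.cpp
    struct Widget::impl { int frames_drawn = 0; };
    Widget::Widget() : pimpl_(std::make_unique<impl>()) {}
    Widget::~Widget() = default;
    void Widget::draw() { ++pimpl_->frames_drawn; }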
I wouldn't call smart pointers an "upgrade". If you are using smart pointers, you are not using the same language.
There are many features of C++ that would make a "better C", like namespaces and templates (which are better macros). But as good as they are in other contexts, smart-pointers are the last thing I want in C-like code.
Also, an advantage of C over C++ is that it doesn't have a runtime and it doesn't use name mangling. That makes linking much easier and particularly well suited to embedded applications.
If you have a program in "mostly C", you can start using RAII to manage your resources. Then use std::unique_ptr, std::shared_ptr, and references instead of raw pointers. And namespaces. That already brings you to a very nice place without shifting completely to modern C++.
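For instance, a minimal sketch of RAII-wrapping one C resource without changing the rest of the code (assumes a file named data.txt exists):

    #include <cstdio>
    #include <memory>

    int main() {
        auto closer = [](std::FILE* f) { std::fclose(f); };
        std::unique_ptr<std::FILE, decltype(closer)> file(std::fopen("data.txt", "r"), closer);
        if (!file) return 1;

        char buf[256];
        while (std::fgets(buf, sizeof buf, file.get()))
            std::fputs(buf, stdout);
        return 0;
    }   // the file is closed automatically here, even on the early return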
You don't have to use all the features of C++ if you don't need them.
The problem with this phrase, which we all repeat (myself included), is that in reality you would need at least a superficial grasp of all of them to make an informed decision about which ones you need (or want) to include in your project.
Then the thing grows to a team of 20, and you find yourself applying restrictive rules about which subset of C++ is admissible, because otherwise everyone will consider a different subset for their own code.
"You don't need to use all of the language" is a false claim that doesn't go too far without adding extra friction to the project management.
Yep, that's fair. I wouldn't call it a "false claim", because I don't think it's actually incorrect: you can be perfectly effective and go quite far with just a small selective subset of the language ("A Tour of C++, 2nd edition" gives a good overview of this subset IMHO [0]), but the claim ignores the mental overhead and decision paralysis induced by C++ complexity.
In your example of interfacing with Python, yes, I would usually go for C, because interacting with CPython needs a C ABI in the end. C++ requires another tool to get there (pybind, swig, etc.).
But pybind11 makes interop between C++ and Python so much more trivial. Passing std::vector, std::string, custom types, etc. - it all just works, and with far fewer possible errors than the C Python API. Why would you subject yourself to it?
I'm no C++ expert, but I like mucking about with game projects in C++ every few years, and I'll never not use raw pointers. smart pointers in C++, as well as a lot of these newer language features, really confuse me as to what niche C++ is supposed to fit in. when I use mostly-C-style-C++, I'm using it because I want to drill down into the nitty-gritty, I want to access raw pointers and feel free to do whatever I want with them. I don't want higher-level abstractions over something like pointers, I'm a big boy, I can manage allocating and freeing memory when I need to, I can write my own memory managers and so forth. maybe this is just something domain-specific to game development but I have seen many experienced C/C++ programmers advocate for this as the way to do things in C++, if you're going to use C++, and my (minimal compared to these people) experience (both before and after hearing these perspectives) lines up with what they have to say.
if I was using C++ to write business applications or something, like one would use C# or Java or whatever, then yeah, smart pointers seem like they would be useful in that specific domain... but at that point, why not just use one of those languages, or a language like it?
I'm probably going to try zig for my next endeavor into lower-level game development because that language, while different from the C-style-C++ I'm used to, seems much more in line with the kind of programming I'm looking to do, compared to modern C++. I don't want RAII, smart pointers, and all that conceptual overhead jazz. I want to allocate memory, run operations on said memory, then free said memory. I kinda miss just doing stuff in C89.
std::unique_ptr & std::shared_ptr are amazing. If you're still writing new & delete, you're just making things harder on yourself. I'll still use raw pointers in C++, but only in a borrowing context. It massively simplifies ownership. No more reading docs to try and guess if this pointer being passed in or returned needs to be freed or not. If I own it, it's a unique_ptr. If I'm giving it to someone else, it's a unique_ptr&&. If I'm letting something borrow it, it's a raw pointer or reference.
Or I'll make my own smart pointer containers for allocations with special lifecycles, like per-frame allocations from a custom arena allocator that must not be destroyed with delete/free.
Why try to manage every lifecycle manually, which is incredibly error prone (and no, you're not immune to mistakes here), when you can compiler-enforce it instead?
weak_ptr is a completely different thing, but the meaningful difference between a raw pointer and a shared_ptr& or unique_ptr& is that the raw pointer avoids leaking irrelevant details into the function signature. If the function is only borrowing, it doesn't care about the overall lifetime management (that is, whether it's shared, unique, or custom), so that detail shouldn't be part of the function signature.
It also needlessly prevents the function from working in both shared_ptr & unique_ptr contexts.
Same thing for plain old references, although I tend to stick to pointers if the function is mutating as it makes that clearer at the callsite.
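A sketch of that convention with made-up names (the comment above hands off with unique_ptr&&; plain by-value transfer, shown here, is the other common way to spell the same thing):

    #include <memory>
    #include <utility>

    struct Texture { int id = 0; };

    void draw(const Texture* tex) { (void)tex; }             // borrowing: no ownership implied
    void store(std::unique_ptr<Texture> tex) { (void)tex; }  // taking ownership

    int main() {
        auto tex = std::make_unique<Texture>();
        draw(tex.get());        // lend it out; tex still owns the object
        store(std::move(tex));  // hand it over; tex is now empty
    }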
It's convention in C++ to pass-by-pointer if the function will mutate the argument. The other way, by (non-const) reference, is discouraged because the function invocation looks the same as the extremely common pass-by-value case.
    void ptr_add(int *f) {
        *f += 5;
    }

    void val_add(int f) {
        f += 5;          // only changes the local copy of f
    }

    void ref_add(int &f) {
        f += 5;
    }

    int main() {
        int foo = 42;
        ptr_add(&foo);   // foo is now 47
        val_add(foo);    // foo is still 47
        ref_add(foo);    // foo is now 52
    }
To have either weak_ptr or shared_ptr& you need to have a shared_ptr to begin with. Unless you actively want shared ownership, there's little reason to use shared_ptr.
true, but I wasn't replying to that, I was replying to your assertion that nobody should use bare pointers in 2021. there is plenty of use for bare pointers in 2021.
absolutely—but I'm also not a fan of exceptions either & don't use them when I write C++. if you write your C++ in a mostly C-style, only taking C++ features here and there as you need them, the complexity of your code is often greatly reduced. again, this may only apply to game development—I haven't used C++ for anything else sizeable, aside from school assignments years ago.
But even in C you have to handle "exceptional situations" somehow... Like, for example, malloc() or fopen() returning NULL (which both could appear in the same block, by the way).
sure, which is why you handle those sort of things in your file-loading and memory-allocating routines accordingly. for high-performance game development you don't malloc() very often, and if you don't have enough memory to run the game, then you handle that by displaying a message or something and then ending the game. you only fopen() in a few specific places when loading resources, and if that fails, then either you made a mistake as a programmer or the user's assets are corrupted. either way, you display a message or something and end the game. in both cases, there's no need to pollute your entire codebase with the headache of exceptions.
like I said, this mindset might be domain-specific, I'm not sure, I haven't used C++ meaningfully for anything else.
my (C/C++) experience is only solo and small team projects, but again, in this specific domain, once you set up a system for memory management, nobody should ever be allocating/freeing anything outside of these systems, so as long as everyone on the team knows how to use the systems, and knows that they shouldn't be mallocing/freeing/newing/deleting things randomly as they see fit, it's not a problem?
There is no problem using raw pointers for non owning pointers. Also a lot of safe abstractions can be built on top of raw pointers (and smart pointers are obviously an example).
Interestingly there were a bunch of language alignment papers in the last mailing that proposed adding all the above (IIRC templates only in a simplified form) to C.
I'm eagerly awaiting better supported Rust integration with Python for this very reason. I really don't fancy wading back into C/C++ after being away from it for about 5 years and growing as a developer.
I'm curious if these new additions to C++ have genuine use cases (as in we could not do this because X) or if there are more academic driven arguments going on here.
Having recently done a lot of work using C++ previously coming from Go and Typescript I find it hard to understand all the reasons for the language to be so flexible.
Most of the language additions are aimed at library designers and writers, where they can give significant performance and ease-of-use improvements, rather than at C++ application developers.
I don't see any mention of std::format, it's great for app developers, I don't think any compilers support it yet though. It's pretty much just the fmt package that you can use now.
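A minimal sketch using the {fmt} library, which std::format was standardized from (same format-string syntax):

    #include <fmt/core.h>
    #include <string>

    int main() {
        std::string s = fmt::format("{} + {} = {}", 2, 2, 4);
        fmt::print("{}\n", s);              // prints: 2 + 2 = 4
        fmt::print("{:>8.2f}\n", 3.14159);  // width/precision specifiers, but type-safe
    }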
Well, yes, but it was added in the C++ standard. I guess the standard covers the language and the library, but to me, the standard library is part of the language.
This is really a false/meaningless distinction though. The C++ spec covers the “language” (it’s one of the first sentences), and that includes the standard library. The spec uses the terminology language and library but there is no formal distinction between them other than how they’re organized in the spec. A conformant compiler could implement the entire STL as a built in if it wanted to.
[intro.structure]
Clause 5 through Clause 15 describe the C++ programming language.
That description includes detailed syntactic specifications in a form described in 4.3.
For convenience, Annex A repeats all such syntactic specifications.
Clause 17 through Clause 32 and Annex D (the library clauses) describe the C++ standard library.
That description includes detailed descriptions of the entities and macros that constitute the library, in a form described in Clause 16.
It's pretty obvious from that that the library is seen as separate from the language - and yes, std:: could be entirely built-ins, but as always that'd be under the as-if rule of optimization.
I think I addressed that: “how they’re organized in the spec”. They’re distinct, but outside of internal references in the spec the distinction between language and library doesn’t really matter. To a user of C++ it’s all one spec with distinctions for freestanding and hosted implementations, etc. You can’t have a conformant C++ implementation without elements from both.
The original point of this thread was about where std::format exists and from a implementation point of view it doesn’t really matter.
Very informative. For application developers, the basic syntax plus the STL is probably enough to start with, instead of chasing modules/concepts, which are great to have but add complexity for certain C++ users (e.g. application developers).
STL containers and algorithms are the true gems of C++; paired with its speed, memory efficiency, and smart pointers, C++ is just getting stronger these days.
A goal of C++ is to enable library developers to create a Go and a TypeScript (or another semantic experience) inside C++, for compatibility with the rest of the C++ ecosystem.
A single-instruction computer is Turing complete. Everything else is syntactic sugar. There are a number of things in here that will make my life easier. If nothing in here makes your life easier, you probably have a trivial project that should be written in something other than C++.
It looks like a ton of companies with very large (and trivial!) C++ codebases will be rewriting their code in new programming languages soon. Thanks for the heads up! /s
For practically useful stuff, there are lots of nice things in the standard library, which were out of scope of the linked article. I will happily migrate to C++20 just for the <bit> header.
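For example, a few of the <bit> helpers (a minimal sketch):

    #include <bit>
    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint32_t x = 0b1010'0000;                  // 160
        std::printf("%d\n", std::popcount(x));          // 2   (set bits)
        std::printf("%u\n", std::bit_ceil(x));           // 256 (next power of two)
        std::printf("%d\n", std::countl_zero(x));        // 24  (leading zero bits)
        std::printf("%d\n", std::has_single_bit(x));     // 0   (not a power of two)
    }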
> However, `g_s` violates ODR because despite that there’s only one definition of it, there are still multiple declarations which are different because there are two different lambdas in a.cpp and b.cpp, thus, S has different non-type template argument.
Doesn't https://eel.is/c++draft/basic.def.odr#13.10 apply here? This would make it not an ODR violation, although I wonder if compilers implement this in this specific case.
I agree with the message though, lambda expressions in unevaluated contexts open new interesting ways for ODR violations.
I see it like this: requires can enforce any constraint (any logical predicate/boolean expression) that can be checked at compile time / template instantiation time. So you can use requires to check things other than whether certain operations exist, e.g. (sketched in code after this list):
* that a certain non-type template parameter belongs to a set of hard-coded values
* that a certain non-type template parameter is an even number
* constrain the number of elements in a parameter pack
* things involving sizeof
* that a certain non-type template parameter has a popcount of one (i.e., it only has a single bit)
* that a certain type template parameter is integral AND unsigned
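Rough sketches of each of those, with made-up names like FixedWidth and PowerOfTwo:

    #include <bit>
    #include <concepts>
    #include <cstddef>

    template <std::size_t N>
        requires (N == 8 || N == 16 || N == 32)        // value from a hard-coded set
    struct FixedWidth {};

    template <unsigned N>
        requires (N % 2 == 0)                          // even non-type parameter
    struct Even {};

    template <typename... Ts>
        requires (sizeof...(Ts) <= 4)                  // bounded parameter pack
    struct SmallPack {};

    template <typename T>
        requires (sizeof(T) <= 8)                      // constraints involving sizeof
    void fits_in_register(T) {}

    template <std::size_t N>
        requires (std::popcount(N) == 1)               // exactly one bit set
    struct PowerOfTwo {};

    template <typename T>
        requires std::integral<T> && std::unsigned_integral<T>   // integral AND unsigned
    T wrap_add(T a, T b) { return a + b; }

    int main() {
        FixedWidth<16> fw; (void)fw;
        PowerOfTwo<64> p;  (void)p;
        return static_cast<int>(wrap_add(2u, 3u));
    }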
Not exactly - it was already enforced in C++98. The problem was that the enforcement produced thousands of lines of compile error messages, none of which said anything remotely like "you didn't implement a Start function". With concepts you skip those lines that need an expert to read (and even they take a while to figure out what the error is really trying to say), and just put in a clear message.
Many, many banks are stuck at '03, deploying on RHEL 6.
The only motivation for a bank ever to make a clean break is if they must, just to attract talent. This is difficult for providers of services to banks, who are stuck at the level of their most archaic customer.
Addable<T> should evaluate to bool, so you can actually use it with a plain if.
But a plain if requires both the true and false branches to type-check, so if your T is not actually addable and you use operator+ in the true branch, you will get a compilation error.
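Which is what if constexpr is for; a minimal sketch:

    template <typename T>
    concept Addable = requires(T a, T b) { a + b; };

    template <typename T>
    auto try_add(T a, T b) {
        // A plain `if` would still have to type-check `a + b` even when
        // Addable<T> is false; `if constexpr` discards the untaken branch.
        if constexpr (Addable<T>) {
            return a + b;
        } else {
            return a;   // fallback when T has no operator+
        }
    }

    int main() { return try_add(2, 3); }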
Do [[likely]] and [[unlikely]] flags have anything to do with CPU code prediction / speculative execution that has been a burning subject recently (Spectre)?
IIUC, the optimisation is the same, but all it does is tell the compiler what the hot path is likely to be and so the code won't 'jmp' to it; the instructions are likely to be in I-cache.
Not really.. in part 1 of a hypothetical spectre attack, you’d have to figure out how to time a branch hit/miss, and the presence of likely may change the times/inputs you’d need to use to do that.
These attributes are just hints to move hot/cold code near/far away and to maybe do the right thing if there's no branch prediction. They cannot prevent speculative execution.
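A minimal sketch of what the attributes express (purely a code-layout hint to the optimizer):

    #include <cstdio>

    int parse_digit(char c) {
        if (c >= '0' && c <= '9') [[likely]] {
            return c - '0';          // hot path: keep it close, fall through without a jump
        } else [[unlikely]] {
            std::fprintf(stderr, "not a digit\n");
            return -1;               // cold path: can be laid out far away
        }
    }

    int main() { return parse_digit('7'); }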
Not quite sure what example you're referring to, but you mean if you do a lambda like this?
auto add = [](int x, int y) { return x + y; };
Then yes, nothing will be captured. This is a pure function lambda. It's essentially just a convenient way to create a function pointer, and will in fact implicitly convert to a C function pointer. This is very useful with C libraries that use function pointers as callbacks (they usually provide the "this" equivalent as an argument instead of a capture).
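For example, a minimal sketch of handing a capture-less lambda straight to a C API:

    #include <cstdlib>

    int main() {
        int values[] = {3, 1, 2};
        // A capture-less lambda converts implicitly to a plain function pointer,
        // so it can be passed to qsort like any C callback.
        std::qsort(values, 3, sizeof(int),
                   [](const void* a, const void* b) {
                       return *static_cast<const int*>(a) - *static_cast<const int*>(b);
                   });
        return values[0];   // values is now {1, 2, 3}
    }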
Oh, wow. The language was complex already and this makes me avoid C++ unless it's constrained to a narrow subset (something like Google C++ Style Guide). No wonder languages like Go and Rust gain so much traction.
I find comments of this type bizarre. C++20 is trying to make the language -less- complex by deprecating the aspects of it that make things gross. Yes, for it to still be C++, you have to "add" the modules feature to the compiler, but the whole point of adding them is so that you -don't- have to think about include's. All of the disgusting complexity that results from doing literal textual inclusion goes away if we all use modules. Instead of having mandatory ifdef's in every header, repeated compilation of the same code, humongous compilation units, separation of implementation and interface (except for templates!)(and inlines!), you get... the interface we know we want.
If you have arguments with the implementation, that's one thing, but what would you prefer? That the language just stay still, warts and all? Ok... well then just keep using C++03. But you probably don't want to do that, because '03 sucks, right? Ok, and what would make it better? ----> All the things they're trying to fix via C++11 through C++20...
Well, hopefully NOT like the Google Style Guide, which is pretty universally seen in the C++ community as A Bad Thing, unless you work for Google.
And as I pointed out in another comment, these additions are mostly not aimed at C++ application developers. If you don't need them (and you probably won't) then don't use them.
> And as I pointed out in another comment, these additions are mostly not aimed at C++ application developers.
That's often been true in other recent C++ standards, but looking at the linked page about C++20 in particular, quite a lot of those points might reasonably appear in application code.
> If you don't need them (and you probably won't) then don't use them.
The trouble with this argument has always been that if your language provides a certain feature or syntax, even if you don't use it, there is no guarantee that everyone else whose code you depend on won't use it either.
Some language features are inherently contagious. If you are calling code that uses exceptions or const qualifiers or asynchronicity, you probably need to take that into account in your own code. I recognise that these aren't particularly esoteric as language features go, but I've still seen plenty of teams over the years that attempted to avoid using them in C++ based on some argument about making things too complicated, mostly with results that weren't great.
Even for new language features that are expected to be used mostly within libraries and not to be seen much in application code, you might still have to dig into the source code for a library to trace a bug or performance problem, which means in practice you still need enough awareness of the full language to do that.
Extra complexity in the design of a programming language always comes at a cost, whether or not you intend to use it. The important question is usually whether the price is worth paying.
> I've still seen plenty of teams over the years that attempted to avoid using them in C++ based on some argument about making things too complicated
Of course - the Google Style Guide being a prime example.
> you might still have to dig into the source code for a library to trace a bug or performance problem
I've been programming in C++ since the 1980s and I've never even tried to debug someone else's library - life's too short, and it's not what I'm getting paid for. Have you looked at the source for (say) your Standard Library implementation? If you are not intimately familiar with it (which kind of negates the advantages of using a library in the first place) you won't stand a chance of debugging it, no matter how deep your knowledge of the C++ Standard.
One recent thing I caught from going over the implementation was how MSVC's implementation of std::tuple is semantically different from GCC's, where GCC constructs left to right, but MSVC constructs from right to left.
I also debug through boost and have reported bugs, or just found bugs in it that had already been reported.
Qt is another library that I am constantly reading over, heck if anything just for the sake of learning interesting UI implementation techniques.
Anecdotally, I find it unfortunate that people who talk about how they've been programming in C++ for 30+ years are almost always the ones with very backwards and archaic practices, and they talk about those practices like they are commonplace.
"I've been programming in C++ for 500 years and never once have I had to do this, therefore it follows that no other C++ developer will ever have to do it either!"
There are entire generations of developers who've learned C++ and use it in ways very different from you, in technical domains you may not even realize exist. Don't presume that just because you personally have never done something, that it can't possibly be relevant or useful to an entire community of developers using the same technologies as you but in different ways.
I'm saying that on principle I do not debug other people's libraries (I do, of course, debug my own). I also do use many modern C++ features, (C++11 and later) particularly from the Standard Library, and I think that more developers should do so.
Fair enough, that's a principle you can have for yourself and depending on your role and responsibilities that may suit you. But at least be aware that others may have a different principle and set of responsibilities. I have a professional responsibility to take all measures I can to deliver the most reliable software to my customers and if that means debugging third party libraries, so be it. Heck if that means debugging the toolchain, the operating system, whatever the case, then it's my job to do it.
I don't have the luxury of having the software I deliver to end users fail and then saying "Oh well, it's because there's a bug in a third party library and as a matter of principle I don't bother debugging it, reporting it, or taking basic measures to deal with it so you poor customers will just have to deal with it."
a) "I spent the week chasing down an apparent bug on one of the Standard Libraries we use."
or:
b) "I implemented connectivity with the Hong Kong stock exchange, improved our MT performance to get a 20% improvement on submitting trades, and identified a bug we were having with currency conversions as possibly being in one of the libraries we use, wrote a work-around for it, wrote the fix and tests up on our developer wikki, and submitted a report to the library vendor."
Now, I would say that(b) is of far more value for the company I work for and is at least as "professional" as (a).
Several times in my life I've been in situations where a) was more important because the bug was causing real issues now. In your industry a bug that miscalculates risk can cost billions of dollars. In my industry a bug can cause a safety system to fail and kill people.
It looks like we're in similar industries then based on your comment and if that's more or less the level you've been operating at on a weekly basis for decades then without a doubt you are a significantly more productive individual than I am and really kudos to you for it.
My point is mostly that not everyone is you though, similarly I don't presume everyone works like I do. Hence arguing that because you've done something for 30 years that it reasonably follows that everyone else should also do it is a really poor argument.
> Now, I would say that (b) is of far more value for the company I work for and is at least as "professional" as (a).
That's fine, but it does assume that a workaround exists and can be implemented within a reasonable amount of time. If you're talking about a bug in a library of convenient text processing utilities, that might well be the case. If you're talking about a bug in a security-related library that you rely on to authenticate instructions to make high-value trades, maybe not so much.
> Fair enough, that's a principle you can have for yourself and depending on your role and responsibilities that may suit you. But at least be aware that others may have a different principle and set of responsibilities.
I can't upvote this sentiment enough. C++ has been used by millions of programmers working in numerous fields over a period of decades. Any attempt to generalise from a single person's own experience of using C++ to that entire community is surely unwise. I note (for no particular reason, honest) that this applies even if you are a long-standing member of the standards committee.
> I've never even tried to debug someone else's library - life's too short, and it's not what I'm getting paid for.
What is your general strategy to deal with bugs in other code? Sometimes just stepping through other code and fixing something small there is by far the fastest way to get things done. Not life-shortening at all :) I.e. way faster than alternatives like submitting a bug report and hoping it gets fixed, or hacking around the bug, or looking for another library. I do not understand why you'd abandon the idea out of principle. Perhaps you had some really bad experiences with it when you just started or so?
> Have you looked at the source for (say) your Standard Library implementation?
Very often. Usually not to 'debug it' in the sense of finding bugs but to figure out in detail why my code invokes UB or asserts or whatever somewhere else. Or in case of documentation not being clear to find out what really happens. Also a good way to learn by seeing how others write code.
> Of course - the Google Style Guide being a prime example.
Indeed, though it's been going on since long before Google was around!
> Have you looked at the source for (say) your Standard Library implementation? If you are not intimately familiar with it (which kind of negates the advantages of using a library in the first place) you won't stand a chance of debugging it, no matter how deep your knowledge of the C++ Standard.
Some years ago, I did exactly that. Found a bug in it, too.
Part of my concern with the ever-increasing complexity of C++ since C++11 is that what I did back then would be increasingly difficult today, because there are so many intricacies aimed at library writers squeezing out every last drop of performance. Of course, for a systems programming language like C++, that emphasis is understandable. But as I said, extra complexity in language design always comes at a cost. And if we'd had to wait for someone upstream to fix the bug in the library I mentioned above, that cost would have had quite a lot of digits in it and a dollar sign at the front.
Many of the new features will help with readability. Conditionally explicit is a good example. There are a ton of places in the standard library where conditionally explicit constructors are needed, and the workaround isn't too nice: it's a ton of boilerplate.
Concepts will also help for the same reason. SFINAE is used a lot in libraries and it's simply not readable. Concepts will make it more approachable.
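Minimal sketches of both points; Wrapper and the twice functions are made up for illustration:

    #include <concepts>
    #include <type_traits>
    #include <utility>

    // C++20 explicit(bool): the constructor is explicit only when the underlying
    // conversion isn't implicit; no need to write two overloads plus SFINAE.
    template <typename T>
    struct Wrapper {
        template <typename U>
        explicit(!std::is_convertible_v<U&&, T>)
        Wrapper(U&& u) : value(std::forward<U>(u)) {}
        T value;
    };

    // The same constraint written as pre-C++20 SFINAE and as a C++20 concept.
    template <typename T,
              typename = std::enable_if_t<std::is_integral_v<T>>>
    T twice_sfinae(T x) { return x + x; }

    template <std::integral T>
    T twice(T x) { return x + x; }

    int main() {
        Wrapper<double> w = 1;   // int -> double converts implicitly, so not explicit here
        return twice(3) + twice_sfinae(4) + static_cast<int>(w.value);
    }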
> Many of the new features will help with readability.
Indeed, and this has been the argument for many of the new developments right back to C++11. There is something of a devil-you-know argument here as well, though.
SFINAE is a surprisingly useful consequence of the way overload resolution is specified, but as you say, it's been used a lot. Many C++ programmers have encountered it over the years. Much has been written about it to explain it for those encountering it for the first time.
Realistically, C++ programmers will still have to understand the resolution rules even after C++20 becomes widely adopted. Those rules are also relevant for other reasons, and even in the specific case of SFINAE, the entire ecosystem isn't going to rewrite all its code to use newer alternatives overnight.
So now, any new C++ programmer who wants to analyse a third party library to trace the source of a bug is going to need to recognise multiple techniques to achieve that kind of behaviour and not just the strange but at least ubiquitous way we had before.
Only somewhat. As new features become more common the old ones that were harder to use become less important. C++11 has made a big impact on the type of C++ you see in the real world, now that it is 9 years old we can see change. The change wasn't overnight, but it is there.
It's the style guide for Google's specific environment. If you are not Google, much of it is likely irrelevant.
For example, the style guide says that C++ exceptions are the way to go...except that by the time the guide was written there was already too much existing code that wasn't exception safe. Therefore the guide says that regretfully, exceptions cannot be used.
And this is why the guide is "bad" - you simply can't avoid dealing with exceptions in C++ code, unless you also forgo the Standard Library, use weird and unreliable methods to detect constructor failures, and a bevy of other problems.
And I disagreed. Application developers need to break their code up, Modules aid that. Applications generally have a few custom templates for something and so concepts will be useful. <=> is useful for a lot of classes.
As I said: "mostly not aimed at C++ application developers" - note the word "mostly". Of course, some features are usable and useful to application developers.
The only thing here where I think "wow, that's a lot" is the whole `module` thing (and even that, I'm sure I could love, it's just alien to me for now, and I doubt it'll gain much traction for quite a while). Everything else seems like a very C++ thing, or an improvement.
No one ever forces you to use extra features. But if you can improve/reduce your code, why not?
I have a suggestion for HN admins. There could be a thread about Rust superiority pinned to the top of HN where the Rust "brigade" could promote their chosen tool supremacy all day long and in return leave threads about other languages alone. Otherwise almost any thread about C, C++, C#, Go, Java, JavaScript,... is getting quickly derailed from Rust aficionados.
Unfortunately your comment is doing the very thing you're complaining about. Worse, because of the contrarian dynamic (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...), it got upvoted to the top of the thread, choking out all good discussion. This isn't the way to make HN better!
I'm going to mark it off topic now, which will downweight it. In the meantime, please review the site guidelines: https://news.ycombinator.com/newsguidelines.html. They ask you to contact us at hn@ycombinator.com if you want to raise a question like this. They also ask you not to be snarky.
The core value of this site is intellectual curiosity, and divisive meta comments are not in that spirit. If other people are posting tediously and offtopically, the solution is definitely not to post tedious and offtopic comments with an oppositional vector. I know it's tempting, but it makes the threads worse. The solution is to post more comments in the spirit of curiosity, or (failing that) not to post.
You don't think it has something to do with people being fed up with almost every other programming-language thread getting spammed by low-quality "meta" posts about a certain language's superiority, which are irrelevant to the discussion?
You guys don't have to convince me—I'm aware of the problem. The question is what's the solution. Creating a mirror image of the problem and upvoting that to the top of the thread is obviously not the solution.
Sure. But you have just proven with my post that you can push such content to the bottom of the thread. Why don't you simply do the same with those comments? If it's too much work for you personally, maybe add something like a flag button, but call it an "irrelevant" button, and posts that get too many irrelevant flags get pushed down to the bottom.
They can, and they do. You need to email them about specific instances, and they decide on a case-by-case basis. Meta complaints aren’t actionable. And there’s already an irrelevant flag: it’s called a downvote.
To be fair, I don't mind people talking about Rust. What I dislike the most are those that don't bring anything to the table; they will just repeat the same complaints over and over again and remind you how bad of a choice C++ is. While in reality, smart people can achieve a lot with it. But yes, C++ is not for everyone and it is definitely not easy.
No, because the other ones are higher level languages, with memory management, which completely removes many attack angles.
So Rust can't use its main strength against them, safety. Instead it can only offer extra performance, and even that depends on the use case (C# and Java can be quite speedy these days, especially with good tuning).
Not only did you change the list of languages under discussion just now, you also included Go, which oblio specifically did not say would avoid discussions of Rust.
You’re moving the goalposts, and complaints about discussions of Rust tend to be far more annoying than the actual discussions. How dare people want to discuss related languages they enjoy? I mean, right?
Most language discussions will have people join in and discuss other languages.
I’ve spent quite a few words in this very thread discussing Go’s package manager! Should I apologize to someone? No... I think language discussions normally involve discussions of languages.
Go is frequently attacked by Rust evangelists because they feel, and they seem right, that Go is a programming language evolutionary dead end. It's a very practical language, so very popular, but based on programming language theory from 40 years ago, with a very low adoption of concepts from the last 2-3 decades.
The risk being that in 20 years we'll wake up to the Go fad and realize that many people had been writing mountains of already legacy code from day 1, 20 years before. And those mountains of legacy code will have to be maintained because nobody's going to throw away so much working code.
Both D and Zig are direct competitors for Rust so I don't understand the fuss. This tango goes both ways, it's just that there are fewer D or Zig evangelists out there, overall. But they do visit Rust threads, too ;-)
And that's not bad, cross pollination is good. Insularity is bad.
Not just that. A quick Cmd+F brings up this which as you might have guessed derailed into mostly tangents with little added value:
"Oh, wow. The language was complex already and this makes me avoid C++ unless it's constrained to a narrow subset (something like Google C++ Style Guide). No wonder languages like Go and Rust gain so much traction."
Yeah but that is a point about C++ complexity with Rust getting a passing mention.
All programming language discussions bring up the same points over and over again. Rust threads also always have the ignorant question "Why is the language constantly changing?".
Started coding C++ when I was 14 -- 22 years ago.