Extreme include discipline for C++ code (kowalczyk.info)
64 points by todsacerdoti on May 13, 2022 | 89 comments


I seriously doubt the compiler deduplicating header files that specify "pragma once" is what is slow about parsing all those header files. I really think the main issue is that all the different translation units have to reparse the web of headers, which this doesn't solve since you still need to manually figure out the correct order and include them all in each source file. Seems like a lot of headache for no gain.

Holding things by pointer on the other hand actually does let you prevent the includes since you can just forward declare in the header, but it's an indirection :-/
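
A minimal sketch of what I mean (names are made up):

    // widget.h -- hypothetical; note there's no #include "renderer.h" here
    #pragma once

    class Renderer;           // forward declaration is enough for a pointer

    class Widget {
    public:
        void draw();
    private:
        Renderer* renderer_;  // pointer member: full definition not required here
    };

    // widget.cpp -- the full definition is only needed in the implementation
    #include "widget.h"
    #include "renderer.h"

    void Widget::draw() { /* renderer_->... */ }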


IME the main problem with allowing headers to include other headers is that this quickly grows out of control, and after a while headers will include stuff that's not actually needed (usually, no longer needed).

Then you get quickly into a situation where a header includes another header that's no longer needed by itself, but is used by another header in the same 'include tree', which in turn doesn't include the required header itself. Gets very messy very quickly.

It's much easier to notice and fix such situations in the top-level implementation file, and another advantage is that the "dependency complexity" of an implementation file is visible at a glance by looking at the include list at the top of the file.

TL;DR: It's not primarily for compilation speed, but for 'header hygiene' (but of course this will also eventually help with compilation speed as the project grows).

Another well-known (at least among game devs) proponent of the same idea is Our Machinery (granted, the whole idea makes a lot more sense in C than in C++, because in C declaration and implementation are usually much more strictly separated into header and source files than in C++):

https://ourmachinery.com/post/physical-design/


The tool for dealing with this is https://include-what-you-use.org/

The idea is that each file (source or header) should include exactly those headers from which it uses things. In practice, it gets a bit more complicated, as you don't want to include internal implementation headers, and sometimes the same thing does not even have a canonical public header, but IWYU does allow you to configure all that to your liking.


Great point. And this will soon be available directly from Clang tools such as clangd, which will make life much easier.


Hopefully explicit exports with modules in C++20 will solve the include-tree problem.


I think TFA's approach actually makes things worse over time, as now you need even more discipline to clean out no-longer-needed #includes from all source files using a header when that header removes a dependency.

IME the best tools to solve the duplicated parsing (and compilation for templates) are IWYU [0], to cut down on unneeded includes, and unity/jumbo builds, to combine multiple translation units so common headers only need to be parsed once.

[0] https://include-what-you-use.org/


There actually is a bug filed against GCC [1] because it searches a linked list for headers using #pragma once and searches a hash table for headers using include guards. The performance difference is noticeable for translation units that include a lot of headers.

It would be a relatively simple thing to fix, if anyone is looking for a fun little project to learn a bit about GCC.

[1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58770


> Holding things by pointer on the other hand actually does let you prevent the includes since you can just forward declare in the header, but it's an indirection :-/

And it means you now need to deal with raw pointers, which often isn't optimal or preferred (over say references for example).

At least we'll get modules soon. That should reduce the need for some of these workarounds that are really mainly used to reduce compile times, and not to improve the code in any other meaningful way.


There are issues with type safety when using raw pointers (e.g. void*) over alternatives.

Reminds me of the good ol' days when Modula-2 had *.def (definition) and *.mod2 (implementation) files, and you could compile the definition files alone to get binaries of the interface files that you could write and compile against in a typesafe way before the interfaces were even implemented.

And as far as I recall the decoupling also made things compile fast (e.g. with Applications Systems Heidelberg's Modula-2 on the Atari ST 520+).


Sure, but you can also forward declare references!
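
Something like this compiles fine with just the forward declaration (sketch, hypothetical names):

    // view.h -- only a forward declaration of Model is needed
    #pragma once

    class Model;              // forward declaration

    class View {
    public:
        explicit View(Model& m) : model_(m) {}
    private:
        Model& model_;        // reference member is fine with an incomplete type
    };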


For some reason I thought they weren't but that's good to know.


The gain is mentioned in the article: without such discipline, sometimes when you #include button.h (what you expect to be an innocuous header file for a desired button functionality) you accidentally include nuclearpowerplant.h (a giant monstrosity of recursively included functionality, none of which you actually need).

It may be possible to avoid that worst-case situation without such strict discipline, but with the no-recursive-include rule for sure it won't happen.


If including button.h ends up including nuclearpowerplant.h, it's because it's a dependency, so with the article's method you will end up including nuclearpowerplant.h too; otherwise it won't compile.

Of course this assumes that headers only include what they really need. If a header includes something it shouldn't (maybe an old dependency that was forgotten), the issue is stale dependencies, and the fix is to remove it from the header once. With the article's method you would need to remove it from every file that includes button.h (except where it is also used by another dependency, in which case removing it will make the compile fail).

Edit: I've realized the problem. If A includes B because it requires it, and C requires A and B, C will probably only include A and it will compile. Later, if A is refactored and no longer requires B, removing it will break C. The solution is to always include what you use, even if it's already included as a dependency, as explained by another comment.
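
Roughly (hypothetical headers):

    // a.h
    #pragma once
    #include "b.h"            // a.h needs B today

    struct A { B b; };

    // c.cpp: uses both A and B, so it includes both headers explicitly,
    // even though a.h currently drags b.h in anyway. If a.h is later
    // refactored to no longer need b.h, c.cpp still compiles.
    #include "a.h"
    #include "b.h"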


> It's a known problem so we mitigate with #ifdef guards, #pragma once etc. but in my experience those band-aids don't solve the problem.

Errr, why?


The "problem" at this part in the text is that the headers still have to get parsed multiple times by the preprocessor. With his approach this would not be necessary.

That said, I don't think that is that big of a problem compared to final compile time reduction.

EDIT: Disregard that, the preprocessor remembers the states of include guards. So there is literally no point.[1]

[1] https://gcc.gnu.org/onlinedocs/cpp/Once-Only-Headers.html


Yeah, came here to say the same thing.

Plus:

> I don't think I've ever seen any C++ code bases that follows this rule.

It's not particularly useful to do this given that the moment you include anything from /usr/include or the STL, that rule will get broken from under you a hundred times.


This discussion is the reason I hope that C++20 modules gain traction. It is nice to improve myself as a developer (and have a look at different viewpoints), but in the end I don't want to have a tradeoff between code performance and readability.

I would convert my entire codebase to modules in a second if the CMake support were nicer.


There is no danger of C++20 modules failing to "gain traction". They are already in the Standard. All that is left is to define Standard module names for the Standard library components.

Most likely that will be, simply, "import std" in almost all cases, with no reason for finer granularity.

Of course we will need to update our non-Standard libraries to get the benefits there. That will happen fast once more compilers finish implementing C++20 features.
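
For illustration, a named module looks roughly like this (module name is made up; build-system support still varies):

    // math.cppm (Clang) / math.ixx (MSVC) -- module interface unit
    export module math;

    export int add(int a, int b) { return a + b; }

    // main.cpp -- consumer; no header, no textual inclusion
    import math;

    int main() { return add(1, 2); }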


For me at least, it seems like development on the build system side is very slow. The relevant ticket in the CMake Repository[1] is more than three years old for a critical functionality. There are efforts to make an open interface between Compiler and Build System[2] but these are non-standard and for now it seems like there is not "one good way" to do modules. For example it is possible to do modules with the CMake MSVC Generator[3] but I can't use Ninja when working with MSVC.

[1]https://gitlab.kitware.com/cmake/cmake/-/issues/18355

[2]https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p16...

[3]https://devblogs.microsoft.com/cppblog/cpp20-modules-in-cmak...


One compiler implements module support to the point it's useful, and that compiler targets one operating system.

I don't think it's hard to fathom that C++ modules might not gain traction when there is little indication of adoption in compilers, let alone code bases that could benefit from them.


They cannot claim Standard conformance without implementing all required features. Furthermore, since Gcc and Clang are also coded in C++, they will not get the benefit of rebuilding the compiler much faster without implementing it.


Cool. They're still the standard compilers for C++ on major platforms and don't have full module support, and won't have it anytime soon.


You can make up any definition you like for "soon". And did.


The GCC in CentOS 7 (what we use at work for being 'stable') doesn't even support C++14. In 2022.


This is meaningful, how?

You are using a snapshot frozen years ago. Surprise, frozen thing is still frozen! You are of course able to build for a frozen system with modern compilers (e.g. "dev-toolset-11"). If you don't, it is because, and only because, your employer has chosen not to.

My recommendation is, get a better employer.


I complained, and now I have an Amazon Linux 2 machine to play with.

Why the default is that horrible CentOS (I am an Ubuntu person), is beyond me.

It's been decades since Oracle stopped being relevant (only reason to ever pick up a RedHat compatible distro, IMO).


CentOS 8 was released (and itself frozen, except security patches) years ago. Current CentOS has been broken loose from RHEL, so if you need bitwise compatibility with RHEL 9, you will need to switch over to building for Rocky or Alma when you move off targeting 7 or 8.

You can build for CentOS 7 in a docker image on your Amazon host, and also run Gdb on your Amazon host, using "target remote" to attach it to a gdbserver on your CentOS 7 target execution environment.

I use sshfs on my host to map a directory on the target, so my builds put binaries over there automatically.


The issue is: I am testing graviton instances, because they are supposedly cheaper than Intel/AMD ones.

Some 'normal' libraries still don't exist for ARM, and I have to compile more stuff myself.

My own C++ code is the one requiring GCC 5 or higher.


Again, there is no difficulty of any kind in using (say) Gcc-11 to build binaries to run on a stock CentOS 7 target. On CentOS (which user-space environment you would run in a docker image), I think the package you would need is dev-toolset-11. Look it up. (It is possible that is only in CentOS 8, so you might need dev-toolset-9 instead, which would serve.)


I really suggest checking out the fast and friendly Xmake, which does support C++20 modules:

https://xmake.io/#/ https://tboox.org/2021/10/30/xmake-update-v2.5.9/


> Don't get me wrong: the price of minimizing compilation times is eternal vigilance.

I don't follow any such rules for my own codebase, and here is how my compilation times look for three representative files: https://streamable.com/pboot1

- first and second one have tons of Qt / std / boost:: / etc... stuff

- third one includes opencv.hpp (which includes most opencv libraries)

I have a hard time seeing the benefit of putting more effort into this considering that the edit -> build -> run cycle takes around a second or two in all cases, and I don't even use all the clang PCH options available in recent clang versions.

So no,

> the price of minimizing compilation times is

using your damn tools correctly.


This.

For decades, Bloomberg development, and people so unfortunate as to follow them (because of a silly book from the early '90s), adhered to an idiotic convention that wrapped every #include directive in an #ifdef block. Of course, all compilers, also for decades, recognized when they had already seen an include file that itself was wrapped in its own #ifdef block, and skipped it. Later this same behavior was simplified to #pragma once, obviating the need to invent unique preprocessor names for all headers, which on occasion collided or were misspelled, with amusing results.
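
For anyone who hasn't seen it, the convention looked roughly like this (macro name is illustrative):

    // At every include site (the "redundant external include guard" idiom):
    #ifndef INCLUDED_FOO
    #include "foo.h"          // foo.h itself also checks/defines INCLUDED_FOO
    #endif

    // What compilers made unnecessary long ago; today foo.h just says:
    #pragma once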

There are no remaining C++ compilers in use, outside of sad pre-Standard backwaters, that do not implement #pragma once.

And anyway, the proper solution if you are worried about compilation time is to switch to C++20 modules.


See also the almost unknown Makeheaders program that is part of the Fossil project, which by default generates one .h file per .c file, with the same name as the C file. Only headers that need to be modified are touched, so that when a header file is updated only the corresponding .c file is recompiled, with no need for gcc/Makefile wizardry with -M flags and -include directives.

Of course you then find the same function prototypes repeated over and over in many headers but I don't mind the boilerplate when it is automated and I don't have to actually write any of it.

The only problem is that it is bare bones and only supports outputting the header file in the same directory as the .c files; that's mildly annoying, but it could easily be modified, and the program is contained in a single C file.

https://fossil-scm.org/home/doc/trunk/tools/makeheaders.html

https://www.hwaci.com/sw/mkhdr/

https://fossil-scm.org/home/file/tools/makeheaders.c



I'm currently on a code-formatting-and-linting spree on my team. include-what-you-use is definitely something I'm adding.

I'm currently having to fight against the tide of the very-broken way that the codebase was built ... very much _not_ as a modern or even old school project let alone a Docker-based project. So tons of things get included everywhere and some things are even compiled multiple times. It's a bit of a nightmare.

Putting tooling into CI will help prevent problems from showing up ... but the tooling wasn't there from the start so most of the project needs to be refactored so that CI doesn't immediately turn red. And that's the biggest headache tbqh


Adding this kind of thing to existing large codebases is always challenging. If you can, add it to CI but only fail if there are new warnings. Then things will get better over time until fixing the remaining cases becomes feasible.


> If you can, add it to CI but only fail if there are new warnings.

That's a great idea. But I'm not sure how to easily do that. Getting CI to fail if clang-format reports a warning is easy enough. But... you suggest that I should store all of the existing warnings somewhere and only report new warnings? That's a lot (!) more effort unless you know an easy way


This seems questionable. The author suggests 'faulting in' header files to trace dependencies, in pursuit of the iffy goal of including exactly what your compilation unit needs and no more.

So you're manually doing what the compiler and code pattern used to do for you. And creating dependencies in your source code to files that may become obsolete (include bar.h because foo.h needs it; then foo.h changes and now bar.h is no longer needed yet you still include it)

I have a different code pattern: header files include everything they need and no more; I include just the APIs I use in my source files. I prefer this pattern by leaps and bounds over the OP's recommended technique, because the source and tools manage themselves.


In other words, headers should represent interfaces, and implementations should include the headers of the interfaces they depend on. It makes no sense to include headers in your implementation's interface definition, when these are specific to the implementation and not its interface. This should have been common sense.


>Many ideas seem great on paper but fail in practice.

Yes, including the one posted by the author, sounds great on paper but in practice provides next to no benefit. Notice how the author has provided absolutely zero evidence to justify their claim.

>You just included and parsed bar.h twice.

No you didn't, because header files have header guards of the form #ifndef HEADER_GUARD/#define HEADER_GUARD or #pragma once which means that header files are only parsed the first time they are included.
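
I.e. (sketch, illustrative names):

    // bar.h, variant 1: classic include guard
    #ifndef BAR_H
    #define BAR_H
    void bar();
    #endif // BAR_H

    // bar.h, variant 2: equivalent in practice
    #pragma once
    void bar();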

>This makes me either a madman or a genius.

It makes you neither; you're just following something because someone with a big name who you respect said it, so you do it without actually verifying for yourself whether it's true.

>Name an economically successful communist country.

China.


>> No you didn't, because header files have header guards of the form #ifndef HEADER_GUARD/#define HEADER_GUARD or #pragma once which means that header files are only parsed the first time they are included.

The header file still has to be read from disk, skipping everything until the #endif, before continuing. This is probably not much of an issue these days with SSDs and even HDDs with caches.

I think the rule can still be a good thing by making you aware of what the dependencies actually are, which can be an indicator of excess complexity or poorly defined interfaces.


Loading the same file from disk twice (which will be cached in memory) and parsing a file are not the same things. Furthermore almost every C++ compiler I'm aware of, including icc, gcc, clang, msvc, has optimizations to special case header guards.


Every C++ compiler still in use will not even try to open a header-guarded or #pragma once file it has already seen.

Anybody trying to optimize on top of that is engaged in foolish superstition.

(If you are still using Sun's pre-Standard compiler, you are not reading this. I think even Bloomberg has abandoned that.)


It doesn't have to be read twice if the preprocessor recognizes and keeps track of include guards. GCC does this.


> Notice how the author has provided absolutely zero evidence to justify their claim.

That is true and unfortunate, but it is easy to find evidence that cleaning up header files will decrease compilation times: https://lore.kernel.org/lkml/YdIfz+LMewetSaEB@gmail.com/T/

>> Name an economically successful communist country.

> China.

Except that China’s success has mostly come by instituting a “Special Economic Zone” where a lot of the normal communist rules don’t apply, and then gradually relaxing the rules elsewhere as well. Just the fact that China allows individuals to start and run businesses is a huge break from communism.


The article isn't about cleaning up header files. I don't think anyone criticizing the article is claiming that reducing the amount of code in header files will speed up compile times. The burden that the author should substantiate with evidence instead of claiming that he's a genius or a madman is whether moving includes out of header files and into source files as a strategy to avoid including the same header file multiple times has an effect on compile times.

Most people criticizing this article, including myself, argue that it does not (along with reasons why, such as header guards and the optimizations that compilers include to recognize them).


The rule to never include anything from a header is just the ultimate cleanup.

I reiterate that I think it is unfortunate that the author of this article collected no numbers. It would be interesting to compare them to the Linux kernel’s numbers:

                                     | v5.16-rc7                      | -fast-headers-v1
                                     |--------------------------------|---------------------------------------
    'touch include/linux/sched.h'    | 230.30 secs | 15.6 builds/hour | 108.35 secs | 33.2 builds/hour | +112%
    'touch include/linux/mm.h'       | 216.57 secs | 16.6 builds/hour |  79.42 secs | 45.3 builds/hour | +173%
    'touch include/linux/fs.h'       | 223.58 secs | 16.1 builds/hour |  85.52 secs | 42.1 builds/hour | +161%
    'touch include/linux/device.h'   | 224.35 secs | 16.0 builds/hour |  97.09 secs | 37.1 builds/hour | +132%
    'touch include/net/sock.h'       | 105.85 secs | 34.0 builds/hour |  40.88 secs | 88.1 builds/hour | +159%
Doubling and nearly tripling the compile speed is a huge win!

But the really eye–opening numbers are further down. Here are the first few rows from the table:

    ------------------------------------------------------------------------------------------
    | Combined, preprocessed C code size of header, without line markers,
    | with comments stripped:
    ------------------------------.-----------------------------.-----------------------------
                                  | v5.16-rc7                   |  -fast-headers-v1
     -----------------------------|------------------------------|------------------------------
     #include <linux/sched.h>     | LOC: 13,292 | headers:  324 |  LOC:    769 | headers:   64
     #include <linux/wait.h>      | LOC:  9,369 | headers:  235 |  LOC:    483 | headers:   46
     #include <linux/rcupdate.h>  | LOC:  8,975 | headers:  224 |  LOC:  1,385 | headers:   86
     #include <linux/hrtimer.h>   | LOC: 10,861 | headers:  265 |  LOC:    229 | headers:   37
     #include <linux/fs.h>        | LOC: 22,497 | headers:  427 |  LOC:  1,993 | headers:  120
Note in particular that sched.h includes, directly or indirectly, 324 _unique_ header files. Header guards are not going to help you here, because there are still 13k lines of code to parse and compile. The speedup comes from reducing this to 64 unique headers and just 769 lines of code to compile.

You cannot rely solely on #pragma once or header guards. You should definitely use them for correctness, but if that’s all you do you will leave a lot of compile–time performance on the table.


>The rule to never include anything from a header is just the ultimate cleanup.

What? No the two have nothing to do with one another.

The clean up in the Linux kernel involves having more granular includes, that is breaking up very large header files on the order of 10s of thousands of lines of code that often include unrelated or independent declarations, down into smaller header files that are on the order of 100-1000 lines of code and whose declarations are tightly coupled. I don't think many people would argue against breaking down large catch-all header files into smaller independent header files so that a consumer only needs to include what they need instead of having to include everything and the kitchen sink.

>Note in particular that sched.h includes, directly or indirectly, 324 _unique_ header files. Header guards are not going to help you here...

Of course not, and no one is claiming it will. It's also not going to help you if you decide to move all of those 324 unique header files into your .c file which is all this article is suggesting you do. What will help you is to refactor your dependencies so that you don't need to include 324 header files to begin with and instead can reduce your includes down to some minimal set. Once you've broken your dependencies down to a minimal set of headers, it won't matter whether you include that minimal set in a .c file or a .h file, what matters is that you've reorganized your dependencies to avoid having a bunch of declarations that are independent of one another.

Ultimately you're conflating two very different concepts under the vague term "cleanup". Having granular header files is something many people would agree makes compile times faster and also just improves overall software quality; by all means do it. Moving all your includes from .h into .c is just some kind of superficial cargo-cult programming and will have no material impact on compile times or quality.


This person needs extreme ssl cert discipline.

"blog.kowalczyk.info uses security technology that is outdated and vulnerable to attack. An attacker could easily reveal information which you thought to be safe. The website administrator will need to fix the server first before you can visit the site.

Error code: NS_ERROR_NET_INADEQUATE_SECURITY"


Tell that to cloudflare, snarky person

I don't know what tool you're using to determine this, but it'll say the same for half of the internet as my site is hosted on render.com and proxied via cloudflare.

Also, everything on this website is open source https://github.com/kjk/blog

You can read all the "secrets" you want without even visiting the site.


My tool is Firefox ver. 100.00. I straight up can't visit the site.


No repro on either my work or home computer, both of which are Firefox 100. The certificate shows an encrypted connection using TLS_AES_128_GCM_SHA256 with TLS 1.3.


Oddly, mine is 100 too. I see a flat file with only 1995-grade HTML formatting.


> and bang! You just included and parsed bar.h twice

I'm sure I'm missing something, but I'm curious why the compiler can't parse the header just once and keep it in memory (with further dependencies forming a graph of ASTs referencing each other), instead of naively combining all the raw text every time before it can parse the entire combined file?


You can do some tricks but the basic issue is that the "environment" in which you interpret the preprocessor directives for a ".h" file may change across different contexts resulting in a potentially distinct AST: simplest example is the common #ifndef guard (evaluates the true branch in the first load, false in subsequent ones).


The compiler would store the condition in memory, and reevaluate the condition every time the file is included again. If the contents of the file aren't skipped, then it would actually read the file again and parse it normally.

For instance, if the file was this:

    // my-file.h
    
    #ifndef MY_FILE_INCLUDE_GUARD
    #define MY_FILE_INCLUDE_GUARD

    void f(void);

    #endif
the compiler would remember the “#ifndef MY_FILE_INCLUDE_GUARD” part.


Sure, you can reuse a cached file at a token level, but you cannot easily do that, as the parse tree can be completely different based on preprocessor state.


I'm just showing how a compiler could optimize files that have include guards. This doesn't do anything for files that are intended to be included multiple times.


Ach. Fun


Plus the default is to launch a different instance of the compiler for every .c/.cpp file, so that state has to be written out to disk and reread by the next one.

If I ever did another C or C++ project (which would probably only be at gun–point), I would go the opposite route and have only a single compilation unit. I would have a single primary source file that included all of the others, and none of the others would include anything. Then I could run the compiler a single time on just that one file and it would be as fast and as simple as possible. Well, I don’t know of any C or C++ compilers that can spread their work over multiple cpus (since we usually just run multiple instances simultaneously with make -j), but other than that it would be as fast as possible.
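
Something like this, I imagine (file names made up):

    // everything.cpp -- the only file handed to the compiler
    #include "log.cpp"
    #include "parser.cpp"
    #include "renderer.cpp"
    #include "main.cpp"

    // one invocation, one translation unit:
    //   g++ -O2 everything.cpp -o app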


If you use cmake it can do exactly this for you automatically : just pass -DCMAKE_UNITY_BUILD=1


“Unity build” seems like a pretty good name for it.


so this discipline speeds up compilation, but makes the code "annoying" to write and requires "eternal vigilance".

fast compiles are nice but to trade away your code's hackability seems infinitely counter-productive to me. won't that have an immense negative impact on total development time?


I think the author is asserting that trading away the hackability is worth the faster compile times because faster compiles lets you iterate faster and therefore increases hackability (i.e. put in minutes of effort once and get minutes of benefits per compile in the future).


if you work on a non-trivial c++ code base, being vigilant about compile times can literally make the difference between compiling your codebase from scratch in a minute vs. compiling it in an hour or more. 0% exaggeration here. With similar relative differences for incremental compiles. Nothing kills a code-base's hackability like waiting around to be able to actually run code you wrote.

Include code style is also just the tip of the iceberg. This also includes not using many c++ features that blow compile times up unless the gain from using them is so big that you eat the compile time (like containers would be an obvious example).


The good news is that abandoning this "eternal vigilance" has no effect on compilation time, but speeds up development by the exact degree that you don't waste that time and attention anymore.


If a header file can't include another header file, then how do you #include <windows.h>?


I don't believe performance related blog posts without benchmarks, and neither should you!


C++ now supports modules.


#pragma once


Not standardizing #pragma once because of edge cases with network and other esoteric filesystems, when pretty much every C++ compiler implements #pragma once, is really peak C++ committee bs. And it's not even like the C++ standard doesn't leave other edge case behavior undefined or implementation defined...


Fortunately there is no need to standardize it. Everybody implements it, and all the same way.


It may be widely supported but it's also nonstandard. You also have to use include guards anyway, so it's best to avoid pragma once altogether.

And as the article mentions, that doesn't always solve the problem.
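
If you go that route, a header ends up looking like this (sketch; names illustrative):

    // foo.h: guard everywhere for portability, pragma as a cheap extra
    #pragma once
    #ifndef FOO_H
    #define FOO_H

    void foo();

    #endif // FOO_H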


> It may be widely supported but it's also nonstandard.

by that logic no one should use python, ruby or rust because they don't have an iso or ecma standard either


I wasn't making a general statement about software standards but one specific to C/C++ preprocessor directives.


but "standards" is a general statement in itself ; you can't dissociate the word from the whole ISO process in this context


(Ruby does have an ISO standard but it’s weird and nobody uses it.)


oh, TIL! Why wouldn't anyone use it?


For reference: https://www.iso.org/standard/59579.html

This work happened in 2009. It was originally based on Ruby 1.8.7, the final draft was completed in 2010, and didn't cover the full standard library at the time. Took two more years to get through ISO.

If you don't have an encyclopedic memory of Ruby version releases: when that draft was completed, the current released version of Ruby was 1.9.2. Ruby underwent huge changes between 1.8.7 and 1.9.2; many other languages would have characterized it as a major version change at the time; Ruby didn't follow semver back then.

So basically, it was already out of date well before it was final, and never was updated. So it's really irrelevant today.

Why did this even happen? Well, supposedly there are some requirements for Japan government work that require a spec, and so a spec was produced. That's my recollection, anyway.


Name one C++ compiler that keeps up with C++ standards but does not provide #pragma once.


Every compiler must support include guards, but only those that choose to support #pragma once. It should be obvious, between the two, which is the more stable option.


Except, all choose to. And, must, if they want to be compatible with code that all must compile.


I've been living on the edge using only #pragma once for several years and haven't regretted it. According to this support table, it's pretty widely supported these days: https://en.wikipedia.org/wiki/Pragma_once#Portability


Only to underline this common fallacy: "Name an economically successful communist country."

The purpose of communism has never been that of being "economically successful".

If you want to dismiss communism as a failure, first off, you should not measure money, and not even wealth, but simply happiness. Second, you should measure equality, and, finally, you should measure the happiness of the least happy, not that of the most happy.


While I agree with you that economic success does not have to be the utmost goal of a state because that's a goal that is easily too narrowly defined ("Growth"),

1. The doctrine those states called "Marxism" or "Marxism-Something" very much emphasized well-being as material well-being stemming from economic success.

2. Self-described socialist states, generally referred to as communist states as they were dominated by an all-powerful "communist" party, have not in general been successful in terms of general happiness, although they argued that they at least brought an industrialized level of economic prosperity whereas capitalism would have left those places stranded in misery. Whether that is true, or whether it was worth the price, is a question for historians to debate.

To summarize : Humans don't live by bread alone, and I am not even sure the bread was great there.


So that's what the C in C++ stands for!


mmmmh, if your criterion is the happiness of the least happy... I guess C++ is a total failure!


"It works" is more important that it seems. Many ideas seem great on paper but fail in practice. Name an economically successful communist country."

Err... China?


One of the reasons I moved away from C/C++ and went full Zig/D.



