D doesn't need constexpr. It's much simpler - any function whose value is needed at compile time is evaluated at compile time. For example:
    int square(int x) { return x * x; }

    const y = square(3);        // evaluated at compile time

    int bar() {
        int[square(2)] array;   // evaluated at compile time
        return square(3);       // evaluated at run time
    }
If a value is needed at compile time, and the function cannot be evaluated at compile time (i.e. it relies on things like global variables not known at compile time), then a compilation error is issued.
There isn't any ambiguity about whether it is evaluated at runtime or compile time - there is no "fall back" to run time if it can't be done at compile time.
How does D deal with cases where the computation might be platform dependent (like floating point arithmetic, or bit kung-fu plus endianness)? And more generally, what happens when a computation *can* be done at compile time, but *needs* (for correctness or resource reasons) to be done at runtime?
D doesn't run code at compile time as an optimisation. It only runs code at compile time in certain constructs like variable declarations or static if statements. If you need your code to run at runtime simply stick it in your code where it needs to run.
You can annotate D functions as 'pure', which would have that effect, but purity is used for other reasons.
Evaluating functions at compile time has another wrinkle in D - it is path dependent:
    int square(int x) {
        if (x < 100)
            return x * x;
        printf("oops\n");
        return 0;
    }

    const int x = square(3);   // works
    const int y = square(100); // fails at compile time
I.e. the path taken through the function has to be pure, not every part of the function.
I think this might waste resources for some transient results. Sometimes the computation time required for evaluation is negligible, while the result might require more memory than "the function call".
It's been in D for a decade, and has been extremely successful, to the extent of becoming a critical feature.
An early example was someone wrote a ray tracer in D that ran and generated output completely at compile time.
A more useful example is D's regex package can build the engine at compile time, thereby emitting a custom engine for the regex, instead of at runtime.
While I think it's awesome D can do that (and am a bit jealous the languages I use regularly don't have any real sort of constexpr) what does that _really_ provide?
It would seem the startup cost of building a regexp engine would really only matter for, say, command-line programs (i.e., short-lived, run-once programs). For a web application the overhead would be negligible.
Again, don't get me wrong, but it just seems to be more of a "neat!" thing than a "we _need_ this for reasons X, Y, and Z."
Most regex engines build interpreter bytecode at runtime, and then execute the bytecode with an interpreter. Being able to generate the engine at compile time means the runtime engine is running custom compiled code.
So, what it _really_ provides is runtime performance.
Running code at compile time in D is not an automatic optimization. Only code in certain constructs is run at compile time, for example function calls in variable declarations or static if statements. Generally it's not something you even need to think about. For example, a declaration like enum x = foo(); runs foo() at compile time.
>In conclusion, at the time of this writing, you need to inspect the generated code of your compiler, if you want to be sure that something is really calculated at compile time.
I don't get this. Why? Doesn't making a variable constexpr ensure it is a compile time value, or it will fail to compile?
> Doesn't making a variable constexpr ensure it is a compile time value
It ensures that the variable could be evaluated at compile time, where "could be evaluated at compile time" means "conforms to the rules set forth in the C++ standard". Whether or not it is computed at compile time is dependent on the compiler.
The option was discussed during standardization. It is expected that some corners of the language will never be available at compile time, for example I/O. So you need a keyword to declare a function to be callable at compile time; otherwise its constexpr-ness would depend on its implementation details and it wouldn't be possible to check in isolation.
It might not be a great reason (some code becomes keyword soup with all the pointless boilerplate; we really need a DWIM[1] keyword).
(EDIT to my last comment since I can't actually edit it -- I'm not sure if I misinterpreted your comment earlier or if you updated yours, but this is my updated reply.)
> It is expected that some corners of the language will never be available at compile time, for example I/O. So you need a keyword to declare a function to be callable at compile time; otherwise its constexpr-ness would depend on its implementation details and it wouldn't be possible to check in isolation.
Don't compilers already require the implementation of a function to be available in order to evaluate the code at compile time? And don't they already have to verify that it can indeed be called at compile time? I don't really get how the constexpr flag helps the compiler. It could just assume everything is constexpr implicitly unless proven otherwise.
constexpr isn't just a claim about today - it's a promise for the future. That is, the compiler can certainly notice that a function can be evaluated at compile time, and indeed optimizers will often perform enough inlining and constant propagation to boil down function calls to constant values in the binary. The constexpr keyword serves two purposes: it claims that the implementation is usable in constant expressions today (therefore allowing compilers to diagnose (i.e. emit compiler errors for) things that can't be done at compile time, like I/O), and it promises that the implementation won't change in the future to be hostile to compile-time evaluation. This promise is important - otherwise, users could take a dependency on a function's current behavior (e.g. by using its result as an array bound, or as a template argument), and then they would be broken by implementation changes in the future that prohibited compile-time evaluation.
Not sure I get the question: static_assert can only invoke constexpr functions, and constexpr functions can themselves only call other constexpr functions. Since I/O functions are not constexpr, you can't call them there.
Shameless pedant kneejerk: 'inline' has semantics in addition to the optimization hint. E.g. inline functions won't trigger multiple-definition link errors, even if they're left as ordinary functions in the executable.
Happy to be corrected (EDIT: indeed, I stand corrected! see replies below), but I don't think the C++ spec says anything about 'inline' encouraging the compiler to inline the compiled code?
'auto' would've been an example, but that also got removed because they realized it was useless. I don't get how constexpr is any different.
I don't know my way around the spec, but Wikipedia is pretty unequivocal:
"... it serves as a compiler directive that suggests (but does not require) that the compiler substitute the body of the function inline by performing inline expansion, i.e. by inserting the function code at the address of each function call, thereby saving the overhead of a function call."
Thanks for this! I was very skeptical, but you made me look through every single mention of the word 'inline' in the spec, and you are indeed correct. :) Here is the relevant quote:
> [7.1.2] [dcl.fct.spec] A function declaration with an inline specifier declares an inline function. The inline specifier indicates to the implementation that inline substitution of the function body at the point of call is to be preferred to the usual function call mechanism. An implementation is not required to perform this inline substitution at the point of call; however, even if this inline substitution is omitted, the other rules for inline functions defined by 7.1.2 shall still be respected.
At first I was going to say that it's similar to asm.js - that it enabled an optimization but also could fall back to just running the code. However in trying to find references to back up my answer, I found a different explanation of constexpr which changed my own understanding.
Quoting Ben Voigt: "What constexpr does is provide guarantees on what data-flow analysis a compliant compiler is required to do to detect1 compile-time-computable expressions, and also allow the programmer to express that intent so that they get a diagnostic if they accidentally do something that cannot be precomputed."
Apparently constexpr is like "inline". Just as the inline keyword doesn't actually guarantee that the compiler will inline a function, so constexpr doesn't require the compiler to precompute something, but rather just gives a hint to the compiler that you'd like it to if it can.
It may not be a bug. The computation of Fibonacci(10) requires a stack of ten frames. It may be that Clang has a limit on the stack depth for compile-time-evaluated code.
This is exactly why I don't like C++: it is just too complicated, and it encourages writing complicated code, usually opaque boilerplate template hacks for what would seem a simple concept. This is difficult to read, review, and understand, and hence error-prone. No, not only error-prone, but bug-encouraging.
BTW, you can also enforce compile time evaluation like this:
    enum { aux1 = X };
    use(aux1);
But that's two lines, I know. And, of course, it is still a hack ('enum'? What?).
Interestingly, D actually uses `enum` to define compile time values (as opposed to `alias`, which is used for types). If you want a compile time value you can do this: enum x = square(3);
There might be a case where you expected the compiler to evaluate something at compile time but it didn't. I can imagine a heavy function getting called more than once at runtime because you had expected it to be pre-computed.
Oh look, another flag that has several caveats and corner cases and works only half the time you think it works, while increasing compiler complexity.
Here's a better idea, if you really need a compile time value in your program you calculate it yourself (or just run it at program startup and cache the value).
All of this is easy in Common Lisp, which includes interactive compile-time debugging. You can use #., EVAL-WHEN, LOAD-TIME-VALUE, and other things to control evaluation time, and use the same, usual, interactive debugger that you use for runtime code, without any extra infrastructure or scaffolding.
In C++, to debug failed constexpr’s, often, you have to make it a non-constexpr and debug at runtime.
I’m not sure that’s even close to equivalent in execution-time debugging, and doesn’t work well with large, existing code bases with crafted build steps. These are transparent, non-issues in Common Lisp.
What about debugging? If the code to generate the data is large and complex, how do you debug it when it only runs at compile time? Is the solution to start with two programs like the author and then merge them? That still seems like a maintainability issue.
A constexpr function can be used with a non-constexpr argument in which case the compiler will keep the function in the compiled code and you can use it as a normal function.
In the context of the linked article, take the first example and modify it, by reading a number at runtime with std::cin and call factorial or fibonacci with this number as an argument.
> ...how do you debug it when it only runs at compile time?
I prefer to unit test my constexpr logic with a bunch of static assertions in a cpp file. You could put them in with the code being debugged, but having them separate keeps the header files lighter weight with no real downside assuming you have CI or something set up.
> If the code to generate the data is large and complex, how do you debug it when it only runs at compile time?
That's up to the compiler, actually. For instance, in debug mode Visual Studio skips constexpr and does everything at runtime (which sucks if you depend on stuff being constexpr).
You could use something like

    #ifdef NDEBUG
    #define MY_CONSTEXPR constexpr
    #else
    #define MY_CONSTEXPR
    #endif

so when you are running the debug build, everything is debuggable.
If you guys are going three days without testing a release build (let alone three months), then you probably have bigger issues to work through than what constexpr is doing...
Man, meanwhile, D has compile-time function evaluation. D's compiler is really awesome. Most of the time, no special syntax is needed to evaluate things at compile time (perhaps just a "static" keyword here or there), and the rules are much simpler than C++'s.
If you use a general meta-layer preprocessor such as MyDef, you may be able to construct macros whose content comes from evaluating code in any language (instead of being restricted to, e.g., C++'s rather complicated compiler mechanics).
A constexpr is a constant that is evaluated from code at compile time, which is essentially just macros. Right?
A general preprocessor that just does language-agnostic code manipulation is not difficult to build, but rather useful. Much of the complicated syntactic sugar I see in recent language development becomes unnecessary if a general preprocessor is in place (where syntactic sugar belongs). I wonder why it is often not used.
> A constexpr is a constant that is evaluated from a code at compile time, [...]
It's rather a constant expression, which is quite a bit more than a mere constant.
> [...] which is essentially just macros. Right?
Macros are basically just text replacement. You can't write a macro and expect most compilers to execute the expressions and code inside the macro at compile time. Some compilers may still do that, but it's not very common for more complicated stuff. With constexpr you get a guarantee that the expression can be evaluated at compile time, and it gives quite a good hint to the optimizer to spend more time optimizing that portion of the code.
With a macro, you get a guarantee of compile time evaluation, as well as an independent compile-time double check of the literal syntax. And you can do better than a hint: you can double check the results explicitly. Since it's compile time evaluation, optimization is out of context.
I didn't find any, so I rolled my own 10 years ago. Most of its features are suited to a very limited set of users. E.g. it doesn't really support macro evaluation from an arbitrary language, only from Perl currently (though it is general for any target language). However, to support macro evaluation from an arbitrary language seems trivial. It would be just like gcc pulling different compilers together.
I am already a big fan of constexpr-ing everything that doesn't run away or make the compiler cry for our embedded application.
Yes, we are building a substantial embedded system with (a constrained subset of; no heap, but some templates for example) C++ 11. Heretical, I understand.
In the last ~five years I've converted all my embedded software development to C++11, even projects on small microcontrollers (32kb flash, 8kb RAM). There are features I don't use but it's hard to imagine going back to C at this point.
On the contrary, I've gone back to plain C for most projects and it's been a real relief after using C++11/14 extensively. So much accidental complexity just disappears.
The compile times are much better, too. Can't wait to get a Threadripper CPU which should push the build even closer to feeling instant.
I'm not sure this is really necessary. gcc and Apple's clang optimize the factorial code without "constexpr" and the fibonacci code might be an extreme case avoided by reasonable defaults for the numerous optimization parameters that can be adjusted:
    max-inline-recursive-depth
    max-inline-recursive-depth-auto
        Specifies the maximum recursion depth used for recursive inlining.
        For functions declared inline, --param max-inline-recursive-depth
        is taken into account. For functions not declared inline, recursive
        inlining happens only when -finline-functions (included in -O3) is
        enabled and --param max-inline-recursive-depth-auto is used. The
        default value is 8.
I tried to watch the more interesting parts, but it just doesn't seem like a useful approach to me to have the compiler do this work at every compilation, instead of the traditional approach of doing it once by generating C(++) code/structures from text/json data in my build. Especially with the "cognitive cost" and debugging issues involved.
Although ordinary functions can be evaluated at compile time without the constexpr annotation, only constexpr functions will raise an error if the function _can't_ be evaluated at compile time (irrespective of whether the compiler thinks it should be). It's a judgement call for when explicit intent > implicit side-effects.
Genuine question: Is there anything protecting us from an entirely new class of compiler bugs where the constexpr code and the compiled code behave differently? Is it at all possible?
Of course it is possible. Just consider all the registers, memory locations changed during the execution of such functions. Such side effects can affect behaviour of other code in the presence of compiler bugs.
I personally do not understand the buzz around constexpr etc. All the real business is happening at runtime anyway, and compilers already optimize code quite decently. The extra maintenance burden just doesn't pay off.
I think you should review his premise: he is using constexpr to eliminate a separate exe, a data file, and code that loads and reads the data file at runtime. His maintenance burden goes down.
C++ is never just C++. The build system for any C++ program includes both macro language and some form of make and/or make replacement. Explicitly generating tables at compile time and linking is bog standard at this point.
Write a table generator program, have its output create the table, and splice that step into make and/or your make replacement.
Sure, it's three extra steps, but they're small steps that can be debugged. It also doesn't require any advanced compiler tricks that may or may not actually run at compile time.
Why do you say writing auxiliary programs and tools to integrate the data is "standard practice", and a simple and straightforward language feature are "advanced compiler tricks"?
Because 'generate a table by having make run a simple program' is something people have done for decades with C, C++ and other languages, whereas constexpr is a new feature that's only just appeared in the C++ language spec and apparently silently degrades to "not actually at compile time".
> C++ is never just C++. The build system for any C++ program includes both macro language and some form of make and/or make replacement.
Your comment makes no sense, and shows some confusion. C++ is a programming language. That's what's being discussed here. You, on the other hand, are talking about build systems and how software projects may be configured by some people. That is beside the point and actually completely misses the very topic being discussed. It's entirely irrelevant whether you can fill a whole header with #define or write a convoluted m4 macro to write it for you. That's not how a programming language like C++ implements compile-time constant expressions. That's accomplished with C++'s support for constexpr.
Is there a C++ compiler that doesn't include 'make' and/or a 'make' replacement? There are linkage rules and even a few keywords just to allow linkage with code not generated by the compiler. It's very much part of the language.
'extern const int *table;'
Having a compile-time program create the table would guarantee that it's created at compile time. Having a constexpr does not guarantee it would be done at compile time; it silently degrades to run time. If verifying requires an assembly listing, I see it as less useful than the currently available tools.
> Is there a C++ compiler that doesn't include 'make' and/or a 'make' replacement?
Yes, all of them.
Because 'make' and/or 'make' replacements have absolutely zero to do with a compiler. They are build automation tools that essentially determine which targets need to be built based on which preceding targets have been altered. This has zero to do with what a compiler does.
Embedded engineer here. Care to explain why? Code generation is widely used, at least in automotive. A comparison of hard-to-read, hard-to-debug advanced template/constexpr machinery vs code generated by a standalone tool that is easy to read and easy to debug would not be taken seriously.
Both approaches have their problems, but resolving to compile-time constants with simple expressions is NOT "hard-to-debug" if done with care. As ever, tools can be abused, and real life can astonish.
Example: for the credit dept of a now-ex investment bank many moons ago we had a set of blessed (including with the correct correlations) random numbers pre-computed and baked in via code generation.
I discovered that I could actually generate numbers faster at run time with highly-tuned code, because of the high cost of paging in the large precomputed numbers array across the network.
If the generator is itself in C++, then you need two full toolchains to bootstrap it. Then if the developer has a great idea such as "hey, I've got access to all my code! Let's reuse it", you have to build the package twice. Since every time you do that you increase the number of packages to be built with the host toolchain, after a while they start to bleed into each other (due to buggy build systems in the dependencies), and when you execute the final binary you get "wrong file format" errors on the target (or worse, sizeof() mismatches at runtime)...
I am not saying there is no solution to these problem, of course there is. All I said is that it makes bootstrapping a system much harder than it would otherwise have been with constexpr. Many devs avoid those issues because dependencies rarely change and once you fixed all issues, it will most likely stay stable.
Mature and "made for embedded" projects tend to be better since cross compiling have been taken into account in each steps of the pipeline. But if you start pulling random code from the internet, expect the worst.
Code generators at our place are most often written in Java (Xtext, Xtend), rarely in other languages, and almost never in C++. Of course, there are grey areas where the price of using another tool and integrating it into the build system (which is itself sometimes a non-trivial task, if done properly) has to be carefully weighed against the pros obtained.
Sounds like opening a can of worms... some developers will definitely opt for a language which has "convenient" properties -- since the generator code is not running on the target platform, it has different constraints. And then one day, another dev team will face the task of porting rexx/perl/python/ocaml/you-name-it to a platform which has no capacity for that, or has no required dependencies available, or neither.
You are making the error of thinking that one can only use one language, or that using the right language for a job is a bad thing; and you are making the error of thinking that build platforms did or do lack the capacity for generating source like this. These are platforms that have the capacity to run a C++ compiler; they have more than enough capacity to run tools like (say) od and sed.
Certain things, when used in a routine, make computation impossible at compile time. If the routine is marked with 'constexpr', the compiler will verify that.
Couldn't it already do the exact same thing without constexpr? (And shouldn't it have already done that when optimizing? In fact for simpler expressions compilers already do this, right?) How does specifying constexpr help?
But I think you nailed it -- the compiler doesn't have to signal optimization "failures" back to the developer, but it has to for the constexpr case. It is not that the constexpr routine can be used at compile time, it is that it must be useable at compile time.
> But I think you nailed it -- the compiler doesn't have to signal optimization "failures" back to the developer, but it has to for the constexpr case.
Is this true? Where do you see Clang emit a diagnostic in the example in the given article? (https://godbolt.org/g/HKcPFT)
You seem to be right -- at least "no diagnostics required" is mentioned a few times in §10.1.5 of N4700. To be honest, my comment is not from the example in the article but from my own experience, and that is mostly with GCC 7.2.
Like "const", it's so the compiler can produce better feedback for the programmer, not better executables. constexpr will raise an error if a function _can't_ be evaluated at compile-time, just as const functions will raise an error if they do any non-const stuff.
> Like "const", it's so the compiler can produce better feedback for the programmer, not better executables. constexpr will raise an error if a function _can't_ be evaluated at compile-time, just as const functions will raise an error if they do any non-const stuff.
But how does it lead to better feedback? constexpr is purely suggesting an optimization that the compiler was already allowed to do anyway, and which it can still refuse to perform even with the keyword.
If the programmer's goal is to ensure compile-time evaluation is guaranteed, constexpr won't cut it, since the compiler can (and compilers currently do) silently fall back to run-time evaluation.
Alternatively, if the programmer's goal is to ensure compile-time evaluation is possible, constexpr still doesn't provide any value, since that's already obvious from the compiler analyzing the body of the function and noticing, say, that fopen() is getting called (and the compiler already has to do this anyway).
The situation is very critically different from const, too. Adding 'const' to a method that was previously non-const, even when it is legal and compiles perfectly fine, can entirely change the semantics of the code, and hence you want that decision to be explicit, not implicit. This isn't the case with constexpr.
I don't see how the C++ community will benefit from those, since the type of code that benefits is normally done in other languages. But I am certainly less creative than a community.