Why Const Doesn't Make C Code Faster (theartofmachinery.com)
291 points by mnem on Aug 19, 2019 | 190 comments



> This is true in practice because enough real-world C code has “I know what I’m doing” casting away of const.

I found this out the hard way when I implemented OptimumC, the first data flow analysis optimizer for DOS C compilers back in the 1980's.

I had to undo the optimizations based on const.

Anecdote: Optimum C blew away all the other C compilers on a magazine benchmark article because it figured out that the benchmark code did nothing and deleted it. The author didn't ask me about it, he just assumed the compiler was buggy and gave us a bad review. Sigh.

When other compilers did DFA, the benchmarks got changed.


Just as in the saying about how the best code is the code you didn't write, I guess the fastest benchmark is the one that doesn't run.


Off-topic, but this discussion is reminding me of this (which some of you might enjoy): https://www.cs.princeton.edu/~appel/papers/conteq.pdf


> Anecdote:

I'd love to hear more about any aspect of this if you have time. (:


Here's another failure of marketing on my part.

We included full library source with the compiler, for free. No reviewer ever noticed that. One day, Borland decided to make source code for their library, sans the floating point stuff, available for an extra charge. This generated headlines in the next compiler roundup article. No mention that complete library source code, including floating point, came with my compiler.

So we decided to split off the library source code into a separate package, and charged for it. This solved the marketing problem, and doubled the money we were making.

This was how I learned about the common practice of upselling. I am a slow learner.

Of course, these days we give away everything for free!


By attaching a price to it, did people assume it was higher quality?


At the time, people assigned no value to bundled things. Most did not appear to be aware it was even there.

With it being packaged separately, people noticed its existence, and did assign value to it commensurate with its price.

It was attractively priced, and almost everyone buying the compiler also bought the library source.


>The author didn't ask me about it, he just assumed the compiler was buggy and gave us a bad review. Sigh.

Did you write a letter complaining? Any response? Do you have a link to the review or copy of it? (Sorry, I just find this history fascinating).


I did complain to the author, but the damage was done. Can't undo a magazine article. I was so annoyed I didn't save a copy of the magazine.

BTW, it goes on. A couple years ago a kind soul gently suggested I implement DFA since clang had invented DFA.

(No, I didn't invent DFA either, but I do believe that Datalight Optimum C was the first DFA compiler for DOS.)

http://www.program-transformation.org/Transform/CCompilerHis...

The Data Flow Analysis code: https://github.com/DigitalMars/Compiler/blob/master/dm/src/d...


(Bottom link:) does that mean the dmd backend code dates back to the mid 80s?


Yes. That file in particular has seen little change.


> Anecdote: Optimum C blew away all the other C compilers on a magazine benchmark article because it figured out that the benchmark code did nothing and deleted it.

I remember this problem being discussed in a CPPCon talk a few years ago, and some ways to "trick" the (modern) compilers into not optimising away the benchmark code - I think it was Chandler Carruth's "Tuning C++" talk: https://youtu.be/nXaxk27zwlk


Nice


The key difference between `const` and the `register` and `inline` keywords is that, despite all of them often flagging optimisations that the compiler can often work out anyway, `const` still aids human comprehension by declaring constraints, whereas the latter two do not.

It seems that interprocedural optimisations in modern compilers make a lot of const-by-reference optimisations apply even when the parameter is mutable-by-reference in the parameter list but the function body doesn't modify it in practice. This would only work if the compiler could deterministically work out which function is called.

Local constant stack values surely can be completely deterministically verified as such by the optimiser even without the `const` modifier. They could be overwritten without a C assignment to it, via the stack from a buffer overflow of an array next to it, but that's undefined behaviour so a compiler is presumably free to assume it is not modified, eliminate unnecessary register/memory loading code, and let the developers deal with the consequences.

As trailing `const` on member functions outlaws modifications via `this`, it would follow that the same optimisation-even-without-modifier process would apply as to `const` local stack values.

As constancy is a constraint that aids human comprehension, there's a good reason for choosing a keyword just as short as the mutable equivalent, such as Swift's `let` vs `var`; if the more constrained equivalent is equally or more convenient, more constrained and thus easy-to-reason-about code becomes more common.


While specifying inline alone is useless for optimization, the inline specifier is important for another purpose: putting a function definition in a header file. If you put a function in a header file and don't mark it inline, that tends to be undefined behavior because it tends to violate the One Definition Rule (both C and C++ have a version of this).
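A minimal sketch of the usual workaround (the header name and function are made up for illustration):

```c
/* util.h -- hypothetical header. `static inline` gives the function
   internal linkage in every translation unit that includes it, so
   there are no duplicate-symbol errors at link time. */
#ifndef UTIL_H
#define UTIL_H

static inline int clamp(int x, int lo, int hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

#endif
```

The tradeoff is that any translation unit where a call isn't inlined gets its own private copy of the function.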


You typically don't hit undefined behavior (maybe it's technically undefined by the standard), but you typically get hit with really annoying linker errors about duplicate symbol definitions. And, yeah, annoying because of cryptic error messages.

I've usually encountered this most when trying to implement out-of-line template member functions and forgetting to include the inline keyword.


Force-inlining code is actually still quite a useful optimisation. We have found many instances where, after a few levels of inlining, the compiler gives up and just calls the function instead. Force-inlining certain low-level functions has actually improved performance by several percent in our codebase.


From the language standpoint, the inline specifier doesn't force the compiler to inline the function. But I guess some compilers could provide guarantees that go beyond what the spec says.


I was talking about __forceinline for MSVC and things like it for other compilers. Ones which, for better or worse, will make your function inlined.

One thing that I did find out was that even if you inline or force-inline a function, if you have it exported from a DLL or in a class that is exported from a DLL, then the function might still be called non-inline.


You’re completely correct. I was conflating the ODR-adhering standard ‘inline’ keyword with the various inlining pragma in some compilers that actually are about solely forcing inlining as an optimisation.


How does making the function static not solve this in C? Then the definition is local to the translation unit and there are no linkage concerns at all.


Static but not inline functions will trigger warnings of unused functions. Static and inline does not trigger this.


> Static but not inline functions will trigger warnings of unused functions.

Well, isn't it true? It's true also in the inline case. I consider it strange behavior not to emit an explicitly enabled warning for a case that definitely satisfies the warning conditions.

> Static and inline does not trigger this.

Tried -Wunused-function in gcc 6.3.0 and clang 3.8.1-24, with an unused-but-defined function declared "static inline int add(int a, int b)". clang still emits the warning, gcc does not.

If you're already dependent on whatever additional semantics gcc tacked onto the inline keyword you can as well use "__attribute__((unused))" to disable the warning for a particular function, which more clearly communicates the intent to disable the warning.


It turns out that the clang warnings work differently in header files, specifically. In my test I just put the static inline function in a C file. clang does however not trigger -Wunused-function when the static inline function is included from a header. The code is the same after pre-processor expansion.

IMO these special cases just add to the confusion and my criticism of strange behavior, but I stand corrected on the actual behavior of the two compilers.


> Local constant stack values surely can be completely deterministically verified as such by the optimiser even without the `const` modifier.

Only if their address doesn't escape.


By ‘constant stack value’, I was referring to ‘effectively constant’ stack values, i.e. ones that aren’t reassigned in the scope or passed by mutable reference (or are, but aren’t actually modified by those functions, if the compiler can prove it).

My phrasing was probably a bit ambiguous there.


The compiler will detect any attempt at taking the address of the variable (including fancy things like most inline assembly) and skip the optimization, marking it as requiring a memory location.

However, it can still cheat by rearranging the memory write or even letting linker initialize the address.


We are in violent agreement.

An explicitly const-qualified object instead can be replaced with its value even if its address has escaped.


gcc has some attributes to help with that, for example try __attribute__((pure)) on funcs that will always return the same thing for the same inputs (given the same global state) and do not modify said input or state (and thus can be called more or fewer times than programmer wrote). Usually gcc will be quite aggressive with optimizations if you use that.

For even more aggressiveness, try __attribute__((const)) which tells it that the func accesses NOTHING but the params, not even global vars. Of course this is quite limiting. You can also use this as a trick when YOU know that the only accessed global state is itself constant but gcc does not know that. This can produce substantial savings in number of calls of funcs that, for example, do lookups in large constant data tables, like table-based versions of sin() and cos().
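A minimal sketch of the two attributes (GCC/Clang extensions; the table and function names here are made up):

```c
/* Lookup data that never changes after startup. */
static const int squares[8] = {0, 1, 4, 9, 16, 25, 36, 49};

/* pure: the result depends only on the arguments and on global state,
   and the function writes nothing, so calls may be merged or hoisted. */
__attribute__((pure))
int square_lookup(int i) {
    return squares[i & 7];
}

/* const: stronger still -- the result depends on the arguments alone,
   with no reads of global memory at all. */
__attribute__((const))
int add3(int a, int b, int c) {
    return a + b + c;
}
```

Note that square_lookup reads the global `squares` table, so formally it is pure rather than const; the "trick" mentioned above is that you may mark such a function const anyway when you know the table itself never changes.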

In the example the article gives, adding __attribute__((pure)) to constFunc() does produce the behaviour author wanted and then some

LWN has more: https://lwn.net/Articles/285332/


Does it enforce these attributes or are they just a promise by the code author?


D has a `pure` attribute for functions, and yes it is enforced. It's enforced even to the point where it becomes a PITA, but when you manage to make functions pure, you know it's solid.

One pain point with pure functions is you can't insert debug logging statements. D has a special case for that - purity for a statement isn't checked if it is prefixed with the `debug` keyword:

    pure int square(int x) {
        debug printf("called square\n");
        return x * x;
    }
which is very, very handy.


This seems a very pragmatic solution.


Isn't that erroneous, in the sense that printf() has side-effects, breaking the "pure" promise?


Indeed it does break the pure promise. But it's only for debugging builds.


The same "trick" is especially handy in a pure-by-design language like Haskell. There you have the `trace` function which prints to stdout while pretending not to. It's similarly useful, even if it may get called zero or many times. Is the same true of D's debug?


No, in D any statement prefixed with `debug` does not get its purity checked. `debug` prefixed code is enabled via a command line switch, otherwise it is excised.

I.e. it's for conditionally inserting debugging code where you don't care about purity, you just want to find the bug.


This means you can't re-order the computation, though, or memoize it, at least not to the extent possible without IO.


The use of the debug {} block in D is opt-in e.g. the compiler only includes those statements when given the debug flag

https://d.godbolt.org/z/Rwu4Cq (remove "-d-debug" to see)


Furthermore AFAIK things prefixed with debug don't get compiled into release binaries either.


I wonder does that mean you could also enforce freeing memory allocated by a pure function as well?


I'm not sure what you mean?

Allocators are often pure in D; they probably shouldn't be, but the alternative would mean allocating outside the function, which would be ugly and probably buggy too.


Obviously a pure function can allocate memory on the stack. I would think it could also allocate memory on the heap, iff it was guaranteed that the memory was freed before the function exits. Was thinking that would be something the compiler would do.

Forgive me I'm just a small brained firmware programmer.


So it seems that you are sort of talking about a function that is memoizable but not pure. Pure has no mutation of any global state. So hypothetically touching the heap means not pure - if somehow others can see it.


So where do we draw the line? Of course, no real computer implementation of any language is going to be able to generate fundamentally pure code because in the end they're going to be mutating memory (at the very least the program counter).

That's not a very useful distinction, and usually one talks about "purity" at a conceptual level. The language provides a layer of abstraction in which things can be considered pure (or not), even if the underlying implementation e.g. makes optimizations that require mutation. The language is just a guarantee that you can ignore these aspects of its implementation.


I think that is a good point at the end. Purity may be defined in terms of the language. So allocating memory in C may make a function impure, while the same thing in other languages may still be pure.


Rolling it over, I think it's vaguely along the idea that marking a function as pure is a contract telling the compiler that the function and its call tree ultimately shouldn't modify external state.

A function that allocates memory isn't pure, except perhaps iff it frees that memory either itself or by compiler magic. Then perhaps it is.


They are a promise and that is precisely what makes them powerful. It gives GCC info it might not be able to derive, but you know to be true.

You can even promise a func is pure without even implementing it in the same translation unit! That produces lots of improvement. See these two godbolt links:

https://godbolt.org/z/v2zqJI

https://godbolt.org/z/kk3Q3T


> You can even promise a func is pure without even implementing it in the same translation unit!

Actually that's the main point, as if it's in the same TU, the compiler can probably figure out a function is pure.


Often, but not always. For example, a function using asm() to access, say, some AES intrinsics is pure (const, even), since the output only depends on the input, but oftentimes the compiler by itself will not declare a function with an asm() block pure or const.


Well, fair point, but - compilers shouldn't be that lazy :-)


>__attribute__((const)) which tells it that the func accesses NOTHING but the params, not even global vars.

If you're actually not accessing any global state at all (including making no memory allocation), shouldn't the compiler figure that out anyway? That doesn't seem like something that would be very hard to check for.


Not if it is defined in another translation unit and you are not using LTO.


In other words, the compiler can infer this from the function definition (or at least clang does), but it cannot infer this from a function declaration, because the implementation is missing - unless you are using LTO like the parent mentions.


Sometimes it cannot infer it even from the function definition. For example, here is a function that on my particular OS gets me a pointer to my library's globals. To my code the returned pointer never changes, but the compiler has NO way to tell this function is const without my annotation:

  void* __attribute__((const)) getGlobals(void) {
    void* ret;

    asm (
      " ldr %0, [r9]      \n"
      " ldr %0, [%0, %1]  \n"
      : "=r"(ret)
      : "I"(MY_MODULE_ID)
      :
    );
    return ret;
  }


Good point.


> try __attribute__((const)) which tells it that the func accesses NOTHING but the params, not even global vars

And it doesn’t do anything unintended if you’re lying to it by accessing the global state that you know to be constant?


It will only assume that it is safe to skip multiple calls to this function with the same parameters, safe to generate extra calls that you did not write, and safe to skip the entire call if the return value is ignored


Const does make C go faster, just not in most of the places you see it used.

For const to help, the object itself needs to be defined const. Just taking a `const object * ` (or `const object&` in C++) doesn't help you determine the constness of the underlying object, and that usually accounts for the majority of const usage by volume.

Limiting the scope to actual const definitions, it can help a lot, but only in cases where the compiler couldn't prove it anyway. So local variable const definitions rarely help, because the compiler can often already prove they are const by inspection (but they can help if the variable escapes).

They are useful especially for global variables (and moral equivalents, like static class members in C++), since the compiler cannot prove by examining only the current TU whether the variable is unmodified (and it is a hard problem even if the whole program can be inspected), so const is a useful promise there.
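A small sketch of that distinction (the names are illustrative):

```c
/* The compiler may fold this into every use: no other translation
   unit can legally modify an object defined const. */
static const int kTableSize = 1024;

/* This one must be reloaded around opaque calls, because some
   other function might have changed it. */
static int counter = 0;

int bump(void) {
    counter++;
    return kTableSize + counter;
}
```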


BINGO!!! The pointer is constant, not the item being pointed to. Since you’re dereferencing the pointer, i.e. accessing the non-const bits, the compiler must reload.


The examples in the articles are non-constant pointers to constant data. If you want to declare the pointer itself const, you need to do it after the asterisk.

    const int* const 
instead of

    const int*


Exactly. const int* is a non-const pointer to a const int; int* const is a const pointer to a non-const int. So I’m pretty confused by the claims.

Maybe the compiler will happily let you modify the dereferenced const int* (undefined behavior; I wouldn't try it), but that's not what the signature promises.

Edit: Thought about it more and read some other comments. Now it makes sense.


If the initial object itself isn't const, then merely declaring the function parameter as a pointer to const won't guarantee that some other thread of execution won't change the value under your nose. Or if you have multiple pointer parameters, they can alias each other.

Indirection in C and C++ is a mess, but at least C has the "restrict" keyword. Best to program with value types whenever you can, and use pointers and references when you must.
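For example, `restrict` gives the compiler exactly the no-aliasing guarantee that `const` alone can't (a sketch):

```c
#include <stddef.h>

/* Without restrict, the compiler must assume dst and src may overlap
   and keep the loads and stores in order; with it, the loop is free
   to be reordered or vectorized. */
void add_arrays(size_t n, int *restrict dst, const int *restrict src) {
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}
```

Calling it with overlapping arrays is then undefined behaviour, which is the price of the promise.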


> won't guarantee that some other thread of execution won't change the value under your nose

My understanding is that compilers can always assume that there is no other thread involved, which (part of) why C11 atomics are necessary. Is that not the case?


Yes, but it can't assume that any random function call it makes doesn't use a different, non-const pointer to the same object. So even if you don't cast const to non-const pointers, you can see the const value change.


Concrete example:

  int foo(const int* const p1, int* const p2) {
    int a = *p1;
    *p2 += 42;
    int b = *p1;
    return a == b; // false: b == a + 42, since p1 and p2 alias
  }
  int main() {
    int x = 0;
    return foo(&x, &x);
  }
  int main() {
    int x = 0;
    return foo(&x, &x);
  }


Yup, it's a data race (and therefore UB) to access the same memory from two different threads without synchronizing instructions (like mutexes, thread start/join, atomics).

It is allowed to concurrently read from const regions. But not concurrently read and write, and definitely not concurrently write.


You're right, but what I meant about it was that just because the variable is const, doesn't mean it can't change from somewhere else between reads. Even if you use synchronization to avoid data races, if you read the pointer to const twice in the function, it could change between the reads.


The most common example of that is probably a const volatile variable for the input data register of some peripheral. New data can come in at any time, and writes do nothing.


Dammit! Why can I never keep the syntax straight?!


On some microcontrollers, marking variables/arrays as const allows the compiler to access them directly from flash rather than having to copy them into RAM at startup. I used this to great effect on a PIC24 with 256KB flash/16KB RAM.


Though this becomes incredibly annoying using const pointers when you didn't mean to refer to ROM but instead meant a read-only pointer to RAM.


Good point! On Arduino sometimes your program won't fit unless you store some parts in flash.


This assumes that the compiler can actually infer that a const-by-reference isn't modified.

The second we have indirection (function pointers), or a different source file without whole-program-optimization, or a library, these assumptions break down.

Further, the author assumes that the compiler can't benefit from the knowledge that something is const because the const can be cast away. This isn't true, e.g. per ISO 9899 6.7.3.5:

If an attempt is made to modify an object defined with a const-qualified type through use of an lvalue with non-const-qualified type, the behavior is undefined. If an attempt is made to refer to an object defined with a volatile-qualified type through use of an lvalue with non-volatile-qualified type, the behavior is undefined.


I hardly ever bother optimizing my code anymore, with -O3 nothing I do ever really seems to make a difference.

The real reason to use "const" is to show intent.

See "Const and Rigid Parameters" in this wonderful article about the Doom 3 source code: https://kotaku.com/the-exceptional-beauty-of-doom-3s-source-...


> I hardly ever bother optimizing my code anymore, with -O3 nothing I do ever really seems to make a difference.

Your choice of data structure or algorithm would…


Yes, true, though I consider that "structural/procedural" optimization as opposed to "technical" optimization.

This means, I won't bother using "inline" anymore, but I will consider how many subfunctions in total I'd call.


I find that I can sometimes get significant speedups by changing data access patterns when considering cache. I have found this to be the first and most valuable thing to tune, before attempting SIMD.


Interesting! Any rules-of-thumb you can share about that process?


const_cast risk is not why the compiler emits a reload. Indeed we don't even need to pass x to constFunc to get the reload: see line 7 in https://godbolt.org/z/TjmWxL

Consider that `x` may point to a global variable which `something` may modify. That is why the reload is necessary.

edit: I just realized that the reload would be necessary even without an intervening function call. https://godbolt.org/z/QhVzlV

Consider that `x` may point to errno; then printf would modify it!
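The errno case can be shown without any casting: a pointer to const can alias a non-const object that a library call (stood in for here by a direct assignment) then modifies.

```c
#include <errno.h>

/* p is a pointer to const, but the object it points at need not be
   const itself, so a reload between the two reads is required. */
int reads_match(const int *p) {
    int a = *p;
    errno = 42;   /* stand-in for e.g. a failing printf setting errno */
    int b = *p;
    return a == b;
}
```

Calling `reads_match(&errno)` returns 0, because the write changed `*p` between the two reads even though `p` is a pointer to const.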


You're mistaken. What you're thinking of is a pointer to a _volatile_; for a regular pointer, you only need one load, even if it's not a pointer-to-const. See:

https://godbolt.org/z/XWW9Nq


I think you're talking past each other. Grandparent is examining the case where there is a function call (the printf), but you do not pass anything relevant into it. You're looking at the case where there is no function call, so peephole optimization can observe that the memory at that location can't change.

"volatile" basically tells the compiler "Hey, wait, this might be modified by DMA, another thread, or another process. Assume nothing."


The reload shouldn't be necessary unless there is an appropriate memory barrier, though. Seems like the compiler is being pessimistic in the examples.


I think the big problem here is that const doesn't really truly say anything about aliasing.

Even excluding casting evils, calling a method with a const pointer only means that the method isn't supposed to change the value. It does not mean that the caller isn't going to change things (particularly from another thread).

The languages are simply too permissible when it comes to what a pointer means and how it can be used.

For const to be optimizable, you'd need to take Rust's approach and make language level guarantees that "You can't do bad things with this". C and C++ missed that boat.


> Even excluding casting evils, calling a method with a const pointer only means that the method isn't supposed to change the value. It does not mean that the caller isn't going to change things (particularly from another thread).

This is incorrect.

Actually, you promise not to change things from another thread, DMA, interrupt handler, signal, etc., with any non-volatile reference passed, let alone a const! The compiler loads things into registers and has no way to know if memory behind a passed reference changes underneath the hood; it freely generates code that assumes that the things it has pointers to do not change. It can freely make optimizations that lead to incorrect computation, infinite loops, or segmentation faults if this is not obeyed. If you've ever heard about how "double-checked locking" is an antipattern, this is a big part of why.

e.g. from ISO 9899:

Alternatively, an implementation might perform various optimizations within each translation unit, such that the actual semantics would agree with the abstract semantics only when making function calls across translation unit boundaries. In such an implementation, at the time of each function entry and function return where the calling function and the called function are in different translation units, the values of all externally linked objects and of all objects accessible via pointers therein would agree with the abstract semantics. Furthermore, at the time of each such function entry the values of the parameters of the called function and of all objects accessible via pointers therein would agree with the abstract semantics. In this type of implementation, objects referred to by interrupt service routines activated by the signal function would require explicit specification of volatile storage, as well as other implementation-defined restrictions.

This is the model used by pretty much every C compiler you'll encounter. When you e.g. acquire a lock, you call something in a different linkage unit so multithreaded stuff behaves properly.

For const, the guarantees go further: you promise it won't change elsewhere (relevant standards text quoted in my other comment).


Your comment is incorrect as well.

The C specification describes semantics in terms of an abstract machine which is actually quite different from real hardware (most notably in terms of how memory works!). It then goes on to say that the compiler may choose to implement it radically differently, so long as the observable semantics (I/O calls and volatile accesses) are preserved. I'd have to double check whether it was C or C++ that said that the compiler is free to assume that infinite loops do not exist.

> Actually, you promise not to change things from another thread, DMA, interrupt handler, signal, etc, with any non-volatile reference passed, let alone a const!

This is not the case. The C specification requires that you use volatile to indicate that code outside of the C execution model may access the memory location. Of your list, only DMA and interrupt handlers are outside the execution model; signal handlers and threads are both considered inside the memory model. The only way for a signal handler to communicate with code outside the signal handler is with volatile sig_atomic_t; volatile int does not cut it. To communicate between threads, you need to ensure proper synchronization. This may involve the use of locks, fences, or atomics with appropriate orderings chosen.

> The compiler loads things into registers and has no way to know if memory in a passed reference changes underneath the hood

To be pedantic, the compiler relies on undefined behavior here. It is undefined behavior if you cause the value to be changed in a way that violates these rules, so the compiler has absolutely no restrictions on what may happen in such executions.

> If you've ever head about how "double check locking" is an antipattern, this is a big part of why.

This has absolutely nothing to do with why double-checked locking is incorrect. Double-checked locking is problematic in large part because of hardware reordering of loads and stores. In general, you need a store barrier to guarantee that all of the modifications the first thread changed has been made visible to other processors followed by a load barrier to guarantee that all prior modifications from other processors have been made visible to the second thread. Volatile does absolutely nothing to provide these barriers (except if you use MSVC, which documents that they treat volatile variables as equivalent to acquire/release semantics on x86 because regular loads and stores on x86 have those semantics anyways--this is nonportable behavior). The double-checked locking pattern does not provide a load barrier in the second thread, which means the ordering semantics are not guaranteed. If you use atomic loads and stores when implementing double-checked locking, you do get the necessary semantics for correctness.
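A sketch of double-checked lazy initialization done correctly with C11 atomics (using an atomic_flag spinlock in place of a real mutex so the example is self-contained; the names are illustrative):

```c
#include <stdatomic.h>
#include <stdlib.h>

static _Atomic(int *) instance = NULL;
static atomic_flag guard = ATOMIC_FLAG_INIT;

int *get_instance(void) {
    /* First check: the acquire load pairs with the release store
       below, so a reader that sees the pointer also sees *p == 42. */
    int *p = atomic_load_explicit(&instance, memory_order_acquire);
    if (p == NULL) {
        while (atomic_flag_test_and_set_explicit(&guard,
                                                 memory_order_acquire))
            ;  /* spin */
        /* Second check, under the lock. */
        p = atomic_load_explicit(&instance, memory_order_relaxed);
        if (p == NULL) {
            p = malloc(sizeof *p);
            *p = 42;
            atomic_store_explicit(&instance, p, memory_order_release);
        }
        atomic_flag_clear_explicit(&guard, memory_order_release);
    }
    return p;
}
```

The acquire/release pairing is what a plain volatile pointer cannot provide; without it, the second thread may see the pointer before it sees the initialized contents.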


??

You say that it is only things outside of the execution model that require volatile, but you explain that a signal handler requires volatile and the signal-atomic type. ;) In practice, volatile with lock-free types (and stdatomic now has a way to query whether types are lock-free) is correctly used all the time to pass data between threads.

A volatile declaration may be used to describe an object corresponding to a... an object accessed by an asynchronously interrupting function.

Like, execution in other thread contexts.

Or, from the C99 Rationale:

The translator may assume, for an unqualified lvalue, that it may read or write the referenced object, that the value of this object cannot be changed except by explicitly programmed actions in the current thread of control, but that other lvalue expressions could reference the same object.

vs volatile:

No cacheing through this lvalue: each operation in the abstract semantics must be performed (that is, no cacheing assumptions may be made, since the location is not guaranteed to contain any previous value). In the absence of this qualifier, the contents of the designated location may be assumed to be unchanged except for possible aliasing.

> Double-checked locking is problematic in large part because of hardware reordering of loads and stores.

This was unsafe even on in-order uniprocessors, because the compiler was free to reorder loads and non-aliased stores.


> You say that it is only thing outside of the execution model that require volatile

No, I'm saying that volatile is necessary and sufficient only for things outside the execution model. For signal handlers, it is necessary but not sufficient; for threads, it is neither necessary nor sufficient.

> Or, from the C99 Rationale:

C99 does not consider threading at all. C11 and C++11 do, and any compiler written in the past decade is going to be obeying the rules for the C++11 memory model (which C11 adopted as its memory model). The committees explicitly considered whether it would make sense to imbue volatile with any special threading semantics, and they explicitly rejected doing so.

A volatile read or write is not guaranteed to be converted into a single hardware load or store, even for lock-free types. There are situations where the compiler will narrow or widen the load/store, or even insert extraneous loads and stores to the value.

> This was unsafe even on in-order uniprocessors, because the compiler was free to reorder loads and non-aliased stores.

The compiler is free to reorder non-volatile loads and stores with volatile loads and stores. Only reordering volatile loads and stores with respect to other volatile loads and stores is prohibited. Volatile is not sufficient to make double-checked locking safe.


I've been quoting C99 and my answers should be considered in that context. However, to say C99 does not "consider threading at all" is to neglect that the standards body document above explicitly mentions the thread of execution... in the quote that I stated.

You seem to not be reading what I'm saying and also to be arguing with things I never asserted. e.g.

> Volatile is not sufficient to make double-checked locking safe.

OK, but I didn't say volatile makes double-checked locking safe... I said that the compiler's ability to reorder non-qualified variable accesses is "a big part of why" double-checked locking is an antipattern. My statement:

> Actually, you promise not to change things from another thread..., let alone a const! The compiler loads things into registers and has no way to know if memory in a passed reference changes underneath the hood... It can freely make optimizations that lead to incorrect computation, infinite loops, segmentation faults if this is not obeyed. If you've ever heard about how "double check locking" is an antipattern, this is a big part of why.


This issue is why D has both const and immutable qualifiers. const can't be optimized because there may be other mutable references to the same memory object. But for immutable references, there cannot be.

It is possible to cast away const and immutable in D, but these are only allowed in system code, presumably where the programmer actually does know what he's doing.


The most succinct way I've heard this explained is: "Const means 'I won't mutate this'. It doesn't mean 'nobody will mutate this'."

To do optimizations, compilers really need to know that nobody will mutate a value; just knowing that a particular function won't mutate a value isn't that helpful.


True to form, there's multiple forms of const in C++.

const on a method says "I won't mutate the state of this class (except for members explicitly marked as mutable)"

A const reference says I won't mutate this (unless I const_cast away the constness first)

But a const variable actually is a promise to C++, saying "This object will never mutate after initialization ever forever I promise for real this time " (except in the destructor, then it's OK).

Confusingly, all together this means you're only allowed to cast away constness if the object wasn't const to start with.

___

Some of the above may be wrong, not a language expert.


You are allowed to cast away const-ness of an actual const object, just as long as you don't modify it.


Unfortunately in C++, the compiler is not allowed to assume that if it passes a const reference to a function, that function will not change the object. The function is allowed to cast away const and modify the object. I'm not happy that they did it that way; I would have preferred it if cast-away-const were more restricted (for example, allowed when calling a child function that takes a char* but doesn't modify the pointed-to C string), with the idea being that if a function has only a pointer to const or a reference for a const object, it has read permission on the object and lacks write permission. But that isn't how the language works.


The fact that std::launder (https://en.cppreference.com/w/cpp/utility/launder) exists blows my mind. Like, why is this a thing that the standard allows?


http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p053... has an explanation.

In short, placement new (or some scenarios involving unions) technically cause undefined behavior if you try to use the object after the call to placement new, since the pre-existing object there has had its lifetime expire. std::launder lets you use a pre-existing pointer to the memory at the same location to access the data there.


Yes, I know what std::launder does, and that document contains some of my own thoughts: namely, placement new (the only use I've seen suggested) should automatically launder, and std::launder should just not exist.


AFAIK placement new does launder. But both new and launder have no effect on their argument (well, placement new of course constructs an object there), they only 'bless' the pointer returned from it. The use case is if you placement new on some byte storage. Now you want to access the object stored there, and of course you do not want to placement new the storage again, nor have you cached the result of the previous placement new (it would be suboptimal); instead you want to get a pointer to T from the storage address itself.


> AFAIK placement new does launder. But both new and launder have no effect on their argument (well, placement new of course constructs an object there), they only 'bless' pointer returned from it.

I may be misunderstanding, but this seems to directly contradict what the linked paper says:

> Note that std::launder() does not “white wash” the pointer for any further usage.

> The obvious question is, why don’t we simply fix the current memory model so that using data where placement new was called for implicitly always does launder?


Because the placement new might have been called in another translation unit, for example, so the compiler can't track it.


> I may be misunderstanding, but this seems to directly contradict what the linked paper says: >> Note that std::launder() does not “white wash” the pointer for any further usage.

no, it is consistent. Launder does not white wash its parameter, only its return pointer and any pointer returned by it. Same for placement new.


> The function is allowed to cast away const and modify the object.

Correct, but only if the original (referenced) object is not const itself. Casting away constness to modify an object that was declared const is 100% UB.


I find it awesome that you can do that; the extra flexibility is great. E.g. when you're working with const char* arrays, sometimes you need to create a copy of that char array but save its pointer (char* ) in a (const char* ) variable. You're still going to need to delete that copy later... and then you're glad that you can simply recast a (const char* ) into (char* ).

I save the data of whether or not it's a copy into a flag variable, so my program knows when and when not to delete a const char* .

The alternative would be of course to create two variables, one char* and one const char* , but that would double the memory needed, even though only one type of variable would be used at a time.


Couldn't you use a union for that?

  class string {
    
    union {
       const char *referenced;
       char *copied;
    } data;
    bool copied;
    
  public:
    
    string(const char *source, bool copy) {
      if(copy) {
        data.copied = strdup(source);
        copied = true;
      } else {
        data.referenced = source;
        copied = false;
      }
    }
    
    operator const char *() const {
      return copied ? data.copied : data.referenced;
    }
    
    ~string() {
      if(copied) {
        free(data.copied);
      }
    }
    
  };
No const_cast and no wasted memory and it's clear that you are storing two semantically different types.

I would expect the conversion operator to optimize down to a noop since in all cases the returned address is equal to this.


Yes, that could work too, but in my opinion, only unnecessarily complicates things.

I want the pointer (actually, that to what it's pointing to) to be referenced as const in all cases, except when I create a copy or delete that copy, in which case I then explicitly recast them as not-const. That's just two tiny exceptions in code I have to write anyways.

Such as it is, I would only consider a union of them if I'd actually need const/non-const 50/50 of the time, but my current use case is more like 98/2.

Apart from that, I'm guessing you're from a C++ background, as there are no classes in C? (and also, if you use C++, it's recommended to use "delete" instead of "free", since the latter is a C function)

EDIT: Actually, it's only one exception/re-cast, when I delete it, as no compiler would complain about saving a char* in a const char* data type.


> Unfortunately in C++, the compiler is not allowed to assume that if it passes a const reference to a function, that function will not change the object.

how could it make that assumption? there is nothing that prevents you from implementing the function in fortran or even in raw assembly


You have it backwards. const is your promise to the compiler that the object won't change. It doesn't care whether you const_cast, implement the function in fortran or use raw assembly to mess with that object - it assumes that you don't. If you modify a const object in any way you have violated the C++ standard and get UB in return.

Now, the tricky part is that const objects (i.e. objects declared as const) enjoy these optimization benefits, but the compiler usually cannot prove that a reference or pointer to const actually refers/points to a const object. Taking a const pointer to a non-const object, casting away the constness again and then modifying the object is totally legal.

Since the former case (objects declared const) is much rarer than the latter case (working on const object pointers/references) the compiler cannot often optimize based on const.


Interestingly, top-level const does not guarantee immutability, but any const member reachable from that reference is in fact guaranteed to be immutable; ergo:

  template<class T>
  struct deep_const { const T value; };

  template<class T>
  const deep_const<T>& freeze(const T&) { /* magic */ }

  template<class T>
  const T& thaw(deep_const<T> const&) { /* more magic */ }
magic and more magic left as an exercise for the reader.

edit: I do not think either GCC or clang take advantage of that though, so currently it is just a curiosity.

edit: missed actual const keywords /facepalm


Cast away const. One of the many reasons I gave up on C++. What is const if it can be cast away?


const should have been renamed to readonly, because that's what it means. It means that a variable only has readonly access to a value, but it is possible that another variable has write permissions, and so a compiler can not assume that just because one variable has readonly permission, that all variables have readonly permission except for in very limited circumstances.


readonly was the original proposed name, if I remember my D&E [0] correctly. Bjarne outlined in that book why he went with const instead, but I forget why off the top of my head. I guess it's time for a re-read!

[0] The Design and Evolution of C++


const values are really const (i.e. immutable). Const references are actually readonly. Could have used different keywords of course.


I have done a lot of profiling work and I have observed similar things. One thing is const, another is virtual functions. A lot of people think they add overhead and avoid them for performance reasons but my profiling almost never showed them as a problem. Same for const and inline. it’s really hard to predict what the optimizer will do.

Obviously there are stupid things that can be avoided from the start but in general I prefer clean code where people write for readability and simplicity and not speed.


Inlining especially is call-site dependent. Little of the benefit of inlining comes from eliminating the call overhead on a modern processor. But quite often, functions are called with one or more parameters constant. So that enables a bunch of constant folding for that call site -- the compiler is good at noticing that if(1<7) is always going to be true, and dropping out the whole else branch, which can enable code motion and more common subexpressions... so inlining is often a win but often people misunderstand exactly why.


"Inlining is a gateway drug^H^H^H^H optimization"


Yes, profile. Anecdotally, I once removed a number of virtual function calls as they were imposing significant overhead in a tight loop (image processing). Would that be the case today? No idea; I'd have to measure it again.


Image processing is a different beast. A lot of subtle things are in play here. I remember getting big improvements by iterating through the images buffer either line vs row or even arranging memory in 64 byte squares instead of linear. I was often surprised when I looked at the optimized assembly. Optimizer are really clever and do a lot of surprising things....


Yes: row traversal (no hopping around in memory and causing cache flushes) and block size reads/writes are a big deal. Luckily they're easy enough to do right every time once you know. I used to take some flak from the "premature optimization blah blah" side of the house on occasion.


I used to strive for "const correctness" in my game engine code, and wasted a fair amount of time fighting the compiler and editing function signatures as requirements changed. Mostly eliminating const from the codebase simplified things a lot.

Didn't do any comparison studies of performance, but the productivity boost from not having to think about it was nice, and I suspect any performance difference is margin-of-error stuff like the sqlite results in this article.


Productivity boost from not writing tests or thinking about function preconditions is also nice ... until it isn't.


Not quite the same issue, but this reminds me of my very first job, back in 2001, where we used a homebrew C++ framework that separated lots of concerns, and then used a single massive .h file with 10,000 const uints to tie everything together.

Compiling took over 2 hours. The header was generated automatically, but after I added a macro that replaced those const uints with #defines, compile time dropped to 30 minutes, which was quickly voted the biggest productivity enhancement in that project.

More relevant to the topic, in javascript programming I recently switched to using const instead of let wherever possible. Most variables don't actually vary, so let's make that explicit. That can prevent some unexpected surprises (though not all, as objects and arrays in constants are not immutable).


Many laws exist that express the intent of a law-making body but are either unenforceable or too expensive to enforce in any meaningful way. Perhaps the const declaration in C is an example of this type of law.

In contrast, a Standard ML compiler can enforce this law because the language itself insists that all bindings are immutable.


I wish there was a C++ compiler flag which made everything const by default and required the mutable keyword otherwise. This is obviously non-standard and would break included headers, but alternatively a #pragma could scope that option for project code coming after non-project includes.


Well, of course:

  void constByArg(const int *x)
  {
    printf("%d\n", *x);
    constFunc(x);
    printf("%d\n", *x);
  }
Here, the object referenced by pointer x is non-local, and so is constFunc.

The object could be modified in legal ways that have nothing to do with constFunc stripping away the qualifier.

Also, if the object is not defined const, then constFunc is allowed to do that, too.

Then the next, correct, example with a const protected local shows that two instructions are shaved off. That's a worthwhile saving that could be leveraged to get faster code.

If you're passing local variables into helper functions which are not supposed to change them, you can shave off some cycles with const.

> I mean, I removed const from the entire program

Maybe SQLite doesn't use const specifically with a view toward optimization. To see an overall performance impact, there would have to be some case in a "hot spot" of the program, where const is used with some local variables being passed into functions. (Or whatever other case we can ferret out where const happens to help.)

There are some widely applicable optimizations which scoop up all the "low hanging fruit" improvements, but after that, optimization is a game of eking out small gains with specialized cases.


If clang can see the definition of `constFunc` and deduce that its parameters are `noescape`, then I think it can avoid reloading `x`.

A recent optimization in clang (not sure if it's in clang-9 or clang-10) will remove memsets to variables declared const, which usually come from assigning through a pointer that's had const (of the pointed to type) casted away. The MIPS Linux kernel won't boot when built with clang due to the above (I sent a patch 2 weeks ago)


This overlooks the value of const qualification for the caller - if the argument is marked as const, the caller might reasonably assume the data won't be changed.


I don't think he's saying that you shouldn't use const (if he is, I vehemently disagree!) just that using it won't make the code faster. It will definitely prevent you, or the next maintenance programmer, from making a lot of dumb mistakes.


Yes, there are many cases where the caller can optimize the dependency chain / order of

    constFoo(x);
    constBar(x);
where it couldn't make that assumption if one function wasn't const.


Unfortunately, C++ const functions are not necessarily pure functions and cannot be freely reordered.


You're right, I was mistaken. And if the function is in the same compilation unit so the compiler is able to prove that it's pure, it would also be able to prove that a non-const pointer is actually const.


C++, rather than C, but there was a GOTW post about this years ago, which explains the situation in detail.

http://www.gotw.ca/gotw/081.htm


It's true that, as demonstrated, const-qualifying your pointer arguments is very unlikely to allow any optimisations on its own.

However, const-qualifying your pointer arguments where the pointed-to object isn't changed is what allows you to make more liberal use of const-qualified declarations, and as also demonstrated by the article, those in turn do allow some optimisations.

Additionally, there is some safety on the table: if your C program makes use of pointers to structures full of function pointers to implement polymorphism, making those pointers const-qualified allows your underlying structures full of function pointers to be declared const, which in turn allows them to be stored in hardware-enforced read-only memory.

As a side note, I've often thought that block-scope variables declared const and whose address is never taken should be automatically made static.


  // x is just a read-only pointer to something that may or may not be a constant
  void constFunc(const int *x)
Is that correct? I read that as “x is a pointer to a const int”

If you want “x is a read-only pointer to an int”, you would need

  void constFunc(int * const x)
The article also doesn’t mention the case “x is a read-only pointer to a constant int”. To state that, you would use

  void constFunc(const int * const x)
(https://stackoverflow.com/a/1143272)


I believe the distinction they’re making is that there is actually no guarantee that the value pointed to by x will remain constant. All this tells you is that you’re not allowed to use x to make the modification.

Consider:

  int foo = 42;
  const int *x = &foo;
  foo = 43;


Restrict keyword does do that if you wish to say that to the compiler. It tells it that no parameters point to the same memory. So if you won't change things via x, *x won't change.


Overly broad statement. const for global or static variables do make code faster, as they are placed in the .rodata segment, which allows swapping them out for free. This can be a huge win.

He only talks about const args, where the compiler does not try yet to check for casts which violate the constness guarantees. (DFA, Escape Analysis). And thus misses all important optimizations. You cannot rely on that never being implemented in the future. With LTO they already do I think.


I use const all over the place, but no so much to make the code faster as to eliminate potential stupid mistakes.


Const won't necessarily make things faster but constant values will (whether or not explicitly inferred). I noticed, for example, a 2x+ speedup in one of my ray tracers by downgrading my dynamic 2d/3d/4d vector class* to always have a constant size of 3.

* (poor choice, I know)


Having a constant size probably made it easy for the compiler to properly generate bytecode that uses vectorized instructions


I would say that's unlikely. SIMD instructions don't usually get used by normal compilers without very specific loops that make sure there are no obstacles to vectorization. Also the best way to use SIMD is to loop through a large array or two and do very simple operations with them. Modern CPUs actually have three (I think) floating point slots so their total floating point throughput isn't simply a fraction of the SIMD size.


> SIMD instructions don't usually get used by normal compilers without very specific loops

Compilers are smart. Even Java's JIT will generate SIMD instructions pretty well


I would have to see this to believe it. Even Intel's own compiler is extremely sensitive to small changes turning off SIMD use in a loop. You can try out compilers and see their asm at godbolt.org


That has nothing to do with const. Your dynamic vectors are allocating memory over and over on the heap, which is expensive. When you use a constant size they can be allocated on the stack, which is cheap.


Good guess, but in this case the size of the vector container was always constant - 4 elements (even if only 2 or 3 of them were ever used). Presumably what made it faster was being able to unroll loops? (only a guess, never looked at the disassembly) -- so instead of 0 to x on each vector element, it would loop through 0 to 2 and could optimize around this for each function.


You should also test it with older compiler versions from the 80s/90s. There are some false beliefs about programming now that were true some decades ago. I'm not saying that this is the case here, but it's worth keeping that in mind.


I just see too many programmers that right from the start try to micro-optimize their code with things like this. You'll usually see very well written code but with all sorts of hacks, taken from blogs and StackOverflow answers and whatnot all across the web, but in the end they fail to optimize their actual algorithm and end up with some ultra-high memory footprints or O(n^3) stuff instead of some nice O(log(n)) for example. And they'll argue that they already "optimized the sh*t out of it", because they saved a couple of hundred assembly calls in a binary file around the size of a few megabytes ...


Is the same true for other compiled languages? Like Golang?


Rust has shared references where you can trust the pointer is actually immutable.


Golang doesn't have const types. (It does have const declarations, but those are different.)


> So, what’s const for? For all its flaws, C/C++ const is still useful for type safety.

Const variables can also be mapped read-only. This gives you hardware protection against modifications, and also uses less RAM if the const variable is in a shared library (multiple processes can share the same mappings).


Surprised no one has mentioned what the author of C, Dennis Ritchie, said about the addition of "const" to the language.

https://www.lysator.liu.se/c/dmr-on-noalias.html


> Assigning an ordinary pointer to a pointer to a `noalias' object is a license for the compiler to undertake aggressive optimizations that are completely legal by the committee's rules, but make hash of apparently safe programs

I can only imagine what he thought about aggressively optimizing C-family compilers in his later years.


We're talking about billions of calls that have milliseconds in difference. That's irrelevant when building a basic app that sits on a 10K worth of hardware machine with hundreds of CPUs.


Shouldn't the test instead be: const int * const <variable name>

Haven't profiled it but would make more sense to have the pointer also be const instead of having a non const pointer as input?


It rarely makes sense. Pointer to const is a contract between caller and callee. A signature like char *strdup(const char *) says "I take a pointer to memory that I promise not to modify, and you get a pointer to memory that you may modify".

Const pointer is a statement about the internal variables of a function definition, usually not of any interest outside the function itself and therefore rarely used.


> Const pointer is a statement about the internal variables of a function definition, usually not of any interest outside the function itself and therefore rarely used.

...and in fact not even part of the name mangling (for the exact reason you mentioned): https://godbolt.org/z/1pjecq


> So most of the time the compiler sees const, it has to assume that someone, somewhere could cast it away, which means the compiler can’t use it for optimisation. This is true in practice because enough real-world C code has “I know what I’m doing” casting away of const.

Some years ago i'd agree, but experience has shown that C/C++ compiler writers would rather win benchmark games than keep working code working. So it'd be nice if there was a better reason than "well, a lot of code would break if they did that".


That's typically in the case of UB, where compilers can make whatever assumptions they want. In this case, even if the code is completely within the C spec, the compiler cannot guarantee that the data is unmodified.


It's not even that: it's that if it sees a pointer declared const, it doesn't know if it was actually defined const.

I may have a non-const object that I pass to a function that takes it by const pointer: that doesn't magically make it const! The object can still be modified, e.g., by an opaque function in the caller which modifies the object not necessarily by casting away const, but because the object isn't const at all (e.g., it may have a non-const reference to the object).

Const definitely helps optimizations, but only when the object is actually const. References or pointers to const objects can't tell you that.


I think the point is that correct code could break.


It seems to me that compilers could benefit from doing analysis of functions and providing that information to callers.


they do.


[flagged]


For JavaScript const is more for letting the developer know intent. If I declare a local variable within a function with const, I am saying this variable should not be reassigned. If I use let I am saying that the variable can be reassigned.

It's nice when looking at someone else's code and being able to tell if they intended to have a variable be reassignable or not.

Maybe you've talked to JavaScript devs who are just bad at explaining their reasoning or maybe they just strictly adhere to linters which encourage the use of const and let and throw linting errors when they come across a var.

Either way, I'd recommend you reevaluate your language and refrain from calling other dev's weenies just for having a different opinion than yours and maybe not being able to properly explain their stance. Using language that borders bullying is not helpful, it's no better than calling someone a snowflake or other degrading term.


Const is primarily for documentation to indicate that the assignment happens only once and won't be redefined. Using 'let' or 'var' indicates to the reader that the variables value will be reassigned in the function.

It's not about the compiler.


Also let/const don't get hoisted to the start of the function, declaration using var does.


Comparing "const" and "var" is incorrect; the former is block-scoped (like "let"), while the latter is function-scoped.

More to the point, "const" in JS is a maintainability tool to prevent accidental reassignment. I don't think people use it for performance reasons, nor should they.


Actually no, compilers can't assume that `foo` is const.

    var foo = 0;
    eval("foo = 1");
With that said, virtually nobody uses `const` for performance reasons, but for developer readability and avoiding mistakes.


A javascript engine can tell that a variable is used in a const-like way if it statically checks that `eval` is not used within the variable's scope. I believe there are a few optimizations in V8 and other engines that are disabled if eval is used in the scope at all. (I don't know if there are any optimizations around a variable being used in a const-like way. As you said, const in JS is mainly for developers.)

Given code like `function foo(fn, str) { var foo = 0; fn(str); return foo; }`, you might be wondering what happens if foo is called with `foo(eval, "foo = 1");`. The answer is that a standard eval call isn't actually a normal function call but better understood as a special syntax resembling a function call. There's a big difference between `eval(x);` and `var e = eval; e(x);`. When you take the value of `eval` (by passing it as a parameter or assigning it to a variable, etc), the value you get is actually the "indirect eval" function, which works slightly differently than standard eval: it does not get access to the local scope that it's called in. This means its usage won't break any optimizations that assume the local scope won't be modified.


> Actually no, compilers can't assume that `foo` is const.

They can assume it, they just need to be able to reverse that assumption if you do change it through something like eval. This is called 'speculative optimisation' and we use it to optimise languages like JavaScript, Ruby, Python, etc.


your question only makes sense if they are using const for performance reasons... which I don't think anyone does. No real reason for using var anymore over let in production code. I'll still use var in the console for testing stuff.


It tells the reader something too. There's no excuse for var anymore.


I've not seen this as the basis for the default usage of const over var (or let) in javascript. The primary reason that I'm aware of is that if you use const by default, the compiler will tell you if you accidentally re-define the value of a variable. Possibly because people feel that immutability by default reduces error frequency, though I don't have any data to back up that assertion.


Defaulting to more restrictive, less featureful constructs just seems like engineering 101.


const doesn't actually stop data from changing, just reassignment.

  const user = {id: 1, status: 'active'};
  user.status = 'disabled';
is valid javascript


You're talking about two different things. In this case, you'd use...

    const user = Object.freeze({id: 1, status: 'active'});
But const is immaterial to this topic, as it only prevents you from being able to reassign user to some other value, i.e.,

    user = 7;
While...

    let user = Object.freeze({id: 1, status: 'active'});
    user.status = 'disabled'; // throws in strict mode; silently ignored otherwise.
Provides the behavior that you're seeking to guard against, regardless of let/var/const usage.


>You're talking about two different things. In this case, you'd use...

No, I'm talking about the behavior of const.


That is how object oriented languages work. It is a constant reference to a mutable object.

  final int[] arr = {1, 2, 3};
  arr[1] = 10;
is valid Java.


Which is to say that the const pointer itself doesn't change. It doesn't impose any constraints on the values of any underlying data structures.

Javascript could use some proper immutable types.


I've wondered why the JS people didn't call their keyword for this purpose "final" like Java did.


You use const because you want a runtime error if someone attempts to change it.

It's also documentation of intent.

Some code, for better or worse has very long functions, and you might use the variable 100 lines down from where it is declared.

Is a runtime error better than letting it silently change? Debatable, but the error is more likely to blow up a unit test, so you know there is a problem before you ship.

And needless to say: in Typescript you'll get a compile time error. In Typescript it's a no-brainer to use const when appropriate.


'const' in JS is 100% for readability and enables no additional optimizations above 'let'. In fact 'const' and 'let' are often worse than 'var' because of the temporal dead zone.


> In fact 'const' and 'let' are often worse than 'var' because of the temporal dead zone.

Seems like a massive overstatement. What are you even referring to as the downside?

For 99% of use cases/people, TDZ just means you get a runtime error for your use-before-declaration bugs.

The only reason I can see for using `var` is if you're doing some very confident performance hacking in a hot loop and you have the performance testing harness to prove that you're not just wasting your time.
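For reference, the temporal dead zone mostly changes *when* the bug surfaces. A minimal sketch (the `withVar`/`withLet` helpers are illustrative, not from the thread):

```javascript
"use strict";

// `var` declarations are hoisted and initialized to undefined,
// so use-before-declaration fails silently...
function withVar() {
  const before = x; // undefined, no error
  var x = 1;
  return before;
}

// ...while `let`/`const` bindings sit in the temporal dead zone
// until their declaration executes, so the same bug throws.
function withLet() {
  try {
    const before = y; // ReferenceError: Cannot access 'y' before initialization
    let y = 1;
    return before;
  } catch (e) {
    return e instanceof ReferenceError;
  }
}

console.log(withVar()); // undefined
console.log(withLet()); // true
```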


> Shouldn't the compiler be able to notice that a variable isn't changed in its lifetime?

The compiler can, I can't (efficiently). const spam also occasionally catches when I modify the wrong variable.


Extra protection of data costs speed.


I once had a very stubborn intern who decided to add as much const as he could in our codebase. What followed was a heated discussion and I did a global replace of const by nothing.

The main problem with const is that it's ugly, it's viral and it does not add any value to the code.

And I know all the supposedly good things about this, but I never found any real usage in practice.


You should fire yourself. Seriously.

Understanding which object parameters to a function are inputs and which ones are modified is key information about the behavior of a function. Its omission requires devs to divine the intent from the name and hope that every previous dev was a good citizen w.r.t. keeping function names accurate to intent.

Find-and-replacing const with nothing is the moral equivalent of replacing all the types of parameters with void*. After all, types are viral and ugly too.

I really hope that intern found a better place to work than your company.


With "const" you still need to rely on devs being good citizens since const can be trivially cast away.

I've been programming in C for 25+ years. const is clutter.


Don't let consts get "trivially" cast away in code review. If something is no longer const then remove the keyword, or find another way to solve your problem.


> const can be trivially cast away.

Yes, types can be trivially cast away too. That's not a reason to use void* everywhere.


I strongly disagree, obviously. Using const to document code might seem like a good idea in some organizations, with extremely large codebases, I don't know.

But first, when you're an intern, you have to follow the local codestyle, even if you think it's not ideal.

Second, I think there are much better rules than spamming const to attain the same objective, but you might need more experience to really appreciate that.


> Second, I think there are much better rules than spamming const to attain the same objective

Like?

> you might need more experience to really appreciate that.

Practically a picture perfect appeal to authority.



