I agree with your sentiment, but I think the situation is a bit more nuanced.
The usual argument is that headers don't actually provide good encapsulation: implementation details internal to the class (i.e. private members) end up getting leaked into the public header. There are workarounds (pimpl, opaque "handle" types, etc.) but they all have their own disadvantages (mostly an extra layer of indirection).
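To make the pimpl workaround concrete, here is a minimal sketch (names like Widget and Impl are hypothetical): the public header exposes only a forward-declared Impl behind a pointer, so private members can change without clients recompiling, at the cost of an allocation and a pointer chase.

    // widget.h -- nothing private is visible to clients
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                       // defined in the .cpp, where Impl is complete
        void draw();
    private:
        struct Impl;                     // forward declaration only
        std::unique_ptr<Impl> pimpl;     // the extra layer of indirection
    };

    // widget.cpp -- the actual private state lives here
    struct Widget::Impl {
        int cached_state = 0;            // can change freely without touching widget.h
    };

    Widget::Widget() : pimpl(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;
    void Widget::draw() { ++pimpl->cached_state; }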
Complicating the matter is static polymorphism: CRTP and friends force template-ization, which in turn forces lots of code into headers. Smarter compilers help a bit (with de-virtualization and constexpr), but if you want to guarantee that something is resolved at compile time, turning it into a template is the only real option.
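As a hedged illustration of why the code ends up in headers (Shape and Circle are made-up names), a bare-bones CRTP sketch: the base is a class template, so its definition must be visible in every translation unit that instantiates it, and the dispatch is resolved entirely at compile time.

    // shape.h -- the whole implementation has to live in the header
    template <typename Derived>
    struct Shape {
        double area() const {
            // statically dispatched: no vtable, but also no separate .cpp
            return static_cast<const Derived*>(this)->area_impl();
        }
    };

    struct Circle : Shape<Circle> {
        double r = 1.0;
        double area_impl() const { return 3.14159265358979 * r * r; }
    };

    // usage: Circle{}.area() calls Circle::area_impl with no virtual dispatch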
Lastly there's the inlining specter; the separate compilation model means that without link-time optimization (which has admittedly made great strides in the last few years), code must be moved into headers to be eligible for inlining. Premature optimization and all that, but this is a death-by-a-thousand-cuts situation where the language gets in the way of doing the performant thing (and you're probably only using C++ if you care about performance in one aspect or another).
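A small sketch of that constraint (hypothetical names): only the function whose body is visible in the header can be inlined into other translation units when compiling without LTO.

    // math_utils.h
    inline int square_header(int x) { return x * x; }   // body visible in every TU,
                                                         // so the compiler may inline it

    int square_separate(int x);    // body lives in math_utils.cpp; without LTO, callers
                                   // in other TUs only see a call across the object-file
                                   // boundary and cannot inline it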
None of these things is an insurmountable problem, to be sure, but they represent little inefficiencies (either for the programmer or the program) which are not generally present in more modern language implementations. I am thinking primarily of Rust, for which the static polymorphism and link-time optimization stories are strong--although perhaps that is in direct reaction to some of these C++ shortcomings, and we'll discover Rust has warts of its own after a few decades of wear and tear.
Call me old-fashioned, but I actually think the separate compilation model is a good thing: it makes it possible to delegate the build process to a mostly language-agnostic tool, and to mix languages easily (C, C++, D) in the same project.
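For instance (hypothetical names), the C ABI is the common ground that makes such mixing work; a build tool only needs to compile each translation unit with the right compiler and hand the objects to the linker.

    // consumer.cpp -- calls a routine compiled separately, perhaps by a C or D compiler
    extern "C" int checksum(const char* buf, int len);   // C linkage: no name mangling

    bool validate(const char* buf, int len) {
        return checksum(buf, len) == 0;   // resolved by the linker, not the compiler
    }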
About the implementation details internal to the class: the 'private' keyword is the problem, not the header files.
If you want polymorphism, why not just use an abstract base class? You would have had a layer of indirection anyway.
If you don't want polymorphism, just use "handle" types. Am I missing something?
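To make those two options concrete (Logger and Parser are made-up names): an abstract base class hides the implementation behind a vtable, while an opaque handle hides it behind an incomplete type; both keep private state out of the public header at the cost of one indirection.

    #include <memory>

    // option 1: abstract base class -- clients see only the interface
    struct Logger {
        virtual ~Logger() = default;
        virtual void log(const char* msg) = 0;
    };
    std::unique_ptr<Logger> make_file_logger(const char* path);  // concrete type in a .cpp

    // option 2: opaque "handle" type -- the struct is never defined in the header
    struct Parser;                                    // incomplete type
    Parser* parser_create();
    void    parser_feed(Parser* p, const char* data);
    void    parser_destroy(Parser* p);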