It's a good observation, but I'm still going to disagree. Let's look at the two cases.
(1) It really is dead code. OK, great, but I've seen people spend a whole day writing hundreds of lines trying to exercise it before they conclude it's truly dead. Is it worth it? If a small volume of dead code is worth expunging at all, I suggest that there are more efficient ways to solve that particular problem.
(2) It should be dead, but it's "revived" by constructing an artificial situation in which it does get called even though it never could in real life. Again, I've seen people waste days on this exercise. Now you're carrying around the dead code and the tests/mocks that make it undead.
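A contrived sketch of what that looks like (Python; all names made up):

```python
import pytest

def load_config(path: str) -> str:
    # Hypothetical production code: every caller validates `path`
    # upstream, so this branch is unreachable in real life.
    if path is None:
        raise ValueError("no config path")
    with open(path) as f:
        return f.read()

# The coverage-chasing test: it constructs a call that no real caller
# ever makes, purely to light up the dead branch. The codebase now
# carries both the dead branch and the test that keeps it undead.
def test_load_config_rejects_none():
    with pytest.raises(ValueError):
        load_config(None)
```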
So in what situation is there a net benefit? In my experience, any dead code found and removed that way is found only at great expense, by people who were chasing the arbitrary 100% goal in the first place. I don't think that makes the case that 100% unit test coverage is a goal worth pursuing.
In both your cases you have someone creating tests based on what they think will increase coverage the most. I don't think that's necessarily what the parent is saying, though. I think what they're saying is that you can write tests based on the expected/documented behaviour of the module, and if the coverage ends up less than 100%, it's because your module has code paths which are not required by the expected behaviour. The key is that adding new tests is not the solution unless you can identify specific expected behaviours your tests missed. Looking at the code and trying to reverse-engineer what tests are necessary to achieve 100% coverage will always lead to the situations you describe.
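For example (a contrived Python sketch; `parse_size` and its spec are made up):

```python
import pytest

def parse_size(text: str) -> int:
    # Hypothetical module under test. Documented behaviour:
    # accepts "<n>KB" or "<n>MB" and returns a byte count.
    if text.endswith("KB"):
        return int(text[:-2]) * 1024
    if text.endswith("MB"):
        return int(text[:-2]) * 1024 * 1024
    if text.endswith("GB"):  # never promised by the docs
        return int(text[:-2]) * 1024 ** 3
    raise ValueError(f"unsupported size: {text}")

# Each test maps to a documented behaviour, not to a line of code.
def test_kilobytes():
    assert parse_size("4KB") == 4 * 1024

def test_megabytes():
    assert parse_size("2MB") == 2 * 1024 * 1024

def test_rejects_unsupported_unit():
    with pytest.raises(ValueError):
        parse_size("4TB")
```

Coverage now flags the "GB" branch as unhit. The behaviour-first reading is that the branch isn't required by the spec, so it's a removal candidate, not a prompt to write another test.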
These sorts of tests aren't efficient, though. I suspect the only way to get this outcome is to encode all your edge cases into end-to-end integration tests (which would reveal which portions of the code can never be hit)... I find that approach to testing too expensive, and prefer one where success cases and known-to-be-difficult cases (say, using a strange third party tool with weird error signaling) are encoded in end-to-end tests, with edge cases limited to small units of code.
> if you encode all your edge cases into end-to-end integration tests
I think that's what you should do. Your integration tests should validate all of your specifications, and your specifications should cover all edge cases.
So yes, that's a lot of "slow" tests. But I think the best approach is to work on the tooling that makes those tests faster and easier to set up, not to limit their quantity.
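For instance, a lot of that win is just fixture scoping and parallelism. A minimal pytest sketch (the stdlib server is a placeholder for whatever your real stack needs):

```python
import subprocess
import time
import urllib.request

import pytest

# Pay the expensive setup once per session instead of once per test.
@pytest.fixture(scope="session")
def server():
    proc = subprocess.Popen(
        ["python", "-m", "http.server", "8765"],  # placeholder app
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    time.sleep(0.5)  # crude readiness wait; real code should poll
    yield "http://127.0.0.1:8765"
    proc.terminate()
    proc.wait()

def test_root_responds(server):
    with urllib.request.urlopen(f"{server}/") as resp:
        assert resp.status == 200
```

Add pytest-xdist (`pytest -n auto`) and the "slow" suite gets spread across cores instead of capping how many specs you encode.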
I don't believe tests (and the resulting coverage reports) are a time-efficient way to locate dead code: you'd need 100% logically covered code, and some of your dead code may itself be under unit test, so it ends up counted as "covered code". I think the best way to locate dead code is to simplify or refactor the code base; as it becomes well factored, it's usually much easier to see which portions are unreferenced.
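To illustrate the trap (hypothetical Python, one file for brevity):

```python
def active(x):
    return x + 1

def legacy_discount(price_cents):
    # No production code calls this any more, but the test below
    # still exercises it, so coverage reports it as covered.
    return price_cents - price_cents // 10  # 10% off

def test_active():
    assert active(1) == 2

def test_legacy_discount():
    assert legacy_discount(100) == 90
```

The coverage report says 100% here; only reading the call graph (or a static pass with a tool like vulture) shows that `legacy_discount` is unreferenced outside its own test.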