I'm not the GP in question, but I have worked on code that I think fits this phrasing.
In that project's case, 99% of tests were mocks where the only thing being tested was whether the mocked function got called the expected number of times or with the expected arguments.
So the many thousands of tests ran very quickly, and over 90% of the code was covered by tests; however, nothing was actually being functionally tested in those cases.
In other words, the tests ran like shit off a shiny shovel.
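To make the anti-pattern concrete, here's a minimal sketch (hypothetical names, not the actual project's code) of what those tests looked like: the assertion only verifies that the mocked dependency was invoked, not that the code under test produces a correct result.

```python
from unittest import mock
import unittest


def charge_customer(customer_id, amount, gateway):
    """Code under test: delegates entirely to the payment gateway."""
    return gateway.charge(customer_id, amount)


class TestChargeCustomer(unittest.TestCase):
    def test_calls_gateway(self):
        gateway = mock.Mock()
        charge_customer("cust-42", 100, gateway)
        # Passes as long as the call happened -- says nothing about what
        # charge() actually returns or raises in production.
        gateway.charge.assert_called_once_with("cust-42", 100)


if __name__ == "__main__":
    unittest.main()
```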
What happens next is that the functions being mocked out change what they return, or the order of what they do internally (e.g. a given set of inputs now throws a different exception). Someone forgets to update the mocks. All the tests continue to pass, even though the conditions they exercise are no longer possible in the real program.
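A hedged sketch of that failure mode, again with made-up names: suppose the real gateway later starts raising an exception for these inputs, but the mock is never updated. The mocked test stays green even though the real call path is now broken.

```python
from unittest import mock


class InvalidCurrencyError(Exception):
    pass


class RealGateway:
    def charge(self, customer_id, amount):
        # Behaviour added later: the real dependency now rejects these inputs.
        raise InvalidCurrencyError("amount must include a currency code")


def charge_customer(customer_id, amount, gateway):
    return gateway.charge(customer_id, amount)


def test_passes_despite_stale_mock():
    gateway = mock.Mock()  # never updated, so it never raises
    charge_customer("cust-42", 100, gateway)
    gateway.charge.assert_called_once_with("cust-42", 100)  # still green
```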