
I'd like to add:

4) Don't know how to mock effectively

I've seen three flavours of this one:

- tests that take forever because they don't mock a slow external process that gets called 100s of times in the suite (see the sketch after this list).

- tests that randomly break because someone left a file in /tmp/ or some other lack-of-isolation mystery.

- tests that run like shit off a shiny shovel, because everything is mocked, meaning that nothing is actually tested.
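To make the first flavour concrete, here's a minimal Python sketch (all names invented; fetch_exchange_rate and the rates URL are assumptions) of patching out a slow external call so the suite stops hitting the network on every test:

    from unittest.mock import patch

    import requests  # assumed external HTTP dependency for this sketch

    def fetch_exchange_rate(currency):
        # The real thing: a slow network round-trip on every call.
        return requests.get(f"https://rates.example.com/{currency}").json()["rate"]

    def convert(amount, currency):
        return amount * fetch_exchange_rate(currency)

    # Patch the slow call out for the test; the suite no longer touches the network.
    @patch(f"{__name__}.fetch_exchange_rate", return_value=1.25)
    def test_convert_uses_rate(mock_rate):
        assert convert(100, "EUR") == 125.0
        mock_rate.assert_called_once_with("EUR")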



>tests that run like shit off a shiny shovel, because everything is mocked, meaning that nothing is actually tested.

Can you expand on this a bit?

I've always worked under the impression that tests should focus on a narrow slice of code. So if my SUT has some dependencies, those get mocked to return the expected result from the dependency, possibly even checking the expected arguments. The asserts verify the result from the SUT, but also that the expected method on the mock was called. This way, I'm testing just the code in the SUT, nothing more.
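For what it's worth, a minimal Python sketch of that style (names invented; OrderService and its gateway are assumptions, not anyone's real code):

    from unittest.mock import Mock

    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            receipt = self.gateway.charge(amount)
            return {"status": "ok", "receipt": receipt}

    def test_place_order_charges_gateway():
        # The dependency is mocked with the expected result...
        gateway = Mock()
        gateway.charge.return_value = "receipt-123"

        result = OrderService(gateway).place_order(50)

        # ...and the asserts check both the SUT's result and the mock interaction.
        assert result == {"status": "ok", "receipt": "receipt-123"}
        gateway.charge.assert_called_once_with(50)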


The problem is that bugs rarely occur within one unit. They occur in the interactions between units. The smaller you define a "unit" to be (everybody has their own definition of what a unit test is), the more true this becomes. The extreme case would be testing every single line individually, which nobody has the resources to do and which tells you nothing about the complete system, yet still gives you 100% code coverage!

Your "expected result from the dependency" might be volatile or hard to mock due to bugs, state, timing, configuration, unclear documentation, version upgrades or other factors inside the dependency. So when the system breaks while all unit tests still pass, you get this blame game where one team accuses the dependency of not behaving as they expect, when the truth is that the interface was never stable in the first place, or was never meant to be used that way.

What you have to do is choose your ratio of system tests vs unit tests. The scenario GP describes is companies that spend 99% of their testing budget on unit tests and 1% on system tests, instead of a healthier 40-60 split.


Thanks. That makes a lot of sense. So when testing a given class, it may have some dependencies, which may be external resources (a DB, an API, etc.) or internal ones. It sounds like the recommendation is to mock only where the external dependencies lie, and leave the internal ones real. Eventually, as you go down the chain, those internal dependencies will reach external ones (which will likely still need some sort of mock/fake/stub), but you're allowing more of the logic and interaction of the system to be tested, rather than just the logic in the one class directly under test.
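Right, something like this sketch (Python, all names invented): the internal collaborator is used for real, and only the external resource gets a fake, so the interaction between units is still exercised:

    class PriceCalculator:                  # internal dependency: use the real one
        def total(self, items):
            return sum(price for _, price in items)

    class InMemoryOrderRepo:                # fake standing in for the external DB
        def __init__(self):
            self.saved = []

        def save(self, order):
            self.saved.append(order)

    class CheckoutService:
        def __init__(self, calculator, repo):
            self.calculator = calculator
            self.repo = repo

        def checkout(self, items):
            order = {"items": items, "total": self.calculator.total(items)}
            self.repo.save(order)
            return order

    def test_checkout_persists_the_computed_total():
        repo = InMemoryOrderRepo()
        service = CheckoutService(PriceCalculator(), repo)

        order = service.checkout([("book", 20), ("pen", 5)])

        assert order["total"] == 25         # the real calculation flowed through
        assert repo.saved == [order]        # only the DB boundary was faked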


I'm not the GP in question, but I have worked on code that I think fits this phrasing.

In that project's case, 99% of the tests were mocks where the only thing being tested was whether the mocked function got called the expected number of times or with the expected arguments.

So the many thousands of tests ran very quickly, and over 90% of the code was covered by tests; however, nothing was actually being functionally tested in those cases.
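A contrived Python sketch of what those tests looked like (names invented): everything the function touches is a mock, and the only assertions are about how the mocks were called, so nothing behavioural is checked:

    from unittest.mock import Mock

    def sync_users(fetch, store):
        for user in fetch():
            store(user)

    def test_sync_users_calls_its_collaborators():
        fetch = Mock(return_value=["alice", "bob"])
        store = Mock()

        sync_users(fetch, store)

        # Fast, and it "covers" every line, but it only proves calls happened.
        fetch.assert_called_once()
        assert store.call_count == 2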

In other words, the tests ran like shit off a shiny shovel.


Yes. This.

What then happens is that the real functions that were mocked out change what they return, or the order of what they do inside (e.g. a given set of inputs now throws a different exception). Someone forgets to update the mocks. All the tests continue to pass, even though none of the conditions they cover are actually possible in the program.
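A hypothetical Python illustration of that drift (names invented): the real dependency was changed to raise ValueError for a missing key, but the stale mock still raises the KeyError the caller handles, so the test stays green for a condition that can no longer happen:

    from unittest.mock import Mock

    def lookup_with_fallback(repo, key):
        try:
            return repo.get(key)
        except KeyError:        # handler written against the old behaviour
            return None

    def test_missing_key_falls_back_to_none():
        repo = Mock()
        repo.get.side_effect = KeyError("missing")  # stale: the real repo now raises ValueError

        # Still passes, even though production would crash with an unhandled ValueError.
        assert lookup_with_fallback(repo, "missing") is None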



