
This is obvious, as another commenter said, but it is nonetheless useful.

You can use it to teach graduates. Why have them waste time relearning from the same mistakes? It probably needs a longer blog post with examples, though.

It is also useful as a checklist, so you can pause when working earlier in the lifecycle to consider these things.

I think there is power in spelling out the obvious. Sometimes even experienced people miss it!

The diagram can be condensed by saying SMUR + F = 1. In other words, you can slide toward Fidelity, or toward "nice testability," which covers the SMUR properties.

However, it is more complex than that!

Let's say you have a unit test for a parser within your code. For a parser, a unit test might have pretty much the same fidelity as an integration test (running the parser from a unit test, rather than, say, doing a full compilation in something like Replit online). But the unit test retains all the other properties in this instance.
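To make that concrete, here's a minimal sketch (the parser and its test are hypothetical, not from the article): because the unit test feeds real input straight to the parser, it exercises nearly the same behavior an integration test would, while staying fast, reliable, and cheap to run.

```python
# Hypothetical example: a tiny parser and its unit test.
# The test runs the parser on real input, so its fidelity is close to
# an integration test's, but it keeps the SMUR advantages of a unit test.

def parse_int_list(text: str) -> list[int]:
    """Parse a comma-separated list of integers, e.g. "1, 2, 3"."""
    return [int(part.strip()) for part in text.split(",") if part.strip()]

def test_parse_int_list():
    assert parse_int_list("1, 2, 3") == [1, 2, 3]
    assert parse_int_list("") == []
    assert parse_int_list(" 42 ") == [42]

test_parse_int_list()
```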

Another point: you are not really testing anything if you have zero e2e tests. You get a lot (more like 99-1 than 80-20) from having a few e2e tests; beyond that, the other types of tests almost always make sense. In addition, e2e tests, if well written and designed with this in mind, can also be run in production as synthetics.
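As a rough sketch of that last point (the endpoint, URLs, and response shape here are all made up for illustration): parameterize the e2e check by base URL, so the same code runs in CI against a test deployment and in production as a scheduled synthetic probe.

```python
# Hypothetical sketch: one e2e health check, reused as a synthetic.
import json
import urllib.request

def healthy(status_code: int, payload: dict) -> bool:
    """Decide whether a health-endpoint response looks OK."""
    return status_code == 200 and payload.get("status") == "ok"

def check_service(base_url: str) -> bool:
    """Fetch the (assumed) /healthz endpoint of a deployment and evaluate it."""
    with urllib.request.urlopen(f"{base_url}/healthz", timeout=5) as resp:
        return healthy(resp.status, json.load(resp))

# In CI:          check_service("https://staging.example.com")
# As a synthetic: check_service("https://prod.example.com") on a schedule
```

Separating the response check (`healthy`) from the fetch keeps the pass/fail logic unit-testable on its own.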




After 10+ years working on testing practices inside Google, I have found that even the most obvious practices somehow get ignored or misunderstood. As with a lot of programming practices, for every person who has thought deeply about why a practice exists, there are many more who just apply it as a matter of course (e.g., mocking, dependency injection, microservices).

It might be useful to provide a little more context for why I wanted to write this in the first place. Over the last 15 or so years we have been tremendously successful at getting folks to write tests. And as with any system, once you remove a bottleneck or resource constraint in one place, you inevitably find one somewhere else. In our case, we used to take running our tests for granted, but now doing so has actual cost implications that we need to consider. I had also observed some internal discussions that had become a little too strident about the absolutes of one kind of test or another, often in a way that treated terms like "unit" or "integration" as universal categories, completely ignoring the broad, practical implications we have bound together into a few shorthand terms.

My goal when trying to develop this idea was to find a way to succinctly combine the important tradeoffs teams should consider when thinking not about a single test, but about their entire test suite. I wanted to create a meme (in the Dawkins sense) that would sit in the background of an engineer's mind and help them quickly evaluate their test suite's quality over time.


What's useful here? There's nothing actionable, and no way to quantify whether you're doing "SMURF" correctly. All the article describes is semi-obvious desirable qualities of a test suite.


You're not "doing SMURF". It's not an approach or a system. It's just a specific vocabulary to talk about testing approaches better. They almost spell it out: "The SMURF mnemonic is an easy way to remember the tradeoffs to consider when balancing your test suite".

It's up to your team (and really always has been) to decide what works best for that project. You get to talk about tradeoffs and what's worth doing.


I touched on this a bit upthread, but I just want to note that my intention wasn't to get anyone to "do SMURF correctly". My goal was to create an idea to compete with the "Test Pyramid," which, while a useful guide in an environment with limited or no testing, doesn't lead to productive conversations in an organization with a lot of tests.

My hope is that this little mnemonic will help engineers remember and discuss the practical concerns and real-world tradeoffs that abstract concepts like unit, integration, and E2E tests entail. If you and your team are already talking about these tradeoffs when you discuss how to manage a growing test suite, then you will likely find this guidance a bit redundant, and that's fine by me :)


> What's useful here?

It is up to the reader to figure out this one.



