
James O. Coplien has a long history of experience and is steeped in both theory and practice. His list of publications is about 200 entries long: https://sites.google.com/a/gertrudandcope.com/info/Publicati...

Unit tests are not free, since they are code too; that much is obvious. Coplien, however, also delves into the less obvious impact of unit tests on design, as well as the organizational aspects. Ultimately, coding patterns are going to reflect the incentives that govern the system.

Software development is largely about trade-offs, and there is plenty to be learned here about how to make them. An addendum by him can be found here: http://rbcs-us.com/documents/Segue.pdf but the meat is in the 2014 article.



One thing that unit testing really imposes is the ability to exercise single parts of the code independently.

It really brings out code smells: if you need mocks injected everywhere instead of being able to use dependency injection cleanly, it shows. If you have code paths that can only be triggered from within events, it shows, and so on.
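For the "code paths only reachable from events" case, the usual fix is to pull the logic out of the handler so it can be exercised directly. A rough sketch (the handler names and OrderGateway are made up for illustration):

   # Before (sketch): the interesting logic only runs when the UI event fires,
   # so a test has to go through the event machinery to reach it.
   def on_submit_clicked(event)
     raise "empty order" if event.items.empty?
     OrderGateway.place(event.user, event.items)
   end

   # After (sketch): the handler just delegates, and place_order can be
   # called -- and tested -- on its own.
   def on_submit_clicked(event)
     place_order(event.user, event.items)
   end

   def place_order(user, items)
     raise "empty order" if items.empty?
     OrderGateway.place(user, items)
   end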

having "wasteful" unit testing is more an investment for the future: when users came with real bugs, the ability of reproducing their step in code and fixing that in a no-regression suite is invaluable, but requires your app to be testable in the first place, lacking which you are stuck with manually testing stuff or even worse coughselenium cough


A lot of the time, the argument for why certain things are "code smells" turns into circular logic:

Code that is hard to test is poorly architected; poorly architected code is hard to test.

Example. Take this code (I'm reusing an example from this thread since I think it represents typical "well-factored" code, and isn't an obtuse example to prove a point):

   class FooBuilder
     def create(object)
       if FooValidator.valid?(object)
         FooFactory.save(object)
       else
         raise "Can't save an invalid object"
       end
     end
   end
It's reasonably easy to intuit the purpose of this code, and what exactly makes for a valid object (since the validator is an explicit dependency of this class). I've often seen this called "poorly architected code", however, since an isolated unit test would need to lean heavily on mocks, and you end up with something like this:

   class ObjectBuilder
     def initialize(foo_validator, foo_factory)
       @foo_validator = foo_validator
       @foo_factory = foo_factory
     end

     def create(object)
       if @foo_validator.valid?(object)
         @foo_factory.save(object)
       else
         raise "Invalid - can't save object"
       end
     end
   end
From the perspective of a coder coming in and trying to understand what's happening here, this code is much more difficult to understand, despite being more "testable". What makes a foo valid? How do I know what a "foo_factory" is? I suppose I could assume that the class defined in foo_factory.rb is probably one - but I can't actually be sure.

The code is more extensible, for sure, but in a way that probably doesn't matter. I can pass in any validator I want! Amazing! Except, in 99% of cases, I'm going to have one way of validating something. The same goes for saving.

I would posit that at least 90% of the time that I see dependency injection in a codebase, it's there solely to aid testing and almost never adds practical (as in, actually being used and not just theoretical) value.
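For what it's worth, the isolated unit test that this refactoring is supposed to buy you tends to look something like the following (a Minitest::Mock sketch, assuming the ObjectBuilder above; not from the thread). Note how much of it is pinning down which collaborator gets called with what, i.e. the wiring:

   require "minitest/autorun"

   class ObjectBuilderTest < Minitest::Test
     def test_create_saves_valid_objects
       object    = Object.new
       validator = Minitest::Mock.new
       factory   = Minitest::Mock.new
       validator.expect(:valid?, true, [object])
       factory.expect(:save, object, [object])

       ObjectBuilder.new(validator, factory).create(object)

       validator.verify
       factory.verify
     end
   end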


The main advantage of IoC is decreased coupling. The first code you posted is tightly coupled to FooValidator and FooFactory; every change to those objects (name, namespace, etc.) will affect your code. Your code is also less flexible because it is bound to exactly that validator, and you have to explicitly change it in every place it is used if you want to use another one. The better testability of the second version is just a nice side effect of IoC. The fact that you cannot tell what type your parameters are is a Ruby problem, certainly not an IoC shortcoming.
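To make the flexibility point concrete: with the injected version, picking a different validator is a call-site decision instead of an edit inside the class. A sketch (StrictFooValidator is a made-up alternative):

   # The class itself never changes; callers decide which collaborators to wire in.
   object  = Object.new  # whatever you're persisting
   default = ObjectBuilder.new(FooValidator, FooFactory)
   strict  = ObjectBuilder.new(StrictFooValidator, FooFactory)

   default.create(object)  # validated by FooValidator
   strict.create(object)   # validated by StrictFooValidator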


That's a false dilemma. One shouldn't criticize from the most extreme angle, because then it would be fair to assume the alternative is equally extreme, and I don't see anyone advocating for testing backends by exercising them only from the UI level.

Iow "if we take things to the extreme bad stuff will happen" contains its own solution.


I don't think my point was extreme at all. The example I posted was an extremely typical, even tame example of dependency injection and architecture for the benefit of testability. I've seen countless variations of pretty much exactly that or something quite similar (pulling out explicit dependencies to be injected that should never reasonably change in normal circumstances).

Regarding "should we only exercise them from the UI level" - I'm not 100% sure what you're getting at - but if your point is that we should focus our testing on business-facing use cases and not trivialities of what class calls what method, then we're speaking past each other and are in complete agreement.


I think a method called "create" that does in fact save the object is a bad example. Also, why is it possible to have invalid objects in the first place? Wouldn't it be the object's responsibility to make sure it's valid?
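E.g. something along these lines (just a sketch), where the object refuses to be constructed in an invalid state, so neither FooValidator nor the builder's check is needed:

   class Foo
     attr_reader :name

     # Sketch: invariants enforced at construction time, so an invalid Foo
     # can't exist in the first place.
     def initialize(name)
       raise ArgumentError, "name is required" if name.nil? || name.empty?
       @name = name
     end
   end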


The create vs. save is a typo; those could easily both be create, and my point is identical.

Regarding validation - it depends on who you ask. The Single Responsibility Principle taken to an extreme would probably support the idea of having a single class whose sole purpose is to validate an object.

Nitpicking aside, the point is that dependency injection often makes it harder to reason about code (as large parts of the "business logic" are relegated to a dependency that isn't obvious to locate), and it is often done strictly for the benefit of the tests rather than for the functionality or comprehensibility of the code.



