You can replace TDD with object orientation, functional programming, scrum and other buzzwords. I never understood why people try to make everything into a rigid religion. Maybe it's a substitute for thinking for themselves?
Decisions are costly, and things that reduce the number of decisions you have to make can be useful in moving work forward and making you more productive. Or at least making you feel productive. It's obviously not a great idea to follow one paradigm always, whether it's appropriate or not. But I think it's attractive for the reason you mention. We can't reason about every little thing all day long, so rubrics that come from trusted experts both reduce cognitive overhead and make us feel like we have "borrowed" somebody else's expertise.
I can see the benefits of reducing decision time. But I have seen it many times: you start with a paradigm that is useful at the beginning, and over time you discover more and more situations where it no longer works well. Instead of re-evaluating the situation, a lot of people just stick to the paradigm even though it's obviously no longer useful. I see that a lot with Scrum people, where the answer is often "you are doing it wrong" but they can't answer the question of how to do it right.
On the flip side, I find having a system like Scrum is beneficial even when parts of it are being done wrong because it's still an agreed-upon system that the entire team adheres to. Having 5 or 10 people each going at things the way they see best based on their experiential understanding can lead to counterproductive chaos even if each individual's approach in a vacuum is sound.
It's great when everybody can see through stuff like this and work with a coherent but imperfect approach (no matter what field you are in). I guess it takes knowledgeable, experienced folks to really understand the context and make the best tradeoffs, since no way of doing something is perfect for all applications.
Saw a perfect example of this on stackoverflow (or stackexchange I forget which).
Someone posted some actual code: it had a switch statement with only 3 cases, about 7 lines in total, and if the switch didn't match anything it threw an exception afterwards. One of the replies was along the lines of:
"I would be careful with that switch, it looks like a code smell, this is how I would do it: 2+ interfaces, 3+ classes, abstract this, factory that and over 100 lines of code."
I just refuse to believe a simple switch that everyone can understand in about 1 second is not a better solution than some complex alternative, even if the alternative does use a "pattern".
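For illustration only, the code in question was roughly this shape (a hypothetical reconstruction; the enum, the values, and the method name are made up, not from the actual thread):

    enum Region { DOMESTIC, EUROPE, INTERNATIONAL }

    class Shipping {
        // Roughly the disputed shape: three cases, a handful of lines,
        // and an exception for anything unexpected.
        static double shippingCost(Region region) {
            switch (region) {
                case DOMESTIC:      return 5.00;
                case EUROPE:        return 12.50;
                case INTERNATIONAL: return 20.00;
                default:
                    throw new IllegalArgumentException("Unknown region: " + region);
            }
        }
    }

The suggested "fix" replaces that dozen lines with an interface, several implementations and a factory, and that trade-off is what the rest of this thread is arguing about.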
Without having seen this specific SO thread, I can easily imagine both 1. agreeing that the 7-line switch-based code is the better solution to a little SO-question-sized problem, and 2. finding that same solution with no abstraction inflexible and "smelly" in a huge codebase very actively developed by tons of people. Context matters, and knowing what the context is currently and predicting what it will be in the future are some of the tough things about programming.
This. If it's a small program with limited scope, switches are fine. If it's part of a much larger program, there's a chance that switch will explode into switching on the same thing all over the codebase.
However, I think GP's point is valid. If during a code review I have a switch and it makes no sense to abstract it, I'd smack the reviewer on the head if he/she wants me to "fix" it.
I used to have answers, now most of the time I say 'it depends'. Some days I wish I could take the blue pill and not have to try to explain the nuances of context.
I think objectively the code is better for it but sometimes it takes more social sophistication than I possess. I end up spending too much time talking in post mortems and it all sounds like 'I told you so'
Indeed. One of the reasons that I, for one, tend to be very critical of that sort of advocacy is that one day someone less wary will read it, and they will think it is good advice because it looks convincing and comes from someone with a reputation in the industry, and then they might become the next junior developer I have to mentor or the next ex-senior-developer whose code I have to fix before I can get on with doing some real work. It's happened before, and no doubt it will happen again.
If you're doing OOP this can mean leveraging polymorphic behavior rather than switching on data or data types. It's good to keep in mind even if it doesn't fit every single case.
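As a minimal sketch of what that looks like (a hypothetical Shape example, not anything from the thread), the branch on a type tag disappears into virtual dispatch:

    import java.util.List;

    // Polymorphic dispatch instead of switching on a type tag.
    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    class Areas {
        // Callers never branch on the concrete type; the "switch" is implicit
        // in the dispatch, and adding a Triangle touches no existing code.
        static double totalArea(List<Shape> shapes) {
            return shapes.stream().mapToDouble(Shape::area).sum();
        }
    }

Whether that indirection is worth it is exactly the context question the rest of this thread chews on.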
A switch can increase the cyclomatic complexity of code and make it difficult to visualize how code flows at runtime. It may be the best solution, but it often violates the single responsibility principle. What is best is often situational and depends on a developer's code aesthetic. But a single switch statement is meaningless in isolation, so you can't evaluate whether it's appropriate without looking at the context in which it's used.
> A switch can (...) make it difficult to visualize how code flows at runtime
You mean visualize with a tool, or in your head? If the former, I'd love to know of a tool for visualizing code flow.
If the latter, then I think a switch - with entry and exit points, and all code paths clearly visible together on one screen - is much easier to visualize than an implicit branching hidden behind interfaces, requiring you to jump around definitions of several objects.
It may be easier, when considering the switch in isolation. But code is rarely considered in isolation. A switch creates multiple possible code paths. Combine it with other switch statements and the possible code paths grow geometrically. This is the theory behind measuring cyclomatic complexity.
When I said visualize, I meant primarily in the programmer's mind, but there are other tools that help. Code coverage tools ensure that all branches of code are tested, but fail to ensure that all permutations of all branches get tested. This is something where fuzzing tools can attempt to ensure that all branches are taken, but will quickly run into the limitations of the underlying machine if the cyclomatic complexity is too high.
Beginning programmers learn flow control very early in their development. Most will also learn fairly quickly that creating a rat's nest of ifs and switches can become difficult to reason about. It's for that reason that we learn abstractions that help us increase the complexity of the tasks we can solve without increasing the complexity of the code we have to understand.
> A switch creates multiple possible code paths. Combine it with other switch statements and the possible code paths grow geometrically.
That is certainly true, but it is equally true of any mechanism for making those decisions. With some mechanisms, like if-else and switch-case, the decisions are explicit. With others, like virtual function dispatch, the decisions are implicit. Either way, the decisions are still being made and the combinatorial properties of multiple decisions are still the same.
Switches seem to increase the metrics for cyclomatic complexity absurdly in all of the tools I've used to measure it, compared to directly equivalent constructions, like a raft of else-ifs or indexing into a lookup dictionary. Things like a simple mapping between two slightly incompatible enums will have whopping cyclomatic scores, despite being incredibly simple to reason about. Taking that kind of a metric to heart can lead to a different flavor of cargo-culting.
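To make that concrete, here is the sort of thing I mean (a hypothetical example with invented enums): both versions below do the same trivial mapping, but typical tools count each case of the switch as a branch while the lookup scores as a single path.

    import java.util.Map;

    enum Color { RED, GREEN, BLUE }
    enum LegacyColor { R, G, B }

    class ColorMapping {
        // Version 1: a switch. Most cyclomatic-complexity tools count each
        // case as a branch, so this trivial mapping scores around 4.
        static LegacyColor toLegacySwitch(Color c) {
            switch (c) {
                case RED:   return LegacyColor.R;
                case GREEN: return LegacyColor.G;
                case BLUE:  return LegacyColor.B;
                default:    throw new IllegalArgumentException(c.toString());
            }
        }

        // Version 2: a lookup table. Equivalent behavior, but the tools see a
        // single code path and score it around 1, even though how hard it is
        // to reason about hasn't really changed.
        private static final Map<Color, LegacyColor> TABLE = Map.of(
                Color.RED, LegacyColor.R,
                Color.GREEN, LegacyColor.G,
                Color.BLUE, LegacyColor.B);

        static LegacyColor toLegacyLookup(Color c) {
            return TABLE.get(c);
        }
    }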
Buzzwords are buzzwords. The reason people flock to those things is that they feel it offers value to them.
From my perspective all three tried to solve maintainability. OOP via encapsulation and standardisation, functional via uncompromising standards, and TDD via validating each small subsection of the solution.
They all "work" if you commit to them enough. But none of them work when you half arse them. So instead of people working hard they move on to the next "solution" because the last solution required too much effort (TDD is definitely a victim of this, considering how much work it requires).
PS - People being "anti" something are just as religious as the people being "pro" something, except the anti people often have a smug sense of superiority. See the people arguing against GOTO in any and all circumstances in spite of how little sense their arguments make.
"They all "work" if you commit to them enough. But none of them work when you half arse them."
This sounds dangerous. It gives rise to the attitude that if X doesn't work you are just not doing it enough. Instead of accepting that maybe it's not the right approach to this particular problem.
From an experiential perspective, here are paradigms that have resulted in me spending less time creating and hunting down bugs, which has made me noticeably more productive:
1) Unit testing/TDD. Specifically, setting things up so that I can run a unit test in less than a second on any code I'm working on has been a huge productivity boost vs. running a whole suite periodically (a sketch of what I mean follows at the end of this comment)
2) Immutable variables (or generally avoiding mutation)
3) SRP and cyclomatic complexity reduction
4) Pattern-matching (Elixir/Erlang) which also reduces the amount of branching in code and thus its depth (reducing cyclomatic complexity further)
5) FP (which incorporates (2) and (4) plus generally constraining side effects and I/O to the smallest part of the code possible, and leaving the rest non-side-effecting)
I have done scrum, but I haven't seen it as directly effective as these other things, it's more for effective PM IMHO.
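To make item 1 concrete, here is the kind of sub-second test I mean (a JUnit 5 sketch with a made-up pricing function; because the code under test is pure, there is no setup and it runs in milliseconds):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class DiscountTest {
        // Pure function under test: no I/O, no shared state, so the tests
        // run instantly and can be re-run on every save.
        static double discountedPrice(double price, int quantity) {
            double discount = quantity >= 10 ? 0.10 : 0.0;
            return price * quantity * (1.0 - discount);
        }

        @Test
        void bulkOrdersGetTenPercentOff() {
            assertEquals(90.0, discountedPrice(10.0, 10), 1e-9);
        }

        @Test
        void smallOrdersPayFullPrice() {
            assertEquals(30.0, discountedPrice(10.0, 3), 1e-9);
        }
    }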
All the items you list are useful things. But you probably have noticed that they work well in some cases but in others they don't. The problem starts when people declare that X is always the right thing even in situations where it clearly doesn't work.
Agree, but I'm with John Carmack when Mr. C++ says something like:
"My pragmatic summary: A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in. In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention. Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible."
>Maybe it's a substitute for thinking for themselves?
Re-thinking everything for every project is wasteful. Sometimes you already know a way to do a project that works for you. Sometimes that way is mirrored by a lot of other people.
Other times you are selecting these things because it's a common thing to talk about and when you're teaching people the project, you can prime their learning by telling them to expect these models and methodologies in this project.
The mistake IMO is making the religion industry-wide when really you'll be lucky to implement it consistently just in your company/project.
That's very much the point, and I'd suggest that "not thinking for themselves" is sometimes a totally reasonable goal.
Joel Spolsky covered this fairly well in his piece on McDonald's-style software. The best way to write a single program is to master programming, and develop alone or in a small group of masters, and build a beautiful and enduring tool with whatever practices suit you best.
But none of that can be packaged up and passed around to new programmers on demand. It takes time to master, and it certainly can't be smoothly overseen by someone who never learned it. The best way to get 100,000 programmers writing a billion lines of code is to develop simple routines that can be followed and overseen by anyone.
The results won't be as good as mastery, but in many cases they don't need to be. There's a lot of simple software out there which benefits more from being done now than being done right.
Having said all that, I do think rigidity is becoming a problem for the software industry. It's not being employed at the levels needed to make software work; it's being employed at the levels needed to make development legible. Adding procedures that worsen development but make it easier to watch is a serious issue.
Experts often make lousy teachers. They don't have the slightest clue what they know and so they can't explain it to anyone else. Or they can explain it but can't write it down so they're always the bottleneck.
The worst of this group often become indispensable, because they've designed something nobody else can understand. It's politically difficult to push back on this sort of problem, which is why I am job hunting now. Of the six or so really smart guys that remain on my current project, I trust maybe three of them, but would only work with two again (the third is so happy to be here that he doesn't resist any of the chaos)
But you aren't thinking for yourself. You're thinking for the team.
Software is a team sport. All of my decisions affect how long it takes for you to fix a simple bug. All of yours affect how often I have to stay late to fix a production issue.
I think these things become a religion because there is a large group of developers who simply won't do something unless management makes a rule about it. So now to get them to participate, dogma has to be involved.
In general these are the same folks who attribute all adversity to bad luck instead of lack of forethought. They won't see the consequences of their shortcuts because that's just the way things are.
> I think these things become a religion because there is a large group of developers who simply won't do something unless management makes a rule about it. So now to get them to participate, dogma has to be involved.
At that point, you're basically in permanent damage limitation mode anyway, though.
Software development is a skilled and creative activity. Certainly not everyone has to be a "rockstar" or a "ninja" or whatever we're calling those people this week. I've worked with plenty of developers who do a decent, competent job without spending their lives thinking about code 24/7 and staying up to speed on every little development in the industry. However, there is a baseline level of giving a $#!% that is necessary to do a decent, competent job.
You can also replace it with vaccines, evolution and climate change.
I have the feeling that, while TDD is not an absolute, this article doesn't put forward good examples of cases for skipping it, or at least doesn't try to discuss those examples and work out why they aren't a good fit. Instead, it takes a shot at stereotypes and builds a strawman on top of them.
The article is calling for moderation. It's not saying you should skip TDD, but that you should understand what the exceptions in TDD are for when to write and when not to write tests. The author is asking the experts in TDD (and, implicitly, other methods that become dogma) to qualify their statements rather than issue commandments like "always write tests first" (a statement you see in material teaching TDD, which only later gets qualified to show where and when to actually write those tests first and where not to). But since many people probably don't read past the first few chapters, they miss the nuance.
The whole point of TDD is to submit one's code to experimental validation. It seems wrong that, in order to better grasp it, one should be taught all the counter-examples.
Moderation comes from practice, in fact. It comes from trying and failing, not from extra-prodding the experts, or from studying the matter. And should one be confronted with a zealot, their experimental rebuttal should be convincing enough that zealotry isn't a problem.
On the other hand, OP's assertion that, e.g., a pseudo-random generator shouldn't be tested comes without explanation, and frankly is unconvincing: there are acceptance tests (they may not be enough, but passing them is the least you can do) https://en.wikipedia.org/wiki/Randomness_tests, and you can also split the seed from the function you implement and test thoroughly that the function performs what the algorithm describes.
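To sketch the "split the seed from the function" point (my own example, a textbook LCG using the common glibc constants; nothing here is from the article): once the state is passed in explicitly, the generator step is a pure function whose behavior for a known input can be asserted exactly, independently of any statistical randomness tests.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    class LcgTest {
        // One step of a textbook linear congruential generator:
        // next = (a * state + c) mod m. The state is an explicit argument,
        // so the function itself is pure and trivially testable.
        static long nextState(long state) {
            final long a = 1103515245L, c = 12345L, m = 1L << 31;
            return (a * state + c) % m;
        }

        @Test
        void followsTheDefinedRecurrence() {
            // (1103515245 * 1 + 12345) mod 2^31 = 1103527590
            assertEquals(1103527590L, nextState(1L));
        }

        @Test
        void stateStaysWithinRange() {
            long s = 42L;
            for (int i = 0; i < 1000; i++) {
                s = nextState(s);
                assertTrue(s >= 0 && s < (1L << 31));
            }
        }
    }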
That's not the whole point, that's a point. It's also got a large emphasis on testing frequently, writing tests frequently, and writing them in close temporal proximity to the code (dogmatic preference is before, but practically the order is less relevant), and writing or executing tests after relatively small code changes (additions, deletions, modifications).
Imho, that's the whole point. Testing frequently and using small steps is merely a question of how TDD is best used.
Likewise, I would totally accept that someone that writes zero tests but has an inexpensive and permanently available source of validation (e.g. from prod) practices TDD.
> Likewise, I would totally accept that someone that writes zero tests but has an inexpensive and permanently available source of validation (e.g. from prod) practices TDD.
I'm not sure how you can do validation without writing tests, but let's clarify our terms. I work in the embedded field. We rarely, if ever, write unit tests in the typical sense. Most tests are based on black-box testing of the system (related to integration tests), sometimes of a system component if we can plug in a simulator of the other components (the closest we get to unit tests). Most often, these tests are not written in the same language as the system or even running on the same platform. They talk over serial or something to the device and record the responses back.
These tests still have to be written, even if it's just:
Send messages X, Y, Z
Receive messages A, B, C as responses
A, B, C should match *data* modulo time tag
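In code, even that minimal script has to live somewhere. Something roughly like this hypothetical sketch (the SerialLink interface, the message names, and the wire format are all invented for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical wrapper around whatever transport the test rig uses.
    interface SerialLink {
        void send(String message);
        String receive();   // blocks until the device answers
    }

    class DeviceBlackBoxTest {
        // Stand-in so the sketch is self-contained; in the real suite this
        // would be wired to the device over serial.
        SerialLink link = new SerialLink() {
            public void send(String message) { /* would write to the port */ }
            public String receive() { return "A@12:00:01"; }
        };

        @Test
        void respondsToStatusQuery() {
            link.send("X");   // stand-in for a real message
            // Compare everything except the time tag, as described above.
            assertEquals("A", stripTimeTag(link.receive()));
        }

        // Assumed wire format "payload@timetag"; keep only the payload.
        static String stripTimeTag(String message) {
            return message.split("@")[0];
        }
    }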
A test has been written. And if this testing is run frequently against the code changes, and failures treated typically as blockers on working on other features or issues, then this is definitely TDD. Often, however, these tests are run infrequently, ergo not TDD, despite the comprehensiveness of the test suite. The tests do not drive the development (though perhaps they should more often than not).
> And if the director of a movie has asked you write some code to 3D render a "really awesome-looking explosion" you won't benefit much from unit tests either, unless you can write a unit test to determine awesomeness.
I was skeptical of this post to begin with but this line sums it up for me. The author is railing against a perceived threat. Of course you can't unit test _awesome_. That's ridiculous.
So what's the claim here?
I think more developers need to be introduced to the concept of the "state of the art" and be led to resources like SWEBOK[0]. It has a section on software testing, and it's good to know what the state of the art is across the industry.
I don't have the link in my db, but I've come across studies that have shown no significant difference in productivity between test-first and test-after approaches to software development. That's the only real religious sell I still hear these days, and it has been rather well debunked at this point. All that matters is that software is tested... how you approach it seems to just be a matter of personal preference.
> I was skeptical of this post to begin with but this line sums it up for me. The author is railing against a perceived threat. Of course you can't unit test _awesome_. That's ridiculous.
You'd be surprised. When I advocate unit-tests, I often get the question back "But how do we unit-test this GUI-code?". Which, as you point out, is ridiculous. But they still ask the question.
The truth is that these people have intertwined their GUI code and business logic (and maybe even DB-access code) and they don't even realize that this is a code smell.
They don't see how they can apply unit-testing to their code, because they haven't realized that their code should be divided into smaller, independent units.
To "convert" these people (to further reiterate the religious analogy) you first need to convince them that you're not actually crazy. Towards that end, this might be a start.
I find that systems written with few tests are inordinately hard to add tests to. So I tell people "please TDD" because if they don't, the tests are never backfilled.
This is a common occurrence. The practice of writing tests shouldn't be avoided. You still have to write the unit tests, which implies your code must also be testable. The difference in productivity between "write the test first" and "write the test after you've written a bit of code" doesn't seem to matter as much as "write the tests."
As someone who reviews code for a good portion of my day it makes little difference to me how you, the programmer, managed to write the tests. We share the same goals and the end result seems to matter more than how it was accomplished.
I think the "test-first" approach has more focus on ensuring you don't write any production-code until you have formalized the requirements for that piece of business-logic.
If you write the tests afterwards, it's hard(er) to know if you've managed to retro-actively cover all the formal requirements. And that makes it harder to later on trust that the tests will cover your ass entirely (as far as formalized business-requirements go).
I can definitely see the appeal, but I'll be the first to admit I don't go the full mile myself, always. And that's I guess what this blog-post is all about. Be critical about how you apply well-meant advice.
> I think the "test-first" approach has more focus on ensuring you don't write any production-code until you have formalized the requirements for that piece of business-logic.
In fact the SWEBOK 3.0 section on test-driven design mentions this. It was first proposed in extreme programming as an alternative to more formal specification methods.
> I can definitely see the appeal, but I'll be the first to admit I don't go the full mile myself, always. And that's I guess what this blog-post is all about. Be critical about how you apply well-meant advice.
As banal as such advice is if that is as far as the article went I'd be fine with that. However the argument was weakened when the author eschewed unit tests out of, what appears to be, ignorance.
I don't always write tests first either... when I'm prototyping I'll usually try a few approaches first. However, I generally throw out that code once I find a tack I want to take and begin specifying the interfaces, constraints, and invariants through tests.
In higher-level languages with sound type systems I generally rely on properties of the type system and compiler in lieu of tests I'd write in a more dynamic environment.
It's just a matter of preference that I tend to think more before I write code and write tests first.
The question is, how do those studies measure productivity?
In my case, I've found I write more readable, easier to understand code when I use tests to drive out my design. It may actually take more time to write the code itself, but the product ends up better, imo.
(Granted, I think I take the approach this article advocates and only do so when I can help inform the design.)
> In my case, I've found I write more readable, easier to understand code when I use tests to drive out my design.
In my case, I've found the opposite—when I use tests to drive my design, I end up with lots of tiny moving parts; each one individually is understandable, but the design as a whole is not, and figuring out how to do anything different is an exercise in frustration. I end up jumping between 2 or 3 files, trying to figure out where anything happens, because it's all been unit tested into perfect isolated pieces.
I think that when you need to test the code, the naive solution is to write less complex code, which is a win in itself. In some cases you can get rid of all the coupling/complexity and state, so that the code/function/unit never breaks, and writing automated tests for it becomes kind of unnecessary.
It's like design patterns, scrum etc.. When the tool is used correctly it is awesome. When it's mis-used it becomes a cargo cult style liability.
With TDD a large part of the religion thing comes from people like Uncle Bob. It's not that his style is bad, but the biggest problem I've seen with his style is when people who are already passionate with their craft and maybe practicing TDD go crazy with his advice while missing the context of his message.
That context is his focus on systematically sloppy corporate shops that have picked up really bad habits and won't touch the tool... often the same ones that still use a waterfall development process (now with a thin Scrum veneer). When you have these kinds of shops, it is probably better to go a bit to the other extreme first, and then find a happy medium after you develop better habits and understand the tool.
>With TDD a large part of the religion thing comes from people like Uncle Bob. It's not that his style is bad, but the biggest problem I've seen with his style is when people who are already passionate with their craft and maybe practicing TDD go crazy with his advice while missing the context of his message.
I honestly don't know what Bob should do. I know he occasionally says stuff that gets people upset. However, the majority of the stuff that is attributed to him is just outright false. It doesn't matter how many times he publicly corrects the record.
Uncle Bob does not believe in unit testing every function.
Uncle Bob does not believe in a 1:1 mapping of function with unit test (or even close). Uncle Bob is fine if a unit test spans/covers multiple functions.
Uncle Bob is not against unit tests that touch the database or disk.
The list goes on and on.
I watched many of his Clean Coder videos, and was surprised to see how nuanced he is. He does not speak in absolutes. His blog posts are much less polished, and often get him into trouble. But even there I see a fair amount of nuance.
Not as often as some of his critics might suggest, perhaps, but he really does speak in absolutes at times and some of those absolutes really are as foolish as his critics say.
He also frequently adopts a style that presents his personal opinions and experiences as if they were more than that, despite a lack of other evidence to support his positions or even the existence of other evidence that seems to undermine those positions. While he may hide a small disclaimer somewhere, many of his readers aren't going to see it, and I don't believe for a moment that he isn't fully aware of that when he chooses to present his material as he does.
I agree, but even Uncle Bob advocated TDD not as a religion of itself, but as a means to achieve the underlying goal - test your code.
In one of his articles he mentioned that there are very few cases where he deems it acceptable to not write unit tests immediately, but one such scenario is where you develop your code interactively using a REPL. Of course it's still better to then create the unit tests for regression testing, refactoring etc.
But the point is that the most important thing is simply that the code is tested somehow. Basically what agentultra said above.
Sadly, I don't think all of the black and white advocacy for certain ideas that we see from some sources in our industry is because the experts don't realise how expert they are. I think much of it is because it sells more books and consulting gigs, and because being controversial helps to keep a high profile.
I wouldn't call myself a regular practitioner of TDD but I can tell you that it's incredibly valuable in certain circumstances. When you're writing code that, for example, must be mathematically accurate, writing unit tests can be very useful just so you can assure yourself that you wrote the calculations down correctly.
They're also very useful for other "pure" operations. A factory should always produce an object with such and such properties. A user input validation method should always fail given some classification of invalid input, etc.
Hardcore TDD is definitely not The Way for me but it's very good to use at times. You'd be amazed how much it speeds up testing these kinds of processes when you can just watch your list of tests change from pass to fail or vice versa. As with anything, your mileage may vary.
I'm not. I know you can do TDD without unit testing but it's very rare in my experience. I'm saying that the TDD cycle (write tests, run all and verify failures, write actual code, run all tests hoping to see no failures, refactor as needed until step 4 passes) works best when applied to unit tests. Integration tests, acceptance tests, and other "big" tests don't lend themselves well to that style of programming, in my opinion.
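For concreteness, that cycle at unit-test scale looks something like this (a JUnit 5 sketch with a made-up example): the test comes first and fails, the implementation follows, and the refactor step leans on the test.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class SlugTest {
        // Steps 1-2: write the test and run it; it fails (red) because
        // Slug.of doesn't exist yet or does the wrong thing.
        @Test
        void lowercasesAndReplacesSpaces() {
            assertEquals("hello-world", Slug.of("Hello World"));
        }
    }

    class Slug {
        // Steps 3-4: write just enough code to make the test pass (green).
        // Step 5: refactor freely, re-running the test as a safety net.
        static String of(String title) {
            return title.trim().toLowerCase().replace(' ', '-');
        }
    }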
I think RyanZAG is saying you can do unit testing outside of TDD, not that you can do TDD without unit testing.
The fact is that both are true. You don't need TDD to use unit testing — you can implement first and test afterwards. In some cases, this may actually be preferable.
However, although TDD isn't a silver bullet solution, it can be very helpful in a lot of cases. Like with a lot of things, the usefulness of TDD depends on the situation at hand, and a good software engineer chooses the right development techniques for the job.
Unit testing of some sort, however, is considered good practice regardless of whether you're using TDD or not. (And of course, integration and acceptance testing is a good idea, too.)
> You'd be amazed how much it speeds up testing these kinds of processes when you can just watch your list of tests change from pass to fail or vice versa.
That sounds like a test-first strategy, which I would categorize as TDD. Yes, you can write unit tests without it being TDD, but I'll second the notion that the sorts of situations the parent pointed out benefit from a test driven approach.
Remember the mind map craze? For a few years, educators everywhere wanted you to do almost all learning by drawing circles with words and connecting them. For many people, this type of visual link between concepts allows them to learn far faster than regular textbook style.
For some people, mind maps make zero sense and they learn far slower. Some concepts just don't fit into mind maps at all and even for very visual people it just slows them down. There's often a more specific visual learning method for many problems than a simple mind map too.
TDD is mind maps. Give it a try, but don't feel bad just because mind maps don't gel with your brain.
I'm unfamiliar with the mind map craze, though I'm familiar with mind maps themselves; we used the concept as a tool in grade school for writing essays and papers. I'm curious where and when it became dogmatic for some educators to push it beyond "this is a tool that can be useful, use it for some tasks and see how it works" to "this tool is the shit, mind map all the things".
Education is almost as cargo-culty as software development, with nearly as little empirical research being done to check the validity of the various schools of pseudo-religious dogma.
The only thing that has made me even more productive than TDD is functional programming/immutability (in conjunction with TDD).
You're basically wasting your time not doing it. If you want maximal productivity, learn TDD. Writing unit tests upfront is most definitely a case of "one step backward, two steps forward." Your code under test will be written better (which is to say, more modular/less cyclomatically-complex/easier to maintain and refactor), you'll break it far less often (from the same body of code or other code that relies on it), and you'll feel far more confident about your code. You won't be afraid to refactor it (afraid that you'll break something) because the tests validate it. It's also invaluable on teams, where not everyone is sharing the same mental model.
Speaking of which, remember that your mental model of the code is the true limitation. Unit tests allow you to offload some of that, so that you no longer have to have a perfect representation of all possible states your code can produce, inside your head (which if even possible would allow you to write perfect bug-free code). This is incredibly liberating. You can literally feel the difference. I know it sounds all touchy-feely to describe it this way, but so be it. But hey... If you can imagine every possible state all your code will ever produce at all times, you don't need TDD nor unit-testing, because you already know the "fails". ;)
I didn't do TDD for most of my career; the products I delivered worked, were delivered on time and reached the goals set out by the business. Why bother? Because it's good practice? Hardly a valid argument from a business perspective.
"Business perspective" can be like hill climbing: you evaluate a potential direction based on whether it makes your situation more profitable or less profitable, regardless of the peculiarities of your local landscape.
I think I got into TDD for its aesthetic or philosophical appeal. The idea of being able to make reliable software by first writing down what it's supposed to do just struck me as poetic and tractable. On the one hand, I spent a lot of time and money going through a dip in productivity while learning all the things needed to make testing work for me. On the other hand, I eventually emerged from that dip with skills and a mentality that allow me to be productive and relaxed in a way that I wouldn't have thought was possible.
I use it because it's faster and it makes making changes later easier. It also allows me to leave the app mid-feature (happens a lot as I do more than just code) and come back to it without having to load the whole app into my mind again. If I get my specs to pass then the app will work. Simple.
What about it seems so horribly wrong to you? I'm by no means a zealot, and will often build things before writing tests, but there are definitely some problem spaces, such as writing reusable libraries, where going the other way round is a big help.
I can answer - most of the time I do not know at the start what data structure or effect should be the exact result of my code, even if it is just a more complicated version of "hello world".
In that case, write higher-level tests. TDD works best and provides the most value when you go outside in. So your initial test would just determine that you expect to see "hello world" printed. Maybe on the next level down you now have to do some validation, and you have to define how the interface (not used in the Java sense here) of the function or object or whatever that does the validation looks. The key thing here is that you don't need to think about how that will actually work on the inside, but about how you want it to be exposed.

As you start to TDD the portion of your code that implements that new functionality, your tests shouldn't care too much about how the code under test is implemented, only about its outside behavior (this might include calls to other functions, though those ideally get mocked). This process makes it easier for many practitioners to build up, incrementally, functionality that they don't know in advance how to implement. That's why it's called "test driven development" and not "test first development".
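A rough sketch of that outside-in flow (all names invented, and a hand-rolled stub instead of a mocking library): the outer test pins down only visible behavior and, in doing so, defines the Validator interface it wishes existed.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Outermost test: only cares about the visible behavior.
    class GreeterTest {
        @Test
        void greetsTheValidatedName() {
            // The Validator interface is *defined* here, by how we want to
            // call it, before any real implementation exists.
            Validator acceptEverything = name -> true;
            Greeter greeter = new Greeter(acceptEverything);
            assertEquals("hello world", greeter.greet("world"));
        }

        @Test
        void rejectedNamesGetNoGreeting() {
            Validator rejectEverything = name -> false;
            Greeter greeter = new Greeter(rejectEverything);
            assertEquals("", greeter.greet("world"));
        }
    }

    // Only the outside behavior is pinned down; how Validator eventually
    // gets implemented has its own test-driven cycle one level down.
    interface Validator {
        boolean isValid(String name);
    }

    class Greeter {
        private final Validator validator;
        Greeter(Validator validator) { this.validator = validator; }
        String greet(String name) {
            return validator.isValid(name) ? "hello " + name : "";
        }
    }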
All that being said, determining the right amount and granularity of tests is certainly much more of an art than a science, and depends on the developer(s), what the system is doing, and the framework and language you are using. I definitely write many more tests when I'm writing Ruby code than when I'm writing Rust. I'm also very new to Rust, so we will see if that lasts.
TDD is also a bit weird in Rust, since you can't do nearly as many of the dynamic tricks test frameworks tend to do. So I find myself writing fewer tests for this reason, or at least different kinds of tests.
In TDD, you write the test to help figure this out. The idea is that you start with "how do I call this method/interact with this code/etc?", then ask yourself, "what should the result be?"
TDD is meant to help inform your design thinking, but there's no rule that says you can't change your mind as you go.
In particular, you should be changing your mind as a result of trying to write the tests. If it's hard to write a test, that's telling you something - the interface is clumsy to use, for example. TDD is different from just writing tests, because the tests drive development. "Drive" may be a bit too strong, but they at least significantly influence it.
By pg in On Lisp:
>> It’s a long-standing principle of programming style that the functional elements of a program should not be too large. If some component of a program grows beyond the stage where it’s readily comprehensible, it becomes a mass of complexity which conceals errors as easily as a big city conceals fugitives. Such software will be hard to read, hard to test, and hard to debug.
One of the problems is not treating tests as a design problem as well: what to test, and what the test code should look like. I never see the rule of simplicity applied to the test code. Eventually it all becomes like a big city that conceals fugitive bugs.
I see a lot of projects where everything is just a vast pile of context-based decisions. If you factor out all of the things that are not related to databases or handling user input, there's very little left.
However, I can imagine that having that logic built up "abstractly", by imagining user-input and database-output objects, and fully tested, is something that would really help a project.
Having only 20% code coverage or less is nothing to be ashamed of, as long as that code is the code that matters, and it's better, more abstract and more readable because you started writing it with TDD, before things got ugly with user input and database results.
> You need to take every piece of advice and figure out when and where it does not apply.
This. It is almost obvious. But you still see lots of cases where advice is followed where it doesn't make sense.
A particular contrarian thing that I do in Java: when I need objects that are pure Data Transfer Objects (DTOs) with no logic whatsoever inside, I just declare all members public and do not bother writing (or generating) getters and setters. I know we were told NEVER to do this, but in this case there is no attempt to encapsulate anything, as there is nothing to encapsulate.
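Concretely, something like this hypothetical sketch:

    // A pure data carrier: no behavior and no invariants to protect, so the
    // fields are simply public instead of being hidden behind getters/setters.
    public class CustomerDto {
        public String name;
        public String email;
        public int loyaltyPoints;
    }

The moment an invariant shows up (say, loyaltyPoints must never go negative), that's when encapsulation starts earning its keep again.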
It happens all the time. It happened with "design patterns". It happens with frameworks, languages, and many other things. Human nature. It's quite hard to argue with someone who cites an "expert", because you are a relative nobody.
It happens remarkably easily. But like every other mistake, this is one you frequently have to make for yourself to understand why it's a mistake.
The trick is having enough introspective capability to identify when you have made a mistake, and not fall back on dogma to say "I followed The Path, so it can't be a mistake".
All methodology has the potential to become pseudo-religion.
The worst ones are the ones that infect every aspect of the business, like ITIL. The scope and complexity of the methodology is so broad that you need to hire priests (i.e. consultants) and build shrines (ServiceNow, Remedy, etc.) to the religion, and you end up in a state where nobody in the business understands WTF is going on.
That answers the question frequently asked here re: why horrific and dysfunctional companies like IBM get a tight grip on other companies.
Six sigma is definitely that way, especially when you consider that, like AA, it is a 12 step program. On a side note if you put the two lists next to each other six sigma almost looks like a dispassionate generalized version of AA ...
This is so not new, it has been covered in religions for thousands of years. Buddhist teachings tell that there are three forms of understanding.
The first is devotional understanding. Martin Fowler is a wise and learned programmer and says that TDD is the best way, so One believes that.
The second is intellectual understanding. TDD is good because it codifies the programmer's intentions for what the code should do, makes refactoring easier, prevents regressions and all the other logical reasons that TDD proponents talk about.
The third kind, and the only one that's true understanding, is experiential understanding. TDD is good because you've experienced development with and without it and feel how different they are. Once you get to this point, there's no dogma about writing tests first or anything of that nature. The practice becomes natural and you're free to deviate when it feels wrong.
The interesting thing about this model is that it maps pretty directly to current research on learning.
In this context, experiential understanding (when you're an expert) is the point where you start suffering from Expert Blind Spot.
And there's a bunch of research that suggests the best teachers (by default) are the people at middle point, intellectual understanding, precisely because the understanding is less automatic and integrated.
Interesting. For me, teaching when I'm an expert means I must begin another path of learning. I'm at Ri in understanding the material, but at Shu at understanding how novices see the material. But I could well believe people at the middle point will always be better than me.
For those who want a good example, I strongly recommend Julia Evans, aka @b0rk:
She writes a lot of novice-focused material on technology. She does a much better job at explaining things than I would. And she does it with an excitement for the material that is infectious. Even for those who know the technical details, I recommend following her; I always walk away saying, "Wow, she's right! I had forgotten how cool that is!"
Or, arguably, the interesting thing about current research on learning is that it maps pretty directly to things Buddhists have been saying for thousands of years.
(I know nothing about Buddhism or learning research, just making a point about chronology.)
Buddhism also talks a lot about purity. It doesn't necessarily follow that those practices should be abandoned or even avoided, just because parallels can be drawn with religion.
Let's say that I spent some time brewing your favorite tea and that I did everything right so that the tea was ~perfect~. And then I took a pin and dipped it in feces, and then dipped the tip of that pin into your tea. Would that in any way have any effect on your enjoyment of the tea?
Could programming, and perhaps many other things, work the same way? Maybe we would all be creating better software if we were a little more religious with our code?
The old Unix koans were funny, and probably a bit gauche to real Buddhists, but they did a good job of capturing this.
The growth of programming into an abstract, professional business has brought a lot of growth in cargo-cult practices along with it. There's a lot to be said for respecting the system you describe, where no one is a master of a system unless they clearly understand when to disregard it.
I think the difference with healthcare advice, at least with the examples you gave, is that the effects are directly measurable.
If I am eating 2000 calories a day now, and start eating 2500 calories a day, I can measure the impact of that change on my weight as well as my lifting performance.
I really doubt the impact of TDD is directly measurable like this. I may be wrong though.
My experience: TDD is awesome for writing certain types of code... and it's terrible for writing other types of code. Simple as that. The same is true for many programming languages: good for some things and terrible for others.
What is it terrible for? I've encountered situations where someone believed it was bad for something and was wrong, such as user interface code. The solution in that case, for example, is to make as much of the code testable as possible, leaving just a thin layer of untestable stuff (which can still be tested via an integration test of some sort). This idea generally follows the principles of http://alistair.cockburn.us/Hexagonal+architecture or "functional core, procedural IO". Gary Bernhardt expounds on this brilliantly in his https://www.destroyallsoftware.com/talks/boundaries talk, btw.
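A sketch of the shape being described (my own example, not Cockburn's or Bernhardt's): the decision-making lives in a pure, easily unit-tested core, and the UI layer shrinks to a thin adapter that you cover with a coarse integration test, if at all.

    // Functional core: pure logic, easy to unit test exhaustively.
    class LoginRules {
        static String validate(String username, String password) {
            if (username == null || username.isBlank()) return "Username is required";
            if (password == null || password.length() < 8) return "Password too short";
            return null; // no error
        }
    }

    // Thin shell: just wiring, little to get wrong, covered (if at all)
    // by a coarse integration test rather than unit tests.
    class LoginScreen {
        void onLoginClicked(String username, String password) {
            String error = LoginRules.validate(username, password);
            if (error != null) {
                showError(error);          // hypothetical UI call
            } else {
                startSession(username);    // hypothetical UI/navigation call
            }
        }

        void showError(String message) { /* toolkit-specific */ }
        void startSession(String username) { /* toolkit-specific */ }
    }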
It might also be good for certain types of programmers and worse for others. I tried to do TDD on an appropriate project but it hampered my creativity too much. I slowly started writing code ahead of the tests and ended up back to regular unit testing.
I think a lot of this religious behavior stems from an underlying belief, mostly by people who've never tried it themselves, that programming _must be_ easy and anything that makes it appear hard (and especially slow) means that somebody is making a mistake somewhere. So they go looking for a silver bullet, and the latest fad seems to fit the bill.