Testing is a separate skill and that’s why it can be frustrating (medium.com/planet-arkency)
167 points by kiyanwang on March 10, 2017 | hide | past | favorite | 78 comments


The best testers that I know really enjoy breaking stuff. They have this special mission to break anything they get their hands on. They take great pride in finding the stuff that the developer probably didn't think of. They smile when it behaves weird, they laugh when it crashes. Their ego rises when they tell the developer that they were smarter.

And therefore, the one who built it is not the person who makes it his life's mission to break it. Breaking it means more work, more searching for bugs, and then testing again. And nobody likes doing that. Except for awesome testers. They love looking at the face of a desperate developer getting frustrated by a nasty bug.


> Their ego rises when they tell the developer that they were smarter.

I've been in QA for a few years, after an initial career in development.

I do find great pride and joy when I break things, and I do laugh sometimes when I see silly behavior. Testing can be thankless, and even anti-appreciated, so the validation of finding something is welcome.

But I never "tell the developer that I'm smarter," and I've never thought that way. They are merely two different roles, with different focuses and different allocations of time. The tester role exists in part because we are not perfect.

The relationship between the two roles would be much better without "I'm better than you" or "you can't do what I do, so you test" attitudes. Both sides, and the organization, profit when we work together with respect and appreciation.


Maybe it was just me, but I read GP as slightly tongue in cheek - I hope most testers don't actually have such an antagonistic relationship with their devs!

I agree with you completely that they're underappreciated and minimized - I've seen corporations cut their testing staff in favor of automated tests, only to find that the overall number of bugs between releases rises. (note: this is not a knock on automated tests)

I've also noticed an elitist mentality, usually among junior developers (but sometimes in senior devs or management as well), that a developer can do any job or role better than anyone else with a bit of training and coding. IMO, both of these are very immature views, and extremely dismissive of the value that domain expertise and a critical, analytical set of eyes can bring.

From the Dev end of the spectrum to the QA end: thanks for everything you folks do. Shitty software destroys people's confidence in our work and solutions, and rework is expensive and bug-prone: you save us from a lot of that.


>The relationship between the two roles would be much better without "I'm better than you" or "you can't do what I do, so you test" attitudes. Both sides, and the organization, profit when we work together with respect and appreciation.

The relative pay scale for a developer and tester with equivalent education and coding experience already speaks volumes about how deep that respect goes. Also, note who gets laid off first when the budget gets tight.


In a way that makes sense, when things are bad. You can sell buggy software, but you can't test software that hasn't been written ("first write a test that fails" notwithstanding).


> But I never "tell the developer that I'm smarter,"

Agreed. I am also a tester and I would never do this. I try to encourage a more cooperative relationship with developers rather than a pseudo-antagonistic relationship.

I often think of the story about IBM's Black Team. It's a cute story, but in reality, you don't want your company running that way.

http://www.t3.org/tangledwebs/07/tw0706.html#


Very true. If your business model is to be the smartest guy in the room, you have a problem, because there's always a smarter guy.


Maybe it's a coincidence, but the smartest guy in the room is often also the loudest and most dominating guy in the room.


Great to hear that you have the good tester mentality. My text was indeed meant to be a bit tongue in cheek. But nonetheless it is completely correct.

The ego thing depends on the relationship you have with your colleagues. Sometimes I have very professional relationships with them, but other times they really become friends. And in that case, the "I'm smarter than you" really gets said, but more in a fun way.


I was a tester for a few years, and then moved into development. With tight deadlines on projects, I appreciate having a tester to send my work to for validation. Unfortunately, I am not doing TDD or any other automated testing, so having a second set of eyes is not something to take for granted. And usually, they don't laugh in my face when something is broken.


Great perspective, and I wholly agree, speaking as someone with a consistently oddball way of looking at the world. I've been fortunate to have a ridiculously large capacity for, uh, knowledge? Facts? Trivia? Stories? I'm an explorer and I like to break things because it's like crossing fences to find new worlds. What's back here?!

It's like my life is a never ending quest to fill a bucket that can never be full - but it's an enjoyable one!

The skill of breaking things down is also what I believe made me very successful as a "Proposal Coordinator", because an RFP is a big, intimidating, multi-section Hydra of tasks. Looks scary!

Then, piece by piece, the threads can be tugged at, finding which parts need what, who, when, and why. Working backwards from a Deadline, it's quite a gratifying process.

Correlative: I frequently assert that Creativity cannot be taught, because it is inherently Disobedient. It can be Nurtured, or Channeled, but it's not well-suited for rigorous, lock-step cultures.

https://medium.com/@6StringMerc/why-i-keep-secrets-as-a-wann...


I'm sure this describes some developers and QA people, but good professionals are not usually this petty.

I try pretty hard to break my own code, and I'm also glad when I find something wrong.

But here is why it can be very good to have someone else test my code: I am blind to the same things as the guy who wrote the code, since both are me. I also, despite my best efforts, have a developer perspective on the system rather than a user perspective.


As a developer you should be thinking through your edge cases more than a tester would. The alternative is needing more testing.

Ideal situation: tester doesn't find anything.


It may be the ideal, but the reality is that they will always be finding bugs. The really good ones will automate the simple cases, incorporate combinatorial and fuzz testing, and then go on a mission to break things.

Sadly, most of the really good ones are developers-in-potentia, and move out of the QA role after a year or two. Those who really are meant to remain in QA are gold - if you find them, don't ever let them go.


Academically I would agree, but I disagree out of business principles. Testers finding bugs is the ideal situation of a good production process. Ideally not a ton of them, but if a programmer has really thought through every potential edge case then they're likely over-engineering the product, and the business will suffer from being too slow.


Not ideal if it takes you 4 weeks to write what I can write, buggy, in 2 days, after which my tester spends 2 days on it and I spend an extra 2 days fixing it up (my tester is also cheaper than me).

Maybe your software is perfect, but maybe my customer is happy to pay me 4 times less for the software, even though it still contains the few bugs my tester overlooked.


however the real world sucks: most field bugs are caused by configurations you would never think of, and an ever-changing world

i.e. apple just recently decided to break 5 years of responsive design practices by ignoring the maximum-scale setting in the viewport meta tag. their decision is not without merit, but suddenly many web applications started behaving weirdly and encountering usage patterns they were never tested on before.

those edge cases never existed before, but I bet only the shops with dedicated testers caught them within the first week.


I agree. My argument isn't against exploratory testing but rather against the idea that developers shouldn't also be cultivating that way of seeing systems.


Fair, and I think you are correct. As a counterbalance, though, there's an expression from Murphy's Laws that I think is worth keeping in the back pocket:

"It is impossible to make anything foolproof because fools are so ingenious."


Zero bugs: program faster


I have testers that act exactly like that...

... Except the things they break aren't real breakages. "Hey the favicon is showing as a 404 in the console. WE CAN'T LAUNCH THIS. How can these engineers be so STUPID!?"


So as you say, good testers are really good at breaking things; what, then, makes a developer really good? Maybe there is a related skill, seeing that testing and developing are two sides of the same work (testers test/break what a developer builds). So maybe a developer needs to be good at fixing what the tester breaks, which would mean letting go of ego or pride in his own code, accepting that he/she writes flawed code, and being really good at fixing or re-building it. It's well established by now that developers are never going to write the code correctly the first time, every time (surely that's a good goal to have, but not something to believe will actually happen). So maybe what sets apart a really good developer from a regular one is knowing that his code is going to have flaws, and being happy to hear about them so he can go fix them and improve his broken work.

Edit: another thought: maybe the developer should see his code as a problem rather than an answer, so that he can constantly question it. Rather than being satisfied that something you wrote is now done or solved, you see it as something that has likely created new problems for you to solve. (Mark Manson said something insightful about this: solutions never solve everything completely; solving one problem always presents new problems/challenges.)


Developer is too general a term for deducing an ideal mindset.

A tester should be empowered by defeating the system. That's also great for some kinds of development (like in security). For some systems, it's better to be empowered by seeing the things you build grow as you act; for others, it's better to be empowered by seeing them grow by themselves after you act. There are places best fit by people empowered by ensuring no problem arises (also the dominant mindshare in ops), by comprehending things no one could before, and by making hard things easy. There are probably places for other people that I'm overlooking too.

Testing is a much more standardized position.


I still think the 2 are more related than they appear to be. I saw a comment on programming reddit (in regards to this same article) about how developers use constructive thinking whereas testers use destructive thinking, and that it is a separate mindset. I don't know that I agree with that, as destruction kind of leads to construction; maybe developers could also benefit from more destructive thinking as well.

Take for example Socrates: he went around asking questions that would contradict people's arguments by destructively questioning a particular definition that was presented, but his purpose in doing so was to construct a better definition of whatever thing he thought was ill-defined. Each time a contradiction surfaced, a better-defined definition could be produced.


Knuth [1]: "I get into the meanest, nastiest frame of mind that I can manage, and I write the cruelest test code I can think of; then I turn around and embed that in even nastier constructions that are almost obscene."

[1] http://www.zerobugsandprogramfaster.net/essays/4.html


Article is about testing in the TDD sense, not the QA team sense.


That's a distinction without a difference; good software testing is still an ethos of imagining boundary conditions and corner cases to "break" the code. A bad tester will create a test with a few basic inputs, see that it looks fine, and be content with that.


Having been both QA and dev in my career (and vastly preferring the former), I beg to differ.

- TDD / unit testing: "Does it do the thing right?" i.e. does the code behave as the developer intended? This type of testing tends to be more internal than external.

- QA testing: "Does it do the right thing?" i.e. does the code fully satisfy customer requirements (including integration testing) and other functional and parafunctional requirements?

Certainly there is overlap between the two, e.g. the developers are working from their own understanding of the customer's requirements, but they are not IMO the same thing.


TDD is a way to design software and make it easy to refactor. It is mostly "white box". That it leaves behind a solid test suite may not even be the most important thing about it.


The classic example is IBM's famous Black Team. http://wiki.c2.com/?BlackTeam


And the On-Board Software Group's rivalry between the development group and the verification group: https://www.fastcompany.com/28121/they-write-right-stuff


Yeah, there's someone I know who just savored doing that with any game or piece of software they encountered. They just really took a "for shits and giggles" attitude towards breaking everything. It's an attitude that I've just never had, but it's really great that people like that exist.


I recently watched a talk by an Adobe lead developer, and he stated that they had twice as many testers on the payroll as they had developers. Makes sense.


No doubt testing is a skill, and a very useful one, though I doubt this is the main frustration. Most people (at least myself) got into coding because they enjoy creating something from nothing. Maintaining and testing code is boring because it doesn't really do anything new, it's mostly fixing edge cases. Personally I try to make testing more creative by making ambitious testing tools (fuzzing, self-testing base classes, etc), which makes it more fun and therefore more likely to get done.


A programmer walks into a bar, checks his bank balance, checks his blood-alcohol level, checks his stomach capacity, checks the menu price, checks for bartender availability, checks for open seating, and orders a beer.

A tester walks into a bar and orders a beer. Then orders 0 beers. Then orders -1 beers. Then orders pi beers. Then orders 2.2 billion beers. Then orders 1 cerveza. Then orders an aircraft carrier. Then changes the programmer's beer order to 2 beers. Then cancels an order for a beer. Then flips the calendar back to 1980 and orders a beer. Then flips the calendar to 1925 and orders a beer. Then sets the clock to 23:59:45 and orders a beer....

Those bar jokes represent not only different skills, but different stereotypical personalities. The builder-programmer has to be cautious, methodical, and well structured, whereas the tester has to channel the spirit of the chaos monkey to find something that no one else ever expected.
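
In code, the tester half of that joke is basically a boundary-value suite. A minimal sketch in Python with pytest, where order_beers is an invented stand-in for the system under test:

    import pytest

    def order_beers(n):
        # invented toy function: only sensible, whole, in-stock orders allowed
        if isinstance(n, bool) or not isinstance(n, int):
            raise TypeError("whole beers only")
        if n <= 0:
            raise ValueError("order at least one beer")
        if n > 10_000:
            raise ValueError("we don't stock that many")
        return f"{n} beer(s) coming up"

    @pytest.mark.parametrize("n", [0, -1, 3.14159, 2_200_000_000, "aircraft carrier"])
    def test_weird_orders_are_rejected(n):
        with pytest.raises((TypeError, ValueError)):
            order_beers(n)

The 2.2-billion-beers case is exactly the one the happy-path programmer forgets: it's a perfectly valid int, so someone has to decide where the upper bound is.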


I tend to agree. I'm a developer/programmer for the most part, but I have also been involved in writing automation testing infrastructure, mostly within the games industry, and have been responsible for leading automated testing. The author is correct to point out that there is frustration with testing. It's a real problem, and just getting team members to write tests can be a herculean task (from exhaustive experience). But it's not just because testing is necessarily a different skill; it's because most people don't enjoy it and don't feel it will be appreciated.

When you make a product code change, you (and everyone you work with) can immediately SEE the impact. It's highly gratifying. With testing, and I'm talking specifically about automated testing here, often no one even knows you wrote a test, and that lack of visibility makes people less inclined to do it, especially within an organization where highly visible changes are the incentive for career development. This is a problem, because having automated testing and doing it right has a huge impact on quality and development time. Being without it drags everyone down in a silent/invisible way: there is a lot more firefighting and protracted development time.

I helped found Tesults (https://www.tesults.com), and one of the main reasons it exists is my own and others' experience working in automated testing. It makes automated testing highly visible across a team. When someone writes a test, everyone sees it; you also see how often the tests run, across various branches, platforms and build flavors. You then get all of the benefits of aggregated test results reporting. Once you have this running, you cannot imagine going back to automated testing without it, because it feels like you are flying blind.

It is undergoing rapid feature development as it is only a few months old, but we have companies of various sizes using it, and if it looks interesting to you, feel free to contact me if you have any questions. We are adding screenshot and log file uploading soon. Some dev teams invest heavily in their automation infrastructure, writing test hooks, devising test case strategies, adding and maintaining test scripts, setting up and configuring continuous integration systems, only to treat results reporting as an afterthought. They don't realize that the data being generated is valuable, and they fail to make the most of their time and investment in the automation.


> Most people (at least myself) got into coding because they enjoy creating something from nothing. Maintaining and testing code is boring because it doesn't really do anything new, it's mostly fixing edge cases.

A good testing setup can be an alternative to redundant manual testing. This can really help speed up the creation process (for me, at least).


I don't think this is a good way of framing things. In fact it can be counter-productive.

Automated testing is the process of writing code to understand your code. We already have to understand our code. Testing is being deliberate about that understanding and writing it down. The same design/testing thoughts that lead us to edge cases can lead us to their elimination without testing at all. It's an integrated process, not something separate.


> Automated testing is the process of writing code to understand your code. We already have to understand our code. Testing is being deliberate about that understanding and writing it down.

Tests are documentation of code behavior. Who better to document that behavior than the person that wrote it in the first place?

> It's an integrated process, not something separate.

I agree. Though I'm biased since I've mostly been in SET roles.


QA is a separate set of eyes. No one publishes a news article without an editor. QA brings a different interpretation of the specifications. Developers writing there own unit tests is a single point of failure: the developer's understanding of the specification.


The problem with having a separate test group is that developers who don't test don't write their code to be testable. They don't add features that allow fault injection, and they don't modularize code so it can be independently tested. As a result, testing any of their code is highly tedious.

Developers should also write their own tests. QA should have tests too. It shouldn't be a matter of throwing software over the fence to a separate group.


Hopefully no company has a QA department or QA person ("Tester") who writes unit tests for code already written by a developer. Unit tests can't be separated from the code; they have to be written in conjunction with it.

That said, it's often possible to write higher level tests in advance as a specification, or afterwards as a regression/integration suite. Testing code can be used for many things such as UI regression testing, performance testing etc. That kind of test is black box and not tied to a specific piece of code.

So I think it's hard to debate "who tests" or "who writes tests" without discussing exactly what kind of test.


This is a problem that a good project manager and the company's development culture can solve. Ops also frequently has things they need from developers but rarely get.


> Developers writing there [sic] own unit tests is a single point of failure.

For unit tests that's pretty much the norm, and you could easily split those up between two developers: one writes the tests for the code of the other, and vice versa.

Integration tests tend to be written by a test engineer, sometimes doubling as QA.

Team size is a big factor in these decisions, one solution does not fit everybody.


For higher level tests that's fine. At the unit level it's problematic. It delays feedback.


> Developers writing there own unit tests is a single point of failure.

I think that's a misunderstanding of the use of unit tests. Devs should be writing unit tests so that I can write more interesting tests that assess quality (which is not what unit tests are for).

And then there's the real world, where my agenda today consists of...writing unit tests for code I didn't write.


As it relates to the topic of testing, I'd invite readers to check out "An introduction to property based testing" [0]. It's available in both video format with slides, and in the form of two blog posts. I found it insightful when I first stumbled on it.

I'm not frustrated with testing in the slightest. I consider it a fundamental requirement for any serious production application.

I get the impression that people tend to be far too dogmatic about testing methodologies. Write lots of unit tests, as long as they add value or help improve stability. Not everything needs unit tests. It depends on what the module does, and how it relates to the application.

[0] https://fsharpforfunandprofit.com/pbt/
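
The linked series uses F#, but the idea ports anywhere. A minimal sketch using Python's hypothesis library, asserting properties over generated inputs instead of hand-picked examples:

    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_sorting_is_idempotent(xs):
        once = sorted(xs)
        assert sorted(once) == once  # sorting a sorted list changes nothing

    @given(st.lists(st.integers()))
    def test_reversing_twice_is_identity(xs):
        assert list(reversed(list(reversed(xs)))) == xs

The framework generates hundreds of random lists and, on failure, shrinks the input to a minimal counterexample, which is where much of the value lies.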


this technique is also called pairwise testing[1]. there are some interesting optimization problems around this approach. it can significantly reduce the number of tests while providing good coverage.

[1] https://en.wikipedia.org/wiki/All-pairs_testing
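
To make the optimization concrete, here is a naive greedy pairwise generator in Python. This is a sketch only (dedicated tools handle this far better), but it shows the reduction:

    from itertools import combinations, product

    def pairwise_suite(params):
        # params: {"browser": [...], "os": [...], ...}
        values = list(params.values())
        # every 2-way (parameter, value) pair that some test case must cover
        required = {((i, a), (j, b))
                    for i, j in combinations(range(len(values)), 2)
                    for a, b in product(values[i], values[j])}
        suite = []
        while required:
            # greedily pick the full combination covering the most uncovered pairs
            best = max(product(*values), key=lambda c: sum(
                ((i, c[i]), (j, c[j])) in required
                for i, j in combinations(range(len(c)), 2)))
            suite.append(best)
            for i, j in combinations(range(len(best)), 2):
                required.discard(((i, best[i]), (j, best[j])))
        return suite

    cases = pairwise_suite({"browser": ["firefox", "chrome", "safari"],
                            "os": ["linux", "mac", "windows"],
                            "locale": ["en", "de"]})
    print(len(cases), "cases instead of", 3 * 3 * 2)

For this toy space, the 18-case full product shrinks to roughly 9 cases while still covering every 2-way interaction.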


The same is true of Pair Programming. It's a lot of fun and very productive when you know what you're doing.

Ideally, you need several weeks of pairing with experienced pairers to have learned the skill. Even then, you usually need 1-3 days pairing with a new person to get to know each other's styles.

Having two programmers who have never done pairing try it is a bit like having people who never danced try to Tango based on youtube videos. I suspect many people who hate pairing encountered it that way.


One problem with testing is that you're not just testing your code, but you are testing your assumptions. If you make a mistake in your code, you might make the same mistake while testing.


I'm not so sure they are separate skills. It's possible to write functions that are easy to test and functions that are hard to test.

I find that the two tasks of writing functions and tests for those functions are closely intertwined. I like for developers to write their own unit tests, and then for Q/A to develop the functional/integration tests from the perspective of the client (machine or human).

It's very difficult to come in after the fact and write unit tests for someone else's code, especially if they weren't thinking about writing testable code.
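
A tiny Python illustration of that intertwining, with invented functions; both versions behave identically, but only one can be pinned down in a unit test without trickery:

    import datetime

    def is_happy_hour_v1():
        # hard to test: hidden dependency on the wall clock
        return 17 <= datetime.datetime.now().hour < 19

    def is_happy_hour(now):
        # easy to test: the clock is a parameter the test controls
        return 17 <= now.hour < 19

    def test_happy_hour_boundaries():
        assert is_happy_hour(datetime.datetime(2017, 3, 10, 17, 0))
        assert not is_happy_hour(datetime.datetime(2017, 3, 10, 19, 0))

Code written with its tests tends to end up shaped like the second version; code written with no tests in mind tends to end up like the first.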


Yes, testing is a skill; no, developers (at least none I know) are not frustrated because they lack the skill.

Testing is (usually) debugging without understanding the code behind the application one is testing. Software developers really, really understand that, for someone knowledgeable about the code, working on a black box is much less productive than code review.

What frustrates me about "testing" is the TDD-idiocy of writing a million unit tests - tests that verify only that a specific code unit works. Real "testing" is executing functional tests: User clicks "Signup", we have a new record in the users table, the subscription total changes and the admin/moderator/community manager can immediately see the change in the admin-UI. A good test will catch as many violations of the expectations as possible while minimizing the effort/code required to perform the test. A good test maximizes meaningful output while keeping the required effort minimal.
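
For what it's worth, a functional test in that sense might look like this sketch (a made-up Flask signup app standing in for the real system):

    from flask import Flask, request

    app = Flask(__name__)
    users = []  # stand-in for the users table

    @app.route("/signup", methods=["POST"])
    def signup():
        users.append(request.form["email"])
        return "", 201

    @app.route("/admin/user-count")
    def user_count():
        return str(len(users))

    def test_signup_is_visible_everywhere():
        client = app.test_client()
        before = int(client.get("/admin/user-count").data.decode())
        assert client.post("/signup", data={"email": "a@example.com"}).status_code == 201
        # one user action, several expectations: the record exists and the admin view sees it
        assert "a@example.com" in users
        assert int(client.get("/admin/user-count").data.decode()) == before + 1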

But what am I talking about... the whole post is just a prelude to the post scriptum: "Watch our video!!!!!"


That, and correct code is often not very testable, at first.

My process is write code -> repeat until it works -> refactor code until the tests make sense -> break up code into individual tested commits -> submit for review.

After enough years of this you end up with a grab bag of techniques both for writing testable code and for writing tests.


For me, the frustration regarding testing comes not from it necessarily being a separate skill set but that writing tests can be daunting considering the tools at hand. Consider, for instance, that only recently has JSDOM even gained a way to log issues inside its own runtime environment, making all usage of it previous to this fix amazingly difficult to debug:

https://github.com/tmpvar/jsdom/pull/1108

Now this is only one of a myriad of nuanced struggles which I can potentially face as a developer when deciding what and how to test. When these tools facilitate this need without creating such an undue burden, then I, and I'd bet many others, will naturally gravitate toward automated testing.


Is this about testing or unit testing? Personally, one of the problems is that unit testing has such a hype train that people use it even when it isn't necessary. With unit testing comes all the baggage: the DI, IoC containers, factories, mocks, and all the stale test cases that were written a few years ago that no one ever looks at but that take 10 minutes to run.

My current project has objects that are only ever used once in one place but still have abstract interfaces, lots of injected parts, and test cases. It's only a simple class; it doesn't need all this extra complexity. For me it's frustrating that lots of people think this is "good design". Sure, you need to be able to test your application, but unit testing everything is rarely the right way.


Unit tests are often the only spec there is. Code with no tests usually can't be rewritten/refactored without first making tests. They are significantly cheaper to write when the code is first created, than when the code needs to be refactored a decade later.

DI/IoC aren't necessary (or aren't made necessary by unit tests).

What tests do is enforce small interfaces, and other good practices such as "Tell, don't ask" etc.
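
As a quick illustration of that last point (invented classes; "Tell, don't ask" just means pushing the decision into the object instead of interrogating its state):

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            # "tell": the object enforces its own invariant
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    # "ask" style forces every caller (and every test) to repeat the check:
    #     if account.balance >= amount: account.balance -= amount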


If you think that's bad, try inheriting a code base that has too few tests.


I wish I could get to the bottom of this. I've seen all kinds of arguments for and against whether or not to unit test and how much / when unit testing should happen. I haven't been able to figure it out yet.

- I've heard that in one experiment code reviews found more bugs than unit testing did (I heard this study was mentioned in Code Complete, but I haven't checked the source). If reading code catches more bugs than writing test code does, wouldn't this be an argument for spending more time reading our code rather than writing code that tries to understand it?

- Unit testing has the added benefit that when you change existing code or refactor, the behavioral unit tests guarantee those same behaviors hold after refactoring. (My first point above doesn't give the same benefit, so this may be worth a lot more than finding bugs by reviewing new code, as it isn't reasonable to read all the existing code over again.)

- I've also heard that code coverage is a terrible metric and can cause bad unit test practices, trying to cover all lines of codes instead of trying to test for specific behaviors.

- Some unit tests may become irrelevant as the design changes and also have maintenance costs

Since my team does unit testing while striving for 100% test coverage of our back-end web service code, we put a lot of time into unit testing (the quality of the unit tests might be questionable sometimes, as we aren't testing for all behaviors, only ensuring all code is covered). So I wonder: if we took all the time we spend unit testing and instead read over the code areas affected by the new code we and our teammates are writing, to ensure we understand how it works, would that be more beneficial than unit testing it? I'm not convinced either way myself.


I'm convinced, after working on a (micro)service-heavy 2+ million line code base for a couple of years now.

Our thousands of unit tests are invaluable, primarily for providing a strict definition of what the code should do, but also for catching bugs when refactoring. We don't often introduce bugs that break tests when refactoring simple parts of the system. But for the complex parts of the system, the tests both maintain a ground truth for how the code should behave, and make it trivial to ensure correct behavior after refactoring.

If we hadn't had the tests, we would be almost guaranteed to change the intended behavior of the code when modifying complex parts of the system. Perhaps no big deal if you're writing an Instagram clone and crashes will show up in your logs and prompt you to investigate, but critical when your code processes millions in financial transactions and subtle logging errors could cause a nightmare.

As always, it depends on your problem domain and priorities. There's an expensive overhead to having good test coverage. Many failed tests will be false positives.


> I've also heard that code coverage is a terrible metric and can cause bad unit test practices, trying to cover all lines of codes instead of trying to test for specific behaviors. ... (quality of the unit tests might be questionable sometimes as we aren't testing for all behaviors only ensuring all code is covered) ...

Ugh, code coverage is _not_ a metric to chase. Tell your team and your managers: "A high code coverage number is not the end goal; working code is the end goal." Code coverage simply tells you where you did and (more importantly) did _not_ check for problems; the point is to consciously examine the code paths not covered and decide what unit tests are needed there, if any. It's perfectly fine to say "This particular code does not need to be covered." or "This code path isn't practical to validate except in integration testing, therefore we'll forego a unit test and notify the QA team.", as long as the decision is reviewed and approved the same way one would do a code review.
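
Assuming coverage.py, that conscious "this path doesn't need a unit test" decision can even be recorded right in the code, so the coverage report stops nagging and the reviewer sees the rationale:

    def load_config(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError:  # pragma: no cover - exercised in integration tests, not unit tests
            raise SystemExit(f"cannot read config at {path}")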


The article is about TDD. You can see on the page that they are offering a TDD course using Ruby/Rails. TDD in the Ruby community is overhyped. This infection has spread to management too. If you apply to a Ruby job today, you had better have TDD or BDD in your CV; if you don't, you are garbage to them. I have experience with people infected with TDD in Ruby land; they often look down on people who don't write tests for every single line of code they write. I am all for unit testing, but can we at least acknowledge that example-based testing is just not a panacea for bad software.

No, easily testable code does not always imply good design.

People that don't do TDD are actually doing TDD too: they test code in the REPL/console/browser/whatever, they just don't save those "tests".

You cannot write perfect example-based tests that cover all possible inputs and states of your system. You need generative testing solutions for this. Let the program write tests for you. [0] [1]

See what Leslie Lamport [2] has to say about TDD --> https://www.youtube.com/watch?v=-4Yp3j_jk8Q&t=3158s He nearly flips out on the guy :)) Surely Leslie knows what he is talking about.

I am OK with unit testing as a communication/documentation tool, so developers know how to use some piece of a library, and as a way to prevent regressions with new releases, but I would argue that you can never completely prevent those anyway; requirements change all the time. Maintenance of a large test suite can quickly get out of hand.

[0] https://wiki.haskell.org/Introduction_to_QuickCheck1

[1] https://clojure.org/about/spec

[2] https://en.wikipedia.org/wiki/Leslie_Lamport


The thing that frustrates me is running into situations where creating the infrastructure and code to set up and run a test is orders of magnitude more complicated than actually building the application. This is typically not at the unit test level (which mostly isn't that bad), but rather at the "wow, I basically need to implement a local version of this 3rd party API that doesn't have a test environment" sort of level. And that's not even getting into the times I've had to work with "APIs" that are so bad they aren't even testable (wishing wells).
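
When the third-party service at least sits behind a thin client in your own code, stubbing it out is tolerable. A sketch with Python's unittest.mock; the endpoint and payload here are invented:

    from unittest import mock

    import requests

    def fetch_rate(currency):
        # thin wrapper around the hypothetical third-party API
        resp = requests.get(f"https://api.example.com/rates/{currency}")
        resp.raise_for_status()
        return resp.json()["rate"]

    def test_fetch_rate_without_the_real_api():
        fake = mock.Mock()
        fake.json.return_value = {"rate": 1.0865}
        with mock.patch("requests.get", return_value=fake) as get:
            assert fetch_rate("EUR") == 1.0865
            get.assert_called_once_with("https://api.example.com/rates/EUR")

It's the services with no seam at all (undocumented, stateful, no test environment) that turn this into the "reimplement the vendor locally" project described above.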


Someone who is genuinely good at writing code should be able to produce code that has no or very few bugs. Similarly, they should be pretty good at inferring what is wrong in the rare case of bugs by simply running the program. Therefore, they would find writing unit tests and similar rituals unnecessary and time consuming.

They would most probably write specialized tests where appropriate to catch subtle bugs that elude even their superior skills. This would be a matter of personal judgment based on the problem at hand.


This works decently for code that is written once and never needs to change.

It does not work for code that repeatedly gets changed by different people during a project. This is 99% of all meaningful code.


Donald Knuth did say once that he's not much of a fan of unit tests. See his answer to the second question of his interview at: http://www.informit.com/articles/article.aspx?p=1193856

Then again, there are probably fewer than a dozen people at his level of ability on the entire planet, so unit tests will be part of our lives for the foreseeable future.


Surely no one is a fan of unit tests. I'm not, even though I am responsible for a few hundred in the program that I work on. They are valuable for several reasons, including: writing them often exposes an unconscionable degree of coupling between the various components of the code, and they often trap incautious changes when someone fixes a supposedly unrelated bug. I would need fewer of them if I could use a functional language with a good type system, like F#, instead of VB and C#.


Tests also describe how a particular piece of code is expected to behave.


Interesting hypothesis!


I'm surprised there's so little comment here about testing as a facilitator, as in something that refines your coding ability at the coalface rather than merely something reassuring you after the fact. For me, testing is like the entire team and paraphernalia of surgery, setting me up to undertake the specifics of a critical act, unencumbered by the weight of "I'm saving someone's life" or "I'm solving the problem of searching the entire internet in a few milliseconds".

Testing lets me create far more complicated things, not just merely sturdier things.


A good tester is invaluable to the developer who is willing to embrace them. Not all developers have a good relationship with their testers, but they should do their best to create the best communication possible. Some developers view testers as enemies, but they shouldn't; testers are actually your best friends. Toss your ego aside and listen to your tester. Developers/engineers test what they were asked to do, a.k.a. sunny-day scenarios. These never cover the "what if the user did this..." scenarios, which are also important.


I don't understand why, as a developer, I have to write code to test my code; surely there should be a non-coding solution, especially for end-to-end testing.

and if you leave end-to-end testing to a tester, they'll have to learn to code.


Good testers have an ability to anticipate where bugs are likely to be found. It is not intuition; it comes from an understanding of the purpose of the system and many aspects of its implementation, such as its architecture, as well as from general systems knowledge.

This ability to foresee problems before they are manifest is also an important skill for developers. Those who do not have so much of it will spend more time finding and fixing problems they did not anticipate.


I mostly agree with the truism that good code is highly testable, so in a way it's not separate. If you have lots of trouble testing your code you maybe just don't design things very well yet. Tests have a way of immediately highlighting structural issues with what you're building.


I think the skill difference comes down to learning to write code vs learning to run code. Testing, manual or automated, involves executing your software, whereas often while coding you've only run the code enough times to make sure it does what you expect of it most of the time.


Separate skill from what? Programming is already an agglomeration of many different skills.


I always felt that building (on Linux et al.) was a separate skill and that's why it can be frustrating.


The engineering compression has removed the opportunity to specialize.


This is a long article that essentially repeats the claim without evidence or solutions. The claim is false for basic testing but true for sophisticated testing. Let me illustrate by comparing the coding and testing steps:

Coding: There's a spec in their head of what the code is supposed to do. It usually takes inputs, may produce side effects, and may produce an output. They write a series of steps in some functions to do that. They then run it on some input to see what it does. They're already testing.

Testing: Typing both correct and incorrect inputs into the same function to see if it behaves according to the mental spec. This can be as simple as tweaking the variables in a range or just feeding random data in (i.e. fuzzing). It takes less thought and effort than the coding above.
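
As an illustration of how little machinery basic fuzzing needs, a throwaway Python loop against an invented toy parser:

    import random
    import string

    def parse_version(s):
        # invented function under test, with a latent bug: assumes a dot is present
        parts = s.split(".")
        return int(parts[0]), int(parts[1])

    for _ in range(10_000):
        junk = "".join(random.choices(string.printable, k=random.randrange(0, 20)))
        try:
            parse_version(junk)
        except ValueError:
            pass  # rejecting junk with a clean error is acceptable behavior
        except Exception as exc:
            print(f"unexpected crash on {junk!r}: {exc!r}")  # e.g. IndexError on "7"

Ten thousand random strings take a fraction of a second and will reliably turn up the IndexError that a happy-path run never hits.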

So, the claim starts out false. The mechanics of testing are already built into the runtime part of the coding phase. The others are slight tweaks. The basic testing that will knock out tons of problems in FOSS or proprietary apps takes no special skill past the willingness to do the testing. Now let's look at a simple example of where testing might take extra knowledge or effort.

https://casd.wordpress.ncsu.edu/files/2016/10/kuhn-casd-1610...

So, the government did some research. They re-discovered a rule that Hamilton wrote about for the Apollo program: most failures are interface errors between two or more interacting components, where the interfacing uses components in ways they weren't intended to be used. The new discovery, combinatorial testing, was that you can treat these combinations of interfaces as sets to test directly, in various random or exhaustive combinations. Just testing all 2-way interactions can knock out 96% of faults per empirical data. Virtually all faults drop off before the 6-way point.

Why is this sophisticated enough to deserve a claim like in OP’s post? First, people don’t learn the concepts or mechanics during the course of programming. You have to run into someone that’s heard of it, be convinced of its value, think in terms of combinations, and so on. Once you know the method, you might have to build some testing infrastructure to identify the interfaces & test them. There’s also probably esoteric knowledge about what heuristics to use to save time when combinations go past 3-way toward combinatorial explosion. So, combinatorial testing is certainly a separate skill whose application could frustrate the hell out of developers. Until they learn it and it easily knocks out boatloads of bugs. :)

Regular testing of making inputs act outside of their range? Nope. Vanilla stuff, same as the coding you're doing. Easier than a lot of the coding, actually, since the concepts are so simple: basic arithmetic and conditionals on functions you already wrote. What stops basic testing from happening is just apathy. Incidentally, that apathy also stops people from learning the sophisticated stuff for quite a while.



