The first is that integration tests can be (not always, but can be) too inclusive and thus fail too often, especially early on. A good test process requires tests to stay green most of the time; otherwise you learn to write failures off as "known", distrust the results, and eventually dismiss them. So you turn off or xfail failing tests while you triage. But when you turn off or dismiss an integration test as a known failure, you lose a lot of coverage.
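For instance, here is roughly what xfail-ing a failing integration test looks like in pytest (the test body is just a stand-in, not real code):

    import pytest

    def checkout_end_to_end():
        """Stand-in for a real end-to-end flow that drives the whole stack."""
        raise RuntimeError("payment gateway timeout")  # the "known" failure

    # Marked xfail while triaging: the suite goes green again, but everything
    # this test used to cover is effectively uncovered until it's fixed.
    @pytest.mark.xfail(reason="known failure, triage in progress", strict=False)
    def test_checkout_end_to_end():
        checkout_end_to_end()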
I have never done that, and no-one on my team has ever done that. All tests need to pass when we deploy, every time. The fact is that the causes of regressions are usually so trivial that unit tests don't add any value. All you need to know is that something is broken, and to see some error message, and you can track the problem down in a matter of minutes. Starting with high-level tests pays dividends on day one. Unit tests have a much longer payback time, and a much higher chance that the system will still fail even if your tests pass.
The second is isolation. Even if you don't have full unit coverage, having -something- beneath the integration tests lets you tell a lot more from the combination of failures. If an integration test covering A, B, and C fails while the unit tests for B and C pass, the problem must be in A.
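A toy sketch of what I mean, in pytest (the functions are invented for illustration):

    # Toy pipeline A -> B -> C: parse -> validate -> total. If the integration
    # test fails while the unit tests for validate() and total() pass, the
    # fault has to be in parse().

    def parse(raw):                       # "A"
        return [int(x) for x in raw.split(",")]

    def validate(items):                  # "B"
        return [x for x in items if x >= 0]

    def total(items):                     # "C"
        return sum(items)

    def test_validate_unit():             # unit test for B
        assert validate([1, -2, 3]) == [1, 3]

    def test_total_unit():                # unit test for C
        assert total([1, 2, 3]) == 6

    def test_pipeline_integration():      # integration test across A, B, C
        assert total(validate(parse("1,-2,3"))) == 4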
I've never failed to identify the cause of a failure in a matter of minutes when a test fails. The concept of isolating the system under test is taken to extremes in xUnit frameworks, and the amount of work you have to do to properly isolate it is cost prohibitive. Given that an integration test is, by definition, a test of any system not sufficiently isolated to xUnit standards, what you really need is an integration test that is "as isolated as practical". On greenfield developments, that's typically just a test that makes sure the thing runs at all. Once you start fixing bugs, you write tests that are more focused, but worrying about isolating systems under test (with mock objects and that kind of malarkey) kills productivity.
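On a greenfield project that smoke test might be nothing more than something like this (a pytest sketch with made-up names, not our actual setup):

    # "As isolated as practical": wire the real pieces together and check the
    # thing runs at all. No mocks, no stubs, no container gymnastics.
    # build_app() and handle_request() are hypothetical names, not a framework.

    def build_app():
        return {"routes": {"/health": lambda: ("200 OK", "alive")}}

    def handle_request(app, path):
        status, body = app["routes"][path]()
        return status, body

    def test_the_thing_runs_at_all():
        status, body = handle_request(build_app(), "/health")
        assert status == "200 OK"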
Also, a political reason: once you start down that route it's super-tempting (especially for your boss or your boss's boss) to say "we'll test everything full stack, that gives us all of the coverage." In reality it only gives you some of it, and you probably have no idea which parts, even with code coverage tools (remember that paths are ultimately what you're testing, not lines or branches). And it's incredibly indirect, since you're trying to find externally driven cases to make the internals do different things. It's usually easier to just manipulate the internals directly via the nearest interface.
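To illustrate the paths-versus-branches point, a contrived pytest example:

    # These two tests give 100% line and branch coverage of price(), yet they
    # exercise only 2 of its 4 paths; member+promo together is never tested.

    def price(amount, member, promo):
        if member:
            amount *= 0.9
        if promo:
            amount -= 5
        return amount

    def test_member_without_promo():
        assert price(100, member=True, promo=False) == 90.0

    def test_promo_without_member():
        assert price(100, member=False, promo=True) == 95

    # Untested paths: member and promo together, and neither. A rule like
    # "never discount below some floor" could break on the combined path
    # without any of the above noticing.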
Yep, I say "we'll test everything full stack until the QC team finds bugs, then when we fix bugs we'll add specific tests" (I'm the boss). The philosophy we operate under is that it's the job of QC to find bugs, and it's the job of automated tests to ensure we only fix each bug once and to reduce the cost of QC (more here: https://github.com/iaindooley/Murphy).
We use a mix of backend integration tests (using Murphy) and frontend integration tests using this: http://monkeytestjs.io/. The main tenet is: don't waste time testing components of a system you're still developing; only test components once they have been shown to fail. Even then, isolating the system under test might happen by coincidence, but it is not a requirement of the tests.
If someone gets it wrong and creates a test that passes even though the bug is still present, it's the job of QC to catch that -- the only requirement is that this happens infrequently enough that it doesn't make QC cost prohibitive.
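Roughly how that plays out, sketched in generic pytest rather than Murphy's actual API (the ticket number and function are invented):

    # QC reports bug #217: emails with trailing whitespace get rejected.
    # We reproduce it in a test, fix the code, and keep the test so we never
    # pay to fix the same bug twice.

    def normalise_email(raw):
        # the fix: the original code didn't strip whitespace before lowercasing
        return raw.strip().lower()

    def test_bug_217_trailing_whitespace_in_email_is_accepted():
        assert normalise_email("Person@Example.com ") == "person@example.com"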
So one or two high level acceptance/integration sanity tests, sure. But I'd never say that you should have "integration testing in place" first. That implies a level of completeness or formality that I think would probably be counterproductive in almost all cases.
I've found exactly the opposite. In fact, I've never seen productive Unit Tests (real ones) -- only integration tests that people called unit tests. In every case I've seen, people started with high ideals and then abandoned all testing when the shit hit the fan, mostly because they were too ambitious and too low-level with test coverage to begin with.