Agreed, and in my experience libraries like this perpetuate that anti-pattern. Inexperienced developers think that because there's a library that enables it, it must be OK, right?
Low-bid contractors will probably use this library to pump their code coverage numbers. Some of the shit shops I've worked at that hired lowest-bid contractors have done shady things to meet “management expectations”.
Honest question: how are you writing integration tests? We write these as a separate test suite, often with the same test style, and in that scenario testcontainers are very valuable.
Really, you use testcontainers so that you can manage everything for your test with a single build command, instead of running something extra, then running your tests, then shutting down your docker containers. Plus, with it integrated into your test suites, you can run code against your docker containers on setup/teardown, before/after container start, before/after each test, etc.
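For example, with the Python flavor of testcontainers the whole lifecycle can live inside a pytest fixture. A rough sketch (the fixture name and the query are mine):

    import pytest
    import sqlalchemy
    from testcontainers.postgres import PostgresContainer

    @pytest.fixture(scope="session")
    def pg_url():
        # The container starts before the first test that uses the fixture
        # and is torn down automatically when the with-block exits.
        with PostgresContainer("postgres:16") as postgres:
            yield postgres.get_connection_url()

    def test_can_talk_to_real_postgres(pg_url):
        engine = sqlalchemy.create_engine(pg_url)
        with engine.connect() as conn:
            assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1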
Meanwhile, docker compose's selling point is 'you don't have to muck around with testcontainers'; I guess some people might find that more attractive.
Oh, absolutely! And as the other guy pointed out, docker-compose can be quite reusable when developing locally if you write it right.
But at $WORKPLACE we often use pytest-xprocess to start the required app in the same container where the tests run. It's probably the easiest approach, mostly because a custom wrapper does all the heavy lifting: it starts the app, checks that it is running and responding to requests before the tests start, and terminates it correctly when the tests end.
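A stripped-down version of that wrapper, using pytest-xprocess directly, looks roughly like this (the module path, readiness pattern, and port are made up):

    import pytest
    from xprocess import ProcessStarter

    @pytest.fixture
    def app(xprocess):
        class Starter(ProcessStarter):
            # Block until the app prints this line, so tests only start
            # once it is actually serving requests.
            pattern = "Application startup complete"
            args = ["python", "-m", "myapp.server"]  # hypothetical entry point

        xprocess.ensure("myapp", Starter)
        yield "http://127.0.0.1:8000"  # assumed port
        xprocess.getinfo("myapp").terminate()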
Arguably you're no longer testing a unit if the unit involves an integration with an external component, which makes it an integration test by definition.
Integration tests are fine, but they test something else - that your component integrates as intended with <something> - while a unit test tests that your unit behaves in accordance with its specification.
I've rarely found this to be worth it, given the effort required for a proper mock in a complex system. Most people I've seen mock in ways so superficial that the test is basically a no-op.
Mocks are a contentious topic, as you've probably guessed. In my opinion they're a sign of coupled code: you should be able to hit very high coverage without a single mock. But if you're a dev in an org that tracks code coverage, you'll probably end up writing a fair number of them, since the odds are high you'll be consuming coupled code.
If you have a dependency like a third-party API (or even internal code), and you write an API client, then depend on that client, would it be considered coupled code?
In such cases, if I am using dependency injection and creating a (stub?) version of that client which returns hardcoded or configured output, would that be considered a mock? Or would this be OK and not "coupled"?
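To make the question concrete, I mean something along these lines (WeatherClient, StubWeatherClient, and Dashboard are made-up names):

    class WeatherClient:
        """Real client that talks to the third-party API over HTTP."""
        def current_temp(self, city: str) -> float:
            raise NotImplementedError  # requests.get(...) etc. in real life

    class StubWeatherClient:
        """Hardcoded stand-in injected in tests."""
        def current_temp(self, city: str) -> float:
            return 21.5

    class Dashboard:
        def __init__(self, weather):
            self.weather = weather

        def headline(self, city: str) -> str:
            return f"{city}: {self.weather.current_temp(city):.1f}"

    def test_headline_formats_temperature():
        dash = Dashboard(weather=StubWeatherClient())
        assert dash.headline("Oslo") == "Oslo: 21.5"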
Most people will say something like: for unit tests, you should test your functions by passing the state in as parameters. I'm going to call this "outside-in" loose coupling.
Mocking is for the inverse: when you want to test a unit of code that calls some other, outside unit of code. It's really not any different, just "inside-out".
So IMO with DI you gain loose coupling through dependency inversion. But because of dependency inversion you need to mock instead of passing state as params.
So I think if you are injecting a mocked stub, this is still loose coupling, because you are testing against its interface.
You're still passing state through your test, but it's coming from inside instead of outside, hence the mock.
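Roughly, the two shapes look like this (toy names, nothing from a real codebase):

    from unittest.mock import Mock

    # "Outside in": state comes in as parameters, assert on the return value.
    def apply_discount(price: float, rate: float) -> float:
        return price * (1 - rate)

    def test_apply_discount():
        assert apply_discount(100.0, 0.2) == 80.0

    # "Inside out": the unit calls a collaborator, so the test injects a
    # mock and asserts against the collaborator's interface instead.
    class Checkout:
        def __init__(self, pricer):
            self.pricer = pricer

        def total(self, price):
            return self.pricer.apply(price)

    def test_checkout_uses_pricer():
        pricer = Mock()
        pricer.apply.return_value = 80.0
        assert Checkout(pricer).total(100.0) == 80.0
        pricer.apply.assert_called_once_with(100.0)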
Another way I have thought about this is: framework (framework calls you) vs library (you call library).
Frameworks naturally lend themselves to a more mock-heavy way of testing; libraries lend themselves to a more traditional way of testing.
Testing something that accepts a callback is also essentially a mock, since the callback you pass in from the test is a stand-in you control.
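Toy example: the recording callback below does exactly what a mock does, standing in for the real collaborator and letting the test inspect how it was called.

    def retry(operation, attempts, on_failure):
        # Calls on_failure(attempt_number, exception) every time operation raises.
        for attempt in range(1, attempts + 1):
            try:
                return operation()
            except Exception as exc:
                on_failure(attempt, exc)
        raise RuntimeError("all attempts failed")

    def test_retry_reports_each_failure():
        seen = []

        def flaky():
            raise ValueError("boom")

        try:
            retry(flaky, attempts=3, on_failure=lambda n, exc: seen.append(n))
        except RuntimeError:
            pass
        assert seen == [1, 2, 3]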
A good rule of thumb for a unit test is that you should be able to run it a few thousand times in a relatively brief period (think: minutes or less) and it shouldn't ever fail/flake.
If a unit test (suite) takes more than a single-digit number of seconds to run, it isn't a unit test. Integration tests are good to have, but unit tests should be really cheap and fundamentally a tool for iterative and interactive development. I should be able to run some of my unit tests on every save and have them keep pace with my linter.
This makes no sense though. Simple example: your code needs to reach into Cosmos / DynamoDB; why mock this service when you can get so much wrong by assuming how things work?
Mocking doesn't mean you have to reimplement the fully featured service. In the simplest form, your internal library which calls out to Cosmos is mocked; the mock records the request parameters and returns OK, and the test verifies that the expected data was passed in the call.
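In Python that can be as small as this (OrderService and the method names are hypothetical; the point is only that the assertion targets the boundary, not Cosmos itself):

    from unittest.mock import Mock

    class OrderService:  # hypothetical code under test
        def __init__(self, store):
            self.store = store

        def place_order(self, customer_id, sku, qty):
            return self.store.save_order(customer_id=customer_id, sku=sku, qty=qty)

    def test_order_is_persisted_with_expected_fields():
        store = Mock()  # stands in for the internal wrapper around Cosmos
        store.save_order.return_value = {"status": "ok"}

        OrderService(store).place_order(customer_id="c-1", sku="sku-42", qty=2)

        # Verify only what was handed to the boundary, not how Cosmos behaves.
        store.save_order.assert_called_once_with(customer_id="c-1", sku="sku-42", qty=2)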
Then you're testing the implementation and need to change the test and mocks every time the implementation changes.
Making tests quicker is a good reason to mock, and so is not hitting real network services. But in all cases, the best thing is to avoid mocking if possible.
Why do you care how Cosmos or DynamoDB or any other dependency is implemented? You only need to mock the interface to these services. Their internal code can change every day without affecting your tests.
And if you want to catch potential changes in Cosmos that modify the behavior of your own service, that isn't the purpose of unit tests.
I want to be able to update to the latest version of DynamoDB (or something else - not every dependency is as stable as DynamoDB) and know that all of my code that calls it still works.