I think it's relative, right? That's how abstractions and interfaces work.
I can write a module with integration tests at the module level and unit tests on its functions.
I can now write an application that uses my module. From the perspective of my application, my module's integration tests look like unit tests.
My module might, for example, implicitly depend on the test suite of CPython, the C compiler, and the QA at the chip fab. But I don't need to run those tests anymore.
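A tiny sketch of that relativity (function and module names here are made up for illustration):

```python
# A "module" with its own unit-tested function.
def slugify(title: str) -> str:
    """Lowercase a title and replace spaces with hyphens."""
    return title.strip().lower().replace(" ", "-")

# The module's own unit test.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# An application built on the module. From here, slugify is a
# trusted black box: its tests already ran, so we don't rerun them,
# just as we don't rerun CPython's test suite.
def make_url(title: str) -> str:
    return f"https://example.com/posts/{slugify(title)}"

# From the application's perspective this is a unit test, even though
# it exercises the module underneath -- an integration, one level down.
def test_make_url():
    assert make_url("Hello World") == "https://example.com/posts/hello-world"

test_slugify()
test_make_url()
```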
In your case you hope the in-memory database matches the production one closely enough that you can write fast, isolated unit tests on your application logic. You can trust this works because something else unit-tested the in-memory database and integration-tested the db client against the various db backends.
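As a minimal sketch of that pattern, using the stdlib's in-memory SQLite as the stand-in database (the schema and function are hypothetical):

```python
import sqlite3

# Application logic under test; it accepts any DB-API connection,
# so the same code runs against the production backend.
def count_active_users(conn) -> int:
    row = conn.execute("SELECT COUNT(*) FROM users WHERE active = 1").fetchone()
    return row[0]

# Fast, isolated test against an in-memory database. We trust that
# sqlite3 itself is tested elsewhere, and that a separate integration
# suite checks the db client against the real production backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("ada", 1), ("bob", 0), ("cy", 1)],
)
assert count_active_users(conn) == 2
```

The risk, of course, is exactly the one named above: the test is only as trustworthy as the in-memory database's fidelity to the production one.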
Lol same, writing SQL and directly wrangling async connection pools has always seemed way easier to me than trying to jam SQLAlchemy into whatever hole I'm working with.
I think the idea here is that your first approach is what you think is correct. However, there's a chance the model is just outputting text that confirms your incorrect approach.
The second one is a different perspective that is supposed to be obviously wrong. But what if it isn't actually obviously wrong, and the model ends up outputting text confirming that the answer you thought was wrong is actually correct?
The third one is then a prompt that forces a contradiction between the two approaches you proposed, pushing the model to identify the correct answer, or at least point you in the right direction.
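The three-prompt pattern could be sketched like this (the wording and the raw-SQL-vs-ORM example are illustrative, not a known-good template):

```python
# Prompt 1: states the approach you believe is correct. The risk is
# the model just agreeing with your framing.
prompt_confirm = (
    "I think writing raw SQL with my own connection pooling is the "
    "right call for this service. Explain why this is correct."
)

# Prompt 2: a perspective you think is obviously wrong. If the model
# makes a strong case for it, maybe it wasn't so wrong after all.
prompt_devil = (
    "I think using SQLAlchemy's ORM everywhere is the right call for "
    "this service. Explain why this is correct."
)

# Prompt 3: force the two answers into contradiction, so the model has
# to adjudicate instead of confirming whichever framing it was handed.
prompt_contrast = (
    "Here are two arguments for opposite approaches:\n"
    f"A) {prompt_confirm}\n"
    f"B) {prompt_devil}\n"
    "They can't both be right. Which one is wrong, and why?"
)
```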