For me, dev testing is something I use directly in the hot feedback loop of writing code. Typically, I'll run it after every change and then manually inspect the output for quality assurance. It could be as simple as refreshing the browser or re-running a CLI tool and spot-checking the output. Importantly, dev tests for me are not fully fleshed out - there are gaps in both the input and output specifications that preclude full automation (yet), which means my judgement is still in the loop.
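In practice, a dev test in this sense can be nothing more than a throwaway script that prints output for eyeballing. Here's a minimal sketch - `slugify` is a hypothetical work-in-progress function, not anything from a real codebase:

```python
# dev_check.py -- throwaway dev test: run it, eyeball the output, tweak, repeat.
# `slugify` is a hypothetical function still under development.

def slugify(title: str) -> str:
    """Turn a post title into a URL slug (work in progress)."""
    return "-".join(title.lower().split())

# Deliberately no assertions: the output spec isn't pinned down yet
# (what about punctuation? unicode?), so my judgement fills the gap.
for title in ["Hello, World!", "Dev Testing 101", "  extra   spaces  "]:
    print(f"{title!r:25} -> {slugify(title)!r}")
```

The point isn't the code itself but the loop: edit, run, read the printed output, decide with human eyes whether it looks right.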
Not so with CI tests. Input and output are 100% specified, and no manual intervention is even possible.
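For contrast, here's a sketch of what the CI end of the spectrum looks like: exact inputs, exact expected outputs, plain asserts, zero human in the loop. The `slugify` function is again hypothetical, inlined so the example is self-contained:

```python
# test_slugify.py -- CI-style test: input and expected output fully specified.
# `slugify` is a hypothetical function, inlined here for a runnable example.

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def test_slugify_fully_specified():
    # Every case pins down the exact expected output -- no eyeballing,
    # no judgement call. A mismatch fails the build.
    cases = {
        "Dev Testing 101": "dev-testing-101",
        "  extra   spaces  ": "extra-spaces",
    }
    for title, expected in cases.items():
        assert slugify(title) == expected

if __name__ == "__main__":
    test_slugify_fully_specified()
```

A test only graduates to this form once both sides of the specification are nailed down; until then it stays a dev test.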
There are some problems where "correct" can never be fully defined. Think of any feature with an implied aesthetic component. You can't just unit test the code, brush off your hands, and toss garbage over the wall for QA or your customers to pick up!
I use this technique mainly to avoid over-reliance on automated testing. I've seen far too many painful situations where the unit tests pass but the core functionality is utterly broken. It's like people don't even bother to run the damn program they're writing! Unacceptable and embarrassing - if encouraging ad-hoc tests and up-front QA helps solve this, it's a huge win IMO.