I don't get the point of a runtime type checker.
It adds a lot of noise with decorators everywhere, and since the checks only fire when code actually runs, you need to exercise every code path to get full coverage — effectively 100% test coverage.
At that point just use Rust, or am I missing something?
It looks like you can call a single function near the beginning of your Python program / application that sets up type checking for the whole thing at startup time. IDK for sure, I haven't used the library.
Someone already using Python doesn't "just use Rust": there are very clear pros and cons, and people using Python are doing so for a reason. That said, it is sometimes helpful to have type checks in Python.
> Use beartype to assure the quality of Python code beyond what tests alone can assure. If you have yet to test, do that first with a pytest-based test suite, tox configuration, and continuous integration (CI). If you have any time, money, or motivation left, annotate callables and classes with PEP-compliant type hints and decorate those callables and classes with the @beartype.beartype decorator.
Don't get me wrong, I think static type checking is great. But if you need to add a decorator on top of every class and function AND maintain 100% code coverage, that does not sound like "zero-cost" to me. I can hardly think of a greater cost — and all that just to keep dynamically typing your code and get some guarantees about external dependencies that ship no type hints.
I prefer mypy, but pyright sometimes supports new PEPs before mypy does, so if you like experimenting with cutting-edge Python features, you may have to switch from time to time.