Sure, but isn't it nice that you don't even have to read the paper to play with the full model, to see if the claims hold up, using the same Python tools you and everyone else uses?
To be honest, that's not really true for many ML implementations from papers. Often I'll pull a repo and it'll be half-baked, or only work on python 2.4, or have a broken dependency on something or other (zmq was a common issue for a hot sec), or not publish weights, or be in tf when I want torch, or be in torch when I want tf, etc.
Yes, I agree: a lot of that code really is half-baked! But good papers tend to come with better-than-average code, and ~100% of the time it's in Python, not some other language.