
Yeah, but I suspect there’s something more going on than Rust magic. They’re claiming a savings of about an hour and a half, leaving something like 4% of the original time. That sounds like an architecture change, or more likely a whole lot of caching.



Your assignment, should you choose to accept it, is to write a small, simple Python program. Maybe something that builds a list of five or six elements, runs a simple map over it, then converts it to a dict somehow. Something simple.

Then run it through gdb and watch every single CPU opcode that is executed for the code you wrote. Not Python opcodes; CPU opcodes. Print something first and trigger off of that, so we're not counting Python startup costs here.

Then do the same with a roughly equivalent compiled program. Doesn't have to be Rust; C or whatever you know would do just fine.
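Concretely, something like this is all it takes (a minimal sketch; the file name and the exact gdb incantation are just one way to do it):

    # demo.py -- the kind of toy program described above
    print("start")  # something to break on so we skip interpreter startup

    nums = [1, 2, 3, 4, 5, 6]
    doubled = map(lambda n: n * 2, nums)  # a simple map over the list
    as_dict = dict(enumerate(doubled))    # ...then convert it to a dict somehow

    print(as_dict)

Run it under gdb with something like "gdb --args python3 demo.py", use "catch syscall write" to stop around that first print, then single-step CPU instructions with "stepi" and watch how many go by before the second print.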

Yes. You can get a big speed improvement over pure Python simply by moving it to a compiled language and making effectively no other changes. Python is slow. This is not a value judgment. It is a big mistake to think that this is somehow an emotional judgment about Python stemming from anger or hatred, or that anyone talking about Python's speed issues is just a hater. I like Python just fine. But you will understand how it is not a value judgment when you run the Python program through gdb. It is simply an engineering reality that anyone using Python needs to understand, and you can get yourself into a lot of trouble not understanding it.
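You don't even need gdb to see the first layer of this. The dis module shows how many interpreter operations a trivial snippet turns into, and each of those bytecodes costs many CPU instructions of dispatch inside CPython's eval loop (a minimal sketch, nothing to do with uv specifically):

    import dis

    def tiny():
        nums = [1, 2, 3]
        return dict(enumerate(n * 2 for n in nums))

    # Print the bytecode the interpreter has to dispatch for this one-liner.
    # Each op below is itself a bundle of CPU instructions once you're
    # inside the eval loop, which is what you'd see in gdb.
    dis.dis(tiny)

A compiled language collapses most of that dispatch into straight-line machine code, which is where the "free" speedup of a straight port comes from.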

Is there more going on than a straight port? Possibly. But I don't find it unbelievable that the difference comes from a straight port with no major changes.


If the first step of your build process is creating a new pip venv from scratch, it's entirely believable to me. Pip resolves packages sequentially, so there's a lot of low-hanging fruit even without yoinking a state-of-the-art constraint solver.
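To get a feel for how much is on the table from concurrency alone, here's a sketch (not pip's or uv's actual code; the package names and URL pattern are just stand-ins):

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical set of index pages to fetch -- no real resolver logic here.
    urls = [f"https://pypi.org/simple/{name}/"
            for name in ("requests", "numpy", "flask", "pandas", "boto3")]

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return len(resp.read())

    # Sequential: roughly the cost model when packages are handled one at a time.
    sizes_sequential = [fetch(u) for u in urls]

    # Concurrent: the low-hanging fruit -- overlap the network waits.
    with ThreadPoolExecutor(max_workers=8) as pool:
        sizes_concurrent = list(pool.map(fetch, urls))

The real tools obviously do far more than fetch pages (metadata parsing, constraint solving, building wheels), but the waiting-on-the-network part parallelizes the same way.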

As a concrete example, I have a Raspberry Pi 4 I use for "what's the most pathetic hardware a user might try". It takes 5-10 minutes to build up a fresh venv even when it has all of the packages cached. If I nuke the caches and make it download everything, it's 20+ minutes. For comparison, a modern laptop takes ~1 minute.


Is that on an SD card? My expectation is that SD cards are garbage and any number of IO access patterns break down vs. an SSD.


I threw in a moderately fast USB drive (~90 MB/s after Pi 4 overhead) because I got sick of SD cards dying every 6-12 months. The Pi 4 is very speed limited. Even if you tried to use an SSD, I think the highest I've seen is around 250 MB/s with an NVMe drive that gets a few GB/s in a real computer. In theory it should get to something like 500 MB/s at SATA limits.


Converting Python code to Rust for data processing can get you a 20x perf increase (single threaded).
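If you want to sanity-check a number like that yourself, a pure-Python baseline is easy to write and then port line for line; the 20x is the commenter's figure, not something this snippet proves on its own:

    import time

    # Toy "data processing" pass: parse, filter, aggregate a million rows.
    rows = [f"{i},{i % 7}" for i in range(1_000_000)]

    start = time.perf_counter()
    total = 0
    for row in rows:
        a, b = row.split(",")
        if int(b) == 3:
            total += int(a)
    elapsed = time.perf_counter() - start

    print(f"sum={total} in {elapsed:.3f}s")  # compare against a 1:1 compiled port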


I guess the uv authors don't want to maintain a better algorithm in Python, then.


I don’t know how you can seriously imply that Python could be just as fast as Rust if the authors of uv cared enough.

That seems like a disingenuous framing of why uv exists, and it doesn't pass the smell test that Python could just be as fast as a language compiled to native machine code. Python and Ruby and all that are just so god-awful slow that it's completely obvious a rewrite in native code would have these sorts of benefits.



