
Machine learning productionization: https://paulbridger.com


Absolutely yes.

I write fairly deep ML performance tuning articles at https://paulbridger.com, and the (many) hours I've spent on each article have been hugely worth it.

Many people reach out to me via this work, and by the time we talk they already see me as an expert or want to work with me.

I need to blog more, thanks for the reminder.


Before you start optimizing runtime performance: measure, trace, inspect, or whatever is appropriate to understand current performance.
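For what it's worth, here's a minimal sketch of that first measurement step using PyTorch's built-in profiler (the model and input below are made-up stand-ins for whatever you're actually tuning):

  import torch
  from torch import nn
  from torch.profiler import profile, ProfilerActivity

  # Stand-in model and batch, purely illustrative.
  model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
  inputs = torch.randn(64, 1024)

  activities = [ProfilerActivity.CPU]
  if torch.cuda.is_available():
      model, inputs = model.cuda(), inputs.cuda()
      activities.append(ProfilerActivity.CUDA)

  # Measure a few inference passes before touching any code.
  with profile(activities=activities) as prof:
      with torch.no_grad():
          for _ in range(10):
              model(inputs)

  # The table shows where time actually goes; attack the top entries first.
  print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))

Then optimize the hotspots the numbers point to, not the ones you guessed.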


Perhaps you are looking for something like this?

docker run -d -p 5000:5000 --name registry registry:2

https://docs.docker.com/registry/#:~:text=The%20Registry%20i....
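Once the registry is running, using it is just a tag and a push against localhost:5000 (the image name here is a placeholder):

  docker tag myimage localhost:5000/myimage
  docker push localhost:5000/myimage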


I used to love doing this with Clojure; it's an awesome way to boost productivity by a good chunk.

It’s less about saving the time to re-run something, and more about removing conceptual overhead (I think).


Yes! I do this and love it. Works locally or via SSH with no difference at all.


Nice one! I've long been interested in the ONNX serving path.


Yeah. A 2080Ti doesn't fit in your pocket or in your AR glasses, but the same techniques and tools scale down.


BTW, this is pumping the same video file through the network - not just a single file. I don't measure latency, but this is not a deep pipeline so it's easy to calculate.


Ok, I guess I misread that part.


Mobile phones, definitely, since most of them have pretty powerful GPUs these days.

