This is great! I’ve been diving deep into local models that can run on this kind of hardware. I’ve been building this exact same thing, but for complete recordings of meetings and such, because why not? I can even run a low-end model with ollama to refine and summarize the transcription, and combine it with smaller embedding models for modern semantic search (sketch below). It has surprised me how well this works, and how fast it actually runs locally.
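For anyone curious, here’s roughly the shape of that pipeline as a minimal sketch. It assumes Ollama is running locally on its default port (11434) and uses its documented /api/generate and /api/embeddings REST endpoints; the model names (llama3.2, nomic-embed-text) and the prompt are just examples, swap in whatever you’ve pulled:

    # Summarize a transcript with a local Ollama model, then embed
    # chunks for semantic search. Assumes a local Ollama server.
    import math
    import requests

    OLLAMA = "http://localhost:11434"

    def summarize(transcript: str, model: str = "llama3.2") -> str:
        """Ask a small local model to refine and summarize a transcript."""
        resp = requests.post(f"{OLLAMA}/api/generate", json={
            "model": model,
            "prompt": f"Refine and summarize this meeting transcript:\n\n{transcript}",
            "stream": False,
        })
        resp.raise_for_status()
        return resp.json()["response"]

    def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
        """Get an embedding vector for one chunk of text."""
        resp = requests.post(f"{OLLAMA}/api/embeddings", json={
            "model": model,
            "prompt": text,
        })
        resp.raise_for_status()
        return resp.json()["embedding"]

    def search(query: str, chunks: list[str]) -> list[tuple[float, str]]:
        """Rank transcript chunks by cosine similarity to the query."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
        q = embed(query)
        return sorted(((cos(q, embed(c)), c) for c in chunks), reverse=True)

In practice you’d cache the chunk embeddings rather than recompute them per query, but even this naive version is surprisingly quick on modest hardware.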

Hopefully we will see even more locally run AI models shipped as a complete package in the future.


