This looks really useful! I'll definitely give it a try at the next opportunity. One improvement I would suggest for the future is adding an option for a locally running LLM, such as Llama or Mistral.


That would be very simple to implement; the reason I didn't think of it is probably that Llama runs so slowly on my machine!

Thank you for the feedback and the suggestion.
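For anyone curious, here's a minimal sketch of what that option could look like, assuming the tool calls an OpenAI-style chat API from Python. The flag, helper names, and model names below are illustrative, not the project's actual code; Ollama is just one common way to run Llama or Mistral locally behind an OpenAI-compatible endpoint:

    # Illustrative sketch only -- the `local` flag, helper names, and model
    # choices are assumptions, not the project's actual code. Ollama serves
    # local Llama/Mistral models through an OpenAI-compatible API at
    # http://localhost:11434/v1.
    from openai import OpenAI

    def make_client(local: bool) -> OpenAI:
        if local:
            # The local server ignores the API key, but the client requires one.
            return OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
        return OpenAI()  # reads OPENAI_API_KEY from the environment

    def complete(prompt: str, local: bool = False, model: str | None = None) -> str:
        client = make_client(local)
        resp = client.chat.completions.create(
            model=model or ("llama3" if local else "gpt-4o-mini"),
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        # e.g. run `ollama pull llama3` and start the Ollama server first
        print(complete("Say hello.", local=True))

Since most local servers (Ollama, llama.cpp's server, vLLM) speak the same OpenAI wire format, a single base_url switch is usually all it takes.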



