Whisper is great. You can get much faster results by running the tiny model. I used it for podcast transcription and it's much faster than the medium model without the quality being any worse - for some podcast episodes the transcripts come out identical.
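
For anyone curious, a minimal sketch of that workflow with the openai-whisper Python package (the model name is real, the file path is just a placeholder):

    import whisper

    # Load the smallest model; "tiny.en" is the English-only variant.
    model = whisper.load_model("tiny")

    # Transcribe a local audio file (placeholder path).
    result = model.transcribe("episode.mp3")
    print(result["text"])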


If speed is important, you're much better off using a larger model and whisper.cpp.
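
To make that concrete, here's a rough sketch of driving whisper.cpp from Python with a larger model and more threads. The binary and model paths are assumptions - adjust them for your checkout, and note whisper.cpp expects 16 kHz WAV input:

    import subprocess

    # Assumed paths: the whisper.cpp example binary and a GGML model file.
    WHISPER_CPP_BIN = "./main"
    MODEL = "models/ggml-small.en.bin"

    # -m picks the model, -f the input WAV, -t the number of threads.
    subprocess.run(
        [WHISPER_CPP_BIN, "-m", MODEL, "-f", "episode.wav", "-t", "8"],
        check=True,
    )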


Wow, thank you! That's a nice speedup indeed. With whisper I get

    33,53s user 2,05s system 443% cpu 8,023 total
with the 'tiny.en' model whereas whisper.cpp gives me

    22,71s user 0,12s system 745% cpu 3,062 total
with the 'base.en' model for a 15s audio clip on an i7-3770 (8 threads).
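
In case anyone wants to reproduce a comparison like this, here's a quick-and-dirty timing harness (the command lines and paths are assumptions, not the exact setup above):

    import subprocess
    import time

    def timed(cmd):
        # Wall-clock time for one transcription command.
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    # Assumed commands: openai-whisper CLI vs. the whisper.cpp example binary.
    py_cmd = ["whisper", "clip.wav", "--model", "tiny.en"]
    cpp_cmd = ["./main", "-m", "models/ggml-base.en.bin", "-f", "clip.wav", "-t", "8"]

    print(f"whisper tiny.en:     {timed(py_cmd):.2f}s")
    print(f"whisper.cpp base.en: {timed(cpp_cmd):.2f}s")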


Awesome! Thanks for posting the stats.

In my workflows I've found rare but noticeable quality differences between the model sizes. So when practical I try to use the larger ones.
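
If it helps, one way to eyeball those differences is to transcribe the same clip with two sizes and diff the output - a sketch assuming the openai-whisper package and a placeholder file name:

    import difflib
    import whisper

    AUDIO = "episode.mp3"  # placeholder path

    # Transcribe the same audio with two model sizes.
    tiny_text = whisper.load_model("tiny").transcribe(AUDIO)["text"]
    medium_text = whisper.load_model("medium").transcribe(AUDIO)["text"]

    # Word-level diff to surface where the transcripts disagree.
    for line in difflib.unified_diff(tiny_text.split(), medium_text.split(), lineterm=""):
        print(line)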



