Hacker News
BatteryMountain on Jan 19, 2023 | parent | context | favorite | on: Which GPU(s) to get for deep learning
With this card you can also run OpenAI's Whisper with the large model (the multilingual one!), as it requires 10 GB.
crabbycarrot on Jan 19, 2023 | next [–]
Highly recommend quantizing the model (https://pytorch.org/tutorials/recipes/recipes/dynamic_quanti...). I converted the large model to int8, and I'm able to run it at 5x real-time on CPU with pretty low RAM requirements and still very good quality.
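The recipe in the linked tutorial is PyTorch's dynamic quantization, which stores the weights of selected layer types as int8 and dequantizes them on the fly during inference. A minimal sketch of the idea, using a small stand-in model rather than Whisper itself (the commenter applied the same call to the large Whisper model):

```python
import torch
import torch.nn as nn

# Small stand-in network; dynamic quantization targets Linear (and LSTM)
# layers, which dominate Whisper's transformer blocks.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Convert Linear weights to int8; activations remain float and are
# quantized dynamically per batch. CPU-only inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference works exactly as before, with a smaller memory footprint.
x = torch.randn(1, 64)
with torch.no_grad():
    out = quantized(x)
print(out.shape)
```

Because only the weights are stored quantized, no calibration data is needed, which is what makes this the lowest-effort way to shrink a model's RAM footprint for CPU inference.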
Const-me on Jan 19, 2023 | prev [–]
My implementation of Whisper uses slightly over 4 GB of VRAM running their large multilingual model:
https://github.com/Const-me/Whisper