reissbaker on March 22, 2023 | on: Show HN: Finetune LLaMA-7B on commodity GPUs using...
This is awesome! I noticed it said a prereq is >16GB VRAM — is that >= 16GB, or is it really explicitly greater than 16? Would be sweet to be able to finetune locally on, say, a 3080.
capableweb on March 22, 2023
I gave this a try and it seems to max out at about 12GB of VRAM on an RTX 3090 Ti.
capableweb on March 22, 2023
Tried the 30b-hf weights too, but they were too much for the 24GB available. 13b-hf works fine, maxing out at 17GB.
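For context on why the numbers come in this low: tools like this typically use the Hugging Face transformers + peft stack with 8-bit weights and LoRA adapters. A minimal sketch of that recipe (the checkpoint path, LoRA hyperparameters, and target module names below are illustrative assumptions, not necessarily the linked project's settings):

    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

    # Hypothetical local path to LLaMA weights converted to HF format.
    model_name = "./llama-13b-hf"

    tokenizer = LlamaTokenizer.from_pretrained(model_name)
    model = LlamaForCausalLM.from_pretrained(
        model_name,
        load_in_8bit=True,   # bitsandbytes int8: ~1 byte per parameter for the frozen weights
        device_map="auto",
    )
    model = prepare_model_for_int8_training(model)

    # Train small low-rank adapters instead of the full model; only these
    # adapters (plus their optimizer state) need VRAM beyond the frozen weights.
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # attention projections only (assumed)
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # a fraction of a percent of total params

Under that setup the frozen weights cost roughly 1 byte per parameter (about 7GB for 7B, 13GB for 13B, 30GB for 30B), and only the small adapters carry gradients and optimizer state, which lines up with the roughly 12GB and 17GB peaks reported above and with 30B not fitting in 24GB.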