Hacker News
rnosov on March 3, 2023 | on: Facebook LLAMA is being openly distributed via tor...
Generally, you'll need to multiply the model size by two to get the required amount of video RAM. There are four sizes, so you might get away with an even smaller GPU for, say, the 13B model.
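The "multiply by two" rule comes from storing weights in fp16 (2 bytes per parameter). A rough sketch of that arithmetic for the four LLaMA sizes, ignoring activation and KV-cache overhead, so actual usage runs somewhat higher:

```python
def weights_vram_gb(params_billions, bytes_per_param=2):
    """Back-of-envelope VRAM needed just to hold the weights.

    Assumes fp16 storage (2 bytes/parameter) and treats 1 GB as 1e9
    bytes for a quick estimate; runtime overhead is not included.
    """
    return params_billions * 1e9 * bytes_per_param / 1e9

# The four LLaMA parameter counts, in billions.
for size in (7, 13, 33, 65):
    print(f"{size}B model: ~{weights_vram_gb(size):.0f} GB of VRAM")
```

By this estimate the 13B model needs roughly 26 GB just for weights, which is why it fits on smaller hardware than the 65B variant.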