janmo on April 19, 2023 | on: StableLM: A new open-source language model
You don't need a GPU to run the model; you can use your RAM and CPU, but it might be a bit slow.
cmsj on April 19, 2023
It's very slow, and for the 7b model you're still looking at a pretty hefty RAM hit whether it's CPU or GPU. The model download is something like 40GB.
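For a rough sense of why the RAM hit is hefty, here is a back-of-the-envelope estimate of the weight footprint of a 7B-parameter model at common precisions; this is only a sketch, and the exact size and precision of the StableLM download are not confirmed here:

    # Rough weight-memory estimate for a 7B-parameter model at common precisions.
    # Activations and the KV cache add further overhead on top of these figures.
    params = 7e9
    for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("4-bit", 0.5)]:
        print(f"{name}: ~{params * bytes_per_param / 1e9:.0f} GB of weights")
    # fp32: ~28 GB, fp16: ~14 GB, 4-bit: ~4 GB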
MacsHeadroom on April 20, 2023
There's already support in llama.cpp. It runs faster than ChatGPT on my old laptop CPU.
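To make that concrete, this is a minimal sketch of CPU-only inference through the llama-cpp-python bindings to llama.cpp, assuming the StableLM weights have already been converted and quantized into a file llama.cpp can load; the model path, context size, and thread count below are illustrative, not confirmed values:

    # Minimal CPU-only inference sketch via the llama-cpp-python bindings.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/stablelm-7b-q4_0.bin",  # hypothetical converted/quantized weight file
        n_ctx=2048,    # context window size
        n_threads=4,   # CPU threads; tune for your machine
    )

    out = llm("Q: What is StableLM? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])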