XMasterrrr | 3 months ago | on: Ask HN: Is anyone doing anything cool with tiny la...
I think I know what he means. I use AI Chat. I load Qwen2.5-1.5B-Instruct with the llama.cpp server, fully offloaded to the CPU, and then I configure AI Chat to connect to the llama.cpp endpoint.
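Roughly, the setup looks something like this. The model filename, port, and exact config keys below are just from my setup and are illustrative, so double-check them against the llama.cpp and aichat docs:

    # Start llama.cpp's OpenAI-compatible server with a local GGUF of the model.
    # No -ngl flag, so all layers stay on the CPU.
    llama-server -m qwen2.5-1.5b-instruct-q4_k_m.gguf --port 8080

    # ~/.config/aichat/config.yaml -- point aichat at that local endpoint
    clients:
      - type: openai-compatible
        name: llamacpp
        api_base: http://localhost:8080/v1

After that, aichat's shell-assistant mode (the -e/--execute flag, if I remember right) just routes through the local model instead of a hosted API.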
Check out the demo they have below:
https://github.com/sigoden/aichat#shell-assistant