
Ollama will (nearly always) work provided you have enough RAM. I was actually pretty surprised that it didn't work on my N5105 (which has 16GB), because Ollama relies on AVX instructions, which that CPU lacks...
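If you want to check whether a box is affected before installing, you can look at the CPU flags yourself. A minimal sketch (Linux only, reading /proc/cpuinfo; Ollama's own CPU detection may well differ):

```python
# Minimal check (Linux only) for whether the CPU advertises AVX.
# This just reads /proc/cpuinfo; it is not how Ollama itself detects support.
def has_avx(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # The "flags" line lists feature tokens like avx, avx2, sse4_2, ...
                return "avx" in line.split()
    return False

if __name__ == "__main__":
    print("AVX supported" if has_avx() else "No AVX -- current Ollama builds may not run")
```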



Thanks! Someone else mentioned llama.cpp, but it appears that Ollama is just a GUI frontend for llama.cpp (which is good because I find GUIs easier). I'll hopefully set it up soon!


It's not a GUI, it's a CLI, but it's very easy to use: `ollama run {model}`. You can also run `ollama serve`, which serves an API, and then you can use or build a simple GUI on top of it.
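For the API route, `ollama serve` listens on localhost:11434 by default. A minimal sketch of calling it from Python; the model name "llama2" and the prompt are just examples, use whatever model you've pulled:

```python
# Sketch: call the HTTP API exposed by `ollama serve` (default port 11434).
import json
import urllib.request

def generate(prompt: str, model: str = "llama2") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With "stream": False the server returns one JSON object
        # whose "response" field holds the full completion.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
```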


Thanks, I’ll keep that in mind!


The next Ollama version will support non-AVX CPUs.



