
I've got an aging 2080 Ti and a Ryzen 3800X with 96GB RAM. Is there any point in trying to mess with the GPU?

Haven't really been able to justify upgrading to a 4090 or similar, given how few new games I play these days.



Yes, offloading some layers to the GPU and VRAM should still help. And the 2080 Ti's 11GB isn't bad.

If you're on Linux or WSL2, I would run oobabooga with --verbose. Load a GGUF, start with a small number of GPU layers and creep up, keeping an eye on VRAM usage; roughly like the sketch below.
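For reference, the loop looks something like this. A sketch only, assuming text-generation-webui's llama.cpp loader (where --verbose and --n-gpu-layers are real flags); the model filename is a placeholder:

  # terminal 1: keep an eye on VRAM while you experiment
  watch -n 1 nvidia-smi

  # terminal 2: start with a few GPU layers and creep up until VRAM
  # is nearly full (model filename below is a placeholder)
  python server.py --verbose --model mistral-7b-instruct.Q4_K_M.gguf --n-gpu-layers 8
  python server.py --verbose --model mistral-7b-instruct.Q4_K_M.gguf --n-gpu-layers 16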

If you're on Windows, you can try LM Studio and fiddle with the layer count while you monitor VRAM usage, though Windows may do some weird things with shared RAM.

Would be curious to see the difference. Specifically, whether there's a complexity tax in offloading that makes the CPU alone faster. In my experience with a 3060 and a mobile 3080, though, offloading what I can makes a big difference.


> Specifically, whether there's a complexity tax in offloading that makes the CPU alone faster

Anecdotal, but I recently played with a bunch of models on a machine with a 16GB AMD GPU, 64GB of system memory, and a 12-core CPU. I found offloading significantly sped things up with large models, but there seemed to be an inflection point: as the models approached the limits of the system, offloading actually slowed things down versus just running on the CPU.
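If anyone wants to reproduce that inflection point, llama.cpp's llama-bench tool can sweep offload levels in one run (-ngl takes a comma-separated list of layer counts and reports tokens/sec for each). A rough sketch; the model path is a placeholder and the build needs GPU support (CUDA or ROCm):

  # sweep GPU layer counts and compare tokens/sec at each level
  # (model path is a placeholder)
  ./llama-bench -m ./models/mixtral-8x7b.Q4_K_M.gguf -ngl 0,8,16,24,33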


I had only CUDA installed, and it took two ollama shell commands in WSL2 to go from quite literally zero local-LLM experience to running Mixtral fast enough on a 1070 and a 12700K. Go for it.
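For anyone following along, the two commands were presumably along these lines: the install one-liner from ollama.com, then a run (ollama run pulls the model on first use):

  # install ollama under Linux/WSL2, then pull and run Mixtral
  curl -fsSL https://ollama.com/install.sh | sh
  ollama run mixtral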



