It's almost a myth these days that you need top-end GPUs to run models. Smaller models (say, under 10B parameters with quantization) run fine on a CPU. Of course you won't get hundreds of tokens per second, but you'll probably see around 10, which can be sufficient depending on your use case.
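For instance, here's a minimal sketch of CPU-only inference with llama-cpp-python and a quantized GGUF model. The model path, thread count, and prompt are just placeholders, so adjust them for your machine:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load a quantized GGUF model (e.g. a Q4 quant of a ~7B model).
# The file path below is a placeholder for whatever model you have locally.
llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window
    n_threads=8,   # roughly match your physical core count
)

# No n_gpu_layers set, so everything runs on the CPU.
out = llm("Explain quantization in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```

A 4-bit quant of a 7B model weighs in at roughly 4 GB, so it fits comfortably in RAM on most modern laptops.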