That's a fair recommendation, but I think there's a massive difference between hands-on local work and this. At first it's all iteration, and even at its fastest, Hugging Face won't keep up with a local loop. It's fair to try, but I think you really need it hands-on and local.
Thank you, I'm new to the site and didn't know about Spaces. I'm more interested in trying to run a model locally. All I've ever run locally are toy implementations and 20-line DQN experiments that never seemed to converge.
If you want to run it locally, the Automatic1111 repo that was mentioned does have the most up-to-date models, but it's also messy and breaks often. Is your GPU up to the task, though?