I mean, can you expect a vibecoding company to do stuff with 0 downtime? They brought the models down and are now panicking at HQ since there's no one to bring them back up
This made me laugh only because I imagine there could possibly be some truth to it. This is the world we are in. Maybe they all loaded codex to fix their deploy? ;)
I had terrible results during the holidays -- it wasn't slow, but it seemed like they were dealing with the load by quantizing in spots: there were entire chunks of days when the results were so terrible I gave up and switched to Gemini or Codex via opencode.
I know it's on me for thinking this -- since the domain is fly.io -- but I was really hoping this was some local solution.
Not self-hosted, just local. A thin command-line wrapper around something (docker? bubblewrap?) that gave me a sort of containerized "VM" experience on my local machine using CoW.
Yeah, I can make an LXC container called "ai" that has a read-only SSH key and a few pre-cloned projects. When I want to work, I clone and start it, and get the same effect on my own hardware, for free. Just need a small wrapper to make it a bit more streamlined.
I strongly agree with this.
At $WORK I usually work on projects comprising many small bits in various languages - some PowerShell here, some JS there, along with a "build process" that helps minify each and combine them into the final product.
After switching from shell scripts to Just and hitting a ton of issues along the way (how does quoting work in each system? How does argument passing? Environment variables?), I simply wrote a single Python script with a uv shebang and PEP 723 inline dependencies. Typer takes care of the command-line parsing, and each "build target" is a simple, composable, readable Python function that takes arguments and can call other ones if it needs to. Can't be simpler than that, and the LLMs love it too.
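For anyone curious, here's a minimal sketch of what that pattern looks like -- the target names and files are made up for illustration, but the uv shebang, PEP 723 header, and composable-function structure are the real shape of it:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = ["typer"]
# ///
"""Hypothetical build script: every target is a plain Python function."""
import typer

app = typer.Typer()


@app.command()
def minify(src: str, out: str = "dist") -> None:
    """Minify one source file into the output dir (stubbed here)."""
    print(f"minifying {src} -> {out}/{src}")


@app.command()
def build(out: str = "dist") -> None:
    """Compose other targets as ordinary function calls -- no shell quoting."""
    for src in ("app.js", "helper.ps1"):
        minify(src, out=out)


if __name__ == "__main__":
    app()
```

Run `./build.py build` (or `./build.py minify app.js`) and uv resolves the inline dependencies on first use; because `build` calls `minify` as a regular function, there's no argument-quoting or environment-variable dance between targets.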