I know this is just an example, but I think it’s emblematic of the main issue I have with the widespread use of LLMs.
Do you mean, “have Gunicorn keep N workers running?” If so, that’s in the manual (the timeout setting that kills silent workers, which defaults to 30 seconds).
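For reference, those knobs live in Gunicorn’s config file. A minimal sketch, with placeholder values (the bind address and worker count are illustrative, not recommendations):

```python
# gunicorn.conf.py -- illustrative sketch, not a tuned production config
bind = "0.0.0.0:8000"   # placeholder listen address
workers = 4             # the "N workers" the master process keeps running
timeout = 30            # seconds of silence before a worker is killed and replaced
                        # (30 is already the default; shown only to make it explicit)
graceful_timeout = 30   # how long a worker gets to finish in-flight requests on restart
```

You would point Gunicorn at it with `gunicorn -c gunicorn.conf.py myapp:app`, where `myapp:app` is a placeholder for your WSGI entry point.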
Or do you mean “have Gunicorn itself be monitored for health, and restarted as necessary?” There are many ways to do that – systemd, orchestration platforms like K8s, bespoke scripts – and all of them have tricky failure modes that a casual copy/paste will not prepare you for.
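To make the “bespoke scripts” point concrete, here is roughly what such a watchdog looks like; the health-check URL and unit name are placeholders, and the comments mark exactly the failure modes a copy/paste won’t warn you about:

```python
#!/usr/bin/env python3
"""Naive Gunicorn watchdog -- a sketch of the 'bespoke script' option, not a recommendation."""
import subprocess
import time
import urllib.request

HEALTH_URL = "http://127.0.0.1:8000/healthz"  # placeholder endpoint; your app must actually serve it
SERVICE = "gunicorn.service"                  # placeholder systemd unit name
CHECK_INTERVAL = 10                           # seconds between probes

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        # Leaky abstraction #1: a failure here could mean a dead app, a slow
        # request, a full socket backlog, or a deploy in progress.
        return False

while True:
    if not healthy():
        # Leaky abstraction #2: restarting on every blip can turn a brief stall
        # into a restart loop -- and nothing restarts *this* script if it dies.
        subprocess.run(["systemctl", "restart", SERVICE], check=False)
    time.sleep(CHECK_INTERVAL)
```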
Blindly using answers from ChatGPT is no different from blindly copying a random SO post, and you are no more prepared for failure when the abstractions leak.
This is also my current worry. If you know the concepts (not the workflow) behind the problem you’re solving, I find it easy to get answers, and you’ll pick up some new knowledge in the process. Even when asking a person, they will often point out the knowledge you lack while providing the answer.
Getting straight answers will be detrimental in the long term, I fear. It feels like living in a box, watching the world on a screen, while the person answering my questions mixes lies and truths.