
Certainly a relatively tight feedback loop, but not too tight. Syntax errors are very tight, but non-negotiable: fix it now.

Test failures are more explicit: you run tests when you want to and deal with the results.

Code review often has a horrible feedback loop, often days after you last thought about the change. I think LLMs can help tighten this, but they can't be Clippy: they can't interrupt you with things that _may_ be problems. You have to be able to stay in the flow.

For most things that make programmers faster, I think deterministic tooling is absolutely key, so you can trust it more or less blindly. I think LLMs _can_ be really useful for helping you understand what you changed and why, and what you may have missed.
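
As a concrete illustration of that last point (a rough sketch, not a recommendation of any particular stack): you could feed the staged diff to an LLM and ask it to summarize what changed and flag anything that looks forgotten, invoked on demand like a test run rather than as an interruption. This assumes the OpenAI Python client and an API key in the environment; the model name and prompt are just placeholders.

    # Rough sketch: summarize a staged git diff with an LLM, on demand.
    # Assumes the `openai` package and OPENAI_API_KEY in the environment.
    import subprocess
    from openai import OpenAI

    def summarize_staged_diff(model: str = "gpt-4o-mini") -> str:
        # Grab whatever is currently staged; empty output if nothing is staged.
        diff = subprocess.run(
            ["git", "diff", "--staged"], capture_output=True, text=True, check=True
        ).stdout
        if not diff.strip():
            return "Nothing staged."

        client = OpenAI()
        response = client.chat.completions.create(
            model=model,  # placeholder model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Summarize this diff: what changed, why it might have "
                        "changed, and anything that looks forgotten (tests, "
                        "docs, dead code). Be brief."
                    ),
                },
                {"role": "user", "content": diff},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(summarize_staged_diff())

The point is that you run it when you choose to, like the tests, so it never breaks your flow.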

Just some random ideas. LLMs are amazing. Incorporating them well is amazingly difficult. What tooling we have now (agentic and all that) feels like early tech demos to me.


