
I used to believe the error rate fallacy, but:

1. Multi-turn agents can correct themselves by taking more steps, so in my experience the reductive error-cascade reasoning here is more wrong than right (see the sketch below).

2. The 99.9% production-reliability requirement is so contextual it's misleading, because the real comparison point is often something like "outage", "dead air", "active incident", "nobody on it", "prework before/around human work", "proactive task no one had time for before", etc.

As with infrastructure as code, CI, and many other kinds of automation, there are mountains of work that simply aren't being done today, and LLMs can do much of it entirely, or at least large swathes of it.
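
A back-of-the-envelope sketch of why point 1 undercuts the naive cascade math (a rough model, assuming independent steps and that the agent can actually detect a failed step and retry it, e.g. via tests or tool feedback; the numbers are purely illustrative):

    # Illustrative numbers only; assumes independent steps and that the
    # agent can detect a failed step and retry it up to k times.

    def naive_cascade(p: float, n_steps: int) -> float:
        """End-to-end success if per-step errors simply compound."""
        return p ** n_steps

    def with_retries(p: float, n_steps: int, k: int) -> float:
        """End-to-end success if each step can be retried up to k times."""
        per_step = 1 - (1 - p) ** k
        return per_step ** n_steps

    p, n = 0.95, 20
    print(f"naive cascade:       {naive_cascade(p, n):.3f}")    # ~0.358
    print(f"3 attempts per step: {with_retries(p, n, 3):.3f}")  # ~0.998

Of course the whole thing hinges on the detection assumption: if the agent can't tell that a step failed, retries don't help and the cascade math is roughly right again.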



How about this: those large swaths get done with LLMs, but instead of spending all that time really reviewing the code (not just a quick LGTM), which would make the time savings moot, you personally assume responsibility for that code being dead wrong sometimes and for the consequences. You cannot blame the AI; as far as anyone is concerned, you wrote the code and signed off on it. As in, legal liability. Would you take that deal?



