
Here's the problem, friend: I have also put in my 10,000 hours. I've been coding as part of my job since 1983. I switched from production coding to testing in 1987, and I ran a team that tested developer tools at Apple and Borland for eight years. I've been living and breathing testing for decades as a consultant and expert witness.

I do not lightly say that I don't trust the work of someone who uses AI. I'm required to practice with LLMs as part of my job. I've developed things with the help of AI. Small things, because the amount of vigilance necessary to do big things is prohibitive.

Fools rush in, they say. I'm not a fool, and I'm not claiming that you are either. What I know is that there is a huge burden of proof on the shoulders of people who claim that AI is NOT problematic, given the substantial evidence that it behaves recklessly. This burden is not satisfied by people who say "well, I'm experienced and I trust it."



Thank you for sharing your deep experience. It's a valid perspective, especially from an expert in the world of testing.

You're right to call out the need for vigilance and to place the burden of proof on those of us who advocate for this tool. You're also right that the burden is not met by simply trusting the AI; that would be foolish. The burden is met by changing our craft to incorporate the oversight necessary to avoid being reckless in our use of this new tool.

Coming from the manufacturing world, I think of it like the transition in the metalworking industry from hand tools to advanced CNC machines and robotics. A master craftsman with a set of metalworking files has total, intimate control. When a CNC machine is introduced, it brings incredible speed and capability, but also a new kind of danger. It has no judgment. It will execute a flawed design with perfect precision.

An amateur using the CNC machine will trust it blindly and create "plausibly good" work that doesn't meet the specifications. A master, however, learns a new set of skills: CAD design, calibrating the machine, and, most importantly, inspecting the output. Their vigilance is what turns a potentially reckless new tool into an asset that lets them create things they couldn't before. They don't trust the tool; they trust their process for using it.

My experience with LLM use has been the same. The "vigilance" I practice is my new craft. I spend less time on the manual labor of coding and more time on architecture, design, and critical review. That's the only way to manage the risks.

So I agree with your premise, with one key distinction: I don't believe tools themselves can be reckless; only their users can. Ultimately, as with any powerful tool, its value is unlocked not by the tool itself but by the disciplined, expert process used to control it.



