I read that article when it was posted on HN, and it's full of bad-faith interpretations of the various objections to using LLM-assisted coding.
Given that the article comes from a person whose expertise and viewpoints I respect, I ran it by a friend, who suggested a more cynical interpretation: the article might have been written to serve the author's own interests. Given the number of bugs that LLMs often introduce, it's not hard to see why a skilled security researcher might encourage people to generate code in ways that lead to cognitive atrophy, and thereby increase his business through security audits.
More charitably, he may simply be someone who has yet to feel the disabling phase of using an LLM.
If he's a security researcher, then I'd imagine much of his LLM use is outside his area of expertise. He's probably not using it to replace his security research.
I think the revulsion experts feel toward LLMs sets in during that phase, when it's clearly mentally disabling you.
Now, I'm a fairly cynical person by trade, but that feels like it's straying into conspiracy-theory territory.
And of course the key point is that the author of that article isn't (IMO) working in the security research field any more; they work at fly.io on the security of that platform.