
Can you help me understand which articles you're referring to? A link to the biggest "AI made me a 10x developer" article you've read would certainly clear this up.


My goal here was not to publicly call out any specific individual or article. I don't want to make enemies, and I don't want to be cast as dunking on someone. I get that this opens me up to the criticism that I'm fighting a strawman; I accept that.

Your article does not specifically say 10x, but it does say this:

> Kids today don’t just use agents; they use asynchronous agents. They wake up, free-associate 13 different things for their LLMs to work on, make coffee, fill out a TPS report, drive to the Mars Cheese Castle, and then check their notifications. They’ve got 13 PRs to review. Three get tossed and re-prompted. Five of them get the same feedback a junior dev gets. And five get merged.

> “I’m sipping rocket fuel right now,” a friend tells me. “The folks on my team who aren’t embracing AI? It’s like they’re standing still.” He’s not bullshitting me. He doesn’t work in SFBA. He’s got no reason to lie.

That's not quantifying it specifically enough to say "10x", but it is saying in no uncertain terms that engineers using AI are moving fast and everyone else is standing still by comparison. Your article was indeed one of the ones I specifically wanted to respond to, as its language directly contributed to the anxiety I described here. It made me worry that maybe I was standing still. To me, the engineer you described as sipping rocket fuel is an example both of the "degrees of separation" concept (it confuses me that you point to a third party and vouch for them; why not simply describe your own workflow?) and of the idea that a quick burst of productivity can feel huge but, in my experience, just doesn't scale.

Again, can you tell me what you've done to no longer have any hallucinations? I'm fully open to learning here. As I stated in the article, I did my best to give full AI agent coding a try; I'm open to being proven wrong and to adjusting my approach.


I believe that quote in Thomas’ blog can be attributed to me. I’ve at least said something to him near enough to it that I don’t mind claiming it.

I _never_ claimed that you could call that a 10x productivity improvement. I’m hesitant to categorize software productivity in numeric terms, as it’s such a nuanced concept.

But I’ll stand by my impression that a developer using AI tools will generate code at a perceptibly faster pace than one who isn’t.

I mentioned in another comment that the major flaw in your productivity calculation is that you aren’t accounting for the work that wouldn’t have gotten done otherwise. That’s where my improvements almost universally come from. I can improve the codebase in ways that weren’t justifiable before, in places that don’t suffer from the coordination costs you rightly point out.

I no longer feel like my peers are standing still, because they’ve nearly uniformly adopted AI tools. And again, as you rightly point out, there isn’t much of a learning curve. If you could develop before them, you can figure out how to improve with them. I found it easier than learning vim.

As for hallucinations, I effectively _never_ experience them. And I do let agents mess with Terraform code (in codebases where I can prevent state manipulation or infrastructure changes outside of the agent’s control).

I don’t have any hints on how. I’m using a pretty vanilla Claude Code setup. But I’m not sure how an agent that can write and run compile/test loops could hallucinate.
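
Roughly, the loop I mean looks like the sketch below; every name in it (AgentEnv, generate, buildAndTest) is hypothetical, not Claude Code’s actual internals:

    // Sketch of the write/compile/test loop. A hallucinated symbol fails
    // the build, the error text is fed back in, and the bad patch never
    // survives to the final output.
    type BuildResult = { ok: true } | { ok: false; errors: string };

    interface AgentEnv {
      generate(task: string, feedback: string): Promise<string>; // the LLM call
      apply(patch: string): Promise<void>;                       // write the patch to the repo
      buildAndTest(): Promise<BuildResult>;                      // compile and run the tests
    }

    async function agentLoop(env: AgentEnv, task: string, maxTries = 5): Promise<string> {
      let feedback = "";
      for (let attempt = 0; attempt < maxTries; attempt++) {
        const patch = await env.generate(task, feedback);
        await env.apply(patch);
        const result = await env.buildAndTest();
        if (result.ok) return patch;  // only code that builds and passes gets out
        feedback = result.errors;     // invented APIs surface here as compile errors
      }
      throw new Error("no green build after max tries; delete and re-prompt");
    }

The model can still invent an API that doesn’t exist, but the invention fails the build on the next pass instead of landing in the final output.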


Appreciate the comment!

> I mentioned in another comment that the major flaw in your productivity calculation is that you aren’t accounting for the work that wouldn’t have gotten done otherwise. That’s where my improvements almost universally come from. I can improve the codebase in ways that weren’t justifiable before, in places that don’t suffer from the coordination costs you rightly point out.

I'm a bit confused by this. There is work that apparently unlocks big productivity boosts but was somehow not justified before? Are you referring to places like my ESLint rule example, where eliminating the startup cost of learning how to write one lets you do things you wouldn't have previously bothered with? If so, I feel like I covered this pretty well in the article, and we probably largely agree on the value of that productivity boost. My point still stands that this doesn't scale. If this is not what you mean, feel free to correct me.
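
For concreteness, the kind of one-off rule I mean is shaped like the sketch below. It's a made-up check (banning stray console.log calls), not the actual rule from my article:

    import type { Rule } from "eslint";

    // Hypothetical one-off lint rule: flag console.log calls in source.
    const noStrayConsoleLog: Rule.RuleModule = {
      meta: {
        type: "problem",
        docs: { description: "disallow console.log in production source" },
        schema: [],
      },
      create(context) {
        return {
          CallExpression(node) {
            const callee = node.callee;
            if (
              callee.type === "MemberExpression" &&
              callee.object.type === "Identifier" &&
              callee.object.name === "console" &&
              callee.property.type === "Identifier" &&
              callee.property.name === "log"
            ) {
              context.report({ node, message: "Remove console.log before shipping." });
            }
          },
        };
      },
    };

    export default noStrayConsoleLog;

The value is real: hand-writing this means an afternoon in the AST docs, while generating it takes minutes. My argument is just that this class of win doesn't scale.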

Appreciate your thoughts on hallucinations. My guess is that the difference between our experiences is that in your code hallucinations are still happening but getting corrected after tests are run, whereas my agents typically get stuck in these write-and-test loops and can't figure out how to solve the problem, or "solve" it by deleting the tests or something like that. I've seen videos and open source AI PRs that end up in loops similar to what I've experienced, so I think what I see is common.

Perhaps that's an indication that we're trying to solve different problems with agents, or using different languages/libraries, and that explains the divergence of experiences. Either way, I still contend that this kind of productivity boost is likely going to be hard to scale and will get tougher to realize as time goes on. If you keep seeing it, I'd really love to hear more about your methods to see what I'm missing. One thing that has been frustrating me is that people rarely share their workflows after making big claims. This is unlike previous hype cycles, where people would share descriptions of exactly what they did ("we rewrote in Rust, here's how we did it", etc.). Feel free to email me at the address on my about page[1] or send me a request on LinkedIn or whatever. I'm being 100% genuine that I'd love to learn from you!

[1] https://colton.dev/about/


> but getting corrected after tests are run, whereas my agents typically get stuck in these write-and-test loops

This may be a definition problem, then. I don’t think “the agent did a dumb thing that it can’t reason out of” is a hallucination. To me a hallucination is a pretty specific failure mode: the model invents something that doesn’t exist. Models still do that for me, but the build/test loop sets them right nearly perfectly. So I guess the model is still hallucinating but the agent isn’t, so the output is unimpacted. So I don’t care.

For the “agent is dumb” scenario, I aggressively delete and re-prompt. This is something I’ve actually gotten much better at with time and experience, both so that it doesn’t happen often and so that I can course-correct quickly. I find it works nearly as well for teaching me about the problem domain as my own mistakes do, but it’s much faster to get to.

But if I were going to be pithy: aggressively deleting work output from an agent is part of their value proposition. They don’t get offended and they don’t need explanations why. Of course, they don’t learn well either; that’s on you.


What I'm saying is that the model will get into one of these loops where it needs to be killed, and when I look at some of the intermediate states and the reasons for failure, they are because it hallucinated things, ran the tests, and got an error. Does that make sense?

Deleting and re-prompting is fine. I do that too. But even one cycle of that often means the whole prompting exercise takes me longer than if I had just written the code myself.


I think maybe this is another disconnect. A lot of the advantage I get does not come from the agent doing things faster than me, though for most tasks it certainly can.

A lot of the advantage is that it can make forward progress when I can’t. I can check to see if an agent is stuck, and sometimes reprompt it, in the downtime between meetings or after lunch before I start whatever deep thinking session I need to do. That’s pure time recovered for me. I wouldn’t have finished _any_ work with that time previously.

I don’t need to optimize my time around babysitting the agent. I can do that in the margins. Watching the agents is low-context work. That adds the capability to generate working solutions during time that was previously barred from producing any.


I've done a few of these hands-off, go-to-a-meeting style interactions. It has worked a few times, but I tend to find that the agents overdo it or cause issues. Like you ask them to fix an error and they add a try/catch, swallow the error, and call it a day. Or the PR changes 1,000 lines when it should change two.
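
To illustrate that first failure mode, here's a made-up example (`Config` and `readAndParse` are hypothetical names, not anything from my codebase):

    interface Config { retries: number }

    // Hypothetical helper standing in for whatever was failing.
    declare function readAndParse(path: string): Promise<Config>;

    // Asked to "fix the error", the agent wraps the call and swallows it.
    // The crash is gone, and so is the signal that anything went wrong.
    async function loadConfig(path: string): Promise<Config | undefined> {
      try {
        return await readAndParse(path);
      } catch {
        return undefined; // callers now silently proceed without a config
      }
    }

The tests go green and the diff looks plausible, which is exactly what makes this easy to miss in review.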

Either way, I'm happy that you are getting so much out of the tools. Perhaps I need to prompt harder, or the codebase I work on has just deviated too much from the stuff LLMs like and simply isn't a good candidate. In any case, I appreciate talking to you!


> One thing that has been frustrating me is that people rarely share their workflows after making big claims

Good luck ever getting that. I've asked for that about a dozen times on here from people making these claims and have never received a response. And I'm genuinely curious as well, so I will continue asking.


People share this stuff all the time. Kenton Varda published a whole walkthrough[1], prompts and all. Stories about people's personal LLM workflows have been on the front page here repeatedly over the last few months.

What people aren't doing is proving to you that their workflows work as well as they say they do. You want proof, you can DM people for their rate card and see what that costs.

[1] https://news.ycombinator.com/item?id=44159166


Thanks for sharing; that is interesting to read through. But it's still just a demo, not live production code. From the readme:

> As of March, 2025, this library is very new, prerelease software.

I'm not looking for personal proof that their workflows work as well as they say they do.

I just want an example of a project in production, with active users depending on the service for business functions, that has been written 1.5/2/5/10/whatever x faster than it otherwise would have been without AI.

Anyone can vibe code a side project with 10 users or a demo meant to generate hype/sales interest. But I want someone to actually have put their money where their mouth is and give an example of a project that would have legal, security, or monetary consequences if bad code was put in production. Because those are the types of projects that matter to me when trying to evaluate people's claims (since those are what my paycheck actually depends on).

Do you have any examples like that?


Dude.

That code tptacek linked you to? It's part of our (Cloudflare's) MCP framework. Which means all of the companies mentioned in this blog post are using this code in production today: https://blog.cloudflare.com/mcp-demo-day/

There you go. This is what you are looking for. Why are you refusing to believe it?

(OK fine. I guess I should probably update the readme to remove that "prerelease" line.)


Lol, misunderstanding a disclaimer in a readme is not refusing to believe something. But my apologies, and I appreciate the clarification.


Yeah, OK, fair: that line in the readme is more prominent than I remember it being.

I never look at my own readmes so they tend to get outdated. :/

Fixing: https://github.com/cloudflare/workers-oauth-provider/pull/59


See, I just shared Kenton Varda describing his entire workflow, and you came back asking that I please show you a workflow that you would find more credible. Do you want to learn about people's workflows, or do you want to argue with them that their workflows don't work? Nobody is interested in doing the latter with you.


I don't think you understood me at all. I don't care about the actual workflow. I just want an example of a project that:

1. Would have legal, security, or monetary consequences if bad code was put in production

2. Was developed using an AI/LLM/Agent/etc that made the development many times faster than it otherwise would have (as so many people claim)

I would love to hear an example like "I used Claude to develop this hosting/ecommerce/analytics/inventory management service that is used in production by 50 paying companies; using an LLM we deployed the project in 4 weeks where it would normally take us 4 months." Or "We updated an out-of-date codebase for a client in half the time it would normally take and have not seen any issues since launch."

At the end of the day I code to get paid. And it would really help to be able to point to actual cases where both money and negative consequences of failure are on the line.

So if you have any examples please share. But the more people deflect the more skeptical I get about their claims.


Seems like I understand you pretty well! If you wanted to talk about workflows in a curious and open way, your best bet would have been finishing that comment with something other than "the more people deflect the more skeptical I get". Stay skeptical! You do you.


Sorry if I came off as prickly, but it wasn't exactly like your parent comment was much kinder.

I mean, it's pretty simple: there are a lot of big claims that I read but very few tangible examples that people share where the project has consequences for failure. Someone else replied with some helpful examples in another thread. If you want to add another one, feel free; if not, that's cool too.


It almost feels like sealioning. People say nobody shares their workflow, so I share it. They say, well, that's not production code, so I point to PRs in active projects I'm using, and they say, well, that doesn't demonstrate your interactive flow. I point out the design documents and prompts, and they say, yes, but what kind of setup do you use, which MCP servers are you running, and I point them at my MCP repo.

At some point you have to accept that no amount of proof will convince someone who refuses to be swayed. It's very frustrating because, while these are wonderful tools already, it's clear that the biggest thing that makes a positive difference is people using and improving them. They're still in relative infancy.

I want to have the kind of conversations we had back at the beginning of web development, when people were delighted at what was possible despite everything being relatively awful.


I don't care about your workflow; that can be figured out from the 10,000 blog posts all describing the same thing. My issue is with people claiming a huge boost in productivity, only for it to turn out that they are working on codebases that have no real consequence if something fails, breaks, or doesn't work as intended.

Since my day job is creating systems that need to be operational and predictable for paying clients, examples of front-end mockups, demos, apps with no users, etc. don't really matter that much at the end of the day. It's like the difference between being a great speaker in a group of 3 friends vs. standing up in front of a 30-person audience with your job on the line.

If you have some examples, I'd love to hear about them because I am genuinely curious.


Sure, I'm working on a database proxy in Rust at the moment; hop on GitHub, same username. It's not pure AI in the PRs, but I know approximately no Rust, so AI support has been absolutely critical. I added support for parsing binary timestamps from PG's wire format, as an example.

I spent probably a day building prompts and tests and getting an example of the failing behavior in Python, and then I wrote pseudocode and had it implement and write comprehensive unit tests in Rust. About three passes and manual review of every line. I also have an MCP that calls out to O3 for a second-opinion code review and passes it back in.

Very fun stuff


I use agentic flows to write code that deals with millions of pieces of financial data every day.

I rolled out a PR yesterday that was a one-shot change to our fundamental storage layer on our hot path. It was part of a large codebase, and that file has existed for four years. It hadn’t been touched in two. I literally didn’t touch a text editor on that change.

I have first-hand experience watching devs do this with payment-processing code that handles over a billion dollars on a given day.


Thanks, it's quite helpful to hear examples like that.

When you say you didn't touch a text editor, do you mean you didn't review the code change, or did you just look at the diff in the terminal/git?


I reviewed that PR in the GitHub web GUI and in our CI/CD GUI. It was one of several PRs that I was reviewing at the time, some by agents, some by people, and some by a mix.

Because I was the instigator of that change, a second code owner was required to approve the PR as well. That PR didn't require any changes, which is uncommon but not particularly rare.

It is _common_ for me to only give feedback to the agents via the GitHub GUI, the same way I do with humans. Occasionally I have to pull the PR down locally and use the full powers of my dev environment to review, but I don't think that is any more common than with people. If anything it's less common: given the tasks the agents get, they typically either do well or I kill the PR without much review.


> But I’ll stand by my impression that a developer using ai tools will generate code at a perceptibly faster pace than one who isn’t.

And this is the problem.

Masterful developers are the ones you pay to reduce lines of code, not create them.


Every. Single. Time. Whenever you say on the internet that you get productivity gains from AI tools, someone will tell you that you weren’t good at your job before the AI tooling.

Perhaps start from the assumption that I have in fact spent a fair bit of time doing this job at a high level. Where does that mental exercise take you with regard to your own position on AI tools?

In fact, you don’t have to assume I’m qualified to speak on the subject. Your retort assumes that _everyone_ who gets improvement is bad at this. Assume any random proponent isn’t.


I think what GP is saying is that in most cases generating a lot of code is not a good thing. Every line of LLM-generated code has to be audited, because LLMs are prone to hallucinations, and auditing someone else's code is much more difficult and time-consuming than auditing your own. A lot of code also requires more maintenance.


The comment is premised on the idea that Kasey either doesn't know what a "masterful developer" is or needs to be corrected back to it.


It's a commentary on one of the things I perceive as a flaw with LLMs, not you.

One of the most valuable qualities of humans is laziness.

We're constantly seeking efficiency gains, because who wants to carry buckets of water, or take laundry down to the river?

Skilled developers excel at this. They are "lazy" when they code: they plan for the future, and they construct code in a way that will make their lives better and easier.

LLMs don't have this motivation. They will gleefully spit out 1000 lines of code when 10 will do.

It's a fundamental flaw.


Now, go back and contemplate what my feedback means if I am well versed in Larry Wall-isms.


Wait, now you're saying I set the 10x bar? No, I did not.


> Wait, now you're saying I set the 10x bar? No, I did not.

I distinctly did not say that. I said your article was one of the ones that made me feel anxious. And it's one of the ones that spurred me to write this article. I demonstrated how your language implies a massive productivity boost from AI. Does it not? Is this not the entire point of what you wrote? That engineers who aren't using AI are crazy (literally the title) because they are missing out on all this "rocket fuel" productivity? The difference between rocket fuel and standing still has to be a pretty big improvement.

The points I make here still apply: there is not some secret well of super-productivity sitting out in the open that luddites are just too grumpy to pick up and use. Those who feel they have gotten massive productivity boosts are being tricked by occasional, rare bursts of productivity.

You said you solved hallucinations; could you share some of how you did that?


I asked for an example of one of the articles you'd read that said that LLMs were turning ordinary developers into 10x developers. You cited my article. My article says nothing of the sort; I find the notion of "10x developers" repellant.


If you really need some, there are some links in another comment. Another one that made me really wonder if I was missing the bus, and which makes 10x claims repeatedly, is this YC podcast episode[1]. But again, I'm not trying to write a point-by-point counter to a specific article or video; I'm responding to a general narrative. If you want that for your article, Ludicity does a better job eviscerating your post than I ever could: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-arti...

I'm trying to write a piece to comfort those that feel anxious about the wave of articles telling them they aren't good enough, that they are "standing still", as you say in your article. That they are crazy. Your article may not say the word 10x, but it makes something extremely clear: you believe some developers are sitting still and others are sipping rocket fuel. You believe AI skeptics are crazy. Thus, your article is extremely natural to cite when talking about the origin of this post.

You can keep being mad at me for not providing a detailed target list; I've said several times that that's not the point of this. You can keep refusing to actually elaborate on how you use AI day to day and solve its problems. That's fine. I don't care. I care a lot more about talking to the people who are actually engaging with me (such as your friend) and helping me to understand what they are doing. Right now, if you're going to keep not actually contributing to the conversation, you're just kinda being a salty guy with an almost unfathomable 408,000 karma going through every HN thread every single day and making hot takes.

[1] https://www.youtube.com/watch?v=IACHfKmZMr8


How much faster does an engine on rocket fuel go than one not on rocket fuel?

The article in question[0] has the literal tagline:

> My AI Skeptic Friends Are All Nuts

How much saner is someone who isn't nuts than someone who is? 10x saner? What do the specific numbers matter, given you're not writing a paper?

You're enjoying the clickbait benefits of using strong language and then acting offended when someone calls you out on it. Yes, maybe you didn't literally say "10x", but you said or quoted things in exactly that ballpark, and it's worthy of a counterpoint like the one the OP has provided. They're both interesting articles with strong opinions that make the world a more interesting place, so I don't know why you're trying to disown the strength with which you wrote yours.

[0] : https://fly.io/blog/youre-all-nuts/


I'm not complaining about "strong language", I'm saying: my post didn't say anything about "10x developers", and was just cited to me as the source of this post's claims about 10x'ing.

I'm not offended at all. I'm saying: no, I'm not a valid cite for that idea. If the author wants to come back and say that "10x developer", a term they used twenty-five times in this piece, was just a rhetorical flourish, something they conjured up themselves in their head, that's great! That would resolve this small dispute neatly. Unfortunately: you can't speak for them.


"10x" is a meme in our industry that relates to developer productivity, and I think it well reflects the sort of productivity gain that someone would be "nuts" to be skeptical about. You might not have specifically said "10x", but I imagine many people left your article believing that agentic AI is the next "10x" productivity boost.

They used it 25 times in their piece, and in your piece you stated that being interested in "the craft" is something people should do on their own time from now on. Strongly implying, if not outright stating, that the processes and practices we've refined over the past 70 years of software engineering need to move aside for the next hotness that has only been out for 6 months. Sure, you never said "10x", but to me it read entirely like you're doing the "10x" dance. It was a good article, and it definitely has inspired me to check it out.


No. There's all sorts of software engineering craft that usually has no place on the job site; for instance, there's a huge amount of craft in learning pure-functional languages like Haskell, but nobody freaks out when their teams decide people can't randomly write Haskell code instead of the Python and Rust everyone else is writing. You're extrapolating because you're trying to defend your point, but the point you're trying to make is that I meant to communicate something in my own article that I not only never said, but also find repellant.


Sure, I'm extrapolating what I read as strong language in your article as a direct attack on making code precise and flexible rather than merely good enough to ship (mediocre code, first-pass, etc.). I imagine this might continue to be a battleground as adoption increases, especially at orgs with less engineering culture, in order to drive down costs and increase agentic throughput.

However, there is a bit of irony in that you're happy to point out my defensiveness as a potential flaw while you're getting hung up on nailing down the "10x" claim with precision. As an enjoyer of both articles I think this one is a fair retort to yours, so I think it's a little disappointing to get distracted by the specifics.

If only we could accurately measure 1x developer productivity, I imagine the truth might be a lot clearer.


Again, as you've acknowledged, there's a whole meme structure in the industry about what a "10x" programmer is. I did not claim that LLMs turn programmers into "10x programmers", because I do not believe in "10x" programmers to begin with. I'm not being defensive, I'm rebutting a (false) factual claim. It's very clearly false; you can just read the piece and see for yourself.


> I'm not being defensive, I'm rebutting a (false) factual claim.

You're rebutting a claim about your rant that, if it ever did exist, has been backed away from and disowned several times.

From [0]

> > Wait, now you're saying I set the 10x bar? No, I did not.

>

> I distinctly did not say that. I said your article was one of the ones that made me feel anxious. And it's one of the ones that spurred me to write this article.

and from [1]

> I'm trying to write a piece to comfort those that feel anxious about the wave of articles telling them they aren't good enough, that they are "standing still", as you say in your article. That they are crazy. Your article may not say the word 10x, but it makes something extremely clear: you believe some developers are sitting still and others are sipping rocket fuel. You believe AI skeptics are crazy. Thus, your article is extremely natural to cite when talking about the origin of this post.

[0] <https://news.ycombinator.com/item?id=44799049>

[1] <https://news.ycombinator.com/item?id=44804434>


Thanks for this. The guy really wants to make this about whether the 10x framing came from him, but I keep saying it didn't and he keeps ignoring me. The claims of his article are extremely plain and clear: AI-loving engineers are going "rocket fuel" fast, and AI-skeptical engineers are crazy (literally the title!) and are sitting still.

My post is about how those types of claims are unfounded and make people feel anxious unnecessarily. He just doesn't want to confront the fact that he wrote an article that directly says these words and that those words have an effect. He wants to use strong language without any consequences. So he's trying to nitpick the things I say and ignoring my requests for further information. It's kinda sad to watch, honestly.


Yeah, I don't know what's up with him. I'll feel very foolish if he was always this nuts. If something has happened (or crept up on him) somewhat recently to drive him berserk, then my heart goes out to him and to those who know and/or care about him.

Speaking of his rant, in it, he says this:

> [Google's] Gemini’s [programming skill] floor is higher than my own.

which, man... if that's not hyperbole, either he hasn't had much experience with the worst Gemini has to offer, or something really bad has happened to him. Gemini's floor is "entirely-gormless junior programmer". If a guy who's been consistently shipping production software since the mid-1990s isn't consistently better than that, something is dreadfully wrong.


A cursory scroll through X, LinkedIn, etc. will show you.

That seemed to me to be the author's point.

His article resonated with me. After 30 years of development and dealing with hype cycles, offshoring, no-code "platforms", endless framework churn (this next version will make everything better!), coder tribes ("if you don't do TypeScript, you're incompetent and should be fired"), endless bickering, improper tech adoption in imitation of the FAANGs (your startup with 0 users needs Kubernetes?), and a gazillion other annoyances we're all familiar with, this AI stuff might be the thing that makes me retire.

To be clear: it's not AI that I have a problem with. I'm actually deeply interested in it and actively researching it from the math up.

I'm also a big believer in it; I've implemented it in a few different projects with remarkable efficiency gains for my users, doing things like automatically extracting values from a PDF to create a structured record. It is a wonderful way to eliminate a whole class of drudgery-based tasks.

No, the thing that has me on the verge of throwing in the towel is the wholesale rush towards devaluing human expertise.

I'm not just talking about developers; I'm talking about healthcare providers, artists, lawyers, and so on.

Highly skilled professionals who have, in some cases, spent their entire lives developing mastery of their craft. They demand a compensation rate commensurate with that value, and in response society gleefully says "meh, I think you can be replaced with this gizmo for a fraction of the cost."

It's an insult. It would be one thing if it were true; my objection could safely be dismissed as the grumbling of a buggy-whip manufacturer. However, this is objectively, measurably wrong.

Most of the energy of the people pushing the AI hype goes towards obscuring this. When objective reality is presented to them in irrefutable ways, the response is inevitably: "but the next version will!"

It won't. Not with the current approach. The stochastic parrot will never learn to think.

That doesn't mean it's not useful. It demonstrably is: it's an incredibly valuable tool for entire classes of problems. But using it as a cheap replacement for skilled professionals is madness.

What will the world be left with when we drive those professionals out?

Do you want an AI deciding your healthcare? Do you want a codebase that you've invested your life savings into written by an AI that can't think?

How will we innovate? Who will be able to do fundamental research and create new things? Why would you bother going into the profession at all? So we're left with AIs training on increasingly polluted data, and with us relying on them to push us forward. It's a farce.

I've been seriously considering hanging up my spurs and munching popcorn through the inevitable chaos that will come if we don't course-correct.



