One thing I’ve noticed, though, is that when actually coding (without AI; maybe a bit of tab auto-complete), I’m way faster working in my domain than I am when using AI tools.
Every time I use AI tools in my domain-expertise area, I find they end up slowing me down: introducing subtle bugs, and forcing me to provide an insane amount of context and detail (at which point it becomes way faster to do it myself).
Just code and chill, man. Having spent the last 6 months really trying everything (all these context-engineering strategies, agents, CLAUDE.md files in every directory, etc., etc.), it really is still more productive to just code yourself if you know what you’re doing.
The thing I love most, though, is having discussions with an LLM about an implementation, having it write some quick unit tests and performance tests for certain base cases, having it write a quick shell script, etc. For things like this it’s amazing, and it makes me really enjoy programming, since I save time and can focus on the actual fun stuff.
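To make that concrete, here’s a hypothetical example of the sort of throwaway script I’ll ask for (the test binary name is a placeholder, and it assumes GNU date for nanosecond timestamps):

    #!/usr/bin/env bash
    # Hypothetical quick performance check: time N runs of a test binary
    # and report the average. ./my_test_binary is a placeholder name.
    set -euo pipefail
    runs="${1:-10}"
    total_ms=0
    for _ in $(seq "$runs"); do
        start=$(date +%s%N)                 # nanoseconds (GNU date)
        ./my_test_binary >/dev/null
        end=$(date +%s%N)
        total_ms=$(( total_ms + (end - start) / 1000000 ))
    done
    echo "average over $runs runs: $(( total_ms / runs )) ms"

Exactly the kind of thing that’s boring to write by hand but takes an LLM seconds.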
When I'm doing the coding myself, I'm at least making steady progress and the process is predictable. With LLMs, it's a crapshoot. I have to get the AI to understand what I want and may have to try again multiple times, many times never succeeding, and I end up writing a lot of text anyway. And in between, I'll have to read a lot of code that probably ends up being thrown away or heavily modified.
This probably depends a lot on what kind of project one is working on, though.
But it's like you said: I like using LLMs for completing smaller parts, asking for specific kinds of help, or having conversations about solutions, but for anything larger it just feels like banging my head against a wall.
Don't use agent mode, only use ask mode. Once I did that, it worked as expected. I can still code but don't have to rely on the randomized nature of "vibe coding."
Devs are starting to realize that the sweet spot for AI support in coding is on a small scale, i.e. extended code completion. Generating huge chunks of code is often not reliable enough except for some niches (such as simple greenfield projects which profit from millions of examples online).
One AI workflow I rather like seems to have largely vanished from many modern tools: use a very dumb, simple model with syntax knowledge to autocomplete. It fills out what I'm about to type, and takes local variables and passes them to functions I wanna call.
It feels like just writing my own code but at 50% higher wpm. Especially if I can limit it to only suggest a single line; it prevents it from affecting my thought process or approach.
This is how the original GitHub Copilot worked until it switched to chat-based, more agentic behavior. I set it up locally with an old Llama on my laptop and it's plenty useful for bash and C, and amazing for Python. I ideally want a model trained only on code and not conversational at all, closer to the raw model trained to next-token predict on code.
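For anyone curious, a minimal sketch of that local setup, assuming llama.cpp's llama-server and a small code-tuned GGUF model (the model filename here is just an example):

    # Serve a small local model; the .gguf filename is illustrative.
    llama-server -m codellama-7b.Q4_K_M.gguf --port 8080 &

    # llama-server exposes an HTTP completion endpoint; an editor plugin
    # (or a quick script) can ask for a short completion of the code
    # under the cursor.
    curl -s http://localhost:8080/completion \
        -d '{"prompt": "def mean(xs):\n    return ", "n_predict": 16}'

Capping n_predict low is what keeps it to single-line suggestions.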
I think this style just doesn't chew enough tokens to make tech CEOs happy. It doesn't benefit from a massive model, and running it in the cloud almost costs more in networking than in compute.
Most editors and LSPs offer variable, method, keyword and a bunch of other completions that are 100% predictable and accurate, you don't need an LLM for this.
One of the core principles of my workflow (inspired by REPL development and some unix tools) is to start with a single file (for a function or the whole project). Then I refactor the code for better organization and improved reliability, especially as I'm handling more scenarios (and failure modes).
LLMs are not useful in this workflow, because they are too verbose. Their answers are generic and handle scenarios you don't even support yet. What's useful is good documentation (as in truthful) and the code if it's open.
This approach has worked really well in my career. It gives me KISS and YAGNI for free. And every line of code is purposeful and has a reason to be there.
I’ve been actively using the first-tier paid versions of:
- GPT
- Claude
- Gemini
Usually it’s via the CLI tools (Codex, Claude Code, Gemini CLI).
I have a bunch of scripts set up that write to the tmux pane that has these chats open, so I’ll visually highlight something in nvim and pipe that into whichever pane has one of these tools open and start a discussion.
If I want it to read the full file, I’ll just use the TUI’s search (they all use the @ prefix to search for files) and then discuss. If I want to pipe a few files, I’ll add the files I want to the nvim quickfix list, or literally pipe the files I want to a markdown file (with their full paths) and discuss. The glue scripts are roughly like the sketch below.
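A rough sketch of the kind of glue script I mean (the pane target and temp path are placeholders; nvim writes the visual selection out first with something like :'<,'>w! /tmp/sel.txt):

    # Placeholder pane id; find yours with `tmux list-panes -a`.
    CHAT_PANE="mysession:0.1"

    tmux load-buffer -b sel /tmp/sel.txt       # read the selection into a tmux buffer
    tmux paste-buffer -b sel -t "$CHAT_PANE"   # paste it into the chat TUI's pane
    tmux send-keys -t "$CHAT_PANE" Enter       # submit it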
So yes - the chat interface in these cli tools mostly. I’m one of those devs that don’t leave the terminal much lol
I also have a personal rule that I will actively try something for at least 4 months before making my decision about it (a programming language, new tools, or, in this case, AI-assisted coding).
I made the claim that in my area of expertise, I have found that most of the time it is faster to write something myself than to write out a really detailed md file / prompt. It becomes more tedious to express myself via natural language than with code when I want something very specific done.
In these types of cases, writing the code myself allows me to express the thing I want faster. Also, I like to code with AI auto-complete, but while it can be useful, I sometimes disable it because it’s distracting and consistently incorrect with its predictions.
claim that I claimed you claimed: "for any coder to claim AI tools slow them down"
---
claim you made: "One thing I’ve noticed, though, is that when actually coding (without AI; maybe a bit of tab auto-complete), I’m way faster working in my domain than I am when using AI tools."
---
You did make that claim but I'm aware my approach would bring the defensiveness out of anyone :P
This is what you said, and I didn’t make that claim. I specifically said “in my domain”, meaning a code base I know well and own, in a language, framework, and patterns that I’ve worked with for years.
For certain things, yes, it’s faster to do it myself than to write a long prompt with context (or a predefined one), because it’s faster to express what I want with code than with natural language.
Or the complete opposite: very skilled people with a lot of experience in a specific project. I am like that too at my current job. I've REALLY tried to use AI but it has always slowed me down in the end. AI is only speeding me up in very specific and isolated things, tangential to the main product development.
For seasoned maintainers of open source repos, there is explicit evidence it does slow them down, even when they think it sped them up: https://arxiv.org/abs/2507.09089
Cue: "the tools are so much better now", "the people in the study didn't know how to use Cursor", etc. Regardless of whether one takes issue with this study, there are enough others of its kind to suggest skepticism about how much these tools really create speed benefits when employed at scale. The maintenance cliff is always nigh...
There are definitely ways in which LLMs, and the agentic coding tools scaffolded on top, help with aspects of development. But to say that anyone who claims otherwise is either being disingenuous or doesn't know what they are doing is not an informed take.
I have seen this study cited enough times to keep a copy-paste response for it. And no, there are not a bunch of other studies with any sort of conclusive evidence to support this claim either. I have looked, and would welcome any with good analysis.
"""
1. The sample is extremely narrow (16 elite open-source maintainers doing ~2-hour issues on large repos they know intimately), so any measured slowdown applies only to that sliver of work, not “developers” or “software engineering” in general.
2. The treatment is really “Cursor + Claude, often in a different IDE than participants normally use, after light onboarding,” so the result could reflect tool/UX friction or unfamiliar workflows rather than an inherent slowdown from AI assistance itself.
3. The only primary outcome is self-reported time-to-completion; there is no direct measurement of code quality, scope of work, or long-term value, so a longer duration could just mean “more or better work done,” not lower productivity.
4. With 246 issues from 16 people and substantial modeling choices (e.g., regression adjustment using forecasted times, clustering decisions), the reported ~19% slowdown is statistically fragile and heavily model-dependent, making it weak evidence for a robust, general slowdown effect.
"""
Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially.
I think we agree on the limitations of the study; I literally began my comment with "for seasoned maintainers of open source repos". I'm not sure if in your first statement ("there are no studies to back up this claim.. I welcome good analysis") you are referring to claims that support an AI speedup. If so, we agree that good analysis is needed. But if you think there already is good data:
Can you link any? All I've seen is stuff like Anthropic claiming 90% of internal code is written by Claude, and I think we'd agree that we need an unbiased source and better metrics than "code written". My concern is that whenever AI usage by professional developers is studied empirically, as far as I have seen, the results never corroborate your claim: "Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially."
I'm open to it being possible, but as someone who was a developer before March 2023 and is surrounded by many professionals who were too, our results are more lukewarm than what I see boosters claim. It speeds up certain types of work, but not everything, and not in a manner that adds up to all work being "sped up substantially".
I need to see data, and all the data I've seen goes the other way. Did you see the recent Substack post looking at public GitHub data, showing no increase in the trend of PRs all the way up to August 2025? All the hard data I've seen is much, much more middling than what people who have something to sell AI-wise are claiming.