False. To maintain high quality I often rejected the first result and regenerated the code with a more precise prompt instead of just taking what I was given. I also regularly used "refactor prompts" to ask Kiro to change the code to match my high expectations.
Using AI does not mean you get to be careless about quality, nor is AI an excuse to turn off your brain and just hit accept on the first result.
There is still a skill and craft to coding with AI; it's just that you will find yourself discarding, regenerating, and rebuilding things much faster than you did before.
In this project I deliberately avoided manual typing as much as possible and instead found ways to prompt Kiro to get the results I wanted, which is why 95% of it was written by Kiro rather than by hand. In the process, I got better and faster at prompting, and reached a much higher success rate at approving the initial pass. Early on I often regenerated a segment of code with more precise instructions three or four times, but that was also early in Kiro's development, with a dumber model, and before I had built up much prompting skill.
If there were such a thing, you would just check your prompts into your repo and CI would build your final application from the prompts and deploy it.
So it follows that if you are accepting 95% of whatever output is given to you, you are either doing something really mundane and straightforward, or you don't care much about the shape of the output (not to be confused with its quality).
And in this case you were also the Product Owner, with the final say about what's acceptable.
I am not doubting the 95% acceptance rate at all. I've pure vibe-coded many toy projects myself.
> in line with what they would have written,
The point I am making is that they didn't know what they would have written. They had a rough overall idea, but the details were being accepted on the fly. They were trying out a bunch of things and seeing what looked good based on a rough idea of what the output should be.
In a real-world project you are not both the product owner and the coder.
To be clear, I did not have a 95% acceptance rate. I'm saying that in the final published repo, 95% of the lines of code were written by AI, not by me. I discarded and refactored code along the way many times, but I did that using the AI as well. My end goal was to keep my hands off the code as much as possible and get better at describing exactly what I wanted from the AI.
> if you are accepting 95% of what random output is being given to you
I am not, and I don't expect to be able to do that for many years. The models aren't that good yet.
I would estimate that I accepted perhaps 25% of the initial code output from the LLM. The other 75% I wasn't satisfied with, so I either unapplied it and retried with a different prompt, or refactored and mutated it using a follow-up prompt.
In the final published version of the project, 95% of the committed lines of code were written by AI; however, there was probably 4x as much AI-generated code that got discarded along the way. Often the first take wasn't good enough, so I modified or refactored it, also using AI. Over the course of the project I got better at writing precise prompts that generated good code the first time, but I still rarely accepted the first draft back from Kiro without making follow-up prompts.
A lot of people have the misguided idea that using AI means you just accept the first draft it returns. That's not the case. You absolutely should be reading the code and iterating on it with follow-up prompts.
I think it's because you didn't have hard expectations for the output. You were OK with anything that kind of looked OK.