I think this is an interesting question, and I’d like to genuinely attempt an answer.
I essentially think this is because people prefer to optimize what they can measure.
It is hard to measure the quality of work. People have subjective opinions, the size of opportunities varies, etc., making quality hard to pin down. It is much easier to measure the time required for each iteration on a concept. Additionally, I think it is generally believed that a project with more iterations tends to have higher quality than a project with fewer, even setting aside the difficulty of measuring quality itself. Therefore, we set aside the discussion of quality (which is what we'd really like to improve) and instead optimize the thing we can actually measure (the time it takes to do something), with the strong implication that this will _also_ tend to increase quality.
Energy consumption and data protection used to matter; then AI came along and suddenly they don't anymore.
Among all the good things people create with AI, I see a lot more useless or even harmful ones.
Scams and fake news keep getting better and harder to distinguish from the real thing, to the point where reality doesn't matter anymore.
I think quality takes time and refinement, which is not something LLMs have solved very well today. They are merely okay at it, except for very specific, targeted refinements (Grammarly, SQL editors).
However, they are excellent at building from 0->1, and the video suggests that this is perfect for startups. In the context of a startup, faster is better.
Why faster and not better with AI?