You still cannot "train a deep learning model to simply read a product description and generate the appropriate codebase", unless the product is very, very, very simple or trivial.
Pretty sure the text there didn't have an exception added in.
And the complexity threshold at which writing a description and getting working code succeeds rises each year, with no real reason to believe it will plateau, particularly as models hook into linters and static compilation for multiple rounds of correction over the next eighteen months.
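The "multiple rounds of correction" part is not exotic machinery, it's a loop: generate code, run the tooling, feed the diagnostics back, stop when the checks pass or a round limit is hit. A minimal sketch of that loop in Python, where `generate_code` is a hypothetical stand-in for whatever model call you use and `py_compile` stands in for the linter/static-compilation step:

    import pathlib
    import subprocess
    import sys
    import tempfile

    def generate_code(description: str, feedback: str = "") -> str:
        """Hypothetical model call: returns source for the description,
        revised in light of any previous tool diagnostics."""
        raise NotImplementedError  # plug in whichever model API you actually use

    def check(source: str) -> str:
        """Compile-check the source and return diagnostics ('' if clean)."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        result = subprocess.run(
            [sys.executable, "-m", "py_compile", path],
            capture_output=True, text=True,
        )
        pathlib.Path(path).unlink(missing_ok=True)
        return result.stderr

    def generate_with_corrections(description: str, max_rounds: int = 3) -> str:
        source, feedback = "", ""
        for _ in range(max_rounds):
            source = generate_code(description, feedback)
            feedback = check(source)
            if not feedback:   # tooling is satisfied, stop iterating
                break
        return source          # best effort after max_rounds

Swap `py_compile` for a real linter or type checker and the shape of the loop doesn't change; the open question is only how far that loop pushes the complexity threshold.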
"It won't ever do this thing" and "ok, it kind of does it in simple cases and may do it in increasingly complex cases as time goes on" are two very different statements with a gulf between them.
> "It won't ever do this thing" and "ok, it kind of does it in simple cases and may do it in increasingly complex cases as time goes on" are two very different statements with a gulf between them.
Until it's perfect, the amount of trust you can place in the latter makes it equivalent to the former. There's no real gulf: you can't count on what the code "may do".
"Ok, it can win a chess game, but if we don't actually keep track of the moves it is making we'll have no idea if it can win a chess game, and therefore it's the same thing as if it can't win a chess game."
You can always read code you didn't write, by the way.