No, an LLM needs an excellent communicator. It can statistically spit out knowledge, but someone has to embed that knowledge in the first place. Given how vague and contradictory most requirements are, and how complete and excruciatingly detailed prompts must be, LLMs will be useful for generating prototypes faster to check assumptions about the lost knowledge, nothing more and nothing less.
The real trouble with LLMs is that they emulate knowledge so well. People assume they can depend on them to know things, but they are not reliable at all. A lot of traps are being laid in code by people who trust the output or behavior of LLMs.
Yes, this is what I'm saying: the development process will be about writing and talking into a new tool, and then using that recorded information to generate summaries, mocks, prototypes, and code. People would definitely be involved. What I'm pointing out is that LLMs are natural tools for summarizing and synthesizing domain expertise, which can be naturally applied to the product development process. If an LLM-based tool can be a great personal assistant, it can also be a great knowledge repository for an organization.