I'm not wondering how the system will determine what's most helpful, but rather what's even "correct". A model learns what's "correct" from Stack Overflow by finding accepted or highly-voted answers. But when that content no longer exists (in this case because Stack Overflow is hypothetically gone), what would even exist to generate these discussions for use as training data?
GitHub, per the sibling comment, is a good example: projects have issue trackers (tied to the individual repository, so discussion sits alongside a working implementation of the idea), and that's where such discussions happen.
When Google search became important, people structured their information so that Google could best index it. When AIs become important in the same way, people will start structuring their information so that a particular class of AI can best index it. If that involves API documentation, perhaps a standard format will emerge that AIs understand best.
The topics for which AI replaces forums won't need discussion: people won't be confused about them, because the coding AI already knows the details. Soon that will cover most syntax questions, then simple to mid-level algorithms, and so on.