I do not think it could. What I see GPT doing here is generating a lot of plausible boilerplate. We already have this via things like code snippets. I find them to be useless, like autocorrect on my phone: it gets in the way of my reasoning and does not really enhance it in any way. Sometimes I make mistakes typing, but I’d rather they be my honest mistakes than the computer censoring/rewriting my thoughts.
Good engineering requires good reasoning skills, and GPT has exactly zero reasoning. It cannot do what humans do, and it cannot even do what a calculator can do. I think it is neat and fun, but that is all it is: a novelty.
I’ve used auto-routers for PCB layout, and they do a 90% job; cleaning up that last 10% takes as much work as routing the board right by hand from the start. There may be a future for operator-in-the-loop guided generative AI models, but I don’t see much effort devoted to building real systems like that. Watson seemed to have this potential and failed even after a brilliant display of ingenuity on Jeopardy. I see these models headed the same way.
I don’t think anyone knows. I gave it the famous syllogism:
> All men are mortal
> Socrates is a man
> Is Socrates mortal?
To which it gave a very detailed and correct reply. I then tried:
> All cats are white
> Sam is a cat
> Is Sam white?
To which it gave an almost identically worded response that was nonsensical.
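For what it’s worth, that probe is easy to systematize: keep the logical form fixed and vary only the fillers, so a correct answer on the famous instance and nonsense on the novel one points at pattern-matching rather than reasoning. A minimal sketch of what I mean (the template and the second set of fillers are just my illustration, not any standard benchmark):

```python
# Generate matched syllogism prompts that share one logical form and
# differ only in their fillers. Feed each to the model under test; a
# model that actually reasons over the form should answer both the same way.

TEMPLATE = "All {plural} are {adjective}\n{name} is a {singular}\nIs {name} {adjective}?"

probes = [
    # The classic instance, almost certainly present verbatim in training data:
    dict(plural="men", singular="man", adjective="mortal", name="Socrates"),
    # Same form, novel fillers:
    dict(plural="cats", singular="cat", adjective="white", name="Sam"),
]

for fillers in probes:
    print(TEMPLATE.format(**fillers))
    print("---")
```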
I personally do not think it is a question of model size. The things it does that appear to reflect the output of human cognition are just an echo or reflection of it. It is not a generalizable solution: there will always be some novel question it was not trained against, and on that question it will fall down. If you make those failures vanishingly rare, then maybe you will have effectively compressed all human knowledge into the model and have a good-enough solution; that’s one way of looking at a neural network. But the problem is fundamentally different from chess.
I think this, composed with more specialized models for things like identifying and solving math and logic problems, could make something that truly delivers what I think people are seeing as the potential here. Something that encodes the structure behind these concepts, is extensible, and has a powerful generative function would be really neat.
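To be concrete about the shape I have in mind: a dispatcher that classifies the question and routes anything with formal structure to a component that can actually compute the answer, falling back to the generative model for everything else. A toy sketch only; the heuristic classifier and both components here are stand-ins of my own invention:

```python
import re

def exact_solver(question: str) -> str:
    """Stand-in for a specialized math component: evaluates arithmetic exactly."""
    expr = re.sub(r"[^0-9+\-*/(). ]", "", question)
    # eval of a character-whitelisted arithmetic string; a real system
    # would use a proper parser or a CAS instead.
    return str(eval(expr, {"__builtins__": {}}))

def generative_model(question: str) -> str:
    """Stand-in for the large language model that handles open-ended prose."""
    return f"[generated answer to: {question!r}]"

def dispatch(question: str) -> str:
    # A real router would itself be a learned classifier; this toy version
    # just checks whether the question is pure arithmetic.
    if re.fullmatch(r"[0-9+\-*/(). ]+", question.strip()):
        return exact_solver(question)
    return generative_model(question)

print(dispatch("(12 + 7) * 3"))        # routed to the exact solver -> 57
print(dispatch("Is Socrates mortal?")) # routed to the generative stand-in
```

The point of the composition is that the generative model never has to be trusted with the arithmetic or the logic; it only has to recognize what kind of question it is looking at.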