It would be a cop-out. Instead of actually tackling AI's problem of common sense, you claim that maybe layers of logistic regression and matrix factorization are all there is, and that we are their equal, just a few layers up in abstraction and evolution. Does one really stem and count tokens to decide whether a movie review has negative sentiment? Or does one empathize with its writer and build a complete model of them in one's head?
The horse would be the AI researcher claiming reasoning and understanding from an activation vector trained on word co-occurrence in Wikipedia, and the farmer giving cues would be the overheated community and industry, mistaking impressive dataset performance for a solution to a problem they're starting to forget.
I remember using some kind of software for math problem sets in high school. Some of the kids would just look at the equation, get it wrong, see the answer, and try to figure out the answer to a new version of the problem with freshly generated coefficients. That sounds very much like a Clever Hans solution done by a human. I think what AI is lacking is the mechanism that causes us to reject such a solution, and that's much more complex than just finding the answer, and I'm not sure it's related to the ability to find solutions in the first place.
For example, a problem might be solving for the roots of a polynomial, and on each try it would randomly generate a new polynomial with new coefficients.
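As a rough illustration, here's a minimal Python sketch of how that kind of homework software might work (the function names and the choice of a quadratic are my own assumptions, not a description of the actual product). The coefficients change on every attempt, but the mechanical relationship between coefficients and answer never does, which is exactly the pattern the gaming students were hunting for.

    import random

    def make_problem():
        # Hypothetical generator: pick random integer roots, then expand
        # (x - r1)(x - r2) = x^2 - (r1 + r2)x + r1*r2 into coefficients.
        # Each retry calls this again, so only the numbers change.
        r1, r2 = random.randint(-9, 9), random.randint(-9, 9)
        a, b, c = 1, -(r1 + r2), r1 * r2
        return (a, b, c), sorted({r1, r2})

    def check(answer, roots):
        # Award the points if the submitted roots match.
        return sorted(set(answer)) == roots

    coeffs, roots = make_problem()
    print(f"Solve: x^2 + ({coeffs[1]})x + ({coeffs[2]}) = 0")
    # A student gaming the system guesses, reads the revealed answer,
    # and looks for a surface rule like "the roots are factors of c that
    # sum to -b" -- mapping coefficients to answers without ever
    # understanding why the quadratic formula works.
    print(check(roots, roots))  # True: correct submission earns points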
Ah, so what you're saying is that they would give up on the problem once they saw the answer, and then move on to a new one, rather than work through the first one until they understood the answer, right?
Sort of. The software gave you several attempts to earn points on each problem. So they didn't really give up; they never intended to solve the problem in the first place, only to see whether there was some obvious relationship between the randomly generated coefficients and the answer that would get them the points.
Ah, I see, sort of gaming the test platform (or trying to) rather than actually understanding the math. So a case of "you get what you measure" and also an example of what happens when you force kids to learn something they have no interest in, perhaps?