
Hubert Dreyfus in his 1986 book "Mind Over Machine":

> The digital computer, when programmed to operate by taking a problem apart into features and combining them step by step according to inference rules, operates as a machine—a logic machine. However, the computer is so versatile it can also be used to model a holistic system. Indeed, recently, as the problems confronting the AI approach remained unsolved for more than a decade, a new generation of researchers have actually begun using computers to simulate such systems. It is too early to say whether the first steps in the direction of holistic similarity recognition will eventually lead to devices that can discern the similarity between whole real-world situations. We discuss the development here for the simple reason that it is the only alternative to the information processing approach that computer science has devised. [...] Remarkably, such devices are the subject of active research. When used to realize a distributed associative memory, computers are no longer functioning as symbol-manipulating systems in which the symbols represent features of the world and computations express relationship among the symbols as in conventional AI. Instead, the computer simulates a holistic system.

Further down, this is quite a good summary of Dreyfus's general argument:

> Thanks to AI research, Plato's and Kant's speculation that the mind works according to rules has finally found its empirical test in the attempt to use logic machines to produce humanlike understanding. And, after two thousand years of refinement, the traditional view of mind has shown itself to be inadequate. Indeed, conventional AI as information processing looks like a perfect example of what Imre Lakatos would call a degenerating research program. [...] Current AI is based on the idea, prominent in philosophy since Descartes, that all understanding consists in forming and using appropriate representations. Given the nature of inference engines, AI's representations must be formal ones, and so commonsense understanding must be understood as some vast body of precise propositions, beliefs, rules, facts, and procedures. Thus formulated, the problem has so far resisted solution. We predict it will continue to do so.




I think that Dreyfus has unfortunately set back the cultural understanding of computers by decades, by confidently declaring certain tasks impossible for computers to do, because minds have "insight" or "tacit knowledge" or are "holistic", each of which functionally lets a mind be a ghost in the machine.

A lot of the rhetorical momentum comes from pointing at the progress of technology at various stages in human history, especially the fits and starts of AI and language research in the mid 20th century, and remarking on how little progress had been made.

And the terms used to describe what computers are were also vague.

>Given the nature of inference engines, AI's representations must be formal ones

When an AI trained on images of dog faces "dreams" over an image, progressively twisting flowers and purses into dog faces and noses, is the connection it makes between patterns and dog faces "formal"? Are the images generated by ThisPersonDoesNotExist informal? The way computers now work on data deals with abstraction and fuzziness in a way that I think Dreyfus did not imagine to be possible. I think Dreyfus wanted to say that the higher-level methods we now employ to generate images, produce human-like language, transpose art styles, and create nearly photorealistic faces rest on a foundation of principles that are new and distinct from the characteristic principles he understood to be central to computing. But all of our new progress is implemented on a foundation of silicon and bits, too, which simulate neural networks, meaning those are just as computational as the desktop calculator app. I think Dreyfus just couldn't imagine that 'computing' could include all this extra stuff, and, to take a term from Dennett, he mistook his failure of imagination for an insight into necessity.


He is talking about AI as it was conceived at the time of writing. The quote I posted has him explicitly imagining what you say he could not imagine. His critique was in fact INFLUENTIAL on the currently successful approaches.


>The quote I posted has him explicitly imagining

It's him imagining things he thought couldn't be done on computers under one definition, based on vaguely defined terms. Dreyfus was open to another, more expansive definition that included things like 'holistic' and 'tacit' knowledge, which he believed were outside the scope of what computers of a certain sort could do. That distinction turns out to be moot, because all the 'new' stuff (neural networks, GANs, GPT-3, etc.), while in some sense new and innovative, ultimately runs on the same old foundation of logic gates, zeros and ones, and really is computable in the classical Turing-machine sense, which is exactly what he had spent his whole career denying. It was a limit of Dreyfus's imagination that he didn't understand that computation, even the kind he criticized, could model the higher-order conceptual structures he thought were inaccessible to classical computers. He's not wrong to think that something called 'tacit' knowledge would be important and would call for specialized approaches and new concepts. Where he went wrong was in veering to the insane, overconfident extreme of denying that these were computable.
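To make the "just as computational" point concrete: a forward pass through a neural network is nothing but ordinary arithmetic, expressible as a short classical program. Here is a minimal sketch in pure Python, with made-up illustrative weights (not a trained model):

```python
# A "holistic" pattern recognizer reduced to classical computation:
# a two-layer neural network forward pass in pure Python, no libraries.
# The weights below are arbitrary illustrative values, not a trained model.

def relu(x):
    # Nonlinearity: just a comparison and a branch.
    return x if x > 0.0 else 0.0

def forward(inputs, w_hidden, w_out):
    # Hidden layer: weighted sums followed by the nonlinearity.
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    # Output layer: one more weighted sum.
    return sum(w * h for w, h in zip(w_out, hidden))

# Every step here is addition, multiplication, and comparison --
# exactly the operations a Turing machine (or a desktop calculator
# app) can carry out.
w_hidden = [[0.5, -0.2], [0.1, 0.8]]
w_out = [1.0, -0.5]
print(forward([1.0, 2.0], w_hidden, w_out))
```

Everything a GAN or a large language model does at inference time is a scaled-up version of this loop, which is why the "holistic" systems Dreyfus contrasted with logic machines are still classically computable.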








