In my experience, when philosophers say something about those things, they say it in natural language (typically English), and that is not precise enough for measurement. Any problem can be handwaved away with an ad-hoc modification.
The difference between philosophers 'doing' AI and what e.g. DeepMind do is that the latter are precise enough (indeed as precise as possible -- pace the Church-Turing thesis) about their research hypotheses that they can measure and confirm/refute their hypotheses, unlike the former.
Hence all progress in AI since Turing, Shannon, Zuse et al. has come from programmers and not philosophers.
Which philosopher has laid down the foundations of the field? One good starting point for AI is Leibniz' Calculemus!, and Leibniz was a mathematician/programmer and not a philosopher in the sense that the original article by Tim Maudlin seems to defend. Leibniz even built automata, formalised propositional logic, etc.!
I've had this discussion on HN several times before. As soon as you start pointing out the contributions of philosophy to various fields, people start denying that the people in question were philosophers. So you really can't win. By this logic, any philosopher who made a contribution to mathematics or science was ipso facto a scientist or mathematician and not a philosopher.
I completely agree with you. It's a difficult subject.
I have proposed the following two definitions:

1. Philosophers in the original article: best understood as academic philosophers.

2. Progress in AI/maths/hard science: comes from those who actually "do the maths/implementation/repeatable measurement" as opposed to using natural language only for discussing their ideas.
In my opinion the purpose of all science is truth, and truth (pace Socrates and the slave boy) must -- among other things -- be reproducible by others, ideally by every human. Technology for truth has improved over time, with mathematisation (and edge-case programming and execution on a computer) as the current state of the art in reproducibility. When Frege succeeded in formalising first-order logic, the sacred heart of rationality, informal methods became second-class. All substantial progress in subjects formerly restricted to informal methods has since come from formalisation and empirical experiment.
If you don't agree with my (1, 2) above, then that's fine; we are talking about (slightly) different things.
You seem to be assuming that philosophers are somehow restricted to using natural language only, but
* the formalization and regimentation of natural language has always been a fairly central concern in philosophy (that's where formal logic comes from);
* mathematics can be, and used to be, done in largely natural language.
What was a good definition of 'philosopher' then is not a good definition now. Meaning evolves!
I invite you to think historically, and in terms of the ongoing differentiation of science: the drive towards formalising/axiomatising mathematics, which started in earnest at the end of the 19th and beginning of the 20th century, has been accelerating. These days mathematics is partly verified in interactive theorem provers like Isabelle/HOL, Coq, Agda and Lean. A Fields medallist (Voevodsky) dedicated his post-Fields career to further mechanisation of mathematics. I predict that 100 years from now, mathematics that is not formalised in a mechanical tool will not be publishable in reputable venues.
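To give a flavour of what such mechanised verification looks like, here is a minimal sketch in Lean 4 (a toy theorem of my own choosing, not anything from this thread):

```lean
-- Commutativity of addition on the natural numbers,
-- proved by induction and machine-checked by Lean 4.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [Nat.add_zero, Nat.zero_add]        -- base case: m + 0 = 0 + m
  | succ k ih => rw [Nat.add_succ, Nat.succ_add, ih] -- step case uses the hypothesis ih
```

The point is not the theorem, which is trivial, but the reproducibility: anyone with the tool can re-run the check, and no natural-language judgement is involved.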
Philosophy is also much more formal than it was 1000 years ago (e.g. compare [1] to [2]). Indeed, the formalization of mathematics was driven by philosophers trying to put mathematical reasoning on an adequate foundation.
> The difference between philosophers 'doing' AI and what e.g. DeepMind do is that the latter are precise enough (indeed as precise as possible -- pace the Church-Turing thesis) about their research hypotheses that they can measure and confirm/refute their hypotheses, unlike the former.
They still remain in a framework of axioms we made. This gains nothing, and what's more, many scientists used to know this. Everything you measure, you measure according to a ruler you or someone else ultimately made. Yes, numbers are more precise, but more importantly, they're just numbers. And like what Douglas Adams said about money: it's very odd how much revolves around numbers, seeing how it's not the numbers that are unhappy, guilty, and so on. I never bought into that, and have always preferred the company that puts me in.
> And so in its actual procedure physics studies not these inscrutable qualities, but pointer-readings which we can observe. The readings, it is true, reflect the fluctuations of the world-qualities; but our exact knowledge is of the readings, not of the qualities. The former have as much resemblance to the latter as a telephone number has to a subscriber.
-- Arthur Stanley Eddington, The Domain of Physical Science (1925)
> The danger of computers becoming like humans is not as great as the danger of humans becoming like computers.
-- Konrad Zuse
> But the moral good of a moral act inheres in the act itself. That is why an act can itself ennoble or corrupt the person who performs it. The victory of instrumental reason in our time has brought about the virtual disappearance of this insight and thus perforce the delegitimation of the very idea of nobility.
-- Joseph Weizenbaum
How would you measure something like nobility? Do things you cannot measure exist? Can things you cannot prove mathematically be true? Can they be right? Should a person who doesn't love wisdom, or people for that matter, even be allowed to program machines that decide over the lives of others?
In game theory (such as the prisoner's dilemma) there is a concept of cooperation and betrayal. When an agent interacts with another agent, she has to decide whether it is in her best interest to cooperate or to exploit the other. Depending on the social environment and the existence of future interactions with the same agent, the choice can change. A noble human would be one who does not betray the larger good for her own limited gain. Thus nobility emerges from the cooperation/betrayal strategy in a multi-agent game, as the sketch below illustrates.
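As a toy illustration of that emergence claim, here is a minimal iterated prisoner's dilemma in Python; the payoff values and the two strategies are my own textbook-standard choices, not something quoted from the thread:

```python
# Minimal iterated prisoner's dilemma: does a cooperative ("noble")
# strategy hold its own against an exploitative one?

PAYOFFS = {  # (my move, their move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return history[-1] if history else "C"

def always_defect(history):
    """Exploit unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (199, 204): exploitation barely wins
```

Over repeated interactions the cooperative strategy loses almost nothing to the exploiter and prospers with its own kind; that is the usual gloss on how cooperation can be an emergent equilibrium rather than a moral primitive.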
Only if you think philosophy of mind is the same thing as computer science. I would consider neuroscience and psychology to be more informative for questions about the mind.
I am in awe of the empirical work in neuroscience. The last few years have seen a "Cambrian explosion" of new measurements: we can now measure live neurons at scale! I do think this work is also much more interesting than armchair thinking about the brain, consciousness, embodied cognition etc. However, as a working programmer/logician/foundations-of-maths person, I'm in a much better position to compare and contrast formal work in my field with philosophers' contributions than I am in neuroscience.
How do you see the influence of the Heideggerian critique of cognitivism, via Hubert Dreyfus, on the "Heideggerian AI" movement which preceded the shift away from classical symbolic AI towards connectionism and embodied learning?
Here's a passage from the introduction to his paper Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian:
> When I was teaching at MIT in the early sixties, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: “You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand how the mind works. We in the AI Lab have taken over and are succeeding where you philosophers have failed. We are now programming computers to exhibit human intelligence: to solve problems, to understand natural language, to perceive, and to learn.” In 1968 Marvin Minsky, head of the AI lab, proclaimed: “Within a generation we will have intelligent computers like HAL in the film, 2001.”
> [...] As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS had learned a lot, directly and indirectly from the philosophers. They had taken over Hobbes' claim that reasoning was calculating, Descartes' mental representations, Leibniz's idea of a "universal characteristic" – a set of primitives in which all knowledge could be expressed – Kant's claim that concepts were rules, Frege's formalization of such rules, and Russell's postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.
Dreyfus agrees with you, in a way, although where you criticize philosophers doing AI, he criticizes the philosophical prejudices of AI practitioners, who often hold beliefs derived from Cartesian views on the mind. He especially criticized the grand claims of early AI researchers, but I think the criticism is still easily applicable.
Here, for example, is a passage from his book Being-in-the-World:
> Having to program computers keeps one honest. There is no room for the armchair rationalist's speculations. Thus AI research has called the Cartesian cognitivist's bluff. It is easy to say that to account for the equipmental nexus one need simply add more and more function predicates and rules describing what is to be done in typical situations, but actual difficulties in AI—its inability to make progress with what is called the commonsense knowledge problem, on the one hand, and its inability to define the current situation, sometimes called the frame problem, on the other—suggest that Heidegger is right. It looks like one cannot build up the phenomenon of world out of meaningless elements.
I'm not familiar with the Heideggerian critique of cognitivism, or of Hubert Dreyfus' work, but some of your quotes sound agreeable. I am not convinced however that the frame problem and related issues are unsolvable. The way forward is to program, measure and improve.