Btw, Jürgen Schmidhuber, who proposed Gödel machines, is also the inventor of LSTMs, an important building block for many neural network designs. Amazing guy.
Could humans be considered to be a Gödel machine? If not a human, could the entire humanity considered as one entity be considered to be a Gödel machine?
On a different plane, could evolution qualify as a Gödel machine as well?
I think that's stretching the analogy too much. Basically, we might say cows are spherical objects if we want to solve some problems in classical mechanics, but you won't gain much insight about how cows actually work by assuming they are spherical objects. Same thing here, IMHO.
Not as much as you might think. Didn't we invent math and progressively improve it, and then invent computers to automate the math we suck at, precisely to extend our problem-solving powers? There are a lot of similarities.
We have not yet rewritten any of our own code (i.e., DNA), and if we do, we still have to prove that the "re-written" person is more capable of re-writing subsequent people in an even more efficient way.
> We have not yet rewritten any of our own code (i.e., DNA)
Interpreting "code" as "DNA" is too literal. It could apply to thinking processes and mental models, cultural and social norms, and so on.
A universal Turing machine (UTM) can mimic any other Turing machine via interpretation, including a Godel machine. So an interpreted Godel machine isn't changing the code of the UTM, the analogue of our DNA, but it's changing the interpreted code, the analogue of higher level processes like those I list above.
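As a toy illustration of that split (everything here is made up for the analogy, not from any real system):

```python
# Toy sketch: a fixed interpreter (the analogue of DNA) running interpreted
# "code" that gets swapped out, while the interpreter itself never changes.

def interpret(program, state):
    """Fixed interpreter: applies each rule in the program to the state."""
    for rule in program:
        state = rule(state)
    return state

# Initial interpreted "code": a crude strategy (a mental model, a norm, ...).
program = [lambda x: x + x]

# The interpreted layer is replaced with a better rule.
# The interpret() function above -- the "DNA" -- is untouched.
program = [lambda x: x * 10]

print(interpret(program, 3))  # the new interpreted code runs: 30
```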
> could evolution qualify as a Gödel machine as well?
I would say so. See also: Gaia Hypothesis [0], and the Big History project [1]. Arguably, biological evolution is but one layer in a self-improving information-energy complexity stack, which has now achieved sufficient stability to provide a foundation for ecosystems of technological and memetic evolution.
If you reduce the definition to "something that gets better", then yes. But for a learning machine to really qualify, it has to be getting better by proving that the next state is better than the current state before adopting it.
Not only do human beings not do this, but we actually exhibit inconsistency, change our goals unpredictably, and can be deceived by ourselves or others about our goals and motives.
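For concreteness, a toy sketch of that criterion in Python. The "prover" here is just a stub comparing measured utilities; a real Gödel machine would need a formal proof over its axiomatized self-model, which is the hard part this sketch skips:

```python
# Schematic of the Goedel-machine criterion: adopt a self-rewrite only after
# "proving" it improves utility. The prover below is a stand-in, not a real
# proof searcher; utility() and the policies are invented for illustration.

def utility(policy):
    # Toy utility: how close the policy's output is to a target of 100.
    return -abs(policy(10) - 100)

def proves_improvement(current, candidate):
    # Stub prover: a real Goedel machine derives this as a theorem about
    # its own code, not as an empirical comparison.
    return utility(candidate) > utility(current)

policy = lambda x: x + 1          # current self-code (toy)
candidate = lambda x: x * 10      # proposed rewrite

if proves_improvement(policy, candidate):
    policy = candidate            # rewrite adopted only after the "proof"

print(policy(10))  # 100
```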
Yeah, we're more like reinforcement learning agents than Godel machines. If something apparently works, we do it, even when it is not supported by reason.
What matters is to learn to act in such a way as to continue to exist, not to learn the truth. Of course we have to reconcile with reality when we stray too far. Humanity flourished long before we invented the scientific method, and even today we're not applying it universally.
> Could humans be considered to be a Gödel machine?
At least parts of our brain, for sure. It is clear that we can improve some of our own learning algorithms. Everything that is innate, no (or not yet). For example, a liver cannot optimize itself in novel ways during its lifetime.
> If not a human, could the entire humanity considered as one entity be considered to be a Gödel machine?
Good question. I'm not sure humanity has a utility function, so even though it clearly can self-modify, the question might be nonsensical.
> On a different plane could evolution qualify as a Gödel machine as well?
I think evolution is recursively a Gödel machine. It learns how to create things that become better at learning how to create things that... Also, certain innovations fundamentally change the game, e.g. the emergence of sexual reproduction.
On a human level, we experiment and modify our biology constantly, e.g. vaccines and exercise.
On a humanity level, we're researching gene editing and gene therapy, and historically had appalling theories that led to genocide, which, in its roots, was about changing humanity's fundamental code.
Proving out gene editing is currently slow and expensive, so it takes a long time to gain enough confidence in the result to make such a change.
Has this been put into practice in any AI to date?
Does TensorFlow count as a Gödel Machine? ML is basically the machine deciding its own weights right? Is that the same thing? Or have we not crossed into "writing its own code" territory?
And if we haven't, why not? This seems like the kind of thing we already have the technology for.
In typical ML you just learn the weights.
But the architecture stays fixed, and since we don't have an architecture that can realistically solve all tasks well enough, it shouldn't count as a Gödel machine.
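A minimal picture of "just learning the weights", with a made-up one-parameter model fit by gradient descent; the architecture (a single multiply) never changes, only the weight does:

```python
# Fixed architecture y = w * x; gradient descent adjusts only w.
# Data is invented for illustration: it follows y = 2x exactly.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0     # the only thing that learns
lr = 0.05
for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```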
Meta-learning, where you also learn hyper-parameters (potentially including the architecture), is closer to that; see for example:
Hyperparameter optimization (including architectures) is not really meta-learning. Meta-learning, also known as "learning to learn", is more like MAML[0], RL2[1], L2RL[2], etc.
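To make the distinction concrete, here's a toy first-order MAML-style sketch. It's a big simplification (full MAML differentiates through the inner update), and the tasks and numbers are invented for illustration:

```python
# First-order MAML-style sketch on scalar toy tasks L_i(theta) = (theta - t_i)**2.
# The meta-parameter theta is trained so that ONE inner gradient step adapts
# it well to each task -- "learning to learn" in miniature.

tasks = [1.0, 3.0, 5.0]   # per-task optima (invented)
theta = 0.0               # meta-parameter
alpha, beta = 0.1, 0.05   # inner / outer learning rates

for _ in range(500):
    meta_grad = 0.0
    for t in tasks:
        # Inner loop: one gradient step of task adaptation.
        adapted = theta - alpha * 2 * (theta - t)
        # Outer gradient, first-order: treat `adapted` as detached.
        meta_grad += 2 * (adapted - t)
    theta -= beta * meta_grad / len(tasks)

print(round(theta, 2))  # settles at 3.0, the initialization that adapts best
```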
> Has this been put into practice in any AI to date?
Meta-learning is an active subfield of research in machine learning. Gödel machines may overlap with program synthesis as well which is another subfield.
> Does TensorFlow count as a Gödel Machine?
No. TF itself is static and not self-improving. Most standard machine learning models are also not typically self-improving, though you can implement meta-learning with TF.
A big challenge with meta-learning is scale. It's relatively easy to solve multi-armed bandit problems but complex tasks are still challenging.
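For example, the "easy" end of that spectrum: an epsilon-greedy learner on a toy 3-armed bandit (payout probabilities invented for illustration):

```python
# Epsilon-greedy on a 3-armed Bernoulli bandit: explore with probability
# epsilon, otherwise pull the arm with the best running reward estimate.
import random

random.seed(0)
means = [0.2, 0.5, 0.8]          # true payout probabilities (hidden from learner)
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1

for step in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)              # explore
    else:
        arm = estimates.index(max(estimates))  # exploit
    reward = 1.0 if random.random() < means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(counts.index(max(counts)))  # index of the most-pulled arm
```

After a few thousand steps the best arm dominates; scaling the same idea to rich, structured tasks is where it gets hard.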
Some attempts at evolutionary AI / genetic programming might technically count given that they can achieve program synthesis almost ex nihilo, but the computational requirements to do anything interesting are monstrous. So far neural networks that can learn weights but whose architectures are tuned for specific problems have far eclipsed the performance of things like GP for real world applications.
Coding an approximation to a Godel machine naively probably isn't that hard, relatively speaking - it would essentially be an extension to a symbolic computation system. What would be hard is running such a thing with any efficiency.
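For instance, a naive flavor of that search: enumerate programs over a made-up toy postfix language until one matches a spec. Even this tiny space grows exponentially in program length, which hints at why a literal implementation is hopeless:

```python
# Brute-force program search over a toy postfix expression language.
# The language, tokens, and spec are all invented for illustration.
from itertools import product

TOKENS = ["x", "1", "2", "+", "*"]

def run(prog, x):
    """Evaluate a postfix token sequence; None if it is malformed."""
    stack = []
    for tok in prog:
        if tok == "x":
            stack.append(x)
        elif tok in ("+", "*"):
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
        else:
            stack.append(int(tok))
    return stack[0] if len(stack) == 1 else None

def search(spec, max_len=5):
    # O(len(TOKENS) ** max_len) candidates -- hopeless at any real scale.
    for n in range(1, max_len + 1):
        for prog in product(TOKENS, repeat=n):
            if all(run(prog, x) == y for x, y in spec):
                return prog
    return None

# Find a program agreeing with y = 2x + 1 on three examples.
prog = search([(0, 1), (1, 3), (2, 5)])
print(prog)
```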
Schmidhuber has a great talk where he sets up the GM then drops the boom on practical implementations (I forget the details, but the exponent is ~3000, an impossible lifetime-of-the-Universe search space) and the audience laughs, but then he also points out that the self-improvement search may well find massive speed-ups, and so it is perhaps not as hopeless as all that.
Didn't Gödel explicitly prove that there are some statements that cannot be proven this way? Is it just assumed that these will not be relevant for all the hard problems that humans cannot solve, or am I missing something here?
1. rarely does that come up in practical problems
2. for many reasons (and incompleteness is one), the machine may be unable to prove the next version is better, and thus must discard it
https://en.wikipedia.org/wiki/J%C3%BCrgen_Schmidhuber