>And how can you know that what you call rationality is also not an artifact of neurology? In fact, Gödel proved that if you don't doubt your own rationality, you are in fact being irrational.
That is not at all what Goedel's theorems actually say.
>We lived in a philosophy-starved culture. Too much social media, not enough deep thought.
No, we live in a culture that loves to engage in cheap, shoddy philosophizing by generalizing incorrectly from facts.
> That is not at all what Goedel's theorems actually say.
An informal description of his second incompleteness theorem (from the "Stanford Encyclopedia of Philosophy"):
"For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself."
One example of a sufficient "certain amount of arithmetic" is a system expressive enough to talk about the integers with addition and multiplication. Such a system can no longer prove its own consistency.
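In symbols, the theorem is often compressed into one line (a standard rendering, not a quote from the SEP; here F is a consistent, recursively axiomatized system interpreting enough arithmetic):

```latex
% Second incompleteness theorem, symbolic form:
F \nvdash \mathrm{Con}(F)
% where Con(F) is the arithmetized consistency statement,
% i.e., the sentence asserting "no proof in F derives 0 = 1".
```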
If you think that this does not apply to human efforts at rationality, I would like you to explain why.
Debate becomes cheap and shoddy not when someone is wrong (I could be), but when you resort to name-calling instead of pointing out where you think the mistakes are.
>If you think that this does not apply to human efforts at rationality, I would like you to explain why.
Human beings aren't proof systems. We don't operate under conditions of certainty via deductive reasoning. We're inductive (or rather, abductive) reasoners from the get-go.
Sure, and abductive reasoning can be formalized in certain modal logics with Kripke semantics.
What Gödel tells us is that, as long as you have a sufficiently powerful formal system, you cannot prove the consistency of the system itself. Modal logics are no exception.
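For what it's worth, the modal logic that captures formal provability is GL (Gödel-Löb logic), and the second incompleteness theorem falls out of its characteristic axiom in one step (a standard fact about GL, sketched here):

```latex
% Löb's axiom, the characteristic axiom of provability logic GL:
\Box(\Box p \to p) \to \Box p
% Taking p := \bot gives  \Box\neg\Box\bot \to \Box\bot,
% whose contrapositive  \neg\Box\bot \to \neg\Box\neg\Box\bot
% reads: "if F is consistent, F does not prove its own consistency"
% -- the second incompleteness theorem in modal dress.
```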
If you are a computationalist (that is to say, you believe that the human mind can be emulated by a Turing machine), then you might want to take a look at Gödel, Escher, Bach, where Hofstadter discusses how the second incompleteness theorem applies to Turing machines.
You might also enjoy "Forever Undecided" by Smullyan. It uses puzzles to guide you to an intuition about what the incompleteness theorems mean for human knowledge and its limitations. In the worst case, it's a fun read.
> Sure, and abductive reasoning can be formalized in certain modal logics with Kripke semantics.
No, it can't. Abductive reasoning is probabilistic modelling, and notably, there's a line of research by Cristian Calude showing that you can soundly, non-paradoxically place probabilities on halting questions.
(Computational tractability is still an obstacle with his current approach, but it has been shown not to generate paradoxes, which is already a major step forward.)
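To make the flavor of this concrete, here's a toy sketch (my own construction for illustration; the "program" model, function names, and parameters are all invented, and this is not Calude's actual algorithm): sample "programs", run each under increasing step budgets, and use the fraction that have halted as a well-defined lower bound on the ensemble's halting probability.

```python
import random

def run_program(seed, max_steps):
    """A toy 'random program': a seeded random walk on the positive
    integers that halts when it reaches 0. Purely illustrative."""
    rng = random.Random(seed)
    state = rng.randint(1, 20)
    for step in range(1, max_steps + 1):
        state += rng.choice([-2, 1])   # negative drift, so most runs halt
        if state <= 0:
            return step                # halted at this step
    return None                        # did not halt within the budget

def empirical_halting_estimate(n_programs, budget):
    """Fraction of sampled programs that halt within `budget` steps.
    This is a sound lower bound: raising the budget can only move it
    up, never create a false 'halts' verdict."""
    halted = sum(run_program(seed, budget) is not None
                 for seed in range(n_programs))
    return halted / n_programs

low = empirical_halting_estimate(1000, budget=10)
high = empirical_halting_estimate(1000, budget=1000)
```

Because each seed is deterministic, a program counted as halting under the small budget is also counted under the large one, so the estimate only grows with the budget; the tractability problem is that no finite budget ever lets you close the gap from above.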
>you might want to take a look at Gödel, Escher, Bach, where Hofstadter discusses how the second incompleteness theorem applies to Turing machines.
This is backwards: halting problems and Kolmogorov complexity for Turing machines give us the two Incompleteness Theorems for proof systems, via Chaitin's Incompleteness Theorem.
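Chaitin's theorem, roughly stated (a sketch; K is Kolmogorov complexity and the constant depends on the system):

```latex
% Chaitin's incompleteness theorem (sketch):
% For any consistent, sufficiently strong, recursively
% axiomatized F there is a constant c_F such that, for every
% specific string x,
F \nvdash \; K(x) > c_F
% even though K(x) > c_F is true for all but finitely many x.
```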
Which also neatly gives a way around the Second Incompleteness Theorem: a hierarchical-probabilistic reasoner can create an abstract, compressed model of themselves that consists of a small enough amount of information that they can reason about its behavior without becoming subject to Chaitin incompleteness.