He's essentially trying to say that there is more to knowledge than reason, which is true and often ignored in today's "scientific" society [for example, the demise of humanities funding in colleges]. Proust is being completely unreasonable when he talks about certain smells evoking memories of his mama, but it is unfair to dismiss his thoughts - as many uber-rationalists do - as worthless sentiment.
It's reasonable, and scientifically compatible, to talk about smells evoking memories of parents. There isn't a rational process that creates the link between a sensory trigger and a memory, but there is a rational process and there are scientific fields of study by which that link can be understood (neurobiology) and its formation explained (psychology).
That kind of argument seems to me like, "Nobody [unless there's a God] rationally constructed the theory of gravity, or the complexity of fluid dynamics, therefore there's more to gravity or fluid dynamics than can be understood through rationality and science."
Scientifically compatible, yes. Reasonable, no. Emotion is not reason; it is a completely separate enterprise. That's partly what the article is talking about.
Emotion is reason ingrained through the process of evolution. For example, we feel disgust on seeing an open wound because apes who didn't care got infected and died. The ones who learned to stay away, by reasoning about their observations, later abstracted it into the emotion of disgust rather than spending energy to reason it out every time.
Similarly, a fragrance we find good could easily smell bad to an alien. Maybe, because good fragrances are associated with edible things, our mind categorised them as "good" fragrances.
Emotion is a mechanism the brain developed to avoid spending energy reasoning things out every time. We understand this today and hence treat emotion as a bias in the scientific method; but since we are humans, and emotional by evolution/definition, we demonstrate that the bias did not occur by providing data that makes the experiment reproducible.
However, the evolutionary reason for the development of a particular emotion might not exist anymore. We now know that urine is largely sterile and no longer need to be disgusted by it. Many tribes have learned this, and although the natural emotion of disgust might kick in, they still use it for its antiseptic properties to heal wounds. Many Hindus drink cow urine.
It's interesting that you bring up disgust, because that exact reaction is at the core of the seminal work by the sociologist Norbert Elias, The Civilizing Process[1].
He traces the evolution of manners through etiquette books (a remarkably enduring genre going back many centuries), and shows how things that evoke a strong disgust response in us today were actually slow-evolving social norms that have been internalized and turned into habitus (or a super-ego). He even mentions urine, which for a long while didn't evoke the same reaction as it does today. For example, some centuries ago in Europe, urinating under the staircase indoors was actually quite acceptable, and blowing your nose into the tablecloth was considered good manners.
This, of course, doesn't mean that the capacity for disgust isn't evolutionary, but that its particular triggers are social, even though we perceive them to be natural.
Actually not. This is your explanation for what emotion is, based on your scientific framework. It is not some kind of absolute truth. Given that you have no explanation for consciousness, there is a limit to your framework when it comes to explaining emotion.
No, this is not my conclusion. The reason I specifically talked about disgust, as opposed to the good fragrance of the previous comment, is that it specifically is a scientific conclusion, one I read in the works of Steven Pinker and Paul Bloom, among others.
The debate is whether emotion is independent of reason, and both are predicated on consciousness, so you are sidetracking.
Does that "separate enterprise" amount to knowledge? I don't think so. It's an artifact of neurology. It's an important artifact, and one that can't be ignored (when studying psychology or sociology, or managing humans, or planning events involving humans), but nevertheless that emotional artifact is not useful knowledge. Only the [scientifically and rationally understandable] mechanisms behind the emotional and sentimental connections are useful knowledge. The connections themselves may serve sociological purposes, enabling cultural knowledge generation and accumulation, not to mention improving societal stability, but in themselves emotional artifacts are not knowledge.
Sure, knowledge is justified true belief (we can add that it must be transmissible). This separate enterprise is certainly true, insofar as it is somebody's qualitative experience; it is justified; and it is also a belief. He transmits it through his writing.
Your argument presupposes its conclusion that this kind of thinking is not knowledge. That said, even if it isn't knowledge, so what? That does not mean it is not valuable. In questioning the primacy of reason we can also question the primacy of conventional modes of knowledge.
> Only the [scientifically and rationally understandable] mechanisms behind the emotional and sentimental connections are useful knowledge.
You understand that that is a pure value judgement.
At their heart, love of freedom over slavery, compassion over apathy and wisdom over ignorance are value judgements. I know them to be true, but I cannot prove them rationally.
The fact that something is an artifact of something else doesn't mean that it can be meaningfully reduced to it (or even tractably reduced to it at all). Suppose we discovered the most basic of physical laws, and suppose that somehow computational power made a simulation of them tractable. Is our ability to simulate the universe the same as understanding every aspect of it?
On a more basic level, running software is an artifact of hardware (yet the software can simulate a computer with different semantics than the computer it's running on). So is the study of hardware the only relevant knowledge of the running software? And if you say that the software exists independently of the hardware, you'll find yourself with an idealist philosophy that, according to you, is at odds with your materialistic view.
You fly through words like "knowledge" and "useful" without giving them proper thought. What do these words mean? We live in a philosophy-starved culture. Too much social media, not enough deep thought.
And how can you know that what you call rationality is also not an artifact of neurology? In fact, Gödel proved that if you don't doubt your own rationality, you are in fact being irrational.
I think science is cheapened by this sort of blind faith. The defining characteristic of the scientific attitude is doubt, not certainty.
>And how can you know that what you call rationality is also not an artifact of neurology? In fact, Gödel proved that if you don't doubt your own rationality, you are in fact being irrational.
That is not at all what Goedel's theorems actually say.
>We lived in a philosophy-starved culture. Too much social media, not enough deep thought.
No, we live in a culture that loves to engage in cheap, shoddy philosophizing by generalizing incorrectly from facts.
> That is not at all what Goedel's theorems actually say.
An informal description of his second incompleteness theorem (from the "Stanford Encyclopedia of Philosophy"):
"For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself."
One example of a sufficient "certain amount of arithmetic" for this to apply is a system with the integers, addition and multiplication. Such a system can no longer prove its own consistency.
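As a sketch of that statement in symbols (my paraphrase, not the SEP's), writing Con(F) for the arithmetized sentence asserting F's consistency:

```latex
% Goedel's second incompleteness theorem, paraphrased:
% for any consistent, recursively axiomatized system F that interprets
% enough elementary arithmetic,
F \text{ is consistent} \;\Longrightarrow\; F \nvdash \mathrm{Con}(F)
% Contrapositively: if F proves Con(F), then F is inconsistent.
```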
If you think that this does not apply to human efforts at rationality, I would like you to explain why.
Debate becomes cheap and shoddy not when someone is wrong (I could be), but when you resort to name-calling instead of pointing out where you think the mistakes are.
>If you think that this does not apply to human efforts at rationality, I would like you to explain why.
Human beings aren't proof systems. We don't operate under conditions of certainty via deductive reasoning. We're inductive (or rather, abductive) reasoners from the get-go.
Sure, and abductive reasoning can be formalized in certain modal logics with Kripke semantics.
What Gödel tells us is that, as long as you have a sufficiently powerful formal system, you cannot prove the consistency of the system itself. Modal logics are no exception.
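Since Kripke semantics came up: here is a minimal toy sketch (my own illustration, not anyone's library) of evaluating modal formulas over a Kripke frame, where box f holds at a world iff f holds at every accessible world, and diamond f iff f holds at some accessible world:

```python
# Toy Kripke-frame evaluator: worlds, an accessibility relation,
# and a valuation assigning atomic propositions to worlds.

def holds(world, formula, relation, valuation):
    """Evaluate a modal formula at a world.
    Formulas are tuples:
      ('atom', p)    -- atomic proposition p
      ('not', f), ('and', f, g)
      ('box', f)     -- "necessarily f": f holds in every accessible world
      ('dia', f)     -- "possibly f":    f holds in some accessible world
    """
    kind = formula[0]
    if kind == 'atom':
        return formula[1] in valuation.get(world, set())
    if kind == 'not':
        return not holds(world, formula[1], relation, valuation)
    if kind == 'and':
        return (holds(world, formula[1], relation, valuation)
                and holds(world, formula[2], relation, valuation))
    if kind == 'box':
        return all(holds(w, formula[1], relation, valuation)
                   for w in relation.get(world, set()))
    if kind == 'dia':
        return any(holds(w, formula[1], relation, valuation)
                   for w in relation.get(world, set()))
    raise ValueError(f"unknown connective: {kind}")

# Two worlds; w1 can "see" w2, and p holds only at w2.
relation = {'w1': {'w2'}, 'w2': set()}
valuation = {'w2': {'p'}}

print(holds('w1', ('dia', ('atom', 'p')), relation, valuation))  # True: p is possible at w1
print(holds('w1', ('box', ('atom', 'p')), relation, valuation))  # True: p holds in every accessible world
print(holds('w2', ('dia', ('atom', 'p')), relation, valuation))  # False: w2 sees no world where p holds
```

Which modal axioms are valid depends on structural properties of the accessibility relation (reflexivity, transitivity, etc.), which is why "certain modal logics" matters in the claim above.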
If you are a computationalist (that is to say, you believe that the human mind can be emulated by a Turing machine), then you might want to take a look at Gödel, Escher, Bach, where Hofstadter discusses how the second incompleteness theorem applies to Turing machines.
You might also enjoy "Forever Undecided" by Smullyan. It uses puzzles to guide you to an intuition about what the incompleteness theorems mean for human knowledge and its limitations. In the worst case it's a fun read.
> Sure, and abductive reasoning can be formalized in certain modal logics with Kripke semantics.
No, it can't. Abductive reasoning is probabilistic modelling, and notably, there's a line of research by Cristian Calude showing that you can soundly, non-paradoxically place probabilities on Halting questions.
(Computational tractability is still an obstacle with his current approach, but it has been shown not to generate paradoxes, which is already a major step forward.)
>you might want to take a look at Gödel, Escher, Bach, where Hofstadter discusses how the second incompleteness theorem applies to Turing machines.
This is backwards: halting problems and Kolmogorov complexity for Turing machines give us the two Incompleteness Theorems for proof systems, via Chaitin's Incompleteness Theorem.
Which also neatly gives a way around the Second Incompleteness Theorem: a hierarchical-probabilistic reasoner can create an abstract, compressed model of themselves which consists of small-enough amounts of information that they can reason about its behavior without becoming subject to Chaitin Incompleteness.
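For reference, a sketch of Chaitin's incompleteness theorem (my paraphrase), writing K(s) for the Kolmogorov complexity of a string s:

```latex
% Chaitin's incompleteness theorem, paraphrased:
% for any consistent, sufficiently strong formal system F there is a
% constant L_F (depending on F) such that
\forall s:\; F \nvdash \; K(s) > L_F
% even though all but finitely many strings s in fact satisfy K(s) > L_F.
```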
> there's a rational process and scientific field(s) of study by which that... can be understood.
Yes, but only for a very specific definition of "understanding". See my other comment about universal computation and phenomenology. There are other, no less valid, forms of understanding. I believe that the idea of universal computation reconciles materialism with idealism, putting them both on equal footing. The workings of the software cannot be tractably (and certainly not meaningfully, by any common sense of "meaning") reduced to the material existence of the computer.