That was a nice introduction to the concept of induction: the idea that patterns observed in our past experience will hold in the future.
Since we're talking about induction as the basis of science, I'm surprised the concept of "falsification" wasn't mentioned, which has been the "workhorse" of most science during the past two hundred years. See https://en.wikipedia.org/wiki/Falsifiability
Specifically in the context of classical statistics methods (frequentist statistics), the idea of using p-values for scientific discovery only makes sense as part of repeated studies (induction over multiple tests of a theory). It's easy for any one study to observe some pattern by chance (one black raven), but if repeated studies all show this pattern exists, then we kind of start to believe the pattern might be true.
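To make the "one black raven by chance" point concrete, here is a rough simulation sketch (my own toy illustration, not anything from the article; the study size and thresholds are arbitrary): under a true null, a single study clears p < 0.05 about 5% of the time, but requiring the same result in several independent replications makes a fluke far rarer.

    import random
    from statistics import mean, stdev
    from math import sqrt, erf

    def one_study(n=30):
        # Draw n samples from a null (no-effect) distribution and return a
        # two-sided p-value for "the mean differs from zero" (normal approximation).
        xs = [random.gauss(0, 1) for _ in range(n)]
        z = mean(xs) / (stdev(xs) / sqrt(n))
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    def replicated(k=3):
        # True only if all k independent replications reach p < 0.05.
        return all(one_study() < 0.05 for _ in range(k))

    trials = 10_000
    print(sum(one_study() < 0.05 for _ in range(trials)) / trials)  # close to 0.05
    print(sum(replicated() for _ in range(trials)) / trials)        # almost never (~0.05**3)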
Bayesian statisticians don't use the falsification paradigm directly, but instead focus on estimation, and combining the evidence from multiple experiments to obtain "tighter bounds" on the estimated quantities of interest. The conceptual machinery is very different, but the idea of induction is still kind of present in the form of "more data reduces uncertainty".
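As a rough sketch of what "tighter bounds" means in practice (my own toy example, assuming a Beta-Bernoulli model, which is not something the article discusses): pooling data from more experiments shrinks the credible interval on the estimated quantity.

    from statistics import NormalDist

    def credible_interval(successes, failures, prior=(1, 1)):
        # Approximate 95% central credible interval for a Beta posterior,
        # using a normal approximation (fine once counts are moderate).
        a, b = prior[0] + successes, prior[1] + failures
        post_mean = a / (a + b)
        post_sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
        z = NormalDist().inv_cdf(0.975)
        return post_mean - z * post_sd, post_mean + z * post_sd

    print(credible_interval(9, 1))    # one small experiment: wide interval
    print(credible_interval(90, 10))  # ten pooled experiments: much tighter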
The concept of falsification was introduced by Karl Popper in 1934... not 200 years ago. Popper is a historically important philosopher of science, but under scrutiny much of his thinking doesn't actually match up with how science is done (now or historically), and his philosophy hasn't been the dominant philosophy of science since the 1960s (enter Thomas Kuhn and Paul Feyerabend and Imre Lakatos). Nonetheless, the idea that "science = falsification" permeates popular culture, and is even repeated by many scientists who think about their work, but don't as often think about how they think about their work.
Which ought to be a good reminder that the philosophy of science is a branch of philosophy, not science.
Similarly the problem of induction is a problem for philosophers, not scientists: scientists have a method of doing science that works, and has created a modern world, with all sorts of popular things like televisions and smartphones. However, despite the fact that we’re next door neighbors to philosophy, sometimes borrow their tools, and even have some roots in “natural philosophy,” the best tool we have—the one that makes it all useful to society at large—seems to be the one that their framework says can’t be justified.
When your framework with a long history and lots of proponents seems to indicate that a single little project is not well grounded, that project has a problem. When it seems to indicate that the grand project which underpins all of modern society is not well grounded, the framework has a problem.
Philosophy of science is philosophy, not science. Kuhn, Lakatos, etc, however, muddied the waters a bit by introducing the notion (at least implicitly) that philosophy of science should consider how science is actually practiced, which looks like a scientific approach to me.
I think science and philosophy of science would both improve if there were less separation between the fields. Many, if not most, practicing scientists hold a Doctor of Philosophy in a scientific field, yet none of the graduate programs that I'm aware of require a single course in philosophy of science. Equally problematic, I'm not sure how many philosophers of science have actually taken the time to empirically discover anything.
There are citations both for and against falsificationism at the Stanford Encyclopedia of Philosophy[0]; quoting:
"Popper’s demarcation criterion has been criticized both for excluding legitimate science (Hansson 2006) and for giving some pseudosciences the status of being scientific (Agassi 1991; Mahner 2007, 518–519). Strictly speaking, his criterion excludes the possibility that there can be a pseudoscientific claim that is refutable. According to Larry Laudan (1983, 121), it “has the untoward consequence of countenancing as ‘scientific’ every crank claim which makes ascertainably false assertions”"
From the citation on Hansson, the abstract[1] reads:
"...Furthermore, an empirical study of falsification in science is reported, based on the 70 scientific contributions that were published as articles in Nature in 2000. Only one of these articles conformed to the falsificationist recipe for successful science, namely the falsification of a hypothesis that is more accessible to falsification than to verification."
Popper's criterion in a vacuum could seem exclusionary, but his philosophy of science involves his underrated idea of evolutionary epistemology: that all theories, the seemingly pseudoscientific and the rest, compete to explain something, testable or not. Explanation is the most fundamental aspect; the rival statements compete to solve some problem in terms of how and why.
It's kind of like how the modern standards of mathematical "proof" don't extend backwards very well. Nobody would say that Euler wasn't one of the greatest mathematicians of all time, but much of his work was quite unrigorous despite being accepted by the math community long before it was formally verified. Similarly, there is lots of science that was settled well before it actually made falsifiable predictions which were confirmed through experiment. The classic example is Newtonian physics, which was contradicted by several observations that were only reconciled by Einstein. Another example is Boyle's models of gases and liquids. These models all hold approximately, and are very useful (hence their acceptance), but wouldn't really pass the criterion of falsifiability.
Feyerabend frames it as "anything goes", which is supposed to be what the horrified scientist utters to themselves as they peruse the historical record of their field. Scientists tend to go with "whatever works best" (and whatever secures more funding), rather than following a strictly falsificationist paradigm.
Popper didn't equate science with falsification. That's a gross simplification that's often pushed (I once believed it myself), yet it doesn't actually hold up once you put his thinking under scrutiny.
As a scientist, I'm glad you mention Lakatos, as he described science in a way that is recognizable to practitioners, yet is more philosophical than Kuhn (more a sociologist of science). He significantly impacted how I practice science and evaluate theories (and research programs).
Lakatos is interesting because his philosophy tried to rescue Popper's philosophy in light of Kuhn's arguments, but mutated it considerably in the process.
Popper is more methodology of science, while Kuhn is more protosociology of science. Kuhn’s book doesn’t refute Popper’s, even if he would have liked it to.
I don’t know about the two others, but after reading about them briefly they seem as antirational as any philosophers subscribing to Hegel and/or Marx. When it comes to Lakatos specifically, I can say with certainty that his philosophy of mathematics has so little to do with mathematics that it’s not worth any paper (I studied maths and know many professional mathematicians personally).
Nevertheless falsification is surely the most coherent justification for scientific reasoning in the sense of "a philosophy professor has a gun to your head and demands that you justify your work as a method for approximating reality"?
No, if anything falsificationism and scientific realism are in tension. From Popper's The Logic of Scientific Discovery:
> I think that we shall have to get accustomed to the idea that we must not look upon science as a ‘body of knowledge’, but rather as a system of hypotheses; that is to say, as a system of guesses or anticipations which in principle cannot be justified, but with which we work as long as they stand up to tests, and of which we are never justified in saying that we know that they are ‘true’ or ‘more or less certain’ or even ‘probable’.
The most common realist arguments are probably variations on the "miracle argument": our scientific practices work, and the only plausible explanation is that they have at least some ability to find the truth.
Isn’t that a refutation of justificationism, not of realism? Isn’t he just saying that there’s no process you can follow that will guarantee that you haven’t made a mistake, or similarly, that there are no claims which can reach the status of being beyond criticism?
This can be said of all of philosophy (at least outside of the purely formal world of logic), not just the physical sciences. From that perspective, 'standing up to tests' is a poor way to evaluate hypotheses, except in comparison to all the others.
Paying attention to falsification also performs the useful task of pushing hypotheses into saying something definitive.
Science is much more than falsification, of course: for one thing, a falsificationist stance does not generate hypotheses on its own.
I'm not suggesting that all of philosophy should adopt falsification as a principle (and much less that it should be ignored if it cannot): ethics, for example, is a field which I think is important beyond what is falsifiable.
"The classical or frequentist approach to statistics (in which inference is centered on significance testing), is associated with a philosophy in which science is deductive and follows Popper’s doctrine of falsification. In contrast, Bayesian inference is commonly associated with inductive reasoning and the idea that a model can be dethroned by a competing model but can never be directly falsified by a significance test. The purpose of this article is to break these associations, which I think are incorrect and have been detrimen-
tal to statistical practice, in that they have steered falsificationists away from the very useful tools of Bayesian inference and have discouraged Bayesians from checking the fit of their models."
Well, that is not really what the article is about. It's more about the problem of induction itself, and the problems with its use as a basis for both scientific and philosophical endeavors. The article only made a flimsy dive into the subject, and the Wikipedia article does a much better job.
"...First formulated by David Hume, the problem of induction questions our reasons for believing that the future will resemble the past, or more broadly it questions predictions about unobserved things based on previous observations. This inference from the observed to the unobserved is known as "inductive inferences", and Hume, while acknowledging that everyone does and must make such inferences, argued that there is no non-circular way to justify them, thereby undermining one of the Enlightenment pillars of rationality..."
To address your surprise, the article was attempting to focus on just induction while falsifiability is more the higher level https://en.wikipedia.org/wiki/Demarcation_problem , but admittedly, philosophy of science topics do overlap a lot.
> Since we're talking about induction as the basis of science, I'm surprised the concept of "falsification" wasn't mentioned, which has been the "workhorse" of most science during the past two hundred years.
An article on the subject of falsifiability and Popper was also published on the site (and posted on HN by me within minutes of this one):
The benefit of the Bayesian approach is the need to explicitly state a prior distribution; in this case, you would need to state the prior probability that all ravens are black. This forces us to realise that induction does not prove anything, and reduces to the matter of having the evolutionarily-gifted skill of coming up with good priors.
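A minimal sketch of that updating, with made-up numbers (both the prior and the chance q that a raven is black even if not all are, are assumptions I'm choosing for illustration): the posterior climbs with each black raven but never reaches 1, and the whole exercise hinges on the prior you were willing to state.

    def update(prior_all_black, q_black_if_not_all=0.9):
        # Posterior P(all ravens are black | one more black raven observed),
        # assuming a black raven has probability 1 under the hypothesis and
        # probability q under its negation (a made-up simplification).
        p = prior_all_black
        return p / (p + (1 - p) * q_black_if_not_all)

    p = 0.5  # assumed prior
    for i in range(5):
        p = update(p)
        print(f"after {i + 1} black ravens: P(all ravens black) = {p:.3f}")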
Bayesianism is the only reliable way of creating knowledge, as it is the direct application of the mathematics of probability. All frequentism can do is tell you how poorly your data fits an unrealistic straw-man model.
Important to distinguish Bayesian epistemology (the idea that knowledge comes from induction) from Bayesian techniques in statistics. Karl Popper’s epistemology (critical rationalism) refutes the former, but does not mind the latter.
Popper would agree that Bayesian techniques are useful (and even powerful) tools in statistics and practical problem-solving. But he would say that Bayesian ideas are insufficient to understand how knowledge grows and progresses. From a critical rationalism viewpoint, knowledge grows through conjecture and refutation - the proposal and testing of new explanatory theories.
Bayesian statistical approaches can be part of the toolkit to falsify those theories, but they can’t explain how they come about, so they aren’t adequate as an epistemology. Popper’s critical rationalism may not be entirely complete, but it seems far better as a starting point for explaining how we get knowledge (which is what an epistemological framework needs to do) than Bayesian ideas about measuring truth statistically. Major breakthroughs like relativity and quantum entanglement are evidence for this: they required conjectures that at first seemed at odds with what we had observed. Physicists had to follow rational theories down very strange paths before any empirical confirmation, eventually to create new explanations that allowed us to model and predict distant cosmological events that had previously appeared random to us.
If anyone is interested in these ideas (which are still fairly new to me; sorry for anything I’m expressing poorly), I highly recommend reading The Beginning of Infinity. It is an exciting, wide-ranging study of Popper’s epistemology by the guy who ‘invented’ quantum computing. It also discusses the ‘many universes’ theory, which is fascinating.
Falsifiability means that a claim is presented with specifics that could be proven wrong in some universe, not that they have to be shown wrong in this one (which is absurd). You're hanging on to the wrong meaning of "falsify", like to falsify a passport.
One example of something that is not falsifiable is a deductive argument. Deduction isn't science; it is logic. A valid deductive argument holds in virtue of its form, regardless of the truth values of its constituent propositions. So in the most literal sense of the word, a deductive argument is not falsifiable.
Another example of non-falsifiability is verbal bunk that refers to poorly defined or undefined concepts. The claims are not specific enough that they can be investigated and found true or false.
E.g. "The world didn't exist yesterday; it came into being moments ago, with everything in it, including all your memories, and all of civilization and its apparent historic record." Okay, great; by what means of investigation can you tell that this is right? If this claim were false, how would the investigation inform you?
Claims about gravity are falsifiable if they are specific and testable. E.g. "If I release this rock, it will move toward the ground and come to rest" is falsifiable because in some imaginary world, it could happen that the rock instead flies up and away when released. The claim is specific: we can construe what kinds of events could be observed which would make it false, like that the opposite of what is predicted happens. If a proposition is falsifiable, and we are not able to find it false, then that gives us confidence that it may be right.
Nobody said it's the defining concept in the scientific method. It's just a characteristic that scientific claims (by whatever method they were produced) must have.
Suppose we have not done any scientific investigation at all yet, and just have a hypothesis about something. That hypothesis should be a falsifiable statement, otherwise it is not something that can be investigated (e.g. by a scientific method). It doesn't define the method, but it defines the characteristics of some of the inputs and outputs.
There have been smoke-filled blowhards in academia for generations, such as Hegel, whose entire opus consisted of deliberately nonsensical word salad, and whose careers and income depended entirely on confusing people in such a way as to seem profound, with word salad intended to be impenetrable and non-specific so nobody could clearly point out the myriad ways they were full of crap. Schopenhauer had a good go at explaining it, if you're interested.
Popper was one of those people. He was not a scientist. He never had anything to do with science. He was a sophist. He talked rings around people in such a way that they never knew what the hell he was talking about, but they assumed it must be profound and amazing.
Karl Popper himself based his ideas about science not on actual science, but on philosophies including Marxism and Freudian psychology.
Karl Popper's own words: "A theory which is NOT REFUTABLE by any conceivable event is non-scientific." Therefore the theory of gravity is not a scientific theory because nobody has yet conceived of an event or experiment which refutes it.
I realise this will cause a negative reflex among those who are addicted to internerding and have held tightly to the unfounded belief (picked up from other internerds) that "Science must be falsifiable!", but it's just not reality. It's just a little thing that people saw on Wikipedia, latched on to, and figured it was really significant and legitimate. But it's just Popper's nonsense.
There are plenty of extensive criticisms of Popper's sophistry out there by actual scientists and philosophers. Feel free to read some.
As for the scientific method, if you can't do a web search and find out what it actually is, well, nobody can help you.
> Popper said that for a theory to qualify as science, it must be capable of being proven wrong. For example, for the theory of gravity to be scientific and correct, there must be an experiment which can prove it wrong. Is there such an experiment? Has anyone proven the theory of gravity wrong? Is it therefore not scientific?
I think you're misunderstanding it?
The requirement isn't that there exists an experiment that proves gravity wrong, and has been performed in reality. The requirement is that for the theory you propose, there must be some conceivable scenario in which you decide your theory is incorrect.
E.g., if you posit "All swans are white", then it's falsifiable. All you need is a black swan to show up. But there is no requirement for this to actually happen; there's no problem with every swan in the universe being white.
I'm a scientist and I feel like philosophers of science have much to say about science, even more than scientists do. Especially on these grounds. What you said was correct, but it's a reduced vision. Why don't we add Kuhn and Lakatos to the mix? https://en.m.wikipedia.org/wiki/Research_program
What people also tend to misunderstand is that Popper and also the logical positivists provided a normative picture of what science ought to be, rather than what it is. This naive normative picture of the scientific method was in vogue during the 1950s-60s (perhaps it still is with many scientists); for example, look at the books of Medawar, who was heavily influenced by Popper's ideas. The very fact that scientists do not give up on their pet theories despite contrary evidence is good evidence against such simple views, which have an image of the scientist as someone without any human emotions or personal beliefs.
Also, the associated ideas that science is value-neutral and that observations are theory-agnostic, which were foundational to these viewpoints, have been seriously challenged.
The idea that we ought to believe what's true is itself a normative picture of how humans should think rather than a descriptive picture of how humans do think.
There's no scientific way around underdetermination, but there certainly are a number of philosophical ways around it.
Starting with: I'll believe whatever the heck I want to believe.
The theory of gravity is easily falsifiable. If we identify an object that has a given mass and does not produce the expected gravitational attraction, we will have to review the theory.
Your assessment is very harsh. Popper's framework is hugely influential in science.
I disagree. The law of gravity is what you describe, masses attract, and is falsifiable. The theory, that masses are distorting and curving space to cause that effect, is very hard to falsify.
What you are alluding to is a modern interpretation of the theory, one that arose from Einstein's postulation about the relationship between space and time. This interpretation produces novel, testable predictions which have been experimentally verified.
As a small quibble, there is no such distinction between "laws" and "theories". In common language a theory may imply less certainty than a law, but in science this is not the case.
There is a difference. A law comes through observation: for example the laws of thermodynamics, where there has not (really) been an observation of them being violated (at the macro scale). There's not really a theory underpinning them; mathematically a perpetual motion machine could work, but it has been observed that one doesn't, and it appears that the laws can't be violated. That atoms make up all matter is a Theory: it's a model based on hypotheses that have been tested over and over. A different model could be created that may match better (like everything being made of fields and waves). Basically, a theory is how, a law is what.
[Yes i am aware that atoms don't necessarily conflict with waves]
I don't think the principle has anything to do with the difficulty or hardness of falsification. If a theory excludes something from occurring in reality, it is falsifiable, if it does not, then it is not falsifiable.
So think about it like this, if a theory is "compatible" with everything that happens, it really does not make any predictions, as regardless of what happens, the theory holds. What the point of such a theory is is unclear.
General relativity precludes many things from happening, and many tests have been performed that would have detected things that general relativity precludes from occurring: https://en.wikipedia.org/wiki/Tests_of_general_relativity
There is a difference between classical gravity whereby masses attract in proportion to GMm/r^2 and the curved space time interpretation in which masses that are held from coming together are actually accelerating due to the curvature (and all that). Due to that difference, some experiment is possible which could go one way or the other.
> some experiment is possible which could go one way or the other.
Yes, I agree, but like I said it's much harder to find what these are and create experiments to show them. I'm also saying the theory part is more than the math formula, it's the explanation.
I think you're exaggerating by saying it has nothing to do with science, even if most scientific thinking isn't focused on falsification.
Theories are the explanatory part to explain why a law or correlation occurs. The law can be falsified, but I think you're right, that it's very hard to falsify the theory portion.
Indeed, the first theory of gravity was proven wrong.
Einstein's theory of general relativity falsified Newton's law of universal gravitation.
If you want to take this even further (which is a leap that would certainly upset most people's sensibilities) - Einstein proved that gravity doesn't exist. What does exist is the curvature of spacetime which explains all the phenomena which we used to attribute to gravity.
The raven paradox is not that mysterious in my opinion: observing non-black non-ravens is indeed support for all ravens being black. How do you check that all ravens are black? From the set of all objects in the universe you select all objects of type raven and then check for each one that it is black. Or you do it the other way around: you select all non-black objects and then check for each one that it is not of type raven. Same result. [1] The cardinality of the two sets - all ravens and all non-black objects - will however be vastly different, and I would certainly much prefer manually checking the list of all ravens.
I think it is this disparity that makes us intuitively dismissive of the idea that seeing green apples helps us in any way with the color of ravens. And we might also have some bias in our way of thinking - do you first see that a thing is an apple and then that it is red, or do you first see that it is a red thing and then that it is an apple? Ad hoc I would say that color usually comes second, especially as we can essentially paint any object in any color we like, for example a blue apple. Color seems to only be the primary attribute if one cannot clearly identify something; then it becomes just a red thing in the distance.
[1] Also note that in both variants we ignore black non-ravens, in the first variant because they are not ravens, in the second one because they are not non-black.
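A tiny way to see the equivalence (my own toy universe, nothing from the paradox literature): checking that every raven is black and checking that every non-black object is a non-raven always give the same verdict, even though the second set is vastly larger in reality.

    universe = [
        {"kind": "raven", "color": "black"},
        {"kind": "raven", "color": "black"},
        {"kind": "apple", "color": "green"},
        {"kind": "shoe",  "color": "white"},
        {"kind": "crow",  "color": "black"},  # black non-raven: ignored by both checks
    ]

    all_ravens_black = all(o["color"] == "black"
                           for o in universe if o["kind"] == "raven")
    all_nonblack_are_nonravens = all(o["kind"] != "raven"
                                     for o in universe if o["color"] != "black")

    print(all_ravens_black, all_nonblack_are_nonravens)  # always equal verdicts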
"The truly great advances in our understanding of nature originated in a manner almost diametrically opposed to induction. The intuitive grasp of the essentials or a large complex of facts leads the scientist to the postulation of a hypothetical basic law, or several such basic laws. From the basic law (system of axioms) he derives his conclusion as completely as possible in a purely logically deductive manner." - Albert Einstein [0]
This was the fall of Einstein though. He refused to acknowledge that the world truly was incomprehensibly weird in the quantum realm and kept trying to find some intuitive workaround or another for decades. The world is a sadder place because his true unique mind could have propelled us to unimaginable leaps (unimaginable to us mere mortals) if he had taken his head out of his own ass and accepted the reality for what it is and try to find the next breakthrough.
It seems like Einstein was saying "there's no such thing as a theory-free fact," specifically in his case that observation itself is influenced by some pre-existing theory. Some deduction is involved. But this raises the question... where did *that* theory originate?
The problem of induction since Hume has been a massive ideological misdirection against the possibility of knowledge. We do not use induction. We use abduction.
We first posit a universal, and then each subsequent data point is taken to confirm or delimit the universal. With a clear counter-example, the universal is dropped.
As soon as the child touches the fire once, the conclusion "fire is always dangerous" is reached.
This is in part why all ML (etc.) approaches based on conditional probability break: they are subject to the problem of induction.
You cannot do science with statistics: there is nothing in the data that generalises. The generality is the hypothetical properties of the data generating process itself.
ie., the shadows on the wall of Plato's cave cannot be averaged to produce the vases on the outside. It is the properties of the vases (the universals) that produce the shadows -- there are an infinite number of possible shadows, and no statistical operation on any amount of them reverses to "clay pot"
Some scientists use statistics, sure -- by "doing science" I mean directly explaining how the world works.
The way that (actual) scientists use statistics is to "choose between possible vases". Suppose we're in Plato's cave: we notice strange shadows. We then build pots inside the cave to match those shadows. Different pots produce indistinguishable shadows, and some pots produce impossible ones.
Here we can statistically analyse all the shadow data to select the best possible pot model.
You can see that we never get to certainty, because there's always a class of pots producing identical shadows. So we use stats + "theoretical virtues" to select the pot we think most likely.
My point is that in this story stats was never used to build the pots. Such a thing is provably impossible.
This is, essentially, the problem of induction. And it's why the ML approach to pot = "compression of the shadows" breaks. It only works if you never move around the cave, ie., are always sat in exactly the position that the "compressed shadow" looks identical to the real one.
The goal of science is to provide predictive models of the observed data. Explaining 'how the world works' is a bit too ambitious. You need to reach to religion for that.
I know of no such scientific model. Indeed, almost all of them do not predict observed data.
Newton's universal law of gravitation didn't predict huge amounts of stuff in the solar system. Rather than say it's wrong, we supposed there were missing planets.
All models i'm aware of "predict" in the sense that they make existence claims; they don't "predict" observations -- on the latter, they'd basically all 'wrong'.
The body of science we call "gravity, space/time, etc." makes the following existence claims: there is a sun, an earth, planets; there are forces, masses; these have properties that give rise to interactions; etc.
From this body of knowledge, most of what we can predict about the observable universe is wrong, simply because, ex hyp., we aren't able to observe enough of it.
The models here aren't wrong, we just do not have enough data to match them to observations. The world really has forces, masses, planets, etc., and they interact such that F = GMm/r^2 subject to {some constraints}. But if we use that to predict where "everything should be", nothing is in the right place. We're missing a lot of "everything".
This "science predicts observations" is humean problem of induction BS. It's never been true; it's radically sceptical; and is a deeply broken model explaining how we know things.
If we had to literally have models from observations to observations, knowledge would be impossible.
Your approach to the problem, while theoretically correct (the world is infinitely complex, therefore no model can encompass it, news at 11), is of zero practical use.
As a matter of fact, IIRC, the Chinese around the 11th century walked the exact same intellectual path, which led them to conclude that trying to model the behavior of the natural world was a waste of time, and made them completely abandon the idea of simplified, yet predictive and therefore useful models.
The net result was: they nipped their then-budding tech/science endeavors in the bud.
In other words, your take does not seem to be particularly fruitful.
Engineers build "useful models", scientists as a matter of fact, build explanations.
Whether those explanations prove useful is always an open question; newton's took centuries to have much of an engineering use.
Engineering is a kind of pseudoscience justified by its utitility -- if it's undertood to be mere utility, it's fine. But "engineering thinking" has taken over large areas of thinking, and it's a catastrophe.
You cannot build really new things by engineering. Engineering is just "statisticalisation" of new science; a simplification, an eloration, a proceduralisation. The science must come first.
Yes, there is a dynamical equilibrium between engineering and science; with "analogical engineering" often occurring before "direct explanation".
So, accidental discoveries -> lenses -> optics -> telescopes -> theory of gravity -> etc.
The full picture here is very subtle. The idea is that we are in the world as animals engaged in a prototypical science in every action we perform: we interpret causes, take goal-directed actions, refute universals, etc. Our conscious mind is a kind of "local, heuristical science" of our own bodies and how we relate to our environments.
This "native science" gives rise to a "naive engineering" where we play around with possible tools and see what they do. You then bootstrap those tools to do non-naive science.
Science, in the sense I mean of mechanistic explanation involving making claims about stuff that exists, how it works, how it interacts, etc., must necessarily precede engineering (in the sense of "utility of coincidences").
But it's an incredibly naive science of basic animal intelligence at first -- a science of one's body hurting and finding out why, etc. -- and you need to supplement this with iterated engineering-science loops to "get to the stars".
The problem with Hume, and the sceptics in this thread, is they either have this backwards; or have no science at all in the picture.
Hume just assumed science is impossible because we were just "engineers of images in our heads", rather than "scientists of our bodies"
In the specific case you have chosen as an exemplar, George Cayley established some important principles before the Wrights (as the latter acknowledged), and they themselves succeeded where others failed by doing the science that they needed to cross the threshold. Subsequent development often followed from advances in the science of aeronautics, sometimes in very counter-intuitive ways - for one example, look up Whitcomb's area rule.
> The goal of science is to provide predictive models of the observed data.
This, 100%.
The "explaining how the world works" is only interesting insofar as it helps produce further and better predictive models (BTW Feynmann had that exact same view [1])
Everything else is intellectual onanism, an unfortunately rather common academic pursuit.
> The problem of induction since Hume has been a massive ideological misdirection against the possibility of knowledge.
Hume's observations were shocking, precisely because they undermined the most foundational of our assumptions and they have never been satisfactorily countered since; the use here of "ideological" and "misdirection" then seems rather unfair to Hume (though I acknowledge your use of "since") - he flagged our own misdirection in the most un-ideological way imaginable.
I am surprised no one has mentioned David Deutsch yet, as he probably has the best critique of induction (and a much more useful alternative) of any living person.
He is also the father of quantum computation, and his two books, The Fabric of Reality and The Beginning of Infinity, are the most important books I have ever read.
13 minutes in that "short lecture" and he's still droning on about how gamblers originated and promoted probability, which is a pointless narrative bordering on political. Sheesh.
How does Bayesian reasoning fit into this? To go with the introductory example, having seen the sun go up ~ 11000 times in my lifetime can never prove that it will go up tomorrow, but every time it does go up, I adjust my priors for expecting the sun to go up the next day, and assign it a bit more likeliness.
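One textbook way to make that precise (a sketch, assuming a uniform prior on the "rise rate", which is itself a modelling choice) is Laplace's rule of succession: after n sunrises and no failures, the posterior predictive probability of a sunrise tomorrow is (n + 1)/(n + 2). It creeps toward 1 but never proves anything.

    def p_sunrise_tomorrow(n_sunrises, n_failures=0):
        # Rule of succession under an assumed uniform Beta(1, 1) prior.
        return (n_sunrises + 1) / (n_sunrises + n_failures + 2)

    for n in (1, 10, 11_000):
        print(n, p_sunrise_tomorrow(n))  # approaches 1, never reaches it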
I think a discussion on induction is best done by splitting the resulting models in two: models based on statistics, and models based on (abstract/iconic) simulation. (Eg all swans are white vs all swans lay eggs.)
Since our reality is "atoms and void", and since the sun and earth are huge configurations of atoms locked together in a stable pattern, the sun coming up tomorrow has nothing to do with statistics. And bayesian reasoning plays no role in our predictions or certainty. At least not directly. It does indirectly, by asking what perturbation, what intervention, can stop this from happening? And how likely are such events?
If you are a human who wrote that statement, you are a conscious being with free will. You know this to be true through your direct experience. It is not subject to debate or falsification, and it is not merely a model.
Any attempt to debate this point is self-refuting.
If you reject consciousness and free will, then no discussion of any kind on any subject is possible.
By what means would one even “show their work” on this subject? That would be asking me to prove consciousness using some means outside of consciousness.
There are no skipped steps. Any attempt to challenge it first requires its acceptance. A non-conscious entity would be unable to raise an objection in the first place.
Determinism is totally incoherent and self-evidently false. There is no evidence for determinism available to you. The very concept of “evidence” requires free will.
There are no stupid determinists; it takes a certain level of intelligence to become so disconnected from literal moment-to-moment lived experience.
Edit: I erred in saying it is false; that is too generous. It is arbitrary.
I don't think most philosophers would agree with you.
One framing of determinism is that you can only choose from the options in your head, weighted by your preferences. Both your preferences and those options were acquired through your experience, so how can you choose anything other than what your experience already influenced you to do?
It's fine to disagree, but do you find that "incoherent" and "self-evidently false"?
If everything is the result of previous factors, and nothing could have turned out differently, what is the point of making any argument about anything at all?
You are relying on my free will: that I will focus my mind, read your argument, incorporate my experience, judge its truth using my reasoning faculty, and choose.
Without free will, reason is impotent. Philosophers can’t agree or disagree because they have no choice in the matter.
Free will is not about being able to choose any random thing at all. That would be like saying I’m not a human because I can’t will myself into becoming a banana peel.
Determinism is observable on a micro scale: I can tell someone to do something and they do it, or I could predict someone's behavior relatively exactly, or I could even control their behavior directly. How then is determinism not translatable to the macro scale?
Those examples are not typically considered within the scope of the word determinism.
Entities act in accordance with their nature. Aspects of a man's nature include consciousness, free will, and the character he cultivates. This doesn't mean they behave randomly.
Here is an approach demonstrating an alternative to randomness, from Peikoff’s “Objectivism: The Philosophy of Ayn Rand”:
The law of causality affirms a necessary connection between entities and their actions. It does not, however, specify any particular kind of entity or of action. The law does not say that only mechanistic relationships can occur, the kind that apply when one billiard ball strikes another; this is one common form of causation, but it does not preempt the field. Similarly, the law does not say that only choices governed by ideas and values are possible; this, too, is merely a form of causation; it is common but not universal within the realm of consciousness. The law of causality does not inventory the universe; it does not tell us what kinds of entities or actions are possible. It tells us only that whatever entities there are, they act in accordance with their nature, and whatever actions there are, they are performed and determined by the entity which acts.
The law of causality by itself, therefore, does not affirm or deny the reality of an irreducible choice. It says only this much: if such a choice does exist, then it, too, as a form of action, is performed and necessitated by an entity of a specific nature.
The content of one’s choice could always have gone in the opposite direction; the choice to focus could have been the choice not to focus, and vice versa. But the action itself, the fact of choosing as such, in one direction or the other, is unavoidable. Since man is an entity of a certain kind, since his brain and consciousness possess a certain identity, he must act in a certain way. He must continuously choose between focus and nonfocus. Given a certain kind of cause, in other words, a certain kind of effect must follow. This is not a violation of the law of causality, but an instance of it.
How is an unavoidable, if arbitrary, causality any different than determinism? That passage supports determinism, and never even argues for free will, random or otherwise.
The universe exists. It exists outside of our minds. We have the ability to explore and understand it, by choosing to focus our minds and engaging our rational faculty.
If we don’t have free will, then we can’t reason. If we can’t reason, then there is no such thing as evidence.
To learn more about this approach, see Rand’s “Introduction to Objectivist Epistemology” and Peikoff’s “Objectivism: The Philosophy of Ayn Rand”.
The article has not mentioned Bayesian inference, which allows one to make sound decisions under uncertainty.
For example, in practice the raven problem is not to guess if all ravens are black but to predict the color of the next raven if that color affects a decision.
From that perspective if one knows absolutely nothing about ravens and has seen a single black raven, then it is mathematically sound to guess that the next raven will be black, not white, and make a decision accordingly.
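To make that concrete under one common set of assumptions (a uniform prior over the fraction of ravens that are black, standing in for "knows absolutely nothing", which is itself a choice): after one black raven the posterior predictive probability that the next raven is black is 2/3, so guessing black beats guessing white.

    def p_next_black(black_seen, nonblack_seen):
        # Posterior predictive under an assumed uniform Beta(1, 1) prior.
        return (black_seen + 1) / (black_seen + nonblack_seen + 2)

    print(p_next_black(1, 0))  # 0.666..., so "black" is the sounder guess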
What do you mean by sound? Mathematically sound means "there exists a model validating it", and of course it exists, but so what? If you mean you can bound the probability of error, then in your formulation you actually can't.
Bayesian inference is mathematically sound as it is based on very generic postulates and allows one to compare probabilities based on the current information and make a decision accordingly. With the proper approach the errors are automatically accounted for, i.e. if the errors are large, then one will see that the probabilities are too close to each other to make a sound decision. Still, if one must make a decision, then one can just use the answer based on Bayesian reasoning.
The problem in practice is that accounting for the existing information is hard, with the guessing of priors etc. But that is a problem of the applicability of Bayesian inference, not a problem with the principle itself.
I.e. Bayesian inference is a good answer to the philosophical problem of induction. It is sad that the article has not even touched on that subject.
I just finished the Very Short Introduction on philosophy of science a couple weeks ago. It was quite good. It covers the induction problem and more. https://academic.oup.com/book/517
My favorite example of the problem of induction: by induction, a cow believes the farmer to be the cow's best friend and loyal supporter, because the farmer dutifully cares for the cow's needs right up until the last day of the cow's life.
The problem is not that Truth is impossible to find, rather that undeceiving the self is difficult.
The Truth is a perturbation of Objective Reality (the state resolve of universal potential into manifest form in the moment of now); and the truth is a figment of mind. Integrity is the measure of consistency between.
More mental models should factor in the unreliability of conceptual state in reference to external resolve.