F.A. Hayek[0] wrote in great depth about this "confabulation" (which he termed Scientism), and, ironically enough, lectured about it directly in his Nobel Prize speech of 1974[1].
"It can hardly be denied that such a demand quite arbitrarily limits the facts which are to be admitted as possible causes of the events which occur in the real world. This view, which is often quite naively accepted as required by scientific procedure, has some rather paradoxical consequences. We know: of course, with regard to the market and similar social structures, a great many facts which we cannot measure and on which indeed we have only some very imprecise and general information. And because the effects of these facts in any particular instance cannot be confirmed by quantitative evidence, they are simply disregarded by those sworn to admit only what they regard as scientific evidence: they thereupon happily proceed on the fiction that the factors which they can measure are the only ones that are relevant."
Since "confabulation" is our only option, Hayek warns against disguising it as science -- this is what happens in economics, he says. Most historians, on the other hand, understand the limits of their knowledge, don't try to make predictions (certainly not with the same definitiveness common to many economists), and don't disguise their narratives as physical science. The author of the article under discussion, though, seems to wish history was more like physics, which it just can't be.
Interesting discussion here. I don't think you're representing the author's point correctly, though, given the larger context of the work he's doing and current debates in the historical field. Here's what he says:
"I suppose the point of this post is to articulate my growing concern that we are so damn good at coming up with post-facto historical explanations to contextualize any given observation, that we are particularly susceptible to confabulating these post-facto rationalizations with the idea that we somehow knew the results of this quantitative work already."
The implication here, I think, is simply that historians should be open to multiple ways of testing explanations (i.e., combining qualitative and quantitative approaches, if those are available). And in the larger context of debates about digital humanities, he's swatting down the argument that DH simply tells us what we already knew, because if done right it can be used to add a new perspective on existing arguments. The point isn't that history should be science but that "scientific"/quantitative approaches can be used to test the validity of historical claims based on more traditional historical work, like finding manuscripts in archives, doing a close reading of a key text, tracing correspondence networks, tracing the material history of a painting or object, etc. I agree with him.
Edited to add: we also need to be careful about confusing prediction with explanation. Historians don't predict much, but we certainly try to explain things a lot! I read this post as being exclusively about the latter.
Yes, I am guilty of writing the post for an audience already largely familiar with the context.
I should probably add that the types of "explanations" I put forward in this post are actually not of central concern to me - certainly not explanations derived solely from parsing quantitative results. I'm far more interested in the descriptive evidence this kind of measurement can provide. It can give wider context to what tends to be a very case-study-centric discipline (e.g. oh, this guy happened to work a lot with Italian publishers in this period? We didn't realize it before, looking at just 5-10 artists per article/monograph, but actually that is quite exceptional/normal for this period...)
Then again, proposing these kinds of explanations is also something of a disciplinary norm, for better or worse.
It isn't a great book, but it does clearly identify the difference between explanatory and predictive accounts, and makes a strong case that explanatory, after-the-fact accounts are simply noise. The exercise the author engages in here, of "making sense" of two contradictory datasets, can be done with almost any phenomenon.
The more systematic and inclusive data gathering is, the more strongly any explanation is going to be tied to the phenomenon it is supposed to explain, and historians--like any scientists--should always be asking themselves, "How can I test this explanatory idea? If it is true, then what else ought to be true?"
Otherwise, when you "explain things a lot" you are doing nothing but generating noise. The financial press provides strong evidence for this: tens of thousands of words of "explanation" every single day, yet not a single person has been made rich by the predictions the same people make. When you can explain everything and predict nothing, you had better be able to test your explanations by indirect means if you want to be taken more seriously than a fabulist.
The latter part is important: historians and others like them expect us to take their pontifications seriously. But we know an untested explanation is almost certainly nothing but a confabulation. So why should we take anything historians say seriously?
The history of science--which I define inclusively as "publicly testing ideas by systematic observation, controlled experiment and Bayesian inference"--is one of realizing we've been doing it wrong, that ideas we believed and took seriously were actually rather silly. Blood circulates. Geese don't come from barnacles. Neither bad air nor moral turpitude causes plague. And so on.
If historians are starting to ask, "Are we doing this wrong?", that's a good thing. Based on my understanding of what science is and how it works, I predict that over the next few decades the "digital humanities" will make a lot of traditional beliefs look untenably foolish, and that there will be a general shift of the field toward Bayesian methods -- a shift that will alienate and annoy a great many people whose freedom to confabulate will be greatly curtailed. It will change the face of the humanities to the point that they will have as little to do with their historical roots as modern psychology has to do with the story-tellers of the early 20th century who gave it its start.
You are doing nothing but generating noise if you try to treat this knowledge the same as hard scientific knowledge. You generate something very valuable if you treat it as a different kind of knowledge.
I like to give this example: the statement "the moon is made of cheese" is false when treated as a scientific statement of fact. It can be -- and, in fact, has been -- falsified. On the other hand, if taken as a literary statement, it can hold some truth -- or maybe many truths. Except that is a different kind of truth. It can teach us something about human imagination, human desire, human whimsy. Maybe even something deeper, once you analyze the significance of "moon" and that of "cheese".
History -- my favorite field of study alongside math -- is somewhere in the middle, a perfect spot between science and literature. It is as scientific as possible when it comes to the what and the how of what happened (which is most of what historians do); when it gets to the why (which comes up less often), history is more literary and makes no claims of definitiveness. History tries to leave the definitive answers to religions.
Finally, I think you're putting too much faith in the human ability to defy the math of complex systems. If anything, I predict a different sort of confabulation: that we have managed to come up with a scientific theory of something that is provably impenetrable to hard, quantitative scientific theory. I have every faith in our ability to convince ourselves that we can somehow use math (and data) to beat math.
Is there a data-driven religion being born? It would be one that believes that anything not quantifiable is not worth knowing, and would try to plug any holes in our knowledge with data and quantitative theory, even if that's mathematically impossible.
Hell, just the other day I had a discussion with some people on Reddit who believed we could somehow beat the halting theorem or, at least, make it irrelevant in practice. But just like we'll likely never write a computer program that can fully reason about other computer programs, we will probably never be able to explain human history well enough to make definitive explanations/predictions on most things. I am perfectly OK with that. I am also well aware that not being OK with that, and trying to beat the odds, is how (some -- not all) progress is made :)
> On the other hand, if taken as a literary statement, it can hold some truth -- or maybe many truths. Except that is a different kind of truth.
We can learn from people telling stories; we don't need to call these stories "truths" to do that. No need to muddle up the terminology. There's a hilarious proverb about kinds of truth, BTW, but I can't manage to translate it into English; idioms just work differently.
> It would be one that believes that anything not quantifiable is not worth knowing, and would try to plug any holes in our knowledge with data and quantitative theory, even if that's mathematically impossible.
I think you misunderstood. It's not that it's not worth knowing. It's that you don't really know it if you can't quantify it. You're just slapping together some explanation to feel better.
> But just like we'll likely never write a computer program that can fully reason about other computer programs
Not about ALL possible computer programs, but about many useful ones - why not? If our brains can do it, surely Turing machines can do it too. At worst it would take exponential time and space.
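For what it's worth, here's a minimal sketch of that practical middle ground (my own toy example, nothing from the thread): halting is undecidable in general, but a step-bounded simulation settles it for many concrete programs and honestly reports "don't know" for the rest. The Collatz iteration is a nice test case because its termination for all inputs is famously unproven.

    from typing import Optional

    def collatz_halts(n: int, max_steps: int = 10_000) -> Optional[bool]:
        """Step-bounded halting check for the Collatz iteration.

        Returns True if the iteration reaches 1 within max_steps, and
        None if the budget runs out -- the inconclusive answer the
        halting theorem forces on any general-purpose checker."""
        for _ in range(max_steps):
            if n == 1:
                return True
            n = 3 * n + 1 if n % 2 else n // 2
        return None

    print(collatz_halts(27))      # True: halts after 111 steps
    print(collatz_halts(27, 50))  # None: budget too small to decide

The point of the None branch is exactly the one above: a checker that covers "many useful programs" has to be allowed to give up on the rest.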
> We don't need to call these stories "truths" to do that. No need to muddle up the terminology
And what is that terminology? Even math and physics have different definitions of truth; why can't literature (whose use of the word is closer to math's definition than to physics')? I think most people are capable of accepting different definitions in different contexts.
> I think you misunderstood. It's not that it's not worth knowing. It's that you don't really know it if you can't quantify it.
You don't really know even when you can. The consequence of the Gettier problem[1] is that we cannot define "knowledge" in a way that is both completely satisfactory and still applicable to anything other than math. That is, if we define knowledge narrowly enough to be rigorous and in accordance with our ideal of what knowledge is, we find that we don't know anything. Hence, all knowledge is knowledge to a degree.
> You're just slapping together some explanation to feel better.
Isn't that what understanding means? Suppose we could simulate the entire universe from the Big Bang onward, extremely fast, and move forward and backward in time in an instant. That would be very useful indeed for some purposes, I guess. Some would even call it knowledge. But would anyone call it understanding? All understanding is meant to make us feel better. :)
I agree with everything you say (although I read this article differently), but historians should be careful to steer away from Hayek's "scientism". History's underlying mathematical assumption -- an assumption that is almost surely correct -- is that society is a complex system, and therefore cannot be subject to predictions in the long run (or only to a few, qualitative ones). If historians feel quantitative analysis gives their explanations any sort of definitiveness (and "proving" the inevitability of past events is exactly the same as prediction), they will fall into the same trap as economists.
A different perspective -- absolutely; a quantitative theory -- no. In fact, quantitative theory, at least in a sense similar to physics, is nearly impossible even in subjects that are much closer to physics than history. The simplest non-linear differential equations defy quantitative study, and history (and even economics and biology) are nowhere near simple.
However, it is possible that even in a complex system there arise temporary structures that can be described as linear processes, and are therefore subject to some quantitative theory; but they will be short-lived. Complex systems undergo phase changes that tend to break and reform any structure. On the other hand, it can be argued that those phase changes are rare, and that human society is self-stabilizing on some global scale, so that the system -- at least for the moment -- is stable, and some quantitative analysis may apply. But I'm pretty sure no one would classify any specific society (i.e. a small subset of humankind) as stable.
Agreed. A big part of why I enjoy thinking about historical problems so much is that it's virtually impossible to supply a definitive explanation for anything (we're still arguing over the Roman empire's demise one and a half millennia on). I think the author's point - or at least the stance I take, which I suspect he generally aligns with too based on his work - is that quantitative data can only be used in historical arguments in what is effectively a qualitative way, i.e. as part of a larger, holistic assessment based on many perspectives rather than as a definitive "proof" of anything. But it's an exciting addition because historians typically haven't taken this approach at all prior to the last decade, excepting some forays into punchcard computing in the 60s and 70s by people like Lawrence Stone and the Annalistes.
Incidentally, I'd point to Pinker's "Better Angels of Our Nature" book as an example of the risks of putting too much stock in a scientific/econometric approach to history. I actually think his argument is (in broad outlines) convincing, but the way he marshals his evidence drives academic historians crazy because it ignores that history is a complex system, as you say. (i.e., he does things like saying "x tribe in New Guinea is a 'stone age' tribe and according to one or two studies has a high murder rate, therefore all humans in 'the stone age' had a murder rate comparable to that tribe.")
This is true, but not entirely relevant. Life-based complex systems often share a propensity for punctuated stability specifically due to their own nature, because of the same circularity inherent in evolution (those that can survive to replicate, do). In this case, systems whose parameters tend towards stabilization persist specifically because they tend towards stabilization. The least self-undermining regularities persist (attractors).
Societies formed and persisted because they were good at it, because they were a stable attractor in a larger system. We might not necessarily be able to formally describe the entire system, but our propensity towards stabilization (at the biological level, the human level, the societal level) means we can do clever things at the stability points, like develop medicines that work, design grocery stores that are more likely to sell certain products, and predict the outcomes of presidential elections.
Now, whether the explanations given for systemic outcomes are "accurate" descriptions of the underlying mechanisms is the question here, and it's an important one, but it's not a lost cause. When Copernicus set the world in orbit, there was a big argument over whether he was providing an actual explanation of the way the world works, or just a convenient mathematical shorthand for making accurate predictions. It turned out that the most parsimonious shorthand was also (ahem) less wrong than earlier mechanistic theories. So too can historians find explanations that, if not accurate representations of underlying mechanisms, can still fit systemic tendencies better than earlier explanations did.
Edit: Which is just to say that the blog author's point is still a useful one, whether or not we can ever achieve a complete mechanical account of human activity.
Well, this is what historians do -- try to provide a model of a certain society at a certain time. Yet it is important to warn of scientism (I really like that word!). When we come up with more quantitative models (as economists do), will we be able to resist the urge to believe they are scientifically predictive?
And economics is a perfect example. It appears easily amenable to quantitative study, yet it does not yield very predictive models, does it (at least not nearly as predictive as we'd like)? And history is more complex than just economics.
Of course we can do clever things at those stable points, but defying math isn't one of them. Predicting presidential elections is a good example. We're not really (generally) able to do that (at least, not yet). The best we do is try to get a valid statistical picture, spot a trend, and extrapolate for very short durations.
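To make that concrete, here's a minimal sketch of "spot a trend and extrapolate for a very short duration": ordinary least squares through a short window of poll averages, projected one step ahead. All the numbers are made up for illustration; nothing here comes from a real poll.

    def linear_fit(xs, ys):
        """Ordinary least-squares slope and intercept for a short series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    weeks = [0, 1, 2, 3, 4]
    shares = [46.0, 46.8, 47.1, 47.9, 48.2]  # hypothetical weekly poll averages

    slope, intercept = linear_fit(weeks, shares)
    # Extrapolate one week out -- and no further: in a complex system
    # nothing in this fit licenses a forecast months ahead.
    print(f"week 5 estimate: {slope * 5 + intercept:.1f}%")  # ~48.9%

That's the whole trick: a locally valid statistical picture, consumed quickly before the system drifts away from it.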
Pron, as a historian who has come up with quantitative models and resisted the urge to call them predictive, yes, I think that sort of restraint is within humanit(y|ie)'s capable grasp. =]
I would love to see them. And I have great faith in historians' own abilities to resist that urge -- they are well trained at that; I am not so sure how others would fare.
Becoming a "quantitative historian" used to be a dream of mine (before I abandoned an academic career in either math or history). But then I realized that I enjoyed reading a transcript of a German witch trial no less than a Marc Bloch epic narrative, and found it to no less educational. :)
Predictions can be sideways as well as forward. If one gives an explanatory account of some phenomenon, the existential claims made as part of that explanation should in general have implications for other parts of the same place and time. Asking whether those implications are plausibly supported by what we know of that place and time is a way of testing the idea that "phenomenon X is explained by Y", even when one is in no position to make predictions about the future or about other places and times.
Yes, the problem is that complex systems are complex "sideways" as well (i.e. in all dimensions). Sorry, but that's the nature of non-linear dynamics. They defy quantitative study.
That's not to say that it isn't interesting or worthwhile to come up with comparative analysis to explain, say, the "European Miracle", but it can never have the same definitiveness as Newton's laws.
> The simplest non-linear differential equations defy quantitative study, and history (and even economics and biology) are nowhere near simple.
This is nonsense. I think you are alluding to the fact that forward integration from an initial condition with a measurement error will have an error growing at the rate exp(lyapunov_exponent x time).
This doesn't mean that ODEs "defy quantitative study". One still makes quantitative predictions about them - e.g., "what is the Lyapunov exponent", "at what precise parameter choice does a phase change occur", "is the system recurrent, and if so what is the recurrence time", etc.
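For instance, here is a minimal sketch of one such quantitative prediction, using the discrete logistic map as a stand-in for an ODE (my own example, not from the thread): a numerical estimate of the Lyapunov exponent, which for the logistic map at r = 4 is known analytically to equal ln 2.

    import math

    def lyapunov_logistic(r, x=0.3, n=10_000):
        """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
        log|f'(x)| = log|r*(1 - 2x)| along a long orbit. Nearby trajectories
        separate roughly like exp(lambda * t), yet lambda itself is an
        ordinary, checkable number."""
        total = 0.0
        for _ in range(n):
            total += math.log(abs(r * (1 - 2 * x)))
            x = r * x * (1 - x)
        return total / n

    print(lyapunov_logistic(4.0))  # ~0.693
    print(math.log(2))             # the exact value at r = 4

Sensitive dependence kills long-range forecasting of individual trajectories, but the exponent governing that sensitivity is itself a perfectly testable quantitative claim.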
Am I wrong in guessing you don't actually study nonlinear systems and are simply making guesses based on newspaper summaries and pop science?
Once again, fajitas, your literal mind fails you. I cannot write a precise discussion of every single idea I mention, or each of my comments would be the length of a book, and they are already far longer than they should be. I mention an idea and leave it to the mind of the reader to fill in the missing pieces. Any reader, even a critical one, can understand these are broad strokes -- though not an adversarial one such as yourself.
Still, in this particular case, you are right that I have not discussed simulation as a valuable tool. It may certainly be; I would find it fascinating. But does it qualify as a quantitative theory? Maybe. If you have a theory of each of the system's elements, and the simulation follows reality for some non-negligible time. But would it be a model of history or a model of psychology?
Would it make us understand history as a process more than we do now? In the end, if you simulate the entire universe accurately, you could predict any physical, chemical, biological, psychological and historical process. But would you understand them better? I don't think our knowledge of particle physics helps us understand human psychology, even if some day it is able to predict it (Dostoyevsky touched on this very point 150 years ago[1]). Every level of abstraction demands its own models.
I am afraid my studies of nonlinear systems ended back at university, many years ago. I must say, though, that I am a bit surprised at your comment. As I was writing my previous ones, I was imagining you saying that what I say here contradicts things I've said to you before about the validity of historical knowledge. I contemplated writing a few more paragraphs to explain the difference, but feared it would make the comment even less readable. Can you see it?
In any event, fajitas, literal, adversarial readings tend to suffuse the discussion with a very autistic feel. I suggest that before classifying my words as "nonsense" -- though this classification does have a nice, succinct quality -- you try to understand what I am saying; this usually leads to a more interesting discussion.
-----------
[1]:
...science itself will teach man (though to my mind it's a superfluous luxury) that he never has really had any caprice or will of his own, and that he himself is something of the nature of a piano-key or the stop of an organ, and that there are, besides, things called the laws of nature; so that everything he does is not done by his willing it, but is done of itself, by the laws of nature. Consequently we have only to discover these laws of nature, and man will no longer have to answer for his actions and life will become exceedingly easy for him. All human actions will then, of course, be tabulated according to these laws, mathematically, like tables of logarithms up to 108,000, and entered in an index; or, better still, there would be published certain edifying works of the nature of encyclopaedic lexicons, in which everything will be so clearly calculated and explained that there will be no more incidents or adventures in the world
...
A PROPOS of nothing, in the midst of general prosperity a gentleman with an ignoble, or rather with a reactionary and ironical, countenance were to arise and, putting his arms akimbo, say to us all: "I say, gentleman, hadn't we better kick over the whole show and scatter rationalism to the winds, simply to send these logarithms to the devil, and to enable us to live once more at our own sweet foolish will!" That again would not matter, but what is annoying is that he would be sure to find followers
...
Ha! ha! ha! But you know there is no such thing as choice in reality, say what you like," you will interpose with a chuckle. "Science has succeeded in so far analysing man that we know already that choice and what is called freedom of will is nothing else than--
...
And that is not all: even if man really were nothing but a piano-key, even if this were proved to him by natural science and mathematics, even then he would not become reasonable, but would purposely do something perverse out of simple ingratitude, simply to gain his point. And if he does not find means he will contrive destruction and chaos, will contrive sufferings of all sorts, only to gain his point! He will launch a curse upon the world, and as only man can curse (it is his privilege, the primary distinction between him and other animals), may be by his curse alone he will attain his object--that is, convince himself that he is a man and not a piano-key! If you say that all this, too, can be calculated and tabulated--chaos and darkness and curses, so that the mere possibility of calculating it all beforehand would stop it all, and reason would reassert itself, then man would purposely go mad in order to be rid of reason and gain his point! I believe in it, I answer for it, for the whole work of man really seems to consist in nothing but proving to himself every minute that he is a man and not a piano-key! It may be at the cost of his skin, it may be by cannibalism! And this being so, can one help being tempted to rejoice that it has not yet come off, and that desire still depends on something we don't know?
I think that was Hayek's exact point... that economics cannot be like physics either; he in fact goes on to postulate that very few of the sciences can and/or should be treated as hard.
My impression of Hayek is that he was trying to shut down his opponents by saying "it's not possible to build models which are predictive". Since the models he was criticizing have been very successful, I think we should take this stance with a grain of salt.
Kahneman gives a very similar example in his book "Thinking, Fast and Slow" (a great read for those interested in cognitive biases). He calls it "the illusion of understanding", using the example of Google and a number of books depicting Page and Brin as role models and geniuses who led the company to its great success by always making the right decisions.
It is of course easy to say so already knowing that Google is a successful company. It would have been very hard to tell 15 years ago.
Also, when you study the history of Google, you learn that there was a lot of luck involved, and you won't learn how to build the next Google from those books.
Not sure if the author is around here, but the style sheet on this seems to be broken on an iPad 2 in landscape mode. It seems to be right on the edge of a CSS breakpoint, and flips back and forth between two different layouts every time I scroll. Makes it very very hard to read...
"You find that people cooperate, you say, ‘Yeah, that contributes to their genes' perpetuating.’ You find that they fight, you say, ‘Sure, that’s obvious, because it means that their genes perpetuate and not somebody else's. In fact, just about anything you find, you can make up some story for it."
The point was probably that this is not scientific. Whatever model you come up with won't really have any predictive power. See arguments against pseudo-science, e.g. Freudian and Marxist theories.
Creating a seemingly meaningful justification for behavior isn't science, it's just folk wisdom. Scientific theories need to be falsifiable, they need to have predictive value.
This ain't actually how science works; it's just an idealized account. Special relativity wasn't technically falsifiable compared to its competing theory by Lorentz, because both produced mathematically equivalent predictions. If you pick up a random science paper these days, odds are you'll find all sorts of unfalsifiable claims. Does this mean modern scientists are doing it wrong? No, but even if you think it does, it still shows modern historians are on the same footing as the scientists.
You can call activities which work outside of the framework of science whatever you want, but they aren't science. The foundation of science is testing theories against evidence, which means falsifiability.
Relativity is a perfect example because it made very specific and testable predictions such as gravitational lensing, frame dragging, gravitational redshift, and gravitational radiation.
If you define science as anything that's falsifiable, then of course anything that isn't, isn't science. And hey, free science country, that's your prerogative.
That said, the philosophers of science who originally discussed falsifiability have gone on to say it's inadequate, and a huge chunk of the work that's published and funded as science these days isn't strictly falsifiable.
Science is a word like any other, one we all agree on to make meaning. Right now lots of practicing scientists work with a definition that includes but isn't limited to falsifiability, but of course the great thing about it is that you're welcome to decide which criteria are most comfortable to you. I like to think that the work Einstein did on Special Relativity before 1915, which wasn't mathematically distinct from Lorentz's or Poincare's, was scientific even though it wasn't falsifiable.
The problem is Just-so stories[1] that have no basis in evidence, but can be moulded to sound plausible given any kind of phenomenon. So imagine there are two mutually exclusive phenomena - chances are that you could come up with a plausible-sounding "evolutionary psychology" story for both scenarios. And how good is a framework if it can plausibly explain any and all phenomena? Useless.
This might have more to do with how "evolutionary psychology"-sounding theories are presented on Web fora than how they are used in a more academic setting. You can easily run across participants on Web fora who almost admit to just making up shit on the spot that fits whatever phenomena they are presented with, based on their expert knowledge of Stone Age humans.
Of course you can make up a story for pretty much everything you find. But could you hypothetically come up with just as compelling a story for things you don't find? I don't think so.
I try to do this any time I get some new theory that seems to explain things: reverse it or switch around facts. Does it still sound like it might mean something? If it can explain reality and counterfactuals just as well, then it's probably useless.
I'm a bit upset by how many things I hear that might just be wrong, or might be attempts to influence me.
Yeah, and evopsych has the same problem as history: it is not possible to change one variable and repeat the experiment. This makes it easy to make up explanations for observations, but hard to verify or falsify those explanations. It is surprisingly hard to disprove a historical theory, like "the decline of pirates in the Caribbean led to the Russian revolution".
This is interesting and entertaining, but rather than a warning sign, it is just a good demonstration of the limits of our knowledge.
We should not be alarmed that we can only come up with post-facto explanations for the behavior of complex, chaotic systems such as human society, because that is a direct consequence of what they are: complex systems. In fact, we should be alarmed if the opposite were true, and by analyzing current trends and current data we were able to predict future events. If that were true, then society would be no more than a simple system, completely predictable, whose complete dynamics could be described by some closed-form formula.
The only thing we should be alarmed about is people taking those after-the-fact explanations as being anything more than confabulations, because then people might think they actually understand the world, which is a dangerous conceit at the best of times.
After-the-fact explanations have almost zero epistemic value. They create no understanding, only a feeling of understanding. Look at the financial press for endless examples of this.
I hate to say 'I told you so,' but, Geez, this is literally everything I hate about liberal arts majors.. A grad school dissertation on Medieval Printing Techniques of the 1400's?? My senior project was software work for a Fortune 500 STEM company that I interned for as a sophomore & this guy is selling Printing Techniques of the Middle Ages.. what is this world coming to??
"It can hardly be denied that such a demand quite arbitrarily limits the facts which are to be admitted as possible causes of the events which occur in the real world. This view, which is often quite naively accepted as required by scientific procedure, has some rather paradoxical consequences. We know: of course, with regard to the market and similar social structures, a great many facts which we cannot measure and on which indeed we have only some very imprecise and general information. And because the effects of these facts in any particular instance cannot be confirmed by quantitative evidence, they are simply disregarded by those sworn to admit only what they regard as scientific evidence: they thereupon happily proceed on the fiction that the factors which they can measure are the only ones that are relevant."
[0] http://www.econlib.org/library/Enc/bios/Hayek.html
[1] http://www.nobelprize.org/nobel_prizes/economic-sciences/lau...