Science Only Has Two Legs (acm.org)
57 points by Rickasaurus on Aug 26, 2010 | 47 comments



I agree entirely with the OP, but I have also seen people treating computation as a "third leg", to the detriment of progress I think.

Let me concoct a trivial example. Suppose you wanted to know whether a bowling ball would drop from the top of a building faster than a tennis ball. You can do this without reference to theory - just go out and experiment. Doing the experiment would not give you the complete physical picture perhaps, but it would certainly

(a) answer your narrow question and

(b) give you a constraint that any theory must explain.

The problem is that there is a class of research where people faced with this question would set up a computer model instead. Now let's say they didn't realise they had to model air resistance. They will discover something all right, but it will neither answer the narrow question nor provide a robust constraint for theory.

In other words, playing around with computational models is not, by itself, theory or experimentation unless you are very careful about how you tie it back to the physical world.
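The neglected-drag pitfall can be made concrete. Here is a hypothetical sketch (all parameters, i.e. masses, drag constants and building height, are invented round numbers, not from the thread): the vacuum model predicts a tie, while adding a drag term separates the balls.

```javascript
// Hypothetical sketch: two computational models of the falling-ball question.
// The vacuum model predicts identical fall times; adding a quadratic drag
// term separates the balls. All numbers below are invented for illustration.
const g = 9.81; // m/s^2

// Model A: no air resistance.
function fallTimeVacuum(height) {
  return Math.sqrt((2 * height) / g);
}

// Model B: quadratic drag, integrated with a crude forward-Euler step.
// k lumps together 0.5 * airDensity * dragCoefficient * area (assumed values).
function fallTimeWithDrag(height, mass, k, dt = 0.001) {
  let y = 0, v = 0, t = 0;
  while (y < height) {
    const a = g - (k / mass) * v * v; // net downward acceleration
    v += a * dt;
    y += v * dt;
    t += dt;
  }
  return t;
}

const h = 100; // metres
const vacuum  = fallTimeVacuum(h);                // same for both balls
const bowling = fallTimeWithDrag(h, 7.0, 0.02);   // heavy: drag barely matters
const tennis  = fallTimeWithDrag(h, 0.057, 0.01); // light: drag dominates
```

The interesting part is not the integration loop but the modelling choice: leave out the drag term and the code confidently answers a question about a different universe.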


I agree that this was a great little essay. I like your illustration, but I think your point at the end of your comment disagrees a little with the essay, and I have to disagree with you a little as well. A computational model, such as Maxwell's Equations, is a theory. Creating a simulation of two balls falling would also be formulating a theory. It may be an incorrect theory though, as you point out, and that's where the other leg of science, experimentation (what you refer to as tying back in to the physical world), is necessary.

Taking more from your example, I would agree that scientists today are trying to answer questions by diving right into computational models (theory) without experimentation, but I believe it's because they are tackling questions that are very difficult to experiment with. I don't think that's altogether unprecedented though. There are instances of scientists producing theories without the means to test them throughout history, only to have them proven or disproven by experimentation at much later times.


A computational model [...] is a theory

I guess that's where we disagree. I respect your point, and of course I accept we are in a regime where computational models are the only options in some cases (we can't start a whole new universe off to see how it worked). That has no bearing on their limitations, though.

I think a computational model is not a theory, it is a theoretical construct. I concede this may seem like splitting hairs to everyone except me.


I'm confused on the difference between a theoretical construct and a theory.


A scientific theory is a self-contained description of a natural phenomenon starting from first principles. My interpretation of a theoretical construct is something which is predicted to exist but has not been observed (or cannot be observed due to physical limitations). It can also be a set of results derived from thought experiments but not yet observed experimentally.

I apologize if anyone else has mentioned this, but I think most working scientists are uncomfortable with purely computational work and "theories" because they don't appear to be falsifiable within the framework developed by Popper. Current global climate models (GCM) are only subject to verifiability, and this relegates them to a lower status than, say, Maxwell's equations. Maxwell's equations are capable of being used to explain almost all macroscopic electrodynamic phenomena, at least within the confines of classical physics. OTOH, there is no proper theory of the climate that can be treated on an equal footing. There are only computational models and input data. There is a large amount of parameterization and data treatment (cynics would say massaging) that needs to be done to get the models to converge.


When you have good reason to believe something about model-entities, you don't thereby have good reason to believe it about actual-entities, unless you also have good reason to believe that model-entities are relevantly similar to actual-entities. So the model itself is just a piece of a theory.


Isn't a model a functional representation of a theory? I don't see how it is only a piece of a theory if the whole theory is represented in the model and is functional.


Can you represent the whole theory in the model? The model would need to represent evidence that its representations of the world are accurate. If that evidence is a mathematical proof, a program could probably encode it. But if it's experimental evidence, I don't know how it could.


That seems backwards to me. Theories don't "represent" evidence, experimental or otherwise (I'm not even sure what that's supposed to mean; it doesn't make sense to me). You formulate your theory, which is a model of cause and effect, from one set of observations, and test it with another set of observations. The model, i.e. the theory, may include facts from experimental evidence, e.g. G, the gravitational constant, but that doesn't seem problematic for theory representation in program form.

Theories make predictions about the world. I'd argue they're useless without this. As such, they are a function of a prior state, returning the new state.

Certainly you can't validate your theory without doing physical experiments, or at least having lots of data that the theory was not based on, to check it. But validation of a theory is distinct from its representation.


Say I have a theory that says changes in temperature depend on changes in pressure in some specified way. And you say: oh, I have a similar theory; it's a computational model. In my model of the universe, when my model of temperature changes, then so does my model of pressure, and it changes in exactly the same way your theory describes.

I'd say ok, that's a great theory about your model of pressure and your model of temperature in your model of the universe.

And you say, no no, this is a theory about actual pressure and temperature in the actual universe, just like your theory.

In that case your model is not enough. I also need reason to believe that your models of temperature and pressure are relevantly similar to actual temperature and pressure such that relational properties that hold of the entities of your model also hold of the entities they're supposed to represent. If it turns out you modeled temperature as a jpeg representing a picture of a cat, then how is that a theory about the temperature?

Whereas my theory is a theory about temperature simply because my theory says: temperature, that real thing in the world, will do such-and-such.
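To make that point concrete, here is an invented toy "model" in which the claimed relation holds by construction among the program's own symbols; nothing in the code ties T or P to actual thermometers or barometers.

```javascript
// Invented toy model: "temperature" T and "pressure" P are just variables.
// The relation dT = 0.5 * dP holds by construction, but the code alone
// says nothing about the actual quantities the names suggest.
let P = 100; // model-pressure (units deliberately unspecified)
let T = 300; // model-temperature

function changePressure(dP) {
  P += dP;
  T += 0.5 * dP; // the assumed in-model relation
}

changePressure(10); // model-P becomes 110, model-T becomes 305
```

Renaming T to `catJpeg` changes nothing about what the program computes, which is exactly the worry: the bridge from symbols to world has to come from outside the code.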


Both the theory and the model depend on the definitions of their constituents. If you don't have a theory of temperature or pressure, your theory is similarly useless, because you will have no way to interpret the words "temperature" and "pressure".

Your theory assumes these things, but you want to say that the model is invalid because it also assumes these things.

But the more important misconception you appear to have is that you are not considering the human element; no representation of the theory has meaning unless a human interprets it. Programs manipulate symbols; but it is the human who chooses what the symbols mean, and that includes incorporating theories / models implied or assumed about those meanings. It's no different for a theory written down in a book.


"Theories should pay rent in anticipated experimental data."


I don't think there is any difference between believing something about "model-entities" and believing something that a theory predicts will be the case. You haven't justified your implied assertion that predictions according to theories are not only different, but more trustworthy, than predictions according to models (theories in computational form).


I think the argument being made isn't that "theories in computational form" are inferior, but rather something more pedantic - that there's no such thing as a "theory in computational form". Theories are about nature. A computer program that performs some simulation isn't itself a theory. The theory is "such and such natural phenomenon behaves like this here computer program."


Well, theories must be stored in some representation, whether it's a configuration of neuron state, ink markings in a book, verbal descriptions in sound, etc. My position is that programs are just another representation.


Programming languages generally don't have semantics that include claims about the physical world.


Ink scratches on wood pulp don't have semantics either; it's the interpreter - i.e. the reader - that imbues them with semantics.

Similarly, programs manipulate symbols. It's up to the person who runs the program to imbue those symbols with meaning.

F=ma; that's some coloured pixels on your screen. It's also an expression of one of the laws of Newtonian mechanics. It can be used to calculate the approximate acceleration of a mass after a particular force has been applied to it. But the coloured pixels on your screen aren't doing that; you need to apply your mind.

Similarly, here's a function:

  function calc_a(m, F) { return F / m; }
That might be part of a program which calculates the acceleration due to a force. It's also just a series of bytes, which, when interpreted by a translator, will ultimately shuffle bits of electrical state around a very complex circuit. You still need to apply your mind to give it meaning.


It sounds like we agree - programs aren't theories. Something else is required that relates the program to the real world. Note: you could have a programming language whose semantics specify a proposed relationship to reality, just as we posit such relationships in natural language. But most programming languages don't allow such things.


Unless we believe that the universe is actually made of math, then Maxwell's Equations are also just a model.


The example you suggested is probably too simple; one should realise that the actual experiment might be very difficult, if not impossible, to carry out. Consider a theory of galaxy collisions: I am unable to see any way of testing out the hypothesis, unless the equations are simple enough.

A computer program is just a different language for expressing the theory... Hence, I see no reason why one should separate theory and computer programs.


Modeling is at best a thought experiment. It can only show you whether a theory is reasonably self-consistent given your assumptions, not whether it's actually true.

PS: There are limits on science, and the ability to conduct an actual meaningful experiment is one of the largest ones.


I think there's more to it than that. For example, a model can tell you if something isn't true, which is very significant. It can also make predictions that can then be tested.

These two alone are much more than a mere thought experiment.


Great news! My model tells me that humans no longer have to breathe!


I am having a hard time seeing the difference.

A thought experiment can tell you that if A then B. A model is a means of producing a prediction B from A by means of long, complicated calculations instead of simple logical inferences. In both cases the conclusions should be evaluated before being accepted as physical truth.


What do you believe is the difference between a theory and a thought experiment?


I would say a theory is a hypothesis which has been tested. A thought experiment can be part of hypothesis creation.


And a model can very easily be a hypothesis which can be tested, i.e. it's far more like a theory than a thought experiment.


Perhaps we're splitting hairs. But this doesn't sit right with me.

Consider what is meant by a thought experiment in popular literature. Take Einstein's elevator. Would you call that "Einstein's elevator theory?" Or is that a prediction based on the equivalence principle?

Maybe in this vein a clear distinction is that a thought experiment is a proof discovery technique. It is working with heuristics and intuition. At the end there is something; a statement or prediction that seems right. But no matter how much intuition confirms it, in the physical sciences it needs to be tested and in the mathematical sciences it needs a formal proof. Beyond that, it is just another hypothesis. After that it leads to a theory.


With all due respect, you're both not just splitting hairs: you're splitting hairs about something you don't know enough about. There's a whole body of literature on the subject, and you're not going to reinvent all of it in the course of this small discussion. If neither of you has heard of, for instance, Carl Hempel, then this discussion is completely pointless.

If you don't standardize your terminology first, then you're just discussing the boundaries of your personal definitions, instead of embarking on actual intellectual discovery of the terrain.


This discussion is an 'intellectual discovery' for its participants. It's unfortunate you wish to hinder that through dismissive remarks instead of constructively participating.


When I was 12, I once had a night-long discussion with a friend about the implications of faster-than-light travel. Once I learned some actual physics, I realized none of that discussion made any sense at all.

Something isn't intellectual discovery just because information is exchanged and both people feel they are learning stuff. Intellectual discovery makes sense if you are trying to structure the knowledge and are connecting it to knowledge you already have. Without things to connect it to, such as a shared definition of 'theory', you end up running around in circles.

It's like being dropped into a maze and not trying to solve it, but just running around in it. It may be fun, and you won't hear me complain, but as soon as someone starts wondering whether they are going too much into the details of solving it ("maybe I'm splitting hairs"), I feel obliged to point out they haven't really started to solve it. They've just been randomly discovering the territory, not noticing whether they returned to the same point several times.

I'm not trying to be disparaging; I'm just pointing out that talk of 'splitting hairs' is really premature.


I often find it highly useful to flesh out terrain in my own mind before exploring the literature. Your remarks were extremely condescending, and you could have made the same point without the poor tone.

Oddly, you nearly admit the usefulness of it. "They've just been randomly discovering the territory". Discovering territory is valuable, whether it's random or not!

But the oddest thing is that you don't provide any useful information. You don't link to a page with the info you think we should read, or to a book on the matter, or a paper. Nothing, but condescension.


I'm sorry, it was not my intention to be condescending; merely to be critical. Searching for 'philosophy of science' should provide plenty of resources; which ones suit you is impossible for me to guess, because it strongly depends on background/prior knowledge, which is why I refrained from advising anything specific.

As a starting point, http://www.teach12.com/ttcx/CourseDescLong2.aspx?cid=4100 may be a pretty decent introduction. There are torrents floating around if you want a taste.


The link looks interesting, but does it talk specifically about models and their relation to theories, which is what we were discussing that you found to be a waste of time?



But it is not obvious that a theory can be expressed by a program. I am prepared to believe that there are fields in mathematics or genetics or something where this is not problematic.

In physics, theories often describe a world which is both infinite and continuous, which is hard to encode in a form suitable for computation; typically finite-volume and discretisation approximations (at the very least) are made.

In these situations the theory represented in code isn't the same as the theory you started with. One has to work quite hard to demonstrate that the differences are quantitatively understood and under control, and that the domain of applicability is understood.
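A minimal invented illustration of that gap: the continuous theory dy/dt = -y has the exact solution y(t) = e^(-t), but a forward-Euler discretisation of it computes something slightly different, and one has to check that the difference shrinks as the step does.

```javascript
// Invented example: discretising the continuous theory dy/dt = -y.
// Forward Euler with n steps over [0, t] computes (1 - t/n)^n,
// which only approaches the exact exp(-t) as n grows.
function eulerDecay(t, n) {
  const dt = t / n;
  let y = 1; // initial condition y(0) = 1
  for (let i = 0; i < n; i++) y *= 1 - dt;
  return y;
}

const exact  = Math.exp(-1);        // the continuous theory at t = 1
const coarse = eulerDecay(1, 10);   // the "theory in code", crudely discretised
const finer  = eulerDecay(1, 1000); // smaller step, smaller discretisation error
```

The code never contains the original theory, only a family of approximations to it; showing that the approximation error is quantitatively under control is the extra work the comment describes.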


"First, neglect air resistance..."

http://www.wastedtalent.ca/comic/death-rain1


So .. I have a physics problem. I know, I'll write a simulation to solve it. Now I have two problems.


So let's define it: it's a tool.


The two legs of science are more accurately referred to as theory and observation. Experimentation is one way to gather observations, but it is not the only one. Astronomy involves a lot of observation of stars, which are some distance outside of the laboratory and definitely not in an experimenter-controlled environment.

I would classify numerical/computational models, along with mathematical and philosophical models, under "theory". Number-crunching and heavy data analysis is a way to categorize and present observations. None of these deserve to be treated as new and separate legs of science; they are merely subsets of the existing legs of theory and observation.


This is an important distinction that should be more widely propagated. I once got into a debate with a very conservative Episcopal priest who was then one of the clergy at my parish. He criticized "scientism" on grounds that the touchstone of science is experimentation (he said), and it's blasphemous to claim you can experiment with God. I responded that the touchstone of science is observation, and that there's no a priori reason we can't "observe" God at least indirectly (cf. Romans 1.20).


This article appears to be a response to the new Journal of Computational Science whose editor-in-chief promotes computational science as separate from theory and experimentation.

The inciting text can be found in the description section here (click to expand the full Aims & Scope section): http://www.elsevier.com/wps/find/journaldescription.cws_home...

An excerpt:

"Computational Science is a rapidly growing multi- and interdisciplinary field that uses advanced computing and data analysis to understand and solve complex problems. It has reached a level of predictive capability that now firmly complements the traditional pillars of experimentation and theory." - Peter Sloot, Editor-in-Chief


Part of that view, at least as I understand it, was driven by a similar view from the theory side, that computational modeling isn't theory in the traditional sense. From a more academic-politics point of view, computational modeling is often a difficult sell in traditional theoretical publication venues as well, which prefer the computation to be minimal and the equations to be hand-worked; big computational models are seen more as something that belongs to engineering. So the pushback is: fine, it's not theory, but it's still important to science, and it's clearly not experimentation, so it must be a third thing, and we'll start our own publication venues for it.

I do agree more with this article that it's a kind of theory, but I think the people who disagree aren't only computational science people, but also (some) theory people.


I also concur. Computational modeling should be viewed as part of hypothesis formation. Similar to mathematical manipulation, it is a precursor to experimentation. Computational models are too often presented as evidence of a physical theory, instead of a prediction in need of experimental support. We see this in every field: social network theory, epidemiology, environmentalism, even papers in combinatorial optimization!

Interestingly, perhaps the field that most strongly holds to the view that models need testing is weapons research.


If it's true that computation is just an extension of two "traditional" legs of science, then it follows that the steps taken in computation as part of scientific research ought to be as public as the steps taken in theory and traditional experimentation. That is, opening up the source code and data to peer review and scrutiny becomes just as important as describing methods in a non-computational study.

In other words, computational transparency is important if computation is just an extension of the traditional scientific method.


The concept of a new leg is something that allows us to change significantly the way in which we move. A car is not a new leg; a leg grows with the body.

So the question is: is computational science like a car or like a leg? Is it only a tool, or is it something that will change the way we think and conceive experiments, the way we consider things to be possible and shape our future?

In ancient times there was only one leg for science, and that was authority. I see no problem with the three- or four-legs concept. The only thing I would consider silly is to confuse a leg with a finger. Anyway, if you don't want to use body analogies, don't ask how many legs science has in the first place.


I think the OP is making a good point: there is nothing more to science than computed minus measured equals error.

What a scientist does is make a model, which is the computed; then she makes measurements and obtains errors. The error, or the residuals, is the knowledge.

People may call the computeds "theory", "model", "hypothesis", "framework" and similar words, but the process is always the same.

For instance, Ptolemaic theory is a mathematical framework that results in very good residuals, which means that the Ptolemaic model saves the naked-eye observations very well.

If, as the OP writes, an experiment generates "40 terabytes of raw data per second", you still have to model it and obtain residuals.

But the really interesting problem facing contemporary science is that now what is "computed" and what is "measured" are no longer clearly separated.
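The computed-minus-measured view fits in a few lines. A hypothetical sketch (both the model and the data points below are invented):

```javascript
// Invented illustration of "computed minus measured equals error".
// The computed: a free-fall model s = g*t^2/2, with g/2 rounded to 4.9.
const computed = t => 4.9 * t * t;

// The measured: made-up (time, distance) observations.
const measured = [[1, 5.1], [2, 19.4], [3, 44.6]];

// The residuals are the knowledge: how far the model misses the data.
const residuals = measured.map(([t, s]) => s - computed(t));
```

Whether you call `computed` a theory, a model or a framework, the procedure is the same: evaluate it at the observed points and study the residuals.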



