Hacker News | senki's comments

The worst belief is that science has anything to do with belief. Or, I believe so...


I think Edge picked a very bad word. Isn't "scientific belief" a silly oxymoron?

At least, many of those "beliefs" weren't actually beliefs, but scientific theories that worked very well but were later improved. Newtonian gravity is a case in point.


Isn't "scientific belief" is a silly oxymoron?

Not at all. At least, not if you are interested in epistemology. In fact, one common definition of knowledge is "justified true belief." Cf http://en.wikipedia.org/wiki/Justified_true_belief


That's not a common definition held in epistemology: The Gettier problem decimated that view a while ago.

http://en.wikipedia.org/wiki/Gettier_problem


But only the "justified" part. Truth and belief are still necessary conditions.


You're missing the points, which are:

A) "Justified true belief" as the criterion for knowledge is not common in epistemology, contra the initial comment;

b) "Scientific belief" is an odd term given that you either have "knowledge" or you have "false beliefs" based on best available science.


What point am I missing? Clearly, the definition of "scientific belief" used in this context is something like "belief held by people in the scientific community". The central point here seems to be that, while "belief" may have a connotation of something that is lacking evidence, it's in fact a far more general term that applies even to genuine knowledge.


>the definition of "scientific belief" used in this context is something like "belief held by people in the scientific community"

I hope not. The definition of "scientific belief" is belief in something supported by a scientific method. Put another way: something formally and rigorously established from agreed axioms.

Being a scientist doesn't make your belief that you're Imhotep reincarnated a scientific belief.


> The definition of "scientific belief" is belief in something supported by a scientific method.

How many examples in the OP did you read? How many of those examples were supported by a scientific method? (Probably some, but not all.) I think my definition better suits the context, quibbling aside.


>you either have "knowledge" or you have "false beliefs" based on best available science.

Personally, I think it would be truly naive to imagine that we now have what you call "knowledge", as opposed to having, in science, the best available (consensus-based) description of the universe. If you wish to call the standard model "false belief", then I can go with that, but it seems a bit overly fussy.

We should understand that we take our axioms, build on them, and measure against them, but that we need to adjust those axioms as evidence comes to light.

Axioms, many scientists fail to realise, are beliefs without scientific justification. Not only is physics built on them, but the mathematics we use to build our physics and the logic we use to support our mathematics are built on them too. What is more, Gödel shows us that we can't prove that logic to be complete and consistent from within.

Yes Pyrrho is my hero but I think Carneades went a bit far.


Doesn't applying a no-false-premise condition (a la Nozick) remove the so-called Gettier problem entirely, though, at least from an epistemological viewpoint? It seems to, based on my hour or so of reading just now.

Practically, however, the no-false-premise (NFP) formulation of JTB (http://en.wikipedia.org/wiki/Justified_true_belief) leaves us without a usable word for "knowledge".


Thanks for that. I always thought that definition was somehow just very, very wrong, but it kept getting cited by people who didn't really care what the definition was.


I still think it's a very bad word to pick. "Belief" is widely accepted to mean holding a proposition regardless of evidence, which isn't something you could call "scientific".


> "Belief" is widely accepted to mean holding a proposition regardless of evidence

No, "belief" and "evidence" are orthogonal.


>No, "belief" and "evidence" are orthogonal.

In which case no evidence exists for us.


Just use an unsupported browser (Opera)!


Honoured Sir,

Understanding you to be a distinguished algebraist (that is, distinguished from other algebraists by different face, different height, etc.), I beg to submit to you a difficulty which distresses me much.

If x and y are each equal to 1, it is plain that

2 * (x^2 - y^2) = 0, and also that 5 * (x - y) = 0.

Hence 2 * (x^2 - y^2) = 5 * (x - y).

Now divide each side of this equation by (x - y).

Then 2 * (x + y) = 5.

But (x + y) = (1 + 1), i.e. = 2. So that 2 * 2 = 5.

Ever since this painful fact has been forced upon me, I have not slept more than 8 hours a night, and have not been able to eat more than 3 meals a day.

I trust you will pity me and will kindly explain the difficulty to Your obliged,

Lewis Carroll.


> Now divide each side of this equation by (x - y)

You can't divide by 0; you just get nonsense.


That's not really the point, because we are dealing with algebra and not the numerical values. The error is the assumption that (x-y)^2 equals (x^2-y^2), which is not the case.


Hmm, no. Look again. It's using the fact that (x^2 - y^2) = (x - y)(x + y), which is the case, and a common enough identity ("the difference of two squares") that it's used without remark here.

The problem really is that you can't divide by zero, even in an algebraic expression.

A simpler example of this phenomenon (which blew my mind when I first encountered it) occurs with the equation x = x^2. If you divide by x, you get x = 1, which is a solution to the equation, but where did the other solution x = 0 go??

Whenever you divide an equation by an algebraic expression, you need to consider the possibility of that expression being zero and treat it as a special case. So in the case of x = x^2, you can reason as follows: maybe x = 0, in which case … what … ah yes, that's a solution! Or maybe x ≠ 0, in which case we can divide by it and get x = 1. That doesn't contradict the assumption x ≠ 0, so it's okay, and x = 1 is the other solution.
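
To make the lost case explicit, here is the same step from the Carroll letter written as a factorisation and a case split rather than a division (just a sketch of the reasoning above):

  2(x^2 - y^2) = 5(x - y)
  \iff (x - y)\bigl(2(x + y) - 5\bigr) = 0
  \iff x = y \quad\text{or}\quad x + y = \tfrac{5}{2}

With the premise x = y = 1, the first alternative is the one that actually holds; dividing by (x - y) silently discards it and keeps only the second, which the premise never satisfied. That is exactly the x = 0 solution going missing in the x = x^2 example.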


It is the case, since it was a premise that x and y are both 1. So the two things you list are both 0, so they are equal, given the premise.

So, the actual problem is dividing by zero. Your assumption that "we are dealing with algebra and not numerical values" is false because it completely ignores the "if x and y = 1" part.


It doesn't matter anyways.


Wikipedia has a good article:

http://en.wikipedia.org/wiki/Divisibility_rule
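
For example, one of the rules it covers, written out as code (a minimal sketch; the modulus 3 and the sample number are arbitrary): a decimal number is divisible by 3 (or 9) exactly when the sum of its digits is.

  #include <stdio.h>

  /* Sum of the decimal digits of n. */
  static int digit_sum(unsigned long n) {
      int sum = 0;
      while (n > 0) {
          sum += (int)(n % 10);
          n /= 10;
      }
      return sum;
  }

  int main(void) {
      unsigned long n = 123456;  /* digit sum is 21, so n is divisible by 3 */
      printf("%lu %s divisible by 3\n", n,
             digit_sum(n) % 3 == 0 ? "is" : "is not");
      return 0;
  }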


The original Wired article was posted five days ago: http://news.ycombinator.com/item?id=1770444


Schneier = infosec blogspammer. And very slow at that.


It's like homeopathy. No analysis necessary, it just works, you know.


Kind of funny, but not really meaningful. There are different priors. Prior probability that water is actually medicine: ~0. Prior probability that repeat head trauma causes damage: >0. If you'd ever seen an MRI of diffuse axonal injury you'd have a different perspective.
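
To put (made-up) numbers on that: a toy Bayes update shows why the prior matters so much, since the same piece of evidence barely moves a near-zero prior but moves a modest one a long way. The likelihoods below are purely illustrative, not from any study.

  #include <stdio.h>

  /* P(H | E) from the prior P(H) and the likelihoods P(E | H), P(E | not H). */
  static double posterior(double prior, double p_e_given_h, double p_e_given_not_h) {
      double p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior);
      return p_e_given_h * prior / p_e;
  }

  int main(void) {
      double e_if_true = 0.8, e_if_false = 0.1;  /* hypothetical study result */
      /* "Water is medicine": prior near 0. "Repeat trauma causes damage": prior well above 0. */
      printf("prior 0.0001 -> posterior %.4f\n", posterior(0.0001, e_if_true, e_if_false));
      printf("prior 0.3    -> posterior %.4f\n", posterior(0.3, e_if_true, e_if_false));
      return 0;
  }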


A good example of a priori knowledge: "All bachelors are unmarried". This is true because it's so by definition; its probability is 1. The effect of boxing is not as obvious as you claim (prior probability that repeat head trauma causes damage: <1), and should be verified. See also roel_v's responses in this sub-thread.


Please reread what I wrote instead of setting up a strawman; you'll see that I did not write "=1" but instead >0. We have prior knowledge on this from evidence.


It's not prior knowledge but a prior conjecture. If you know something, you know it for sure.


That's not how probability works.


Good idea. They already do it.


Specifically: SuperFetch, first seen in Vista: http://en.wikipedia.org/wiki/Windows_Vista_I/O_technologies#...


Or Unix's mmap, which predates Vista by a couple decades. (It was designed around the time of 4.2BSD, i.e., 1983* .) I'd be shocked if Windows hasn't had a similar mechanism for years, as well.

* _The Design and Implementation of the 4.4BSD Operating System_, McKusick et al., p. 29-30.
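
For reference, the bare-bones use of that mechanism looks something like this (a sketch with most error handling omitted; the file path is arbitrary):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("/etc/hosts", O_RDONLY);  /* any readable, non-empty file */
      struct stat st;
      if (fd < 0 || fstat(fd, &st) != 0) return 1;

      /* Map the file; the kernel pages it in (and keeps it cached) on demand. */
      char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (p == MAP_FAILED) return 1;

      /* Optional hint to read ahead -- a per-process nudge, not SuperFetch. */
      madvise(p, st.st_size, MADV_WILLNEED);

      fwrite(p, 1, (size_t)st.st_size, stdout);
      munmap(p, st.st_size);
      close(fd);
      return 0;
  }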


mmap is not SuperFetch, and memory mapping has been in NT since it started.

What SuperFetch does is collect statistics about files hit by processes when they start up, so it can proactively cache bits of those files, to the point that it can cache files read by processes before those processes are ever started, assuming they are either commonly started or started on a schedule.


They say it depends on personality. See http://en.wikipedia.org/wiki/Rorschach_test


The "limits.h" (ISO C99 Standard) defines UINT_MAX as the maximum value for a variable of type unsigned int which is 4294967295 (0xffffffff).


Sigh:

"The contents of the header <limits.h> are given below, in alphabetical order. The minimum magnitudes shown shall be replaced by implementation-defined magnitudes with the same sign."

  #define UINT_MAX 65535

Key here is implementation-defined, with a minimum of 65535.


Platforms differ, and limits.h reflects this; that's the point. We should use the abstractions (the #defines) of the standard library, i.e. refer to these values by their names; that's the way to write portable software.
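
For instance (a small sketch of the idea, nothing more): refer to the limit by name, and let the preprocessor pick a wide-enough type instead of assuming unsigned int has 32 bits.

  #include <limits.h>
  #include <stdio.h>

  /* Pick a type with at least a 32-bit range without assuming what
     unsigned int happens to be on this platform. */
  #if UINT_MAX >= 4294967295u
  typedef unsigned int  u32ish;   /* plain unsigned int is wide enough here */
  #else
  typedef unsigned long u32ish;   /* guaranteed to reach 4294967295 by the standard */
  #endif

  int main(void) {
      printf("unsigned int can hold at most %u\n", UINT_MAX);
      u32ish counter = 4000000000u;  /* safe with either typedef above */
      printf("counter = %lu\n", (unsigned long)counter);
      return 0;
  }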


I always thought that was set correctly by the vendor. Oh well...

I always liked the definition of true in Forth (all bits set to 1). It really made things a lot easier.


The standard doesn't require that an int use all of the bits of storage it occupies. A hypothetical 33-bit machine may present a C environment where ints are 32 bits wide, with the extra bit unused (and unset).
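
A quick way to see the distinction on a given platform (a sketch; on common hardware the two numbers come out equal, but the standard only promises that the storage is at least as wide as the value range):

  #include <limits.h>
  #include <stdio.h>

  int main(void) {
      /* Count the value bits of unsigned int by shifting UINT_MAX down. */
      unsigned int max = UINT_MAX;
      int value_bits = 0;
      while (max != 0) {
          max >>= 1;
          value_bits++;
      }

      /* Bits of storage the object occupies, padding included. */
      int storage_bits = (int)(sizeof(unsigned int) * CHAR_BIT);

      printf("value bits:   %d\n", value_bits);
      printf("storage bits: %d\n", storage_bits);
      return 0;
  }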

