I think Edge picked a very bad word. Isn't "scientific belief" a silly oxymoron?
At least, many of those "beliefs" weren't actually beliefs, but scientific theories that worked very well but were later improved. That's the case with Newtonian gravitational force.
What point am I missing? Clearly, the definition of "scientific belief" used in this context is something like "belief held by people in the scientific community". The central point here seems to be that, while "belief" may have a connotation of something that is lacking evidence, it's in fact a far more general term that applies even to genuine knowledge.
>the definition of "scientific belief" used in this context is something like "belief held by people in the scientific community"
I hope not. The definition of "scientific belief" is belief in something supported by a scientific method. Put another way: something formally and rigorously established on the basis of agreed axioms.
Being a scientist doesn't make your belief that you're Imhotep reincarnated a scientific belief.
>The definition of "scientific belief" is belief in something supported by a scientific method.
How many examples in the OP did you read? How many of those examples were supported by a scientific method? (Probably some, but not all.) I think my definition better suits the context, quibbling aside.
>you either have "knowledge" or you have "false beliefs" based on best available science.
Personally I think it would be truly naive to imagine that we now have what you call "knowledge" as opposed to having in science the best (based on consensus) available description of the universe. If you wish to call the standard model "false belief" then I can go with that but it seems a bit overly fussy.
We should understand that we take our axioms and build on them and measure against them but that we need to adjust those axioms as evidence comes to light.
Axioms, many scientists fail to realise, are beliefs without scientific justification. Not only is physics built on them, but the mathematics we use to build our physics and the logic we use to support our mathematics are built on them too. What is more, Gödel shows us that we can't prove that logic to be complete and consistent from within.
Yes Pyrrho is my hero but I think Carneades went a bit far.
Doesn't applying no false premise (à la Nozick) remove the so-called Gettier problem entirely, though, at least from an epistemological viewpoint? It seems to, based on my hour or so of reading just now.
Thanks for that. I always thought that definition was somehow just very, very wrong, but it kept getting cited by people who didn't really care what the definition was.
I still think it's a very bad word to pick. "Belief" is widely accepted to mean holding a proposition regardless of evidence, which isn't something you could call "scientific".
Understanding you to be a distinguished algebraist (that is, distinguished from other algebraists by different face, different height, etc.), I beg to submit to you a difficulty which distresses me much.
If x and y are each equal to 1, it is plain that
2 * (x^2 - y^2) = 0, and also that 5 * (x - y) = 0.
Hence 2 * (x^2 - y^2) = 5 * (x - y).
Now divide each side of this equation by (x - y).
Then 2 * (x + y) = 5.
But (x + y) = (1 + 1), i.e. = 2. So that 2 * 2 = 5.
Ever since this painful fact has been forced upon me, I have not slept more than 8 hours a night, and have not been able to eat more than 3 meals a day.
I trust you will pity me and will kindly explain the difficulty to Your obliged,
That's not really the point, because we are dealing with algebra and not the numerical values. The error is the assumption that (x-y)^2 equals (x^2-y^2), which is not the case.
Hmm, no. Look again. It's using the fact that (x^2 - y^2) = (x - y)(x + y), which is the case, and a common enough identity ("the difference of two squares") that it's used without remark here.
The problem really is that you can't divide by zero, even in an algebraic expression.
A simpler example of this phenomenon (which blew my mind when I first encountered it) occurs with the equation x = x^2. If you divide by x, you get x = 1, which is a solution to the equation, but where did the other solution x = 0 go??
Whenever you divide an equation by an algebraic expression, you need to consider the possibility of that expression being zero and treat it as a special case. So in the case of x = x^2, you can reason as follows: maybe x = 0, in which case … what … ah yes, that's a solution! Or maybe x ≠ 0, in which case we can divide by it and get x = 1. That doesn't contradict the assumption x ≠ 0, so it's okay, and x = 1 is the other solution.
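Spelled out for the original puzzle: 2 * (x^2 - y^2) = 5 * (x - y) is the same as 2 * (x - y) * (x + y) = 5 * (x - y). If (x - y) = 0, both sides are 0 and the equation tells you nothing more; that's the branch where x = y = 1 lives. If (x - y) ≠ 0, dividing is legitimate and gives 2 * (x + y) = 5, i.e. x + y = 5/2, which just means this branch doesn't contain x = y = 1. The letter's trick is to divide by (x - y) while also insisting that x = y = 1, i.e. to divide by zero.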
It is the case, since it was a premise that x and y are both 1. So the two things you list are both 0, so they are equal, given the premise.
So, the actual problem is dividing by zero. Your assumption that "we are dealing with algebra and not numerical values" is false because it completely ignores the "if x and y = 1" part.
Kind of funny, but not really meaningful. There are different priors. Prior probability that water is actually medicine: ~0. Prior probability that repeat head trauma causes damage: >0. If you'd ever seen an MRI of diffuse axonal injury you'd have a different perspective.
A good example of a priori knowledge: "All bachelors are unmarried". This is true because it's so by definition; its probability == 1. The effect of boxing is not as obvious as you claim (prior probability that repeat head trauma causes damage: <1), and should be verified. See also roel_v's responses in this sub-thread.
Please reread what I wrote instead of setting up a strawman; you'll see that I did not write "=1" but instead >0. We have prior knowledge on this from evidence.
Or Unix's mmap, which predates Vista by a couple of decades. (It was designed around the time of 4.2BSD, i.e., 1983*.) I'd be shocked if Windows hasn't had a similar mechanism for years, as well.
* _The Design and Implementation of the 4.4BSD Operating System_, McKusick et al., p. 29-30.
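For anyone who hasn't used it, here's a minimal sketch of what plain mmap gives you (the file path is only a placeholder and error handling is kept to the bare minimum):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("/etc/hosts", O_RDONLY);      /* placeholder path */
      if (fd < 0) { perror("open"); return 1; }

      struct stat st;
      if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

      /* Map the whole file read-only; pages are faulted in on demand. */
      char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      fwrite(p, 1, st.st_size, stdout);           /* read it like ordinary memory */

      munmap(p, st.st_size);
      close(fd);
      return 0;
  }

That's on-demand paging of a file you explicitly ask for, which is a different thing from the predictive prefetching described below.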
mmap is not SuperFetch, and memory mapping has been in NT since it started.
What SuperFetch does is collect statistics about files hit by processes when they start up, so it can proactively cache bits of those files, to the point that it can cache files read by processes before those processes are ever started, assuming they are either commonly started or started on a schedule.
"The contents of the header <limits.h> are given below, in alphabetical order. The
minimum magnitudes shown shall be replaced by implementation-defined magnitudes
with the same sign."
#define UINT_MAX 65535
Key here is implementation-defined, with a minimum of 65535.
Platforms differ, and limits.h reflects this; that's the point. We should use the abstractions (the #defines) of the standard library, i.e. refer to these values by their names; that's the way to write portable software.
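For example, a trivial sketch that just asks <limits.h> for the values by name instead of assuming them:

  #include <limits.h>
  #include <stdio.h>

  int main(void) {
      /* The names are standard; the values are implementation-defined. */
      printf("CHAR_BIT = %d\n", CHAR_BIT);
      printf("INT_MAX  = %d\n", INT_MAX);
      printf("UINT_MAX = %u\n", UINT_MAX);
      return 0;
  }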
The standard doesn't require that an int use all of the bits of storage it takes. A hypothetical 33-bit machine may present a C environment where ints are 32 bits wide, with the extra bit unused (and unset).
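One way to see the distinction, as a sketch (using unsigned int for simplicity): derive the number of value bits from UINT_MAX itself rather than from sizeof, since the two are allowed to disagree when the implementation has padding bits.

  #include <limits.h>
  #include <stdio.h>

  int main(void) {
      /* Value bits of unsigned int, counted from UINT_MAX itself. */
      int value_bits = 0;
      for (unsigned int v = UINT_MAX; v != 0; v >>= 1)
          value_bits++;

      /* Storage bits; may be larger if the implementation uses padding bits. */
      int storage_bits = (int)(sizeof(unsigned int) * CHAR_BIT);

      printf("value bits:   %d\n", value_bits);
      printf("storage bits: %d\n", storage_bits);
      return 0;
  }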