Seems like I'm in the minority here, but just yesterday I found an off-by-one bug in my small dissertation project that had been sitting there unnoticed for months, and I started thinking about how ridiculous it is that zero-based indexing is in such widespread use. Here I see someone arguing that it makes cycling through the array easier, "a frequent operation". How is looping through an array or accessing its last element such a frequent operation?
In every non-computer-related document one finds ordinal numbers starting from 1, and n-element sets where the number of the last element is n. Thanks to zero-based indexing, one now has to constantly switch back and forth between two ways of counting things, depending on whether one is reading/writing code or prose. Just pay attention over the next week or month to how many times you forget to make the switch and make the kind of off-by-one mistake I'm talking about. I admire mathematics more than any other discipline and I do see the benefits of a formal approach, but here the formalism has, in my opinion, clearly diverged from common sense and from actual software engineering practice.
In fact, if you study the history of any engineering discipline, communication errors are the most frequent cause of failure, or even disaster. Having to translate every document into the private language of your discipline certainly doesn't help communication. Dijkstra mostly did proofs and didn't have to deal with other people much in his work, so perhaps he didn't understand that.
It isn't just computers. Mathematicians start from zero or one depending on the context. Hence the famous joke about the absent-minded mathematician whose wife left him on a street corner with a dozen packages and implored him to please, please, please pay attention and make sure they weren't stolen. When his wife returned, he apologized profusely. "I'm so sorry, my dear. My mind must have wandered. I didn't see anybody come near, but now one of the packages is missing."
"Are you sure?" said his wife. "It looks like they are still all here."
"There are only eleven left. I've counted them a dozen times at least. I'll count them again. See: zero, one, two...."
P.S. The most famous example I can think of is the aleph notation for the infinite cardinals, which starts with aleph null: http://en.wikipedia.org/wiki/Aleph
I doubt mathematicians commonly use zero as the index of the first element of a collection. Matrices and vectors, which are the closest analogue to arrays, are indexed from one. In the aleph notation, aleph null is different from all the other alephs because it represents the cardinality of an infinite but _countable_ set, so the notation makes sense and doesn't have any disadvantages.
\aleph_0 is the first countable infinity because the alephs (like the beth numbers, and the levels of the cumulative hierarchy) are indexed by ordinals, which start from zero, i.e. the empty set in the usual constructions of von Neumann and Zermelo.
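(For anyone who hasn't seen it, a quick sketch of the von Neumann construction being referred to, where each ordinal is simply the set of all smaller ordinals:

    0 = \emptyset
    1 = \{0\} = \{\emptyset\}
    2 = \{0, 1\}
    \alpha + 1 = \alpha \cup \{\alpha\}

so the indexing naturally begins at the empty set, i.e. at 0.)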
However, whether we start from 0 or 1 is ultimately arbitrary and more a question of symbol choice than semantics, since the sequence of natural numbers beginning with 1 is isomorphic to the sequence of natural numbers beginning with 0. Set-theoretically it makes a little more sense to start with 0, but when considering arithmetic alone, it really makes no difference.
There are many other examples. Topological spaces can be described as T0, T1, T2, etc. to indicate the separation axioms that hold on them. In part this reflects the tendency for there to be a first case that is degenerate or that common sense overlooks; for example, lines are one-dimensional and planes are two-dimensional, but mathematicians start with points, which are zero-dimensional.
It feels like this is related to the question of whether the "natural numbers" include zero or not. In the set theory classes I took, we used N (in "blackboard bold") for the natural numbers 0, 1, 2, 3, ... and Z+ (in "blackboard bold") for the positive integers. My professor's pragmatic take on the controversy was that Z+ already means the positive integers, so it seemed like a waste to let N mean the same thing as Z+ and have no symbol to denote the nonnegative integers.
I always liked my algebra professor's way of getting to the same conclusion of \mathbb{N} including 0:
The natural numbers are how we count things. How many moons does Earth have? One. How many eggs are in the carton in my fridge? Twelve. How many elephants are in this room? Zero.
Incidentally, all of my professors agreed with this and most of them used 0-indexing unless some insight or ease of work was exposed by a different indexing scheme.
When I started grad school in math, I hung out with a bunch of set theorists & topologists, and they all used \mathbb{N} to mean {1, 2, 3, ...}. Their reasoning was that we already had a symbol for {0, 1, 2, 3, ...}, namely \omega, the first infinite ordinal. However, my experience in the years since suggests that your convention is the more common one.
BTW, there is an important point made in your comment, one I had to hammer on repeatedly when I taught discrete math: that the answer to a "How many ... ?" question is a nonnegative integer, i.e., an element of {0, 1, 2, 3, ...}. I've found that this fact is surprisingly difficult for many people to grasp.
This doesn't make sense. If you count out twelve packages: zero, one, two, three, four, five, six, seven, eight, nine, ten, eleven; you have used twelve number names, hence: twelve packages.
Yes, obviously he made a mistake :-) and the mistake he made was starting from zero but assuming that the last number he said out loud ("eleven!") was the number of objects he had counted, which is only correct if you start from one.
I think Luyt's point, with which I reluctantly agree even though it spoils a nice joke, is that a hypothetical mathematician in whom the habit of counting from 0 is so firmly ingrained would also have a firmly ingrained habit of taking (last number named + 1) as the number of things.
So yeah, sure, he could make a mistake, but the premise of the joke is that such a mistake is especially likely for a mathematician counting from 0, and I don't think it is.
No, the point of the joke is that the mathematician uses natural numbers (0, ...) and everyone else uses ordinal numbers (1st, 2nd, ...), so when his wife asked for 12 he thought she meant 0-12. What she actually meant was 1-12. So it was a miscommunication caused both by his habit of using natural numbers for everything and by his being very isolated from the outside world.
I'm reluctant to continue dissecting what after all is only a joke, but:
1. The mathematician's wife was not using ordinal numbers; nor was he. They were both asking "how many of these things are there?". Cardinals, not ordinals.
2. A mathematician who's so used to 0-based working that he always counts from 0 simply will not interpret "0,1,...,11" as meaning that there are 11 things. The same unusual way of looking at things that makes him start from 0 will also make him proceed from "...,10,11" to "there are 12 things".
3. The mathematician's wife didn't "ask for 12". She left him with 12 packages. What happens in the joke is not a miscommunication, it's a slip-up by the mathematician in going from "...,10,11" to "12 things". Which I suggest is highly implausible for a mathematician in whom 0-based counting is deeply ingrained.
And now I've had enough of overanalysing this joke and shall leave it alone. Feel free to have the last word if you wish.
Incidentally, (some) mathematicians prefer their ordinals to start from 0 as well. The trouble is that "first" (derived from "foremost") really ought to mean the one you start with rather than the one numbered 1, and "second" (from the Latin "secundus" meaning "following") really ought to mean the one after that, and "third" etc. are derived from the number names and therefore have to correspond to 3,4,... -- so there's a gap at 2. One of my acquaintances therefore likes to use a sequence of ordinals that goes: first (0th), second (1nd), twifth (2th), third (3rd), fourth (4th), fifth (5th), etc. I don't think it'll catch on.
>How is looping through an array or accessing its last element such a frequent operation?
In languages without foreach or map I'd guess it's one of the most frequent operations. You probably work with the wrong kind of code to appreciate just how awful anything but the [0..n) behaviour is for... I'm sorry, it's so pervasive I can't even come up with a good example. The most trivial thing is offsetting into an array by plain addition of an index.
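A minimal sketch in C of what I mean (made-up names, purely illustrative): with zero-based indexing, element (row, col) of a ROWS x COLS matrix stored in a flat array is just a[row*COLS + col], with no +1/-1 corrections anywhere.

    #include <stdio.h>

    #define ROWS 3
    #define COLS 4

    int main(void) {
        int a[ROWS * COLS];

        /* Offset into the flat array by plain addition: row*COLS + col. */
        for (int row = 0; row < ROWS; row++)
            for (int col = 0; col < COLS; col++)
                a[row * COLS + col] = 10 * row + col;

        printf("%d\n", a[2 * COLS + 3]);  /* element (2, 3), prints 23 */
        return 0;
    }

With one-based indexing the same lookup needs (row-1)*COLS + (col-1) + 1, and that is exactly where the off-by-one errors creep in.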
Had you been working with a one-based indexing language, you would most likely have made other types of errors in your code because of it.
Zero is such a pervasive number in software development that, like it or not, you're bound to use it in lots of places.
Even if we assumed that one-based indexing was a good thing, many languages (VB prominently) can't seem to make up their minds and use a single convention throughout: arrays may be one-based, but collections are 0-based, and when you start having to interface with libraries written in other languages or with APIs, you're left walking on eggshells, always wondering whether you're off by one somewhere.
I said this yesterday, but the way an array works is that you take a pointer to its first item and add (size of an item * index) to get a pointer to the item you want. The first item is the base pointer + 0. So that's why it's like that.
FWIW I grew up on FORTRAN which starts from 1 and work with PL/SQL now which also starts from 1, so I am used to it, but I understand why it makes sense to do it from 0 in terms of the way computers actually work. At least, computers with operating systems written in C.
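A tiny C sketch of that pointer arithmetic, for the curious (nothing project-specific, just the standard a[i] == *(a + i) identity):

    #include <stdio.h>

    int main(void) {
        double items[5] = {1.0, 2.0, 3.0, 4.0, 5.0};

        /* The address of element i is the base pointer plus i elements,
           i.e. base + i * sizeof(double) bytes. The first element is the
           base pointer plus 0, hence index 0. */
        for (int i = 0; i < 5; i++) {
            printf("*(items + %d) = %g, same as items[%d] = %g\n",
                   i, *(items + i), i, items[i]);
        }
        return 0;
    }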