
> but for some reason decided not to make integer types available to programmers.

Can you expand on this? All of the research I've done suggests that not only was it possible to use integer math in BASIC for the Apple II, but there are versions of BASIC that only support integers.




Wozniak's original BASIC for the Apple II only supported integers; when Apple decided they needed floating point and Woz refused to spend time on it, they decided to license it from Microsoft, producing Applesoft BASIC. Applesoft was slower than Woz's BASIC, because it performed all arithmetic in floating point.


"In the Apple II ROMs, I even stuck in my own floating point routine. It wasn't incorporated into the BASIC, but I just didn't want the world thinking I couldn't write floating point routines." -- Steve Wozniak [0]

[0] https://www.10zenmonkeys.com/2007/07/03/steve-wozniak-v-step...

I'm not clear on which Apple II ROMs (INTEGER BASIC or Applesoft ROM, or both) he's referring to.


As a kid hacking away on an Apple II this was apparent; all the good Basic games were written in Woz’s Integer Basic.


https://en.wikipedia.org/wiki/Dartmouth_BASIC

"All operations were done in floating point. On the GE-225 and GE-235, this produced a precision of about 30 bits (roughly ten digits) with a base-2 exponent range of -256 to +255.[49]"


Good find. Dartmouth BASIC was the original, built for their mainframe timesharing system; Apple and the other micro variants came later.

Speaking of, John G. Kemeny's book "Man and the Computer" is a fantastic read, introducing what computers are, how time sharing works, and the thinking behind the design of BASIC.


BASIC doesn't have typing, so most BASIC interpreters just used floating point everywhere to be as beginner-friendly as possible.

The last thing they wanted was someone making their very first app and it behaves like:

    Please enter your name: John Doe

    Please enter how much money you make every day: 80.95

    Congratulations John Doe you made $400 this week!
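The arithmetic behind that pitfall, sketched in Python (the pay figures are from the hypothetical transcript above; the five-day work week is an assumption): an integer-only interpreter would truncate 80.95 before multiplying, producing the bogus $400.

```python
daily_pay = 80.95
work_days = 5  # assumed five-day week for the hypothetical example

# An integer-only BASIC truncates the input before doing arithmetic:
truncated_weekly = int(daily_pay) * work_days   # 80 * 5 = 400

# A floating-point BASIC keeps the cents:
exact_weekly = daily_pay * work_days            # ~404.75

print(truncated_weekly)
print(exact_weekly)
```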


Classic BASIC does have typing, it just shoves it into the variable name. E.g. X$ is a string, X% is a 16-bit signed integer, and X# is a double-precision floating point number.

This started with $ for strings in Dartmouth BASIC (when it introduced strings; the first edition didn't have them), and then other BASIC implementations gradually added new suffixes. I'm not sure when % and # showed up specifically, but they were already there in Altair BASIC, and thence spread to its descendants, so the convention was well established by the 1980s.
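A rough Python sketch of that suffix convention (the suffix meanings follow Microsoft-style BASICs; `!` for single precision is another common one, and "default" behavior varied by dialect):

```python
def basic_type(name: str) -> str:
    """Infer a classic BASIC variable's type from its name suffix."""
    suffixes = {
        "$": "string",
        "%": "16-bit signed integer",
        "#": "double-precision float",
        "!": "single-precision float",
    }
    # No suffix: the default numeric type (single precision in MS BASICs).
    return suffixes.get(name[-1], "single-precision float (default)")

print(basic_type("X$"))  # string
print(basic_type("X%"))  # 16-bit signed integer
print(basic_type("X"))   # single-precision float (default)
```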


Interesting. I wonder if this association of $ with strings is related to the use of $ (rather than NUL) as the string terminator for DOS output routines?


Pretty sure DOS got it from CP/M, but I'm not sure why the latter would have it.

That said, it probably has something to do with earliest 5-bit and 6-bit text encodings that were very constrained wrt control characters, and often originating from punch cards where fixed-length or length-prefixed (https://en.wikipedia.org/wiki/Hollerith_constant) strings were more common. E.g. DEC SIXBIT didn't even have NUL: https://en.wikipedia.org/wiki/Six-bit_character_code


I always just figured ‘$’ looks like S, and S is for String


Whoa, I never realized this. And I spent much time in high school writing programs in QBASIC.


IIRC Python had similar reasoning when it made `/` produce floats in version 3; in 1 and 2, dividing two integers gave a truncated integer. R always treats a bare number literal as a double, while Julia still assumes integer, which occasionally trips me up when switching languages.
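For reference, the Python 3 split between true division and floor division:

```python
# Python 3: / always returns a float, // keeps the old floor-division behavior.
print(3 / 2)   # 1.5
print(3 // 2)  # 1 (what 3 / 2 gave in Python 2)
```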



