A classic piece which contains one of the best things ever written about design:
Take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it.
It's interesting to me that the idea that inspired Kay to this insight was FEXPRs: letting functions control the evaluation of their arguments. FEXPRs have long been banished from the Lisp world, which makes a sharp distinction between functions and macros (only the latter get to control evaluation). I still don't quite understand the reasons for this, and it seems like Kay doesn't either -- or at least didn't when he was inventing Smalltalk.
There's no real difference between a FEXPR and a macro that calls eval or apply during expansion. The real meaning of "banishing" FEXPRs is that we actually get something extra: guaranteed-non-macro functions that behave according to a predictable pattern (evaluate the arguments, pass them in, return a value). Functions are basically a codified Lisp design pattern, while macros (or FEXPRs, as the case may be) are the real core of Lisp's abstraction mechanism, and really what makes Lisp, Lisp.
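To make that concrete, here's a loose Python sketch (not real Lisp; my_if_fexpr and my_if_fn are made-up names for illustration) of the difference between a callee that controls the evaluation of its arguments and an ordinary function that only ever sees values:

    # Loose analogy, not actual Lisp: a "FEXPR-like" callee receives its
    # arguments unevaluated (here, as source strings) and decides for
    # itself which ones to evaluate; an ordinary function only sees values.
    def my_if_fexpr(cond_src, then_src, else_src, env):
        if eval(cond_src, env):
            return eval(then_src, env)
        return eval(else_src, env)

    def my_if_fn(cond, then_val, else_val):
        # The caller has already evaluated everything before we're called.
        return then_val if cond else else_val

    x = 0
    my_if_fexpr("x != 0", "1 / x", "0", globals())   # returns 0; "1 / x" is never evaluated
    # my_if_fn(x != 0, 1 / x, 0)                     # evaluates 1 / x first -> ZeroDivisionError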
It's just the same with Smalltalk: functions take lambdas (blocks) as arguments, not primitive or composite values; "receives evaluated values" is simply a design pattern.
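In Python terms (a hedged sketch; my_if is a made-up name standing in for Smalltalk's ifTrue:ifFalse:), the pattern looks like this:

    # "Pass blocks, not values": control flow as a plain call whose
    # arguments are thunks (zero-argument lambdas), analogous to sending
    # ifTrue:ifFalse: with two blocks in Smalltalk.
    def my_if(condition, then_block, else_block):
        # Only the chosen block is ever evaluated ("sent #value").
        return then_block() if condition else else_block()

    x = 0
    my_if(x != 0, lambda: 1 / x, lambda: 0)   # returns 0; 1 / x never runs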
That's a pretty good explanation; if it's accurate, I wonder why no one else explains it that way.
In Smalltalk, I thought arguments are always evaluated unless they're blocks. Is that wrong? Or is it that, at a lower level, everything's a block, and functions are implemented on top of that as something that always evaluates args?
No, you're right—arguments are always evaluated unless they're blocks in the full Smalltalk design we have today. However, like I said, this is a codified design pattern—the language could get away without doing it while still retaining the "core Smalltalk conceit" of everything being a message sent to an object. This is essentially what the Io language does: it's a Smalltalk at its core, but with less cruft.
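A minimal Python sketch of that idea, with invented names (Obj, Bool, send, msg_if_true_if_false; this is not real Io syntax): every operation is a message sent to an object, and the receiver decides whether to force each argument thunk.

    # Invented names, not real Io: every operation is a message send, and
    # the receiving object decides whether/when to force each argument.
    class Obj:
        def send(self, message, *arg_thunks):
            return getattr(self, "msg_" + message)(*arg_thunks)

    class Bool(Obj):
        def __init__(self, value):
            self.value = value
        def msg_if_true_if_false(self, then_thunk, else_thunk):
            return then_thunk() if self.value else else_thunk()

    Bool(False).send("if_true_if_false", lambda: 1 / 0, lambda: 42)   # -> 42; 1 / 0 never runs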
Interesting; there's a researcher by the name of John Shutt who said of FEXPRs in 2009: "I suspect them of offering fundamentally greater abstractive power than any other programming language feature yet devised." http://lambda-the-ultimate.org/node/3640#comment-51563
This is really odd: I'm hitting a bug in Chrome on Windows where the HTML gets rendered as plain text. MIME type issues on the server side, maybe? Other than that, it looks like an interesting read.
Another interesting angle is the tradeoff between being pedantically standards-compliant (sometimes at the cost of usefulness) and bending the standards (at the expense of encouraging bad practice).
If all browsers actually paid attention to Content-Type then this site probably would have fixed the actual problem. Instead we have a world where it works in only some browsers, and naïve users consider the one that is technically more correct to be "broken".
(I did it to be able to read the paper without all the distracting OCR errors/typos in the OP's version; perhaps they've been fixed since then, though they weren't at the time I sent him this.)
Really good read, albeit a bit long. It has a lot more detail about the Xerox PARC days than movies like "Triumph of the Nerds" had. It was interesting to get a deeper understanding of the actor model in Smalltalk and Simula.