Interlisp-D and MIT CADR Lisp Machine demos for IJCAI Conference (1981) (archive.org)
67 points by _19qg on March 3, 2024 | 35 comments


Two different very early Lisp Machines (a Xerox Dolphin and an MIT CADR), personal workstations running a Lisp operating system with early graphical user interfaces, are being demonstrated at Xerox PARC. Both machines are integrated into the early 3 Mbit/s Ethernet there. The Interlisp-D / Xerox Dolphin demo begins at 00:20 and the Lisp Machine Lisp / MIT CADR demo begins at 25:00.

Includes noises from Lisp Machine keyboards. The use of the Interlisp-D structure editor for source code is shown.

The demo was for the IJCAI (International Joint Conference on Artificial Intelligence) in Vancouver, 1981.

I've watched the H.264 version.


It's interesting to see the differences - the Interlisp machine is pretty much a "graphical workstation" which is recognizable today, though in monochrome; the CADR is more spartan on the GUI side.

I am slowly learning Lisp and have the Interlisp VM running on my system; it executes very quickly even on 10-year-old hardware, which of course is 1000 or more times faster than what it originally ran on.


One of the differences is that Interlisp-D used smaller windows. For example, the structure editor we see in the video edits one function. The MIT CADR often uses full-screen windows, and EINE/ZWEI/Zmacs usually edit files with several definitions. The MIT CADR demo showed how to split the screen into several windows, like several editors and listeners (REPLs). Thus splitting the screen into windows or panes was normal, but you didn't have to: you could also place windows with the mouse (and reposition/resize them).

I was a bit surprised to see that they could demo an early MIT CADR at Xerox PARC. These were large, fragile and rare machines at that time. The MIT CADR was about to be commercialized by LMI, Inc. and Symbolics, Inc.

One other thing to note is that a lot of research went into the Interlisp-D IDE (not just the graphical version): interactive help, source code management, programming by example, window manager, programmer's assistants, ... The video for example shows how to refactor Lisp programs using the source code management tools.


> I was a bit surprised to see that they could demo an early MIT CADR at Xerox PARC. These were large, fragile and rare machines at that time.

They weren't extraordinarily fragile; robot wirewrapping is pretty robust. The next year we shipped a couple of them to Paris and I used them just fine, along with a KL-20 that also made the trip OK.


Would you know if any of them are still around? Is there a good emulator that allows them to run well on new-ish x86 or ARM systems?


This is not an “early” CADR, it is just a CADR. By ’81 they were heavily used at the AI Lab and, as gumby mentions, they aren’t fragile little machines.

No running (real) CADRs exist, unless you consider the two FPGAs on my desk.

For a CADR simulator you can check https://tumbleweed.nu/lm-3 — I managed to restore the last system version for it about a year ago, and we are continuing to hack on it, adding and fixing things.

E.g. you can run the simulator against the Global Chaosnet and talk to other LispMs and ITS machines (simulated or not). And some of us do run it 24/7 as a file server for other LispMs.


> No running (real) CADRs exist, unless you consider the two FPGAs on my desk

You've instantiated a lisp machine on an FPGA? That's superb. Have you put any information about that online?


Everything is on the LM-3 project web site (https://tumbleweed.nu/lm-3). :-)

The current HDL implementation, though, only works on an unobtainium FPGA board. We are slowly working on porting it over to something that can actually be bought these days. Help is needed if you are keen on HDL hacking.


On that Lisp, I'd guess adapting something like Macsyma would be a no-no because of the constrained specs, right?


One of the initial reasons for the Lisp Machine project back then was to run Macsyma, since the PDP-10 was too constrained, and being a multi-user system meant you had to share resources.

So MACSYMA runs well on the CADR :-)


Ha, look at the specs of the PDP-10 (a KA-10 originally!) on which it was developed. About 1/4 MIPS with IIRC up to 256 K words of memory!


Which is yet another demonstration that most embedded boards nowadays do just fine with managed languages; it is only a matter of culture and urban myths preventing many people from using them.


That is amazing serendipity, since I was thinking of buying a PDP-10 replica which runs ITS... thank you!


What more can you say about the wirewrap robot, or link to about it? I remember seeing it in the lab, but never saw it in action, and I haven't been able to dig up anything more about it. I'd love to see a video of it doing its thing!

Maybe its corpse appears in this video from 1993, which might be years too late, but it does show off some of its beautiful work.

David Siegel: MIT AI Lab:

https://www.youtube.com/watch?v=hp9NHNKTV-M

>An old video of the 9th floor of 545 Tech Square (MIT building number NE43), filmed around 1993.

Check out the Puma graveyard, and all the little swarming space robots!

I wonder what ever happened to Minsky's tentacle:

AI History: Minsky Tentacle Arm

https://www.youtube.com/watch?v=JuXQPdd0hjI

>This film from 1968 shows Marvin Minsky's tentacle arm, developed at the MIT AI Lab (one of CSAIL's forerunner labs). The arm had twelve joints and could be controlled by a PDP-6 computer or via a joystick. This video demonstrates that the arm was strong enough to lift a person, yet gentle enough to embrace a child.

The stuff at the end reminds me of Golan Levin's adorable googly-eyed worm robot (which was a menacing ABB IRB-2400/16 underneath):

Interactive Worm Robot:

https://www.youtube.com/watch?v=OjUwH9tOdus

>"Double-Taker (Snout)" (interactive robotic installation, 2008) deals in a whimsical manner with the themes of trans-species eye contact, gestural choreography, subjecthood, and autonomous surveillance. The project consists of an eight-foot (2.5m) long industrial robot arm, costumed to resemble an enormous inchworm or elephant's trunk, which responds in unexpected ways to the presence and movements of people in its vicinity. Sited on a low roof above a museum entrance, and governed by a real-time machine vision algorithm, Double-Taker (Snout) orients itself towards passers-by, tracking their bodies and suggesting an intelligent awareness of their activities. The goal of this kinetic system is to perform convincing "double-takes" at its visitors, in which the sculpture appears to be continually surprised by the presence of its own viewers — communicating, without words, that there is something uniquely surprising about each of us. More information at http://www.flong.com/projects/snout/.

https://web.archive.org/web/20080803011529/https://www.flong...

https://library.e.abb.com/public/76ee88849028406b94407ad406d...


Don, I was never there, but the ITS page mentions the robot, I think?

https://en.wikipedia.org/wiki/Incompatible_Timesharing_Syste...


Ask Henry if the arm might have turned up at Ivy street.


the relative speed might depend somewhat on what you're doing. following a random pointer into ram that isn't in your cache still takes something like 80 nanoseconds, while in 01981 it might have taken 500 nanoseconds. but in 01981 executing an instruction also took 500 nanoseconds, and now in 500 nanoseconds this laptop typically executes over 9000 instructions when it's not idle. lisps tend to involve higher proportions of random pointer following than things like golang or numpy
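
working those numbers through (a back-of-the-envelope sketch; the figures are the assumed ones from the paragraph above, not measurements), as lisp forms you could evaluate at a repl:

    ;; assumed: ~500 ns per instruction and per memory access in 01981;
    ;; ~80 ns per uncached random access today; ~9000 instructions
    ;; retired per 500 ns today
    (/ 500 80.0)            ; => 6.25    random access got only ~6x faster
    (* 9000 (/ 80 500.0))   ; => 1440.0  instructions a laptop could retire
                            ;            during one 80 ns cache miss, vs
                            ;            roughly 1 per miss in 01981

so array-friendly code collects nearly the full hardware speedup, while pointer-chasing code collects only a small fraction of it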


That's an interesting point. When arithmetic and memory accesses cost about the same, pointer chasing through graphs works great. The currently popular struct-of-arrays and cache-oblivious data structure patterns only really make sense when arithmetic is essentially free relative to memory.


fortran and proto-fortran programmers had been using parallel arrays (aka 'struct-of-arrays') since the 01950s, and if you read sutherland's dissertation, you'll see that parallel array patterns were just the default way of structuring data in computer memories, to the point that they didn't have a name and sutherland felt he had to justify doing things differently; neither basic nor fortran supported records (structs). the only arithmetic you need to access attribute aa of object oo in parallel arrays is base + (oo << sizebits), and in the usual case, sizebits was 0, the addition was done implicitly by the index-register hardware, and base was an immediate operand in the instruction and so didn't need a separate fetch operation. this is quite comparable to the arithmetic needed for the record/struct approach: oo + offset, where offset is an immediate operand

as i understand it, the reason we switched to conventionally using records was not because of efficiency, nor even to support pointer chasing through graphs (you can totally make linked lists in parallel arrays), but for reasons of memory allocation and dynamic object lifetimes
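
to make the two layouts concrete, here's a minimal common lisp sketch (hypothetical names, not from the video or from sutherland) of the same objects stored as parallel arrays and as records:

    ;; parallel arrays ('struct-of-arrays'): an object is just an index
    ;; shared across one array per attribute
    (defvar *mass*     (vector 1.0 2.5 0.7))
    (defvar *velocity* (vector 3.0 0.1 9.8))

    (defun momentum-soa (oo)
      ;; attribute access is base + index, as described above
      (* (aref *mass* oo) (aref *velocity* oo)))

    ;; records (structs): each object owns its attributes and can be
    ;; allocated and freed independently of every other object, at the
    ;; cost of following a per-object pointer
    (defstruct body mass velocity)

    (defvar *bodies*
      (vector (make-body :mass 1.0 :velocity 3.0)
              (make-body :mass 2.5 :velocity 0.1)))

    (defun momentum-aos (oo)
      (let ((b (aref *bodies* oo)))     ; the pointer chase
        (* (body-mass b) (body-velocity b))))

the record version is the one that makes the independent allocation and dynamic lifetimes mentioned above easy, which fits the reason we switched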


"Interlisp is a very large software system and large software systems are not easy to construct. Interlisp-D has on the order of 17,000 lines of Lisp code, 6,000 lines of Bcpl, and 4,000 lines of microcode."

from Interlisp-D: Overview and Status

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

Ah, the days when a "very large software system" had 27K lines of code. :)


Even today 17,000 lines of Lisp can be dauntingly complex ;-).


And 100 if you use QuickLisp and some modules.


They also had "relatively large main memories" ;-) "(~1 megabyte) and virtual address spaces (4-16M 16 bit words)". I would guess the CPU speed would be rated less than 1 MIPS.


it's kind of surprising that the cadr took 10 seconds (at 43'56") to compile

    (defun fact (n)
      (cond ((< n 2) 1)
            (t (* n (fact (- n 1))))))
does anyone know why it's so slow? maybe it had to do a full garbage collection?


A full GC took several tens of minutes… it was considered a coffee break type of thing.

It is hard to say why it took that long to compile from the video, I’m guessing some network access to something …


Paging would be another option. Virtual memory was slow and the GC integration with virtual memory wasn't very sophisticated at that time. Also main memory was probably tiny.


thanks for the explanation! i'm sorry i upset you previously by saying that stallman had often been unkind to others


this is really enjoyable. i'd never seen a d-machine in video before, and i had no idea masterscope could do automated refactoring on interlisp code in 01981

there's something very hollywood-like about 01960s/01970s ai natural language parsing interfaces

unsurprisingly the cadr looks very similar to the symbolics genera environment i've seen more of


You missed a leading "1" in your year numbers


https://news.ycombinator.com/item?id=35663742

DonHopkins 10 months ago | parent | context | favorite | on: PHP Popularity: Is it decreasing and what to do ab...

That's exactly Jeff Atwood's point, which I quoted above (and will repeat here): "From my perspective, the point of all these "PHP is broken" rants is not just to complain, but to help educate and potentially warn off new coders starting new codebases. Some fine, even historic work has been done in PHP despite the madness, unquestionably. But now we need to work together to fix what is broken. The best way to fix the PHP problem at this point is to make the alternatives so outstanding that the choice of the better hammer becomes obvious." -Jeff Atwood

https://blog.codinghorror.com/the-php-singularity/

Leaning hard into the IDE (or ChatGPT these days) because your language design is flawed is a hella/totally stereotypical "West Coast" thing to do, as described in "Evolution of Lisp", "Worse is Better", and "History of T", and exemplified by Interlisp and Warren Teitelman's "pervasive philosophy of user interface design" and implementation of "DWIM".

https://en.wikipedia.org/wiki/DWIM

https://www.techfak.uni-bielefeld.de/~joern/jargon/DWIM.HTML

https://escholarship.org/uc/item/6492j904

If your language isn't terribly designed, then your IDE doesn't have to be such a complex non-deterministic Rube Goldberg machine, papering over the language's flaws, haphazardly guessing about your intent, "yelling at you" all the time about potential foot-guns and misunderstandings.

As you might guess, I'm firmly in the "East Coast" MacLisp / Emacs camp, because that's what I learned to program in the 80's. I can't stand most IDEs (except for the original Lisp Machines, and Emacs of course), especially when they keep popping up hyperactive completion menus that steal the keyboard input focus and spew paragraphs of unexpected boilerplate diarrhea into my buffer whenever I dare to type ahead quickly and hit return.

But my point is that you can have and should demand the best of both coasts, unless you start off with a Shitty West Coast Programming Language or a Shitty East Coast IDE.

(Of course those philosophies are no longer bound to the geographical coasts they're named after; that's just how those papers describe their origin.)

Jeff Atwood's point and my point is that we should demand both well-designed programming languages AND well-designed IDEs, not make excuses for and paper over the flaws of shitty ones.

There are historic existence proofs, like Lisp Machines and Smalltalk, and we should be able to do much better now, instead of getting stuck in the past with Lisp or PHP.

I mentioned the East/West Coast dichotomy in the discussion about the conversation between Guido, James, Anders and Larry:

https://news.ycombinator.com/item?id=19568860

>DonHopkins on April 4, 2019 | parent | context | favorite | on: A Conversation with Language Creators: Guido, Jame...

>Anders Hejlsberg also made the point that types are documentation. Programming language design is user interface design because programmers are programming language users.

>"East Coast" MacLisp tended to solve problems at a linguistic level that you could hack with text editors like Emacs, while "West Cost" Interlisp-D tended to solve the same problems with tooling like WYSIWYG DWIM IDEs.

>But if you start with a well designed linguistically sound language (Perl, PHP and C++ need not apply), then your IDE doesn't need to waste so much of its energy and complexity and coherence on papering over problems and making up for the deficiencies of the programming language design. (Like debugging mish-mashes of C++ templates and macros in header files!)

More discussion of West Coast -vs- East Coast language design:

Evolution of Lisp:

https://redirect.cs.umbc.edu/courses/331/papers/Evolution-of...

Worse is Better:

https://dreamsongs.com/WorseIsBetter.html

History of T:

http://www.paulgraham.com/thist.html?viewfullsite=1

The Interlisp Programming Environment

http://www.ics.uci.edu/~andre/ics228s2006/teitelmanmasinter....

https://news.ycombinator.com/item?id=5966328

https://news.ycombinator.com/item?id=5966399

>gruseom on June 30, 2013 | parent | context | favorite | on: The Interlisp Programming Environment (1981) [pdf]

>Interlisp was the so-called "west coast" Lisp that emphasized an interactive programming environment and in retrospect looks more like a hybrid between Smalltalk and Lisp than modern Lisp implementations. It was developed at PARC for a while. I don't know if there was cross-pollination between Interlisp and Smalltalk or if the similarity was a zeitgeist thing.

>This article talks about the design values of the system and communicates the flavour of what a Smalltalkish Lisp would have been like.

>As someone who's only read about this, I'd be interested in hearing from people who actually used it.


Don, have you played around with the Interlisp.org "try Medley in your browser" or the downloadable version? And if so, what did you think of it?


These are really nice, I have been playing with them lately. They work really well.


No, I haven't had a chance to check that out. Thanks for the tip!


i hope i get a chance to hear what you think! larry has been putting a lot of work into it since his disability


On programming, I like awk (gawk more, because of sockets and nice I/O), editing with just ed/vis (vis is like vi but with syntax highlighting and Sam's structural regexen), and entr to watch the files and spawn a make subprocess on every write.

Make, not gmake. A proper guide to simple makefiles:

      git://bitreich.org/english_knight
I know awk is kinda the opposite of MIT/GNU/Emacs/Lisp, but with awka I can compile it nicely into C to do lots of prototypes of math calculations and plots with Gnuplot. This is the Unix philosophy done right, albeit 9front/plan9 takes it further with Acme and I/O to any device as a file and from any language.

Emacs and Elisp (and Common Lisp as the 'biggie' language) would be nice if Emacs didn't hang on I/O (hello Gnus; back in the day I tweaked a config file so it read slrnpull's Usenet spool, which got parsing down to 5 minutes instead of 45). I'd truly use it then, as it has lots of nice modules, inline help and discoverability.

But with the Unix philosophy, first I write down schematics with paper and pen, then I prototype stuff with awk (awk does far more than reading lines; with the BEGIN clause you can even write a Tetris without touching a file), and then awka -> C. People don't know it, but you can do a lot with just tcc, a full busybox set, gnuplot (no wx, no qt, just X11), and netcat if you don't want to use GNU awk for sockets.

Also, call me weird/odd, but a lot could be done with gopher, simple CGI with awk, and an sqlite database. I know some (not small at all) book store in Spain using just AS400/DOS terminals to search a networked array of machines for books' ISBNs, availability and so on, and the bookstore people looked up books at light speed. Imagine that today with the ubiquitous Java EE stack, or worse, Electron and bloated VS Code IDEs, producing a very subpar tool.



