Hacker News

> I was a bit surprised to see that they could demo an early MIT CADR at Xerox PARC. These were large, fragile and rare machines at that time.

They weren't extraordinarily fragile; robot wirewrapping is pretty robust. The next year we shipped a couple of them to Paris and I used them just fine, along with a KL-20 that also made the trip OK.



Would you know if any of them are still around? Is there a good emulator that allows them to run well on new-ish x86 or ARM systems?


This is not an “early” CADR, it is just a CADR. By ‘81 they were heavily used at the AI Lab and, as gumby mentions, they aren’t fragile little machines.

No running (real) CADRs exist, unless you consider the two FPGAs on my desk.

For a CADR simulator you can check https://tumbleweed.nu/lm-3 — I managed to restore the last system version for it last year or so, and we continue hacking on it, adding and fixing things.

E.g. you can run the simulator against the Global Chaosnet and talk to other LispMs and ITS machines (simulated or not). And some of us do run it 24/7 as a file server for other LispMs.


> No running (real) CADRs exist, unless you consider the two FPGAs on my desk

You've instantiated a lisp machine on an FPGA? That's superb. Have you put any information about that online?


Everything is on the LM-3 project web site (https://tumbleweed.nu/lm-3). :-)

The current HDL implementation, though, only works on an unobtainium FPGA board. We are slowly working on porting it over to something that can actually be bought these days. Help is needed if you are keen on HDL hacking.


On that Lisp, I'll guess adapting something like Macsyma would be a no-no because of the constrained specs, right?


One of the initial reasons for the Lisp Machine project back then was to run Macsyma, since the PDP-10 was too constrained, and being a multi-user system meant you had to share resources.

So MACSYMA runs well on the CADR :-)


Ha, look at the specs of the PDP-10 (a KA-10 originally!) on which it was developed. About 1/4 MIPS with IIRC up to 256 K words of memory!
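For a sense of scale, here is a quick back-of-the-envelope conversion of those specs into modern units (assuming the PDP-10's 36-bit word, which is the figure I'd use; the 256 K words is as recalled above):

```python
# How much memory is 256 K words on a PDP-10, in modern byte terms?
# The PDP-10 family used 36-bit words.
words = 256 * 1024            # 262,144 words
bits = words * 36             # each word is 36 bits
megabytes = bits / 8 / (1024 * 1024)
print(f"{megabytes} MB")      # 1.125 MB
```

So the original Macsyma development machine had roughly a megabyte of memory all told.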


Which is yet another reason to believe that most embedded boards nowadays do just fine with managed languages; it is only a matter of culture and urban myths that prevents many people from using them.


That is amazing serendipity, since I was thinking of buying a PDP-10 replica that runs ITS... thank you!


What more can you say about or link to of the wirewrap robot? I remember seeing it in the lab, but never saw it in action, and I haven't been able to dig up anything more about it. I'd love to see a video of it doing its thing!

Maybe its corpse appears in this video from 1993, which might be years too late, but it does show off some of its beautiful work.

David Siegel: MIT AI Lab:

https://www.youtube.com/watch?v=hp9NHNKTV-M

>An old video of the 9th floor of 545 Tech Square (MIT building number NE43), filmed around 1993.

Check out the Puma graveyard, and all the little swarming space robots!

I wonder what ever happened to Minsky's tentacle:

AI History: Minsky Tentacle Arm

https://www.youtube.com/watch?v=JuXQPdd0hjI

>This film from 1968 shows Marvin Minsky's tentacle arm, developed at the MIT AI Lab (one of CSAIL's forerunner labs). The arm had twelve joints and could be controlled by a PDP-6 computer or via a joystick. This video demonstrates that the arm was strong enough to lift a person, yet gentle enough to embrace a child.

The stuff at the end reminds me of Golan Levin's adorable googly-eyed worm robot (which was a menacing ABB IRB-2400/16 underneath):

Interactive Worm Robot:

https://www.youtube.com/watch?v=OjUwH9tOdus

>"Double-Taker (Snout)" (interactive robotic installation, 2008) deals in a whimsical manner with the themes of trans-species eye contact, gestural choreography, subjecthood, and autonomous surveillance. The project consists of an eight-foot (2.5m) long industrial robot arm, costumed to resemble an enormous inchworm or elephant's trunk, which responds in unexpected ways to the presence and movements of people in its vicinity. Sited on a low roof above a museum entrance, and governed by a real-time machine vision algorithm, Double-Taker (Snout) orients itself towards passers-by, tracking their bodies and suggesting an intelligent awareness of their activities. The goal of this kinetic system is to perform convincing "double-takes" at its visitors, in which the sculpture appears to be continually surprised by the presence of its own viewers — communicating, without words, that there is something uniquely surprising about each of us. More information at http://www.flong.com/projects/snout/.

https://web.archive.org/web/20080803011529/https://www.flong...

https://library.e.abb.com/public/76ee88849028406b94407ad406d...


Don, I was never there, but the ITS page mentions the robot, I think?

https://en.wikipedia.org/wiki/Incompatible_Timesharing_Syste...


Ask Henry if the arm might have turned up at Ivy Street.



