In an alternate universe, we'd all be using descendants of these instead of the 8086.
Part of what killed it, besides not having the economy of scale in microprocessors, is that optimizing compilers got good at turning loops over integers into fast code for regular CPUs, and these are pretty important for overall system performance. So no matter how much faster a custom CPU could run eval, it wasn't going to be competitive at integer loops.
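To put a toy example on it (my own C sketch, not anything from SCHEME-78): the first loop below is the sort of code those optimizing compilers learned to make fly on regular CPUs, while the second is the dependent-load pointer chasing that eval-heavy workloads boil down to.

    /* Toy illustration: the kind of integer loop optimizing
       compilers got very good at -- it stays in registers,
       unrolls, and often vectorizes. */
    long sum_array(const long *a, long n) {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* The cons-cell equivalent: each iteration loads a pointer
       that depends on the previous load, so the CPU mostly
       waits on memory latency instead of doing arithmetic. */
    struct cons { long car; struct cons *cdr; };

    long sum_list(const struct cons *p) {
        long s = 0;
        for (; p; p = p->cdr)
            s += p->car;
        return s;
    }

No matter how clever the hardware, the second loop is bounded by memory latency; the first is bounded by how fast the compiler and ALU can go, which is exactly where commodity CPUs pulled ahead.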
I think it's a little more than that. The Lisp machines were very expensive (though the single-chip versions made things like the MacIvory or MicroExplorer somewhat more affordable), very proprietary, and generally didn't run the software the mass market was looking for (not that they couldn't... there just was never a Lotus 1-2-3 or WordStar for Symbolics). So there was probably never a path to real economy of scale.
But otherwise I think you're spot on, and it's a common arc in this industry: a company comes up with a true innovation solving a problem better than the competitors, Moore's law does its thing, suddenly the COTS products are competitive (through brute force if nothing else) at a lower price point, and the company either has to innovate again or embrace a COTS platform. Jim Gettys wrote a neat paper about how this affected high-end video cards back in the early X11 days (admittedly no longer directly applicable, since GPUs are the COTS solution now, but the principle stands).
Had these been picked for the IBM PC, it would have died and been replaced by some RISC.
Lisp machines only made sense in that very brief window when DRAM was faster than the CPUs that used it, partly because of process technology, partly because of a lack of understanding of CPU design principles. With x86, you could choose to ignore the parts of the design that made it slow and just build on the basics to bring the design forward. With a Lisp machine, that would never have worked.
It wouldn't have been long until you could have bought machines that were more than 10x faster on real workloads, and probably cheaper too.
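Some rough numbers to make the window concrete (my own back-of-envelope, not tied to any particular machine): in the late '70s a microprocessor cycle was on the order of 200-300 ns while DRAM answered in roughly 150-250 ns, so a memory-indirect step per instruction was essentially free. By 1990 a 25-33 MHz CPU had 30-40 ns cycles against DRAM latencies still near 80-100 ns, so every pointer chase cost several cycles while a cached integer loop retired about one iteration per cycle.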
Maybe that's true for this project (implementing a pointer-chasing list interpreter directly in hardware), but it's much less clear to me why it would be true of the much more commonly remembered examples of "lisp machines" like Symbolics, TI, etc.
> In an alternate universe, we'd all be using descendants of these instead of the 8086.
Unlikely. For this to happen, the developers would have had to target the mass market (i.e. work hard to make the processor, and a computer based on it, cheaply available to the masses). This is clearly not what SCHEME-78 was developed for.
I'd argue users wanted off-the-shelf software, not bespoke solutions, which is why they were cool with slow machines that ran 123, dBase and WordPerfect.
What I remember from the 80s/90s is that computer speed was never fast enough to keep up with use cases. Users really did want speed and marketers focused on speed.
Computers are still never fast enough. But for 95% of users, if you said "you can have a computer that's 5x as fast, but won't run any of the software you use every day", they'd pass.
Compiled languages with automatic memory management were already a thing on 16-bit home computers, and they ran fast enough for boring data-entry business software, e.g. anything xBase, or compiled BASIC.
For anything to do with graphics and games, yes, only Assembly would do the trick.
Yes, dBase was written in Assembly, like many compilers and interpreters back then, and I am certain you could dig out a Lisp or Scheme compiler for CP/M likewise written in Assembly.
Yet the folks down at the banks, insurance companies, and video rental stores were using dBase applications written in xBase, compiled via Clipper, not Assembly.
My first computer was a Timex 2068, I kind of know what was being written with what.
I don't know what you're trying to say here. This thread was about someone saying in an alternate universe lisp computers might have caught on.
They were never going to catch on and it was never going to work because people didn't want them. Programmers might have wanted to program in lisp, but users didn't want to buy software made in lisp.
For some reason you're bringing up that someone may have written some scripts in an interpreted language, which, if anything, reinforces that lisp machines weren't necessary.
> I'd argue users wanted off-the-shelf software, not bespoke solutions, which is why they were cool with slow machines that ran 123, dBase and WordPerfect.
Followed by
> I don't know what you're trying to say here, all those programs would have run terribly if they were written in lisp or scheme.
While dBase, the interpreter, was written in Assembly, Clipper, the compiler, was written in C, and dBase software was written in the xBase programming language, which by the Clipper 5 days was comparable to Lisp or Scheme in capabilities: garbage collected, capable of functional programming and OOP, all in 640 KB, running on 20 MHz CPUs.
What does this have to do with people wanting lisp machines?
They couldn't do what people wanted. People didn't want their software written in lisp; they didn't want expensive hardware and slow software. Who cares if that software had interpreters in it?
Even just word processors and spreadsheets were not fast enough for general use in the '80s. Everyone was mostly excited about making those faster, not about graphics and games.