That's a very impressive piece of work. I've worked on a (much!) simpler SDLC card and wrote the firmware for it (someone else did the hardware); it was already quite a chunk of work to get that up and running and stable enough to process production data. In the end only a few dozen were built, but they worked until the whole system was decommissioned on account of obsolescence, more than a decade later.
What you've built there would run circles around what I was involved in. Does the hardware still exist?
Thanks. Yes, I was pretty proud of what we built. We had two orders of magnitude price-performance advantage over the state of the art for about a year, but before we could get any significant market share, gigabit ethernet came along (and the price of fast ethernet dropped) and that wiped us out. But it was fun while it lasted.
The hardware kind of exists. I have two prototype boards in my closet but they haven't been powered on in 30 years. I also don't know where the code for the device drivers is any more, though I have a box full of old hard drives that probably has it somewhere. Maybe some digital archeologist will dig it up some day.
To give proper credit where it is due, the idea and hardware design for Flownet were the work of Mike Ciholas, who went on to found a very successful hardware design company [1]. I wrote the device drivers and did the marketing, which is probably one of the reasons we failed. Turns out I'm terrible at marketing.
Well, there are probably a couple of old timers on HN who really appreciate the kind of skill that it took, even though it doesn't show in your bank balance.
It also makes me very grateful for the magic that goes on behind the scenes whenever I plug in a high speed USB device and it 'just works'; the kind of wizardry involved in this sort of thing is highly underappreciated.
Very true. Ironically, my career has come full-circle and I am now working for Intel doing chip design (actually working on developing tools that do chip design). The process of producing modern state-of-the-art chips is truly mind-blowing.
I watched that 'indistinguishable from magic' video and that is indeed the only appropriate way to describe it. Even if our base tech never improved from this point forward, I'd say that's a job well done.
But I also have a soft spot for the GA144, which represents the other extreme: it's what one man can do versus what a whole team of talented engineers can do.
Is your work related in any way to Symbolics NS, beyond using the same language? Maybe you're reusing some public knowledge from the papers that were published about it?
Is your team hiring? I'm working at Intel, though far from hardware design - in the MPI library's team. In my spare time I learned Common Lisp and some basics of hardware design, so I would be happy to make my hobbies relevant to my job and work with you.
As far as I know the i960 was the follow-on to the iAPX432 (via a joint venture with Siemens called BiiN) and not at all related to the i860.
This left Intel with three options for the future in the early 1990s: x86, i860, and i960. They decided to bet on the x86, and moved the i860 to graphics cards and the i960 to smart I/O cards.
It is probably not right to say that BiiN was in any way a follow-on to the iAPX432.
It might have inherited a few ideas from the iAPX432, and maybe also some designers, but otherwise it was a very different architecture, whose development was obviously triggered by the widespread publicity about the advantages of RISC. At the same time, most companies involved in computers had concurrent RISC development programs, e.g. IBM, ARM, HP, AMD, Motorola, Fairchild, DEC.
I have not yet seen any document that explains why Siemens joined Intel in the BiiN project, what Siemens expected from it, or how the Intel and Siemens contributions to the project were split.
In any case, in 1988 Siemens chose to exit the BiiN project, leaving Intel as its sole owner.
After renaming BiiN to 80960 (Intel already had an 8096 series of 16-bit microcontrollers, which the 80960 was supposed to replace), Intel introduced the first two products based on it before the end of 1988, and additional variants in 1989.
The 80960 included many innovations: it was the first RISC ISA designed by Intel, and the 1988 products were already the first monolithic CPUs with an atomic fetch-and-add instruction (which had been invented in 1980/1981 for the NYU Ultracomputer project). The atomic fetch-and-add instruction was later included in the Intel 80486 instruction set, with the mnemonic XADD.
One of the 80960 variants introduced in 1989 (80960CA) was the first monolithic superscalar CPU, one year before IBM POWER. The first superscalar design had been the IBM ACS research project (1966), but the word "superscalar" was coined only in 1987, by the team designing IBM POWER. In this case Intel was very quick to incorporate published research into its designs, even quicker than those who published it (though IBM POWER was obviously a far more ambitious project, with CPUs for scientific workstations delivering much higher performance than the 80960CA).
After learning how to implement these techniques in the 80960, Intel included the more important ones in its mainstream CPUs: the 80486, then the Pentium (the first mainstream superscalar).
While the 432 was a memory-to-memory CISC and the 960 a classic RISC, I do think they have a lot in common technically. A key difference is that the 432 used positive/negative offsets to separate raw data from capabilities, while the 960 had an optional 33rd bit to do the same thing, which makes it far simpler to mix data and capabilities on the stack.
The "operating system in hardware" is microcode in the 432 but RISC-friendly in the 960; either way, it is still there.
I am talking about the original 960MX here. Most of these features were dropped in the following 960 models, if I understood correctly. Those indeed do not have much in common with the 432.
I agree that the mechanism of implementing memory protection through capabilities was inherited by BiiN from the iAPX432, but as you said, after Intel renamed BiiN to 80960 and changed its intended market from general-purpose CPUs to 32-bit microcontrollers, competing with the older 16-bit MCUs or with 32-bit MCUs like the AMD 29000, such high-level features were dropped.
Nowadays there are attempts to resurrect memory tagging as a memory-protection method, e.g. the Cambridge CHERI research project, which ARM has implemented in its Morello demonstrator board.
Like the original BiiN/80960, CHERI uses a 1-bit memory tag for each 128 bits of memory, to distinguish raw memory from capabilities.
I was also a big fan of the i960, and did many designs with the i960CF and i960RP, including an i960CF-based MPEG-2 encoder PCI card from when I worked at Optivision in the '90s.
https://en.wikipedia.org/wiki/Intel_i960
The i960 was a commercial success, the i860 was not.
I used an i960 in my first startup back in the early 90s:
https://flownet.com/gat/fnlj.html
It was an absolute joy to work with, one of the most beautiful processor architectures I've ever seen.
The i960 didn't quite have a million transistors, but it came close, with the high-powered CF version having 900,000.
https://micro.magnet.fsu.edu/optics/olympusmicd/galleries/ch...
To put these numbers in perspective, an M1 Pro has 33 billion (with a b) transistors, the equivalent of roughly 33,000 i960s.