This reminds me of the dial-in server I built in our student home back in '97-'98. It was a 386SX at 16 MHz with 2 MB of memory and a 40 MB hard drive that you had to preheat in the oven or it wouldn't spin up. That was realistically the lowest-spec'd hardware you could actually get Linux running on.
It took half an hour to boot, but that was okay since it was always on anyway. It took about five minutes to set up a PPP connection over a dial-up modem, and then the whole house had internet access through its network adapter. Unless someone had tripped over the coax cables again, of course...
Just add it to one of the many HN browser extensions.
The tough part is correlating the different URLs and titles while filtering out the irrelevant comments from past discussions. I don't just want to know that this is an old story (I can figure that out), or that a commenter five years ago called it a "cool project" they might use sometime soon.
I want to know what the informed comments were at that time. Was it important in that time period? What were the concerns at the time? Did they abandon the idea? Why?
BogoMIPS is a really bad estimate of performance (hence the 'bogo'). The BogoMIPS calculation differs considerably even between different ARM subarchitectures. There was a good article about the issue in LWN recently: http://lwn.net/Articles/628531/
This is actually pretty common. There was a time when not every CPU had an FPU, and floating-point operations were emulated in software when required. The same goes for 64-bit integer operations on a 32-bit CPU, or arbitrary-precision arithmetic on today's CPUs. You do it just like in school, piece by piece, but instead of single digits you use the largest registers available. If you really like it slow, you could of course also represent numbers as plain strings and do all the math character by character, digit by digit.
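To make the schoolbook idea concrete, here's a minimal C sketch (the function name and layout are mine, not from any particular library) of adding two 64-bit numbers using only 32-bit operations, with 2^32 playing the role of a single digit:

  #include <stdint.h>

  /* r = a + b, where each 64-bit value is split into 32-bit
     (hi, lo) halves. Unsigned overflow wraps around, so the
     carry out of the low half is simply "sum < operand". */
  static void add64(uint32_t ahi, uint32_t alo,
                    uint32_t bhi, uint32_t blo,
                    uint32_t *rhi, uint32_t *rlo)
  {
      uint32_t lo = alo + blo;     /* low "digit", may wrap */
      uint32_t carry = lo < alo;   /* 1 if it wrapped, else 0 */

      *rlo = lo;
      *rhi = ahi + bhi + carry;    /* propagate into the high digit */
  }

This is pretty much what compilers emit for you when you use a 64-bit type on a 32-bit target, minus the dedicated add-with-carry instruction.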
There used to be bit-slice processors that chained together narrow (e.g. 4-bit) chips to build a wider processor. This is pretty much a software emulation of that technique, but lacking the parallelism.
Commonly, to perform something like an arbitrary-width addition, these CPUs have a carry flag that is set if an operation overflows, so that the carry can be added into the next addition. This way you can simply chain simple 8-bit additions to get the desired operation width. For example, a 16-bit addition (0x5555 + 0x1234) on the 6502 looks like this:
  clc             ; clear carry before the first addition
  lda a           ; low byte of a
  adc b           ; add low byte of b, setting carry on overflow
  sta res         ; store low byte of the result
  lda a+1         ; high byte of a
  adc b+1         ; add high byte of b plus the carry
  sta res+1       ; store high byte of the result
  rts             ; return (don't fall through into the data)

  a:
  .byte $55, $55  ; 0x5555, little-endian
  b:
  .byte $34, $12  ; 0x1234, little-endian
  res:
  .byte $00, $00  ; reserved for the result (0x6789)
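The same chained-carry pattern, written out in C for an arbitrary number of bytes (a sketch; the function name is mine):

  #include <stdint.h>
  #include <stddef.h>

  /* res = a + b, all little-endian byte arrays of length n.
     The 'carry' variable plays the role of the CPU's carry flag. */
  static void add_wide(uint8_t *res, const uint8_t *a,
                       const uint8_t *b, size_t n)
  {
      unsigned carry = 0;
      for (size_t i = 0; i < n; i++) {
          unsigned sum = a[i] + b[i] + carry;  /* at most 0x1FF */
          res[i] = (uint8_t)sum;               /* keep the low 8 bits */
          carry = sum >> 8;                    /* carry into the next byte */
      }
  }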
The simple answer would be: by using memory. More bits just means the CPU can operate on larger amounts of data at once.
For example, a shift would need to be performed as multiple 8-bit operations, but it's entirely doable (I hope the CPU he used has rotate-through-carry instructions).
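For instance, a multi-byte right shift by one can be done high byte first, passing the bit that falls off each byte into the top of the byte below it, just like a rotate-through-carry instruction would. A rough C sketch (assuming little-endian byte order; the name is mine):

  #include <stdint.h>
  #include <stddef.h>

  /* Shift a little-endian n-byte value right by one bit. */
  static void shr1_wide(uint8_t *v, size_t n)
  {
      unsigned carry = 0;                /* bit shifted out of the byte above */
      for (size_t i = n; i-- > 0; ) {    /* high byte first */
          unsigned out = v[i] & 1;       /* bit falling into the byte below */
          v[i] = (uint8_t)((v[i] >> 1) | (carry << 7));
          carry = out;
      }
  }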
Adding to some of these other comments: when IBM did their groundbreaking 32-bit System/360 in the 1960s, they started with a nice silicon-transistor-based logic family, but it pretty much ran at only one speed. So to deliver a family of computers with the same ISA, they microcoded the slower ones.
While I agree it is not novel at all [1], it is a useful exercise for someone to do, and it especially reinforces the notion that Turing discussed with respect to computability. For a long time (and possibly even today), every IBM mainframe could emulate (at speed or faster) all previous versions of their mainframe line, such that the investment in software was preserved.
[1] I did a PDP-11 emulator on the Arduino with FRAM chips for 'core', which was made quite a bit easier by the existence of PDP-11 instruction set diagnostics and other test software that validated the CPU was running "correctly".
Somehow, the fact that this was accomplished by emulating a 32-bit processor on an 8-bit processor made it less appealing to me. I'm sure it was technically very challenging, but I would have been more excited to see Linux somehow running natively on the processor. I wonder what the slowest processors running Linux natively are?
For x86, older versions of Linux ran on a 386, but current versions require a 486, or a Pentium if you turn on stack protection (as most distro kernels do). (Almost nobody noticed.) However, drivers for ISA and similar still exist in the kernel, so you could likely get the most recent kernel running on a real 486, if you had one around.
The slowest systems still running Linux, though, are probably those running the m68k port. Many people doing development for that port use m68k emulators, which run many times faster than the fastest available hardware. However, at least in theory, you can run the m68k port on any m68k processor that has an MMU.
The main thing is to get the ARM instructions emulated correctly, and that's not too bad. If all of those are correct, it should boot and do basic stuff. The next bit of work is getting the peripherals going.
Interesting, but I found this effort http://www.homebrewcpu.com/ more impressive overall. It quite literally is a home-made CPU, which then had Minix 2 ported to it and booted. A very impressive achievement.
Neat article! I love the box he put it in. The neatly labeled sections are somewhat fascinating and give interesting insight into how a CPU works. Things seem different when your CPU is a foot long and sits on the table I suppose...