The world's slowest Linux PC (extremetech.com)
102 points by kudu on Feb 15, 2015 | 30 comments



This reminds me of the dial-in server I built in our student home back in '97-'98. It was a 386SX at 16 MHz with 2 MB of memory and a 40 MB hard drive that you had to preheat in the oven or it wouldn't spin up. That was realistically the lowest-spec'd hardware you could actually get Linux running on.

It took half an hour to boot, but that was okay since it was always on anyway. It took about 5 minutes to set up a PPP connection using a dial-in modem, and then the whole house had internet access through its network adapter. Unless someone had tripped over the coax cables again, of course...


I had one when they were pretty new, back in 1991.

Had MS-DOS reduced to the minimum, jumped into Windows 3.1 straight after booting, and used Stacker to roughly double the hard disk's capacity.

By the time I came to use GNU/Linux, I was already on a 75 MHz Pentium and never imagined such a 386SX would ever manage it.



Previously:

https://news.ycombinator.com/item?id=3767410 (1054 days ago)


Thanks. I wonder if I can automate that - although HN bans bots, right?


Just add it to one of the many HN browser extensions.

The tough part is correlating the different URLs and titles while filtering out the irrelevant comments from past discussions. I don't just want to know that this is an old story (I can figure that out), or that five years ago a commenter thought it was a "cool project" they might use sometime soon.

I want to know what the informed comments were at that time. Was it important in that time period? What were the concerns at the time? Did they abandon the idea? Why?


"Calibrating delay loop... 58.77 BogoMIPS"

As a comparison, from my i7-4771 @ 3.5GHz:

smpboot: Total of 8 processors activated (55999.93 BogoMIPS)

Computers sure are fast these days. Too bad we write such slow software for them.


Except if you emulate them in JavaScript. I get 5.10 BogoMIPS: http://copy.sh/v86/?profile=custom&cdrom.url=http://k%C3%A6n...


BogoMIPS is a really bad estimate of performance (hence 'bogo'). There's considerable variation in the BogoMIPS calculation even between different ARM subarchitectures. There was a good article about the issue recently on LWN: http://lwn.net/Articles/628531/
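For reference, the number comes from a calibration loop: at boot the kernel times an empty busy loop against the timer tick, doubling the iteration count until one run spans a tick, then scales the result. A rough user-space sketch of the idea (this is not the kernel code; clock() and a ~10 ms "tick" stand in for jiffies):

    #include <stdio.h>
    #include <time.h>

    /* Empty delay loop; 'volatile' keeps the compiler from deleting it. */
    static void delay(volatile unsigned long loops)
    {
        while (loops--)
            ;
    }

    int main(void)
    {
        /* Double the loop count until one call spans at least ~10 ms,
           roughly what the kernel does against its timer tick. */
        unsigned long loops = 1UL << 12;
        clock_t elapsed;
        for (;;) {
            clock_t start = clock();
            delay(loops);
            elapsed = clock() - start;
            if (elapsed >= CLOCKS_PER_SEC / 100)
                break;
            loops <<= 1;
        }

        /* Same scaling the kernel prints: loops per second / 500000. */
        double lps = (double)loops * CLOCKS_PER_SEC / elapsed;
        printf("~%.2f BogoMIPS\n", lps / 500000.0);
        return 0;
    }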


You'd probably also get a low BogoMIPS score if you booted an image with a web browser, then ran the JavaScript emulation inside the JavaScript emulation.


How do you emulate a 32-bit CPU with only eight bits? Seems like you'd have to quadruple everything and then flatten it again after.


This is actually pretty common. There was a time when not every CPU had an FPU, and floating-point operations were emulated in software when required. The same goes for doing 64-bit integer operations on a 32-bit CPU, or arbitrary-precision arithmetic on today's CPUs. You do it just like in school, piece by piece, but instead of single digits you use the largest registers available. If you really like it slow, you could of course also represent numbers as plain strings and do all the math on a char-by-char, digit-by-digit basis.
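To make the school analogy concrete, here's a minimal C sketch of a 32-bit addition done entirely with 8-bit pieces (bytes stored least significant first; the names are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* 32-bit addition using only 8-bit operations: add the least
       significant bytes first and propagate the carry upward, just
       like column addition. */
    static void add32(const uint8_t a[4], const uint8_t b[4], uint8_t res[4])
    {
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            unsigned sum = (unsigned)a[i] + b[i] + carry;
            res[i] = (uint8_t)sum;   /* low 8 bits */
            carry = sum >> 8;        /* 0 or 1, into the next byte */
        }
    }

    int main(void)
    {
        uint8_t a[4] = {0x78, 0x56, 0x34, 0x12};  /* 0x12345678 */
        uint8_t b[4] = {0x88, 0x88, 0x88, 0x88};  /* 0x88888888 */
        uint8_t r[4];
        add32(a, b, r);
        printf("%02x%02x%02x%02x\n", r[3], r[2], r[1], r[0]); /* 9abcdf00 */
        return 0;
    }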


There used to be bit-slice processors that chained together narrow (e.g., 4-bit) chips to build a wider processor. This is pretty much a software emulation of that technique, but without the parallelism.

https://en.wikipedia.org/wiki/Bit_slicing


Commonly, to perform something like an arbitrary-width addition, these CPUs have a carry flag that is set if an operation overflows, so that the carry can be added into the next addition. This way you can simply chain simple 8-bit additions to get the desired operand width. For example, a 16-bit addition (0x5555 + 0x1234) on the 6502:

    clc             ; clear carry before the low-byte add
    lda a           ; A = low byte of a
    adc b           ; A += low byte of b, carry set on overflow
    sta res         ; store low byte of result
    lda a+1         ; A = high byte of a
    adc b+1         ; A += high byte of b, plus the carry from below
    sta res+1       ; store high byte of result

    a:
        .byte $55, $55      ; 0x5555, little-endian

    b:
        .byte $34, $12      ; 0x1234, little-endian

    res:
        .byte $00, $00      ; reserved for result (0x6789)


The simple answer would be: by using memory. More bits just means the CPU can operate on larger amounts of data at once. A shift, for example, has to be performed as multiple 8-bit operations, but it's entirely doable (I hope the CPU he used has shifts through the carry flag).
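A hypothetical C sketch of that multi-byte shift, carrying the top bit of each byte into the next (the software equivalent of chained rotate-through-carry instructions):

    #include <stdint.h>
    #include <stdio.h>

    /* Left-shift a 32-bit value stored as four bytes (least significant
       first) by one bit, using only 8-bit operations: the bit shifted
       out of each byte becomes the carry into the next. */
    static unsigned shl32(uint8_t v[4])
    {
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            unsigned wide = ((unsigned)v[i] << 1) | carry;
            carry = wide >> 8;        /* bit pushed out the top: 0 or 1 */
            v[i] = (uint8_t)wide;
        }
        return carry;                 /* carry out of the full 32 bits */
    }

    int main(void)
    {
        uint8_t v[4] = {0x01, 0x00, 0x00, 0x80};   /* 0x80000001 */
        unsigned c = shl32(v);
        printf("%02x%02x%02x%02x carry=%u\n", v[3], v[2], v[1], v[0], c);
        /* prints 00000002 carry=1 */
        return 0;
    }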


Adding to some of these other comments: when IBM did their groundbreaking 32-bit System/360 in the 1960s, they started with a nice silicon-transistor logic family, but it pretty much ran at only one speed. So to deliver a family of computers with the same ISA, they microcoded the slower ones.

The very slowest one had 8-bit-wide execution units, and presumably used similar techniques to deliver its 32-bit macroarchitecture. Per Wikipedia (https://en.wikipedia.org/wiki/IBM_System/360), the Model 30 in 1965 could execute "up to" 34,500 instructions per second, while the hardwired Model 75 could do about a million. See e.g.: https://en.wikipedia.org/wiki/IBM_System/360#Table_of_System...


Sounds like that's exactly what he did.


While I agree it is not novel at all [1], it is a useful exercise for someone to do, and it especially reinforces the notion Turing established about computability: one machine can compute anything another can. For a long time (and possibly even today), every IBM mainframe could emulate (at speed or faster) all previous versions of the mainframe line, so that the investment in software was preserved.

[1] I did a PDP-11 emulator on the Arduino with FRAM chips for 'core', which was made quite a bit easier by the existence of PDP-11 instruction-set diagnostics and other test software that validated the CPU was running "correctly".


Somehow, the fact that this was accomplished by emulating a 32-bit processor with an 8-bit processor made it less appealing to me. I'm sure it was technically very challenging, but I would have been more excited to see Linux somehow running natively on the processor. I wonder what the slowest processors running Linux natively are?


For x86, older versions of Linux ran on a 386, but current versions require a 486, or a Pentium if you turn on stack protection (as most distro kernels do). (Almost nobody noticed.) However, drivers for ISA and similar still exist in the kernel, so you could likely get the most recent kernel running on a real 486, if you had one around.

The slowest systems still running Linux, though, are probably those running the m68k port. Many people doing development for that port use m68k emulators, which run many times faster than the fastest available hardware. However, at least in theory, you can run the m68k port on any m68k processor that has an MMU.


And you can run uClinux on the m68k processors that don't have an MMU. I was using it on a Motorola DragonBall microcontroller... 14 years ago.


Can't comment on Linux, but we used to run Xenix on a 286.


Almost certainly one of the things supported by http://www.uclinux.org/.


How I would dread debugging this machine.


The main thing is to get the ARM instructions working correctly, so that's not too bad. If all of those are correct, it should boot and do basic stuff. The next bit of work is to get the peripherals going.
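To give a flavor of what "getting the instructions working" means, here's a toy C sketch that executes a single ARM data-processing ADD against a register file. This is not the author's emulator; a real one has to decode every opcode, condition code, shifter operand, and flag update exactly right:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy register file; a real emulator also tracks CPSR flags,
       memory, banked registers, and so on. */
    static uint32_t regs[16];

    /* Execute one ARM data-processing ADD (register operand, no shift).
       Instruction layout: cond(31-28) opcode(24-21) Rn(19-16)
       Rd(15-12) Rm(3-0). */
    static void step(uint32_t insn)
    {
        uint32_t cond   = insn >> 28;
        uint32_t opcode = (insn >> 21) & 0xF;
        uint32_t rn     = (insn >> 16) & 0xF;
        uint32_t rd     = (insn >> 12) & 0xF;
        uint32_t rm     = insn & 0xF;

        if (cond != 0xE)          /* only handle "always" here */
            return;
        if (opcode == 0x4)        /* 0100 = ADD */
            regs[rd] = regs[rn] + regs[rm];
    }

    int main(void)
    {
        regs[1] = 40;
        regs[2] = 2;
        step(0xE0810002);               /* ADD r0, r1, r2 */
        printf("r0 = %u\n", regs[0]);   /* 42 */
        return 0;
    }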


Interesting, but I found this effort http://www.homebrewcpu.com/ more impressive overall. It quite literally is a home-made CPU, which then had Minix 2 ported to it and booted. A very impressive achievement.


Neat article! I love the box he put it in. The neatly labeled sections are fascinating and give interesting insight into how a CPU works. Things seem different when your CPU is a foot long and sits on the table, I suppose...


Thanks for this link. It deserves a separate thread!


[flagged]


Can you point me to the other ones for comparison?


There are infinitely many ways to run Linux slowly by emulating compatible processors (here, ARM) on slow hardware.




