Hacker News

How do you emulate a 32 bit CPU with only eight bits? Seems like you'd have to quadruple everything and then flatten it again after.



This is actually pretty common. There was a time when not every CPU had an FPU, and floating point operations were emulated in software when required. The same goes for doing 64-bit integer operations on a 32-bit CPU, or arbitrary-precision arithmetic on today's CPUs. You do it just like in school, piece by piece, but instead of single digits you use the largest registers available. If you really like it slow, you could of course also use plain strings to represent numbers and do all the math char by char, digit by digit.
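The "piece by piece" idea can be sketched in C. This is a hypothetical illustration, not anyone's actual implementation: a 64-bit addition built from two 32-bit halves (the `u64_t` struct and `add64` name are made up for the example), detecting the carry the same way a compiler would lower it on a 32-bit machine.

```c
#include <stdint.h>

/* Sketch: a 64-bit integer represented as two 32-bit limbs,
 * low half first, as it would live in memory on a 32-bit CPU. */
typedef struct { uint32_t lo, hi; } u64_t;

static u64_t add64(u64_t a, u64_t b) {
    u64_t r;
    r.lo = a.lo + b.lo;
    /* Unsigned addition wraps around, so if the low sum came out
     * smaller than one of its inputs, the addition overflowed and
     * a carry of 1 must propagate into the high half. */
    uint32_t carry = (r.lo < a.lo) ? 1 : 0;
    r.hi = a.hi + b.hi + carry;
    return r;
}
```

The same limb-plus-carry pattern extends to any width: four 16-bit limbs, eight 8-bit limbs, or thousands of limbs in a bignum library.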


There used to be bit-slice processors that chained together narrow (e.g. 4-bit) chips to build a wider processor. This is pretty much a software emulation of that technique, minus the parallelism.

https://en.wikipedia.org/wiki/Bit_slicing


Commonly, to perform something like arbitrary-width addition, these CPUs have a carry flag that is set if an operation overflows, so that the carry can be added into the next addition. This way you can simply chain 8-bit additions to get the desired operation width. For example, a 16-bit addition (0x5555 + 0x1234) on the 6502:

    clc            ; clear the carry before the first addition
    lda a          ; low byte of a
    adc b          ; add low byte of b, setting carry on overflow
    sta res
    lda a+1        ; high byte of a
    adc b+1        ; add high byte of b plus the carry from the low byte
    sta res+1

    a:
        .byte $55, $55 ; 0x5555, little-endian

    b:
        .byte $34, $12 ; 0x1234, little-endian

    res:
        .byte $00, $00 ; reserved for result


The simple answer would be: by using memory. More bits just means that the CPU can operate on larger amounts of data at once. For example, a shift would need to be performed as multiple 8-bit operations, but it's entirely doable (I hope the CPU he used has shifts through the carry flag).
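As a sketch of how a wide shift decomposes into byte operations (a hypothetical illustration in C, mirroring what an 8-bit CPU does by chaining its shift and rotate-through-carry instructions; the function name and byte layout are assumptions):

```c
#include <stdint.h>

/* Left-shift a 32-bit value stored as four bytes, low byte first,
 * by one bit. The bit shifted out of each byte becomes the carry
 * shifted into the next, just like ASL followed by ROLs on a 6502. */
static void shl32_bytes(uint8_t v[4]) {
    uint8_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint8_t next_carry = (v[i] >> 7) & 1;   /* bit falling off the top */
        v[i] = (uint8_t)((v[i] << 1) | carry);  /* shift in the old carry */
        carry = next_carry;
    }
}
```

An n-bit shift is then either n passes of this, or a byte-move for each whole multiple of 8 plus at most 7 single-bit passes.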


Adding to some of these other comments: when IBM did their groundbreaking 32-bit System/360 in the 1960s, they started with a nice silicon-transistor-based logic family, but it pretty much ran at only one speed. So to deliver a family of computers with the same ISA, they microcoded the slower ones.

The very slowest one had 8-bit-wide execution units, and presumably used similar techniques to deliver its 32-bit macroarchitecture. Per Wikipedia (https://en.wikipedia.org/wiki/IBM_System/360), the Model 30 in 1965 could execute "up to" 34,500 instructions per second, while the hardwired Model 75 could do about a million. See e.g.: https://en.wikipedia.org/wiki/IBM_System/360#Table_of_System...


Sounds like that's exactly what he did.



