
I don't have a gate count (yet). You're right that the 4-bit ALU doesn't save a lot of space overall. The Z-80 designer talks a bit about the 4-bit ALU [1] but doesn't really explain the motivation. My guess is that he was able to use two cycles for the ALU without increasing the overall cycle count, because memory cycles were the bottleneck. If you can cut the ALU in half "for free", why not? Hopefully as I continue analyzing the chip this will become clearer.

[1] See page 10 in http://archive.computerhistory.org/resources/access/text/Ora...

Note: if you're interested in Z-80 architecture, you seriously should read that link.
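
To make the two-pass idea concrete, here's a rough C sketch of an 8-bit add done as two 4-bit ALU passes - purely the arithmetic, not a model of the actual Z-80 datapath, and the function name is just made up for illustration. Note that the half carry (the carry out of bit 3) falls out of the first pass for free, which is exactly what DAA needs.

    /* Illustrative only: an 8-bit add performed as two 4-bit ALU passes.
       Not the real Z-80 circuit, just the arithmetic it has to perform. */
    static unsigned char add8_via_4bit_alu(unsigned char x, unsigned char y,
                                           int carry_in,
                                           int *half_carry, int *carry_out)
    {
        unsigned low  = (x & 0x0F) + (y & 0x0F) + carry_in;    /* first pass  */
        *half_carry   = low >> 4;                  /* carry out of bit 3      */
        unsigned high = (x >> 4) + (y >> 4) + *half_carry;     /* second pass */
        *carry_out    = high >> 4;                 /* carry out of bit 7      */
        return (unsigned char)(((high & 0x0F) << 4) | (low & 0x0F));
    }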



One of the reasons I was told was that the circuit extended to 16 bits easily (and was later used in the Z8000, as I recall) and that doing decimal (BCD) math was easier. DAA (decimal adjust accumulator) was driven by the half-carry flag. In '85 Intel wrote a Z80 emulator in 8086 machine code to try to land a Japanese game console design win, and the decimal arithmetic stuff[1] was a PITA (and, as it turned out, not used a lot in games :-)

[1] The 8080 also had these decimal arithmetic hacks but it didn't have an alternate set of registers to pull from.
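
For reference, a behavioral C sketch of the addition half of DAA - the correction is driven by exactly those two flags, the half carry and the carry. The real Z-80 DAA also handles subtraction via the N flag; that case is left out here, and the helper name is made up.

    /* Behavioral sketch of DAA after a BCD addition (addition case only).
       a = accumulator after a plain binary ADD of two packed-BCD bytes,
       h = half carry out of bit 3, c = carry out of bit 7. */
    static unsigned char daa_after_add(unsigned char a, int h, int c, int *carry_out)
    {
        unsigned char fix = 0;
        if (h || (a & 0x0F) > 9)   /* low digit overflowed past 9  */
            fix |= 0x06;
        if (c || a > 0x99) {       /* high digit overflowed past 9 */
            fix |= 0x60;
            c = 1;                 /* decimal carry out            */
        }
        *carry_out = c;
        return (unsigned char)(a + fix);
    }

E.g. packed-BCD 0x45 + 0x38 adds to 0x7D in binary; the low-digit check adds 0x06 and you get 0x83, the right answer for 45 + 38.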


Thanks for the interesting information. I'm skeptical that the Z-80 designers were planning ahead for 16 bits, though. Simpler BCD math is a possibility - I'll look into this as I examine the Z-80 more. The 6502 wins, though, for crazy but efficient decimal arithmetic - it has a complex patented circuit that detects decimal carry in parallel with the addition/subtraction, and another circuit to add the correction factor to the result without going through the ALU again. So you don't need a separate DAA instruction or additional cycles for decimal correction.
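
Behaviorally (this is emphatically not the patented parallel circuit, just a C sketch of what a decimal-mode ADC ends up computing for valid BCD operands):

    /* Behavioral sketch of 6502 ADC in decimal mode, valid BCD operands.
       The digit corrections and the decimal carry come out of the add
       itself, so no separate DAA instruction or extra cycle is needed. */
    static unsigned char adc_decimal(unsigned char a, unsigned char m,
                                     int carry_in, int *carry_out)
    {
        int lo = (a & 0x0F) + (m & 0x0F) + carry_in;
        if (lo > 9) lo += 6;                      /* correct the low digit  */
        int hi = (a >> 4) + (m >> 4) + (lo > 0x0F);
        if (hi > 9) hi += 6;                      /* correct the high digit */
        *carry_out = hi > 0x0F;
        return (unsigned char)(((hi & 0x0F) << 4) | (lo & 0x0F));
    }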

General question: what things about the Z-80 would you guys like me to write about? Any particular features of the chip? Register-level architecture, gates, or the silicon? Analyzing instructions cycle by cycle? Gate counts by category? Comparison with other microprocessors?


Would love any and all analysis, but most interesting to me would be instruction details and especially the undocumented side effects. I'd also like to see comparison with the 8080 and how Zilog improved/changed the design.


...what things about the Z-80 would you guys like me to write about?

Undocumented instructions! The MOS 6502 had plenty of these and I understand the Z-80 did too.


Whether to provide BCD optimisation always seemed to be a tricky engineering decision; virtually nobody used the 6502 BCD instructions in the amateur home microcomputer environment I was familiar with in the 80s, but it was clearly considered to be important to the CPU manufacturers. Were there BCD benchmarks back then? Was it considered a killer feature to make financial software easier to write? Did Rockwell ever capitalise on that patent?


The Atari's ROMs contained a full (well, for the time) floating point library implementation that used BCD floating point values.

The result was that the Ataris, without even trying, had more accurate decimal math than other contemporary computers. Something to do on the demo machines of the day in stores was to run this loop:

   10 let x = 100
   20 print x
   30 let x = x - 0.01
   40 goto 20
On an Atari this would accurately count down from 100 toward zero with no round-off errors. The exact same loop on an IBM PC started printing things like 99.94999999998 instead of 99.95 after about 5 steps.
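
You can see why with a quick C snippet (illustrative only - obviously not the original Atari or IBM BASIC): 0.01 has no exact binary representation, so every subtraction rounds, and a wider mantissa only pushes the visible error further out rather than removing it. BCD arithmetic sidesteps the problem entirely.

    /* Rough demo: repeated subtraction of binary 0.01 drifts.
       Compile with any C compiler and compare the two printouts. */
    #include <stdio.h>

    int main(void)
    {
        float  xf = 100.0f;
        double xd = 100.0;
        for (int i = 0; i < 5; i++) {
            xf -= 0.01f;
            xd -= 0.01;
        }
        printf("float : %.12f\n", xf);  /* visibly off from 99.95      */
        printf("double: %.17f\n", xd);  /* off only in the far digits  */
        return 0;
    }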



I got some interesting results. MSX and Atari computed the results correctly. On the TRS-80 Model I, wrong results started on the 12th iteration. Apple IIe (AppleSoft), VIC-20 and PET started giving wrong results around the 8th. This has to do with the internal representation of floating-point numbers, of course - the Apple II uses, IIRC, 5 bytes to represent a float, while MSX uses, again IIRC (it's been a long time), 8.


I have no idea what people used BCD for either. I vaguely recall reading that the C64's interrupt routine didn't even bother to clear the D flag, so you had to disable interrupts while using decimal mode! - so obviously most people just weren't expected to be using it.

I only ever saw it used for game scores... and the following, which prints a byte as hex, and is a neat example of cute 6502 code. Saves a few bytes over having a table of hex digits, and you don't need to save X or Y.

    HEX:  PHA                         ; save the byte
          LSR:LSR:LSR:LSR             ; high nibble down into bits 0-3
          JSR HEX2                    ; print the high digit
          PLA                         ; get the byte back
          AND #15                     ; keep the low nibble
    HEX2: CLC
          SED:ADC #$90:ADC #$40:CLD   ; BCD trick: nibble 0-15 -> ASCII hex digit
          JMP PUTCH                   ; print it (tail call, so PUTCH's RTS returns)
(PUTCH takes an ASCII character in A.)

The 68000 had BCD as well. Never used it and don't recall ever seeing it used. I think they only included it so they could have an instruction called ABCD.


I would imagine BCD was useful as a bootstrap for a poor ASM programmer's bignum library (especially when 'bignum' was >16 bits).

Also would be useful for 7-segment LED displays.
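
As a sketch of what the BCD instructions buy you, here's the byte loop in C (a hypothetical helper, not from any particular library) - a per-byte ADD + DAA loop on the 8080/Z-80, or ABCD on the 68000, does these per-digit fixups in one instruction per byte:

    /* Add two little-endian packed-BCD numbers (two decimal digits per
       byte).  Returns the final decimal carry. */
    static int bcd_add(unsigned char *dst, const unsigned char *a,
                       const unsigned char *b, int nbytes)
    {
        int carry = 0;
        for (int i = 0; i < nbytes; i++) {
            int lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry;
            carry = lo > 9;                    /* decimal half carry       */
            if (carry) lo -= 10;
            int hi = (a[i] >> 4) + (b[i] >> 4) + carry;
            carry = hi > 9;                    /* carry into the next byte */
            if (carry) hi -= 10;
            dst[i] = (unsigned char)((hi << 4) | lo);
        }
        return carry;                          /* overflow out of the top digit */
    }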


SNES games used it a lot for storage of things that need to be displayed on screen, such as score and lives and whatnot. If the counter is checked relatively infrequently, the reduced integer range and hassle of switching to and from BCD mode are a lot better than having to divide by ten repeatedly each frame, which is relatively slow.
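
Roughly the tradeoff, sketched in C with made-up names: with the score in packed BCD each on-screen digit is a nibble mask, while a binary counter costs a divide per digit every time you redraw.

    /* Digits for display, least significant first.  Illustrative only. */
    static void digits_from_bcd(unsigned long score_bcd, unsigned char out[8])
    {
        for (int i = 0; i < 8; i++)
            out[i] = (score_bcd >> (4 * i)) & 0x0F;   /* one nibble per digit */
    }

    static void digits_from_binary(unsigned long score, unsigned char out[8])
    {
        for (int i = 0; i < 8; i++) {
            out[i] = (unsigned char)(score % 10);     /* one divide per digit */
            score /= 10;
        }
    }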


It's interesting that the parent comment came up in the context of the chip used in TI calculators. I know the TI-83 series floating point format is BCD [1], but I'm not sure off the top of my head whether the built-in floating-point library actually uses these CPU instructions.

[1] (PDF link) http://education.ti.com/guidebooks/sdk/83p/sdk83pguide.pdf see pages 22-23


In x86-world, floating point hardware was an add-on chip before the 486DX was introduced in 1989 [1] [2].

I think the BCD instructions were never intended to be used outside of software arithmetic libraries, but they provide speedups for crucial operations in such libraries. Sort of like Intel's recently introduced AES instructions, which will probably only be used in encryption libraries.

Of course, it turns out that BCD-based arithmetic isn't much used, because IEEE-style floating point has a fundamental advantage (you can store more precision in a given amount of space) and is also compatible with hardware FPUs.

[1] http://en.wikipedia.org/wiki/Floating-point_unit#Add-on_FPUs

[2] http://en.wikipedia.org/wiki/I486


I'd guess this goes back to the 4004 which was designed for a desktop calculator. Easy BCD really helps those applications so they must have had that in mind as a target market. There's not much point in using BCD once reasonable amounts of RAM and ROM are available.


Except the Z80 / 80xx don't descend from the 4004, they descend from the Datapoint 2200. The 8008 didn't have BCD instructions or a half-carry flag, but it had a parity flag.


Not architecturally, but Federico Faggin and Masatoshi Shima were the key people on the 4004 and 8080 before leaving to form Zilog and build the Z-80. The Z-80 had to have DAA (decimal adjust) to be compatible with the 8080. Possibly the 8080 had DAA to compare well against the 6800. If that's the case, then we must ask where the 6800 got the idea. Could be from minicomputers or even mainframes, but from what I've read the early microcomputer designers had no pretense of making processors to compete anywhere near the high end. Instead their sights were set more along the lines of embedded systems. Desktop calculators fit into that, and Shima himself designed desktop calculators and helped specify the 4004 before he came to Intel. Thus my speculation that the impetus could have come from that direction.



