
RISC / CISC was basically IBM marketing speak for "our processors are better," and was never defined in a precise manner. The marketing is dead, but the legend lives on years later.

IBM's CPU advancements of pipelining, out-of-order execution, etc. were all implemented in Intel's chips throughout the 90s. Whatever a RISC machine did, Intel proved that the "CISC" architecture could follow suit.

------

From a technical perspective: all modern chips follow the same strategy. They are superscalar, deeply pipelined, heavily branch-predicted, micro-op / macro-op fusing "emulated" machines using Tomasulo's algorithm (i.e. out-of-order execution) across a far larger physical register set which is completely independent of the architectural specification.

Ex: Intel Skylake has 180 64-bit reorder-buffer registers (despite having 16 architectural registers). The ARM A72 has 128 ROB registers (despite having 32 architectural registers). The "true number" of registers in any CPU is independent of the instruction set.
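The renaming that makes this work can be sketched in a few lines. This is a toy model (the function name, instruction format, and pool size are illustrative, not any real CPU's scheme): each write to an architectural register is mapped to a fresh physical register via a register alias table, which removes false write-after-write and write-after-read dependencies.

```python
def rename(instructions, num_physical=180):
    """Toy register renaming.

    instructions: list of (dest, src1, src2) architectural register names.
    Returns the same instructions rewritten to use physical register ids
    (unmapped sources pass through unchanged).
    """
    rat = {}                          # register alias table: arch -> physical
    free = list(range(num_physical))  # free pool of physical registers
    renamed = []
    for dest, src1, src2 in instructions:
        # Sources read the *current* mapping for their architectural name.
        p1 = rat.get(src1, src1)
        p2 = rat.get(src2, src2)
        # Every destination write is allocated a fresh physical register,
        # so later writes to the same arch register never collide.
        pd = free.pop(0)
        rat[dest] = pd
        renamed.append((pd, p1, p2))
    return renamed

# Two writes to r1 no longer conflict after renaming, so the first and
# third instructions could issue out of order:
prog = [("r1", "r2", "r3"), ("r4", "r1", "r5"), ("r1", "r6", "r7")]
print(rename(prog))  # [(0, 'r2', 'r3'), (1, 0, 'r5'), (2, 'r6', 'r7')]
```

Real hardware also has to free physical registers at retirement and recover the alias table on branch mispredicts, which this sketch omits.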



Since RISC wasn't coined by IBM (but by Patterson and Ditzel) this is just plain nonsense. RISC was and is a philosophy that's basically about not adding transistors or complexity that doesn't help performance and accepting that we have to move some of that complexity to software instead.

Why wasn't it obvious previously? A few things had to happen: compilers had to become sophisticated enough, mindsets had to adapt to trusting these tools to do a good enough job (I actually know several people who in the '80s still insisted on assembler on the 390), and finally, VLSI had to evolve to the point where you could fit an entire RISC CPU on a single die. That last bit was a quantum leap, as you couldn't do this with a "CISC," and the penalty for going off-chip was significant (and has only grown).



