> All modern architectures would be considered RISC architectures
no no no no no no no no
It is sort of like RISC on the inside (uOps), but x86 is __not__ RISC on the outside, which means it has to have a thumping great decoder and a bunch of microcode using power all the time.
Even if the microarchitecture could be considered RISC, the architecture of an x86 computer cannot be, both because of the irregularity of the instruction set and because of the limits on instruction-decode throughput already mentioned.
I think modern architectures are neither RISC nor CISC.
Successful modern architectures have adopted features of both. Even the most RISC architecture these days likely has AES instructions. Even the most CISC architecture is using RISC-like micro-ops.
> Even the most RISC architecture these days likely has AES instructions
AFAIK these sorts of instructions are typically things that would be trivial to implement in hardware but extremely cumbersome to implement in software, like shuffling a bunch of bits around (ARM's "JavaScript instruction", FJCVTZS, is another famous example). These sorts of things would only require a single micro-operation (or a very small number of uops).
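As a concrete illustration, here's a minimal sketch (assuming an x86 CPU with AES-NI and a compiler providing `<wmmintrin.h>`; the function name is mine): one full AES round is a single intrinsic/instruction in hardware, whereas a plain-software version would need table lookups and a lot of bit shuffling.

```c
#include <wmmintrin.h>  /* AES-NI intrinsics; compile with -maes */

/* One AES encryption round: ShiftRows + SubBytes + MixColumns +
 * AddRoundKey, all performed by a single AESENC instruction, which
 * decodes to one (or very few) micro-ops. */
__m128i aes_round(__m128i state, __m128i round_key)
{
    return _mm_aesenc_si128(state, round_key);
}
```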
My understanding is that the big distinction between CISC/RISC comes from things like addressing modes, by which a CISC processor lets you cram many hardware operations into a single software instruction. For instance, on x86 you can write an instruction like 'ADD rax, [rcx + 0x1234 + 8*rdx]' that performs several additions, a bit shift, and a memory access. Whereas on ARM, you would have to split those hardware operations across multiple instructions.
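To make that concrete, here's a minimal sketch in C. The assembly in the comments is illustrative of the two encodings, not exact compiler output, and the register assignments are assumptions:

```c
/* The same C expression, compiled for x86-64 vs AArch64. */
long fold(long acc, long *base, long idx)
{
    return acc + base[idx];
    /* x86-64: the address math (shift + adds) and the load fold
     * into the ADD itself -- one instruction:
     *     add  rax, [rcx + rdx*8]
     *
     * AArch64: a load/store architecture, so the load and the add
     * are separate instructions (though the scaled index still
     * folds into the load's addressing mode):
     *     ldr  x3, [x1, x2, lsl #3]
     *     add  x0, x0, x3
     */
}
```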
> Even the most CISC architecture is using RISC like micro ops
Yes, but as the parent comment points out, that requires a complicated decoder with limited throughput. Using a RISC architecture throughout therefore moves some (but not all) of that instruction-decoding work into the compiler.
(I'm not at all knowledgeable in the field of processor design, so I would be happy to be proven wrong.)
I mean, they do provide a footnote explaining further:
> Even the most common architecture, the Intel Pentium, whilst having an instruction set that is categorised as CISC, internally breaks down instructions to RISC style sub-instructions inside the chip before executing.
That's what I'm saying is wrong. Note that the instructions are merely RISC-style: some fairly large x86 instructions decode into a pretty small number of micro-operations, which implies those "RISC" operations are actually quite bulky.
Issue-width isn't everything, but ARM chips are really showing the limitations of x86.
Addendum: "Wrong" is also too strong. A more nuanced take: even for an introduction to architecture (i.e. for an engineer drawing a trend line with a fat pen on a graph), this is exactly the kind of detail that matters for intuition-building, so the footnote is misleading.