RISC vs. CISC – Why Instruction Sets No Longer Matter

For a long time I believed RISC (reduced instruction set computing) was superior to CISC (complex instruction set computing) because it was faster and more efficient; this article suggests otherwise.
https://ethw.org/Why_Instruction_Sets_No_Longer_Matter

“Having looked briefly at some samples of both the RISC and CISC instruction sets, let us return now to the question at hand: which of these competing firmware architectures is actually better?

The answer, it turns out, is neither.

For a few years, yes, it seemed like RISC architectures such as SPARC really were delivering on their promise and outperforming their CISC machine contemporaries. But Robert Garner, one of the original designers of the SPARC, argues, compellingly, that this is more of a case of faulty correlation. The performance that RISC architectures were achieving was attributed to the nature of the instruction set, but in reality it was something else entirely–the increasing affordability of on-chip memory caches, which were first implemented on RISC machines.[8] When queried on the same issue Peter Capek, who developed the Cell processor jointly for IBM and Sony, concurs: it was the major paradigm shift represented by on-chip memory caches, not the instruction set, that mattered most to RISC architectures.[9]

RISC architectures, like the Sun SPARC, were simply the first to take advantage of the dropping cost of cache memory by placing it in close proximity to the CPU and thus “solving,” or at least creatively assuaging, one of the fundamental remaining problems of computer engineering–how to fix the huge discrepancy between processor speed and memory access times. Put simply, regardless of instruction set, because of the penalties involved, changes in memory hierarchy dominate issues of microcode.

In fact, both Garner and Capek argue that RISC instruction sets have in the last decade become very complex, while CISC instruction sets such as the x86 are now more or less broken down into RISC-type instructions at the CPU level whenever and wherever possible. Additionally, once CISC architectures such as x86 began to incorporate caches directly onto the chip as well, many of the performance advantages of RISC architectures simply disappeared. Like a rising tide, increases in cache sizes and memory access speeds float all boats, as it were.

Robert Garner cites both this trend and the eventual development of effective register renaming solutions for the x86 (which work around the smaller 8-register limit on that architecture and thus allow for greater parallelism and better out-of-order execution) as the “end of the relevance of the RISC vs. CISC controversy”.[8]”
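To make the memory-hierarchy point concrete, here is a small C sketch of my own (not from the article): the same arithmetic is done twice over a large array, once with cache-friendly sequential accesses and once with large strides that defeat the cache. The matrix size and the use of clock() are illustrative assumptions; on most machines the strided version is several times slower, even though the instruction stream is essentially the same work.

/* Rough illustration of how the memory hierarchy dominates performance:
 * identical arithmetic, but one loop order walks memory sequentially
 * (cache-friendly) and the other strides across it (cache-hostile). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096   /* 4096 x 4096 doubles = 128 MB, an assumed size */

static double sum_row_major(const double *a) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)        /* walk each row in order:      */
        for (size_t j = 0; j < N; j++)    /* consecutive addresses reuse  */
            s += a[i * N + j];            /* the same cache lines         */
    return s;
}

static double sum_col_major(const double *a) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)        /* walk down columns: each      */
        for (size_t i = 0; i < N; i++)    /* access jumps N*8 bytes and   */
            s += a[i * N + j];            /* misses the cache constantly  */
    return s;
}

int main(void) {
    double *a = malloc((size_t)N * N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++) a[i] = 1.0;

    clock_t t0 = clock();
    double s1 = sum_row_major(a);
    clock_t t1 = clock();
    double s2 = sum_col_major(a);
    clock_t t2 = clock();

    printf("row-major: sum=%.0f time=%.2fs\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("col-major: sum=%.0f time=%.2fs\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}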
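The claim that x86 instructions are “broken down into RISC-type instructions at the CPU level” refers to micro-op cracking in the decoder. The toy C sketch below is my own illustration of the idea, not Intel’s actual micro-op encoding: a memory-operand add is split into a simple load, an add, and a store, each of which looks like a RISC-style operation.

/* Toy sketch of micro-op cracking: the CISC-style instruction
 *   add [rdi], rax
 * is decoded into three simple micro-ops. The struct layout and the
 * three-op split are illustrative assumptions only. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } UopKind;

typedef struct {
    UopKind kind;
    const char *dst, *src1, *src2;   /* register or memory names as text */
} Uop;

/* "Decode" add [mem], reg into load / add / store micro-ops. */
static int crack_add_mem_reg(const char *mem, const char *reg, Uop out[3]) {
    out[0] = (Uop){ UOP_LOAD,  "tmp", mem,   NULL };  /* tmp   <- [mem]     */
    out[1] = (Uop){ UOP_ADD,   "tmp", "tmp", reg  };  /* tmp   <- tmp + reg */
    out[2] = (Uop){ UOP_STORE, mem,   "tmp", NULL };  /* [mem] <- tmp       */
    return 3;
}

int main(void) {
    Uop uops[3];
    int n = crack_add_mem_reg("[rdi]", "rax", uops);
    static const char *names[] = { "load", "add", "store" };
    for (int i = 0; i < n; i++)
        printf("%-5s %s, %s%s%s\n", names[uops[i].kind], uops[i].dst,
               uops[i].src1, uops[i].src2 ? ", " : "",
               uops[i].src2 ? uops[i].src2 : "");
    return 0;
}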
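Finally, register renaming. The x86’s eight architectural registers force compilers to reuse register names, which creates false write-after-write and write-after-read dependencies; a renamer gives every new write a fresh physical register, so logically independent work can proceed out of order. The sketch below is a minimal illustration under assumed sizes (8 architectural, 32 physical registers); a real renamer also frees physical registers when instructions retire, which this omits.

/* Toy sketch of register renaming (my illustration, not Garner's design).
 * Two independent additions both write architectural register r0, but
 * renaming gives each a distinct physical destination, removing the
 * false dependency between them. */
#include <stdio.h>

#define NUM_ARCH 8    /* x86-like architectural register count          */
#define NUM_PHYS 32   /* assumed physical register file size; a real    */
                      /* core stalls allocation when all are in use     */

typedef struct { int dst, src1, src2; } Instr;   /* dst <- src1 + src2 */

int main(void) {
    int map[NUM_ARCH];            /* architectural -> physical mapping */
    int next_phys = 0;
    for (int r = 0; r < NUM_ARCH; r++) map[r] = next_phys++;

    /* Both instructions write r0, yet they are logically independent. */
    Instr prog[] = {
        { 0, 1, 2 },   /* r0 <- r1 + r2 */
        { 0, 3, 4 },   /* r0 <- r3 + r4 (reuses r0 only for lack of names) */
    };

    for (int i = 0; i < 2; i++) {
        int p_src1 = map[prog[i].src1];   /* read current mappings       */
        int p_src2 = map[prog[i].src2];
        map[prog[i].dst] = next_phys++;   /* allocate a fresh destination */
        printf("r%d <- r%d + r%d   renamed to   p%d <- p%d + p%d\n",
               prog[i].dst, prog[i].src1, prog[i].src2,
               map[prog[i].dst], p_src1, p_src2);
    }
    /* The output shows distinct physical destinations (p8 and p9), so an
     * out-of-order core could execute both additions in parallel. */
    return 0;
}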
