Goodbye x86. The FUTURE is RISC-V

1 Like

Yeah… RISC is good…


I would not bet on that.

In the 1990s, Intel attempted to replace x86 with Itanium. Today, most people ask, “What’s Itanium?”

RISC-V may have a future on low-end mobile devices to avoid royalty fees on ARM IP.

3 Likes

I agree. Every time the market touts RISC, we tend to see a spike in x86 development, and then x86 bursts back to the top of the market with real improvements.

At this point mobile may be the next wave of PCs, but in the long run I see these as minor choices about which architecture to use, with new FPGA- and ASIC-based SoC boards becoming dominant in consumer markets.

But overall, since gaming is no longer the market driver, we are always going to have x86 with ASICs on the data-center side of IT.

1 Like

They’ve tried to snuff Von Neumann for the last 74 years…

He’s the zombie that just won’t die. RISC, ASIC, FPGA?

They might as well try to reinvent history without Alan Turing; some things just won’t go away. Newton, Einstein, Shockley, Von Braun…forever.

2 Likes

I was at ISCA a few years back and went to a talk on a fully open-source ISA; in retrospect, some of the naysayers sounded a little too eager to say it would never happen. I doubt Intel and others will go down without a fight, but I guess we’ll see. I’m not too jazzed about the all-cloud future the video ponders toward the end, though, at least not without major changes to the law regarding data and privacy.

It was, surprisingly, delivered, but not at the promised performance. They had some severe heat issues with those chips.

1 Like

Unless you can create a whole new market like with the iPhone / Android and ARM, it is always the software base that will limit adoption.

The x86 architecture has had a rather interesting evolution.
Proof that marketing moves can force an inferior design onto the masses.
The instruction set and architecture got updates to accommodate DSP-type processing.
More proof that, with brute-force speed, an inferior design can get things done.
The base hardware may take on some type of RISC architecture, but there will be some layer of abstraction, obfuscation, etc. that keeps and accommodates the x86 instruction set.
That’s where the software base is, and the cost. Good luck changing that.

Had Andy Grove been the CEO of Zilog (or?), x86 would just be another footnote on the page of history.

Exactly. Apple had enough of a monopoly on both the hardware and the software of the Macintosh that they were able to force a radical processor change twice. Likewise, Microsoft was able to do it twice with the Xbox. Same with the other gaming consoles. Most users do not care which processor drives their device as long as software is available at a reasonable price.

Intel never had that kind of power, thanks to AMD. Microsoft has no incentive to force a significant change to the Wintel hardware platform. Nobody else has close to that kind of power today, not even Big Blue IBM, which originally created the platform. They learned expensive lessons with OS/2 and Micro Channel.

Software developers may complain, but almost nobody writes in assembly language anymore. In fact, some developers may welcome a change because it may mean a temporary return of the Gold Rush Days™ when it was easy to make a small fortune with an app.

Apple may switch iOS devices to RISC-V to improve their margins, but that is the only tectonic shift I can see.

1 Like

Looking back, the evolution of the personal computer was not an accident.

Around 1980, the biggest limitation of the 8-bit CPU, namely the 64 Kbyte address space, was readily apparent. Programmers wanted to use something other than assembly language, including me, and I love assembly language. The killer app of the day, the electronic spreadsheet, was in real need of more memory. With graphical this and graphical that around the corner, the 8-bitters would not be the future.

The Motorola 68000 was late to the game. The Intel 8088/8086 was barely good enough. All that was needed was someone with enough of a reputation for users to adopt a new platform. Enter IBM.

We do not have a need for a radical change today…

1 Like

I doubt it. Switching to RISC-V would force a massive investment to bring it up to par with their ARM chips. And Cook would rather just buy back more stock with that money.

IMO, the product area most likely to be encroached upon (initially) by RISC-V is the low-end microcontroller space, which abounds with ARM Cortex-M0 through M4 class cores. The compelling economics of RISC-V come from eliminating the royalties paid to ARM for its IP.

PlatformIO, the one-size-fits-all, vendor-agnostic IDE that started as a much-enhanced platform over Arduino, has partnered with (and received funding from) RISC-V vendors SiFive and Western Digital. The result is an instantly available RISC-V IDE with debugging, achieved by adding RISC-V capability to PlatformIO.
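To give a feel for how little ceremony that involves, here is a minimal `platformio.ini` sketch for a RISC-V project. The platform, board, and framework identifiers below are assumptions based on PlatformIO's published SiFive support; check PlatformIO's board registry for the exact names your hardware uses.

```ini
; Hypothetical PlatformIO project targeting a SiFive HiFive1 (RISC-V) board.
; The identifiers below are illustrative, not guaranteed for every setup.
[env:hifive1]
platform = sifive
board = hifive1
framework = freedom-e-sdk
```

With a file like this in place, the usual `pio run` workflow builds and uploads exactly as it would for an ARM target.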

There’s lots of activity in the RISC-V space, beyond the hobbyists.

Western Digital moving to RISC-V


There is so much mobile infrastructure built on ARM that moving to RISC-V isn’t worth it at the mid and high end. Apple isn’t going to completely start over with its chips; neither will Huawei, Samsung, or Qualcomm. And since ARM is not restricted to Intel and AMD the way x86 is, but is open to anyone willing to pay the fee, RISC-V will probably never gain the same traction.

The superior-performing product does not consistently win in the marketplace. This is also true in the services market, where as often as not a superior service experience wins over simply superior service.

Microsoft grabbed so much of IT for decades because they offered solutions that could be developed rapidly and could scale to a degree; security, stability, interoperability, and long-term scaling less so. Ethernet is another story, where initially good-enough performance, a semi-open standard, sufficient backwards compatibility, and steady evolution pushed it past the sneering of Token Ring supremacists, ARCnet, and numerous other forgotten standards to become the singular wired LAN technology in the world and a dominant WAN technology as well.

I don’t see x86 going anywhere in the desktop and low-to-mid-range server space anytime soon. Heck, not too long ago, with a little more determination, Intel might have established a foothold in the mobile world with its now-abandoned Atom cellphone CPU pilot.

Contemplate what the patent royalties cost per unit for the cores in question.

Contemplate that it’s cheaper to license the proprietary pieces from SiFive than it would be to license whole ARM cores. So much so that Western Digital bought a billion-core license from them for their products.

Need an ARM Cortex-M4-class CPU in a design? I can point you to a RISC-V design that outstrips it in power and performance, and it costs you NOTHING to use. The class of designs you’re all talking about is not where ARM started out gaining its market share; it was in this space: deeply embedded. So, will it take over? Your guess is as good as mine, but being dismissive is exactly what the MIPS and x86 crowds were about ARM some ten years back. There’s a hint there…

2 Likes

Cost is king. So, as I said previously: what does it cost for an ARM core? What does it cost for RISC-V? You all talk about tooling… X-D I implemented a RISC-V core in an SoC for a client in Ft. Lauderdale as part of a larger FPGA design. There was a consideration of per-unit costs to ARM for the thing, and possibly a different one for a NIOS-II core. Then I found RISC-V designs, and ended up with one that outstripped most of the soft cores in performance and utilization. Oh, and there’s a GCC and an LLVM target for them… Hm… Cost of changing tooling? There wasn’t any, even if we HAD used ARM or NIOS-II initially. 1.42 DMIPS/MHz. No royalties. NONE. Same compilers with the same rules. Same OSes, even.
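For anyone puzzled by the "1.42 DMIPS/MHz" figure: it is Dhrystone throughput normalized against the VAX 11/780 (the 1-DMIPS reference machine, which scored 1757 Dhrystones/second), divided by the core's clock rate. A quick sketch of the arithmetic in Python; the benchmark score and clock below are made-up illustration values, not measurements from that design:

```python
# Dhrystone score of the VAX 11/780, the machine defined as 1 DMIPS.
VAX_DHRYSTONES_PER_SEC = 1757

def dmips_per_mhz(dhrystones_per_second: float, clock_mhz: float) -> float:
    """Normalize a raw Dhrystone score to DMIPS per MHz of clock."""
    dmips = dhrystones_per_second / VAX_DHRYSTONES_PER_SEC
    return dmips / clock_mhz

# Hypothetical core: 249,494 Dhrystones/sec at 100 MHz.
print(round(dmips_per_mhz(249_494, 100.0), 2))  # → 1.42
```

The per-MHz normalization is what makes scores comparable across soft cores synthesized at very different clock rates.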

2 Likes