
Qualcomm doesn't have nearly as much to lose as ARM does and they know it.

Qualcomm is almost certainly ARM's biggest customer. If ARM loses, Qualcomm doesn't have to pay out. If ARM wins, Qualcomm moves to RISC-V and ARM loses even harder in the long-term.

The most likely outcome is that Qualcomm agrees to pay slightly more than they are currently paying, but nowhere near what ARM is demanding, and in the meantime, Qualcomm continues having a team work on a RISC-V frontend for Oryon.




Just the impact of making this move will have a chilling effect, regardless of the long-term outcome.

ARM Ltd wants to position itself as the ISA. It is highly proprietary of course, but the impression they want to give is that it is "open" and freely available, no lock-in, etc.

This really brings the reality back into focus that ARM controls it with an iron fist, and they're not above playing political games and siding against you if you annoy their favored customers. Really horrible optics for them.


"Chilling effect" implies that we should want ARM to succeed.

IMO we need to question the premises of the current IP ecosystem. Obviously, the principles of open source are quite the opposite of how ARM licenses IP. (Afaik, ARM also licenses ready-to-go cores, which is very different from what Q is getting.)

It's easy to see how RISC-V avoids the conflict of interest between owning the ISA and licensing specific implementations.


> RISC-V

We’d just get a bunch of proprietary cores which might not even be compatible with each other due to extensions. Companies like Qualcomm would have zero incentives to share their designs with anyone.

ARM is not perfect, but it at least guarantees a somewhat level playing field.

> Afaik, ARM also licenses ready-to-go cores

Which is the core of Qualcomm’s business. All their phone chips are based on Cortex. Of course ARM has a lot of incentives to keep it that way, hence this whole thing.


> We’d just get a bunch of proprietary cores which might not even be compatible with each other due to extensions.

No different than ARM. Apple has matrix extensions that others don't, for example.

The ecosystem pressure (e.g., Linux and OSS) will strongly encourage compatible subsets, however. There is some concern about RISC-V fragmentation hell that ARM used to suffer from, but at least in the Linux-capable CPU space (i.e., not highly embedded or tiny), a plethora of incompatible cores will probably not happen.

> Companies like Qualcomm would have zero incentives to share their designs with anyone.

ARM cores are also proprietary - all ARM cores, actually; you can't get an architectural license from ARM to create an open source core. With RISC-V you at least can make open cores, and there are some out there.

But opening the ISA is attacking a different level of the stack than the logic and silicon.


> RISC-V fragmentation hell that ARM used to suffer from

Does "fragmentation hell" refer to Age Old problem of massive incompatibility in the Linux codebase, or the more "modern" problem people refer to which is the problem of device trees and old kernel support for peripheral drivers? Because you can rest assured that this majestic process will not be changed at all for RISC-V devices. You will still be fed plenty of junkware that will require an unsupported kernels with blobs. The ISA isn't going to change the strategy of the hardware manufacturers.


Neither. It refers to the proliferation of incompatible extensions that plagued ARM. The most well-known one was the hard-float (hf) ABI incompatibility, but there were others too.


> No different than ARM

Everyone has more or less the same access to relatively competitive Cortex and Neoverse cores. As ARM’s finances show, that’s not a very good business model, so it’s unlikely anyone would do that with RISC-V.

You can make open source cores, but nobody investing the massive amounts of money/resources required to design high-end CPUs will make them open source. The situation with ARM is of course not ideal, but at least the playing field is somewhat more even.


> No different than ARM. Apple has matrix extensions that others don't, for example.

Not anymore, M4 supports ARM SME instead.


> Companies like Qualcomm would have zero incentives to share their designs with anyone.

And yet, that's what Linux did in 1991 - they shared the code, lowering the cost of acquiring an operating system. I wouldn't say there is zero incentive, but the incentive is certainly lower unless there is a profitable complementary hardware implementation that can be sold for less than the proprietary-ISA alternative: a royalty-free license that grants the manufacturer/fab designer "mask rights" lets them keep a small margin out of the price difference with the foundry/proprietary-ISA core competitor.


Hardware is not software; they're almost nothing alike. Sure, it might happen for low-end/old chips with low margins, but not for anything cutting-edge or competitive on the high end.


Software used to be hardware-limited, and therefore efficient. Today, software relies on many more transistors per joule to compute the same number of operations in high-level languages. I'd agree 22nm isn't leading edge, but foundries like SkyWater could theoretically provide more options at 65nm and 90nm in the coming years that are fully open source, except perhaps for the cost of the foundry process: https://riscv.org/wp-content/uploads/2018/07/1130-19.07.18-L...


Yes, I think we might be talking about slightly different things. I don’t really see the open-source model working for higher-end / leading-edge chips.

In segments where chips are almost an interchangeable commodity and the R&D cost might be insignificant relative to manufacturing/foundry it would make a lot more sense.


Even the base integer instructions are Turing complete in RISC-V. The only instruction extensions that could be a point of contention are matrix operations, as T-Head and Tenstorrent already have their own. Even then, I can't see how this "clash" is any different from those in the x64 or ARM space.

Even if Qualcomm makes their own RISC-V chips that are somehow incompatible with everyone else's, they can't advertise that it's RISC-V due to the branding guidelines. They should know them because they are on the board as a founding top tier member.


> they can't advertise that it's RISC-V due to the branding guidelines

Unless it’s a superset of RISC-V. They can still have proprietary extensions


> "Chilling effect" implies that we should want ARM to succeed.

It really doesn't.

I agree an actual open ISA is far preferable; ARM is not much different from x86.


I don’t understand how they can copyright just the ISA. Didn’t the recent Supreme Court case on the Oracle v. Google Java issue decide that you can copy an API if you implement it differently? So what exactly is ARM pulling? Implementation hardware specs? I suspect Qualcomm can do that on its own.


> Didn’t the recent Supreme Court case on the Oracle v. Google Java issue decide that you can copy an API if you implement it differently

No, it didn’t. It ruled that the specific copying and use that Google made of Java in Android was fair use, but it did not rule anything as blanket as “you can copy an API as long as you re-implement it”.


It was a little more nuanced than that. Oracle was hoping SCOTUS would rule that API structure, sequence and organization are copyrightable - the court sidestepped that question altogether by ruling that even if APIs are copyrightable[1], the case fell under fair use. So the pre-existing case law holds (broadly, it's still fine to re-implement APIs - like S3 - for compatibility, since SCOTUS chose not to weigh in on it in Google v. Oracle).

1. Breyer's majority opinion presupposes APIs are copyrightable, without declaring it or offering any kind of test on what's acceptable.


> So the pre-existing case law holds (broadly, it's still fine to re-implement APIs - like S3 - for compatibility, since SCOTUS chose not to weigh in on it in Google v. Oracle).

There is no clear preexisting national case law on API copyrightability, and it is unclear how other, more general, case law would apply to APIs categorically (or even if it would apply a categorical copyrightable-or-not rule), so, no, it's not “ok”, it's indeterminate.


You are right there is no single national case that clearly ruled in either way, but the status quo is that it's de facto ok. Adjacent case law made white room reverse engineering & API re-implementation "ok" de jure, which is why most storage vendors - including very large ones - are confident enough to implement the S3 protocol without securing a license from Amazon first.

Edit: none of the large companies (except Oracle) are foolish enough to pursue a rule that declares APIs as falling under copyright, because they all do it. In Google v. Oracle, Microsoft filed briefs supporting both sides after seemingly changing their mind. In the lower courts, they submitted an amicus brief supporting Oracle; then, when it got to SCOTUS, they filed one supporting Google, stating how disastrous it would be to the entire industry.


I think it might actually be patents rather than copyright restrictions that are in play here.


Qualcomm is slightly bigger than ARM so it seems like a fair fight to me. Does Qualcomm police its IP at all?


According to Wikipedia,

Qualcomm has 50,000 employees, $51 billion assets and $35 billion revenue https://en.wikipedia.org/wiki/Qualcomm

ARM Holdings has 7000 employees, $8 billion assets and $3 billion revenue https://en.wikipedia.org/wiki/Arm_Holdings

I think "slightly bigger" is an understatement.


That's roughly the same size -- like Swamp Thing vs. Namor -- both are name-brand, almost blue-chip heroes.

Or put another way -- as they said in Gawker[1] -- if you're in a lawsuit with a billionaire, you'd better have a billionaire on your side or you're losing.

In this case -- it's unlikely that Qualcomm will have quite enough juice to just smoosh ARM, in the way they would be able to smoosh a company that's 1/100th the size of ARM (not just 1/10th), regardless of the merits of the case.

[1]https://gawker.com/how-things-work-1785604699


> Qualcomm is slightly bigger than ARM so it seems like a fair fight to me.

I'm not really sure what you're responding to. Whether or not something is fair has got nothing to do with size; it's about what is in the contract. None of us knows exactly what's in there, so if it becomes disputed, a court is going to have to decide what is fair.

But that wasn't the point of my comment anyway. I'm talking about how corporations looking to make chips or get into the ecosystem view ARM and its behavior with its architecture and licensing. ARM Ltd might well be in the right here by the letter of their contracts, but canceling their customer's license (ostensibly if not actually siding with another customer who competes with the first) is just not a good look for the positioning they are going for.


You might be right, but they do perhaps also have to establish that their contracts are going to be defended/enforced. Otherwise they have nothing.


There's a big middle ground before the nuclear option of canceling the license entirely, though. It's a bad look too because Nuvia/QC has bad blood with Apple, and Apple is suspected to be a very favored client for ARM, so ARM has this problem of potentially appearing to be taking Apple's side against Qualcomm.

I'm not saying that's what happened -- maybe ARM did try to negotiate, QC was being unreasonable, the whole thing has nothing at all to do with Apple, and ARM had no better options available to them. Could be they were backed into a corner and couldn't do anything else. I don't know. That doesn't mean it's not bad optics for them, though.


Does Qualcomm police its IP at all?

Traditionally they've been known as a tech company that employs more lawyers than engineers, if that tells you anything.

I'd probably go up against IBM or Oracle before I tugged on Qualcomm's cape. Good luck to ARM, they'll need it.


I am an ex Qualcomm employee. We often called ourselves a law firm with a tech problem. QC doesn't actually have more lawyers than engineers, but I'd not be surprised if the legal department got paid more than all the engineers combined.


Oracle v Qualcomm would be epic.


The public will likely be the loser of such a battle. :-(


Not if they end up cancelling each other's patents. :)


>Qualcomm moves to RISC-V and ARM loses even harder in the long-term.

I think long term is doing a lot of heavy lifting here. How long until:

1. Qualcomm develops a chip that is competitive in performance with ARM

2. The entire software world is ready to recompile everything for RISC-V

Unless you are Apple, I see such a transition easily taking a decade.


> 1. Qualcomm develops a chip that is competitive in performance with ARM

Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

If Qualcomm were motivated, I believe they could swap ISAs relatively easily on their flagship processors, and the rest of the core would be the same level of performance that everyone is used to from Qualcomm.

This isn’t the old days when the processor core was deeply tied to the ISA. Certainly, there are things you can optimize for the ISA to eke out a little better performance, but I don’t think this is some major obstacle like you indicate it is.

> 2. The entire software world is ready to recompile everything for RISC-V

#2 is the only sticking point. That is ARM’s only moat as far as Qualcomm is concerned.

Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1. With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

If Qualcomm stopped making ARM processors, what alternatives are you proposing? Everyone is switching to Samsung or MediaTek processors?

If Qualcomm were switching to RISC-V, that would be a sea change that would actually move the needle. Samsung and MediaTek would probably be eager to sign on! I doubt they love paying ARM licensing fees either.

But, all of this is a very big “if”. I think ARM is bluffing here. They need Qualcomm.


> Everyone is switching to Samsung or MediaTek processors?

Why not? MediaTek is very competitive these days.

It would certainly perform better than a RISC-V decoder slapped onto a core designed for ARM having to run emulation for games (which is pretty much the main reason why you need a lot of performance on your phones).

Adopting RISC-V is also a risk for the phone producers like Samsung. How much of their internal tooling (e.g. diagnostics, build pipelines, testing infrastructure) are built for ARM? How much will performance suffer, and how much will customers care? Why take that risk (in the short/medium term) instead of just using their own CPUs (they did it in some generations) or use MediaTek (many producers have experience with them already)?

Phone producers will be happy to jump to RISC-V over the long term given the right incentives, but I seriously doubt they will be eager to transition quickly. All risks, no benefits.


> Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

You're talking essentially about microcode; this has been the case for decades, and isn't some new development. However, as others have pointed out, it's not _as_ simple as just swapping out the decoder (especially if you've mixed up a lot of decode logic with the rest of the pipeline). That said, it's happened before and isn't _impossible_.

On a higher level, if you listen to Keller, he'll say that the ISA is not as interesting - it's just an interface. The more interesting things are the architecture, micro-architecture and as you say, the microcode.

It's possible to build a core with comparable performance - it'll vary a bit here and there, but it's not that much more difficult than building an ARM core for that matter. But it takes _years_ of development to build an out-of-order core (even an in-order takes a few years).

Currently, I'd say that in-order RISC-V cores have reached parity. Out of order is a work in progress at several companies and labs. But the chicken-and-egg issue here is that in-order RISC-V cores have ready-made markets (embedded, etc) and out of order ones (mostly used only in datacenters, desktop and mobile) are kind of locked in for the time being.

> Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1.

That's actually true, but porting Android is a nightmare (not because it's hard, but because the documentation on it sucks). Work has started, so let's see.

> With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

I wonder what the percentage here is... Again, I don't think recompiling for a new target is necessarily the worst problem here.


> > Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

> You're talking essentially about microcode; this has been the case for decades, and isn't some new development.

Microcode is much less used nowadays than in the past. For instance, several common desktop processors have only a single instruction decoder capable of running microcode, with the rest of the instruction decoders capable only of decoding simpler non-microcode instructions. Most instructions on typical programs are decoded directly, without going through the microcode.

> However, as others have pointed out, it's not _as_ simple as just swapping out the decoder

Many details of an ISA extend beyond the instruction decoder. For instance, the RISC-V ISA mandates specific behavior for its integer division instruction, which has to return a specific value on division by zero, unlike e.g. x86, which traps instead; and the NaN-boxing scheme it uses for single-precision floating point in double-precision registers can be found AFAIK nowhere else. The x86 ISA is infamous for having a stronger memory ordering than other common ISAs. Many ISAs have a flags register, which can be set by most arithmetic (and some non-arithmetic) instructions. And that's all for the least-privileged mode; the supervisor or hypervisor modes expose many more details which differ greatly depending on the ISA.
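To make the divide-by-zero point concrete, here is a minimal C sketch of the architecturally defined RISC-V result (these values come from the M-extension spec); on x86 the same operation raises a #DE exception instead, so an emulator or a converted core has to special-case this path rather than reuse the host behavior:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { int64_t quot, rem; } divmod64;

    /* Models the RISC-V DIV/REM results, which never trap. */
    divmod64 riscv_div(int64_t dividend, int64_t divisor) {
        divmod64 r;
        if (divisor == 0) {
            r.quot = -1;            /* all ones, as the spec requires */
            r.rem  = dividend;      /* remainder is the dividend */
        } else if (dividend == INT64_MIN && divisor == -1) {
            r.quot = INT64_MIN;     /* the overflow case is also fully defined */
            r.rem  = 0;
        } else {
            r.quot = dividend / divisor;
            r.rem  = dividend % divisor;
        }
        return r;
    }

    int main(void) {
        divmod64 r = riscv_div(5, 0);
        printf("5 / 0 -> quot=%lld rem=%lld\n", (long long)r.quot, (long long)r.rem);
        return 0;
    }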


> Many details of an ISA extend beyond the instruction decoder. For instance, the RISC-V ISA mandates specific behavior for its integer division instruction, which has to return a specific value on division by zero, unlike e.g. x86, which traps instead; and the NaN-boxing scheme it uses for single-precision floating point in double-precision registers can be found AFAIK nowhere else. The x86 ISA is infamous for having a stronger memory ordering than other common ISAs. Many ISAs have a flags register, which can be set by most arithmetic (and some non-arithmetic) instructions. And that's all for the least-privileged mode; the supervisor or hypervisor modes expose many more details which differ greatly depending on the ISA.

All quite true, and to that, add things like cache hints and other hairy bits in an actual processor.


1. That doesn't mean you can just slap a RISC-V decoder on an ARM chip and it will magically work though. The semantics of the instructions and all the CSRs are different. It's going to be way more work than you're implying.

But Qualcomm have already been working on RISC-V for ages so I wouldn't be too surprised if they already have high performance designs in progress.


That is a good comment, and I agree things like CSR differences could be annoying, but compared to the engineering challenges of designing the Oryon cores from scratch… I still think the scope of work would be relatively small. I just don’t think Qualcomm seriously wants to invest in RISC-V unless ARM forces them to.


> I just don’t think Qualcomm seriously wants to invest in RISC-V unless ARM forces them to.

That makes a lot of sense. RISC-V is really not at all close to being at parity with ARM. ARM has existed for a long time, and we are only now seeing it enter into the server space, and into the Microsoft ecosystem. These things take a lot of time.

> I still think the scope of work would be relatively small

I'm not so sure about this. Remember that an ISA is not just a set of instructions: it defines how virtual memory works, what the memory model is like, how security works, etc. Changes in those things percolate through the entire design.

Also, I'm going to go out on a limb and claim that verification of a very high-powered RISC-V core that is going to be manufactured in high-volume is probably much more expensive and time-consuming than the case for an ARM design.

edit: I also forgot about the case with Qualcomm's failed attempt to get code size extensions. Using RVC to approach parity on code density is expensive, and you're going to make the front-end of the machine more complicated. Going out on another limb: this is probably not unrelated to the reason why THUMB is missing from AArch64.


> verification of a very high-powered RISC-V core that is going to be manufactured in high-volume is probably much more expensive and time-consuming than the case for an ARM design.

Why do you say this?


Presumably, when you have a relationship with ARM, you have access to things that make it somewhat less painful:

- People who have been working with spec and technology for decades

- People who have implemented ARM machines in fancy modern CMOS processes

- Stable and well-defined specifications

- Well-understood models, tools, strategies, wisdom

I'm not sure how much of this exists for you in the RISC-V space: you're probably spending time and money building these things for yourself.


There is a market for RISC-V design verification.

And there are already some companies specializing in supplying this market. They consistently present at the RISC-V Summit.


The bigger question is how much of their existing cores utilize Arm IP… and how sure are they that they would find all of it?


> That doesn't mean you can just slap a RISC-V decoder on an ARM chip and it will magically work though.

Raspberry Pi RP2350 already ships with ARM and RISC-V cores. https://www.raspberrypi.com/products/rp2350/

It seems that the RISC-V cores don't take much space on the chip: https://news.ycombinator.com/item?id=41192341

Of course, microcontrollers are different from mobile CPUs, but it's doable.


That's not really comparable. Raspberry Pi added entirely separate RISC-V cores to the chip, they didn't convert an ARM core design to run RISC-V instructions.

What is being discussed is taking an ARM design and modifying it to run RISC-V, which is not the same thing as what Raspberry Pi has done and is not as simple as people are implying here.


Nevertheless, several companies that originally had MIPS implementations did exactly this, to implement ARM processors.


I am a fan of the Jeff Geerling YouTube series in which he is trying to make GPUs (AMD/Nvidia) run on the Raspberry Pi. It is not easy - and they have the Linux kernel source code available to modify. Now imagine all Qualcomm clients having to do similar work with their third-party hardware, possibly with no access to the drivers' source code. Then debugging and fixing, for three years, all the bugs that pop up in the wild. What a nightmare.

Apple at least has full control of the hardware stack (Qualcomm does not, as they only sell chips to others).


Hardware drivers certainly can be annoying, but a hobbyist struggling to bring big GPUs’ hardware drivers to a random platform is not at all indicative of how hard it would be for a company with teams of engineers. If NVidia wanted their GPUs to work on Raspberry Pi, then it would already be done. It wouldn’t be an issue. But NVidia doesn’t care, because that’s not a real market for their GPUs.

Most OEMs don’t have much hardware secret sauce besides maybe cameras these days. The biggest OEMs probably have more hardware secret sauce, but they also should have correspondingly more software engineers who know how to write hardware drivers.

If Qualcomm moved their processors to RISC-V, then Qualcomm would certainly provide RISC-V drivers for their GPUs, their cellular modems, their image signal processors, etc. There would only be a little work required from Qualcomm’s clients (the phone OEMs) like making sure their fingerprint sensor has a RISC-V driver. And again, if Qualcomm were moving… it would be a sea change. Those fingerprint sensor manufacturers would absolutely ensure that they have a RISC-V driver available to the OEMs.

But, all of this is very hypothetical.


> If NVidia wanted their GPUs to work on Raspberry Pi, then it would already be done. It wouldn’t be an issue. But NVidia doesn’t care, because that’s not a real market for their GPUs.

It's weird af that Geerling ignores nVidia. They have a line of ARM-based SBCs with GPUs from Maxwell to Ampere. They have full software support for OpenGL, CUDA, etc. For the price of an RPi 5 + discrete GPU, you can get a Jetson Orin Nano (8 GB RAM, 6 A78 ARM cores, 1024 Ampere cores), all in a much better form factor than a Pi + PCIe hat and graphics card.

I get the fun of doing projects, but if what you're interested in is a working ARM-based system with some level of GPU, it can be had right now without being "in the shop" twice a week with a science fair project.


> It's weird af that Geerling ignores nVidia.

“With the PCI Express slot ready to go, you need to choose a card to go into it. After a few years of testing various cards, our little group has settled on Polaris generation AMD graphics cards.

Why? Because they're new enough to use the open source amdgpu driver in the Linux kernel, and old enough the drivers and card details are pretty well known.

We had some success with older cards using the radeon driver, but that driver is older and the hardware is a bit outdated for any practical use with a Pi.

Nvidia hardware is right out, since outside of community nouveau drivers, Nvidia provides little in the way of open source code for the parts of their drivers we need to fix any quirks with the card on the Pi's PCI Express bus.”

Reference = https://www.jeffgeerling.com/blog/2024/use-external-gpu-on-r...

I’m not in a position to evaluate his statement vs yours, but he’s clearly thought about it.


I mean in terms of his quest for GPU + ARM. He's been futzing around with Pis and external GPUs and the entire time you've been able to buy a variety of SBCs from nVidia with first class software support.


AFAIK the new SiFive dev board actually supports AMD discrete graphics cards over PCIe


Naively, it would seem like it would be as simple as updating Android Studio and recompiling your app, and you would be good to go? There must be fewer than 1 in 1,000 (probably fewer than 1 in 10,000) apps that do their own ARM-specific optimizations.


Without any ARM specific optimizations, most apps wouldn’t even have to recompile and resubmit. Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand. Google would just have to decide to support another target, and Google has already signaled their intent to support RISC-V with Android.

https://opensource.googleblog.com/2023/10/android-and-risc-v...


I remember when Intel was shipping x86 mobile CPUs for Android phones. I had one pretty soon after their release. The vast majority of Android apps I used at the time just worked without any issues. There were some apps that wouldn't appear in the store but the vast majority worked pretty much day one when those phones came out.


I'm not sure how well it fits the timeline (i.e. x86 images for the Android emulator becoming popular due to better performance than the ARM images vs. actual x86 devices being available), but at least these days a lot of apps shipping native code probably maintain an x86/x64 version purely for the emulator.

Maybe that was the case back then, too, and helped with software availability?


Yep! I had the Zenfone with an Intel processor in it, and it worked well!


> Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand.

No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device. Though that doesn't change the result re compatibility.

However – a surprising number of apps do ship native code, too. Of course especially games, but also any other media-related app (video players, music players, photo editors, even my e-book reading app) and miscellaneous other apps, too. There, only the original app developer can recompile the native code to a new CPU architecture.
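To illustrate why only the original developer can do it (a toy sketch, not taken from any real app): the architecture-specific path is chosen at compile time via the standard compiler predefines, so an arm64-only .so simply contains no RISC-V code to fall back on. The ABI strings here are just labels; "riscv64" assumes Android's announced RISC-V ABI name.

    #include <stdio.h>

    /* Which target this native library was actually built for. */
    const char *target_arch(void) {
    #if defined(__aarch64__)
        return "arm64-v8a";   /* what most Play Store .so files ship as today */
    #elif defined(__riscv) && __riscv_xlen == 64
        return "riscv64";     /* only exists if the developer rebuilds for it */
    #else
        return "other";
    #endif
    }

    int main(void) {
        printf("native code built for: %s\n", target_arch());
        return 0;
    }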


> No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device.

Google Play Cloud Profiles is what I was thinking of, but I see it only starts “working” a few days after the app starts being distributed. And maybe this is merely a default PGO profile, and not a form of AOT in the cloud. The document isn’t clear to me.

https://developer.android.com/topic/performance/baselineprof...


Yup, it's just a PGO profile (alternatively, developers can also create their own profile and ship that for their app).


> Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

If that's true, then what is ARM licensing to Qualcomm? Just the instruction set, or are they licensing full chips?

Sorry for the dumb question / thanks in advance.


Qualcomm has historically licensed both the instruction set and off the shelf core designs from ARM. Obviously, there is no chance the license for the off the shelf core designs would ever allow Qualcomm to use that IP with a competing instruction set.

In the past, Qualcomm designed their own CPU cores (called Kryo) for smartphone processors, and just made sure they were fully compliant with ARM’s instruction set, which requires an Architecture License, as opposed to the simpler Technology License for a predesigned off the shelf core. Over time, Kryo became “semi-custom”, where they borrowed from the off the shelf designs, and made their own changes, instead of being fully custom.

These days, their smartphone processors have been entirely based on off the shelf designs from ARM, but their new Snapdragon X Elite processors for laptops include fully custom Oryon ARM cores, which is the flagship IP that I was originally referencing. In the past day or two, they announced the Snapdragon 8 Elite, which will bring Oryon to smartphones.


thank you for explaining


A well-designed (by Apple [1], by analyzing millions of popular applications and what they do) instruction set. One where there are reg+reg/reg+shifted_reg addressing modes, only one instruction length, and sane, useful instructions like SBFX/UBFX, BFC, BFI, and TBZ. All of that is much better than promises of a magical core that can fuse 3-4 instructions into one.

[1] https://news.ycombinator.com/item?id=31368681
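For a small, illustrative example of what those bit-field instructions buy (my own sketch, not from the linked post): extracting a bit field is a single UBFX on AArch64, while base RV64I typically needs a shift pair, which is exactly the kind of gap that fusion or the bit-manipulation extensions are meant to close.

    #include <stdint.h>

    /* Extract the 10-bit field at bits [21:12]. AArch64 compilers emit one
     * UBFX here; base RV64I without Zbb generally needs srli + andi (or
     * slli + srli), i.e. two dependent instructions.
     */
    uint64_t extract_field(uint64_t x) {
        return (x >> 12) & 0x3FF;
    }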


1 - thank you

2 - thank you again for sharing your eink hacking project!


Note that these are just a person's own opinions, obviously not shared by the architects behind RISC-V.

There are multiple approaches here. There's this tendency for each designer to think their own way is the best.


I get that. I just work quite distantly from chips and find it interesting.

That said, licensing an instruction set seems strange. With very different internal implementations, you'd expect instructions and instruction patterns in a licensed instruction set to have pretty different performance characteristics on different chips leading to a very difficult environment to program in.


Note that this is not in any way a new development.

If you look at the incumbent ISAs, you'll find that most of the time ISA and microarchitecture were intentionally decoupled decades ago.


>Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1. With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

This is only true if the application is written purely in Java/Kotlin with no native code. Unfortunately, many apps do use native code. In a CppCon talk, Microsoft noted that more than 70% of the top 100 apps on Google Play use native code.

>I think ARM is bluffing here. They need Qualcomm.

Qualcomm's survival is dependent on ARM. Qualcomm's entire revenue stream evaporates without ARM IP. They may still be able to license their modem IP to OEMs, but not if their modem also used ARM IP. It's only a matter of time before Qualcomm capitulates and signs a proper licensing agreement with ARM. The fact that Qualcomm's lawyers didn't do their due diligence to ensure that Nuvia's ARM Architecture licenses were transferable is negligent on their part.


ARM already did the hard work. Once you've ported your app to ARM, you've no doubt made sure all the ISA-specific bits are isolated while the rest is generic and portable. This means you already know where to go and what to change and hopefully already have testing in place to make sure your changes work correctly.

Aside from the philosophy, lots of practical work has been done and is ongoing. On the systems level, there has already been massive ongoing work. Alibaba for example ported the entirety of Android to RISC-V then handed it off to Google. Lots of other big companies have tons of coders working on porting all kinds of libraries to RISC-V and progress has been quite rapid.

And of course, it is worth pointing out that an overwhelming majority of day-to-day software is written in managed languages on runtimes that have already been ported to RISC-V.


Interesting, does anyone know what percentage of top Android apps run on RISC-V? I'd expect a lot of apps like games to only have binaries for ARM


The thing about RISC-V is that they indirectly have the R&D coffers of the Chinese government backing them for strategic reasons. They are the hardware equivalent of Uber's scale-first-make-money later strategy. This is not a competition that ARM can win purely relying on their existing market dominance.


Aren’t Android binaries in Dalvik so you only need to port that to get it to run on RISC-V?


Many games, multimedia apps (native FFMPEG libs), and other apps that require native C/C++ libs would require a recompile/translation for RISC-V.


Not Android, but Box86 already works on RISC-V, even already running games on top of Wine and DXVK: https://youtu.be/qHLKB39xVkw

It redirects calls to x86 libraries to native RISC-V versions of the library.


FFMPEG has a RISC-V port. We're yet to try it, but I did successfully compile it to target RISC-V vector extensions.


Most FLOSS libraries are already ported over thanks to GNU/Linux.



Aren't most applications NOT using the NDK?


Everyone that doesn't want to write Java/Kotlin is using the NDK.

Although from Google's point of view the NDK's only purpose is to enable writing native methods, reusing C and C++ libraries, games, and real-time audio, from the point of view of others it is how they sneak Cordova, React Native, Flutter, Xamarin, ... into Android.


NDK usage is pretty high among applications that actually matter.


Most major apps use the NDK.


That's what's magical about Apple. It was a decade-long transition. All the 32-bit code that was removed from macOS back in 2017 was in preparation for the move in 2019.


Apple has done it multiple times now and has it down to a science.

68k -> PPC -> x86 -> ARM, with the 64 bit transition you mixed in there for good measure (twice!).

Has any other consumer company pulled off a full architecture switch? Companies pulled off leaving Alpha and Sparc, but those were servers, which have a different software landscape.


I don't believe any major company has done it. Even Intel failed numerous times to move away from x86 with iAPX432, i960, i860, and Itanium all failing to gain traction.


For Apple it was do or die the first few times. Until x86, if they didn’t move they’d just be left in the dust and their market would disappear.

The ARM transition wasn’t strictly necessary like the last ones. It had huge benefits for them, so it makes sense, but they also knew what they were doing by then.

In your examples (which are great) Intel wasn’t going to die. They had backups, and many of those seem guided more by business goals than a do-or-die situation.

I wonder if that’s part of why they failed.


In a way that's also true for the x86->ARM transition, isn't it? I had a 2018 MacBook Air. And... "it was crap" is putting it very, very mildly. Yes, it was still better than any Windows laptop I've had since, and much less of a hassle than any Linux laptop that I'm aware of in my circle. But the gap was really, really small, and it cost twice as much.

But the most important part in making the transition work is probably that, in all of these cases, the typical end user didn't even notice. Yes, a lot of Hacker News-type people noticed, as they had to recompile some of their programs. But most people :tm: didn't. They either use App Store apps, which were fixed ~immediately, or Rosetta made everything runnable, even if performance suffered.

But that's pretty much the requirement you have: you need to be able to transition ~all users to the new platform with ~no user work, and even without most vendors doing anything. Intel could never provide that, nor even aim for it. So they basically had to either a) rip their market to pieces or b) support the "deprecated" ISA forever.


> Rosetta made everything runnable, even if performance suffered.

I think a very important part was that even with the Rosetta overhead, most x86 programs were faster on the M1 than on the machines it was replacing. It wasn’t just that you could continue using your existing software with a perf hit; your new laptop actually felt like a meaningful upgrade even before any of your third-party software got updated.


I don’t think so. I’ve got a 2019 MBP and yeah, the heat issue is a big problem.

But they weren’t going to be left in the performance dust like the last times. Their chip supplier wasn’t going to stop selling chips to them.

They would have likely had to give up on how thin their laptops were, but they could have continued on just fine.

I do think the ARM transition wasn’t strictly good, it let them stay thin and quiet and cooler. They got economies of scale with their phone chips.

But it wasn’t necessary to the degree the previous ones were.


> I do think the ARM transition wasn’t strictly good

That’s a total typo I didn’t catch in time. I’m not sure what I tried to type, but I thought the transition was good. They didn’t have to but I’m glad they did.


IBM also did it, with mainframes. But otherwise, no.


In a sense, Freescale/NXP did it from their old PowerPC to ARM.


> Companies pulled off leaving Alpha and Sparc

Considering the commercial failure of these efforts, I might disagree


MacOS (as NeXTSTEP and/or OpenStep) also ran on SPARC and PA-RISC I believe.


OpenStep was developed on SunOS, and was the primary GUI out of the box


I think Windows-on-ARM is fairly instructive as to how a move to RISC-V would likely go.


>> 1. Qualcomm develops a chip that is competitive in performance with ARM

Done. Qualcomm is currently gunning for Intel.

2. The entire software world is ready to recompile everything for RISC-V

Android phones use a virtual machine which is largely ported already. Linux software is largely already ported.


And with VM tech and the power of modern devices, even some emulation/thunking layer is not too crazy for apps that (somehow) couldn't be cross-compiled.


2. Except games...

But ARM and RISC-V are relatively similar, and it's easy to add custom instructions to RISC-V to make them even more similar if you want, so you could definitely do something like Rosetta.
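As a toy flavor of what "something like Rosetta" means at the lowest level, here is a hedged sketch that statically rewrites one AArch64 instruction, ADD Xd, Xn, Xm (shifted-register form, shift of zero), into the equivalent RV64 ADD. The Xn -> x(n+1) register mapping is made up purely to keep x0 (hardwired zero) free; a real translator would also need flags emulation, memory-model fences, a register allocator, and so on.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Returns the RV64 encoding, or 0 if the input isn't a plain ADD Xd, Xn, Xm. */
    uint32_t translate_add(uint32_t a64) {
        if ((a64 & 0xFFE0FC00u) != 0x8B000000u)  /* ADD Xd, Xn, Xm, LSL #0 */
            return 0;
        uint32_t rd = (a64 & 0x1Fu) + 1;         /* toy mapping Xn -> x(n+1) */
        uint32_t rn = ((a64 >> 5) & 0x1Fu) + 1;
        uint32_t rm = ((a64 >> 16) & 0x1Fu) + 1;
        /* RV64 ADD rd, rs1, rs2: funct7=0 | rs2 | rs1 | funct3=0 | rd | opcode 0x33 */
        return (rm << 20) | (rn << 15) | (rd << 7) | 0x33u;
    }

    int main(void) {
        /* 0x8B020020 is ADD X0, X1, X2; prints the corresponding RV64 add. */
        printf("0x%08" PRIx32 "\n", translate_add(0x8B020020u));
        return 0;
    }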


Switches like that are major, but they get easier every year, and are easier today than they were yesterday, as everyone's tools at all levels up and down both the hardware and software stacks get more powerful all the time.

It's an investment with a cost and a payoff like any other investment.


Keep in mind, Apple _did_ actually take a good decade from starting with ARM to leaving x86.


With 100% control of the stack and an insanely good emulator in Rosetta.


Qualcomm's migration would be much easier than Apple's.

Most of the Android ecosystem already runs on a VM, Dalvik or whatever it's called now. I'm sure Android RISC-V already runs somewhere and I don't see why it would run any worse than on ARM as long as CPUs have equal horsepower.


Yeah, but Qualcomm doesn’t control Android or any of the phone makers. It’s hard for large corps to achieve the internal coordination necessary for a successful ISA change (something literally only Apple has ever accomplished), but trying to coordinate with multiple other large corps? Seems insane. You’re betting your future on the fact that none of the careerists at Google or Samsung get cold feet and decide to just stick with what works.


Wouldn’t coordination to change ISA between multiple companies receive heavy scrutiny in the Lina Khan era?


NDK exists.


The companies with large relevant apps running on the NDK are well staffed and funded enough to recompile.


It's not about whether they can, it's whether they will. History has proven that well-resourced teams don't like doing this very much and will drag their feet if given the chance.


it's not about that, it's about running the apps whose makers are out of business or just find it easier to tell their customers to buy different phones


Is the transition fully over if the latest MacOS still runs an x86 emulator for old software?


> Qualcomm develops a chip that is competitive in performance with ARM

That’s what Oryon is, in theory.


>2. The entire software world is ready to recompile everything for RISC-V

This would suggest that RISC-V is starting from scratch.

Yet in reality it is well underway; RISC-V is rapidly growing the strongest ecosystem.


I think it takes Apple at least 9 years to prepare and 1 year to implement.


Thing is businesses don't work like side-projects do.

Qualcomm is more or less a research company; the main cost of their business is paying engineers to build their modems/SoCs/processors/whatever.

They have been working with ARM for the last, I don't know, 20 years? Even if they manage to switch to RISC-V, and each employee has a negative performance impact of like 15% for 2-3 years, this ends up costing billions of dollars, because you have to hire more people or lose speed.

If corporate would force me to work with idk Golang instead of TypeScript I could certainly manage to do so, but I would be slower for a while, and if you extrapolate that across an entire company, this is big $$.


> because you have to hire more people or lose speed

Yes and 9 women can make a baby in 1 month :)


In Norse mythology, Heimdallr was born of nine sisters. I'm not sure that it took any less time than usual, but I enjoy the story all the same. https://en.wikipedia.org/wiki/Nine_Mothers_of_Heimdallr


and Norse mythology has 9 world dimensions, so maybe it worked for them


Just take a guess at what the baby will be like and get everyone to pretend it already exists for the 8 months (and throw away the work if mispredicted afterwards) :)


It's called pipelining, and the concept works well in all modern processors. It can also work with people; you only have an initial setup delay :)


No but 9 women can have 9 babies in 9 months.

Which is a 9x output.

Production and development require multiple parties. This mythical man-month stuff is often poorly applied. Many parts of research and development need to be done in parallel.


If you make screws, sure :)


> If corporate would force me to work with idk Golang instead of TypeScript

I think the most evil thing to do would be to switch places: TS for backend, Go for frontend. It can certainly work though!


Building a website that way would yield quite a popular Show HN post!


TS running under Node.js for the backend, I'd dare say, looks pretty standard.

But I like to imagine the Web frontend made in Go, compiled to WASM. Would be a fun project, for sure.


Try Java 1.8.


>Qualcomm doesn't have nearly as much to lose as ARM does and they know it.

Not even close. Android OEMs can easily switch to the MediaTek 9400, which delivers the same performance as Qualcomm's high-end mobile chip at a significantly reduced price, or even to Samsung's Exynos. Qualcomm, on the other hand, has everything to lose, as most of their profits rely on sales of high-end Snapdragon chips to Android OEMs.

Qualcomm thought they were smart by trying to use the Nuvia ARM design license, which was not transferable, as part of their acquisition, instead of doing the proper thing and negotiating a design license with ARM. Qualcomm is at the mercy of ARM, as ARM has many revenue streams and Qualcomm does not. It's only a matter of time before Qualcomm capitulates and does the right thing.


The transition to RISC-V depends entirely on how much of the CPU core subsystem is from ARM. The ISA itself is one part; there are also branch predictors, L1, L2, and L3 caches, MMUs, virtualization, vector extensions, the pipeline architecture, etc. So moving away from ARM means they need performant replacements for all of that.

I'm sure there are folks like SiFive that have much of this, but how competitive it is I don't know, nor how the next Snapdragon would compete if even one of those areas is lacking... Interesting times.


Moving to a whole new architecture is really, really hard, no? The operating systems and applications all need to be ported. Just because Qualcomm cannot be friends with ARM, every single Qualcomm customer, from Google to some custom device manufacturer, needs to invest years and millions to move to a new architecture? Unless I am fundamentally misunderstanding this, it seems like something they won't be able to achieve.


Android already supports RISC-V, so while migrating an SOC to it is not painless (third-party binaries, device-specific drivers...), the hard work of porting the operating system itself to the ISA is done.


> If ARM wins, Qualcomm moves to RISC-V

Around 30-40% of Android apps published on the Play Store include native binaries. Such apps need to be recompiled for RISC-V, otherwise they won’t run. Neither Qualcomm nor Google can do that, because they don’t have the source code for these apps.

It’s technically possible to emulate ARMv8 on top of RISC-V, however doing so while keeping the performance overhead reasonable is going to be insanely expensive in R&D costs.


Binary-only translators exist, for instance Apple has https://en.wikipedia.org/wiki/Rosetta_(software)


Apple’s gross revenue is 10x Qualcomm’s, and the difference in net income is even larger. Apple could easily afford these R&D costs.

Another obstacle: even if Qualcomm develops an awesome emulator / JIT compiler / translation layer, I’m not sure the company is in a position to ship that thing to market. Unlike Apple, Qualcomm doesn’t own the OS. Such an emulator would require extensive support in the Android OS. I’m not sure Google will be happy supporting a huge piece of complicated third-party software as part of their OS.

P.S. And also there’re phone vendors who actually buy chips from Qualcomm. They don’t want end users to complain that their favorite “The Legendary Cabbage: Ultimate Loot Garden Saga” is lagging on their phone, while working great on a similar ARM-based Samsung.


> I’m not sure Google will be happy supporting a huge piece of complicated third-party software as part of their OS.

Yeah, for the upcoming/already happening 64-bit-only transition (now that Qualcomm is dropping 32-bit support from their latest CPU generations), Google has decided to go for a hard cut-off, i.e. old apps that are still 32-bit-only simply won't run anymore.

Though from what I've heard, some third party OEMs (I think Xiaomi at least?) still have elected to ship a translation layer for their phones at the moment.


You’re suggesting that Snapdragon processors would switch to RISC-V and that would be no big deal? Presumably Qualcomm is committed to numerous multi-year supplier agreements for the ARM chipsets.


Qualcomm pitched Znew quite a while ago. It mostly ditched 16-bit instructions and added a bunch of instructions that were basically ripped straight from ARM64.

The idea was obviously an attempt at making it as easy as possible to replace ARM with RISC-V without having to rework much of the core.

https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...


An attempt that failed miserably. (it was formally rejected a year ago)

But, by now, it is expected that Qualcomm's RISC-V designs have been re-aligned to match the reality that Qualcomm does not control the standard.


Actually, it was an attempt to reuse as much as possible of the ARM design they got when they bought Nuvia while moving to a different CPU architecture. They were worried about ASIC design, not software code.


This affects their custom Nuvia-derived cores. I'm sure Qualcomm will be able to keep using ARM-designed cores to cover themselves while they wean off ARM in favor of RISC-V, if they need to.


This is a bit off topic, but has anyone demonstrated that it's possible to design a big RISC-V core that's performance-competitive with the fastest x86 and ARM designs?


Well, Tenstorrent, Andes and others have their respective designs...

On the in-order side, I can see on-par performance with the ARM A5x series quite easily.


After a bit of digging, I found that the SiFive P670 has performance equivalent to the Galaxy S21, or the desktop Ryzen 7 2700, which is not too bad and definitely usable in smartphone/laptop form, so competitive with 2021-era designs. It's not clear what the power level is.


The P670 is a core, not a chip, so you can't really get to power numbers (or indeed, raw performance as opposed to performance / watt or performance / GHz) without actually making a practical chip out of it in some given process node. You're better off comparing it to a core, such as the ARM Cortex series, rather than the S21.

SiFive claims a SPECint2006 score of > 12/GHz, meaning that it'll get a performance of about 24 at 2 GHz or ~31 at 2.6 GHz, making it on par with an A76 in terms of raw performance.


Qualcomm’s business strategy has become: hold everyone at gunpoint, then act surprised when everyone looks for alternative customers/partners/suppliers.


They're the Oracle of hardware.


> Qualcomm doesn't have nearly as much to lose as ARM does and they know it.

question: isn't arm somewhat apple?

...Advanced RISC Machines Limited and structured as a joint venture between Acorn Computers, Apple, and VLSI Technology.

https://en.wikipedia.org/wiki/Arm_Holdings#Founding


> question: isn't arm somewhat apple?

Not for decades. Apple sold its stake in ARM when Steve Jobs came back; they needed the money to keep the company going.


>>Qualcomm moves to RISC-V and ARM

That is a HUGE cost!


> Qualcomm is almost certainly ARM's biggest customer.

You think Qualcomm is larger than Apple?


Absolutely.

There are nearly 2B smartphones sold each year and only 200M laptops, so Apple's 20M laptop sales are basically a rounding error and not worth considering.

As 15-16% of the smartphone market, Apple is generally selling around 300m phones. I've read that Qualcomm is usually around 25-35% of the smartphone market which would be 500-700M phones.

But Qualcomm almost certainly includes ARM processors in their modems which bumps up those numbers dramatically. Qualcomm also sells ARM chips in the MCU/DSP markets IIRC.


Qualcomm's modems aren't ARM processors, they're Hexagon.

https://en.wikipedia.org/wiki/Qualcomm_Hexagon


Qualcomm may have the market but Apple has the profit.


Apple has (to a first approximation) a royalty-free license to ARM IP by virtue of the fact that they co-founded ARM - so yes, Qualcomm is most likely paying ARM more than Apple is.


Just to clarify for those that don't know ARM's history: Acorn were building computers and designing CPUs before they spun out the CPU design portion.

Apple did not help them design the CPU/Architecture - that was already a decade of design and manufacturing - they VC'ed the independence of the CPU business. The staffing and knowledge came from Acorn.


> Apple did not help them design the CPU/Architecture

I believe they had a big hand in ARM64. Though best reference I can find right now is this very site: https://news.ycombinator.com/item?id=31368489


Oh, I was just wanting to clarify the "Apple co-founded".

They had the Newton project, found ARM did a better job than the other options, but there were a few missing pieces. They funded the spun out project so they could throw ARM a few new requirements for the CPU design.

As a "cofounder" of ARM, they didn't contribute technical experience and the architecture did already exist.


Sigh. Newton. So far ahead of its time.


On the modem side they can move to whatever they want without impact. But on the apps side they need to run Linux/Android/Windows/etc., so they are dependent on ARM.


> If ARM wins

Qualcomm pays them.

> Qualcomm moves to RISC-V

That’s like chopping your foot off to save on shoes…

It would take years for Qualcomm to develop a competitive RISC-V chip. Just look at how long it took them to design a competitive ARM core…

Of course they could use this threat (even if it’s far-fetched) to negotiate a somewhat more favorable settlement.


> Qualcomm is almost certainly ARM's biggest customer

what about Apple?


Is RISC-V anywhere near the same efficiency ballpark?


RISC-V is very competitive with ARM when comparing similar PPA niches.


For high-performance and low-power laptop and phone SoCs, no way. There exists no competitive RISC-V chip.


We're all assuming that Oryon-V is already being developed.


"Developed" and "successfully shipped" are two enormously different things.



