It's worth noting the age of RISC-V: according to Wikipedia (https://en.wikipedia.org/wiki/RISC-V) it originated as a 'short, three-month project over the summer' in 2010, so it's a bit over 8 years old. It took a while to really take off, but it now genuinely seems to be picking up momentum, though I'm not aware of any serious use in production hardware yet.
Perhaps a good comparison point is LLVM, which also originated as an academic project. Again looking at Wikipedia (https://en.wikipedia.org/wiki/LLVM): development started in 2000, the first release came in 2003, Lattner was hired by Apple to develop it in 2005, and Clang was released by Apple in 2007. So roughly 7 years from initial LLVM creation to serious production use, potentially less depending on when Apple started using it internally.
Not really. The foundries are on the cusp of fully validating Rocket cores on the various processes, and you can just include them in your design like you would a Cortex-M or Cortex-R. They've recognized that 3-5 stage classic RISC cores are a commodity market now, and it's in their best interest to make them as easy as possible to add to your design.
Above those simple cores, we should expect to see more and more RISC-V cores hit the same level of "just drop it in with no licensing, already validated, pieces like the register files already optimized for the process". BOOM is roughly Cortex-A9 class in perf/gate count/IPC, which puts it into better-than-Raspberry-Pi territory (usable, but nothing crazy). There's still work to be done on the higher end, but it's not like you can just go out and licence the highest-performance ARM cores anyway (those are Apple's).
If RISC-V can create a low cost-of-entry development pipeline with freely available building blocks, then people will still flock to it over ARM even if the price goes way down. It's the same reason that Linux is far and away the most popular OS for hosting web apps: it's just stupidly easy to get one running quickly and for free. No paying for tools, no stupid licensing agreements, no arbitrary restrictions, and the freedom to change the code as needed.
Unfortunately I also think this may explode the number of "IOT" devices that will be just as badly designed as before (similar to what happened to web apps and phone apps in the last 10 years).
It's not just about price, it's about control and time. ARM could make their product much cheaper and people might still pick RISC-V simply because dealing with ARM and negotiating a deal is quite difficult.
If you need a simple chip, and that's a large part of ARM's sales volume, there are free cores that you can fully control, and it's hard for ARM to compete with that.
That said, the idea that ARM will go bankrupt is insane. Of course they will not; the market is not one big market but lots of small ones, and ARM has a massive head start in all of them.
I see no issue with both ISAs existing for quite a long time. ISAs are like programming languages: after a certain size, they cannot die.
The big difference is that Android on ChromeOS uses the Linux kernel that is part of ChromeOS. Here Android will sit on top of Zircon, which is the Fuchsia kernel.
GNU/Linux on ChromeOS, on the other hand, uses a completely separate Linux kernel.
It was important that Google did it this way so GNU/Linux would still work when they move ChromeOS to Fuchsia. Google already has GNU/Linux running on Fuchsia with something called Machina.
Probably not at this point. If they had done this right when RISC-V was first announced, then maybe, as at that point they had the advantage of much more ecosystem and tooling integration.
However, it's now at the point where there are shipping RISC-V Linux distros, it's supported in GCC and LLVM out of the box, there are more than a dozen open-source RISC-V cores (https://riscv.org/risc-v-cores/, several of which are actually parameterized families of cores), and there are companies like SiFive offering commercially supported proprietary cores. There is hardware that has shipped from multiple different vendors, and that likely means that there are a number of others where it's fairly far in the pipeline.
With MIPS being behind Intel on the desktop and server market, behind ARM on the proprietary embedded core front, and behind RISC-V on the open source ISA and cores front, it's a bit hard to see why someone would buy MIPS who hasn't already invested in it.
If you want something more well established with a wider variety of cores available to purchase, you'd go with ARM. If you want something with no ISA licensing fees and even potentially no charge at all for an open source core, you can go with RISC-V.
I suppose there is some chance that MIPS could win some folks over from RISC-V by already having stable SIMD and DSP extensions and more overall architectural maturity. But this seems to be opening up only the ISA without opening up any cores, and right now it's a press release that hasn't settled on a governance structure for the project or released any assets. RISC-V, meanwhile, has a number of fairly mature open cores (Rocket, BOOM, RI5CY, and more) and a governance structure with a number of different companies involved, so it seems like it will be a while, if ever, before MIPS catches up on the open ISA front.
> China is rallying around the architecture with perhaps hundreds of RISC-V SoCs and dozens of cores in the works.
> “We are talking hundreds, if not thousands, of [RISC-V–based SoC] projects under way; it’s crazy … probably at least 40 to 50 companies or academic groups are dabbing in core development — some for internal use, some for open-source, and some commercial”
When I mentioned "multiple vendors," I was talking about hardware implementations that have actually shipped.
I've been looking for RISC-V chips that I can buy right now, and from what I can tell, there are only a handful: two from SiFive (the E310 embedded processor and the U540 supervisor-mode/Linux-capable processor) and one from GreenWaves (the GAP8). Since I last looked (around the beginning of October), the Kendryte K210 has also been released, available in the Sipeed M1 module (https://www.cnx-software.com/2018/10/22/sipeed-m1-risc-v-com...). Anyhow, that makes 3 vendors shipping hardware implementations.
It's possible there are other special purpose chips not widely available for sale, or not advertised to English speaking customers, but I haven't found much evidence of them.
There certainly are a lot of RISC-V projects under way, which was what I was referring to by "that likely means that there are a number of others where it's fairly far in the pipeline." I think the "hundreds" from the article you quote is about total number of projects, not total number of vendors.
Some of these are just academic projects, some are multiple projects at the same vendor, and so on. I wouldn't be surprised if it was dozens of vendors that have projects at various stages of completion, however. I can imagine we'll be seeing a lot of releases over the next couple of years.
And there are a number of cores you can run on FPGAs, from high-end cores like BOOM that can be run on big expensive Xilinx FPGAs (or on Amazon's cloud FPGAs), to low-end like PicoRV32 that you can run on a $5 Lattice ICE FPGA.
> It's possible there are other special purpose chips not widely available for sale, or not advertised to English speaking customers, but I haven't found much evidence of them.
Take any established Chinese IT company and there's a good chance they are working on a RISC-V based SoC or two: Alibaba with the CK902, Huami with the Huangshan No. 1, not to mention vendors less known in the West.
But it seems established IT companies have reasons not to announce RISC-V products very loudly, especially in English.
> The Western executive, speaking on condition of anonymity, told us, “A lot of the biggest companies doing this are being very discrete indeed: perhaps they cannot afford to upset Arm.” He said they fear being told by Arm, “‘Oh, the core for your new smartphone is two weeks late.’”
That's a good question. Did OpenSPARC slow down RISC-V?
It's worth noting that, as the article points out, MIPS remains patent encumbered, the ownership situation of these patents is itself quite complex, and all in all, it's not quite clear how the new "Open" licensing will work.
In my totally non-expert opinion, no, it will not slow down RISC-V.
The RISC-V instruction set family does benefit from the decades of research since MIPS was first created.
Also, a lot depends on the ecosystem surrounding whatever MIPS cores are available. Memory crossbars, interrupt handling, etc. There's a lot that goes into a system-on-chip.
It depends on whether the many small and medium-size Chinese SoC vendors would jump to MIPS now or not. RISC-V is pretty much the default ISA among them nowadays.
They will not. Important RISC-V companies like SiFive already ship commercial RISC-V cores that they helped develop; the same goes for Andes. Esperanto has a RISC-V core that is already competitive with top ARM cores. Western Digital has committed to RISC-V, released open source cores, invested in RISC-V companies, and has multiple products deep in the pipeline.
Academic research from Berkeley, MIT, ETH and many others is already on RISC-V, and they would be crazy to switch; RISC-V is perfect for them.
Not to mention that RISC-V is a better instruction set overall.
It depends on whether they truly open source their stuff, or just their newest items. Their definition of open source and ours might be different as well.
I would take everything with a grain of salt until it is truly open sourced and then revisit the question.
I'm curious to hear the answer to this as well. Given that MIPS and RISC-V are similar in nature, I'm not sure what the impact will be of making an already widely used and quite mature RISC ISA open source, when this was basically the entire premise behind RISC-V.
I believe this is similar to what we will see with Zircon as a default.
Which is why I believe multi-core performance with Zircon will exceed that of Linux. The big question is single-core performance.
You do not have that benefit. But I do think Google will build their own CPU optimized for Zircon. I do hope they use RISC-V for the ISA; they did with the PVC.
I apologize for asking a question that will likely lead to a flame war regardless of your answer, but which is better? I've used Go for a while for certain apps, but as a primarily functional programmer I find my way of thinking often clashes with the language (and I also don't like the verbosity).
So, do you do functional programming, and is Rust a better (with all the subjectivity that word implies) language than Go?
This 'question' is bound to end up in flamewars, but here is an honest and unbiased answer from someone who has taken a look at nearly every language on the planet (a hobby) and thinks that both languages are a bit crappy from a general programming language perspective but quite usable in practice.
Go is good to get things done quickly. It has a vast ecosystem and super-fast compilation. It's like a modern BASIC, but more performant and fun to use. It's fast enough for most everyday tasks except for real-time audio processing and high end gaming. It's good for writing CLI tools and server backend software.
Rust is good for writing libraries and CLI tools that replace existing C or C++ solutions with inherently safer versions and when speed matters a lot, though not as much as what would make you use Fortran or hand-optimized C. It is not suitable for high integrity systems and solid engineering where you'd normally use Ada/Spark, because of low maintainability, an unprofessional 'language aficionado' user base converted from C++, and being a fast moving target. Maybe later, though.
Both are fundamentally different, neither is "better".
They are both good at slightly different things.
If your desire is to accept bytes over the network and spit bytes back over the network, Go is going to be a pretty solid choice, because that was very much the focus of its design.
However, if you want to build an application for a hard real-time environment, and you either lack the space for a runtime or can't tolerate GC pauses, then Rust is probably the better choice.
From a language perspective, Go is a simple language and Rust is a complex one; the two make different tradeoffs here. Go is easy to learn and has few pitfalls, but it lacks power if you need metaprogramming and abstractions to model your problem.
Rust, however, excels in that role thanks to its powerful type system and hygienic macros. The tradeoff is very apparent once you try to use the two languages: Rust's initial learning curve is far steeper, and its ceiling is much higher.
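To make the metaprogramming point concrete, here's a minimal sketch of the kind of thing the type system and hygienic macros buy you (the timed! macro and largest function are made up for this example): a declarative macro whose internal bindings can't collide with the call site, plus a generic function constrained by traits rather than runtime type checks.

    use std::time::Instant;

    // A hygienic declarative macro: wraps any expression, prints how long it
    // took to evaluate, and yields the expression's value. The `start` binding
    // inside the macro cannot collide with a `start` at the call site.
    macro_rules! timed {
        ($label:expr, $e:expr) => {{
            let start = Instant::now();
            let value = $e;
            println!("{} took {:?}", $label, start.elapsed());
            value
        }};
    }

    // A generic function constrained by traits instead of runtime casts.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
        let mut best = *items.first()?;
        for &item in &items[1..] {
            if item > best {
                best = item;
            }
        }
        Some(best)
    }

    fn main() {
        let nums = vec![3, 7, 2, 9, 4];
        // The macro composes with ordinary expressions at the call site.
        let max = timed!("finding the largest element", largest(&nums));
        println!("largest = {:?}", max); // prints: largest = Some(9)
    }

Go (which lacks generics and macros) typically covers the same ground with interface{} plus reflection, or with code generation, which is the gap the parent comment is pointing at.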
Fundamentally, you will probably find Go is better at replacing dynamic languages, though there are many cases where C/C++ was used but its bare-metal nature isn't needed, and Go is a very suitable replacement there. Go does, however, have difficulty replacing certain usages of C/C++: namely, it can't easily be used to create a shared library, because of its runtime and I/O system.
That said, if you wanted to be able to replace any and all C/C++ code, Rust would be the better choice, as it can do anything C/C++ can with no downsides: embedded systems, shared libraries, bare-metal access, all without worrying about a green-threaded execution model.
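As a concrete illustration of the shared-library case, here is a minimal sketch (crate layout and names are just for the example) of a C-callable library in Rust; building it with crate-type = ["cdylib"] in Cargo.toml produces an ordinary .so/.dylib/.dll with no runtime attached.

    // lib.rs -- exposes a plain C ABI, no Rust-specific types at the boundary.

    /// Callable from C as `int32_t add(int32_t a, int32_t b)`.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    /// Sum a C array. The caller hands us a raw pointer and a length, so this
    /// entry point is `unsafe` by nature; we at least guard against NULL.
    #[no_mangle]
    pub unsafe extern "C" fn sum_i32(ptr: *const i32, len: usize) -> i64 {
        if ptr.is_null() {
            return 0;
        }
        let slice = std::slice::from_raw_parts(ptr, len);
        slice.iter().map(|&x| i64::from(x)).sum()
    }

Go can technically do something similar with -buildmode=c-shared, but the resulting library drags the whole Go runtime (GC, scheduler, signal handling) into the host process, which is the difficulty the parent comment is alluding to.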
There are many other things to consider too, but these are some of the important ones, coming from someone who got into coding doing C and embedded work and has since learnt both Go (used professionally) and Rust (used for side projects).
Subjectively, I think Go is the better choice when it can do the job, as it's easier and less brain-intensive to just get the thing done. Rust, however, is more "fun" to program in, as it's a less mechanical endeavour and can also solve some problems you can't solve with Go.
My rule of thumb is use Go by default, but if it makes sense to trade a lot of developer time for extreme performance or extreme type safety, use Rust. As with all rules of thumb, there's a lot more nuance than this, but I think it captures the big idea well enough.
I disagree that using Rust means trading a lot of developer time. I'm as comfortable with Rust as I am with Go, and I develop equally fast in either language. I would even say faster in Rust, because of the type system.
That's quite a feat. According to the Rust developer survey, it takes many people a month or more to feel productive in Rust^1 at all, much less as productive as with Go. I've been picking up Rust occasionally for 4-5 years now and I'm still not particularly productive, and far less productive than I am in Go (and I come from a C++ background, so it's not like I'm a stranger to thinking about memory management). I suspect that you're an outlier (I may be also, but my point doesn't hinge on that).
^1: Most people report being productive with Go in a day or two
Well, it took me longer to get comfortable with Rust than with Go, and I also had to learn actix (an actor-style framework in Rust) to do the same high-concurrency programming. But once the time investment is put in, I definitely consider Rust to be the more productive language.
Once async/await stabilizes and the rest of the ecosystem catches up and becomes a little bit more ergonomic to use, I would say Rust will be in a good position.
As a Haskell & Erlang person who currently uses Rust as my primary language at work:
> do you do functional programming, and is Rust a better (with all the subjectivity that word implies) language than Go?
Yes, yes. Obviously the Rust ecosystem has fewer mature libraries, but its type system and error handling make Go look like a toy.
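To make the error-handling point concrete, here is a minimal sketch (the file name, error enum, and function are made up for the example) of the Result-plus-? style: the possible failures are part of the function's type, and a caller can't quietly drop them the way an unchecked err return can be dropped in Go.

    use std::fs;
    use std::num::ParseIntError;

    // Every way this can fail is spelled out in the type.
    #[derive(Debug)]
    enum ConfigError {
        Io(std::io::Error),
        Parse(ParseIntError),
    }

    impl From<std::io::Error> for ConfigError {
        fn from(e: std::io::Error) -> Self { ConfigError::Io(e) }
    }

    impl From<ParseIntError> for ConfigError {
        fn from(e: ParseIntError) -> Self { ConfigError::Parse(e) }
    }

    // Each `?` either unwraps a success or returns early with the error,
    // converted through the From impls above.
    fn read_port(path: &str) -> Result<u16, ConfigError> {
        let text = fs::read_to_string(path)?;
        let port = text.trim().parse::<u16>()?;
        Ok(port)
    }

    fn main() {
        match read_port("port.txt") {
            Ok(port) => println!("listening on {}", port),
            Err(err) => eprintln!("bad config: {:?}", err),
        }
    }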
Go can be okay for small one-off tools, but its safety guarantees are not far ahead of scripting languages and I think it should be considered as such.
Rust has much better support for a functional style. On the other hand, not having a GC in Rust means that dealing with closures can in some cases get quite complicated, whereas closures in Go work exactly as you'd expect. (Although to be fair, if you aren't doing any mutation, closures in Rust are pretty easy to deal with.)
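A small sketch of what "can get quite complicated" means in practice (the names here are made up): without a GC, the closure's capture mode matters, so a closure that outlives its scope has to take ownership with `move` (or a clone), while a short-lived closure just borrows.

    fn main() {
        let greeting = String::from("hello");

        // Capture by reference: fine for immediate use, the closure only
        // borrows `greeting`.
        let print_it = || println!("{}, world", greeting);
        print_it();

        // Returning or storing a closure forces the ownership question:
        // `move` makes it carry its captured data with it, since there is
        // no GC to keep `greeting` alive on its behalf.
        let greeter = make_greeter(greeting.clone());
        println!("{}", greeter());

        // Mutation adds another wrinkle: a mutating closure must itself be
        // `mut` and takes an exclusive borrow of what it captures.
        let mut count = 0;
        let mut bump = || count += 1;
        bump();
        bump();
        println!("count = {}", count); // count = 2
    }

    // `move` plus an `impl Fn` return type is the usual pattern for handing
    // a closure back to the caller; in Go the GC makes all of this invisible.
    fn make_greeter(prefix: String) -> impl Fn() -> String {
        move || format!("{}, world", prefix)
    }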
This is like asking for a language war, which is the last thing we want on a language thread. You shouldn't have any problem finding already-existing Go vs. Rust discussion either in the search bar here or on Google.
There is so much momentum already. Not many things catch on that quickly, but when they do, they end up huge.