Exciting Days for ARM Processors (smist08.wordpress.com)
145 points by diehunde on July 6, 2020 | 182 comments


> I think Apple should be thanking the Raspberry Pi world for showing what you can do with SoCs, and for driving so much software to already be ported to the ARM processor.

With all due respect, I love my Raspberry Pi, but Apple just needs to thank whoever was in charge of acquiring PA Semi’s know-how in 2008, along with the chip mastermind that is Johny Srouji. Or themselves from 1990, when they actually founded ARM as a joint venture with Acorn Computers to make chips for the Newton.


Before the Pi ...

The first ARM-based device for Linux that I remember getting big was the Sheevaplug. It was an ARM-based "plug computer" with an SD card slot, a USB port IIRC, serial/JTAG port, Ethernet, Wifi, and no display.

I had its next generation, the Guruplug, which had 2 Ethernet ports, Bluetooth, and a couple USB ports. Used it as a router for a few years.

I don't know how long Debian's ARM distro had been around before then, but that was my go-to for the Guruplug. Worked great.


Although not an ARM device, the old Linksys WRT54G (2002) was the first device to really get hackers interested and building a community around embedded Linux computing. There were other things, but it was really the only one that was affordable and network connected.

Later on, the Linksys NSLU2 (2004, a network USB hard drive adapter) came out and was the first ARM-based "computer" to really catch on. The custom firmware and Optware offered a pretty good POSIX experience. It wasn't until 2008 that Debian support came out, giving the world a fully modern Linux experience on a little embedded device.

After that, companies started seeing a niche market, programmers accepted the challenge of installing Linux on everything, and we got things like the SheevaPlug and BeagleBoard.

A few years later the Raspberry Pi came out with an unbeatable price and community support.

And then a decade later we're getting hints of the possibility of ARM computers with true workstation-level computing power.


I still have my sheevaplug. It was great to have a small self-contained computer at the time. The raspberry pi definitely fills that spot today, but it isn't a full self-contained package like the sheevaplug was (for the pi you need to figure out the case and power).


There is the Olimex home server: https://www.olimex.com/Products/OLinuXino/Home-Server/open-s... A fully contained package for a small home server, with a Free Hardware design and SATA.


Yes, the Raspberry Pi is more a byproduct of the world having already moved towards ARM than a driver in itself. In the end it's just reusing an SoC which was developed for other applications.

And we had countless applications in mobile phones and automotive infotainment systems for Cortex A series CPUs and other embedded systems for Cortex M series CPUs before.


Also, the Raspberry Pi is based on possibly the shittiest family of SoCs available on the market. As an embedded engineer, it's hard to overstate how bad the BCM283* series is. Peripherals locked to GPU clock. Why!? Watchdog timer is straight up broken. How do you mess that up?


At least the new generation SoC that's in the Pi 4 FINALLY FINALLY uses a GIC instead of a Broadcrap custom interrupt controller. Now EDK2 firmware can present it as a reasonable almost-SBSA-ish ACPI system :)

But of course it's still a bit weird. For example, XHCI (due to being attached through a bad PCIe host controller) can only DMA to the lower 3GB of memory. In FreeBSD, we've never had to implement DMA limits for ACPI devices, because no real system had this kind of limitation before, and now I've had to write this: https://reviews.freebsd.org/D25219


Which SBC do you like, from that vantage point?


Different class than raspi (about 3x the cost all told), but the APU2 from PCEngines is awesome! Multi core amd64 processor, 3-4 gigabit NICs, up to 4GB ECC RAM. In the same class as the raspi, I prefer the BeagleBoard family.


Yes, the chip team at Apple is likely worth tens of billions on their own. Think about how many companies out there would pay top dollar to have the chip performance that Apple does. I know the ARM ISA != Intel's x86, but on a perf-vs-perf basis, if Apple-level-performance CPUs were available to the mass market, Intel would be in a world of hurt and would likely have to drop prices even more aggressively than it does when AMD puts out a competent chip.


> if Apple-level-performance CPUs were available to the mass market Intel would be in a world of hurt

Amazon are betting on this on the server side. They've developed a number of ARM-powered EC2 instance types: A1, M6g, C6g, and R6g.

They haven't yet made a burstable ARM instance type.


Amazon doesn't have access to Apple CPUs, although ARM's own designs have finally become quite reasonable for server workloads (and that's what Amazon is using).


Right, and I don't think the above person was insinuating as much, just that a company with large enough pockets like Amazon could take the ARM ISA and do what Apple did: make their own custom silicon for their own purposes. Or at least, that's what I hope they are doing.


Right, I hadn't meant to imply that Amazon are licensing Apple's ARM chips. I quoted Apple-level-performance CPUs after all, not Apple's CPUs.

As you both rightly say, the ARM CPUs Amazon are using are not the same as Apple's. Instead, they are custom built for Amazon. [0][1]

[0] https://aws.amazon.com/blogs/aws/new-ec2-instances-a1-powere...

[1] https://aws.amazon.com/about-aws/whats-new/2019/12/announcin...


Per watt, and relative to the amount of I/O, low-power Zen 2 is already more efficient than anything Apple has put out so far.

So I don't think it would have that much of an impact.


Got benchmarks to prove that? I’d be keen to see them.


> Apple just needs to thank whoever was in charge of acquiring PA Semi’s know how in 2008

Jim Keller as much as anyone I would think.


> The Intel world has stagnated in recent years, and I look forward to seeing the CPU market jump ahead again.

Really? AMD's recent moves with Zen/Zen2/Zen3/EPYC look like a big step forward. Zen2 chiplets are the biggest change in years. Zen3 IPC is supposed to be significantly better than Zen2 (17%). The previous-gen Ryzen was like 15W TDP, whereas the A12Z in the Mac Mini ARM is 15W TDP, but the Zen crushes the A12Z Bionic on benchmarks.

It's difficult for people to remember, but ARM came from really terrible performance to a spot where it is getting into the ballpark of x86, so the advancements look impressive. But that's like me going from $100 to $200: the gains look impressive, whereas if you went from $10,000 to $11,000 it looks like you are standing relatively still in comparison.

But x86 architectures have decades of maturity behind them, so getting a 17% IPC lift (Zen2 -> Zen3) or a massive reduction in TDP on such a complex chip isn't stagnation; it's actually MORE impressive IMHO.

ARM is going to hit diminishing returns, and soon the yearly perf boosts won't look as impressive anymore.

In the end, I think we'll see convergence of performance. ARM will still have a power advantage on mobile, because it doesn't carry as much backwards-compatibility legacy as x86 has to support. However, are MacBooks and Mac desktops going to be performance and price competitive with Linux x86? I doubt it.

First of all, ARM vendors haven't even come close to the GPU performance of NVidia's or AMD's discrete GPUs. An Apple A13 or a Mali is not going to compete with a laptop with an RTX 2060, 3060, or RDNA1/2. And secondly, just looking at AMD, it's still possible to lift performance per watt in x86.

I think the laptop space in the x86 realm is still exciting, because of what AMD and NVidia are doing.


There is no reason why you can't run AMD/Nvidia GPUs alongside ARM chips. The GPUs do not run x86.

Further, the average laptop consumer is happy with their Intel integrated graphics. All they do is browse the web and punch some numbers into Excel.


Not sure who the average consumer is, but all teenagers and many adults want games to work. Intel's integrated graphics don't cut it, especially with screen resolutions going up, and many people find that out after the purchase.


Intel's Xe integrated graphics is reportedly faster than AMD's newest APU family. The article I saw said up to 30% faster in many workloads.


Isn't "Xe" the name of the non-integrated, PCIe graphics cards?


These reports were more unsubstantiated than I originally realized, sorry for not noting that.

The news is based on leaked benchmarks from upcoming Tiger Lake mobile SoCs with markedly higher graphics scores. I suppose it is possible that these tests were done with a discrete GPU hooked up. Hard to say for now. But it seems like a logical step for Intel to upgrade their anemic integrated graphics with their next-generation stuff.


I would also like my laptop to run cool enough to, idk, put on top of my lap.


You might find it interesting that Apple made some very suggestive statements at WWDC regarding graphics performance, like “don’t assume discrete graphics is better than integrated”. I think it might be interesting to wait a bit to see what they come out with.


Right, so a device with 5000+ CUDA cores, 312 TFLOPS of Tensor Cores, hundreds of dedicated RT cores, 1+ TB/s of memory bandwidth to 8GB of dedicated VRAM, and 54 billion transistors, is going to lose to an SoC? Not likely.

Apple's view of what "discrete" performance is, is probably 2-generation old cut-down mobile variants. For example, their top-of-the-line uber-expensive Mac Pro ships with Radeon Vega II architecture from 2017. Meanwhile, AMD RDNA2/Navi and NVidia Ampere are about to ship.

From my view, they're at least 2-3 generations behind in performance, and their focus on mobile means they're always going to be behind given thermals. NVidia and AMD are focused on maximum performance, and they assume gaming takes place plugged into a wall socket. And because they're focused on maximum performance, especially for triple-A games and DCC, they're putting in features like DXR (DirectX Raytracing) with support for hardware-accelerated ray intersection. How long until I can run Minecraft with ray-traced shaders at 2k or 4k on Apple's GPU? (https://www.youtube.com/watch?v=AdTxrggo8e8)

I mean, Apple's engineers on ARM are good and they did a fantastic job improving the PA Semi IP, but I don't think they're going to take what's essentially a PowerVR architecture which was never competitive with discrete, and leapfrog NVidia and AMD.


Apple's GPUs have exceptional render performance. Consider how close they get to a GTX 1060 laptop already[1], and then consider that their upcoming chips can comfortably double GPU core counts and will be 50% stronger per core with even conservative generational improvements.

They do less well in compute, but even there they do well, and a 16-core A14 GPU is likely to be in the same league as a 1060.

[1] https://images.anandtech.com/graphs/graph13661/103805.png (not a perfect benchmark because of CPU bottlenecking, but there are no good benchmarks to choose)


Not exceptional or that impressive when you're comparing a 14nm GPU against a 7nm one. (Yes, it's fanless.)


A 1060 is 200mm² of pure GPU at ~100W, while an A12X's GPU is ~26mm² (approximated) running at ~10W.

Of course it's impressive. Even if that was somehow all down to the better node, their next generation SoCs will still use a newer node than NVIDIA's next generation GPUs.


You're comparing the 1060's entire die size, including NVENC, memory controller, PCIe controller, and so on, against the A12X's GPU-only die size. And I expect the 1060 is optimized for performance while the A12X is optimized for power (and temperature).

There's no reason NVIDIA should stay a process node behind Apple forever, because both use TSMC (and Samsung), and process improvements are slowing down.


Cropping to the inner part still gives ~140mm², or comfortably over 5x the size of Apple's GPU.

I don't really get your argument. Apple customers buying Apple Silicon Macs—which, again, will probably have a GPU over three times as fast as the A12X—aren't going to let hypotheticals detract from their powerful and power-efficient GPUs. ‘But NVIDIA didn't optimize for efficiency’ and ‘but NVIDIA hypothetically could have used a newer node than they did’ don't count for squat.


The average gamer doesn’t need and can’t afford that stuff. Some have just got used to having every bell and whistle. The average gamer is playing Fortnite, which works perfectly nicely on an A12Z. I tried it. Plugged my monitor into my iPad Pro and paired an Xbox controller with it and got a pleasant surprise.

The key is consistent performance and that is something they can manage with full control over API and silicon end to end. They are already doing this well on cores that are two generations old. I expect to see a good compromise on performance and power which is what we need for general purpose computing.

To be clear, I actually canned my gaming PC recently, which was a Ryzen 3700X and GTX 1660. I haven’t missed it, the big titles, or the pain in the arse of getting it stable and built in the first place.


Pain in getting it stable? What does that even mean? Was it unbalanced and falling off your desk?

What did you have, crashing games? It has been like 7 years since I had any stability issues on Windows for gaming.


Just annoying little issues, like the six months it wouldn’t power up the first time. Turned out to be a GPU initialisation problem. But when you only have one of each type of component and the only way of debugging is swapping parts until you find the problematic one, you have to live with it. For some people it never works, and then you have to deal with several vendors you bought the parts from, all of whom point at each other. And if you buy off the shelf then you really have no idea what crap you’re being sold unless you want to pay a very high premium.

Every PC build takes on a lot of risk. Not one I’m willing to take any more. Commercial desktops are either junk or too expensive. I’d rather use a single-sourced, well-integrated appliance.

Edit: also the dubious nature of some parts is a worry as demonstrated here https://www.reddit.com/r/WatchPeopleDieInside/comments/g0420...


Macs are far more expensive for gaming compared to good-quality prebuilt PCs.


Yes they’re terrible for gaming. I’d buy an Xbox instead.


In the PC world a discrete GPU means you get a second chip with a thermal/power budget of 70 to 200W. On a SoC the thermal budget is more limited, so even if they integrate nVidia IP on silicon, they can't clock it as high.

Of course by "integrated" Apple could mean they just solder a GPU chip to the board. Or they built a surprisingly performant GPU core for their SoC, though I wouldn't expect it to be much faster than what AMD integrates in their CPUs (30% at most? Still much slower than discrete).


> The previous gen Ryzen was like 15W TDP, whereas the A12Z in the Mac Mini ARM is 15W TDP,

Source for 15W A12Z TDP? And the benchmarks?


Mostly from developer forums. The A12 has been estimated at 5-6W by reviewers who used power-measuring techniques (e.g. AnandTech, etc., running stress-test benchmarks). The A12Z on the Apple DTK has 2 extra CPU cores and 3 extra GPU cores, so people have estimated, based on scaling and clocks, anywhere from 10W to 15W if you consider the device on plugged-in power, running all cores, with clock boosting.

Whether it's accurate or not, the reality is that Ryzen Mobile is pretty efficient for its performance, and Intel Lakefield is basically going with a "big.LITTLE" architecture as well (x86 + Atom). AMD in particular is a fierce fighter, and they're not just going to sit still and let ARM capture the laptop and desktop markets, especially since a huge existing ecosystem gives them an advantage, and many people with work or business laptops aren't necessarily going to throw away everything to get an extra mm of thinness or slightly longer battery life. After all, if battery life were the be-all and end-all, mobile phones could have doubled battery life a long time ago by simply doing less. Putting super-high-pixel-density displays and constantly running background services in phones is one reason why battery size keeps going up but battery life hasn't improved as much.


> Intel Lakefield is basically going with a "big.LITTLE" architecture as well (x86 + Atom)

Current Cinebench R15 benchmarks [0] of real-world Lakefield-based hardware put this CPU 10+% lower than a 2013 AMD A6-1450 8W 'Temash' CPU [1]. Let's say it's in the same ballpark. Unless it's a benchmarking fluke or an early hardware issue, Lakefield is nothing to write home about.

[0] https://www.notebookcheck.net/Exclusive-First-benchmarks-of-...

[1] https://www.cpu-monkey.com/en/cpu-amd_a6_1450-143


> 3 extra GPU cores

Four extra, eight total. The A12X is the one with a core disabled.


> The previous gen Ryzen was like 15W TDP, whereas the A12Z in the Mac Mini ARM is 15W TDP, but the Zen crushes the A12Z Bionic on benchmarks.

Hmm? The A12X beats the 3700U in Geekbench 5's single-thread, multi-thread and compute benchmarks, and not by trivial margins.

Your power comparisons are also unfair; if the A12X ever draws 15W, it would be way at the edge of its power curve on all cores[1], whereas 15W on the 3700U is a tepid clock for the Ryzen. The A12X is much more reasonably considered an ~8W part.

[1] https://images.anandtech.com/doci/14892/a12-fvcurve.png


>whereas the A12Z in the Mac Mini ARM is 15W TDP

There is no chance the A12Z in the DTK is a 15W chip.

Do you have a source?


You are looking at x86 from AMD's perspective, which really is just catching up to Intel in many aspects and enjoying the advantages of TSMC and chiplets.

The rumoured impressive Zen3 IPC improvement will still be below Willow Cove / Tiger Lake, a uArch that was supposed to be out in 2018. Intel's 7nm (in between TSMC's 5nm and 3nm nodes) Golden Cove was supposed to launch this year in Intel's original roadmap.

From roughly 2.5 years ahead of the industry to now lagging behind 1.5 years. That is 4 years of difference.

That is why "the Intel world has stagnated in recent years". Four years is a very long time in the tech industry.


From the limited data regarding the Tiger Lake benchmarks floating around, it could be just as much of a revelation as the Zen2 jump you mentioned. The GPU is much more powerful than Ice Lake's, and the CPU seems to have benefited from the higher clock speeds possible on the more mature 10nm process. TDP is said to be the same, but it's hard to go by Intel watts until someone does a battery life test.


I was a bit surprised the article didn't mention Windows on ARM at all. Following the Apple announcement, I managed to snag a Windows laptop that uses the Qualcomm Snapdragon 850 ARM SoC for dramatically less than MSRP on eBay. (To be fair, they were selling it for parts since they couldn't figure out how to remove the password. Wiping the drive and reinstalling Windows was easy enough.) For the most part, it feels just like Windows. Every app I've downloaded has _just worked_. That being said, there's definitely at least one app that won't (Wireguard since it requires an ARM64 driver to work). I've actually been tracking which software provides an ARM64 version[1]. Sadly, it looks like virtually every toolchain still needs to update to support ARM64 on Windows. I'm tracking a handful of GitHub issues, and support is definitely in the pipeline, but it's slow going. For example, .NET _still_ doesn't support Windows on ARM, despite the fact it's the flagship way to build apps on Windows, and ARM64 Windows devices have been available for almost two years.

[1] https://iswindowsonarmready.netlify.app/
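
For anyone curious how an app can tell whether it is one of the ones "just working" via the x86 emulation layer, here is a minimal sketch, assuming a recent Windows 10 SDK (IsWow64Process2 has been available since Windows 10 1709); it is an illustration, not production code:

    /* Rough sketch: detect x86 emulation on Windows on ARM. */
    #define _WIN32_WINNT 0x0A00   /* target Windows 10 so the prototype is visible */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        USHORT processMachine = 0, nativeMachine = 0;

        if (!IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
            fprintf(stderr, "IsWow64Process2 failed: %lu\n", GetLastError());
            return 1;
        }

        /* IMAGE_FILE_MACHINE_UNKNOWN for the process means "not a WOW64 process",
           i.e. the binary is running natively on this machine. */
        if (nativeMachine == IMAGE_FILE_MACHINE_ARM64 &&
            processMachine != IMAGE_FILE_MACHINE_UNKNOWN)
            printf("x86 build running under emulation on an ARM64 host (0x%04x)\n",
                   processMachine);
        else
            printf("running natively (machine 0x%04x)\n", nativeMachine);
        return 0;
    }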


Apple has tight enough grips on their hardware and software ecosystem that they can force major architectural changes onto the userbase with a hardline take-it-or-leave-it attitude, which led them through the previous three transitions. It's much harder to succeed in massive transitions when the userbase is not being forced to and are perfectly free to stay where they are and avoid the transitional inconveniences (Itanium, IPv6, Windows on ARM, etc, etc, etc.)


They also have a fair amount of software out of the gate as well as experience developing for the ISA.


And on the Apple AppStore, developers ship LLVM-IR that can be compiled to different architectures (ppc64le, x86_64, arm64, etc.), instead of pre-compiled binaries.

So when Apple releases a new chip, it can just re-compile the LLVM-IR to it, to make use of newer features and compiler optimizations for that chip.

Basically, the only applications that have to do anything to transition to Arm on Apple are those that are not using the AppStore... which from the platform's perspective, is kind of their own fault.


Bitcode is architecture-dependent, you can’t just recompile it for a random target. Additionally, Mac AppStore apps are shipped without Bitcode.


> Bitcode is architecture-dependent, you can’t just recompile it for a random target.

In general, you are correct. LLVM-IR is architecture dependent. Things like the size of a pointer, the size of an `int`, parts of the calling convention, or "architecture-specific defines in C code" have already been "hardcoded"/expanded into the generated IR.
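
A toy example, assuming clang as the frontend (the function names are just illustrative), of how target details get folded in before the IR/bitcode is ever emitted:

    /* The target leaks into the IR as soon as clang lowers the C,
       well before any bitcode reaches Apple. */
    #include <stddef.h>
    #include <stdio.h>

    size_t pointer_size(void) {
        /* sizeof(void *) is folded to a constant by the frontend, so the
           "portable" bitcode already encodes the pointer width of the target
           (8 on arm64/x86_64, 4 on 32-bit ARM). */
        return sizeof(void *);
    }

    #if defined(__aarch64__)
    /* Architecture #ifdefs are resolved before IR generation too; only the
       branch selected for the original target survives into the bitcode. */
    int built_for_arm64(void) { return 1; }
    #else
    int built_for_arm64(void) { return 0; }
    #endif

    int main(void) {
        printf("pointer size baked in: %zu, arm64 branch taken: %d\n",
               pointer_size(), built_for_arm64());
        return 0;
    }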

In practice, you can easily re-compile x64 IR to arm64, as long as the IR does not use, e.g., "arch-specific" LLVM intrinsics (like explicitly using, e.g., NEON intrinsics). Apple does not let you ship this kind of bitcode to the AppStore, so if you want your software to use NEON on iOS, you need to call the opaque Apple math libraries, which they can just provide for a different architecture.

The other thing you need to worry about is, e.g., calls to system libraries (e.g. you can't call a Windows API, generate IR for it, and then try to re-compile that IR for Linux, because that API will fail to link). In practice, if you provide the symbol, everything will work, and the system APIs for macOS, iPadOS, and iOS are quite similar.

The x86-macos IR won't be recompilable to Linux or Windows, but it can be made recompilable to arm-macos.

Note also that this is not the first time Apple has done this with the AppStore. They silently migrated all apps from 32-bit ARM to 64-bit ARM, recompiling the software for you. This hints that they have additional capabilities for changing some architecture details in the AppStore's IR, like pointer sizes (these apps are not running in 32-bit compatibility mode on 64-bit CPUs, but are running as native 64-bit apps).

>Additionally, Mac AppStore apps are shipped without Bitcode.

The apps themselves are shipped to users as binary blobs, but developers ship bitcode to the AppStore, which Apple has been recompiling for each new ARM processor in their iPhones (so on an iPhone 11 you get a different binary blob than on an iPhone 6, but the developer didn't ship two blobs, nor did they recompile their app).


Commenting on this post from my Galaxy Book S, which is a Snapdragon ARM laptop running Windows 10 Pro.

It's currently driving 2x 1080p screens over USB C + DisplayPort chaining. It can drive 3 of these chained in this way. Keyboard and mouse are connected to the USB hub on the display, so it's a single cable connection from my laptop to start working in the morning.

I hope to have a phone at some point that runs Windows 10 Pro, that I can plug into a USB C plug and get straight to work, that would be amazing!

MSFT Edge and Windows Terminal both have ARM64 builds (and so does VS Code Insiders), and WSL works great. I work with tmux+nvim on Ubuntu ARM, which runs almost everything I need.

A lot of stuff _doesn't_ work though, but the stuff that does works well. I hope that we see some inexpensive (200-300USD?) NUC style devices that can run Windows 10, it would make for a great little computer.


I am quite sure that .NET supports Windows on ARM via UWP, and has been doing so since Windows 8 got released.


That's .NET Native, which is a completely different beast unfortunately. .NET Core is set to add support for ARM64 in .NET 5. Furthermore many .NET apps for Windows were built using WPF, which won't support ARM64 until .NET 5.


It is compatible with .NET Standard 2.0 and the latest version makes use of .NET Core 2.3.

So plenty of stuff is available.

Besides it is not like one can blindly run .NET on other platforms, because plenty of applications make use of COM or DLLs written in C++.


Sure, and that's why a good number of UWP apps have ARM64 versions. But UWP apps represent a minuscule fraction of the .NET Windows apps out there. What makes it even smaller is the fact that .NET Native is only a subset of .NET Standard, so some things like Reflection don't work without some additional tweaking. Can you write a UWP app using .NET that runs on ARM64? Sure! But basically every non-UWP Windows .NET app needs to wait for .NET 5 to build for ARM64.


Is plain win32 API supported on arm64/Windows? Is there a version of MSVC? Or does one use mingw?


Hello,

Win32 is supported with MSVC just fine.

You can also use MinGW with LLVM (not MinGW with GCC), available at: https://github.com/mstorsjo/llvm-mingw


Win32 is supported, and a supported tool chain comes with Visual Studio.


I wonder why Apple is abandoning Boot Camp if Windows on ARM is fine.


My opinion is that Apple needed Boot Camp to reassure users they could still use their Windows software even if they were switching to Mac, back when Boot Camp was released. Now Apple and its ecosystem don't need Windows anymore; in fact, they would rather you not use anything other than iOS and macOS...


Alternatively, it looks likely that Apple will use their own GPU designs in at least some, if not all, of their ARM Macs. So running Windows on ARM would necessitate Apple porting drivers for their GPU to Windows. I have to imagine they did a cost analysis on that and decided it wasn't worth the time to support a competing platform which most users would only want for its own compatibility layer, some for games, which would need proper DX9/10/11/12 drivers for said GPU.

(This is also probably part of the reason Apple deprecated OpenGL, I imagine.)


>Alternatively, it looks likely that Apple will use their own GPU designs in at least some, if not all, of their ARM Macs.

Apple Removing Support for AMD GPUs in macOS Arm64

https://news.ycombinator.com/item?id=23754055


This is not "removing", this is "not adding yet"?

Quite likely they'll support eGPUs, and maaaaaybe big MacBook Pros with Apple SoC + AMD GPU could eventually happen??


OpenGL is still supported (and still marked deprecated) on Apple ARM


Microsoft only makes the ARM version available to OEMs; there is no retail version you could use.

I suspect the real reason is people don't use it much anymore, there were some figures published that showed the number of bootcamp users had fallen from 15% to 2% over its lifetime.

I still use it although I will probably be retired by the time my current machine needs replacing.


I'm quite interested in what Nvidia is planning. Nvidia purchased Mellanox last year (they make high-end network gear often used in compute clusters). Nvidia is very involved in machine learning with their GPUs. The only missing piece of the puzzle is CPUs, which meant either Intel or AMD (a direct competitor). ARM changes things: it means they're not dependent on those companies (or the x86 licensing clusterfcuk preventing newcomers), and they have some room to tailor it to fit their needs (like Amazon and Google recently).

These definitely are exciting times. Most Brits born in the '80s and '90s will have used Acorn computers in school; no one predicted what would grow out of them (they weren't particularly fast).


You seem to have overlooked the Tegra X1, which powers 55+ million Nintendo Switches. Nvidia seems to already be invested in the ARM CPU market and doing relatively well. Their X2 powers the new Shield devices and who knows what else in the future.

Yes, that's a very tiny number compared to the billions of CPUs made by Intel, Apple, AMD, Qualcomm and Samsung, but the point is, they are already doing ARM CPU integration and have carved out a decent niche for themselves.


Also, the X1 uses an ancient A57. To enumerate, since then we've had the A72, A73, A75, A76, A77, and the announced A78 and (interestingly named) X1.

That's 7 major generations behind the curve.


Video game CPUs aren’t generally where the puck is going, sadly. Cell was interesting but other than that we’ve had a history of picking a semi-popular architecture and usually some old chip (Nintendo) or just a normal x86.


You are totally mistaken. Have you seen what the PS5 is doing with memory bandwidth? It's amazing. The entire PC industry is going to end up emulating their architecture.


Memory bandwidth (by using GDDR6 for shared RAM/VRAM) is impressive compared to a PC, but it was already done by the PS4 (GDDR5), and it's not a special or difficult architecture like Cell. It's just the difference between general hardware and gaming hardware.

For normal PC workloads, such bandwidth is not useful, and GDDR increases latency.

IMO the interesting things, like HBM, are happening in the server/HPC world.


I was talking about PS5's secondary storage being essentially as fast as RAM: 9GB/sec throughput with custom 12-channel controller to the new PCIe 4.0 SSD.

https://screenrant.com/ps5-io-ssd-speed-tech-specs-playstati...


It's impressively fast, and adopting it as standard equipment is great, but similar SSD products should be available for PCs soon. It doesn't feel innovative like Cell did.


Right, so you could say that "being available for PC soon" means that's "where the puck is going", correct? Which was my original point. The PS5 is a video game console, but it isn't just commodity hardware; it's actually pushing the limits of computing in a way that the PC industry will soon adopt.

Also, it might be much longer than "soon". Though Sony is targeting the next-gen SSD specs (unlike Microsoft), there will need to be a new generation of motherboards and CPU upgrades before PCs will be able to come close to the PS5's bandwidth, even with the exact same next-gen SSD.


I don't think the PS5 will make SSD manufacturers develop faster SSDs. It's already on the roadmap.

But I do want the PS5 to make a fast SSD a baseline requirement for PC games, like you say. I think game consoles have the power to standardize great tech, but they no longer produce innovative tech for computing.

"A new generation of motherboards and CPUs" was available in 2019.


Nvidia has already embraced ARM, for example the Jetson machine learning platform is based on custom ARM SoCs: https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...


> they weren't particularly fast

I remember them being amazingly fast. I was comparing them to Amigas in 1987 though. The Motorola 68000 in the Amiga 500 was less than 1.0 Dhrystone MIPS, while the Archimedes A3000 was about 4.0. For reference, the machines cost about £499 and £599 respectively in 1987.

As a result, the frame rate, poly count and screen size in Zarch on the Archimedes were higher than Starglider II on the Amiga :-). Look: https://youtu.be/MNXypBxNGMo?t=36, https://youtu.be/edDLqlG4quw?t=132


I remember switching on and being at the desktop in .. I don’t know, 5 seconds? Consistently as well. No random waits.

Apps opened so fast you couldn’t perceive a delay between double-clicking and them appearing. New windows for apps were also instant.

The only times I noticed waiting was when doing something that legitimately seemed like I should have to wait. A big operation in a paint program. Starting up a game.

I think this consistent ‘zero delay’ came through firstly the fact that lots of the OS was in ROM, then the tight design and coding of the OS, then perhaps that the ARM CPU was pretty fast.

Running RISC OS on a Raspberry Pi natively, it’s still just as snappy for normal operations. Things that take significant CPU are also now fast, thanks to the several-hundred-fold increase in MHz.

I’d love to see another ‘personal computer’ OS appear which focuses on making the UX feel like it’s all working hard real time (or as close as it can) so that we can get back to the joy of feeling that the machine is consistently predictable in how it responds. Today’s ‘personal’ OSes feel more like driving a car with automatic transmission, steer-by-wire and several mechanical problems that cause it to randomly fail to respond in the expected time or just do something entirely different.


Nvidia is partnering with Ampere on ARM, and AMD on x86 (!). I would not be surprised if Nvidia made an offer down the line once they figure out whether Ampere is worth the money.

Right now, getting apps ported to ARM seems a lot more likely than AI/ML porting their work off CUDA. And I am wondering if CUDA will end up like Excel for Microsoft. It is not that we can't move off Excel, but the code and value running on CUDA/Excel will not be worth the effort of porting away from it.


> One possible downside of the new Macs, is that Apple keeps talking about the new secure boot feature only allowing Apple signed operating systems to boot as a security feature. Does this mean we won’t be able to run Linux on these new Macs, except using virtualization?

No, you can turn off secure boot.


Was it just a gaffe when Federighi mentioned you couldn't install alternative operating systems in his recent Daring Fireball interview, then? I admit I could have misunderstood him, but it seemed like that was what he was saying.


An important thing to recognize is that every version of macOS (and iOS!) so far has retained some official way to boot a custom/unsigned/"unclean" kernel, specifically for kernel (or kernel extension) development.

When SIP (rootless v1) was introduced, macOS came with a way to disable it, for the sake of kernel developers. When APFS was introduced by default, macOS retained two workarounds for kernel developers: both the ability to boot from alternate APFS volumes within the APFS container; and also the ability to install to/boot from HFS+. And when the system volume became truly read-only in Catalina (rootless v2)—that's right, even that restriction could be overridden via csrutil, specifically to facilitate kernel development.

Apple is never going to lock themselves out of booting custom OSes that are almost, but not quite, macOS; because that's how new macOS gets made.

And, of course, as long as you can boot something that's not literally macOS; then that same mechanism can always be abused to boot something that is much less macOS, or in fact not macOS at all (but may still seem to be macOS, from the bootloader's perspective.)


Except that the future is user-space-only extensions, as they reiterated once more at this year's WWDC while announcing additional support for newer user-space extension APIs.

So actually, going forward they only need to make it easier for their own kernel teams, not third parties.


I would hope that there is always a way to get code execution in kernelspace, though it may require flipping more and more switches.


From the contents of the talks it is obvious that if that exists, it will be for Apple internal access only.

Given how they keep stressing the security and kernel stability during those sessions.

They are turning macOS into a proper micro-kernel, by changing the engines while keeping the plane flying.


Couldn't they retain 100% of the capability and lock everyone else out of it via signed software? For others that need to cooperate, could they not hand out the capability to dozens of manufacturers without also providing that capability to millions of customers?


Apple doesn’t really seem to like ceding that control to users that way. It’d be really nice if you could change the root of trust on your device, but with Macs it’s either Apple or nothing, and on iOS it’s just Apple.


Nitpick: SIP and rootless are the same thing.


Followup nit: the read only system partition is named as such and pronounced as an acronym.


Did he say you couldn’t install them or that they wouldn’t be “supported”? Those are two different things.


I'm aware they're two different things. Believe it or not, the rest of Hacker News is not five years old.


It's very easy to misunderstand things due to phrasing. I wouldn't take it personally, nor would it surprise me if a 30 year old friend of mine got confused.


Haven’t gotten to that part yet! Maybe I will on my next walk. But I would guess that he was talking about Linux virtualization being good enough that you wouldn’t need to do this and/or no Bootcamp support?


Follow up: I got to that point, I think the words were “we’re not direct booting an alternate operating system” in reference to not running Windows in Bootcamp on x86 and to reaffirm that Bootcamp was not a thing, not that there would be no support for booting other things.


Not on ARM macs. The option is missing. The only remaining option is for allowing older versions of the OS, which must still be signed by Apple.


I think I remember a WWDC talk mentioning that it would be possible via a revamped csrutil from recovery.

Edit: this one, about 19 minutes in: https://developer.apple.com/videos/play/wwdc2020/10686/


The GUI does not display all options. You most likely can use csrutil to disable it, see the “Platforms State of the Union” WWDC talk.


You can on Intel-based machines, as far as I know it is not clear yet whether you can on ARM-based systems.


Pressure for that came from Microsoft for Windows 8 certified devices:

"Enable/Disable Secure Boot. On non-ARM systems, it is required to implement the ability to disable Secure Boot via firmware setup. A physically present user must be allowed to disable Secure Boot via firmware setup without possession of PKpriv. A Windows Server may also disable Secure Boot remotely using a strongly authenticated (preferably public-key based) out-of-band management connection, such as to a baseboard management controller or service processor. Programmatic disabling of Secure Boot either during Boot Services or after exiting EFI Boot Services MUST NOT be possible. Disabling Secure Boot must not be possible on ARM systems."[1]

This requirement is no longer present for Windows 10 logo-certified x86_64 computers,[2] but I've yet to see a vendor actually take it out.

For SOC systems, at least if you wish to have Windows 10 Logo, it makes it optional if I'm reading their spec sheet correctly:

"Requirement 10: OPTIONAL. An OEM may implement the ability for a physically present user to turn off Secure Boot either with access to the PKpriv or with Physical Presence through the firmware setup. Access to the firmware setup may be protected by platform specific means (administrator password, smart card, static configuration, etc.)"[3]

That said, that's only for Windows logo certified devices. I'd assume that if you don't intend to make your system Windows compatible, then you can do whatever you want.

------

[1]https://docs.microsoft.com/en-us/previous-versions/windows/h...

[2]https://www.pcworld.com/article/2901262/microsoft-tightens-w...

[3]https://docs.microsoft.com/en-us/windows-hardware/drivers/br...


I take delivery of a Raspberry Pi 4 tomorrow. I’m really hoping it will replace my MBP for almost everything I do — namely clerical office work, teaching high school CS, and web browsing. Exciting times.


I would be prepared to be underwhelmed at using a rpi as a desktop. Graphics drivers still need work.

This is where I think Apple can do it right. They can tune drivers and fully optimize system performance, since they basically own the whole stack, and won't be stuck with proprietary broken blobs or reverse engineering a 3rd party design.


I’ve watched a lot of YouTube videos of, ironically, people using Raspberry Pis to watch YouTube videos. The UI seems snappy, and I can live without full screen 1080p — the only thing that seems poor is full screen full resolution video.

Is there anything else I should be looking out for?

Raspberry Pi 4 is now OpenGL ES 3.0 certified, so I’m expecting it to only get faster with time. My main concern is a Pi5 being released and my “old” hardware becoming obsolete.


Here was my experience a few weeks ago: https://www.jeffgeerling.com/blog/2020/i-replaced-my-macbook...

Just as a helpful data point.


Followup: this was very helpful, thank you. As much for the technical content as the inspiration that it could be viable.

No, it didn’t really work out for you.

Yes, it’s working out great for me!

Konsole with Terminus is an aesthetically pleasing environment in which to work, 8 hours a day.

i3 in 4k, even at 30Hz, looks great. Vim renders fast (66-80ms keyboard latency.)

eog, evince, and gimp let me fiddle with materials. gnome-screenshot copies to the clipboard for rapidly sharing notes with others, based on what I have on screen.

davmail transparently connects dumb IMAP clients to the corporate email, address book, and calendaring systems. I have an XBiff alerting me to incoming Outlook emails!

REPL.it is slow in Firefox but fast in Chromium. It’s going to work just fine for managing pupils’ class work.

My home directory is encrypted and mounted on login. Mutt and offlineimap let me handle last term’s mail in bulk.

Python and git and tmux let me prepare and mark work from the students, or at least they should do come next term. I have all my little tools working to give me an excellent markdown environment for note taking on classes and pupils and meetings. Asciidoctor for the more typographically demanding class materials.

Absolutely all of this was a breeze to set up because all of these technologies just work in Raspbian/Debian on armhf. It has, so far, been unbelievably clean and pleasant.

I even have LXC working locally, for running Unix and IPv6 experiments with multiple little Alpine “VMs”.


Thanks! That’s very helpful.

For reference, my day is spent:

• editing code, markdown, and asciidoc for classes;

• managing pupil work in repl.it, GitHub, and Google Classroom;

• managing pupil reports and records of work in custom school IT web applications; and

• emailing with colleagues and pupils with outlook’s webmail.

If I can get a snappy and correctly scaled terminal and browser — which I think you’ve managed after some effort — I hope it will work out well.

I have a 4k LG display at home which might be the biggest pain point. Rest assured though it’s good ole 1080p all the way, at school: HDMI1 for my private desktop and HDMI2 for the 80” class display.

I am optimistic this is a workload that an ARM desktop can handle.


I wouldn't expect a pi5 any time soon. Maybe another revision of the pi4 or a new compute module system that allows a proper pi4 compute module (apparently not enough pins on the sodimm form factor they use currently to give the pcie lanes and other things the processor supports).


It doesn't matter if it's OpenGL ES 3.0 certified if the drivers don't expose it, and OpenGL ES isn't really well supported on the desktop compared to standard OpenGL or Vulkan. Android is a better environment for OpenGL ES. In general it doesn't feel like there is a concerted effort to make mobile SoCs first-class citizens for desktop distros. The open source graphics drivers for the RPi seem nowhere near reaching a polished state anytime soon, and those provided by Broadcom seem hacky. I think there is a better effort going on with getting Mali GPUs supported, but that's not what the Raspberry Pis use.


> This is where I think Apple can do it right.

While Apple has open sourced a few things, I doubt they will open source their graphics drivers. Though one can hope.


I don't expect them to open source anything. I expect them to produce competent drivers for macOS and provide the first truly decent and performant desktop experience for an ARM SoC.


Honest question. How is that any better than a small laptop? Is the price point that makes the difference?


The price, and I’m also interested in portability and robustness. My plan is to hot desk with this thing between three locations: my office in school, my classroom, and my home office.

I also want something weird. Kids kind of look down on PCs and especially Linux. It’s seen as kind of crappy compared to the shiny of iPhones and Windows gaming rigs. I hope to show them that there’s a third kind of computer with which one can do a lot, that also happens to be pocketable and under $100.

(And also: ANSI keyboard Pinebook Pro has been sold out for weeks.)


You could do the same thing with a $100/150 NUC and get better performance and software compatibility - https://www.amazon.com/Wintel-Pro-Mini-ordenador-T8/dp/B083V...


I wish Intel or some other company would make a usb-c pc stick. So that I can plug it into a usb-c hub, do work, then later move to another usb-c hub and continue.



The RPi4 is excellent as a video player using LibreELEC/Kodi, but for some reason their desktop Linux distro still doesn't support video acceleration, so be prepared for sluggish performance for a while. I also experienced lots of Firefox crashes when playing YouTube videos from the desktop, but I can't tell if that's due to the interface slowness from the above reasons.


> their desktop Linux distro still doesn't support video acceleration

Works fine on my Pi4 with official 32-bit Debian Buster OS: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo

Based on CPU usage, the included VLC player uses same API.


I never understood this obsession with the Pi as a desktop machine; it's such a poor choice. A 2013 Intel Atom has more oomph, costs the same, and that's even before we get into the potato storage.


Portability and first class Linux support are big selling points. It just works.

(It literally did just work — shipping arrived first thing this morning, armhf Raspberry Pi OS installed and running within half an hour.)

I’d love to try an Atom / Intel compute stick if you had a recommendation though.


The NanoPi M4V2 may be better. Plus you can install NVMe drives in it for additional I/O perf.


I love the diversity in this space but unfortunately I’ve been burned by off-brand pi stuff before.

Pine’s RockPro64 and Libre Renegade Elite are two lovely RK3399 devices that I’ve tried. However, each has sufficient quirks that something more mainstream like the Raspberry Pi 4 (with a wide user base, more Google hits for common problems, etc) will hopefully let me focus on using the device as opposed to getting it to function.

Non-RPi SBCs also seem to require a lot of off-brand community patches to get them working. I’m pretty wary of bringing an OS build which I downloaded from a third party github page onto a campus full of children. I don’t have any reason to believe the community builds are nefarious, but it’s less of a risk to stick with Raspberry Pi, IMHO.


> Pine’s RockPro64 and Libre Renegade Elite are two lovely RK3399 devices that I’ve tried. However, each has sufficient quirks that something more mainstream like the Raspberry Pi 4 (with a wide user base, more Google hits for common problems, etc) will hopefully let me focus on using the device as opposed to getting it to function.

The Pinebook pro is really nice (I've had one since the beginning of the year) but I completely share your sentiment. I really like mine and spend almost no time fiddling with it now. It made a really nice, reliable daily driver for my son this spring during the sudden and unplanned distance learning the pandemic brought us.

But I would not recommend it to anyone whose computer I couldn't/wouldn't personally fix right now. It's close, but not there yet IMO. Bootstrapping, power management and sound all have more rough edges than running Linux on a normal x86 system does right now. I say this fondly: it's still a bit of a project right now. (That's part of the fun for me, but doesn't match your stated focus at all.)


Hindsight is 20/20, but Intel selling off their ARM division (XScale) almost exactly a year before the first iPhone was announced looks like a pretty bad call in retrospect.


It gets worse. Apple had apparently originally approached Intel to build ARM chips for the iPhone, and Intel said no because it thought it would never be successful: https://appleinsider.com/articles/15/01/19/how-intel-lost-th...


It's like Decca refusing to sign The Beatles.


The only reason Intel owned it in the first place was because they lost a lawsuit with DEC. Intel has repeatedly shown that it is just not that into ARM.


It's a fine call... if they could scale Atom down that low + eat the margin hit to gut ARM dev boards.

Neither of which they executed on.


Woah. Isn't Apple only licensing the ARM instruction set? They're still very much physically making their own chips, right?

The A13 bionic wouldn't physically share any components built by ARM?

If that's the case, it's like saying AMD use Intel chips, but really they just licence the instruction set


They’re designing their own chips, including their own CPU core (which implements the ARM instruction set). TSMC actually fabs them though. There may be some ancillary cores or blocks licensed from ARM, but nothing major.


I thought so. Full credit to Apple then for pioneering this effort.

I appreciate some commenters saying the Raspberry Pi helped, but really it's Apple who are taking the big step forward in popularising a third player in the CPU space. Unless they are willing to sell their chipsets, I doubt it will affect consumer products much outside of their own ecosystem.

Again, nothing exciting for ARM, but more about the future of Apple chips.


They definitely design the CPU but I wonder if they licence any IP for things like USB controllers or such? That would mean using ARM standard interfaces inside the chip and I can see upsides and downsides.


My fingers are crossed for Steam on ARM and distributing ARM binaries for games that devs cross-compiled. Given Valve's support for Linux, if an ARM client ran on Android too and could finally access my library on my phone, I'd be a happy customer.


Surprised he doesn't mention Nuvia, which could be a very exciting development in Arm servers (if it lives up to the hype of course).


Isn't Nuvia "just" making a server chip with ARM Neoverse cores? Nothing to sneeze at, sure, but if you want to kick the tires just launch a graviton2 instance on AWS?


AWS won't sell you the physical hardware.

Ampere will, though — Altra's going to be good. Also there's Marvell (ex-Cavium), but that's custom cores, and rather HPC-focused.

More choice is more better though, and since Jon Masters works for Nuvia, we know Nuvia is going to have the most compliant and least quirky hardware, especially in the PCIe area :) Though to be fair, Ampere's first generation is already very decent in terms of this, and Ampere even promised to eventually contribute support for their hardware to EDK2 TianoCore open firmware..


They seem to see Neoverse as more of a competitor than a component.

https://twitter.com/jonmasters/status/1237879913245368321?s=...

Jon Masters works at Nuvia.


What size is a memory page on other ARM CPUs? I think Apple's processors use 16KiB pages. Doesn't x86 software assume a 4KiB page size, unless it deals with huge pages?
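
For context, a minimal sketch of what "not assuming" looks like on a POSIX system: query the size at run time rather than hardcoding 4096, which is exactly the x86 assumption I'm asking about.

    /* Ask the OS for the page size instead of hardcoding 4096;
       Apple's arm64 kernels use 16 KiB pages. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);  /* 4096 on typical x86, 16384 on Apple arm64 */
        if (page < 0) {
            perror("sysconf");
            return 1;
        }
        printf("page size: %ld bytes\n", page);

        /* Code that masks addresses with a literal 0xFFF, or mmap()s at fixed
           4 KiB offsets, silently breaks once the page size is larger. */
        return 0;
    }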


According to [1] at 14:00, the final hardware will support 4kb pages. But the DTK only supports 16kb pages.

[1] https://developer.apple.com/videos/play/wwdc2020/10686/


They all support 4KiB, 16KiB, and 1MiB. It's required by the ARM spec, obviously with the exception of CPUs that don't have an MMU. Support for 16MiB pages is optional.


And for 64-bit ARM, the base page sizes are 4KiB, 16KiB, and 64KiB, with IIRC 16KiB being optional. If you want 52-bit physical addresses, you need to use 64KiB base page size, otherwise the maximum is 48-bit physical addresses; this is probably why RHEL uses 64KiB page size on 64-bit ARM.


64kb is also a better match for today's working sets to avoid TLB pressure. I imagine x86 would switch to it or something close if it weren't such a schlep.


Another advantage of a 64KiB page size is that it allows for a bigger L1. The L1 is usually VIPT (for good reasons), and to prevent confusing issues with aliases, it means that its maximum size is a single page per cache way. For an 8-way cache, that means the L1 can be at most 32KiB with a 4KiB page size; with a 64KiB page size, even a 2-way L1 cache could have up to 128KiB.
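
A toy calculation of that constraint, just to make the arithmetic explicit (the cache index has to fit within the page offset, so the alias-free maximum is ways x page size):

    /* Toy arithmetic for the alias-free VIPT limit: each way can span at most
       one page, so the maximum L1 size is ways * page size. */
    #include <stdio.h>

    static unsigned max_vipt_l1_kib(unsigned ways, unsigned page_kib) {
        return ways * page_kib;
    }

    int main(void) {
        printf("8-way,  4 KiB pages -> %u KiB max L1\n", max_vipt_l1_kib(8, 4));
        printf("2-way, 64 KiB pages -> %u KiB max L1\n", max_vipt_l1_kib(2, 64));
        return 0;
    }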


openSUSE started with 64KiB as well, but that was found to have massive memory overhead with smaller files, so it was switched back to 4KiB.


Yeah, see e.g. the Linus Torvalds rants about the "optimal" base page size.

You can probably make a good case for the optimal base page size being larger than 4kB today, but probably not by very much. Maybe 16 kB or so. But then it's not a huge advantage over 4 kB which has the benefit of compatibility, so, meh..


You're conflating two separate things. There's hardware-level page size, which is the logical unit of the mmu. On x86, this is always 4k. The kernel can map mmu-level pages into the address space of running processes. As an optimization, it might always map these mmu-level pages in batches. The batch size comprises the virtual page size; for instance, it could always map in batches of 4, for a 16k virtual page. But the CPU-level page size is completely fixed.


Page size is virtualized when running using Rosetta 2


Partially, Electron already has an issue open for the DTK because Chromium doesn’t like it: https://github.com/electron/electron/issues/24319


> Of course this computer runs Linux and currently is being used to solve protein folding problems around developing a cure for COVID-19, similar to folding@home. This is a truly impressive warehouse of technology and shows where you can go with the ARM CPU and the open source Linux operating system.

Extraordinary claim, but is there any evidence that Fugaku is super useful apart from making headlines?


I am not sure what to make of the fact that the article never says "x86", but keeps mentioning "Intel" and occasionally "AMD". Was that tailored to the expected audience, or is the author not confident enough about what "x86" means?


I would have preferred this to happen with RISC-V, though...


It will and much sooner than you think. Everything that is making Arm attractive right now is applicable to RISC-V but even more so, since it is so much easier to add custom logic.


Did Apple say they would move their entire line of Macs to Apple Silicon? I thought Federighi said they had « amazing » Intel-powered new Macs in their pipeline.


Yes, they announced a transition to Apple Silicon over two years. There are a few new Intel Macs left in the pipeline before everything switches over.


There are Intel Macs in the pipeline because the switch is not instant, and they want to sell at least some Macs for the next 6–18 months.


This is how you don’t Osborne your product line.

When they did this with PPC they said the same thing; buying a PPC Mac mini ended up being a bad choice.


Are Apple ARM chips the same as other ARM chips, or is there any "magic" that requires more work for developers to port software?


They will likely have some Apple specific features, but I doubt they'd want to make them incompatible to the point of requiring a software rewrite compared to other ARM chips. Just recompile your stuff, jump through whatever hoops you need to publish it, and that should be it.


Going by their iPhone chips: nothing special to worry about, other than page sizes which may not be an issue in the future depending on how Rosetta 2 works on their actual shipping hardware. There are some proprietary MSRs but they’re not for your use ;)


Apple licenses the ARM CPU architecture from Arm Holdings. My understanding is that Arm Holdings doesn't sell any hardware themselves.


Arm sells both ISA licenses (which is what Apple has) that allow you to make a fully custom microarchitecture, and pre-made cores (which is what e.g. AWS has, along with all the little embedded SoC manufacturers). But yeah, they don't really sell physical hardware.


Apple chips have one custom extension (AMX), but IIRC, they don't give third party developers access to this, so from a developer's perspective it's a standard ARMv8 chip.


Pure speculation on my part, but I think Apple wouldn't want people installing either other OSes on their machines or their OS on other hardware, so I would expect some hardware/firmware/software lock-in to make it impossible or really difficult to build ARM hackintoshes, and the other way around. Hoping to be wrong, though.


In his interview with Gruber, Federighi explicitly said that they wanted hobbyists and hackers to be able to play with the Mac hardware, so it would be possible to boot other OSen. Don’t have a link right now, but it was clear that he was saying that the platform would be hackable.

Proof will be in the pudding of course.


I hope so, too!


Where can a developer buy one of these $500 ARM minis?



You can’t buy them - it’s more of a rental situation.


And you need approval and all, plus there's a bunch of terms and conditions…


Like living in one of the EU countries that Apple have decided are part of the EU rather than all of them. <grumbling from a Balkan EU country not on the list>


Read the first paragraph, then wished that the font on mobile was a lot bigger.


"Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful."

https://news.ycombinator.com/newsguidelines.html


Is this a new addition to the guidelines? I could have sworn it wasn’t there before…



Thanks. I didn't see that this was updated 10 days ago.

Now my comment has been downvoted into oblivion.


Ok, I've brought your comment back from oblivion. One-time offer.


Apple should start investigating a transition to RISC-V IMHO


Given that Apple was one of the original founders of Arm, and has an ARM architectural license (the most gold plated license Arm sells, basically allowing the customer to create their own chips implementing the ARM ISA), I'd guess Apple would be the last one to switch from Arm to RISC-V (assuming such a switch would ever happen).


And throw away their billions of dollars in investments in making the fastest ARM processors for mobile devices?


Apple Silicon chips are not ARM processors. They have ARM-compatible instruction sets. They are not based on 3rd-party ARM cores.


That's like saying AMD chips aren't x86 processors.


No, it is like saying AMD chips aren't Intel processors.



