If you're interested in more background on one user-visible problem this new GPU architecture directly attacks, a good example is "shader compilation stutter" (though there are many others).
These are two excellent posts that go deep on this:
The Shader Permutation Problem - Part 1: How Did We Get Here?
The Shader Permutation Problem - Part 2: How Do We Fix It?
In particular, the second post has the line:
We probably should not expect any magic workarounds for static register allocation: if a callable shader requires many registers, we can likely expect for the occupancy of the entire batch to suffer. It’s possible that GPUs could diverge from this model in the future, but that could come with all kinds of potential pitfalls (it’s not like they’re going to start spilling to a stack when executing thousands of pixel shader waves).
... And some kind of 'magic workaround for static register allocation' is pretty much what has been done.
Does Apple document exactly how many actual, true cores there are inside their GPUs? It's always confusing when they say "40-core GPU", but I assume these are shader cores, each of which can internally execute (per the video) "many thousands" of parallel execution paths.
So how does one translate to an equivalent in "CUDA cores" type terminology?
What Apple calls a GPU core seems to be roughly the same as what Nvidia calls a "streaming multiprocessor".
For example, a GTX 1080 GPU has 20 streaming multiprocessors (SMs), each containing 128 cores, each of which supports 16 threads.
Meanwhile Apple describes the M1 GPU as having 8 cores, where “each core is split into 16 Execution Units, which each contain eight Arithmetic Logic Units (ALUs). In total, the M1 GPU contains up to 128 Execution units or 1024 ALUs, which Apple says can execute up to 24,576 threads simultaneously and which have a maximum floating point (FP32) performance of 2.6 TFLOPs.”
So one option to get a single number for a rough comparison is to count threads. The GTX 1080 supports 40,960 threads, while the M1 supports 24,576 threads.
There's obviously a lot more to a GPU — for starters, varying clock speeds, ALUs with different capabilities, memory bandwidth, etc. But at least counting threads gives a better idea of the processing bandwidth than talking about cores.
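If it helps, the counting above is just multiplication; a tiny sketch using only the figures quoted in this thread (not datasheet values):

```python
# Back-of-envelope resident-thread comparison, using only numbers quoted above.

gtx1080_sms = 20
gtx1080_threads = gtx1080_sms * 128 * 16   # 128 "cores" x 16 threads per SM -> 40,960

m1_cores = 8
m1_threads = 24_576                        # Apple's quoted figure, ~3,072 per core

print(gtx1080_threads, m1_threads)         # 40960 24576
```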
Just for clarification: The 1080 has 20 SMs with 128 FPUs each. Each FPU can perform 2 FLOPs per cycle (fused multiply adds). Combined with the frequency of 1607 MHz we land on the advertised 8.2 TFlop/s.
The fact that each SM can keep up to 2,048 threads resident (the maximum CUDA block size on that card is 1,024) doesn't do much for the theoretical FLOPS. Only a fraction of those threads can be active at a time; the others are idling or waiting on their memory requests. This hides a lot of the memory latency.
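The arithmetic behind that 8.2 TFLOP/s figure, as a quick sketch (the clock and unit counts are the ones from this comment):

```python
# Peak FP32 throughput = SMs x FPUs/SM x FLOPs per FPU per cycle x clock.

sms = 20
fpus_per_sm = 128
flops_per_cycle = 2        # a fused multiply-add counts as 2 FLOPs
clock_hz = 1.607e9         # 1607 MHz base clock

peak_tflops = sms * fpus_per_sm * flops_per_cycle * clock_hz / 1e12
print(f"{peak_tflops:.2f} TFLOP/s")   # ~8.23 TFLOP/s
```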
For sure. Just counting threads doesn't give anything like a complete picture of performance.
It's still somewhat interesting because threads are a low-level programming primitive. If you can come up with work for 40k simultaneous threads, you can use the GPU effectively. For some tasks this parallelization is obvious (an HD video frame has 2 million pixels and shading them independently is trivial), and of course often it's anything but.
If the architecture is vastly different, these comparisons become sort of meaningless, though. The ultimate performance is determined by all the tiny little bottlenecks, like how quickly the architecture can move data between different types of cores, memory, cache, etc.
Apple has always been really good at parallelism, which is why they get so much performance at such low power consumption.
Yeah, direct spec comparisons are pretty much meaningless, but HN loves specs and seemingly hasn't learned that, after about 15 years of it being true.
I see it with every major product announcement; the worst are usually Apple threads, but it's not confined to them.
Question from a GPU novice. I presume a thread is the individual calculation performed on one element of the vector? Can the 128 cores, or the 20 SMs, perform different operations at the same time, or do all 24,576 threads perform the same operation at the same time on a vector of length 24,576?
"128 * the-number-of-cores" of threads can make progress truly in parallel (at the same time).
24,576 threads (or however many; I didn't validate the number, and it depends on the occupancy, which depends on thread resource usage like registers, and therefore on the shader program code) is how many threads can be executed concurrently (as opposed to in parallel), i.e., how many of them can simultaneously reside on the GPU. A subset of those at any time are actually executed in parallel; the rest are idle.
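As an aside, here is a minimal sketch of how that register-driven occupancy limit works; the register-file size and thread cap below are made-up round numbers for illustration, not the specs of any particular GPU:

```python
# Toy occupancy estimate: how many threads can stay resident on one GPU core/SM
# given the core's register file and the shader's register usage.
# Both limits below are assumed round numbers for illustration only.

REGISTER_FILE_PER_CORE = 64 * 1024   # 64K 32-bit registers per core (assumption)
MAX_RESIDENT_THREADS = 2048          # hardware cap on resident threads (assumption)

def resident_threads(registers_per_thread: int) -> int:
    """Threads that can be kept resident, limited by registers or by the hard cap."""
    by_registers = REGISTER_FILE_PER_CORE // registers_per_thread
    return min(by_registers, MAX_RESIDENT_THREADS)

for regs in (32, 64, 128, 256):
    print(f"{regs:3d} registers/thread -> {resident_threads(regs)} resident threads")
# More registers per thread => fewer resident threads => lower occupancy, which is
# exactly the static register allocation problem the linked posts talk about.
```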
You can think of this situation as follows using an analogy with a CPU and an OS:
1. 128 * the-number-of-cores is the number of CPU cores(*1)
2. 24,576 threads is the number of threads in the system that the OS is switching between
Major differences with the GPU:
3. On a CPU, a context switch (getting a thread off the core, waking up a different thread, restoring its context, and proceeding) takes about 2,000 cycles. On a GPU _from the analogy_, that kind of thread switch takes ~1-10 cycles, depending on the exact GPU design and various other details.
4. In the CPU/OS world the context switching and scheduling is done mostly in software on the OS side, as the OS is indeed software. In the GPU's case the scheduler and all the switching are implemented as fixed-function hardware finely permeating the GPU design.
5. In the CPU/OS world, those 2,000 cycles per context switch are so much larger than a round trip to DRAM on a load that misses in all caches (about 400-800 cycles, depending on the design) that the OS never switches threads to hide load latency; it would be pointless. As far as performance is concerned (as opposed to maintaining the illusion of all programs running in parallel), thread switching is used to hide the latency of I/O: non-volatile storage access, network access, user input, etc., which takes millions of cycles or more, so there it makes sense.
In the GPU world the switching is so fast that the hardware scheduler absolutely does switch from thread to thread to hide load latencies (even for loads that hit partway down the cache hierarchy). In fact, hiding these latencies and thus keeping the ALUs fed is the whole point of this basic design of pretty much every programmable GPU there has ever been (a rough back-of-envelope sketch of this follows the list).
6. In the real-world CPU/OS case, the threads that aren't running at the moment reside (their local variables, etc.) in the memory hierarchy; technically some of that ends up in caches, but on a loaded system the bulk of it is in system DRAM. On a GPU (or, I suppose, by now we have to say a traditional GPU), these resident threads reside in on-chip SRAM that is part of the GPU cores - not in one chunk off to the side, but close to the execution units, in many small chunks, one per core. While the amount of DRAM (CPU/OS) is a) huge, gigabytes, and b) easily configurable, the amount of thread state the GPU scheduler is shuffling around is typically measured in hundreds of KBs per GPU core (so on the order of "a few MBs" per GPU), and the equally sized SRAM storing this state is completely hardwired into the silicon design of the GPU and not configurable at all.
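To put rough numbers on item 5, here is a minimal Little's-law-style sketch; the latency and per-warp work figures are illustrative assumptions, not measurements of any particular GPU:

```python
# Rough estimate of how many resident warps a scheduler needs to hide a memory
# stall. All numbers below are illustrative assumptions.

def warps_needed_to_hide(latency_cycles: float, independent_work_cycles: float) -> float:
    """If each warp can issue `independent_work_cycles` of useful ALU work before
    it has to wait on its load, the scheduler needs roughly latency/work warps in
    flight to keep the ALUs busy for the whole stall."""
    return latency_cycles / independent_work_cycles

print(warps_needed_to_hide(latency_cycles=400, independent_work_cycles=10))  # 40.0
print(warps_needed_to_hide(latency_cycles=400, independent_work_cycles=40))  # 10.0

# This is why a GPU wants thousands of resident threads: with enough of them, a
# ~1-10 cycle switch papers over DRAM round trips that a CPU core would stall on.
```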
Hope that helps!
footnotes
(*1) A better analogy would be not "number of CPU cores" but "number-of-CPU-cores * SMT(HT) * number-of-lanes-in-AVX-registers", where number-of-lanes-in-AVX-registers is basically "AVX-register-width / 32" for FP32 processing, which yields about ~8, give or take 2x depending on the processor model. Whether to include the SMT(HT) multiplier (2) in this analogy is also murky; there is an argument to be made for yes and an argument to be made for no, and it depends on the exact GPU design in question.
Also, your "128 CUDA cores" of the Skylake variety run at higher frequencies and work off much bigger caches, so they are faster (in a serial manner)...
...until they are slower, because GPU's latency hiding mechanism (with occupancy) hides load latencies very well, while CPU just stalls the pipeline on every cache miss for ungodly amounts of time...
...until they are faster again when the shader program uses a lot of registers and GPU occupancy drops to the floor and latency hiding stops hiding that well.
> ...until they are slower, because GPU's latency hiding mechanism (with occupancy) hides load latencies very well, while CPU just stalls the pipeline on every cache miss for ungodly amounts of time...
Is the GPU latency hiding mechanism equivalent to SMT/Hyperthreading, but with more threads per physical core? Or is there more machinery?
Also, how akin are GPU "streaming multiprocessors"/cores to CPU cores at the microarchitectural level? Are they out-of-order? Do they do register renaming?
As you state, GPU latency hiding is basically equivalent to hyper threading, just with more threads per core. For example, for a 'generic' modern GPU, you might have:
A "Core" (Apple's term) / "Compute Unit" (AMD) / "Streaming Multiprocessor" (Nvidia) / "Core" (CPU world). This is the basic unit that gets replicated to build smaller/larger GPUs/CPUs
* Each "Core/CU/SM" supports 32-64 waves/simdgroups/warps (AMD/Apple/Nvidia terminology), or typically 2 threads (CPU terminology for hyperthreading). I.e., this is the unit that has a program counter and is used to find other work to do when one thread is unavailable. (This is blurred on later Nvidia parts with Independent Thread Scheduling.)
* The instruction set typically has a 'vector width'. 4 for SSE/NEON, 8 for AVX, or typically 32 or 64 for GPUs (but can range from 4 to 128)
* Each Core/CU/SM can execute N vector instructions per cycle (2-4 is common in both CPUs and GPUs). For example, both Apple and Nvidia GPUs have 32-wide vectors and can execute 4 vectors of FP32 FMA per cycle. So 128 FPUs total, or 256 FLOPs/cycle. Each of these FPUs is what Nvidia calls a "Core", which is why their core counts are so high.
In short, the terminology exchange rate is 1 "Apple GPU Core" == 128 "Nvidia GPU Cores", on equivalent GPUs.
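As a sanity check of that exchange rate against a marketed number, here is a rough sketch; the ~1.4 GHz GPU clock is an assumption (reported M2 figures are in that ballpark), not an official spec:

```python
# Peak FP32 estimate from the "1 Apple core == 128 FPUs" exchange rate.

apple_gpu_cores = 10       # e.g. a 10-core M2 GPU
fpus_per_core = 128        # 4 x 32-wide FP32 pipes per core
flops_per_fma = 2
clock_hz = 1.4e9           # assumed, roughly the reported M2 GPU clock

peak_tflops = apple_gpu_cores * fpus_per_core * flops_per_fma * clock_hz / 1e12
print(f"~{peak_tflops:.1f} TFLOP/s")   # ~3.6, matching the commonly quoted M2 figure
```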
I'll leave your first question to the other comment here from frogblast, as I really battled with how to answer it well, given my limited knowledge and being elbow deep in an analogy, after all. I got writer's block, and frogblast actually answered something :D
> how akin are GPU "streaming multiprocessors"/cores to CPU cores at the microarchitectural level?
I'd say, if you want to get a feel for it in a manner directly relevant to recent designs, then reading through [1], [2], subsequent conversation between the two, and documents they reference should scratch that curiosity itch well enough, from the looks of it.
If you want a much more rigorous conversation, I could recommend the GPU portion of one of the lectures from CMU: [3], it's quite great IMO. It may lack a little bit in focus on contemporary design decisions that get actually shipped by tens of millions+ in products today and stray to alternatives a bit. It's the trade-off.
> Are they out-of-order?
Short answer: no.
GPUs may strive to achieve "out of order" by picking out a different warp entirely and making progress there, completely circumventing any register data dependencies and thus any need to track them, achieving a similar end objective in a drastically more area- and power-efficient manner than Tomasulo's algorithm would.
I think a better comparison is to look at floating-point performance. For example, the 10-core M2 GPU does 3.6 TFLOPS (FP32), while an RTX 4060 does 15 TFLOPS and an RTX 4090 does 82.58 TFLOPS.
Unfortunately, hardly. Ampere (Nvidia 3000 series), Ada (Nvidia 4000 series), and RDNA 3 (AMD 7000 series) GPUs have doubled up their FP32 units in ways that differ in implementation (between AMD and Nvidia) but are relatively similarly poor in their ability to be utilized at rates much higher than pre-doubling (Nvidia is doing better than AMD there, but is still very far from great).
A formal TFLOPS comparison as a result would be most sensible between pre-M3 designs, the AMD 6000 series (RDNA 2), and Nvidia's 2000 series (Turing). After that it gets really murky, with AMD's "TFLOPS" looking nearly 2x more than they are actually worth by the standards of prior architectures, followed by Nvidia (some coefficient lower than 2, but still high), followed by M3, which from the looks of it is basically 1.0x on this scale, so long as we're talking FP32 TFLOPS specifically as those are formally defined.
You can see this effect most easily by comparing perf & TFLOPS of the AMD 6000 series and Nvidia 3000 series - they were released at nearly the same time, but AMD 6000 is one gen before the "near-fake doubling", while Nvidia's 3000 series is the first gen with it: with a little effort you'll find GPUs between these two that perform very similarly (and have very similar DRAM bandwidth), but the Ampere counterpart has almost 2x the FP32 TFLOPS.
Those statements have to be made carefully. A lot of the time the GPU is memory-bandwidth bound, so an increase in FLOPS does nothing. Doesn't mean they're worthless.
Even if you're not memory-bandwidth bound, leveraging these 2x FLOPS on recent designs is hard, often due to issues like register bank conflicts.
They are low utilization, but apparently still worth it, because process node changes have made additional ALUs take relatively little area. So doubling the ALU count, even at low utilization, is still apparently an overall benefit (i.e., there wasn't something better to spend that die space on).
Not really, there is no such definition that you're referring to.
Perf of a GPU can be limited by any one of the thousand little things within the micro-architectural organization of the GPU in question, any on-chip path can become the bottleneck:
1. DRAM bandwidth
2. ALU counts
3. Occupancy
4. Instruction issue port counts
5. Quality of Warp scheduling (the scheduling problem)
6. Operand delivery
7. Any given cache bandwidth
8. Register file bandwidth (SRAM port counts)
9. Head-of-line blocking in one of the many queues / paths in the design, whatever that path is responsible for:
- sending memory requests from the instruction processing pipelines to the memory hierarchy,
- or sending the reply with the data payload back,
- or doing the same but with the texture filtering block (rather than the memory hierarchy),
- or the path that parses GPU commands from command buffers created by the driver,
- or the path that subsequently processes those already decoded commands and performs on-chip resource allocation, warp creation / tear down, all of which need to be able to spawn the work further down fast enough to keep the rest of the design fed;
and so on and so on and so on.
By the time a high-quality design is fully finished, matured, and successful enough on the market to show up on everyone's radar outside of the hardware design space, it usually ends up being 1, 2, or 3, due to the commonly occurring ratios of costs of the solutions to the various problems above. But that's experimental data + statistics + survivorship bias; there is no "definition" that says so.
Further, what's "commonly occurring" changes over time as designs drift into different operational areas in the phase space of operating modes: they pick off the low-hanging fruit, the science and experience behind the micro-architecture grows, common workloads change in nature, and new process nodes with new properties become the norm. The doubling up of FP32 ALUs in Ampere is a good example of that; it was done in a way that changed the typical ratios substantially. And now M3 has thrown a giant wrench into the incumbent statistics of the relationships between (3) and the rest of the limiters, as well as between (3) and what's actionable for a GPU program developer to do to mend it.
You can be low on DRAM bandwidth util and ALU util at the same time. How would that be if there were no other limiters?
Generally, a component X of a computer system needs to be a limiter Y% of the time where Y equals the portion of the total cost of the system X is responsible for.
The principle is the easiest to apply in a "calculus of variations" manner: if doubling the key performance metric of X results in an increase of the cost of the entire system as a whole by 5%, but how often X is the limiter drops from 10% of the time to 5% of the time, doing the doubling would bring the design quite close to proper balance wrt. X.
Things that are cheap to beef up are as a result rarely the limiter in well-designed systems. Things that are expensive to beef up are often the limiter. What is and isn't expensive to do depends heavily on where in the design space the current design is and where the technology is, all of which changes over time.
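A toy model of that balancing rule, using the hypothetical numbers from the doubling example above (nothing here is measured):

```python
# Component X limits performance 10% of the time. Doubling X halves that
# fraction but raises whole-system cost by 5%. Is the design still balanced?

def speedup_from_halving_limiter(time_limited: float) -> float:
    """Estimate overall speedup if the time spent limited by X is cut in half."""
    new_total = (1 - time_limited) + time_limited / 2
    return 1 / new_total

speedup = speedup_from_halving_limiter(0.10)   # ~1.053
cost_factor = 1.05                             # system cost +5%
print(speedup, speedup / cost_factor)          # ~1.053, ~1.003 -> roughly balanced
```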
FP32 was cheap to double up in Ampere, so they doubled it up, even though that provided only a relatively small performance improvement. But now, as a result, FP32 is very rarely a limiter (in Ampere and Ada). That doesn't automatically mean that these designs are "gimped" in DRAM bandwidth or anything of the kind. Rather, the whole perception that a good GPU design just gotta be ALU limited all the time is nothing but a mistaken perception, just like "it's either ALU limited or DRAM bandwidth limited by definition" is also just untrue. See "occupancy limited" for a prime example.
While FP32 non-tensor FLOPS at least look comparable, FP16/BF16 with tensor cores (nowadays the default for any neural network, including LLMs) at 330 TFLOPS blows the M2 away.
Not on mobile, which is where Apple GPUs come from. FP32 is not necessary for many graphics-related computations; it is just simpler to deal with one data type.
Are you claiming that people aren't using FP32 on mobile, or are you claiming that people are using FP32 on mobile but could technically have gotten away with FP16?
If it's the latter, it's still correct to say that FP32 is king in mobile graphics.
Unless people explicitly ask for high or at least medium precision in the code, they probably will get FP16 floats on a mobile chip, which is totally fine for most fragment shader calculations.
> So how does one translate to an equivalent in "CUDA cores" type terminology?
I don't really think we can, even if we knew exactly what is in an M3 GPU core, which we don't. Both architectures are very different, and different again from AMD GPUs. We have to count TFLOPS.
I skimmed the video but a lot of it sounded more like advertising than technical information to me. On the other hand, I'm looking forward to watching the Asahi folks crack this stuff open.
It is an overview of what is new about their hardware, plus technical advice (though not overly technical) on how to maximize the GPU's performance.
I watched the full video and thought it was excellent. I wish other CPU/GPU manufacturers made technical overview videos like this. I've never programmed graphics targeting metal before, but I feel much more inclined after watching this so I guess it was good advertising.
I haven't watched too many developer conferences, but IMHO the Apple WWDC sessions are fairly easy and enjoyable to learn from, especially since they provide a full transcript.
I think it's important to understand that this video is aimed at a broad audience of developers. Obviously some of them are GPU experts but a lot of people are quite literally app developers who are going to watch this video as an introduction to "hmm everyone keeps telling about this GPU thing, what can I do with it?" So the video has to provide them with some context they can use to explore more.
> On the other hand, I'm looking forward to watching the Asahi folks crack this stuff open.
It will still be years before it is practical for Linux developers to target these features. Eventually, the rate of change in GPU design will slow and Linux will catch up once and for all. But it's hard to not drool over the hardware that proprietary OSs get to use today.
What's not technical about it? It tells you what the new advancements in the M3/A17 GPUs are, what you should look out for, and how you can take advantage of them. It provides enough information so you can understand what technical tradeoffs you need to make when you target M3 / A17 GPUs. E.g. Register pressure is a real concern in large shaders, and this helps explain how the behavior would be different under a dynamic allocation scheme. It explains how the ray tracing acceleration works, and how it reorders the different intersection calls and how you should avoid intersection queries.
There is some hyperbole interjected about how incredible the performance is, but that's only in between the useful data. (I did chuckle at the… enthusiasm of the speaker though)
This isn't a technical document for GPU designers. Apple doesn't really need or want you to understand exactly how the implementation works, because that's basically a trade secret for them. This is aimed at letting app / game developers know how they should optimize for the new GPUs, since previously Apple had just made some ambiguous remarks about some of these new technologies ("Dynamic Caching") without explaining what they meant.
But yes, I do like how the Asahi folks tend to end up documenting a lot of how this hardware works, though they also only have public information like this to start from, so material like this is still useful for them.
I'd say the beginning sounds like an introduction to GPU architectures in general, not marketing.
Somewhere in the ballpark of 5:30-6:00 it describes the prior hardware design of Apple's shader core, and starting around 7:00 it goes into the hardware design of the new M3/A17 Pro shader core. It's actually surprisingly detailed; e.g., Nvidia's whitepapers provide less detail on the actual organization of their SMs.
> I'm excited to tell you about the new Apple family 9 GPU architecture in A17 Pro and the M3 family of chips, which are at the heart of iPhone 15 Pro and the new Max.
"The new Max"? He clearly meant "the new Macs".
Kinda weird that Apple can't properly transcribe its own content.
I’ve also noticed this on lots of YouTube videos, where the creator clearly meant one thing, but the subtitles substitute a more common, similarly-sounding word with a different meaning.
I suspect they have the videos transcribed externally, and don’t check the transcription (or only do so in a cursory manner).
For YT vids, especially shorts, it's because churning out shorts/reels/tiktoks of clips from longer form videos (and/or with the split screen gameplay of some mobile game/minecraft platforming run) is now a common tactic for trying to gain tons of views on your account for monetisation later.
I’ve also frequently seen it on long-form videos. I think the transcriptions must be at least partially reviewed by humans, because YouTube already has automatic transcription for videos without subtitles.
The guy introduced himself by name; I was really confused for a while about whether it was a human trained to sound like an AI, or an AI trained to sound like a human.
This is unfortunately inevitable; CPUs are just so much easier to benchmark in a broadly useful way. And the extreme leakiness of Geekbench is helpful (I suspect Apple sees this as a feature; most recent Apple chip iterations have leaked on Geekbench).
Okay, I went through the other video they reference ("Discover new Metal profiling tools for M3 and A17 Pro" [1]), and there is actually a whole bunch of extra very relevant (IMO) information on the subject, starting about 13:30 or so.
They should just support DirectX. Devs will never support two graphics APIs; it costs too much, especially to grab the marginal macOS share that has powerful enough GPUs. I'd bet in 4 years Apple moves to DirectX.
Not going to happen, what's more likely is a Proton-like layer above macOS APIs to simplify porting games over. Also see "Game Porting Toolkit" here: https://developer.apple.com/games/
I've no idea exactly how MS licenses use of DX, but just for context Imagination Technologies just released a custom GPU design that implements DirectX Feature Level 11_0 (which corresponds to earlier versions of DX 12 [1]): https://www.imaginationtech.com/news/imagination-launches-br...
Imagination Technologies is a near 40 year old British silicon IP company that has been doing GPUs for quite some time, just not ones supporting DX up until now, and it has nothing to do with MS (in terms of ownership / rights / etc).
The way that works isn't that IMG ships a full directx implementation, but that they ship some kernel and user mode components that plug in to Microsoft's directx implementation such that when taken together, directx is accelerated by IMG's hardware.
Similarly, Microsoft would need to release the non GPU specific bits for macos to fit the same model.
DirectX APIs can be emulated on top of other 3D APIs, see Proton on Linux for instance.
Of course the idea of Apple switching from Metal to D3D is rubbish, but a Proton-like solution for macOS totally makes sense (maybe the "Game Porting Toolkit" will be exactly that eventually).
I guess parent forgot "on macOS and iOS". Technically every game on macOS or iOS is running on Metal, because the Apple OpenGL implementations are also running on top of Metal for quite a while now ;)
Across platforms, there's no chance in hell that more games run on Metal than DirectX, not even when counting iOS shovelware.
There are also more games on Metal than DirectX. And when I say DirectX, I mean the latest versions of DirectX. No idea what the count is for all games released for DirectX in total. It's not relevant to the topic I was replying to.
The OP said Apple should switch to DirectX. They can't because it's closed source. The reason given was that devs won't support 2 APIs. They already do. They support DirectX and Vulkan (PS5 version) almost always. And devs already support Metal - with more games than the latest DirectX.
Also many developers support even more than 2, as you are forgetting PlayStation and Switch, both of which have their own APIs: 2 for PlayStation (GNM and GNMX), 3 for Switch (NVN, Vulkan and OpenGL).
There are 480k games on iOS. Yes, most of them are bad. But it's a fact that there are more games running on Metal than DirectX.
>Also many developers support even more than 2, as you are forgetting PlayStation and Switch, both of which have their own APIs: 2 for PlayStation (GNM and GNMX), 3 for Switch (NVN, Vulkan and OpenGL).
I guess the relevant question would then be how many of those games are actually developed using Metal directly, and not some other API using Metal “behind the scenes”. Relevant in regards to developer acceptance I mean.
It’s pretty weird to count the number of games. Wouldn’t it be better to count the number of players? Many of those games will have close to zero players.
This issue can be fixed by switching the display to RGB [1]. So I think it’s a software bug but it‘s really annoying since the fix sometimes resets and the bug only occurs when there is a lot of black on the screen.
To be fair, Apple usually follows specs to a fault. The problem they have when tying into non-Apple ecosystems is with widespread de facto standards that others follow: they often don't implement/test those and don't consider it their problem when issues are encountered…
That's hilarious because that sounds like the old sync-on-green problem that Macs and monitors used to have. And that used to happen a lot with Dell monitors.
Does the complex block in the diagram refer to complex numbers? That doesn't sound typical, does it? What type of workload that typically runs on a GPU would require complex numbers?
Judging by the output/GUI of their GPU profiler, "complex" there is more like "complex instructions", think f32 (floating point) ops that aren't additions and multiplications (and FMAs), but trigonometry, square roots, that sort of thing.
FFT, plus some game stuff requires complex numbers to do partial rendering (e.g. do some now and then do more next frame). I've lost the link to the talk, but IIRC EA did a talk on how they made a shader that emulates out-of-focus lights in the background (not a Gaussian blur but the actual cool effect as if it were a real camera).
So ... the registers are dynamically allocated from a chunk of cache? Does this mean there, effectively, are no registers? Does this cache have one clock latency?
I doubt anyone will be able to answer questions this fine-grained: not now (maybe later, if the implementation is architecturally exposed, i.e. leaks into the ISA, and the Asahi Linux group figures some of it out), or possibly not ever (if it's architecturally transparent and thus entirely micro-architectural).
> Does this mean there, effectively, are no registers?
I can only point out, just for context, that if by any chance you're asking whether the registers are implemented as actual hardware-design "registers" - individually routed and individually accessible small strings of flip-flops or D-latches - then the history of the question is actually "it never was registers in the first place": architectural (ISA) registers in GPUs are implemented by a chunk of addressable, ported SRAM, with an address bus, a data bus, a limited number of simultaneous accesses, and limited bandwidth [1].
There is a fairly informative survey on the subject: https://www.osti.gov/servlets/purl/1332070 ("A Survey of Techniques for Architecting and Managing GPU Register File")
An easier to read research article that's narrower in subject and seemingly more relevant to the OP: https://research.nvidia.com/sites/default/files/pubs/2012-12... ("Unifying Primary Cache, Scratch, and Register File Memories in a Throughput Processor", 2012)
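Since the point above is that the register file is banked SRAM with a limited number of simultaneous accesses, here is a minimal illustrative model of why that matters; the bank count and the register-to-bank mapping are simplifying assumptions, not any vendor's actual scheme:

```python
# Toy model of a banked register file: operands that land in the same bank
# cannot be read in the same cycle if each bank has a single read port.
from collections import Counter

NUM_BANKS = 4  # assumed

def bank(reg_index: int) -> int:
    # Simple interleaved mapping: r0 -> bank 0, r1 -> bank 1, ...
    return reg_index % NUM_BANKS

def read_cycles(source_regs: list[int]) -> int:
    """Cycles needed to fetch all source operands, one read per bank per cycle."""
    per_bank = Counter(bank(r) for r in source_regs)
    return max(per_bank.values())

print(read_cycles([0, 1, 2]))   # 1 cycle: three different banks
print(read_cycles([0, 4, 8]))   # 3 cycles: all operands map to bank 0
```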
It's amazing how bad the competition is. The A17 Pro has 2 performance cores and 4 efficiency cores. The Google Tensor G3 has 9 cores of 3 different types, the fastest being slower than Apple's performance cores, the most efficient being less efficient than Apple's efficiency cores. And it's a phone, so you don't take advantage of the extra parallelism. You just get the worst of both worlds: no wonder these Android phones have 50% more battery and 50% less battery life. Is it that hard to just copy the winning formula?
A big part of Apple's "winning formula" is taking their giant piles of money and negotiating exclusive contracts for whatever is scheduled to be the most advanced semiconductor node next year.
Anyone else literally cannot compete, they don't have billions in pocket change they don't know how to spend otherwise, so they'll have to wait until the exclusivity agreement expires.
> they don't have billions in pocket change they don't know how to spend otherwise
Your parent comment's example is literally Google, world-class experts at burning money on developers producing a million dead-end products and abandoning them a year later.
If Google got some sensible leadership, focused on a few core products, and stuck with them for a decade, they'd have just as much money to spend. But "focus" and "Google" seem to have become opposites.
My point: the 'winning formula' of Apple is laser-sharp focus: have a few products, do them as well as anyone else or better, and only introduce a new product if it is mature-ish and very profitable. (We'll see how the vision headset fits in here)
> My point: the 'winning formula' of Apple is laser-sharp focus: have a few products, do them as well as anyone else or better, and only introduce a new product if it is mature-ish and very profitable. (We'll see how the vision headset fits in here)
They also aimed at markets that were ripe for disruption because of weak competition: the MP3 player market before the iPod, the PDA-with-a-SIM-card market before the iPhone, etc. All of those could reasonably be disrupted just by delivering a reasonable (not even best-in-class, specs-wise) product with better UX (not hard, in the cases mentioned) and massive marketing. You can't do that in a heavily competitive market that's already full of such products. VR headsets are probably closer to the "ripe for disruption" end of the spectrum, and I think the Vision will probably do well. But I doubt the "Apple Car" plans that have been floating around for 10 years now will ever lead to anything.
Well, they got into that position starting from a near-bankrupt company, which couldn't negotiate anything exclusive, and which was for a long time at the mercy of Motorola and Intel.
So it's something they took advantage of after they grew (well, which company at their scale wouldn't ask for the best wholesale deals?), but not what made them big in the first place.
What made them big in the first place were the iPod/iTunes/iPhone and the ludicrous revenues from the App Store.
The iPod's only notable hardware that wasn't just a random off-the-shelf part was the click wheel; the chips were all off-the-shelf (until old iPhone chips counted as that), and iPhones didn't get custom chips until the 4.
So I guess the other part of the winning formula is "use market dominance in one sector to subsidize expansion into the next". I guess that's indeed one area where Google could reasonably try to be less inept, but I think all the institutional inertia makes that impossible by now. They'll go the DEC route of just drowning in their own internal problems until someone buys them up.
>What made them big in the first place were the iPod/iTunes/iPhone and the ludicrous revenues from the App Store.
The iPod/iPhone yes, but "in the first place" the App Store was insignificant (revenue in 2010 was < 2 billion dollars worldwide, so Apple's take was less than $600 million).
For comparison the iPod had that profit already in 2004, and around 3 billion in 2010 (when the iPhone had already started replacing it).
So, the App Store was hardly ludicrous revenue for its first 3-4 years, in fact less than 10% of Apple's revenue. The iTunes store even less so.
It's the iPod and then iPhone that made Apple's dominance. The big store profit came later (and the iTunes/Music profit never was that big).
The most notable hardware choice in the original iPods was a gamble taken during design on soon-to-be-released high-capacity 1.8-inch hard drives. Before that, MP3 players were either low-capacity flash-based devices like the Rio (I remember having one with just 64MB!) or monstrous Discman-sized devices running 2.5 or 3.5 inch platters, like the Nomad.
"Is it that hard to just copy the winning formula?"
Yes it is, thanks to IP law. And back in the day Steve Jobs already wanted thermonuclear war on Samsung, because he felt their flagship at the time was too close to the iPhone.
To be fair, there was a moment when Samsung was in full copy mode. All the way down to having their own version of the dock connector and a retail box that closely mimicked Apple. In retrospect, a bit embarrassing for a company we know is capable of much more.
Steve Jobs wanted nuclear war with Google (not Samsung) because Eric Schmidt was on Apple’s board of directors while the iPhone was being developed, so Jobs felt Schmidt was basically doing insider trading for google to develop Android.
It's not that simple though. I have never gotten through an entire day with an iPhone (XR, 12, 13 Pro). I easily hit 2 days with my Pixel 7a with the same crap on it. My daughter just took an iPhone 15 back because it wouldn't get through the day.
It is about the attitude in the voice, not about audio quality. It is very noticeable at 1.5x speed; 1x speed is too slow anyway, with infinite pauses for such low-density technical details.
"Discover new Metal profiling tools for M3 and A17 Pro"
https://developer.apple.com/videos/play/tech-talks/111374/
"Learn performance best practices for Metal shaders"
https://developer.apple.com/videos/play/tech-talks/111373/
"Bring your high-end game to iPhone 15 Pro"
https://developer.apple.com/videos/play/tech-talks/111372/