This sounds worryingly like “sales are down, cut costs to maintain profitability to spike our stock numbers” and not enough focus on what I believe is actually wrong, which is that they can’t seem to reliably make competitive processors in a stiff market.
Anyone else feel like this is bean counters at Intel playing the wrong game? I personally feel like leadership at Intel lost the plot almost a decade ago.
We have very little understanding of the bloat Intel might be carrying, or of the projects they have underway that weren't making progress or weren't delivering the innovation expected.
Financial reports are going to be exactly that, focused on the numbers and appearing to investors to be maximising their value. You want to carry a perception that you are focusing on the positives and the innovations being carried forward, as opposed to mulling over what "might be wrong". All companies go through ups and downs, but you do not want your financial report to have a theme of "here are the things that are wrong".
Well, all of their fab operations could be called bloat because they aren't making progress, but the fabs are still arguably the most valuable portion of the company.
They should have spun off their fabs years ago, as AMD did. The fab business model isn't compatible with shareholder value.
>You want to carry a perception that you are focusing on the positives and the innovations being carried forward, as opposed to mulling over what "might be wrong".
So, empty platitudes that turn into no change or negative change. Gotta love modern-day business.
Stipulating in your financial report where your R&D is focused and what the outcomes are hoped to be is not an empty platitude that turns into no change or negative change.
I doubt that Nvidia stating in their financial reports last year that they were focusing their R&D on AI hardware turned into no change or negative change.
Lots of hope in your comment. Not much hope in my mind. It's just been too shitty a year for me to trust these reports. The worst parts are that the shareholders barely care if they lie. They want to be lied to. They aren't investing because of product confidence after all.
Pessimism aside: they did this exact song and dance some 7 years ago. So I'm not entirely operating off bad vibes but on history and current trends. Like I said, they already basically lost the AI race. If they want to compete for scraps, power to them. I just hope employees jump ship before the next layoff wave.
Leadership changed back to an engineer with the return of Pat Gelsinger in 2021. I’d give him a couple more years. It takes time to turn around a big ship like Intel.
Everyone around here keeps talking about how the problem of Intel is that it was a Business guy running the company and not an engineer, and it feels so similar to when people claimed Phil Spencer was going to save Xbox.
Truth is, Pat is doing nothing despite the generous government subsidies
Not everyone is cut from the same cloth. Look at AMD: their CEO seat has been filled exclusively by people with engineering backgrounds, yet their performance has had more than a few rocky points.
This is simply not true. They are building giant fabs in Arizona and Ohio from those subsidies.
It takes YEARS to realize the benefits of these changes.
The problem is, to stopgap TSMC's growing lead, Intel resorted to cranking up voltage on the last two generations of chips, leading to what will be an imminent, gigantic-ass lawsuit.
This is where I see Intel's processors pushing the cart for Intel's fabs with small iterations to sell new processors and motherboards. Intel's fabs need to push their own cart like TSMC and actually focus on long-term quality and gains.
The previous CEO was going to shut down the fabs. Maybe it would have been the right strategy but Pat Gelsinger had reoriented towards fab manufacturing for what it’s worth.
>it feels so similar to when people claimed Phil Spencer was going to save Xbox.
Well, he did. He bought up a ton of studios and focused on games again. In 2014 they were completely tone-deaf to the state of not just games, but media as a whole. Cable-box subscriptions for their 8th-generation console, even in 2014, were just horrible optics.
Now, is it his fault directly that the game development per studio is slow, or the releases uneven? He's probably not blameless, but MS isn't really known to intervene nor micromanage their studios. IMO that's a much bigger issue than any single MS studio.
My favorite Pat moment was watching the video of him, published by VMWare, apologizing for how shitty their products were and how they were gunna do so much better.
It’s still far from clear that ARM for Windows will be going anywhere. And Intel is trying to increase ISA diversification with their foundry business.
I've seen reviews of the latest 13-inch Dell XPS and it's both more powerful and less energy-hungry than the M3 in a MacBook Air. It uses the Snapdragon X Elite ARM CPU.
It also blows away the latest mobile Core Ultra 7 155H in the same XPS 13 on performance while lasting almost 20 hours (compared to Intel's 6).
The comparisons are made on software that runs on both (Geekbench, Handbrake), but it's clear that huge momentum is building for Windows on ARM.
There's way more to ARM adoption than simply power consumption.
When apple went to the M1, they got their entire software ecosystem to go along with the migration. That's the fundamental strength of Apple. They can do things like kill standards at will.
Microsoft can't necessarily do the same thing for the vast library and catalog of x86 software.
MT is one thing, but in ST it's still simply not even close. Even the Qualcomm Snapdragon X Elite pulls fully 15W less than AMD, and Apple pulls 10W less than Qualcomm.
60W peak power consumption for a literal single thread. 35W average.
We simply aren’t going to see x86 get to the subjective “apple experience” of a laptop that runs equally well on or off battery, with day-long life during interactive low-intensity usage, until x86 vendors can get their single-thread power consumption under control. Platform power is part of it (although external-monitor tests minimize most of the usual factors) but x86 simply isn’t competitive in performance terms unless it’s boosting excessively high in power terms. At least not right now.
I know Keller said it only matters 20% or so, but… right now we are objectively in a world where x86 is using ~triple the power in 1T workloads. x86 is fine when it can exploit SMT and the workloads are heavily numeric/vector-oriented; it simply isn't very good at single-threaded workloads where there is no SMT to fill its pipeline bubbles.
And yes, race to sleep is all well and good. But Apple can race to sleep on 1/3 the power. Apple can also have a big icache and all that other crap too (the M1 has an icache six times as big as its x86 competitors'). At the end of the day the x86 results are still objectively poorer by literally several powers of 2. But Keller said everything is equally good, therefore the evidence can be casually dismissed.
Even the much happier ComputerBase numbers only put this at 20W core power (and they're using sensors, not actual measurements, of course). And that is still power you have to pay for on those laptops, even if it's "platform power". Like, OK, so the defense of x86 here is that x86's finest champion spends 50% more on platform power than the Apple laptop uses at the wall for the whole machine, and then uses another 2x the Apple laptop's power to do just the computation work.
> We simply aren’t going to see x86 get to the subjective “apple experience” of a laptop that runs equally well on or off battery, with day-long life during interactive low-intensity usage
I disagree. Apple themselves showed what you could do with an x86 processor in terms of battery life long before they went to ARM, and most manufacturers still aren't even close. People were talking about the amazing battery life of their products for ages before ARM.
Meanwhile I can't get my Windows laptop to not burn itself up in my bag with an update, which also, coincidentally, uses a ton of battery power as the fans run at full tilt to cool it.
As an owner of an Ice Lake MacBook I can tell you definitively that it's still utter shit compared to the Apple Silicon ones. Best-case scenario it idles at 14W system power, while my 2020 M1 MBA idled at <5W and sometimes <3W at very long idle.
Usually, if it's not totally idle, it sits at 22W. And again, this is the problem: light load produces quite a lot of power consumption on x86 compared to ARM.
I bought it for the meme; it was a laugh, and I wanted a late-gen x86 model with no dGPU to give x86 the best shot (mostly bought it for Windows use, actually). But the "it's just macOS/optimization magic!!!" line isn't true either. x86 MacBooks still fucking suck, even the ones without the absolutely shit-tastic AMD dGPUs onboard. Those idle at closer to 50W if you have a display plugged in, btw. There's no way to turn them off, because the display outputs are hardwired to the GPU; if you are using an external display the dGPU is forced on. I tried; my old work MacBook was an i9 and it fucking sucked during the summer.
The idea that macOS is just uniquely lightweight and can somehow wring better perf/w out of computational tasks than Linux or windows just isn’t supportable either. Certainly if it were true it would also show up in MT and not just 1T loads. Reality is perf/w is mostly a constraint of the processor and not the OS… just like the OS can’t magically make applications use less ram and “turn 8gb into 16gb” either. It takes X cycles for Y processor to execute Cinebench and the OS does not matter that much in the big picture.
It maybe does help with sleep states and idle power, things like Grand Central Dispatch scheduling. But 1T load is a load state, not idle.
> As an owner of an Ice Lake MacBook I can tell you definitively that it's still utter shit compared to the Apple Silicon ones. Best-case scenario it idles at 14W system power
That's really surprising, since my 8th-gen Intel laptop (two generations older than Ice Lake) idled at under 5W in Linux. So does my newer 12th-gen Intel laptop.
I would assume that's package power; I mean total system power, using the "stats" cask with the average system power sensor.
I will play with it and see if I can get package power.
But the overall point is that even looking at total system power - the apple silicon is silly efficient. It’s actually meaningfully better than x86 in literally every way you choose to measure it.
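(For anyone who wants to poke at the same numbers: here's a rough C sketch of how I'd average what macOS's `powermetrics` reports. The exact field names it prints differ between Intel and Apple Silicon Macs and across macOS versions, so treat the "Power: ... mW" parsing below as an assumption to adapt, not a definitive recipe.)

```c
/* Rough sketch: average the "<something> Power: NNNN mW" lines printed by
 * powermetrics (needs sudo). Field names and units vary by machine and
 * macOS version -- Apple Silicon prints e.g. "CPU Power"/"Combined Power"
 * in mW, Intel Macs report package power differently -- so adjust the
 * pattern to whatever your Mac actually prints. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Five 1-second samples from the CPU power sampler. */
    FILE *p = popen("sudo powermetrics --samplers cpu_power -i 1000 -n 5", "r");
    if (!p) { perror("popen"); return 1; }

    char line[512];
    double total_w = 0.0;
    int readings = 0;
    while (fgets(line, sizeof line, p)) {
        char *hit = strstr(line, "Power:");
        int mw;
        if (hit && sscanf(hit, "Power: %d mW", &mw) == 1) {
            total_w += mw / 1000.0;
            readings++;
        }
    }
    pclose(p);

    if (readings > 0)
        printf("average over %d readings: %.1f W\n", readings, total_w / readings);
    return 0;
}
```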
I guess the Lunar Lake mobile CPUs will be more interesting than the Meteor Lake ones (e.g. 155H). But we'll see if they can keep the promises in regards to power consumption.
In my opinion, one of the most reliable signs is when companies stop continually investing in their product innovation. The moment they start behaving like an established behemoth (monopoly/oligopoly) and you see repeated stock buybacks and frequent layoffs, the end is coming.
This applies to most large, inefficient corporations that get too comfy with success and forget how to pivot fast, take risks, and fiercely fight for their place in the market.
Eventually, any organization that falls into this state slowly stagnates into oblivion.
Intel has put well over a hundred billion dollars into R&D over the past decade. Last year alone it was more than $16 billion. That's one of the highest annual R&D spends of any company in the entire US.
Nvidia for comparison is less than half that per year.
There is this interesting phenomenon where people outside an organization view R&D potential as elastic. That is to say, there is always a positive return on investment for the marginal R&D dollar.
It is clearly not the case, or every company would have exponential growth simply by reinvesting in R&D.
In reality, it is extremely difficult to create growth from R&D at all.
>> The moment they start behaving like an established behemoth (monopoly/oligopoly) and you see repeated stock buybacks and frequent layoffs, the end is coming.
Add to that the suspension of the dividend, and the packaging of the FPGA business (getting ready to sell it?) and the reference to idle capacity - guess they can't even fill that with their foundry business.
One thing I've noticed is that predicting timing is impossible. You can see a company showing signs of impending failure and somehow stock price still increases for a while, or the company decays slowly over a very long time. This might be a good thing since a company may save itself in some instances.
The writing was on the wall for Intel more than 7 years ago.
By then:
- Intel's 10 nm node was known to be two years late and plagued by many other problems, while TSMC had already taken the crown for the most advanced foundry on the planet
- Apple announced their work on phasing out Intel hardware completely for their own ARM SOCs
- AMD had launched Ryzen, back then a bit slower in single thread but already much more competitive in MT
- Supercomputers and crypto showed the ever increasing possibilities of GPUs for heavily parallelizable calculations and Intel had nothing to offer there
- Cloud providers were building their own hardware on ARM
> I am convinced that the decay is often very slow. There are signs of impending failure and it frequently drags out for many, many years.
When Jobs contacted Otellini about having Intel fab an ARM chip for them and Otellini said no... That's probably about the time their fate was sealed. And that would've been around 2006. Yes, there were plenty of signs prior to that. For example: AMD needed a lot less engineers to spin a new processor rev than Intel did and AFAIK that's still the case.
> You can see a company showing signs of impending failure and somehow stock price still increases for a while, or the company decays slowly over a very long time
Probably not offering fab time to other companies when they were process leaders. TSMC ate their lunch and managed to leapfrog Intel due to their massive R&D efforts fueled by cell phone chips.
This one is an interesting take. In hindsight it's pretty obvious but if you are in that situation at that time it seems like you would be selling your crown jewels for pennies.
Agreed. Stalling and then falling behind TSMC and Samsung in fabs is where they really lost the company. Still, Gelsinger sees this and they have bet the company on their 18A process. Will be interesting to see if they pull it off.
Thinking they were in a stable situation and going into max-cashflow mode when they were in a death match. If you look at stock-standard competitive theory (like, without any nuance), like Porter's, you can see their situation getting really hairy starting 15 years ago. Customer concentration (public clouds), substitute products (ARM in servers for standard workloads, various others in AI workloads, etc.), and competition in their main product line that couldn't go anywhere but worse (they were practically all by themselves). They had to fight like crazy, but they thought they should print money.
Big takeaways: in consumer tech you live and die by the performance of your latest product. That applies to both CPUs and foundry.
Now-obvious mistakes: I'm not including the obvious mobile argument as it belongs to the previous point, but they slept too long on GPUs. Even though the biggest supercomputers in the planet started doing computations on GPUs a decade ago they missed the ultimate parallelization-hardware train.
They have been cooking on it for decades too - Larrabee was 2009, after all. Xeon Phi survived that collapse and continued for another decade.
Honestly the problem was the same then as it is today: having good hardware isn't enough. You need software penetration and organic usage/ecosystem. And the world can support maybe 2-3 of those right now - Nvidia, Apple, maybe one other. Everyone else gets stuck pitching the bullshit "we are the cross-platform adapter you need" ecosystems, with effectively no real corporate moat, etc.
(amd, intel, the door to this room is locked… one of you will get an ecosystem and the other goes home in a box… the game begins now.)
Like even just in gaming, AMD's continued refusal to do the devrel has hurt them; it's cost them revenue. It's not even just "the drivers" or the constant deficit in features and functionality: getting people out into the studios to assist with development and tuning is how it has to be. Nobody is going to tune for your hardware if you don't do it yourself.
Legacy. Being overconfident that their fabs would innovate faster than the competition made their design side take a back seat and not innovate. There wasn't a real need for big design innovation, although a lot of R&D spending happened on design. Plus they didn't diversify their product portfolio. They missed the whole mobile market because of this, and likewise the GPU and accelerator markets.
When their fabs hit a rock with 14nm and 10nm, the process tech advantage was gone. This gave the edge to all the other design companies and made it clear how far behind Intel was on the design front.
This move is what business school would suggest. Revenue is declining in their near monopoly position in the market. What's the obvious mistake that fixes that?
Don't screw up running your own fab is the better lesson.
* Intel foundry services were perhaps a decade or two too late. Running a fab needs scale. That had to come before they lost the technology edge.
* Intel treatment and mismanagement of engineers was well-known in 2000, and you need the best and brightest.
* Leadership needs to understand core business / technology. Bob Swan. Arguably Brian Krzanich. Paul Otellini. Just a lot of clueless at the top.
Etc. The point is Intel wasn't "everyone." Intel was at the very top of the game a quarter-century ago. That was their position to lose, which they did in a spectacular fashion.
I continue to hope for a Microsoft-style comeback, but it's becoming increasingly unlikely.
Don't screw up? Sage advice! I agree they are mismanaged but it also takes luck to stay in the lead.
I hope for a comeback too. My amateur worthless opinion is that Intel is struggling now but at least they have fabs. The fabless companies are going to be fucked in the long term- the fabs they contract to are all on shifting geopolitical sands!
It's not luck if they were riding momentum that slowed because they shifted away from proper R&D.
The fabless companies had nothing but R&D to live on, and outsourcing the fab means the capital costs can be amortized over more customers (meaning Intel should probably have split in two).
Intel's fabs are a generation behind and now they're capital constrained. How do you get back ahead outside of China invading Taiwan?
I should elaborate on the word "proper" in this context.
Intel rarely had the best technology, but it was often "good enough, cheap enough, and available enough" to succeed in the marketplace - this is a lesson SO MANY people forget. Sun, DEC, etc arguably had better hardware, but it was expensive. What Intel was very good at was iterating on the technology it had and making it faster within a reasonable budget. Microsoft is another good example of where "good enough, cheap enough, and available enough" is what usually matters in the market.
Where they started to stumble is when they got greedy. The whole Rambus debacle back in the day is a good example of where they took more expensive and worse RAM and tried to make it mandatory. Then there was the whole Itanium debacle, where politics succeeded over engineering and they ended up with a worse, more expensive product that failed in the market. AMD, which had recently acquired a lot of talent from DEC's Alpha chip design team, then went on and made 64-bit extensions to x86 that Intel had to play catch-up to.
Intel (for the most part) ignored the performance per watt aspect of CPU design, which mattered in embedded, mobile, and dense datacentre environments. The result is that ARM came from the bottom up and is now eating Intel's lunch from all angles.
Intel's fabs couldn't compete with TSMC and to a lesser extent Samsung, both of whom could focus on the intense CAPEX (for cultural reasons, companies in those Asian countries are willing to take longer-term outlooks). CAPEX looks bad on an American GAAP spreadsheet. TSMC could literally take upfront money from Apple to build new state-of-the-art fabs and guarantee them a certain number of chips (and keep the money if the iPhone had failed in the market). Intel couldn't do that.
Intel spent most of the 2000s era doing $130B in stock buybacks and even if it was run by engineers, they lost sight of the market. Stock buybacks and dividends are fine if you have surplus cash, but chip design is a high CAPEX business.
> Then there was the whole Itanium debacle, where politics succeeded over engineering and they ended up with a worse, more expensive product that failed in the market.
Why do you say Itanium was politics over engineering? It turned out to be a not-really-workable idea, sure. But if you looked at it, not knowing how it was going to work out, it was not an obviously bad idea. It was probably worth trying.
(Or are you saying that politics succeeded over engineering in the process of implementation of Itanium? I have no view on that.)
I’m saying the latter, that politics succeeded over engineering in the process of implementation of Itanium.
The instruction set for it was relatively bloated, indicating that there were too many compromises being made.
Writing compilers for it was a notorious shitshow. Itanium's static scheduling was undermined by too much runtime indeterminism from things like branch prediction, variable-latency caches, and the attempts at an x86 compatibility layer. The core problem for Itanium was that it was impossible for the compiler to make good calls, since it could not know exactly where and when things would be in memory. Even Intel's own compilers produced lackluster results, to say nothing of attempts by Microsoft, HP (who bet the farm on Itanium), and GNU (which mattered for Linux).
There were politics and demands from HP, the only seriously committed partner, which wanted chips that would in theory be best for replacing PA-RISC, and then there were Intel's own expectations of having it replace x86 everywhere.
Itanium actually did perform well at some kinds of parallelism. However, to take full advantage of that would have required a paradigm shift in the way languages worked, because 99% of the code out there was sequential/procedural. So maybe some new code (or more likely languages) could have really kicked ass (and intel did demonstrate that was possible), but even if everybody decided to go ahead there’d be a lot of learning and mistakes on the journey to full optimisation. Or you could just make your current code go faster by buying cheaper 64 bit x86.
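To make the "compiler can't know" point concrete, here's a toy C fragment (my own illustration, not IA-64 code): with pointer chasing, the next load address and each load's latency are only known at runtime, so a static EPIC-style schedule has nothing safe to pack into the stall slots, while an out-of-order core finds that slack dynamically.

```c
/* Toy illustration of runtime indeterminism vs. static scheduling:
 * each iteration's load address depends on the previous load, and the
 * latency of each load depends on whether that node is in cache. A
 * compiler scheduling statically can't know either, so it can't fill
 * the gaps with independent work; an out-of-order core does so at runtime. */
#include <stddef.h>

struct node {
    long value;
    struct node *next;
};

long sum_list(const struct node *head)
{
    long total = 0;
    for (const struct node *n = head; n != NULL; n = n->next) {
        total += n->value;  /* n->next can't even be fetched until n arrives */
    }
    return total;
}
```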
My recollection at the time was that there were good reasons to believe compilers would be workable.
We were in the golden years of Java bytecode translations, real-world JITs, and Transmeta (which made a similar bet in hardware). That was the era we were starting to introduce rather complex transformations and optimizations. Computers were just starting to become powerful enough to where this kind of analysis and optimization felt possible and practical.
The compiler problem felt complex, but not unsolvable.
There were no good, obvious reasons I recall why the Itanium optimization problem couldn't be solved. The basic philosophy was make the hardware as fast as possible, even if it was hard to program, just rely on the compiler to deal with it.
Now, in hindsight, I can give all the reasons it didn't work. Key among them is that we drastically underestimated the degree to which complex software systems are hard to build and the way code messiness grows exponentially for these kinds of systems.
However, that wasn't really knowable except in hindsight. Around the time, a lot of firms were making similar bets, and in 2001, I likely would have made the exact same bet with what was known at the time.
As a footnote, the anticipated progress in compilers is being made, but much more slowly than anticipated. NVidia reached its market cap on architectures being explored at that time. I had plenty of faculty promise similar SIMD/MIMD architectures would be increasingly important, but they __dramatically__ underestimated the time it would take to get there.
> We were in the golden years of Java bytecode translations, real-world JITs, and Transmeta (which made a similar bet in hardware). That was the era we were starting to introduce rather complex transformations and optimizations. Computers were just starting to become powerful enough to where this kind of analysis and optimization felt possible and practical.
Funnily enough, the static scheduling of Itanium was a bet in the opposite direction.
> There were no good, obvious reasons I recall why the Itanium optimization problem couldn't be solved. The basic philosophy was make the hardware as fast as possible, even if it was hard to program, just rely on the compiler to deal with it.
Oh, it technically could have been solved via compilers AND further architecture refinements (though it's a lesson in arrogance that Intel shunted this responsibility onto software developers). x86 is a complex mess of an instruction set, too, but it took decades of incremental hardware optimizations and compiler improvements to get it to scream. There was a huge market for it, so anybody and everybody was working to improve it. You had everybody from big corporations to scrappy video game developers (especially John Carmack) figuring out ways to make it run faster, often via quirks of the architecture.
> However, that wasn't really knowable except in hindsight. Around the time, a lot of firms were making similar bets, and in 2001, I likely would have made the exact same bet with what was known at the time.
I would argue that it was somewhat knowable. This wasn't the first time new CPU architectures were created. Many commercial UNIX's moved CPU architectures, some a few times. Some of their choices were similarly problematic. There were many technical and philosophical arguments on how reduced RISC processors should be.
I'm quoting from memory, but I saw a great statement somewhere that fans of RISC tended to be people forced to program assembly in university, which was painful on CISC architectures. This often clouded their judgement that extra instructions can dramatically speed up computing and the complexity can be abstracted by higher level languages and compilers/JITs. The real world is a harsh mistress (this is not to say RISC CPUs didn't have performance benefits in many cases, but it was mostly where large amounts of data needed to be computed on like databases).
> As a footnote, the anticipated progress in compilers is being made, but much more slowly than anticipated. NVidia reached its market cap on architectures being explored at that time. I had plenty of faculty promise similar SIMD/MIMD architectures would be increasingly important, but they __dramatically__ underestimated the time it would take to get there.
The big difference is that GPUs, which are a fundamentally different paradigm of computing from CPUs, provided an immediate improvement even before heavy optimizations could be done. GPU development was kickstarted by games and it took decades for it to branch out to other large markets (first crypto, then AI). But the real benefit of GPUs is that they didn't replace CPUs, but operated in parallel. You could even run Quake without one, at obviously reduced performance.
But intel also screwed up with GPUs...
I'm a big believer in incentives. Intel's incentives were to make a CPU that nobody else could copy like x86. The complexity was probably internally seen as a benefit so it'd be harder to copy. Intel thought it could dictate to the market what it was going to get and it arrogantly saw its market domination as being stronger than it was, leaving AMD to create 64 bit extensions to x86 on its terms (hilariously most compilers call the architecture amd64).
But the biggest incentive mismatch was that there was no incentive to optimize for Itanium in the market. If nobody is buying it, are you going to spend time and money making your compiler or software better for it? If you're Microsoft, how much money are you going to sink into optimizing Windows, .NET, Visual Studio, SQL Server, etc. when there's no ROI? It's also a chicken-and-egg problem where you can't optimize when almost nobody is using it and seeing the bottlenecks. There were no John Carmacks spending hours in a debugger trying to squeeze out performance hacks.
Meanwhile, ARM started to grow by being good at specific things at first (performance per watt at a relatively low price). This mattered first in portable devices, then became important in dense datacenter environments, which encouraged development of more raw performance. Now we're seeing it in top-of-the-line consumer computers. Itanium was promised to be all of this at once - almost overnight. IMO it was doomed to failure the second that first sales graph was created: https://en.wikipedia.org/wiki/Itanium#Expectations. If you try to make everybody happy at once, you usually end up with nobody being happy.
> I would argue that it was somewhat knowable. This wasn't the first time new CPU architectures were created. Many commercial UNIX's moved CPU architectures, some a few times. Some of their choices were similarly problematic. There were many technical and philosophical arguments on how reduced RISC processors should be.
I think the key difference was that in the early days, one could only afford a single-pass compiler. Then, double-pass (but it was slow). Itanium was just at the time when compilers could _practically_ do deeper program analysis.
There is no individual piece in making a good compiler for Itanium which I can't solve. At the time, most interesting CS problems that smart people were working on are what we called "microprogramming" in my college jargon -- optimizing individual algorithms, optimizing an assembly loop, etc. What made an Itanium compiler hard was (again in local jargon) "macroprogramming" -- making all those things work together.
> I'm quoting from memory, but I saw a great statement somewhere that fans of RISC tended to be people forced to program assembly in university, which was painful on CISC architectures. This often clouded their judgement that extra instructions can dramatically speed up computing and the complexity can be abstracted by higher level languages and compilers/JITs. The real world is a harsh mistress (this is not to say RISC CPUs didn't have performance benefits in many cases, but it was mostly where large amounts of data needed to be computed on like databases).
This doesn't match my history.
The hypothetical difference was the other way around. CISC had complex instructions (which might take many cycles to run, like a string copy). RISC has simple instructions. Ergo, "reduced instruction set." The technical difference in early processors was RISC was pipelined and CISC was microcode (where all instructions took many cycles).
The reason for CISC was largely so programs could be smaller. This made a huge difference if a computer has e.g. 8k of RAM. CISC, circa Pentium days, was hard to make fast because:
- CISC variable length instructions were hard to decode, and RISC fixed-length ones were easy. This mostly disappeared as decode units became smaller perhaps circa 2005.
- CISC was hard to pipeline. Again, circa 2005, it became easy to (1) avoid annoying instructions in code and treat them as very slow backwards-compatibility special cases in hardware; or (2) do a translation.
For the most part, the practical distinction disappeared around then.
> The big difference is that GPUs, which are a fundamentally different paradigm of computing from CPUs, provided an immediate improvement even before heavy optimizations could be done. GPU development was kickstarted by games and it took decades for it to branch out to other large markets (first crypto, then AI).
True. Although that's more of a business distinction than a technical one.
> But the real benefit of GPUs is that they didn't replace CPUs, but operated in parallel. You could even run Quake without one, at obviously reduced performance.
I'm betting 50% on convergence, as number of CPU cores grows, and GPU cores become increasingly complex. I think the M1 may be the track we eventually converge on, with diverse cores optimized for diverse tasks.
> But intel also screwed up with GPUs...
True. Although they seem on an okay track if they can get the software right. A770 16GB can be had for $300. NVidia 4060 16GB is $450 (and faster). For a lot of non-gaming users (ML, CAD, etc.), the A770 is ideal.
> I'm a big believer in incentives. Intel's incentives were to make a CPU that nobody else could copy like x86. The complexity was probably internally seen as a benefit so it'd be harder to copy. Intel thought it could dictate to the market what it was going to get and it arrogantly saw its market domination as being stronger than it was, leaving AMD to create 64 bit extensions to x86 on its terms (hilariously most compilers call the architecture amd64).
I am too -- although that's a business rather than technical argument -- but I'm not sure that's exactly what happened here; I think it's about half-right. I think Intel simply overestimated the amount of time it would stay dominant in the market, underestimated the competition, and underestimated the time Itanium would take to develop.
I don't think they were intentionally making the CPU either easy or hard to copy.
First of all, I’m enjoying this discussion. I also want to point out that I’m not necessarily advocating for either side in RISC vs CISC (I lean RISC personally), but I’m more pointing out how the market actually ended up and why it (was) so hard to replace x86.
> I think the key difference was that in the early days, one could only afford a single-pass compiler. Then, double-pass (but it was slow). Itanium was just at the time when compilers could _practically_ do deeper program analysis.
> There is no individual piece in making a good compiler for Itanium which I can't solve. At the time, most interesting CS problems that smart people were working on are what we called "microprogramming" in my college jargon -- optimizing individual algorithms, optimizing an assembly loop, etc. What made an Itanium compiler hard was (again in local jargon) "macroprogramming" -- making all those things work together.
Both these circle back to a point I made earlier in the thread - who’s going to do that for a processor nobody is using? There is an alternate timeline where a combination of hardware and compiler/software iteration make Itanium competitive at a performance level, but intel’s non-technical decisions made that impossible at a practical level. It was never made cheap or available enough where anybody could tinker at the lower end, and at the high end they suffered from the chicken/egg “nobody bought it because x86 was faster today”.
The Itanium did do very well in some supercomputer deployments where the code could handle the architecture's good parallelism. But that was likely net-new code for a niche product.
> The hypothetical difference was the other way around. CISC had complex instructions (which might take many cycles to run, like a string copy). RISC has simple instructions. Ergo, "reduced instruction set." The technical difference in early processors was RISC was pipelined and CISC was microcode (where all instructions took many cycles).
Hypothetically, yes. In the real world a lot of work was done to minimize that over x86’s lifetime. In the early days RISC did do a lot of what was promised, but clever (some would say hacky in some situations) updates to x86 CPUs and compilers made these advantages less (although one could argue the x86 microarchitectures that showed up in the mid to late 1990s were more RISC like). Faster and better caching (variable-length x86 instructions meant that common instructions could have a shorter encoding and take up less space in the instruction cache, vastly reducing expensive cache misses) also minimized a lot of these issues. The main issue was that a lot of these “enhancements” kept the performance-per-watt ratios at very poor levels, which caused no end of headaches as laptops became more popular and left Intel (and AMD, with x86) with no competitive alternative to ARM in phones.
> The reason for CISC was largely so programs could be smaller. This made a huge difference if a computer has e.g. 8k of RAM. CISC, circa Pentium days, was hard to make fast because:
> - CISC variable length instructions were hard to decode, and RISC fixed-length ones were easy. This mostly disappeared as decode units became smaller perhaps circa 2005.
> - CISC was hard to pipeline. Again, circa 2005, it became easy to (1) avoid annoying instructions in code and treat them as very slow backwards-compatibility special cases in hardware; or (2) do a translation.
I agree with all these points, but even before the decode enhancements ~2005, lots of work was done to mitigate these. The most obvious thing that x86 caught up on was clock speed. A central argument in favor of RISC was that it allowed you to jack up the clock speeds of processors, which could then iterate over the reduced instructions faster and provide better performance in most computing cases. This clock speed advantage was, in practice, eliminated, though not due to issues with RISC itself, but more because only the large volumes of x86 chips could justify the higher costs of staying near the bleeding edge of transistor manufacturing and its smaller transistor sizes (something that ARM would eventually come to lead, though).
The CISC instructions were often heavily improved upon over hardware iterations or moved over to new ones (MMX being a famous example) that were more in line with how code was used (hilariously MMX became kind of redundant soon after as GPUs took over those functions). There were also many cases where x86 could do things in fewer instructions than RISC, which helped even more at higher clock speeds as x86 caught up.
Again, I’m not necessarily defending CISC/x86. But it had so much engineering heft thrown at it due to its install base that it often brute forced its way to performance and it was only when performance per watt metrics started to matter that an alternative came on the scene (and it was not something that Itanium would have been better at - ARM would probably be causing the same issues to intel in the data center had Itanium taken off as it was). This was always going to be a hindrance to any replacement. The fact there was genuine competition in x86 kept prices lower and development cycles active, too.
A lot of what you state is correct, but in practice x86 chips were still faster for most workloads out there. In a perfect world all this time and effort would have been heaped on a far better CPU architecture (CISC or RISC), but alas…
Even today x86 still outperforms ARM on most server chips, but our AWS loads are using their ARM chips because it’s cheaper per unit of compute, which is fine for most workloads. This may even get better as more focus is put on ARM via compilers or architecture iterations.
> True. Although that's more of a business distinction than a technical one.
It’s a technical one. They provided immediate technical enhancements but didn’t mean all your current code had to be rewritten. They were optional (until video games got so sophisticated that they became required), and you didn’t need to rewrite/recompile your OS to use them. However, GPUs are a lot more niche, so fundamental architecture changes can be done more easily, especially as most code out there goes through higher-level APIs.
> I'm betting 50% on convergence, as number of CPU cores grows, and GPU cores become increasingly complex. I think the M1 may be the track we eventually converge on, with diverse cores optimized for diverse tasks.
I agree on this for most consumer products. SoCs have taken over the embedded and mobile market and that can continue to other markets. But we’ll see what happens if AI continues its current trajectory. There it’s the GPUs that matter more than the CPUs and we could see something different emerge.
> I am too -- although that's a business rather than technical argument -- but I'm not sure that's exactly what happened here; I think it's about half-right. I think Intel simply overestimated the amount of time it would stay dominant in the market, underestimated the competition, and underestimated the time Itanium would take to develop.
It’s all about business, though. If the world were purely technical, Amiga, DEC, or Sun would be on top of the world. Intel did everything you just said, but also tried to do too much. A lot of what Intel does is over-engineered, in the sense that they do things in a more complex way than necessary (some cynical people would say on purpose, to make larger margins on hardware/chipsets, e.g. USB, which at a low level is very complex even in the original spec).
> I don't think they were intentionally making the CPU either easy or hard to copy.
At an engineering level, no. But higher up they made sure that the way it worked with IP, etc. would make this difficult. The fact that x86 clones exist at all was a legal miracle and a quirk of history (pretty much IBM demanding it for the original PC, and Intel was not big enough yet to say no): https://jolt.law.harvard.edu/digest/intel-and-the-x86-archit...
(I also agree with most things, and don't reply to those parts)
> In the real world a lot of work was done to minimize that over x86’s lifetime. In the early days RISC did do a lot of what was promised, but clever (some would say hacky in some situations) updates to x86 CPUs and compilers made these advantages less (although one could argue the x86 microarchitectures that showed up in the mid to late 1990s were more RISC like).
They were identical to RISC architectures, with the exception of a more complex decode unit, which led to extra transistors burning extra power and costing speed in the early Pentium days.
That mattered a lot when a 1993 Pentium had 3.1 million transistors.
That matters a lot less when a modern Intel CPU has ≈5 billion transistors.
Fundamentally, that's what allowed x86 to pass all the various RISC architectures in speed. Those kinds of differences in instruction set just don't matter anymore. At that point, it was just R&D dollars, which Intel had more of due to volume.
As a footnote: It's worth remembering StrongARM. The SA-110 was just as fast as the fastest Intel had to offer, when introduced in 1995, at a much lower price and power budget. It was crazy I could get a little embedded board for $100-$300 which was just as fast as $5000 computers. Intel acquired it from DEC. Oddly enough, it barely improved from there on. A decade later, clock speed went up 4x on the XScale (and only on the very top-end), and more than 15x on the x86.
> The fabless companies are going to be fucked in the long term- the fabs they contract to are all on shifting geopolitical sands!
The fabless companies will also have a problem if only TSMC is left. What is there to keep them from raising prices? Right now TSMC is afraid of Samsung and Intel, but if Samsung falls behind and Intel folds, AMD / NVDA / ... will have a crisis on their hands.
> decided cheap labor was more important than national security
Organized labor is considered a threat to national security, it's no coincidence that deindustrialization followed a period of widespread labor militancy.
For fabs to run efficiently and effectively, you need a competent workforce. I’m not sure you can get that kind of workforce here in the US. Just look at TSMC’s struggles in AZ.
To get high yields and a reliable process at these process nodes you need high throughput in the fabs, even with sophisticated models and simulations, is my guess. At one point in time Intel had high throughput compared to the world via internal demand, which then translated into an advantage for their designs. Internal tools were built around this feedback loop, until that loop was no longer true. The throughput at external fabs far exceeded the throughput at Intel. While external fabs were designed to be used by all, Intel's flows and fabs were not. Rather than adjusting to the new reality quickly, and competing on design while exposing the fabs to customers, it seems Intel is now struggling mightily on both fronts.
Number of employees is not the correct comparison, total labor cost is. Intel intentionally pays middling compensation so you would expect to need more people to do the same job.
I believe the ambitious plan by Gelsinger to release "5 nodes in 4 years", as well as building a couple of new factories, necessarily resulted in a lot of redundancy and hiring people who were needed for the transition period but not necessarily long term. Now that they are nearing the completion of the Intel 20A and 18A R&D cycles, it seems a logical point to start cost-cutting.
One can only hope. Goodness knows it would be better if Intel were more competitive in this market. Let's cross our fingers that you're right and we see a resurgence after the new fabs come online.
I live near Intel HQ and I know many Intel employees. Everyone I know has been interviewing at other companies for the last 8-12 months. I haven't met a single person that had a positive thing to say about Intel and its future.
Paul Otellini and Brian Krzanich (both from sales backgrounds) made strategic mistakes that are hard to overstate; combined with the govt-office work culture, that makes it not a great place to work.
Suspending the dividend is poetic in a sense. They'd been taken over by bean counters decades ago and slowly pissed away their privileged position at the top of computing.
They essentially ended up where Boeing did, but at least didn't kill anybody.
A lesson that has to be taught again and again. You can run a business into the ground but look great for years on inertia alone. Intel has been crumbling for a long while but the inertia of previous decades meant it wasn't apparent.
That financials are a very laggy indicator of reality is the lesson that doesn't actually seem to be properly taught. It's even worse in software. The stickiness can make companies look great when they are actually a mess. The cost of acquiring companies can make companies look like money losers when they are actually doing well. Intel has been a mess for a decade at least, and obviously going to zero. But the numbers only started going bad a few years back.
Maybe. On Intel's scale almost everything you do wrong kills people, at least statistically. Though, so does everything you do right, but probably less. How many human lifetimes were spent cleaning up and mitigating the microarchitecture vulnerabilities?
The ‘flagged dead’ comment is right on. Intel has been a bloated bureaucracy for a long time now and things need to get tightened up for it to continue.
What's special about "them"? Twitter was technically never a particularly complex product that required some kind of exceptional talent. Not even remotely in the same ballpark compared to what Intel is doing (or trying to do anyway...)
Indeed there are highly valuable experts at Intel - there have to be, for them to function at all. They're an engineering company making incredibly complex products, unlike Twitter which just sold ads.
But there are also likely to be countless "twitter-like" employees too, paid to sit on their hands or faff about with branding or tinker with devops parlor tricks or gate access to resources that engineers need.
So the current level of cuts passes the "gut" check for me. They surely have less "chaff" than Twitter, and it's crucial not to accidentally cut too many key personnel in the process.
The big difference is that Intel is a manufacturer, not just a web site. They need people on the ground, many of them, in anticontamination suits, 24/7. That changes the "can we just fire 80% of them" question by quite a bit.
Twitter has dropped in value almost 80% since Musk bought it. Twitter is this weird zombie unicorn which keeps attracting money while mostly losing money on net. I don't think it's useful to use it as an example.
I'm very surprised people are really taking advice from freaking Twitter for how to handle a silicon hardware company. Also the lack of empathy for fellow developers, but that's a sadly rising sentiment. Crabs in a bucket.
The only reason Twitter isn't a dead husk is because network effects are really damn strong. That's it. The attrition to Bluesky or Mastodon or whatever won't be as drastic as in the Myspace days, when only single-digit percentages of the current internet populace were even connected.
You don't get network effects with hardware, especially since most people these days buy laptops and won't bother to specify an intel chip (if available at all).
It dropped in projected value as a company, irrespective of the stock (which you can't buy now because it's private), per individuals with large stakes.
Their financials tanked as advertisers fled the platform or reduced spend drastically. It's now basically a money pit that at present trajectory will continue to burn money until Elon's other ventures can't afford it. Given his wealth he can keep losing a billion or two a year forever even if people stop buying his overpriced poorly built cars.
I predict however that eventually it looks less interesting and they try a rebrand as twitter with 90% less Elon for 2x the advertising dollars (for real this time) and borrow as much money and assets as possible and eventually exit.
Advertisers are free to come back to the platform. Could it be that allowing people to publish 200-word texts just isn't that great of a business? Also, I can't help but notice the similarities in the arguments about censorship ("it's their platform, their rules"); then, when the wrong person buys the platform, suddenly that argument gets put to rest.
Why would they when bots are up, engagement with valuable users down, and your ad could play opposite nazi shit.
Censorship is when the government won't let you publish something. It's not when Twitter doesn't let you post something, it's not when your post isn't shared, it isn't when Pepsi won't pay you to run ads, and it's not when users stop engaging.
All of these things are things people are morally entitled to do.
Elon is the wrong person because he's ruining it and because he's doing so in service not to a different tax or economic policy but in service to evil.
I maintain that no owner of twitter really understood what they had, either before or after Musk. Twitter was really good at news if you knew who to follow and you had direct access to a lot of experts in various fields. They had to put an enormous amount of effort into dealing with misinformation, but couldn’t figure out the balance between that and mass market appeal. The result is that they financially treaded water.
Musk thought that what people wanted was raw unfiltered “free speech”, but he thought his kinds of views were restricted. The result was that when he got control, the guardrails were mostly removed and a lot of users recoiled, and advertisers left due to the desirable target users disappearing as well as having their ads shown next to questionable content. Then he contradicted himself by blocking accounts that hit his ego.
I’m a geopolitical nerd and loved Twitter, but finally gave up on it when I started getting fake news as well as promoted tweets by Musk himself, whose at-best-bizarre opinions I didn’t give a shit about. The blocking of third-party clients meant that I couldn’t even filter client-side anymore (RIP Tweetbot).
It could be because, in conjunction with the layoffs, Intel is battling a potential class action over their two latest series of desktop CPUs being faulty.
$152 billion in stock buybacks. We don't have an economy anymore, we are just handing money to the ultra-wealthy.
US. Total economic collapse. Hard landing.
This is the beginning of the end. We really had the chance to make something beautiful with this country, but the 1% bought and sold it into the ground.
Half our politicians aren't even trying to keep it running anymore.
They should have done more buybacks so it could be used for something productive.
All of the money left in the company is going to be wasted as it rots from the inside.
The company didn't die of starvation, it died of obesity.
I'm not sure who was begging whom, but as long as they build the fabs they are being paid to build, they should get the money. If they don't build the fabs, they shouldn't get the money for them.
Heck, our politicians are at the point where they're taking their bribes in pure gold (see NJ's recent senatorial shame).
A good friend of mine is the son of a former hedge fund manager... who told me in 2018 that "owning real gold is foolish"; in 2024, he's been telling me about his expanding gold collection.
I have made even better returns on silver/BTC, but anything "real" is probably durable enough to last for (hopefully at least) another decade of kicking the can down the road...
Quoting my favorite family member of The Silent Generation (pre WW2 birth), "Nobody wants to be the last one at the party, because then you have to help clean up all the mess!"
It's happening, real-time. Yes. All "trickling up," exactly as-designed: into more-durable asset classes, largely unaffordable to Joe American.
Just for reference, a troy pound of gold currently trades for around $30,000; whereas in 1970 the same mass was, officially, $420.
e.g. Multiple OPEC countries trading oil in non-USD transactions (against the USD-only policies in place since the early 1970s).
e.g. In 2020 the largest publicly traded company was a state-run oil extractor (Aramco), worth barely $2 trillion [two million million dollars]; in 2024 (so far) there have been THREE $3T+ market caps, each a massively inflated valuation; Bitcoin is more stable & more valuable than most tech stocks.
e.g. The #100 publicly traded asset (by market cap) in 2020 was worth approximately $110B; #100 in 2024 is worth approximately $155B.
Never forget that Nixon promised, just as countless other politicians are trained to do, that this suffering is only TEMPORARY. They're technically not incorrect, when temporal definitions are left undefined.
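Quick sanity check on the troy-pound figure a few paragraphs up: a troy pound is 12 troy ounces, the official US price was $35/oz until 1971, and roughly $2,500/oz is my assumed current spot price (swap in whatever the quote of the day is).

```c
/* Back-of-envelope check of the troy-pound gold prices mentioned above.
 * $35/oz is the official US price before 1971; $2,500/oz is an assumed
 * rough current spot price, not a quote. */
#include <stdio.h>

int main(void)
{
    const double oz_per_troy_lb  = 12.0;
    const double usd_1970_per_oz = 35.0;    /* official peg until 1971 */
    const double usd_now_per_oz  = 2500.0;  /* assumption */

    printf("1970 troy pound:  $%.0f\n", usd_1970_per_oz * oz_per_troy_lb);  /* $420 */
    printf("today troy pound: $%.0f\n", usd_now_per_oz * oz_per_troy_lb);   /* $30,000 */
    printf("increase: about %.0fx\n", usd_now_per_oz / usd_1970_per_oz);    /* ~71x */
    return 0;
}
```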
Why would anyone buy stocks if companies didn't do buybacks or issue dividends? The whole purpose of stocks is to be financially rewarded for investing capital.
Like if you just don't like stock markets or capitalism in general, that's fine, but that doesn't have anything to do with stock buybacks.
My issue with stock buybacks is that clearly corners were cut elsewhere while the company was hollowed out. And they get away with it because Uncle Sam is just going to show up with a new check next year.
Same with Boeing. Miserable failure of a company, somehow found room to hand the owners massive amounts of money, meanwhile they have massive quality issues.
Reducing Operating Expenses: The company will streamline its operations and meaningfully cut spending and headcount, reducing non-GAAP R&D and marketing, general and administrative (MG&A) to approximately $20 billion in 2024 and approximately $17.5 billion in 2025, with further reductions expected in 2026. Intel expects to reduce headcount by greater than 15% with the majority completed by the end of 2024.
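Back of the envelope on those quoted targets (the ~125,000 baseline headcount is my assumption for scale, not a number from the release):

```c
/* Rough math on the press-release figures above. The $20B -> $17.5B opex
 * targets are quoted; the 125,000 baseline headcount is an assumption used
 * only to put "greater than 15%" into absolute terms. */
#include <stdio.h>

int main(void)
{
    const double opex_2024_bn = 20.0;
    const double opex_2025_bn = 17.5;
    const double assumed_headcount = 125000.0;  /* assumption */
    const double cut_fraction = 0.15;           /* "greater than 15%" */

    printf("opex cut 2024 -> 2025: $%.1fB (%.1f%% of the 2024 target)\n",
           opex_2024_bn - opex_2025_bn,
           100.0 * (opex_2024_bn - opex_2025_bn) / opex_2024_bn);  /* $2.5B, 12.5% */
    printf(">15%% of ~%.0f employees is at least %.0f roles\n",
           assumed_headcount, assumed_headcount * cut_fraction);   /* ~18,750 */
    return 0;
}
```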
INTC and its subsidiaries should consider a comprehensive reorganization. The company is struggling, and it seems they're making poor decisions regarding staffing and leadership. Mediocre and ineffective leadership persists at all levels, and the business and sales departments are likely contributing to the issues as much as HR.
If you want to help turn things around, contact the board members and CEOs directly. Make it clear that they should not receive any compensation until they have successfully revitalized INTC, MBLY, and other related companies. For reference, these are the KPIs for them: INTC's market cap should be currently around $250 billion and MBLY's is about $35 billion.