It’s worth keeping in mind that the silicon lottery is very much a thing at these nanometer sizes. So some market segmentation has to exist. If Intel threw away every chip that had one of the four cores come out broken, they’d lose a lot of money and have to raise prices to compensate. By fusing off the broken core and one of the good ones, they can sell it as a two-core SKU.
Does this excuse Intel’s form of market segmentation? No. They almost certainly disable, for example, hyperthreading on cores that support it - just for the segmentation. But we can’t make every CPU support everything without wasting half-good dies.
> Does this excuse Intel’s form of market segmentation? No. They almost certainly disable, for example, hyperthreading on cores that support it - just for the segmentation.
I think even this is a bit unfair. Intel's segmentation is definitely still overkill, but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour.
Most of the cost (especially for Intel) is going to be upfront costs like R&D on the chip design, and the chip foundry process. I don't think it's unreasonable for Intel to be able to sell an artificially gimped processor at a lower price, because the price came out of thin air in the first place.
The point at which this breaks is when Intel doesn't have any real competition and uses segmentation as a way to raise prices on higher end chips rather than as a way to create cheaper SKUs.
> The point at which this breaks is when Intel doesn't have any real competition and uses segmentation as a way to raise prices on higher end chips rather than as a way to create cheaper SKUs.
I’m not sure that this is really fair to call broken. This sort of fine granularity market segmentation allows Intel to maximize revenue by selling at every point along the demand curve, getting a computer into each customer’s hands that meets their needs at a price that they are willing to pay. Higher prices on the high end enables lower prices on the low end. If Intel chose to split the difference and sell a small number of standard SKUs in the middle of the price range, it would benefit those at the high end and harm those at the low end. Obviously people here on HN have a particular bias on this tradeoff, but it’s important to keep things in perspective. Fusing off features on lower-priced SKUs allows those SKUs to be sold at that price point at all. If those SKUs cannibalized demand for their higher tier SKUs, they would just have to be dropped from the market.
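To make the demand-curve argument concrete, here's a toy model with made-up buyer groups and prices (purely illustrative, nothing to do with Intel's actual numbers). With a single mid-range SKU the budget group is priced out entirely; with segmented SKUs every group is served and total revenue is higher:

```python
# Toy price-segmentation model: three hypothetical buyer groups, each of which
# buys the most expensive SKU it can afford (a stand-in for "the SKU that
# matches their needs and willingness to pay").
buyers = {
    "enthusiast": {"count": 1_000,  "willing_to_pay": 500},
    "mainstream": {"count": 5_000,  "willing_to_pay": 250},
    "budget":     {"count": 10_000, "willing_to_pay": 120},
}

def revenue(prices):
    total = 0
    for group in buyers.values():
        affordable = [p for p in prices if p <= group["willing_to_pay"]]
        if affordable:
            total += group["count"] * max(affordable)
    return total

print(revenue([250]))            # 1,500,000 - one mid-range SKU, budget buyers unserved
print(revenue([500, 250, 120]))  # 2,950,000 - segmented SKUs, every group served
```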
Obviously Intel is not a charity, and they’re not doing this for public benefit, but that doesn’t mean it doesn’t have a public benefit. Enabling sellers to sell products at the prices that people are willing/able to pay is good for market efficiency, since otherwise vendors have to refuse some less profitable but still profitable sales.
It is unfortunate though that this has led to ECC support being excluded from consumer devices.
Without knowing what the silicon lottery distribution actually looks like, we can't really say that.
> "... but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour."
Yes, you could choose to amortize it over every product but then you're selling each CPU for the same price no matter which functional units happen to be defective on a given part.
Since that's not a great strategy (who wants to pay the same for a 12 core part as a 4 core part because the amount of sand that went into it is the same?) you then begin to assign more value to the parts with more function, do you not? And then this turns into a gradient. And eventually, you charge very little for the parts that only reception PCs require, and a lot more for the ones that perform much better.
Once you get to diminishing returns, there's going to be a demographic you can charge vastly more for that last 1% of juice, because either they want to flex or at their scale it matters.
Pretty soon once you get to the end of the thought exercise it starts to look an awful lot like Intel's line-up.
I think what folks don't realize is that even now, Intel 10nm fully functional yields are ~50%. That means the other half of those parts, if we're lucky, can be tested and carved up into lower bins.
Even within the "good" 50% certain parts are going to be able to perform much better than others.
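As a back-of-the-envelope sketch of what that kind of binning buys you - using a made-up per-core defect probability and an oversimplified independent-defect model, not Intel's real numbers - roughly half the dies come out fully functional and most of the remainder are salvageable as lower-core-count bins rather than scrap:

```python
from math import comb

# P(exactly k of n cores defective) under an independent-defect model,
# with a hypothetical per-core defect probability p_core.
def bin_fractions(n_cores, p_core):
    return {k: comb(n_cores, k) * (1 - p_core) ** (n_cores - k) * p_core ** k
            for k in range(n_cores + 1)}

fractions = bin_fractions(n_cores=8, p_core=0.08)
print(f"all 8 cores good:    {fractions[0]:.1%}")                    # ~51%
print(f"exactly 1 core bad:  {fractions[1]:.1%}")                    # ~36%, sellable as a lower bin
print(f"2+ cores bad:        {1 - fractions[0] - fractions[1]:.1%}")  # ~13%
```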
> So some market segmentation has to exist. If Intel threw away every chip that had one of the four cores come out broken, they’d lose a lot of money and have to raise prices to compensate.
Except in the case of the two-core Pentium special editions and i3 parts, Intel actually designed a separate two-core part that wouldn't have the benefit of re-enabling cores among hobbyists.
And then there's the artificial segmentation of disabling Xeon support on consumer boards, even though the Xeon-branded parts were identical to i7s (with the GPU disabled), and of adding (or removing) a pin on the socket between generations even though the chipset supports the CPU itself (and the CPU runs fine on the socket with an adapter).
Intel definitely did everything they could to make it as confusing as possible.
It's just the behavior of a monopolist: they are making their product line as efficient as possible by milking every last penny out of every single customer.
In a truly competitive ecosystem features that have additional cost would be the only ones that actually cost more, and artificial limits wouldn't work because the vendor with less market share would just throw them in for free.
So you would expect product segmentation along the lines of core counts, DRAM channels, etc., but not really between, for example, high-end desktop and low-end server, because there would be a gradual mixing of the two markets.
And it turns out the market is still competitive, because Arm and AMD are driving a bus through some of those super-high-margin products that are only artificially differentiated from the lower-end parts by the marketing department or some additional engineering time that actually breaks functionality in the product (ECC, locked multipliers, IOMMUs, 64-bit MMIO windows, etc.).
Look at the Apple A12X. They disabled a GPU core in it for the iPad, and then in the A12Z they enabled that core. This was likely to help with yields. Then with the M1 chips they decided to sell a 7-GPU-core version of the chip in the base-level MacBook Air and save the 8-GPU-core version for the higher trims.
Even Apple is susceptible to it. But Apple doesn't sell chips, they sell devices, and they can eat the cost for some of these. For example, if a chip has 2 bad cores, instead of selling a 6-core version Apple is probably just scrapping it.
Having no margin of error on these SKUs would be terminally dumb, but having tight error bars isn't necessarily a bad thing.
Being able to sell bad batches of product takes some of the sting out of failure, and past a certain point you're just enabling people to cut corners or ignore fixable problems. Having a tolerance of only one bad core means that if I think I have a process improvement that will reduce double faults but costs money to research and develop, I'm more likely to get that funding, aren't I?
All of those devices perform exactly the same, as Apple has chosen the same power/thermal set point for all of them. This is going to start to look a lot different in coming years when the larger MacBook Pro transitions - I expect 2-3 more models there. Then when the Mac Pro transitions I expect another 2-3 models there.
We'll start to see high-binned next-gen Apple Silicon parts moving to the MacBook Pro, and Mac Pro, and lower-binned parts making their way down-range.
Another commenter (dragontamer) pointed out elsewhere in the thread that Apple might be doing what Sony did for the PS3 (since Sony also made custom chips that had to perform identically in the end product): the strategy Sony took was to actually make better chips than advertised for the PS3, and disable the extra cores. That means that if one of the cores is broken, you can still sell it in a PS3; you were going to disable it anyway. Yields go up since you can handle a broken core, at the cost of some performance for your best-made chips since you disable a core on them.
That could make sense for Apple; the M1 is already ~1 generation ahead of competitors, so axing a bit of performance in favor of higher yields doesn't lose you any customers, but does cut your costs.
Plus, they definitely do some binning already, as mentioned with the 7 vs 8 core GPUs.
Baseless speculation: perhaps they do actually throw away chips? They only really target a premium market segment so perhaps it's not worth it to their brand to try and keep those chips.
Waste is a factor in all produced goods. The price of every fish you eat takes into account dealing with bycatch. Your wooden table's price accounts for the offcuts. It's the nature of making (or harvesting, or whatever) things.
In silicon manufacturing, the inefficiency is actually pretty low specifically because of the kind of binning that Intel and AMD do, that GP was complaining about. In a fully vertically integrated system with no desire to sell outside, the waste is realized. In a less integrated system the waste is taken advantage of.
In theory, capitalism should broadly encourage the elimination of waste - literally every part of the animal is used, for instance. Even the hooves make glue, and the bones make jello.
That's not really an Apple tax though, that's a cost of doing business tax. It's not like Intel and AMD and everyone else aren't effectively doing the same exact thing.
Intel and AMD __literally__ sell those broken chips to the open marketplace, recouping at least some of the costs (or possibly getting a profit from them).
Apple probably follows the same strategy Sony used for the PS3: create a 1-PPE + 8-SPE chip, but sell it as a 1-PPE + 7-SPE chip (assume one breaks). This increases yields, and it means that all 7-SPE and 8-SPE chips can be sold.
6-SPE chips (and below) are thrown away, which is a small minority, especially as the process matures and manufacturing reliability increases over time.
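To put rough (made-up) numbers on it: with a hypothetical 10% chance that any given SPE is defective, only 0.9^8 ≈ 43% of dies would have all eight SPEs working, but 0.9^8 + 8 × 0.9^7 × 0.1 ≈ 81% would have at least seven - so tolerating one dead SPE nearly doubles the usable yield.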
I can confirm that the Ryzen 5000 desktop series has issues with turbo boost: basically, if you disable turbo and stay on the base clock then everything is fine, but with turbo (CPB) enabled you get crashes and BSODs. I had this problem at work on my new workstation with a Ryzen 5900X. We RMAed it and the new CPU works fine. From what I read it's a pretty common problem, but it's strange that no one talks about it.
I think yes, but if you buy a CPU, you look at the advertised speeds and you expect to get them in your machine. From what I researched, to achieve the advertised clock frequencies you need to increase the voltage to make it more stable. Some people reported silicon degradation after increasing voltages (it worked fine for a week and then the problems returned).
I am very interested in AMD's latest lineup (and bought a 5500U laptop that performs super well so far), but I am aware that on the PC front things can be a bit rockier and not always stable, so such comments and articles help a lot.
Apple sells a 7 core and 8 core version of their M1 chips. Maybe Intel and AMD ship CPUs with even more cores disabled but it's not like Apple doesn't do this at all.
There's no way they throw away that much revenue. Not even Apple is that committed to purity. I'm sure they have a hush-hush deal with another company to shove their chips in no-name microwave ovens or something.
Funny story about microwaves: there are basically only two main manufacturers. They're both in China, and you've never heard of them. But if you look at various brands in the US and take them apart, you'll see the only difference is the interface. The insides are literally the same.
The only exception to this are Panasonic microwaves.
To be fair, is there anything particularly revolutionary that could be done with a microwave (short of "smart" features)? They all function the same: shoot specific frequency energy into the (possibly rotating) chamber. It would make sense that the guts are just a rebadged OEM part.
It's not that much revenue, because the marginal cost of an individual chip is very low. Given that Apple has plenty of silicon capacity, throwing away say 5-10% of chips that come off the line is likely cheaper than trying to build a new product around them or selling them off to some OEM who needs to see a bunch of proprietary info to use them.
No way; the half-busted chips go into low-cost products like the iPhone SE. It costs little to accumulate and warehouse them until a spot in the roadmap for a budget device arises.
That's not amoral. It's missing a market opportunity, but conflating that with morality is an interesting way of looking at it.
Businesses don't owe you a product (before you pay for it) any more than you owe them loyalty after you pay for something. They will suffer when someone else offers what you want and you leave. That's the point of markets and competition.
Maybe 'amoral' is a bit strong, but I think there is something wrong with an economic system where producers destroy wealth, rather than distribute all that is produced.
If it's wrong for the government to pay farmers to burn crops during a depression, then it's wrong for a monopoly to disable chip capabilities during a chip shortage.
I think you're framing the supply chain in a very personal (strawman) way.
The problem is just one of "efficiency". The production is not perfectly aligned with where people are willing to spend money. A purely efficient market exists only in theory / textbooks / Adam Smith's Treatise.
The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) because they need to recoup or save resources to focus on finishing and shipping the working (full-core) products. They aren't driving their trucks of finished products into the ocean.
> The problem is just one of "efficiency". The production is not perfectly aligned with where people are willing to spend money. A purely efficient market exists only in theory / textbooks / Adam Smith's Treatise.
Destroying wealth is not the appropriate market mechanism to deal with disequilibrium. Producers should either lower the price to meet the market or hold inventory if they anticipate increased future demand. However, the latter may be harder to do in the CPU business because inventory depreciates rapidly.
Intel has hitherto been minimally affected by market pressures because they held an effective monopoly on the CPU market, though that is fast changing.
So, there is nothing necessarily "efficient" about what Intel is doing. They're maximising their returns through price discrimination at the expense of allocative efficiency.
> The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) because they need to recoup or save resources to focus on finishing and shipping the working (full-core) products. They aren't driving their trucks of finished products into the ocean.
That may be true in some cases, but not in others. I'm speaking directly to the case where a component is deliberately modified to reduce its capability for the specific purpose of price discrimination.
> Businesses don't owe you a product (before you pay for it) any more than you owe them loyalty after you pay for something.
This is itself a moral claim. You may choose to base your morals on capitalism, but capitalism itself doesn't force that moral choice.
> That's the point of markets and competition.
And the point of landmines is to blow people's legs off, but the existence of landmines does not morally justify blowing people up. Markets are a technology and our moral framework should determine how we employ technologies and not the other way around.
So, if I had changed to preface with "In today's western society, it is generally accepted that ... ", we'd be on a level playing field? That's reasonable.
No, the scenario is that there are massive price differences even for the same class of seats. Traditionally, the major long haul airlines sold seats weeks/months in advance at rates that were basically losing money but made almost all of their per flight profit on last minute bookings at higher rates. These were usually business flights, but not necessarily (not usually, even) business class.
Business models for budget airlines (RyanAir, etc.) are a bit different but that's not relevant here.
Because if they're capable of making plenty of good 4-core parts but have more demand for 2-core parts, so they're cutting down good 4-core dies, they should just make the 4-core parts a little cheaper. But maybe they already do this.
Anyways, agreed ECC should be standard, but it requires an extra die and most people can do fine without it, so it probably won't happen. But an ECC CPU option with clearly marketed consumer full ECC RAM would be nice. DDR5 is a nice step in this direction but isn't "full" ECC.
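For context on the "extra die" point (standard ECC background, not anything vendor-specific): conventional side-band ECC stores SECDED check bits alongside each 64-bit word, which is why ECC DIMMs are 72 bits wide and carry an extra DRAM chip, while DDR5's on-die ECC only corrects errors inside each chip and never covers the path to the CPU - hence "isn't full ECC". A tiny sketch of the check-bit arithmetic:

```python
# Check bits needed for SECDED (single-error-correct, double-error-detect)
# protection of a data word, via the Hamming bound.
def secded_check_bits(data_bits):
    k = 0
    while 2 ** k < data_bits + k + 1:  # Hamming bound for single-error correction
        k += 1
    return k + 1                       # one extra parity bit adds double-error detection

print(secded_check_bits(64))  # -> 8, i.e. 72 bits stored per 64-bit word
```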
I don't know if mobile cores factor into the same process, but if you have a lot of demand for 2-core systems for cheap laptops that can't supply the power or cooling for a 4-core part, then having more 4-core parts, even if they're cheaper, doesn't help.