I've never had time for Intel creating 400 different CPUs just to create artificial market segmentation and force people into a more expensive CPU. Why is there an i3, i5, i7, i9 - ahh, right, because then you can try to justify charging incrementally more for each additional feature. Oh you want turbo boost? Sorry that's an i5! Oh you want hyperthreading/SMT? Nope, next model up. Oh you want ECC? That's a "workstation" feature, here's an identical xeon with nothing new other than ECC!
Just STOP. EVERY CPU they make should support ECC in 2021. Give me an option for with or without GPU, and with or without 10Gbe - everything else should be standard. Differentiate with clock speed, core count, and a low power option, and be done with it.
It’s worth keeping in mind that the silicon lottery is very much a thing at these nanometer sizes. So some market segmentation has to exist. If Intel threw away every chip that had one of the four cores come out broken, they’d lose a lot of money and have to raise prices to compensate. By fusing off the broken and one of the good ones, they can sell it as a two core SKU.
Does this excuse Intel’s form of market segmentation? No. They almost certainly disable, for example, hyperthreading on cores that support it - just for the segmentation. But we can’t make every CPU support everything without wasting half good dies.
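To make "wasting half good dies" concrete, here's a toy binomial yield model (a minimal sketch with a made-up, independent per-core defect probability; real defect distributions are neither public nor this simple). It shows how fusing partially broken dies down to a 2-core SKU recovers most of what would otherwise be scrap:

```python
from math import comb

def core_count_distribution(p_core_defect, n_cores=4):
    """Toy model: probability that exactly k of n cores are defect-free,
    assuming independent defects at an illustrative rate p_core_defect."""
    p_good = 1 - p_core_defect
    return {k: comb(n_cores, k) * p_good**k * p_core_defect**(n_cores - k)
            for k in range(n_cores + 1)}

dist = core_count_distribution(p_core_defect=0.10)   # 10% is an assumption
full_4core = dist[4]                 # sellable as the full 4-core part
salvage_2core = dist[3] + dist[2]    # fused down to a 2-core SKU instead of scrapped
print(f"full 4-core dies:          {full_4core:.1%}")
print(f"salvageable as 2-core:     {salvage_2core:.1%}")
print(f"scrap if nothing salvaged: {1 - full_4core:.1%}")
```

With that (made-up) 10% per-core defect rate, roughly a third of the dies would be scrap without binning, and nearly all of them become sellable 2-core parts with it.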
> Does this excuse Intel’s form of market segmentation? No. They almost certainly disable, for example, hyperthreading on cores that support it - just for the segmentation.
I think even this is a bit unfair. Intel's segmentation is definitely still overkill, but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour.
Most of the cost (especially for Intel) is going to be upfront: R&D on the chip design and the foundry process. I don't think it's unreasonable for Intel to sell an artificially gimped processor at a lower price, because the price came out of thin air in the first place.
The point at which this breaks is when Intel doesn't have any real competition and uses segmentation as a way to raise prices on higher end chips rather than as a way to create cheaper SKUs.
> The point at which this breaks is when Intel doesn't have any real competition and uses segmentation as a way to raise prices on higher end chips rather than as a way to create cheaper SKUs.
I’m not sure that this is really fair to call broken. This sort of fine granularity market segmentation allows Intel to maximize revenue by selling at every point along the demand curve, getting a computer into each customer’s hands that meets their needs at a price that they are willing to pay. Higher prices on the high end enables lower prices on the low end. If Intel chose to split the difference and sell a small number of standard SKUs in the middle of the price range, it would benefit those at the high end and harm those at the low end. Obviously people here on HN have a particular bias on this tradeoff, but it’s important to keep things in perspective. Fusing off features on lower-priced SKUs allows those SKUs to be sold at that price point at all. If those SKUs cannibalized demand for their higher tier SKUs, they would just have to be dropped from the market.
Obviously Intel is not a charity, and they're not doing this for public benefit, but that doesn't mean it doesn't have a public benefit. Enabling sellers to sell products at the prices that people are willing/able to pay is good for market efficiency, since otherwise vendors have to refuse some less profitable but still profitable sales.
It is unfortunate though that this has led to ECC support being excluded from consumer devices.
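To make the demand-curve argument above concrete, here's a toy price-discrimination example (entirely made-up willingness-to-pay numbers, purely to show the mechanics, not Intel's actual economics):

```python
# Hypothetical willingness-to-pay (in dollars) of five equally sized customer groups.
wtp = [150, 250, 400, 650, 900]

def revenue_single_price(price):
    """Every group whose willingness-to-pay meets the single price buys one unit."""
    return price * sum(1 for w in wtp if w >= price)

best_revenue, best_price = max((revenue_single_price(p), p) for p in wtp)
segmented_revenue = sum(wtp)   # perfect segmentation: each group pays its own price

served = sum(1 for w in wtp if w >= best_price)
print(f"best single price ${best_price}: revenue {best_revenue}, serves {served} of {len(wtp)} groups")
print(f"segmented SKUs: revenue {segmented_revenue}, serves all {len(wtp)} groups")
```

In this toy case the revenue-maximizing single price shuts out three of the five groups entirely; segmentation serves everyone and raises total revenue, which is the trade-off being described.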
Without knowing what the silicon lottery distribution actually looks like we can't really say that.
> "... but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour."
Yes, you could choose to amortize it over every product but then you're selling each CPU for the same price no matter which functional units happen to be defective on a given part.
Since that's not a great strategy (who wants to pay the same for a 4 core part as for a 12 core part just because the same amount of sand went into it?), you then begin to assign more value to the parts with more function, do you not? And then this turns into a gradient. Eventually, you charge very little for the parts that only reception PCs require, and a lot more for the ones that perform much better.
Once you get to diminishing returns there's going to be a demographic you can charge vastly more for that last 1% juice, because either they want to flex or at their scale it matters.
Pretty soon once you get to the end of the thought exercise it starts to look an awful lot like Intel's line-up.
I think what folks don't realize is even now, Intel 10nm fully functional yields are ~50%. That means the other half of those parts, if we're lucky, can be tested and carved up to lower bins.
Even within the "good" 50% certain parts are going to be able to perform much better than others.
> So some market segmentation has to exist. If Intel threw away every chip that had one of the four cores come out broken, they’d lose a lot of money and have to raise prices to compensate.
Except that in the case of the dual-core Pentium special editions and i3 parts, Intel actually designed a separate two-core die, which removed the hobbyist benefit of re-enabling cores.
And then there's the artificial segmentation of disabling Xeon support on consumer boards, even though the Xeon-branded parts were identical to i7s (just with the GPU disabled), and of adding (or removing) a pin on the socket between generations even though the chipset supports the CPU itself (and the CPU runs fine in the socket with an adapter).
Intel definitely did everything they could to make it as confusing as possible.
It's just the behavior of a monopolist: making their product line as efficient as possible by milking every last penny out of every single customer.
In a truly competitive ecosystem features that have additional cost would be the only ones that actually cost more, and artificial limits wouldn't work because the vendor with less market share would just throw them in for free.
So you would expect product segmentation along the lines of core counts, dram channels, etc but not really between for example high end desktop/low end server because there would be a gradual mixing of the two markets.
And it turns out the market is still competitive, because Arm and AMD are driving a bus through some of those super-high-margin products that are only artificially differentiated from the lower-end parts by the marketing department, or by some additional engineering time that actually breaks functionality in the product (ECC, locked multipliers, IOMMUs, 64-bit MMIO windows, etc.).
Look at the Apple A12x. They disabled a GPU core in it for the iPad, and then in the A12z they enabled that core. This was likely to help with yields. Then with the M1 chips they decided to sell a 7 core version of the chip with the base level Macbook Air and save the 8 core version for the higher trims.
Even Apple is susceptible to it. But Apple doesn't sell chips, they sell devices, and they can eat the cost for some of these. For example, if a chip has 2 bad cores, instead of selling a 6-core version Apple is probably just scrapping it.
Having no margin of error on these SKUs would be terminally dumb, but having tight error bars isn't necessarily a bad thing.
Being able to sell bad batches of product takes some of the sting out of failure, and past a certain point you're just enabling people to cut corners or ignore fixable problems. If the SKU only tolerates one bad core, and I think I have a process improvement that will reduce double faults but costs money to research and develop, aren't I more likely to get that funding?
All of those devices perform exactly the same, as Apple has chosen the same power/thermal set point for all of them. This is going to look a lot different in coming years when the larger MacBook Pro transitions - I expect 2-3 more models there. Then when the Mac Pro transitions I expect another 2-3 models there.
We'll start to see high-binned next-gen Apple Silicon parts moving to the MacBook Pro, and Mac Pro, and lower-binned parts making their way down-range.
Another commenter (dragontamer) pointed out elsewhere in the thread that Apple might be doing what Sony did for the PS3 (since Sony also made custom chips that had to perform identically in the end product): the strategy Sony took was to actually make better chips than advertised for the PS3, and disable the extra cores. That means that if one of the cores is broken, you can still sell it in a PS3; you were going to disable it anyway. Yields go up since you can handle a broken core, at the cost of some performance for your best-made chips since you disable a core on them.
That could make sense for Apple; the M1 is already ~1 generation ahead of competitors, so axing a bit of performance in favor of higher yields doesn't lose you any customers, but does cut your costs.
Plus, they definitely do some binning already, as mentioned with the 7 vs 8 core GPUs.
Baseless speculation: perhaps they do actually throw away chips? They only really target a premium market segment so perhaps it's not worth it to their brand to try and keep those chips.
Waste is a factor in all production goods. The price of every fish you eat takes into account dealing with bycatch. Your wooden table's price accounts for the offcuts. It's the nature of making (or harvesting, or whatever) things.
In silicon manufacturing, the inefficiency is actually pretty low specifically because of the kind of binning that Intel and AMD do, that GP was complaining about. In a fully vertically integrated system with no desire to sell outside, the waste is realized. In a less integrated system the waste is taken advantage of.
In theory capitalism should broadly encourage the elimination of waste - literally every part of the animal is used, for instance. Even the hooves go into glue, and the bones into jello.
That's not really an Apple tax though, that's a cost of doing business tax. It's not like Intel and AMD and everyone else aren't effectively doing the same exact thing.
Intel and AMD __literally__ sell those broken chips to the open marketplace, recouping at least some of the costs (or possibly getting a profit from them).
Apple probably does the same strategy PS3 did: create a 1-PPE + 8-SPE chip, but sell it as a 1-PPE + 7-SPE chip (assume one breaks). This increases yields, and it means that all 7-SPE + 8-SPE chips can be sold.
6-SPE-chips (and below) are thrown away, which is a small minority. Especially as the process matures and reliability of manufacturing increases over time.
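A quick back-of-the-envelope on why advertising one fewer SPE helps (the per-SPE defect probability here is an assumption for illustration; Sony's real yield numbers aren't public):

```python
from math import comb

def p_at_least_k_good(n, k, p_defect):
    """Probability that at least k of n units are defect-free,
    assuming independent defects (a simplification)."""
    p_good = 1 - p_defect
    return sum(comb(n, i) * p_good**i * p_defect**(n - i) for i in range(k, n + 1))

p = 0.08                                              # assumed per-SPE defect probability
sell_if_advertising_8 = p_at_least_k_good(8, 8, p)    # die must be perfect
sell_if_advertising_7 = p_at_least_k_good(8, 7, p)    # one broken SPE tolerated
print(f"sellable fraction advertising 8 SPEs: {sell_if_advertising_8:.1%}")
print(f"sellable fraction advertising 7 SPEs: {sell_if_advertising_7:.1%}")
```

With an 8% per-SPE defect rate the sellable fraction jumps from roughly half to close to 90%: every chip gives up one working SPE, but far fewer chips get thrown away.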
I can confirm that the 5000-series desktop Ryzen has issues with turbo boost: if you disable turbo and stay on base clock then everything is fine, but with turbo (CPB) enabled you get crashes and BSODs. I had this problem at work on my new workstation with a Ryzen 5900X. We RMAed it and the new CPU works fine. From what I've read it's a pretty common problem, but it's strange that no one talks about it.
I think yes, but if you buy a CPU, you look at the advertised speeds and you expect to get them in your machine. From what I researched, to achieve the advertised clock frequencies you need to increase the voltage to make it more stable. Some people reported silicon degradation after increasing voltages (it worked fine for a week and then the problems returned).
I am very interested in AMD's latest lineup (and bought a 5500U laptop that has performed super well so far), but I'm aware that on the PC front things can be a bit rockier and not always stable, so comments and articles like these help a lot.
Apple sells a 7 core and 8 core version of their M1 chips. Maybe Intel and AMD ship CPUs with even more cores disabled but it's not like Apple doesn't do this at all.
There's no way they throw away that much revenue. Not even Apple is that committed to purity. I'm sure they have a hush-hush deal with another company to shove their chips in no-name microwave ovens or something.
Funny story about microwaves: there are basically only two main manufacturers. They're both in China, and you've never heard of them. But if you look at various brands in the US and take them apart, you'll see the only difference is the interface. The insides are literally the same.
The only exception to this are Panasonic microwaves.
To be fair, is there anything particularly revolutionary that could be done with a microwave (short of "smart" features)? They all function the same: shoot specific frequency energy into the (possibly rotating) chamber. It would make sense that the guts are just a rebadged OEM part.
It's not that much revenue because the marginal cost of an individual chip is very low. Given that apple has plenty of silicon capacity, throwing away say 5-10% of chips that come off the line is likely cheaper than trying to build a new product around them or selling them off to some OEM who needs to see a bunch of proprietary info to use them.
No way; the half-busted chips go into low-cost products like the iPhone SE. It costs little to accumulate and warehouse them until a spot in the roadmap for a budget device arises.
That's not amoral. It's missing a market opportunity, but conflating that with morality is an interesting way of looking at it.
Businesses don't owe you a product (before you pay for it) any more than you owe them loyalty after you pay for something. They will suffer when someone else offers what you want and you leave. That's the point of markets and competition.
Maybe 'amoral' is a bit strong, but I think there is something wrong with an economic system where producers destroy wealth, rather than distribute all that is produced.
If it's wrong for the government to pay farmers to burn crops during a depression, then it's wrong for a monopoly to disable chip capabilities during a chip shortage.
I think you're framing the supply chain in a very personal (strawman) way.
The problem is just one of "efficiency". The production is not perfectly aligned with where people are willing to spend money. A purely efficient market exists only in theory / textbooks / Adam Smith's Treatise.
The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) because they need to recoup or save resources to focus on finishing and shipping the working (full-core) products. They aren't driving their trucks of finished products into the ocean.
> The problem is just one of "efficiency". The production is not perfectly aligned with where people are willing to spend money. A purely efficient market exists only in theory / textbooks / Adam Smith's Treatise.
Destroying wealth is not the appropriate market mechanism to deal with disequilibrium. Producers should either lower the price to meet the market or hold inventory if they anticipate increased future demand. However, the latter may be harder to do in the CPU business because inventory depreciates rapidly.
Intel has hitherto been minimally affected by market pressures because they held an effective monopoly on the CPU market though that is fast changing.
So, there is nothing necessarily "efficient" about what Intel is doing. They're maximising their returns through price discrimination at the expense of allocative efficiency.
> The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) because they need to recoup or save resources to focus on finishing and shipping the working (full-core) products. They aren't driving their trucks of finished products into the ocean.
That may be true in some cases, but not in others. I'm speaking directly to the case where a component is deliberately modified to reduce its capability for the specific purpose of price discrimination.
> Businesses don't owe you a product (before you pay for it) any more than you owe them loyalty after you pay for something.
This is itself a moral claim. You may choose to base your morals on capitalism, but capitalism itself doesn't force that moral choice.
> That's the point of markets and competition.
And the point of landmines is to blow people's legs off, but the existence of landmines does not morally justify blowing people up. Markets are a technology and our moral framework should determine how we employ technologies and not the other way around.
So, if I had changed it to be prefaced with "In today's western society, it is generally accepted that ...", we'd be on a level playing field? That's reasonable.
No, the scenario is that there are massive price differences even for the same class of seats. Traditionally, the major long haul airlines sold seats weeks/months in advance at rates that were basically losing money but made almost all of their per flight profit on last minute bookings at higher rates. These were usually business flights, but not necessarily (not usually, even) business class.
Business models for budget airlines (RyanAir, etc.) are a bit different but that's not relevant here.
Because if they're capable of making plenty of good 4-core parts but have more demand for 2-core ones, and so are cutting down good 4-core dies, they should just make the 4-core parts a little cheaper. But maybe they already do this.
Anyway, agreed that ECC should be standard, but it requires an extra memory die on each module and most people can do fine without it, so it probably won't happen. But an ECC CPU option with clearly marketed consumer full-ECC RAM would be nice. DDR5's on-die ECC is a nice step in this direction but isn't "full" ECC.
I don't know if mobile cores factor into the same process, but if you have a lot of demand for 2-core systems in cheap laptops that can't supply the power or cooling for a 4-core part, then having more 4-core parts doesn't help, even if they're cheaper.
Just to note, AMD does every single thing you blame Intel for.
AMD recently dicked b350/x370 chipset owners by sending motherboard manufacturers a memo telling them not to support Zen 3 (5000 series) Ryzen CPUs on their older chipsets.[1] This was after AsRock sent out a beta BIOS which proved that 5000 series CPUs worked fine on b350 chipsets. Today, AsRock's beta BIOS still isn't on their website and it's nearly a year after they put it out.
Also, Ryzen APU CPUs do not support ECC. Only the PRO branded versions. Which only exist as A) OEM laptop integration chips, or B) OEM desktop chips which can only be found outside North America (think AliExpress, or random sellers on eBay).
It's more accurate to say AsRock supports ECC on Ryzen. And sometimes Asus. They are also incredibly cagey about exactly what level of ECC they support.
Ryzen only supports UDIMMs. Not the cheaper RDIMMs. There are literally 2-3 models of 32GB ECC UDIMMs on the market. One of which is still labeled "prototype" on Micron's website, last I checked. Even if your CPU supports ECC, it takes the entire market to bring it to fruition. If no one is buying ECC (because non ECC will always be cheaper), then the market for those chips and motherboards won't exist. Want IPMI on Ryzen? You're stuck with AsRock Rack or Asus Pro WS X570-ACE. Go check the prices on those. Factor in the UDIMM ECC. It's not cheaper than Xeon.
>AMD recently dicked b350/x370 chipset owners by sending motherboard manufacturers a memo telling them not to support Zen 3 (5000 series) Ryzen CPUs on their older chipsets.[1] This was after AsRock sent out a beta BIOS which proved that 5000 series CPUs worked fine on b350 chipsets. Today, AsRock's beta BIOS still isn't on their website and it's nearly a year after they put it out.
And they stated their reasoning:
> The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard, including: VRM configuration, memory trace topology, and PCB layers
Which is entirely reasonable, and accurate if you look at the quality of the average X370 motherboard compared to 400+.
And no, AMD does not do everything I described. Which Ryzen model doesn't have SMT? I see it on the 3, the 5, the 7, and the 9. Which model doesn't have turbo boost? I see it on the 3, the 5, the 7, and the 9.
As for ECC: I don't believe I said they're perfect, but it's a heck of a lot better than what Intel has to offer...
> The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard, including: VRM configuration, memory trace topology, and PCB layers
So AMD told you that? And yet you don't call that market segmentation? Come on now. Lose the double standard already. AsRock (and I think Asus or Gigabyte?) has proven the b350/x370 chipset works fine with 5000 series CPUs. People have tested it and are using it just fine. VRMs are up to the motherboard. Why are you letting AMD dictate what motherboard manufacturers want to support here?
> look at the quality of the average X370 motherboard compared to 400+
Uh, what? The x370 is at a higher tier than b450. There are many b450 boards that are straight garbage (and let's be honest, garbage MBs stretch across all chipsets). The difference between a b350 and b450 is vanishingly tiny.
I'm baffled that people really think 300/400/500 series matter. You can run Zen 1 on b550/x570 despite AMD not wanting you to. You can't claim VRM/memory trace/PCB there. The only real limitation that I can tell is physical BIOS RAM capacity.
> Which Ryzen model doesn't have SMT?
The Ryzen 3, of course. Not that I meant AMD has taken literally every step Intel has. But what the hell do you think the "X" series of Ryzen chips are? Or Threadripper and EPYC? It's all market segmentation. The Ryzen 5 is just the 7 with cores disabled. Why are you picking certain features as "segmentation" over others? It makes no sense.
> As for ECC: I don't believe I said they're perfect, but it's a heck of a lot better than what Intel has to offer...
How? Just so you know I spent literally months researching everything I've stated in this thread just so I could put together a Ryzen system with ECC. With Xeon I could have been done in a day.
Gigabyte allows ECC RAM to operate, but forces it into non-ECC mode, so it just works as normal RAM. Good luck figuring out what MSI is doing. Asus, who the hell really knows - their website spec sheet lists "ECC supported" and the manual for each specific motherboard says something entirely different.
They took away my right to choose for myself, for my own good! Like abortion, or an Apple Genius telling me I should buy a new device because replacing the battery will cost the same, or Tesla charging $15K for a broken battery cooling pipe - is that what you are saying?
>VRM configuration
New CPUs have the same TDP.
>memory trace topology
Worked fine with previous CPUs at speed X.
>and PCB layers
see above
>> The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard
AMD Zen CPUs are full SoCs nowadays. What they call a "chipset" is just a PCIe-connected northbridge. Everything important is integrated inside the CPU: PCIe, RAM, USB 3.0, SATA, HD Audio, even RTC/SPI/I2C/SMBus and LPC are on die. You can make a perfectly functional system with just an AMD CPU alone.
How about AMD Smart Access Memory totally requiring a 500-series chipset, despite being just a fancy marketing name for standard PCI Express Resizable BAR support? It shipped disabled for two prior generations before being announced as a 5000-series exclusive. Oh, and with enough uproar even that crumbles a little bit https://www.extremetech.com/computing/320548-amd-will-suppor... but it's still tied to the "chipset" while being implemented entirely inside the CPU.
Or that time x470 was going to support PCIe 4, but then it was made x570 exclusive, despite the fact the "chipset" doesn't even touch the lanes between the CPU and the slots.
Oh, but the BIOS size limit, we can't support all the CPUs on the same motherboard (like they did in the Socket A days)... in a 16MB BIOS chip? Please.
The worst part is that adding ECC support should only increase the price of RAM by about 13%, which, given that the RAM modules are about $50-$100 on most builds, works out to $7-$13 added to the total cost of the machine. Every machine should come with ECC. It's such cheap insurance. But because the chip manufacturers have to make more money by artificially segmenting the market, almost nobody runs ECC on home machines.
It is 13% of one of the cheaper components. Back in the 80s, when all memory was expensive, there was something of an excuse, but today we are needlessly accepting the possibility of silent corruption over the multi-year lifetime of the machine to save the price of a couple of coffees. And worse, we make it really expensive and difficult for the people who do want to reduce their risk, by artificially segmenting the market.
Back in the 80s the need for ECC was much less because the gates were physically bigger and there was much less overall memory. Back then the chance of your computer having a bit flip was something like one in a million per year; now, with gigabytes of memory, it's close to a 100% chance per year.
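For a rough sense of how memory size changes the odds (the error rate below is an explicit placeholder; published DRAM error-rate estimates span orders of magnitude, so treat this as a sketch of the scaling rather than a measurement):

```python
from math import exp

def p_at_least_one_flip(gib, fit_per_mbit, hours=8766):
    """Probability of at least one bit error over `hours`, modeling errors as a
    Poisson process at `fit_per_mbit` failures per billion device-hours per Mbit.
    The FIT value is an assumed placeholder, not a measured figure."""
    mbit = gib * 1024 * 8                      # GiB -> megabits
    expected_errors = fit_per_mbit * mbit * hours / 1e9
    return 1 - exp(-expected_errors)

for gib in (0.004, 8, 64):                     # a 4 MB 1980s box vs. modern machines
    print(f"{gib:7.3f} GiB: {p_at_least_one_flip(gib, fit_per_mbit=1):.3%} chance per year")
```

Whatever the true per-bit rate is, multiplying it by a thousand times more memory is what turns "practically never" into "expect it this year".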
For RDIMMs, it's fair that they don't "implement" support in the memory controller, because they don't sell chips made from the same silicon that need to support RDIMMs.
Intel's "disabling" of ECC is a different situation. They implement ECC in the silicon, enable it for Xeon, and disable it for Core i.
>OEM desktop chips which can only be found outside North America (think AliExpress, or random sellers on eBay).
Lenovo offered Pro Series Ryzen APU small form factor PCs. Like the Lenovo ThinkCentre M715q with a 2400GE. I believe HP offered them as well with the 2400GE at some point.
By desktop I meant non-integrated/embedded: a standalone CPU you could buy and plop into any standard ATX/mATX/ITX motherboard.
But even if you have a Pro embedded, it doesn't mean you get ECC. My Lenovo ThinkPad has a PRO 4750U. But they solder on one non-ECC DIMM. So it's rather pointless. Plus, it's SODIMM. So that's yet another factor at play when choosing RAM.
The only real exception that I know of is the recent 5000G APUs may support ECC. But this seems to be borderline rumor/speculation at this point. Level1Techs made the claim on YouTube and were supposed to have a follow up. Not sure if that ever happened.
Yeah, I've switched to AMD Ryzen 5000 for my dedicated servers. They're faster and cheaper than Xeon, and they support ECC, which was the only reason I needed Xeon previously.
Fun-fact: Intel's 12th gen desktop CPUs will no longer have AVX-512. Well, I mean, the cores do have it, but it's disabled in all SKUs. So to do any AVX-512 development and testing at all you will need an Intel Xeon machine in the future.
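For anyone who wants to check what a given box actually exposes, here's a quick Linux-only sketch that reads the kernel's reported CPU flags (avx512f is the AVX-512 foundation subset; on other OSes you'd need a different mechanism):

```python
# Read the CPU feature flags the Linux kernel reports for the first CPU entry.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

print("AVX-512F exposed:", "avx512f" in flags)
```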
Market segmentation both raises and lowers prices. I don't think it is inherently bad. The low cost of entry level chips is only viable because of the high cost of premium chips. It is also critical in getting more viable chips out of your wafers, as defective parts of the silicon can be disabled and the chip placed in a lower SKU.
If you eliminate the market segmentation practices, then the price of the small number of remaining SKUs will regress to the mean. This may save wealthy buyers money as they get more features for less cash, but poor buyers get left out completely as they can no longer afford anything.
I do agree that Intel takes this to an absurd degree and should rein it in to a level more comparable to AMD. With ECC being mandatory in DDR5, I would expect all Intel chips to support it within a few years.
I agree in principle, but it's pretty obvious that this would be bad for their profit margins and as a consequence wouldn't happen.
After all, making your consumers buy the more expensive versions of your product just because they need one of its features is a sound business decision.
Otherwise people would just use the cheaper, lower-end versions when those have the features they need - like I'm currently using 200GEs for my homelab servers, because I don't require any functionality that the low-power 2018 chip doesn't provide.
> they are losing because their fabs are way behind TSMC.
I don't believe it is merely an execution problem.
AMD has out-innovated Intel. Evidence: the pivot to multi-core, massively increased PCIe, better fabric, the chiplet design, and design efficiency per wafer, among others.
Why did this happen?
> Two years after Keller's restoration in AMD's R&D section, CEO Rory Read stepped down and the SVP/GM moved up. With a doctorate in electronic engineering from MIT and having conducted research into SOI (silicon-on-insulator) MOSFETS, Lisa Su [1] had the academic background and the industrial experience needed to return AMD to its glory days. But nothing happens overnight in the world of large scale processors -- chip designs take several years, at best, before they are ready for market. AMD would have to ride the storm until such plans could come to fruition.
>While AMD continued to struggle, Intel went from strength to strength. The Core architecture and fabrication process nodes had matured nicely, and at the end of 2016, they posted a revenue of almost $60 billion. For a number of years, Intel had been following a 'tick-tock' approach to processor development: a 'tick' would be a new architecture, whereas a 'tock' would be a process refinement, typically in the form of a smaller node.
>However, not all was well behind the scenes, despite the huge profits and near-total market dominance. In 2012, Intel expected to be releasing CPUs on a cutting-edge 10nm node within 3 years. That particular tock never happened -- indeed, the clock never really ticked, either. Their first 14nm CPU, using the Broadwell architecture, appeared in 2015 and the node and fundamental design remained in place for half a decade.
>The engineers at the foundries repeatedly hit yield issues with 10nm, forcing Intel to refine the older process and architecture each year. Clock speeds and power consumption climbed ever higher, but no new designs were forthcoming; an echo, perhaps, of their Netburst days. PC customers were left with frustrating choices: choose something from the powerful Core line, but pay a hefty price, or choose the weaker and cheaper FX/A-series.
>But AMD had been quietly building a winning set of cards and played their hand in February 2016, at the annual E3 event. Using the eagerly awaited Doom reboot as the announcement platform, the completely new Zen architecture was revealed to the public. Very little was said about the fresh design besides phrases such as 'simultaneous multithreading', 'high bandwidth cache,' and 'energy efficient finFET design.' More details were given during Computex 2016, including a target of a 40% improvement over the Excavator architecture.
....
>Zen took the best from all previous designs and melded them into a structure that focused on keeping the pipelines as busy as possible; and to do this, required significant improvements to the pipeline and cache systems. The new design dropped the sharing of L1/L2 caches, as used in Bulldozer, and each core was now fully independent, with more pipelines, better branch prediction, and greater cache bandwidth.
...
>In the space of six months, AMD showed that they were effectively targeting every x86 desktop market possible, with a single, one-size-fits-all design. A year later, the architecture was updated to Zen+, which consisted of tweaks in the cache system and switching from GlobalFoundries' venerable 14LPP process -- a node that was under license from Samsung -- to an updated, denser 12LP system. The CPU dies remained the same size, but the new fabrication method allowed the processors to run at higher clock speeds.
>Another 12 months after that, in the summer of 2019, AMD launched Zen 2. This time the changes were more significant and the term chiplet became all the rage. Rather than following a monolithic construction, where every part of the CPU is in the same piece of silicon (which Zen and Zen+ do), the engineers separated the Core Complexes from the interconnect system. The former were built by TSMC, using their N7 process, becoming full dies in their own right -- hence the name, Core Complex Die (CCD). The input/output structure was made by GlobalFoundries, with desktop Ryzen models using a 12LP chip, and Threadripper & EPYC sporting larger 14 nm versions.
...
>It's worth taking stock of what AMD achieved with Zen. In the space of 8 years, the architecture went from a blank sheet of paper to a comprehensive portfolio of products, containing $99 4-core, 8-thread budget offerings through to $4,000+ 64-core, 128-thread server CPUs.
The secondary features (PCIe, ECC) and tertiary features (chiplets) wouldn't have mattered if Intel had delivered 10nm in 2015.
It's a harsh truth, but nodes completely dominate the value equation. It's nearly impossible to punch up even a single node -- just look at consumer GPUs, where NVidia, the king of hustle, pulled out all the stops, all the power budget, packed all the extra features, and leaned harder than ever on all their incumbent advantage, and still they can barely punch up a single node. Note that even as they shopped around in the consumer space, NVidia still opted to pay the TSMC piper for their server offerings. The node makes the king.
Exactly. It seemed like a sound business decision because it gave them measurably more money in their pocket over a short period of time. They don't appear to have taken into account that they left the door open for competition. It wasn't just prices that left them vulnerable, but it sure didn't help.
AMD should never have been able to get back in the game.
I agree but this is a game you can play with your customers when they actually want what you’re selling and you have market power. When you’re losing ground and customers are leaving the shop, it’s time to cut the bullshit and give people what they want.
>I agree in principle, but it's pretty obvious that this would be bad for their profit margins and as a consequence wouldn't happen.
The only reason it hasn't happened is because they had no legitimate competition until recently. In a healthy market they would have been forced to do so long ago. Capitalism and "market forces" only work where competition exists.
Yes please. ECC support by now should come by default, both in CPU support and in motherboards, RAM chips etc.
At least AMD Ryzen supports it, but the fact that one has to spend so much time researching products, specs, forums and internet chats to figure out a CPU, motherboard & RAM combination that works is cumbersome, to say the least.
The "reason" is yield management combined with inventory management.
The i3 through i9 are generally the exact same silicon. But yields are always variable. On raw yield alone, the actual i9s per wafer might be only 10%-20%, which would not be economically viable.
So designed into EVERY Intel product (and generally every other semiconductor company's products) are "fuses" and circuitry that can re-map and re-program out failed elements of the product die.
So a failed i9 can AND DOES become an i7, i5, or i3. There is no native i3 processor. The i3 is merely an i9 that has 6 failed cores, or 6 "canceled" cores (for inventory/market supply management). Same goes for the i5 and i7. They are "semi-failed" i9s!
This is how the industry works. Memories work in similar ways for Flash or DRAM: there is a top-end product which is designed with either spare rows or columns as well as half-array and 3/4-array map-out fuses. Further there is speed binning with a premium on EMPIRICALLY faster parts (you can NOT predict or control all to be fast - it's a Bell curve distribution like most EVERYTHING ELSE in the universe)
With this, nominal total yields can be in the 90% range. Without it, pretty much NO processor or memory chip would be economically viable. The segmentation is as much created to support this reality OF PHYSICS and ENGINEERING as it is to maximize profits.
So generally, to use your example, a non-ECC processor is a regular processor "whose" ECC logic has failed and is inoperable. Similar for different cache-size versions - part of the cache memory array has failed on the smaller-cache parts.
So rather than trash the entire die which earns $0 (and actually costs money to trash), it has some fuses blown, gets packaged and becomes a non-ECC processor which for the right customer is 100% OK so that it earns something less than the ECC version but at an acceptable discount.
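A schematic of that fuse-and-bin flow (hypothetical SKU names and thresholds, just to make the logic concrete; real binning rules are proprietary and far more detailed):

```python
def bin_die(good_cores, ecc_ok, max_stable_ghz):
    """Map a tested die to a sellable bin. All names and thresholds here are
    invented for illustration only."""
    if good_cores >= 8 and ecc_ok and max_stable_ghz >= 3.6:
        return "workstation SKU (all cores, ECC enabled)"
    if good_cores >= 8 and max_stable_ghz >= 3.6:
        return "top desktop SKU (8 cores, ECC fused off or failed)"
    if good_cores >= 6:
        return "mid-range SKU (fused down to 6 cores)"
    if good_cores >= 4:
        return "entry SKU (fused down to 4 cores)"
    return "scrap"

# A die with two dead cores and broken ECC still earns revenue:
print(bin_die(good_cores=6, ecc_ok=False, max_stable_ghz=3.4))
```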
When I worked at Intel, we had Commercial, Industrial and Military environmental grades, plus extra ones for "emergencies": e.g. parts that completed 80% of military qual and then failed - hence the "Express" class part.
We also had 10 ns speed bins, which created 5-7 bins, and then the failed half- and quarter-array parts meant 3 more variants. So 4 x 7 x 3 = 84 possible products just for the memory parts I worked on.
For processors you could easily have separate categories for core failures, for ECC failures, for FPU/CPU failures. That takes you up to 100-200 easily. If you are simultaneously selling 2-3 technology generations (tick-tock or tick-tick-tock), that gets you to 500-1000 easily.
This is about "portfolio effect" to maximize profits while still living with the harsh realities that the laws of physics impose upon semiconductor manufacturing. You don't rely on a single version and you don't toss out imperfect parts.
BTW how do you think IPA and sour beers came about?? Because of market research? Or because someone had a whole lot of Epic Fail beer brew that they needed to get rid of??
It was the latter originally, plus inspired marketing. And then people realized they could intentionally sell schlock made with looser process controls and make even more money!
> So generally, to use your example, a non-ECC processor is a regular processor "whose" ECC logic has failed and is inoperable.
But no high performance mainstream desktop Intel CPU supports ECC [0]. Meanwhile AMD doesn't have any that lack it.
What gives? Surely Intel's ECC logic doesn't have such a huge defect ratio that Intel can't have even a single regular mainstream part with ECC.
At work I need a fairly low-performance CPU with decent integrated graphics. Intel's iGPUs would be great were it not for the lack of any parts with ECC. Never mind that finding a non-server Intel motherboard with ECC support would restrict the choice such that there'd likely be none with the other desired features.
IPA came about because hops are a natural preservative and they needed to ship the beer all the way to India from England.
Sour beer is just air-fermented beer, à la sourdough bread. It is actually harder to make sour beer than "normal" beer (and it does not come out of the failure of normal beer fermentation either).
ECC support is an actual +10-20% cost in materials for the motherboard and DIMM manufacturers. Also, ECC errors are basically non-existent on desktop/laptop workloads. ECC is worth the extra cost in servers, but for desktops and laptops, the market got it right.
According to who? I checked the edac module for a year on my work machine, and it never detected a single error. I know I'm just one anecdote, but I doubt I'm that lucky.
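For anyone wanting to repeat that check on Linux: the EDAC subsystem exposes per-memory-controller error counters under sysfs (assuming an EDAC driver for your platform is loaded; on non-ECC consumer hardware these files typically don't exist or never increment). A minimal read-out:

```python
from pathlib import Path

# Corrected (ce_count) and uncorrected (ue_count) error totals per memory controller.
controllers = sorted(Path("/sys/devices/system/edac/mc").glob("mc*"))
if not controllers:
    print("no EDAC memory controllers found (driver not loaded, or no ECC)")
for mc in controllers:
    ce = (mc / "ce_count").read_text().strip()
    ue = (mc / "ue_count").read_text().strip()
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```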
In theory Intel could use profits from Xeons to subsidize consumer chips, but I doubt they actually are. In practice you only see that happen in highly competitive commodity markets where the profit margin on consumer grade models is razor thin (e.g. SSDs). Intel's profit margin on their consumer chips is not particularly small, and AMD wasn't a significant competitive threat until a year or two ago.
Except that if the cheaper chips have ECC, they probably couldn't go up much in price — that price is limited by how much people (who don't care about ECC anyway) are willing to pay. So if prices for the low end went up, people (like you) would instead go without (meaning Intel doesn't get your money), or try to get second hand (Intel doesn't get your money), or go with AMD (Intel doesn't get your money). But Intel would really like to have your money, or at least generally more money.
Intel would like to make the same profit per wafer as before. Any savings you get as someone who wants ECC would just get added back, weighted by fraction of volume, onto the chips in my price class. No thanks.
The more expensive chips subsidize the cheaper ones. If they put ECC in low-end models, they would have to charge more for them, because fewer people would buy the high-end models.
Also, there's some cross-contamination between price point and market segment here. Nobody just buys a CPU; they buy a CPU wrapped in a laptop. So Intel's real customers are laptop manufacturers, not you, and the low-end chips have to appeal to a model that the laptop vendors want to introduce. That takes the form of thin & light laptops (or low-energy-usage "green" desktops for office workers).
Adding ECC support adds heat and cost and die size. All things the thin & light market do not want under any circumstances.
Let’s say it costs 5 billion to design a car (it goes as high as 6 billion) and another 2-3 billion to create all the molds and custom tooling and change over a factory. If you sell 10 million cars, that overhead costs $800 per car. If you sell only 1 million, that’s $8,000 per car. Some sports cars sell even fewer units than that. This is the biggest reason prices are higher.
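The same amortization arithmetic, as a trivial sketch (using the comment's own round numbers):

```python
def overhead_per_unit(design_cost, tooling_cost, units_sold):
    """Spread one-time development costs over the number of units sold."""
    return (design_cost + tooling_cost) / units_sold

print(overhead_per_unit(5e9, 3e9, 10_000_000))  # -> 800.0 dollars per car
print(overhead_per_unit(5e9, 3e9, 1_000_000))   # -> 8000.0 dollars per car
```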
It is a bit much that ECC is only available on Xeons, as ECC is incredibly cheap in terms of circuitry. Glad to see AMD is including it on mid-range products.
And similarly with memory speed segmentation in the Xeon line. I'm kicking the tires on an Ice Lake 8352V, and I was disappointed (but not at all surprised) to learn that it runs its 3200 memory at 2933.
So these days 10Gb PCIe and 10GbE are essentially the same thing at the low-level silicon/pins/wires level: the bit packing/unpacking/signalling has a whole lot in common, and they're all sort of converging on some superset of hardware SerDes. The higher-level hardware is still different (Ethernet MACs vs PCIe, etc.), of course.