froogle's comments

> Here is the actual die photograph and confirmation that this is being done on TSMC 7nm.

Yikes, Intel. Has to be a pretty low moment as a chipmaker to have to use your competitor's fabs for something like this.


Intel has a prototype 1.8nm node which Nvidia has already tested and spoken well of.

But yes, the 10nm node has been a disaster for Intel and set the company back 5-10 years in the chip-making business.


If not for Intel's 10nm debacle, Apple probably wouldn't have left. All of Apple's early-2010s hardware designs were predicated on Intel's three-years-out promises of thermal and power efficiency for 10nm, promises that just never materialized.


I hard disagree. The chassis and cooler designs of the old Intel-based Macs sandbagged the performance a great deal. They were already building a narrative for their investors and consumers that a jump to in-house chip design was necessary. You can see this sandbagging in the old-Intel-chassis Apple Silicon MBP, where performance is markedly worse than in the newer chassis.


That doesn’t make sense: everyone else got hit by Intel’s failure to deliver, too. Even if you assume Apple had some 4-D chess plan where making their own products worse was needed to justify a huge gamble, it’s not like Dell or HP were in on it. Slapping a monster heat sink and fan on can help with performance but then you’re paying with weight, battery life, and purchase price.

I think a more parsimonious explanation is the accepted one: Intel was floundering for ages, Apple’s phone CPUs were booming, and a company which had suffered a lot due to supplier issues in the PowerPC era decided that they couldn’t afford to let another company have that much control over their product line. It wasn’t just things like the CPUs falling further behind, but also the various chipset restrictions and inability to customize things. Apple puts a ton of hardware in to support things like security or various popular tasks (image & video processing, ML, etc.) and now that’s an internal conversation, and the net result is cheaper, cooler, and a unique selling point for them.


> net result is cheaper, cooler, and a unique selling point for them

That and they are not paying for Intel's profit margins either. Apple is the quintessential example of vertical integration: they own their entire stack.


I was thinking of that as cheaper but there’s also a strategic aspect: Apple is comfortable making challenging long-term plans, and if one of those required them to run the Mac division at low profitability for a couple of years they’d do it far more readily than even a core supplier like Intel.


Apple doesn't manufacture their own chips or assemble their own devices. They are certainly paying the profit margins of TSMC, Foxconn, and many other suppliers.


That seems a bit pedantic, practically every HN reader will know that Apple doesn't literally mine every chunk of silicon and aluminum out of the ground themselves, so by default they, or the end customer, are paying the profit margins of thousands of intermediary companies.


I doubt it was intentional, but you're very right that the old laptops had terrible thermal design.

Under load, my M1 laptop can pull similar wattage to my old Intel MacBook Pro while staying virtually silent. Meanwhile the old Intel MacBook Pro sounds like a jet engine.


The M1/M2 chips are generally stupidly efficient compared to Intel chips (or even AMD/ARM/etc.)... Are you sure the power draw is comparable? Apple is quite well known for kneecapping hardware with terrible thermal solutions and I don't think there are any breakthroughs in the modern chassis.

I couldn't find good data on the older MacBook Pros, but the M1 Max MBP used 1/3 the power of an 11th-gen Intel laptop to get almost identical scores in Cinebench R23.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...


> Apple is quite well known for kneecapping hardware with terrible thermal solutions

But that was my entire point (root thread comment.)

It's not that Apple was taking existing Intel CPUs and designing bad thermal solutions around them. It's that Apple was designing hardware first, three years in advance of production; showing that hardware design and its thermal envelope to Intel; and then asking Intel to align their own mobile CPU roadmap, to produce mobile chips for Apple that would work well within said thermal envelope.

And then Intel was coming back 2.5 years later, at hardware integration time, with... basically their desktop chips but with more sleep states. No efficiency cores, no lower base-clocks, no power-draw-lowering IP cores (e.g. acceleration of video-codecs), no anything that we today would expect "a good mobile CPU" to be based around. Not even in the Atom.

Apple already knew exactly what they wanted in a mobile CPU — they built them themselves, for their phones. They likely tried to tell Intel at various points exactly what features of their iPhone SoCs they wanted Intel to "borrow" into the mobile chips they were making. But Intel just couldn't do it — at least, not at the time. (It took Intel until 2022 to put out a CPU with E-cores.)


The whole premise of this thread is that this reputation isn't fully justified, and that's one I agree with.

Intel for the last 10 years has been saying “if your CPU isn't at 100°C then there's performance on the table”.

They also drastically underplayed TDP compared to, say, AMD, by quoting an average TDP with frequency scaling taken into consideration.

I can easily see Intel marketing to Apple that their CPUs would be fine with 10W of cooling while knowing that they won't perform as well, and Apple thinking that there would be a generational improvement in thermal efficiency.


>Under load, my M1 laptop can pull similar wattage to my old Intel MacBook Pro while staying virtually silent. Meanwhile the old Intel MacBook Pro sounds like a jet engine.

On a 15/16" Intel MBP, the CPU alone can draw up to 100w. No Apple Silicon except an M Ultra can draw that much power.

There is no chance your M1 laptop can draw even close to it. M1 maxes out at around 10w. M1 Max maxes out at around 40w.


Where do you get the info about power draw?

Intel doesn't publish anything except TDP.

Being generous and saying TDP is actually the consumption: most Intel Macs actually shipped with "configurable power down" specced chips ranging from 23W (like the i5-5257U) to 47W (like the i7-4870HQ). (Note: newer chips like the i9-9980HK actually have a lower TDP, at 45W.)

Of course TDP isn't actually a measure of power consumption, but the M2 Max has a TDP of 79W, which is considerably more than the "high end" Intel CPUs, at least in terms of what Intel markets.


Check here: https://www.anandtech.com/show/17024/apple-m1-max-performanc...

Keep in mind that Intel might ship a 23W chip, but laptop makers can choose to boost it to whatever they want. For example, a 23W Intel chip is often boosted to 35W+ because laptop makers want to win benchmarks. In addition, Intel's TDP is quite useless because they added PL1 and PL2 boosts.


Apple always shipped their chips with "configurable power down" when it was available, which isn't the case on higher-specced chips like the i7/i9, though they didn't disable boost clocks as far as I know.

The major pain for Apple was when the thermal situation was so bad that CPUs were performing below base clock. At that point i7s were outperforming i9s, because the i9s were underclocking themselves due to thermal exhaustion, which feels too weird to be true.


That's not Apple. That's Intel. Intel's 14nm chips were so hot and bad that they had to be underclocked. Every laptop maker has had to underclock Intel laptop chips, even today. The chips can only maintain peak performance for seconds.


Can you elaborate?

My 2019 MacBook Pro 15 with the i9-9880H can maintain the stock 2.3GHz clock on all cores indefinitely, even with the iGPU active.


My 2019 MBP literally burned my fingertips if I used it while doing software development in the summer.


Back in the Dell/2019MBP era every day was summer for me.


> You can see this sandbagging in the old intel chassis Apple Silicon MBP where their performance is markedly worse than the ones in the newer chassis.

And you can compare both of those and Intel's newer chips to Apple's ARM offerings.


If I were a laptop manufacturer who wanted to make money selling laptops, I would not intentionally make my laptops worse.


I dunno, they could have gone to AMD, who is on TSMC and has lots of design wins in other proprietary machines where the manufacturer has a lot of say in tweaking the chip (i.e., game consoles).

I think Apple really wanted to unify the Mac and iOS platforms and it would have happened regardless.


If Intel hadn't had those problems, it would still be ahead of TSMC, and the M1 might well have been behind the equivalent Intel product in terms of performance and efficiency. It is hard to justify a switch to your in-house architecture under those conditions.


>If not for Intel's 10nm debacle, Apple probably wouldn't have left

I doubt it. Apple loves owning the full vertical stack as much as possible.


Apple left for TSMC, not to do in-house chip fabrication.


Apple had been doing their own mobile processors for a decade. It was a matter of time before they vertically integrated the desktop. They definitely did not leave Intel over the process tech.


Apple has been investing directly in mobile processors since they bought a stake in ARM for the Newton. Then later they heavily invested in PortalPlayer, the designer of the iPod SoCs.

Their strategy for desktop and mobile processors has been different since the 90s and they only consolidated because it made sense to ditch their partners in the desktop space.


> Apple has been investing directly in mobile processors since they bought a stake in ARM for the Newton. Then later they heavily invested in PortalPlayer, the designer of the iPod SoCs.

Not this heavily. They bought an entire CPU design and implementation team (PA Semi).


I mean, they purchased 47% of ARM in the 90s. That was while the mobile space was still being defined in the first place, and when it was much more of a gamble than it is now. Heavy first-line investment to create mobile niches has empirically been their strategy for decades.


Apple invested in them for a chip for the Newton, not for the ARM architecture in particular. Apple was creating their own PowerPC architecture around this time, and they sold their share of ARM when they gave up on the Newton.

The PA Semi purchase and redirection of their team from PowerPC to ARM was completely different and obviously signaled they were all in on ARM, like their earlier ARM/Newton stuff did not.


Apple didn't; the Mac business unit did. And it makes sense to consolidate on the investment in chips for phones and tablets.


Apple famously doesn’t do business units. It’s all in on functional organization.


Let's not forget Intel had issues with Atom SoC chips dying like crazy due to power-on hours in things like Cisco routers, NASes and other devices around this era too. I think that had a big ripple effect on them around 2018 or so.

Yes 10nm+++++ was a big problem, too.

Apple was also busy going independent, and I think their path is to merge iOS with macOS someday, so it makes sense to dump x86 in favor of more control and higher profit margins.


Right. It made no sense for Apple to have complete control over most of their devices, with custom innovations moving between them, and still remain dependent on Intel for one class of devices.

Intel downsides for Apple:

1. No reliable control of schedule, specs, CPU, GPU, DPU core counts, high/low power core ratios, energy envelopes.

2. No ability to embed special Apple-designed blocks (Secure Enclave, video processing, whatever, ...)

3. Intel still hasn't moved to on-chip RAM, shared across all core-types. (As far as I know?)

4. The need to negotiate Intel chip supplies, complicated by Intel's plans for other partners' needs.

5. An inability to differentiate the Mac's basic computing capabilities from those of every other PC that continues to use Intel.

6. Intel requiring Apple to support a second instruction architecture, and a more complex stack of software development tools.

Apple solved 1000 problems when they ditched Intel.


> 3. Intel still hasn't moved to on-chip RAM, shared across all core-types. (As far as I know?)

Apple doesn't have on-chip RAM either. They do the exact same thing PC manufacturers do: use standard off-the-shelf DDR.


Ah yes. The CPU and RAM are mounted tightly together in a system-on-a-chip (SoC) package, so that all RAM is shared by the CPU, GPU, DPU/Neural Engine and processor cache.

I can't seem to find any Intel chips that are packaged with unified RAM like that.


Yep. Got hit twice by that. Enough power-on hours on those Atoms and some clock buffer dies, and now your device no longer boots.


Apple switching to ARM also cost some time. It took like 2 years before you could run Docker on M1. Lots of people delayed their purchases until their apps could run.


TSMC processes are easier to use and there's a whole IP ecosystem around them that doesn't exist for Intel's in-house processes. I can easily imagine a research project preferring to use TSMC.


Many Intel products that aren't strategically tied to their process nodes use TSMC for this reason. You can buy a lot of stuff off-the-shelf and integration is a lot quicker.


Pretty much everything that isn't a logic CPU is a second-class citizen at Intel fabs. It explains why a lot of oddball stuff like this is done at TSMC. Example: Silicon Photonics itself is being done in Albuquerque, which was a dead/dying site.


Intel has used TSMC for maybe 15 years for their non-PC processors, guys.

If you search Google with the time filter between 2001 and 2010 you'll find news on it.

There must be some reason HN users don't know very much about semiconductors. Probably principally a software audience? Probably the highest confidence:commentary_quality ratio on this site.


I'd assume people who do know about semiconductors are more used to NDAs and less apt to publicly pontificate.


Nvidia and AMD have already signed contracts to use Intel's fab service (angstrom-class), so it's not unfathomable to consider. AMD did the same when they dropped GlobalFoundries, and it could be argued that this was the main reason for their jump ahead vs. Intel.

They're all using ASML lithography machines anyway, so who's feeding the wafers into the machine is kind of inconsequential.


" amd have already signed contracts to use Intel's fab service"

Source requested; I follow this news closely and have not seen anything indicating that AMD is using Intel's fabs.

Nvidia praised their test chip and that indicates they might use Intel's fab. They have not definitively announced that either (https://www.tomshardware.com/news/nvidia-ceo-intel-test-chip...)

Amazon was using Intel Foundry for packaging, not fabrication. And Qualcomm was considering 20A (https://www.pcgamer.com/intel-announces-first-foundry-custom...) in 2021, but there was a rumor earlier this year that Qualcomm might not use it after all (https://www.notebookcheck.net/Qualcomm-reportedly-ditches-In...)


> who's feeding the wafers into the machine is kind of inconsequential

If that were true, Intel wouldn't be years behind.


That’s a colossal overstatement. I suggest you check out Asianometry on YouTube.

TSMC is in a unique position in the market and its integration with ASML is one of the chapters in this novel.


Prediction markets are a fantastic concept with, I think, a ton of value for everyone. So I wish companies in the field success, and I hope they thrive despite regulators doing their best to destroy them, as they did with PredictIt.

I think it's a shame Kalshi has managed to get their hooks into the CFTC and, rather than lobby to open the field to all prediction markets, instead lobbied for narrow permissions to operate only their stuff (in a limited, less useful way) while killing their competitors.

It's a terrible way to operate a business and it's a black stain on YC for funding a company using these kinds of underhanded methods.

I can only hope the regulatory environment improves and we get some improvements at the CFTC, but I think anyone familiar with US government institutions would bet against it. Perhaps someone could open a question about it on a prediction market.


Running a bucket shop is not innovation. Running a back-alley card game is not some brilliant new financialization of anything. Hustling financially illiterate or gambling-addicted suckers for their last quarters is really not about to herald a new era of anything interesting at all.

You would have to be a modern American to have so little clue about the types of frauds and scams that continue to be perpetrated to this very day that gave rise to the financial regulations we have now. There are always ways to improve regulation, but it's almost pathological how Americans refuse ever to acknowledge that there could be a huge body of law and volumes of history that might just explain the way things are. RTFM


Prediction markets are a way to cut through the fog and find truth. This isn't the equivalent of playing blackjack or roulette. In today's world, it's increasingly difficult to find people in the media who are telling the truth. Pundits everywhere can proffer predictions without being on the hook for anything if they're wrong.

Prediction markets let you more accurately anticipate pretty much anything: war, economic calamity, political outcomes like who will win an election, whether a drug will be approved, etc., and they are a genuine innovation over the current media landscape. Check out Metaculus for an example of the concept with imaginary points; it'd be more accurate if real money were allowed.

Gambling and addiction is a major problem, certainly, and I'm not arguing for zero regulation on gambling. (I could quibble on prediction markets being gambling, though they are closer to it than not.)

Regulators are so useless that they are perfectly willing to allow crypto, Las Vegas casinos and sports betting to exist and be advertised broadly (things with essentially 0 value to society), but an _actually_ useful form of dealing with probability and chance that can help people chart an uncertain world isn't allowed to exist.

Well, except in limited cases if you spend enough money greasing the right pockets and can take down a competitor too. :)


We've been committed to battling through regulations to open and expand the market for close to 4 years now.

As I posted above, here's more (insider) information about what happened with PredictIt: https://www.capitolaccountdc.com/p/gambling-on-politics-an-i...


What value do they provide other than speculation/gambling/betting?


China counts Covid deaths differently than most countries. Recent claims about them having only 3 deaths despite a surge in cases are, as far as I can tell, overlooking the fundamental point that China has far more stringent requirements for calling something a "Covid death". The true number of Chinese Covid deaths is likely higher, though I imagine still lower than in places like America.

Furthermore, the lockdowns have caused quite a few deaths on their own through things like delaying access to healthcare, and we don't have reliable stats here. One doctor is estimating 1000 diabetes patients will die from lack of healthcare access during the lockdowns (again, we do have to apply some skepticism to the numbers). Lockdowns aren't costless; they're not merely keeping people locked in place for a short period.

Source: https://www.nytimes.com/2022/04/20/world/asia/covid-shanghai... (non-paywalled: https://archive.ph/GIDBw).

I won't disagree that there's a tradeoff you can consider for China's approach, but we all need to be very wary when trying to compare numbers between Western nations and China.


You have to be careful: given a rallying cry of free speech and no censorship, the platform is mainly going to attract those holding unpopular opinions.

This happened with the mass exodus from Reddit to Voat of r/the_donald and similar, and I absolutely can see it happening here. Voat became host to a ton of toxic communities.

And because of that, it's going to push more orthodox people away from the site towards Twitter in a self-reinforcing loop.

The plan for the site to get a broad range of opinions (in this case, the bounty of $20k for a prominent liberal) seems doomed as a result.

Still - hopefully the founders have learned from Voat and have plans in action to stop it before it gets too stuck in the cycle. The bounty is an interesting idea, though I don't think it'll be strong enough to break the perverse social dynamics involved. More competition with Twitter is a good thing.


The terms of service PDF is really clear on what is and what is not allowed.

Particularly interesting were the links to the US Entity list of defined terrorist organizations, downloadable as a text file (and it's large).

Since it's not pseudo-anonymous like Voat, and is also pretty much being endorsed by politicians, I think it won't fall into the same traps. I think it's going to become a de facto pipeline to politicians, though.

The Terms of Service sound like the vision of a social media network after a Section 230 crackdown.


> My theory is that the FP folks would have seen more success had they figured out ways to bring their features to mainstream languages, rather than asking people to adopt wholesale their weird languages (from an average programmer's point of view).

I'd argue this has been happening for years and years now. If you want a pithy saying, you could say that over time languages become closer and closer to Haskell.

Option types, pattern matching, pure functions (see: Vue/React with computed properties, or any other frameworks doing things close to functional reactive programming), type inference (even Java is getting a var keyword!), and more powerful type systems (the kind the TypeScripts of the world have) are now becoming trendy, but Haskell had them years and years ago.

The above is definitely oversimplifying (a short HN comment is no place to get into the fundamentals behind the various kinds of type systems), but I've found all the things I love in Haskell and other FP languages seem to slowly drift on over to other languages, albeit often a little suckier.

(I'm still waiting on software transactional memory to become mainstream, though. Of course, in languages that allow mutable state and without some way to specify a function has side effects, you're never gonna get quite as nice as Haskell. Oh well.)
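To make that drift concrete, here's a tiny sketch of a few of those features in TypeScript rather than Haskell (the Option type and getOrElse helper are made-up names, purely for illustration):

    // A hand-rolled Option type as a discriminated union.
    type Option<T> =
      | { kind: "some"; value: T }
      | { kind: "none" };

    // "Pattern matching" via a switch on the discriminant, with an
    // exhaustiveness check courtesy of the never type.
    function getOrElse<T>(opt: Option<T>, fallback: T): T {
      switch (opt.kind) {
        case "some":
          return opt.value;
        case "none":
          return fallback;
        default: {
          const unreachable: never = opt; // compile error if a case were missed
          return unreachable;
        }
      }
    }

    // Type inference: no annotation needed; port is inferred as number.
    const port = getOrElse({ kind: "some", value: 8080 }, 3000);

It's clunkier than the Haskell equivalents, which is kind of the point: the features arrive, just a little suckier.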


I'd bet a lot of this has resulted from improvements in computation power. It used to be you couldn't afford to abstract your hardware away in a lot of real-world cases. Whereas now we can afford the programmer benefits of immutability, laziness, dynamic types, and first-class functions. Static things like var can probably be attributed to the same advancements happening on developers' workstations.


This is a good point which I hadn't thought of, but I think there's a lot more opportunity to bring concepts from FP to mainstream languages. For example, why don't imperative languages have an immutable keyword to make a class immutable, given how error-prone it is otherwise? https://kartick-log.blogspot.com/2017/03/languages-should-le...
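For what it's worth, here's roughly how close you can get in TypeScript today without such a keyword; the Point class is just a made-up example, and you still have to remember readonly on every field, which is exactly the error-prone part a single immutable keyword would remove:

    // Approximating an "immutable class" with per-field readonly plus Object.freeze.
    class Point {
      constructor(
        public readonly x: number,
        public readonly y: number,
      ) {
        Object.freeze(this); // also blocks mutation at runtime, not just at compile time
      }

      // "Mutation" returns a new instance instead of changing this one.
      translate(dx: number, dy: number): Point {
        return new Point(this.x + dx, this.y + dy);
      }
    }

    const p = new Point(1, 2);
    // p.x = 5;                 // compile error: x is a read-only property
    const q = p.translate(3, 4); // a new Point(4, 6); p is untouched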


> over time languages become closer and closer to Haskell

That's a statement worthy of highlighting, supported by examples below it. I'll remember that as I continue to study more languages.


> Vue [...] works with Typescript too.

It """works""" with it. React's story is much better on this front. I set up Typescript with Vue at work and:

1. Setting it up and getting it to compile was hell. The documentation was incredibly poor (often out of date on several options) and I often had to try several things just to get it to compile.

2. It was slow with Webpack, far, far too slow to start up (what was once a 10-second startup was easily taking 45+ seconds). Running tslint on top of that was a non-starter. I looked into various ways to speed up Webpack, wasting hours trying to parallelize things with various plugins, and found almost everything would just refuse to work with .vue files.

I got this fixed thanks to fork-ts-checker-webpack-plugin, which didn't actually have .vue support at the time. I ended up having to use a fork by the amazing David Graham [0], which has thankfully been merged now. (Rough sketch of the config below, after point 3.)

3. Vue's "support" for TypeScript isn't there yet. It's maybe 80% there. Vuex isn't typed - which, at least for my company's SPA, kills half the benefits of type safety and requires a lot more type annotations than is pleasant. (Yes, I'm aware you can get Vuex working with some serious hacks... but this isn't documented, and I found out it was possible from GitHub comments. It did not look easy enough for me to do without an hour of pain, so I have yet to try it.)

Also, you can't strongly type props in both directions using the standard template language. In fact, I'm not sure you can strongly type the things you give to a component, only the things you take in... if you use a third party library [1], which modifies an official new way (ES6 class syntax) to declare Vue components.

You could use JSX - which Vue supports - but again, you're going off the beaten path and aren't really writing standard Vue components. Swapping to JSX was not possible at my company - requiring developers to learn an entirely new templating language when they already know Vue components is a no. It's a wonder we're okay with ES6 class syntax.

All this is to say, standard Vue components can't be fully typed. Not only do you have to use a new way to declare them, you have to use a third party library to get proper typing for props and more. And it's still not enough unless you swap to JSX too. (I'm not even sure if JSX would end up working for full type safety, either - Vue might have some stupid thing which screws it all up.)
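Two rough sketches to make points 2 and 3 concrete. First, the class-plus-decorator style of component I'm describing, using vue-property-decorator [1]; Greeting and name are made-up examples, and the point is that the prop type only protects this component's internals, not whoever renders it from a template:

    import { Component, Prop, Vue } from "vue-property-decorator";

    @Component
    export default class Greeting extends Vue {
      // Typed, required prop: misuse inside this class is caught by the compiler.
      @Prop({ required: true }) readonly name!: string;

      get message(): string {
        return `Hello, ${this.name}!`;
      }
    }

    // But a parent template like <greeting :name="42" /> still compiles happily,
    // because the standard template language isn't type checked.

Second, the shape the Webpack fix from point 2 took, written from memory; I'm assuming ts-loader plus a plugin version with the merged .vue support, so option names may differ between versions:

    // webpack.config.ts (sketch)
    import ForkTsCheckerWebpackPlugin from "fork-ts-checker-webpack-plugin";

    export default {
      module: {
        rules: [
          {
            test: /\.ts$/,
            loader: "ts-loader",
            options: {
              transpileOnly: true,          // hand type checking off to the plugin's worker process
              appendTsSuffixTo: [/\.vue$/], // let ts-loader handle <script lang="ts"> blocks
            },
          },
        ],
      },
      plugins: [
        new ForkTsCheckerWebpackPlugin({ vue: true }), // the .vue support that used to require the fork
      ],
    };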

---

Compare this to React, where because of JSX you get strong typing on literally everything without special hacks. They're not the same here.

Vue is moving very nicely in the direction of more and more TypeScript support - I'm a huge fan of that. I'm glad we swapped over at work. But right now, claiming Vue supports TypeScript is misleading. It's a second class citizen. You're not getting the full benefits. This seems to be changing, but glacially.

(Am I allowed to say that something which might take less than a year is glacial? I guess this is just how web dev is.)

[0] https://github.com/Realytics/fork-ts-checker-webpack-plugin/...

[1] https://github.com/kaorun343/vue-property-decorator


I think the idea is you keep the charger plugged in all the time (say, beside your bed) and when you go to sleep you just throw the phone on the charger. Seems kinda convenient to me.


Oh, I get it, it's just that the feature is often talked about like it's the second coming of wifi. It takes me about a second to plug in my iPhone.


Once upon a time WiFi sounded dumb. It isn't that difficult to run Ethernet cable, and the performance is better, with much less latency. The same goes for things like AirPlay audio and video.

Putting them everywhere (kitchen counter, home and office desks, car, bedside table) would really lengthen the lives of phone batteries. If it really took only a second for people to do the equivalent of plugging in their phone, they'd almost always leave them charging.


Ikea make furniture with built-in wireless charging. It's not remotely essential, but it's a nice convenience feature.


7% is basically the Lizardman's Constant [1]; I wouldn't read this as Americans being stupid.

[1] http://slatestarcodex.com/2013/04/12/noisy-poll-results-and-...


That's a very insightful article, and I look forward to using "the Lizardman's Constant" in conversation.


The title is somewhat misleading, I think.

> In total, the visual recognition API classified 1,428 profiles as masculine, 84 as feminine and 1,964 as unclassified.

> The top 100 Stack Overflow users consist of 77 masculine profiles, 21 unclassified and 1 feminine profile.

They only identified a single account belonging to a female, but there are 21 unclassified accounts in the top 100 that could also belong to females.

I would certainly expect women to be more reluctant than men to note their gender on their online profiles.


While this used to be true (and mostly still is), the tide is shifting on this point. Games are using more and more cores. Overwatch, for example, uses 6. I believe DirectX 12 makes it easy/reasonable to use 4 cores, with some benefit to be gained with up to 6.

Not that it really helps with some games. Dota 2 is CPU bound for the moment, so if you play that a lot then maximizing single-threaded performance is probably the way to go.

Still, if I was buying a CPU today, I'd be cautious about going for less than quad core if I was interested in new AAA games.


For sure, at least quad core is a must; I just meant that having 8 or however many slower cores instead isn't (yet) better.



