
Those benchmarks are impressive. iPhone SE beating the 2019 MBP in single core performance? That's fantastic!

I wish I could comment on what he was talking about with the iPhone X vs. "old iPhone" interaction paradigm. I personally thought the "single button" hardware interface was an incredible innovation that changed everything. Even Android relies on it: my Galaxy S9+ has a "software home button" (like most Androids do) in place of the iPhone's. I'll have to find a friend with an iPhone X or later to try the new swipe-up gestures and see if they are indeed better.



Fun fact: the iPhone's buttons have not been mechanical buttons since the iPhone 7. The click you feel when you press one is just the Taptic Engine.

If you have an iPhone 7 or 8, try turning it off and then pressing the button.


Same with MBP trackpads, which many people don't even realise, since the feedback works even when the laptop is sleeping. It's a really weird experience when you actually turn it off and... nothing; you can't actually "press" it.


When I had the first-generation 12" MacBook, people wouldn't even believe that it was not a mechanical click when I told them. I often had to turn the machine off to convince them that it was fake. Which I would happily do to see the look on their faces ;).


I remember pressing the home button on a powered off iPhone 7 a few times. It feels like attempting to take the last step in a flight of stairs when you miscounted by one.


You can also adjust the "clickiness" of the button from your phone settings, which is amazing.


Took me a while to realize this also applies to the Magic Trackpad and the integrated trackpad on MacBooks, but it is nice to be able to customize it. The trackpads even have two clicks, the second being a "deep" click that requires even more pressure than a regular click.


It's pretty easy to notice with the magic trackpad—it loses its ability to be "clicked" when it loses power/when you turn it off.

(The trackpad on the laptop does as well, but I feel like you're just less likely to try clicking the trackpad on a macbook/MBP if the computer's not on. Whereas with the external one, you'll notice it when you go to pick it up/move it around and it doesn't accidentally click in your hand.)


The trackpads actually have an “infinite” number of clicks. Try it on the seeking buttons in QuickTime or IINA.


Another nice feature, which unfortunately not a lot of applications use, is haptic feedback. E.g. when you drag an object in OmniGraffle, you feel very subtle feedback in the trackpad when two objects align.
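For what it's worth, on macOS this is exposed to apps through AppKit's NSHapticFeedbackManager. A minimal sketch of how an app might fire that alignment tick; the drag-handler function name here is hypothetical, only the AppKit call itself is real:

    import AppKit

    // Hypothetical handler called by an app's own drag logic once a dragged
    // object snaps to a guide; the perform() call pulses the trackpad.
    func draggedObjectDidAlign() {
        NSHapticFeedbackManager.defaultPerformer
            .perform(.alignment, performanceTime: .drawCompleted)
    }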


Almost 20 years ago I had a cool Logitech mouse that did this. The mouse had a haptic (Synaptics, I think?) engine in it, and when you moved the cursor over buttons, menus, or other UI elements, it would give a slight “tick” as you crossed each boundary. It was very cool. There was a lot of customization available for it as well.

I had to ditch that mouse when I switched to Linux. I think it didn’t have any standard HID-type driver or something.


Yeah, when I got my 2015 MBP that trackpad was mind-blowing. I remember turning it on and off and comparing the feeling of trying to click it. It's an incredible innovation. Small, but amazing from an engineering/UX perspective.


Just the home button; the rest are mechanical even on the newest iPhones.


The X adds Taptic feedback to the mechanical click of the volume buttons.


It’s such a weird feeling when the “button” is dead.


I knew this was true, as you can't click it when the phone is powered off, but I still don't get it: how does it work, and why?


How: haptic feedback using a weighted linear actuator.

Why: since it debuted with the iPhone 7, the first officially water-resistant iPhone, I suspect it was a waterproofing move to get rid of a possible ingress point.
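Incidentally, the same actuator is exposed to third-party apps through UIKit's feedback generators, which gives a rough feel for what the fake button click is made of. A small sketch; this doesn't reproduce the system's home-button behaviour, it just pulses the Taptic Engine on demand:

    import UIKit

    // Pulse the Taptic Engine once, roughly like a button click.
    let generator = UIImpactFeedbackGenerator(style: .medium)
    generator.prepare()        // warms up the engine to cut latency
    generator.impactOccurred() // fires a single haptic tap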


> I suspect...it was a waterproofing move

I've always assumed it was partially about reliability and partially about saving space, since water-resistant physical buttons are fairly standard components.

The phone needs to have some form of haptic motor anyway, and switching to a virtual button allows them to eliminate a complex moving part - removing one potential point of failure - while freeing up a few cubic millimeters of internal volume.

Anecdotally, the home button always seemed to be one of the more fragile parts of an iPhone, to the degree that many people started to use the on-screen AssistiveTouch button to avoid wear and tear.


On the older iPhones, the home buttons developed problems after a couple of years. My old iPhone 4 had this: it became unreliable at registering presses. I think that's also a reason it makes sense for it not to be an actual hardware button. Fewer moving parts make it more reliable and more likely to still work after many years.


Good haptics.


You shouldn't need to turn it off; it feels like a knockoff of a real button, like a fake designer bag. I don't know what was wrong with a regular button that actually presses.


The physical home button broke a lot.


I suspect water resistance played a part in moving in this direction.


Something has got to be screwy. The iPhone has a TDP of 6W and a clock speed of 2.5GHz. The MBP has boost clocks of 5GHz and a 45W TDP.


Each A13 Lightning core has around 2.5x the transistors of a Skylake core. The Skylake design is also 5 years old and on a node at least a generation out of date. Geekbench is also a short enough test that thermals aren’t a huge concern.


Yes, but AMD is on a much newer node and still only achieves a slight improvement in IPC, nowhere near the >2x these benchmarks imply.


Zen 2 is much more economical in its core area. Lightning cores likely have around 2x the transistors.

Also, and I'm not sure how big of a deal this is, but Zen 2 is the 2nd generation of a practically new arch. Lightning has a lineage of at least 6 generations.


Where are you getting the per-core transistor counts from?


It’s a very rough estimate. Basically (core area + L2 area) / (total die area) × (total transistors on die).

For A13 with unified L2, I used L2 per core.

Of course, transistor density isn’t consistent across the chip, but this should be good enough for the relative comparison I’ve done.
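Spelled out as code, the estimate is just an area ratio times the die's transistor count. A small sketch; the inputs would come from die-shot measurements and published figures, none of which are hard-coded here:

    // Very rough per-core transistor estimate, as described above:
    // (core area + per-core L2 area) / (total die area) * (total transistors).
    func estimatedTransistorsPerCore(coreAreaMM2: Double,
                                     l2PerCoreMM2: Double,
                                     dieAreaMM2: Double,
                                     dieTransistors: Double) -> Double {
        ((coreAreaMM2 + l2PerCoreMM2) / dieAreaMM2) * dieTransistors
    }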


Something to consider: Intel's TDP is specified at base clock; boost is not included (i.e. they skew the facts, and the CPUs run way more toasty than the TDP suggests). Increasing the frequency requires more voltage, and dynamic power is proportional to frequency times the square of the voltage.
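For concreteness, the usual first-order model is P ≈ C·f·V², so a frequency bump that also needs a voltage bump hits power twice. A toy sketch with generic scaling factors, not measurements of any particular CPU:

    // First-order dynamic power: P ~ C * f * V^2 (C = switched capacitance).
    func dynamicPower(capacitance c: Double, frequency f: Double, voltage v: Double) -> Double {
        c * f * v * v
    }

    // Generic example: 1.5x the clock at 1.2x the voltage costs
    // 1.5 * 1.2^2 = 2.16x the dynamic power on the same silicon.
    let scaling = dynamicPower(capacitance: 1, frequency: 1.5, voltage: 1.2) /
                  dynamicPower(capacitance: 1, frequency: 1.0, voltage: 1.0)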

Other than that, x86 cores (Skylake) are generally more powerful per clock than any ARM core.

I wouldn't trust Geekbench one bit between different platforms. For example:

The LZMA workload compresses and decompresses a 2399 KB HTML ebook using the LZMA compression algorithm with a dictionary size of 2048 KB. The workload uses the LZMA SDK for the implementation of the core LZMA algorithm. The test is effectively about L2/L3 cache sizes: if the working set fits, great result; if not, a horrid one.

Most of the test descriptions lack any specific data, just which library they use.


If those numbers were in any way real world relevant, the server market would be flooded with ARM servers. 80% power savings for the same computing power would revolutionize data centers in terms of power and cooling needed.


There's much more to server hardware than CPU power and efficiency, and it's a difficult segment to enter because of the risk involved in making large investments in unproven hardware. But despite that, things still seem to be moving in that direction, and ARM is starting to get a serious foothold in the server market now. For example, consider Amazon's new Graviton2 CPUs [1], which according to AnandTech 'puts all x86 alternatives to shame' when it comes to the price per unit of compute time it provides. I think it's entirely likely that the server market will be flooded with ARM servers in the not-too-distant future.

[1] https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...


Synthetic benchmarks are extremely unreliable.


I imagine even more so than usual when the two things being tested have different operating systems and CPU architectures. Although these benchmark results would still have to be wrong by a lot to stop being impressive.


I think Geekbench favors bursty performance, because it gives very close scores to the new MacBook Air with a Y-series Intel chip (7 W TDP) and the newest Dell XPS 13 with a U-series chip with a 25 W TDP.


For me, having used the swipe-up gestures for years now, I still prefer the home button. It is simple and it works.

The people saying the occasional error in swiping up and back to the home screen is not annoying are the exact same type of people who prefer the MacBook to have a larger trackpad, where the occasional false touch doesn't matter.

It did to me. I more or less demand zero false positives from a trackpad, which was possible on the old 2015 MacBook Pro and not any more since the larger trackpad arrived in 2016.

For the iPhone, at least I can understand the trade-off: the home button takes up a lot of potential screen area, and while I'd prefer to have one, I don't want to trade the screen area for it.

On the MacBook, though, I don't understand the need for the slightly larger trackpad. I feel it was the wrong trade-off.


Modern stock (Google-flavored?) Android has also taken up buttonless gestures with the last couple of generations of Pixel devices. Depending on your Android version and your phone, you can toggle it on and off in settings by searching for "gesture".


I wonder how comparable Geekbench actually is across different CPU architectures and operating systems though?

Linus Torvalds is quoted here: https://www.realworldtech.com/forum/?threadid=185109&curpost... as saying that Geekbench 4.0 shouldn't be treated as a valid comparison between desktops/laptops and phones. (However, this review used Geekbench 5.0; I'm not sure what the differences are.)


SPECint benchmarks conducted by AnandTech show the A13 going toe-to-toe with the 9900K [0], although x86 still has a decent lead in floating point.

[0] https://www.anandtech.com/show/14892/the-apple-iphone-11-pro... (scroll down to the 2nd last plot, look at the right half)


Do mobile chips focus less on floating point for some reason? Could that explain the seemingly massive difference in perf/watt?


Floating point tends to matter for classic 'number crunching' workloads: crypto, media encoding, anything to do with modeling the physical world, image processing, AI.

Phones tend towards 'control flow' work, e.g. display an interface, accept user input, communicate with a server, display information... and media decoding and GPS both have dedicated hardware.

FP workloads can be incredibly power hungry, so it's fortunate mobile devices mostly manage to avoid them. But any chip with an FP unit is going to shut it off when running integer code.


Crypto stuff never uses floating point. It might use built-in instructions like AES-NI, though; in its tests Geekbench considers those integer workloads[0], definitely not floating point. Media encoding tends not to use floating point, either. (Decoders tend to be built into GPUs as dedicated hardware, so testing on the CPU is quite pointless.)

[0]: https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf


Crypto doesn't do floating point. Media transcoding does SIMD, but usually not floating point.


Fair enough w.r.t. media. However, SIMD, e.g. MMX, is done in the floating-point unit (https://softpixel.com/~cwright/programming/simd/mmx.php).

I thought bitcoin mining was floating point, and that was why GPUs were so adept.


MMX is obsolete.

SSE/AVX has its own set of registers. It can do floating point and integer operations. It would not be correct to refer to it as a floating point unit.

GPUs are good at mining because they can do so many calculations in parallel. They're just as capable of integer operations as the SIMD units on a CPU.
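As a tiny illustration that SIMD is not inherently floating point, here's integer SIMD using Swift's standard-library vector types (chosen purely for convenience rather than raw SSE/AVX intrinsics):

    // Four 32-bit integer lanes added in one vector operation; the same idea
    // as an SSE/AVX packed-integer add, with no floating-point involved.
    let a = SIMD4<Int32>(1, 2, 3, 4)
    let b = SIMD4<Int32>(10, 20, 30, 40)
    let sum = a &+ b   // (11, 22, 33, 44); &+ is the wrapping element-wise add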


MMX was integer SIMD that re-used the FPU register file, but was not doing floating-point operations.


Most things on iOS tend to be dynamically linked these days.


Yeah, "statically linked" is a bizarre thing to say about iOS. I believe the only statically linked code on iOS is dyld itself. It's not possible to make a statically linked app on iOS, and certainly all the number crunching routines (and compression etc.) are dynamically linked. Maybe geekbench doesn't link them and does it in-process? But you would see the same effect on PCs then.


The real question is why we need that performance in a phone. I'm honestly asking, because with every flagship phone (no matter the brand) there is hype about performance (X's processor is 2x quicker than Y's), and it always makes me wonder. I don't think that in the last 10 years I've ever thought "I wish my phone's CPU had a higher clock speed"...


First, the whole phone needs to be well architected - the CPU, GPU, Flash, wireless and cell radios can each be bottlenecks for good performance.

As the screen size and refresh rate increase, the needs of the CPU/GPU and Flash will increase.

Mobile phones tend to be more interactive and thus suffer more from slow/jerky scrolling, slow response to taps (including app launch), etc.

Microcontrollers tend to have options from slowing down the clock rate to going into a deep sleep mode when there is no scheduled work. There has been a lot of work to optimize the scheduling of network/driver/app tasks to minimize power draw. It is also fairly common to see a CPU have lower power cores, typically for the OS to leverage for hardware support and background tasks separate from what might be considered the 'main' CPUs for this reason.

Likewise, Apple has stopped shooting for a 50-100% increase in performance year over year, instead going for a mix of performance and power efficiency improvements. They have also started to do significantly more custom silicon design to support their hardware features, such as the Secure Enclave and neural engine additions used to support Face ID.

10 years is a long time though. Ten years ago would be what, the iPhone 3GS? Apple went several years doubling the CPU, GPU, _and_ Flash performance, and it was extremely noticeable.


> I don't think that in the last 10 years I've ever thought "I wish my phone's CPU had a higher clock speed"...

That's a thing with the computer industry: everything is getting faster all the time, but everything keeps feeling slower.

For me the pinnacle of UI was Mac OS 9; it was just so snappy and responsive. Its (non-preemptive) multitasking was a pain to wait for, but the UI itself was tactile. Apple only approached this again on Mac OS X with (I believe) 10.7 Lion, when they overhauled stuff under the hood (after upgrading to that version it felt like I had bought a new machine). After that it's been downhill again.


The faster the processor, the quicker it can idle and use less power - and the smaller the battery requirement.


I believe Apple uses the iPhone as a test bed for its CPUs that they will eventually put into their Mac line. The Mac line doesn't make enough money to justify putting that much engineering into making a new CPU for them.


I'd be interested in seeing how the numbers compare if there were a port of Cinebench to the iPhone.



