That makes sense when tools are as dumb as static notes and TI-84s.
But in the (hypothetical) limit where AI tools outperform all humans, what does this updated test look like? Are we even testing the humans at that point?
> if Apple is providing raw eye tracking streams to app developers
Apple is not doing that. As the article describes, the issue is that your avatar (during a FaceTime call, for example) accurately reproduces your eye movements.
Isn't it a distinction without a difference? Apple isn't providing your real eye movements, but a 1:1 reproduction of what it tracks as your eye movements.
The exploit requires analysing the avatar's eyes, but since these are replicated rather than natural movements, there should be a lot less noise. And of course, since you need to intentionally focus on specific UI targets, these movements are even less natural and fuzzy than if you were looking at your keyboard while typing.
The difference is that you can't generalize the attack outside of using Personas, a feature which is specifically supposed to share your gaze with others. Apps on the device still have no access to what you're looking at, and even this attack can only make an educated guess.
This is a great example of why ‘user-space-y’ applications from the OS manufacturer shouldn’t be privileged beyond other applications: it bypasses the security layer while lulling devs into a false sense of security.
> ‘user-space-y’ applications from the OS manufacturer shouldn’t be privileged beyond other applications
I don't think that's an accurate description, either. The SharePlay "Persona" avatar is a system service just like the front-facing camera stream. Any app can opt into using either of them.
The technology to reproduce eye movements has been around since motion pictures were invented. I'm sure even a flat video stream of the user's face would leak similar information.
Apple should have been more careful about allowing any eye motion information (including simple video) to flow out of a system where eye movements themselves are used for data input.
"technology to reproduce eye movements has been around since motion pictures were invented"
Sure, but like everything, it's when it becomes widespread that the impact changes. The technology was around, but now it could be on everyone's face, tracking everything you look at.
If this were added to TVs, so that every TV was tracking your eye movements and reporting them back to advertisers, there would be an outcry.
So this is just slowly nudging us in that direction.
To be clear, the issue this article is talking about is essentially "during a video call the other party can see your eyes moving."
I agree that we should be vigilant when big corps are adding more and more sensors into our lives, but Apple is absolutely not reporting tracked eye-movement data to advertisers, nor do they allow third-party apps to do that.
The problem is the edge case where it's used for two different things with different demands at the same time, and the fix is to...not do that.
> Apple fixed the flaw in a Vision Pro software update at the end of July, which stops the sharing of a Persona if someone is using the virtual keyboard.
" lot about someone from their eyes. They can indicate how tired you are, the type of mood you’re in, and potentially provide clues about health problems. But your eyes could also leak more secretive information: your passwords, PINs, and messages you type."
Do you want that shared with advertisers? With your health care provider?
The article isn't about the technology, it is about sharing the data.
It both doesn't work, and even if it did, it would be a net negative. Masks protect others from your spit; they don't protect you. If you are not ill and generating airborne spit, the mask is useless.
> They're machines which must be used [...] not magic evil talismans
I feel like there's a straw man in there. No one is worried about guns sitting around literally unused, and I don't think anyone cares too much about the used/unused ratio. Obviously the thing people are worried about is how they are used when they are used.
> map, and_then, etc. I think this would be called being "monadic".
Strictly speaking, I think providing "map" just makes it functorial. Monadic would need a flatmap. (In addition to the other functor and monad requirements, of course.)
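A minimal Rust sketch of the distinction, using the standard library's Option (map and and_then are the real std methods; the example itself is just illustrative):

    fn main() {
        // map is the functorial operation: apply a plain function inside
        // the container, leaving the shape (Some/None) intact.
        let doubled: Option<i32> = Some(2).map(|n| n * 2); // Some(4)

        // and_then is the monadic operation (flatMap): the closure itself
        // returns an Option, and the result is flattened so you don't end
        // up with Option<Option<i32>>.
        let parsed: Option<i32> = Some("42").and_then(|s| s.parse().ok()); // Some(42)

        // With map instead of and_then, the nesting would remain:
        let nested: Option<Option<i32>> = Some("42").map(|s| s.parse().ok());

        println!("{:?} {:?} {:?}", doubled, parsed, nested);
    }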
So what would you call something like "it implements some common operations", like these?
For example, the Option and Result types both have functions like "map"; they do the same thing, just on different types. They're not quite generic in that sense, but on a high level they seem so.
Another example is reactive libraries. Everything is built around some common operations like map, take, and so on.
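In Rust you could capture that shared shape with a trait; here's a hypothetical sketch (Mappable and map_it are made-up names, and because Rust lacks higher-kinded types, each implementation has to spell out its own output container):

    // Hypothetical trait naming the common `map` shape shared by
    // Option, Result, reactive streams, and so on.
    trait Mappable<A, B> {
        type Output;
        fn map_it(self, f: impl FnOnce(A) -> B) -> Self::Output;
    }

    impl<A, B> Mappable<A, B> for Option<A> {
        type Output = Option<B>;
        fn map_it(self, f: impl FnOnce(A) -> B) -> Option<B> {
            self.map(f) // delegate to the existing map
        }
    }

    impl<A, B, E> Mappable<A, B> for Result<A, E> {
        type Output = Result<B, E>;
        fn map_it(self, f: impl FnOnce(A) -> B) -> Result<B, E> {
            self.map(f) // same operation, different container
        }
    }

    fn main() {
        let a = Some(1).map_it(|n| n + 1);                // Some(2)
        let b: Result<i32, ()> = Ok(1).map_it(|n| n + 1); // Ok(2)
        println!("{:?} {:?}", a, b);
    }

In functional-programming vocabulary that shared interface is exactly the Functor idea; languages with higher-kinded types (Haskell, Scala) can express it as a single typeclass instead of one impl per container.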
> Arguing you can't do something because someone will be offended is also not very helpful: you can almost always find some offensive interpretation of anything
You mentioned the sorites paradox earlier. Do you think it could be applied here as well?
> Are you saying this with the personal experience of being from a country that now speaks the language of its colonizer?
And to be clear, the US is excluded from this. Our cultural memory of our colonial history is an outlier—for most Americans our sense of our relationship with Britain is more that of friendly rivals than colonizer-colonized. The difference is largely because most of us are descended from the colonists (or people who arrived much later), not from the people that were there first, so the abuses that our ancestors suffered barely even register on the scale of colonial abuse.
That contrasts sharply with how the Irish or most Africans feel towards their former colonial powers. It's hard to feel positively towards a flag that represents a power that repeatedly committed genocide against your people.