I imagine there will be some developers who try to have analytics fire when certain events happen, correlated with eye-hover data. Hopefully Apple will catch these tricks before they can go live. The potential value of knowing where someone's eyes are (whether they saw an advertisement, for example) could be very high, so I envision a cat-and-mouse game.
Apple addressed that in the keynote. There’ll be an invisible canvas over the window content, and the app never learns where you look (nor does Apple; gaze data is not stored or processed further). Only when you click (i.e. tap your thumb and index finger together) does the app get an event.
(The downside is that you lose hover capability, but that strikes me as the correct trade-off.)
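The model described above can be sketched in a few lines. This is a toy Python illustration (all names hypothetical, not any real visionOS API): gaze position stays private to a "system" object, and the app's callback only ever receives a single position when the user pinches.

```python
from typing import Callable, Tuple

class System:
    """Toy model of the privacy boundary described above: the system
    tracks gaze continuously, but the app only receives a position at
    the moment of a pinch (the 'click')."""

    def __init__(self, on_click: Callable[[Tuple[float, float]], None]):
        self._gaze = (0.0, 0.0)   # private to the system process
        self._on_click = on_click  # the app's event handler

    def update_gaze(self, x: float, y: float) -> None:
        # Continuous tracking stays inside the system;
        # nothing is forwarded to the app here.
        self._gaze = (x, y)

    def pinch(self) -> None:
        # Only now does the app learn a single position.
        self._on_click(self._gaze)

seen = []
system = System(on_click=seen.append)
system.update_gaze(0.2, 0.8)   # app learns nothing
system.update_gaze(0.5, 0.5)   # app learns nothing
system.pinch()                 # app receives exactly one point
print(seen)                    # → [(0.5, 0.5)]
```

The point of the design is that everything before `pinch()` is invisible to the app, so dwell-time analytics on un-clicked content have nothing to hook into.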
Sure, that works for certain contexts. But what about a game, where your vision is used to aim a weapon? The app has to know where you are looking in order to know where to shoot. If it's an online game, then the information cannot be isolated on the device since you're shooting at other players.
Now imagine the game has advertisements, or loot boxes, or something else the game company wants to know whether you've looked at. How does Apple prevent them from learning that, if they're able to know what they need to know for the game to function as a regular FPS?
Shooting is typically an action you take, not just a matter of the direction you're pointing in, so this fits the model well: the game gets motion information to swing the camera, and gets the exact position when you press a button, but doesn't get to know exactly what you're looking at while you're not pressing the button.
The question will be how easy it is to reconstruct that information from the last exact position plus the movement info...
This doesn't hold true for FPS games. You absolutely need to tell the game where you are aiming so it can draw crosshairs, even in the on-rails shooters favored by VR.
I expect FPS games will still require a controller or keyboard/mouse rather than relying on eye tracking and gesture input. For one thing, I don’t know how else you would move and perform actions at the same time.
There aren't usually any on-screen crosshairs in VR games. You just use the sights on the weapon itself, which is tracked via controllers. I expect trackable controllers to be available for Vision too. It's also possible to use hand tracking, but that's not optimal.
Most probably you are right. But I could imagine that crosshairs are no longer needed with highly accurate eye tracking; you would just intuitively know where you are aiming anyway.