It will capture the depth map and generate the semantic mattes (except in some edge cases) regardless of the subject if you explicitly set the camera to Portrait mode, which is how I would guess the plant photo from the article was captured.
My previous comment was about the default Photo mode.
If you have a recent iPhone (iPhone 15 or above iirc) try it yourself - taking a photo of a regular object in the standard Photo mode won’t yield a depth map, but one of a person or pet will. Any photo taken in Portrait mode will yield a depth map.
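If you want to verify this off the phone, here’s a minimal sketch (Swift with ImageIO; the file path is a placeholder, and it assumes you exported the unmodified original so the auxiliary data isn’t stripped) that reports whether a photo carries an auxiliary depth map and a portrait effects matte:

    import Foundation
    import ImageIO

    // Report whether an image file carries an auxiliary depth/disparity map
    // and a portrait effects matte. Point it at an unmodified original
    // exported from Photos; sharing paths that re-encode the image may strip
    // this data.
    func inspectAuxiliaryData(at url: URL) {
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
            print("Could not open \(url.lastPathComponent)")
            return
        }

        // Portrait-style captures usually embed disparity; some embed depth.
        let depthTypes: [CFString] = [
            kCGImageAuxiliaryDataTypeDisparity,
            kCGImageAuxiliaryDataTypeDepth,
        ]
        let hasDepth = depthTypes.contains { type in
            CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type) != nil
        }

        let hasMatte = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
            source, 0, kCGImageAuxiliaryDataTypePortraitEffectsMatte) != nil

        print("depth map: \(hasDepth ? "yes" : "no"), portrait matte: \(hasMatte ? "yes" : "no")")
    }

    // Hypothetical path for illustration.
    inspectAuxiliaryData(at: URL(fileURLWithPath: "/path/to/IMG_0001.HEIC"))

If the behavior above holds, a Photo-mode shot of a random object should report “no” on both, while a person/pet shot or anything taken in Portrait mode should report a depth map.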
You can find out more about this feature by googling “iPhone auto portrait mode”.
Apple’s documentation is less helpful with the terminology; they call it “Apply the portrait effect to photos taken in Photo mode”: https://support.apple.com/guide/iphone/edit-portrait-mode-ph...
Probably a pretty light classifier on the NPU. Doesn’t even have to care about what particular object it is, just whether it matches training data for “capture depth map”.
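Nobody outside Apple knows what that pipeline actually runs, but the shape of the gate can be sketched from the app-developer side using public Vision requests as a stand-in for whatever lightweight model the camera uses (shouldCaptureDepth and the choice of requests are my own illustration, not Apple’s implementation):

    import CoreGraphics
    import Vision

    // Illustrative only: a cheap "person or pet in frame?" check that decides
    // whether depth capture is worth doing. Public Vision requests stand in
    // for whatever model the camera pipeline actually runs on the NPU.
    func shouldCaptureDepth(for image: CGImage) -> Bool {
        let humans = VNDetectHumanRectanglesRequest()
        let animals = VNRecognizeAnimalsRequest() // recognizes cats and dogs

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        guard (try? handler.perform([humans, animals])) != nil else { return false }

        let foundHuman = !(humans.results ?? []).isEmpty
        let foundPet = !(animals.results ?? []).isEmpty
        return foundHuman || foundPet
    }

The point is the same either way: the gate only has to answer “capture depth data or not”, not identify what the subject is.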