
It will capture the depth map and generate the semantic mattes (except in some edge cases) no matter the subject if you explicitly set the camera to Portrait mode, which is how I would guess the plant photo from the article was captured.
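
For what it’s worth, third-party apps have to opt into the same data explicitly. A rough sketch with AVCapturePhotoOutput (untested; assumes a capture session already configured with a depth-capable camera):

    import AVFoundation

    // Sketch: opt into depth data and semantic mattes for a capture.
    // Assumes this output is attached to a session with a depth-capable camera.
    let photoOutput = AVCapturePhotoOutput()
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    photoOutput.enabledSemanticSegmentationMatteTypes =
        photoOutput.availableSemanticSegmentationMatteTypes

    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    settings.enabledSemanticSegmentationMatteTypes =
        photoOutput.enabledSemanticSegmentationMatteTypes
    // photoOutput.capturePhoto(with: settings, delegate: someDelegate)

The stock camera presumably does something equivalent for every shot taken in Portrait mode.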

My previous comment was about the default Photo mode.

If you have a recent iPhone (iPhone 15 or later, iirc), try it yourself: taking a photo of a regular object in the standard Photo mode won’t yield a depth map, but one of a person or pet will. Any photo taken in Portrait mode will yield a depth map.
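
An easy way to check is to copy the original HEIC to a Mac and look for the auxiliary images with Image I/O. A rough sketch (the filename is a placeholder):

    import Foundation
    import ImageIO

    // Look for depth/disparity and portrait-matte auxiliary data in an exported photo.
    let url = URL(fileURLWithPath: "IMG_0001.HEIC") as CFURL
    if let source = CGImageSourceCreateWithURL(url, nil) {
        let depth = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDepth)
            ?? CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDisparity)
        let matte = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypePortraitEffectsMatte)
        print("depth map: \(depth != nil), portrait matte: \(matte != nil)")
    }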

You can find out more about this feature by googling “iPhone auto portrait mode”.

Apple’s documentation is less helpful with the terminology; they call it “Apply the portrait effect to photos taken in Photo mode”:

https://support.apple.com/guide/iphone/edit-portrait-mode-ph...



Seems crazy to run an object-recognition algorithm just to decide whether depth should be recorded.

I’d have thought that would be heavier than just recording the depth.


Probably a pretty light classifier on the NPU. It doesn’t even have to care what the particular object is, just whether it matches the training data for “capture depth map”.
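
Pure speculation about what Apple actually runs, but as an illustration of how simple such a gate can be, the public Vision framework already ships person and cat/dog detectors:

    import Vision
    import CoreVideo

    // Illustrative only, not Apple’s pipeline: decide whether to bother
    // capturing depth based on whether a person or a cat/dog is in frame.
    func shouldCaptureDepth(for pixelBuffer: CVPixelBuffer) -> Bool {
        let humans = VNDetectHumanRectanglesRequest()
        let animals = VNRecognizeAnimalsRequest()   // recognizes cats and dogs
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([humans, animals])
        return !(humans.results ?? []).isEmpty || !(animals.results ?? []).isEmpty
    }

The point is that the gate only has to answer “person/pet or not”, not identify the object.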


There was a 64-gate NN implementation in C shared on HN recently that was interesting for stuff like this.


> [...] if you explicitly set the camera in Portrait mode, which is how I would guess the plant photo from the article was captured.

Ah, that makes sense, thank you!



