Ehh, no.

My typical test is to take a photo of the full moon. It works acceptably well on an iPhone (or Android). My recent Pixel phone even adjusts the brightness automatically. Sure, the lens is pretty wide-angle, so the pictures don't have much detail.

I had to fiddle around for 20 minutes with settings on my Sony Alpha camera, eventually using manual focus and manual exposure. The pictures are, of course, better because of the lens and the full-frame sensor.
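(Aside: a common rule-of-thumb starting point for manual moon exposure is the "looney 11" rule: at f/11, shutter speed is roughly 1/ISO, because the sunlit moon is about as bright as a daylight scene. A tiny sketch of the arithmetic, as a generic guideline rather than anything specific to the Sony's menus:)

    # "Looney 11" starting exposure for the full moon (rule of thumb only;
    # check the histogram on the actual camera and adjust from there).
    def looney_11_shutter(iso, aperture=11.0):
        # Base rule: at f/11, shutter ~= 1/ISO seconds. Opening the aperture
        # by one stop (e.g. f/11 -> f/8) roughly halves the needed time.
        return (aperture / 11.0) ** 2 / iso

    for iso in (100, 200, 400):
        t = looney_11_shutter(iso)
        print(f"ISO {iso}, f/11 -> ~1/{round(1 / t)} s")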

But the user experience is just sad, so I often don't bother taking my camera with me on trips anymore.

Also, a note to camera makers: USE ANDROID INSTEAD OF YOUR CRAPPY HOME-GROWN SHITWARE. Add 5G, normal WiFi, GPS, Play Store, a good touchscreen. You'll have an instant hit.




No. That's a hill I'll die on.

Ass-end Nikon Z50, 250mm kit lens, handheld, no real setup other than shutter priority ... https://imgur.com/edCyNjV (very heavy crop!)

And a Pixel 6a mutilating a shot: https://imgur.com/290gXkU

I do not want Android on a camera. I don't want to update or reboot it. I want to turn it on, use it, and turn it off again. And I don't want someone substituting stock imagery for my pictures of the moon (hey Samsung!).


Wow, that Pixel 6 shot is awesome in all the wrong ways. I have no idea how it could have happened.

Of course, cameras with large sensors and lenses are going to be better than small phone sensors. Physics is physics. It's just that it doesn't matter that much for most people (me included).

> I do not want Android on a camera. I don't want to update or reboot it.

I used a Galaxy Camera back in 2014. It was awesome. I could take pictures and automatically upload them to Picasa (RIP) or share them with people. The UI was also pretty good, but it was clearly a v1 without much polish.

> I want to turn it on, use it and turn it off again.

I have an Onyx book reader that runs Android. It works just like this. I pick it up, press a button, and it shows the book I've been reading within a second. So it's clearly possible.


Great example of the primary difference here. I've said that the photos out of my mirrorless (also Z50, great camera) are true photographs in the sense that they capture light and show it to me.

My smartphone, however, does not create photos; it creates digital art based on the scene. Your Pixel image is a perfect example of how algorithms (now called "AI") repaint a scene in a way that resembles reality when zoomed out.

Comparing smartphone and camera is really apples to oranges at this point, as smartphones aren't even capturing photos, they're entirely repainting scenes.


> Comparing smartphone and camera is really apples to oranges at this point, as smartphones aren't even capturing photos, they're entirely repainting scenes.

Calm down, it's not that bad. Take, for example, night sight or astrophotography: it uses ML to intelligently stitch together light across time, because the available light in any one moment isn't enough to capture anything intelligible. The end result is an accurate representation of what your eyes see (e.g. my own face in a nighttime selfie) and of what is sitting there in the sky (the stars). You can call that repainting, but I disagree; it's information aggregation over the temporal dimension.
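(To illustrate what "aggregation over the temporal dimension" buys you, here's a toy Python sketch, not how Night Sight is actually implemented: averaging N aligned short exposures cuts random noise by roughly sqrt(N). Real pipelines add frame alignment, motion rejection and tone mapping on top of this core idea.)

    import numpy as np

    def stack_frames(frames):
        # Average a burst of already-aligned short exposures. With independent
        # noise per frame, averaging N frames improves SNR by about sqrt(N).
        return np.stack([f.astype(np.float64) for f in frames]).mean(axis=0)

    # Toy demo: a faint, flat "scene" buried in sensor noise.
    rng = np.random.default_rng(0)
    scene = np.full((100, 100), 5.0)
    frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(64)]

    print("single-frame noise std:", frames[0].std().round(1))              # ~20
    print("64-frame stack noise std:", stack_frames(frames).std().round(1)) # ~2.5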

Super resolution is similar: it uses the small shakes of your hand to gather more resolution than a single frame from your low-res sensor grid can provide. 2-3x digital zoom with super-res technology actually gathers more information and behaves more like optical zoom; it's not just cropping and interpolating.
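(Again just a toy sketch of the idea, assuming the per-frame sub-pixel offsets are already known; real handheld super-resolution does robust alignment and far smarter merging. But it shows why tiny hand shakes let a burst recover detail that no single frame from the same sensor contains.)

    import numpy as np

    def shift_and_add(frames, offsets, scale=2):
        # frames  : list of equally sized 2-D low-res images
        # offsets : per-frame (dy, dx) sub-pixel shifts, in low-res pixels
        #           (in a real pipeline these come from frame alignment)
        # scale   : how much finer the output grid is than the input grid
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        hits = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, offsets):
            # Drop each low-res sample onto its shifted spot on the fine grid.
            ys = np.clip(np.round((np.arange(h)[:, None] + dy) * scale).astype(int),
                         0, h * scale - 1)
            xs = np.clip(np.round((np.arange(w)[None, :] + dx) * scale).astype(int),
                         0, w * scale - 1)
            np.add.at(acc, (ys, xs), frame)
            np.add.at(hits, (ys, xs), 1.0)
        # Fine-grid cells nothing landed on stay zero; real mergers fill the holes.
        return acc / np.maximum(hits, 1)

    # With scale=2, four frames offset by half a pixel, e.g. (0, 0), (0, 0.5),
    # (0.5, 0), (0.5, 0.5), cover every cell of the 2x output grid.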

Now...portrait mode. That's clearly just post-processing. But also...does blurring the background using lens focus have any additional merit vs doing it in post (besides your "purity"-driven feelings about it)?

At the end of the day, I want my mirrorless to do more than be a dumb light-capture machine. I spent $X thousand+ on a great lens and sensor, so I want to get the most out of them. It should do more to compensate automatically for bad lighting, motion blur, etc. It should try harder to understand what I want to focus on. As a photographer, I should get to think more about what photo I want taken and less about what steps I need to take to accomplish that. My iPhone typically does a better job of this than my $X000 mirrorless. So I use my iPhone more.


> Take for example night sight or astrophotography

Oh, speaking of astrophotography: it occurred to me that all those pretty images of remote planets and nebulae have been doctored to hell and back.

What I don't know is where I can find space images that show the visible spectrum - i.e. what I'd see if I managed to travel there and look out the window.

Is there such a thing?


Well, you're of course using the best example on the one side and the worst on the other side, so that's not really a fair comparison.

Apart from that: phones generally try to compensate for their tiny sensors with highly complex software algorithms, sometimes producing something that bears only a broad similarity to the original scene. Cameras, on the other hand, usually have crappy software and rely on their great sensors (and other hardware). So in an ideal world, you'd have a proper camera with good software. That software wouldn't need all the (good or bad) tricks that exist only to make the best of less-than-ideal image input; instead it could offer user-friendly features that let you take quick and easy photos without having to study tutorials for a week (yes, now I am exaggerating a little on purpose :)).

Such software also wouldn't have to do all the crap that ends up reducing image quality.

Please don't think only in extremes; look for the healthy middle ground that gives you the best of both worlds.

It is not Android itself that does the image processing, by the way, but special software that the phone manufacturers add on top of Android. So this part would still be the camera manufacturer's responsibility, but they could focus on their central use case (helping you take good pictures) instead of writing everything (like the user interface) themselves. And they could even give their users options to extend the software for even better photos.


Please, no. A camera needs to be ready to shoot the moment I flip it on.

I also don't want reduced battery life just so that I can use the god-awful Play Store on it.


My eBook reader uses Android, and it's instant-on, after a week on my couch.

Instant-on Android devices are a solved problem.


I bet it's not coming from a cold start every time.


Of course. It does the equivalent of suspend-to-RAM after a few minutes of inactivity and can then stay in that state for weeks. I'm not sure exactly how long; I've never left my book reader untouched for more than two weeks.

Cold reboot after updates takes about 20 seconds, like my phone.

This strategy can work for cameras.


Except for when it's fully off and you want to take a picture.


So turn your camera on once when you pack your things for the flight. It can stay in sleep mode for at least a couple of weeks without draining the battery.

Honestly, I don't see a problem.


Does the phone really take a sharp picture of the moon, though, or does it just add detail it knows is there?


At this point I'd have imagined Apple would have a moon-detection feature and just replace it with a stock image cutout whenever it's detected in the field of view.



Samsung actually does that.



