A lot of these features are nice to have but definitely not essential, especially for general amateur photography.
But here's one important feature the list misses: the ability to shoot uncompressed, or losslessly compressed, RAW images. The last time I checked, the introductory Sony cameras (the A6500 and its cousins) do not permit this. The customer can only access lossy compressed data. The same is true for lower-end Nikon DSLRs.
Most of the time, this is not a problem – though in some situations with heavy postprocessing or harsh lighting, you will certainly see compression artifacts. I just find it amazing that they're asking customers to pay almost a thousand dollars for a camera+lens kit that doesn't support lossless compression, or let me access the true "raw" data.
It's 2021! This is well-known technical knowledge taught in undergraduate CS classes.
> A lot of these features are nice to have but definitely not essential, especially for general amateur photography.
While I agree with you, I feel like on a mirrorless, the Sensor Dust Protector Curtain is essential if you want to be able to change lenses while out in the field.
Many years ago I worked for a small local paper; they gave me a DSLR, so I had the luxury of the sensor being at least partially protected by the mirror. (I wish they had given me two bodies so I didn't have to swap lenses, but it was a small newspaper and they didn't have a lot of money.)
Now I own a mirrorless (that I don't use professionally) and I would never attempt to change the lens unless I'm in an indoor, non-dusty room, without fans or a/c operating. Just looking at the naked sensor staring back at me makes me nervous ;-)
I can't imagine swapping lenses on the mirrorless while on assignment; I think I would have to buy a couple of bodies just to avoid lens swapping.
I don’t think it’s that big of a deal. Sensors can be easily cleaned! I’ve changed lenses on my Fuji mirrorless in wind, dust, huddled under an umbrella, etc. On the one occasion in the past 5 years that dust spots were an issue in a photo, a quick few pixel clone stamp solved the problem.
For the average shooter, even one who uses their camera like a tool, maintaining it reasonably should make this a nonissue.
Yes, if you use your camera as a hobbyist, I agree with you. For professional use, the clone-stamp solution is not going to work, as I explained in the sibling thread. In that case you need to have your sensor professionally cleaned, which, as you mentioned, can be done relatively easily, but it costs money, at least where I live (Sydney, Australia).
Maybe I'm less fussy than others, but I've never understood why people are so obsessed with keeping their sensor spotless. If a dust spot lands somewhere it's easy to see in the resulting image, it's trivial to remove in post. If it lands somewhere it's hard to spot, then it isn't noticeable anyway. Maybe my standards are low :)
Like in any field, when you go from hobby to doing it as a job, any trivial task becomes a problem if you have to repeat it dozens of times per month.
Perhaps you can create a macro to apply the same edit automatically to all your imported pictures. In a fast-paced industry like news, that's still an unwelcome overhead though.
I'm not familiar with edit templates; they would be OK as long as they can be stored for later reuse. The typical workflow in news is that you don't edit 300 photos all at once at the end of the month – it's more like editing 10 photos every day, sometimes 5 of them at noon and another 5 at 6pm (numbers and times are made up, obviously).
I would love to have a protection curtain like that on my E-M5, but so far it hasn't been essential. I routinely change lenses outdoors and haven't had (or noticed!) a problem with dust on the sensor. I have a routine for doing the swap quickly, with minimal exposure of the sensor. It's obviously not watertight, but it seems okay. Although I do feel a bit like Dave Bowman re-entering the Discovery without his helmet on every time I do it ...
I wouldn't know what I would need that curtain for, but I am using Olympus cameras, which have the strongest sensor cleaning available. Olympus was also the first brand to offer sensor cleaning and never gave up that lead.
> The customer can only access lossy compressed data
Honest question - is this a tech or a biz decision? I.e., is it more expensive to include output to raw? More i/o requirements or totally different chipsets or the like? Or is "raw" just deemed a "pro" feature that they feel they can upcharge for? (e.g. paying to flip a bit to unlock more battery life in an EV)
The lossy compression schemes used to be pretty simple, so they were a fast, easy way to reduce file size, as older low-end cameras didn't have a ton of memory (it costs money, y'know, though a modern camera has a few GB of RAM nowadays) or sometimes didn't support the newest/fastest card standards. In terms of IQ there is rarely a difference.
Most cameras offer lossless compressed raw these days; some (e.g. the low-end Nikon Zs, and also the flagship Z9) don't even have uncompressed raw anymore, only lossless compressed or a choice of lossy options. Much to the chagrin of some people, one particular YouTuber especially, who make a huge deal out of using uncompressed raw. The person in question doesn't understand what the word "lossless" means and claims there's a difference between losslessly compressed raws and "true uncompressed" raws.
I think compression is done by default because it speeds up picture taking immensely. My understanding is that there's a buffer that holds recently shot pics while they're being written to the SD card. Clearly, if you use compression, you can fit more pics in the buffer, allowing more consecutive snaps before the camera locks up for a bit.
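Back-of-envelope numbers make the tradeoff concrete (all figures below are illustrative assumptions, not any particular camera's specs):

```python
BUFFER = 2 * 1024**3                   # assume a 2 GB in-camera buffer
RAW = int(24e6 * 14 / 8)               # 24 MP at 14 bits/px ≈ 42 MB per frame
LOSSY = RAW // 2                       # assume lossy raw roughly halves the size

print(BUFFER // RAW, BUFFER // LOSSY)  # ≈ 51 vs ≈ 102 buffered frames
```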
Why it's lossy instead of lossless, I do not know. My guess is that lossy compression compresses further, and hence improves their effective buffer depth for advertising purposes. I think 90% of the time this is a good trade-off. You can view some example artifacts here: https://stephenbayphotography.com/blog/sony-raw-compression-....
[Note that the author mentions being able to access uncompressed raw on his Sony after 2015, but it's a pro-level model. For the amateur models I believe there's no such option.]
Obviously, if you're trying to make them happen, you can do it, but for pics of your kids/pets/plants/etc. in normal daylight situations it shouldn't be a problem.
Still, as a technically minded person, it's infuriating to me that I can't access the original data.
I used to work for Blackmagic Design in the BRAW team (yes, technically not RAW, Red somehow got a bullshit patent on the compression of RAW data that they defend aggressively).
There was definitely an R&D cost associated with developing the libraries to read/write that format, and additional QA steps required when implementing them, but no additional hardware was needed (it actually required less on-hardware processing to write to RAW).
If a company has RAW support on one camera but not another – especially for still photos – it is 100% a market segmentation strategy.
Casual photographers are fine with just using OOC JPEGs and apps like Apple or Google Photos. They aren't paying for Lightroom or Capture One, or worrying about the sensor's dynamic range or ISO performance.
Also, using RAW, especially with higher-megapixel sensors, makes your entire stack more expensive. Your computer needs to be fast, with lots of hard drive space, and you need a UHS-II card reader as well as 300 MB/s cards, etc.
It's been a long time since I've bought a new camera, but I'm amazed that this would not be the case. My 2008-vintage Pentax K200D has the option to save JPEG and associated RAW images with every shot, as well as (IIRC) a button that could be configured to just do it when you're planning on having the need. Pentax has some great features for the price, but this was far from a fancy DSLR at the time.
It's not a hidden feature by any stretch. Different vendors allow for different types. Some only allow a smaller frame size while still RAW. Some allow a compression to be applied.
Back-in-time buffer is amazing; I have it on a Casio camera of mine.
You hold the shutter halfway and it takes photos at whatever speed you choose (I usually have it at about 10fps), then when you press fully, it stores up to 40 shots from before you fully pressed the shutter (you choose beforehand how many shots to store from before the full shutter press).
I would love to have a full-frame mirrorless with that feature, but alas.
I've got a lot of nice photos of birds taking off, fish jumping out of the water, street photography to capture people in an exact position of my choosing, the exact moment a bat hits a ball, and many other things that would have been almost impossible to capture otherwise.
It amazes me how stuck in the past camera companies must be, there's so little innovation.
Even something as basic as wifi/phone support is almost always half-baked and barely functional.
There's so many opportunities to improve with good phone integration too. You buy a $2k SLR, yet it's still difficult and time-consuming to simply snap a picture and send it using your phone. It shouldn't be this complicated.
Indeed. I've used it on my Olympus gear (the Fuji X system also has it, btw) and it's very helpful when, e.g., you know that bird is going to take off at some point but not exactly when.
Given how simple it is (basically a ring buffer for images captured using an electronic shutter mode), it's surprising that every camera with a decent e-shutter implementation doesn't have this.
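As a sketch, the whole feature really is just a bounded deque; the frame-delivery hooks are hypothetical, but the logic is this small:

```python
from collections import deque

class PreCaptureBuffer:
    """Back-in-time buffer: keep the last N e-shutter frames in a ring."""
    def __init__(self, depth=40):
        self.ring = deque(maxlen=depth)   # oldest frames fall off automatically

    def on_half_press_frame(self, frame):
        # fed at e.g. 10 fps while the shutter is half-pressed
        self.ring.append(frame)

    def on_full_press(self):
        # persist everything captured *before* the full press
        return list(self.ring)
```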
I agree with a lot of this stuff. I would really love to be able to do more processing in-camera. For example, if I'm doing focus or exposure bracketing, I am going to composite that in Lightroom 100% of the time. Why can't the camera do this for me? I would at least love it if I could tag those photos as being a pair, so that Lightroom could automatically detect this. I'm kind of surprised by how conservative camera manufacturers have been about adding new features. Some cameras don't even have intervalometers built in!
It boggles me how the big camera makers seem content to cede market share to smartphones. Obviously, they can't compete on pocketability or ubiquity, but computational photography is their home turf. I shouldn't have to reach for my phone to take panoramas, photo spheres, and "live photos", and I'm not interested in spending hours compositing stuff later on the PC.
The big camera makers do throw in a bunch of "modes" on their prosumer lines (which they toss out the window on their professional models) to pander to the consumer. Lame stuff, mostly. Give me some real features, like auto-masking foreground subjects or out-of-camera photogrammetry-created 3D models.
I get that prosumers are a dwindling niche and pros are where the money comes from, but you need a source of future pros, right? How does the camera compete if – as an artifact – its inspirational value is outclassed by the mundane cellphone?
Agreed. They should go all in on computation. They would probably need different processors and more batteries but a “pro” camera should be better than a phone in absolutely every aspect. I just got an iPhone 13 and it’s interesting how it handles night shots better than my Fuji X-T3 with its much bigger sensor.
I've been harping on this for years. If Apple released a Micro Four Thirds camera (as a starter) with an A-series CPU and its image processor, they would clean up.
The beauty would be that they already have all the APIs to support doing this, and you could run one of the existing 3rd-party camera apps too. (Hi, Halide!)
Lightroom's autostacking works pretty well for this if the bracketed shots are close together in time, as they are when the camera is automating the bracketing. The Lightroom feature augments the typical camera feature in this case.
An Olympus EM-1 II has a pair of quad core ARM processors.
The biggest challenge is that the hobbyist photography community is relentlessly, furiously hostile to improving cameras in ways that move beyond pretending we're still using darkrooms, but with pixels.
Hostility toward anything resembling the "fauxtographers" who take excellent photographs with phones is one rich source of that, but there are many more.
One more feature I would like to see – I've never heard of a camera supporting it, though there has been aftermarket support on some Canons:
Automatic shooting of HDR exposures. Stacking them requires some human intervention and shouldn't be done on the camera; I'm simply thinking of the shooting. You put your camera on a tripod, select HDR and push the button. It shoots a frame and checks the histogram – are pixels falling off either end? If so, adjust only the shutter speed and shoot again. Keep doing so until the slowest shot has nothing falling off the dark end and the fastest shot has nothing falling off the bright end.
A lot of cameras have bracketing--even the bridge camera I carry while hiking has it. I'm talking about HDR-aware bracketing that decides how many shots to shoot rather than simply shooting a fixed range. Imagine a shot in a building looking out the window with sky visible. You could have a range of 10 stops or more.
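A sketch of that loop, assuming a hypothetical capture(shutter_s) hook that returns a 256-bin raw histogram; the shot count adapts to the scene instead of being a fixed bracket:

```python
def auto_hdr_bracket(capture, start=1/125, max_shots=12):
    """Keep shooting until neither end of the histogram clips."""
    shots = [(start, capture(start))]
    # shorten the shutter until the fastest shot stops clipping highlights
    while shots[0][1][-1] > 0 and len(shots) < max_shots:
        s = shots[0][0] / 2                        # one stop darker
        shots.insert(0, (s, capture(s)))
    # lengthen the shutter until the slowest shot stops clipping shadows
    while shots[-1][1][0] > 0 and len(shots) < max_shots:
        s = shots[-1][0] * 2                       # one stop brighter
        shots.append((s, capture(s)))
    return [s for s, _ in shots]                   # shutter speeds used
```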
A modern full-frame camera will have 14 stops of dynamic range. Five shots 3 EVs apart span 12 EV of bracketing, for a total of about 26 EVs of dynamic range – well beyond your ability to actually tonemap properly. Practically speaking, that's the equivalent of going from a shutter speed of 10 seconds to about 1/400th of a second, and still having the full 14 stops of sensor range at both extremes.
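For the record, the arithmetic behind those numbers, for n shots spaced s EV apart on a sensor with D stops of per-shot range:

```latex
\[
  R \;=\; D + (n - 1)\,s \;=\; 14 + (5 - 1)\cdot 3 \;=\; 26\ \text{EV}
\]
```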
FWIW I can take pictures in a room with a window looking at the sky and change the exposure to clearly see both the sky and the room without even bracketing on a full frame camera.
Yeah, if you have a camera that pricey you could do it that way. Note, though, that you don't want to use all your dynamic range as the quality drops off.
Tone mapping is very much a subjective process. There's no one "HDR" algorithm. At best the camera could convert everything to linear or logarithmic light and save in OpenEXR or something like that.
I'm not sure if this is what your parent meant, but I think it can require more or fewer frames, with more or less spacing depending on the camera, and on the dynamic range of the scene. Currently (at least with Fujifilm) you have to manually program all this in.
Exactly – all it requires is a quick inspection of the histogram to tell whether you need another shot, but that's awkward for a human, since you don't want to touch the camera between shots if you can avoid it (unless you can do it all remotely). It would be a trivial task for the camera itself.
The initial stack doesn't require a human. Compressing the dynamic range to produce a useful output is still a task that, AFAIK, is best performed by a human.
> Currently Found On: Sony A9 II and A1; Nikon Z9; Canon full-frame mirrorless cameras other than the EOS RP
That's not really accurate – the Sony and Canon bodies just use one of the shutter curtains for this, while the Z9 doesn't have a mechanical shutter, so it gets a dedicated mechanism that's nowhere near as fragile as a shutter.
> 6. Bulb Mode Preview
Not sure how this is supposed to work without having a non-destructive readout sensor, unless it's just an approximation based on a quicker exposure before starting the bulb exposure.
> 8. Native Image Averaging
idk, all of those guys are shooting raw anyway – just stack the frames in post, which you were probably going to do regardless?
> 14. Raw Histograms
Yeah.
> 15. 16-Bit Raw
Probably in the next generation of landscape cameras like the Z7III
Raw histograms are a no brainer, I don't understand why this isn't already commonplace.
16 bit raw is another matter. If you're not getting 16 bits of dynamic range from the pixels, all you'd be doing is getting a more accurate reading of the noise. That just adds a lot of expense for no perceivable gain.
I've been a working pro for nearly 35 years. One feature I would love is a shortened shutter-response mode – something akin to a 6ms response from shutter trigger (this has nothing to do with shutter speed). Canon actually built cameras like this in the film days: the EOS RT and EOS-1N RS. They used a pellicle mirror, so no mirror had to be lifted out of the way to make an exposure.
Why would anyone want this? Let me show you how I photograph wild bats in flight at night. I have many other use cases. For now I have to mount a shutter on the front of my lens to get back the shutter response I used to have with those film cameras of old.
I wonder if the Olympus "pro capture" mode mentioned in the article would do the job. That effectively gives you the ability to choose the shutter response time after you have taken the photo, including the option of negative values.
I have an Oly EM1.3. Pro Capture on a rental EM1.2 is what originally sold me on the camera, but Live ND (image averaging) is probably my most-used feature outside of standard aperture-priority shooting.
It's pretty fun what you can do with it. Here's an example from a backpacking trip last year:
Agreed on GPS: the trend of getting GPS data from a smartphone is particularly irritating (it either drains power on the phone, or you forget to enable it). I miss the 6D/6Dii built-in GPS.
I hadn't thought of sensor shift star tracking. That'd be useful, assuming it would work with, say, the R6.
Having owned several cameras, each has pros and cons in its GPS approach.
Canon EOS 6D: Standalone GPS receiver on the body. Takes a long time to get first fix. Weak signal on airplanes. Heavy battery drain. Does not work underground or in big buildings.
Canon EOS M6: Post-shooting tagging with GPS when the camera connects to a smartphone, using timestamps for correlation (the matching logic is sketched after this list). Smartphones are extremely fast (e.g. A-GPS) and accurate (e.g. GPS+GLONASS), can use Wi-Fi for indoor geolocation, and can tag basically 100% of photos without missing any. The cons are the need for accurate timestamps, the need to keep the phone on and logging (potentially high battery drain), and the need to manually invoke the sync action after shooting.
Canon EOS RP: Bluetooth connection to phone. The phone's geolocation is fast and accurate. But the Bluetooth connection is terrible; takes 5~10 seconds to establish; cannot geotag missed photos after the fact (unlike the M6); the phone has a tendency to evict the Canon Camera Connect app which means no Bluetooth connection to the camera.
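A minimal sketch of that M6-style timestamp correlation, assuming the phone's track log has already been parsed into sorted (unix_time, lat, lon) tuples (EXIF/GPX parsing omitted):

```python
import bisect

def geotag(photo_times, track, tolerance_s=60):
    """track: non-empty list of (unix_time, lat, lon), sorted by time."""
    times = [t for t, _, _ in track]
    tags = {}
    for shot in photo_times:
        i = bisect.bisect_left(times, shot)
        # look at the fixes just before and after the shot; keep the closer one
        nearby = track[max(i - 1, 0):i + 1] or track[-1:]
        t, lat, lon = min(nearby, key=lambda p: abs(p[0] - shot))
        # only tag if the phone actually logged a fix near that moment
        tags[shot] = (lat, lon) if abs(t - shot) <= tolerance_s else None
    return tags
```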
That's the thing that's most baffling to me. It seems like GPS as a feature would cost nearly nothing in money or space inside the camera, but it's still extremely rare. Pentax had it in the K-3 II but dropped it for the K-3 III.
It's all fancy shit. But guess what: my 35mm camera from the eighties has a command back with an intervalometer. I can tell it things like "in 10 minutes, do an hour-long exposure" directly on the camera. The fact that I cannot do that on a modern camera without cables or mobile phones is a travesty.
My Panasonic Lumix FZ82 has a built-in intervalometer, plus stop motion, slow-mo video, WiFi and Bluetooth remote control, and a lot more – plus a 60× optical zoom. All for about $350 these days. It's a great daily-driver camera.
They aren't stupid. Sensors get hot while they're taking an image, and if you can't dissipate the heat quickly enough you have to limit the exposure time. It's likely that new Fuji sensors just run cooler, so they can increase the time they're active.
Yes. People keep repeating this, and while it is a real problem for some designs, most professional cameras let you bypass it with a remote trigger, often an official one.
Besides, people have been doing long exposures since, like, forever, with great success. If sensor overheating were the problem, it would make sense to introduce a hard limit, and some cameras have them; but from my experience the likely reasons for the prevalent 30s cap are: 1/ most people don't do several-minute exposures, 2/ they need to sell remote triggers to somebody.
The 30s limit was there even in the nineties, when you could do hours on film without overheating anything.
With modern post processing, why would you be interested in a single 1 hour exposure vs a stacked image?
One thing most people miss when comparing film and digital cameras by exposure length is that a digital camera's image degrades the longer the chip is energized. Heat builds up, and the image gets noisier the longer the exposure. One rule of thumb for time lapse is to leave an interval between shots as long as the exposure itself, to give the sensor time to cool back down. YMMV
I have done 30-minute exposures on digital (5D Mark II and Magic Lantern) with very little noise, so I doubt sensor heat is the problem. Bump the ISO instead and you get far more noise.
I often need it for HDRs at night. I know I can take loads of noisy high-ISO short exposures and average them in post, but that's clunky and even harder to set up automatically. I would prefer something user-friendly. When I'm shooting at night, it's often cold and sometimes not so safe. Fiddling with apps and cable releases is not something I want to be doing.
I know that the hardware is capable of it, but they just have to sell you that cable release or what not in order to unlock the capability, which is what I'm not happy about.
I use the same body, and shoot lots of long-exposure night-time stuff as well. If you're doing hour-long exposures, you're already fiddling with mounting gear, so a small external intervalometer is the least of the gear you need. I have a custom VR rig that mounts multiple camera bodies that all shoot timelapse in sync. Imagine the cabling involved there and you can see why a single body with an external controller sounds like heaven to me.
You don't have to do high-ISO anything. If you're willing to shoot a 1-hour exposure (are you really doing that on a DSLR?), it seems like you'd be willing to shoot 30 2-minute exposures, or 60 1-minute ones. What doesn't motion-blur out in 1-minute shots, other than maybe cars at a red light? Also, how many 1-hour exposures can you get out of a single battery charge (2 batteries if using a grip)? I know doing 45s exposures with short intervals chews up batteries for me.
It seems to me that Canon has decided that if you're going to do a longer-than-30s exposure, you're going to use bulb mode and an external controller. I know Magic Lantern allows setting custom bulb exposures, but I haven't messed with that option. Its built-in intervalometer isn't reliable for short intervals (anything 5s or less is iffy regarding consistent shooting frequency). With my wired controller, I can do 1s gaps. Anything less, and I start getting issues with dumping to card reliably on the old body. Using my custom Arduino controller for 0.5s gaps with 1s exposures, with the camera mounted to a car for motion-blurry goodness, freaked the MkII out for some reason. Newer bodies handle it much better.
First, I'm not complaining about the 5D MkII. I use Magic Lantern on it and I'm happy with it. It does what I want. I'm just sad that unpaid enthusiasts must provide functionality that should have been there in the first place. Also, I recently bought a Panasonic Lumix S1 and, as you can imagine, there's no Magic Lantern for it.
Second, an hour was just an example. Most of my exposures are not longer than 4 or 8 minutes.
Third, I'm happy that you are fine with external controllers, but I'm not. I find them crude and unnecessary. I do a lot of HDRs and I don't understand why I cannot simply preprogram my camera with a sequence of shots and execute it without any gadgets or limitations. Say there's this nonsensical 30-second or 1-minute limit everyone seems to defend for some strange reason (is there some sort of Stockholm syndrome at work?); at the very least the manufacturer could provide a better bracketing setting to compensate for it. It's not rocket science – an intern could write the firmware in a few days. I can't even patch the camera without voiding the warranty. I know there are always workarounds, but why make things complicated when they could be simple?
Because that's not how you sell the more advanced camera.
The MkII got video basically because someone realized the chip could do it, stuck his head into an office saying they could make it happen, and then did it. It pissed off the pro-video camera guys to no end. So I'm guessing Canon is not going to let that happen again by letting the interns loose. And it goes against Sony's DNA to let features go out for free.
But seeing as this is HN: many a device has been launched by people tired of the status quo who went and did their own thing to make what they wanted. You're welcome to post us a Show HN when you've fixed your problems in a way that makes things easier for the rest of the world too. Or, better yet, it sounds ripe for disruption, so why not get a YC-backed startup going to build a better camera? Try not to make the camera equivalent of Homer's car, though.
I understand the business decisions. But what you are suggesting is already kind of happening with mobile phones. They have shitty sensors, but a software ecosystem that can thrive around them. My guess is the next big thing will be a professional camera running Android and third-party software, doing all sorts of cool stuff.
Maybe someone will do something similar to Oculus and just strip the incredibly wasteful phone stack out of the phone. Now the device is actually useful. Throw in a full-frame sensor, add a nice lens mount, voilà.
I'll add one idea to the list: tracking moving objects using IBIS.
Like when you shoot a moving car or a flying bird – use the existing autofocus to detect the object, then track it over the exposure time to keep it free from motion blur while getting a nicely blurred background. Pentax has a simple version of this for tracking stars, mentioned in the article, but I hope some company figures out how to do it for arbitrary, fast-moving objects in real time.
The Sony A9 series already got rid of blackout during shooting, so it is technically possible to get image data in real time, run continuous autofocus, and save full-size RAW files at the same time: https://www.youtube.com/watch?v=_ZXFI-eIXk8
I don't think IBIS systems have enough travel to do something like that, especially for the longer focal lengths where this would be useful. In the meantime optical stabilizers and IBIS systems already smooth your pans on the micro-level pretty well.
I don’t think they can move the sensor fast enough. It would also only work for objects whose parts don’t move. Airplanes may work but with birds the wings move in different directions.
But I think with much faster sensor readouts you will be able to compute sharp images sometime in the future.
One feature that wasn't mentioned: the option to capture the camera's motion-sensor data used for video stabilization and save it as embedded metadata. Some Sony cameras currently do this, and the data can be used to increase stabilization quality via post-processing in Sony's desktop app.
A good list with a lot of very useful features. One thing I found interesting is that Olympus is the brand that implemented a lot of those features. Micro Four Thirds is often dismissed for its "small sensor", but in practice, a lot of features often count for more than sensor size (and the small sensor of course has advantages for optics design, size, and portability). All photographers should pay more attention to Olympus – or rather OMDS, as the new company is called. They are making really good and versatile cameras.
Indeed, it is. The lenses are just great. I have a large Olympus setup. Could I have gotten a full-frame camera? Yes, but for most tasks I consider the Olympus the better choice for its combination of features, great optics, and a small kit that can be carried even with a bad back.
For me, the only real competitor is the Leica M system :)
I agree, and I would love to try a Leica, but I don't have that kind of money for one among several hobbies. :-)
I also don't get this term "full frame". When I started photography, "full frame" was 120 format. 35mm was "compact camera". And I think you need to go to 120 (at least) to do noticeably better than Olympus micro four-thirds.
It is just a term that got established. Most of the time, I just talk about the 35mm format. The best explanation is that Canon and Nikon used to sell digital versions of their 35mm cameras with smaller sensors, and when they started making 35mm digital sensors, those cameras were "full frame" – the digital camera had the same sensor size as the film cameras of the same system. I don't care too much about those terms; I am happy with Olympus.
With respect to the Leica: if you can trust your self-control, by all means try out a Leica; it is an experience one should have. But yes, deposit your credit card in a safe place, they are lovely. For me the downfall was that I bought an M-mount Voigtländer 40/1.4 as a short tele lens for mFT, back when there were not many prime lenses to choose from. It works lovely on a digital M :)
To be very contrarian and sound like a bit of a neo-Luddite, I have been a photographer for more than 30 years and have never taken photographs as good as the ones I take with my Leicas, which are the most feature-poor expensive cameras you’re likely to find.
No autofocus, no video, limited light metering, etc.
I drooled over getting a Leica M for 15 years and finally took the plunge about five years ago. Best choice I have made gear-wise in all this time. After getting the first, I went ahead and got the Monochrom as well (yes, B&W-only sensor).
Some of these seem like they are workarounds that aren't going to be required in future camera designs.
Others fall well into the amateur photography traps of "I must have this techno gadget in the camera to get better", whereas Pros have been getting that shot for decades without said gadget.
I have a pretty fancy system (though I've been gradually losing interest over the last 5 years or so), but I feel like it's all becoming irrelevant over time, and the camera companies are going to be left with a very small pro market (which is much more pragmatic about upgrades) and the hard-core amateur market that just buys everything to show off.
The biggest things that would keep camera systems relevant would be much better connectivity options to your phone or other device. Make it really easy and seamless, and wireless. Or put LTE right in the camera.
I've wondered about a lot of these features and how they would be ubiquitous if only every brand of camera had a standard way of loading applications and interfacing with the sensor and lens.
Sadly such an interface doesn't even exist on Android.
One feature I've been unable to find anywhere is the ability to sync the sensor clock very precisely, either manually as a menu option or using radio signals (Bluetooth, Wi-Fi, GPS or TV broadcast – I'm not sure which would be better).
The idea would be to sync multiple cameras perfectly, to capture an event at exactly the same moment but from different angles. I know professional cameras can do this using genlock, but with the number of sensors and radios on modern cameras, I'm pretty sure this could be made available to the masses.
As a Canon camera owner it's sad to see it missing from the supported cameras lists under nearly all of the features mentioned. Mine does have a GPS unit, though.
I think they’re great - absolutely perfect for situations with young kids who are constantly moving, or just to capture little vignettes of movement that bring a photo to life for a moment.
But support for them is virtually non-existent outside the iOS Photos app.
You can extract them as a pair of HEIC and MOV files, but I know of no Free photo-management software capable of associating them and displaying them together. (And yes, there are internal IDs in both files that make it possible to link them, so they aren't quite as fragile as most sidecar files – though still fragile, because Apple for some insane reason decided not to use the HEIF container format's native ability to store both in the same file.)
I’m unsure if Android has an equivalent feature or how it’s implemented, but I’m pretty sure no DSLRs have anything like it.
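For what it's worth, grouping the two halves yourself looks doable via those internal IDs; in my understanding exiftool exposes the shared asset ID as a ContentIdentifier tag on both files (treat the tag name and details as assumptions, not documented API):

```python
import collections, pathlib, subprocess

def live_photo_pairs(folder):
    """Group the HEIC/MOV halves of Live Photos by their shared asset ID."""
    groups = collections.defaultdict(list)
    for f in pathlib.Path(folder).iterdir():
        if f.suffix.lower() not in {".heic", ".mov"}:
            continue
        # -s3 prints the bare tag value; empty output means no ID present
        out = subprocess.run(
            ["exiftool", "-s3", "-ContentIdentifier", str(f)],
            capture_output=True, text=True,
        ).stdout.strip()
        if out:
            groups[out].append(f)
    return {k: v for k, v in groups.items() if len(v) == 2}
```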
Pixel phones can take "Motion Photos" — it seems like this is basically just an mp4 file stuffed into a jpg somehow (but without sound). The Google Photos app lets you export it as an mp4 or gif file, or pick a specific timestamp to export a still image.
It looks like the Samsung Camera has something similar, but I'm not sure if it's compatible...
All of the current high-end cameras are pretty similar here and do around 30 fps stills shooting at full resolution, or alternatively 8K (33 MP) video. 8K video alone is pretty much the same, as far as resolution is concerned, as yesteryear's dedicated high-res bodies.
At the levels below that you'll have to settle for full-res stills at somewhere between 14 and 20 fps or 4K30p/4K60p, for now.
Pretty much my votes too – not that the others are unimportant. "Native Image Averaging" – with live preview please, and the ability to zoom into that preview.
The only feature I really feel is missing is photo encryption. It's something journalists have been asking for ever since DSLRs existed, and doesn't seem technically infeasible.
I want to see smaller form factors, e.g. the Sony RX1 and Sigma fp. The whole point of mirrorless was to move away from DSLR-sized kits, and yet years later we are back there again.
It would also be nice to see more options for manual focusing. The Pixii is interesting, being a digital rangefinder, but there isn't much innovation here.
Besides mirror slap/vibrations, what's the other benefit? Size, and therefore cost, of glass is why all of my professional friends moved over. You can get better lenses for cheaper. And, the cameras themselves are much simpler, so can be cheaper/higher quality.
Many of the "live viewfinder" benefits already existing in DSLRs that had live viewfinder modes, with the rest being excluded for cost or differentiation (focus pixels, live histograms, etc). I saw it as an more of an eventuality of the live viewfinder being made more featureful than the lens view, so removing the mirror just made sense. The only people I know that are still using DSLR are those that "need" the classic viewfinder, but there's no reason the features of a mirrorless camera couldn't be put behind a sometimes-flopping mirror. The only problem with that would be vibration and the increase in size of the camera and lens.
Field curvature, astigmatism, coma, and chromatic aberration need to be handled optically.
A larger sensor remains beneficial, even if the lens is physically larger, at least as long as you keep a wide aperture.
There is a hard limit at F/0.5, which in practice you won't get closer to than ~F/0.65, and the crop factor means a smaller sensor gives you a smaller absolute aperture for the same field of view.
MFT needs half the focal length and half the F-number to match full frame in FOV and DOF.
E.g. an F/1.3 full-frame lens already matches what exotic optics can reach on MFT, and an F/1.0 one would match the theoretical limit of MFT.
Speaking about more realistic MFT lenses like F/1.4, that'd be a rather harmless F/2.8 in full-frame.
Yes, the latter is more expensive, but only because F/1.4 is quite a bit away from F/0.7, the practical limit for "normal" photography lenses (anything faster is probably exotic).
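To make the equivalence explicit (a worked restatement with crop factor C ≈ 2 for MFT, not anything from the article):

```latex
\[
  f_{\mathrm{FF}} = C \cdot f_{\mathrm{MFT}}, \qquad
  N_{\mathrm{FF}} = C \cdot N_{\mathrm{MFT}}
  \quad\Rightarrow\quad
  25\,\mathrm{mm}\ f/1.4\ \text{(MFT)} \approx 50\,\mathrm{mm}\ f/2.8\ \text{(FF)}
\]
```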
ahh wouldn't the star tracker require pretty specialized or higher powered hardware? seems a bit much to cram into a camera no? unless you want the cost to increase by the cost of 1 cell phone?
I have a Pentax camera (it's great -- they're undervalued by the market). The astrotracer uses GPS lookups and needs either a K-1 or a (very overpriced) GPS dongle that plugs in via the flash hotshoe.
It gets an accurate lock and moves the sensor about its 5 axes of magnetically controlled freedom to emulate, to some degree, an equatorial mount. No image-domain tracking required – purely physics. The results people get are insanely good (cf. [1])
Mechanically there's only three degrees of freedom (translation in the plane and rotation about the optical axis), the cameras just calculate an approximately correct translation for pitch/yaw movements. Pitching or yawing the sensor itself would take it immediately out of the depth of focus (the image side equivalent of depth of field) and the image would be a blurry mess.
1. Find bright spots in last second of exposure (which is then added to a sum for the final picture)
2. Compare their locations to bright spots in the previous second
3. Move the sensor slightly so the bright spots remain "in the same place" on the sensor.
Note that cameras with integral anti-vibration do that by moving the sensor, so they already have the hardware for (3).
For star tracking there would be a maximum apparent star speed, set by Earth's rotation multiplied by the focal length, which you could use to put a reasonable cap on expected movement. You could also expect each second's movement to be similar to the seconds before.
The camera could keep a data structure per star being tracked, capping the number used at, say, 32 or 64 "stars" for accuracy. Each "star" would "vote" on how much movement has occurred. You'd gain faith in individual stars as analysis frames go by if they keep giving valid results; a blacklist of "not stars" would also be useful.
Some would be stationary, like lights on the ground. Some would be bogus, such as navigation lights on aircraft. Actual stars should show similar movement and "agree", though at low zoom the movement might vary across the image.
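A toy version of the voting just described, with spot detection and the sensor actuator left as hypothetical hooks; the median vote is what lets real stars outvote the bogus ones:

```python
from statistics import median

def consensus_shift(prev_spots, curr_spots, max_px=5.0):
    """Each tracked spot votes on the apparent (dx, dy) since the last frame."""
    if not curr_spots:
        return 0.0, 0.0
    votes = []
    for x0, y0 in prev_spots:
        # nearest current spot, capped at the physically plausible movement
        nx, ny = min(curr_spots, key=lambda p: (p[0] - x0)**2 + (p[1] - y0)**2)
        dx, dy = nx - x0, ny - y0
        if dx * dx + dy * dy <= max_px**2:
            votes.append((dx, dy))
    if not votes:
        return 0.0, 0.0
    # median vote: outliers like aircraft strobes get outvoted; persistent
    # non-movers (ground lights) belong on the blacklist described above
    return median(dx for dx, _ in votes), median(dy for _, dy in votes)

# each analysis second, the IBIS hook would apply something like:
# move_sensor(*consensus_shift(prev_spots, curr_spots))
```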
The hardware is already there. You don’t need to move much to counteract the movement of the stars. Many cameras have in body stabilization, which can move the sensor around.
Pentax has had this for a while (since as early as 2011).
I doubt you need that much hardware or cost, given that even an Arduino can do it [1]. From an outsider's perspective, though, camera companies are probably hesitant to shake up what's already there, which is probably why they don't add such features.
Isn't that the gizmo that triggers external (studio / photo-shoot) lights to flash? It sends a radio or IR signal on shutter press rather than flashing on the camera.
An alternative is studio lights that are set up to trigger when the camera's own flash goes off.
Commander mode uses the flash to send the commands. It’s actually bananas what the protocol can do. A Nikon commander can signal three different groups of speed lights to test-fire, figure out the power the photo requires from each group, signal them all to dial in the exposure, then signal them all to fire. It does this all with 1960s technology.
What I would like cameras to have is apps. Sony started allowing apps on their cameras some time ago, under the name PlayMemories, but for some reason they backed off.
I want cameras that have Apple AirDrop, messaging apps, popular social media apps and file-sync apps, along with a full cellphone, wifi, BT and GPS chip, running Android. It would be a dream to have my camera actually start syncing my photo shoot automatically when I plug it into the charger at home, and have it not be a shit experience.
OR wifi sync and phone sharing that doesn't suck. But no camera manufacturer has shown the ability to do that, so an android app mode is the next best thing so they can offload the software writing to much bigger tech companies instead.
My #1 annoyance with non-smartphone cameras is sharing the photos I just took with other people at the event. I'm not fucking around with the frankly awful wifi experience on most cameras, and I'm not going to bring a cable or SD card dongle to do so either.
#2 is the lack of built-in GPS. It's 2021; adding GPS is extremely cheap, especially if you've already done wifi anyway.
#3 is qi charging. After a shoot put the camera on the wireless dock and everything is syncing well with the local desktop computer.
Neither is going to happen. The photography market has split into three camps.
Casual users who care about apps will just use their phones and the quality will be on par with Micro-4/3 and likely APS-C with computational photography advances.
Prosumers will stick with smaller full frame optimised for style and old school usability e.g. Leica. Professionals will either use full frame for speed or move towards medium format which Fuji is leading the charge towards.
I've noticed that a bunch of camera brands have started making 'vlogger' cameras targeted at YouTubers and others. The camera I'm describing is pretty much what an 'influencer' camera would look like – the next stage of these vlogger cameras: someone who would benefit from their photos being higher quality and also wants a way to frictionlessly publish them quickly. A Ricoh GR with the features I describe would be the ideal version of this.
I disagree. Apps necessitate an app store, maintenance, updates, and reasonable SDKs. For something like a camera, apps as an expansion joint are a bridge too far. That's not to say that these cameras shouldn't provide an SDK for access to core functionality. I mostly would like Sony and others to open-source their camera software so bugs and features can be addressed by the community.
From what I can tell, many of the needs of all of this whole comment section would be achieved with a "thin client" camera. Some large, dumb, image sensor, with an IMU, that uses a smartphone for everything other than recording a raw image/video stream.
I completely agree. Can you imagine having smartphone-level ability to add new functionality to high-end cameras? The things you could do would be amazing. Although I suspect I/O and processing power may be limiting factors, given that even a full-on PC can take a while to perform many of these operations.