Reaction to some of my recent articles has led me to a conclusion: for most folk, the train has gone as far as they care to take it. That doesn’t mean the train won’t continue to chug out into the hinterland with a few hardy souls aboard.
What I’m talking about is image quality. I’m fairly convinced at this point that most people can’t see any tangible differences in their images past a certain point. For most people, that point seems to be 24mp, and it seems to hold largely regardless of format. I’m not finding a lot of people who can see the subtle but real differences between the two 36mp full frame cameras currently available, for example.
I suspect some of this has to do with monitors, amongst other things. Even if you’re pixel peeping, if all you’ve got is a 24” sRGB gamut display then there are simply things you probably won’t see as we keep pushing data capture beyond its current boundaries. I may have to recalibrate my reviews again, as further pushes in megapixels are just going to muddy the waters even more.
For example, the two iMac models have displays of 1920x1080 and 2560x1440, or 2mp and 3.7mp respectively. With a 24mp camera, when you pixel peep on the 27” iMac you’re looking at 2560 pixels of a 6000 pixel input, and 1440 of 4000 on the other axis, so you’re seeing only about 15% of the available pixels. At some point you’re going to bounce back out and ask for Fit in View display, and that’s going to look pretty darned good from virtually every 24mp camera at virtually every setting you’d tend to use. So even if you can see some small differences in the pixels—which again, I’m convinced we’re at a point where few can, especially at “normal” ISO values—you’re going to dismiss their importance.
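The pixel-peeping arithmetic above is simple enough to sketch out. Here's a minimal calculation using the article's own numbers (a 6000x4000 24mp image viewed at 100% on a 2560x1440 27” iMac panel):

```python
# How much of a 24mp (6000x4000) image fits on screen when viewing
# at 100% ("pixel peeping") on a 27" iMac (2560x1440) display.

image_w, image_h = 6000, 4000      # 24mp sensor output
display_w, display_h = 2560, 1440  # 27" iMac panel

visible_fraction = (display_w / image_w) * (display_h / image_h)
print(f"Visible at 100%: {visible_fraction:.1%}")  # prints "Visible at 100%: 15.4%"
```

In other words, at 100% magnification you can inspect roughly a sixth of the frame at a time; the rest is off screen.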
What about prints? Well, if a 27” monitor is good enough when displaying the full image, then I’d guess most folk would also be satisfied with a 22” or so print. Realistically, how many people print larger than that, ever?
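A quick way to see why a 22” print is comfortable territory for a 24mp camera: divide the pixels by the print dimension. The 240–300 ppi range used below is a common rule of thumb for high-quality inkjet output, not a figure from the article:

```python
# Print resolution for a 24mp image (6000px on the long side)
# printed at 22 inches on the long side.

image_long_px = 6000   # long-axis pixels from a 24mp sensor
print_long_in = 22     # long-axis print size in inches

ppi = image_long_px / print_long_in
print(f"{ppi:.0f} ppi")  # prints "273 ppi" -- within the 240-300 ppi rule of thumb
```

So even at 22 inches, a 24mp file still has pixels to spare; more megapixels buy you nothing visible at that size.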
If 24mp DX really is a bit of an upper barrier, then the camera makers face a new problem. Smartphones are going to continue working their way up in image quality from the bottom. As I’ve noted on gearophile.com, smartphones are likely to move to multi-sensor, multi-lens options that are used to recalculate the data into a better representation of what’s in front of the phone, and that will include zooming capability, too. So smartphones will continue to encroach on the low end of the digital camera realm, gobbling higher and higher up the compact camera chain.
Meanwhile, if making a 36mp DX camera won’t result in 99% of users seeing or appreciating the advance, we get a bar at the top of where cameras can be sold, too. That makes the territory between those convenient smartphones and a “good enough” top of the line camera get smaller and smaller every day. It’s no wonder that the camera makers are poking at different camera concepts, trying to see if one of them might be the solution to their declining sales numbers.
Realistically, the answer is what I’ve been writing for over a decade now: the real issue is that the user workflow at the camera end today is the same as it was in film days. Moreover, that workflow needs to change with the changing ways in which images are used (some Facebook APIs can change weekly; certainly no major “connected” API is static, let alone still tied to DOS 8.3 filenames). Camera designs are mostly focused on dealing with what gets to the sensor, but not nearly as good at dealing with what the sensor/ASIC sends to memory. In other words, the cameras are still mostly just input devices; no one seems to think of them as anything other than a single pipe output device. That’s the real opportunity in designing a new camera, not how good its sensor is or how many buttons you can put on it, or even things like weatherproofing.
Now don’t get me wrong. I did not write that there are no worthwhile image quality gains still to be had. Me, I want pixel level data that is better than what we have. We’re still not at theoretical dynamic range for most sensors. We still have low level rounding and precision errors in the data on many cameras. We still don’t have perfect spectrum management. And yes, more pixels in the same sensor space, despite diffraction, still mean more potential resolution and/or acuity. So can we have 36mp DX and 104mp FX cameras and get more out of them? Sure. We might want lenses that are better matched, as we can take some defects out in post processing (chromatic aberration, for instance) but can’t really recover others, such as clear corner resolving problems.
However, the number of folk that are doing work that would greatly benefit from such gains is very small, and the number of folk that would pursue such gains simply from the hobby aspect of wanting the best is also limited.
That’s why so many people are getting off the train. They’ve come as far as they care to. Why should they spend another thousand or more dollars on gear that doesn’t produce any useful difference they can see?
To some degree, that’s why some people like the Sigma cameras with the Foveon sensors: they can see the acuity gain from those sensors versus Bayer sensors of similar pixel counts. So, non-Bayer types of sensors potentially have the ability to reset the clock slightly in this respect. Nevertheless, the camera makers are now constrained: smartphones define the low end of what they can sell, “good enough” constrains the top end. The space is getting tight, small, and crowded. My guess is we’re going to see some companies fail as a result.
I’ll have more to say on this later in the week.