The Right Answer to the Wrong Question

I think everyone pretty much missed some very clear signals lately. Let me explain.

One of the most frequent questions I get is "when will sensors get better?" This is the wrong question, but let me answer it first before answering the right question.

Sensors, like most semiconductors, tend to get better with time. The relevant question isn't whether they'll get better, but how they'll get better. What happens is this: every once in a while you get a big bang where things move rapidly forward into a new realm due to a technology breakthrough, but most of the time you just get minor improvements through iteration and extension. 

We're in the iteration and extension realm right now. In particular, new sensors are mostly trying to lock down slightly more accurate DNs (digital numbers, the bit values in raw files) and faster offloading of data from the sensor. 

"What about dynamic range?" you ask? Done deal for the time being; we're about at the edge of what can be done with current technologies, and that's more than we need for any output we use (of course if you miss exposure or think that moving shadows up six stops looks good, all bets are off ;~). Worse still, most current APS-C and full frame sensors are pretty accurate at recording the randomness of photons. What that means is that the "floor" of the dynamic range is pretty close to as good as it'll get. We'll see very small gains there as a few odds and ends get cleaned up in the transistor-level electronics of photosites, but nothing dramatic. 

So-called organic sensors take a different approach than the electronics: collecting more of the light through changes to the filtration layers and improved efficiencies. Top sensors right now tend to be about 60% efficient at collecting light, so even if organic sensor technology could get all the way to 100% efficiency, that's a gain of less than a stop. We'd still be accurately recording random photons, just doing it a little deeper into the shadows. 
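To put a rough number on that, taking roughly 60% collection efficiency as today's baseline and a perfect 100% collector as the theoretical ceiling:

$$\log_2\left(\frac{1.00}{0.60}\right) \approx 0.74\ \text{stops}$$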

Which brings us to rollover techniques at the highlight end: just flip a bit to indicate a photosite has saturated and start collecting new data again, right? Just like adding 5 + 6 in math and having to roll over into the next digit (write down the 1, carry the 1). The problem with this is simple: collecting more highlight information doesn't necessarily give you anything—you're still going to have to condense it to the reduced dynamic range of your output device to make it useful—and you have the issue of subject motion if you're keeping the capture going longer to collect that extra highlight information. (If you're not keeping the capture going, then you're burying the shadows more, and we can't pull much more out of shadows.) Worse still, keeping electron storage around in the sensor tends to impact the noise floor, so you might gain at the top and lose at the bottom.
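If the carry analogy isn't clear, here's a toy sketch of the rollover bookkeeping; the full-well number is made up and no shipping sensor works exactly this way, but it shows that the technique only buys highlight headroom, not shadow information:

```python
# Toy model of a "rollover" photosite: count full-well overflows (the carry)
# plus whatever charge remains, instead of simply clipping at saturation.
FULL_WELL = 50_000  # hypothetical full-well capacity, in electrons

def rollover_read(total_electrons: int) -> tuple[int, int]:
    """Split a (possibly saturating) signal into overflow count + residual."""
    overflows = total_electrons // FULL_WELL   # how many times the well "rolled over"
    residual = total_electrons % FULL_WELL     # charge left in the well at readout
    return overflows, residual

def reconstruct(overflows: int, residual: int) -> int:
    """Rebuild the highlight value from the carry count plus the residual."""
    return overflows * FULL_WELL + residual

if __name__ == "__main__":
    bright_highlight = 137_500                 # would normally clip at 50,000
    o, r = rollover_read(bright_highlight)
    print(o, r, reconstruct(o, r))             # 2 37500 137500
```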

No, it isn't the sensor technologies that enable "better imagery" at the moment. You're asking the wrong question. 

The right question is "how will we get better images than we've been getting?"

Surprise: the camera companies (and smartphone makers) have already answered this question! I told you that you've missed some pretty clear signals.

Canon and Nikon specifically said that "better lenses" make better images when they introduced their RF and Z mounts. They're correct, and they've already proven it with just the initial lenses from each mount. For example, a Nikkor 35mm f/1.8G on a D850 simply doesn't produce as good a pixel-level result as a Nikkor 35mm f/1.8 S on a Z7. Not that the D850 result is bad, but the Z7 result is clearly better, all else equal. 

As pixel counts have gone up (and continue to), what's happening at the pixel level other than dynamic range becomes incredibly important; otherwise we really don't need those pixels, right? At base ISO, it's not dynamic range that's the biggest issue, it's edge acuity and the removal of anti-aliasing. 

So, removal of spherical aberration, coma, and other optical defects becomes very important, and those are lens design things. Meanwhile, Bayer pattern filtration and the interpolation that it requires is also an issue in terms of getting accurate pixels with strong integrity, and that ends up as camera design features (e.g. pixel shift). 
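To illustrate why pixel shift helps, here's a minimal sketch of combining a four-shot sequence, assuming one-photosite offsets, an RGGB pattern, and perfectly aligned frames; real raw processing (black levels, white balance, motion handling) is omitted:

```python
import numpy as np

# Channel indices: 0 = R, 1 = G, 2 = B, laid out as an RGGB Bayer pattern.
BAYER = np.array([[0, 1],
                  [1, 2]])

# The four one-photosite sensor positions of an assumed 4-shot sequence.
OFFSETS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def combine_pixel_shift(shots):
    """shots[k] is the mosaiced (H, W) frame captured at OFFSETS[k].
    Returns an (H, W, 3) image in which every channel at every pixel was
    measured directly instead of interpolated from neighbors."""
    h, w = shots[0].shape
    yy, xx = np.mgrid[0:h, 0:w]
    rgb = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 3))
    for (dy, dx), frame in zip(OFFSETS, shots):
        chan = BAYER[(yy + dy) % 2, (xx + dx) % 2]   # filter color each pixel saw
        for c in range(3):
            mask = chan == c
            rgb[..., c] += np.where(mask, frame, 0.0)
            counts[..., c] += mask
    return rgb / counts   # R and B sampled once per pixel, G twice (averaged)
```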

Thus, one of the things you should be looking at more closely is optical designs and whether they're moving what's presented to the sensor forward (they are; even in DSLRs, Nikon's most recent F-mount lenses have been far better than the ones they replaced, e.g. the 70-200mm f/2.8E). It seems that everyone has upped their lens game recently. Canon (RF), Nikon (F and Z), Sigma (Art), Sony (G and GM), and Tamron have all started producing lenses that are extremely difficult to review because their optical faults are far smaller and more nuanced than those of the lenses that came before them. 

The other path to better pixels that you need to pay attention to is something you now see in some high-end cameras as well as in the iPhone 11 Pro's Deep Fusion capability: some form of multiple-exposure processing used to pull out more detail and bust the Bayer bastardization of edges. 

By way of introduction, here's one-tenth of an iPhone 11 Pro's output (1.2mp) with the exposure equivalent to f/2 at 1/13 second at ISO 200 (it's impossible to exactly tell what the iPhone used in Deep Fusion mode, as it varies settings for the multiple exposures it uses, so this "exposure" is what I measured with an ILC I had handy):

[Image: iPhone 11 Pro Deep Fusion sample, 100% crop]

Note the detail in the wood grain, despite low light. Note the lack of noise. This is Deep Fusion at its best (it can be wonky, and it doesn't work for some things, like the very wide angle lens on that phone). I see similar gains when using the pixel shift capabilities of some of the ILCs, too. 

Indeed, I've started shooting landscapes and some architectural shots with multiple exposures and running median processing to get back some of what the Bayer filtration takes away (as well as to remove noise, which helps reveal detail). Coupled with a great lens, the integrity of the pixels looks far, far better. Similarly, the four-shot pixel shift on the Sony A7R m4, shot and processed well, does the same thing. 
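The median step itself is simple once the frames are registered; here's a minimal sketch that assumes already-aligned frames loaded as linear float arrays (the alignment, which is the hard part, is omitted, and the loader below is hypothetical):

```python
import numpy as np

def median_stack(frames: list[np.ndarray]) -> np.ndarray:
    """Combine aligned exposures by per-pixel median. Outliers (noise, moving
    subjects) get rejected, which is part of what helps reveal fine detail."""
    stack = np.stack(frames, axis=0)      # shape: (N, H, W) or (N, H, W, 3)
    return np.median(stack, axis=0)

# Usage sketch (load_linear_frame is a placeholder, not a real API):
# frames = [load_linear_frame(path) for path in paths]
# result = median_stack(frames)
```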

Do I think we need better sensors? Not particularly. I'm all for any gain that can be made at the photon-to-electron-storage conversion, obviously, but when I look at images made with better lenses and multi-image processing techniques, that's where I'm seeing the progress that's useful at the moment in making better images. 

Of course, almost all this progress I just noted is only useful and visible for one of two things now: (1) immense cropping; or (2) really large output. If all you're doing is staying within the realm of what you might display on a 4K screen (8mp) or print to 13x19" (24mp), then you probably don't need a high megapixel camera or a better sensor than what we have today. 
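For reference, here's where those two figures come from, assuming UHD 4K for the screen and 300 ppi as "good" print resolution (my assumption, not a universal standard):

$$3840 \times 2160 \approx 8.3\,\text{MP} \qquad (13 \times 300) \times (19 \times 300) = 3900 \times 5700 \approx 22.2\,\text{MP}$$

In other words, a 24mp camera already covers a 300 ppi 13x19" print.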

Epilogue: All that said, one thing you give up when you go for absolute pixel-level integrity is a long-accepted style. Those optical aberrations in older lenses produce a "look": an old-school look. Perfect pixels with no Bayer anti-aliasing produce a different "look": modern precision. One argument I get into all the time with others is over lenses like many of Fujifilm's: that company tends towards old-school looks with their current optics. Thus, the Fujifilm lenses tend to have clear coma and spherical aberration that slightly distorts outer areas, plus they render the focus-to-out-of-focus transition in a way that looks like what we got in the film days. Nothing wrong with that, but it's not accurate to reality.

We've had the same discussion in filmmaking, too. Filmmaking runs at 24 fps because we're used to the "look" of interframe blur. When filmmakers like Douglas Trumbull tried to move to something else—Trumbull's version in the late '80s was Showscan, large format 70mm film at 60 fps—they met a lot of resistance, which persists to this day (note the more recent controversy over Peter Jackson's use of 48 fps, and the resulting higher shutter speeds, in The Hobbit). With film, making something look too real can be quite problematic; pans look wrong to us, and the suspension of disbelief can be lost. 

With stills, the opposite is true, I think. Removing anti-aliasing and lens defects is a bit like lifting a veil that was over a painting: you can see the detail and intent better. I do think some people go too far and remove critical depth cues (e.g. focus stacking a landscape from two feet to infinity and then cranking up sharpening and saturation, as many are doing these days, is eye-jarring and very unreal). But getting pixel-level integrity is a good thing, I believe.

I remember the first time I was using a top lens on a top camera at a sporting event and then ran deconvolution sharpening on the raw file along with some other careful processing tweaks. The threads on the player's jersey were clearly visible. The photographer in the press box next to me looked at the detail in my image and said "how did you do that?" I looked at his images: JPEGs with too much noise reduction and some wonky style that was pushing saturation so uniform colors weren't right.
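Deconvolution sharpening is easy to experiment with these days; here's a minimal sketch using scikit-image's Richardson-Lucy routine, with a guessed Gaussian kernel standing in for the real point-spread function (the kernel size, sigma, and iteration count are illustrative assumptions, not the settings used on that file):

```python
import numpy as np
from skimage import restoration

def gaussian_psf(size: int = 9, sigma: float = 1.5) -> np.ndarray:
    """A normalized Gaussian point-spread function as an assumed blur model."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def deconvolution_sharpen(image: np.ndarray, iterations: int = 30) -> np.ndarray:
    """Richardson-Lucy deconvolution on a linear grayscale image scaled to 0..1."""
    return restoration.richardson_lucy(image, gaussian_psf(), iterations)
```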

And that's the trick: optimal capture with optimal processing. Our sensors are already excellent. What's happening today is that we're able to use better lenses and do multi-image processing more easily. What the iPhone proves is that the latter can be done in camera without you noticing any processing delay. What the Canon RF and Nikon Z lenses (as well as others) prove is that even today's sensors collect clearer and better data when you improve the thing that renders the scene onto the sensor (i.e., the lens). 

And all that said, it's still the moment and the composition that drives the most impressive images you'll see. Don't get lost in the weeds looking for the tree. 



