The "More is Better" Problem

I've long been on record as writing (and saying) that more sampling is always better. Put in a camera context, more pixels are better, all else equal. 

Recently I've gotten some pushback on that from engineers who are at the forefront of semiconductors. I need to adjust my statement ever so slightly, I believe: we're currently still in the "more sampling is always better" phase.

Let's talk about full frame for a moment. In recent times we've gone from 24 to 36 to 42/45, and now to 60mp. The catch with "more sampling" is that the actual linear increase in resolution gets smaller and smaller with each step, while the recorded diffraction impacts get clearer and clearer and start nibbling at lens capability at ever wider apertures.
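If you want rough numbers on that diminishing return, here's a quick back-of-the-envelope sketch (my own simplification in Python; the 36x24mm sensor size, the 3:2 aspect math, and the "Airy disk spans two pixels" rule of thumb are assumptions, not a rigorous optical model):

```python
import math

# Full frame is roughly 36mm x 24mm (3:2 aspect ratio).
WIDTH_MM = 36.0

def pixels_wide(mp):
    """Horizontal pixel count for a 3:2 sensor with the given megapixels."""
    return math.sqrt(mp * 1e6 * 3 / 2)

def pixel_pitch_um(mp):
    """Approximate pixel pitch in microns."""
    return WIDTH_MM * 1000 / pixels_wide(mp)

def diffraction_fstop(mp, wavelength_um=0.55):
    """f-stop at which the Airy disk (~2.44 * lambda * N) spans about two pixels."""
    return 2 * pixel_pitch_um(mp) / (2.44 * wavelength_um)

for mp in (24, 36, 45, 60):
    print(f"{mp}mp: {pixels_wide(mp):.0f} pixels wide, "
          f"{pixel_pitch_um(mp):.1f} micron pitch, "
          f"diffraction starts to show past about f/{diffraction_fstop(mp):.1f}")
```

Run that and you'll see the point: 60mp is 2.5 times the pixels of 24mp, but only about 1.6 times the linear resolution, and the aperture where diffraction starts to show moves from roughly f/9 down to roughly f/5.6.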

In conjunction with a computational imaging talk I was putting together just after the turn of the century, I took the time to run the complex math that starts to get involved. I actually needed help from a couple of friends whose math skills are better than mine to complete the analysis. Sensors were Bayer APS-C at the time, so that's the type and size we worked with. 

The results were that past about 24mp, while there would be recorded data differences, you ran into almost a wall in terms of getting something useful out of those differences. If you keep an AA filter over the sensor, the antialiasing and diffraction impacts simply get in your way. If you take the AA filter off, the faux data that gets generated beyond the Nyquist frequency is chaotic and doesn't reliably net you anything real. Add in the discrete cosine transforms used by JPEG compression, the Bayer demosaic artifacts, and more, and 24mp sure seemed like a logical stopping point for APS-C. That's what I wrote in 2003, and that's basically what I still believe today.
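If you want to see what that faux data looks like in the simplest possible terms, here's a one-dimensional toy example (mine, not part of the original analysis): detail finer than the sampling grid doesn't just disappear, it comes back masquerading as coarser detail that was never in the scene.

```python
import math

# Sample rate of 1 sample per unit means the Nyquist limit is 0.5 cycles/unit.
sample_rate = 1.0
true_frequency = 0.7   # "detail" finer than the sampling grid can actually resolve

# What gets recorded at each sample position...
samples = [math.sin(2 * math.pi * true_frequency * n) for n in range(20)]

# ...is indistinguishable (apart from sign) from a coarser 0.3 cycles/unit pattern.
alias_frequency = abs(sample_rate - true_frequency)
aliased = [math.sin(2 * math.pi * alias_frequency * n) for n in range(20)]

for s, a in zip(samples, aliased):
    assert math.isclose(abs(s), abs(a), abs_tol=1e-9)

print(f"A {true_frequency} cycle/unit pattern sampled at {sample_rate}/unit "
      f"records as {alias_frequency:.1f} cycles/unit: false detail, a.k.a. aliasing.")
```

The AA filter's whole job is to suppress that too-fine detail before it gets sampled; take the filter off and you can record patterns (moiré, false color) the scene never contained.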

As you probably noted, that didn't stop Sony Semiconductor from going to 26mp, Samsung to 28mp, and now Canon to 32.5mp with APS-C sensors. At 60mp, the full frame sensors are now hitting that same level of pixel density and thus expose the same issues as >24mp APS-C.
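The pixel density equivalence is easy to check (quick arithmetic in Python; the 36mm and 23.5mm sensor widths and the 3:2 aspect ratio are round-number approximations):

```python
import math

def pitch_um(megapixels, sensor_width_mm, aspect=3 / 2):
    """Approximate pixel pitch in microns for a given sensor width and pixel count."""
    pixels_wide = math.sqrt(megapixels * 1e6 * aspect)
    return sensor_width_mm * 1000 / pixels_wide

print(f"60mp full frame: {pitch_um(60, 36.0):.2f} microns")   # ~3.8
print(f"26mp APS-C:      {pitch_um(26, 23.5):.2f} microns")   # ~3.8
```

Both work out to roughly 3.8 microns of pixel pitch, which is why a 60mp full frame sensor inherits the same issues as the >24mp APS-C crowd.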

Of course, during that same period just after the turn of the century I also remember a respected camera engineer telling me we'd never have 1 micron photosites, because they wouldn't render anything useful ;~). Obviously wrong. But why?

Basically, when engineers hit a ceiling, they start getting creative. 

The most common creative approach at the moment is to apply AI-type analysis to the data structure you have and create a "better" or "fuller" structure. The pixels are no longer "real," but they look just fine. Both Topaz Labs and Skylum seem to be deep into doing this after the fact with post processing software, but we're going to see more companies try to move this into the hardware (and Sony Semiconductor has basically announced this as a goal, too).
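To be clear about what's new here: conventional upsizing just blends neighboring pixels, which is the baseline these AI tools are trying to beat by inferring detail that was never captured. A minimal sketch of the conventional approach, using Pillow (the file names here are hypothetical):

```python
from PIL import Image

# Conventional resampling: every new pixel is a weighted blend of existing neighbors.
# ML-based tools instead synthesize detail that the sensor never recorded.
img = Image.open("photo_24mp.jpg")                      # hypothetical input
big = img.resize((img.width * 2, img.height * 2),       # 2x linear = 4x the pixels
                 Image.LANCZOS)
big.save("photo_96mp_upsampled.jpg", quality=92)
```

Lanczos and friends can only smooth their way up; the AI approaches synthesize plausible edges and texture, which is exactly why their results can look better than they have any right to.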

What could possibly be driving the need for more pixels in cameras (and smartphones)? After all, the number of folk who ever print larger than 8x10" or display photos on more than an HD device (2mp) is incredibly small.

Well, it has been small, but it's almost certainly going to get larger. 

That's driven by displays. HD was 2mp, 4K is 8mp, and 8K will be about 33mp. At the point where your living room wall is no longer painted gypsum but rather one large flat display, we'll need far higher pixel counts in order to put excellent images on it. It's clear that's where we're headed. The only question is when we'll get there.
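The arithmetic behind those numbers is simple enough (standard display resolutions, quick Python check):

```python
displays = {
    "HD (1080p)": (1920, 1080),
    "4K UHD":     (3840, 2160),
    "8K UHD":     (7680, 4320),
}

for name, (w, h) in displays.items():
    print(f"{name}: {w} x {h} = {w * h / 1e6:.1f}mp")

# HD ~2.1mp, 4K ~8.3mp, 8K ~33.2mp: each step quadruples the pixel count.
```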

One of the things I started experimenting with a few years ago is taking low pixel count images and using them on large displays. They look bad. I apparently wasn't the only one to try that and come to that conclusion. Just as we got line doublers back in the last days of the original NTSC, I'm now seeing plenty of data interpolation capabilities appearing everywhere, some of which use AI to try to identify the objects being resized. The 4K television I bought had a rudimentary form of data interpolation (US cable TV topped out at 1080i at the time), but what I'm seeing now are much more complex algorithmic, and yes, AI approaches.

A couple of years ago I would have said you probably couldn't do reasonable upsizing on existing consumer hardware in anything approaching a reasonable time frame. Today, I acquiesce. I'm still not sure you can do it for real time video, but processing a lower pixel count still image into something that looks good (or maybe I should say looks better than it should) on a big screen is starting to take very little time on a high end workstation.

Pretty much any trend eventually ends (or at least plateaus). But with a 5K display on my desk and likely an 8K or larger screen coming to my living room some day, I don't think we're near the end of this. We're simply going to find we like images with more pixels better than ones with fewer pixels.

So do I want to explore the Sony A7Rm4 (60mp) and the Fujifilm GFX100 (100mp)? You bet. I like living on the front edge of tech. It's where I grew up (Silicon Valley; Cupertino to be exact), it's where I spent most of my business life, and I am still fascinated by the game of trying to extract "more" from what basically starts as sand. 

Here's the thing, though: the camera makers are forgetting the primary user problems while pursuing problems we don't really have yet. 100mp is not yet solving a user problem. Sharing images would be. While I'm happy that some engineers are chasing down the pixel rabbit hole, that's not the rabbit hole that will pay their current salary.

So I find myself on the same soapbox today as I've been on: yes, I'll take more pixels, and please figure out how to make cameras share images more easily.
