Sony's 3.7 Micron Sensors

Jim Kasson, someone I correspond with from time to time behind the scenes, was the first I know of to post about this, though it's something I had noted earlier and wondered whether it was worth speculating on: Sony Semiconductor now has four different sensor sizes with much the same pixel structure and technology.

That would be:

  • APS-C 28mp
  • Full Frame 61mp
  • Small Medium Format 100mp
  • Large Medium Format 150mp

These all appear to be the same basic BSI pixel design at about 3.7 microns, with Exmor dual gain capability, and using copper wiring to do the heavy bandwidth lifting. The next step is likely 3 micron pixels, with perhaps some additional new technologies added in. 
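
You can sanity-check that shared pitch with simple arithmetic: pitch is roughly the square root of sensor area divided by pixel count. Here's a minimal sketch in Python; the sensor dimensions are my approximations of typical active areas, not Sony's published specs:

```python
# Rough pixel-pitch check: pitch ~ sqrt(sensor_area / pixel_count).
# Sensor dimensions below are approximate active areas (my assumption),
# so expect a few percent of slop in the results.
import math

sensors = {
    "APS-C 28mp":      (23.5, 15.6,  28e6),   # width mm, height mm, pixels
    "Full Frame 61mp": (35.7, 23.8,  61e6),
    "Small MF 100mp":  (44.0, 33.0, 100e6),
    "Large MF 150mp":  (53.4, 40.0, 150e6),
}

for name, (w_mm, h_mm, pixels) in sensors.items():
    pitch_um = math.sqrt((w_mm * h_mm * 1e6) / pixels)  # mm^2 -> um^2
    print(f"{name:16s} ~{pitch_um:.2f} microns")
```

All four land in the 3.6 to 3.8 micron range, which is what you'd expect if the same pixel design is simply being tiled onto bigger pieces of silicon.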

Nikon, for some reason, has been dabbling with sensors right around 4.3 microns (20mp APS-C and 45mp full frame), though those two aren't exactly the same pixel size. It's unclear why Nikon picked those odd sizes, but the basic photodiode scales easily, so Nikon must have seen some benefit to going slightly off Sony Semi's standard releases.

But there's a bigger-picture thing to see here: it used to be that we got distinctly different moves in pixel size and technology at different dedicated camera sensor sizes, but now the same basic structure has rolled into basically all of Sony Semiconductor's sensors, so we're seeing more technology parity across the sizes. One aspect of this is that it tends to make the oft-stated rule of thumb about sensor size more accurate (e.g. full frame is about a stop better than APS-C, all else equal; well, now most everything else is equal).
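
That "about a stop" figure is just the sensor-area ratio expressed in stops. A quick check, using approximate sensor dimensions (my numbers, not official specs):

```python
import math

# Approximate active areas in mm^2 (my assumptions):
ff_area   = 35.7 * 23.8   # full frame
apsc_area = 23.5 * 15.6   # APS-C (Sony-style 1.5x crop)

print(math.log2(ff_area / apsc_area))  # ~1.2 stops
```

With the same pixel technology on both sides, that roughly 1.2 stop area advantage is now close to the whole story.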

Unfortunately, a less-discussed aspect is also being revealed: when a company like Sony Semi releases essentially the same tech across its entire lineup in a very short time period, that's often the hallmark of nearing the end of the line in an iterative technology progression. Much as Moore's Law has slowed in CPUs, whatever improvement curve we were following with image pixels has also slowed, to the point where the technology is simply being replicated upward through the sensor sizes.

What we need next is a new disruptive image sensor technology. 

The one that everyone has tried to perfect is what I call rollover saturation: in other words, break the saturation limit on an individual photosite without breaking anything else. I'm aware of at least four different basic approaches that have been attempted, so far not particularly successfully. Something else tends to break, unfortunately. Still, there's a lot of potential here if you can get past the issues. The whole notion of limited dynamic range would completely disappear if you could get past the saturation point. (The other end is the noise floor, and we're now in a situation where we fairly accurately record the randomness of photons, so we're not likely to improve the floor much; it's the saturation ceiling we need to break through.)
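
To see why the ceiling matters, note that engineering dynamic range is just the ratio of the saturation level (full-well capacity) to the noise floor, expressed in stops. A quick sketch with hypothetical, illustrative numbers (not measured values for any particular sensor):

```python
import math

def dynamic_range_stops(full_well_e, noise_floor_e):
    """Engineering dynamic range in stops: log2(saturation / floor)."""
    return math.log2(full_well_e / noise_floor_e)

# Hypothetical values, in electrons:
print(dynamic_range_stops(50_000, 3))    # ~14.0 stops
# Doubling the saturation ceiling buys a full extra stop:
print(dynamic_range_stops(100_000, 3))   # ~15.0 stops
```

Since the floor is already near the physical limit, doubling the ceiling is where any future full stop of dynamic range has to come from.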

What Sony Semi says they're actually working on next is something completely different, though: Artificial Intelligence (AI) on (or with) the sensor. Some logical candidates exist for how to focus that AI: noise reduction, object/edge recognition, and color integrity (or manipulation) come to mind as clear possibilities. Other ideas get slightly into the "crazy" realm, but I'm all for crazy, as that is often where true innovation is found. 
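
Sony hasn't published details of what such on-sensor AI would compute, so anything specific here is speculation. But to make the second candidate concrete, here's a toy sketch of the kind of low-level operation (gradient-based edge detection) an on-sensor inference block might run before readout; purely illustrative, not anything Sony has described:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel kernels, a toy
    stand-in for the feature extraction on-sensor logic might do."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.empty((h, w))
    gy = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    return np.hypot(gx, gy)
```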

I'm still a little leery about AI, though, as it suggests that we're moving away from collecting more accurate real data to creating faux data that mimics reality.

The real problem for dedicated cameras isn't really the image sensor, though. We've got plenty of good data to work with from pretty much any modern camera with a decent lens out front. The real problem is the fact that the data from an image sensor is two dimensional. 

You're probably thinking I'm about to talk about curved image sensors. Nope. True, a curved image sensor solves one optical problem when done correctly in conjunction with lens design, but it's still basically a two-dimensional approach.

Where the smartphones have been going for some time is in building depth data. You can do that using multiple cameras or by using Time of Flight type devices (a sensor looking at projected laser light reflections). You can also do it using some other techniques, but the two I mention are the ones that have the most work behind them at this point. 
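
In idealized form, the math behind both approaches is simple. A minimal sketch, with my own toy numbers, ignoring calibration, noise, and occlusion:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s):
    """Time of Flight: the projected pulse goes out and comes back,
    so subject distance is half the round trip."""
    return C * round_trip_s / 2.0

def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Two cameras: depth falls off as 1/disparity for a given
    focal length (in pixels) and camera separation (baseline)."""
    return focal_px * baseline_m / disparity_px

print(tof_depth_m(13.3e-9))            # ~2.0 m
print(stereo_depth_m(1500, 0.012, 9))  # ~2.0 m
```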

As we throw more AI and computational photography at the underlying data, three-dimensional data sets become much more useful than two-dimensional ones. To me, this is where the dedicated camera makers are going to have the most trouble keeping the smartphones from continuing to erode the bottom end of the camera market. Already there are things you can do with some smartphones (such as remap lighting) that are tough to do with dedicated cameras, and this problem is only going to get worse as long as we're stuck in the 2D world.
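
That lighting remap is a good example of why depth matters: once you have per-pixel depth you can estimate surface normals and re-shade the scene under a light that was never there. A crude Lambertian sketch (nothing like an actual phone pipeline, which layers far more machine learning on top):

```python
import numpy as np

def relight(depth, light_dir):
    """Crude relighting from a depth map: estimate surface normals
    from depth gradients, then apply Lambertian shading from a new
    light direction. Toy illustration only."""
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    # Normal of the surface z = f(x, y) is (-dz/dx, -dz/dy, 1), normalized.
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, None)  # Lambert: max(0, n . l)

# Toy depth map (a bump) lit from the upper left:
depth = np.fromfunction(lambda y, x: -np.hypot(x - 32, y - 32), (64, 64))
shaded = relight(depth, light_dir=(-1.0, -1.0, 2.0))
```

Try doing that from a flat 2D capture and the problem is obvious: without depth there are no normals to shade.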

Thing is, though, as long as the higher end of the camera enthusiast market keeps reacting positively to "more pixels" and "more dynamic range" and the basic, simple, iterative things that are going on with large image sensors, the camera makers are going to keep finding themselves going down the wrong rabbit hole.

At this stage, I'm still welcoming "more," because we haven't quite hit the limits where small gains in pixels and dynamic range just aren't visually useful in any way. But we're getting close. Plus, each iteration generates another group of Last Camera Syndrome users: people who fail to see the gain they'd get for the price they'd pay, and thus just happily stay with what they own.

Simply put: we're still in the rough patch for dedicated camera makers. The pool of customers is getting smaller, but we still have the same large sharks trying to feed in it. Basic sensor iteration isn't changing that; indeed, it's aggravating the problem by shrinking the pool further.
