Re: BSI and Stacked sensors

(commentary)

Sony’s recent A7rII and RX camera announcements have directed a lot of questions about sensors into my inbox. Back Side Illumination (BSI) and Stacked sensors have a lot of folks scrambling to understand what’s happening technology-wise. So here are some of the things being asked, along with my quick, to-the-point answers:

Q: Will BSI give you better image quality?
A: Trick question, so let’s split it in two.

Q: Will BSI deliver higher dynamic range?
A: Yes, but the benefit declines as sensor size increases, all else equal. In very small sensors (smartphones and compact cameras, for instance), traditional sensor designs meant that part of the top of the photosite was taken up by data and power lines. That reduced the area actually collecting and converting light to electrons; the fraction of the photosite that does collect light is called the Fill Factor. Microlenses were one way of increasing the effective Fill Factor (pointing light to the collection area), and the walls alongside the photodiode also played a role (some photons got absorbed by the walls if they came in at an angle).

BSI sensors move most of the electronics underneath the actual photo collection area, leaving the top of the photosite fully available to collect light; the Fill Factor becomes essentially 100%. Thing is, on very large sensors (e.g. full frame), the Fill Factor was already quite high, and getting higher as companies moved to smaller-process fabs. (Smaller process means smaller power and data lines and supporting electronics.)
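A toy calculation shows the trend. Assume (hypothetically) that a fixed-width border around each photosite is lost to wiring in a front-side design; that same border costs a small photosite proportionally far more area than it costs a large one:

```python
# Toy fill-factor model (illustration only; the wiring width is a
# made-up constant, and real sensor layouts are far more complex).

def fill_factor(pitch_um, wiring_um=0.25):
    """Fraction of the photosite area left free to collect light."""
    active = max(pitch_um - 2 * wiring_um, 0.0)
    return (active / pitch_um) ** 2

# Pitches from smartphone-ish up to full-frame-ish (microns):
for pitch in (1.2, 2.4, 4.5, 6.0):
    print(f"{pitch:3.1f} um pitch: fill factor ~ {fill_factor(pitch):.0%}")
```

Under these made-up numbers, the small photosite loses roughly two-thirds of its area to wiring while the large one loses a small fraction, which is why BSI matters so much more at the small end.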

We have one direct apples-to-apples comparison we can measure: Sony took the original RX100 1” sensor and converted it to BSI for a subsequent iteration of the camera. The overall dynamic range improvement worked out to be about a third of a stop. In smaller sensors the difference has been larger than that; in larger sensors it should be smaller (again, all else equal; you could create a large sensor with lots of small photosites, and there we’d see an improvement, but going from a 36mp non-BSI sensor to a 42mp BSI one isn’t likely to show a lot of gain just from the BSI component).
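To put rough numbers on that: if you make the simplifying assumption that the extra collected light translates directly into usable shadow signal, the best-case dynamic range gain is the base-2 logarithm of the fill-factor ratio. A sketch (the fill-factor values here are hypothetical):

```python
import math

def dr_gain_stops(old_ff, new_ff=1.0):
    """Best-case DR gain, in stops, from raising the fill factor."""
    return math.log2(new_ff / old_ff)

for ff in (0.50, 0.80, 0.90, 0.95):
    print(f"fill factor {ff:.0%} -> ~100%: gain ~{dr_gain_stops(ff):.2f} stops")
```

Note how a sensor starting near 80% fill factor lands right around the third of a stop measured on the 1” sensor, while one already near full fill factor gains almost nothing.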

Thus, I don’t expect much change to the dynamic range for full frame by moving to BSI. Certainly not enough to get excited about.

Q: Will BSI deliver some other image quality advantage?
A: Yes. It likely improves corner behavior (color shift, vignetting) with lenses that don’t deliver light to the sensor telecentrically (e.g. many wide angle lenses). It also likely removes the need for complex offset microlenses. I’ve also seen a few scholarly papers indicating there may be other small additional gains.
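One way to see the geometry: in a front-side design the photodiode sits at the bottom of a “well” formed by the wiring layers, so light arriving at a steep angle gets displaced sideways before it reaches the diode. A crude 1D sketch (the stack depths are invented purely for illustration):

```python
import math

def capture_fraction(angle_deg, stack_depth_um, pitch_um=4.5):
    """Toy 1D model: oblique light is displaced sideways by
    depth * tan(angle) before it reaches the photodiode."""
    shift = stack_depth_um * math.tan(math.radians(angle_deg))
    return max(0.0, 1.0 - shift / pitch_um)

for angle in (0, 10, 20, 30):  # incidence angle, degrees
    fsi = capture_fraction(angle, stack_depth_um=3.0)  # wiring above the diode
    bsi = capture_fraction(angle, stack_depth_um=0.5)  # wiring below the diode
    print(f"{angle:2d} deg: front-side ~{fsi:.0%}, BSI ~{bsi:.0%}")
```

In this toy model the front-side design loses a large fraction of steeply angled corner light while the BSI design barely notices it, which is the mechanism behind the corner improvements.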

Q: So is the BSI in the new A7rII something to get excited about?
A: I’m sure Sony’s excited. They can market it as the largest BSI sensor around (a claim previously held by one of their big competitors, Samsung, with the APS-C sized NX1). Personally, I expect some modest gains in image quality, as that bar is always moving with sensors, but I still believe that giving us 14-bit uncompressed raw would be more useful in achieving optimal image quality in the A7 models ;~) (To date, Sony uses only 11-bit lossy compressed raw files.)
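For context on that raw complaint: each extra bit doubles the number of tonal levels a raw value can encode, so the arithmetic behind the 11-bit vs. 14-bit gripe is simple:

```python
# Tonal levels per channel at various raw bit depths:
for bits in (11, 12, 14):
    print(f"{bits}-bit raw: {2 ** bits:,} tonal levels per channel")
```

(Sony’s actual scheme is a lossy compression rather than a straight bit-depth cut, so “11-bit” is shorthand, but the gap in encoding precision is real.)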

Q: Why 42mp?
A: I have no idea. The difference in resolution from the 36mp sensor is visually meaningless. Most studies I know of show that a large majority of viewers can’t see a change in linear resolution of less than 10%, and since linear resolution scales with the square root of pixel count, 36mp to 42mp comes in at roughly 8%.
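For those who want the arithmetic (using the nominal pixel counts):

```python
import math

mp_old, mp_new = 36.3, 42.4  # nominal 36mp-class vs. A7rII pixel counts
linear_gain = math.sqrt(mp_new / mp_old) - 1  # resolution scales with sqrt(pixels)
print(f"linear resolution change: ~{linear_gain:.1%}")  # ~8%
```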

Q: So what about Stacked sensors?
A: I’m actually more interested in, and excited by, what Sony did with the 1” RX sensor. BSI basically turns the traditional sensor design upside down: the photodiode and the electronics are still laid down together on the stepper and are not completely separate. A stacked sensor is more like producing a photosensitive thing and an electronics thing separately and gluing them together; there’s no fighting between the two regions for any of the 2D real estate (even BSI has some of this, though it’s minimal). Like BSI, Stacked sensors maximize the Fill Factor up top.

The “stack” in these new chips now includes on-board memory immediately adjacent to the image acquisition electronics (Sony marketing shows this under the other electronics, i.e. as a third layer, but I’m not sure that’s completely accurate). That memory helps solve some of the bandwidth problem of dealing with lots of data: it acts as an additional, physically adjacent buffer available at high speed to the sensor itself. When moving large amounts of data, the more you can reduce the distance the data has to travel, the faster you can move it. Thus, the on-sensor memory buffer is giving Sony some of those eye-popping abilities, like the very high frame rates (960 fps) and the ability to use the entire sensor for video instead of sub-sampling.
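A back-of-the-envelope data-rate calculation shows why putting memory right next to the readout matters (the readout modes below are hypothetical, chosen only to show the scale):

```python
def readout_gbps(megapixels, bits_per_pixel, fps):
    """Raw off-sensor data rate in gigabits per second."""
    return megapixels * 1e6 * bits_per_pixel * fps / 1e9

# Hypothetical readout modes, for illustration:
print(f"full sensor, 20mp x 12-bit x  60 fps: ~{readout_gbps(20, 12, 60):.0f} Gbps")
print(f"cropped,      2mp x 10-bit x 960 fps: ~{readout_gbps(2, 10, 960):.0f} Gbps")
```

Sustaining tens of gigabits per second across a conventional off-chip bus is hard; a wide, short path into stacked memory is much easier, which is what makes those burst and slow-motion modes practical.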

I wrote a long, long time ago (more than a decade) that one prime advantage of CMOS over CCD was that it would allow much more complexity on the sensor itself as we moved forward. The ability to individually address photosites is now being coupled with additional “smarts” and “tools,” and I think we’re just at the front edge of that becoming obvious to everyone. We’re going to see more and more functionality move to the sensor itself, which is why Sony Semiconductor smiles every time they see the latest market share they have in camera sensors (and smartphone sensors, too): Sony is more and more becoming the key driver of what’s happening on-sensor. Where they haven’t managed to create that IP themselves, they’ve been clever enough to pick up cross licenses.

Sony is more and more looking like Intel in this respect. Look at how many Nikon cameras already have Sony sensors in them. Pentax is using Sony sensors. The Medium Format market has shifted towards Sony sensors. 

This is putting camera makers like Nikon in an even more precarious position, where they really need to mimic what Apple has done many times now: take state-of-the-art, off-the-shelf components (e.g. Intel CPUs, ARM CPUs, many different I/O and sensor chips, etc.) and use software to build something that solves previously unsolved user problems. When things like CPUs and sensors become ubiquitous, there are always vendors that will try to ride the lowest common denominator to the bottom: basically cutting costs to the bone, sacrificing some quality/performance, copying designs, and using price and marketing to outsell the others.

I hate the race to the bottom. It’s driven by accounting bean counters, not clever and intelligent engineers. It almost always ends up in failure at some point. I believe you actually have to race to the top: finding the things that the users don’t really know they need, making more complex and difficult tasks easier for them, while driving performance upwards. Think Tesla, not Tata. 

And that’s been my disappointment with Nikon lately. While they sometimes manage to do something that resembles a race to the top (D800, for example), they also keep pushing more race-to-the-bottom products with compromises that will eventually box them in. You can’t really have it both ways. Race to the bottom destroys brand reputation eventually, especially once you start cutting customer service ;~). Race to the top enhances brand reputation.

Q: Should I buy Sony stock?
A: Probably not. While Sony Semiconductor is looking mighty healthy these days, both with sales growth and clear technology leadership, it’s just part of a huge conglomerate that still has plenty of other problems to solve, including some biggies (mobile products, for example). You’d be betting that Sony can solve those lingering problems without downsizing the whole operation. 

Q: Should I buy a Sony camera?
A: Maybe. Does it do what you need done, do the lenses you need to accomplish that exist, and can you live with some of the compromise decisions Sony Imaging keeps making (e.g. 11-bit lossy compression for raw)? I like the RX and A7 cameras. There are some really nice things and design decisions in both product lines, and the products themselves perform quite well.

But I don’t sell Nikon short. I’d argue that the D750 is a better do-everything camera than the A7II, and the D810 is probably a better do-everything camera than the A7rII (still to be determined by testing). But a lot of people are just looking for good enough for some general uses (or specific ones, in the case of the A7s), and that’s where the A7 series starts to attract users.

Q: Should Nikon be worried?
A: Yes. More and more I’m seeing companies other than Nikon able to make a clear technology or performance claim that Nikon has trouble matching. The two big Nikon “technology” things they can trumpet this year have been the Phase Fresnel 300mm f/4 lens (a really small, light, excellent telephoto) and the D810a (astronomy-oriented changes to an already great camera). Everything else has tended to fall into the “more of the same” type of announcement you expect in a race to the bottom.

Worse still, Nikon hasn’t had an image sensor “story of note” lately. Even Canon has managed that. This puts a lot of focus on the upcoming D5, as it’s likely the only chance Nikon has in the short term of claiming true ownership of something in the sensor technology race. 

Don’t get me wrong. Nikon is squeaking out arguably slightly better image quality from those Toshiba and Sony sensors than others are at the moment. But that just makes them a “tuner,” not a true leader. Nikon’s losing the “wow factor” race at the moment, while still making a number of really good products. Good in a declining market isn’t going to cut it, I think.    
