Coffin Not Quite Shut, But Apple Working on It

For months now I’ve been reading the Internet prognostications of the iPhone 7 killing the DSLR. These were all predicated on the use of dual imaging sensors with differing focal lengths, which creates a virtual zoom in the smartphone along with a lightfield-like depth map, and on the assumption that Apple would do something above and beyond what others have accomplished to date with multiple sensors.

Today Apple finally announced the iPhone 7 models and we can start discussing reality (actual reality will have to wait until the phones are available on September 16 and we can start taking photos with them).

But frankly the most important reality is as I’ve been describing it for quite some time (eight+ years): smartphones are winning the camera wars simply because of the workflow and ease of use difference. What smartphones can’t (currently) do is win based upon the physics of imaging. What all the Japanese cameras can’t (currently) do is win based upon getting your images conveniently where you want to share them, and without complicated user interaction.

The use of a dual image sensor is not unique to the iPhone 7 Plus model introduced today. The HTC One M8 tried it in 2014, with very low pixel-count sensors, and in 2016 we got both the LG G5 and the Huawei P9. Both are strange variations on the two-sensor idea. The LG uses two sensors, one for a regular angle of view, one for wide angle. Yet curiously, the LG uses a smaller 8mp sensor for the wide angle view and a larger 16mp sensor for the regular angle. That seems backwards.

The P9 uses two matched lenses and sensors, developed in partnership with Leica, except that one is a traditional Bayer color sensor and the other is pure monochrome. The goal here is to produce great monochrome images as well as to use the monochrome information to reduce noise in the color images and to create images with slightly more acuity than is possible with a single Bayer sensor.
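To make that concrete, here’s a minimal sketch in Python (my own illustration, not Huawei’s actual pipeline) of the basic idea: keep the chroma from the Bayer capture and take the luminance from the cleaner monochrome capture. It assumes the two frames are already perfectly registered, which in practice is the hard part:

import numpy as np

def fuse_mono_color(color_rgb, mono):
    """Conceptual mono+color fusion: keep chroma from the Bayer sensor,
    replace its noisier luminance with the monochrome sensor's luminance.
    color_rgb: HxWx3 float array in [0,1]; mono: HxW float array, registered."""
    # Rec. 709 luma of the color image
    luma = (0.2126 * color_rgb[..., 0] +
            0.7152 * color_rgb[..., 1] +
            0.0722 * color_rgb[..., 2])
    # Per-pixel gain that swaps the noisy luma for the clean mono luma
    gain = mono / np.maximum(luma, 1e-4)
    fused = color_rgb * gain[..., None]
    return np.clip(fused, 0.0, 1.0)

Any misregistration between the two frames shows up directly in the output, which is one reason mating two sensors' data is harder than it sounds.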

I’d call all the previous multi-sensor products interesting, but immature. In particular, mating the two sensors’ data seems prone to producing visible image artifacts. That’s particularly true when trying to use the P9’s software aperture option to get less depth of field: it’s pretty clear that this is a zone-based algorithm, and it produces visible transitions between zones; I’ve seen post-processing software that does a better job of this using just a variable Gaussian blur.
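For comparison, the variable Gaussian blur approach I’m referring to can be sketched in a few lines (again, a conceptual illustration that assumes you already have a per-pixel depth map, which is exactly what a second sensor is supposed to help provide):

import numpy as np
from scipy.ndimage import gaussian_filter

def fake_shallow_dof(image, depth, focus_depth, max_sigma=8.0, levels=6):
    """Simulate shallow depth of field with a depth-driven Gaussian blur.
    image: HxWx3 float array; depth: HxW array normalized to [0,1]."""
    # Blur strength grows with distance from the chosen focal plane
    sigma_map = np.abs(depth - focus_depth) * max_sigma
    sigmas = np.linspace(0.0, max_sigma, levels)
    # Precompute blurred copies of the image at each blur level
    stack = [image if s == 0 else gaussian_filter(image, sigma=(s, s, 0))
             for s in sigmas]
    # Blend the two nearest blur levels per pixel so transitions stay smooth
    t = np.clip(sigma_map / max_sigma * (levels - 1), 0, levels - 1)
    lo = np.floor(t).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    w = (t - lo)[..., None]
    out = np.zeros_like(image)
    for i, blurred in enumerate(stack):
        out += np.where((lo == i)[..., None], (1 - w) * blurred, 0.0)
        out += np.where((hi == i)[..., None], w * blurred, 0.0)
    return out

The per-pixel blending is the point: hard zone boundaries are what make the P9’s result look processed.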

But let’s talk about what Apple seems to have produced, shall we?

Technically, we have two 12mp sensors, one with a 28mm (equivalent) f/1.8 lens, the other with a 56mm (equivalent) f/2.8 lens. These enable five capabilities: (1) 28mm 12mp images, (2) 56mm 12mp images, (3) a quasi-optical zoom from 28-56mm using interpolated pixels from both 12mp sensors, (4) digital zoom from 56-280mm using pixels from both 12mp sensors and generated by the imaging engine, and (5) a portrait mode, to be made available in October, where the 56mm (#2) image is combined with depth mapping information from the 28mm image to perform processed depth of field reduction.
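Apple hasn’t published how the camera decides which sensor’s data to use at a given zoom setting, so here’s a purely hypothetical sketch of that decision logic (the thresholds and mode names are my own, in Python for illustration):

def capture_mode(requested_mm):
    """Hypothetical decision logic for the dual-camera zoom range.
    requested_mm is the desired 35mm-equivalent focal length."""
    WIDE_MM, TELE_MM, MAX_MM = 28, 56, 280
    if requested_mm <= WIDE_MM:
        return "wide camera, native 12mp"
    if requested_mm < TELE_MM:
        # Between the two lenses: interpolated crop of the wide capture,
        # reportedly assisted by data from the tele camera
        return f"fused zoom, {requested_mm / WIDE_MM:.2f}x crop of wide"
    if requested_mm == TELE_MM:
        return "tele camera, native 12mp"
    if requested_mm <= MAX_MM:
        # Beyond 56mm everything is digital zoom generated by the ISP
        return f"digital zoom, {requested_mm / TELE_MM:.2f}x crop of tele"
    raise ValueError("beyond the 10x digital zoom limit")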

This is both less and more than most people expected. Less in that there are few of the depth mapping tricks people were expecting from the dual sensor capability; more in that Apple has pushed the virtual zoom well beyond where most people thought they would.

Oh, and don’t forget that iOS 10 allows for raw DNG capture, that the camera is capable of recording and rendering in the P3 color space, that everything is optically stabilized, that all images are geo-tagged, and that all this comes in an IP67-rated body. (IP67 means “no ingress of dust” and protection against temporary immersion in water up to one meter.) Oh, and 4K video and slow motion HD video are still there, too.

Assuming the iPhone’s camera system delivers as expected, this puts another clear nail in the compact camera coffin. But DSLRs (and mirrorless)? Not so much. While Apple wasn’t specific at the announcement (or on their Web site), it appears that they’re once again using a back-illuminated CMOS Exmor RS-style sensor with 1.22 micron photosites. In other words, a small sensor.

Upping the maximum lens aperture from f/2.2 to f/1.8 provides more light, but we’re still talking about a smartphone-sized sensor, and the ILC cameras still have quite an advantage in light gathering in low light situations.
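Some rough numbers to put that advantage in perspective (assuming a 4:3, 4000 x 3000 photosite layout for the 12mp sensor and a typical 23.5 x 15.6mm APS-C sensor for comparison):

import math

# f/2.2 -> f/1.8 gain in stops: light scales with the square of the f-ratio
stops_gained = 2 * math.log2(2.2 / 1.8)          # ~0.58 stop

# Approximate sensor area from pixel count and the 1.22 micron pitch
width_mm, height_mm = 4000 * 1.22e-3, 3000 * 1.22e-3
iphone_area = width_mm * height_mm                # ~17.9 mm^2
apsc_area = 23.5 * 15.6                           # ~367 mm^2
print(stops_gained, iphone_area, apsc_area / iphone_area)  # area ratio ~20x

Roughly 20x the light-collecting area is a gap that a bit over half a stop of extra lens speed doesn’t come close to closing.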

Moreover, the dual-sensor implementation still seems a little on the immature side to me. Some of the capabilities aren’t even available at launch, and Apple hasn’t gone as far as was promised by the Israeli company they acquired that pioneered multi-sensor capture.

That said, Apple talks a lot about their imaging engine (ISP) and the real-time detection and tone processing it does. Indeed, they made a point of noting that it manages 100 billion operations in 25ms, and that it includes machine learning. It’s this engine that should scare the camera makers, because Apple is iterating it very quickly now, and Apple has the volume the camera makers don’t, which lets them put a lot of engineering time into it. Multiple imaging sensors mean there’s more data coming into that engine for analysis that can be acted upon. In other words, we’re getting very close to computational imaging now.
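For what it’s worth, Apple’s stated figure works out to roughly 4 trillion operations per second during capture:

ops_per_image = 100e9            # Apple's claimed operations per captured image
window_s = 25e-3                 # 25ms processing window
print(ops_per_image / window_s)  # 4e12, i.e. ~4 trillion operations per second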

So, no, the iPhone 7 doesn’t eradicate the DSLR. Indeed, Phil Schiller said as much to the audience during the iPhone 7 launch. Still, Apple is pushing a lot of new capabilities into the smartphone that make it much more functional as a broad-use camera.

