News/Views

News and commentary about the Nikon DSLR world and photography in general. This page automatically updates with links for each new news/views story and is a good place to bookmark if you want to see the traditional bythom "front page" type of story. 

Latest News/Views stories (top is most recent):

Do You Need "That" To Be Better?

Today's announcement of the Sony RX100 Mark VII (I abbreviate this RX100m7) has me raising a question to camera makers that I've been seriously pondering for some time: do I need "that" to be better?

[image: bythom sony rx100m7]

"That" is some specific feature or performance metric, which is generally the big marketing punchline for the new product. For the Sony A7Rm4 "that" was 61mp. For the Sony RX100m7 "that" appears to be blackout free continuous shooting at 20 fps with tracking focus ala the A9.

No doubt these are technological improvements worth noting. But the relevant question in photography is: do you really need "that"? 

The question comes up in camera pairings, too. For instance, the Z6 and Z7 are basically the same camera with different sensors (so 45mp becomes "that"), as are the Panasonic S1 and S1R and the Sony A7m3 and A7Rm3 (basically the same "that" as with Nikon). 

One thing about technology: if you continue to hire and employ talented engineers, they continue to iterate it ;~). While some keep writing that what Sony is doing is innovation, I'd tend to say that, for the most part, "that" turns out to be expected iteration. Innovation comes when you don't make the expected iterations but instead deviate from the course and try something different. The A7S, for instance, was an unexpected course change, thus I'd argue that it was innovative. Of course, we're still left with the question of "do you need 'that' to be better?" ;~)

Since I espouse optimal data as my photographic mantra, it's rare that I'd turn down "that", whatever it turns out to be, as long as "that" is an improvement that allows me to collect more optimal data. But for the vast majority of those practicing photography with dedicated cameras these days, I often wonder whether they really need whatever the latest "that" turns out to be. 

Of course, marketing is going to tell you that you need "that". Moreover, today it's nearly impossible to tell whether you're being gamed by any messaging on the Internet, where you're more likely to see That! or THAT!!!

Here in the US we have regulations (from the FTC) that require that you identify "influence", but they're not followed by most and the FTC has been lax at enforcing them in all but the most extreme cases. (Disclosure: I try to follow the spirit and letter of regulations as best I can; all ads on my sites are identified as such, and anything I receive from a company I write about is fully disclosed (mostly loaner equipment, but there have been a couple of other different "perks" over the years, as well).)

Now it might seem like I'm picking on Sony today, but they just happened to make an announcement on the day I was contemplating writing this article. Here's what I think: the addition of the microphone jack and the other improvements on the video side were more of a "that" for me (disclosure: I bought, own, and use a Sony RX100m6). 

But the details of the RX100m7 still left me a little cold on the video side. 

Because it's iteration, not innovation. 

The first question you have to ask yourself is this: does the vlogging world need a camera that fits in a shirt pocket? If the answer to that question is "yes," then the RX100m7 isn't really the answer. (Oh dear, what happened here?) 

Sony emphasizes vlogging in their press release and marketing materials. Here's how they want you to do that: add an external microphone, add a shooting grip, and buy a lot of batteries. Suddenly it isn't a shirt-pocket vlogging machine anymore (and wait, didn't Sony try this trick already with the RX0m2?). Not unless you have substantially bigger shirt pockets than I do.

Here's how innovation works to solve the problem: fold-out shotgun (or variable angle) mic, fold-out grip. Fits in a shirt-pocket. Here's how iteration solves the problem: add someone's existing microphone with a cable using an age-old connector, add a variation on an add-on grip you already make. Doesn't fit in a shirt-pocket, so why are we starting with a shirt-pocket camera?

So the question for today (and every day for that matter) is this: do you need "that"? If not, keep shooting with what you've got and put the money you didn't spend into savings. 

I'm sure the Sony RX100 Mark VII is an excellent camera. After all, the Mark VI it derives from was, and the new version is now I better ;~). 

What is Tack Sharp?

You hear the term all the time, but what does it mean?

Unfortunately, different things to different people. Consider this sample from an image:

[image: bythom sharp1]


I've run a strong deconvolution sharpening on it without regard to noise propagation so that we can better see if the animal's fur is "in focus." (More on that term, in focus, in a bit.)
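
If you're curious what a crude version of that kind of deconvolution sharpening looks like under the hood, here's a minimal Richardson-Lucy sketch in Python. The Gaussian PSF, the iteration count, and the total lack of noise handling are my illustrative assumptions; real raw converters use far more sophisticated, noise-aware kernels.

```python
# A minimal sketch of deconvolution sharpening (Richardson-Lucy), assuming
# a simple Gaussian point-spread function. No noise handling at all, which
# is why noise "propagates" when you push this hard.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=1.5):
    """Build a normalized Gaussian PSF (a stand-in for the real blur kernel)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, iterations=30):
    """Iteratively re-estimate the 'unblurred' image from a blurred one."""
    estimate = np.full_like(image, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return np.clip(estimate, 0.0, 1.0)

# Usage: sharpened = richardson_lucy(gray_crop_0_to_1, gaussian_psf(), 30)
```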

No, that sample is not tack sharp. We can kind of make out the individual hairs, but there's not strong edge acuity to them. You should see a small bit of antialiasing blur on them, despite my attempts to "pull the edges in."

How about this part of the same image?

Nope, still not tack sharp. While very close, there's still a small bit of blur to the fine detail, just like before. We're closer here to where we want to be, but not quite there.

How about this one?

[image: bythom sharp2]


Hmm, that looks a little better, doesn't it? The individual hairs have clearer delineation, and you can see exactly how fine and distinct they are at the upper right edge. Actual focus is probably just a bit to the upper left of center on this sample. And that's just a bit forward of the eye position on the animal.

You may have guessed by this point that these samples all come off the same image:

[image: bythom sharp4]


The first sample was on the back of the front shoulder, the second obviously the face, and the third in the middle of the raised leg.

Overall, this 45mp D850 image renders just fine when processed decently (not with as much exaggeration as I've used here). Focus is very close to where I wanted it (the eye), but not quite there. Focus is clearly on the front of the animal, and there are very clear depth cues because the back of the animal is not in focus. That depth cue actually helps people say this image is in focus. 

Which brings me to "in focus." 

I keep encountering other photographers, other Web sites, and even photo editors who will say that something is "in focus" when I can assure you that the focus plane is actually not where it is supposed to be. Often they try to emphasize that by saying those examples are tack sharp (and that's after applying some form of sharpening, as I did ;~).

In the heat of the moment shooting sports and wildlife in particular, you will have a difficult time putting focus dead square where it should be. If you rely upon the automatic tracking focus modes, the camera will also have a hard time putting the focus plane on precisely the same spot of the subject over and over again (and yes, that's despite face and eye detection). I keep seeing sequences posted by others where they say their camera was precisely tracking a subject, but when I look closely at the sequence at 100% view, I can see clear front/back focus shifts on the subject through the sequence. 

Hint: when someone shows you sports images, take the time to look at the grass/field and see if you can pinpoint the focus plane and then follow that through the sequence. This is often how I can see the "waver" of the actual focus plane. You can sometimes do this with wildlife shots, too, though the fields animals are in generally are not as consistent and flat as they are in sports, so it can be more difficult to ascertain where the focus actually was.

What we have in most pronouncements of "in focus" is actually another form of "good enough." Between the camera's tracking mechanism, the depth of field, and the tendency on clear action for our eyes to take in the action without regard to focus, some wobble to the focus plane in action sequences is generally tolerable. But to call the results tack sharp would be wrong. 

And yes, different cameras produce slightly different results in automatic tracking. I'd say the two best of the bunch right now are the Nikon D5 and the Sony A9. Splitting hairs (pardon the pun) between them, I'd say the Nikon is a bit more consistent than the Sony at putting the focus plane in exactly the right place, but I can accept the results from either.

In the Nikon-only world, I'm going to shock you. Here's the order in which I think the automatic focus tracking does the right thing: the D5 is best, the Z6/Z7 are second, the D850/D500 are third, the D7500 is fourth, and the D750 is fifth. But be forewarned, there's another variable here: the D5 is best at finding the focus plane in the first place on moving subjects (one reason the Z6/Z7 come in second). If you can't adapt to EVF lag or get the focus sensors positioned right initially, the D850/D500 would move up to second, and the Z6/Z7 might slip to third or lower. The DX cameras do well because they put their focus sensors fully across the screen, and thus somewhat better compensate for the camera not staying perfectly framed on the subject (another reason why the Z6/Z7 are higher than you probably expected). 

Canon users are probably shocked that I'm putting both Nikon and Sony ahead of them. The Canon 1DXm2 isn't really much behind the D5/A9. But it's now behind. Indeed, I tend to find that true at every level when comparing Canon/Nikon products until we get down to some of the recent consumer cameras, where Canon has deployed dual-pixel focus and Nikon is still using old technology, like the 11-sensor focus system on the D3500. 

Now for the kicker (I really feel like I'm a mirrorless promoter today ;~): for static subjects, pretty much all the mirrorless cameras are more precise and consistent in setting their focus plane than the DSLRs. That gets even more pronounced with fast prime lenses, but it applies to landscape work, too. I'd say that if you're shooting static subjects with a mirrorless camera and missing focus, you're doing something wrong. Very wrong.

To me, tack sharp requires three things: (1) the focus plane be where it's supposed to be; (2) a lens of high MTF capability and little or no astigmatism/coma; and (3) a high resolution sensor capable of clearly resolving the detail produced by the first two. Anything less starts to fall into the "good enough" bin.

Thing is, the camera makers are all going up-scale in their offerings, and we've got better gear today than we've ever had. Gear that's capable of making better images than we used to be able to achieve. 

That means we have to go up-scale with our assessments, too. My definition of tack sharp today is clearly different than it was a decade ago. If you go back and look at the three ingredients I said were needed, you'll see why: (1) focus systems have gotten better; (2) lenses have gotten better; and (3) we've gotten higher resolution cameras.

Next time someone uses a term like "in focus" or "tack sharp," ask them to define their terms. If they stumble at doing that or equivocate at all, then I'd say you should be looking for other experts to listen to.

This article is also getting posted to Lenses/Lens Articles.

The 20 Year Anniversary

June 15th marked the twenty-year anniversary of Nikon's announcement of the D1, the camera that most feel kicked off the DSLR era. The camera actually didn't ship to customers until early 2000, but a number of us got a chance to use it briefly in 1999.

Upon initially handling the D1—despite its many modal UI flaws (all fixed in the D1h)—I knew that the serious photography world was about to change.

Yes, I'm well aware of and even used some of the Kodak SLR conversions and the Nikon/Fujifilm E2 experiment in the 90's. I don't dismiss those products, but it was clear with the D1 that something different was about to happen: a new era of cameras you would find in every camera store and which carried on the mantle from the SLR bodies. 

It's illustrative to look at the specifications of that original D1 to see just how far we've come.

  • 2000 x 1312 pixel images (2.7mp). 
  • A top ISO of 1600 (base of 200). 
  • An APS-C sized CCD image sensor producing 12 bits per photosite. 
  • A 2" LCD with 130k dots. 
  • A claimed top frame rate of 4.5 fps. 
  • The F5 autofocus system. 
  • All for US$5500. 

That frame rate turned out to be not quite true with the then-current CompactFlash cards, particularly with the old Microdrive cards. Plus the buffer really worked out to be less than a dozen raw images (less than two dozen JPEGs). 

Still, as limiting as those specifications might seem today, for photojournalists in particular there was clear promise here, to the point that the D1 really changed serious photography as we knew it very quickly. 

Canon shot low, with the consumer D30 DSLR as their first effort in 2000, and it felt a bit rushed to market at that (sound familiar here in the mirrorless transition?). Nikon shot high, with the D1 being followed by the very solid D1h and D1x in early 2001 and the also-serious D100 in early 2002. Canon responded with the 1D—featuring a stitched sensor from a non-Canon source, again a sign of rush—in 2001 and the 1Ds in 2002. Those six cameras pretty much were the kick-in-the-butt that blasted the DSLR era into high growth and killed the SLR.

I note that dpreview gave the D1 a Highly Recommended rating in its review. I didn't. I declared the D1 too modal and likely to cause you to miss shots. It wasn't until the D1h came along that I could tell others the DSLR era had truly arrived with a highly usable camera.

So twenty years on, what have we gotten from the DSLR iteration highway?

  • A transition from CCD to CMOS. That didn't come without complaint. Both semiconductor approaches have their pluses and minuses. That said, anyone in the silicon business knew that CMOS was going to be the winner if you could address its (then) image quality shortcomings. That's because of the ability in CMOS to address cells individually and to add additional electronics into the image sensor itself, things we have in spades in today's cameras, and which make them better.
  • More and better pixels. A lot of people don't know that the original D1/D1h were actually 10.4mp sensors. Say what? Nikon took some Sony Semiconductor pixel technology and had them bin it! An individual pixel in the D1/D1h was actually four sub-pixels binned together (there's a conceptual sketch of binning after this list). The D1x saw a different binning approach, with only two horizontal pixels binned together. When people today talk about the camera makers being beaten to the punch in computational photography by the smartphones, that's not exactly true. That D1x in 2001 used computational methods to build JPEG images that were bereft of short-axis pixel information yet still looked quite good. That's because of computational work done in the imaging ASIC chip of the D1x.
  • Bigger and better LCDs. One of the primary benefits of the digital camera was its ability to let you immediately review what you just shot and evaluate whether you need to change a setting or shoot again. As I've written before, this was one of the things that, once discovered by consumers, triggered the rapid change from SLR to DSLR. It's kind of amazing that this was clear even with a 2" display that only had 130k dots (today's screens are typically a minimum of 3" and a minimum of 1m dots, so larger and much more detailed).
  • More images faster and for longer periods of time. The D1 was effectively a 4 fps camera with a 2.5-second buffer (and remember, this is for a top-end professional camera). Today the D5—despite the extra megapixels and bit depth—is a fully functional 12 fps camera with a 17-second buffer. I remember clearly the first time I was on the football sidelines with a new deep-buffer Nikon body standing next to a bunch of Canon shooters and I just decided to hold down the shutter release (as you all well know, I don't shoot long bursts to "save my butt," but am much more selective about timing and bursts). Every Canon user's head turned my direction in disbelief. That's basically where we are today with current cameras: anyone using an old one is going to wonder how it is that you're still shooting, should you care to fill your card up.
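
Here's the conceptual binning sketch promised above, written in Python with numpy purely for illustration. The photosite geometry and the choice to sum (rather than average) are my assumptions for the example; the D1/D1h did this in sensor/ASIC hardware, and Nikon never published the exact details.

```python
# Conceptual 2x2 binning: four photosites contribute to one output pixel.
# The D1/D1h did this in hardware; geometry and summing here are illustrative.
import numpy as np

def bin_2x2(photosites: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of photosites into a single output pixel."""
    h, w = photosites.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    blocks = photosites.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))  # sum the charge of the four photosites

# Roughly 10.5M photosites bin down to roughly 2.6M output pixels, in the
# ballpark of the D1's 2000 x 1312 output (the exact D1 geometry differs).
sensor = np.random.rand(2712, 3872)
print(bin_2x2(sensor).shape)  # (1356, 1936)
```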

Looking at a D1 today, you see that the D5 isn't all that far from it, other than the things I just mentioned. Sure, Nikon added some additional controls (e.g. the thumb stick) and buttons, and changed card formats. But the bones and muscular structure are all there and are still being inherited today. Indeed, the D5 is really just another continuation of the all-electronic design Nikon put forth with the F5 in the 1990's. The things that weren't broken weren't fixed. Things that were missing were added. A few things have been fleshed out and made better.

You'll note that most of the changes and benefits we've gotten in the 20 years post D1 are internal. Indeed, that's something to consider in these days of transition from DSLR to mirrorless. 

Anyone who has been involved with the technology industry knows that the primary thrust in true tech product change comes from the silicon inside (and by inference, the software). That's because there are huge benefits to reaping the results of Moore's Law to combine, group, and simplify into fewer and fewer components that can be mass produced and assembled using automation. 

We got all the major changes in the DSLR I outlined above because of semiconductor advancement (yes, even the LCD addition). The transition that's going on right now in ILC is mostly a logical continuation of that. Mirrorless cameras remove most of the mechanical parts from DSLRs and put more emphasis on parts that can be mass produced on automated machines. The good news is that the constant push in semiconductor technology continues and will result in even more happening inside our cameras (see my other article today on where cameras are headed).

I haven't run into anyone lately still using a D1 era camera (e.g. the D1h). I have encountered one recently who was still shooting a D2Hs. And I still see quite a few dragging D3's around. D4's are almost as common as D5's when I look around the events I shoot at. 

Which tells another story about this anniversary: while we've had 20 years where Nikon (and Canon) has put out new and better iterations every four years, our DSLRs have proven to be long-lived even as the potential technology inside started to pass them by. I fully expect to see folk still shooting D500, D850, and D5 cameras (and similar Canons) four years from now. 

So happy birthday, D1. It's been a long and wonderful journey you started us on.

The Coming Cameras

Updated

I like to do a "thought piece" every now and then premised on what I did through most of my career in Silicon Valley: look five to ten years out and try to understand what developing or new technologies could be used to solve current user problems.

One problem with doing that at the moment in the camera market is this: the camera makers aren't today where they should have perceived, five years ago, that they needed to be. (Yes, that's a complex sentence. Read it again and make sure you understand it.) In other words, they're already behind where they should/could be, so looking further forward may not be quite as useful as it ought to be.

For example, you're probably well aware that inside your cameras are a bunch of semiconductors, including the image sensor. One of the predictable things about semiconductors has been—though it's getting more problematic at the forward edge of progress—the reduction of what's called process size. As a placeholder, think of process size as the smallest possible transistor. The smaller the transistor, the more of those you can pack on a chip and the closer you can put them together (which provides quicker communication between them). Those two things mean more computational power at faster speeds (though heat dissipation can become an issue as you miniaturize). 

So, here's a question: what's the process size for your image sensor? Or your imaging chip?

Apple is currently using a 7nm process size for its latest CPU (A12X Bionic). Indeed, looking at Apple's iOS CPUs is an illustration of process size reduction: 45nm, 32nm, 28nm, 20nm, 14nm, 16nm, 10nm, 7nm. That's why the newest iPhones and iPads have been getting faster, more capable, and able to do more things.

The problem with image sensors is that the photons-to-electrons part of the sensor (photo diode) doesn't really benefit from process reduction, so there's not been as great a push to change it. But the ride-along electronics on CMOS sensors absolutely do benefit. Smaller process size allows you to do more with the storage charge the image capture creates, and to do it faster.

So again, what's the process size for your image sensor?

Would you believe probably something in the 200nm+ range? That's huge by today's state-of-the-art.

Update: An engineer or two pointed out that image sensors still work in the analog realm—what, you thought they were digital?—and going below 180nm becomes an issue. I should have caught that. One reason why Sony may have gone to stacked sensors has to do with this: if you can make the light gathering/ADC side of the sensor under a larger process and hook that fairly directly to something that is done in a smaller process, you can get some of the benefits. Nevertheless:

Moreover, from what I can gather, even the BIONZ, DIGIC, EXPEED type of chip is lagging behind current semiconductor state-of-the-art. I can't get official confirmation, but I believe the latest EXPEED6 chip, for instance, is made with a 28nm process, and Nikon's SoC supplier, Socionext, currently offers 16/12nm as its smallest process size. 

Why am I starting here? Because silicon is one of the easiest things to predict. Apple and Nikon both use ARM-licensed cores, but Nikon is using older, larger process Cortex cores while Apple has moved forward to their own version of ARM's latest core technology and producing it on smaller process fabs.

The trend that intersects with this is the use of computational and AI algorithms with image data. The smaller-process, more sophisticated main chip that Apple is using at the heart of its iOS devices can simply do more than the best the camera companies can do when it comes to changing pixel data or analyzing pixel data for hints on how to tune the camera's performance.

Moreover, Apple seems to have taken one of my original design philosophies on the QuickCam to heart: "the smallest number of components to get image sensor data into the CPU." There's almost nothing between the image sensor and A12X Bionic chip other than a data pipe. In our cameras, there's a bit more going on, and on designs such as Sony's stacked sensor chips, that can get quite complex and more expensive to make.

Where am I going with this? 

The future is going to see much more computational and AI logic in our cameras. No doubt about that. This was clear back at the turn of the century, but it was the smartphones that really got serious about this first, unfortunately for the dedicated camera makers. Heck, it was clear when I managed the team that put out the QuickCam in 1994, because the whole idea behind that product was to use your computer's computational power behind an image sensor.

This is a long-winded way of saying that camera makers have some catching up to do. Okay, not some. A lot. The silicon capabilities are there to let them do it, but when we talk about SoC (system on chip) entities like BIONZ, DIGIC, and EXPEED, we find ourselves caught up in the real problem: the camera industry is contracting rapidly. 

The reason that smartphones are eating the camera makers' lunch has to do with a lot of things, but one of them is volume. About 1.5 billion cell phones were sold in 2018. Compare that to the 19.4m units that CIPA says shipped in the same period (that would understate total camera sales a bit, as there are a few non-Japanese companies that ship cameras). That's almost two orders of magnitude difference. Simply put, the smartphone companies can afford to spend more on R&D in keeping their silicon up to the state-of-the-art because they have so many more units over which to spread the cost.

Thus, one prediction that's easy to make is that dedicated cameras will continue to get better at adding computational and AI features in the future, but they won't catch up—let alone pass—the big smartphone vendors in the next five years. To do so would take a leap of innovation that is highly unlikely. 

Even for Sony, who recently decided that their smartphone and dedicated camera groups needed to work together, the volume problem is still a real issue. Sony's Xperia phones are not exactly big sellers (<2% of the market). So while combining efforts of their two groups does give them more volume to spread costs over, it's not quite as big a boost as it at first seems.

You wonder why there's so much emphasis on full frame these days? It's because the camera makers are looking not just for profit; they're also trying to stay in a lane they're pretty sure the smartphones can't play in. "Good enough" is owned by the smartphone cameras now. That really only leaves "Exceptional and Unique" as the playground in which the camera makers can retain a foothold. 

That's the reason you see Nikon making only compact cameras with huge focal length range lenses or waterproofing. And the latter is now becoming a smartphone trait, so short of adding a really long focal length zoom to the waterproof camera...

The problem, of course, is that by defining narrower and narrower niches—full frame, superzoom, rugged/waterproof—you also limit your market size. Pushing up-scale to higher priced products also limits your market size.

So my first prediction is that we'll see a slow move towards more and more computational and AI attributes in our cameras. It will be slow not because the technology to do it is slow in coming, but because the cost of deploying it is being borne over fewer and fewer units. Canon and Sony have a bit of an advantage here in that their scale of business is bigger than Nikon's and can better support additional R&D costs. But still, everyone is cautious because no one knows just how far the camera industry will contract. That's a bit of chicken and egg, though. If you move too cautiously, you actually make the industry contract more. (I'll come back to that in a bit.)

Meanwhile, there's another thing that's semiconductor-related that smartphones have gotten right and the cameras haven't: communications. 

Let's just admit the obvious: for the vast majority of people taking photographs, those images are now shared electronically. Smartphones embrace that in so many ways I'm not sure I can count them all. Dedicated cameras? Not so much, as I've pointed out many times.

The irony is that Nikon got into the photos-in-the-cloud business early with what's now known as Nikon Image Space (formerly myPicturetown, which dates all the way back to 2008!). Here's an easy way to see that Nikon doesn't understand what they're doing in cloud photo storage: exactly why aren't the Nikon Image Ambassadors using and promoting Nikon Image Space (NIS) to store, manage, and share their photos? Oops. NIS is a separate app from SnapBridge and doesn't allow sharing directly from it. Oops. Can Lightroom push my images to NIS? Oops. (The oops go on and on, but you should get my point with just three examples.)

Update: If you want to see an even bigger Oops, just check out the message Nikon sent to NIS users in May.

So here's the thing: if camera makers want people to keep using cameras, they need to fully embrace the way that people are using images and enable that. But for the most part, they aren't. Yet the technology is available that would let them do that.

At some point during the continued contraction, someone in Tokyo is going to bang their head against the wall, say "Doh", and start trying to do what users actually want and need. And guess what? While that might not generate the kinds of growth in the digital camera market we saw in the first decade of this century, a camera that functions well in today's image sharing world will sell better than one that doesn't. 

I put that last part in bold because every time I write about the fact that the communications side of dedicated cameras is terrible and needs to be fixed, I get a lot of pushback. Things like "that won't save the camera market." That's not what I'm saying at all. I'm saying that camera makers are getting sub-optimal sales because they've ignored a common and highly requested (and now necessary) user need. We can argue about whether camera sales would continue to go down (perhaps by a smaller percentage), stabilize, or start to grow a bit if the camera makers put the right technology, done the right way, into their cameras, but failing to do so will simply make them fail faster.

The thing is, the Japanese consumer electronics companies are fighting against Silicon Valley. In Silicon Valley, almost the opposite problem happens: Silicon Valley pursues solving customer problems first and foremost, almost without regard to cost or profit. Get the customer first, then worry about the business finances. Worse still, Silicon Valley stole the whole notion of sharing images electronically from Japan, where it was technically done first with some early cell phones (but not particularly well commercialized or followed up on). 

What I find ironic is that in this world where everyone talks about the Internet of Things (IoT), dedicated digital cameras are some of the worst connected digital devices on the planet. Not only do they not "plug and play" into the Internet easily, making them too complicated for consumers, but their performance in doing so is woefully behind. 

We're about to go 5G in cellular, Wi-Fi 6 in radio. Both will be the primary thing you hear about in wireless communications in the next few years. Cameras aren't even close to the abilities we expect from those new technologies. Wi-Fi 5 (the current 802.11ac) is theoretically a minimum of 433Mbps speed. Divide by 9 (8 bits plus some overhead): 48MBps. A 48MB file should transfer in a second. Does it? Not on any Wi-Fi 5 capable camera in my gear closet (and there are several). Why? Because of the way the cameras are designed. 
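
To make that back-of-the-envelope arithmetic explicit, here's the same calculation as a tiny Python sketch. The divide-by-9 overhead factor is the rough rule of thumb from the paragraph above, and the 10% figure in the second example is a placeholder, not a measurement of any specific camera.

```python
# Back-of-the-envelope: theoretical Wi-Fi 5 link rate vs. file transfer time.
def transfer_seconds(file_mb, link_mbps, overhead_divisor=9.0):
    """Estimate seconds to move a file: megabits per second divided by ~9
    (8 data bits per byte plus some protocol overhead) gives MB per second."""
    return file_mb / (link_mbps / overhead_divisor)

print(transfer_seconds(48, 433))        # ~1.0 s for a 48MB file at 433Mbps
print(transfer_seconds(48, 433 * 0.1))  # ~10 s if the camera only sustains 10% of the link
```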

So one thing that's going to have to change soon is some of the internal, structural ways that cameras are designed. In essence, cameras consider "communications" an interruptible, low-priority background process. In a world where images are shared, and immediately shared at that, communications needs to be a primary process with some guarantee of delivery speed. The video camera makers have figured this out. I can stream real time from my video camera. The still camera makers are laggards. 

Meanwhile, the latest trend in tech is in the proliferation of artificial intelligence software (AI), though I'm not at all sure I'd characterize all the things that are called AI as actual artificial intelligence. Just as graphics got its own dedicated chip (GPU), AI is now getting its own dedicated chip (Google calls theirs a TPU; I'll call it an IPU, for Intelligence Processor Unit). 

We've already seen camera makers deploy two aspects of this. For example, the Nikon D5 has a chip dedicated to autofocus. Olympus and Sony are referring to the new autofocus algorithms they're using as AI. 

But true AI as it's being explored now in the labs is not task specific. The goal is to apply the same "learning" and "processing" techniques to any problem that needs solving. We have lots of problems in cameras. Saturated signals, noise, distortions, astigmatisms, stray light, subject recognition, camera movement, depth cues; the list goes on and on. 

What you really want to build in the near future is a set of electronics that can do computational (CPU), graphics (GPU), and intelligence (IPU) tasks. Apple already has that in their latest iOS processors (a nascent AI engine being the latest processing core to be added). The net result of having a fast, deep, wide range of "processing" capabilities available in a single chip is that you reduce hardware costs while enabling the software guys to come along and do interesting things with all that facility. 

Finally, there's one other thing I believe will (should) happen: camera makers have to recognize that "tagging" (metadata) isn't something just for their own internal use (e.g. EXIF Maker Tags), but that in the coming world of imaging users need to tag their photos in quite a few ways. Copyright is an obvious one. We have some cameras that follow the IPTC guidelines on this, mostly because the camera companies' big press clients basically insisted or they'd stop buying product. (Irony note: apparently the Japanese camera makers haven't noticed that consumers stopped buying their product. Maybe we need to form a consumer lobbying organization ;~).

When most people take a photo now, they'd actually want it to be fully tagged, and tagged automatically if at all possible. 
When, where, who, what, photographer, who should be able to see it, and more (a sketch of what such a record might look like follows the list below): 
  • When: GPS, cell tower, or Wi-Fi provided data to be accurate. Automatic time zone detection.
  • Where: GPS, but with automatic coordinate-to-placename insertion.
  • Who: names of any identifiable people, pets, things, etc. With full ability to train those, and to make those private or public.
  • What: Intelligent categorization, which depends upon the Where and Who fields. For instance, a human identified in front of a well-known museum might be tagged "visiting the Louvre."
  • Photographer: could even go so far as to have fingerprint detection on the shutter release!
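
Here's a sketch of what such an automatically generated tag record might look like. The field names and values are hypothetical; a real implementation would map onto EXIF/IPTC/XMP rather than inventing a new schema.

```python
# Hypothetical auto-generated tag record for one photo. Field names are
# illustrative only; a real camera would map these onto EXIF/IPTC/XMP.
auto_tags = {
    "when": {
        "utc": "2019-07-25T14:32:08Z",       # from GPS / cell tower / Wi-Fi time
        "timezone": "Europe/Paris",           # detected automatically
    },
    "where": {
        "gps": (48.8606, 2.3376),
        "placename": "Louvre Museum, Paris",  # coordinate-to-placename lookup
    },
    "who": ["Jane Doe"],                      # trained, user-controlled recognition
    "what": "visiting the Louvre",            # inferred from where + who
    "photographer": "registered owner #1",    # e.g. fingerprint on the shutter release
    "visibility": "private",                  # who should be able to see it
}
```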

I could go on, but I think you get the idea. Because photos are now living in cloud space and shared via the Internet, it is highly desirable to make sure that the photographer can control how that image might be found by others, and that's going to involve deep and wide metadata that's being stored with the image data from the moment the photo was taken.

All the things I write about in this article are possible in the very near period (five years). Indeed, I'd argue that they're required and inevitable. There's some probability that a few of them will work their way into our cameras soon. The questions are how much so, and how fast?

The biggest problem I see is that the camera companies are hesitant to fully fund all the R&D that would be necessary to get these things—and others I haven't mentioned in this article—done sooner rather than later. That's a self-fulfilling prophecy, as I've pointed out over and over: by not getting at the front of the technology wave in recent years, cameras have now fallen behind. The potential buying public may not consciously understand that, but they've figured it out subconsciously. They know that their smartphone is doing things their camera can't do, so why do they need a new camera?

One of the things that surprised me coming out of my MBA program into a wild, fast-growing startup in the early days of Silicon Valley was this: all the problems we studied in those Harvard Case Studies came up. Every last one of them! What they didn't teach at the Kelley School of Business was this: the solution to the problem is always different from just making the numbers, procedures, or dependencies work right. The problem was always people. People who insisted on ignoring the right answer.

To a large degree I feel that's the problem in Tokyo right now. I'm pretty sure that there are plenty of engineers at the camera companies who understand everything I just wrote and want to give it to you in products. They're being held up in many cases by finance departments and upper management that are reluctant to take risk. 

What I know from my decades of experience in Silicon Valley is this:

  • Sometimes when you take risk, you fail.
  • If you don't take risk, you fail.

Understanding and coming to grips with those two statements is essential to a technology career. It's essential to any company that purports to be a technology company. 

So, in the end what I'll be looking for in the coming five years is not whether or not we get IPUs or 5G or full tagging. What I'll be looking at with the camera companies is who's seeing the wants/needs correctly and taking the risks necessary to fulfill those. 

Update: a final note: implicit in much of what I wrote is that the camera companies suck at software. Those of us who were appalled at how bad the original version of Windows was back in the 80's are now reconsidering how good we actually had it ;~). 

Are You Following Your Heart or Head?

It's a simple thing, really: if you're confused about what to do and are contemplating changing gear, do you know whether you're following your heart or your head in your thinking?

I've written about wants and needs before, and it's the same thing: 

  • Heart = want
  • Head = need

Thing is, how you evaluate how well something actually "works" for you after purchasing and using it depends upon which of those two you're following.

What I'm seeing a lot of lately is that people following their heart (wants) end up not fully satisfied. Whether that be sampling another brand, switching to another brand, moving from DSLR to mirrorless, or any other Big Change option, their heart (wants) compels them to act, they act, and then they find something wanting (ironic, right? ;~).

Those following their head (needs) rarely end up in that situation.

When Canon and Nikon both say that they'll continue to serve the DSLR user, they're actually considering that there are heart versus head decisions being made in their user base. A long-time DSLR user, for instance, would be generally satisfied with the capabilities of any current camera. The things that they still need are likely to be more in the DSLR realm. The trick is whether or not Canon and Nikon can figure out what those are and produce them in a timely fashion.

I'll use the upcoming 2020 Olympics in Tokyo as an example. The 5000 or so Canon and Nikon shooters who end up in Tokyo will be mostly DSLR-based. That's what they own, that's what they're used to using, that's what they know. For mirrorless to appeal to them at an event where they're on the line to produce images worth their cost of being there, mirrorless would have to do something special for them. Really special. That seems unlikely. A new DSLR lens might prove more useful to them. Even a new DSLR body that advanced the product they're already using might be more useful to them. This is one reason why everyone is predicting that Canon will produce a 1DXm3 and Nikon a D6: the timing is correct, and it's a natural extension for those users' needs.

When Canon and Nikon both say that mirrorless is the future and then provide marketing that tells you about how the new lenses will be better, or the WYSIWYG nature of the EVF is better, or it can shoot silently, and so on, they're catering mostly to the heart (wants). 

So again, let's consider the upcoming Olympics. Would Canon produce an RX and Nikon a Z9? Maybe. But why would the Olympics photographer opt for that? Faster frame rate, perhaps. Ability to track focus outside the central area, perhaps. Other supposed mirrorless advantages, not so much. In other words, some very specific needs. Thus, there's some opportunity for both companies to do something somewhat unheard of, and produce both a top end DSLR and a top end mirrorless camera simultaneously. I'd guess that Nikon would be more likely to do something like this, as they have the D1h/D1x/D100, D3/D300, and D5/D500 multiple announcement precedents, all of which worked well for them. Still, you have to consider whether or not that truly would get a top level shooter to move from DSLR to mirrorless at the last minute like that at an event that is so important to their clients.

Of course, the Olympics are not a problem most of us have ;~). Canon and Nikon have that problem because it's the biggest event where they see and interact with the greatest number of serious photographers, and it is fairly representative of their pro clientele. That interaction potential is one of the few reasons why I'd want to go shoot an event like the Olympics, by the way. Standing out from the top pros, who have better access, is not something that would likely happen for me at the Olympics. I'm happy sticking with smaller events and clients where I can stand out.

But let's get back to heart versus head. It's an age-old problem, and it comes up in everything from romance to work to toys. The real trick is to always understand which of the two you're following, and then to do a quick analysis of whether the other will be served well enough.

In other words, if you're following your heart, you don't want to be completely bolloxed with your needs. If you want Camera X because the marketing messages melted your mind, before jumping to the new mistress make sure that you can still do what it is you need to do. 

Likewise, if you follow your head, you need to make sure that you're still going to be happy, at least happy enough to continue on without second guessing yourself every day.

It's getting the balance of the two right that is always the problem. Pay some attention to both, know which one you're following and why, and you'll be fine.

Is Disinformation a Problem?

We've got state actors attempting to influence other states' elections with disinformation. We've got lobbying organizations running campaigns with clear agendas. We have individuals who've discovered that they can be a big influencer of others by just typing on their keyboard. We have "numbers" published that are supposedly meaningful.

Have you ever thought about whether or not misinformation might be harming the photography market?

I have.

And it's not just outright misinformation; it's also information where the nuance is all reduced down to some overall numerical value (e.g. dpreview's "Overall Score", which I find mostly pointless). 

For instance, recently a flurry of messages came my way about DxOMark's rating for the 24-70mm f/4 S lens. In particular, the thing that comes up every time DxOMark publishes a lens test these days is a number they report for "Sharpness." For the Nikkor in question, that number was 19, which is lower than the 20 given to the Nikkor 24-70mm f/2.8E, and more intriguingly, way lower than the 24 given to the Sony-Zeiss 24-70mm f/4 ZA, a lens I long ago gave up on using because of its poor performance. 

Drill down a bit into those DxOMark "scores," though. The highest-rated lens for Sharpness is on the highest-resolution camera (5DS R). The two Nikkor numbers come from two completely different cameras and sensors (Z7 and D800E). DxOMark is creating scores using different test platforms.

What struck me most, though, was the Nikkor Z and Sony FE comparison. One lens I think is really good (the Z), one I think is not worth using for a lot of work (the FE). The difference comes down to how each lens performs as you move away from the center of the frame. 

So, let's look at DxOMark's testing protocol. In DxOMark's own words: "For each focal length and each f-number sharpness is computed and weighted across the image field, with the corners being less critical than the image center. This results in a number for each focal length / aperture combination. We then choose the maximum sharpness value from the aperture range for each focal length. These values are then averaged across all focal lengths to obtain the DxOMark resolution score that is reported in P-MPix (Perceptual Megapixels)."

Uh, what? Corners less critical? In what way? How is that weighted? Why choose the maximum sharpness obtained and not the median? Why average the maximum of all focal lengths? 

Virtually no one seeing a DxOMark Sharpness number—even on DxOMark's own site—does the drill down to see what the heck that number actually means. 

It means absolutely nothing that's useful, in my opinion. It's an average of the maximums of unspecified weightings! Oh, and by the way, in their reviews they have this disclaimer: "Remember that the lenses are intended to be used on different camera systems and mounts, so the comparisons are not strictly applicable."
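
To see why that "average of maximums" construction bothers me, here's the scoring procedure as I read DxOMark's own description, sketched in Python. The field weighting function is exactly the unspecified part, so the one below is purely an assumption on my part.

```python
# A sketch of the described aggregation: weight sharpness across the field,
# take the MAX over apertures per focal length, then AVERAGE the maxima.
# The weighting function is the unspecified part; this one is an assumption.
from statistics import mean

def field_weighted_sharpness(by_position, weights):
    """Weight per-position sharpness so corners count less than the center."""
    total = sum(weights[pos] for pos in by_position)
    return sum(val * weights[pos] for pos, val in by_position.items()) / total

def sharpness_score(samples, weights):
    """samples[focal_length][aperture] -> {position: sharpness in P-MPix}."""
    per_focal_maxima = []
    for by_aperture in samples.values():
        weighted = [field_weighted_sharpness(m, weights) for m in by_aperture.values()]
        per_focal_maxima.append(max(weighted))  # only the best aperture survives
    return mean(per_focal_maxima)               # corner or aperture weaknesses can vanish

# A lens that is superb at one aperture per focal length but mediocre elsewhere
# can post the same number as one that is consistently good everywhere.
```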

Funny thing is, if you read DxOMark's actual textual review of the Nikkor in question, they write as their conclusion "Great all-rounder for Nikon Z users." Indeed it is. Unfortunately, the testing methodology that DxOMark uses masks exactly how that might actually really compare to other brands' products. 

Not that I'm trying to call out DxOMark here. I'm just using them as one example. There are plenty more where that came from. The real culprit here is all the folk on the Internet who want to repeat a single number that sums up a subjective evaluation criterion (DxOMark's Sharpness number) as if it is meaningful in comparing two lenses.

Another example of what seems to be disinformation happened recently at the X Summit, where Fujifilm introduced a concept called "Value Angle." I know that the camera makers struggle to describe why certain mount decisions can have significant impact on optical design, and I, too, have at times short-handed the discussion by simply pointing to the angle from the edge of the sensor to the edge of the mount. However, Fujifilm is a bit disingenuous in their discussion. 

The reason why they promote Value Angle is that their APS-C XF mount calculates to a bigger angle than the best full frame mount (the Nikon Z mount). So it must be better, right? You don't need to buy full frame at all!

Unfortunately, the Fujifilm XF mount has the worst Value Angle of any of the mirrorless APS-C mounts; Canon's EOS M would be the best in this calculation. Own goal, Fujifilm.

Fujifilm doesn't make full frame cameras, though. So obviously the XF cameras are better than the full frame cameras because of the mount, right? At least that's what they want you to believe.

Not so fast, Fujifilm. There are way too many factors that go into the optical design of a lens and the way the optical system at the focal plane—UV/IR filtration, lowpass filtration, filtration depth, gap to sensor, microlenses, photo diode depth—works for digital cameras to reduce everything to one number across differing formats. (And to add insult to injury, Fujifilm's medium format GF cameras would be worse than the full frame cameras using the Value Angle metric!)

Aside: What a larger opening and a wider angle from that opening to the sensor gives you is more optical design flexibility (all else equal). Optical center point, size and position of the rear element, angle changes of extreme light through the optical path: all have more options available when you create a bigger/shorter mount for any given size sensor. Note that last clause, for any given size sensor: Fujifilm actually created the worst mount scenario for their APS-C sensor size.
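
For the curious, here's roughly how an edge-of-sensor-to-edge-of-mount angle falls out of the geometry. I'm guessing at Fujifilm's exact formula, so treat the arctangent below, and the approximate mount and sensor dimensions, as my assumptions for illustration only.

```python
# A rough "value angle"-style calculation: the angle from the edge of the
# sensor to the edge of the mount opening. The formula is my guess at the
# idea, and the mount/sensor dimensions are approximate published figures.
import math

def edge_angle_deg(mount_diameter_mm, flange_distance_mm, sensor_diagonal_mm):
    opposite = (mount_diameter_mm - sensor_diagonal_mm) / 2
    return math.degrees(math.atan2(opposite, flange_distance_mm))

mounts = {  # name: (throat diameter, flange distance, sensor diagonal), mm
    "Nikon Z (full frame)": (55.0, 16.0, 43.3),
    "Fujifilm XF (APS-C)":  (44.0, 17.7, 28.3),
    "Sony E (APS-C)":       (46.1, 18.0, 28.3),
    "Canon EF-M (APS-C)":   (47.0, 18.0, 26.8),
}
for name, dims in mounts.items():
    print(f"{name}: {edge_angle_deg(*dims):.1f} degrees")
# With these numbers, XF beats the full frame Z mount yet trails the other
# APS-C mounts, which is exactly the "own goal" described above.
```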

What we have here is another arbitrarily calculated number standing in for actual useful information, and this time in marketing information. Be wary of those arbitrary numbers you see.

Reviewing in context is difficult. I first became aware of that back in the late 70's with High Fidelity reviews. In the early 80's I wrote one of the first standardized reviewing guidelines (50+ pages) for the computer industry in my role as editor of InfoWorld. At one point I fired a reviewer who did not disclose a paid relationship with a company in the industry, which was required by our guidelines. I managed a similar project in the 90's at Backpacker. Because all the outdoor product companies had hidden special pricing for influencers (even back in the 90's), I forbade staff to take advantage of it. We also returned or donated all gear we received for review, rather than keeping it, as sometimes happens elsewhere. I know how hard it is to try to describe how a product actually performs and what that might mean to a user. And I think I know what trying to maintain integrity and disclosure of conflicts means, too.

But none of us are perfect, nor are any of us writing on the Internet capable of perfectly reviewing a product, with full context to all other relevant products and your particular needs and usage. 

Thus, I caution everyone to be smart in their reliance upon external information being passed around on the Internet. There are bad actors, paid influencers, poor articulators, and meaningless numbers to wade through. Even the best of us may not get everything perfectly and adequately described, and you have to be careful not to read things into words that weren't intended. 

Over time, you find sources that you can trust. Don't trust new sources without vetting them. Trust, but verify, sources you've grown used to. Understand the business model of the sites you visit and how that might influence them (DxOMark's current business model appears to be consulting services plus selling their Analyzer product, which puts them in competition with Imatest for producing a set of numbers from standardized testing [disclosure: I use Imatest in measuring camera and lens performance in the lab, though I don't report these numbers because they're generally not comparable across products as DxOMark would like you to believe]).

How Do You Make the Mirrorless/DSLR Choice?

It seems that despite my repeated efforts to try to put things into perspective, a number of people are still asking the same questions, typically along the lines of "should I buy a D750 or Z6"? 

That's certainly a good question, no matter how you're coming at it (e.g. from a D3500, a D610, or a competitor's camera). But it doesn't have a simple answer.

As I've tried to indicate, DSLRs aren't dead. They still have some advantages to them. Most of those center around the optical view of the world, the time-tested autofocus system, and the support for them (which ranges from lenses to accessories to repair facilities to education, and more). 

The optical view of the world does not impact your night vision. It doesn't slice time up into frames. It doesn't need a moment to turn on when it hasn't been in use for awhile. It doesn't drain batteries. (Technically, Nikon's DSLR viewfinder overlay system does drain batteries, but it'll take a month or so to do so.)

The DSLR autofocus system may have limitations (area covered, only face detect, many tricky attributes to learn, and issues of tolerance in some cases), but both Canon's and Nikon's current systems are fast, reliable, and consistent once you learn how to use them. (Unfortunately many people don't; they want "all automatic" systems to do all the heavy-lifting.)

And you can't say that DSLRs are unsupported. If there's an accessory you need, it's been made. Third parties have had plenty of time to iron out wrinkles in compatibility. The lens selection for Canon and Nikon DSLRs is incredibly wide and deep. 

So why wouldn't you buy a DSLR, particularly given that prices have dropped on them?

Well, mirrorless cameras have their advantages, too. Those center around WYSIWYG viewfinders (what you see is what you get), wider/smarter autofocus systems, and generally smaller and lighter bodies for the same level of capability (e.g., the Z7 is smaller and lighter than a D850, one of the most equivalent comparisons we can make between DSLR and mirrorless). With Canon and Nikon mirrorless, in particular, we also have the promise of potentially new and exciting lens capabilities in the future. 

DSLRs are the establishment. Mirrorless is the future. 

If you're absolutely new to interchangeable lens cameras, the answer is relatively easy: buy the future (mirrorless).

If you're a long-time SLR/DSLR user, you're in the establishment. The question becomes whether you're happy with what you've been using and just need to upgrade or is there a clear benefit to stepping into a future that isn't fully there yet. 

I think my answers have been relatively consistent for awhile now. Let me throw a few of them at you:

  • New to ILC and want affordable? Canon EOS M, Fujifilm XF, and Sony E are probably where you should be looking.
  • New to ILC and want full frame? Sony FE is the first choice, as it's well fleshed out now and there are plenty of lenses to choose from. Nikon has done enough that it should get your attention; they'll be where Sony is within the next 18 months, I'll bet. Canon's line has all kinds of promise, but no clear delivery yet. They, too, will probably be where Sony is soon.
  • DSLR user looking to upgrade? Start with examining the logical upgrade for you (e.g. D7000 to D7500, maybe D600 to D750, D3x to D850, etc.; see my Ultimate Camera Upgrade Advice, which I update every year). If that leaves you wanting, then figure out what the missing element is and look for that in someone else's product (which may be mirrorless).

What most people upgrading tend to overlook in their decision making is that (a) they may need to learn a completely new system; and (b) they may need entirely new lens sets and accessories. I see a lot of people who approach me like this: "I'm a D7000 user and it's time to upgrade; I'm thinking about the Sony A7m3..." 

That may or may not be a good choice for that person. I generally have to ask questions back when I get such queries to give a reasonable answer (as should any good camera store salesperson). A lot of the time, though, the questioner has been tempted by marketing, Internet hype, or FOMO (fear of missing out). In practice, an answer of "buy a D7500" is often still the right one for that person, especially if they update a lens or two.

While I've written about "leakers" and "samplers" before, I haven't mentioned that I see a fair number of "returners", too. I've got a big data file of folk that went mirrorless and came back to DSLR. Over time, as the mirrorless systems get more mature and fully fleshed out, I suspect that I won't see so many returnees in the future, but up through today? Still seeing them.

Nikon doesn't make things any easier, either. 

As I write this, I can buy a D850 for US$3000 or a Z7 with FTZ adapter for US$2800. Image quality-wise, they're near identical. Feature-wise, they're pretty darned close. I happen to think the D850 is a more well-rounded camera than the Z7, but the Z7 isn't exactly a slouch at that. It's really close to my #2 choice in well-rounded, the A7Rm3. So close that sometimes I think I'm splitting hairs. 

So which of those do you pick? 

It gets back to whether you're more rooted in the establishment (DSLR) or the future (mirrorless). 

But then Nikon goes and throws a monkey wrench into things:

As I write this, I can buy a D750 with the 24-120mm f/4 lens for US$1800 or a Z6 with the 24-70mm lens and FTZ adapter for US$2400. Whoa Nelly. Again, in terms of image quality, the sensors in those two cameras are close enough to identical for most users. Feature-wise, they're a bit more different than the D850/Z7 pair, but still close. 

So what happens is that price gets in the way of product rationalization for the potential buyer. You pay a third more for the future in this case (US$600 is not something to ignore, even for a well-heeled customer). Nikon clearly wants you to buy the established. (I should also point out that those rumors of a D750 replacement point to a very tricky problem for Nikon: any D760 needs quite a bit of enhancement, or a lower price, to work in the market now.)

Canon has near equivalent cameras in the 6Dm2 (DSLR) and RP (mirrorless), with the pricing now US$1300 (body only) for either. 

Indeed, it's usually on the price issue that I find most people hesitating on their decisions. At near equal price they tilt towards the future (mirrorless). At high discounts, they tilt towards the established (DSLR).

As I noted up front, there isn't a simple answer. 

More so than ever, you really need to prioritize your needs and your wants, and couple that to a budget. A budget that includes all the extra things you might need to switch to mirrorless (cards, lenses, accessories, etc.). 

Cameras last a long time. As I look back at my images as I reorganize them, I'm very happy with images I took with the D3x, for example. Did I need more camera than that? Not really, even today, though the 36mp/45mp sensors definitely were a modest step forward. 20/24mp is a good solid point for most imagery for most users. Higher than that buys some flexibility and future-proofing, I suppose, but I'm finding quite a few folk now that are over-buying for their needs.

What current cameras have 24mp?

  • Canon: PowerShot G7 X and G1 X, EOS M5, M6, M50, M100, 80D, SL3, T7, 6Dm2 and RP (26mp)
  • Fujifilm: X-T30 and X-T3 (26mp), XF10, X-T100, X-H1, X-A5, X-E3, X100F
  • Nikon: D3500, D5600, D610, D750, Z6
  • Panasonic: S1
  • Pentax: KP, K-70
  • Ricoh: GRIII
  • Sony: A6000, A6300, A6400, A6500, A7 (m1 to m3)

That's a lot of choice, ranging from compact cameras with large sensors to full frame DSLRs and mirrorless. 

So again, sort through your requirements (needs) and wants. The answer becomes clearer as you scratch things off the list that don't meet those. As it always has.


Nikon 2019 News

In these folders you’ll find the several hundred news and commentary articles about Nikon and DSLR cameras that appeared on this site in 2019:


Nikon 2018 News

In these folders you’ll find the several hundred news and commentary articles about Nikon and DSLR cameras that appeared on this site in 2018:

Nikon 2017 News

In these folders you’ll find the several hundred news and commentary articles about Nikon and DSLR cameras that appeared on this site in 2017:


text and images © 2019 Thom Hogan
portions Copyright 1999-2018 Thom Hogan -- All Rights Reserved
Follow us on Twitter: @bythom, hashtags #bythom, #dslrbodies