What the Lytro Pivot Means

Tuesday Afternoon

Venture-backed Silicon Valley firms aren’t exactly doing well at revolutionizing imaging, yet they are.

What do I mean by that seemingly contradictory statement? 

Well, Foveon, for instance, didn’t exactly take over the sensor marketplace as they originally sought, but they did produce a sensor that is still unique and offers advantages over conventional imaging sensors in some situations. A lot of people don’t know, for example, that after Sigma bought Foveon, they didn’t just put that sensor in their cameras; they also used it behind the scenes in lens development and production for resolution testing. Indeed, I suspect that many of the real gains we’ve seen in Sigma’s recent lenses are partially due to that unseen use of the Foveon sensor.

But as a venture capital investment, Foveon was a bust.

GoPro did better, and revolutionized one sector of the video market, though they’re now struggling with both competitors and the fact that they saturated what may turn out to be a smallish niche market. I’ve seen fewer GoPro-for-production-video accessories and options than I expected, the most common being Google Glass-like monitoring systems for shooters in action.

Lytro took two turns in Silicon Valley at building a new still imaging product that would revolutionize the photography world. The whole claim to fame of those cameras was really “focus after the fact.” Of course, you didn’t even get a 1080p image to work with, and you had to use Lytro’s own software, which had its own issues, to do the refocusing.

Now Lytro has admitted that their still camera idea didn’t go anywhere and has pivoted to a new place down the road from Silicon Valley: Hollywood. Here at NAB, Lytro was touting the new Lytro Cinema camera that they’ll rent to you, using the opportunity to show off and explain a short film they made with the new rig.

Why rent? Because the Lytro Cinema isn’t exactly a mass production device: 755MP raw capture at up to 300 fps. That generates 400GB of data a second. Not exactly going to be something you stick on an SD card, is it? ;~)
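A quick sanity check on that number (Lytro doesn’t specify the per-pixel bit depth in its press materials, so the 14 bits here is my assumption): 755,000,000 pixels × 300 frames per second × 14 bits per pixel ≈ 3.2 terabits per second, which works out to just under 400GB per second. The claim holds up.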

The pitch Lytro is making to Hollywood is still based on the Light Field technology—microlenses in front of the sensor that bend incoming light so the camera records not just how much light arrived at each point, but which direction it came from—but with a few twists that very well may make it successful.
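For the technically curious: the usual way to describe what such a camera records is the two-plane parameterization of the light field, L(u, v, s, t), where (u, v) marks where a ray crosses the lens plane and (s, t) marks where it crosses the sensor plane. Capture enough samples of that four-dimensional function and refocusing, viewpoint shifts, and depth estimation all become after-the-fact computation rather than decisions locked in at the moment of exposure.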

First, there’s the sheer data volume captured. In essence, the Lytro Cinema is attempting to collect a 3D set of data about the scene, and then via software pull that apart in ways that allow you to separate every object in the scene. They’re doing that at way more than HD or 4K levels, too (Lytro claims equivalence to 40K video). What that means is that everything in the scene is replaceable with 3D software. No more traditional green screens: simply send the data from the camera directly into your 3D modeling system. Mix, match, and replace items as you desire.

But beyond that, the sheer immensity of the data capture allows other things, such as post-production dolly moves, camera shifts, focus shifts, depth of field changes, and shutter speed simulations. You can even say “anything beyond X distance should be blanked out (made transparent)” so that you can layer the foreground material captured by the Lytro Cinema over a modeled or differently captured background.
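To make that depth-clipping idea concrete, here’s a minimal sketch of the operation on generic RGB-plus-depth data. To be clear, this is my own illustration, not Lytro’s actual pipeline; the frame size and the 10 meter cutoff are invented for the example:

```python
import numpy as np

def clip_beyond(rgb, depth, max_distance):
    """Make every pixel farther than max_distance fully transparent.

    rgb:   (H, W, 3) uint8 color frame
    depth: (H, W) float32 per-pixel distance, in meters
    Returns an (H, W, 4) RGBA frame with alpha=0 beyond the cutoff.
    """
    alpha = np.where(depth <= max_distance, 255, 0).astype(np.uint8)
    return np.dstack([rgb, alpha])

# Stand-in data for one captured frame (a real capture would supply
# these). Everything past 10m goes transparent, ready to layer over
# a modeled or separately shot background.
rgb_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
depth_frame = np.full((1080, 1920), 5.0, dtype=np.float32)
foreground = clip_beyond(rgb_frame, depth_frame, max_distance=10.0)
```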

None of this comes cheap, and that’s part of the point of the pivot, I’m sure. Lytro will be renting the camera to Hollywood and other production companies, as well as providing cloud and plug-in services. In a way, they’ve taken on the technology side for realistic imaging much as Pixar did for animation. I suspect that this will be a success for Lytro, especially if the reality is even close to the claims in the press materials and demos. I shared a cab ride over to the convention center with two Disney execs, and the one booth they wanted to hit was the Lytro booth.

So what’s this have to do with the photography audience reading this site? 

Simple: Lytro was way too early in attempting to get light field into the hands of the masses. Yet we’re starting to see simpler approaches hit the shelves, particularly the two-sensor smartphone models. While two sensors don’t give you a true “light field,” they do give you additional useful data, which is exactly what light field provides in spades.
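As a sketch of what that extra data buys you: with a calibrated two-camera pair, per-pixel disparity converts to depth with nothing more exotic than OpenCV’s stock block matcher. The file names and calibration numbers below are placeholders I’ve made up, not anything a particular phone actually exposes:

```python
import cv2
import numpy as np

# A rectified grayscale pair from the two cameras (placeholder files).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparity with 4 fractional bits.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth by similar triangles: depth = focal length * baseline / disparity.
focal_px, baseline_m = 1400.0, 0.012   # made-up calibration values
depth_m = np.zeros_like(disparity)
valid = disparity > 0
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```

The resulting depth map is exactly the kind of “additional useful data” I mean: once you have it, background blur, object separation, and the like all become software problems.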

We’re going to see more and more of this “additional data” gaining attention as we move further into the digital era. The traditional camera companies are lagging far behind. Indeed, they seem to have only recently discovered bullet time (the multi-camera technique that produced the Matrix-style stop-time-and-move-position effects you might have seen, especially in SciFi and fantasy productions).

At least today I can buy parts off the shelf to fire multiple Nikon bodies simultaneously and make my own bullet time rig. But where is the data connection and the software to make quick and effective use of that? 

What if I set up four Nikon D810s in different positions and just took multiple pictures of a landscape? Manually, I could build a pretty decent depth map from that data. But Nikon’s doing nothing to help me with that, and as far as I know, they’re not really working with software companies to build the right data set for third-party software.

So my point, I suppose, is this: Silicon Valley sees the future well. Sometimes so well that it gets way ahead of the actual market possibilities, as Lytro did with its still camera initiative. The Japanese camera companies see the future far less well. They tend to see now, not the future. Sometimes so poorly that their products fall behind the actual market possibilities.

Serious shooters are going to be using multiple imagers simultaneously at some point in the not-too-distant future. They’ll be doing that because it gives them some advantage, and that advantage will come from the software handling the data the multiple-camera set creates.

One benefit of looking orthogonally at a market, as I am here at the NAB convention, is that sometimes you can more easily see the future of your own discipline by looking at another.

If you think Photoshopping today is “magic” with all its cloning and smart healing and layering capabilities, imagine a future where PhotoPieces basically lets you completely model the scene you shot in 3D and then move things in and out while leaving the rest of the image parameters intact. Want a tree behind your subject? Just remove what was there and replace it with tree data. Need to remove that soda can from your pristine landscape? Done, and replaced with what was around the can automatically. Call it Location-Aware Fill.

In essence, Photoshop “layers” are going to be supplanted by “3D object manipulation.” 
