[FoRK] ExtremeTech: Pelican Imaging promises freedom from focusing

Stephen Williams sdw at lig.net
Fri Apr 12 11:04:46 PDT 2013


Good details, some minor mistakes:
"Taking advantage of the CPU, GPU and ISP on high-speed mobile designs" -> "Taking advantage of the CPU, GPU and DSP on 
high-speed mobile designs".
The photo of the sensor is my macro product shot.

http://www.extremetech.com/computing/152761-pelican-imaging-promises-freedom-from-focusing
Pelican Imaging promises freedom from focusing

     By David Cardinal on April 12, 2013 at 9:30 am

[Featured image: multiple focus points captured at once]

We all know that second of terror after pressing the shutter button — waiting for the camera to focus while the scene disappears 
before our eyes. Camera phone users have it the worst, with autofocus often taking a second or more. Pelican Imaging aims to 
free all of us from focus anxiety by eliminating the need for focusing altogether. Images come out of its camera module entirely 
in focus, from foreground all the way to background. Users will be able to fiddle with the focus after the fact — like with a 
Lytro — but don’t have to. It is a true “fire and forget” solution to the problem of autofocus.

What makes Pelican’s approach unique is that, unlike other array-based or so-called plenoptic cameras, it uses a small array 
(4×4 and 5×5 in its first designs) of traditional imaging elements — each one sensitive to a single color. By contrast, Lytro 
and other plenoptic designs, like the newly announced Toshiba chip, use a sensor composed of hundreds of thousands of multipixel 
elements, each creating a mini version of the image. Pelican’s unique approach provides it with one huge differentiator — its 
sensor module is self-contained, with no additional lens required. The tiny lenses mounted over each active area of its sensor 
are all that is required to form an image. The result is a dramatically thinner and less expensive camera module. Pelican’s 
modules can be made under 3mm high — a low enough “z height” to be designed into just about any smartphone.
Finding depth through parallax

[Image: Pelican 4×4 imager array]

The basic idea behind Pelican’s design is not new. Light field photography — defined as the capture of
not just the amount of light falling on a sensor, but also the direction of that light — has been around for a while. Recently 
it has found its way into a consumer product in the form of Lytro’s camera. Lytro uses hundreds of thousands of “super pixels” 
that feature specialized microlenses over groups of adjacent photosites. Its design allows it to capture the light coming from 
each of several directions on each of those groups of pixels. By knowing the direction of each ray, Lytro can recompute focus 
after the image is taken. In exchange, it sacrifices resolution, since each group of pixels effectively acts as a single pixel. Its 11 million photosite camera captures 11 “megarays,” but produces a final image of just over 1MP.
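The trade-off described above can be sanity-checked with back-of-envelope arithmetic. The figures below come straight from the article's own numbers (11 million photosites, roughly 1MP of output); nothing here reflects Lytro's actual internal design:

```python
# Back-of-envelope check of the plenoptic resolution trade-off: if 11
# million photosites ("megarays") yield about 1MP of output, roughly
# eleven directional samples are folded into each final pixel.
def rays_per_output_pixel(photosites, output_pixels):
    """Average number of captured rays spent on each output pixel."""
    return photosites / output_pixels

print(rays_per_output_pixel(11_000_000, 1_000_000))  # 11.0
```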

Pelican turns the traditional plenoptic design on its head, in what in hindsight seems like a straightforward and simple-to-manufacture approach. It divides its imager up into an array of “mini cameras” — typically only a few in each dimension. Each
tiny imager only records a single color. It is probably no accident that this design harkens back to the original camera array 
work at Stanford, since Pelican advisor Marc Levoy was one of the guiding lights of that effort.

While there is some cleverness involved in the arrangement of the sensor components — they aren’t in a strict Bayer pattern, for 
example — CTO Kartik Venkataraman is quick to stress that the real magic is in the image processing software that reassembles a 
full-resolution image and detailed depth map from the many low-resolution versions captured by each section of the imager. In 
its 4×4 reference design the 16 0.75MP images are processed and reassembled into an 8MP final JPEG version, complete with
embedded depth map.
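As a loose intuition for how many low-resolution views can be fused into one higher-resolution image, consider the ideal case. This sketch is emphatically not Pelican's algorithm — their software must first recover per-pixel registration via the depth map — and it cheats by assuming four views already offset by exactly half a pixel:

```python
import numpy as np

# Toy illustration of fusing registered low-resolution views into a
# higher-resolution grid. Assumes each of four views sits at a known
# half-pixel offset, so each view simply fills one sub-grid of the output.
def interleave_2x2(frames):
    """frames: dict mapping (row_offset, col_offset) in {0,1}^2 to HxW arrays."""
    h, w = frames[(0, 0)].shape
    hi_res = np.zeros((2 * h, 2 * w), dtype=frames[(0, 0)].dtype)
    for (dy, dx), view in frames.items():
        hi_res[dy::2, dx::2] = view  # slot each view into its sub-grid
    return hi_res

# A known 4x4 "scene" split into four 2x2 views and reassembled losslessly:
scene = np.arange(16).reshape(4, 4)
views = {(dy, dx): scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}
assert np.array_equal(interleave_2x2(views), scene)
```

In the real module the offsets vary per pixel with scene depth, which is exactly why the depth map and the full-resolution image have to be solved together.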

Venkataraman and Pelican’s new CEO, Chris Pickett, clearly took great pride in describing Pelican’s unique software to me, as 
they should. The image reassembly process has to tackle a true chicken and egg situation. The Pelican sensor doesn’t know either 
the light source — the way Kinect or LEAP devices do by providing their own — or the distance of the subjects in the scene.
Typically, once the distance to an object is known, it is relatively simple to calculate how it registers on multiple sensors 
and combine multiple images into one.

Conversely, once it is known how an image or light source registers on two different sensors, it’s possible to use the math of 
parallax to calculate how far away it is. Pelican’s software starts by knowing neither distance nor registration and calculates
both. More amazingly, it can do it in near real-time on a sufficiently powerful mobile processor — like the Qualcomm Snapdragon 
800 it uses for demos.
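The parallax relationship the article alludes to is standard stereo triangulation: once the registration (disparity) between two sub-cameras is known, depth follows directly. The numbers below are illustrative, not Pelican's actual calibration:

```python
# Classic stereo triangulation: depth = focal_length * baseline / disparity.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance to a point from its pixel shift between two views.

    focal_px     -- lens focal length expressed in pixels
    baseline_m   -- spacing between the two sub-cameras, in meters
    disparity_px -- shift of the point between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity (or a bad match)")
    return focal_px * baseline_m / disparity_px

# A 2.5-pixel shift, 5mm baseline, 500-pixel focal length -> 1 meter away.
print(depth_from_disparity(500, 0.005, 2.5))  # 1.0
```

Note the hard part Pelican solves is upstream of this formula: finding which pixel in one sub-image corresponds to which pixel in another, without knowing depth in advance.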

It’s easy to dismiss a canned video demo, so I stopped by Pelican to see for myself. If anything the samples it has on the web 
underhype the possibilities of the technology. Watching it work in real time in various situations around Pelican’s offices was
really fun, and made me want to be able to use the final product.
Moore’s law to the rescue: Timing is everything

The algorithms that Venkataraman has incorporated in the Pelican software are revolutionary. They are also processor intensive. 
When he co-founded the company in 2008, there weren’t any mobile processors fast enough to run them in real time. Now there are. 
It is no accident that Pelican’s demo at Mobile World Congress used a Snapdragon 800. Taking advantage of the CPU, GPU and ISP 
on high-speed mobile designs is crucial to making the Pelican imager work. Recognizing its reliance on tight integration with 
mobile device architectures, and the reality of its small size, Pelican has elected to license technology and design expertise 
to hardware vendors, rather than going it alone with its own finished products. It expects to begin announcing partnerships
soon, with products in the marketplace by the first half of next year.

In addition to entirely removing autofocus delay, Pelican also removes the noise that goes with it. Imagine having complete 
control over focus in a video without any distracting focus motor noise. Even shot to shot time should be minimized with the 
Pelican design, as post processing can be delayed until the processor and camera are idle.
Does this mean more trouble for Lytro?

[Image: Stanford prototype plenoptic light field camera array]

At first blush, it’s easy to look at what Pelican is offering and wonder
who will want to buy a Lytro camera, or one of the other much more expensive plenoptic cameras on the market. Even at $399 the 
Lytro doesn’t produce all-in-focus images, requiring user intervention to pick a focus point after the fact. However, the array
of small lenses in the Pelican design can’t do one very important thing that point and shoot owners really want and the Lytro 
can — zoom.

Smartphones are currently built with fixed focal length designs, so the 3mm-high, inexpensive Pelican design is perfect. In
contrast, the relatively large single lens of traditional point and shoots and the Lytro allow them to zoom as well as focus. At 
least for now, Pelican will have to be content with revolutionizing the mobile device space. Fortunately for Pelican, another 
advantage it has over Lytro is that its imager can also capture video — essential in the mobile space.

Pelican isn’t alone in working to minimize autofocus worries. It’ll face competition from the MEMS-based offering from Digital
Optics Corporation (DOC) that features a reduced 200ms AF acquisition time at a similar price point. The DOC module stops at 
fast AF though. It doesn’t offer refocusing or any depth information for later processing.

Didn’t Nokia try this with Extended Depth of Field?

Another much-heralded attempt to deliver focus-free photography has been the Extended Depth of Field (EDOF, or “full focus”) 
camera modules. Using carefully-designed aspherical lenses and some after-the-shot image reconstruction, EDOF imagers can render 
almost all of an image scene in focus at the same time.

Nokia in particular released several phones featuring EDOF imagers. Unfortunately, EDOF technology doesn’t allow close focusing. 
With business cards, meals, and macro photographs being popular smartphone camera subjects, the 40cm or so minimum focus 
distance for EDOF imagers just isn’t close enough for most people. By contrast, Pelican’s designs can focus down to 15cm — less 
than 6 inches from the camera.

Camera plus depth and gesture control, all for under $20

[Image: CEO Chris Pickett shows me a realtime depth map on a Qualcomm tablet using the Pelican imager]

While freedom from focusing is the
most obvious benefit of Pelican’s new imager design, that’s really just the beginning of what’s possible with the full depth map 
that it creates while processing the image. The map is accurate enough for gesture detection, for example, making it plausible 
that a single front-facing camera could be used both as a webcam and for gesture control of a computer or phone. It’s no LEAP 
Motion, so don’t expect to be able to sign your name in the air with it, but swiping aside windows or rotating objects on the 
screen should be a no-brainer.
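A coarse depth map is enough for swipe-style gestures because only the gross motion of the nearest blob matters. A toy sketch of the idea — the 400mm "near" cutoff and frame shapes are invented for illustration, and bear no relation to Pelican's actual gesture pipeline:

```python
import numpy as np

# Toy gesture detector over a sequence of coarse depth maps: threshold
# for "near" pixels (presumably a hand), then watch the horizontal
# centroid drift across frames.
def swipe_direction(depth_frames_mm, near_mm=400):
    centroids = []
    for depth in depth_frames_mm:
        _, cols = np.nonzero(depth < near_mm)
        if cols.size:
            centroids.append(cols.mean())
    if len(centroids) < 2:
        return None  # nothing near enough, or too few frames
    return "right" if centroids[-1] > centroids[0] else "left"

# A near blob (200mm) sweeping left-to-right across a far (1000mm) scene:
frames = []
for col in range(2, 8):
    f = np.full((4, 10), 1000)
    f[1:3, col] = 200
    frames.append(f)
print(swipe_direction(frames))  # right
```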

To accommodate these types of applications, the Pelican imager generates a low-resolution depth map in real time (yes, I saw it demoed and it’s pretty cool), which it can later process into a full-resolution depth map for more detailed
applications. Other applications could include “all-in-one” auto backup cameras that are also distance detectors, and of course 
video games.

Pelican believes that in volume its imager designs should cost under $20 to produce, a similar price to the camera modules used 
in high-end smartphones like the Apple iPhone 5 today.

On 2/23/13 11:52 PM, Stephen D. Williams wrote:
> Newly updated:
> http://www.pelicanimaging.com/
>
> Some things published about us, although with some questionable comments:
> http://image-sensors-world.blogspot.com/2013/01/london-image-sensor-conference.html
> http://image-sensors-world.blogspot.com/2012/11/pelican-imaging-capabilities-presented.html
>
> Stephen

-- 
Stephen D. Williams sdw at lig.net stephendwilliams at gmail.com LinkedIn: http://sdw.st/in
V:650-450-UNIX (8649) V:866.SDW.UNIX V:703.371.9362 F:703.995.0407
AIM:sdw Skype:StephenDWilliams Yahoo:sdwlignet Resume: http://sdw.st/gres
Personal: http://sdw.st facebook.com/sdwlig twitter.com/scienteer


