[FoRK] It's Time to Start 3D Scanning the World

Stephen Williams sdw at lig.net
Wed Jan 11 16:26:14 PST 2012

On 1/11/12 10:00 AM, Ken Meltsner wrote:
> On Wed, Jan 11, 2012 at 9:22 AM, Stephen Williams<sdw at lig.net>  wrote:
>> No special hardware is needed to get very good 3D scans, once we get a
>> little more CPU and improve the algorithms just slightly.  Stereo vision,
>> using just a single camera moved around a bit, is plenty to get pretty good
>> data.
> You can even do it with a single camera in one shot and without special
> light sources.  Mike Bove at the MIT Media Lab did his dissertation on
> extracting 3-D information using depth of field -- basically, you have a
> special lens that splits the viewed scene into two pictures with different
> depths of field.  After lots of computation, you can determine how far away
> any part of the scene is.
> Twenty-five years ago, this was a serious computational task, but it might
> be feasible today with much, much less hardware.  The technique did work
> best when the object in question had a visible pattern or texture so that
> the program could determine how well focused each image section was.

Yes, computing the effective resolution / sharpness of two images with different depths of field is a good method.  I read an article not 
long ago that, based on some new evidence, postulated that the human visual system focuses rapidly using depth-of-field estimation 
from just a couple of samples.
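The core of that sharpness comparison can be sketched in a few lines. This is a toy illustration, not anyone's actual implementation: it uses a simple gradient-energy measure on tiny synthetic patches, where a real depth-from-defocus system would use calibrated blur models over real camera frames. The patch values and the `sharpness` / `nearer_focal_plane` names are made up for the example.

```python
# Toy depth-from-defocus sketch: for each image patch, compare a local
# sharpness measure across two images focused at different depths, and
# assign the patch to whichever focal plane gives the sharper response.

def sharpness(patch):
    """Gradient-energy sharpness: sum of squared differences
    between horizontally adjacent pixels."""
    return sum(
        (row[i + 1] - row[i]) ** 2
        for row in patch
        for i in range(len(row) - 1)
    )

def nearer_focal_plane(patch_near_focus, patch_far_focus):
    """Return 'near' if the patch is sharper in the near-focused
    image, else 'far'."""
    if sharpness(patch_near_focus) > sharpness(patch_far_focus):
        return "near"
    return "far"

# A textured patch as seen in the near-focused image (crisp edges)...
crisp = [[0, 10, 0, 10], [10, 0, 10, 0]]
# ...and the same patch in the far-focused image (edges smeared out).
blurred = [[4, 6, 4, 6], [6, 4, 6, 4]]

print(nearer_focal_plane(crisp, blurred))  # -> near
```

As Ken notes above, this only works where the surface has visible texture; a featureless patch scores near zero sharpness in both images and gives no depth signal.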

Add parallax, lighting change, occlusion tracking, visual-flow-rate differentials, and 3D surface-model fitting when you can get more 
than one image (preferably video), and you're in good shape.  Combining these cues is best because, if you are clever, you can 
drastically cut the total computation.
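Of those cues, parallax is the easiest to write down. A minimal sketch, assuming a calibrated camera translated sideways between two shots: by similar triangles, a feature's depth is Z = f * B / d, where f is the focal length in pixels, B the baseline of the camera motion, and d the feature's horizontal disparity in pixels. The numbers below are illustrative, not from any real rig.

```python
# Parallax / stereo triangulation sketch: depth from the disparity a
# feature exhibits between two views separated by a known baseline.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a feature via Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera translation
    in metres; disparity_px: horizontal pixel shift of the feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, camera moved 0.1 m, feature shifted 20 px:
print(depth_from_disparity(800.0, 0.1, 20.0))  # -> 4.0 (metres)
```

The hard part, of course, is not this formula but finding reliable correspondences, which is exactly where the other cues (occlusion tracking, flow differentials, surface-model fitting) help prune the search and cut computation.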

Lately, I've been thinking about dynamic resolution / attention optimization, although I don't think the camera systems are set up 
for that yet.

> Ken Meltsner

