[Beowulf] cluster for doing real time video panoramas?

Robert G. Brown rgb at phy.duke.edu
Wed Dec 21 12:48:13 PST 2005


On Wed, 21 Dec 2005, Jim Lux wrote:

> OK, all you cluster fiends... I've got a cool application (for home,
> sadly, not for work, where I'd get paid to fool with it)...
>
> I've got a robot on which I want to put a half dozen or so video cameras
> ("video" in that they capture a stream of images, but not necessarily
> that they put out analog video...) with overlapping fields of view.  I've
> also got some telemetry that tells me the orientation of the robot.  I
> want to take the video streams and stitch them (in near real time) into
> a spherical panorama that I can then render from a corrected viewpoint
> (based on orientation) to "stabilize" the image.

Is the goal to get a spherical panorama in 2D, or to reconstruct a
polynocular representation of the 3D facing surfaces?  That is, is the
robot trying to "know where it is" and how far away things are in its
field(s) of view, or just generating a 2D point-projective representation
of the incoming light?

The second question is how you are going to handle the map.  Patches?
Spherical coords?  A triangular covering?  I ask because (of course) there
is no nice mapping between the Cartesian coordinates implicit in most
video cameras and the spherical coordinates \theta, \phi of a
point-projective view I(\theta, \phi) (incoming light intensity as a
function of position on the projective sphere).  This leaves you with a
long-term problem in handling nonlinear Cartesian-to-whatever pixel
transformations, as well as an addressing problem.  This could well be the
MOST expensive part of the computation, as it involves transcendental
function calls that are some 1000x slower than the ordinary flops in a
linear transform.
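
The saving grace is that the pixel -> (\theta, \phi) map for each camera
is fixed by the mounting geometry, so you can eat the transcendentals
exactly once at startup and do pure table lookups per frame.  A minimal
sketch in C (assuming a simple pinhole model; WIDTH, HEIGHT, and the focal
length F are made-up parameters, not anything from panotools):

  /* Build a per-camera lookup table mapping each pixel to (theta, phi)
   * on the projective sphere.  Pinhole model, boresight along +z.
   * All parameters are illustrative assumptions. */
  #include <math.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define WIDTH  640      /* sensor width in pixels (assumed)      */
  #define HEIGHT 480      /* sensor height in pixels (assumed)     */
  #define F      500.0    /* focal length in pixel units (assumed) */

  int main(void)
  {
      double *theta = malloc(WIDTH * HEIGHT * sizeof *theta);
      double *phi   = malloc(WIDTH * HEIGHT * sizeof *phi);
      if (!theta || !phi) return 1;

      for (int v = 0; v < HEIGHT; v++) {
          for (int u = 0; u < WIDTH; u++) {
              /* Ray through pixel (u,v), in camera coordinates */
              double x = u - WIDTH / 2.0;
              double y = v - HEIGHT / 2.0;
              double r = sqrt(x*x + y*y + F*F);

              /* The expensive transcendental calls live here -- and
               * ONLY here, because this table is built once, not once
               * per frame. */
              theta[v*WIDTH + u] = acos(F / r);   /* polar angle     */
              phi[v*WIDTH + u]   = atan2(y, x);   /* azimuthal angle */
          }
      }

      printf("corner pixel (0,0): theta = %g, phi = %g\n",
             theta[0], phi[0]);
      free(theta);
      free(phi);
      return 0;
  }

After startup the per-frame warp is pure addressing (plus whatever
interpolation you like), which moves the cost from transcendental calls to
memory bandwidth and cache behavior -- a much more tractable problem.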

    rgb

(who faces similar problems in spherical decompositions in some of his
research, and finds them to be a real PITA.)

>
> So... you can get cheap 1394 (FireWire) video cameras from a variety of
> sources.  There's a package of tools for doing panoramas, Panorama Tools
> (panotools) from Helmut Dersch, which I've used successfully with still
> frames (but not video!) and which can do all the needed camera
> transformations and resampling (I think).
>
> But then, how do you do the real work?  Should the camera recalibration
> all be done on one processor?  Should each camera (or pair) get its own
> CPU, which builds its part of the overall spherical image and hands it
> off to yet another processor, which "looks" at the appropriate part of
> the video image and sends that to the user?
>
> Here's an example of someone who did video panoramas on a Mac (but not
> in real time, I suspect):
> http://www.vrhotwires.com/InexpensivePanoramicVideo.html
>
> Panotools info at: 
> http://www.panotools.info/mediawiki/index.php?title=Main_Page
>
>
> James Lux, P.E.
> Spacecraft Radio Frequency Subsystems Group
> Flight Communications Systems Section
> Jet Propulsion Laboratory, Mail Stop 161-213
> 4800 Oak Grove Drive
> Pasadena CA 91109
> tel: (818)354-2075
> fax: (818)393-6875
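
On the decomposition question above (one processor per camera vs. one big
recalibration node): the obvious first cut is one rank per camera, each
doing its own fixed warp locally, with a head node gathering the warped
patches and rendering the user's window.  A minimal MPI sketch, where
grab_frame() and warp_to_sphere() are hypothetical stand-ins (NOT real
camera or panotools calls) that just make the thing compile and run:

  /* One rank per camera: warp locally, gather patches on rank 0.
   * grab_frame() and warp_to_sphere() are illustrative stubs. */
  #include <mpi.h>
  #include <stdlib.h>
  #include <string.h>

  #define PATCH_PIXELS (640 * 480)   /* per-camera patch size (assumed) */

  static void grab_frame(unsigned char *buf, int cam)
  {
      memset(buf, cam, PATCH_PIXELS);   /* stub: fake image data */
  }

  static void warp_to_sphere(unsigned char *buf)
  {
      (void)buf;   /* stub: the precomputed LUT remap would go here */
  }

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      unsigned char *patch  = malloc(PATCH_PIXELS);
      unsigned char *mosaic = (rank == 0)
          ? malloc((size_t)size * PATCH_PIXELS) : NULL;

      /* One "frame" of the pipeline: grab, warp locally, gather. */
      grab_frame(patch, rank);
      warp_to_sphere(patch);
      MPI_Gather(patch, PATCH_PIXELS, MPI_UNSIGNED_CHAR,
                 mosaic, PATCH_PIXELS, MPI_UNSIGNED_CHAR,
                 0, MPI_COMM_WORLD);

      /* rank 0 would now render the viewer's window from mosaic[] */
      free(patch);
      free(mosaic);
      MPI_Finalize();
      return 0;
  }

Back of the envelope: six grayscale VGA streams at 30 frames/sec is about
55 MB/sec into the head node, which is fine on gigabit but hopeless on
100BT -- so the interconnect, not the flops, may be what sizes this
cluster.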

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb at phy.duke.edu