
Capture of a dynamic scene using multiple cameras provides rich spatio-temporal information that can be used to solve challenging computer vision problems.


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image or voice recorded, please consider this before attending.

Indeed, multi-view systems have been used by various computer-vision methods over the last decade, with the camera setup depending on the application. In this talk, I will discuss the use of multi-view systems in two "distant" setups. First, I will present the use of a calibrated and synchronized camera array for solving the problem of dense 3D structure and 3D motion estimation. Then, I will consider the scenario of a group of people asynchronously capturing a dynamic scene with what is probably the most popular photographic device today: the cellphone. The combined data obtained this way can be regarded as the output of a new type of extended camera, which we call a crowd-based camera (or CrowdCam). This new setup introduces the novel problem of photo sequencing, that is, temporally ordering a set of still images taken by a CrowdCam, and I will present a geometry-based solution to it. We believe that photo sequencing is an essential step in analyzing a dynamic scene from a set of still images.
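The geometry-based solution itself is the subject of the talk; purely as an illustrative sketch of the kind of reasoning involved (and not the method presented in the talk), the toy Python snippet below orders photos by the position of a single tracked moving feature along its dominant direction of motion. The function name, the assumption that all photos have been aligned to a common reference view via the static background, and the assumption of roughly straight-line motion are illustrative assumptions, not claims about the talk's approach.

    # Illustrative sketch only -- not the method presented in the talk.
    # Assumption: a single moving feature has been tracked, and its 2D position
    # (after aligning all photos to a common reference view using the static
    # background) is known in every photo. If the feature moves roughly along a
    # straight line during the capture window, the photos can be ordered by the
    # feature's 1D coordinate along that line.

    import numpy as np

    def order_photos(positions):
        """positions: (N, 2) array with the moving feature's 2D position,
        one row per photo, in a common reference frame.
        Returns photo indices sorted into an estimated temporal order
        (up to an overall reversal, which geometry alone cannot resolve)."""
        pts = np.asarray(positions, dtype=float)
        centered = pts - pts.mean(axis=0)
        # Principal direction of the point cloud approximates the motion direction.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        direction = vt[0]
        # 1D coordinate of each photo's feature position along the motion direction.
        coords = centered @ direction
        return np.argsort(coords)

    # Hypothetical usage: five photos of an object moving left to right, taken out of order.
    positions = [(120, 80), (40, 78), (200, 83), (90, 79), (160, 81)]
    print(order_photos(positions))  # e.g. [1 3 0 4 2]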

This talk is part of the Microsoft Research Machine Learning and Perception Seminars series.
