BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Computational Video: Methods for Video Segmentation and Video Stab
 ilization\, and their Applications.  - Irfan Essa\, Georgia Institute of T
 echnology
DTSTART:20140818T100000Z
DTEND:20140818T110000Z
UID:TALK53737@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:In this talk\, I will present two specific methods for Computa
 tional Video and their applications. \n\nFirst I will describe a method fo
 r Video Stabilization. I will describe a novel algorithm for video stabili
 zation that generates stabilized videos by employing L1-optimal camera pat
 hs to remove undesirable motions. Our method allows for video stabilizatio
 n beyond conventional filtering\, that only suppresses high frequency jitt
 er. An additional challenge in videos shot from mobile phones is rolling s
 hutter distortion. We propose a solution based on a novel mixture model of
  homographies parametrized by scanline blocks to correct these rolling sh
 utter distortions. Our method neither relies on a priori knowledge of the
  readout time nor requires prior camera calibration. Our novel video stabil
 ization and calibration-free rolling shutter removal have been deployed on
  YouTube where they have successfully stabilized millions of videos. We al
 so discuss several extensions to the stabilization algorithm and present t
 echnical details behind the widely used YouTube Video Stabilizer\, running
  live on youtube.com.\n\nSecond\, I will describe an efficient and scalable
  technique for spatio-temporal segmentation of long video sequences using 
 a hierarchical graph-based algorithm. We begin by over-segmenting a volume
 tric video graph into space-time regions grouped by appearance. We then co
 nstruct a region graph over the obtained segmentation and iteratively re
 peat this process over multiple levels to create a tree of spatio-temporal
  segmentations. This hierarchical approach generates high-quality segmen
 tations\, and allows subsequent applications to choose from varying levels
  of granularity. We demonstrate the use of spatio-temporal segmentation as
  users interact with the video\, enabling efficient annotation of objects 
 within the video. This system is now available for use via the videosegmen
 tation.com site. I will describe applications of this system to dynamic s
 cene understanding. \n\nThis talk is based on research by Matthias Grundm
 ann\, Daniel Castro\, and S. Hussain Raza\, carried out while they were s
 tudents at GA Tech. Some parts of the work described above were also done
  at Google\, where Matthias Grundmann\, Vivek Kwatra\, and Mei Han now wo
 rk\, and where Professor Essa is a Consultant. For more details\, see htt
 p://prof.irfanessa.com/ \n
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
