
From Pixels to 3D Motion: Modelling the Physical Natural World from Images


If you have a question about this talk, please contact Miss Naidine Escoffery.

Pixel-based generative models now excel at creating compelling images, but they often struggle to preserve basic physical properties such as shape, motion, material, and lighting. These properties are critical for bridging computer vision to a wide range of real-world engineering applications, from interactive VR, robotics, design, and manufacturing to scientific domains such as biology and medical analysis. A fundamental challenge in computer vision is shifting from modeling pixel distributions to modeling physics-grounded representations, and characterizing 3D motion is a key stepping stone along this path.

In this talk, I will mainly discuss a line of research that attempts to model dynamic 3D objects from casually recorded, in-the-wild images and videos, without any direct 3D supervision, an approach applicable to a variety of natural objects such as wildlife. The resulting model can turn a single image into an animatable 3D asset in a feed-forward fashion and generate 3D animations instantly.

This talk is part of the ne289's list series.



© 2006-2025 Talks.cam, University of Cambridge.