Modeling Light for View Synthesis

If you have a question about this talk, please contact Gwangbin Bae.

Abstract

Recent years have seen immense progress in view synthesis, particularly in the many-input setting, largely driven by the use of volumetric scene representations within an inverse rendering framework. However, when it comes to taking a careful, scientific approach to radiometry and image formation, these methods remain relatively immature. In this talk, I will address multiple aspects of “modeling” in view synthesis: how we model outgoing radiance in 3D scene representations, and how we model image formation inside the camera. I will also discuss dataset capture, share some thoughts on the current state of neural volumetric scene representations, and show some cool results.
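
For readers unfamiliar with the volumetric inverse-rendering framework the abstract refers to (e.g. NeRF-style methods), the core rendering operation is numerical quadrature of the volume rendering integral along each camera ray: a pixel's color is an opacity-weighted sum of radiance samples, attenuated by the transmittance accumulated in front of each sample. The following is a minimal NumPy sketch of that compositing step, for illustration only; the function name and array shapes are assumptions, not taken from the talk.

    import numpy as np

    def volume_render(sigmas, colors, t_vals):
        """Composite per-sample densities and colors along a single ray.

        sigmas: (N,) non-negative volume densities at the sample points.
        colors: (N, 3) RGB radiance values at the sample points.
        t_vals: (N,) distances of the samples along the ray, increasing.
        """
        # Distances between adjacent samples (the quadrature bin widths);
        # the last bin is padded with a large value, a common convention.
        deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
        # Per-bin opacity: alpha_i = 1 - exp(-sigma_i * delta_i).
        alphas = 1.0 - np.exp(-sigmas * deltas)
        # Transmittance: probability the ray reaches sample i unoccluded.
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
        # Compositing weights, then the expected color of the ray.
        weights = trans * alphas
        return (weights[:, None] * colors).sum(axis=0)

Because every step is differentiable, gradients of a photometric loss on the rendered color can flow back to the densities and radiance values, which is what makes the "inverse rendering" optimization possible.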

Bio

Ben is a research scientist at Google Research, where he works on problems in computer vision and graphics. He recently received his PhD from UC Berkeley, where he was advised by Ren Ng and supported by a Hertz fellowship. In the summer of 2017, he was an intern in Marc Levoy’s group at Google Research, and in the summer of 2018 he worked with Rodrigo Ortiz-Cayon and Abhishek Kar at Fyusion. He completed his undergraduate degree at Stanford University and worked at Pixar Research in the summer of 2014.

Location

The talk will be given in Lecture Theatre 1 (LT1) at the Engineering Department (Trumpington St, Cambridge CB2 1PZ).

Google Calendar

To get updates on future seminars, please subscribe to the following Google calendar: https://calendar.google.com/calendar/u/0?cid=c2pjcHN0YXM2N3QyMWU3c2FqNjBqNWNiYXNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ

This talk is part of the CUED Computer Vision Research Seminars series.
