Perceptual quality metric and loss function for 3D and temporal consistency
If you have a question about this talk, please contact Yancheng Cai.

To better train and evaluate 3D reconstruction methods (NeRF, Gaussian Splatting) or 3D generative models, for both static (3D) and dynamic (4D) scenes, we will develop a new full-reference quality metric and a no-reference loss function. These will be trained and validated on a new 4D quality dataset, with subjective quality measured in stereoscopic presentation (e.g., on a VR headset). The developed techniques will improve the 3D and temporal consistency of rendered views, resulting in fewer temporal artefacts. They will also allow automatic hyper-parameter tuning and more reliable evaluation and comparison of 3D rendering techniques.

This talk is part of the Rainbow Group Seminars series.
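For readers unfamiliar with no-reference temporal-consistency losses, the sketch below illustrates the general idea in PyTorch: penalising differences between consecutive rendered frames so that flicker is discouraged. This is a hypothetical, minimal illustration only, not the metric or loss developed in this work, which would additionally need to handle scene and camera motion and perceptual weighting.

```python
import torch


def temporal_consistency_loss(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, C, H, W) tensor of T consecutive rendered frames."""
    # Penalise frame-to-frame differences; in a naive setting (static camera,
    # static scene) large differences indicate flicker / temporal artefacts.
    diffs = frames[1:] - frames[:-1]
    return diffs.abs().mean()


if __name__ == "__main__":
    # Example: 8 rendered RGB frames at 64x64 resolution (random placeholders).
    rendered = torch.rand(8, 3, 64, 64, requires_grad=True)
    loss = temporal_consistency_loss(rendered)
    loss.backward()  # gradients could flow back into the renderer's parameters
    print(float(loss))
```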