Learning Multi-Scene Absolute Pose Regression with Transformers
If you have a question about this talk, please contact Pietro Lio.

Abstract: Absolute camera pose regressors estimate the position and orientation of a camera from the captured image alone. Typically, a convolutional backbone with a multi-layer perceptron (MLP) head is trained with images and pose labels to embed a single reference scene at a time. Recently, this scheme was extended to learn multiple scenes by replacing the MLP head with a set of fully connected layers. In this work, we propose to learn multi-scene absolute camera pose regression with Transformers, where encoders aggregate activation maps with self-attention and decoders transform latent features and scene encodings into candidate pose predictions. This mechanism allows our model to focus on general features that are informative for localization while embedding multiple scenes in parallel. We evaluate our method on commonly benchmarked indoor and outdoor datasets and show that it surpasses both multi-scene and state-of-the-art single-scene absolute pose regressors. We make our code publicly available at: https://github.com/yolish/multi-scene-pose-transformer

BIO: Dr Yoli Shavit is a Principal Research Scientist Manager at Huawei Tel Aviv Research Center (TRC) and a Postdoctoral Researcher at Bar-Ilan University. Before joining Huawei, Yoli worked at Amazon and interned at Microsoft Research. She holds a PhD in Computer Science from the University of Cambridge, an MSc in Bioinformatics from Imperial College London, and a BSc in Computer Science and in Life Science from Tel Aviv University. Yoli is the recipient of the Cambridge International Scholarship, and her thesis was nominated for the best thesis award in the UK. Her current research focuses on deep learning methods for camera localization and multi-view stereo, with recent publications in CVPR, ICCV, ECCV and NeurIPS.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
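The abstract describes the architecture only at a high level. Below is a minimal, illustrative PyTorch sketch of how a multi-scene pose regression transformer along these lines could be wired up: a convolutional backbone feeds a transformer encoder, learned per-scene queries are decoded into candidate poses, and a scene classifier selects which candidate to return. The backbone choice, dimensions, head structure, and scene-selection logic are assumptions for illustration only and do not reproduce the authors' implementation (see the GitHub link above).

```python
# Illustrative sketch only; all design details here are assumptions,
# not the authors' MS-Transformer implementation.
import torch
import torch.nn as nn
import torchvision


class MultiScenePoseTransformer(nn.Module):
    def __init__(self, num_scenes: int, d_model: int = 256):
        super().__init__()
        # Convolutional backbone producing an activation map (ResNet-34 assumed here).
        backbone = torchvision.models.resnet34(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, H/32, W/32)
        self.proj = nn.Conv2d(512, d_model, kernel_size=1)

        # Encoder aggregates the flattened activation map with self-attention;
        # the decoder attends from learned per-scene queries to the encoded features.
        # (Positional encodings are omitted for brevity.)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True,
        )
        # One learned query ("scene encoding") per reference scene.
        self.scene_queries = nn.Embedding(num_scenes, d_model)

        # Candidate pose heads: 3D position and 4D orientation (quaternion) per scene.
        self.position_head = nn.Linear(d_model, 3)
        self.orientation_head = nn.Linear(d_model, 4)
        # Scene classifier used to pick which candidate pose to return.
        self.scene_cls = nn.Linear(d_model, 1)

    def forward(self, images: torch.Tensor):
        feats = self.proj(self.backbone(images))            # (B, d, h, w)
        tokens = feats.flatten(2).transpose(1, 2)            # (B, h*w, d)
        queries = self.scene_queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        decoded = self.transformer(tokens, queries)          # (B, num_scenes, d)

        positions = self.position_head(decoded)              # candidate x, y, z per scene
        orientations = self.orientation_head(decoded)        # candidate quaternion per scene
        scene_logits = self.scene_cls(decoded).squeeze(-1)   # which scene the image belongs to

        # Return the candidate pose of the most likely scene.
        idx = scene_logits.argmax(dim=1)
        batch = torch.arange(images.size(0))
        pose = torch.cat([positions[batch, idx], orientations[batch, idx]], dim=-1)
        return pose, scene_logits


if __name__ == "__main__":
    model = MultiScenePoseTransformer(num_scenes=7)
    pose, logits = model(torch.randn(2, 3, 224, 224))
    print(pose.shape, logits.shape)  # torch.Size([2, 7]) torch.Size([2, 7])
```

In this sketch, embedding multiple scenes in parallel amounts to giving the decoder one learned query per scene, so a single forward pass produces a candidate pose for every scene plus a classification over scenes; only the selected candidate is returned.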