
Multimodal Inference and Assistance for Effortless XR Interaction


If you have a question about this talk, please contact John Dudley.

Spatial computing wearables promise to usher in the third wave of computing devices, but how these devices can support effective user interaction remains an open question. In this talk, I propose that combining multimodal capabilities with intelligent inference techniques yields highly performant and effortless interactions. I'll discuss several projects that use multimodal signals such as gaze and hand dynamics to implicitly infer the user's intent and act on it. I'll further discuss how rich wearable haptics can be designed to aid user interaction in XR.

Speaker Bio: Aakar is a Principal Researcher at Fujitsu Research America. Prior to this, he worked as a Research Scientist at Meta Reality Labs Research for four years. He received his PhD in Computer Science from the University of Toronto. Aakar's primary research area is computational and AI-assisted interaction for spatial computing. Before his PhD, Aakar worked on technology interventions for underserved users in India in collaboration with Microsoft Research Bangalore. His work has resulted in 30+ publications at top-tier HCI venues such as CHI and UIST, including four Best Paper Honorable Mention Awards.

This talk is part of the jjd50's list series.


 
