
Multimodal Gaze-Supported Interaction


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

While our eye gaze is an important medium for perceiving our environment, it also serves as a fast and implicit way of signaling interest in somebody or something. This makes it attractive for flexible and convenient interaction with computing systems ranging from small handheld devices to multiple large screens. Considerable research has been devoted to gaze-only interaction, which is, however, often described as error-prone, imprecise, and unnatural. To overcome these challenges, multimodal combinations of gaze with additional input modalities show high potential for fast, fluent, and convenient human-computer interaction in diverse user contexts. A promising example of this novel style of multimodal gaze-supported interaction is the seamless selection and manipulation of graphical objects on distant screens using a combination of a mobile handheld device (such as a smartphone) and gaze input.
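The division of labor described above — gaze provides a fast but imprecise pointer, and an explicit action on the handheld confirms the selection — can be sketched in a few lines. This is a hypothetical illustration, not code from the talk: the names (`Target`, `nearest_target`, `select_on_tap`) and the snap radius are assumptions introduced here for clarity.

```python
# Hypothetical sketch: gaze points at distant-screen objects,
# a tap on the handheld confirms the selection.
from dataclasses import dataclass
import math

@dataclass
class Target:
    """A selectable graphical object on the distant screen."""
    name: str
    x: float
    y: float

def nearest_target(gaze, targets, radius=50.0):
    """Return the target closest to the gaze point, if any lies within
    `radius` pixels. The radius compensates for eye-tracker jitter and
    the inherent imprecision of gaze pointing."""
    best, best_d = None, radius
    for t in targets:
        d = math.hypot(t.x - gaze[0], t.y - gaze[1])
        if d <= best_d:
            best, best_d = t, d
    return best

def select_on_tap(gaze, tapped, targets):
    """Multimodal selection: gaze supplies the pointer position, but
    nothing is selected until the user explicitly taps the handheld.
    This avoids the 'Midas touch' problem of gaze-only dwell selection,
    where everything the user looks at risks being activated."""
    if not tapped:
        return None
    return nearest_target(gaze, targets)
```

For example, with targets at (100, 100) and (400, 300), a tap while gazing near (105, 98) selects the first target, while gazing at empty screen space selects nothing even when tapped.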

In my talk, I will give a brief introduction to gaze-based interaction in general and present insights into my research at the Interactive Media Lab. In particular, I will emphasize the high potential of the emerging area of multimodal gaze-supported interaction.

This talk is part of the Microsoft Research Cambridge, public talks series.

