Evaluating User-Adaptive Systems: Lessons from Experiences with a Personalized Meeting Scheduling Assistant
If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.
This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image or voice recorded, please consider this before attending.
We present experiences from evaluating the learning performance of a user-adaptive personal assistant agent. This work was part of the CALO project, which led to the spin-out Siri, acquired by Apple in 2010. We discuss the challenge of designing an adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. Reflections on both negative and positive experiences point to the challenges of evaluating user-adaptive agent systems. Lessons learned concern early consideration of evaluation and deployment, the characteristics of AI technology and domains that make controlled evaluations appropriate or not, holistic experimental design, the implications of "in the wild" evaluation, and the impact of AI-enabled functionality on existing tools and work practices.
This talk is part of the Microsoft Research Cambridge, public talks series.