On automatically analyzing learner language: Interpreting form and meaning in context
If you have a question about this talk, please contact Chris Cummins.
The automatic analysis of learner language can play a role in the annotation of learner corpora and in intelligent language tutoring systems. In this talk, I first want to raise some questions about the nature of the linguistic categories that are appropriate for learner language under different perspectives, and about what role context, explicit tasks, and learner modeling play in the interpretation of learner language. The talk then moves from analyzing form to evaluating aspects of meaning. I discuss our work in the CoMiC project on automatically evaluating the meaning of learner answers to reading comprehension questions, for which we explore which linguistic representations and comparison strategies are effective and robust enough to evaluate meaning in the face of significant well-formed and ill-formed variation.
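To make the idea of a "comparison strategy" concrete, here is a toy sketch (not the CoMiC system's actual method) of the simplest possible approach: scoring a learner answer against a reference answer by surface token overlap. The function name and the example sentences are purely illustrative; a real system would add the richer linguistic representations the talk discusses, precisely because ill-formed variation defeats naive matching like this.

```python
# Toy illustration only, NOT the CoMiC approach: score a learner answer
# by the fraction of reference-answer tokens it contains.

def token_overlap_score(learner: str, reference: str) -> float:
    """Return the fraction of reference tokens present in the learner answer."""
    learner_tokens = set(learner.lower().split())
    reference_tokens = set(reference.lower().split())
    if not reference_tokens:
        return 0.0
    return len(learner_tokens & reference_tokens) / len(reference_tokens)

# An ill-formed variant of the same content still overlaps substantially:
reference = "the boy went to the market"
print(token_overlap_score("the boy goed to market", reference))  # → 0.8
```

Note how the misspelled verb ("goed") already costs the learner a matching token even though the intended meaning is recoverable in context, which is one reason purely surface-based comparison is too brittle for evaluating meaning.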
This talk is part of the RCEAL Tuesday Colloquia series.