Multitask Learning
If you have a question about this talk, please contact David Greaves.
Machine learning studies the problem of learning to perform
a given task from a dataset of examples. A fundamental limitation of
standard machine learning methods is the cost incurred in preparing
large training datasets. In many applications only a limited number of examples is available, and the task cannot be solved well in isolation. A potential remedy is offered by multitask learning, which aims to learn several related tasks simultaneously. If the tasks share a constraining or generative property that is sufficiently simple, exploiting it should allow better learning of the individual tasks even when the individual training datasets are small. In the talk, I will present a wide class of multitask learning methods which encourage different forms of task relatedness and involve certain notions of structured sparsity and low-rank tensor representations. I will also discuss
iterative algorithms to implement these methods, building upon ideas
from convex optimisation. Finally, I will illustrate the performance
of these methods in applications arising in affective computing,
computer vision and user modelling.
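The abstract mentions structured sparsity and iterative convex-optimisation algorithms without spelling them out. The sketch below is not taken from the talk itself; it is a minimal illustration of one common multitask formulation of this kind, assuming T regression tasks that share a joint feature-selection pattern enforced by an l2,1 penalty on the stacked weight matrix, solved by a plain proximal-gradient (ISTA-style) iteration. All function names and parameter values are illustrative.

# Minimal multitask-learning sketch (illustrative only, not the speaker's method):
# T regression tasks share one weight matrix W (rows = features, columns = tasks).
# An l2,1 penalty on W encourages whole rows to be zero, i.e. joint feature
# selection across tasks, and is minimised by proximal gradient descent.
import numpy as np


def prox_l21(W, threshold):
    """Row-wise group soft-thresholding: prox of threshold * sum_i ||W_i||_2."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - threshold / np.maximum(row_norms, 1e-12))
    return scale * W


def multitask_l21(Xs, ys, lam=0.1, n_iter=500):
    """Learn one weight vector per task with a shared l2,1 (joint sparsity) penalty.

    Xs, ys: lists of per-task design matrices (n_t x d) and targets (n_t,).
    Returns W of shape (d, T); rows driven to zero are dropped jointly by all tasks.
    """
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    # Step size from an upper bound on the Lipschitz constant of the smooth part.
    L = max(np.linalg.norm(X, 2) ** 2 / X.shape[0] for X in Xs)
    step = 1.0 / L
    for _ in range(n_iter):
        # Gradient of the averaged squared loss, computed task by task.
        grad = np.column_stack(
            [X.T @ (X @ W[:, t] - y) / X.shape[0]
             for t, (X, y) in enumerate(zip(Xs, ys))]
        )
        W = prox_l21(W - step * grad, step * lam)
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T, n = 30, 5, 40
    # Synthetic tasks sharing the same small set of relevant features (rows 0-4).
    W_true = np.zeros((d, T))
    W_true[:5] = rng.normal(size=(5, T))
    Xs = [rng.normal(size=(n, d)) for _ in range(T)]
    ys = [X @ W_true[:, t] + 0.1 * rng.normal(size=n) for t, X in enumerate(Xs)]
    W_hat = multitask_l21(Xs, ys, lam=0.05)
    print("top-5 rows by norm:", np.argsort(-np.linalg.norm(W_hat, axis=1))[:5])

On this synthetic example the largest rows of W_hat should correspond to the five jointly relevant features; replacing the l2,1 penalty with a trace-norm penalty on W would give a low-rank variant in the same spirit.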
This talk is part of the Wednesday Seminars - Department of Computer Science and Technology series.