On the sample complexity of multi-objective learning
If you have a question about this talk, please contact Qingyuan Zhao.

In multi-objective learning (MOL), several possibly competing prediction tasks must be solved jointly by a single model. Achieving good trade-offs may require a model class G with larger capacity than is necessary for solving the individual tasks. This, in turn, increases the statistical cost, as reflected in known MOL bounds that depend on the complexity of G. We show that this cost is unavoidable for some losses, even in an idealized semi-supervised setting where the learner has access to the Bayes-optimal solutions for the individual tasks as well as the marginal distributions over the covariates. On the other hand, for objectives defined with Bregman losses, we prove that the complexity of G affects only the amount of unlabeled data required. Concretely, we establish sample complexity upper bounds, showing precisely when and how unlabeled data can significantly alleviate the need for labeled data. These rates are achieved by a simple semi-supervised algorithm based on pseudo-labeling.
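The abstract does not spell out the algorithm, but a minimal sketch of the pseudo-labeling idea for MOL under squared loss (a Bregman loss) might look as follows. The data model, polynomial feature maps, model capacities, and the trade-off weight `alpha` are all illustrative assumptions, not details taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two regression tasks over shared covariates X ~ Uniform[-1, 1].
# (Hypothetical targets standing in for the per-task Bayes-optimal predictors.)
f1 = lambda x: np.sin(3 * x)   # task 1
f2 = lambda x: x ** 2          # task 2

def features(x, degree):
    """Polynomial feature map; the degree controls model-class capacity."""
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, lam=1e-6):
    """Regularized least squares (squared loss is a Bregman loss)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Step 1: learn each task from its own small labeled sample, using a
# low-capacity class that suffices for a single task.
n_lab = 30
x1 = rng.uniform(-1, 1, n_lab)
y1 = f1(x1) + 0.1 * rng.standard_normal(n_lab)
x2 = rng.uniform(-1, 1, n_lab)
y2 = f2(x2) + 0.1 * rng.standard_normal(n_lab)
w1 = ridge_fit(features(x1, 5), y1)
w2 = ridge_fit(features(x2, 5), y2)

# Step 2: pseudo-label a large unlabeled pool with the single-task models.
n_unlab = 5000
xu = rng.uniform(-1, 1, n_unlab)
pseudo1 = features(xu, 5) @ w1
pseudo2 = features(xu, 5) @ w2

# Step 3: fit one model from the richer class G to a scalarized (weighted)
# combination of the pseudo-labels; for squared loss, the weighted-sum
# optimum is the weighted average of the per-task optima. Only this step
# uses the capacity of G, and it consumes only unlabeled data.
alpha = 0.5                                    # illustrative trade-off weight
target = alpha * pseudo1 + (1 - alpha) * pseudo2
w_joint = ridge_fit(features(xu, 10), target)  # capacity of G > single task
```

The point of the construction is that labeled samples are spent only on the easier single-task problems, while the capacity of the richer class G is paid for with the cheap unlabeled pool, matching the division of labor described in the abstract.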
This talk is part of the Statistics series.