Highly-Smooth Zero-th Order Online Optimization
If you have a question about this talk, please contact Quentin Berthet.

The minimization of convex functions that are only available through partial and noisy information is a key methodological problem in machine learning. We consider online convex optimization with noisy zeroth-order information, that is, noisy function evaluations at any desired point. We focus on problems with high degrees of smoothness, such as online logistic regression. We show that, in contrast to gradient-based algorithms, high-order smoothness can be used to improve estimation rates, with a precise dependence of our upper bounds on the degree of smoothness. In particular, we show that for infinitely differentiable functions we recover essentially the same dependence on sample size as gradient-based algorithms, up to an extra dimension-dependent factor. This holds for both convex and strongly convex functions, with finite-horizon and anytime algorithms.

This talk is part of the Statistics series.
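To make the zeroth-order setting concrete, here is a minimal sketch in Python of a classical two-point randomized gradient estimator driving gradient descent on a toy noisy logistic-loss oracle. The problem instance (A, b), the noise level, and the step sizes are assumptions made purely for illustration; the talk's algorithms exploit higher-order smoothness with more refined estimators than this basic one.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy problem: average logistic loss over random data (A, b).
    A = rng.standard_normal((200, 5))
    b = rng.choice([-1.0, 1.0], size=200)

    def noisy_value(x, noise=0.01):
        """Zeroth-order oracle: f(x) plus Gaussian noise, where f is the
        average logistic loss log(1 + exp(-b_i * a_i^T x))."""
        margins = -b * (A @ x)
        f = np.mean(np.logaddexp(0.0, margins))
        return f + noise * rng.standard_normal()

    def zo_gradient(x, h=1e-2):
        """Two-point zeroth-order gradient estimate: query f at x + h*u and
        x - h*u for a random unit direction u; the rescaled finite difference
        d * (f(x+hu) - f(x-hu)) / (2h) * u approximates grad f(x) in
        expectation for small h."""
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        d = x.size
        return (noisy_value(x + h * u) - noisy_value(x - h * u)) / (2 * h) * d * u

    # Gradient descent driven only by noisy function evaluations.
    x = np.zeros(5)
    for t in range(1, 2001):
        x -= (0.5 / np.sqrt(t)) * zo_gradient(x)

    print("final loss:", noisy_value(x, noise=0.0))

This two-point scheme only uses first-order smoothness of f; the point of the talk is that when f is smoother (e.g., infinitely differentiable, as for logistic regression), better-designed evaluation schemes yield faster rates.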