How the strength of the inductive bias affects the generalization performance of interpolators
If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

Interpolating models have recently gained popularity in the statistical learning community due to common practices in modern machine learning: complex models achieve good generalization performance despite interpolating high-dimensional training data. In this talk, we prove generalization bounds for high-dimensional linear models that interpolate noisy data generated by a sparse ground truth. In particular, we first show that minimum-l1-norm interpolators achieve high-dimensional asymptotic consistency at a logarithmic rate. Further, as opposed to the regularized or noiseless case, min-lp-norm interpolators with 1 < p ≤ 2 can achieve much faster polynomial rates, so that a weaker inductive bias may in fact improve generalization when interpolating noisy data.
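To make the objects in the abstract concrete, here is a minimal sketch (not part of the talk) of the estimators under study: it generates noisy observations from a sparse linear ground truth and computes min-lp-norm interpolators using cvxpy. The dimensions, noise level, and the helper min_lp_interpolator are illustrative assumptions, not the speakers' actual experimental setup.

    # Hypothetical sketch: min-lp-norm interpolation of noisy data
    # from a sparse ground truth (all parameters are illustrative).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, d, k = 50, 200, 5                  # n samples, d >> n features, k-sparse truth
    X = rng.standard_normal((n, d))
    w_star = np.zeros(d)
    w_star[:k] = 1.0                      # sparse ground truth
    y = X @ w_star + 0.1 * rng.standard_normal(n)   # noisy labels

    def min_lp_interpolator(X, y, p):
        """Solve min ||w||_p subject to Xw = y (convex for p >= 1)."""
        w = cp.Variable(X.shape[1])
        prob = cp.Problem(cp.Minimize(cp.norm(w, p)), [X @ w == y])
        prob.solve()
        return w.value

    for p in (1, 1.5, 2):
        w_hat = min_lp_interpolator(X, y, p)
        print(f"p={p}: parameter error {np.linalg.norm(w_hat - w_star):.3f}")

Since the constraint Xw = y is underdetermined (d > n), every such estimator interpolates the noisy training data exactly; p controls the strength of the sparsity-inducing inductive bias, which is the trade-off the talk analyzes.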
This talk is part of the Information Theory Seminar series.