
University of Cambridge > Talks.cam > Statistics > Approximate Cross Validation for Large Data and High Dimensions

## Approximate Cross Validation for Large Data and High Dimensions

- Tamara Broderick, Massachusetts Institute of Technology
- Friday 17 January 2020, 14:00-15:00
- MR12
If you have a question about this talk, please contact Dr Sergio Bacallado.

The error or variability of statistical and machine learning algorithms is often assessed by repeatedly re-fitting a model with different weighted versions of the observed data. The ubiquitous tools of cross-validation (CV) and the bootstrap are examples of this technique. These methods are powerful in large part due to their model agnosticism, but they can be slow to run on modern, large data sets because of the need to repeatedly re-fit the model. We use a linear approximation to the dependence of the fitting procedure on the weights, producing results that can be faster than repeated re-fitting by orders of magnitude. This linear approximation is sometimes known as the “infinitesimal jackknife” (IJ) in the statistics literature, where it has mostly been used as a theoretical tool to prove asymptotic results. We provide explicit finite-sample error bounds for the infinitesimal jackknife in terms of a small number of simple, verifiable assumptions.

Without further modification, though, the IJ deteriorates in accuracy in high dimensions and incurs a running time roughly cubic in dimension. We then show how dimensionality reduction can be used to run the IJ successfully in high dimensions in the case of leave-one-out cross-validation (LOOCV). Specifically, we consider L1 regularization for generalized linear models. We prove that, under mild conditions, the resulting LOOCV approximation exhibits computation time and accuracy that depend on the recovered support size rather than the full dimension D. Simulated and real-data experiments support our theory.

This talk is part of the Statistics series.

## This talk is included in these lists:

- All CMS events
- All Talks (aka the CURE list)
- CCIMI
- CCIMI Seminars
- CMS Events
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Chris Davis' list
- DPMMS Lists
- DPMMS info aggregator
- DPMMS lists
- Guy Emerson's list
- Interested Talks
- MR12
- Machine Learning
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Statistics
- Statistics Group
- Trust & Technology Initiative - interesting events
- bld31
- custom
- ndk22's list
- rp587
Note that ex-directory lists are not shown.
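The linear approximation in the abstract above can be illustrated for leave-one-out CV in a small, smooth setting: for an M-estimator, the IJ approximates the leave-one-out fit as theta_hat + H^{-1} g_i, where H is the full-data Hessian and g_i is point i's gradient at the full-data optimum, so LOOCV costs one linear solve per point instead of one full re-fit per point. The sketch below is illustrative and not code from the talk: the logistic-regression setup, the small ridge term, and all function names are assumptions (the talk's theory covers the non-smooth L1-regularized case, which this smooth sketch does not).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, weights, n_iter=50, reg=1e-4):
    """Weighted logistic regression via Newton's method (small ridge for stability)."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ theta)
        grad = X.T @ (weights * (p - y)) + reg * theta
        W = weights * p * (1 - p)
        H = (X * W[:, None]).T @ X + reg * np.eye(d)
        theta -= np.linalg.solve(H, grad)
    return theta

def ij_loo(X, y, reg=1e-4):
    """Infinitesimal-jackknife approximation to leave-one-out fits:
    theta_{-i} ~ theta_hat + H^{-1} g_i, with g_i the gradient of point i's
    loss at the full-data fit and H the full-data Hessian."""
    n, d = X.shape
    theta = fit_logistic(X, y, np.ones(n), reg=reg)
    p = sigmoid(X @ theta)
    H = (X * (p * (1 - p))[:, None]).T @ X + reg * np.eye(d)
    G = X * (p - y)[:, None]            # per-point gradients g_i
    # one d x d linear solve shared by all n points, instead of n re-fits
    return theta + np.linalg.solve(H, G.T).T

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (sigmoid(X @ rng.normal(size=d)) > rng.uniform(size=n)).astype(float)

approx = ij_loo(X, y)                   # n approximate LOO parameter vectors
exact = np.array([                      # brute-force LOO: n full re-fits
    fit_logistic(np.delete(X, i, 0), np.delete(y, i, 0), np.ones(n - 1))
    for i in range(n)
])
print(np.max(np.abs(approx - exact)))   # IJ error: small relative to the fits
```

Note the cubic-in-dimension cost mentioned in the abstract: forming and solving with the d x d Hessian is what the talk's support-based dimensionality reduction avoids for L1-regularized models.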