

## Selection and estimation in sparse, high-dimensional models

- Maarten Jansen, Université libre de Bruxelles
- Friday 10 October 2014, 16:00-17:00
- MR12, Centre for Mathematical Sciences, Wilberforce Road, Cambridge.
In many applications, the objective of variable selection is to find a good compromise between the likelihood and the complexity of the model. The balance between likelihood and complexity is controlled by a regularisation parameter, so selection and estimation proceed in two stages. The first stage assesses the regularisation parameter by optimising an information criterion, which estimates the distance to the true model. The second stage then finds the best selection and estimation for the given value of the regularisation parameter. Obviously, the former stage has to anticipate effects occurring during the latter.

In particular, when the class of models under investigation is high-dimensional while the true model is sparse, a relatively large number of false positives may contribute to the likelihood. The impact of false positive selections on the likelihood can be reduced by shrinking the estimates, especially the smaller ones. This approach, however, makes the selection procedure as a whole too tolerant of false positives, leading to a major overestimation of the model size. If we take the model size as the complexity measure, then the best estimation within a selection involves no shrinkage. The effect of false positives can then be described as a so-called mirror: among the parameters that are not prominently part of the true model, false positives present themselves as the best candidates for inclusion, whereas in reality they are worse than a random choice of a non-significant parameter. We present information criteria that adjust for this mirror effect.

This talk is part of the Statistics series.

## This talk is included in these lists:

- All CMS events
- All Talks (aka the CURE list)
- CMS Events
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- DPMMS Lists
- DPMMS info aggregator
- DPMMS lists
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- MR12, Centre for Mathematical Sciences, Wilberforce Road, Cambridge
- Machine Learning
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Statistics
- Statistics Group
- bld31
- custom
- rp587
Note that ex-directory lists are not shown.
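The two-stage procedure described in the abstract can be illustrated with a minimal sketch. This is not the speaker's method, only a toy example under simple assumptions: a sparse Gaussian sequence model in which stage one scans a grid of thresholds (the regularisation parameter) for the minimum of an AIC-style information criterion, and stage two keeps the unshrunk estimates above the chosen threshold (no shrinkage, with model size as the complexity measure).

```python
# Toy two-stage selection in a sparse sequence model y_i = beta_i + noise.
# Stage 1: choose the threshold by minimising an AIC-style criterion.
# Stage 2: keep the unshrunk estimates above that threshold.
import numpy as np

rng = np.random.default_rng(0)
n, k_true = 200, 10                      # dimension and true sparsity
beta = np.zeros(n)
beta[:k_true] = 5.0                      # a few large true effects
y = beta + rng.standard_normal(n)        # observations with unit noise

def info_criterion(y, threshold):
    """AIC-style criterion: residual sum of squares + 2 * model size."""
    selected = np.abs(y) > threshold
    residual = np.where(selected, 0.0, y)    # kept coefficients fit exactly
    return np.sum(residual ** 2) + 2.0 * selected.sum(), selected

# Stage 1: scan a grid of regularisation parameters (thresholds).
grid = np.linspace(0.0, 6.0, 121)
scores = [info_criterion(y, t)[0] for t in grid]
best_t = grid[int(np.argmin(scores))]

# Stage 2: the selection and (unshrunk) estimates at the chosen threshold.
_, selected = info_criterion(y, best_t)
beta_hat = np.where(selected, y, 0.0)
print(f"chosen threshold: {best_t:.2f}, model size: {selected.sum()}")
```

With 190 null coordinates, the criterion typically admits a number of false positives alongside the 10 true effects, which is exactly the overestimation phenomenon the abstract says a corrected information criterion should account for.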