On adaptation of false discovery rate

If you have a question about this talk, please contact Richard Samworth.

The false discovery rate (FDR) is a tool from multiple testing theory that is widely used in practical applications such as microarray analysis, neuroimaging and source detection. It is defined as the expected proportion of errors among the items declared significant. Keeping this expected ratio below a nominal level α provides a global type I error control under which many items can still be declared significant, even as the dimension grows large.
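As a concrete illustration (not part of the talk itself), the classical Benjamini–Hochberg step-up procedure controls the FDR at level α in exactly this sense; a minimal sketch, assuming independent p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean mask of
    rejected hypotheses, with FDR controlled at level alpha (under
    independence of the p-values)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # step-up rule: find the largest k with p_(k) <= (k/m) * alpha
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # index of the largest passing p-value
        rejected[order[:k + 1]] = True      # reject everything up to that rank
    return rejected
```

For example, with p-values `[0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.5]` at α = 0.05, only the two smallest survive the step-up comparison, so exactly two hypotheses are rejected.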

Surprisingly, the FDR, which was initially designed to address a pure multiple testing problem, has recently been shown to enjoy remarkable properties in other frameworks of statistical decision theory, such as estimation and classification. Namely, when the signal is sparse, it adapts to the unknown sparsity of the data.

In this talk, after a short presentation of the FDR concept, we will investigate the adaptation to unknown sparsity of FDR thresholding in a classification setting where the “0”-class (null) is assumed to have a known, symmetric log-concave density, while the “1”-class (alternative) is obtained from the “0”-class either by translation (location model) or by scaling (scale model). Non-asymptotic oracle inequalities are derived for the excess risk of FDR thresholding, and an explicit choice of the nominal level α is proposed. Numerical experiments show that the proposed choice of α is relevant in practice.
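To make the classification setting concrete, here is a hedged sketch of FDR thresholding in the location model, specialised to a standard normal null (one instance of the symmetric log-concave case treated in the talk); the dimension `m`, sparsity `s`, and shift `mu` below are illustrative values, not ones from the talk:

```python
import math
import numpy as np

def fdr_threshold_classify(x, alpha=0.1):
    """Label each coordinate of x as signal (1) or null (0) by
    Benjamini-Hochberg thresholding of two-sided p-values computed
    under a standard normal null (location-model sketch)."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    # two-sided p-value under N(0,1): P(|Z| >= |x_i|) = erfc(|x_i| / sqrt(2))
    pvals = np.array([math.erfc(abs(v) / math.sqrt(2)) for v in x])
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    labels = np.zeros(m, dtype=int)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        labels[order[:k + 1]] = 1   # coordinates passing the step-up rule
    return labels

# Sparse location model: s coordinates shifted by mu, the rest pure noise
rng = np.random.default_rng(1)
m, s, mu = 1000, 20, 5.0            # assumed illustrative values
x = rng.standard_normal(m)
x[:s] += mu
labels = fdr_threshold_classify(x, alpha=0.1)
print("declared signal:", labels.sum(), "true positives:", labels[:s].sum())
```

The point of the adaptivity results is that this rule performs well across a range of sparsities `s` without knowing `s`, provided α is chosen suitably.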

This is joint work with Pierre Neuvial.

This talk is part of the Statistics series.
