Neyman-Pearson Classification
If you have a question about this talk, please contact Dr Sergio Bacallado.

In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly need to keep the type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) below a desired threshold. The Neyman-Pearson (NP) classification paradigm is a natural fit for this goal: it minimizes the type II error (the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, alpha, on the type I error. Although the NP paradigm has a century-long history in hypothesis testing, it has not been widely recognized or implemented in classification. Common practices that simply cap the empirical type I error at alpha do not achieve the control objective: the resulting classifiers are still likely to have a true type I error much larger than alpha. This talk introduces the speaker and coauthors' work on NP classification algorithms and their applications, and discusses open challenges under the NP paradigm.

This talk is part of the Statistics series.
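The failure of naive empirical control mentioned in the abstract can be seen in a small simulation. The sketch below is illustrative and not from the talk: it assumes Gaussian class-0 scores (so the true type I error of any threshold is known exactly), picks the threshold that keeps the empirical type I error at or below alpha on a training sample, and then checks how often the true type I error still exceeds alpha. It does so in roughly half of the repetitions, which is what motivates NP classifiers with a high-probability guarantee.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
alpha, n0, reps = 0.05, 200, 2000  # target bound, class-0 sample size, repetitions

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

exceed = 0
for _ in range(reps):
    # Scores of n0 class-0 training points; the classifier predicts class 1
    # whenever the score exceeds a threshold t.
    scores0 = rng.normal(0.0, 1.0, n0)
    # Naive rule: choose t so the EMPIRICAL type I error is <= alpha, i.e.
    # allow at most k = floor(alpha * n0) class-0 points above t.
    k = int(alpha * n0)
    t = np.sort(scores0)[n0 - k - 1]  # the (k+1)-th largest training score
    # TRUE type I error under the (here known) N(0, 1) class-0 distribution.
    if 1.0 - Phi(t) > alpha:
        exceed += 1

print(f"true type I error exceeds alpha in {exceed / reps:.0%} of repetitions")
```

Because the chosen threshold is an order statistic centered near the alpha-quantile, it lands on the "wrong" side about half the time, so roughly half of the trained classifiers violate the bound despite satisfying it empirically.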