Cross-validation and bootstrapping are unreliable in small sample classification
2008 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 29, no. 14, pp. 1960-1965. Article in journal (Refereed). Published.
Interest in statistical classification for critical applications, such as diagnosis of patient samples based on supervised learning, is growing rapidly. To gain acceptance in applications where the subsequent decisions have serious consequences, e.g. choice of cancer therapy, any such decision support system must come with a reliable performance estimate. Tailored for small sample problems, cross-validation (CV) and bootstrapping (BTS) have been the most commonly used methods to determine such estimates in virtually all branches of science for the last 20 years. Here, we address the often overlooked fact that the uncertainty in a point estimate obtained with CV and BTS is unknown and quite large for the small sample classification problems encountered in biomedical applications and elsewhere. To avoid this fundamental problem of employing CV and BTS, until improved alternatives have been established, we suggest that the final classification performance should always be reported in the form of a Bayesian confidence interval obtained from a simple holdout test, or using some other method that yields conservative measures of the uncertainty.
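As a minimal sketch of the kind of interval the abstract recommends (not the authors' exact procedure): with a Beta prior on the true accuracy, the number of correct holdout predictions is binomial, so the posterior over accuracy is again a Beta distribution and an equal-tailed credible interval can be read off its quantiles. The prior choice and the example counts below are assumptions for illustration.

```python
from scipy.stats import beta


def bayesian_accuracy_interval(correct, total, level=0.95, prior=(1.0, 1.0)):
    """Equal-tailed Bayesian credible interval for holdout accuracy.

    With a Beta(a, b) prior and `correct` successes out of `total`
    holdout predictions, the posterior over the true accuracy is
    Beta(a + correct, b + total - correct).
    """
    a = prior[0] + correct
    b = prior[1] + (total - correct)
    lower = beta.ppf((1 - level) / 2, a, b)
    upper = beta.ppf(1 - (1 - level) / 2, a, b)
    return lower, upper


# Hypothetical example: 27 of 30 holdout samples classified correctly.
lo, hi = bayesian_accuracy_interval(27, 30)
print(f"point estimate: {27/30:.2f}, 95% interval: [{lo:.2f}, {hi:.2f}]")
```

The width of this interval for typical small biomedical sample sizes (tens of samples) makes the paper's point directly: the point estimate alone hides substantial uncertainty.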
Keywords: Supervised classification, Performance estimation, Confidence interval
National category: Medical and Health Sciences; Signal Processing
Research subject: Signal Processing
Identifiers:
URN: urn:nbn:se:uu:diva-111034
DOI: 10.1016/j.patrec.2008.06.018
ISI: 000259712200008
OAI: oai:DiVA.org:uu-111034
DiVA: diva2:279177