Paper
https://arxiv.org/abs/1711.09325
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples (ICLR 2018)
The problem of detecting whether a test sample comes from the in-distribution (i.e., the classifier's training distribution) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications.
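For context, a common baseline that work of this kind builds on is thresholding the classifier's maximum softmax probability: confidently classified inputs are treated as in-distribution, near-uniform predictions as out-of-distribution. A minimal sketch (the threshold of 0.5 is illustrative, not from the paper):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def is_out_of_distribution(logits, threshold=0.5):
    # Flag a sample as OOD when the maximum softmax
    # probability falls below the confidence threshold.
    confidence = softmax(logits).max(axis=-1)
    return bool(confidence < threshold)

# A confidently classified sample vs. a near-uniform one.
print(is_out_of_distribution(np.array([5.0, 0.1, 0.1])))  # False
print(is_out_of_distribution(np.array([1.0, 1.1, 0.9])))  # True
```

The paper's contribution is precisely to make this kind of confidence score more reliable, by training the classifier to produce low-confidence (near-uniform) predictions on samples away from the training distribution.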
Code
https://github.com/alinlab/Confident_classifier
alinlab/Confident_classifier: Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples (ICLR 2018)