
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples : ICLR 2018

2021. 7. 11. 20:08

Paper

https://arxiv.org/abs/1711.09325

 

Abstract (excerpt): The problem of detecting whether a test sample is from in-distribution (i.e., training distribution by a classifier) or out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, the state-of-art deep neural networks are known to be highly overconfident in their predictions, i.e., do not distinguish in- and out-of-distributions.
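In short, the method adds two terms to the standard cross-entropy objective: a confidence loss that pushes the classifier's predictive distribution on out-of-distribution inputs toward the uniform distribution, and a jointly trained GAN generator that (implicitly) produces effective boundary out-of-distribution samples for that loss. Below is a minimal PyTorch sketch of the confidence loss only; the function name and the beta weight are illustrative, not taken from the authors' code.

```python
import torch.nn.functional as F

def confidence_calibrated_loss(logits_in, targets_in, logits_out, beta=1.0):
    """Cross entropy on in-distribution data plus a KL(Uniform || P) term
    that forces near-uniform (low-confidence) predictions on OOD inputs."""
    ce = F.cross_entropy(logits_in, targets_in)
    log_p_out = F.log_softmax(logits_out, dim=1)
    # KL(U || P) = -log K - (1/K) * sum_y log P(y|x); the -log K constant is dropped
    kl_to_uniform = -log_p_out.mean(dim=1).mean()
    return ce + beta * kl_to_uniform
```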

 

Code

https://github.com/alinlab/Confident_classifier

 

alinlab/Confident_classifier — Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples / ICLR 2018
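At test time the detector itself stays simple: threshold the maximum softmax probability (the baseline detector of Hendrycks & Gimpel), which separates in- from out-of-distribution inputs much more cleanly once the classifier is trained as above. A hedged sketch, with an illustrative threshold value:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def is_in_distribution(model, x, threshold=0.5):
    """Flag inputs whose maximum softmax probability exceeds a threshold;
    in practice the threshold is tuned on validation data (e.g., at 95% TPR)."""
    max_conf = F.softmax(model(x), dim=1).max(dim=1).values
    return max_conf > threshold
```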

 

 

