
Learning Confidence for Out-of-Distribution Detection in Neural Networks

July 2019

tl;dr: Jointly learn a confidence score that lets the network ask for "hints" (peek at the label) during training; at test time, low confidence flags out-of-distribution inputs.

Overall impression

The intuition is quite thought-provoking and interesting. The NN is given a budget to peek at the label y: when it is unsure, it can predict a low confidence score c, and the ground-truth label is interpolated into its prediction. However, this peeking is not free; it is constrained by a budget (penalty) on predicting low confidence. A sketch of this loss is given below.
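A minimal PyTorch sketch of this interpolated-prediction loss, assuming a classifier head that outputs class logits plus a single confidence logit. The function name, `lmbda`, and the epsilon terms are illustrative; the paper's budget mechanism (adjusting λ so the confidence penalty stays near a target) is only described in the comments.

```python
import torch
import torch.nn.functional as F

def confidence_loss(logits, conf_logit, target, lmbda=0.1):
    """Confidence-weighted loss sketch (not the authors' exact code).

    The prediction is interpolated toward the one-hot label by (1 - c),
    so low confidence lets the network "peek" at the label, and a
    -log(c) penalty charges it for doing so. In the paper, lmbda is
    adjusted on the fly to keep the penalty near a fixed budget.
    """
    probs = F.softmax(logits, dim=-1)            # class prediction p
    conf = torch.sigmoid(conf_logit).view(-1, 1)  # confidence c in (0, 1)
    one_hot = F.one_hot(target, probs.size(-1)).float()

    # "Hint" mechanism: p' = c * p + (1 - c) * y
    interp = conf * probs + (1.0 - conf) * one_hot

    task_loss = F.nll_loss(torch.log(interp + 1e-12), target)
    conf_penalty = -torch.log(conf + 1e-12).mean()  # cost of asking for hints
    return task_loss + lmbda * conf_penalty
```

At deployment, the learned confidence c (rather than the max softmax probability) is thresholded to reject out-of-distribution inputs.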

This idea is easy to implement and fairly straightforward to generalize to regression tasks. It is also much easier to understand than Kendall and Gal's uncertainty paper at NIPS 2017.

Another benefit of learning this confidence score is that the classifier is less sensitive to the choice of confidence threshold at deployment time, as evidenced by the flat bottom of the curve in Fig. 4.

Key ideas

Technical details

Notes