
Double Descent: Reconciling modern machine learning practice and the bias-variance trade-off

January 2020

tl;dr: Machine learning models generalize better once model capacity increases beyond the interpolation threshold, the peak of the double descent curve.

Overall impression

This is a mind-blowing paper. It extends the U-shaped bias-variance trade-off curve to a double descent curve. Beyond the interpolation threshold, where the model first achieves zero empirical/training risk, test risk starts to drop again as capacity keeps increasing.
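A minimal sketch of the effect, assuming a random-ReLU-feature regression setup with a minimum-norm least-squares readout (the paper's experiments use random Fourier features and neural networks; this simplified stand-in is only illustrative). Test error is expected to peak when the number of features is close to the number of training samples (the interpolation threshold) and to decrease again well beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise=0.1):
    # Toy 1-D regression task: noisy sine wave.
    x = rng.uniform(-1, 1, size=(n, 1))
    y = np.sin(2 * np.pi * x[:, 0]) + noise * rng.standard_normal(n)
    return x, y

def random_relu_features(x, W, b):
    # Fixed random first layer; only the linear readout is fit.
    return np.maximum(x @ W + b, 0.0)

n_train, n_test = 40, 1000
x_train, y_train = make_data(n_train)
x_test, y_test = make_data(n_test)

# Sweep model capacity (number of random features) across the
# interpolation threshold at n_feat == n_train.
for n_feat in [5, 10, 20, 40, 80, 160, 640]:
    W = rng.standard_normal((1, n_feat))
    b = rng.standard_normal(n_feat)
    Phi_train = random_relu_features(x_train, W, b)
    Phi_test = random_relu_features(x_test, W, b)
    # Minimum-norm least-squares solution; pinv also handles the
    # overparameterized (n_feat > n_train) case.
    w = np.linalg.pinv(Phi_train) @ y_train
    train_mse = np.mean((Phi_train @ w - y_train) ** 2)
    test_mse = np.mean((Phi_test @ w - y_test) ** 2)
    print(f"features={n_feat:4d}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```

With enough random seeds the printed test MSE traces the double descent shape: it first falls, spikes near features=40 where the training error hits zero, then falls again in the heavily overparameterized regime.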

Key ideas

Technical details

Notes