
Taskonomy: Disentangling Task Transfer Learning

March 2020

tl;dr: Answers the question: which tasks transfer best to which other tasks?

Overall impression

This paper proposed a large dataset of 4 million images, each labeled with ground truth for 26 tasks. This work directly inspired task grouping, which answers a different question: how to perform multi-task learning more efficiently.
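As a rough illustration of the core question (hypothetical code, not the paper's pipeline, which trains transfer functions and solves a Boolean Integer Program over normalized affinities), a pairwise transferability matrix can be used to rank source tasks for each target:

```python
import numpy as np

# Hypothetical transferability scores: affinity[s, t] estimates how well
# a representation trained on source task s transfers to target task t.
# In the paper these come from trained transfer functions, not random data.
tasks = ["depth", "normals", "edges", "segmentation"]
rng = np.random.default_rng(0)
affinity = rng.random((len(tasks), len(tasks)))
np.fill_diagonal(affinity, 1.0)  # a task transfers trivially to itself

# For each target task, rank the other tasks as transfer sources.
for t, target in enumerate(tasks):
    order = np.argsort(-affinity[:, t])
    ranked = [tasks[s] for s in order if s != t]
    print(f"best sources for {target}: {ranked}")
```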

The main purpose of transfer learning is to reduce the number of labeled images needed to train a task; the authors focus on supervision efficiency. Given enough images, training from scratch is also viable, per Rethinking ImageNet Pre-training.
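A minimal sketch of this supervision-efficiency setup, assuming PyTorch (the modules below are toy stand-ins, not the paper's networks): freeze an encoder pretrained on a source task and train only a small readout decoder on a limited target-task budget, which is what makes transfer cheaper than training from scratch:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a frozen source-task encoder and a small target head.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
decoder = nn.Linear(16, 10)

# Freeze the source representation; only the decoder receives gradients,
# so few labeled target images are needed (supervision efficiency).
for p in encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)     # toy target-task batch
labels = torch.randint(0, 10, (8,))

with torch.no_grad():
    features = encoder(images)          # frozen source features
logits = decoder(features)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```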

Key ideas

Technical details

Notes