
DTP: Dynamic Task Prioritization for Multitask Learning

August 2019

tl;dr: Dynamically adjust task weights during training based on each task's learning progress (measured by its KPI), using a focal-loss scaling.

Overall impression

GradNorm and dynamic task prioritization are very similar. However, GradNorm adjusts weights with an exponential weighting factor while DTP uses a focal-loss scaling (essentially also exponential), and GradNorm's progress signal is the drop in training loss while DTP uses a task KPI.
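As a rough sketch of the DTP weighting (my notation; $\kappa_t$ is the task's KPI in $[0, 1]$, $\gamma_t$ the focal-loss focusing parameter, $\alpha$ an EMA decay):

```latex
% KPI smoothed with an exponential moving average over iterations \tau:
\bar{\kappa}_t^{(\tau)} = \alpha\,\kappa_t^{(\tau)} + (1 - \alpha)\,\bar{\kappa}_t^{(\tau - 1)}

% Task priority is a focal-loss term on the smoothed KPI
% (low KPI = hard task = large weight):
FL(\bar{\kappa}_t; \gamma_t) = -(1 - \bar{\kappa}_t)^{\gamma_t} \log \bar{\kappa}_t

% Total loss with dynamic task prioritization:
\mathcal{L}_{\mathrm{total}} = \sum_t FL(\bar{\kappa}_t; \gamma_t)\,\mathcal{L}_t
```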

This paper is an important work in self-paced learning (an automated approach to curriculum learning). Task weights are usually tuned with extensive grid search and kept static throughout training.

Contrary to curriculum learning, the paper proposes that it is important to prioritize difficult tasks first, as perfecting simple tasks wastes valuable resources.

This work explores two methods: 1) assigning dynamic weights to tasks (sketched below), and 2) structuring the network to reflect the task hierarchy.
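A minimal sketch of method 1), assuming each task reports a "higher is better" KPI in [0, 1] (e.g. accuracy or IoU) every iteration; the class name and the gamma/EMA values here are my own illustrative choices, not from the paper:

```python
import math

class DynamicTaskPrioritization:
    """Weight each task loss by a focal-loss term on its (smoothed) KPI."""

    def __init__(self, num_tasks, gamma=2.0, ema_alpha=0.1):
        self.gamma = gamma          # focal-loss focusing parameter (the single hyperparameter)
        self.ema_alpha = ema_alpha  # EMA decay for the KPI progress signal
        self.kpi_ema = [0.5] * num_tasks

    def __call__(self, losses, kpis):
        """losses: iterable of scalar task losses; kpis: floats in [0, 1], higher is better."""
        total = 0.0
        for t, (loss, kpi) in enumerate(zip(losses, kpis)):
            # Smooth the raw KPI so weights do not jump from batch to batch.
            self.kpi_ema[t] = self.ema_alpha * kpi + (1 - self.ema_alpha) * self.kpi_ema[t]
            k = min(max(self.kpi_ema[t], 1e-6), 1.0 - 1e-6)  # keep log well-defined
            # Focal-loss priority: low KPI (hard task) -> large weight.
            weight = -((1.0 - k) ** self.gamma) * math.log(k)
            total = total + weight * loss
        return total

# Usage (hypothetical two-task setup):
# dtp = DynamicTaskPrioritization(num_tasks=2)
# total_loss = dtp([det_loss, seg_loss], [det_accuracy, seg_iou])
```

Since the weight is computed from plain floats, no gradient flows through it; only the task losses themselves are backpropagated.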

| Method | Learning progress signal | Hyperparameters |
| --- | --- | --- |
| Uncertainty Weighting | Homoscedastic uncertainty | No hyperparameters |
| GradNorm | Training loss ratio | 1 exponential weighting factor |
| Dynamic Task Prioritization | KPI | 1 focal loss scaling factor |
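For reference, the uncertainty weighting row refers to the Kendall et al. learned-loss-weighting scheme, where per-task homoscedastic uncertainties $\sigma_t$ are learned jointly with the network instead of being hand-tuned (regression form; my sketch):

```latex
\mathcal{L}_{\mathrm{total}} = \sum_t \frac{1}{2\sigma_t^2}\,\mathcal{L}_t + \log \sigma_t
```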

Key ideas

Technical details

Notes