arXiv:1810.04650 [cs.LG]

Multi-Task Learning as Multi-Objective Optimization

Ozan Sener, Vladlen Koltun

Published 2018-10-10 (Version 1)

In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. A common compromise is to optimize a proxy objective that minimizes a weighted linear combination of per-task losses. However, this workaround is only valid when the tasks do not compete, which is rarely the case. In this paper, we explicitly cast multi-task learning as multi-objective optimization, with the overall objective of finding a Pareto optimal solution. To this end, we use algorithms developed in the gradient-based multi-objective optimization literature. These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks. We therefore propose an upper bound for the multi-objective loss and show that it can be optimized efficiently. We further prove that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions. We apply our method to a variety of multi-task deep learning problems including digit classification, scene understanding (joint semantic segmentation, instance segmentation, and depth estimation), and multi-label classification. Our method produces higher-performing models than recent multi-task learning formulations or per-task training.
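The abstract refers to algorithms from the gradient-based multi-objective optimization literature. The sketch below illustrates the general MGDA-style idea for two tasks: take the minimum-norm convex combination of the per-task gradients as a common descent direction, which decreases both losses unless the current point is already Pareto-stationary. The function name, the toy quadratic losses, and all constants are illustrative assumptions; this is not the paper's implementation or its proposed upper bound.

import numpy as np

def min_norm_weights(g1, g2):
    # Closed-form solution of min_a ||a*g1 + (1-a)*g2||^2 over a in [0, 1].
    diff = g1 - g2
    denom = np.dot(diff, diff)
    if denom == 0.0:                     # identical gradients: any weighting works
        return 0.5, 0.5
    a = np.dot(g2 - g1, g2) / denom
    a = float(np.clip(a, 0.0, 1.0))
    return a, 1.0 - a

# Toy example: two conflicting quadratic "task losses" sharing one parameter vector.
theta = np.array([1.0, -2.0])
target1 = np.array([0.0, 0.0])           # task 1 pulls theta toward the origin
target2 = np.array([2.0, 0.0])           # task 2 pulls theta toward (2, 0)

lr = 0.1
for step in range(100):
    g1 = 2.0 * (theta - target1)         # gradient of ||theta - target1||^2
    g2 = 2.0 * (theta - target2)         # gradient of ||theta - target2||^2
    a1, a2 = min_norm_weights(g1, g2)
    theta -= lr * (a1 * g1 + a2 * g2)    # step along the common descent direction

print(theta)  # settles on a Pareto-stationary point between the two targets

In contrast to a fixed weighted sum of losses, the weights here are recomputed at every step from the current gradients, so neither task is permanently sacrificed when the two objectives conflict.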

Related articles:
arXiv:1707.03426 [cs.LG] (Published 2017-07-11)
Multi-Task Learning Using Neighborhood Kernels
arXiv:1203.3536 [cs.LG] (Published 2012-03-15)
A Convex Formulation for Learning Task Relationships in Multi-Task Learning
arXiv:1809.10336 [cs.LG] (Published 2018-09-27)
Multi-task Learning for Financial Forecasting