Abstract
Multi-task Learning (MTL), which learns multiple tasks simultaneously, can achieve better performance than learning each task independently, and it has achieved great success in various applications ranging from computer vision to bioinformatics. However, involving multiple tasks in a single learning process is complicated, because both cooperation and competition exist across the involved tasks: cooperation boosts the generalization of MTL, while competition degrades it. A systematic study of how to improve the generalization of MTL by handling this cooperation and competition is still lacking. This thesis systematically studies this problem and proposes four novel MTL methods that either enhance between-task cooperation or reduce between-task competition.
Specifically, for between-task cooperation, adversarial multi-task representation learning (AMTRL) and semi-supervised multi-task learning (Semi-MTL) are studied; a novel adaptive AMTRL method and a novel representation consistency regularization-based Semi-MTL method are proposed, respectively. As for between-task competition, this thesis analyzes task variance and task imbalance; a novel task variance regularization-based MTL method and a novel task-imbalance-aware MTL method are proposed, respectively. The proposed methods improve the generalization of MTL and achieve state-of-the-art performance in real-world MTL applications.