Recurrent Network and Multi-arm Bandit Methods for Multi-task Learning without Task Specification

Abstract

This paper addresses the problem of multi-task learning (MTL) in settings where the task assignment is not known. We propose two mechanisms for inferring a task's parameters without task specification: a parameter adaptation method and a parameter selection method. In parameter adaptation, the model's parameters are iteratively updated using a recurrent neural network (RNN) learner as the mechanism for adapting to different tasks. In the parameter selection model, a parameter matrix is learned beforehand with the tasks known a priori; during testing, a bandit algorithm determines the appropriate parameter vector for the model on the fly. We explore two scenarios of MTL without task specification: continuous learning and reset learning. In continuous learning, the model must adjust its parameters continuously across a number of different tasks without knowing when the task changes, whereas in reset learning the parameters are reset to an initial value to aid the transition between tasks. Results on three real benchmark datasets demonstrate the comparative performance of both models with respect to multiple RNN configurations, MTL algorithms, and bandit selection policies.
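
To make the parameter selection idea concrete, the following is a minimal, hypothetical sketch of bandit-driven selection among pre-learned per-task parameter vectors, using an epsilon-greedy policy as one possible selection policy. The class, variable names, and reward signal (negative loss) are illustrative assumptions and are not taken from the paper, which compares several bandit selection policies.

    import numpy as np

    class EpsilonGreedySelector:
        """Hypothetical epsilon-greedy bandit over K candidate parameter vectors."""
        def __init__(self, num_arms, epsilon=0.1):
            self.epsilon = epsilon
            self.counts = np.zeros(num_arms)   # number of times each arm was chosen
            self.values = np.zeros(num_arms)   # running mean reward per arm

        def select(self, rng):
            # Explore with probability epsilon, otherwise exploit the best estimate.
            if rng.random() < self.epsilon:
                return int(rng.integers(len(self.counts)))
            return int(np.argmax(self.values))

        def update(self, arm, reward):
            # Incremental update of the chosen arm's mean reward estimate.
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    # Usage sketch: pick a parameter vector at test time, evaluate the model,
    # and feed back a reward (e.g., negative loss on the incoming batch).
    rng = np.random.default_rng(0)
    param_matrix = rng.normal(size=(5, 16))    # toy matrix: 5 tasks x 16-dim parameters
    selector = EpsilonGreedySelector(num_arms=5)

    for step in range(100):
        arm = selector.select(rng)
        theta = param_matrix[arm]
        reward = -np.linalg.norm(theta)        # stand-in for -loss(model(theta), batch)
        selector.update(arm, reward)

In this sketch, each row of param_matrix plays the role of a parameter vector learned offline for a known task, and the bandit adapts its choice online as rewards arrive, without being told which task is active.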

Department(s)

Engineering Program

Document Type

Conference Proceeding

DOI

https://doi.org/10.1109/IJCNN.2019.8851824

Keywords

bandit algorithm, multi-task learning, recurrent network

Publication Date

7-1-2019
