Recurrent Network and Multi-arm Bandit Methods for Multi-task Learning without Task Specification
This paper addresses multi-task learning (MTL) in settings where the task assignment is unknown. We propose two mechanisms for inferring a task's parameters without task specification: parameter adaptation and parameter selection. In parameter adaptation, the model's parameters are iteratively updated using a recurrent neural network (RNN) learner as the mechanism to adapt to different tasks. In parameter selection, a parameter matrix is learned beforehand with the tasks known a priori; during testing, a bandit algorithm determines the appropriate parameter vector for the model on the fly. We explore two scenarios for MTL without task specification: continuous learning and reset learning. In continuous learning, the model must adjust its parameters continuously across a number of different tasks without knowing when the task changes, whereas in reset learning, the parameters are reset to an initial value to aid the transition to a different task. Results on three real benchmark datasets demonstrate the comparative performance of both models with respect to multiple RNN configurations, MTL algorithms, and bandit selection policies.
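The parameter selection mechanism can be illustrated with a minimal sketch: a matrix of pre-learned per-task parameter vectors, with an epsilon-greedy bandit policy choosing a row for each incoming test batch and using the negative loss as the reward signal. Everything here is hypothetical (the `param_matrix`, the quadratic `batch_loss` stand-in, and the epsilon-greedy policy are illustrative assumptions, not the paper's exact model or the specific bandit policies it evaluates).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K parameter vectors learned beforehand, one per
# training task, stored as rows of a parameter matrix.
K, D = 4, 8
param_matrix = rng.normal(size=(K, D))

def batch_loss(params, batch):
    # Stand-in for the model's loss under a given parameter vector;
    # the actual paper uses an RNN-based model.
    return float(np.mean((batch @ params) ** 2))

def epsilon_greedy_select(batches, epsilon=0.1):
    """Pick a parameter vector per batch; reward = negative loss."""
    counts = np.zeros(K)
    values = np.zeros(K)  # running mean reward per arm
    choices = []
    for batch in batches:
        # Explore until every arm has been tried, then mostly exploit.
        if counts.min() == 0 or rng.random() < epsilon:
            arm = int(rng.integers(K))
        else:
            arm = int(np.argmax(values))
        reward = -batch_loss(param_matrix[arm], batch)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        choices.append(arm)
    return choices

batches = [rng.normal(size=(16, D)) for _ in range(50)]
chosen = epsilon_greedy_select(batches)
```

Because the task identity is never observed, the bandit's reward estimates are the only signal for tracking which parameter vector currently fits the data stream; in the reset-learning scenario the arm statistics could simply be re-initialized when a reset occurs.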
bandit algorithm, multi-task learning, recurrent network
Nguyen, Thy, and Tayo Obafemi-Ajayi. "Recurrent Network and Multi-arm Bandit Methods for Multi-task Learning without Task Specification." In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2019.