Meta-Learning Related Tasks with Recurrent Networks: Optimization and Generalization

Abstract

There has been recent interest in meta-learning systems, i.e., networks that are trained to learn across multiple tasks. This paper focuses on the optimization and generalization of a meta-learning system based on recurrent networks. The optimization study investigates the influence of different structures and parameters on the system's performance. We demonstrate the generalization (robustness) of our meta-learning system in learning across multiple tasks, including tasks unseen during the meta-training phase. We introduce a meta-cost function, the Mean Squared Fair Error, which enhances the performance of the system by not penalizing it during transitions to learning a new task. Evaluation results are presented for Boolean and quadratic function datasets. The best performance is obtained using a Long Short-Term Memory (LSTM) topology without a forget gate and with a clipped memory cell. The results demonstrate i) the impact of different LSTM architectures, parameters, and error functions on the meta-learning process; ii) that the Mean Squared Fair Error function improves learning performance; and iii) the robustness of our meta-learning framework, which generalizes well when tested on tasks unseen during meta-training. A comparison between the No-Forget-Gate LSTM and the Gated Recurrent Unit also suggests that the absence of a memory cell tends to degrade performance.
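The abstract does not reproduce the paper's exact formulation of the Mean Squared Fair Error; the sketch below illustrates one plausible reading, in which per-step squared errors are simply excluded from the loss for a fixed number of steps after each task switch. The function name, the task_boundaries argument, and the grace_steps parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mean_squared_fair_error(predictions, targets, task_boundaries, grace_steps=5):
    """Mean squared error that ignores the first few steps after each task switch.

    predictions, targets: 1-D arrays of per-step outputs over the task sequence.
    task_boundaries: indices at which a new task begins.
    grace_steps: number of steps after each boundary that are not penalized
                 (hypothetical parameter; the paper's transition window is not
                 specified in the abstract).
    """
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)

    # Start with every step counted, then mask out the transition windows.
    mask = np.ones(targets.shape, dtype=bool)
    for b in task_boundaries:
        mask[b:b + grace_steps] = False  # do not penalize while adapting to the new task

    errors = (predictions[mask] - targets[mask]) ** 2
    return errors.mean() if errors.size else 0.0
```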

Department(s)

Physics, Astronomy, and Materials Science

Document Type

Conference Proceeding

DOI

https://doi.org/10.1109/IJCNN.2018.8489583

Keywords

long short-term memory, meta-learning, performance optimization, recurrent networks

Publication Date

10-10-2018

Journal Title

Proceedings of the International Joint Conference on Neural Networks
