Minseop Park, Saehoon Kim, Jungtaek Kim, Yanbin Liu, Seungjin Choi
Most meta-learning methods assume that the set of tasks in the meta-training phase is sampled from a single dataset. Thus, when a new task is drawn from another dataset, the performance of these methods degrades. To alleviate this effect, we introduce a task-adaptive ensemble network that aggregates meta-learners by placing more weight on the learners that are expected to perform well on the given task. Experiments demonstrate that our task-adaptive ensemble significantly outperforms previous meta-learners and their uniform averaging.
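The aggregation idea can be sketched as follows: each meta-learner produces class probabilities for a query example, a task-compatibility score (how the scores are computed is the paper's contribution; here they are simply given as inputs) is turned into weights via a softmax, and predictions are combined as a weighted average. This is a minimal illustrative sketch, not the authors' implementation; `task_scores` and the function names are hypothetical.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over per-learner scores.
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def task_adaptive_ensemble(predictions, task_scores):
    """Aggregate per-learner class probabilities with task-dependent weights.

    predictions: (num_learners, num_classes) array, each row a learner's
        predicted class probabilities for a query example.
    task_scores: (num_learners,) array scoring how well each learner is
        expected to perform on the current task (hypothetical scorer).
    """
    weights = softmax(task_scores)   # higher score -> larger weight
    return weights @ predictions     # (num_classes,) combined probabilities

# Example: three meta-learners on a 5-way classification task.
preds = np.array([
    [0.6, 0.1, 0.1, 0.1, 0.1],
    [0.2, 0.5, 0.1, 0.1, 0.1],
    [0.2, 0.2, 0.2, 0.2, 0.2],
])
scores = np.array([2.0, 0.5, -1.0])  # learner 0 suits this task best
combined = task_adaptive_ensemble(preds, scores)
```

With uniform `task_scores`, the weights become equal and the ensemble reduces to the uniform-averaging baseline mentioned above.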