AITRICS announces Lifelong Learning with Dynamically Expandable Networks at ICLR 2018
February 26, 2018
AITRICS, a data-driven health intelligence company focused on the development of precision medicine, today announced that its novel deep network architecture, which is capable of learning a compact, overlapping knowledge-sharing structure among tasks, has been selected for presentation at the 6th International Conference on Learning Representations (ICLR), taking place April 30-May 3, 2018 in Vancouver, Canada.
Calling the deep network architecture for lifelong learning the Dynamically Expandable Network (DEN), AITRICS researchers tackled the essential problem of what is also known as continual learning, where tasks arrive in sequence, by implementing an efficient deep learning technique.
While multiple methods had been proposed earlier to achieve the primary goal of lifelong learning, namely leveraging knowledge from earlier tasks to obtain better performance and faster convergence on later tasks, most of them focused on incremental learning, constantly retraining the network on new incoming data.
However, simply retraining the network risks degrading performance on both the previous and new tasks. If the given tasks are distinctly different, such as when the previous task is to classify images of animals and the new task is to classify images of cars, then the features learned for the previous task may turn out to be of little use for the new one.
In the study, the Dynamically Expandable Network (DEN) is trained in an online manner: it performs selective retraining, dynamically expands network capacity upon the arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting/duplicating units and timestamping them.
To ensure that knowledge sharing through the network is beneficial for all of the existing tasks, the researchers had each new task pick out only the relevant features from the previously trained network, while still allowing the network capacity to expand when necessary. This way, each task can use a sub-network different from those of the previous tasks, while still sharing a considerable part of the sub-network with them.
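The timestamping and expansion mechanism described above can be illustrated with a toy sketch. This is not the AITRICS implementation; the class and method names are illustrative, and it shows only the bookkeeping idea: each unit records the task at which it was created, so inference for an earlier task uses only the sub-network that existed when that task was learned, while later tasks may add new units.

```python
import numpy as np

class ExpandableLayer:
    """Toy sketch of one DEN-style layer. Units carry timestamps so that
    each task only uses units created at or before its own task id,
    which is how expansion avoids drifting earlier tasks' sub-networks."""

    def __init__(self, in_dim, n_units, task_id=0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(in_dim, n_units))
        # Record the task at which each unit was created.
        self.timestamps = np.full(n_units, task_id)

    def expand(self, n_new, task_id, seed=1):
        """Dynamic expansion: add only the necessary number of new units
        when current capacity cannot accommodate the new task."""
        rng = np.random.default_rng(seed)
        new_W = rng.normal(scale=0.1, size=(self.W.shape[0], n_new))
        self.W = np.concatenate([self.W, new_W], axis=1)
        self.timestamps = np.concatenate(
            [self.timestamps, np.full(n_new, task_id)])

    def forward(self, x, task_id):
        """Each task sees only the units that existed when it was learned."""
        mask = self.timestamps <= task_id
        return np.maximum(x @ self.W[:, mask], 0.0)  # ReLU activation

# Task 0 is learned with 4 units; task 1 triggers expansion by 2 units.
layer = ExpandableLayer(in_dim=3, n_units=4, task_id=0)
layer.expand(n_new=2, task_id=1)
x = np.ones((1, 3))
print(layer.forward(x, task_id=0).shape)  # task 0 still uses 4 units
print(layer.forward(x, task_id=1).shape)  # task 1 uses all 6 units
```

In the actual method, selective retraining further restricts which existing weights are updated for each task, so the shared part of the sub-network stays stable; the sketch above only captures the capacity and timestamp bookkeeping.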
The researchers validated their incremental deep neural network for lifelong learning on multiple public datasets under lifelong learning scenarios, on which it not only significantly outperformed existing methods for deep networks, but also achieved the same level of performance as its batch counterparts with substantially fewer parameters. Furthermore, the network fine-tuned on all tasks achieved even better performance than the batch models.
The study findings suggest that this methodology can also be used to estimate the optimal network structure even when all tasks are available from the start. In the healthcare arena, it is therefore expected to serve as a potential tool for analyzing millions of medical records to predict patient risk for various types of diseases and ultimately deliver optimal diagnosis and health management.