PUBLICATIONS
SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization

ICML 2017

Juyong Kim, Yookoon Park, Gunhee Kim, Sung Ju Hwang

 

We propose a novel deep neural network that is both lightweight and effectively structured for model parallelization. Our network, which we call SplitNet, automatically learns to split the network weights into either a set or a hierarchy of multiple groups that use disjoint sets of features, by jointly learning the class-to-group and feature-to-group assignment matrices along with the network weights.
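The core idea above can be illustrated with a toy example: once classes and features are (hard-)assigned to groups, the assignment matrices induce a block structure on a layer's weights, so each group becomes an independent subnetwork that can run in parallel. The sketch below is only an illustration of this masking idea with hand-fixed assignments, not the paper's learning procedure (in SplitNet the assignments are learned jointly with the weights); all sizes and variable names are hypothetical.

```python
import numpy as np

np.random.seed(0)

C, D, G = 6, 8, 2  # classes, features, groups (toy sizes)

# Hard class-to-group and feature-to-group assignments (one-hot rows).
# SplitNet learns these; here they are fixed by hand for illustration.
class_groups = np.array([0, 0, 0, 1, 1, 1])
feat_groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
P = np.eye(G)[class_groups]  # (C, G) class-to-group matrix
Q = np.eye(G)[feat_groups]   # (D, G) feature-to-group matrix

# Mask for a fully connected layer: weight (c, d) survives only if
# class c and feature d belong to the same group.
M = P @ Q.T                  # (C, D) binary mask

W = np.random.randn(C, D) * M  # masked weights: block structure

# Because groups use disjoint feature sets, inference decomposes into
# independent per-group subnetworks (model parallelism):
x = np.random.randn(D)
full = W @ x
parts = np.concatenate([
    W[class_groups == g][:, feat_groups == g] @ x[feat_groups == g]
    for g in range(G)
])
print(np.allclose(full, parts))  # True
```

The per-group products touch only their own slice of the weights and input, which is what makes splitting the network across devices straightforward once the groups are learned.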