7 AITRICS Papers Accepted to NeurIPS 2018
November 22, 2018
Seven research papers with AITRICS researchers have been accepted to NeurIPS 2018, the 32nd Conference on Neural Information Processing Systems, and will be presented at the event from December 4th to 7th.
The Conference on Neural Information Processing Systems, long known as "NIPS", is a very well-known machine learning conference that recently adopted "NeurIPS" as an alternative acronym.
This year, 1,011 papers were accepted out of 4,856 submissions, for a 20.8% acceptance rate. AITRICS is among the top corporate research institutions with the most accepted papers, alongside DeepMind, Element AI, and Amazon.
Listed below is an overview of the AITRICS papers accepted to NeurIPS.
- Uncertainty-Aware Attention for Reliable Interpretation and Prediction
- Joint Active Feature Acquisition and Classification with Variable-Size Set Encoding
- DropMax: Adaptive Variational Softmax
- Stacked Semantic-Guided Attention Model for Fine-Grained Zero-Shot Learning
- Stochastic Chebyshev Gradient Descent for Spectral Optimization
- A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
- Learning to Specialize with Knowledge Distillation for Visual Question Answering
One of the most important challenges with prediction models is explaining why the model arrived at a certain prediction or suggestion. Attention mechanisms are most often used to interpret neural network-based systems, since they let the network focus on the parts of the input most relevant to its predictions, thereby measuring the correlation between input features and prediction results.
However, with conventional attention mechanisms it is difficult to assess how reliable each feature's contribution to the final result is. On noisy datasets where features can hardly be matched one-to-one with the prediction, such as risk prediction from patients' electronic health records, overconfident and inaccurate attention can lead to incorrect predictions, potentially with severe consequences.
To tackle this limitation of attention mechanisms, AITRICS proposes a novel methodology that captures input-level uncertainty, so the model knows when it is safe to make predictions and when it is not. The resulting calibration of model uncertainty, together with attention that aligns well with human interpretation, demonstrates that uncertainty-aware attention can provide high reliability and interpretability in health care.
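To give a flavor of the idea, here is a minimal NumPy sketch, not the paper's actual model: attention logits are drawn from an input-dependent Gaussian (via the reparameterization trick), so the spread of the sampled attention weights serves as a per-feature uncertainty signal. All names (`uncertainty_aware_attention`, `W_mu`, `W_sigma`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def uncertainty_aware_attention(x, W_mu, W_sigma, n_samples=100):
    """Toy sketch: attention weights sampled from an input-dependent Gaussian.

    Instead of a single deterministic score, each feature's attention logit
    is drawn from N(mu(x), sigma(x)^2); the spread of the resulting softmax
    weights reflects how confident the model is about that feature.
    """
    mu = x @ W_mu                           # mean attention logit per feature
    sigma = np.log1p(np.exp(x @ W_sigma))   # softplus keeps the std positive
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    logits = mu + sigma * eps               # reparameterization trick
    # softmax over features, independently for each sample
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)
    # mean = usable attention weights; std = per-feature uncertainty
    return attn.mean(axis=0), attn.std(axis=0)

d = 4
x = rng.standard_normal(d)
W_mu = rng.standard_normal((d, d))
W_sigma = rng.standard_normal((d, d))
mean_attn, std_attn = uncertainty_aware_attention(x, W_mu, W_sigma)
```

A feature whose sampled weights vary widely (large `std_attn`) is one whose contribution the model is unsure about, which is exactly the signal the conventional deterministic attention cannot provide.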
We also propose a joint learning framework for active feature acquisition and classification, which can minimize the number of unnecessary examinations and reduce overall medical expenses.
Doctors often make an initial diagnosis based on a few symptoms that patients report. They then conduct further examinations, such as blood tests, urinalysis, or an electrocardiogram, to narrow down the set of diseases the patient might have, until they are confident enough to make a final diagnosis.
However, running every available test is often inappropriate: it is costly and may delay proper treatment. Furthermore, collecting irrelevant features might only add noise to the data and make the prediction unstable.
In our paper, we propose a framework that sequentially acquires the subset of features needed to achieve optimal prediction performance in the most cost-effective way.
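The acquisition loop can be sketched as follows. This is a deliberately simplified stand-in, not the paper's method: the paper learns which feature to query next jointly with the classifier, whereas this toy policy just acquires cheapest-first until the classifier is confident or a budget is exhausted. All names (`sequential_acquire`, `predict_proba`, `costs`) are illustrative assumptions.

```python
import numpy as np

def sequential_acquire(x_full, costs, predict_proba, confidence=0.9, budget=10.0):
    """Toy sketch of cost-aware sequential feature acquisition.

    Starting with no observed features, repeatedly acquire an unobserved
    feature (cheapest-first here, for illustration) until the classifier's
    confidence exceeds a threshold or the next test would bust the budget.
    """
    observed = {}
    spent = 0.0
    for idx in np.argsort(costs):           # stand-in acquisition policy
        p = predict_proba(observed)
        if p.max() >= confidence or spent + costs[idx] > budget:
            break
        observed[idx] = x_full[idx]          # "run the examination"
        spent += costs[idx]
    return observed, predict_proba(observed), spent

def predict_proba(obs):
    # Hypothetical 2-class classifier whose confidence grows with each
    # observed feature; a real model would use the feature values in obs.
    conf = min(0.5 + 0.15 * len(obs), 1.0)
    return np.array([1.0 - conf, conf])

x = np.array([1.2, -0.3, 0.8, 2.1])
costs = np.array([1.0, 5.0, 2.0, 0.5])
obs, proba, spent = sequential_acquire(x, costs, predict_proba)
```

With these toy numbers the loop stops after three of the four examinations, skipping the most expensive test entirely once the classifier is already confident, which is the cost-saving behavior the framework is after.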
We evaluated both of our novel approaches on electronic health record (EHR) datasets, where they outperformed all baselines in terms of prediction performance, interpretability, and feature acquisition cost.