AITRICS
Technology

With the world’s most reliable, trustworthy,
and interpretable state-of-the-art technology,
we can change the way AI is developed
and utilized.

Low Resource AI

Transfer Learning

Improving generalization performance and efficient data learning by transferring knowledge acquired from a trained model to a new problem
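As a minimal sketch of the idea (toy data, hypothetical names, not AITRICS code): a frozen random projection stands in for a feature extractor pretrained on a source task, and only a new logistic-regression head is trained on the target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen random projection stands in for a feature extractor
# pretrained on a source task; it is never updated here.
W_frozen = rng.normal(size=(5, 3))

def features(x):
    return np.tanh(x @ W_frozen)

# Toy target-task data: the label depends on the first input dimension.
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)

# Only the new task head (w, b) is trained on the target data.
w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(500):
    h = features(X)
    p = 1.0 / (1.0 + np.exp(-(h @ w + b)))   # sigmoid head
    grad = p - y
    w -= lr * (h.T @ grad) / len(X)
    b -= lr * grad.mean()

accuracy = ((p > 0.5) == y).mean()
```

In practice the frozen extractor would be a network trained on a large source dataset, and some of its layers may be fine-tuned rather than kept fully frozen.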

Meta-Learning

Enabling AI models to quickly adapt to new tasks by training a model to generalize to various tasks

Data Augmentation & Perturbation

Improving the generalization performance by increasing or seemingly expanding the number of training samples through data augmentation and perturbation
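A minimal sketch of the idea (toy arrays, hypothetical `augment` helper): each sample is mirrored and perturbed with Gaussian noise, seemingly expanding the training set fourfold without collecting new data.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(batch, n_copies=3, noise_std=0.05):
    """Seemingly expand the dataset: each image also yields n_copies
    mirrored, noise-perturbed variants."""
    out = [batch]
    for _ in range(n_copies):
        flipped = batch[:, :, ::-1]                       # horizontal flip
        out.append(flipped + rng.normal(0.0, noise_std, batch.shape))
    return np.concatenate(out, axis=0)

images = rng.random((8, 16, 16))       # 8 toy grayscale "images"
augmented = augment(images)            # 8 originals + 24 perturbed variants
```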

Neural Network Compression

Effectively reducing the memory and computation cost of neural network models through network weight reduction, bit compression, and knowledge distillation
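One of the weight-reduction techniques mentioned above, magnitude pruning, can be sketched in a few lines (toy weights, hypothetical `magnitude_prune` helper): the smallest-magnitude weights are zeroed so that only a small fraction needs to be stored and computed.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Network weight reduction: zero out the smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]      # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 100))
W_pruned, mask = magnitude_prune(W, sparsity=0.8)
# Only the mask and the ~20% surviving weights need to be stored.
```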

Related Papers

ICML 2020

Self-supervised Label Augmentation via Input Transformations

Hankook Lee, Sung Ju Hwang, Jinwoo Shin

ICML 2020

Cost-effective Interactive Attention Learning with Neural Attention Processes

Jay Heo, Junhyeon Park, Hyewon Jeong, Kwang Joon Kim, Juho Lee, Eunho Yang, Sung Ju Hwang

ICML 2020

Adversarial Neural Pruning with Latent Vulnerability Suppression

Divyam Madaan, Jinwoo Shin, Sung Ju Hwang

ICLR 2019

Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning

Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, Yi Yang

arXiv 2018

Adaptive Network Sparsification via Dependent Variational Beta-Bernoulli Dropout

Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang

SAFE AI

Interpretable ML

Providing an interpretable basis for prediction results at both the sample and model levels
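A simple model-level attribution baseline in this spirit is permutation importance: shuffle one input feature at a time and measure how much the prediction error grows. A minimal sketch with toy data and a hypothetical fitted model:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 2]         # feature 0 dominates the target

def model(X):
    """A fitted model stand-in; here simply the true coefficients."""
    return X @ np.array([2.0, 0.0, 0.1])

def permutation_importance(model, X, y):
    """Shuffle one feature at a time and measure the error increase."""
    base = np.mean((model(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((model(Xp) - y) ** 2) - base)
    return np.array(scores)

importance = permutation_importance(model, X, y)   # highest for feature 0
```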

Uncertainty Modeling / Quantification

Improving prediction reliability by modeling and quantifying the uncertainty of model knowledge and prediction
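One common way to quantify model uncertainty is MC dropout: keep dropout active at prediction time and treat the spread of repeated stochastic forward passes as an uncertainty estimate. A minimal NumPy sketch with an untrained toy network (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(7)

# An untrained toy network; in practice these weights come from training.
W1 = rng.normal(size=(4, 32))
W2 = rng.normal(size=(32, 1))

def forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout kept active."""
    h = np.maximum(x @ W1, 0)                 # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p       # dropout at prediction time
    h = h * mask / (1 - drop_p)
    return (h @ W2).ravel()

x = rng.normal(size=(1, 4))
samples = np.stack([forward(x) for _ in range(100)])
mean, std = samples.mean(), samples.std()     # predictive mean and spread
```

A large `std` flags inputs on which the model's prediction should not be trusted.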

Adversarially-Robust ML

Learning models that remain robust against adversarial attacks designed to make the model's predictions inconsistent
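The kind of attack being defended against can be sketched with an FGSM-style perturbation on a hypothetical linear classifier (toy numbers): a small, sign-based step against the margin is enough to flip the prediction, and robust (adversarial) training includes such perturbed samples in the training set.

```python
import numpy as np

# Hypothetical linear classifier: predict sign(w . x).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 1.0])
y = 1.0                                   # true label of x

# FGSM-style attack: for this linear model the input gradient of a
# margin loss is -y * w, so the attack steps along sign(-y * w).
eps = 0.4
x_adv = x + eps * np.sign(-y * w)

margin_clean = y * (w @ x)                # > 0: correctly classified
margin_adv = y * (w @ x_adv)              # < 0: the attack flips the label
```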

Privacy-Preserving ML

Protecting privacy-sensitive data and learning securely

Related Papers

ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning

A benchmark study on reliable molecular supervised learning via Bayesian learning

Doyeong Hwang, Grace Lee, Hanseok Jo, Seyoul Yoon, Seongok Ryu

ACS 2020

Comprehensive Study on Molecular Supervised Learning with Graph Neural Networks

Doyeong Hwang, Soojung Yang, Yongchan Kwon, Kyung Hoon Lee, Grace Lee, Hanseok Jo, Seyeol Yoon, Seongok Ryu

Critical Care 2019

A Deep Learning Model for Real-time Mortality Prediction in Critically ill Children

* Soo Yeon Kim, * Saehoon Kim, Joongbum Cho, Young Suh Kim, In Suk Sol, Youngchul Sung, Inhyeok Cho, Minseop Park, Haerin Jang, Yoon Hee Kim, ** Kyung Won Kim and Myung Hyun Sohn (*: equal contribution, **: corresponding)

NeurIPS 2018

Uncertainty-Aware Attention for Reliable Interpretation and Prediction

Jay Heo, Haebeom Lee, Saehoon Kim, Juho Lee, Kwangjun Kim, Eunho Yang, Sung Ju Hwang

Auto ML

Meta-Learning

Training an artificial intelligence model to adapt quickly to new tasks by learning to generalize to a variety of tasks

Neural Architecture Search

Automatically searching for an optimal network architecture while taking the constraints of the data into account

Bayesian Optimization

Automatically searching for the hyperparameters of a black-box model through Bayesian inference
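A minimal sketch of the loop (toy 1-D problem, hypothetical names): a Gaussian-process surrogate is fitted to the evaluations so far, and a lower-confidence-bound acquisition picks the next point to evaluate, trading off the posterior mean against its uncertainty.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def objective(x):
    """Hypothetical black-box function to minimize."""
    return np.sin(3 * x) + x ** 2

rng = np.random.default_rng(0)
X = list(rng.uniform(-1, 1, 3))           # a few random initial evaluations
Y = [float(objective(x)) for x in X]
grid = np.linspace(-1, 1, 200)

for _ in range(10):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))   # jitter for stability
    Ks = rbf(grid, Xa)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ Ya                       # GP posterior mean on the grid
    var = 1.0 - np.sum((Ks @ K_inv) * Ks, axis=1)
    sigma = np.sqrt(np.clip(var, 1e-12, None))
    # Lower-confidence-bound acquisition: evaluate where mu - 2*sigma is lowest.
    x_next = grid[np.argmin(mu - 2.0 * sigma)]
    X.append(float(x_next))
    Y.append(float(objective(x_next)))

best = min(Y)
```

Production systems replace the grid with a continuous acquisition optimizer and learn the kernel hyperparameters, but the fit/acquire/evaluate loop is the same.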

Related Papers

ICML 2019

Learning What and Where to Transfer

Yunhun Jang, Hankook Lee, Sung Ju Hwang, Jinwoo Shin

NeurIPS 2017 Workshop on Bayesian Optimization

Learning to Transfer Initializations for Bayesian Hyperparameter Optimization

Jungtaek Kim, Saehoon Kim, Seungjin Choi

Evidence of
our technology