Augmenting Imbalanced Time-series Data via Adversarial Perturbation in Latent Space

Nov 17, 2021

About

The success of training deep learning models largely depends on the amount and quality of training data. Although numerous data augmentation techniques have been proposed for domains such as computer vision, where simple schemes like rotation and flipping have proven effective, other domains such as time-series data have a relatively smaller set of readily available augmentation techniques. In addition, data imbalance is frequently observed in real-world data, and simple oversampling can make a model vulnerable to overfitting, so a proper data augmentation method is desired. To tackle these problems, we propose a data augmentation method that utilizes the latent vectors of an autoencoder in a novel way. When input data is perturbed in its latent space, the reconstructed data retains properties similar to the original. On the other hand, adversarial augmentation is a technique for training deep neural networks to be robust against unforeseen data shifts or corruptions by providing a downstream model with samples that are difficult to predict. Our method adversarially perturbs input data in its latent space so that the augmented data is diverse and conducive to reducing the test error of a downstream model. The experimental results demonstrate that our method strikes the right balance between modifying the input data enough to aid generalization and preserving its realism.
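The core idea in the abstract can be sketched in code. The snippet below is a minimal, illustrative sketch rather than the authors' implementation: it assumes a pretrained PyTorch autoencoder and downstream classifier, and the names (AE, latent_adversarial_augment) and the FGSM-style step size epsilon are hypothetical choices for illustration. It encodes a batch, takes one signed-gradient step in latent space that increases the classifier's loss, and decodes the perturbed codes as augmented samples.

```python
# Sketch (assumed components, not the authors' code): adversarially perturb a
# sample's latent code so the decoded sample is hard for a downstream
# classifier, then use the decoded sample as an augmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AE(nn.Module):
    """Tiny fully connected autoencoder for fixed-length time series (illustrative)."""
    def __init__(self, seq_len=128, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(seq_len, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, seq_len))

def latent_adversarial_augment(ae, classifier, x, y, epsilon=0.1):
    """Decode latent codes pushed one signed-gradient (FGSM-style) step in the
    direction that increases the downstream classifier's loss."""
    z = ae.encoder(x).detach()            # latent codes of the original batch
    z.requires_grad_(True)
    logits = classifier(ae.decoder(z))    # classify the reconstruction
    loss = F.cross_entropy(logits, y)
    loss.backward()                       # gradient of the loss w.r.t. z
    z_adv = z + epsilon * z.grad.sign()   # small, loss-increasing latent step
    with torch.no_grad():
        x_aug = ae.decoder(z_adv)         # decode back to the input space
    return x_aug.detach()

# Usage: augment a (e.g., minority-class) batch and train on originals + augmentations.
if __name__ == "__main__":
    ae = AE()
    clf = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))
    x = torch.randn(8, 128)               # dummy batch of length-128 series
    y = torch.ones(8, dtype=torch.long)   # dummy labels
    x_aug = latent_adversarial_augment(ae, clf, x, y)
    print(x_aug.shape)                    # torch.Size([8, 128])
```

Because the perturbation is applied in latent space and then decoded, the augmented series stay close to the data manifold while still being difficult for the downstream model, which is the balance the abstract describes.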

Organizer

About ACML 2021

The 13th Asian Conference on Machine Learning (ACML 2021) aims to provide a leading international forum for researchers in machine learning and related fields to share their new ideas, progress, and achievements.
