Publications
2023
- Mutual Learning for Long-Tailed Recognition. Changhwa Park, Junho Yim, and Eunji Jun. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Jan 2023.
Deep neural networks perform well on artificially balanced datasets, but real-world data often follows a long-tailed distribution. Recent studies have focused on developing unbiased classifiers to improve tail-class performance. However, even a carefully learned classifier cannot guarantee solid performance if the underlying representations are of poor quality, and learning high-quality representations in a long-tailed setting is difficult because the features of tail classes easily overfit the training dataset. In this work, we propose a mutual learning framework that produces high-quality representations in long-tailed settings by exchanging information between networks. We show that the proposed method improves representation quality and establishes a new state of the art on several long-tailed recognition benchmarks, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
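The abstract does not spell out the training objective, but the core idea of exchanging information between peer networks can be illustrated with a generic deep-mutual-learning step in PyTorch. Everything below (function and parameter names, the KL-based coupling) is an assumed, simplified stand-in, not the paper's implementation:

```python
import torch.nn.functional as F

def mutual_learning_step(net_a, net_b, opt_a, opt_b, x, y, kl_weight=1.0):
    """One generic mutual-learning step (hypothetical sketch): each network
    is trained on its task loss plus a KL term pulling its predictions
    toward those of its (detached) peer."""
    # Update network A: cross-entropy + KL(peer || self).
    logits_a, logits_b = net_a(x), net_b(x)
    loss_a = F.cross_entropy(logits_a, y) + kl_weight * F.kl_div(
        F.log_softmax(logits_a, dim=1),
        F.softmax(logits_b.detach(), dim=1),
        reduction="batchmean",
    )
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()

    # Update network B symmetrically, using A's refreshed predictions.
    logits_a, logits_b = net_a(x), net_b(x)
    loss_b = F.cross_entropy(logits_b, y) + kl_weight * F.kl_div(
        F.log_softmax(logits_b, dim=1),
        F.softmax(logits_a.detach(), dim=1),
        reduction="batchmean",
    )
    opt_b.zero_grad()
    loss_b.backward()
    opt_b.step()
    return loss_a.item(), loss_b.item()
```

In this pattern each network acts as a soft teacher for its peer, which is the kind of information exchange a mutual learning framework builds on.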
2022
- Transfer Learning for Extreme Domain Gap. Myeongjin Kim, Changhwa Park, Junho Yim, and 1 more author. Jan 2022.
2021
- Removing Undesirable Feature Contributions Using Out-of-Distribution Data. Saehyung Lee, Changhwa Park, Hyungyu Lee, and 3 more authors. In International Conference on Learning Representations, Jan 2021.
Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between the training and inference of neural networks. However, these methods have clear limitations in terms of the availability of UID data and their dependence on pseudo-labels. Herein, we propose a data augmentation method that improves generalization in both adversarial and standard learning by using out-of-distribution (OOD) data, which is free of the aforementioned issues. We show theoretically how OOD data improves generalization in each learning scenario and complement our analysis with experiments on CIFAR-10, CIFAR-100, and a subset of ImageNet. The results indicate that undesirable features are shared even among image data that appear, from a human point of view, to have little correlation. We also demonstrate the advantages of the proposed method through comparison with other data augmentation methods that can be used in the absence of UID data. Furthermore, we show that the proposed method can further improve existing state-of-the-art adversarial training.
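As a rough sketch of the general idea, one common way to exploit unlabeled OOD data is to push the model's predictions on OOD inputs toward the uniform distribution, so that features shared with OOD data stop carrying class evidence. The snippet below is an assumed, simplified illustration in PyTorch (names and the exact weighting are hypothetical), not the paper's training code:

```python
import torch.nn.functional as F

def ood_augmented_loss(model, x_in, y_in, x_ood, ood_weight=1.0):
    """Hypothetical sketch: supervised loss on in-distribution data plus a
    term pushing OOD predictions toward the uniform distribution, so that
    features shared with OOD data carry no class evidence."""
    # Standard cross-entropy on the labeled in-distribution batch.
    loss_in = F.cross_entropy(model(x_in), y_in)

    # Cross-entropy against uniform targets: -(1/K) * sum_k log p_k,
    # averaged over the OOD batch.
    log_p_ood = F.log_softmax(model(x_ood), dim=1)
    loss_ood = -log_p_ood.mean()

    return loss_in + ood_weight * loss_ood
```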
- Deep learning for anomaly detection in time-series data: review, analysis, and guidelines. Kukjin Choi, Jihun Yi, Changhwa Park, and 1 more author. IEEE Access, Jan 2021.
As industries become automated and connectivity technologies advance, a wide range of systems continues to generate massive amounts of data. Many approaches have been proposed to extract principal indicators from this vast sea of data to represent the state of the entire system. Detecting anomalies in these indicators in a timely manner can prevent potential accidents and economic losses. Anomaly detection in multivariate time-series data poses a particular challenge because it requires the simultaneous consideration of temporal dependencies and relationships between variables. Recent deep learning-based works have made impressive progress in this field: they can learn representations of large-scale sequences in an unsupervised manner and identify anomalies in the data. However, most of them are highly specific to an individual use case and thus require domain knowledge for appropriate deployment. This review provides a background on anomaly detection in time-series data and surveys the latest real-world applications. We also comparatively analyze state-of-the-art deep anomaly detection models for time series on several benchmark datasets. Finally, we offer guidelines for appropriate model selection and training strategies for deep learning-based time-series anomaly detection.
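One representative pattern from the model family such a review covers is the reconstruction-based detector: an autoencoder is trained on normal windows, and windows with large reconstruction error are flagged. The sketch below is a minimal, illustrative PyTorch example (the architecture and names are assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Minimal reconstruction-based detector for multivariate time series
    (illustrative architecture): windows the model reconstructs poorly
    are flagged as anomalous."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, time, features)
        _, h = self.encoder(x)                   # h: (1, batch, hidden)
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)  # latent per time step
        out, _ = self.decoder(z)
        return self.head(out)

def anomaly_scores(model, x):
    """Per-window score = mean squared reconstruction error; a threshold,
    e.g. a high quantile of scores on normal data, turns scores into alarms."""
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2))
```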
2020
- Joint contrastive learning for unsupervised domain adaptation. Changhwa Park, Jonghyun Lee, Jaeyoon Yoo, and 2 more authors. arXiv preprint arXiv:2006.10297, Jan 2020.
Enhancing feature transferability by matching marginal distributions has led to improvements in domain adaptation, although this comes at the expense of feature discriminability. In particular, the ideal joint hypothesis error in the target error upper bound, which was previously considered negligible, has been found to be significant, impairing the theoretical guarantee. In this paper, we propose an alternative upper bound on the target error that explicitly considers the joint error, rendering it more manageable. Guided by this theoretical analysis, we suggest a joint optimization framework that combines the source and target domains. Further, we introduce Joint Contrastive Learning (JCL) to find class-level discriminative features, which is essential for minimizing the joint error. On this theoretical foundation, JCL employs a contrastive loss to maximize the mutual information between a feature and its label, which is equivalent to maximizing the Jensen-Shannon divergence between conditional distributions. Experiments on two real-world datasets demonstrate that JCL outperforms state-of-the-art methods.
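While the paper's exact loss is not reproduced here, a class-level contrastive objective of the kind described can be sketched with a supervised-contrastive-style loss, where features sharing a (pseudo-)label form positive pairs across domains. The names and details below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def class_contrastive_loss(feats, labels, temperature=0.1):
    """Supervised-contrastive-style stand-in for a class-level objective:
    features sharing a (pseudo-)label are pulled together across domains,
    all other pairs are pushed apart. Illustrative, not the paper's loss."""
    z = F.normalize(feats, dim=1)
    sim = z @ z.t() / temperature                 # pairwise similarities
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                    # exclude self-pairs

    # Log-softmax over all non-self pairs for each anchor.
    logits = sim - 1e9 * torch.eye(len(z), device=z.device)
    log_prob = F.log_softmax(logits, dim=1)

    # Average log-probability of the positive pairs per anchor.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()
```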
2019
- Learning condensed and aligned features for unsupervised domain adaptation using label propagation. Jaeyoon Yoo*, Changhwa Park*, Yongjun Hong, and 1 more author. arXiv preprint arXiv:1903.04860, Jan 2019.
Unsupervised domain adaptation, which aims to learn a task in one domain using data from another domain, has emerged to address the labeling bottleneck of supervised learning, since massive amounts of labeled data are difficult to obtain in practice. Existing methods have succeeded by reducing the difference between the embedded features of the two domains, but the performance remains unsatisfactory compared with the supervised learning scheme. This is attributable to embedded features that lie near each other yet neither align perfectly nor form clearly separable clusters. We propose a novel domain adaptation method based on label propagation and cycle consistency that makes the feature clusters of the two domains overlap exactly and become well separated, yielding high accuracy. Specifically, we introduce cycle consistency to enforce the relationship between the clusters and exploit label propagation to associate the data through the manifold structure rather than a one-to-one relation. As a result, we form aligned and discriminative clusters. We present empirical results for various domain adaptation scenarios and visualize the embedded features to show that our method is critical for better domain adaptation.
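For intuition, the label propagation building block can be sketched with the classic graph-based iteration of Zhou et al., which spreads source labels to target points along a similarity graph. The code below is a minimal illustration under that assumption, not the authors' full method:

```python
import numpy as np

def propagate_labels(feats_src, y_src, feats_tgt, n_classes,
                     sigma=1.0, alpha=0.99, n_iters=20):
    """Classic label propagation (Zhou et al.-style iteration) over a
    joint source+target similarity graph; an illustration of the
    association step, not the authors' full method."""
    X = np.concatenate([feats_src, feats_tgt], axis=0)
    n_src = len(feats_src)

    # Gaussian affinities with symmetric normalization S = D^-1/2 W D^-1/2.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    d_inv = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = d_inv[:, None] * W * d_inv[None, :]

    # Seed labels: one-hot for source points, zeros for unlabeled targets.
    Y = np.zeros((len(X), n_classes))
    Y[np.arange(n_src), y_src] = 1

    # Iterate F <- alpha * S @ F + (1 - alpha) * Y.
    F = Y.copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F[n_src:].argmax(axis=1)               # predicted target labels
```

Propagating through the graph associates each target point with a whole cluster of source points via the manifold structure, rather than through a single one-to-one match.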