NoisyMix: Boosting Model Robustness to Common Corruptions
2024 (English). In: International Conference on Artificial Intelligence and Statistics, Vol. 238 / [ed] Dasgupta, S.; Mandt, S.; Li, Y. JMLR - Journal of Machine Learning Research, 2024, Vol. 238. Conference paper, Published paper (Refereed)
Abstract [en]
The robustness of neural networks has become increasingly important in real-world applications where stable and reliable performance is valued over simply achieving high predictive accuracy. To this end, data augmentation techniques have been shown to improve robustness against input perturbations and domain shifts. In this paper, we propose a new training scheme called NoisyMix that leverages noisy augmentations in both input and feature space to improve model robustness and in-domain accuracy. We demonstrate the effectiveness of NoisyMix on several benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Additionally, we provide a theoretical analysis to better understand the implicit regularization and robustness properties of NoisyMix.
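As a rough illustration of the idea summarized in the abstract, the PyTorch sketch below combines input-space noise injection with a manifold-mixup-style convex combination (plus noise) of hidden representations and correspondingly mixed soft targets. The network split, hyperparameters (alpha, noise_std), and loss formulation are assumptions made for illustration, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStageNet(nn.Module):
    """Toy classifier split into a feature extractor and a head so that
    augmentation can also be applied to intermediate features."""
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.head(self.features(x))


def noisy_mix_step(model, x, y, n_classes=10, alpha=1.0, noise_std=0.1):
    """One hypothetical training step with noisy augmentation in input
    and feature space; names and values here are illustrative only."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))

    # Input-space augmentation: additive Gaussian noise on the raw inputs.
    x_noisy = x + noise_std * torch.randn_like(x)

    # Feature-space augmentation: mix hidden representations of paired
    # examples and inject noise into the mixed features.
    h = model.features(x_noisy)
    h_mix = lam * h + (1.0 - lam) * h[perm]
    h_mix = h_mix + noise_std * torch.randn_like(h_mix)
    logits = model.head(h_mix)

    # Soft targets mixed with the same coefficient as the features.
    y_onehot = F.one_hot(y, n_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return F.cross_entropy(logits, y_mix)


if __name__ == "__main__":
    model = TwoStageNet()
    x = torch.randn(16, 32)
    y = torch.randint(0, 10, (16,))
    loss = noisy_mix_step(model, x, y)
    loss.backward()
    print(float(loss))
```

This sketch is only meant to show where the two noisy augmentations act (on inputs before the feature extractor, and on mixed hidden features before the head); the paper should be consulted for the actual NoisyMix formulation and training details.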
Place, publisher, year, edition, pages
JMLR - Journal of Machine Learning Research, 2024. Vol. 238
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Physical Sciences
Identifiers
URN: urn:nbn:se:kth:diva-356096
ISI: 001286500304022
OAI: oai:DiVA.org:kth-356096
DiVA, id: diva2:1911643
Conference
27th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2-4, 2024, Valencia, Spain
Note
QC 20241108
Available from: 2024-11-08 Created: 2024-11-08 Last updated: 2025-08-15 Bibliographically approved