NoisyMix: Boosting Model Robustness to Common Corruptions
2024 (English) In: International Conference On Artificial Intelligence And Statistics, Vol. 238 / [ed] Dasgupta, S; Mandt, S; Li, Y, JMLR: Journal of Machine Learning Research, 2024, Vol. 238. Conference paper, Published paper (Refereed)
Abstract [en]
The robustness of neural networks has become increasingly important in real-world applications where stable and reliable performance is valued over simply achieving high predictive accuracy. To address this, data augmentation techniques have been shown to improve robustness against input perturbations and domain shifts. In this paper, we propose a new training scheme called NoisyMix that leverages noisy augmentations in both input and feature space to improve model robustness and in-domain accuracy. We demonstrate the effectiveness of NoisyMix on several benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Additionally, we provide theoretical analysis to better understand the implicit regularization and robustness properties of NoisyMix.
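As a rough illustration of the input-space half of the scheme described in the abstract, the sketch below combines standard mixup with additive and multiplicative white noise. This is a minimal sketch only: the paper also injects noise in feature space, and the function name, parameter names, and default values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def noisy_mixup(x1, x2, y1, y2, alpha=1.0, add_noise=0.1, mult_noise=0.1, rng=None):
    """Sketch of noisy input-space mixup (illustrative, not the paper's code).

    Draws a mixup coefficient from a Beta(alpha, alpha) distribution,
    forms convex combinations of the inputs and labels, then perturbs
    the mixed input with multiplicative and additive Gaussian noise.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # mixup coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2              # input-space mixup
    y = lam * y1 + (1.0 - lam) * y2              # mixed (soft) labels
    x = x * (1.0 + mult_noise * rng.standard_normal(x.shape))  # multiplicative noise
    x = x + add_noise * rng.standard_normal(x.shape)           # additive noise
    return x, y
```

Setting both noise scales to zero recovers plain mixup, which makes the noise injection easy to ablate against the standard baseline.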
Place, publisher, year, edition, pages
JMLR: Journal of Machine Learning Research, 2024. Vol. 238
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-356096
ISI: 001286500304022
OAI: oai:DiVA.org:kth-356096
DiVA, id: diva2:1911643
Conference
27th International Conference on Artificial Intelligence and Statistics (AISTATS), MAY 02-04, 2024, Valencia, SPAIN
Note
QC 20241108
Available from: 2024-11-08. Created: 2024-11-08. Last updated: 2025-08-15. Bibliographically checked.