Adversarial training and attribution methods enable evaluation of robustness and interpretability of deep learning models for image classification
Physical Review E 110, 054310 (2024)
Abstract
Deep learning models have achieved high performance in a wide range of applications. Recently, however, there have been increasing concerns about the fragility of many of those models to adversarial approaches and out-of-distribution inputs. One way to investigate, and potentially address, model fragility is to make model predictions interpretable. To this end, input attribution approaches such as Grad-CAM and integrated gradients have been introduced. Here, we combine adversarial and input attribution approaches in order to achieve two goals. The first is to investigate the impact of adversarial approaches on input attribution. The second is to benchmark competing input attribution approaches. In the context of the image classification task, we find that, for all considered input attribution approaches, models trained with adversarial approaches yield dramatically different input attribution matrices from those obtained using standard training techniques. Additionally, by evaluating the signal-to-noise ratio of input attributions, where the signal is the typical attribution of the foreground and the noise is the typical attribution of the background, and correlating it with model confidence, we are able to identify the most reliable input attribution approaches and demonstrate that adversarial training does increase prediction robustness. Our approach can be easily extended to contexts other than image classification and enables users to increase their confidence in the reliability of deep learning models.
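To make the benchmarking measure concrete, the sketch below shows, under stated assumptions, how an integrated-gradients attribution map and the foreground-to-background signal-to-noise ratio described above could be computed in PyTorch. This is an illustration rather than the paper's implementation; the `model`, the `foreground_mask` segmentation, the all-zero baseline, and the 50-step Riemann approximation are assumptions not specified in the abstract.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Riemann approximation of integrated gradients for the target-class
    score of `model` at input `x` of shape (1, C, H, W)."""
    if baseline is None:
        baseline = torch.zeros_like(x)   # assumed all-zero baseline image
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Point on the straight path from the baseline to the input.
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point)[0, target]  # score (logit) of the target class
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # Attribution = (input - baseline) * average gradient along the path.
    return (x - baseline) * total_grad / steps

def attribution_snr(attribution, foreground_mask):
    """Signal-to-noise ratio of an attribution map: mean |attribution| over
    the foreground (object) divided by mean |attribution| over the background.
    `foreground_mask` is a boolean tensor broadcastable to `attribution`."""
    attr = attribution.abs()
    mask = foreground_mask.expand_as(attr)
    signal = attr[mask].mean()
    noise = attr[~mask].mean()
    return (signal / noise).item()

# Usage sketch (illustrative names): compare a standard and an adversarially
# trained classifier on the same image `x` and segmentation `foreground_mask`.
#
#   probs = torch.softmax(model(x), dim=1)
#   confidence, target = probs.max(dim=1)
#   attr = integrated_gradients(model, x, target.item())
#   snr = attribution_snr(attr, foreground_mask)
#
# Correlating `snr` with `confidence` across many images mirrors the
# evaluation described in the abstract.
```

The same ratio can be computed for any other attribution method mentioned in the abstract (e.g., Grad-CAM) by substituting its attribution map for the integrated-gradients output.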