Guo C, Lee MJ, Leclerc G, et al. Adversarially trained neural representations may already be as robust as corresponding biological neural representations. arXiv. 2022. doi:10.48550/arXiv.2206.11228.

Dapello J, Kar K, Schrimpf M, et al. Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness. bioRxiv. 2022. doi:10.1101/2022.07.01.498495.

Baidya A, Dapello J, DiCarlo JJ, Marques T. Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs. Shared Visual Representations in Human & Machine Intelligence - NeurIPS Workshop. 2021. Available at: https://arxiv.org/abs/2110.10645.

Dapello J, Marques T, Schrimpf M, Geiger F, Cox DD, DiCarlo JJ. Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations. Neural Information Processing Systems (NeurIPS; spotlight). 2020. doi:10.1101/2020.06.16.154542.