Research Publications
An Impartial Take to the CNN vs Transformer Robustness Contest
Francesco Pinto, Philip H. S. Torr, and Puneet K. Dokania
The European Conference on Computer Vision (ECCV) 2022, Tel Aviv
Abstract
Following the surge of popularity of Transformers in computer vision, several studies have attempted to determine whether they could be more robust to distribution shifts and provide better uncertainty estimates than Convolutional Neural Networks (CNNs). The almost unanimous conclusion is that they are, and it is often conjectured, more or less explicitly, that this supposed superiority stems from the self-attention mechanism. In this paper we perform extensive empirical analyses showing that recent state-of-the-art CNNs (particularly ConvNeXt) can be as robust and reliable as, and sometimes even more so than, the current state-of-the-art Transformers. However, there is no clear winner. Therefore, although it is tempting to declare the definitive superiority of one family of architectures over the other, both seem to enjoy similarly extraordinary performance on a variety of tasks while also suffering from similar vulnerabilities, such as texture, background, and simplicity biases.
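The comparison above hinges on measuring accuracy and uncertainty quality under distribution shift. As a rough illustration only (not the paper's actual protocol, benchmarks, or model set), the sketch below evaluates a pretrained ConvNeXt and a pretrained ViT from timm on a noise-corrupted ImageNet-style validation folder, reporting top-1 accuracy and a simple expected calibration error (ECE). The data path, the Gaussian-noise corruption, the shared preprocessing, and the ECE bin count are all illustrative assumptions.

```python
# Illustrative sketch, not the paper's protocol: compare a ConvNeXt and a ViT
# under a crude distribution shift, reporting top-1 accuracy and ECE.
import torch
import timm
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
VAL_DIR = "/data/imagenet/val"  # hypothetical path to an ImageNet-style folder

# Stand-in for a distribution shift: additive Gaussian pixel noise.
# Note: per-model preprocessing (timm data configs) is simplified to one transform.
shift = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0, 1)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

loader = DataLoader(datasets.ImageFolder(VAL_DIR, transform=shift),
                    batch_size=64, num_workers=4)

def ece(confidences, correct, n_bins=15):
    """Expected calibration error over equally spaced confidence bins."""
    bins = torch.linspace(0, 1, n_bins + 1)
    err = torch.zeros(1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Weight each bin's |confidence - accuracy| gap by its sample share.
            err += mask.float().mean() * (
                confidences[mask].mean() - correct[mask].float().mean()
            ).abs()
    return err.item()

@torch.no_grad()
def evaluate(name):
    model = timm.create_model(name, pretrained=True).eval().to(DEVICE)
    confs, hits = [], []
    for images, labels in loader:
        probs = model(images.to(DEVICE)).softmax(dim=-1).cpu()
        conf, pred = probs.max(dim=-1)
        confs.append(conf)
        hits.append(pred == labels)
    confs, hits = torch.cat(confs), torch.cat(hits)
    print(f"{name}: top-1 {hits.float().mean().item():.3f}, ECE {ece(confs, hits):.3f}")

# Two representative checkpoints; the paper compares a broader set of models.
for model_name in ["convnext_tiny", "vit_base_patch16_224"]:
    evaluate(model_name)
```

Lower accuracy or higher ECE under such corruption would indicate reduced robustness or reliability for that checkpoint on this particular shift; the paper's conclusions rest on a far wider range of shifts and bias probes.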
Download the full paper