Publication
Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers
Tobias Christian Nauen; Sebastian Palacio; Federico Raue; Andreas Dengel
In: Proceedings of the Winter Conference on Applications of Computer Vision (WACV). IEEE Winter Conference on Applications of Computer Vision (WACV-2025), February 28 - March 4, 2025, Tucson, AZ, USA, Pages 6955-6966, ISBN 979-8-3315-1083-1, IEEE, 2/2025.
Abstract
Self-attention in Transformers comes with a high computational cost because of its quadratic complexity, but its effectiveness in addressing problems in language and vision has sparked extensive research aimed at enhancing efficiency. However, diverse experimental conditions, spanning multiple input domains, prevent a fair comparison based solely on reported results, posing challenges for model selection. To address this gap in comparability, we perform a large-scale benchmark of more than 45 models for image classification, evaluating key efficiency aspects, including accuracy, speed, and memory usage. Our benchmark provides a standardized baseline for efficiency-oriented transformers. We analyze the results based on the Pareto front – the boundary of optimal models. Surprisingly, despite claims that other models are more efficient, ViT remains Pareto optimal across multiple metrics. We observe that hybrid attention-CNN models exhibit remarkable inference memory and parameter efficiency. Moreover, our benchmark shows that, in general, using a larger model is more efficient than using higher-resolution images. Thanks to our holistic evaluation, we provide a centralized resource for practitioners and researchers, facilitating informed decisions when selecting or developing efficient transformers.
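To make the Pareto-front analysis concrete, below is a minimal sketch of how Pareto-optimal models can be identified over two efficiency metrics (accuracy and throughput, both higher-is-better). The model names and numbers are hypothetical and purely illustrative; they are not the paper's benchmark results.

```python
from typing import NamedTuple

class ModelResult(NamedTuple):
    name: str
    accuracy: float        # top-1 accuracy, higher is better
    images_per_sec: float  # throughput, higher is better

def pareto_front(results: list[ModelResult]) -> list[ModelResult]:
    """Return the models not dominated by any other model.

    A model is dominated if some other model is at least as good on
    both metrics and strictly better on at least one.
    """
    front = []
    for r in results:
        dominated = any(
            o.accuracy >= r.accuracy
            and o.images_per_sec >= r.images_per_sec
            and (o.accuracy > r.accuracy or o.images_per_sec > r.images_per_sec)
            for o in results
        )
        if not dominated:
            front.append(r)
    return front

# Hypothetical benchmark entries, for illustration only.
results = [
    ModelResult("ViT-B", 81.0, 900.0),
    ModelResult("Hybrid-A", 80.5, 1100.0),
    ModelResult("Model-X", 79.0, 950.0),  # dominated by Hybrid-A
]
print(pareto_front(results))  # -> [ViT-B, Hybrid-A]
```

In the paper's setting, the same dominance check extends to further metrics such as memory usage and parameter count; a model is Pareto optimal if no competitor beats it on every axis at once.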
Projects
- SustainML_SDS - Application Aware, Life-Cycle Oriented Model-Hardware Co-Design Framework for Sustainable, Energy Efficient ML Systems
- Albatross - Applications for Lifelong Based Algorithms Targeting Robust Optimization on Sustainable Settings