
Publication

Comprehensive Layer-wise Analysis of SSL Models for Audio Deepfake Detection

Yassine El Kheir; Younes Samih; Suraj Maharjan; Tim Polzehl; Sebastian Möller
In: Luis Chiruzzo; Alan Ritter; Lu Wang (Eds.). Findings of the Association for Computational Linguistics: NAACL 2025. Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL-2025), April 29 - May 4, 2025, Albuquerque, USA, Pages 4070-4082, ISBN 979-8-89176-195-7, Association for Computational Linguistics, April 2025.

Abstract

This paper conducts a comprehensive layer-wise analysis of self-supervised learning (SSL) models for audio deepfake detection across diverse contexts, including multilingual datasets (English, Chinese, Spanish) as well as partial, song-based, and scene-based deepfake scenarios. By systematically evaluating the contributions of different transformer layers, we uncover critical insights into model behavior and performance. Our findings reveal that lower layers consistently provide the most discriminative features, while higher layers capture less relevant information. Notably, all models achieve competitive equal error rate (EER) scores even when employing a reduced number of layers. This indicates that we can reduce computational costs and increase inference speed for deepfake detection by utilizing only a few lower layers. This work enhances our understanding of SSL models in deepfake detection, offering valuable insights applicable across varied linguistic and contextual settings. Our trained models and code are publicly available.
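The abstract does not include implementation details, but a minimal sketch of the kind of layer-wise probing it describes could look like the following. It uses a Hugging Face SSL speech encoder; the model name, mean pooling, and the cut-off K are illustrative assumptions, not choices taken from the paper.

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

# Hypothetical stand-in: the paper evaluates SSL encoders in general; WavLM-base
# is used here only as an example checkpoint.
MODEL_NAME = "microsoft/wavlm-base"

extractor = AutoFeatureExtractor.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()


def layerwise_embeddings(waveform, sample_rate=16000):
    """Return one mean-pooled utterance embedding per hidden layer.

    `waveform` is a 1-D float array of raw audio at `sample_rate` Hz.
    hidden_states[0] is the CNN front-end output; entries 1..N come from the
    transformer layers, so each layer can be probed separately for detection.
    """
    inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return [h.mean(dim=1).squeeze(0) for h in outputs.hidden_states]


# To emulate the "few lower layers" setting, the transformer stack can simply be
# truncated before inference; K = 4 is a hypothetical cut-off, not a value
# reported in the paper.
K = 4
model.encoder.layers = model.encoder.layers[:K]
```

Each per-layer embedding can then be fed to a lightweight classifier to compare layers by EER; truncating the encoder as above is one simple way to realize the speed and compute savings the abstract points to.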
