

Don't freeze: Finetune encoders for better Self-Supervised HAR

Vitor Fortes Rey; Dominique Nshimyimana; Paul Lukowicz
In: Monica Tentori; Nadir Weibel; Kristof Van Laerhoven; Zhongyi Zhou (Eds.). Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing. International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp-2023), ACM, October 2023.


Recently, self-supervised learning (SSL) has been proposed in the field of human activity recognition (HAR) as a solution to the labelled-data availability problem. The idea is that pretext tasks such as reconstruction or contrastive predictive coding can learn useful representations that can then be used for classification. These approaches follow the pretrain, freeze, and fine-tune procedure. In this work we investigate how a simple change - not freezing the representation - leads to substantial performance gains across pretext tasks. The improvement was found in all four investigated datasets and across all four pretext tasks, and is inversely proportional to the amount of labelled data. Moreover, the effect is present whether the pretext task is carried out on the Capture24 dataset or directly on unlabelled data of the target dataset.
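The difference between the standard recipe and the proposed change comes down to whether the pretrained encoder's parameters receive gradients during fine-tuning. A minimal PyTorch sketch of the two regimes is below; the encoder and classifier architectures and sizes are placeholders, not the ones used in the paper:

```python
import torch.nn as nn

# Hypothetical stand-ins for a pretrained sensor encoder and a task head;
# the actual architectures in the paper differ.
encoder = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 32))
classifier = nn.Linear(32, 6)  # e.g. 6 activity classes

def trainable_params(freeze_encoder: bool):
    """Return the parameters the optimizer should update during fine-tuning."""
    # The "pretrain, freeze, fine-tune" recipe stops gradients in the encoder;
    # the paper's change is simply to leave requires_grad=True.
    for p in encoder.parameters():
        p.requires_grad = not freeze_encoder
    return [p for m in (encoder, classifier)
            for p in m.parameters() if p.requires_grad]

frozen = trainable_params(freeze_encoder=True)     # classifier weights only
unfrozen = trainable_params(freeze_encoder=False)  # encoder + classifier
```

Passing `frozen` or `unfrozen` to the optimizer (e.g. `torch.optim.Adam(unfrozen)`) then yields the two fine-tuning regimes compared in the paper.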

