FaceEat: Facial and Eating Activities Recognition with Inertial and Mechanomyography Fusion Using a Glasses-Based Design for Real-Time and on-the-Edge Inference

Hymalai Bello; Sungho Suh; Bo Zhou; Paul Lukowicz
In: Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing (UbiComp/ISWC '23 Adjunct). ACM International Symposium on Wearable Computers (ISWC-2023), October 8-12, Cancun, Mexico. ISBN 9798400702006, Association for Computing Machinery, 2023.


Facial expression recognition and eating-monitoring technologies can detect stress levels and the emotional triggers that lead to unhealthy eating behaviors. Wearables offer a ubiquitous solution that helps individuals develop coping mechanisms to manage stress and maintain a healthy lifestyle. We introduce FaceEat, a privacy-focused, real-time, on-the-edge (RTE) wearable solution with minimal power consumption (≤ 0.55 W) and a tiny memory footprint (11-19 KB), designed to recognize facial expressions and eating/drinking activities. At the heart of FaceEat are lightweight convolutional neural networks, which serve as the backbone models for both the facial and the eating scenarios. During the RTE evaluation, the system achieved an F1-score above 86% for facial expression recognition. Additionally, it achieved an F1-score of 90% for monitoring eating and drinking activities in the user-independent case, evaluated on a volunteer unseen during training.
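To illustrate the kind of model the abstract describes, the following is a minimal sketch of a lightweight 1D CNN over windowed inertial/mechanomyography channels. The channel count, window length, class count, and layer sizes are illustrative assumptions, not the architecture from the paper; the point is only that global pooling and narrow convolutions keep the parameter count small enough for a kilobyte-scale on-edge footprint.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Hypothetical lightweight 1D CNN for on-edge sensor classification.

    Assumed inputs: 10 sensor channels (e.g., IMU + MMG) over a
    128-sample window; 5 output classes. All sizes are illustrative.
    """

    def __init__(self, in_channels=10, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 8, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling keeps the head tiny
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        # x: (batch, channels, time) -> (batch, num_classes)
        return self.classifier(self.features(x).squeeze(-1))

model = TinyCNN()
params = sum(p.numel() for p in model.parameters())
out = model(torch.randn(2, 10, 128))
print(out.shape, params)  # ~1.1k parameters: a few KB even at float32
```

With roughly a thousand parameters, such a network fits comfortably in the tens-of-kilobytes memory budget the abstract reports, which is what makes on-microcontroller inference plausible.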
