

Multimodal Fusion Using Deep Learning Applied to Driver's Referencing of Outside-Vehicle Objects

Abdul Rafey Aftab; Michael von der Beeck; Steven Rohrhirsch; Benoit Diotte; Michael Feld
In: IEEE (Ed.). 2021 IEEE Intelligent Vehicles Symposium (IV), 32nd Intelligent Vehicles Symposium (IV-2021), July 11-17, Nagoya, Japan, Pages 1108-1115, ISBN 978-1-7281-5394-0, IEEE, 11/2021.


There is growing interest in more intelligent, natural user interaction with the car. Hand gestures and speech are already being applied to driver-car interaction, and multimodal approaches are showing promise in the automotive industry. In this paper, we use deep learning to build a multimodal fusion network for referencing objects outside the vehicle. We combine features from gaze, head pose, and finger pointing simultaneously to precisely predict the referenced objects under different car poses. We demonstrate the practical limitations of each individual modality when used for a natural form of referencing inside the car. As our results show, adding further modalities overcomes these modality-specific limitations to a large extent. This work highlights the importance of multimodal sensing, especially when moving towards natural user interaction. Furthermore, our user-based analysis reveals noteworthy differences in the recognition of user behavior depending on the vehicle pose.
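The fusion approach described in the abstract can be sketched as a simple early-fusion network: per-modality feature vectors are concatenated and passed through a small classifier that scores candidate outside-vehicle objects. The feature dimensions, layer sizes, object count, and weight initialization below are illustrative assumptions for this sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def fuse_modalities(gaze, head_pose, pointing, weights):
    """Early-fusion forward pass: concatenate the per-modality feature
    vectors and run them through a two-layer MLP that outputs a
    probability over N candidate referenced objects (hypothetical setup)."""
    x = np.concatenate([gaze, head_pose, pointing])   # joint feature vector
    h = relu(weights["W1"] @ x + weights["b1"])       # hidden representation
    logits = weights["W2"] @ h + weights["b2"]        # one score per object
    exp = np.exp(logits - logits.max())               # numerically stable softmax
    return exp / exp.sum()

# Illustrative dimensions: 3-D gaze direction, 3-D head pose,
# 3-D pointing direction, 16 hidden units, 5 candidate objects.
dims = {"inp": 9, "hid": 16, "out": 5}
weights = {
    "W1": rng.standard_normal((dims["hid"], dims["inp"])) * 0.1,
    "b1": np.zeros(dims["hid"]),
    "W2": rng.standard_normal((dims["out"], dims["hid"])) * 0.1,
    "b2": np.zeros(dims["out"]),
}

probs = fuse_modalities(rng.standard_normal(3),
                        rng.standard_normal(3),
                        rng.standard_normal(3),
                        weights)
print(probs)
```

With a softmax head, modalities that are unreliable in a given situation (e.g. gaze under strong vehicle motion) can be compensated by the others at the fusion layer, which is the intuition behind combining gaze, head pose, and pointing.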
