Article

Multimodal Input in the Car, Today and Tomorrow

Christian Müller; Garrett Weinberg
In: IEEE Multimedia, Vol. 18, No. 1, Pages 98-102, IEEE Computer Society, 1/2011.

Abstract

After a surge in horrific automobile accidents in which distracted driving was proven to be a factor, 38 US states have enacted texting-while-driving bans. While nearly everyone can agree that pecking out a love note on a tiny mobile phone keypad while simultaneously trying to operate a vehicle is a bad idea, what about the other activities that we perform on a day-to-day basis using the electronic devices either built into or brought into our cars? Finding a nearby restaurant acceptable to the vegetarian in the back seat? Locating and queuing up that new album you downloaded to your iPod? This article offers a brief overview of multimodal (speech, touch, gaze, etc.) input theory as it pertains to common in-vehicle tasks and devices. After a brief introduction, we walk through a sample multimodal interaction, detailing the steps involved and how the information necessary to the interaction can be obtained by combining input modes in various ways. We also discuss how contemporary in-vehicle systems take advantage of multimodality (or fail to do so), and how the capabilities of such systems might be broadened in the future via clever multimodal input mechanisms.
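To give a flavor of the kind of input combination the abstract alludes to, the following is a minimal, hypothetical sketch (not taken from the article) of late fusion between a spoken command and a touch event on a map display: the utterance supplies the intent and constraints, while a temporally close touch resolves the deictic reference ("there"). All class names, field names, and the time window are illustrative assumptions.

# Hypothetical sketch of late multimodal fusion: a spoken request plus a
# recent touch on the map are combined into one resolved query.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechInput:
    text: str          # recognized utterance
    timestamp: float   # seconds since session start

@dataclass
class TouchInput:
    lat: float
    lon: float
    timestamp: float

FUSION_WINDOW_S = 3.0  # assumed: inputs within 3 s belong to the same turn

def fuse(speech: SpeechInput, touch: Optional[TouchInput]) -> dict:
    """Speech contributes intent and category; a temporally close touch
    contributes the location referent for a deictic word like 'there'."""
    query = {"intent": "find_poi", "category": None, "location": None}
    if "vegetarian" in speech.text.lower():
        query["category"] = "vegetarian_restaurant"
    if touch and abs(speech.timestamp - touch.timestamp) <= FUSION_WINDOW_S:
        query["location"] = (touch.lat, touch.lon)
    return query

if __name__ == "__main__":
    s = SpeechInput("Find a vegetarian restaurant near there", timestamp=12.4)
    t = TouchInput(lat=49.2577, lon=7.0456, timestamp=11.8)
    print(fuse(s, t))
    # -> {'intent': 'find_poi', 'category': 'vegetarian_restaurant',
    #     'location': (49.2577, 7.0456)}

In a real in-vehicle system, the same principle extends to other modality pairs (for example, gaze plus speech), with the fusion window and referent resolution tuned to the interaction at hand.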
