@article{pub5439,
    abstract = {After a surge in horrific automobile accidents in which distracted driving was proven to be a factor, 38 US states have enacted texting-while-driving bans. While nearly everyone can agree that pecking out a love note on a tiny mobile phone keypad while simultaneously trying to operate a vehicle is a bad idea, what about the other activities that we perform on a day-to-day basis using the electronic devices either built in or brought in to our cars? Finding a nearby restaurant acceptable to the vegetarian in the back seat? Locating and queuing up that new album you downloaded to your iPod?

This article offers a brief overview of multimodal (speech, touch, gaze, etc.) input theory as it pertains to common in-vehicle tasks and devices. After a brief introduction, we walk through a sample multimodal interaction, detailing the steps involved and how information necessary to the interaction can be obtained by combining input modes in various ways. We also discuss how contemporary in-vehicle systems take advantage of multimodality (or fail to do so), and how the capabilities of such systems might be broadened in the future via clever multimodal input mechanisms.
},
    number = {1},
    month = {1},
    year = {2011},
    title = {Multimodal Input in the Car, Today and Tomorrow},
    journal = {IEEE Multimedia},
    volume = {18},
    pages = {98--102},
    publisher = {IEEE Computer Society},
    author = {Christian M{\"u}ller and Garrett Weinberg}
}