Multi-Modal Navigation through Spatial Information

Johannes Schöning, Antonio Krüger

In: T.J. Cova, H.J. Miller, K. Beard, A.U. Frank, M.F. Goodchild (Eds.): Proceedings of the 5th International Conference on Geographic Information Science (GIScience 2008), Extended Abstracts, September 23-26, Park City, UT, United States. Lecture Notes in Computer Science (LNCS) 5266, Springer, 2008.


We show how multi-touch hand gestures in combination with foot gestures can be used to perform typical basic spatial tasks within a Geographic Information System (GIS). The work is motivated by the high complexity of the user interfaces that common GIS usually present, which demands a high degree of expertise from their users. Recent developments in interactive surfaces that enable the construction of low-cost multi-touch displays, together with relatively cheap sensor technology for detecting foot gestures, allow a deep exploration of these input modalities for GIS users with medium or low expertise. Combining multi-touch hand and foot interaction has several advantages and also invites a rethinking of the roles of the dominant and non-dominant hand. In pure multi-touch systems, the non-dominant hand often sets the reference frame that determines the navigation mode, while the dominant hand carries out the precise task. Since one touch is then used only to define a mode, the advantages of multi-touch are not fully exploited. Foot gestures can provide continuous input for spatial navigation tasks (such as panning or tilting), which is more difficult to achieve with the hands in a natural way. In this paper we propose how to combine multi-touch hand gestures with a small set of foot gestures to improve the overall interaction with spatial data.
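The division of labour described above (mode selection by the non-dominant hand, continuous displacement by the feet) can be sketched as a small state machine. This is an illustrative sketch only: the names (`NavigationState`, `set_mode_from_touch`, `apply_foot_gesture`) and the one-finger/two-finger mode mapping are assumptions, not the system described in the paper.

```python
from dataclasses import dataclass

@dataclass
class NavigationState:
    pan_x: float = 0.0
    pan_y: float = 0.0
    tilt: float = 0.0
    mode: str = "pan"  # navigation mode, set by the non-dominant hand

def set_mode_from_touch(state: NavigationState, touch_count: int) -> None:
    # The non-dominant hand sets the reference frame / navigation mode.
    # Assumed mapping: one finger -> pan, two or more fingers -> tilt.
    state.mode = "tilt" if touch_count >= 2 else "pan"

def apply_foot_gesture(state: NavigationState, dx: float, dy: float, dt: float) -> None:
    # Foot gestures deliver continuous input over time dt; the active
    # mode decides whether displacement is interpreted as pan or tilt.
    if state.mode == "pan":
        state.pan_x += dx * dt
        state.pan_y += dy * dt
    else:
        state.tilt += dy * dt

state = NavigationState()
set_mode_from_touch(state, touch_count=1)
apply_foot_gesture(state, dx=2.0, dy=0.0, dt=0.5)   # continuous panning
set_mode_from_touch(state, touch_count=2)
apply_foot_gesture(state, dx=0.0, dy=4.0, dt=0.25)  # continuous tilting
print(state.mode, state.pan_x, state.tilt)  # -> tilt 1.0 1.0
```

The point of the sketch is that both touches of the non-dominant hand remain free for manipulation once the continuous navigation component is delegated to the feet.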


Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence)