Florian Daiber

Senior Researcher | florian.daiber@dfki.de

ABOUT

Florian Daiber

I am a post-doctoral researcher in the Cognitive Assistants department led by Prof. Dr. Antonio Krüger at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken, Germany. My main research interests are human-computer interaction, 3D user interfaces, and ubiquitous sports technologies. I coordinate several projects at DFKI and Saarland University. At DFKI, I currently work mainly on the IoTAssist, CAMELOT, and PORTAL projects. At Saarland University, I am mainly involved in the CPEC and medical trAIning projects.

I was a Marie Curie Early Stage Researcher at the School of Computing and Communications in Lancaster, UK, where I worked in the group of Prof. Hans Gellersen. Besides developing a post-doc research profile, I explored gaze-based interaction.

In 2015, I defended my doctoral thesis on "Interaction with Stereoscopic Data on and above Interactive Surfaces" at the Saarbrücken Graduate School of Computer Science. In 2008, I received a diploma in geoinformatics at the Institute for Geoinformatics, University of Münster, Germany.

I have experience organizing workshops and conferences, e.g. the CHI SIG "Touching the 3rd Dimension", and was also involved in the follow-up CHI workshop "The 3rd Dimension of CHI (3DCHI)" and the Dagstuhl Seminar "Touching the 3rd Dimension". I organized the Tutorial and Workshop on Interactive Surfaces for Interaction with Stereoscopic 3D (ISIS3D) at ITS 2013 and moderated the Spatial User Interaction Panel at SUI 2017. More recently, I co-organized the CHI 2020 Workshop on Everyday Proxy Objects for Virtual Reality (EPO4VR). In the context of my ubiquitous sports technology research, I organized the Tutorial on Wearable Computing in Sports at Mobile HCI 2017 and the UbiMount workshops at UbiComp 2016 and UbiComp 2017, and I co-organized the CHI 2017 SIG on Interactive Computing in Outdoor Recreation and the CHI 2018 Workshop on HCI in the Outdoors.

I was Student Volunteer Chair at ITS 2010, Web and Social Media Chair at the ACM Symposium on Spatial User Interaction (SUI) 2014, the ACM Symposium on User Interface Software and Technology (UIST), and SUI 2018, Publication Chair at UbiComp 2016, Poster Chair at SUI 2017, and a PC member at CHI PLAY 2017, CHI PLAY 2018, CHI PLAY 2019, ITS 2015, IUI 2015, IUI 2016, IUI 2017, MuC 2019, MUM 2017, MUM 2018, MUM 2019, SUI 2019, VR 2018, VR 2019, and VR 2020. Currently, I serve as a PC member at CHI PLAY 2020 and MuC 2020, and as a Review Editor at Frontiers in Virtual Reality.

EDUCATION

Interaction with Stereoscopic Data on and above Multi-touch Surfaces
This doctoral thesis project evaluated multi-touch and gestural 3D interaction on and above interactive surfaces and explored the design space of interaction with stereoscopic data.
Saarbrücken Graduate School of Computer Science

MAY 2015

Gestural Multi-touch Interaction with Virtual Globes
Diploma in Geoinformatics
University of Münster

JULY 2008



PROJECTS

PORTAL
Plant breeding using robotics and AI for advanced data analysis and decision making in virtual spaces

The PORTAL project will extend and bring together state-of-the-art technologies from the fields of Artificial Intelligence (AI), Robotics, and Virtual Reality (VR) to create a significant, disruptive advancement for plant breeding as an important part of agriculture. This will enable plant breeders, for the first time, to revisit and inspect field plots independently of time and space in a virtual and augmented environment: the virtual plant breeding nursery.

since February 2021

CAMELOT
Continuous Adaptive Machine-Learning of Transfer of Control Situations

CAMELOT is the follow-up project to TRACTAT and builds on its results by looking at the Transfer of Control (ToC) task from the perspective of self-learning systems and multimodal human-machine interaction. Machine learning models help the system recognize and classify situations. The models are adaptive in multiple ways: they can be improved by passive observation and by actively being taught by the user to deal with new situations. Multimodality plays a role on the one hand as a source for recognizing user behaviour in response to the system, and on the other hand for natural communication between system and user in case of a transfer of control. By using new methods that combine symbolic and sub-symbolic learning, not only are explainability and extensibility ensured, but the overall recognition performance is also improved compared to the state of the art.
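
How a model can be adaptive in both of these ways can be pictured with a small toy example. The sketch below is my own illustration, not the project's actual models: a nearest-centroid classifier whose `observe` method folds self-labelled samples back in (passive adaptation) and whose `teach` method lets the user name a new or misclassified situation (active adaptation). All labels and feature values are invented.

```python
import numpy as np

class AdaptiveClassifier:
    """Toy situation classifier: one running-mean centroid per situation."""

    def __init__(self):
        self.centroids = {}   # situation label -> mean feature vector
        self.counts = {}

    def _update(self, label, x):
        n = self.counts.get(label, 0)
        c = self.centroids.get(label, np.zeros_like(x))
        self.centroids[label] = (c * n + x) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, x):
        return min(self.centroids, key=lambda l: np.linalg.norm(self.centroids[l] - x))

    def observe(self, x):
        """Passive adaptation: fold a self-labelled observation back in."""
        self._update(self.predict(x), x)

    def teach(self, x, label):
        """Active adaptation: the user labels a new or misclassified situation."""
        self._update(label, x)

clf = AdaptiveClassifier()
clf.teach(np.array([0.0, 0.0]), "routine")
clf.teach(np.array([5.0, 5.0]), "handover-needed")
clf.observe(np.array([4.5, 5.5]))           # refined without user involvement
print(clf.predict(np.array([4.0, 4.8])))    # -> handover-needed
```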

since September 2020

IoTAssist
Development of an end-user platform for assistance services with interoperable IoT devices and wearable sensors

The main goal of the project is the development of a platform that enables interoperability between wearables and IoT devices and services. Based on this platform, the project aims to allow the simple and intuitive development of intelligent assistants for health and wellbeing. The use of individualized, "bottom-up" gamification to motivate fitness goals will be investigated; this approach enables the user to customize the gamification at runtime to her individual needs.
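
One way to read "bottom-up" gamification is as user-composed rules over wearable and IoT events, assembled and changed while the system is running. The following sketch illustrates that reading only; the event format, rule names, and point values are invented and not part of the IoTAssist platform.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # predicate over a sensor event
    points: int

# A designer-provided default rule ...
rules = [Rule("morning walk", lambda e: e["type"] == "steps" and e["value"] >= 2000, 10)]

def add_user_rule(name, condition, points):
    """Runtime customization: the user adds her own rule, no redeploy needed."""
    rules.append(Rule(name, condition, points))

def score(event):
    """Award points from every rule whose condition matches the event."""
    return sum(r.points for r in rules if r.condition(event))

# ... and a user-defined one added at runtime.
add_user_rule("hydration", lambda e: e["type"] == "water_ml" and e["value"] >= 250, 5)
print(score({"type": "water_ml", "value": 300}))   # -> 5
```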

since March 2020

TRACTAT
Transfer of Control between Autonomous Agents

Autonomous systems, like self-driving cars and collaborative robots, must occasionally ask people around them for help in anomalous situations. They form a new generation of interaction platforms that provide a comprehensive multimodal presentation of the current situation in real-time, so that a smooth Transfer of Control (ToC) to human agents is guaranteed. Several scientific questions are associated with the ToC, including what should cause a ToC, when and how another agent should be notified, and how to manage many of these situations. In this project, we will investigate these challenges using Artificial Intelligence (AI).

since September 2017

EIT Digital Smart Retail

EIT Digital Smart Retail is a High Impact Initiative of EIT Digital aimed predominantly at creating solutions for 'blended retail', based on the Digital Retail Suite (DRS). It builds on omnichannel data and explores real-time data about consumer purchases and interests collected from different channels. Smart Retail provides a seamless shopping experience anywhere, according to customers' needs. Omnichannel data is an important source of Smart Retail opportunities, as customers are becoming omnichannel in their thinking and behavior and expect a seamless experience in store as well as anywhere online. The benefits of this project are cloud-based services, in-store analytics, smart customer interfacing, tailored shopping experiences, more efficient choices, and increased sales.

JAN 2016 - DEC 2016

EIT ALIGRE
Affective lighting for novel grocery retail experiences

Today's grocery retail stores are typically lit with homogeneous ambient lighting of a single color temperature, although it is well known in lighting design that variations in brightness and color temperature create a more immersive experience, better guide people's attention, and enhance perception. With the latest developments in LED technology and controls, it has now become affordable for retailers to differentiate the lighting conditions for the various zones in a supermarket (e.g. for product segments like wine and health products, but also to fit zones to themes like Easter or Christmas). Moreover, zones with semantic lighting technology offer novel ways of interacting with products and smartphones.

The ALIGRE project will test and validate the effect of the new lighting solutions on the shopping experience using highly advanced sensor and data analytics tools, thereby creating the quantitative proof points needed to commercialize the propositions. The benefits are supported by real-life test installations and an accompanying user study in an operational grocery store.

JAN 2016 - DEC 2016

T3D
Touching the 3rd Dimension

Two technologies have dominated recent tech exhibitions as well as the entertainment market: multi-touch surfaces and 3D stereoscopic displays. Currently, these promising technologies are being combined in different setups, and the first commercial systems that support (multi-)touch interaction as well as stereoscopic display are available. Recent research projects address technological questions of how users interact with stereoscopically displayed three-dimensional content on a two-dimensional touch surface. The approach of combining multi-touch surfaces and 3D stereoscopic displays has great potential to provide plausible as well as natural interaction for a wide range of applications, e.g. in entertainment, planning and design, education, and decision-making. It can also be applied to different user interface systems, ranging from 3D desktop environments to more immersive collaborative large tabletop or other projection-based setups.

Although stereoscopic multi-touch enabled surfaces induce several perceptual conflicts, e.g. visual-haptic or accommodation-vergence conflicts, it is reasonable to expect that they will dominate future user interfaces in various settings due to their potential as well as their attractiveness for human users. So far, most approaches have not taken the mentioned perceptual conflicts into account, and they are mostly limited in their focus to the actual moment of touch (i.e. when the finger touches the surface), whereas the essential time period before the touch is rarely considered. In the case of stereoscopic display, these moments are particularly important since most virtual objects are rendered not on the surface, but in front of or behind it. Hence, touching a virtual object and touching the physical surface usually occur at different moments during the interaction. The benefits, challenges, and limitations of this combination have not been examined in depth and are so far not well understood.

The project Touching the 3rd Dimension (T3D) therefore aims to address these questions by analyzing the perceptual aspects during the lifetime of a touch, i.e. the pre-touch as well as the actual touch phase. On the one hand, we intend to design and evaluate different interaction concepts for stereoscopic multi-touch enabled surfaces based on the perceptual limitations of the user; on the other hand, we will exploit our setup to gain novel insights into the nature of touch and perception in the real world. In addition, we will explore potential application areas, in particular 3D modeling in the domains of city modeling and computer-aided design (CAD).

JULY 2013 - 2016

Nuance-Project
Multi-modal interaction with distant objects using eye gaze and multi-touch input

Tabletop interaction with objects in and out of reach is a common task in the real world as well as in virtual environments. Gaze as an additional input mode might support these interactions in terms of search, selection, and manipulation of objects on a digital tabletop. The aim of this work is the design and evaluation of interaction techniques that rely on gaze and gestural multi-touch input; in particular, the selection and manipulation of distant objects will be investigated. This approach allows interaction with different kinds of distant objects: objects out of physical reach are easily made available to the user without forcing her into extreme and exhausting body movements. We aim to investigate the performance and accuracy of combined selection and manipulation using multimodal input, i.e. explicit manipulation of implicitly selected objects. Through our multimodal approach we expect an improvement in terms of accuracy and task completion time.
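
A minimal sketch of this interplay, assuming a simple event flow in which gaze implicitly selects the object being looked at and a touch drag then explicitly manipulates it (object names, coordinates, and the API are invented for illustration):

```python
import math

objects = {"photo": [1.20, 0.80], "map": [0.10, 0.15]}   # tabletop positions (m)

def gaze_select(gaze_xy):
    """Implicit selection: pick the object closest to the current gaze point."""
    return min(objects, key=lambda name: math.dist(objects[name], gaze_xy))

def touch_drag(selected, dx, dy):
    """Explicit manipulation: apply the touch delta to the gaze-selected object,
    not to whatever happens to be under the finger."""
    objects[selected][0] += dx
    objects[selected][1] += dy

target = gaze_select((1.15, 0.85))   # the user looks at the distant photo
touch_drag(target, -0.5, -0.3)       # and drags anywhere within arm's reach
print(target, objects[target])       # the photo has moved into reach
```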

JULY 2012 - JULY 2013

iMUTS
Interscopic Multi-touch Surfaces

In recent years, visualization of and interaction with three-dimensional data have become more and more popular and widespread due to the requirements of numerous application areas. Two-dimensional desktop systems are often limited in cases where natural and intuitive interfaces are desired. Sophisticated 3D user interfaces, as provided by virtual reality (VR) systems consisting of stereoscopic projection and tracked input devices, are rarely adopted by ordinary users or even by experts. Since most applications dealing with three-dimensional data still use traditional 2D GUIs, current user interface designs obviously lack adequate 3D features and user support.

Multi-touch interaction has received considerable attention in the last few years, in particular for non-immersive, natural 2D interaction. Some multi-touch devices even support three degrees of freedom (DoF) in terms of 2D position on the surface and varying levels of pressure. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next generation of 2D and 3D user interfaces. Stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations for multi-touch interaction in this context have not been considered. In this project we aim to develop interscopic multi-touch user interfaces: an interscopic multi-touch surface (iMUTS) will allow users to interact intuitively with stereoscopically displayed 3D objects as well as with usually monoscopically displayed 2D content.

JANUARY 2010 - DECEMBER 2012

SoKNOS
Service-orientierte ArchiteKturen zur Unterstützung von Netzwerken im Rahmen Oeffentlicher Sicherheit (Service-Oriented ArchiteCtures Supporting Networks of Public Security)

The SoKNOS research project aimed to develop concepts that are valuable in the support of governmental agencies, private companies, and other organizations active in the handling of disastrous events in the public security sector. SoKNOS was funded by the Federal Ministry of Education and Research within the security research program of the German federal government.

SoKNOS developed data-based solutions that particularly shorten the structuring phase, i.e., the phase immediately after the occurrence of a disaster. SoKNOS aimed to support cross-organizational collaboration, in real-time and at all levels, between local, regional, national, and international organizations.

JULY 2008 - DECEMBER 2009


SELECTED PUBLICATIONS

The Space Bender: Supporting Natural Walking via Overt Manipulation of the Virtual Environment

Adalberto Simeone, Niels Christian Nilsson, André Zenner, Marco Speicher, and Florian Daiber
In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR '20). IEEE, 2020.
VR, Locomotion, Natural Walking Techniques

The Space Bender is a natural walking technique for room-scale VR. It builds on the idea of overtly manipulating the virtual environment by "bending" its geometry whenever the user comes into proximity of a physical boundary. We compared the Space Bender to two other similarly situated techniques, Stop and Reset and Teleportation, in a task requiring participants to traverse a 100 m path. Results show that the Space Bender was significantly faster than Stop and Reset and preferred over the Teleportation technique, highlighting the potential of overt manipulation to facilitate natural walking.
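
As a rough sketch of the core idea, bending can be driven by the tracked distance to the nearest physical boundary: the closer the user gets, the more the virtual geometry is bent away from the wall. The onset distance and maximum angle below are invented for illustration and are not the paper's parameters.

```python
BEND_ONSET_M = 1.0     # start bending 1 m from the boundary (assumed)
MAX_BEND_DEG = 90.0    # fully bent at the boundary itself (assumed)

def bend_angle(distance_to_boundary_m):
    """Bend strength for the virtual environment, from tracked distance (m)."""
    closeness = max(0.0, min(1.0, 1.0 - distance_to_boundary_m / BEND_ONSET_M))
    return MAX_BEND_DEG * closeness

for d in (2.0, 0.8, 0.4, 0.0):
    print(f"{d:.1f} m from boundary -> bend {bend_angle(d):.0f} deg")
```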


LIVE: the Human Role while Learning in an Immersive Virtual Environment

Adalberto L. Simeone, Marco Speicher, Andreea Molnar, Adriana Wilde, and Florian Daiber
In Proceedings of the Symposium on Spatial User Interaction (SUI '19). ACM, 2019.

This work studies the role of a human instructor within an immersive VR lesson. Our system allows the instructor to perform "contact teaching" by demonstrating concepts through interaction with the environment, and the student to experiment with interaction prompts. We conducted a between-subjects user study with two groups of students: one experienced the VR lesson while immersed together with an instructor; the other experienced the same contents demonstrated through animation sequences simulating the actions that the instructor would take. Results show that the Two-User version received significantly higher scores than the Single-User version in terms of overall preference, clarity, and helpfulness of the explanations. When immersed together with an instructor, users were more inclined to engage and progress further with the interaction prompts, than when the instructor was absent. Based on the analysis of videos and interviews, we identified design recommendations for future immersive VR educational experiences.


Slackliner - An Interactive Slackline Training Assistant

Felix Kosmalla, Christian Murlowski, Florian Daiber, and Antonio Krüger
In Proceedings of the 26th ACM international conference on Multimedia (MM '18). ACM, 2018.
Slackline, sports technologies, projection, real-time feedback

In this paper we present Slackliner, an interactive slackline training assistant which features a life-size projection, skeleton tracking, and real-time feedback. As in other sports, proper training leads to a faster buildup of skill and lessens the risk of injuries. We chose a set of exercises from the slackline literature and implemented an interactive trainer which guides the user through the exercises and gives feedback on whether each exercise was executed correctly. After the session, an analysis gives the user feedback about her performance. We conducted a user study comparing the interactive slackline training system with a classic approach using a personal trainer. No significant difference was found between the groups regarding balancing time, number of steps, and walking distance on the line for the left and right foot; significant main effects were found for balancing time on the line independent of group. User feedback acquired through questionnaires and semi-structured interviews was very positive. Overall, the results indicate that the interactive slackline training system can be used as an enjoyable and effective alternative to classic training methods.


Error-aware gaze-based interfaces for robust mobile gaze interaction

Best Paper Award

Michael Barz, Florian Daiber, Daniel Sonntag, and Andreas Bulling
In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications (ETRA '18). ACM, 2018.
Eye Tracking, Mobile Interaction, Gaze Interaction, Error Model, Error-Aware

Gaze estimation error can severely hamper usability and performance of mobile gaze-based interfaces given that the error varies constantly for different interaction positions. In this work, we explore error-aware gaze-based interfaces that estimate and adapt to gaze estimation error on-the-fly. We implement a sample error-aware user interface for gaze-based selection and different error compensation methods: a naïve approach that increases component size directly proportional to the absolute error, a recent model by Feit et al. that is based on the two-dimensional error distribution, and a novel predictive model that shifts gaze by a directional error estimate. We evaluate these models in a 12-participant user study and show that our predictive model significantly outperforms the others in terms of selection rate, particularly for small gaze targets. These results underline both the feasibility and potential of next generation error-aware gaze-based user interfaces.
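
As a hedged illustration of the error-aware idea (not the paper's implementation), a simple linear model can stand in for the published predictive model: it learns a position-dependent 2D error from calibration pairs and shifts each raw gaze sample by the predicted error before hit-testing. All data values are synthetic.

```python
import numpy as np

def fit_error_model(measured, true):
    """Least-squares fit of error = [x, y, 1] @ A from calibration pairs."""
    X = np.hstack([measured, np.ones((len(measured), 1))])   # (n, 3)
    E = true - measured                                      # (n, 2) error vectors
    A, *_ = np.linalg.lstsq(X, E, rcond=None)                # (3, 2)
    return A

def correct(gaze, A):
    """Shift a raw gaze sample by its predicted directional error."""
    return gaze + np.array([*gaze, 1.0]) @ A

# Synthetic calibration: on-screen targets vs. tracker output with a
# position-dependent offset (values invented for illustration).
targets = np.array([[100, 100], [900, 100], [100, 700], [900, 700], [500, 400]], float)
measured = targets + 0.01 * targets + np.array([10.0, -5.0])

A = fit_error_model(measured, targets)
print(correct(measured[-1], A))   # ≈ [500. 400.], the intended target
```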


FootStriker: An EMS-based Foot Strike Assistant for Running

Mahmoud Hassan, Florian Daiber, Frederik Wiehr, Felix Kosmalla, and Antonio Krüger
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 1, Article 2 (March 2017), 18 pages.
Electrical Muscle Stimulation, Wearable Devices, Wearables, Real-time Feedback, Motor Skills, Motor Learning, Sports Training, Running, In-situ Feedback, Online Feedback, Real-time Assistance

In running, knee-related injuries are very common. The main cause is the high impact force when striking the ground with the heel first. Mid- or forefoot running is generally known to reduce impact loads and to be a more efficient running style. In this paper, we introduce a wearable running assistant, consisting of an electrical muscle stimulation (EMS) device and an insole with force sensing resistors. It detects heel striking and actuates the calf muscles during the flight phase to control the foot angle before landing. We conducted a user study in which we compared the classical coaching approach, using slow motion video analysis as terminal feedback, to our proposed real-time EMS feedback. The results show that EMS actuation significantly outperforms traditional coaching, i.e. it decreased the average heel striking rate when using the system. As an implication, EMS feedback can generally be beneficial for the motor learning of complex, repetitive movements.
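
The sense-and-actuate loop can be sketched as a small state machine over insole readings: detect a heel strike on ground contact, then trigger EMS in the following flight phase so the forefoot is raised before landing. The thresholds, sensor layout, and EMS call below are assumptions for illustration, not the published system's values.

```python
HEEL_THRESHOLD = 0.6    # normalized FSR reading counted as heel loading (assumed)
FLIGHT_THRESHOLD = 0.1  # below this on all sensors we assume the flight phase (assumed)

def classify_strike(heel, forefoot):
    """Label a ground contact by which insole region carries the load."""
    return "heel" if heel > HEEL_THRESHOLD and heel > forefoot else "fore/midfoot"

def run_gait_loop(samples, trigger_ems):
    """Iterate over (heel, forefoot) force samples; after a heel strike,
    actuate the calf muscles during the next flight phase."""
    pending_correction = False
    for heel, forefoot in samples:
        in_flight = heel < FLIGHT_THRESHOLD and forefoot < FLIGHT_THRESHOLD
        if in_flight and pending_correction:
            trigger_ems()              # raise the foot angle before landing
            pending_correction = False
        elif not in_flight and classify_strike(heel, forefoot) == "heel":
            pending_correction = True

run_gait_loop(
    [(0.9, 0.2), (0.05, 0.02), (0.3, 0.8), (0.04, 0.03)],
    trigger_ems=lambda: print("EMS pulse to calf"),
)
```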


Virtual Reality Climbing: Exploring Rock Climbing in Mixed Reality Environments

Felix Kosmalla, André Zenner, Marco Speicher, Florian Daiber, Nico Herbig, and Antonio Krüger
In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '17). ACM, 2017.
Passive Haptic Feedback, Rock Climbing, Mixed Reality, Virtual Reality

While current consumer virtual reality headsets can convey a strong feeling of immersion, one drawback is still the missing haptic feedback when interacting with virtual objects. In this work, we investigate the use of an artificial climbing wall as a haptic feedback device in a virtual rock climbing environment. It enables users to wear a head-mounted display and actually climb on the physical climbing wall, which conveys the feeling of climbing on a large mountain face.


ClimbSense - Automatic Climbing Route Recognition using Wrist-worn Inertia Measurement Units

Felix Kosmalla, Florian Daiber, and Antonio Krüger
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '15). ACM, 2015.
Climbing, Sports Technologies, Inertial Sensors, Machine Learning

Today, sports and activity trackers are ubiquitous. Runners and cyclists in particular have a variety of possibilities to record and analyze their workouts. In contrast, climbing has not received much attention in consumer electronics and human-computer interaction. If quantified data similar to cycling or running data were available for climbing, several applications would be possible, ranging from simple training diaries to virtual coaches or usage analytics for gym operators. This paper introduces a system that automatically recognizes climbed routes using wrist-worn inertia measurement units (IMUs). This is achieved by extracting features from a recorded ascent and using them as training data for the recognition system. To verify the recognition system, cross-validation methods were applied to a set of ascent recordings collected during a user study with eight climbers in a local climbing gym. The evaluation resulted in a high recognition rate, demonstrating that our approach is feasible and operational.
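
A much-reduced sketch of such a recognition pipeline, assuming each ascent is summarized by simple per-axis statistics and matched to the nearest known route; the paper's actual features and recognizer are richer, and all data here is synthetic.

```python
import numpy as np

def extract_features(imu):
    """imu: (n_samples, 6) accel+gyro stream -> fixed-length feature vector."""
    return np.concatenate([imu.mean(axis=0), imu.std(axis=0)])

def recognize(ascent, training):
    """Nearest-neighbour match of an ascent against labelled route recordings."""
    feats = extract_features(ascent)
    distances = {route: np.linalg.norm(feats - extract_features(recording))
                 for route, recording in training.items()}
    return min(distances, key=distances.get)

rng = np.random.default_rng(0)
training = {f"route_{i}": rng.normal(i, 1.0, (500, 6)) for i in range(3)}
query = rng.normal(1, 1.0, (500, 6))    # synthetic stand-in for a real IMU log
print(recognize(query, training))        # -> route_1
```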


Hoverspace

Paul Lubos, Oscar Ariza, Gerd Bruder, Florian Daiber, Frank Steinicke, and Antonio Krüger
In Julio Abascal, Simone Barbosa, Mirko Fetter, Tom Gross, Philippe Palanque, and Marco Winckler (Eds.), Human-Computer Interaction – INTERACT 2015. Pages 259-277, Lecture Notes in Computer Science (LNCS), Vol. 9298, ISBN 978-3-319-22697-2, Springer International Publishing, 2015.
Hover Space, Touch Interaction, Stereoscopic Displays, 3D Interaction

Recent developments in the area of stereoscopic displays and tracking technologies have paved the way to combining touch interaction on interactive surfaces with spatial interaction above the surface of a stereoscopic display. This holistic design space supports novel affordances and user experiences during touch interaction, but also induces challenges for interaction design. In this paper we introduce the concept of hover interaction for such setups. We analyze the non-visual volume above a virtual object that is perceived as the corresponding hover space for that object. The results show that users' perceptions of hover spaces fall into two groups: users assume that the shape of the hover space is extruded and scaled either towards their head or along the normal vector of the interactive surface. We provide a corresponding model to determine the shapes of these hover spaces and confirm the findings in a practical application. Finally, we discuss important implications for the development of future touch-sensitive interfaces.
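
A geometric sketch of the head-scaled reading of this model (my own simplification, with invented coordinates): a fingertip lies in an object's hover space if the line from the head through the fingertip hits the object's footprint on the surface.

```python
import numpy as np

def in_hover_space(head, finger, obj_center, obj_radius):
    """All points are (x, y, z); the interactive surface is the z = 0 plane."""
    direction = finger - head
    if abs(direction[2]) < 1e-9:
        return False                 # sight line parallel to the surface
    t = -head[2] / direction[2]      # parameter where the line crosses z = 0
    if t <= 0:
        return False                 # the surface is behind the head
    hit = head + t * direction
    return np.linalg.norm(hit[:2] - obj_center[:2]) <= obj_radius

head = np.array([0.0, -0.3, 0.6])    # eyes above and behind the tabletop edge
obj = np.array([0.0, 0.2, 0.0])      # virtual object shown at the surface
finger = np.array([0.0, 0.0, 0.25])  # fingertip hovering above the surface
print(in_hover_space(head, finger, obj, 0.08))   # -> True
```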


Interacting with 3D Content on Stereoscopic Displays

Florian Daiber, Marco Speicher, Sven Gehring, Markus Löchtefeld, and Antonio Krüger
In Proceedings of the International Symposium on Pervasive Displays. Pages 32:32-32:37, ACM, 2014.
Spatial Interaction, Gestural Interaction, Mobile Interaction, 3D Travel, Large Displays, Media Facades

With the growing number of pervasive displays in urban environments, recent advances in technology allow three-dimensional (3D) content to be shown on these displays. However, current input techniques for pervasive displays usually focus on interaction with 2D data. To enable interaction with 3D content on pervasive displays, we need to adapt existing interaction techniques and create novel ones. In this paper we investigate remote interaction with 3D content on pervasive displays. We introduce and evaluate four 3D travel techniques that rely on well-established interaction metaphors and use either a mobile device or depth tracking as spatial input. Our study on a large-scale stereoscopic display shows that the physical travel techniques outperformed the virtual techniques with respect to task completion time and error rate.


Is Autostereoscopy Useful for Handheld AR?

Best Paper Nominee

Frederic Kerber, Pascal Lessel, Michael Mauderer, Florian Daiber, Antti Oulasvirta, and Antonio Krüger
In Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia. ACM, 2013.
Autostereoscopy, mobile devices, depth discrimination, empirical and quantitative user study, augmented reality

Some recent mobile devices have autostereoscopic displays that enable users to perceive stereoscopic 3D without lenses or filters. This might be used to improve depth discrimination of objects overlaid on a camera viewfinder in augmented reality (AR). However, it is not known whether autostereoscopy is useful in the viewing conditions typical of mobile AR. This paper investigates the use of autostereoscopic displays in a psychophysical experiment with twelve participants using a state-of-the-art commercial device. The main finding is that stereoscopy has a negligible, if any, effect on a small screen, even in favorable viewing conditions. Instead, the traditional depth cues, in particular object size, drive depth discrimination.


Interactive surfaces for interaction with stereoscopic 3d

Interactive Surfaces for Interaction with Stereoscopic 3D (ISIS3D): Tutorial and Workshop at ITS 2013

Florian Daiber, Bruno Rodrigues De Araujo, Frank Steinicke, and Wolfgang Stuerzlinger
In Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces. Pages 483-486, ACM, 2013.
Stereoscopic Displays, 3D User Interfaces and Interaction, Touch- and Gesture-based Interfaces, Adaptive and Perception-inspired Interfaces, Psychophysiological Studies related to Stereoscopy

With the increasing distribution of multi-touch capable devices, multi-touch interaction becomes more and more ubiquitous. Multi-touch interaction offers new ways to deal with 3D data, allowing a high degree of freedom (DOF) without instrumenting the user. Due to the advances in 3D technologies, designing for 3D interaction is now more relevant than ever. With more powerful engines and high-resolution screens, mobile devices can also run advanced 3D graphics, 3D UIs are emerging beyond the game industry, and recently the first prototypes as well as commercial systems bringing (auto-)stereoscopic display to touch-sensitive surfaces have been proposed. With the Tutorial and Workshop on "Interactive Surfaces for Interaction with Stereoscopic 3D (ISIS3D)" we aim to provide an interactive forum that focuses on the challenges that appear when the flat digital world of surface computing meets the curved, physical, 3D space we live in.


Designing Gestures for Mobile 3D Gaming

Florian Daiber, Lianchao Li, and Antonio Krüger
In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia. ACM, 2012.
3D User Interfaces, Gestural Interaction, Mobile Interaction, Mobile Gaming, Stereoscopic Display

In recent years, 3D has become more and more popular. Besides the increasing number of movies for 3D stereoscopic cinemas and television, serious steps have also been taken in the field of 3D gaming. Games with stereoscopic 3D output are now available not only for gamers with high-end PCs but also on handheld devices equipped with autostereoscopic 3D displays. Recent smartphone technology has powerful processors that allow complex tasks like image processing, e.g. as used in augmented reality applications. Moreover, these devices are nowadays equipped with various sensors that allow additional input modalities far beyond joystick, mouse, keyboard, and other traditional input methods. In this paper we propose an approach for sensor-based interaction with stereoscopically displayed 3D data on mobile devices and present a mobile 3D game that makes use of these concepts.


Balloon Selection revisited - Multi-touch Selection Techniques for Stereoscopic Data

Florian Daiber, Eric Falk, and Antonio Krüger
In Proceedings of the International Conference on Advanced Visual Interfaces. Pages 441-444, ACM, 2012.
3D User Interfaces, Gestural Interaction, Selection Techniques, Stereoscopic Display




CONTACT

Email
florian.daiber@dfki.de

Address
Ubiquitous Media Technology Lab
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI)
Stuhlsatzenhausweg 3, D-66123 Saarbrücken
Campus D3_2, Room 1.81

Phone
+49(0)681 85775 5115
