
Project

KOALA

Context-Adaptations for E-Learning Platforms by Analyzing the User's Cognitive State


E-learning platforms allow users to learn at their own pace, at any time and in any place. Constant mobile internet access and the ubiquity of smartphones nowadays even enable ubiquitous learning, e.g. on buses and trains, or during sports by listening to audio content. The variety of media formats and content offered by these platforms lets users switch between them and follow course content in a different order instead of adhering to a rigid linear sequence. However, there is currently hardly any automated adaptation to the user's learning situation. Existing context adaptations are usually statically defined and consider only a few aspects: for example, the interface adapts to the end device, the learning content is matched to a predefined learning objective, or alternative resources and forums are suggested when a user performs poorly in tests.

In this project, methods are to be developed that adapt the learning experience specifically to the user's situation and thus contribute to better learning outcomes. A novelty with respect to context is the multitude of states taken into account. In addition to common context information, advanced features such as the movement mode (at rest, walking, running, in a car) and the ambient noise are used. Moreover, sensor values that provide information about the user's cognitive state are integrated: the user's visual attention is to be determined by eye tracking, while cognitive load and perceived stress are to be estimated from a combination of biometric wearable sensors and a front camera integrated into the device. The body-worn sensors provide important signals such as heart rate variability and skin conductance, while the cameras are used to detect emotional expressions (using existing neural networks) and the blink frequency. Furthermore, the integration of a dedicated eye tracker for a finer analysis of eye movements is conceivable.

While this multitude of sensors records data, the user's productivity in the situations described by this data is analyzed. This can be done using simple metrics such as the time required for reading modules, jumping back and forth in videos, or performance in automated tests. Based on the collected data on learning productivity in different situations, machine learning is then used to train a model that suggests optimal learning content for a given situation, i.e. adapts the media format and the level of abstraction.
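As an illustration of the sensing step, the sketch below shows how readings from one observation window might be aggregated into a context feature vector. It assumes NumPy; RMSSD is a standard heart-rate-variability metric, but every function and feature name here is hypothetical rather than taken from the project itself.

```python
import numpy as np

def rmssd(ibi_ms):
    """Heart rate variability as RMSSD over inter-beat intervals (in ms)."""
    diffs = np.diff(np.asarray(ibi_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def build_features(ibi_ms, skin_conductance_us, blink_count,
                   window_s, movement_mode, noise_db):
    """Aggregate one observation window into a flat context feature vector."""
    modes = ["resting", "walking", "running", "in_car"]  # movement modes from the text
    one_hot = [1.0 if movement_mode == m else 0.0 for m in modes]
    blink_rate = blink_count / (window_s / 60.0)         # blinks per minute
    return np.array([
        rmssd(ibi_ms),                        # HRV from the wearable
        float(np.mean(skin_conductance_us)),  # mean skin conductance (µS)
        blink_rate,                           # blink frequency from the front camera
        float(noise_db),                      # ambient noise level
        *one_hot,                             # movement mode, one-hot encoded
    ])

# Example: a five-minute window recorded on a bus ride.
features = build_features(ibi_ms=[810, 790, 830, 805, 795],
                          skin_conductance_us=[4.2, 4.5, 4.1],
                          blink_count=95, window_s=300,
                          movement_mode="in_car", noise_db=74)
```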

For example, in stressful situations with little attention, e.g. on public transport, more application-oriented videos with a high level of abstraction could be suggested, while at a desk with good attention and little stress, complex specialist articles could be offered for reading.
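To make the modeling step concrete, here is a toy end-to-end sketch, assuming scikit-learn: a classifier learns to map context feature vectors (eight-dimensional, as built above) to the media format that proved most productive in similar situations. All data and format labels here are synthetic and illustrative; in the project, the labels would come from the productivity metrics described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FORMATS = ["applied_video", "audio_summary", "specialist_article"]

# Synthetic stand-in for logged data: context feature vectors as produced
# by build_features() above, each labeled with the media format that was
# most productive in that situation.
X = rng.normal(size=(200, 8))
y = rng.choice(FORMATS, size=200)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Recommend formats for two new situations, e.g. a noisy bus ride and a
# quiet desk session (feature vectors again synthetic).
bus_ride, desk = rng.normal(size=(2, 8))
print(model.predict([bus_ride, desk]))
```

In the project itself, such a model would also adapt the level of abstraction, not just the format, and it could be retrained as new productivity measurements accumulate.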

Partners

Holtzbrinck Publishing Group

Sponsors

BMBF - Federal Ministry of Education and Research

01IS17043

Publications about the project

Nico Herbig; Tim Düwel; Mossad Helali; Lea Eckhart; Patrick Schuck; Subhabrata Choudhury; Antonio Krüger

In: Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization. International Conference on User Modeling, Adaptation, and Personalization (UMAP-2020), July 12-18, Genoa, Italy, Pages 88-97, ISBN 9781450368612, Association for Computing Machinery, New York, NY, USA, 7/2020.


Nico Herbig; Patrick Schuck; Antonio Krüger

In: Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia. International Conference on Mobile and Ubiquitous Multimedia (MUM-2019), November 27-29, Pisa, Italy, ISBN 978-1-4503-7624-2/19/11, ACM, 11/2019.
