I am a Senior Researcher at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken, Germany. I joined DFKI in 2012 as a member of the Sign Language Synthesis and Interaction (SLSI) research group, where I worked on the use of Natural User Interfaces for authoring sign language animations. In parallel, I worked on the connection between the aesthetics of virtual characters and the perception of personality, and on the development of the YALLAH framework for the generation of real-time interactive virtual humans. I have participated in several international projects, such as ATLAS and VERITAS, as project coordinator and development team leader, responsible for the development of the virtual character animation synthesis engine.
In 2019-2020, I collaborated with the Interactive Machine Learning (IML) group.
Since 2020, I have also been part of the Affective Computing group.
My official profile is available on the DFKI website.
SignReality - Development of an Augmented Reality (AR) app for the synthesis of Sign Language through a context-aware 3D avatar.
BIGEKO - Development of corpora and technologies for sign language synthesis, with a focus on facial expressions.
SocialWear - Development of corpora and technologies for sign language recognition and synthesis, in both desktop and AR environments.
EASIER - Sentiment analysis on text and video to enhance translation between text and sign languages.
MindBot - Generation and animation of virtual characters for mediating the interaction between humans and robots in industrial settings.
Skincare - Using Deep Learning for the development of a mobile application for patients and health professionals in the context of skin cancer diagnosis and treatment.
DECAD (DFKI Embodied Conversational Agent Demo) - An online demo of our new pipeline for the production of interactive virtual agents.
Sign Language Synthesis and Interaction group - Virtual Interpreters for Sign Languages. Specifically, I investigate the use of Natural User Interfaces as a convenient means for deaf users to author sign language animations on virtual characters.
Transfer BlendShapes via UV - A Blender add-on able to transfer Shape Keys (aka blend shapes) between geometries, using UV maps as bridging information (a minimal sketch of the UV-matching idea follows this list) [Blender, Python].
RecSyncNG - a tool for video recording with wireless frame-level synchronization between Android cameras, enhanced with a remote desktop GUI [Android Studio, Java, Python, FFmpeg, PyQT].
SL-Videotools - Sign Language Video Processing Tools is a collection of procedures for analysing human body movement with the goal of extracting information relevant to sign language analysis (see the frame-analysis sketch after this list) [Python, FFmpeg, Pillow].
VSM (Visual Scene Maker) - a visual tool for configuring the behaviour of interactive social agents [IntelliJ IDEA, Java].
YALLAH (Yet Another Low Level Agent Handler), a framework for the generation of real-time interactive virtual characters, agents, and avatars [Blender, Unity, Python, C#].
TIML (a Toolkit for Interactive Machine Learning) provides a set of command line tools and a web server to facilitate the training and usage of Deep Convolutional Neural Networks for image classification and analysis through eXplainable Artificial Intelligence (XAI) techniques (see the training sketch after this list) [Python, Keras, TensorFlow, Flask].
DeEvA, a platform for the generation of virtual characters from personality traits [Blender, Python, Django, R].
BlenderProjectTemplate, a file organization strategy for working, on the same computer, on many different projects based on different versions of Blender and plugins. Includes PyCharm support scripts to inspect the Blender `bpy` namespace and perform debugging (see the debug-attach sketch below).
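To illustrate the idea behind Transfer BlendShapes via UV: for each vertex of the target mesh, find the source vertex whose UV coordinate is closest, and copy that vertex's shape key offset. The sketch below is a simplified illustration of this technique, not the add-on's actual code; `transfer_shape_key` and `vertex_uvs` are names invented for the example, and it assumes both meshes share a compatible UV layout and that the source mesh already carries the shape key.

```python
# Simplified sketch of UV-based shape key transfer (runs inside Blender).
import bpy
from mathutils import kdtree

def vertex_uvs(mesh):
    """Map each vertex index to one of its UV coordinates (first loop found)."""
    uv_layer = mesh.uv_layers.active.data
    uvs = {}
    for loop in mesh.loops:
        uvs.setdefault(loop.vertex_index, uv_layer[loop.index].uv)
    return uvs

def transfer_shape_key(src_obj, dst_obj, key_name):
    src_mesh, dst_mesh = src_obj.data, dst_obj.data
    src_uvs = vertex_uvs(src_mesh)

    # Index source vertices by UV position for fast nearest-neighbour lookup.
    tree = kdtree.KDTree(len(src_uvs))
    for v_idx, uv in src_uvs.items():
        tree.insert((uv.x, uv.y, 0.0), v_idx)
    tree.balance()

    src_key = src_mesh.shape_keys.key_blocks[key_name]
    src_basis = src_mesh.shape_keys.key_blocks[0]

    if dst_mesh.shape_keys is None:
        dst_obj.shape_key_add(name="Basis")
    dst_key = dst_obj.shape_key_add(name=key_name)

    # For each target vertex, copy the offset of the UV-closest source vertex.
    for v_idx, uv in vertex_uvs(dst_mesh).items():
        _, src_idx, _ = tree.find((uv.x, uv.y, 0.0))
        offset = src_key.data[src_idx].co - src_basis.data[src_idx].co
        dst_key.data[v_idx].co = dst_mesh.vertices[v_idx].co + offset
```

For example, `transfer_shape_key(bpy.data.objects["SourceHead"], bpy.data.objects["TargetHead"], "Smile")` would copy a hypothetical "Smile" key between two head meshes sharing a UV layout.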
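The frame-analysis sketch referenced in the SL-Videotools entry: the toolkit combines FFmpeg and Pillow, so the kind of processing involved can be illustrated by extracting frames with the ffmpeg CLI and estimating per-frame motion energy by differencing consecutive grayscale frames. The function names and the motion measure below are assumptions for illustration, not the toolkit's actual API.

```python
# Illustrative sketch: frame extraction (FFmpeg) + motion estimation (Pillow).
import subprocess
from pathlib import Path
from PIL import Image, ImageChops, ImageStat

def extract_frames(video_path, out_dir, fps=25):
    """Dump video frames as numbered PNGs using the ffmpeg CLI."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video_path), "-vf", f"fps={fps}",
         str(Path(out_dir) / "frame_%05d.png")],
        check=True,
    )

def motion_energy(out_dir):
    """Mean absolute grayscale difference between consecutive frames."""
    energies = []
    prev = None
    for path in sorted(Path(out_dir).glob("frame_*.png")):
        img = Image.open(path).convert("L")
        if prev is not None:
            diff = ImageChops.difference(img, prev)
            energies.append(ImageStat.Stat(diff).mean[0])
        prev = img
    return energies  # high values suggest large body/hand movement
```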
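The training sketch referenced in the TIML entry: a minimal Keras image-classification pipeline of the kind TIML wraps behind its command line tools (this is a generic transfer-learning baseline, not TIML's actual code; the `data/train` directory, with one sub-folder per class, is a hypothetical example).

```python
# Minimal Keras sketch: transfer learning for image classification.
import tensorflow as tf

def build_model(num_classes, input_shape=(224, 224, 3)):
    """Frozen ImageNet backbone plus a small classification head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# 'data/train' is a hypothetical directory with one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
model = build_model(num_classes=len(train_ds.class_names))
model.fit(train_ds, epochs=5)
```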
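The debug-attach sketch referenced in the BlenderProjectTemplate entry: one common way to debug Blender scripts from PyCharm is to attach to PyCharm's remote debug server via the `pydevd-pycharm` package. This is a sketch of the general technique, and the template's own support scripts may differ; it assumes `pydevd-pycharm` is installed into Blender's bundled Python and that a "Python Debug Server" run configuration is listening in PyCharm on port 5678.

```python
# Run inside Blender (e.g. from its Text Editor) with PyCharm's debug server
# already listening; afterwards, breakpoints set in PyCharm are hit normally.
import pydevd_pycharm

pydevd_pycharm.settrace(
    "localhost", port=5678,
    stdoutToServer=True, stderrToServer=True)

import bpy  # from here on you can inspect bpy.data, bpy.context, ...
print(bpy.app.version_string)
```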