This project addresses novel interaction paradigms and technologies for enhancing the flexibility of interaction and dialogue management in computer games. This will enable computer-driven game characters to lead more intelligent and versatile dialogues than is possible with current game technology.
Research targets the following innovations.
Intelligent dialogue control by means of methods from artificial intelligence
- Adaptation of the dialogue flow to the current context, the user's actions, and the emotions of virtual characters, all of which are computed at run-time;
- Consideration of the previous dialogue in order to avoid repetitions and to generate consistent behaviour.
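The two points above can be illustrated with a minimal sketch. The class below is hypothetical (not part of the project's actual architecture): it selects a character's next utterance based on a run-time mood context and tracks the dialogue history so that lines are not repeated while alternatives remain.

```python
import random

class DialogueManager:
    """Toy context-aware dialogue manager (illustrative sketch only).

    Picks utterances matching the character's current mood and keeps a
    history of spoken lines to avoid repetition.
    """

    def __init__(self, lines_by_mood):
        # lines_by_mood: mood label -> list of candidate utterances
        self.lines_by_mood = lines_by_mood
        self.history = set()  # lines already spoken in this dialogue

    def next_line(self, mood):
        pool = self.lines_by_mood.get(mood, ["..."])
        # Prefer lines not yet used; fall back to the full pool
        # once every candidate has been spoken.
        candidates = [line for line in pool if line not in self.history]
        if not candidates:
            candidates = pool
        line = random.choice(candidates)
        self.history.add(line)
        return line
```

In a real game engine, the mood argument would come from the emotion component described below, and the candidate pool from authored dialogue content rather than a plain dictionary.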
Modelling emotions and personality on the basis of socio-psychological models
- Integration of a component for modelling and computing emotions into a game engine;
- Improvement of the bodily behaviour of virtual characters, e.g. the selection and individual execution of gestures (possibly accompanying speech), idle movements, or facial expressions;
- Impact of emotions on the dialogical and interactional behaviour of virtual characters in computer games.
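As a toy illustration of an emotion component driving bodily behaviour, the sketch below (an assumption of this text, far simpler than the socio-psychological models the project refers to) keeps a single valence value that is updated by appraised game events, decays toward neutral, and is mapped to a coarse facial expression for the animation layer.

```python
class EmotionState:
    """Toy emotion model: one valence value in [-1, 1], updated by
    appraised events and decaying toward neutral (illustrative only)."""

    DECAY = 0.9  # per-update pull toward the neutral state

    def __init__(self):
        self.valence = 0.0

    def appraise(self, event_value):
        # event_value: appraised impact of a game event,
        # e.g. +0.5 for praise, -0.8 for an insult
        self.valence = max(-1.0, min(1.0, self.valence * self.DECAY + event_value))

    def expression(self):
        # Map the emotion to a coarse facial expression label
        if self.valence > 0.3:
            return "smile"
        if self.valence < -0.3:
            return "frown"
        return "neutral"
```

A full model would track several appraisal dimensions and personality parameters; the point here is only the run-time loop of appraisal, decay, and mapping to observable behaviour.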
Speech synthesis with reliable quality
- Embedding speech synthesis in a game engine;
- Predictability of synthesis quality;
- Reduction of the manual effort during the creation of new synthetic voices.
Expressive speech synthesis
- Generation of voices typical of game scenarios (giant, dwarf, robot, etc.) from synthetic speech by means of speech signal processing;
- Automatic learning of emotion-specific sound patterns from sample data for use in synthetic speech generation;
- Increasing the spectrum of available expressivity through domain-specific and domain-oriented synthetic voices.
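A very simple signal-processing transformation of the kind mentioned above is pitch shifting. The sketch below is a naive, assumed example: it resamples a mono signal by linear interpolation, so a factor below 1 lowers the pitch (toward a "giant" voice) and a factor above 1 raises it; production systems would instead use methods such as PSOLA or a phase vocoder, which preserve duration.

```python
def shift_pitch(samples, factor):
    """Naive pitch shift by linear-interpolation resampling.

    factor > 1 raises pitch (shorter output), factor < 1 lowers it.
    Caveat: this also changes duration, unlike PSOLA or
    phase-vocoder approaches used in real speech processing.
    """
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor          # fractional read position
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]  # clamp at the end
        out.append(a * (1 - frac) + b * frac)
    return out
```

For example, `shift_pitch(tone, 0.5)` drops a signal by one octave while doubling its length, which is why duration-preserving methods are needed in practice.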