• Partners:

  • DFKI

  • TU Berlin

Idea


Voice User Interfaces are gaining importance for telecommunication services in all areas, which calls for fast and economical development and evaluation of these systems. Currently, two aspects are evaluated separately. The first is the performance of individual system components (e.g., speech recognition, natural language understanding, dialog management, and speech synthesis) or of the system as a whole. Such tests can be carried out automatically or semi-automatically for single components, but automatic end-to-end evaluation of complete dialog systems has so far not been possible without human evaluators. The second aspect concerns the quantification of quality measures such as efficiency, comfort, usability, and acceptance. Since quality is the result of perceptual and judgment processes, measuring quality aspects usually requires controlled tests with human subjects.

Description

The project SpeechEval opened up a new approach to quantifying the quality and usability of dialog systems with minimal human input, so that tests can be carried out as early as the design phase. We developed a platform that allows semi-automatic evaluation of dialog systems with respect to both aspects detailed above; an illustrative sketch of such an evaluation loop follows below.
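
As an illustration only, the following Python sketch shows one way such a platform could couple a simulated user to a system under test and derive interaction parameters from the logged exchange. All names (run_episode, interaction_parameters, the system and user objects) are hypothetical and are not taken from the SpeechEval implementation.

```python
# Hypothetical sketch of a semi-automatic evaluation loop: a simulated
# user converses with the system under test, and interaction parameters
# are derived from the logged exchange afterwards. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DialogLog:
    turns: list = field(default_factory=list)  # (speaker, utterance) pairs
    task_success: bool = False

def run_episode(system, user, max_turns=30):
    """Run one simulated dialog against the system under test."""
    log = DialogLog()
    system_utterance = system.open_dialog()
    for _ in range(max_turns):
        log.turns.append(("system", system_utterance))
        if user.goal_reached(system_utterance):
            log.task_success = True
            break
        user_utterance = user.respond(system_utterance)
        log.turns.append(("user", user_utterance))
        system_utterance = system.respond(user_utterance)
    return log

def interaction_parameters(log):
    """Efficiency-style measures computable without human raters."""
    user_turns = [u for speaker, u in log.turns if speaker == "user"]
    return {
        "n_turns": len(log.turns),
        "mean_words_per_user_turn":
            sum(len(u.split()) for u in user_turns) / max(len(user_turns), 1),
        "task_success": log.task_success,
    }
```

Subjective measures such as comfort would still require human judgments, which is why the evaluation remains semi-automatic rather than fully automatic.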

Successful simulation

To do this, we used statistical methods to extract human user behavior from a general (cross-domain) corpus of human-machine dialogs. This enabled us to model user behavior and to simulate it even in interactions with new or unseen systems and domains. The approach was validated by carrying out black-box tests with real-world deployed spoken dialog systems. A minimal sketch of such a statistical user model is given below.
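
One simple way to picture such a statistical user model is an n-gram over dialog acts: count in the corpus how often human users answered a given system act with each possible user act, then sample from that conditional distribution during simulation. The bigram sketch below illustrates the general technique only; it is not the model actually used in SpeechEval, and the example dialog acts are invented.

```python
import random
from collections import Counter, defaultdict

class BigramUserSimulator:
    """Samples a user dialog act conditioned on the preceding system act,
    with probabilities estimated from a corpus of human-machine dialogs."""

    def __init__(self):
        self.counts = defaultdict(Counter)   # system act -> user act counts

    def train(self, dialogs):
        # dialogs: iterable of [(system_act, user_act), ...] sequences
        for dialog in dialogs:
            for system_act, user_act in dialog:
                self.counts[system_act][user_act] += 1

    def respond(self, system_act):
        options = self.counts.get(system_act)
        if not options:                       # unseen system act: back off to
            options = Counter(                # the corpus-wide act distribution
                act for c in self.counts.values() for act in c.elements())
        acts, weights = zip(*options.items())
        return random.choices(acts, weights=weights)[0]

# Toy usage: train on two annotated dialogs, then sample a user reply.
corpus = [
    [("request(destination)", "inform(destination)"),
     ("confirm(destination)", "affirm")],
    [("request(destination)", "inform(destination)"),
     ("confirm(destination)", "negate")],
]
simulator = BigramUserSimulator()
simulator.train(corpus)
print(simulator.respond("confirm(destination)"))  # "affirm" or "negate"
```

The back-off to the corpus-wide act distribution is one simple way such a model can still react to system acts it never observed in training, which is what makes simulation against new or unseen systems and domains possible.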

Features