The research work in DEEPLEE, carried out in DFKI's Language Technology research departments in Saarbrücken and Berlin, builds on and extends DFKI's expertise in the areas of deep learning (DL) and language technology (LT). The project aims at profound improvements of DL approaches in LT by focusing on four central, open research topics:
- Modularity in DNN architectures
- Use of external knowledge
- DNNs with explanation functionality
- Machine teaching strategies for DNNs
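The first topic, modularity, can be illustrated with a minimal sketch of composable network building blocks that share one call interface, so that a block (e.g. a shared encoder) can be reused across tasks. The class names and composition API here are illustrative assumptions, not the DEEPLEE design:

```python
import numpy as np

class Module:
    """A reusable building block with a uniform call interface (assumed API)."""
    def __call__(self, x: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class Linear(Module):
    def __init__(self, in_dim: int, out_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.1
    def __call__(self, x):
        return x @ self.W

class ReLU(Module):
    def __call__(self, x):
        return np.maximum(x, 0.0)

class Sequential(Module):
    """Compose modules; any block can be swapped without touching the rest."""
    def __init__(self, *mods):
        self.mods = mods
    def __call__(self, x):
        for m in self.mods:
            x = m(x)
        return x

# One shared encoder, reused by two hypothetical task heads:
encoder = Sequential(Linear(8, 16), ReLU())
head_ie = Sequential(encoder, Linear(16, 4))  # e.g. an IE tagger head
head_qa = Sequential(encoder, Linear(16, 2))  # e.g. a QA scorer head
```

Because both heads close over the same `encoder` object, a change to the encoder (or training it in a multi-task setting) affects both tasks at once, which is the point of modular building blocks.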
The result of the research work will be a DL-based modular framework system that enables end-to-end applications in information extraction (IE), question answering (QA) and machine translation (MT). The following research objectives are pursued:
- Complex LTs (IE, QA, MT), which are traditionally based on heterogeneous technology collections, are to be modeled as uniform end-to-end learning scenarios based on neural networks.
- The end-to-end performance of classical approaches based on heterogeneous technology collections is to be evaluated against that of neural approaches.
- A repertoire of "linguistically inspired" neural building blocks for LTs will be established; these blocks are language-agnostic and reusable (including explanatory functionality and learning aspects such as different degrees of supervision, model distillation, transfer learning, and multi-task learning for such modules). We will do this for IE, QA and MT scenarios, covering a wide range of building blocks and applications.
- A portfolio of approaches will be established for a variety of DNNs and tasks (NMT, NQA and NIE) whose decisions can be explained to a human expert.
- IE, QA and MT are to be designed as text-to-text applications.
- Ways to integrate external knowledge sources into NN-based LTs will be developed and evaluated.
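Two of the objectives above — casting IE, QA and MT as text-to-text applications, and integrating external knowledge — can be sketched with a single uniform interface in which the task is encoded as a textual prefix and knowledge-base facts are prepended to the input. The prefixes, the toy stand-in model and the knowledge table are hypothetical illustrations, not project specifications:

```python
# Hypothetical knowledge base; a real system would query an external source.
KNOWLEDGE = {"Berlin": "Berlin is the capital of Germany."}

def augment(text: str) -> str:
    """Prepend any matching knowledge-base facts to the input text."""
    facts = [fact for entity, fact in KNOWLEDGE.items() if entity in text]
    return " ".join(facts + [text])

def toy_model(prompt: str) -> str:
    # Stand-in for a trained seq2seq network (rule-based for illustration).
    return prompt.upper()

def text_to_text(task: str, text: str) -> str:
    """Single entry point: IE, QA and MT share one input/output signature,
    distinguished only by the task prefix."""
    return toy_model(f"{task}: {augment(text)}")

# All three applications go through the same function:
answer      = text_to_text("qa", "Where is Berlin?")
translation = text_to_text("mt", "Good morning.")
extraction  = text_to_text("ie", "DFKI is located in Berlin.")
```

The design choice illustrated here is that once every task consumes and produces text, one model signature (and potentially one network) serves all applications, and external knowledge can be injected without changing that signature.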