Modern machine learning methods excel at detecting highly complex statistical relationships, in the form of correlations, within large datasets. Correlations, however, do not necessarily imply causation: they do not always describe cause and effect. If correlations are nonetheless interpreted causally, this can quickly lead to false conclusions, which may prevent a learning algorithm from transferring its results to new environments or even lead to misinterpretation of data in scientific studies. A solid causal understanding of a problem is therefore crucial for scientific progress.
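As a minimal illustration of this distinction (a hedged sketch with hypothetical variables, not part of CaMoRe's methods), consider a hidden confounder Z that induces a strong correlation between X and Y even though X has no causal effect on Y; once X is set by intervention, for example in a randomized experiment, the correlation disappears.

```python
# Illustrative sketch only (hypothetical variables, not CaMoRe code):
# a confounder Z drives both X and Y, so X and Y are strongly correlated
# even though X has no causal effect on Y. Randomizing X removes the correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational regime: Z confounds X and Y; X does not cause Y.
z = rng.normal(size=n)
x_obs = z + 0.1 * rng.normal(size=n)
y_obs = z + 0.1 * rng.normal(size=n)

# Interventional regime: X is set independently of Z (a randomized experiment).
x_int = rng.normal(size=n)
y_int = z + 0.1 * rng.normal(size=n)

print("observational  corr(X, Y):", round(np.corrcoef(x_obs, y_obs)[0, 1], 3))  # close to 1
print("interventional corr(X, Y):", round(np.corrcoef(x_int, y_int)[0, 1], 3))  # close to 0
```

A model trained only on the observational regime would treat X as predictive of Y, a relationship that breaks down as soon as the environment changes.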
The research group Causal Models and Representations (CaMoRe) is dedicated to the question of how causal knowledge and modern AI systems can be combined. How can existing expert knowledge be integrated into a machine learning method to make learning more efficient? Under what conditions can an algorithm itself learn to distinguish causation from non-causal correlation, and when is this distinction impossible, rendering the algorithm unreliable? What data would we need to collect to support causal decision-making?
CaMoRe explores these and other questions with a particular focus on methods of reinforcement learning, as well as applications in medicine, ecology, and climate science. We work closely with scientists from various German and international research institutes and universities.
Head of CaMoRe:
Dr. Jonas Wahl
Jonas.Wahl@dfki.de
Office of CaMoRe:
Denise Cucchiara
Denise.Cucchiara@dfki.de
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
Building D3 2
Stuhlsatzenhausweg 3
66123 Saarbrücken
Germany