
How AI is changing our relationship with truth and information

Press release | Learning & Education | Knowledge & Business Intelligence | Data Management & Analysis | Image Recognition & Understanding | Language & Text Understanding | Smart Data & Knowledge Services

Generative artificial intelligence (AI) is changing our media and information environment more radically than ever before. Systems such as ChatGPT, Midjourney and deepfake generators create deceptively real texts, images and voices. This technology opens up enormous potential, but also poses an unprecedented challenge to our trust in information.

Imagine receiving a call from someone you know well – but the voice belongs to an AI. How confident can we be today in distinguishing between what is real and what is not?

The challenge: trust in a world of synthetic content

When any piece of content can be manipulated, synthesised or artificially generated, we risk losing our orientation and sense of certainty. The ‘liar's dividend’ – the strategic exploitation of this doubt – jeopardises the basis of democratic debate and social understanding. Trust is in danger of becoming a scarce commodity that everyone must work to preserve.

At the German Research Centre for Artificial Intelligence (DFKI), we take this challenge seriously. In the Generative AI Competence Centre, we develop methods for image and text forensics, explainable language models, and innovative techniques for verifying synthetic content.

‘Generative AI opens up fascinating possibilities, but it also raises the question: How can we create systems that users can truly trust? Transparency and explainability are the cornerstones of our research at DFKI.’

Christoph Maerz, Head of Generative AI Competence Centre, DFKI

Projects such as CERTAIN, NewsPolygraph and GretchenAI set standards and provide tools that expose manipulation and build trust. With ‘Wegweiser KI’ (AI Guide), we are working with the German Press Agency (DPA) to help journalists critically evaluate AI-based content and use it responsibly.

Media, science and society: working together against disinformation

Quality journalism remains indispensable – even in the age of AI. While AI increases the speed and diversity of research, it is journalists who check facts, provide context and take responsibility. DFKI promotes dialogue between research and media practice in order to integrate AI as a tool in a constructive and transparent manner. 

Trust in the digital world does not arise on its own. It requires media literacy, critical questioning and clear labelling of synthetic content – a task for education, politics, science and business alike. DFKI is involved in educational formats and public dialogues to raise awareness and educate.

‘Trust is not a given – it only arises when we as scientists actively explain how AI works, what opportunities and risks it presents, and work together with the media and society to use this technology responsibly.’

Christoph Maerz, Head of Generative AI Competence Centre, DFKI

Using creative freedom – AI as an opportunity

Generative AI can strengthen society if we manage it responsibly. The rules of this new digital world are being written today – in laboratories, editorial offices, classrooms and parliaments. 

DFKI invites you to help shape this future: towards greater transparency, trust and an informed society in its dealings with AI.

This article was first published in the Rhein-Zeitung (print edition of 26 June 2025) and appears here in an edited version with the kind permission of the editorial team.

Contact:

Dipl.-Inf. (FH) Christoph Maerz, M.Sc.

Head of the Generative AI Competence Centre, DFKI

Press contact:

Jeremy Gob

Editor & Press Officer Kaiserslautern, DFKI