When all content can be manipulated, synthesised or artificially generated, there is a risk of losing our bearings and our sense of certainty. The ‘liar's dividend’ – the strategic exploitation of this doubt – jeopardises the basis of democratic debate and social understanding. Trust is in danger of becoming a scarce commodity that everyone must work to preserve.
At the German Research Centre for Artificial Intelligence (DFKI), we take this challenge seriously. In the Generative AI Competence Centre, we develop methods for image and text forensics, work on explainable language models, and design innovative approaches to verifying synthetic content.
Projects such as CERTAIN, NewsPolygraph and GretchenAI set standards and provide tools that expose manipulation and build trust. With ‘Wegweiser KI’ (AI Guide), we are working with the German Press Agency (DPA) to help journalists critically evaluate AI-based content and use it responsibly.
Quality journalism remains indispensable – even in the age of AI. While AI increases the speed and diversity of research, it is journalists who check facts, provide context and take responsibility. DFKI promotes dialogue between research and media practice in order to integrate AI as a tool in a constructive and transparent manner.
Trust in the digital world does not arise on its own. It requires media literacy, critical questioning and clear labelling of synthetic content – a task for education, politics, science and business alike. DFKI contributes to educational formats and public dialogues to raise awareness and build understanding.
Generative AI can strengthen society if we manage it responsibly. The rules of this new digital world are being written today – in laboratories, editorial offices, classrooms and parliaments.
DFKI invites you to help shape this future together: for greater transparency, trust and a society that engages with AI in an informed way.
Head of the Generative AI Competence Centre, DFKI
Editor & Press Officer Kaiserslautern, DFKI