

Artificial intelligence is evolving from a subdiscipline of computer science into the central "core competence for saving the world" of the 21st century. This impression arises from the industry's self-descriptions, from countless social media channels, and from reporting in the quality press. What can be observed is an intertwining of rapidly improving results and investment dynamics, which in some cases has produced astonishing applications. Whether this will end in a socio-cultural downward spiral or in sustainable growth for the common good remains an open question.
From a marketing perspective, it is hardly surprising that real AI application successes have led to exaggerated self-aggrandizement in a quarterly-driven competition to outbid one another. The triumphalism of the new "Masters of the Universe" is, of course, not entirely unjustified, nor is the developers' flattering suspicion that they might be geniuses. But it is one-sided. The dangerous downside: it raises unrealistic performance expectations among users and investors and fuels illusory hopes.
It is unsatisfactory that the fundamental limitations of the monolithic AI approaches currently in use are obscured by language. What is this about? According to Immanuel Kant in 1781, in the Critique of Pure Reason, human cognition has two sources: sensibility and understanding. Sensibility, which receptively registers the states of the external world or of one's own body, enables perception. The understanding maps these perceptions onto concepts, which we associate with words on the surface of language (or react to reflexively). But there is also, and this is crucial, reason, which must be distinguished from the understanding. Human reason is the ability to draw conclusions on the basis of principles. We experience it in conscious thinking as an inner dialogue.
Reason is spontaneous and productive; it allows conceptual and discursive reflection on one's own sensations and on the state of the world. On the basis of cause and effect, reason formulates hypotheses about natural constants and laws of nature. But it also makes assumptions, based on ground and consequence, about connections within culture, which everyone can experience in everyday life. Culture is the counterpart of nature and refers to the traditions, institutions, contracts, customs, technologies, and social agreements that Georg Wilhelm Friedrich Hegel aptly and pointedly called "second nature." The answer to the question of why an apple falls to the ground is gravity. To the question of why a week has seven days and a day 24 hours, there are no natural-law answers, only cultural-historical ones.
AI is the digitization of human knowledge skills. It is celebrating its 70th anniversary this year and has fallen 55 years behind its own goals. The systems that have been successful for almost 15 years are based on large amounts of training data and artificial neural networks. Suitably assembled, AI systems offer real-time assistance functions that simplify, delight, or revolutionize the personal and professional lives of many users. At their core, they are pattern recognizers and, in this respect, comparable to sensory perception and understanding. Their main function is classification, that is, categorizing or subsuming something under a concept. This is remarkable, and by no means an exhaustive characterization, but it is sufficient for the moment. Pattern recognition is powerful. Every brain is extremely capable in this regard. One need only observe a bat's flight to see natural brilliance in signal processing and active collision avoidance at high speed. A fly dodging the hand that wants to swat it or chase it away is enough, a fly that often finds the escape route that will probably save its life. The fly manages this without conceptual deliberation, intuitively, because it correctly classifies the approaching hand as an existential risk, and because escape is a good strategy when something large approaches rapidly. Evidently even a fly's brain can deal reflexively with forms of natural causality.
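What "subsuming under a concept" means here can be made tangible with a minimal sketch, a nearest-centroid classifier. The feature vectors and class names below are invented for illustration; real systems learn such patterns from millions of training examples rather than four:

```python
# A minimal sketch of classification as "subsuming under a concept":
# a nearest-centroid classifier with invented toy data.
import math

# Hypothetical training data: feature vectors grouped by class label.
training_data = {
    "apple":  [[0.9, 0.1], [0.8, 0.2]],
    "banana": [[0.1, 0.9], [0.2, 0.8]],
}

def centroid(vectors):
    """Mean vector of a class: the learned 'pattern' for that concept."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

centroids = {label: centroid(vs) for label, vs in training_data.items()}

def classify(x):
    """Subsume a new observation under the concept whose pattern is closest."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

print(classify([0.85, 0.15]))  # -> "apple"
```

The point of the toy is only this: "recognizing" here is nothing but measuring distances to stored patterns, without any grasp of what an apple is.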
Doctors are also experts in pattern recognition, identifying, for example, abnormalities in a patient's gait that may indicate dementia. On MRI scans, in ultrasound examinations, or on X-rays, they identify signs of potentially dangerous space-occupying lesions that may point to a tumor. They acquired these skills under supervision during their specialist training and refined them in professional practice. It is visual-sensory impressions that lead qualified specialists to diagnostic hypotheses, which will hopefully trigger successful early-stage treatment with minimally invasive measures. It is unsurprising that AI systems trained on millions of medical images recognize similarities between a current image and a class identifiable in the training data.
The language models that have been successful for the past five years operate in a similar way, except that they not only classify but also generate. They produce new outputs, yet they still function as probabilistic pattern recognizers, not like reason, which works with explicit grounds and semantic relationships. They cannot draw conceptual conclusions, but they can produce plausible word sequences without any understanding of their content. They can babble, but they cannot chat. The latter requires situational and interpersonal understanding.
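How far pattern statistics alone can carry is easy to demonstrate. The following minimal sketch, a toy bigram model over an invented twelve-word corpus, generates word sequences by sampling from co-occurrence counts; real language models condition on long contexts with billions of parameters, but the principle of predicting the next token from learned statistics is the same:

```python
# A minimal sketch of "plausible word sequences without understanding":
# a toy bigram model built from an invented corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which: pure pattern statistics, no meaning.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def babble(start, length=8):
    """Generate a likely-looking word sequence by sampling successors."""
    word, out = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(babble("the"))  # e.g. "the cat sat on the rug"
```

Nothing in this table knows what a cat or a mat is; the output is plausible solely because it mirrors the statistics of the corpus.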
Chatbots now generate not only text but also multimedia content: photorealistic images and synthetic videos that, as of last year, were almost indistinguishable from films shot by humans with real actors. Chatbots and image and video generators such as Sora from OpenAI have improved continuously. AI actors move naturally, light and shadow are rendered realistically, and even reflections on glass or in puddles are free of the errors that initially drew derisive amusement. It is certainly no exaggeration to predict that synthetic AI characters will capture a significant market share in explicit adult entertainment.
There is an obvious correlation between training volume and system performance. This raises the question of how much data and how much computing capacity would be necessary for AI systems to simulate human knowledge skills at least as well as humans master them. This scaling hypothesis is discussed, for example, in the AI Safety Report of May 2024. However, not all scientists believe that the simple formula "more data, better results" will live up to expectations. Rather, they assume "that recent advances have not overcome fundamental challenges such as common sense and flexible world models."
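The scaling hypothesis is often stated compactly as a power law. In the sketch below, test loss falls with training volume N as L(N) = a·N^(-b) + c; the constants are invented for illustration, whereas empirical scaling-law studies fit such curves to measured losses:

```python
# A minimal sketch of the scaling hypothesis as a power law:
# test loss falling with training-set size N as L(N) = a * N**(-b) + c.
# The constants a, b, c are hypothetical fit parameters, invented here
# purely for illustration.
a, b, c = 10.0, 0.3, 0.5

def loss(n_tokens):
    """Predicted loss under the assumed power-law fit."""
    return a * n_tokens ** (-b) + c

for n in [1e6, 1e9, 1e12]:
    print(f"{n:.0e} tokens -> predicted loss {loss(n):.3f}")
```

The irreducible floor c is one way to read the skeptics' objection: beyond some point, more data polishes the pattern statistics without touching common sense or flexible world models.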
Regardless of whether the methods can meet these requirements and what systems may then be possible, the following applies: scientific relationships are not matters of opinion. They can be formulated as hypotheses and experimentally confirmed or refuted. Science "lives" in the realm of data and facts. The truth of a statement must be verifiable; otherwise, it is not a fact but an assumption.
This is partly different for statements in the humanities and cultural studies, although cultural truths and historical data are, of course, of central importance there as well. But there are matters of opinion and belief, room for contrary positions, and dynamic developments that can take a progressive or regressive course. And there are decision-making situations in which changing majorities can initiate radically different solutions, up to and including revolutionary changes in worldview.
The outputs generated by chatbots correspond neither to facts nor to matters of opinion or conviction, but to what humans call a supposition: they are only possibly correct and therefore problematic from the standpoint of a theory of assertion. Users, however, are led to believe that the outputs are statements and assertions. Human speakers make assertions in the expectation that the other party will agree that the statement is correct, and the other party is in a position to demand comprehensible reasons or to withhold agreement. But a chatbot is an AI system, not a speaker; it has no access to reasons and, when asked for them, will not generate a conceptual conclusion but only a probably suitable output, which a human may understand as a reason but which the intention-free AI system does not intend as one.
So how should users handle machine outputs? In the 1970s, the critical theory of the 1950s led to the call for students to engage with all information critically. This was an important step toward greater maturity; it led to major protests against nuclear power and the NATO Double-Track Decision and made life uncomfortable for political parties. The call remains valid today without restriction: we should treat every output of current AI systems critically. Since current chatbots can only produce potentially correct outputs, and indeed point this out to users in the small print ("Check important information"), users must question and scrutinize every machine output.
This means they are asked to do exactly what they wanted to avoid. Instead of integrating the AI output into their everyday routines as a trusted source of information, they are supposed to go to the sources, ask whether other sources exist, and do their own research, even though they turned to AI precisely to get answers effortlessly. In effect, they are supposed to use chatbots the way science has always worked.
But the promise was made: "Ask any question." The "self-learning systems" claim to have an answer to every question, not merely an output. Anyone can learn the scientific way of working; it merely requires energy and self-discipline. And, obviously and understandably from a human perspective, it is the offerings that promise convenience that succeed, not those that demand effort.
But what is the actual benefit? We are at a crossroads, and the decision we face is fundamental. Will it be possible to convince users to treat all AI results critically whenever the output contains information they cannot confirm from their own prior knowledge? Or, and this is the challenging alternative, can scientific innovation overcome the fundamental shortcomings of current AI systems, so that we can trust a chatbot because all the information and facts contained in its output reliably correspond to the facts of the objective real world as known at that point in time?
The first question addresses a cultural problem, and the solution would be for every user to successfully change their behavior every time they use machine intelligence. The second question focuses on scientific innovations that ensure that AI output is accurate and can therefore be used in mission-critical decision-making situations because its accuracy can be relied upon.
It is possible to bring about behavioral change, but it is expensive, and its success is uncertain. In most contexts, we assume that better information leads to better decisions. This certainly applies to the individual decision in the voting booth, which we hope is more than a momentary whim. It may be possible to inform voters that the AI offerings they use as decision-making aids in a politically confusing situation unfortunately provide outputs that can be relied on only after extensive study of the sources. But this will succeed only if we are willing to invest in mass awareness campaigns. As a guide: in the 2025 federal election there were over 60 million eligible voters, almost 50 million of whom voted, casting 49.5 million valid votes. If all eligible voters had to be addressed, each and every one of them reached, and if ten euros per person did not seem excessive for the purpose, the bill would come to over 600 million euros. That would be a considerable investment. But it would still be cheaper than a democratically legitimate election result that voters regret in retrospect, feel deceived by, and about which they say they were not sufficiently informed or could not obtain reliable information, as happened, for example, in the weeks after the 2016 Brexit referendum in the United Kingdom.
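For transparency, the back-of-the-envelope calculation behind that figure, written out:

```python
# The campaign budget estimated in the text: roughly 60 million
# eligible voters at ten euros per person.
eligible_voters = 60_000_000
cost_per_person = 10  # euros
print(f"{eligible_voters * cost_per_person:,} euros")  # 600,000,000 euros
```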
Behavioral change is not only expensive but also unreliable. Scientific breakthroughs are difficult to plan, and the necessary research investments cannot be quantified precisely. What is always needed are clear goals. To put it succinctly: we should develop reliable machine intelligence, with the aim of bringing AI output up to the gold standard among digital tools, the calculator, which is used to determine, for example, whether one can afford a mortgage and therefore buy a house. Because one can rely on the calculator's output, one takes on existential personal financial risks. Or avoids them.
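What that gold standard means in practice can be shown with the mortgage example itself. The following sketch uses the standard annuity formula; all figures, including the budget ceiling, are invented for illustration:

```python
# The calculator's "gold standard" in action: a deterministic mortgage
# check via the standard annuity formula. All figures are hypothetical.
def monthly_payment(principal, annual_rate, years):
    """Annuity formula: payment = P*r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(principal=300_000, annual_rate=0.04, years=25)
print(f"{payment:.2f} euros per month")      # ~1583.51, same on every run
print("affordable" if payment <= 1_700 else "not affordable")
```

The decisive property is not sophistication but determinism: the same inputs yield the same, verifiably correct output on every run, and that is exactly what one stakes a financial decision on.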
German researchers and scientists should be challenged to focus on the precise question of how reliable AI is possible. Germany should provide €600 million for this targeted research effort through a public-private partnership and invest primarily in minds. This would be a very large, very ambitious project with a clear timeframe and a democratically motivated deadline: nine months for preparation and two years for implementation. If preparations began in April 2026 and work began in January 2027, the goal would hopefully be achieved by December 2028. Then, during the heated campaign phase of the 2029 federal election, voters could be invited to benefit from machine intelligence that no longer generates potentially correct outputs but provides demonstrably correct answers.
Of course, this does not mean that every citizen will be happy with the outcome of the federal election. But it does mean that voters can truly inform themselves and therefore know whether the political, economic, and cultural positions put forward in the election campaign are consistent or contradictory, and whether statements are reliable or superficial, unreflective, and unsuitable for the development of the community.
Such an intellectual and engineering effort would be both democratically wise and geostrategically necessary. If successful, Germany would help overcome the major flaw of current AI systems, make a central contribution to enlightenment in the 21st century, and usher in a new era of success for the European information economy.