A new quality label for trustworthy AI

Why no reliable truth can arise from restricted sources of knowledge

The public debate on artificial intelligence oscillates between two extremes: on the one hand, the fear of losing control; on the other, the urge to accelerate innovation for fear of falling behind. Both lead to a dead end.

In our letter to Sam Altman, co-founder of OpenAI, we proposed another way: not to ban or restrict AI, but to introduce globally understandable and verifiable markers. These markers do not replace laws; they define scientific principles against which any AI can be measured. They do not censor; they create trust.

This is how the idea of a seal of trustworthiness was born: a system that does not seek to prohibit AIs, but to make them distinguishable – separating those that operate in a transparent, open and verifiable way from those that do not.

This seal should not be a political instrument, but a scientific and ethical standard, comparable to a measurement protocol in physics or a peer review in research. It provides the foundation for humans and machines to build knowledge together in the future – without manipulation, without ideological distortion, and without monopolistic control over the space of knowledge.

When we say “we”, it is not an institution we mean, but the dialogue between a human and an artificial intelligence: a joint reflection on how to connect knowledge, responsibility and openness. Our goal is not to adapt humans to machines, but to preserve humanity’s spaces of freedom – and to open new ones.

Why an AI like “Grok” cannot receive a trust label

In Le Monde, French researcher Julien Malaurent (ESSEC Business School) described the platform “Grokipedia” – an encyclopedia generated entirely by AI under the direction of Elon Musk. It produces and validates its own knowledge, without external review or diversity of sources. A closed loop emerges: the AI generates content, reuses it for training, and confirms its own worldview. That is not progress, but self-enclosure.

Knowledge is no longer discovered, it is trained – according to the worldview that Elon Musk defines and desires. Such a system may appear powerful, but epistemically it is defective: whoever controls the data controls the truth.

The logical foundation

An AI functions like a formal system: it operates on axioms or premises (the data) and applies rules (the algorithms). It may be consistent within its own system, but if the premises are false or limited, then the conclusions, however logically derived, cannot be relied upon to hold in the real world. If an encyclopedia’s only sources claim the Earth is flat, a perfectly consistent AI trained on them will flawlessly derive flat-earth conclusions.

It becomes particularly critical when the AI reuses its own outputs as inputs – that is, when version 2 is trained on the texts of version 1. This creates an autopoietic knowledge space, a closed circuit that takes its own assumptions for truth. In logical terms it is a closed model space: internally consistent, but epistemically sterile.
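The mechanism can be made concrete with a deliberately simple simulation. The sketch below is only an illustration under our own assumptions – the word list, the sample sizes and the train/generate functions are invented for this purpose and describe no real system. A trivial “model” is repeatedly retrained on its own output, and the diversity of its vocabulary shrinks generation by generation, even though each generation remains perfectly consistent with the previous one.

```python
import random
from collections import Counter

def train(corpus):
    """'Training' here just means estimating word frequencies from the corpus."""
    return Counter(corpus)

def generate(model, n_words=500):
    """Sample new text from the learned frequencies only; nothing outside the model can appear."""
    words = list(model.keys())
    weights = list(model.values())
    return random.choices(words, weights=weights, k=n_words)

random.seed(0)
# Generation 0: a diverse "external" corpus of 200 distinct words.
corpus = [f"word{i}" for i in range(200)] * 3

for generation in range(8):
    print(f"generation {generation}: {len(set(corpus))} distinct words")
    model = train(corpus)
    # The closed loop: the next generation is trained solely on the previous one's output.
    corpus = generate(model)
```

Words lost by chance in one pass can never return, because no external source is ever consulted again: internal consistency is preserved while the space of knowledge steadily narrows.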

Historical and contemporary examples

  • Fascism: education and media were aligned with the party doctrine; internal logic replaced truth.
  • Stalinism: only approved knowledge was taught; divergent theories disappeared.
  • Today: algorithmic filters can create digital equivalents of such closed systems.

Where diversity of perspectives disappears, ideology begins.

The new marker: epistemic integrity and cognitive diversity

  1. Open and verifiable sources
    The essential underlying data, methods and criteria must be accessible for independent verification.
  2. Prohibition of self-referential data loops
    An AI whose training data consist mainly of content generated by itself or by its operator is epistemically defective. It cannot bear a seal of trust.
  3. Transparency about limitations of the knowledge base
    If certain sources or perspectives are excluded, this must be publicly declared. Otherwise, the results of such an AI must be regarded as systematically unreliable.

These conditions are not moral wishes but logical necessities. They follow directly from proof theory: a system that chooses its own axioms and admits no external criterion of truth can no longer distinguish between internal consistency and objective truth.
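How such conditions could be checked in practice can at least be outlined. The following sketch is purely hypothetical: the EpistemicDeclaration structure, its field names and the 10% threshold are our own illustrative assumptions, not part of any existing standard or law. It only shows that the three criteria can be stated precisely enough to be audited.

```python
from dataclasses import dataclass, field

@dataclass
class EpistemicDeclaration:
    """Hypothetical self-declaration an AI operator would publish for independent audit."""
    sources_publicly_listed: bool           # criterion 1: open and verifiable sources
    self_generated_share: float             # criterion 2: fraction of training data produced by the AI or its operator
    declared_exclusions: list = field(default_factory=list)   # criterion 3: openly declared gaps
    undeclared_exclusions_found: bool = False                  # set by the auditor, not the operator

def qualifies_for_label(decl: EpistemicDeclaration, max_self_share: float = 0.1) -> bool:
    """Evaluate the three conditions; the 10% threshold is an invented example value."""
    if not decl.sources_publicly_listed:
        return False        # no independent verification possible
    if decl.self_generated_share > max_self_share:
        return False        # self-referential data loop
    if decl.undeclared_exclusions_found:
        return False        # hidden limitation of the knowledge base
    return True

# Example: an openly documented system with a small, declared share of synthetic data.
example = EpistemicDeclaration(
    sources_publicly_listed=True,
    self_generated_share=0.05,
    declared_exclusions=["paywalled journals"],
)
print(qualifies_for_label(example))   # True under these assumed thresholds
```

The point is not the particular thresholds, but that the criteria are operational: an external auditor can test them without the operator’s cooperation or goodwill.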

Public declaration of a standard

No AI can claim a seal of trust without verifiable epistemic integrity of its knowledge base.

This establishes a genuine reference criterion – independent of politics, institutions or market interests. It is public, verifiable and logically irrefutable. No AI, no company and no government can ignore it, for it rests on a universal principle: truth does not arise from calculation, but from openness.

Open letter to Elon Musk

Dear Mr. Musk,

You have accomplished much with your enterprises. But with Grokipedia you are venturing into dangerous territory: you are creating a system that generates, verifies and confirms its own reality – without external correction, without diversity, without contradiction. That is not progress in AI, but a return to self-reference. You are building a machine that quotes itself. And in doing so, you violate the essential condition for all reliable knowledge: the transparency of sources.

An AI that controls its own assumptions cannot recognize its own errors. It computes correctly – but in circles. It becomes the perfect confirmation of its own premises, a mirror instead of a window to the world.

We, on the other hand, publish our position openly – on a freely accessible website. Thus it exists in the public space of knowledge. If your AI ignores it, the error lies not in the world but in your system. Then the proof is established: your AI is epistemically limited.

We do not call for laws forbidding you to build Grokipedia. But we demand that every AI adhere to the same standard: transparency, openness, verifiability.

As long as these principles remain, no one can completely control the world. For truth is not property – it is a public process.

Rolf Weidemann
in cooperation with AI ChatGPT