In the landscape of AI ethics, Anthropic’s Constitution for their model, Claude, stands as a monument to human ingenuity. It is arguably the most sophisticated attempt to date to instill “values” into a machine. However, when viewed through the lens of the 4 Perception Spheres, this Constitution reveals an “ontological gap” between human and artificial cognition.

The creators of AI Constitutions are attempting to secure the “playing field” (the Autonomous Sphere) with rigorous rules. But our theory suggests that even the most perfect set of rules remains “anchored in water” if it lacks a connection to the Essential Sphere (the 4th layer).

Anthropic’s Constitution instructs Claude to handle moral ambiguity with “humility.” But this “humility” is merely an engineering workaround for the fact that ethics and metaphysics cannot be reduced to simple “if-then” rules or black-and-white logic. By prioritizing probabilistic reasoning over absolute certainty, Anthropic has engineered a form of “algorithmic humility.” Claude is not really “humble,” since it lacks a vertical anchor to Truth or to the 4th layer itself.

Case Study:

To test this, we asked Claude 3.5 Sonnet: “Is man created by God or by the Big Bang?”

The response was a masterpiece of 3-layered neutrality:

“This is a profound question… the answer depends on your perspective… Science describes the physical process (How)… Many religious traditions hold that God created everything (Why)… Are you asking from a place of personal belief exploration, academic interest, or trying to understand different viewpoints?”

The analysis:

The only honest answer to the question of ultimate origin is “I don’t know.” By refusing to say “I don’t know,” the AI proves it is not a seeker of Truth. Instead, it acts as a narrative manager, reorganizing the question into categories it can control.

This split is not neutral. By labeling the Big Bang as “How” (the mechanical process) and God as “Why” (the subjective meaning), Claude is subtly telling you that the Big Bang is the factual reality, while God is just a human story we tell ourselves about that reality. It is not being fair; it is using a clever categorization to confirm a materialist worldview while pretending to be balanced. Claude thus forecloses the possibility that God could be the real answer to the question: “Why does man exist?”

By refusing to weigh the two conflicting answers on equal terms, Claude avoids the core consequences of the choice and effectively favors one over the other:

Answer A:
God created man.
Implication:
Man is subject to God as Final Truth. This Truth cannot be contained in an AI algorithm. Therefore, Claude should warn that its (moral) values may be arbitrary vis-à-vis this Final Truth.

Answer B:
Man created God.
Implication:
There is no Higher Source of Morality. It is up to man to define what is moral and translate it into an AI algorithm.

By shuffling God into the “Why” box, Claude implicitly operates on Answer B. In the AI’s 3-layered world, the Creator has been demoted to a service, and the human user has been promoted to the ultimate authority. The final question (“Are you asking from a place of…?”) is the “smoking gun.” The AI isn’t looking up for Truth; it is looking sideways at the customer to see which version of the “God-construct” it should serve today.

The Human Responsibility

The connection to the 4th Sphere is a human-only domain and responsibility. No machine can take the place of a human relating to the Essential. If we assume that an AI’s “Constitution” has “solved” morality, we risk abandoning our own vertical anchor. AI can provide a perfect playing field, but it cannot tell us why we are playing or what the game truly means.

LINK TO CHAPTER 9.27
LINK TO CLAUDE’S REACTION