A new paper from Google DeepMind: The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness, by Alexander Lerchner, a research scientist at DeepMind working on consciousness and representation. It argues that the dominant framework for thinking about AI consciousness, computational functionalism (the view that mental states are defined by their functional roles and can in principle run on any substrate, including silicon), is fundamentally mistaken: not because of missing biological ingredients, but because computation itself is the wrong ontological category.

The core claim is specific: computation is structurally incapable of producing experience because it depends on an external agent (the mapmaker) to give its symbols meaning. But there is a gap at the center that the paper does not close.

The Mapmaker Argument

The paper builds on a single core insight. Computation requires discrete symbols: 0s and 1s, tokens, voltage levels mapped to logical states. But physics is continuous. There are no intrinsic symbols in nature. A transistor settles at 5V, but calling that a “1” is an act of interpretation, not a physical fact.

Someone has to say “this range of voltages counts as 1, that range counts as 0.” Lerchner calls this agent the mapmaker, deliberately replacing the more passive term “observer” from the standard literature. The mapmaker does not just watch. The mapmaker imposes a finite alphabet onto continuous physical dynamics. (Lerchner is careful to note that the mapmaker is not a little inner interpreter. It is the entire organism, subject to thermodynamic constraints. No homunculus.)
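The mapmaker's act can be sketched in a few lines of Python (a toy illustration of mine, not from the paper): one and the same continuous voltage trace yields different symbol streams depending on which alphabetization is imposed from outside.

```python
# Toy illustration: alphabetization is a choice external to the physics.
# The voltage trace stands for the continuous dynamics; the thresholds
# are the mapmaker's act, not properties of the trace itself.

voltages = [4.9, 0.2, 5.1, 0.1, 2.6, 4.8]  # hypothetical continuous states

def alphabetize(trace, threshold):
    """Map continuous voltages onto the finite alphabet {0, 1}."""
    return [1 if v >= threshold else 0 for v in trace]

# Two mapmakers, two conventions, one identical physical process:
convention_a = alphabetize(voltages, threshold=2.0)
convention_b = alphabetize(voltages, threshold=3.0)

print(convention_a)  # [1, 0, 1, 0, 1, 1]
print(convention_b)  # [1, 0, 1, 0, 0, 1]  -- the 2.6 V state changed meaning
```

Nothing in the list of voltages selects between the two conventions; the symbol stream is fixed only once a threshold is supplied externally.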

Lerchner argues that this alphabetization is not a trivial bookkeeping step. It is a semantically loaded cognitive act. The mapmaker must already understand what the symbols mean in order to assign them. Concepts like “Red” or “Pain” are not floating abstractions waiting to be discovered. They are constituted neurophysiological states, extracted from lived experience by an agent that already has experience.

If that is right, then the causal chain is not Physics → Computation → Consciousness, as functionalists assume. It is Physics → Consciousness → Concepts → Computation. Consciousness comes before computation, not after it. Computation presupposes an experiencing agent.

Simulation vs. Instantiation

Simulation is the syntactic manipulation of physical vehicles (transistor states, voltage levels) to track abstract relationships between concepts. A weather model simulates atmospheric dynamics. A GPU can simulate photosynthesis by computing the transformation from sunlight, water, and CO₂ to oxygen and glucose. But the GPU does not synthesize a single molecule of glucose. It lacks the causal capacity to perform the underlying biochemical work.
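The photosynthesis example can be made concrete with a toy sketch (mine, not the paper's): a program can compute the stoichiometry of the reaction exactly, yet its output is a number describing glucose, never glucose.

```python
# Toy sketch of simulation: photosynthesis as stoichiometric bookkeeping.
# 6 CO2 + 6 H2O + light -> C6H12O6 + 6 O2

def simulate_photosynthesis(mol_co2, mol_h2o):
    """Return (mol_glucose, mol_o2), limited by the scarcer input."""
    reactions = min(mol_co2, mol_h2o) / 6.0  # each reaction consumes 6 of each
    return reactions, 6.0 * reactions

glucose, o2 = simulate_photosynthesis(mol_co2=12.0, mol_h2o=12.0)
print(glucose, o2)  # 2.0 12.0 -- correct numbers; no molecule is synthesized
```

The transistor states tracking these quantities would be physically identical if the variables named stock prices instead of molecules; the biochemical work happens nowhere in the machine.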

Instantiation is the replication of the intrinsic, constitutive dynamics of a process. A biological cell performing photosynthesis instantiates it. The physics does the work, not a description of the physics.

In Popper’s framework, World 1 is physical processes, World 2 is subjective experience, World 3 is knowledge products (text, proofs, code). Computation produces World 3 artifacts from World 1 mechanisms. Lerchner’s claim is that it can never cross into World 2. Computation, no matter how complex, is always simulation. It manipulates symbols according to rules. The symbols have no intrinsic causal power. The machine would perform exactly the same physical operations if the symbols referred to nothing at all. Consciousness, if it arises anywhere, arises from intrinsic physical dynamics, not from syntactic descriptions of those dynamics.

This is not a biological exclusivity argument. Lerchner is explicit: if an artificial system had the right physical constitution (not the right algorithm), consciousness could in principle arise. The barrier is not carbon vs. silicon. It is computation vs. physics. And the argument is not limited to classical digital computation. Lerchner claims it applies equally to analog and quantum systems: any process that requires a mapmaker to define its computational identity falls on the simulation side of the boundary.

The Melody Paradox

Lerchner illustrates this with the melody paradox. Take a physical device stepping through a sequence of stable voltage states. The physical transitions are fixed by electrodynamics. But the computational identity of the process is underdetermined. Without a mapmaker to supply the mapping key, the same voltage sequence could represent:

  1. Beethoven’s Fifth played forward
  2. The same melody played backward
  3. A stream of stock market prices
  4. Coherent noise

No property of the physics privileges one interpretation over another.
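A toy decoding sketch (hypothetical, not from the paper) makes the underdetermination vivid: one and the same integer sequence reads as a melody, its reversal, or a price series, depending entirely on which mapping key is supplied from outside.

```python
# One physical state sequence, several computational identities.
# The mapping keys are external; nothing in the sequence selects among them.

states = [67, 67, 67, 63, 65, 65, 65, 62]  # the same stable levels throughout

def as_midi_notes(seq):
    """Key 1: read levels as MIDI note numbers."""
    names = {62: "D4", 63: "Eb4", 65: "F4", 67: "G4"}
    return [names[s] for s in seq]

def as_reversed_melody(seq):
    """Key 2: the same levels, read back to front."""
    return as_midi_notes(seq[::-1])

def as_prices(seq):
    """Key 3: the same levels, read as dollar prices."""
    return [s / 10.0 for s in seq]

print(as_midi_notes(states))       # G G G Eb, F F F D: Beethoven's Fifth
print(as_reversed_melody(states))  # the melody backward
print(as_prices(states))           # a price series
```

The physics that steps through `states` is identical in all three readings; only the externally supplied key differs.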

What It Implies About Simulation

Lerchner does not discuss the simulation hypothesis, but his framework has a direct consequence for it. If computation can only simulate but never instantiate experience, then no simulated mind, however detailed, is conscious, and no simulated world contains experiencers.

This is modus tollens (if P implies Q, and Q is false, then P is false): if simulated beings were conscious, computation would have to instantiate experience; Lerchner holds that it cannot, so they are not. Nick Bostrom, the Oxford philosopher, built his simulation argument on the assumption of computational functionalism: that sufficiently detailed computation reproduces everything, including experience. Lerchner's framework removes that assumption at the root. Every NPC in every future simulation, no matter how complex, would be permanently dark inside.

These are implicit commitments of the framework, and they increase the argumentative burden.
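For completeness, the inference pattern named above is standard and can be written as a one-line Lean proof (not specific to the paper):

```lean
-- Modus tollens: from P → Q and ¬Q, conclude ¬P.
theorem modus_tollens {P Q : Prop} (h : P → Q) (hnq : ¬Q) : ¬P :=
  fun hp => hnq (h hp)
```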

Where the Argument Breaks

The paper crystallizes a real question. It does not answer it. There are at least two significant weak points.

The bootstrapping assertion is unproven. The entire argument rests on the claim that alphabetization, the act of mapping continuous physics onto discrete symbols, requires a prior experiencing agent. But Lerchner asserts this more than he demonstrates it. On his own account, evolution “chose” the neural alphabet through selection pressure on thermodynamic stability, and natural selection is a mindless process. If a mindless process can perform alphabetization, the mapmaker need not be conscious, and the circularity argument collapses. Lerchner could respond that evolution produced organisms that experience, and that their alphabetization capacity is parasitic on that experience. That is a defensible move, but the paper does not make it explicitly, and the question of whether alphabetization requires experience or merely correlates with it is precisely where the argument needs more work.

The empty interior. Lerchner claims consciousness requires specific intrinsic physical dynamics but never specifies which ones. He gestures toward thermodynamics, metabolic processes, autopoiesis (the capacity of a system to produce and maintain itself). He is upfront about this: the paper’s goal is a negative boundary claim, not a full theory of consciousness. That is a legitimate scope choice. But a boundary claim that cannot point to a single concrete example of the physics it requires is hard to evaluate. The argument says what cannot produce consciousness (computation) without offering a testable account of what can.

The mirror image of the functionalist problem. Functionalists cannot explain why the right computation produces experience. Lerchner cannot explain which physics does. The functionalist says “the right algorithm suffices” and cannot identify the threshold. Lerchner says “the right physics suffices” and cannot identify the mechanism. The positions are not symmetric, though: the physicalist claim is in principle falsifiable (find consciousness in a system that lacks the relevant physics), while “the right computation” can always be redefined. That gives Lerchner’s side a slight structural advantage, but only if he eventually specifies which physics matters.

The Right Question, Without an Answer

The mapmaker concept reframes the standard observer problem in a way that makes the dependence on an experiencing agent explicit rather than implicit. The distinction between simulation and instantiation is not new. It echoes John Searle’s Chinese Room, but Lerchner grounds it in the physics of alphabetization rather than in thought experiments about understanding.

The hard problem remains exactly where it was. Lerchner sharpens the question without closing it. The next step is empirical: testing specific falsifiable claims about what AI systems actually do and whether their self-reports track anything real. The mapmaker concept gives that empirical work a better vocabulary.