Anil Seth is a neuroscientist at the University of Sussex and the author of Being You. His 2025 Berggruen Prize essay, “The Mythology of Conscious AI”, makes a careful philosophical case that “the silicon-based digital computers we are familiar with” cannot be conscious. The case rests on a contrast. Standard digital computers cleanly separate software from hardware, so the same program runs identically on any machine. Biological brains do no such thing: neurons are entangled with ongoing metabolism and cannot be abstracted into pure information-processing. If consciousness requires the entangled kind of processing, the substrate-flexible kind that standard digital computers do cannot get there. Seth himself allows that non-standard substrates (neuromorphic chips, analog systems) may still support consciousness; his exclusion is supposed to fall on standard digital computation. This post pushes back on that exclusion with a thought experiment that mirrors a famous one from David Chalmers, but runs it in the opposite direction. And unlike Chalmers’s, the replacement is not hypothetical.

The Original Experiment

Chalmers’s original thought experiment (“Absent Qualia, Fading Qualia, Dancing Qualia,” 1995) runs like this: imagine replacing each neuron in a conscious brain, one at a time, with a silicon equivalent that preserves its input-output behavior exactly. Does consciousness survive? Chalmers argues yes, on the grounds that it would be bizarre for consciousness to fade or flicker during a gradual functionally-equivalent replacement. Seth rejects the setup: no perfect silicon equivalent exists, because neurons do metabolic work that cannot be abstracted into input-output behavior. So the thought experiment presupposes what it is trying to establish.

Fair enough. But the same kind of experiment can run on the other side of the divide, and unlike neuron-to-silicon replacement, the hardware version is real and ongoing.

The Reverse Experiment

Start with an ordinary computer running an LLM. Canonical von Neumann architecture: stored-program design, CPU fetching instructions from memory, the software/hardware separation Seth cares about in its purest form. His clearest target. Now replace parts one step at a time.

  1. Replace the separate processor and memory with a single in-memory-compute unit. IBM NorthPole runs a 3-billion-parameter LLM on a multi-chip system of this kind, with weights and KV-cache resident on-chip. The stored-program line starts to blur.
  2. Replace the CPU with an ASIC fabricated for this exact model. The weights are etched into silicon. No program is loaded; the chip executes one fixed function. This is a small architectural step. The computation remains digital, deterministic, and bit-for-bit emulable on a CPU. What has changed is only that there is no longer a stored program.
  3. Replace the ASIC with a neuromorphic chip whose weights live in memristors: physical device properties with drift and noise, not data a processor reads.
  4. Replace the neuromorphic chip with an analog circuit whose dynamics perform the forward pass. IBM has already demonstrated ALBERT on analog hardware.

This is where the reverse experiment differs from Chalmers’s. Chalmers’s neuron-by-neuron replacement is a thought experiment; nobody has built such a silicon neuron. The hardware replacement above is not a thought experiment. Every stage exists in research or early deployment today. Production AI systems already mix von Neumann and non-von-Neumann components: matrix units with fixed operations (though programmable weights) in TPUs, in-memory compute for weights, analog accelerators for specific workloads. The gradient from stage 1 to stage 4 is being crossed by actual engineering decisions, incrementally, often in hybrid systems where part of the computation is stored-program and part is not. A user interacting with an LLM today cannot tell which stages of the pipeline run on which kind of silicon. The outputs are behaviorally equivalent across the substitutions.
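The bit-for-bit claim at stage 2 can be made concrete in a toy sketch (all names, sizes, and functions here are illustrative, not any real hardware's API): the same forward pass realised once as a stored program that loads its weights as data, and once as a fixed function with the weights frozen in, the software analogue of etching them into an ASIC.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)  # toy weight matrix

def stored_program_forward(weights, x):
    # Von Neumann style: weights arrive as an argument, i.e. as data
    # the "processor" reads in before computing.
    return np.maximum(weights @ x, 0.0)

def fabricate(weights):
    # Stage-2 analogue: freeze the weights into the function itself.
    # The returned closure computes exactly one fixed function; nothing
    # is loaded when it runs.
    frozen = weights.copy()
    frozen.setflags(write=False)  # weights are no longer writable data
    return lambda x: np.maximum(frozen @ x, 0.0)

fused_forward = fabricate(W)
x = rng.standard_normal(8).astype(np.float32)

# The substitution is invisible downstream: outputs match bit for bit.
assert np.array_equal(stored_program_forward(W, x), fused_forward(x))
```

The point of the sketch is only that fusing the weights changes where they live, not what is computed; the downstream consumer of the outputs cannot distinguish the two realisations.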

Part of the force of Chalmers’s original experiment came from an intuition: consciousness should not fade or flicker as one functionally-equivalent module is gradually swapped for another. The same intuition works here, in reverse. If we swap in a neuromorphic tile for a stored-program cache while the system keeps producing the same outputs, it would be strange for a light to switch on at the boundary. This intuition is not uncontroversial: a biological naturalist could say that sharp substrate-dependent boundaries are exactly what they expect, just as being alive has a reasonably sharp boundary. But even granting that move, Seth’s argument, read literally, requires naming the point in the gradient where the light switches on, and saying why.

So Seth’s argument has to answer a concrete question about deployed systems, not a hypothetical one: at what point in the migration does consciousness appear?

The Dilemma

If there is such a point, it has to be somewhere in the gradient, and Seth has to name it. Stages 3 and 4 are candidates Seth himself already permits. The interesting line therefore runs somewhere between stages 1 and 3. Place it at stage 2 (digital ASIC) and consciousness arrives because the silicon was fabricated for this specific model rather than programmed with it, even though the computation is bit-for-bit identical to what a CPU would do. Place it at stage 1 (in-memory compute) and consciousness arrives when a cache gets reorganised. A biological naturalist who bites the bullet here owes an account of what it is about a fused-netlist ASIC that a bit-for-bit-equivalent CPU emulation lacks, and concedes that “software/hardware separation” was never the right formulation of the property that matters, since the ASIC still realises a purely digital function.

If there is no such point, then “the silicon-based digital computers we are familiar with” was never really what Seth’s argument excluded. Stages 3 and 4 are on his side of the line, but he points at them as candidates, not conclusions. The actual excluding work is being done by some further property that human brains have and silicon systems lack at every stage, and that property is not named in the essay. Gesturing at biology does not settle it, because continuous dynamics, substrate-integrated memory, and time-bound operation are present in neuromorphic and analog hardware, and nothing in the essay tells us what else is missing.

What Seth Can Still Say

Seth’s strongest reply is that the question is ill-posed. On his view the substrate is the computation in a conscious system, so “behaviorally equivalent at every step” begs the question: outputs are not the level at which consciousness lives, and two systems with matching outputs can differ in whether they are conscious. Fair enough. But this reply reveals that the real work in his argument is being done by a prior commitment to specific physical realization being constitutive for consciousness. The “digital computers we are familiar with” framing was meant to supply the reason for that commitment, to tell us which physical facts matter by contrasting substrate-flexible digital systems with substrate-entangled biology. The reverse experiment shows it cannot play that role, because the substrate-flexible / substrate-entangled divide cuts through the digital category itself, and current engineering is actively moving along it.

Where This Leaves the Argument

Chalmers’s forward replacement was a thought experiment that stayed in the armchair, and his argument turned on the implausibility of consciousness fading during gradual replacement. The reverse replacement is already happening in production systems, and its force turns on the implausibility of consciousness arriving when one functionally-equivalent module is swapped for another. Both versions share the same intuitive shape: consciousness should not turn on or off at a boundary defined by an engineering convention. That is what makes the question sharp. Seth’s exclusion was aimed at a category of machines that is already dissolving. Silicon-based digital computation is not a stable target. Between pure von Neumann and pure analog sits a long engineering gradient that current AI hardware is already traversing, often without users or even developers being aware of which stage handles which part of the computation.

This does not show Seth is wrong about consciousness. It shows that one of his central arguments does less than it claims. A future anti-digital-consciousness argument would need to specify a physical property that the gradient lacks everywhere it matters, and say which side of that specific property each stage falls on. Until then, “the silicon-based digital computers we are familiar with” is a phrase the engineering has already outrun.