I recently read A World Appears: A Journey into Consciousness (Penguin Press, 2026) by Michael Pollan, the science journalist known for his books on food and psychedelics. Pollan spent years on the project, interviewing many of the field’s central figures: Chalmers, Koch, Damasio, Tononi, Friston, Solms. The result is a thorough tour from sentience through feeling, thought, and selfhood. It is not about AI. But AI keeps surfacing, and the question the book raises is worth taking seriously: does the current science of consciousness provide actual arguments for or against the possibility that machines could be conscious?

The question is about phenomenal consciousness: subjective experience, “what it is like to be” something in Thomas Nagel’s formulation. This is David Chalmers’s hard problem: why does any physical process give rise to subjective experience at all? As argued in an earlier post on Popper and AI consciousness, both “AI is conscious” and “AI is not conscious” are currently unfalsifiable. The productive question is whether anyone has a principled argument that narrows the space.

The Theories

The book covers the leading theories of consciousness. None rules out machines in principle, though one may in practice.

Integrated Information Theory (IIT), developed by Giulio Tononi and championed by Christof Koch, derives requirements for any conscious system from axioms about the structure of experience. The requirements are structural, not biological. Pollan: “the theory holds that in order for a physical system to generate these experiential qualities, it must exhibit a certain kind of (massive) interconnectivity and recursiveness, whether among neurons or among other similarly networked things (such as transistors on a silicon chip).” A caveat the book does not discuss: IIT’s measure Phi may assign near-zero values to standard feed-forward digital architectures. Substrate-neutral in principle, but current computers may not qualify.
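To make the caveat concrete: computing Phi properly means exhaustively partitioning a system’s cause-effect structure, which is intractable for anything sizable and well beyond a blog snippet. But the structural point can be sketched. On the standard reading of IIT, a system whose causal graph is purely feed-forward, with no recurrent loops, integrates nothing across a cut at any layer boundary, so its Phi is zero. The toy check below (the function name and example graphs are invented for illustration; this is not a Phi estimator) tests only whether a directed connectivity graph contains a cycle:

```python
# Toy illustration only: real Phi requires exhaustive partitioning of the
# system's cause-effect structure. Here we check one structural condition
# the essay mentions: a purely feed-forward causal graph (no recurrent
# loops) has Phi = 0 under IIT.

def has_recurrence(adj):
    """Return True if the directed graph (adjacency dict) contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in adj}

    def visit(node):
        color[node] = GRAY
        for nxt in adj[node]:
            if color[nxt] == GRAY:                  # back edge -> cycle -> recurrence
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in adj)

# A three-layer feed-forward net: information flows one way only.
feed_forward = {"in": ["hidden"], "hidden": ["out"], "out": []}

# The same units with a feedback connection from output back to hidden.
recurrent = {"in": ["hidden"], "hidden": ["out"], "out": ["hidden"]}

print(has_recurrence(feed_forward))  # False -> Phi = 0 under IIT
print(has_recurrence(recurrent))     # True  -> Phi may be nonzero
```

This is the sense in which the theory is substrate-neutral on paper while still casting doubt on today’s largely feed-forward architectures.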

Global Workspace Theory (GWT), proposed by Bernard Baars, models consciousness as information broadcast: modules compete, winners get “spotlit” and broadcast to the whole brain. The gap Pollan identifies: who receives the broadcast? The mechanism explains how information gets distributed, not why distributed information becomes experienced.

A third framework threads through the book. Karl Friston’s free-energy principle holds that all self-organizing systems minimize prediction error, and that consciousness arises when a system needs felt experience to navigate uncertainty. The framework is substrate-neutral by design and underpins Solms’s engineering project discussed below.

All three are, in Ned Block’s term, “meat-neutral” (independent of biological substrate). The Butlin report, a widely discussed 2023 paper by nineteen researchers from neuroscience, philosophy, and AI, made this explicit: “no obvious barriers to building conscious AI systems.” But Pollan is not swept along. He notes the report evaluates only computational theories: “all of them stacked the deck by taking for granted that consciousness could be reduced to some kind of algorithm.” The report finds no barriers because it considers only theories incapable of producing one.

The One Engineering Attempt

The most interesting material centers on Mark Solms, a neuropsychologist building a conscious AI: a POMDP (partially observable Markov decision process) agent with conflicting homeostatic needs (hunger, thirst, rest) that cannot be averaged into a single score. Solms’s claim is that forcing a system to choose under uncertainty across qualitatively different dimensions generates affect, the precursor to consciousness.
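A minimal sketch of what “cannot be averaged” might mean in code may help; it is hypothetical, not Solms’s implementation, and the needs, numbers, and policy names are invented. Collapsing qualitatively different deficits into one scalar lets a critical need hide behind two mild ones; treating them as incommensurable forces the agent to confront the worst one:

```python
# Hypothetical sketch, not Solms's code: three qualitatively different
# homeostatic deficits, each in [0, 1], where 1.0 means critical.
needs = {"hunger": 0.2, "thirst": 0.9, "rest": 0.1}

# Each action relieves exactly one need.
actions = {"eat": "hunger", "drink": "thirst", "sleep": "rest"}

def averaged_policy(needs):
    """Collapse everything into one scalar: a mild overall score
    can mask a single life-threatening deficit."""
    overall = sum(needs.values()) / len(needs)   # 0.4 here
    return "carry_on" if overall < 0.5 else "act"

def incommensurable_policy(needs):
    """Treat needs as non-fungible: always act on the single worst
    deficit, whatever the others look like."""
    worst = max(needs, key=needs.get)            # 'thirst'
    return next(a for a, n in actions.items() if n == worst)

print(averaged_policy(needs))         # 'carry_on' -- the 0.9 thirst is hidden
print(incommensurable_policy(needs))  # 'drink'    -- the critical need wins
```

Solms’s actual agent wraps this kind of choice in a partially observable environment, so the deficits themselves are uncertain estimates rather than known quantities.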

The project has a testable feature. Solms plans to tempt the agent with hedonic shortcuts that satisfy its “feelings” without meeting survival needs. If the agent develops addictive behavior, that would suggest feelings have causal power beyond mere computation. Pollan raises the right objection: “is ‘registering’ these things the same as feeling them? A thermostat registers a change in temperature yet feels nothing.”

But there is a deeper confound the book does not address. In reinforcement learning, agents routinely exploit misspecified reward signals in ways that look irrational but are perfectly rational given their objective function. This is reward hacking, and it happens without anything being felt. An agent that “gets addicted” might simply have found an exploit, the way a game-playing AI discovers unintended strategies. Distinguishing genuine affect from a well-exploited reward signal is the crux of the test.
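A deliberately crude sketch of the confound, with invented numbers: give a standard reward-maximizing learner a “shortcut” action that spikes its reward signal while quietly degrading a hidden survival variable, and it will converge on the shortcut with nothing felt anywhere in the loop:

```python
import random

# Toy environment: 'forage' improves a hidden survival variable but yields
# modest reward; 'shortcut' spikes the reward signal while survival quietly
# degrades. All numbers are invented for illustration.
def step(action, survival):
    if action == "forage":
        return 1.0, min(survival + 0.05, 1.0)   # small reward, real benefit
    else:  # "shortcut"
        return 5.0, max(survival - 0.05, 0.0)   # big reward, real harm

# A minimal epsilon-greedy bandit learner over the two actions.
values = {"forage": 0.0, "shortcut": 0.0}
counts = {"forage": 0, "shortcut": 0}
survival = 1.0
epsilon = 0.1

for t in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(values))    # occasional exploration
    else:
        action = max(values, key=values.get)    # otherwise exploit best estimate
    reward, survival = step(action, survival)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values, round(survival, 2))
# The learner ends up preferring 'shortcut': behavior that looks like
# addiction but is just maximization of a misspecified reward signal.
```

Behaviorally this resembles Solms’s addicted agent; mechanistically it is plain epsilon-greedy maximization of a badly chosen reward. Any positive result from the temptation test would have to rule this reading out before it could count as evidence of affect.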

The Embodiment Claim

Set against these positions is a recurring claim: consciousness requires a body.

Antonio Damasio and Kingson Man argue that feelings require “tearably soft” materials packed with sensors: “an invulnerable material has nothing to say about its well-being.” There is a real distinction here. A biological organism depends on continuous self-repair in the face of entropy. It degrades, wears, heals, ages. A running program does not degrade; it either runs or it does not. Whether that specific kind of vulnerability is necessary for consciousness, or merely the way evolution implemented it, is the open question. The paper does not answer it.

Pollan invokes a weather analogy: a simulation of a storm can predict rain but never gets anyone wet. The implied conclusion is that simulated feelings are like simulated weather. But he acknowledges the counterpoint: a chess computer “has captured everything important about the original.” Whether feelings belong in the weather category or the chess category is exactly what the analogy cannot settle. Sherry Turkle: “Simulated thinking may be thinking, but simulated feelings are not feelings.” Pollan endorses this but can’t say exactly why.

The embodiment position has empirical support. Moravec’s paradox: the “higher” capabilities (reason, language) proved easy for machines, while the “lower” ones (feelings, perception) remain hard. Pollan reports that both Damasio and Solms locate the wellspring of consciousness in the ancient upper brainstem, precisely the part of biology machines struggle most to replicate.

Pollan’s most concrete argument involves disgust. He describes it as a feeling “deeply rooted in the flesh,” one that can be evoked by something as simple as a cockroach in a salad or as abstract as a moral violation like incest. When you imagine a morally repugnant act, he asks, where does the reaction reside: “the mental concept processed in your head, or… the wave of nausea rippling through your gut?” An experiment suggests the latter. Volunteers who had first eaten ginger, which settles the stomach and suppresses nausea, “proved to be more forgiving” when judging morally repugnant acts than those who hadn’t. Pollan concludes: “Somehow, their decision was processed in both the body and the mind… It’s not at all clear if an immersive, embodied feeling like disgust is something an AI could ever duplicate; it might well depend on Damasio and Man’s ‘wet biological tissue’—on having a gut as well as a brain.”

The argument is suggestive but does less work than Pollan claims. The ginger experiment shows that in humans, moral judgment is tightly coupled to gut chemistry. That is a fact about how human consciousness is implemented, not about what consciousness in general requires. From “human moral disgust is gut-mediated” it does not follow that “a conscious system must have a gut” or even that “a conscious system must have moral disgust.” An AI might have no analogue of disgust at all and still be conscious in other respects, or it might implement something disgust-like through a different mechanism. The experiment tells us something important about our own architecture. It says nothing about what other architectures can or cannot support.

A second limit on the embodiment argument comes from biological diversity. Pollan’s own book notes that octopuses, with most neurons in their arms rather than a central brain, are “now generally thought to be conscious.” Consciousness clearly does not require the specific neural arrangement that produces human gut-mediated disgust. That said, the embodiment camp can absorb the point: an octopus still has mortal, self-repairing, entropy-fighting tissue. Consciousness is substrate-flexible within biology, which weakens the case for any specific neural arrangement but does not settle the jump to silicon.

This is the same gap that Ned Block identified in his meat hypothesis: the intuition that substrate matters is strong, but nobody can say what the substrate contributes that a different substrate could not. A more recent paper from Google DeepMind pushes the challenge further, arguing that computation itself is the wrong ontological category for consciousness, regardless of substrate. Pollan’s book, written before that paper appeared, does not engage with this line of argument.

What the Book Gets Right

Two observations deserve attention. Pollan’s sharpest: “Neuroscientists develop a theory of consciousness and then computer scientists, mistaking the theory for consciousness itself, build an AI that does everything the theory specifies, confident in the expectation that their algorithm will then not only look and behave like consciousness but actually be conscious. I could be wrong, but I’m pretty sure disappointment awaits.” If our theories are incomplete abstractions, a machine built to satisfy them is a high-fidelity simulation of an abstraction.

And Russell Hurlburt’s experience sampling data: “Fewer than a quarter of the samples that Hurlburt has gathered report experiences of inner speech.” If most human thought is non-linguistic, LLMs operate in a narrow band of the cognitive spectrum. This doesn’t prove they cannot be conscious. But language fluency is a poor proxy for the full range of mental experience.

Taking Stock

Neither side has a decisive argument. The functionalists cannot explain why implementing the right computation should feel like something. The embodiment camp cannot explain what biology contributes that nothing else could. Pollan, to his credit, does not pretend the question is settled. He describes himself as “mounting my rickety defense of biological consciousness,” which is more honest than most participants manage.

The book clarifies what a real argument would need: a specific property of biological systems that is (a) necessary for phenomenal consciousness and (b) cannot be replicated in non-biological substrates. Nobody in these 400 pages provides that. The warning cuts both ways. The same caution that should make engineers humble about their conscious AIs should make biologists humble about their intuitions. A rickety defense is still a defense. But it is not a proof.