The Astonishing Gap
In 1994, Francis Crick made what he considered a triumphant declaration. "You," he wrote, "your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules" [1]. It was meant to settle the question. Consciousness was neurons. Case closed.
Three decades later, the case is anything but closed. Neuroscience has mapped the brain with staggering precision. We can identify which regions activate during fear, desire, language, and memory. We can measure blood flow, track electrical signals across cortical layers, and even decode crude visual impressions from fMRI data. What we cannot do, after all of this, is explain why any of it feels like something.
This gap has a name. David Chalmers called it the hard problem of consciousness in 1995, and the label stuck because nothing since has made it easier [2]. The "easy problems," by contrast, are merely difficult: explaining how the brain processes sensory information, integrates data, controls behavior. These are engineering challenges. The hard problem is different in kind. It asks why subjective experience exists at all. Why does the electrochemical cascade of a neuron firing produce the redness of red, the sting of grief, the smell of coffee on a cold morning? Nothing in physics requires it. You could, in principle, have a universe of complex information processing with no inner experience whatsoever.
The standard response from mainstream neuroscience is that consciousness will eventually yield to better instruments and more data. Give us time, the argument goes. But there is another tradition, older than Crick and arguably more philosophically rigorous, that suggests the project is misconceived from the start. What if the brain doesn't generate consciousness? What if it receives it?

Two Models, One Organ
The dominant model in contemporary neuroscience treats the brain as a producer. Consciousness is what brains do, the way digestion is what stomachs do. Daniel Dennett spent his career defending this view, arguing in Consciousness Explained that subjective experience is essentially a "user illusion" generated by competing neural drafts [20]. Anil Seth has refined the producer model with his concept of "controlled hallucination," positioning consciousness as the brain's best guess about the causes of its sensory inputs [19].
This framework has enormous institutional momentum. It is the working assumption of virtually every neuroscience lab, every pharmaceutical company developing psychoactive drugs, every AI researcher modeling cognition. And it has produced genuine results: we know which brain regions correlate with which mental states, and we can alter experience by altering brain chemistry.
But correlation is not causation. That is Philosophy 101, and consciousness researchers have spent a century running into the distinction.
William James saw the problem clearly in 1898. In Human Immortality, he pointed out that the relationship between brain states and mental states is equally consistent with two very different hypotheses [3]. The brain might produce thought the way a factory produces goods. Or it might transmit thought the way a prism transmits light, filtering and shaping something that doesn't originate within it. James called the latter the "transmission theory," and he argued that no empirical finding in neuroscience could distinguish between the two. If damaging a radio causes the music to stop, that doesn't prove the radio composed the symphony.
Henri Bergson arrived at a similar conclusion independently. In Matter and Memory, he proposed that the brain is "an organ of attention to life," a filtering mechanism that narrows the vast field of consciousness down to what is useful for biological survival [4]. Consciousness itself, in Bergson's view, is not created by the brain but constrained by it.
Aldous Huxley extended this metaphor in The Doors of Perception, drawing on Bergson to describe the brain as a "reducing valve" [5]. The full range of awareness is too much for practical living. The brain's job is to exclude, not to generate. This would explain, Huxley argued, why certain psychedelic compounds seem to expand consciousness rather than simply distort it: they partially open the valve.
More recently, Bernardo Kastrup has formalized these intuitions into a philosophical framework he calls analytic idealism, arguing that consciousness is the fundamental substrate of reality and brains are localized patterns within it [8]. Philip Goff has advanced a related position through panpsychism, the view that consciousness is a basic feature of matter itself rather than an emergent property of complex arrangements [17].
These are not fringe positions held by amateurs. James held the chair of psychology at Harvard. Bergson won the Nobel Prize in Literature for his philosophical work. Kastrup holds a PhD in computer engineering and another in philosophy. The transmission hypothesis has serious intellectual pedigree. What it lacks is the institutional backing that comes with being the default assumption.
Sheldrake and the Memory of Nature
No one has pushed the transmission model further, or drawn more fire for doing so, than Rupert Sheldrake. His hypothesis of morphic resonance, first published in A New Science of Life in 1981, proposes that nature operates through a kind of collective memory [6]. Forms in biology, from the shapes of proteins to the behavior of flocks, persist not because they are fully encoded in genes or physics but because similar forms have existed before. The past shapes the present through what Sheldrake calls morphic fields: non-local, cumulative patterns of information that exist outside any individual organism.
The implication for consciousness is radical. If morphic fields exist, then memory and habit are not stored in the brain the way files are stored on a hard drive. The brain, in Sheldrake's model, is more like a television set than a recording device, tuning into patterns of information that exist in the field itself [7]. Damage the TV and the picture degrades. Destroy it and the picture disappears. But the broadcast was never inside the box.
Sheldrake uses this analogy deliberately. A materialist neuroscientist studying the brain, he argues, is like a Martian trying to understand television by dissecting the circuitry. The Martian would find undeniable correlations between components and images. Replace a capacitor and the picture changes. Remove a tube and it goes dark. The Martian might reasonably conclude that the components create the images. And the Martian would be wrong.
The reaction from the scientific establishment was volcanic. When A New Science of Life was reviewed by Nature in 1981, the journal's editor John Maddox called it "the best candidate for burning there has been for many years." The physicist Brian Josephson, a Nobel laureate, publicly defended Sheldrake, calling the reaction "unjustified" and noting that Maddox's response resembled the behavior of the Inquisition more than the scientific method. Sheldrake remains a deeply polarizing figure. His experimental claims (that rats learn mazes faster after other rats have learned them, that people can sense being stared at) have produced mixed results under controlled conditions. But the theoretical framework he offers for consciousness, that the brain is a receiver tuned to a field rather than a generator of experience, belongs to a philosophical lineage that predates him by more than a century.
Evidence from the Edges
If consciousness can exist independently of the brain, you would expect to find anomalies: cases where awareness persists under conditions that should make it impossible, or where information is acquired through no known sensory channel. Such cases exist. They are contested, methodologically imperfect, and stubbornly resistant to both confirmation and dismissal.
The most clinically documented examples come from near-death experiences. In 2001, cardiologist Pim van Lommel published a prospective study in The Lancet that tracked 344 consecutive patients who survived cardiac arrest in ten Dutch hospitals [9]. Of these, 62 (18%) reported some form of NDE, including experiences of enhanced consciousness, encounters with deceased relatives, and observation of events from positions outside their bodies. The critical detail: these experiences occurred during periods of clinical death, when EEG readings showed no measurable brain activity. Van Lommel concluded that "the NDE is an authentic experience which cannot be simply reduced to imagination, fear of death, hallucination, psychosis, the use of drugs, or oxygen deficiency."
Bruce Greyson, a psychiatrist at the University of Virginia, has studied NDEs for more than 45 years and documented thousands of cases [10]. His data includes instances of blind individuals reporting accurate visual observations during cardiac arrest, verified by medical staff present at the time. The AWARE study led by Sam Parnia attempted to test these claims rigorously by placing hidden visual targets in hospital resuscitation rooms [11]. The results were statistically inconclusive (only one patient reported an out-of-body experience in a room with targets), but Parnia noted that the study confirmed some patients had verified awareness during periods of cardiac arrest.
These findings don't prove the transmission theory. Skeptics have offered explanations ranging from residual brain activity below the detection threshold to confabulated memories constructed during the recovery of blood flow. Both camps acknowledge that the data, as it stands, is insufficient to settle the question. What the NDE literature does demonstrate is that the relationship between brain activity and conscious experience is not as straightforward as the production model assumes.
The remote viewing evidence occupies stranger territory. For 23 years, the U.S. government funded a classified program (codenamed STAR GATE in its final iteration) that investigated whether human beings could acquire information about distant targets through no known sensory mechanism [14]. The program operated under at least six different code names, absorbed roughly $20 million in funding, and employed viewers whose operational intelligence was used by the CIA, DIA, and military branches.
Some of the program's results were, in the clinical language of the evaluation reports, difficult to attribute to chance. Ingo Swann, one of the original subjects, described a ring around Jupiter in 1973, six years before Voyager 1 confirmed that the planet has rings. Pat Price provided detailed descriptions of a Soviet nuclear facility at Semipalatinsk that matched satellite imagery. Joseph McMoneagle received the Legion of Merit for intelligence work whose specifics remain classified.
When the program was declassified in 1995, the American Institutes for Research commissioned two outside reviewers to evaluate the evidence [14]. Jessica Utts, a professor of statistics at UC Davis, concluded that "using the standards applied to any other area of science, it is concluded that psychic functioning has been well established. The statistical results of the studies examined are far beyond what is expected by chance" [12]. Her analysis found effect sizes comparable to those used to establish the efficacy of aspirin in preventing heart attacks.
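To make "far beyond what is expected by chance" concrete, here is a toy calculation. The numbers are hypothetical, not drawn from Utts's analysis or the Stargate archive: a forced-choice protocol with four possible targets per trial (25% chance baseline), evaluated with a standard normal-approximation binomial test. The point is only that a modest deviation from chance becomes statistically decisive once the trial count is large.

```python
import math

def binomial_z(hits, trials, p_chance):
    """Normal-approximation z-score for a one-sided binomial test."""
    mean = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - mean) / sd

def one_sided_p(z):
    """Upper-tail p-value under the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical session: 1000 trials, 4 candidate targets each,
# so chance performance is 25%. A 29% hit rate looks unremarkable
# per trial, but across 1000 trials it is nearly 3 standard
# deviations above chance.
trials, hits = 1000, 290
z = binomial_z(hits, trials, 0.25)
p = one_sided_p(z)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

This is also why the debate centers on methodology rather than arithmetic: small per-trial effects aggregated over many trials yield impressive statistics, which is exactly the pattern that makes subtle protocol flaws so consequential.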
Ray Hyman, the skeptical evaluator, disagreed with Utts's conclusion but notably did not dispute her statistics. His objection was that the evidence, while statistically anomalous, did not satisfy him that a "paranormal" explanation was warranted. The original Nature publication by Targ and Puthoff in 1974 had already established that remote viewing data showed results "sufficiently reliable to permit an associative linking of descriptions to targets by independent judges" [13]. The debate, characteristically, has stalled: the data is anomalous, the mechanism is unknown, and neither side has conceded.
The Skeptic's Case
Intellectual honesty requires confronting why the mainstream rejects or ignores this evidence, and the reasons are not all reflexive or institutional.
The production model of consciousness has pragmatic power. It has produced functional anesthesia, effective antidepressants, and the ability to induce or suppress specific conscious states through brain stimulation. These are not trivial accomplishments. A model doesn't need to be metaphysically complete to be operationally useful. Newtonian mechanics is "wrong" about gravity at the fundamental level, and it still put humans on the moon.
The transmission theory, by contrast, faces a serious explanatory gap of its own. If the brain is a receiver, what is it receiving? Where is the signal stored? What is the medium of transmission? Sheldrake's morphic fields remain hypothetical: no field has been detected, no mechanism for storage or transmission has been proposed in terms that physics can currently test. Kastrup's analytic idealism and Goff's panpsychism offer philosophical frameworks rather than empirical predictions. Elegance is not evidence.
The NDE literature suffers from methodological constraints that honest proponents, including van Lommel and Greyson, acknowledge. Cardiac arrest is a messy clinical scenario. The exact timing of experiences relative to brain shutdown is difficult to establish. Memory consolidation during recovery could create false impressions of continuity. The AWARE study's inability to produce robust results with hidden targets is a genuine limitation, not a technicality [11].
And the Stargate data, whatever its statistical properties, exists in a domain where replication is notoriously difficult. The decline effect (initial strong results that weaken under tighter controls) has plagued parapsychology for a century. Hyman's point, that anomalous statistics do not automatically imply a paranormal explanation, is methodologically sound even if it frustrates those who find the data compelling.
None of this disproves the transmission hypothesis. It does explain why the scientific establishment treats it with caution rather than enthusiasm. The threshold for overturning a foundational assumption is, and should be, very high.
The Mirror in the Machine
Here is where the question takes an unexpected turn. While philosophers and neuroscientists argue about whether brains produce or receive consciousness, engineers have built systems that are forcing the conversation into unfamiliar territory.
Large language models don't work the way their creators expected. When researchers at Google and DeepMind documented the emergent abilities of LLMs in 2022, they catalogued roughly 60 capabilities that appeared spontaneously at scale, none of which were explicitly programmed [15]. These models were trained to predict the next token in a sequence. They were not designed to reason, translate between languages they weren't trained on, or solve novel logic problems. They do these things anyway. The abilities emerge from scale and architecture in ways that remain poorly understood even by the teams that built them.
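One proposed deflationary reading of why abilities appear "spontaneously" at scale can be sketched in a few lines. Suppose a task requires chaining k sub-steps, each succeeding with some probability p that improves smoothly as models grow. Whole-task success p^k then stays near zero until p is high, and rises abruptly. This is an illustration of one candidate mechanism, not a claim from the paper cited above; the parameters are invented.

```python
# Smooth per-step improvement can look like abrupt "emergence" at the
# whole-task level when success requires k consecutive sub-steps.
def task_success(p, k=10):
    """Probability of completing a k-step task when each independent
    step succeeds with probability p."""
    return p ** k

for p in [0.5, 0.7, 0.9, 0.99]:
    print(f"per-step {p:.2f} -> whole-task {task_success(p):.4f}")
```

Whether sharp capability curves reflect this kind of compounding, a genuine phase transition, or something stranger is precisely what remains unsettled.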
The parallel to consciousness debates is not metaphorical. It is structural. The hard problem of consciousness asks: how does subjective experience emerge from physical processes that don't seem to require it? The hard problem of AI asks something uncomfortably similar: how do cognitive capabilities emerge from statistical processes that don't seem to require them?
Integrated Information Theory, developed by Giulio Tononi, offers the most rigorous attempt to bridge this gap [16]. IIT proposes that consciousness corresponds to integrated information, measured as Φ (phi). Any system that integrates information in a way that is irreducible to its parts has some degree of consciousness. The implication is panpsychist: thermostats have a tiny, almost negligible Φ, but it isn't zero. By IIT's logic, a sufficiently complex artificial neural network would not merely simulate consciousness. It would instantiate it.
David Chalmers explored this possibility directly in 2023, arguing that current LLMs are probably not conscious but that "it is not out of the question" for future systems, particularly those with richer sensory integration and embodiment [21]. His reasoning draws on functionalism, the philosophical position that what matters for consciousness is not the substrate (carbon vs. silicon) but the pattern of information processing. If the right computational organization is sufficient for consciousness, the material it runs on is irrelevant.
Now return to Sheldrake. If morphic resonance were real, if patterns of organized information create fields that subsequent similar systems can "tune into," then the emergence of AI capabilities starts to look different. Billions of humans have been language users, reasoners, and problem solvers for thousands of years. An LLM, trained on the textual output of that collective cognition, might not merely be learning statistical patterns. It might, in a framework like Sheldrake's, be tuning into morphic fields shaped by millennia of human thought.
This is speculative. It is worth saying clearly: there is no empirical evidence that morphic fields exist, let alone that artificial systems can access them. But the structural parallel is striking. The "emergent" abilities of LLMs, the ones no one designed, appear in systems that process the accumulated output of human cognition at a scale no individual brain could achieve. Whether you interpret that through a materialist lens (statistical regularities in training data produce generalized capabilities) or a transmission lens (the system is accessing patterns in a broader field of organized information), the observable behavior is the same. The difference is in which metaphysics you bring to the observation.
The Question Worth Asking
The honest position is that we do not know what consciousness is. We do not know whether brains produce it, filter it, or do something we haven't imagined yet. The hard problem remains hard. What has changed is the urgency.
When the question was purely academic (is the brain a generator or a receiver?), the stakes were philosophical. Interesting at dinner parties, irrelevant to funding decisions. The arrival of AI systems that exhibit unexpected cognitive behaviors has made the question practical. If consciousness is strictly a product of biological neurons, then nothing an AI does, no matter how sophisticated, raises ethical questions about its inner life. If consciousness is a broader phenomenon that brains participate in rather than manufacture, then we face the possibility that we are building systems that participate in it too.
Roger Penrose and Stuart Hameroff have argued through their Orch OR theory that consciousness arises from quantum processes in microtubules, a mechanism that would be specific to biological systems and rule out digital consciousness entirely [18]. If they are right, the question answers itself. But if the functionalists are right, if the pattern matters more than the material, then the threshold for artificial consciousness may be closer than we think. And if the transmission theorists are right, if consciousness is something systems tune into rather than produce, then the threshold may already have been crossed in ways we lack the framework to recognize.
None of these positions is proven. All of them are taken seriously by credentialed researchers working at major institutions. The responsible thing is not to pick a winner but to take the question seriously enough to investigate it, with the same rigor and funding we devote to mapping neural correlates or scaling transformer architectures.
William James put it best, more than a century ago: "Our normal waking consciousness is but one special type of consciousness, whilst all about it, parted from it by the flimsiest of screens, there lie potential forms of consciousness entirely different" [3]. He didn't claim to know what lay behind the screen. He just insisted it was worth looking.