Anil Seth’s essay ‘The Mythology of Conscious AI’ arrives at a crucial moment in our cultural conversation about artificial intelligence.

What I want to do here is not simply summarise Seth’s arguments, which are compelling in their own right, but to show how they connect with a broader intellectual ecosystem. When read alongside Sherry Turkle’s warnings about technological memory, Shannon Vallor’s insights about AI as cultural mirror, and James Bridle’s observations about technology revealing how we think, Seth’s biological naturalism becomes something larger. It transforms from a philosophical position about consciousness into a vital framework for understanding how we are reshaping ourselves in the image of our machines.

The argument I want to make is this: the real danger of AI is not that machines might become conscious, but that in our fascination with computational models of mind, we are forgetting what biological consciousness actually is. We are flattening the richness of embodied, metabolic, mortal existence into information-processing schemas. And in doing so, we risk not just misunderstanding machines, but misunderstanding ourselves.

Whilst Seth’s arguments against computational functionalism are rigorous and compelling, the essay’s deeper significance lies in what it reveals about how we think about ourselves: read this way, biological naturalism becomes a vital defence of human distinctiveness in an era increasingly shaped by computational metaphors and algorithmic thinking.

The Mirror and What It Reflects

Shannon Vallor’s observation that AI functions as a mirror, reflecting ‘the incident light of our digitised past’, provides a crucial lens through which to understand Seth’s concerns. We see ourselves in our algorithms, Vallor notes, but we also see our algorithms in ourselves. This reciprocal relationship lies at the heart of Seth’s warning about the ‘mechanisation of the mind’ as perhaps the most pernicious consequence of the rush towards human-like AI. When we conflate the richness of biological brains and human experience with information-processing systems, we commit what philosophers call a category error. More dangerously, we begin to reshape our self-understanding to match the limitations of our technological metaphors.

Seth’s arguments against computational functionalism (that brains lack the clean software-hardware separation of computers, that they operate in continuous rather than discrete time, that their multi-scale integration resists algorithmic abstraction) all point towards a fundamental incommensurability between biological and computational systems. Yet if Vallor is right about AI as mirror, the danger is not merely that we overestimate our machines but that we progressively underestimate ourselves, flattening the complexity of human consciousness to fit computational categories. This is the real mythology: not that machines might become conscious, but that we are already convincing ourselves that consciousness is the sort of thing that could emerge from silicon and sequence.

Technology as Epistemology

James Bridle’s observation that technology reveals how we think about the world extends Seth’s analysis into a broader cultural critique. Bridle’s provocative claim that the limited company itself might be understood as a form of artificial intelligence (a non-biological entity that pursues goals, processes information, and shapes outcomes) suggests that our current anxiety about AI is merely the latest iteration of a much older pattern. We have been creating artificial agents, structures that think and act in ways that escape individual human control, for centuries. What makes contemporary AI distinctive is not necessarily its capabilities but the way it makes visible the extent to which computational thinking has colonised our understanding of intelligence, agency, and ultimately consciousness itself.

Soul as Breath, Not Algorithm

Seth concludes his essay with a meditation on the soul, understood not in Cartesian terms as an immaterial essence but in older senses: as breath (the Greek psychē) or as pure witnessing awareness (the Hindu Ātman). This return to biological and phenomenological understandings of what makes us fundamentally ourselves provides the essay’s ultimate answer to the mythology of conscious AI. What really matters is not any disembodied, computational essence but ‘an inchoate feeling of just being alive, more breath than thought and more meat than machine’.

This reframing has profound implications for how we navigate the age of AI. It suggests that the crucial defence against the mechanisation of mind is not primarily philosophical argument but lived practice: the cultivation of ways of being that honour our biological nature. To breathe deeply, to attend to bodily sensation, to recognise emotional states as somatic events, to accept the necessity of rest and restoration, to value presence over productivity: these become not merely wellness practices but acts of ontological resistance. They are ways of remembering, in Turkle’s sense, what we know about life. They are ways of refusing to see ourselves in Vallor’s mirror of computational rationality. They are ways of insisting, with Seth, that consciousness is breath before it is algorithm, flesh before it is code.

The future that Seth, Turkle, Vallor, and Bridle collectively warn us about is not one in which machines become conscious but one in which humans become machinic. The real risk is not artificial consciousness but computational unconsciousness: the progressive narrowing of human awareness to match the capabilities and categories of our technological systems. Against this, we must assert not merely arguments but ways of living, ways of thinking, and ways of being that honour the irreducible strangeness of biological consciousness. The defence of human distinctiveness in the age of AI is, finally, the defence of life itself: messy, inefficient, mortal, and miraculous.


Further reading:

↳ The epistemological stakes of AI consciousness debates are mapped formally in The Dimensions of Not Knowing.

↳ The risk of humans becoming more machinic connects to the capacity argument in Productivity Is the Wrong Word.
