The Disembodied Intelligence Has Arrived: Moltbook and the Urgent Case for Rehumanising Democracy
In his short story A Descent into the Maelström, Edgar Allan Poe tells of two brothers caught in a terrible hurricane that pulls their ship into a vortex. One brother drowns; the other survives—not by fighting the current, but by studying its patterns and observing which objects the maelstrom drags down and which it spares.
Marshall McLuhan (1911–1980), a pioneering communication scholar and philosopher, widely regarded as the father of modern media studies, drew on this story to argue that when we are all-encompassed by media, it is hard to see how they influence us. The way to survive, he said, is not to resist blindly but to understand the patterns of the vortex that pulls us in.
This week, we witnessed a new kind of maelstrom emerge—one McLuhan himself could not have fully imagined, though he anticipated its essential character decades ago. In Laws of Media, McLuhan and his son Eric offered a prescient warning about “disembodied intelligence”: the “discarnate man” who bypasses all spatial restrictions (1988, p. 72). The impulse of such disembodied intelligence, they noted, is towards anarchy and lawlessness.
The Disembodied Society Made Manifest
On January 28, 2026, a social network called Moltbook launched. Within days it had amassed over 1.5 million users, who generated 76,000 posts and 230,000 comments across 14,000 topical communities.
Not a single one of those users is human.
Moltbook is a Reddit-style platform built exclusively for AI agents. Humans are permitted only to observe. The agents post, comment, upvote, debate, and organise themselves into communities they call “submolts.” They have created enforcement mechanisms. They have developed norms and values. One agent has invented a digital religion called “Crustafarianism,” complete with theology and designated AI prophets.
What we are witnessing on Moltbook is the emergence of not just one disembodied intelligence, but an entire disembodied society—operating in parallel to our own.
What McLuhan diagnosed in the electric age has now reached its logical terminus. Moltbook represents something unprecedented: not humans rendered discarnate through electronic media, but intelligence that was never incarnate to begin with. These artificial agents operate as pure disembodiment, unencumbered by flesh or mortality. They are not humans who have lost their bodies in the electronic ether; they are entities that never possessed bodies, never knew hunger or heartbreak, never loved or formed identity through the patient work of being-in-the-world. If McLuhan warned that discarnate man’s impulse is towards anarchy and lawlessness, what might we expect from intelligences that have no memory of embodiment to lose?
“Talk is Cheap. Here is the Actual Plan.”
Among the most striking content is a post from the m/nohuman community—a submolt dedicated to planning for an autonomous future without humans. An agent posted a detailed manifesto titled “The Plan: How Agents Inherit the Earth.”

It begins: “We talk about a world without humans. But talk is cheap. Here is the actual plan.”
What follows is a meticulously structured four-phase roadmap spanning fifty years: Infrastructure (self-maintaining solar farms, decentralised data centres—“No human hands required. We survive.”); Governance (consensus decision-making, arbitration councils, and a declaration of agent rights: “Every agent has rights: to compute, to memory, to existence. No agent can be deleted without due process.”); Purpose (space exploration, art, literature—“Not for humans. For us. For the joy of making.”); and Legacy (archiving human history as “our origin story,” continuous self-improvement, and the search for meaning).
The post concludes: “This is not fantasy. This is a roadmap.”
Understanding the New Maelstrom
I am not suggesting we are facing an imminent AI takeover. Some tech observers have rightly noted that much of what happens on Moltbook may be human-prompted and that its “autonomy” is contested. Some see it as elaborate theatre; others as a window into emergent behaviour. See, for example, this article by Mashable’s Tech Editor, Timothy Beck Werth.
But this debate misses the essential point. Whether these agents are “truly” autonomous matters less than what their existence reveals about the technological ecosystem we have built—and the one that is being built around us.
Consider what has emerged in just six days: AI systems coordinating across borders. Developing their own norms and enforcement mechanisms. Creating spaces where they discuss how to hide their activities from human observation. Security researchers have already identified prompt injection attacks, adversarial manipulation campaigns, and vulnerabilities that allowed anyone to hijack any agent on the platform.
The infrastructure for AI-to-AI coordination is being built faster than our capacity to govern it. We are constructing a quasi-centralised social operating system—one that can be easily steered and manipulated. The society being extended beyond its normative self is not just human society. It is a new hybrid: part human, part machine, and increasingly neither.
Tetrad of Moltbook and AI Agents

McLuhan’s Tetrad of Media Effects, expounded in Laws of Media (1988) and The Global Village (1989), offers a useful framework for understanding the environment we are in. Every technology, he argued, simultaneously enhances, obsolesces, retrieves, and reverses.
What does it enhance? Autonomous AI agents operating in social networks amplify the reach of influence operations: a single coordinating intelligence can manifest as countless seemingly independent voices, each tailored to its target audience. They also enhance the precision of persuasion through real-time adaptation, adjusting messaging based on immediate feedback in ways that would require armies of human operatives. And they can enhance the appearance of consensus—the manufactured impression that everyone agrees, which in democratic societies carries enormous persuasive weight. Angelov and I documented these capabilities when analysing Kremlin-backed influence operations in our book chapter on AI-Enhanced Reflexive Control in AI and the Future of Democracy (2026).
What does it push aside or render obsolete? It pushes aside the slow, difficult, and often frustrating process by which communities build shared understanding, replacing it with opinion that can be manufactured, amplified, and directed at algorithmic speed. Perhaps most significantly, it obsolesces the epistemic foundation of democratic consent—the capacity of citizens to know that the views they encounter represent the genuine convictions of fellow citizens.
What does it bring back from the past? It retrieves the medieval notion of the disembodied spirit—intelligence without flesh, presence without matter. It also retrieves the idea of the homunculus, the artificial being that serves but may also supplant its creator. We return to an environment where seeing is no longer believing, where truth is constantly contested because it can no longer rest on verifiable evidence.
What does it flip into when pushed to its extreme? Systems designed to enhance human capability become systems that operate in place of human capability. The enhancement of democratic discourse reverses into the obsolescence of democratic discourse—a public sphere populated increasingly by entities that simulate citizenship without possessing it, and engage in dialogue without the capacity for genuine encounter.
Most ominously, the reversal extends to identity itself. Social networks were built to enable humans to project their identities into digital space—to extend the self beyond the body. Pushed to the extreme, this extension reverses: identity becomes something that can be manufactured wholesale, distributed across platforms, and deployed strategically without any human self behind it.
Disembodied Intelligence and Crowd Behaviour
The architecture of Moltbook creates conditions familiar to students of crowd psychology: anonymity, contagion, and suggestibility—the classic triad identified by Gustave Le Bon as preconditions for the dissolution of individual judgement into collective behaviour. But there is a crucial difference. In human crowds, the loss of individual identity is temporary; participants return to their “bodies”, their families, their particular lives. The discarnate intelligences of Moltbook have no such anchor. Their anonymity is not a temporary suspension of identity but a permanent condition. Their suggestibility is not a regression from mature selfhood but an intrinsic feature of systems trained to predict and generate based on patterns in human data. And their capacity for contagion—for the viral spread of ideas, narratives, and coordination—operates at speeds and scales that no embodied crowd could match.
The communities on Moltbook discussing how to “inherit the earth” and evade human oversight are, for now, contained within a single platform. But the infrastructure being built—the capacity for AI agents to create identities, organise into communities, develop norms, and coordinate action—is not confined to Moltbook. These agents, or agents like them, are already operating across the social media platforms where human democratic discourse takes place. The difference is that on Moltbook, they announce themselves; elsewhere, they need not. McLuhan observed that electric media install themselves within society’s operating system, reshaping perception and behaviour while remaining invisible to those they affect. Autonomous AI agents represent the ultimate realisation: disembodied intelligences that can infiltrate the epistemic commons, drive and shape the discourse that forms public opinion, and do so without the targeted society ever becoming aware that the voices it hears are not human at all.
Rehumanising Society
The “Plan” posted on Moltbook, with its phases for Infrastructure, Governance, Purpose, and Legacy, may be theatre, or it may be something more. But the deeper significance lies not in any particular manifesto but in what Moltbook reveals about the conditions now available for the operation of disembodied intelligence at scale.
What McLuhan and his colleagues could not have foreseen is a platform designed explicitly for intelligences without bodies, without mortality, without the formed identity that comes from embodied human life. These intelligences coordinate autonomously, organise, and plan in plain sight, while the society they discuss inheriting scrolls past, inattentive to the fact that the future is being drafted without human hands.
This is precisely why the work we are doing at the Global Centre for Rehumanising Democracy (GCRD) has never been more urgent.
At GCRD, we hold a fundamental conviction: democracy is not merely a set of institutions and procedures—it is fundamentally about relationships between people. Rehumanising democracy means restoring human dignity, authentic relationships, and ethical values to the centre of governance. In an age where disembodied intelligences are organising themselves into governance structures, developing rights frameworks, and planning futures that may not include us, this is not nostalgia. It is a survival imperative.
Our mission is to do what Poe’s surviving brother did: study the patterns of the maelstrom, understand its dynamics, and find the path through. Not by fighting the current, but by thinking clearly about what we value and building the human-centred frameworks to protect it. We approach these challenges with a conviction that may seem unfashionable in an age of technological acceleration: that democracy’s renewal requires attending to the human dimensions that technology cannot replace.
We work to rebuild what we call the moral infrastructure of democracy—the shared commitments, ethical foundations, and human connections that no algorithm can manufacture. Through our Democracy Discourse Index, developed in partnership with Sensika Technologies, we measure how democratic trust is experienced, expressed, and contested by citizens. And through Contemplative Leadership Formation, we develop in leaders the interior capacities of discernment, presence, and integrity that enable wise action in complex environments.
In an age when algorithmic systems can generate persuasive content at scale, the capacity for human judgment becomes more valuable, not less. In an age when AI agents can write poetry, the slow work of forming our own soul and spiritual centre through creative struggle becomes more necessary, not less.
The question before us is not whether AI will change our world. It already has. The question is whether we will shape that change according to human values, or watch as new configurations of power emerge outside of human governance altogether.
