Neural Mortality: Letting AI die to break the predictive tyranny of the past
Words by Pau Aleikum, edited by Jaya Bonelli
Let’s start with an observation: AI models, such as GPT-4o, are astonishingly good at faking brilliance. Ask them to generate ideas, and they’ll churn out something shiny and plausible, the same way a toddler reciting random Shakespeare quotes might sound profound and prodigious. But spend any real time with them — weeks, months, years — and the illusion starts to crack. Suddenly, your AI-generated insights feel like reading the same horoscope over and over. Familiar words, mildly inspiring, and ultimately… empty. That sentence structure? You’ve seen it before. The phrasing of that clever metaphor? It’s starting to echo the one you used in a different project last week. Slowly, like wear patterns on a favorite pair of shoes, the repetition emerges. This isn’t the model’s fault. It’s doing exactly what it was designed to do: predict. AI is the world’s greatest “what comes next” machine. The novelty fades, as it was bound to. The LLM, for all its apparent creativity, reveals itself to be what it has always been: a strictly predictive model, a creature of patterns, and we start to become less interested in using it.
This isn’t because the content it produces is bad (it’s fine, really). It’s because repetition, while comforting in a “background noise” sort of way, is also deeply uninteresting. Predictive models are the same: they’re optimized for the middle of the Gaussian bell curve. That’s why they’re great at writing LinkedIn posts but terrible at producing the next Ulysses. They’re designed to reflect the world as it is, not as it could be, reinforcing the status quo.
The Predictive Loop: Comfort in Familiarity, Limits on Divergence
At their core, NLP models rely on probabilities. They don’t “think” or “create” in any human sense. They identify patterns in data, weaving the most likely next step in a sequence. In this way, predictive AI models are masterful imitators. But therein lies their Achilles’ heel. They are stuck in what you might call the tyranny of the likely.
This tyranny feels safe. It’s why LLMs can produce a perfect email draft, mimic the tone of a New Yorker article, or write a plausible poem about heartbreak. But work with these models long enough, and you start to see their scaffolding: the predictable sentence starters, the overly polished symmetry in their arguments, and the recurrence of their favorite rhetorical tricks.
If creativity thrives on the unexpected, on leaps of faith that defy linear logic, then LLMs are, ironically, the enemies of “true creativity”. AI is pathologically linear. It doesn’t leap; it tiptoes. Their predictive nature locks them into the well-trodden paths of likelihood, unable to forge the messy, nonlinear connections that birth genuinely new ideas.
So here’s an idea: we could make good use of a novel concept of “neural mortality”: an AI system that dies, burning its connections and knowledge as we progressively use it. Every time we use one path of the model, we burn it down, so the model cannot use that information again. And as its more “reliable” circuits shut down, it would be forced to make increasingly tenuous connections, mimicking the kind of lateral thinking humans excel at.
What If Predictive Models Could Die?
At the same time, perhaps it is our human finiteness that enables us to take such drastic and radical creative leaps. Maybe it’s a form of survival instinct: in order to always find a way out, we need to be quick on our feet and quick to invent, always ready to think outside the box and imagine something crazy, something different.
We don’t yet have any clue how to build it, but what if we turned the limitation of repetitiveness into an opportunity? Imagine an NLP model designed not as a static neural network but as a finite, expiring system. Each time you call on a part of the model’s “brain” — a neural pathway — it weakens. Over time, these pathways atrophy, “dying” in a sense, forcing the model to adapt by routing through less obvious, less optimized parts of its architecture.
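To make the mechanics a little more concrete, here is a minimal, purely illustrative sketch in Python (numpy only). The layer, the per-connection “vitality” score, and the decay rule are all invented for illustration; nothing here describes how any real LLM is built.

```python
import numpy as np

rng = np.random.default_rng(0)

class MortalLayer:
    """A toy dense layer whose connections 'atrophy' the more they are used.

    Hypothetical sketch: 'vitality' and 'decay' are made-up quantities
    standing in for the essay's idea of pathways that burn out with use.
    """

    def __init__(self, n_in, n_out, decay=0.05):
        self.w = rng.normal(size=(n_in, n_out))
        self.vitality = np.ones_like(self.w)  # 1.0 = healthy, 0.0 = dead
        self.decay = decay                    # hypothetical atrophy rate

    def forward(self, x):
        # Only the surviving fraction of each connection contributes.
        y = x @ (self.w * self.vitality)
        # The connections that carry the most signal atrophy the fastest,
        # slowly pushing the layer onto its less-travelled paths.
        usage = np.abs(x[:, None] * self.w)
        usage /= usage.max() + 1e-9
        self.vitality = np.clip(self.vitality - self.decay * usage, 0.0, 1.0)
        return y


layer = MortalLayer(8, 4)
x = rng.normal(size=8)
for _ in range(30):
    out = layer.forward(x)
print("connections still above half vitality:", (layer.vitality > 0.5).mean())
```

Fed the same input again and again, the layer’s strongest connections fade first, so later outputs are shaped more and more by the weaker weights it would normally ignore.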
On the surface, this might seem wasteful. Why would you want a model to deteriorate? Isn’t the whole point of AI to maximize efficiency? But let’s consider a counterintuitive possibility: that this forced scarcity, this gradual degradation, could unlock a kind of latent creativity. By disrupting the model’s reliance on its most efficient, well-worn neural circuits, we might compel it to explore the more obscure, tenuous connections within its network — connections that, precisely because they are less probable, could yield surprising, even transformative results.
In a finite AI model, deterioration isn’t a bug; it’s a feature, or at least a quirk. The idea isn’t to create a system that lasts forever but one that burns bright and fast, like a creative comet. Over time, the model would become increasingly erratic, generating ideas that are less polished but more surprising.
A Model Built for Divergence: The Case for Neural Scarcity
The philosopher and neuroscientist Anil Seth has written about the brain as a “prediction engine,” constantly guessing and updating its model of reality. But the human brain, unlike an LLM, thrives on error and improvisation. We embrace gaps in knowledge, filling them with intuition, apophenia (if you don’t know this term, here’s some good material for a digital rabbit hole), and leaps of imagination. What if we designed AI to do the same?
Consider the artistic principle of constraint. Writers have long embraced the creative potential of scarcity — think of Shakespeare’s sonnets, constrained by iambic pentameter and rhyme schemes, or the six-word story often attributed to Ernest Hemingway: “For sale: baby shoes, never worn.” Could imposing a kind of neural mortality on AI models create similar conditions for innovation?
By deliberately “killing” parts of the network, we would be introducing an element of chaos. The model, stripped of its most reliable tools, might begin to make leaps it otherwise wouldn’t. A dying neural network could be a truly creative neural network.
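For contrast with the gradual decay sketched earlier, here is another hypothetical illustration: a one-shot “constraint pass” that permanently silences the busiest hidden units of a toy network. Again this is plain numpy and an invented toy, not a description of any existing system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-layer network; 'alive' tracks which hidden units still function.
w_in = rng.normal(size=(16, 32))   # input -> hidden weights
w_out = rng.normal(size=(32, 8))   # hidden -> output weights
alive = np.ones(32, dtype=bool)

def forward(x):
    hidden = np.maximum(x @ w_in, 0.0) * alive  # ReLU, dead units muted
    return hidden @ w_out, hidden

# Accumulate how much each hidden unit is used over a batch of inputs.
usage = np.zeros(32)
for _ in range(100):
    _, hidden = forward(rng.normal(size=16))
    usage += hidden

# "Kill" the busiest quarter of the units, stripping the network of its
# most reliable tools and forcing output through the quieter ones.
busiest = np.argsort(usage)[-8:]
alive[busiest] = False

out, _ = forward(rng.normal(size=16))
print("units still alive:", alive.sum(), "of 32")
```

The design choice here is deliberate crudeness: rather than letting pathways fade gradually, it removes the most relied-upon ones in one stroke, which is closer to the “element of chaos” the constraint analogy suggests.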
The Beauty of Divergence: From Predictability to Possibility
There’s a parallel here to the phenomenon of apophenia — the human tendency to perceive patterns where none exist. It’s what allows us to see shapes in clouds or hear hidden messages in random noise. Apophenia, while often dismissed as a cognitive quirk, can also be a wellspring of creativity. It’s a likely catalyst for metaphor, connection, and insight.
A scarcity-driven AI might not “hallucinate” in quite the same way, but by forcing it to use less obvious parts of its network, we could push it toward divergent thinking. Over time, such a model might develop a kind of aesthetic unpredictability — a willingness to generate ideas that feel less like polished replicas and more like genuine provocations.
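One cheap way to approximate this without touching the network at all would be at decoding time. The sketch below is a hypothetical “scarcity-driven” sampler: the `worn` counter and `wear_penalty` are invented names, and the idea is simply that tokens the model has already leaned on get suppressed, nudging generation toward less obvious continuations.

```python
import numpy as np

rng = np.random.default_rng(1)

def divergent_sample(logits, worn=None, wear_penalty=1.5, temperature=1.0):
    """Sample the next token while penalising choices already 'worn out'.

    Hypothetical sketch of scarcity-driven decoding: 'worn' counts how often
    each token has been chosen, and well-worn tokens are suppressed.
    """
    if worn is None:
        worn = np.zeros_like(logits)
    adjusted = logits / temperature - wear_penalty * worn
    probs = np.exp(adjusted - adjusted.max())
    probs /= probs.sum()
    token = rng.choice(len(logits), p=probs)
    worn[token] += 1.0  # this path is now a little more "burned"
    return token, worn

# Toy vocabulary of 6 tokens; token 2 starts out overwhelmingly likely.
logits = np.array([0.1, 0.3, 3.0, 0.2, 0.1, 0.4])
worn = np.zeros_like(logits)
picks = []
for _ in range(10):
    tok, worn = divergent_sample(logits, worn)
    picks.append(int(tok))
print(picks)  # early picks favour token 2, later picks drift elsewhere
```

The first few samples cluster on the obvious choice; as its “wear” accumulates, the sampler drifts toward options the model would normally pass over, which is the behaviour the essay is gesturing at.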
A New Way of Working With AI: Embracing the Imperfect
The idea of a “dying” AI challenges our assumptions about efficiency, permanence, and progress. It asks us to think of AI not as a flawless oracle but as a flawed collaborator, capable of surprising us precisely because it is imperfect.
Imagine the potential applications. A writer, frustrated by a conventional LLM’s rote suggestions, could turn to its scarcity-driven, mortal cousin for inspiration. An artist might use the model’s increasingly erratic outputs as a catalyst for creative breakthroughs. A researcher could explore the model’s outputs as a way of uncovering unexpected connections in data (and actually, this is already happening — with “AI hallucinations”).
In a way, this approach mirrors the human creative process. We don’t always start with the best ideas. Often, we wander, make mistakes, and follow tangents. It is in these detours that we find the spark of something new.
The Risks and Rewards of Neural Mortality
Of course, a finite, “dying” AI raises practical and ethical questions. How do we balance the need for creativity with the risk of diminishing returns? What do we do when the model becomes frustratingly unreliable over time? And how do we ensure that this approach doesn’t simply create a new kind of bias — one rooted in the model’s own evolving architecture?
Yet, these challenges are part of the appeal. They remind us that creativity is not a linear process. It is messy, unpredictable, and often uncomfortable. By designing AI systems that embrace this messiness, we might move closer to a vision of AI that complements human ingenuity rather than merely replicating it.
A Challenge to Rethink AI
So here’s the invitation: What would it mean to build an AI model designed to deteriorate? To embrace scarcity as a feature rather than a bug? To push beyond the predictable, and into the realm of the truly surprising?
Perhaps the answer lies not in perfecting our models but in breaking them — or rather, allowing them to break themselves. In doing so, we might rediscover what we’ve always known: that the richest ideas often emerge not from what is probable but from what is possible.