The Crucial Role of Time in the Adoption and Integration of AI
Writing evolved our intelligence instead of replacing it, and with enough time, so could AI.
Humanity’s relationship to AI is often portrayed as a zero-sum game: either we’ll be physically or economically wiped out by super-intelligence, or that same intelligence will fashion us into gods.
The crucial factor that most people overlook is time.
All indications suggest that AI will eventually develop into a generalized super-intelligence whose output can meet or exceed our own.
I doubt that we’ll be able to resist the temptation to let it take over our economy. A tireless mind that needs no food and doesn’t complain about healthcare is very appealing to businesses. When it can offer legal, medical, and engineering services for almost no cost, that will appeal greatly to consumers.
But today’s advanced AIs don’t “think” in the same way that we humans do. How could they? We don’t yet understand how our own minds work, nor have we mastered the basic physiological processes that support our brains, like sleep or nutrition.
And that's before we open the can of worms that is spirituality. Many of the leading engineers and investors in Silicon Valley are active proponents of psychedelics, which has sparked a new spiritual awakening in the heart of the AI revolution. Have these humans reconciled the opening of machine minds with the opening of their third eyes? Some are trying.
I think there’s plenty of mystery still left in humans for us to create something altogether different from what AI can produce. If we could augment our own output with AI’s, perhaps we could evolve our own intelligence.
How the invention of writing evolved our intelligence
There are precedents. In Plato’s Phaedrus, Socrates decries the invention of writing, then a relative novelty in Greece, because he believes it will supplant memory and make us stupider.
What Socrates couldn’t have seen is how writing changed the very way that our minds work. Our greatest scientific achievements would scarcely be imaginable without mathematical notation or the ability to precisely communicate and record observations. Some of our deepest sentiments and emotions, from schadenfreude to hygge, might never have been felt without the ability to write about them.
Just as writing proved to be much more than a memory-killer, we can direct AI development in a similar direction, augmenting our intelligence and spirituality rather than supplanting them.
To clarify, augmentation isn’t a way to “outsmart” AI—that’s a fool’s errand. It’s about differentiating our intelligence from what we have today and, by extension, what LLMs have been trained to mimic. A new human intelligence would create worlds that we can’t yet imagine, ones less bleak than the zero-sum world in which AI replaces all human creative output.
Timing is everything
All these changes depend on how much time we have before we reach generalized artificial super-intelligence.
If LLMs are ready within a year or two to replace every profession that requires no physical presence, I’m pretty sure we’re screwed. In theory, government would be the most likely actor to step in and handle the fallout. In reality, our sclerotic politics show no sign of being able to cope with the sudden displacement of millions of jobs. And for those replaced by AI, the hit to their sense of purpose and meaning would cut even deeper than the economic one.
But if LLMs still need, say, ten years to replace doctors and lawyers, programmers and marketers, and so on, then maybe we have hope.