You’ve heard the story. Advanced technology is created, turns deviant, and coldly proceeds to supplant humanity. It is one of the more enduring science fiction tropes. From the cautionary imagery of Mary Shelley’s Frankenstein to Tom Cruise battling Artificial Intelligence in the latest Mission Impossible blockbuster, the robots going rogue theme consistently occupies our imaginative landscape.
Why? What is so enduring about benign technology gaining consciousness and plotting our demise? Likely because the AI takeover story reflects some of our deepest anxieties: loss of control, the uncertainty of progress, and fear of the other.
The anxiety is not unfounded. Although economists have long offered data-backed arguments that innovative technology, commercial automation, and artificial intelligence complement human labor more than they substitute for it, everything seemed to change on November 30, 2022. ChatGPT not only achieved the fastest adoption rate in tech history; the multi-modal potential of GPT-4 also exceeds anything previously imagined.
Where AI used to be evaluated for its computational capacity and speed, today it is praised for creativity, recently scoring in the top 1% on the Torrance Tests of Creative Thinking, a popular assessment designed to evaluate “the general creative abilities of individuals.”
Fear of AI’s potential to eviscerate humans no longer originates in science fiction. Comparing AI’s destructive potential to pandemics and nuclear weapons, The New York Times recently suggested human existence, not simply livelihood, is at stake. Before his death, Stephen Hawking warned that AI’s development “could spell the end of the human race.” More recently, various scientists and AI experts signed an open letter requesting a pause on AI developments such as ChatGPT and other Large Language Models. Among other warnings, the letter raised the prospect of developing “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”
As one article put it, “It is unusual to see industry leaders talk about the potential lethality of their own product.” When renowned scientists echo the imaginative doom that has marked the science fiction genre for decades, you know the story of malicious machinery is more than just a story. Fear is warranted. The threat is real.
But what if the nature of the threat is not conscious, malevolent AI? What if the threat is more subtle? Artificial Intelligence is not plotting to usurp our agency. More realistically, we will unwittingly give it away. That is the threat.
And the severity of unconsciously transferring agency to artificial intelligence is heightened by a Christian understanding of personhood. The irreducible, non-transferable essence of humans is that we are created in the image of God (Imago Dei). Part of realizing our humanity and enacting redemptive order is bound up in bearing out God-reflecting attributes such as creativity, productivity, moral reflection, and relational capacities. If this is constitutive of our humanity and integral to human flourishing, then consigning essential elements of our personhood to AI risks a disfigurement of God’s image and our dehumanization.
The theological framework of Imago Dei offers guideposts for considering the degree to which we ascribe agency to, or agentify, artificial intelligence. In other words, referencing a theologically informed conception of human personhood can helpfully govern AI usage or neglect.
For example, AI innovations that serve our health may advance human aims in complementary, non-threatening ways. Drug design, dosage recommendations, or the discovery of life-saving medicine allow us to better exhibit creativity or cultivate meaningful relationships with others. There are other case studies where AI adoption is more helpful and less threatening to our personhood. AI in the insurance industry, for example, can minimize uncertainty, better evaluate risk, detect fraud, or enhance accuracy.
These aims are not immune from questions, but AI usages such as health innovations or risk assessment are less likely to erode God’s image in our lives.
The same cannot be said for other AI applications. Consider the determination of prison sentences. Although “moneyballing” the criminal justice system with data analytics isn’t a new idea, predicting future misconduct or handing over sentencing and parole decisions to generative AI removes assumptions and judgments from the moral deliberation of a thickly constituted community and consigns them to icy, algorithmic proceduralism.
How about AI commandeering an F-16 fighter jet? An AI pilot may outperform humans in an attack simulation, but are we prepared to arm our machinery with bounded autonomy to make lethal judgments?
Or consider AI applications within some of our longstanding mediating institutions. Artificial intelligence is being used to compose wedding vows, produce sermons, or write (and grade) essays in the classroom. This relegates heartfelt sentiment, contextualized truths, or ordered knowledge to an advanced chatbot that forages and assembles a probabilistically sequenced string of words. But it does not create. Further, this mistakes the order of knowledge, as if we first know and subsequently document. More accurately, we often write to know. Thoughtful human expression through the act of writing does not only produce an outcome (a wedding vow, a sermon, an essay); it is its own humanizing outcome.
Other AI innovations provide mental health assistance, simulate empathy and friendship, or even date lonely men. While novel, these innovations bypass relational exchanges fundamental to the institutions they supposedly assist. To the extent that loneliness or eroding mental health is a function of increased digitization, it seems tragicomic to prescribe technology as a solution. “Our remedies are sometimes worse than our maladies,” remarked the playwright Molière. More realistically, our personhood is realized in proximate relationships with others. We are dialogical beings, says philosopher Charles Taylor, and as such we “mutually constitute one another.”
Underlying AI’s productive capacity lies a Faustian bargain. Where we gain efficiency and convenience, we risk unwittingly surrendering elements of our God-reflecting personhood. Further, we elevate the wrong savior. In a modern twist of John the Baptist’s profession in John 3:30, uncritical AI adoption tacitly states, “Artificial intelligence must become greater; I must become less.”
AI already has “takeover” capacity. It’s there. And it will grow.
But artificial intelligence only has agency to the extent we grant it. Questions about the appropriate application of AI, the dimensions of its use, and the degree of sovereignty we provide it are moral and spiritual questions. And if we start with that perspective, we broaden the conversation to invite new realms of consideration.
A helpful parallel can be found in medicine and our approach to mortality. Columbia University’s Lydia Dugdale references a distinction between the “intuitive mind” and the “analytic mind” in medical practice. The former aspires to meaning, asking questions such as “What is medicine for?” or “What is the goal for the patient?” For the intuitive mind, an integrated human body is greater than the sum of its parts. In contrast, the analytic mind, she says, cuts like a scalpel. It disassembles what is before it, analyzes, and resolves.
In other words, the analytic mind bypasses a broad set of moral, spiritual, and philosophical questions about what it means to be a human, and merely dissects and corrects. However, says Dugdale, what if we began with questions about what it means to die well? How would deliberation over Ars moriendi (“The Art of Dying”) change our approach to medical treatment?
Though the context is different, Dugdale’s work offers a fruitful parallel for how we consider human agency and AI. Like the intuitive approach to medicine, we are invited to pause and ask what the adoption of a given technology might do to individuals and communities. How might this change us? How might AI advance, or stunt, our God-reflecting capacities? What do we gain? What do we lose? What are we becoming in applying this to our life?
The purpose of the AI development moratorium called for by scientists and entrepreneurs was to cultivate “robust AI governance systems.” But the proposed halt sidesteps questions about the good life, a just society, human personhood, or a flourishing humanity. Technology is, by nature, mono-dimensional. It functions to advance efficiency and productivity. It optimizes. That is its reason for existence. When considering innovation, we implicitly ask, “Can technology do this better?” Increasingly, the answer is “yes.”
But that cannot and should not be the only standard for its use. Our understanding of progress must be moderated by other considerations, and in the Christian faith tradition, an Imago Dei theology offers scaffolding to guide and govern how we think about technological innovation and the agency we are willing, and unwilling, to concede in exchange for digital conveniences.
AI is a good servant, but a poor master. In the absence of moral and spiritual reflection, and in a procedural republic merely concerned with letting individuals choose their ends and values, AI’s advancement—and our own quiet deterioration—will carry on unobstructed.
There is truth to the science fiction story. AI is a threat to humanity. But don’t expect our demise to look like The Terminator; expect an unwitting, unconscious recession of human agency. That is far more likely, but equally dangerous.