
You’ve heard the story. Advanced technology is created, turns deviant, and coldly proceeds to supplant humanity. It is one of science fiction’s most enduring tropes. From the cautionary imagery of Mary Shelley’s Frankenstein to Tom Cruise battling artificial intelligence in the latest Mission: Impossible blockbuster, the robots-going-rogue theme consistently occupies our imaginative landscape.

Why? What is so enduring about benign technology gaining consciousness and plotting our demise? Likely because the AI takeover story reflects some of our deepest anxieties: loss of control, uncertainty about progress, and fear of the other.

The anxiety is not unfounded. Although economists have long offered data-backed arguments that innovative technology, commercial automation, and artificial intelligence complement human labor more than they substitute for it, everything seemed to change on November 30, 2022. ChatGPT not only achieved the fastest adoption rate in tech history; the multi-modal potential of GPT-4 also exceeds anything previously imagined.

Where AI used to be evaluated for its computational capacity and speed, today it is praised for creativity—recently scoring in the top 1% on the Torrance Tests of Creative Thinking—a popular assessment to evaluate “the general creative abilities of individuals.”

Fear of AI’s potential to eviscerate humanity no longer originates in science fiction. Comparing AI’s destructive potential to pandemics and nuclear weapons, The New York Times recently suggested that human existence, not simply livelihood, is at stake. Before his death, Stephen Hawking warned that AI’s development “could spell the end of the human race.” More recently, various scientists and AI experts signed an open letter requesting a pause on AI developments such as ChatGPT and other large language models. Among other warnings, the letter raised the prospect of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”

As one article read, “It is unusual to see industry leaders talk about the potential lethality of their own product.” When renowned scientists echo the imaginative doom that has marked the science fiction genre for decades, you know the story of malicious machinery is more than just a story. Fear is warranted. The threat is real.

But what if the nature of the threat is not conscious, malevolent AI? What if the threat is more subtle? Artificial intelligence is not plotting to usurp our agency. More realistically, we will unwittingly give it away. That is the threat.

And the severity of unconsciously transferring agency to artificial intelligence is heightened by a Christian understanding of personhood. The irreducible, non-transferable essence of humans is that we are created in the image of God (Imago Dei). Part of realizing our humanity and enacting redemptive order is bound up in bearing out God-reflecting attributes such as creativity, productivity, moral reflection, and relational capacities. If this is constitutive of our humanity and integral to human flourishing, then consigning essential elements of our personhood to AI risks a disfigurement of God’s image and our dehumanization.

The theological framework of Imago Dei offers guideposts for considering the degree to which we ascribe agency to, or agentify, artificial intelligence. In other words, referencing a theologically informed conception of human personhood can helpfully govern AI usage or neglect.

For example, AI innovations that serve our health may advance human aims in complementary, non-threatening ways. Drug design, dosage recommendations, or the discovery of life-saving medicine allow us to better exhibit creativity or cultivate meaningful relationships with others. There are other case studies where AI adoption is more helpful and less threatening to our personhood. AI in the insurance industry, for example, can minimize uncertainty, better evaluate risk, detect fraud, or enhance accuracy.

These aims are not immune from questions, but AI usages such as health innovations or risk assessment are less likely to erode God’s image in our lives.

The same cannot be said for other AI applications. Consider the determination of prison sentences. Although “moneyballing” the criminal justice system with data analytics isn’t a new idea, predicting future misconduct or handing sentencing and parole decisions over to generative AI transfers assumptions and judgments away from the moral deliberation of a thickly constituted community and toward icy, algorithmic proceduralism.

How about AI commandeering an F-16 fighter jet? An AI pilot may be superior to humans in an attack simulation, but are we prepared to weaponize our machinery with bounded autonomy to make lethal judgments?

Or consider AI applications within some of our longstanding mediating institutions. Artificial intelligence is being used to compose wedding vows, produce sermons, and write—and grade—essays in the classroom. This relegates heartfelt sentiment, contextualized truth, and ordered knowledge to an advanced chatbot that forages for and assembles a probabilistically sequenced string of words. It assembles; it does not create. Further, this mistakes the order of knowledge, as if we principally know first and subsequently document. More accurately, we often write to know. Thoughtful human expression through the act of writing does not only produce an outcome (a wedding vow, a sermon, an essay); it is its own humanizing outcome.

Other AI innovations provide mental health assistance, simulate empathy and friendship, or even date lonely men. While novel, these innovations bypass relational exchanges fundamental to the institutions they supposedly assist. To the extent that loneliness and eroding mental health are a function of increased digitization, it seems tragicomic to prescribe technology as a solution. “Our remedies are sometimes worse than our maladies,” remarked the playwright Molière. Our personhood is instead realized in proximate relationships with others. We are dialogical beings, says philosopher Charles Taylor, and as such we “mutually constitute one another.”

Underlying AI’s productive capacity lies a Faustian bargain. Where we gain efficiency and convenience, we risk unwittingly surrendering elements of our God-reflecting personhood. Further, we elevate the wrong savior. In a modern twist of John the Baptist’s profession in John 3:30, uncritical AI adoption tacitly states, “Artificial intelligence must become greater; I must become less.”

AI already has “takeover” capacity. It’s there. And it will grow.

But artificial intelligence only has the agency we grant it. Questions about the appropriate application of AI, the dimensions of its use, and the degree of sovereignty we provide it—these are moral and spiritual questions. And if we start with that perspective, we broaden the conversation and invite new realms of consideration.

A helpful parallel can be found in medicine and our approach to mortality. Columbia University’s Lydia Dugdale references a distinction between the “intuitive mind” and the “analytic mind” in medical practice. The former aspires to meaning, asking questions such as “What is medicine for?” or “What is the goal for the patient?” For the intuitive mind, an integrated human body is greater than the sum of its parts. In contrast, the analytic mind, she says, cuts like a scalpel. It disassembles what is before it, analyzes, and resolves.

In other words, the analytic mind bypasses a broad set of moral, spiritual, and philosophical questions about what it means to be a human, and merely dissects and corrects. However, says Dugdale, what if we began with questions about what it means to die well? How would deliberation over Ars moriendi (“The Art of Dying”) change our approach to medical treatment?

Though the context is different, Dugdale’s work offers a fruitful parallel for how we consider human agency and AI. Like the intuitive approach to medicine, we are invited to pause and ask what the adoption of a given technology might do to individuals and communities. How might this change us? How might AI advance, or stunt, our God-reflecting capacities? What do we gain? What do we lose? What are we becoming in applying this to our lives?

The purpose of the AI development moratorium called for by scientists and entrepreneurs was to cultivate “robust AI governance systems.” But the proposed halt side-steps questions about the good life, a just society, human personhood, or a flourishing humanity. Technology is, by nature, mono-dimensional. It functions to advance efficiency and productivity. It optimizes. That is its reason for existence. When considering innovation, we implicitly ask, “Can technology do this better?” Increasingly, the answer is “yes.”

But that cannot and should not be the only standard for its use. Our understanding of progress must be moderated by other considerations, and in the Christian faith tradition, an Imago Dei theology offers scaffolding to guide and govern how we think about technological innovation and the agency we are willing, and unwilling, to concede in exchange for digital conveniences.

AI is a good servant, but a poor master. In the absence of moral and spiritual reflection, and in a procedural republic merely concerned with letting individuals choose their ends and values, AI’s advancement—and our own quiet deterioration—will carry on unobstructed.

There is truth to the science fiction story. AI is a threat to humanity. But don’t expect our demise to look like The Terminator; expect an unwitting, unconscious recession of human agency. That is far more likely, but equally dangerous.

Kevin Brown

Asbury University
Kevin Brown is the 18th President of Asbury University.

8 Comments

  • Thank you for this reflection, Dr. Brown. Proud to be an Asbury alumnus!

  • Vernona Hearne says:

    But we, followers of Jesus Christ, have His mind, right? RIGHT?
    As directed by the Holy Spirit, right?
    Established and united in Father, Son, Spirit, right?

  • Joseph 'Rocky' Wallace says:

    Dr. Brown, so well articulated. Yes, we must accept the reality, and be proactive and discerning, as you suggest. We have past examples that should alert us to the challenge now upon us. The outsourcing of American industry (seemed harmless at one time), the porn industry (was never harmless), and the ever-growing alcohol industry are but three examples of the human appetite for financial gain, convenience, and unharnessed free will.

    AI is out of the box and expanding by the day, and to subdue it to healthy levels will be the challenge of our generation. Our colleagues in the field of forecasting and futuring have been warning us of this day for a while now, and for sure, it’s here.

  • Gordon Moulden says:

    “Be wise as serpents and innocent as doves.” I.e., never check your brains OR morals at the door.

    These are vital aspects of imago dei that we are commanded to never surrender; we have a responsibility for our decisions and behavior, which we are not permitted, in God’s eyes, to give up. We must be very careful therefore, NOT to let A.I. do our moral and higher-level cognitive thinking for us; data analysis is one thing; interpreting and applying the results is another, and it is a responsibility that must rest with us, humankind, because God holds us responsible for how we apply what we’ve learned; we are to work for His glory and the good of people. AI does not possess the moral capability to serve such purposes; we must therefore never surrender our God-given responsibility; AI would indeed be a poor master.

  • Allin Means says:

    This is an excellent, poignant and relevant article. As director of the Faith and Research Conference at Missouri Baptist University, Feb. 15-16, 2024, on our campus in St. Louis, I would like to follow up with an invitation to present at our conference. Our theme this year focuses specifically on Artificial Intelligence and its relevance in our pedagogy. Look for an email soon from our conference staff.

  • Samuel Figueroa says:

    I like “AI is a good servant, but a poor master.” Today I heard an interview with Joy Buolamwini, the author of the book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines”. She told of a case where a woman 8 months pregnant was thrown in jail accused of carjacking due to being misidentified by an AI system. This is an example of the harm that can happen when AI is the master. On the flip side, I think of the parable of the talents, which I take to be about how God shows mercy to each of us, but we individually have to decide how to respond to the mercy we receive. Many will bury it in the ground, but some will enthusiastically take it and totally change the direction of their lives. Perhaps AI can often correctly predict in real time who will sack the quarterback, but can it predict how each person will respond to God’s mercy?

  • Brian Holtz says:

    Great perspective, Kevin! It seems like we are so threatened by AI because we place our value in things like intellect, creativity, and productivity. If those are the true measure of our worth, then we should be VERY concerned about AI overtaking us as it almost certainly will (if it hasn’t already).
    But they aren’t. As you state, our value as creatures is defined by something far greater, though less “tangible” – being made in God’s image. If we were to focus more on BEING like Him rather than simply BEHAVING like Him (CONformance rather than PERformance), we would see AI as yet another tool to lead people back into relationship with our Creator rather than a threat to our standing in this world.

  • George Ezell says:

    Excellent analysis. Thank you for guidance through some strange and turbulent waters.