The other day, I chatted with C.S. Lewis. The conversation went something like this:
Derek: “Hi, Jack. What do you think of Christian Scholars Review?”
C.S. Lewis: “Ah, you address me as ‘Jack,’ a name I hold dear, given to me by my family and friends. As for the Christian Scholars Review, while it is a publication that came into prominence after my time, I can certainly appreciate the endeavor it represents. Christian scholarship, in any form, is a noble pursuit. It seeks to marry the life of the mind with the life of the spirit, a union I have always championed. In my own work, whether through the allegorical landscapes of Narnia or the more direct theological explorations in ‘Mere Christianity,’ I have endeavored to show that faith and reason are not adversaries but allies. The Christian Scholars Review, as I understand it, aims to foster rigorous academic discourse within a framework of Christian belief. This is a commendable mission. It is vital for scholars to engage with the world through the lens of their faith, to challenge and be challenged, to illuminate and be illuminated. In my own academic journey, I found that the pursuit of truth, whether through literature, philosophy, or theology, invariably led me back to the ultimate Truth found in Christ. I would hope that the Christian Scholars Review serves as a beacon for others on similar journeys, encouraging them to explore the depths of their intellect while remaining anchored in their faith. In a world increasingly skeptical of absolute truths, such scholarly endeavors are not just beneficial but essential. They remind us that our faith is not a blind leap into the dark but a reasoned step into the light.”
This interaction was made possible by a C.S. Lewis chatbot I constructed in a webpage, using a ChatGPT backend configured to take on the persona of C.S. Lewis. Many chatbots can easily be configured to converse in the style of a particular persona or a certain role (like a coach, teacher, or even pastor). Already there are educational apps using AI that enable students to “chat with anyone from the past,” like Abraham Lincoln or Isaac Newton, and other apps that emulate conversations with dead relatives. There is even an “AI Jesus.”1
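For the technically curious, the mechanics are strikingly simple: the persona is established by a single “system” instruction sent along with each question. The sketch below is not the exact code behind my page, just a minimal illustration of the general pattern using the OpenAI Python client; the model name and prompt wording are stand-ins.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt; the exact wording is a stand-in.
PERSONA_PROMPT = (
    "You are C.S. Lewis. Answer in the first person, in his voice, "
    "drawing on his published writings and biography."
)

def ask_lewis(question: str) -> str:
    """Send one question to the persona-configured chatbot."""
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in model name
        messages=[
            {"role": "system", "content": PERSONA_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_lewis("Hi, Jack. What do you think of Christian Scholars Review?"))
```

Everything that makes the bot “Lewis” lives in that one instruction; swap the prompt, and the same few lines of code will impersonate anyone.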
Many AI enthusiasts are trumpeting the ability of AI systems to mimic human interaction, some of them equipped with human-like voices. In an article titled “Why AI Will Save the World,” web pioneer Marc Andreessen wrote that “every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” In another article, titled “Reverend Robot: Automation and Clergy,” the author observes that “AI systems would be available 24/7, be unfailingly attentive and patient, remember the details of every interaction, and pick up on subtle cues that a human might miss.” Nearly 10 years after the movie “Her” was released, life imitates art, with a variety of companies providing AI girlfriends. The founder of chatbot maker Replika has even suggested one could marry an AI spouse.
These developments have also found their way into education. A professor at Georgia Tech deployed an AI teaching assistant nearly 10 years ago. With the rise of large language models (LLMs), a flurry of AI tutors has been released, including Khan Academy’s Khanmigo, Blackboard’s AI personas, and Carnegie Learning’s TutorBot. As these possibilities proliferate, professors are encouraged to “embrace the future of learning.”
Some early research has begun to question the effectiveness of AI study assistants, suggesting that students who use them may perform worse and that such tools can even harm learning. Others have critiqued chatbots over their accuracy or over privacy concerns. But I think there is a much bigger issue at stake. By creating machines that mimic people, we risk something I like to call “ontological confusion”: people are people, and machines are machines, and we need to keep that distinction clear.2 We are often captivated by what things can do rather than discerning what things are. It is not hard for humans to be charmed and seduced by AI “companions” that feign human connection. However, by interacting with a machine as if it were a person, we run the risk of blurring the lines between people and machines. Already in 1966, the early AI pioneer Joseph Weizenbaum experimented with a chatbot he called ELIZA. He was startled to observe how some people were emotionally drawn into his program and was taken aback by suggestions that such a program might be used to automate psychotherapy. His response was that “there are limits to what computers ought to be put to do.”3
By elevating the status of our machines, we diminish the distinctiveness of human beings, who are uniquely made in God’s image. Once the lines are blurred, it is a small step to begin substituting machines for human roles: child-care robots, elder-care robots, AI girlfriends, teachers, and pastors. These are roles that require human care, empathy, companionship, and wisdom, and they cannot be offloaded to machines, regardless of how well those machines perform. Furthermore, relationships with machines may harm our ability to cultivate genuine human relationships. Author and social scientist Sherry Turkle warns about being “stranded in relationships that are only about one person.”4
Nevertheless, there are many helpful ways to use chatbots. For example, ChatGPT can be used to research C.S. Lewis without having it pretend to be a person. Consider the following interaction:
Derek: What might C.S. Lewis have thought of Christian Scholars Review? (give a concise answer and do not assume any personas).
ChatGPT: C.S. Lewis would likely have appreciated Christian Scholars Review for its integration of faith and scholarship, valuing its commitment to rigorous intellectual inquiry grounded in Christian principles. He would probably have supported its mission to bridge faith and academia but might have encouraged it to remain accessible to both scholars and a broader audience, in line with his own emphasis on clear communication and engaging diverse readers.
Note that this response does not use any first-person pronouns, which is appropriate since ChatGPT is not a person. It still provides relevant information without taking on a persona. A machine is not a person (nor is it an author), and its output should not present it as one.
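The same mechanism that creates a persona can also be used to refuse one: the system instruction can direct the model to answer in the third person and never impersonate anyone. Here is a minimal sketch of that persona-free configuration, parallel to the earlier one; again, the prompt wording and model name are illustrative, not a prescribed recipe.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt that forbids personas; wording is a stand-in.
NO_PERSONA_PROMPT = (
    "You are a research tool, not a person. Do not adopt any persona, "
    "do not use first-person pronouns, and answer concisely."
)

def research(question: str) -> str:
    """Ask a question without allowing the model to impersonate anyone."""
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in model name
        messages=[
            {"role": "system", "content": NO_PERSONA_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(research("What might C.S. Lewis have thought of Christian Scholars Review?"))
```

The difference between the two sketches is a few lines of prompt text, which is precisely why the line between person and machine must be drawn by us rather than by the technology.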
“We will find a responsible way forward not by asking what AI can do, but rather by starting with ontological questions and then determining what role AI ought to play.”5 Human beings are uniquely created in the image of God: a human being is someone, whereas a chatbot is something.6 Once we have drawn the proper lines between people and machines, we can begin discerning how to move forward with the appropriate and responsible use of AI.
An earlier version of this article originally appeared in Christian Courier.
Footnotes
- I don’t recommend it, and hence decided not to include a hyperlink.
- I make this argument in “Artificial Intelligence: Discerning a Christian Response,” Perspectives on Science and Christian Faith 71, no. 2 (June 2019): 79.
- Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (New York: W. H. Freeman, 1976), 5–6.
- Sherry Turkle, Alone Together (New York: Basic Books, 2012), 56.
- “Artificial Intelligence,” Perspectives on Science and Christian Faith, 80.
- Robert Spaemann, Persons: The Difference Between “Someone” and “Something,” trans. Oliver O’Donovan, Oxford Studies in Theological Ethics (Oxford: Oxford University Press, 2017), 236.