
Derek Schuurman rightly warns against the use of chatbot “personas,” which are created by configuring a program like ChatGPT to respond in the style of a particular person. For Schuurman, the problem is that using these services encourages “ontological confusion” – by interacting with a computer as if it is a human being, “we run the risk of blurring the lines between people and machines.” As this distinction gets lost, we start to countenance the use of machines to do jobs that only humans can do (like parenting, pastoring, and teaching), and we start to lose our capacity to cultivate relationships with other humans. 

All this is true, but it is important to emphasize the deeper and darker risk of ontological confusion. It is not just that machines might take human jobs, or that humans might come to prefer relationships with machines. It is that humans who relate to machines as if those machines are humans are at risk of becoming less human themselves. As Anton Barba-Kay puts it: “What is novel about technologies that engage us psychologically or emotionally is not that they will learn to do without us, but that we will unlearn how to be without them – not that they will act like human beings, but that we will act like robots to make them more human, which in turn recoils on what we want ‘human’ to mean for us all.”1 The risk, in other words, is not that the C.S. Lewis chatbot will become indistinguishable from C.S. Lewis, but that the chatbot’s user will become indistinguishable from the chatbot. Ontological confusion may well lead to ontological transformation. 

In light of that concern, I cannot agree with Schuurman’s suggestion that chatbots can be legitimately used if we confine them to providing “relevant information without taking on a persona.” This semester I am reading Viktor Frankl’s Man’s Search for Meaning with some students. The first part of the book is a record of Frankl’s time in the concentration camps; the second part is less narrative and more abstract, as it lays out Frankl’s theory of “logotherapy.” One student said she wished this second part would “get to the point,” instead of going on for pages and pages about what logotherapy entails. She was asking for “relevant information” – the sort of thing that could be summarized in a paragraph by ChatGPT. This is a common sentiment among undergrads, who are always looking for “relevant information,” and for more efficient ways to procure it. That is the wrong way to read a book, I suggested. Your goal, at the end of a book, is to be able to ask yourself “what would the author say about X, Y, or Z?” and to be able to answer that question not because you have the “relevant information,” but because you have inhabited that author’s thoughts for a few hundred pages. A book is not a person, but it is persons who write books, and this is why some of us can think of books as friends. A friend is someone you want to spend time with, without an ulterior purpose. Machines, by contrast, are very good – probably better than we are – at producing “relevant information” that can be put to immediate use (putting aside the question of how “relevance” is determined in the first place, except by all-too-human judgment).

I can offer an answer to the question “what might C.S. Lewis have thought of Christian Scholar’s Review?” because I have spent a lot of time with C.S. Lewis, by reading his books. I suspect Schuurman is in the same position. Schuurman and I have a sense for what Lewis might have thought about Christian Scholar’s Review because we are persons, and Lewis is a person, and we have from Lewis’ books a good sense of Lewis’ persona. I did not read his books because I wanted to someday be able to answer the specific question that Schuurman posed to the chatbot. If I had wanted to be able to answer that question, it would have been a much better use of my time to look for the “relevant information,” and to get it as quickly as possible. But why would I want to spend my time querying ChatGPT for information about C.S. Lewis, rather than spending it reading C.S. Lewis?

As someone who has read Lewis’ books, I can recognize that the chatbot’s answer is not inaccurate. And Schuurman might object that this is not an either/or. We can read books, and we can also use a chatbot: why do we have to choose? But I think it is much closer to being an either/or than we appreciate. The question is whether I would ever have bothered to read Lewis’ books, if I had grown up with a chatbot that could read them for me and then give me the information I needed for some specific purpose (perhaps I am taking an exam on the thought of C.S. Lewis). More to the point: the question is whether, having spent a lot of my youth talking with chatbots, I would ever have been formed into enough of a person to care about something more than information. 

I was about to conclude by saying it would have been more interesting if Schuurman had asked the chatbot what C. S. Lewis would have thought about chatbots, but then I realized that the momentary frisson of irony would have been exactly the sort of cheap fascination that chatbots are so good at providing. I do not want to suggest that there are no legitimately helpful uses for these kinds of programs, but I do want to insist that Schuurman’s example illustrates a generally unhelpful way to use them. 

Footnotes

  1. Anton Barba-Kay, A Web of Our Own Making: The Nature of Digital Formation (Cambridge University Press, 2023), 200.

Adam Smith

Adam Smith is Associate Professor of Political Philosophy and Director of the Honors Program at the University of Dubuque, and Associate Editor at Front Porch Republic.

One Comment

  • Derek Schuurman says:

    Thanks for further strengthening the argument against the use of chatbot personas. The words of Marshall McLuhan come to mind: “we become what we behold.”
