After hearing repeatedly about ChatGPT, an artificial intelligence tool that OpenAI made available a few months ago, I finally decided to give the system a test drive in mid-February. I chose to engage ChatGPT in an exchange about the relationship between Christians and libraries—a subject area on which I have written and presented repeatedly, especially early in my career. My experience with ChatGPT was memorable, alternately inducing exhilaration and dismay.
I have chosen to document highlights from the “conversation” in the Christ Animating Learning blog for two reasons. First, the subject of my exchange with ChatGPT aligns with the general focus of this forum. Second, ChatGPT’s capabilities intersect broadly with the concerns of educators and other thought leaders who read the blog. The account that I present is necessarily selective; an annotated version of the chat transcript is available for readers who may be interested in more detail.
I began my interaction with ChatGPT by asking this question: “How does holding to a Christian worldview affect one’s view of libraries?” The system responded with four paragraphs that briefly elaborated four themes: the value of knowledge, access to information, ethics and morality, and community engagement. The output was well written. I saw no spelling or grammatical errors, and the prose was cogent, so I had a good first impression.
I followed up by asking, “How have Christians contributed to the development of libraries?” Once again, ChatGPT generated four paragraphs that collectively developed a plausible answer to the question. However, ChatGPT’s assertion that “Andrew Carnegie, a Christian philanthropist and industrialist, donated millions of dollars to build libraries throughout the United States and other countries” caught me by surprise. I was certainly aware that Carnegie had funded the construction of numerous libraries, but I had no sense that his business achievements and giving habits were rooted in personal Christian commitment. Eventually, I probed this detail further, but for the moment, I continued to ask broad questions that required the system to perform processes that simulated interpretation and synthesis.
In response to my query, “What are some points of conflict between Christians and contemporary library practice?,” ChatGPT predictably produced four paragraphs. The general theme was that some Christians find it difficult to support free expression in libraries, particularly where sexually oriented materials are concerned.
I pressed further: “How do Christian ethics and the ethics of the library profession overlap, and how do they conflict?” ChatGPT briefly outlined three areas of overlap and three areas of conflict. Its discussion was consistent with—although somewhat repetitive of—responses that the system had already generated.
Next, I asked a question germane to my interest in the integration of faith and profession: “Is there such a thing as a Christian philosophy of librarianship?” ChatGPT answered in the affirmative and then offered a statement that struck me as both succinct and insightful: “A Christian philosophy of librarianship is based on the belief that the library profession can be a means of advancing the Kingdom of God on Earth. This means that libraries and librarians have a unique responsibility to promote Christian values such as truth, justice, love, and service to others.”
Knowing that Christians who aspire to practice librarianship have sometimes found secular academic programs in the field to be hostile to their values,[1] I decided to pursue a slightly different angle. “Are evangelical Christians likely to feel welcome in the environment of a school of library and information science?,” I asked. ChatGPT offered a nuanced reply, highlighting potential areas of concern (e.g., intellectual freedom, diversity, secularism), but also affirming that “there are many evangelical Christians … who feel that their faith is compatible with the values and principles of the profession.”
Having found that ChatGPT seemed able to produce a brief but reasoned appraisal of a complex topic, I posed a question that called for a more objective answer: “What resources are available for the integration of Christian faith and librarianship?” The system provided two lists in response. The first list mentioned the Association of Christian Librarians and the Theology of Work Project. The second list cited the authors and titles of four ostensibly relevant books and articles.
Intrigued by the title of one of the sources that was cited, I asked, “Can you provide publication details for ‘The Information Society and the Christian,’ edited by Paul A. Soukup?” ChatGPT replied with a publisher name, date of publication, and ISBN, as well as a one-sentence summary. Curiously, I was unable to verify the book’s existence despite searching WorldCat and Google Scholar. Although I confirmed that Soukup is a real scholar, neither his CV nor a list of his works included the publication in question.
Having seen my own name in the list of resources, I asked ChatGPT, “Can you summarize Gregory A. Smith’s ‘A Christian Philosophy of Librarianship’?” The system returned five paragraphs, eloquently describing a book that, ironically, I have not written. Before following up on that point, I decided to pursue the earlier reference to Carnegie: “You identified Andrew Carnegie as a Christian philanthropist and industrialist. However, other information I’ve read suggests that he rejected his Christian upbringing. Can you elaborate?” In reply, ChatGPT provided several paragraphs to explain that “it may not be accurate to describe him as a Christian philanthropist and industrialist without further context.”
At this point I returned to the matter of the book that ChatGPT had incorrectly attributed to me. The system replied, “I apologize for the confusion. After reviewing my previous response, I realize that there was an error in the information I provided. I mistakenly attributed ‘A Christian Philosophy of Librarianship’ to Gregory A. Smith, but it was actually written by Shelly Arneson.” When I requested publication details, the system provided a publisher and date; however, using standard search tools, I was unable to confirm that such a work existed. When I submitted this evidence to ChatGPT, it produced another retraction: “It appears that I made an error in my previous response. I cannot locate a book or chapter by that title by Shelly Arneson, and I was not able to find any other sources that suggest such a book exists.”
As noted earlier, ChatGPT correctly associated my name with the integration of faith and librarianship but supplied spurious information about my publication record. With this misdirection in mind, I posed this question: “Can you tell me more about Gregory A. Smith, who has written or edited a variety of publications exploring the connections between Christian faith and the world of information and libraries?” In typical fashion, ChatGPT generated five paragraphs. Disappointingly, they misrepresented my employment history and attributed authorship of a non-existent book to me. When I attempted to set the record straight on two counts, the system issued retractions and offered more incorrect information.
Overall, I was pleasantly surprised by ChatGPT’s ability to interpret questions, synthesize information about a topic, and identify talking points. Even if the system’s writing style was somewhat formulaic, its capacity to generate error-free prose and the appearance of solid analysis could readily lead one to accept the accuracy of its output without question. However, as another contributor to this blog explained, accepting its formulations uncritically entails substantial risk. Additionally, I was struck by ChatGPT’s presumptuous inclination to fabricate information rather than acknowledge ignorance—a trait that resembles but amplifies a human behavioral weakness. Of course, the system’s projection of a confident response was often followed by a retraction, but it would surely have been more intelligent (and honest!) to admit a lack of knowledge in the first place.
Ultimately, many educators may be primarily concerned with determining how ChatGPT relates to academic and professional work, and what changes this may imply for curriculum design, instruction, and assessment of learning. Nevertheless, answers to such questions are likely to be highly nuanced and subject to change.
ChatGPT stands as a recent entry in a list of powerful technologies that have emerged to disrupt education and research. Earlier examples have included word processing and spreadsheet software; library catalogs, databases, and discovery systems; Internet search engines; and applications supporting tasks such as statistical analysis, citation management, document scanning, analysis of ancient texts, survey data collection, data visualization, and audio transcription. These technologies have inherent biases and are susceptible to misuse, leading to inefficiency, ineffectiveness, and even breaches of ethical or legal standards.
As discerning Christians, we must be ready to assess the strengths and weaknesses of emerging technologies with both a critical eye and an open mind. Artificial intelligence (AI) is evolving rapidly as various companies seek to establish a competitive advantage in the field. In this context, the capabilities and defects attributed here to ChatGPT may not remain constant.
Fixating on either ChatGPT’s capabilities or its limitations can easily lead us to exaggerate or understate the tool’s value. A more balanced position is warranted, as Derek Schuurman suggested in a recent post to this blog: “AI is part of the latent potential in creation, and we are called to responsibly unfold its possibilities.” Where ChatGPT is concerned, exercising biblical stewardship requires us to grapple with critical questions: How might we use the system to further redemptive activity in our world? How can we differentiate tasks for which ChatGPT is ill suited from those in which it may serve as a viable complement to human intelligence? What steps might we take to mitigate the tool’s tendency to amplify the expression of human fallenness? Answering such questions will be the subject of ongoing debate, particularly if, as I expect, ChatGPT undergoes a series of changes and OpenAI’s competitors launch alternatives with somewhat distinctive attributes.
[1] A few months ago, I asked several acquaintances to comment on their recent experiences as Christian students in secular library science programs. The responses that I received highlighted the challenge of sustaining and expressing a Christian worldview in such environments. Following are two examples:
- “Overall, I would characterize the program’s level of receptivity to a Christian student as very poor. … In some cases, I would characterize certain professors as having extreme prejudice against perspectives that are rooted in a Christian worldview.”
- “I do not feel comfortable at all in expressing an opinion rooted in a Christian worldview if I know it may go against what the majority of my classmates think. I have seen two occasions where one of my classmates expressed an opinion that was not in alignment (politically) with the majority. A few of my other classmates did not hold back in berating them for their opinions ….”
Hmmm. The article did not demonstrate to me how AI could be at all useful in education or scholarship. There is an assumption by the author that because there is a new technology we have to use it. But do we? Since the 19th century the underlying assumption of our culture is that anything can and should be done in the name of progress, and because it is “progress,” we must accept it and make use of it. We might well consider whether much of our technology has only served to make us lazy—passive recipients of the labor of something or someone else.
Thanks for sharing your thoughts. I agree that not all technological developments represent progress. There’s a lot of truth in John Culkin’s summation of Marshall McLuhan’s view of media: “We shape our tools and thereafter they shape us.” Personally, I think many information technologies are used non-redemptively. In some cases, it might be best to reduce the use of a given technology, or even to stop it altogether. Critical thinking is in order to determine if a technology can be used redemptively and whether prevailing applications may entail humans being used rather than being purposeful users.
As an example of using ChatGPT for scholarship, when I wanted to generate an abstract of this blog post for use elsewhere, I asked ChatGPT to suggest a summary. Actually, I asked it to perform multiple iterations of that task. Then I selected what I thought was the best base version and I made targeted edits until I was satisfied with the end result. The process saved me a little effort and gave me some machine-generated “opinions” as to which elements of the post warranted inclusion in the abstract. Human and artificial intelligence thus complemented one another. In my view, that’s a legitimate use, and a far cry from taking ChatGPT’s output at face value.
I enjoyed reading your essay.
O that Christians would search the word of God, listen, learn and obey contextually and be about the business of glorifying Him in the world, amongst those who profess you know Who as Savior and those who do not.
Authoritative Intelligence … hmmmm. JOY.