When I was a boy growing up in Hong Kong, both my parents were professors. They worked long days during the semester and sometimes brought papers home to grade at the kitchen table after dinner. Yet they never seemed to work deep into the night, and summers were a long, glorious holiday filled with exotic, if low-budget, travel experiences.
For many of us serving in higher education in 2024, that is not our story. We have assignments set to arrive digitally at midnight every Saturday and pressure from students and administrators to assess them within 48 hours. I routinely receive emails from students seeking assistance in the wee hours of the morning, and just as regularly from colleagues at the same hours. Most of my colleagues have well over 100 students across their teaching load and spend their summers on curricular development and backlogged research commitments, a workload that would have shocked my parents.
Is it any wonder we are intrigued by the potential leverage and precision generative AI systems promise to deliver? The pace of adoption across industries is staggering, even in high-stakes environments. The Big Four accounting firm KPMG recently reported that 83% of the firms it surveyed use AI for financial planning and modeling.1 In medicine, an AI system has been used to estimate the gestational age of fetuses from ultrasound scans performed by untrained technicians.2 As far back as 2016, an AI system deployed at Beth Israel Deaconess Medical Center improved pathologists’ accuracy in diagnosing certain kinds of cancer from 96% to 99.5%.3 Even artists have begun to use graphic generative AI systems like Canva and OpenArt to augment their skills in creating images. The quality of such AI-assisted work was on display at the 2022 Colorado State Fair art competition, won by an artist who used the AI system Midjourney to craft his entry.4
Some of us in higher education are experimenting with generative AI systems in hopes of improving our efficiency and work product. I have friends in administrative roles who use ChatGPT to draft performance reviews and job descriptions. A year ago, an administrator at NYU’s Stern School of Business admitted he used ChatGPT for everything from reviewing lengthy email chains to managing his iPhone settings.5 Some of my faculty colleagues claim it has cut the time they need to prepare lesson plans for a semester from days to hours. The academic digital library JSTOR recently announced it is rolling out a generative AI tool that synopsizes academic articles, giving all of us who use the library leverage in identifying the articles we need.6 In the K-12 environment, teachers are using ChatGPT to devise exam questions and draft emails to parents.7
While many of the experiments with generative AI among those I know, and those reported in the media, are positive, there are some sizeable potholes on this road to efficiency that should not go unnoticed. First, generative AI systems have the potential to be “delusional”: they can deliver fact claims that are simply untrue. I recently asked a popular generative AI system to provide a list of blog posts I had published on Christian Scholar’s Review. It returned a series of post titles in excellent APA formatting, with links to each article. Unfortunately, none of them were authored by me. Fellow CSR contributor Gregory A. Smith received from ChatGPT a summary of a book he had never written.8
A more serious instance of delusional AI is the now-infamous case of the New York lawyers who were sanctioned by the court after their ChatGPT-generated brief cited non-existent cases.9
There is also a growing body of research indicating that generative AI systems can produce biased results. Research conducted at my home university indicates that some graphic generative AI systems grossly underrepresent women in various professions, including some in which women make up the majority of practitioners. The healthcare sector is actively struggling with bias in generative AI systems. Because generative AI systems are trained on data sets, historical bias embedded in those data sets can be carried forward, or even amplified, in the output the systems produce. Generative AI systems have been found to “exaggerate known disease prevalence differences between groups, over-represent stereotypes including problematic representations of minority groups, and amplify harmful societal biases.”10
Another common complaint about generative AI systems is that they show no concern for copyright infringement. Much of the graphic and textual material included in their data sets consists of works produced by authors and artists who still own the rights to their work product. OpenAI, the maker of ChatGPT, used a collection of New York Times publications as part of the data set used to train the system. The New York Times is now suing OpenAI and Microsoft for reproducing its copyrighted materials without payment.11
What, then, would a Christ-animated approach to utilizing AI platforms in higher education look like? The fact that AI systems can be delusional can undermine our duty of professional stewardship. Christian professors are called to excellence in all their labors (Colossians 3:23). Unless we are willing to ascribe agency to the AI systems we utilize, they are merely instrumentalities of our own work, just like our dry-erase markers, and a good workman never blames his tools for a job poorly done. That an AI system is an instrumentality does not mean we should avoid using it. It simply means that if we use AI systems for the leverage they provide, we retain responsibility for what they write or draw for us. Knowing that AI systems can be delusional, we cannot approach them as we would a trusted research assistant or colleague. We will instead have to approach them as we do internet sources, verifying their accuracy before we include their input in our pedagogy or research. This may limit the leverage we receive from AI systems at this point in their development, but our duty to perform “as unto the Lord” probably includes a call to invest the effort necessary to create excellent work products.
The fact that generative AI systems may contain inherent bias can undermine our duty of fairness. As Christian academics, we know better than to show favoritism. Prejudice or preference for particular groups is contrary to the character of God, who “shows no partiality” (Acts 10:34). At the same time, most of us drive cars that, if we take our hands off the wheel, will eventually pull to the right or left. That does not mean we cannot drive those cars. It just requires that we compensate for the bias we know is inherent in the instrumentality. When using generative AI systems, Christian professors will have to be mindful of the bias they can contain and adjust for it. That adjustment may require some experimentation and research on our part to determine when the bias appears. Mercifully for us, experimentation and research are part of our brief, and most of us are comfortable performing them. We need only be prepared to dedicate the time before we include generative AI material in our work products.
Much conversation has been devoted of late to plagiarism at senior levels of the academy, and rightly so.12 Using generative AI systems in violation of others’ copyrights, however, involves not just plagiarism but theft. Violating others’ intellectual property rights breaks the Eighth Commandment (Exodus 20:15), and a Christ-animated approach to generative AI would require us to avoid doing so. Again, this does not necessarily place generative AI systems beyond the reach of Christian academics. We have long possessed tools designed to identify plagiarism of protected work. Running AI-generated material through these plagiarism-detection systems would add steps to our use of AI, but not steps with which we are unfamiliar.
The upshot of these examples is that generative AI can, and sometimes should, be adopted by Christian universities. That adoption will simply require more work on our part to avoid the pitfalls inherent in the systems, and now would be an excellent time to advance that work. The global academic community is early in the process of figuring out how to deploy generative AI in higher education. Christian scholars have an opportunity to lead the way in mapping out protocols and boundaries that capture the benefits of AI without falling into the potholes of its faults. It may place relaxed summers further out of reach for us, but contributing our unique gift of moral clarity to the global academy will be worth the work.
Footnotes
1. KPMG, “Supercharge Your Finance Workforce with GenAI,” 2024.
2. Teeranan Pokaprakarn et al., “AI Estimation of Gestational Age from Blind Ultrasound Sweeps in Low-Resource Settings,” NEJM Evidence 1, no. 5 (March 28, 2022).
3. Alvin Powell, “A Revolution in Medicine,” The Harvard Gazette, 2020.
4. Sarah Kuta, “Art Made With Artificial Intelligence Wins at State Fair,” Smithsonian Magazine, September 6, 2022.
5. Sindhu Sundar, “I Use ChatGPT 50 to 70 Times a Day for Everything from Preparing for Professional Meetings to Getting Superglue Off My Fingers,” Business Insider, 2023.
6. “Explore Generative AI on JSTOR,” JSTOR, n.d.
7. Kayla Jimenez, “ChatGPT in the Classroom: Here’s What Teachers and Students Are Saying,” USA Today, 2023.
8. Gregory A. Smith, “Christianity and Libraries: A ‘Conversation’ with ChatGPT,” Christian Scholar’s Review Blog, 2024.
9. Sarah Emmerich, “Artificially Unintelligent: Attorneys Sanctioned for Misuse of ChatGPT,” National Law Review, 2023.
10. Janna Hastings, “Preventing Harm from Non-conscious Bias in Medical Generative AI,” The Lancet Digital Health, 2024.
11. Bobby Allyn, “‘New York Times’ Sues ChatGPT Creator OpenAI, Microsoft for Copyright Infringement,” NPR, 2024.
12. See, e.g., Perry L. Glanzer, “Moral Expectations to and of the Vulnerable: How Both Powerful Politicians and Elite Academics Can Get Them Wrong,” Christian Scholar’s Review Blog, 2024.
Thanks for sharing this article, Larry. As Christian educators, we need to recognize that generative AI represents a seismic shift in the development of computing technology and in the futures of our students. If we refuse to acknowledge that generative AI is here and do not help our students learn how to use it properly and ethically, then we are not preparing them for future ministry or work.
I want to point out that most AI-detection tools are woefully inadequate at correctly identifying AI-generated student work. Academicians need to be extremely careful not to use AI detection in the same manner as a plagiarism detector. Most makers of AI detectors encourage their use as a point of reference for beginning a discussion about the proper use of AI rather than as a punitive measure.
This is a very useful article, and it identifies several important issues. When the author draws attention to the frequent errors (the industry calls them “hallucinations”) found in AI-generated materials, he correctly refuses to call them “lies.” We must avoid ascribing to a machine a God-endowed moral capacity, remembering that AI is only an electrically charged instrumentality.
In a similar vein, I would also add the following reflections:
1. The diagram accompanying the article shows a human finger touching an electronic one, the electronic finger presumably representing a pseudo-divine AI. Despite the obvious allusion to Michelangelo’s “The Creation of Adam” fresco in the Sistine Chapel, and though I sometimes make such allusions in my own work to see what response I get,* I strongly contend that we should not depict AI in any divine or humanoid form. It is merely a machine. It is not God. It is not made in God’s image with the imagination, authority, responsibility, and moral capacity given by God to humanity alone. To ignore this reality in our depictions of AI is to risk confusion and to misunderstand God’s created order.
2. Dr. Locke draws attention to biases in AI products that stem from the data sets on which they are trained. It is important, however, also to note the distorting biases in AI-generated outputs that result from the worldview biases built into AI research parameters by the developers themselves. For example, as many articles have recorded, AI algorithms have been deliberately structured so as to prioritise minority groups in order to redress their perceived inequality, resulting in hallucinations such as female popes and black American Founding Fathers.
3. As AI gives new focus to the issue of plagiarism, I am looking forward to the day when Christian scholars and their institutions will step back from traditional Western cultural positions on plagiarism, cheating, and the like, and embark upon a biblically rigorous and informed discussion of the origin and ownership of knowledge.
4. I agree that AI can simplify and shorten many tasks, and that academics retain an important responsibility to ensure that our AI-assisted work is checked for errors, fairness, plagiarism, and the like. However, reminiscent of some of the challenges presented to Christian in Pilgrim’s Progress, AI’s shortcuts can be counter-productive, at times running the risk of undermining the richness of our God-given creative capacity. Jared Boggess draws attention to this in his 2023 Christianity Today article, “How AI Short-Circuits Art.”
* See my paper about AI and ChatGPT, found at https://allofliferedeemed.co.uk/wp-content/uploads/2024/05/rje-chatgpt2024.pdf