On September 5, 2023, Houghton University held a panel discussion with seven faculty from a broad array of fields focusing on the implications of artificial intelligence (AI) technology for Christian liberal arts higher education. The panelists included Brandon Bate, PhD, associate professor of mathematics; Peter Meilaender, PhD, dean of religion, humanities, and global studies, and professor of political science; Jesse Sharpe, PhD, associate professor of English; Sara Massey, PhD, associate professor of music education; Laurie Dashnau, PhD, professor of English and writing, and director of the writing center; Craig Whitmore, MA, ABD, assistant professor of education; and David Huth, PhD, professor of visual communication and media arts, with myself—Alison Young Reusser, PhD, associate professor of psychology—serving as moderator.
Because AI is a rapidly changing technology, each panel member has since written a brief reflection on their own and the other panelists’ perspectives, benefiting from eight months of intervening experience. A full transcript of the original panel follows in the Appendix. While a broad array of ideas is presented in the essays and transcript, four topics came to the fore:
1. Ethics: Jesse Sharpe, whose background includes library and information science, contended that tools like ChatGPT are created using “unapproved, unattributed, and uncompensated labor.” His reflection “Why This?” critiques AI companies given recent evidence1 of questionable decision-making: “Well-performed intellectual theft does not make it excusable.” Craig Whitmore, in his reflection “On the Ethical Use of AI in Education,” considers how to teach students to use AI ethically.
2. Student Impact: Sara Massey argues that AI could encourage students to skip the key process of learning to write yet sees potential advantages in music education. Laurie Dashnau discusses the different metaphors for AI she highlights with her students. Peter Meilaender develops an argument that student use of AI subverts a primary goal of liberal arts education, making it “much easier to pretend to be interested in stuff without being interested in stuff.”
3. Scholarly Community: Sharpe made the point that academic work in isolation can lead to a shallower understanding of our own limitations. He mentioned, “it’s easy to look at something and say, ‘That’s really good,’ and you have only consulted yourself. But it’s another thing if you have to consult somebody else, and they have to go, ‘Ah, that’s a good idea; but, what about. . .?’”
4. AI and the Mind: Whitmore raised the question of whether AI can truly be a mind, leading to an interesting exchange between Sharpe, who disagreed (“It [isn’t] magic. ChatGPT is math.”), and David Huth, who endorsed the idea (“This mind is incredible.”). Huth provocatively develops this point in his reflection “I Could Plausibly Be A Large Language Model,” invoking Emily Bender and colleagues’ popular stochastic parrots metaphor for AI: “[Large language models] haphazardly [stitch] together sequences of linguistic forms [. . .] according to probabilistic information about how they combine, but without any reference to meaning.”2 Huth asks whether human minds could be seen the same way.3 As a psychological scientist, I would argue that the belief that AI seems like a mind says more about human cognition (i.e., our tendency to anthropomorphize non-human agents4) than it does about AI itself.
The following essays by these faculty members highlight the breadth and depth of the “AI issue” as experienced by faculty at one institution and underscore how much Christian liberal arts faculty can contribute to understanding it.
Artificial Intelligence, Metaphorically Speaking
BRANDON BATE
In our discussion, Laurie Dashnau challenged us to think about AI in terms of metaphors, which is a great way to begin thinking about this topic. Many of us in education were first confronted with artificial intelligence when students used it to “enhance” their writing (or fabricate assignments entirely). These “enhancements” are often impediments to learning and violations of our academic integrity policies. But if we set aside AI’s impact on education, how might we describe its capacities? What’s a suitable metaphor? It’s like photoshopping an image of yourself. Rather than seeing your true likeness, you see what you feel ought to be there instead. The blemishes are removed. The unsightly is deleted. This is not incompatible with our cultural norms. It’s simply a new manifestation of what our society does every day.
Not all uses of AI have this narcissistic tinge. In preparing this response, I reread the raw, unedited transcript of our conversation. Like most transcripts, it lacked the elegance we associate with well-written prose. Even with our most articulate faculty members, there were starts and stops, incomplete sentences, “umms” and “uhhs.” I gave this transcript to a large language model and asked it to “clean it up.” The result was surprisingly faithful to our actual discussion, but much more fluid. In a very real sense, it was the conversation I remembered. This type of “clean up” could have been done by anyone with sufficient writing experience. AI just makes quicker work of it; like a vacuum cleaner or a washing machine, it’s a labor-saving device.
But AI is no ordinary tool. Its capacities are expanding. Tasks that were previously thought impossible for AI are now reality.5 What is it to become? What are its limitations? No one knows.6 Will it cure cancer? Can it one day rescue us from other ills? Some in the AI community, the so-called “effective accelerationists,” feel that it is morally justifiable to pursue rapid, socially disruptive development in AI precisely because it has the potential to alleviate human suffering and move us closer to utopia.7 The appropriate metaphor here is of a deity bringing heaven down to earth.
These are the three metaphors I’ve found most compelling; many more are possible. It’s tempting to embrace one of these as being more representative of reality than the others. Doing so allows us to more easily justify or dismiss AI on moral grounds. And given the social impact of AI thus far, I’ve detected ample motivation for definitive judgments on this matter. But this is a new technology. We don’t know the full extent to which any of these metaphors will represent reality. We can, however, identify and consider the metaphors themselves and evaluate them on moral and theological grounds. Our faculty conversation offered a glimpse into what that can look like. I hope this type of conversation continues.
Artificial Interest
PETER C. MEILAENDER
It has been interesting to revisit this conversation months later and after another semester spent grappling with a slow but steady increase in (ostensible) student work clearly generated with the use of AI. The participants of our dialogue sort themselves along a spectrum from those most open to advantageous uses of AI (David and Brandon—perhaps unsurprisingly, the most tech-savvy members of our group as they work in media arts and mathematics, respectively) to those most skeptical (Jesse, a scholar of Donne and Herbert, and me, the guy who immediately turns off the spell-checker whenever they give me a new laptop). A dialectic emerges between “this mind is incredible” on the one hand and “it’s just math” on the other.
One idea, however, seemed to draw agreement across the spectrum: AI is a “tool,” which like all tools can be used in numerous ways, some of them good, others problematic. At one level, this claim is no doubt true, perhaps even a truism. But at another, it is dangerously and deceptively seductive. It tempts us to ask the obvious question that one asks about a tool: How should we use it? I am not sure that is the question we should be asking. I am more concerned with a different question: How does the use of this tool shape us?
During the panel discussion, I worried that the use of AI makes it easier for our students not to be interested in things. In the months since the discussion, that worry has grown. When I began my teaching career, I cared deeply about whether my students were really learning the content of the courses I was teaching them—in my case, the political ideas of thinkers such as Plato, Augustine, or Hobbes. I still care about that, of course, because these are important ideas. But twenty-five years of teaching cures one of the illusion that students remember much of what one says. They forget (as have I) most of the actual content of their undergraduate coursework unless it is reinforced by subsequent graduate study or work experience.
There is something they can take from their studies, however, that, once acquired, will stay with them: the ability to become interested in things. Artificial Interest, we might say. And I worry that the ease and enchantment—the apparent magic—of the one AI, Artificial Intelligence, will hamper their ability to acquire the other, Artificial Interest.
Artificial Interest may seem like a strange thing to desire, for two reasons: first, because we imagine that people are either just naturally interested in something or they are not, and second, because “artificial” sounds fake or inauthentic. But this is wrong on both counts. People are not just naturally interested in things; to the contrary, it is all too easy not to be interested in things, because being interested requires attentiveness, effort, and a conscious direction of the soul. And “artificial” is simply that which is the product of art, rather than nature. Because humans are by nature intended to grow beyond mere nature into culture, it is actually the artificial that is truly natural for us.
What do I want for my students? That they will grow into adults who have learned how to become interested in things. If they do, their education will have been the seed that fell on fertile ground. We teachers have our work cut out for us if we are to ensure that the tool of Artificial Intelligence does not entice our students away from the goal of Artificial Interest.
Why This?
JESSE SHARPE
I have been grateful for this opportunity to reflect upon the conversation concerning AI that we had in September 2023. It has been helpful, not because I have changed my positions, but because time has allowed more clarity for my concerns with the current state of AI and its incorporation in the classroom.8 The arguments for it often seem to come down to ideas of convenience and efficiency. There is also a component of “What’re you gonna do?” thinking. The technology cannot be beaten, so why not join it? Yet I am unconvinced.
Our universities are built upon ideas of honor codes and codes of conduct. We know that students will drink, do drugs, cheat, harass, and the like, but we do not give up and say, “Well, they’re going to do it anyway.” We believe that the principles behind the rules are more important than the inevitability of students breaking the rules. So, what is it about internet connections and algorithms that makes us disregard our principles? What causes us to give up the fight? If a student plagiarizes on a paper, they get a failing grade. If that student repeatedly plagiarizes, they are expelled. But if a corporation uses, without permission or attribution, authors’ writings to generate text that it then sells or provides as “new” and “unique,” do we celebrate the company in the classroom? Scale of offence should not make the offence disappear; it should make it more upsetting. The academic and intellectual dishonesty of the current AI is extensive, and profiting off others’ work cannot be acceptable simply because it is difficult to trace. Well-performed intellectual theft does not make it excusable. Authors and artists are protesting this technology and trying to get back control of their work. Yet we are supposed to side with those stealing their work. Why?
Our excuses cannot all be about efficiency and convenience. We cannot let those be our guiding principles, especially at Christian universities. Christ did not call us to a life of convenience. AI has been shown to use enormous quantities of energy to generate its texts and images. The technology has already been used to create deep-fake disinformation and pornography. When providing answers to academic questions, it is prone to “hallucinations,” providing inaccurate information. AI’s primary reliance upon English will further accelerate the death of minority dialects and languages. Why is it okay to ruin the environment, eliminate languages, and further erode faith in truth for bad essays, inaccurate answers to queries, and less time-consuming emails? Why is this where we compromise our principles and ethics? Because it seems like magic? Because I can spend a little less time drafting emails to students and colleagues? I just cannot accept that.
Reflection on the Use of Artificial Intelligence in Music Education
SARA MASSEY
Initially, I was skeptical about the use of AI, especially due to ethical issues related to its use in academic writing. I believe that students should learn the skill of creating their own work as writers and not use artificial means. I still hold to boundaries against using AI to author a paper in place of experiencing the process of academic research and writing. However, I have moderated my stance and can now recognize the advantages of AI in some areas of music education, my primary teaching area.
My participation in September’s panel prompted me to consider the efficacy of combining intelligent technology with on-site teaching to enhance the individualization that should characterize 21st-century music education. Research shows that the current trend of student-centered teaching is far more effective than 20th-century teacher-directed teaching. Teaching marked by individualization enhances students’ interest in learning; however, traditional modes where teachers charge the students to ‘sit still while I instill’ remain prominent. Integrating AI technology and music education can potentially enrich classroom teaching resources and improve the technical aspects of music education.9
Personalized learning in the 21st century recognizes that every student has a different learning style and a different skill set. Teachers who employ different approaches, including technology, to meet the learning needs of all students can maximize learning in ways that traditional approaches do not. AI technology can support students ranging from those with disabilities to those who are gifted, in ways that enhance and improve their abilities.10
The prospects for using AI in the music classroom are very broad.11 Since the inception of AI in 1956, this technology has developed into an interdisciplinary and frontier science.12 I admit my initial skepticism about the application of AI in music education, but now I look forward to the emergence of various intelligent tools that will support the improvement of students’ learning efficiency and quality. I do not believe that an actual human teacher can be replaced, but AI can provide music educators with an auxiliary tool that aids in teaching students in accordance with their aptitudes. Additionally, AI can assist teachers in arranging courses more effectively and accurately.
Living and Writing Well in the Age of AI: Conversations and Challenges
LAURIE A. DASHNAU
Since the time of the panel last fall, I have continued to invite my students to create comparisons and metaphors for AI. Although my colleague Dave Huth said “we need to go deeper” than metaphors and provided examples of oversimplification (e.g., a personal assistant or graduate assistant), most of my students’ metaphors have, in fact, provided rich fodder for conversation.
One student likened AI to a tool, saying it should not be substituted for or confused with the work—the thing to be constructed (the writing) itself. Another student compared it to duct tape, saying it is “useful . . . but not the answer to or for everything.” A third student compared AI to their “really smart big brother,” commenting, “I can ask him almost any question and he will likely have an answer. However, from time to time he gives me too much information and that makes it hard for me to understand what he is talking about. At other times he decides to give me a one-line, useless answer. While it is funny when my brother does it, it is irritating when AI does it.”
These students and others have used figurative language to “go deeper,” acknowledging that AI is unlike anyone or anything else. Are the comparisons sometimes limited in their scope? Yes. Are they imperfect? Definitely. Nevertheless, they are a means to the greater end of ongoing deliberation and evaluation. As Lakoff and Johnson write,
The heart of metaphor is inference. Conceptual metaphor allows inferences in sensory-motor domains (e.g., domains of space and objects) to be used to draw inferences about other domains (e.g., domains of subjective judgment, with concepts like intimacy, emotions, justice, and so on). Because we reason in terms of metaphor, the metaphors we use determine a great deal about how we live our lives.13
How I might better live my life in the age of AI is now part of my individual professional development plan. According to a recent Barna poll of adults, Christians, a group with which most of my students and I identify, are less hopeful than non-Christians that AI “can do positive things in the world” (28% versus 39%),14 and I admit that I initially shared this sentiment. Nevertheless, holding this knowledge in tension with the fact that Houghton’s founder and first president, Willard J. Houghton, signed his letters “Yours for fixing up the world,”15 I believe I owe it to my students and my discipline to learn more about the possibilities as well as the perils of AI’s uses and abuses in the writing classroom.
On the Ethical Use of AI in Education
CRAIG WHITMORE
I appreciate that we need to be consistent in our application of ethics to the use of technology. For me, part of the issue of AI use revolves around the difference between learning students and working professionals. Within education, I think the issue of what constitutes an appropriate, “non-plagiarism” use of AI needs to be addressed. I do feel that this “genie” is out of its bottle and that we need to teach our students how to make use of it in an ethical, Christian way.
The first question in my mind concerns the differences between using AI as a student and as a working professional. AI is already in the workplace. Teachers (at least in K-12 grades) are already using AI to help write lesson plans. AI is also a foundational part of using both search engines16 and Grammarly,17 which have been commonplace in teaching for many years. In my own experience, lesson plans from ChatGPT are woefully inadequate, but they do provide a decent outline to start the lesson-planning process. I do not think AI in the workplace will be going away soon, so it could be argued that not preparing our students to use AI effectively and ethically is handicapping them as they enter the workforce.
However, I would not be able to pick out the parts of the theoretical ChatGPT-produced lesson plan that need changes without having the foundational subject knowledge to know what goes into a good lesson plan. Students need to learn enough knowledge and skills so they can assess the value of AI-generated work, just like students need to learn how to properly vet information from any website or Google search. I could see professors gradually allowing more use of AI on projects as students mature in their subject knowledge and learn how to ethically make use of AI.
The second question I have is what such an ethical use of AI would be, particularly relating to plagiarism, which is perhaps the biggest issue involving AI in education. I believe most educators would agree that asking ChatGPT to write a majority of a paper would be plagiarism (though this seems to be less of a problem than I would have thought18). I would not consider clicking to accept Grammarly’s two- to three-word grammar or spelling suggestions as plagiarism (I do it all the time), but I would count accepting entire sentences rewritten by the Grammarly AI as plagiarism. So where would we draw the line? Could a student simply cite that AI was used in the writing of their paper? What if they asked ChatGPT for a beginning outline on a topic? Would that be any different from searching Google for a list of ideas to generate an outline? In this case, I feel that using AI to assist in generating starting ideas is ethical (though maybe not evidence of their own thinking), as long as its use is indicated. A professor could even require the addition of the original AI-generated outline as a way to more clearly see the changes and tweaks made by the student to the AI-generated ideas, concepts, or outlines.
For me, gradual training on how to use and cite AI assistance in schoolwork is the route I would choose. Completely banning the use of AI does not seem like a reasonable stance in our current society. At this point, AI is a tool, not an evil, and not giving guidelines for the acceptable use of AI allows students to make up their own rules about how it should be used. Teachers could require in-class, hand-written assignments to show the knowledge and skill that students have on their own and then allow the appropriate use of AI on digital assignments to show how much more they can achieve when they use such knowledge and skill in conjunction with modern technological tools. I think both are parts of good teaching.
I Could Plausibly Be a Large Language Model
DAVID HUTH
Here is how I know my engagement with this topic should be guided by epistemic humility: the more I learn about AI, the more uncertain and baffled I become.
Eighteen months ago (i.e., when I was ignorant), it seemed obvious that AI is simply the logical next step in an ongoing chain of advances in calculating digital technology. Today, however, nothing about it is obvious to me at all. My highest confidence remains in how little confidence is warranted. In other words, I am one of those people who is persuaded that AI is not just another tool.
My perception is that whatever AI is, it’s not a fancy calculator or “technology” or any sort of object. Instead, AI is an event. It is something that’s happening to us. Happening in the way that immigration happens, pandemics happen, sports tournaments happen, or widespread cultural changes in worldviews happen. Even over the months since our panel discussion, my disposition toward The Event of AI (indeed, I am aware how corny this sounds) continues to shift.
Something I have not changed my mind about is that AI is not like anything. I am unaware of analogies that hold up beyond a surface reading. AI is not like the Internet, not like email, not like the printing press. It is not like the replacement of horses with internal combustion engines. Holding tightly to these metaphors is likely a dead end for good decision-making.
Here is my perspective: the most sensible way of approaching this event is to engage AI actively and intentionally—and to do so as if it is a kind of mind. A crude, alien mind, maturing in its mentality faster than I can anticipate. When I consider other approaches, they eventually come off as inept and off-putting category errors.
I believe this because the accomplishments of human interactions with AI that I judge to be the most positive (protein folding, clear writing, moving writing, novel analytical revelations, compelling music and art) all come from engaging AI on the terms in which it presents itself: as a linguistic interface giving access and order to a boiling pot of nascent thoughts.
My personal experience is that people who reject this definition of the situation do so on principle, rather than open engagement and practice. “It’s not so, because for some reason it can’t be so.” But why not? I think the clumsiness in how we grapple with what AI might be descends from our reluctance to grapple with what we might be. It is utterly flummoxing to consider that a mind could be an instance of stochastic parroting machinery. What if human minds are not created by magic, panpsychic speculations, ensoulment, or even unique evolutionary leaps into “emergence”? What if an actual, creative, reasoning mind can be grown from material processes snapping tiny switches open and shut according to the laws of quantum mechanics, to spit out a statistically patterned string of tokens? How gruesome! And yet. Obviously, if an AI-mind is produced that way, this doesn’t mean that’s how human minds are produced.
But in the absence of other thorough, testable hypotheses, it would mean that this might be the foundation and possible origin of human minds.
I do not think most people want to consider that. I know I do not. Unfortunately (for my instinct to feel special in the world) I cannot discern another plausibly constructive way forward in my ongoing engagement with this event. I can hardly believe I am serious about committing these words to print. Yet here I am.
Appendix
The following is the transcript of the panel discussion on AI in Christian liberal arts higher education which occurred at Houghton University on September 5, 2023. The transcript was machine-generated from a video recording using Microsoft Teams and has been edited for clarity, brevity, and relevance.
Brandon Bate (Mathematics): My first encounter with artificial intelligence happened when I did an independent study on the subject as an undergraduate computer science major in 2004. At that time, artificial intelligence was nowhere near as popular as it is now. It was a niche topic, but one that I found very interesting. Nevertheless, after graduating I pursued a career in mathematics instead. As time passed, I kept an eye on the various breakthroughs happening in artificial intelligence. The state of the technology has come a very long way since 2004. In recent years, I’ve done some work in artificial intelligence with SIL International’s data science division. My work mainly consisted of working with large language models to aid translation quality assurance.
Peter Meilaender (Political Science): . . . I have two mindsets: on the one hand, AI is actually the sort of thing that I’m fundamentally uninterested in. I don’t care about phones, and I don’t care about social media, and it’s just not very interesting to me. But on the other hand, as a teacher, it makes it really easy for students to produce very plausible-sounding work. And I think what I found at least so far, mainly this summer, as the biggest challenge is: unlike the case where a student just copies something from online, and I read it and think “you didn’t write that,” and I Google it and there it is, and I’ve got the evidence of the cheating; instead, I get papers—I’ve got papers that I know the student didn’t write for various kinds of reasons, like other things I’ve seen from them, whatever it might be. And I can feed it into the AI detectors online, and I’ve used several of those, and I get varying results; you know, I give it the same text and one will say, “95% probability this was AI generated,” something else will say “30% probability;” it’s also a little higher than I’d like, but nevertheless. . . . So, I can’t prove that this work is not generated by the students, but I know it’s not. And so, trying to figure out how exactly to navigate that space, I think, is really the thing I’m puzzled about the most.
Jesse Sharpe (Literature): My background is actually in librarianship, as well as information ethics. . . . I’ve been looking at AI since 2003, maybe 2004, but it was always more from the ethical point of view. And so that’s how I’ve been kind of keeping track of this and paying attention to it. For me, there are huge ethical concerns about it, not just when it comes to plagiarism, but this is unapproved, unattributed, and uncompensated labor. It’s pure exploitation. So, I think, especially for a Wesleyan school, we’re going to have that Wesleyan tradition of standing against the exploitation of labor. Obviously, we’re still carrying forth the “don’t drink alcohol” because of that. Might be another way that could be apropos to apply it.
Also, from a language point of view, because especially . . . ChatGPT is hoovering up all of these . . . words. Primarily the words they’re hoovering up are English and from developed nations and with the rise of language death throughout the world and the sort of dominance of language from a few communities, [it] could exacerbate that because it can reproduce English from developed nations quite well, but it’s going to have a harder time reproducing languages from minority language groups.
So, I think it could exacerbate a lot of problems and I realize that it’s [being adopted] quite quickly, but I always believe that resistance is possible. I like that idea. And I think actually it can quite easily be beaten in the classroom. Just ask for direct quotes from things from the library and the library databases and it doesn’t have access to those. It’ll make up quotes, but those are very easy to prove to be made up. So, [it’s] not a difficult system to game our way just as it’s not a difficult system to game the students’ way. And so, I’m not sure that it is something that we should actively be trying to bring into the classroom in any sort of approving manner.
Peter Meilaender (Political Science): [joking] Can I just be sure that the provost heard that line about the Wesleyan tradition and not exploiting labor? [laughter from others]
Jesse Sharpe (Literature): Oh, it’s very big.
Alison Young Reusser (Psychology): That’s fine. Okay. . . . So now, getting a musician’s perspective.
Sara Massey (Music Education): Okay. I’m Sara Massey. One of my courses that I teach every fall is Music and Christian Perspectives. It’s a required course for every music student, whether it’s music performance, [or] music education. Even though that title doesn’t tell you this, one of the things that’s embedded in that course is my responsibility to help these music students learn how to write a research paper. We have [library faculty member Doyin Adenuga] come in and help them understand library resources. And I spend a lot of time making that process incremental to ensure that it is, in fact, their work.
So, I won’t address right now what I’ve come up with that may solve the ChatGPT problem, but I will say I’m quite interested because even though some arguments I’ve read would say it’s okay, you know we have calculators calculate math problems, but I still think that learning how to write creatively and knowing how to use research tools properly is important. It’s an invaluable skill. So, I intend to employ some of the things I’ve researched. I’ve really not been at this too long, probably two years, but certainly I’ve been more intentional the last school year to try to think about how I might consider the exploitation issue, the ethical issue, those kind of issues. [Provost David Davies], you might be interested to know that it can even compose music—music compositions can be generated [with] ChatGPT. It’s more predictable, but it could. That’s not part of my skill set—or not part of the course requirements for my course—but it’s interesting to note the breadth of what ChatGPT can do.
Alison Young Reusser (Psychology): Thank you. Laurie?
Laurie Dashnau (Writing): I’m Laurie Dashnau and I teach writing. So, picking up on what you said about the creative aspects of it: I tend to think in metaphors. Myers-Briggs tests tell me that and I talk a lot about metaphors in the classes that I teach, so I’ve been interested in looking at metaphors for AI and ChatGPT. [I’ve] come across . . . dozens, from that it’s like an octopus to it’s like a hawk to it’s like a management consulting firm—like a new McKinsey—to it’s the village idiot to it’s the Einstein and then the gap between the two is very small . . . and that sort of thing.
And I like to talk with students about that in those terms, too, when it comes up, because I find that the language oftentimes even of cheating or kidnapping or stealing doesn’t work well in terms of the ethics or the morals. But if I show them samples and I ask them, “What metaphor do you gravitate towards in terms of what is happening here?” I can get them to converse about it in a way that I can’t if I’m talking about what’s morally right or what’s morally wrong. I really enjoy doing that and it also helps me in individual conversations with them. Although when I do, like in Narrative and Personal Essay (of all classes), I had a student who would be writing very small pieces of personal narrative but then talking about interpersonal relationships in ways that just—what do you know?—I put it in these detectors and had come up something like 50% or 75%.
So, there’d be these big [similarities] and it helped me start [conversations with students with this question]: “Where did this come from?” [Unfortunately,] I wouldn’t get very far. “Well, [it’s] just my reflection. I know you want us to reflect and not just [research]. “Okay . . . did you look at anything?” “Yes, just to kind of help me get started,” And so . . . in this [one student’s] case, she thought of [AI] as a conversational pal. . . . And then she started, so to speak, “mirroring her conversational pal [like a friend would do],”. . . as if you’re talking with someone and you pick up their language patterns. [Thinking of it this way has] helped me in ways I hadn’t even thought of, [but] it’s still, it’s [an] extremely daunting [process of sifting through metaphors, knowing none are perfect].
Alison Young Reusser (Psychology): Craig?
Craig Whitmore (Education): Craig Whitmore, education department. Really, it’s just this last year that I’ve become aware of AI and some of the different things it can do, although from what I’ve found in trying to learn more about it (technology, especially in education, is where my focus and my studies are), this kind of thing has always been around for a long time. There’s always disruptive innovation: things that happen that are going to change the course of history. Or in some cases they do and in some cases they don’t.
And in education the introduction of ballpoint pens was considered horrific at one point and the idea that you had calculators instead of slide rules. And for some of those it seems like some of the same arguments could be made both ways. But especially in education as a teacher, if students turn something in, it used to be that I could look at it and say, “Oh, that’s not your language.” You know, “I just know you didn’t do that.” But at the university level (this is my second year here), it’s a little more challenging. They’re a little more well-spoken than the 7th graders I’m used to. But still, if I took something and put it in Google it wouldn’t be found and [I would’ve] said, “Okay, well, you must have written that.”
But I do know, from what I’ve seen, that there are different detectors you can use, and I haven’t tried using them in class to vet any of the turned-in assignments. One of the things that I’ve heard—kind of what you were speaking to, Jesse—was ideas for crafting assignments in such a way that they can’t readily be created through AI, or so that it is obvious that they weren’t. Asking . . . for something that is a personal type thing or something that can’t just be outsourced online.
So, my son, just this week—he’s attending a college in Virginia—he sent me an article19 about technology and Christianity and it was really timely, and the author’s idea was that technology in itself is not good or bad, but it’s not neutral, either. It leads us in a way. It leads us either away from Christ or towards Christ. How it’s set up, how it’s been made, definitely how it’s used, but even just how it’s crafted. There was a quote in the article, and I don’t know who, I didn’t write it down, but that the medium is the message. That the way that it’s set up is the way that, kind of, takes you in a direction.
And so, I think that’s a big question about AI and education, whether or not we allow students to use it, they are going to use it when they get into the workplace, especially as teachers—K-12 teachers. It’s projected from what I’ve read, that there’s going to be an explosion in K-12 teachers using ChatGPT to write their lesson plans, to source ideas for assignments. In some ways it seems very similar to looking something up on Google. I can search on Google, I can go to ChatGPT and ask it. When I go to ChatGPT and say, “Write me a syllabus for educational psychology at a university level,” it gives me one that is pretty doggone close to what I’m using in my education psychology class. But my syllabus is based on syllabi and books from previous teachers and previous instructors. But I don’t know where ChatGPT is [getting information from]. So, if I could look at my sources and there is a reason for using those, but you can’t with ChatGPT, I think that’s an issue too.
I think I fall more on the side of being in favor of teaching students to use it responsibly, but I would also fall on the side of teaching junior high students to use their cell phones responsibly in class, and that’s never going to happen. I mean some of them will, but 99% won’t. I don’t know that that’s feasible, but I’d like it to be.
Alison Young Reusser (Psychology): . . . Setting aside the plagiarism [and] academic misconduct stuff until later, because I know that’s a huge question. David [Huth], I was reading through some of your resources and you brought up, periodically, you’ve sort of been testing it out in different ways, but you would often have a line where you would say, “I’m not sure about the ethics of this and I’d love to hear what people think.” So, could you maybe give us one example off the top of your head from that and maybe have some feedback from the panelists?
David Huth (Communication): Yes, sure. And I’m glad we are setting the plagiarism and cheating topic aside, for now. I have things to say about that as well, but this is more interesting. . . . So, this morning, a student emailed me the same thing that several other students emailed. And the answer to their set of questions was that I was really glad to have them in the class, they can find everything in the syllabus at the top of the Moodle page, and if they click a certain button then they’ll be able to access the videos through the links on the Moodle page.
There’s a way for you to train ChatGPT on your own personal writing style, so I’ve given ChatGPT ten emails that I wrote to colleagues, and I’ve labeled them “Dave’s Colleagues Email Writing Style,” and I’ve fed it ten or fifteen emails that I’ve sent to students, and I’ve labeled that “Dave’s Student Email Writing Style.”
Well, I just didn’t want to write another email to this student. So, I opened ChatGPT, I sort of highlighted the “Dave’s Student Email Writing Style,” and I kind of typed some rough notes. I said, “Please tell this student. . . .” And I said basically what I just said to you, and—no, sorry, [laughing] I was using voice dictation. I wasn’t even typing. I was using [AI] voice dictation, and I was talking to my laptop. I said, “Tell the student that they can find [the information] at the top of the page . . . the syllabus is here and there, and I’m very happy to have them in the class.”
And the email was spectacular. [laughter]
It sounded like my writing style. I was very happy to have [AI].
I always edit [these emails]. There’s always something that feels a little bit off. So, I changed a few words. I deleted something ChatGPT guessed that I would say, something like, “Enjoy the warm weather.” I would never say that. [laughter]
So, I just deleted that sentence. I give ChatGPT instructions. I always say, “Professional but friendly style”—something like that. And sometimes ChatGPT guesses what that means while trying to fit it into my own writing style. It even signed off, “Later,”—“Later, comma, Dave.” Often, I’ll actually do that. So, it picked up a lot of my own writing style. And I just sent it off. And afterwards, I was thinking, “Man, did I just . . . did I lie to someone? Do I have to go to hell now? What is happening here?” And that’s just one example.
Over the summer, I taught a summer class. I told ChatGPT the topic that I was teaching, and I told ChatGPT the things I wanted to assess students on. Then I said, “Please give me an assignment, a kind of assignment that requires . . . work outside of class that they’d have to engage personally in, and please make it, as best as you can, something that they can’t complete with ChatGPT.” [laughter] And it gave me a good, it gave me a really good assignment. I thought, “Wow, that’s really good.”
I thought, “I don’t know that I would ever even think of that.” So, I used [that assignment]. You know, I copied and pasted ChatGPT’s suggestions into my own assignment instructions format. Then I edited it. I added a few things. I took out a couple things that ChatGPT suggested, and I sent it off. The student responses were fantastic. They were great. You know . . . maybe they were using ChatGPT to write out their conclusions.
It’s terrible, it’s unending: the race for the ChatGPT-proof assignment. Everyone is trying to find a bomb-proof assignment. But as it goes, I trust my students. I’m always very open with them about how I expect them to use ChatGPT and not use ChatGPT . . . and I always put that right in the instructions. I think: “Okay, my expertise . . . recognized how good this assignment was. But my expertise didn’t generate the assignment from the ground up. I instructed this unpaid labor force [to do it].” I’m really interested in thinking more about that—as these machines become closer and closer to what might be reflective of a sentient being somehow.
And another example: did I cheat somehow? Did I just do something in my job . . . that I would not want students to do in their assignments? . . . I think about this every day. I think about it all the time. And I always feel a fluttering in my stomach. But then I move on because I’m very busy and there is a lot going on. And the students are . . . sending me good work and I’m assessing the work myself. So, that’s just a couple examples. I don’t know if they were helpful or not, if that’s what you meant, Alison.
Alison Young Reusser (Psychology): No, that’s exactly [it]. Thank you so much because I hadn’t even considered any of that before I started reading through that document. That’s really helpful. Let’s just open it up. Does anyone have an initial idea in response to what David just shared?
Sara Massey (Music Education): This might not be exactly what you’re looking for, but I recently read an article by an Ivy League professor20 who had been doing the kind of research that David had been doing, and he decided he was going to generate an exam for a large class—a take home exam—that ChatGPT would take, and then he would submit that exam to his graders under a false name. And then the graders [graded] it according to the rubric that he’d set, and the fake student made a C-minus on the exam. In my mind, I thought, “I could do that.” But then I would set the standard for passing that test as a C. So that C-minus student would technically fail the exam. Do you see what I’m saying?
In the education department—and I am music education—students have to make a C anyway to be able to move through the system. That’s just one thought about the way I would assure that students are, in fact, doing their own work.
Alison Young Reusser (Psychology): I still want to talk about the academic misconduct side, but more on the side of ethics. Let’s start with Jesse and then go back.
Jesse Sharpe (Literature): Dave, this made me think of the history of the introduction of email into the workforce. Before the introduction . . . of email into the workforce, you had secretaries where you dictate to the secretary and the secretary [can write it up], and then send the letter out. Then they introduced email and they said, “Oh, we don’t need these secretaries anymore,” so they fired all the secretaries and put the work onto the people. And then, also, it was something you could take home so you could keep working twenty-four hours a day. Wonderful thing, right?
But . . . the way that you described using [ChatGPT to write] your email is how somebody would’ve used a secretary . . . in the pre-email era. And . . . anybody who’s written anything that’s been put on the Internet that’s openly available has contributed to ChatGPT. We’re all the unpaid labor as well.
But this is it: before, you would’ve had a secretary that you’d have interacted with, and that person would’ve gotten a paycheck, and it would’ve contributed to their household. But also, it would’ve been a human being that you would’ve been interacting with. When we built our syllabi, we’d be interacting with colleagues, we’d be building the community of learning, the community of education. We’d be growing off each other’s ideas. [ChatGPT] encourages us to do things in isolation, and I’m not entirely sure that’s healthy either, as we’re more and more isolated. And I think, as we saw from COVID-19 lockdowns, that isolation isn’t healthy.
This is something that, once again, the ethics of it could be that it might be doing quick fixes that have unintended long-term consequences. And some of those unintended longer-term consequences are not just unpaid labor—exploitation of labor and the ethical qualms that come with that—but also the slow loss and deterioration of various communities as we interact with one another less and less. . . .
We need things to help each other grow, because it’s easy to look at something and say, “That’s really good,” and you’ve only consulted yourself. But it’s another thing if you have to consult somebody else, and they have to go, “Ah, that’s a good idea; but, what about . . . ?” We start to get this idea of a bias confirmation: It looks good to me, so it is good. And we have fewer people looking over it. There’s a danger in that as well.
Alison Young Reusser (Psychology): Peter?
Peter Meilaender (Political Science): Mine aren’t as profound as Jesse’s thoughts. And maybe a one-level-down analysis in a sense. Dave’s examples— when he asked, “Have I done something unethical?”—I think he was thinking of the way in which he represented himself toward his students, as opposed to other people he might’ve been somehow drawing upon. From that angle, I don’t think either of the examples strike me . . . as unethical. I think they’re different examples. The second one, to me, isn’t any different from pulling your college syllabus out of your file and seeing what assignment your professor had . . . or you pull a book off the shelf that’s got sample assignments, whatever it is, I don’t see a big difference there.
The first has to do with the way other people think you are or are not relating to them. It’s exactly like using a secretary. I don’t think it’s unethical. I don’t—forgive me, David—I don’t particularly respect it. But I don’t [even] like people who send me emails and can’t be bothered to say, “Dear Peter,” at the beginning. I just think it’s lazy and I think it’s rude and it’s not that hard. It takes about a half a second to type that. I think it’s not that hard to write an email to your students; you just do it, right? It’s part of the job. I don’t see anything unethical about it, but I wouldn’t do it.
Alison Young Reusser (Psychology): Brandon.
Brandon Bate (Mathematics): Nobody likes to do repetitive tasks, such as explaining to your students via email that the information they want is in the syllabus. AI technology does help with performing such tasks. Dave mentioned that he trained ChatGPT to replicate his style. Is that right?
David Huth (Communication): Yeah.
Brandon Bate (Mathematics): So, in some sense it’s more authentically representing you. There are two ethical issues that I see related to this type of use of AI. One is that, as Dave pointed out, AI can replicate you. Probably sometime in the future, maybe you pass away and there’s a loved one that misses you. They could upload videos and other media and create a virtual you. They could then talk and interact with an AI that is just like you. I think there’s an ethical issue with having a technology that can make copies of yourself, or at least copies that seem good enough for social purposes. So, that’s a question to think about.
The other ethical concern I have is analogous to what Jesse was describing with the breakdown of actual social connection that happened with the advent of social networks and email. The danger with technologies like ChatGPT is that, along with further degrading social connection, they can take away our sense of individuality. All of us faculty have endured the painful process of writing down one’s thoughts, reading them over, being immensely dissatisfied with the result, and having to change or start over. But the result of this process is that you find your own voice, your own words, and your own style. AI technology can essentially provide you with a way to avoid that process. The result is going to be a fairly vanilla and generic writing style.
I like that we can get a lot of good work done using AI, but you may then look back at your work and say, well, who am I really? I’m good at using an AI and that’s about it. That’s not so much an issue for us who are older because we have that experience of crafting our own voice, but if you’re younger, this could be a real issue.
Alison Young Reusser (Psychology): That’s a psychology question.
Laurie Dashnau (Writing): Can I jump in? I have an example, one that analyzes style. Here’s a paragraph, and then it comes back after being put in ChatGPT and it tells you things like, “Here’s the readability score according to [X].” And then here’s the tone type: friendly; intent type: informal; audience type: general; style type: informal; emotion type: mild; domain type: general.
This would suggest . . . what David said about his experience with students. And then what was just said about [how] it takes away your individuality, also made me think we do a lot of writing like [this] for [class. For example,] today in Writing 101, we read Linda Flower’s piece, “Know Your Audience.” [AI] takes away the individual person [and weakens the sense of a specific audience]. Dave’s example was [that] he wouldn’t say, “Enjoy the warm weather.” I would, to certain students who I know love warm weather. I wouldn’t to other students or students I don’t know well enough [to know they like warm weather or would consider that comment a pleasantry]. So, that’s what it does: it takes away their individuality as well.
And for whom are you writing? Are you writing for a general audience? Are you writing for an encyclopedia? . . . [O]f course, students are going to say, “I’m writing for this class.” But even with this class, this class spring semester isn’t this class last semester in terms of what we have discussed, in terms of the different things that are being emphasized. I mean somewhat, but it strips that individuality away from the recipients, too.
David Huth (Communication): Well . . . I just wanted to interject that ChatGPT doesn’t send the email for you. ChatGPT writes an email and puts it in a window that you must copy and paste out of. If you know the student likes warm weather, you can add that.
Here’s something that also happened over the summer. It wasn’t a ChatGPT-generated email. These other emails that I sent were basically pro forma things that I sent to a number of students [and copied and pasted from a template]. One of the students I had a conversation with about their Resident Assistant (RA) training earlier on. . . . So, I added, “I hope RA training is going well. Let me know if you need to adjust the schedule anymore. Later, Dave.” . . . There’s nothing that’s stopping you from doing that with ChatGPT. In fact, I removed that bit about the warm weather because I felt like I don’t know the students well enough . . . I kind of individuated it to myself slightly more by doing that edit. [laughing] That “defense” is small potatoes, because what we’re getting at here is that there’s something much larger looming underneath all of this.
Laurie Dashnau (Writing): That was just a small example.
David Huth (Communication): And I’m glad you mentioned metaphors one more time. Everybody who’s spoken has used a metaphor for this. But I’ve stopped doing that.
Every time you start to use a metaphor—“Oh, this is like unpaid labor,” “This is like a secretary,” “This is just like spell check,” “This is just like Googling something and copying off whatever”—we feel we have to do that because we’re required in our jobs to [find metaphors]. We want to categorize all the things that we’re doing.
But ChatGPT, and artificial intelligence writ large, is not like anything. Not any of these things. If you only think of it as a [more powerful] personal assistant, or as a graduate assistant . . . it never goes to that deeper level that I—like everybody who has spoken—seems to be reaching for. That thing [about AI] that’s a little bit deeper. I think if we can get at that, that’s something unique that our faculty can contribute to this whole global thought process.
I sent some resources to Alison. All institutions are having this exact same conversation. But they’re doing it with a different valence, based on their own institutional identity. I feel like there should be a way for us to think about what’s deeper underneath . . . I think that set of conversations can be helpful if we keep trying to get at what it is exactly that makes Dave so nervous. I don’t want to find a metaphor that I can hang my defense on. What is it exactly that’s happening here?
We may not be able to grab hold of it. We lost the battle in our society’s first engagement with artificial intelligence, which was social media. [AI] has been driving social media—without the chat box—for many years. And it was a disaster. Social media . . . the history of social media and its effect on culture and politics. Elections are swayed by very clever manipulations by artificial intelligence, through social media.
And . . . it feels crazy to even say it. “What!? Come on. Facebook!? Really?! Twitter!?” But it’s been driven almost entirely by artificial intelligence. Now [AI] is kind of emerging . . . where we can see it and engage with it in many more areas of our lives. I like that this group of people is trying to get at that deeper thing.
Alison Young Reusser (Psychology): We could have a series of discussions going in all these different directions. I want to bring this back to how this affects our pedagogy. And I want to leave the last ten minutes specifically for academic misconduct, the practical things that people are thinking about there.
But in the next chunk of time here, I’d like to read some quotes of other people’s perspectives on how AI influences pedagogy. Some of these quotes—I’m not going to tell you which ones—made me yell out loud. But some of them, maybe not. I’m going to read over them in rapid succession here. Pia Ceres, senior digital producer at Wired, said, “If a chat bot could answer a question like, ‘What does the green light symbolize in The Great Gatsby?’ was that question worth asking my students anyway?”21 Another person, a commenter on a similar article, “If AI can write better than most humans, will the companies employing college graduates really care as long as it serves their needs? Being nostalgic for literary talent might be akin to being nostalgic for cursive writing, abacus calculations, reading dead languages, equestrian skills, and such.”22
A high school English teacher [Evvy Fanning’s] perspective: “We’re going to have to assign work that requires something human, something ChatGPT doesn’t know.”23 Skipping forward a little bit, a listener’s tweet quoted on NPR: “I foresee the words ‘prompt engineer’ showing up in job applications and resumes as all the schools ban kids from developing those exact skills.”24 [From Jay Wingard, a secondary math teacher:] “I don’t think using ChatGPT for the intentions of learning is ever a bad thing, but academic misconduct becomes a bad thing.”25
And even further than that, some of you might have read or heard about an Atlantic [article]—I’d say David, you certainly did, because you referenced it. “The End of High School English”: “There are many, many ways to have an experience with a piece of text and to demonstrate learning about a piece of text. You can do a drawing, you can do a presentation, but we always assume writing is an essential way to engage, and maybe that is not true anymore. Maybe we do not need to write anymore.”26
That’s just a sampling. Let’s start on this side, since we haven’t spent much time [there].
Craig Whitmore (Education): You mentioned prompt engineering. I . . . found an article in Time magazine from . . . April [2023]. The title is “The AI Job that Pays Up to $335,000 a Year and You Don’t Need a College Degree,” and that’s to be a prompt engineer.27 But they also make the point that that’s probably quickly going away. And . . . it would be an interesting conversation to have, [but] are we really talking about AI or are we talking about AGI—Artificial [General] Intelligence? I think that’s what it’s called . . . is it truly HAL in 2001: A Space Odyssey, where he’s making choices and decisions? Or is it just a response based on an algorithm? Is that really AI?
But the idea that eventually AI is going to get to the point where it can understand us more, we can just talk to it, and it’ll be like talking to the [Star Trek] Enterprise and it gives us whatever we want and that kind of a thing. It’s interesting. I don’t know that it’s not going to happen. I think that no matter what we in higher education do, in the workforce . . . they may value people and employees, but if they can get it done faster and cheaper and save money and make more money for their shareholders, you can make the argument that that’s the right thing to do for their company. So, they should fire the people and have the AI write the [article]. I don’t know.
I think it’s interesting. There was a podcast or a conference thing I listened to this summer, and I asked the question at the end, “As a teacher of educators, should I . . . train my students on how to use ChatGPT to write [their] lesson plans?” I was just asking for that person’s opinion. And his idea was, you know, in the workforce, people are going to absolutely be doing that. But in education, you absolutely shouldn’t do that. You shouldn’t. You shouldn’t teach them to skip the steps. In my mind, it’s kind of like learning math: you need to learn the basics first. You don’t give the calculator to someone in first grade. At least I don’t think we do. You need to learn the basics before you learn the shortcuts to save your time.
Alison Young Reusser (Psychology): Laurie, I’m really interested in your take as a writing professor.
Laurie Dashnau (Writing): I don’t do too much different, so far, than I did when Wikipedia entered the scene. It just takes me a lot longer to have the conversations or to know if I should have them when I can’t track them down nearly as easily. But I’m always having students do things while they’re in Writing 101 where they’re taking current news and then comparing a text with it. [T]hey’re using . . . quote sandwiching . . . or . . . personal experience compared to this writer’s experience, comparing and contrasting. I don’t do too much. But . . . leading the Writing Center, that’s where it’s coming in . . . [in terms of] how to help colleagues or how to help them have discussions with their students. And even again, like I said, discussions with my own students. It’s a lot trickier than putting a search string in.
Alison Young Reusser (Psychology): Sara?
Sara Massey (Music Education): I intend to flip the classroom, so that I give a prompt based on the reading and then see their original work.
Alison Young Reusser (Psychology): Have them write in class.
Sara Massey (Music Education): Writing in class, yes. And David, if you might be able to clarify this. In some of my reading, it said that . . . for the ChatGPT we’re using now, all of its data was entered by the end of 2021. Now, if it has been—I tried to search to see if it had been updated, so that it’s relying on more recent stuff. But if its knowledge ends at that date, then in my papers I’m going to require them to have at least one very, very current source.
David Huth (Communication): That was the case in the early days of GPT-4, when it was first released, and with GPT-3.5, I think, as well. But I don’t know anyone who relies on that any longer: a) because ChatGPT is really good at hallucinating and faking, and b) because ChatGPT is now being updated as it goes; it’s continuing to learn as it goes.
Sara Massey (Music Education): Okay, okay. Thank you.
Alison Young Reusser (Psychology): Does anyone have a take on the AI-of-the-gaps sort of thing? That if AI can do it, why bother requiring students to do it because they’re not going to have to do it in the workforce?
Craig Whitmore (Education): I do know that when I [asked ChatGPT 3.5] to write a lesson plan, it wasn’t a very good lesson plan. And if I didn’t have the training as a teacher, I wouldn’t know, “Oh, it’s missing this and this and this.” If a student turned that in, they’d get a bad grade on it. If you don’t know the basics, if you don’t know the background, at least right now what we have access to for free is producing something that’s kind of junky. It’s better than nothing, but it’s not good.
Alison Young Reusser (Psychology): Training, helping students recognize what’s good, what’s not. Brandon?
Brandon Bate (Mathematics): I’ve taught programming classes and basically, at this point, it’s going to be standard that all code editors are going to have an AI assistant built in whether you want them there or not. I let students try these technologies. With that said, I’m aware of their limitations. The projects I had students work on were the sort of projects that I knew an AI couldn’t do. The use of AI in this case was a glorified spell check. It could tell you when something was wrong in your code, which was helpful. It could also generate what we call boilerplate, which is code that’s not all that interesting, but is needed, nonetheless. Not too many of my students really took advantage of AI in this way.
One of the interesting things about ChatGPT is it’s also been trained on all the source code that’s on the Internet. I found this particularly helpful when attempting to use a particular software library. The documentation for the library was terrible. I couldn’t figure out how it worked, and so I ended up asking ChatGPT to explain it to me because I knew ChatGPT had been trained on source code that used this library and could at least, in theory, tell me how it worked. And it worked! ChatGPT basically told me how to use this library even though there wasn’t any good documentation. My students also used ChatGPT as a source for learning coding information like this. And so those are the sort of use cases, at least in software development, that we’re going to have.
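Bate’s distinction between the boilerplate an AI assistant can generate and the substantive work a project is meant to exercise may be easier to see with a concrete example. The sketch below is an editorial illustration, not code from the panel or from Bate’s courses; the names (Student, load_students) and the choice of Python are assumptions, chosen only to show the kind of routine scaffolding an assistant can reliably produce while the interesting decisions remain the programmer’s.

```python
# Illustrative "boilerplate": routine scaffolding an AI assistant can
# generate reliably. Student and load_students are hypothetical names,
# not code from any course described on the panel.
import argparse
import csv
from dataclasses import dataclass


@dataclass
class Student:
    name: str
    score: float


def load_students(path: str) -> list[Student]:
    """Read (name, score) rows from a CSV file with 'name' and 'score' columns."""
    with open(path, newline="") as f:
        return [Student(row["name"], float(row["score"])) for row in csv.DictReader(f)]


def main() -> None:
    parser = argparse.ArgumentParser(description="Summarize student scores.")
    parser.add_argument("csvfile", help="CSV file with 'name' and 'score' columns")
    args = parser.parse_args()

    students = load_students(args.csvfile)
    # The substantive part of an assignment -- what is actually done with
    # the data -- is the part a course project is designed to exercise.
    average = sum(s.score for s in students) / len(students) if students else 0.0
    print(f"{len(students)} students, average score {average:.1f}")


if __name__ == "__main__":
    main()
```

Argument parsing, file reading, and a simple record type are the sort of uninteresting-but-necessary code an assistant can supply; the analysis itself is where the learning happens.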
Alison Young Reusser (Psychology): There we go. We have seven minutes left, so let’s—[Peter Meilaender asks to speak]—Yeah, definitely.
Peter Meilaender (Political Science): This is indirectly an answer to your question about, “Why teach stuff we don’t have to?” But I wanted to take that back to Dave’s point at the end of the previous round about what’s at stake here. I don’t know what’s at stake here in the big sense; that’s just too many questions. But in terms of educational institutions, what’s at stake? Think about the quotes you read. They’re all about the reasons why we shouldn’t have to know something.
Alison Young Reusser (Psychology): Yeah.
Peter Meilaender (Political Science): And we shouldn’t have to do stuff. And it’s a kind of fundamental motivation here, which is, “Why know something if I don’t have to know it?” And I think what bothers me about it in the educational context is that it makes it much easier to pretend to be interested in stuff without being interested in stuff. And what we’re fundamentally after for our students is wanting them to be interested in stuff. Far more than [to] be smart or clever, we want them to be interested in things. It cuts to the very heart of the educational enterprise.
And even to bring the ethics back in. I don’t necessarily think there’s an ethical obligation to be interested in stuff, but I don’t admire people who aren’t interested in stuff. I don’t want to be that person and I hope my students won’t be that person. I just think it makes it that much harder to get our students to be the kind of people we want them to be.
Alison Young Reusser (Psychology): Yeah, it’s a good point. And as an aside, one of the assignments I give my students in social psychology is meant to engage them. They’re trying to take what they learned and literally apply it to their own experiences. And I don’t know whether people have used AI to produce their responses, but if they have, they’re missing the whole point. Absolutely.
David Huth (Communication): I want to respond quickly. I love the things that Peter especially is bringing to this conversation. We’re in so much trouble already because we’re already set up to do exactly what Peter fears, because we don’t expect anybody to know spelling anymore. We just don’t. In fact, what we say—
Peter Meilaender (Political Science): I turn [spell check] off in Word.
David Huth (Communication): [laughs] That’s great. But what I say to my students . . . is when a paper or an essay is handed in [with] a whole bunch of dumb spelling errors, we actually tell them to go to an AI to fix it. We say, “This is—come on! All you have to do is use spell check, just run spell check through this! Do it so that it’s correct.” We don’t want to read spelling errors, but we don’t expect students . . . to know spelling.
I just feel like we’re already set up [to fail], with all the tools we have, from the stupid little paper clip in Microsoft Word to all the ways that our bad grammar is automatically underlined.
ChatGPT is going to be buried in Microsoft Word. All Microsoft products. It’ll be buried in Teams. It’ll be correcting the things that I say [laughter] as I’m talking to you now, and when I’m stuttering, it’ll be removing the stutters.28
We’re already set up. And so, if we’re going to resist, we’ve got to figure out, first of all, how to be consistent in the ways that we’re already not resisting.
. . . It’s easy to just ban things. But there should be a better way to open up the world for students to be interested in knowing things because it’s good to be interested in things. That’s what the liberal arts are for. What is this whole weird institution for? It’s not just to train students to get a job in prompt engineering. It’s to get them interested in things for their own sake.
Elwyn Foster (undergraduate student): I just wanted to say, if the only goal is to train for jobs, that seems like a pretty sad goal for a liberal arts institution focused on, well, glorifying God. It should be focused on discovering the greatness of God’s creation in all its various ways, and if we use chat bots to do that, we didn’t learn anything.
Alison Young Reusser (Psychology): Well said.
Sara Massey (Music Education): One of the things I read, and I would love to hear David’s comment on this since you seem to have done a lot more reading than I have, is that ChatGPT does synthesis fairly well, but when it comes to combining critical new ideas, it doesn’t do so well. Maybe you have a different idea on that. For me, whenever I read that I thought, “Okay, I really have got to focus on higher-order thinking skills as I develop prompts and make rubrics for research projects.” I’m sure, David, here we have writing sets and we want to apply this analysis to that.
David Huth (Communication): There’s nothing wrong with pushing yourself to get your students thinking in the higher-order thinking skills and all that. I mean, we should all be doing that already.
But it’s a losing game to try to figure—to try to guess what ChatGPT is going to be capable of next week. We just have to accept that this mind is incredible, and it’s getting better faster than we can outthink it.
We can’t say, “Okay, ChatGPT knows that it takes two hours for a T-shirt to dry on the laundry line, but it thinks that it’ll take six hours for three T-shirts to dry on the laundry line, because it doesn’t understand what water is, doesn’t understand what evaporation is.”29 Yeah, it does. As soon as you find a way to trick ChatGPT, it figures it all out next week. We just have to accept that this is something that will be able to accomplish all that’s asked of it.
Unless we’re asking it to be interested in something. That thing that Peter said is going to guide my thinking for a long time.
But in terms of just answering questions, knowing things about the world, I mean, its own creators are baffled. Always in their interviews, they’re like, “Well, I don’t know how it got so smart, so fast.” This is simply a common theme in the interviews that I read with all those people at all those corporations who build [AI]. They’re always surprised at how fast it gets [smarter].
Alison Young Reusser (Psychology): I think we have time for one more comment. Jesse.
Jesse Sharpe (Literature): I was just thinking about the old thing where, when a new technology is introduced, the idea is that the technology is like magic. I’ve been working with computers for a long time and doing database creation and things like that. I keep thinking we’re all just about five years away from realizing computers are not magic. And then something else happens, and everybody is like “Oh, it’s magic!” And it’s never magic. ChatGPT is math. It’s really, really good at math, but it’s just predictive modeling. That’s why it does so [well] with things from the past, because that’s all it’s fed: stuff from the past. It can’t be fed things that have been written in the future because [they] don’t exist for it.
And so, we’re talking about it as if it’s magic, as if it’s some sort of overarching force. But you just unplug it, and it’s dead. It’s incredibly, incredibly vulnerable, and we take electricity, and we take these different things for granted. But we can’t as a society. And so yes, it’s there, it will be there. AI long predates social media. You use it all the time: if you use Spotify and it suggests some music you want to hear, or Amazon and it suggests what you want to buy, that’s all basically AI. But it’s all math. And it’s really fascinating math, but it’s all math, and it’s all math created by humans, and it’s not greater than us. It’s just a tool.
When we come to any tool, we have to decide, “What does this tool do?” “Does it do it in a good way?” “Is it to our advantage to continue to use this tool?” But we need to recognize it as a tool, not as a mind, not as anything else; that’s falling back into metaphor. The reality is that this is a tool that serves a particular purpose. We need to say, “Is this tool built well and in an ethical manner, and are we going to use it well and in an ethical manner?”
And if we aren’t willing to ask that question, or if we’re too dazzled by the bright lights, then we’ve missed the core of what’s at stake, and we aren’t able to . . . ask the right questions and we aren’t able to answer them. This is what I think we need to recognize: it’s astounding mathematics. It really is. But it’s just math and it’s just predictive modeling, and it can easily be beaten. This is why I said all you have to do is ask . . . students to quote directly from things that are in the library, to work from the library’s databases, because almost all of that stuff is under copyright.
Alison Young Reusser (Psychology): That doesn’t work for psychology papers.
Jesse Sharpe (Literature): APA is a database.
Alison Young Reusser (Psychology): Yeah, but even then, sometimes it can. There are certain things that it still has access to that aren’t behind—because of open science.
Jesse Sharpe (Literature): Yeah, there was a great—Donne scholars asked ChatGPT to write an article on John Donne’s Virginia Company sermon. And it wrote this long essay about it and said that, sadly, it’s no longer in existence. It is in existence. We have [it]; we write about it all the time. Brigham Young University has a great database of all of Donne’s sermons, and you can have it, and it’s completely out of copyright.
Or there’s a great article that just came out today30 about somebody who was looking for a quote out of “In Search of Lost Time” by Proust. He kept asking ChatGPT, “Where did this come from?” and ChatGPT said, “I’m sorry . . . I can’t look, it’s a copyrighted thing,” and the guy’s like, “It was written in 1922. It’s out of copyright,” which is true. And he said, “So can you generate? Where does it say this?” And it kept giving quotes, yet those didn’t sound right, and then eventually ChatGPT admitted, “Well, I’m giving you paraphrases in the style of Proust.”
It’s predictive modeling along a certain way, but he’s like, “This is terrible writing. It’s nowhere near as interesting as Proust.” It’s a tool that does certain things very well and does other things very poorly, but until we start to recognize it and discuss it only as a tool, I think we’re going to have a very, very hard time getting at the truth and having good, fruitful conversations. But it’s definitely not magic.
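Sharpe’s description of ChatGPT as “just predictive modeling” can be made concrete with a toy example. The sketch below is an editorial illustration in Python, not anything presented on the panel, and it is vastly simpler than a large language model: it only counts which word followed which in a small sample of past text and then “predicts” the most frequent follower, with no reference to meaning. The corpus and function names are invented for the illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word followed which in past text,
# then predict by picking the most frequent follower. A large language
# model is enormously more sophisticated, but the principle -- predicting
# the next token from patterns in prior text -- is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers: dict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1


def predict_next(word: str) -> str | None:
    """Return the word most often seen after `word` in the corpus, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None


print(predict_next("the"))   # 'cat' -- seen twice after 'the'
print(predict_next("cat"))   # 'sat' -- tied with 'ate'; first occurrence wins
```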
Alison Young Reusser (Psychology): There’s so much more that we can talk about and . . . we obviously didn’t have time to even get to the academic misconduct . . . discussion, but thank you so much for taking the time to share your wisdom with us. Thank you, David, for coming in remotely.
Footnotes
- Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson and Nico Grant, “How Tech Giants Cut Corners to Harvest Data for A.I.,” The New York Times, April 6, 2024, https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html.
- Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT 2021 (March 3-10, 2021): 617, https://doi.org/10.1145/3442188.3445922.
- When asked how she knows she isn’t a stochastic parrot, Emily Bender responds, “I am not going to have conversations with people who will not posit my humanity as a basic axiom of the conversation” (Emily M. Bender, “Resisting Dehumanization in the Age of ‘AI,’” Current Directions in Psychological Science 33, no. 2 [April 2024]: https://doi.org/10.1177/09637214231217286).
- Adam Waytz, Nicholas Epley, and John T. Cacioppo, “Social Cognition Unbound: Insights Into Anthropomorphism and Dehumanization,” Current Directions in Psychological Science 19, no. 1 (February 2010): 58–62, https://doi.org/10.1177/0963721409359302.
- David Silver, Julian Schrittwieser, Karen Simonyan, et al., “Mastering the Game of Go without Human Knowledge,” Nature 550, no. 7676 (2017): 354–359.
- Jason Wei, Yi Tay, Rishi Bommasani, et al., “Emergent Abilities of Large Language Models,” Transactions on Machine Learning Research (August 2022): https://doi.org/10.48550/arXiv.2206.07682.
- Daniel Soufi, “‘Accelerate or Die,’ the Controversial Ideology That Proposes the Unlimited Advance of Artificial Intelligence,” EL PAÍS English, January 6, 2024, https://english.elpais.com/technology/2024-01-06/accelerate-or-die-the-controversial-ideology-that-proposes-the-unlimited-advance-of-artificial-intelligence.html.
- Since the initial discussion was primarily about ChatGPT and text generating AI, I will primarily limit response to that, though I do believe other areas of AI use have ethical issues.
- Dan Dan Dai, “Artificial Intelligence Technology Assisted Music Teaching Design,” Scientific Programming 2021, no. 1 (January 2021): https://dx.doi.org/10.1155/2021/9141339.
- Xiaofei Yu, Ning Ma, Lei Zheng, Licheng Wang and Kai Wang, “Developments and Applications of Artificial Intelligence in Music Education,” Technologies 11, no. 2 (March 16, 2023): 42, https://doi.org/10.3390/technologies11020042.
- Jing Wei, Karuppiah Marimuthu and A. Prathik, “College Music Education and Teaching Based on AI Techniques,” Computers and Electrical Engineering 100, no. 107851 (March 14, 2022): https://dx.doi.org/10.1016/j.compeleceng.2022.107851.
- Xiao Chen, “Research and Application of Interactive Teaching Music Intelligent System Based on Artificial Intelligence,” Proceedings of the International Conference on Artificial Intelligence, Virtual Reality, and Visualization, no. 1215302 (December 16, 2021): https://doi.org/10.1117/12.2626819.
- George Lakoff and Mark Johnson, Metaphors We Live By (Chicago, IL: University of Chicago Press, 1981), 172.
- Barna Group, “How U.S. Christians Feel About AI & the Church,” Barna, November 8, 2023, https://www.barna.com/research/christians-ai-church/.
- Amanda Zambrano, “Christ-centered Community|Fixing Up the World,” Houghton Magazine, February 20, 2023, https://www.houghton.edu/hc-magazine/christ-centeredcommunity-fixing-up-the-world/.
- Stephen Shankland, “Google Reveals its AI-powered Search Engine to Answer Your Questions,” CNET, May 10, 2023, https://www.cnet.com/tech/computing/google-reveals-its-ai-powered-search-engine-to-answer-your-questions/.
- laLane, “Grammarly Is AI — We’ve Been Using It All Along,” Medium, May 21, 2023, https://medium.com/@lanekwriter/grammarly-is-ai-weve-been-using-it-all-along-9a570af9e029.
- Arianna Prothero, “New Data Reveal How Many Students Are Using AI to Cheat,” Education Week, April 25, 2024, https://www.edweek.org/technology/new-data-reveal-how-many-students-are-using-ai-to-cheat/2024/04.
- Derek Schuurman, “Technology and the Biblical Story,” Pro Rege 46, no. 1 (September 2017): 4-11, https://digitalcollections.dordt.edu/cgi/viewcontent.cgi?article=2949&context=pro_rege.
- Samantha Murphy Kelly, “ChatGPT Passes Exams from Law and Business Schools,” CNN, January 26, 2023, https://www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html.
- Wired Staff, “The Good and Bad of ChatGPT In Schools,” Wired, March 9, 2023, https://www.wired.com/story/gadget-lab-podcast-589/.
- Benjamin David Steele, July 30, 2023, comment on Corey Robin, “How ChatGPT Changed My Plans for the Fall,” Corey Robin, July 30, 2023, https://coreyrobin.com/2023/07/30/how-chatgpt-changed-my-plans-for-the-fall/#comment-328725.
- Evan Dawson and Megan Mack, “Teachers On AI’s Role In Education,” Connections With Evan Dawson, NPR WXXI News, Rochester, NY, August 10, 2023, https://www.wxxinews.org/show/connections/2023-08-10/teachers-on-ais-role-in-education.
- Wired Staff, “The Good and Bad of ChatGPT In Schools.”
- Dawson and Mack, “Teachers On AI’s Role In Education.”
- Daniel Herman, “The End of High-School English,” The Atlantic, December 9, 2022, https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/.
- Nik Popli, “The AI Job That Pays Up to $335K—and You Don’t Need a Computer Engineering Background,” Time, April 14, 2023, https://time.com/6272103/ai-prompt-engineer-job/.
- David Huth joined this panel remotely via Microsoft Teams.
- Huth is referring to an often-cited argument that Large Language Models lack common sense or an internal world-model.
- Elif Batuman, “Proust, ChatGPT and the Case of the Forgotten Quote,” The Guardian, September 5, 2023, https://www.theguardian.com/books/2023/sep/05/proust-chatgpt-and-the-case-of-the-forgotten-quote-elif-batuman.