I’ll begin with a concession: by the standards that tend to govern education today, widespread employment of generative AI tools seems a marvelous idea. As colleges respond to the “enrollment cliff” by embracing market values and selling commodified diplomas to prospective student-consumers, a promise of the ability to leverage generative AI tools to synergize human-bot capabilities for the purpose of optimizing (blah blah blah) is downright logical—maybe even worshipful (never mind what we are worshiping). And, with the passage of the so-called “Big Beautiful Bill’s” financial incentives for colleges to prioritize earning potential in their curricula, it appears these standards aren’t going anywhere soon.
So the question for faith-based educational institutions is not whether widespread generative AI rollouts are economically viable; at present they very clearly are. Rather, the question that we in the Christian liberal arts must ask ourselves is whether economic viability is a wise standard for measuring value. To put this question another way, should we measure value by the same metric that turned pornography into a multi-billion-dollar industry? After all, the economic viability of this industry is clearly catching the attention of secular institutions in the liberal arts tradition. Why wouldn’t it? How could a board of directors sleep at night knowing it has failed to provide its patrons with access to such a massive economic pie? And is it really a bad thing that some graduates might, with an initial investment of a good camera and an internet feed, stand to make six figures (earning the institution a healthy funding package for helping to raise the GDP)? Think of all the newly admitted customers whose earning potential will be exponentially improved with that money.
Now, let me be very clear: I am NOT equating generative AI use to pornography. I am, however, pointing out a profound tension in appeals to market viability at confessional institutions. That markets are better drivers of economic decisions than state-directed socialism is demonstrable; that this makes capitalism inherently good is not. In fact, capitalism quickly becomes a value vacuum, justifying that which will generate capital rather than that which is good, holy, and beautiful. See, for example, the rise of fast fashion and the prevalence of appliances designed to wear out soon after the warranty expires. As Huxley’s imagined World State reminds us, “ending is better than mending.”1 How quickly the value of quality gives way to the values that drive Steinbeck’s “bank monsters,” which “don’t breathe air…They breathe profit.”2
It’s time, therefore, that we who teach at confessional institutions give up economic viability as a primary standard of value. Of course, we all want our students to leave our colleges with earning potential, but we also presumably believe that the engineering student who leaves a six-figure starting salary on the table to help build life-saving infrastructure in the developing world has made a wise decision. And I hope we all also believe that the federal funding our institutions lose because of that decision is an investment in a far more worthy economy—an economy that prioritizes the dignity of the Imago Dei above the dignity of capital. Are we willing to trust that God can be made great in our and our students’ lack of preparation for things that are not worthwhile? If so, then the only question before us is whether generative AI in an educational context is worthwhile—whether it is good.
Judgment of worth is, of course, an impossible standard to meet with epistemological certainty, and this impossibility cuts to the very heart of the human condition. In the Fall, Adam and Eve sought inappropriately to claim authority to discern good and evil. But we have eaten the fruit, and we’re now stuck with the question. And though we cannot answer it, we ARE called to pursue an answer: “whatever is good…think on such things;” “if any of you lacks wisdom, let him ask God.” Historically, wrestling with these kinds of epistemological uncertainties was the domain of philosophy (in the classical model) or theology (in the ecclesiastical model). As the proverbial “queens” of the liberal arts, these domains provided the purpose that drove the study of the more “practical” disciplines. In the 200 years following the Enlightenment, as unified conceptions of sophia (wisdom) and theos (God) were increasingly fractured, the university created the humanities as a sort of humanistic theology—the study of what humanity can achieve and has achieved from within the boundaries of what Charles Taylor has termed “the immanent frame.” Thus, the humanities are the modern inheritors of the “queenship” of the liberal arts, and those of us who engage the humanities from within a confessional framework seek to collapse the distinctions between philosophy, theology, and the humanities. Our primary purpose, then, is the study of humanity’s effort to bear the Imago Dei.
But the Enlightenment was not merely content to unseat philosophy/theology as the queen of the liberal arts; it quickly took aim at the value of abstractions—of wrestling with epistemological unknowns—as well. The result was a new competition between philosophy (i.e., “the love of wisdom”) and pragmatism as the chief end of education. In a pragmatic model, moral edification and the development of character are subordinated to efficiency, and questions of value or purpose are subsumed into “objective” values—like “the market” or “social progress.” The modern axiom “those who can, do; those who can’t, teach” is the gospel of this new model—never mind the fact that this axiom is a barely century-old bastardization of Aristotle’s much older axiom “those who can, do; those who understand, teach.” The question before us, then, is which takes priority: understanding (wisdom) or efficiency?
Generative AI tools are specifically designed—even targeted—to help students avoid the development of wisdom. See, for example, David I. Smith’s recent article in this very blog exposing the Adobe AI tool’s deliberate, explicit invitations to create shortcuts to learning—shortcuts that actively preempt values like understanding, justice, and openness to the Spirit in the interest of efficiency. In other words, these tools are designed to let wisdom and moral edification, the responsibility of the Imago Dei, rot on the vine. I’m reminded of the teachers who protested the widespread rollout of calculators from the 1960s through the 1980s. The country disregarded them, and now numeracy rates in the USA are cripplingly poor. Sure, correlation isn’t causation, but it does make causation plausible. We chanced it with numeracy; do we really want to chance it with wisdom?
Sadly, I’m already seeing access to generative AI tools erode the convictions of Christian educators. I have encountered numerous students who claim to have been encouraged by faculty members at confessional institutions to streamline research, curate content, and impose superficially “correct” syntax at the expense of sitting patiently with their own ideas until they have discovered the inner workings of their fearfully and wonderfully formed minds. After all, discovering these inner workings is troublesome, and if, as Huck Finn reasons, “it’s troublesome to do right and it ain’t no trouble to do wrong, and the wages is just the same…”3 But the trouble is worthwhile. Even in writing this reflection, my position has shifted as I encountered subtle contradictions in my thinking or searched for a better word or turn of phrase to reflect the thoughts I am discovering within myself. This is a primary means by which the Imago Dei that I bear conforms me ever more to that image. To outsource this process to an algorithm, to despise this act of worship of the God whose image I bear in the name of the worship of efficiency—I can think of no clearer way to “worship and serve created things instead of the Creator.”
I’m still waiting to hear a justification for the integration of generative AI into Christian liberal arts educational contexts that does not fall short on the grounds I’ve outlined. There may be valuable research questions that become available for high-level, specialized researchers through tools such as these. But researchers capable of pursuing these questions will have time enough to learn to use generative AI tools once they have internalized a love of the pursuit of that which is lovely. Indeed, I pray the question of what is lovely will always remain at the heart of the liberal arts in a confessional context. For if it does not, we have lost both the liberal arts and our confession. What is left, then, is self-annihilation, a dereliction of the very responsibility we have been created to shoulder. But don’t take my word for it; I will leave you with the words of renowned machine learning specialist Stuart Russell, who in his book Human Compatible envisions a future in which people “will wonder why we ever worried about such a futile thing as ‘work’.”4 What will become of us worshipers of efficiency if that future ever arrives?
Footnotes
1. Aldous Huxley, Brave New World (Harper Perennial Modern Classics, 2006), 50.
2. John Steinbeck, The Grapes of Wrath (Penguin Classics, 1992), 32.
3. Mark Twain, The Adventures of Huckleberry Finn (Ignatius Critical Editions, 2009), 106.
4. Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (Viking, 2019), 122.