
“Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” These are the words of the web pioneer Marc Andreessen, writing about “Why AI Will Save the World.”

Such optimistic claims are not new. The rise of the World Wide Web came with predictions that it would be “a natural force drawing people into greater world harmony.”1 As we know, the web did provide unprecedented access to information, but it did not lead to “greater world harmony.” It became a medium for misinformation, echo chambers, bullying, and polarization.

Likewise, the development of AI will bring benefits in many areas like medicine, enhanced traffic safety, and environmental monitoring. Even so, AI will not be “infinitely helpful” and there will be many pitfalls to address.

Eric Horvitz, Microsoft’s chief scientific officer, recently sounded an alarm which I think is prophetic. Horvitz wrote that AI is moving us closer to a “post-epistemic” world “where fact cannot be distinguished from fiction.”2 In short, AI will have an impact on truth.

Horvitz identifies one area where AI will challenge truth: “deepfakes.” Deepfakes use AI to create synthetic videos that can impersonate people. In a famous demonstration, researchers at the University of Washington posted a deepfake video of President Obama, making him say whatever they wanted. Seeing is no longer believing, and “truth” can now be manipulated and fabricated.

And then there is the issue of “hallucinations” in generative models like Large Language Models (LLMs). This should come as no surprise: by their very architecture, LLMs are “simply a system for haphazardly stitching together sequences of linguistic forms … without any reference to meaning: a stochastic parrot.”3 According to Grady Booch, an IEEE Fellow and chief scientist for software engineering at IBM, “Generative models are unreliable narrators” that can “generate misinformation at scale,” and they are now being “unleashed into the wild by corporations who offer no transparency as to their corpus.”4 According to the research of one company, “A.I. chatbots invent information at least 3 percent of the time, and some as much as 27 percent of the time.”5

Other researchers have begun to recognize that AI chatbots are trained with a particular worldview and users are subject to something called “latent persuasion.”6 Regular usage of AI chatbots can be like having a “Jiminy Cricket” on your shoulder, autocompleting your thoughts. Over time, such nudging can shape your opinions without you realizing it.

Another recent development is “astroturfing”—using AI to generate a fake campaign that gives the illusion of a grassroots movement. AI chatbots can be harnessed to post massive amounts of tailored content on social media to capture attention and manipulate people’s opinions. Some predict that astroturfing will increasingly distort truth and reality, posing a direct threat to democratic societies.

All of this can lead to a kind of syncretism, in which Christians amalgamate secular ideologies promoted by AI alongside Christian thought. In some senses, we are no different from the Israelites who were influenced by the gods and practices of the surrounding nations, only now this influence is amplified by digital tools. In fact, one startling study demonstrates that increasing use of AI is correlated with a decline in religiosity.7

What is the antidote to the post-epistemic turn in our modern, technological, AI-driven age? The Christian philosopher of technology, Jacques Ellul, provides advice in his book, The Presence of the Kingdom. Ellul points to Romans 12:2, “Do not conform to the pattern of this world but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is—his good, pleasing and perfect will.” Ellul argues that “faith produces a renewal of intelligence” and that it “takes place in Jesus Christ, through the action of the Holy Spirit.”8 He argues that this requires a “new style of life” that includes the “whole of life,” from “the way we dress and the food we eat” to how we treat our neighbors.9

In our context, a “new style of life” might include habits of mind and the cultivation of various intellectual virtues. In his book Epistemology: Becoming Intellectually Virtuous, W. Jay Wood reminds us that “…wise persons not only possess knowledge of eternal or ultimate significance but have undertaken to become the kinds of persons who naturally desire and pursue this knowledge.”10 Some counter-cultural habits that might help us cultivate wisdom include observing Sabbath and limiting our exposure to the constant stream of AI-driven media.

In the mid-twentieth century, C.S. Lewis described the pitfall of developing a “blindness” to certain truths by reading “only modern books.” His advice is “to keep the clean sea breeze of the centuries blowing through our minds” by “reading old books.”11 In our twenty-first century era, “modern books” are no longer the issue, but rather “modern media.” None of us are immune to the “blindness” that may be caused by misinformation, latent persuasion, and astroturfing. We ought to “renew our minds” with the “clean sea breeze” of older books including, of course, the Scriptures.

We shape our tools, but our tools can also shape us – including shaping our perception of truth. With AI advances that promise to be “infinitely helpful,” let us cultivate practices that help us remain faithful to the truth.

An earlier version of this article appeared in Christian Courier.


  1. Nicholas Negroponte, Being Digital (New York: Knopf, 1995), 230.
  2. Eric Horvitz, “On the Horizon: Interactive and Compositional Deepfakes,” Proceedings of the 2022 International Conference on Multimodal Interaction (November 2022), 653-661.
  3. Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21, Virtual Event, Canada: ACM, 2021), 617.
  4. Margo Anderson, “‘AI Pause’ Open Letter Stokes Fear and Controversy,” IEEE Spectrum, April 2023.
  6. Maurice Jakesch et al., “Co-Writing with Opinionated Language Models Affects Users’ Views,” Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (April 2023).
  7. Joshua Conrad Jackson et al., “Exposure to Automation Explains Religious Declines,” Proceedings of the National Academy of Sciences 120, no. 34 (August 22, 2023): e2304748120.
  8. Jacques Ellul, The Presence of the Kingdom (Helmers and Howard, 1989), 80-81.
  9. Ibid., 119-122.
  10. W. Jay Wood, Epistemology: Becoming Intellectually Virtuous, Contours of Christian Philosophy (Downers Grove, IL: InterVarsity Press, 1998), 69.
  11. C. S. Lewis, God in the Dock: Essays on Theology and Ethics, ed. Walter Hooper (Grand Rapids, MI: William B. Eerdmans Publishing Company, 2014).

Derek C. Schuurman

Calvin University
Derek C. Schuurman is Professor of Computer Science at Calvin University in Grand Rapids, MI. He is author of Shaping a Digital World and co-author of A Christian Field Guide to Technology for Engineers and Designers (IVP Academic).


  • Phillip Cary says:

    There is a difference between what is true and what is merely considered to be true (or “truth” put in scare-quotes). The latter is a social construct that will be profoundly affected by AI. The former, together with its opposite, the concept of what is false, is the indispensable intellectual tool we need to make critical judgments about AI and to develop intellectual virtues and real knowledge. So it’s not a good idea to talk as if AI undermines truth. The truth is the truth–and falsehood is falsehood–no matter who says it, no matter who believes it or not, and no matter what AI does about it. It’s important to keep that in mind in the face of every epistemic crisis.

    • Gordon Moulden says:

      Indeed, AI is not a being that can independently undermine truth. Its misuse, intended or otherwise, can affect access to, and impressions of, what is true, but truth itself does not change. So living according to truth depends on people having their lives anchored to it, so they are not tossed about intellectually and spiritually by false representations of “truth”.

    • Derek Schuurman says:

      Yes – good point – the wording should be more precise: AI does not change the truth, but it can distort or cloud our perception of the truth, hence the importance of developing intellectual virtues.

  • Gordon Moulden says:

    “Man shall not live by bread alone, but by every word that comes from the mouth of God.” (Deuteronomy 8:3, reiterated in Matthew 4:4)

    Truth begins there or mankind is lost.

  • Jenell Paris says:

    In The Presence of the Kingdom, Jacques Ellul writes extensively about technology, communication, and propaganda. He posits that true communication only happens face to face, a communing, a “being with.” Even a love letter, intimate as it may be, is messaging. Mass messaging is prone to propaganda, groupthink and a devaluing of conscience and individual expression. I discuss this idea with students. The responsibility is staggering, should we begin to privilege proximity and face-to-face dialogue as a God-given means of communication, a means for which there is no substitute. Writing, reading, texting, AI, and so on, by their nature, can augment, extend, confuse, support, or distort, but not serve as equivalents to communication. In Ellul’s view, this is a special gift held and shared only by persons. (I suppose I could try to convince you here that I am really a person typing these words, but all I’ve got is letters and symbols to convey the notion.)

  • Thanks for this post, Dr. Schuurman. Lots to mull over! Appreciate it!