
As an educational psychologist, I study teachers and students, both of whom are learners in their own ways. As artificial intelligence (AI) burgeons in classrooms, I cannot help but think of Romans as a possible answer to the question Benjamin Bloom posed more than four decades ago. Roughly, Bloom’s question was: “How can we deliver the benefits of 1:1 tutoring at scale without the impossible cost of providing a human tutor for every child?” This was Bloom’s famous 2-sigma problem, and it derived from research showing that students who received one-to-one tutoring performed, on average, two standard deviations better than students in conventional school classrooms.1 In practical terms, the average 1:1 tutored student seemed to perform better than roughly 98% of comparable students from a traditional setting.
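That 98% figure follows from the shape of the normal distribution: a score two standard deviations above the mean sits at about the 97.7th percentile. A minimal sketch of the arithmetic in Python, using only the standard library (the function name is my own):

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Proportion of a standard normal distribution falling below z."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# A student scoring 2 standard deviations above the mean outperforms
# about 97.7% of a normally distributed comparison group.
print(round(normal_cdf(2.0) * 100, 1))  # → 97.7
```

This is why Bloom's "2 sigma" shorthand is typically glossed as the tutored student outperforming about 98% of the control class.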

Though Bloom’s studies were not without their issues, AI education is often framed today as a potential solution to Bloom’s question. AI systems can now simulate aspects of 1:1 tutoring, such as personalizing learning pace, generating adaptive problem sets, flagging misconceptions, providing instant feedback and data-informed remediation, increasing engagement and personal relevance, and engaging students in Socratic questioning.2 In theory, these systems could approximate certain parts of Bloom’s model at lower marginal cost than the status quo. This potential is promising because data show that many K-12 students consistently fail to meet grade-level standards.3

However, satisfying education standards runs much deeper than making technology more efficient.4 Traditional public schools are built around age-based cohorts, bell schedules, standardized curriculum pacing, and high student-to-teacher ratios. Thus, pursuing 1:1 learning would require rethinking seat time (mastery vs. norms), assessment (content vs. competency), and teacher roles (coach vs. sage). Charter schools might have greater flexibility in staffing and more willingness to experiment, and they might arguably integrate AI tutoring models faster than public schools. But charter schools face less stable funding, added accountability pressures, and the risk of over-relying on technology as a cost-cutting measure. Homeschools are very well-positioned to individualize the learning pace, and parents have unique motivations to see their own children succeed. However, homeschoolers might struggle to equitably access the right technology, much less the right approach to using it.

Hence, delivering true 1:1 learning at scale, whether in the public, charter, or home school environment, is as much a structural and cultural issue as it is a technical one. What responsibilities do we have as Christians in pursuing AI in the classroom? If wealthy schools get the premium AI systems and low-income Title I schools get the basic automated classroom, a wider equity gap will emerge, and those most marginalized will suffer even more. But even if AI reduces instructional costs on the back end, implementation is not cheap. Hardware and software licensing costs! Training costs! Ongoing maintenance costs! If all goes well, teachers’ anxieties might rise: Will I be replaced? Can our organization or my home afford the latest upskilling in the newest systems? How much surveillance must we do to make sure students are not cheating? If all goes wrong, teachers’ anxieties also rise: What have we lost in time, money, and industry?

Yet we persist! At a recent conference, I described how we often collaborate with people from complementary disciplines who share our passion and enthusiasm for education.5 Just who are these brothers and sisters? They could be psychologists who warn of the misperceptions or biases that educators adopt in their practice, leading them to believe in a host of false ideas with limited to no evidence base, such as “learning styles.”6 Designers use divergent thinking to improve learning systems using gamification and dashboards.7 Computer scientists have specific knowledge and skills that are paramount to bringing learning programs to life using artificial intelligence, though they often embed their own biases within the models.8 Humanists keep us anchored to the non-measurable ideals and principles that promote and sustain culture and prosocial engagement. A strong community indeed.

However, we’ve been here before, when intelligent tutoring systems (ITS) and adaptive learning technologies were all the rage in the early twenty-first century. Look how that turned out. While ITS can be a boon if integrated well with sufficient resources, they do not seem to universally and unequivocally outperform human teachers in the long run.9 Perhaps guided discovery is more important than the medium for discovery.10 What some of us might have missed is that Bloom’s model was not simply about individualized instruction so much as it was about relational learning. Both Khan and Mollick suggest that AI could act as a tutor, where the machine leads and the human supervises; as a co-pilot, where the teacher leads and the AI augments; or as an enhancer, where the AI supports both teacher and student more efficiently. But it is the teacher and student who form the important relationship in learning.

Thus, 1:1 learning via AI systems creates a sort of paradox, promising unprecedented personalization yet risking the isolation of learners and the diminishment of human relations. Technological promises can make us forget to treat people as ends in themselves, not merely as means to technological progress.11 I do not doubt that AI, in the hands of responsible individuals, is capable of optimizing performance. Acquiring STEM, language, and even artistic or musical proficiency is now possible through AI tools, and the interventions are improving by the day. What I do doubt is whether pursuing optimized performance should be prioritized over cultivating whole human beings. What if we stopped to reframe the problem? What if, instead of optimizing and scaling ‘performance’, we optimized and scaled wisdom, courage, and vocation, through 1:1 learning, using AI?

I won’t pretend to be an expert on what that looks like. Perhaps AI systems embracing wisdom would provide individualized philosophical dialogue, expose students to diverse perspectives, and encourage metacognitive reflection; they would foster judgment under uncertainty and the humility to know when you are wrong, and would do this through tension, challenge, and the modeling of lived virtue. AI can easily simulate dialogue, but can it embody virtue? AI systems embracing courage would expose students to failure and social friction, requiring students to speak, or better yet act, publicly in tactful disagreement while navigating complex group dynamics. But the burden of 1:1 personalization is that it can unintentionally remove the relational stressors of complex group dynamics that form courage. AI systems building a sense of vocation would separate jobs from paychecks, developing identity toward service and contributory love. Yet in apprenticeship guilds, someone didn’t just know a person deeply by analyzing patterns in their behaviors; they also witnessed, affirmed, challenged, and loved them without a prompt.

I would argue that wisdom, moral courage, and vocation are embodied and developed via imitation, real community, shared hardship, and culture. Although Bloom was interested in optimizing performance gains, wisdom is slow, courage requires discomfort, and vocation unfolds over time. These do not optimize neatly. Our responsibility should be to shift public school teachers from content deliverers to mentors who can also model moral courage and engage students in discernment even when interfacing with AI; to help homeschool parents learn to be more present emotionally (God knows I struggle with that at times) and resist outsourcing formation to software; and to reinforce to charter school leaders that protecting the mission’s integrity takes precedence over chasing efficiency.

If we ignore these principles, we will likely pay a heavy toll for failing to realize that as technology—from fire and the wheel to the printing press and the iPhone—raises the bar for human intentionality, teachers and students must ever more be formed themselves. There is a difference between acquiring knowledge and acquiring character. I heard a quote, though I cannot remember where, that “‘Responsible AI’ is only as responsible as the people behind it.” To me, that means we should have the courage to explore with AI, as long as we also have the temperance, prudence, and most of all the patience to do it the right way and for the right reasons. And if in ten years we were to look back and say, as both non-Christians and Christians often utter, “Oh no! What have we done?” then we, as the latter, would have the wherewithal to raise our hands and assume full responsibility. For as Romans says, “So then, each of us will give an account of ourselves to God.”12

  1. Benjamin S. Bloom, “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher 13, no. 6 (1984): 4–16, https://doi.org/10.3102/0013189X013006004. ↩︎
  2. Sal Khan, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) (Viking, 2024). ↩︎
  3. U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), 2026, https://www.nationsreportcard.gov/. ↩︎
  4. Diane Ravitch, An Education: How I Changed My Mind about Schools and Almost Everything Else (Columbia University Press, 2025). ↩︎
  5. Dane C. Joseph, “A Tour of the Head, the Heart, and the Hands That Teach with Artificial Intelligence” (presentation, Baylor Symposium on Faith and Culture, Waco, TX, February 2026). ↩︎
  6. Tracey Tokuhama-Espinosa, Neuromyths: Debunking False Ideas about the Brain (W. W. Norton & Company, 2018). ↩︎
  7. Tiffany Witt, The AI Field Guide for Learning Designers: Hands-On Strategies for Education, Training, and Instructional Design (Kendall Hunt Publishing, 2026). ↩︎
  8. Ethan Mollick, Co-Intelligence: Living and Working with AI (Portfolio / Penguin, 2024). ↩︎
  9. Wenting Ma et al., “Intelligent Tutoring Systems and Learning Outcomes: A Meta-Analysis,” Journal of Educational Psychology 106, no. 4 (2014): 901–18, https://doi.org/10.1037/a0037123. ↩︎
  10. John Sweller et al., “Why Minimally Guided Teaching Techniques Do Not Work: A Reply to Commentaries,” Educational Psychologist 42, no. 2 (2007): 115–21. ↩︎
  11. Morgan G. Ames, The Charisma Machine: The Life, Death, and Legacy of One Laptop per Child (MIT Press, 2019). ↩︎
  12. Rom. 14:12 (NIV). ↩︎

Dane C. Joseph

Professor of Education and Research Fellow for the Program for Leadership & Formation at George Fox University.
