The Error of Truth: How History Came Together to Form Our Character and Shape Our Worldview

Steven J. Osterlind
Published by OUP Oxford in 2019

Reviewed by Jeremy Scott Case, Mathematics, Taylor University

The study of worldviews has been a mainstay of Christian educational institutions, and a worldview cast as quantification deserves attention in today’s age of data-driven decision making, quantitative science, technology, and statistical rhetoric. Steven J. Osterlind’s book The Error of Truth argues our epistemology has moved toward quantitative thinking. Our inclination is to view natural and everyday phenomena through a lens of measurable events with odds, predictions, and likelihood playing a dominant role. Events which were once thought to be random, providential, or unknowable are now predictable and measurable due to our ability to think quantitatively using probability and statistics. With the ability to make decisions rationally rather than impulsively, outcomes can now be anticipated as opposed to blindly attributed to fate.

The Error of Truth tells the story of how quantitative thinking developed in Europe and the United States and how quantification became our predominant mode of knowing. Rather than providing technical explanations, the book traces the lives of fifty or so people primarily associated with the development of probability and statistical theory. For Osterlind, the historical and cultural contexts matter because these advances occurred when the times were ripe for discovery and invention. Quantification was accepted as a way of thinking only gradually, as those achievements initiated societal changes.

The story begins when the empirical tool of observation became more deliberate and mathematical. By developing mathematical tools to account for variation within repeated measurements, investigators made uncertainty calculable and measurement more reliable and predictable. The monumental works of Isaac Newton furthered observation and fundamentally changed science and philosophy. By describing how and why objects behave using first principles and mathematics, Newton opened the door for scientists to consider the role of prediction in Nature’s behavior, which had previously been blindly accepted or attributed to the whims of God.

While Newton laid the groundwork for a mathematical emphasis in a variety of fields, people did not immediately change their thinking toward quantification even as the Enlightenment ideals of reason, self-determination, and individualism took hold. According to Osterlind, the slow and subtle change toward quantification was largely unrecognized by the populace until the twentieth century. The environment evolved socially, politically, and educationally “to a degree that they were suffused with a quantified mindset without conscious effort” (69).

The ability to quantify uncertainty was a major impetus of this evolution. Most identify the origin of the mathematical theory of probability with the seventeenth-century correspondence between Blaise Pascal and Pierre de Fermat. By examining the “problem of points,” the two came up with formal methods to calculate risk and the likelihood of a success or failure. Over the next two centuries, the law of large numbers, the binomial theorem, and the central limit theorem laid the foundations of probability theory. With these tools, one may be uncertain of the outcome of a single chance event but can still assign a probability of success. The more noteworthy advance in quantifying uncertainty was the focus on measuring and interpreting the variation within repeated events.
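The Pascal–Fermat solution to the problem of points can be sketched in a few lines of code. The idea, as the review summarizes it, is that an interrupted game’s stakes should be divided according to each player’s probability of winning had play continued. A minimal sketch (the function name and interface are illustrative, not from the book):

```python
from math import comb

def share_for_a(a_needs, b_needs):
    """Fraction of the stakes owed to player A when a fair game is
    interrupted with A needing `a_needs` more points and B needing
    `b_needs`. This equals the probability that A reaches the target
    first, i.e., wins at least `a_needs` of the next
    a_needs + b_needs - 1 fair rounds."""
    n = a_needs + b_needs - 1
    ways_a_wins = sum(comb(n, k) for k in range(a_needs, n + 1))
    return ways_a_wins / 2 ** n
```

In the classic case where A needs one more point and B needs two, A is entitled to three quarters of the stakes, matching Pascal and Fermat’s answer.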

The Error of Truth emphasizes biography—including religious motivations of subjects—to provide context for many of the advances mentioned above. For example, Pascal’s thinking on probability revolved around his religious convictions as evidenced by “Pascal’s wager,” the argument that people bet their lives on God’s existence. Abraham de Moivre created actuarial tables by calculating the likelihood of a person dying before their next birthday, thereby influencing the insurance industry ever since. De Moivre originated the concepts of the familiar bell curve, standard deviation, and Z-scores as a measure of error. A French Huguenot, de Moivre saw order in all things, attributed that order to God, and took it as evidence of God’s existence, a sort of quantitative approach to theology.
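The Z-score idea credited to de Moivre here is simply a measurement re-expressed in units of standard deviation, which the bell curve then converts into a probability. A minimal sketch using Python’s standard library (the function names are illustrative):

```python
from math import erf, sqrt

def z_score(x, mean, sd):
    """Express a measurement as its distance from the mean
    in units of standard deviation."""
    return (x - mean) / sd

def normal_cdf(z):
    """Probability that a standard normal variable falls below z,
    computed from the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))
```

A score of 0 sits exactly at the mean (cumulative probability 0.5), while a Z-score near 1.96 corresponds to the familiar 97.5th percentile used in confidence intervals.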

According to Osterlind, theology also played a role in the United States lagging behind Europe in advancing mathematics and probability. He suggests the Great Awakening, and specifically Jonathan Edwards’s “Sinners in the Hands of an Angry God” sermon, stifled the progression of quantification in the U.S. by advancing puritanical views of God’s control of people’s lives. Meanwhile, Europeans, influenced by the ideas of Newton and Locke, could use their own free will and reason in determining their relationship with God. Enlightenment ideas challenged them to think differently, expansively, and boldly.

As an example, the book points to the Reverend Thomas Bayes, a contemporary of Jonathan Edwards. Bayes saw theorizing about probability as theology put into practice via mathematics. Rather than being constrained by his faith, Bayes’s desire to glorify God led him to explore why things happened and made him open to new ways of thinking. His broader perspective led to one of the most profound and far-reaching ideas in probability. Bayes’s theorem can be roughly stated as: “When I learn new information about a prior belief, what is the probability that I will change my mind?” Mathematically, the theorem computes the probability of an event given that another event has occurred, updating a prior belief in light of new evidence. This theorem spawned an entire theory of probability with far-reaching implications. Bayesian thinking is used in an abundance of disciplines, including law, medicine, and economics. Given the complexities of the subject, the book does a reasonable job of introducing these concepts in a non-technical way.
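The updating of a prior belief described above can be made concrete with a short sketch. The standard diagnostic-testing illustration (my example, not the book’s) shows why Bayesian reasoning matters in medicine and law: even an accurate test applied to a rare condition yields a surprisingly modest posterior probability.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes's theorem: P(H | E) = P(E | H) P(H) / P(E),
    with P(E) expanded by the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A disease with 1% prevalence, a test with 99% sensitivity
# and a 5% false-positive rate: a positive result raises the
# probability of disease from 1% to only about 17%.
updated = posterior(prior=0.01, p_e_given_h=0.99, p_e_given_not_h=0.05)
```

The prior of 1% is revised upward, but far less than intuition suggests, because false positives among the healthy majority swamp true positives among the rare sick.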

Bayes only gave his proposition conceptually. Decades later, Pierre-Simon Laplace popularized and mathematically formalized probability theory, including Bayes’s theorem. Laplace believed probability was central to all knowledge and should be taught to ordinary people. To him, probability was nothing but common sense reduced to calculation. Almost all knowledge was merely probable, and what could be known was known through techniques based upon probabilities. Since ordinary people had an instinct for probability, life’s questions were probability problems. In short, the study of uncertainty was an epistemological process for all.

It should be pointed out here that quantification as a worldview is not strictly determinism. In many ways, Laplace was a scientific determinist who proposed what was to become known as “Laplace’s Demon”: if one knew the relevant properties of all of the particles of the universe at any given time, one could derive complete knowledge of the past and the future. For many, “Laplace’s Demon” was undone by the second law of thermodynamics and the probabilistic nature of the uncertainty principle in quantum mechanics. While quantification includes deductive thinking and mathematical certainty, the application of probability and statistics is inductive and by its nature uncertain. According to Osterlind, because uncertainty exists, some people incorporate some degree of religiosity into their worldview.

Others did not. Karl Pearson claimed the scientific method was the sole gateway to knowledge. Placing empiricism at the center, his book The Grammar of Science (1892) emphasized the importance of going beyond fact gathering to employing creative imagination while at the same time remaining objective and dispassionate. The statistical process would allow both. In fact, the motto on the title page was “Statistics is the Grammar of Science.” Required reading for a generation of scientific researchers, The Grammar of Science presented a quantified worldview which would require a modern theory of life and adaptation to the theory. Quantitative work would answer the social and educational problems of the day, thereby affecting the social consciousness. For Osterlind, Pearson and his text moved quantification significantly forward.

Sir Ronald Fisher, with his prodigious output, continued the advance of statistics and quantification. His introduction of randomization to experimental design was monumental and continues to be an essential feature of valid experiments. A randomized experiment with a control and treatment group has allowed researchers to use inferential statistics to make claims in a variety of fields. Fisher’s analysis of variance, or ANOVA, measures uncertainty and variation, which ties back to the beginning of the book’s story.

The Error of Truth proposes that quantification as a perspective on the world was not accepted generally until the twentieth century. The chapter on psychometrics and psychological tests gives a strong argument for this acceptance by the general populace. As areas within sociology and psychology adopted empirical and statistical principles, psychological tests became more ubiquitous and quantitative. Everyone has taken a test. Tests can be a determining factor in school admission, employment, and job performance. As such, a mental test or similar assessment can be personal as it explores the identity, beliefs, and nature of an individual. While the chapter explains the reliability and validity measures within psychometrics, people can view themselves or others in terms of a numerical or quantitative standard, whether or not it is valid.

A theme of The Error of Truth is progressively moving from ignorance to awareness. Previously, people were forced to accept their fate, and the influence of the local clergyman was “weighty” (45). The nineteenth century in Europe saw greater education, increased transportation, and the weakening of the monarchies. The industrial revolution raised the standard of living. A spirit of individualism, independence, and exploration contributed to quantification. Darwin stirred the public’s mind about ontological truth, and society began to redefine itself. With probability and statistics, we can predict and anticipate events. Being quantitatively informed, we can take meaningful and advantageous actions. Although quantification did not take hold until the twentieth century, we now have an entirely new perspective on what we know about the world and how we know it.

The Error of Truth succeeds in arguing how quantification, viewing the world through a measurable lens, has developed. What the book fails to do is adequately address the critiques of this development. Quantitative models often idealize phenomena and attempt to define precisely what is difficult to quantify, thereby giving preference to what can be measured. In public discourse, a person will often reference a statistic or a study to make a point with the implication that a statistic is objectively neutral. Although thinking quantitatively does not necessarily require data, data collection itself is theory and value laden. Jacques Ellul argued that the very appearance of neutrality is dangerous, as it often excludes religious and moral values.1

The Error of Truth acknowledges that some of the biggest names in statistics (Francis Galton, Pearson, and Fisher) were deeply involved with eugenics. The book neglects to provide guidance or even a warning about avoiding participation in such immoral behavior. Bayesian inference, computer simulations, mathematical models, and artificial intelligence are moving quantification beyond the scientific laboratory and providing multiple benefits. Critics such as Cathy O’Neil are warning of the unreflective applications of quantification, computer algorithms, and big data as they increase inequity and social injustice.2

Quantification will increasingly play a major role in society and how we view the world. May we develop the wisdom to employ it appropriately.

Cite this article
Jeremy Scott Case, “The Error of Truth: How History Came Together to Form Our Character and Shape Our Worldview,” Christian Scholar’s Review 50:1, 115–118.

Footnotes

  1. Jacques Ellul, The Technological Society (New York, NY: Alfred A. Knopf, 1964).
  2. Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, NY: Broadway Books, 2016).

Jeremy Scott Case

Taylor University
Jeremy Scott Case is professor of Mathematics at Taylor University and president of the Association of Christians in the Mathematical Sciences.
