You’ve heard me say that logic invariably leads to faith, but don’t take my word for it. There’s considerable evidence to support the idea, and people who love the hard sciences will especially appreciate it, because the confirmation comes from logic itself.
As an introduction, consider the 19th-century worldview and how it changed going into our modern age. Throughout the 1800s, there was great enthusiasm and optimism about the power of logic. Earlier scientific and mathematical discoveries (Newton’s classical physics, for example) had uncovered what seemed to be an elegant symmetry in nature, which led intellectuals to believe that all things could be understood in terms of scientific laws. In fact, people (the great Immanuel Kant included) believed the logic inherent in science would eventually lead to an understanding of the mind of God.
But cracks began to appear in this deterministic construct. One of the first was the development of non-Euclidean geometries, which rejected Euclid’s parallel postulate, a cornerstone of the plane geometry we study in high school. The possibility that other postulates might be incorrect, or of limited validity, ushered in an age of uncertainty that further scientific and mathematical discovery only amplified.
The study of physics, for example, added to the disquiet. As physicists delved more deeply into the interactions of subatomic particles, they discovered that such relationships are governed by probabilities rather than by the strict rules previously supposed. To make matters worse, one implication of the discovery was that the result of a subatomic interaction doesn’t finalize until a sentient being observes it. To understand what this means, consider that old philosophical conundrum: if a tree falls in the forest and no one is there to hear it, does it make a sound? Well, according to one interpretation of quantum mechanics (the Copenhagen interpretation, associated with the Nobel laureate Niels Bohr) the tree doesn’t even fall! Its subatomic particles remain in what is referred to as a probability wave until a sentient observer comes along, whereupon the particles “choose” one of the range of possibilities available to them. (My son, who is working on a physics PhD, tells me no one really believes that’s what happens, although it is consistent with the otherwise inexplicable results of various experiments. For more on this, see Schrödinger’s Kittens and the Search for Reality by John Gribbin, which in my opinion is the best book about quantum mechanics written for lay people.)
Perhaps one of the most significant discoveries deepening the uncertainty of the modern worldview was made by Bertrand Russell, a man who had once been a proponent of mathematical determinism. Russell discovered a logical inconsistency in the set theory of his day, one evident in the following question:
A man of Seville is shaved by the barber of Seville if and only if the man doesn’t shave himself. Does the barber shave himself?
The paradox can be stated in a rigorous mathematical way, but consider the following simplification. Obviously, the barber of Seville either shaves himself or he doesn’t. Either way, the result is illogical. If he shaves himself, then by the rule he cannot be shaved by the barber of Seville; but since he is the barber of Seville and he shaves himself...well, you get the point. The problem, as Russell was able to show, can occur any time a set can be an element of itself. For example, the set of men shaved by the barber of Seville (let’s call it A) is not an element of itself, because A is a set, not a man shaved by the barber. Now create another set (which we’ll call R) and include in it everything that is not in set A. Since R is also not a man shaved by the barber of Seville, R belongs to R: it is an element of itself, a characteristic often called self-reference. This may seem like a trivial matter, but self-reference can lead to serious logical problems and points to the limitations of logic.
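The contradiction in the barber rule can even be checked mechanically. The little sketch below is my own illustration, not part of Russell’s argument: it simply tries both possible answers to “does the barber shave himself?” and shows that neither one satisfies the rule.

```python
# The rule: the barber shaves a man if and only if the man does not
# shave himself. Applied to the barber, the proposition "the barber
# shaves himself" (call it p) must equal "the barber does not shave
# himself" (not p). Try both truth values for p:
for p in (True, False):
    rule_satisfied = (p == (not p))
    print(f"barber shaves himself = {p}: rule satisfied? {rule_satisfied}")
# Both lines report False: no answer is consistent with the rule,
# which is the paradox in miniature.
```

No matter which way the question is answered, the rule is violated, which is exactly the bind the prose describes.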
Finally, let me describe a theorem that is deceptively simple to state, but whose implications have essentially put an end to strict mathematical determinism. I’m talking about Kurt Gödel’s Incompleteness Theorem, which, like Russell’s Paradox, has a rigorous mathematical rendering but can be described in simplified terms.
Let’s say you create a computer that you call the Universal Truth Machine (or UTM for short), which you’ve programmed to always tell the truth. Before approaching it, you write the following words on a sheet of paper:
The UTM will never say this sentence is true.
Now you turn the UTM on, show it the paper, and ask whether the statement is true. What happens? First, the UTM can’t say the statement is true. Can you see why? If it did, the statement would be rendered false, and asserting a falsehood is contrary to the UTM’s programming. Paradoxically, the fact that the computer will never say the statement is true is your best evidence that the statement is, in fact, true. On the other hand, the UTM can’t say the statement is false either, since, as we’ve just demonstrated, that would be untrue. (Another way to look at it: if the UTM said the statement was false, it would thereby render the statement true.)
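The bind the UTM is in can be laid out as a small truth table. This sketch is my own illustration of the reasoning above, not a formal rendering of Gödel’s proof: it checks each answer the machine could give against what that answer makes the sentence actually mean.

```python
# Sentence S: "The UTM will never say S is true."
# Try each answer the truth-telling machine could give about S.
for answer in ("true", "false"):
    claims_s_true = (answer == "true")
    # S is actually true exactly when the UTM never affirms it:
    s_actually_true = not claims_s_true
    honest = (claims_s_true == s_actually_true)
    print(f"UTM answers '{answer}': S is actually {s_actually_true}; honest? {honest}")
# Either answer makes the machine utter a falsehood. Its only honest
# move is silence -- and silence leaves S a true statement the UTM
# can never affirm.
```

The loop prints `honest? False` for both answers: the machine’s only truthful option is to say nothing, leaving the sentence true but beyond its reach.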
Again, the result seems contrived and trivial, but the mathematics behind it is sophisticated and full of implications. In short, Gödel’s Incompleteness Theorem suggests that rational thought can never penetrate to the ultimate truth; said another way, any consistent system rich enough to express arithmetic contains truths that cannot be discerned, or proved, strictly through the use of logic.