"The first point is that the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it."

I want to make a casual suggestion here, but one about which I am serious. What follows, in other words, is not meant to sound flippant.

Might the famous unreasonable effectiveness of mathematics--its spectacular success in quantifying, model-building and predicting future states of natural systems--be simply a matter of coincidence?

We tend to be amazed that math works again and again. Wigner compares the situation to a man with a bunch of keys finding that the first one or two he chooses always open the door. This would indeed be surprising, but the analogy can be read another way: how many ways are there to get into the house? The keys don't open windows; they don't open walls; they don't open the ceiling, the yard, the driveway, the bushes, or the clouds. Keys fit those things that, well, fit keys. To note that one of your keys can do any task at all would be one thing. To keep being amazed that keys unlock doors is quite different.

Let's say a single key turns out to open a huge number of doors in Eugene's house--perhaps even an infinite number. That is surely an awesome thing, especially to a human mind: What a powerful key! Look at how many tasks it can handle--door upon door upon door!

But it still tells us nothing about what we aren't able to do with it (paint the house, grow the garden). Opening yet another door, excellent as that achievement may be--from the first discoveries in fluid mechanics to the latest in quantum chemistry--is, from the point of view of math's mysterious utility, essentially the same feat as it was the last time around. Look at that! Quantifiable things are still able to be quantified. Who knew?

But even if endless discoveries are made using the abstraction of mathematical tools, we are not justified in assuming that we are, in this manner, making all possible discoveries. Something can be infinite (say, the set of all mathematical expressions that correspond, in some suitably defined way, to nature) without being all-encompassing (say, if that set we just described turns out to be a subset of another, also infinite set, "the set of all truths about nature").

Let's take a different approach. Part of logical positivist epistemology--the "logical" part--regarded mathematical ...

The paper to which I referred above, Conceptual Complexity and Algorithmic Information, is from this past June. It can be found on academia.edu. As is often the case, Chaitin begins with Leibniz:

"In our modern reading of Leibniz, Sections V and VI both assert that the essence of explanation is compression. An explanation has to be much simpler, more compact, than what it explains."

The idea of 'compression' has been used to describe how the brain interprets masses of repeated sensory information, such as the visual attributes of faces. Language itself has been described as cognitive compression. Chaitin reminds us of the Middle Ages' search for a perfect language, one that would give us a way to analyze the components of truth, and he suggests that Hilbert's program was a later version of that dream. And while Hilbert's program to find a complete formal system for all of mathematics failed, Turing had an idea that has provided a different grasp of the problem. For Turing,

"there are universal languages for formalizing all possible mathematical algorithms, and algorithmic information theory tells us which are the most concise, the most expressive such languages."

Compression is happening in the search for 'the most concise.' Chaitin then defines conceptual complexity, which is at the center of his argument. The conceptual complexity of an object X is

"...the size in bits of the most compact program for calculating X, presupposing that we have picked as our complexity standard a particular fixed, maximally compact, concise universal programming language U. This is technically known as the algor...]]>