For mathematics to progress, it is necessary to teach students the difference between mathematics and meta-mathematics. Mathematics is the deduction of consequences from rules. Meta-mathematics is the choice of those rules. [1]
Years of doing proofs teach mathematicians to assume that proofs exist and to take what is given for granted. Let us examine why we cannot take what is given for granted. Mathematically, truth is the existence of proofs. Definitions blur the line between possibility and necessity (and can hide undefinability). While we can think of a definition as one of several possibilities, we would not need the definition unless something about it were necessary for our hypothesis, theory, or proof. [2]
There is no such thing as a well-defined definition. [3] Remember that consistency (with the axioms) is a necessary condition for a definition. We cannot know whether a definition is well-defined because systems cannot prove their own consistency (see Gödel's incompleteness theorems). [4] Definitions in set theory are axioms or theorems. [5] Lack of ambiguity of a definition can be guaranteed by a decision procedure. Since a decision procedure answers no as surely as it answers yes, unambiguous formalisms are dual with their negations.
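A minimal sketch of that duality, in Python (the predicate here is my own illustration, not anything from the letter): a decision procedure for a property also decides its negation, so the class and its complement are equally unambiguous.

    # A decision procedure for evenness; it always halts with yes or no.
    def decides_even(n: int) -> bool:
        return n % 2 == 0

    # The complement is decided by negating the very same procedure.
    def decides_odd(n: int) -> bool:
        return not decides_even(n)

    assert decides_even(4) and decides_odd(7)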
Some classes are ambiguous; they have an open boundary. The most important such concept is provability. All deductive arguments / all proofs are provable, so provability is not definable. [6] Provability is a meta-mathematical concept rather than a mathematical definition. [7] Given the foundational importance of provability, we consider the Curry-Howard correspondence. [8]
We have been complacent with computability. Everything about the theory of computation is finite. [9] We cannot have computability, computable enumeration, computable sets, computable languages, or computable functions without the use of infinity. Extending computation to infinity is self-referential because string encoding is itself a computational process [10] (see the halting problem). [11] With the definition of computation (rather than computability), string encoding is not an issue because it is bounded.
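Since the halting problem is invoked here, a minimal sketch of its self-referential core may help (a standard construction, written in Python; the names are mine, and `halts` is exactly the procedure that cannot exist):

    # Assume, for contradiction, a total decision procedure for halting.
    def halts(program, argument) -> bool:
        ...  # hypothetical; no such total procedure exists

    # Do the opposite of whatever `halts` predicts for self-application.
    def diagonal(program):
        if halts(program, program):
            while True:
                pass  # loop forever
        return  # halt

    # diagonal(diagonal) halts if and only if halts(diagonal, diagonal)
    # returns False, so every possible answer from `halts` is wrong.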
Eliminating the contradictions of self-reference (such as Russell’s paradox) imposes the existence of a topology (over the syntax of logic). [12] The use of computational enumeration (or similar) as a formal definition implies a topology. [13]
Zermelo-Fraenkel set theory with the axiom of choice (ZFC) leads to a different topology. [14] Along with a topology, the axiom of choice (AC) provides uncountable sets. [15] The continuum is a topological space over an uncountable set. [16] Note, infinity is a leading indicator of over-interpreting generalizations. It turns out that the confusion we have had comparing countability and computability has a straightforward answer: countability and computability are not isomorphic. We cannot make a system that forces enumeration and counting to be the same (see Löwenheim-Skolem).
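One standard witness to the gap between the two notions (my addition, not the letter's): the halting set is countable but not computable,

    \[
      K \;=\; \{\, e \in \mathbb{N} \;:\; \text{program } e \text{ halts on input } e \,\}
    \]

K is a set of natural numbers, so it is countable (and even computably enumerable by dovetailing runs), yet no total procedure decides membership in K.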
Existential axioms, such as the axiom of choice, create non-constructive sets. [17] We can think of those axioms and sets as a form of arithmetization / self-reference. [18] We can think of more axioms, but there is less to prove without set theory (guess how we fix that).
Mathematicians calculate. Mathematicians prove. Mathematicians model. Modeling connects deductive arguments to non-deductive ideas. What is beyond deductive arguments is unprovable. [19] We can think of unprovability in a number of ways: an unavoidable, undefinable hole in a would-be proof, a circular / self-referential argument, a limit point of logical formulas or sets, a misunderstanding of what the rules mean. If we refuse to acknowledge the subtleties of our rules, then problems will pile up waiting for braver souls.
Please accept both my encouragement and my cautionary example in discussing the philosophy of math. [20] There is something for you in what I have written here.
peace and joy,
Paul Marchertas
[1] Think of how much science has benefited humanity. That was only possible because of clarity about what is and is not science. If we are not clear on what math is and what math is not, then we will run math into a rut of trying to solve unprovable problems. In my opinion, mathematicians have long since done that.
[2] We can represent things in multiple ways. These kinds of definitions are inherently contingent. That is normally not a problem because they are only part of a larger theory that eventually leads to a hypothesis / problem statement. However, there is plenty of opportunity to lose track of things. Definitions can be a good place to look for undefinability.
[3] Originally, the term well-defined referred to functions. The reason that we know that functions are well-defined is that we can test them against their domains. When we want to generalize, we still need a test. We cannot use the consequent without establishing the antecedent. A set-theoretic domain could be a non-redundant proper subset of the axioms.
[4] As a corollary to Gödel's incompleteness theorems, we must extend / specialize a system to test for consistency. That is metamath in a number of ways: we need to change the axioms; we infer consistency, or the lack of it, for the original / general system; and the concept of consistency is itself meta-mathematical.
[5] Philosophically, we may think of definitions as part of the conjecture / problem / hypothesis. In practice this won't work. First, as I have written, we rarely consider that a definition might not be true. Second, most definitions / models are infinite, so we have to use axiom schemas to formalize them.
[6] Provability is undefinable in every system. That is stronger than Tarski undefinability. Tarski undefinability (and Gödel's incompleteness theorems) require a system that encodes Peano arithmetic. We should be able to generalize that to systems that are sufficiently sophisticated that they could encode Peano arithmetic.
[7] Notice that the relationship works in both directions. Something is undefinable if and only if something is unprovable.
[8] At first, when we think of computation and computability, we think of the Entscheidungsproblem. Thinking of formal methods more abstractly: the methods have an intrinsic continuity (of the minimal / atomic operations). We know of unsolvable problems via a convergence of solvable problems. The limits of what can be done are like a closure on deductive truth. Knowing that provability and computation must be meta-mathematical concepts leads us to a careful reappraisal of computation.
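A small illustration of the Curry-Howard correspondence (a standard example, written in Lean; my addition, not the letter's): the program that returns its first argument and the proof of A → (B → A) are one and the same term.

    -- Under Curry-Howard, this term is both a program (the K combinator)
    -- and a proof of the propositional tautology A → (B → A).
    theorem k (A B : Prop) : A → (B → A) :=
      fun a _ => a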
[9] The input is finite. The output is finite. The machine / algorithm is finite. The execution / run-time / evaluation of an input is finite. Infinite processes are not computations. (Computable functions can be partial, and their code can fail to terminate, but unlike infinite procedures, a non-terminating run generates no output.) Computation and computability should not be conflated.
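A minimal sketch of a partial computable function in Python (my illustration): every halting run is a finite computation with output; on inputs without a witness, the search never halts and yields nothing.

    # Halts exactly when `target` is a perfect square.
    def integer_sqrt_if_perfect(target: int) -> int:
        n = 0
        while True:
            if n * n == target:
                return n  # a finite, completed computation with output
            n += 1

    print(integer_sqrt_if_perfect(9))   # 3, after finitely many steps
    # integer_sqrt_if_perfect(8) would never halt and never produce output.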
[10] Strictly speaking, even that depends on our definition of string encoding. It seems that, at the time, we thought that strings were simple, so there was no need for a formal definition of string encoding. (Maybe you have noticed that overlooking subtleties is a recurring theme of our discussion.)
[11] Keep in mind the difference between numbers and numerals. While we think of them as being the same, numbers have an intrinsic order; numerals do not. Number systems come with a topology. Strings do not have a topology of their own. The mappings are not well-defined / not fixed. Notice that the collection of all string encodings cannot be assumed (according to the axioms in use) to be a set.
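A small Python illustration of the number / numeral distinction (my addition): one number has many numerals, and the numerals' lexicographic order disagrees with the numbers' intrinsic order.

    # One number, four numerals: the encoding is a choice, not a given.
    n = 10
    print([bin(n), oct(n), hex(n), str(n)])  # ['0b1010', '0o12', '0xa', '10']

    # String order and numeric order disagree on the very same items.
    print(sorted(["9", "10"]))  # ['10', '9']  (lexicographic)
    print(sorted([9, 10]))      # [9, 10]      (intrinsic numeric order)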
[12] When we decide to be dissatisfied with equivalence, we require identity. (Discarding equivalence can be a mistake. Discarding equivalence / breaking abstractions means that the members of an equivalence class are no longer the same with regard to the arguments that we are making.) Topology seems to be the necessary and sufficient formalism to flatten circular reasoning. Note that the first-order form of the axiom of regularity does not fix a particular topology, yet requires one to exist. Such a use of topology (because it implies a new axiom) and the paradox itself are metamath.
[13] By the Curry-Howard correspondence, we see that computational enumeration matches the topology of stratified set theory (NF is prominent). Enumeration maps from the natural numbers (repeated application of the successor function). Think of ordinary generating functions and of generating languages.
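A minimal Python sketch of computational enumeration (my illustration, not the letter's): each natural number, reached by repeated succession, is mapped to a string, so the whole language is generated in a fixed order.

    from itertools import count, product

    # Enumerate all strings over an alphabet in length-then-dictionary
    # order, driven by the successor sequence 0, 1, 2, ... of lengths.
    def enumerate_strings(alphabet=("a", "b")):
        for length in count(0):
            for letters in product(alphabet, repeat=length):
                yield "".join(letters)

    gen = enumerate_strings()
    print([next(gen) for _ in range(7)])  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']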
[14] Note, the subspace topology is finer than the order topology, but they differ only for infinite spaces. While ZF does not fix a topology, adding AC fixes one. AC leads to well-ordering. The ordering of sets leads to an ordering of formulas via the completeness of first-order logic. Counting maps to the natural numbers. Think of exponential generating functions and of parsing / recognizing languages. (Note, choice functions arithmetize; more on that later.)
[15] Countability is another open-boundary, proper-class, meta-mathematical concept. Without AC, every set can appear countable from outside the system (see Löwenheim-Skolem). (More on the problems of axioms that make metamath classes into sets in a bit.)
[16] Limits are deceptively simple. We don't even need the axiom of infinity to describe limits. We do need the axiom of infinity for limits to be useful. However, limits in themselves do not give us calculus. There are two distinct concepts that we refer to as real numbers. We have limits / Cauchy sequences / Dedekind cuts, and then we have the continuum / least upper bound / real field / uncountability of the reals. It may make sense to some that limits should be uncountable, but I won't hold my breath while they try to prove it from ZF. (Further, the relationship between discrete and continuous math is not fixed by ZFC.)
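To put the two concepts side by side (standard definitions, my addition): Cauchy / limit completeness says

    \[
      \big[\, \forall \varepsilon > 0 \;\exists N \;\forall m, n \ge N \;\; |x_m - x_n| < \varepsilon \,\big]
      \;\Longrightarrow\; \exists x \;\; \lim_{n \to \infty} x_n = x ,
    \]

while least-upper-bound (continuum) completeness says

    \[
      \big[\, S \subseteq \mathbb{R}, \; S \neq \emptyset, \; S \text{ bounded above} \,\big]
      \;\Longrightarrow\; \sup S \text{ exists in } \mathbb{R} .
    \]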
[17] There are other axioms that can be called existential (my use might be non-standard), but we are not talking about them. (It is interesting, but too much to go into here.) The non-constructive set-generating axioms must also generate ordinary sets. We should not be confused by the ordinary sets because they, literally, tell us nothing about the non-constructive sets. An over-interpretation in this situation cannot be corrected by a counterexample.
[18] It is our choice whether we view arithmetization as referring to the system we are working with or to another similar system (cycle or infinity). While the first is more interesting, the second avoids the issue of self-reference. When we add the ability to quantify over logical statements, we change first-order logic into second-order logic and lose completeness. That is the incompleteness in Gödel's incompleteness theorems. Arithmetization and how we use it are metamath.
[19] It is intellectually dishonest both to reject changing axioms and to insist on a proof of unprovability. Unfortunately, that perspective seems to be the prevailing one.
[20] I try to understand and to share. It has not killed me yet. Hopefully, you will benefit from and improve on my work. I congratulate you for reading this far.