We have not done a good job of learning the lessons of Gödel's incompleteness theorems and Tarski's undefinability theorem. In part, that is because mathematics is focused on deduction. The rules and limitations of mathematics belong to the philosophy of mathematics, because they are not limited to deduction alone. We should remember that completeness lets us find set-theoretic analogues of these logical concepts, which should be helpful. Note also that both incompleteness and undefinability rely on computable enumeration, which is not as general as has been assumed.

What we are dealing with is not just what we know and what we do not know, but also what we cannot know. For deductive reasoning, what we cannot know is what is unprovable. One way to think of unprovability is that there is a gap in what we can prove: something in the rules or in the problem statement is undefinable. Whatever rules we start with have assumptions built in. By taking special care to distinguish formal / deductive arguments from informal / non-deductive arguments, we can sometimes find those gaps and better understand the rules we are working with. Some informal concepts can be made formal; others cannot.

Self-reference is undefinable in first-order logic. A precursor to self-reference is arithmetization. Logic is used as the foundation of mathematics; set theory is the theory of the values of those logical statements. The distinction between logic and set theory is inherent in their use, and arithmetization (or the lack of it) is undefinable because it crosses that boundary. Gödel numbers are not the only form of arithmetization; an older form is characteristic functions. So long as the system we are using is complete, we can represent a logical statement by a function that maps values of the statement's variables to the truth value of the statement at those values. Such functions are also known as decision problems. An important decision problem is provability.
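As a concrete, purely illustrative sketch of that last idea: a statement with a free variable becomes a characteristic function, i.e., a decision problem. The statement and names below are my own choices, nothing standard is assumed.

```python
# A logical statement with one free variable x over the naturals:
#   "x is divisible by 3"
# Its characteristic function maps each value of x to the truth value
# of the statement at that value -- in other words, a decision problem.
def chi_divisible_by_3(x: int) -> bool:
    return x % 3 == 0

# The statement is now an ordinary mathematical object (a function),
# which is the step arithmetization takes, here in miniature.
print([chi_divisible_by_3(x) for x in range(6)])
# [True, False, False, True, False, False]
```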
Provability is semidecidable. (Note that arithmetization encodes the domain of provability.) The functions that correspond to non-redundant existential axioms (as they are logical statements) are semidecidable problems. The general concept of unprovability is itself undefinable, but specific examples can be found. The way we know that something is unprovable in one deductive system is to have a proof in another deductive system (see undefinability). That second system must be an extension of (that is, specialize or strengthen) the initial system. By extending a system, we make an informal concept a formal one. To understand mathematics as well as we can, we should consider which consequences change (which statements become true, false, or indeterminate) as we change the rules. This runs against our bias / prejudice for a single system!

We are especially interested in extending a set of axioms with an existential axiom. For an existential axiom to be logically independent (to add something new, to be non-redundant), it must allow non-constructive objects (sets, in our case). As such, non-redundant existential axioms break completeness. Without completeness, counterexamples are not guaranteed, so we cannot use the law of the excluded middle (see intuitionism) with non-constructive sets. While existential axioms can generate non-constructive sets, they always generate constructive sets as well, and this overlap can confuse: we overgeneralize from our experience with the constructive sets an existential axiom yields, without realizing that our assumptions cannot be disproven. It is a problem that mathematicians are completely oblivious to using unstated axioms! We have a situation in which mathematicians believe that the rational field is a subfield of the real field. For that to be mathematically / deductively true, we need the continuum hypothesis as an axiom. (We cannot have a 'good' axiom for CH, since we start with non-constructive sets.)
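To make "semidecidable" concrete: a semi-decision procedure halts with "yes" on every theorem, but on a non-theorem it need never answer at all. The sketch below uses a toy enumeration (the even numbers standing in for an enumeration of theorems); the names and the artificial step bound are mine, not part of any real proof system.

```python
from itertools import count

def semidecide(target, enumerate_theorems, max_steps=None):
    """Walk through an enumeration, halting with True iff `target`
    shows up. On a non-member this never returns False -- it would run
    forever, so we allow an artificial cutoff for demonstration."""
    for i in count():
        if max_steps is not None and i >= max_steps:
            return None  # gave up: undecided, which is not a "no"
        if enumerate_theorems(i) == target:
            return True

# Toy "theory": the theorems are exactly the even numbers.
evens = lambda i: 2 * i
print(semidecide(10, evens))                 # halts with True
print(semidecide(7, evens, max_steps=100))   # None: silence, not refutation
```

The asymmetry in the cutoff is the point: a "yes" is a finite certificate, while the absence of one proves nothing.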
There are many open problems that are not provable in ZFC because that assumption is built into the problems! ZFC can allow cardinality to be defined differently and an axiom opposed to CH to be adopted, so that the rational field is incomparable to the real field.

Computation is trouble too. The theory of computation is finite, so it is a mistake to assume that extending the theory to infinite sets is trivial. While mapping a computation over the natural numbers seems simple, it needs more than repetition (i.e., primitive recursion). The axiom of infinity is not enough to formalize computable enumeration, computable sets, or computable functions. String encoding does not react well to infinity because it is incapable of testing its own domain. We should be able to use the same encoding over a subset, but for infinite sets we cannot, because we get uncomputable sets / languages. We have the same problem with computational complements: for computable functions, the computable complements must be the same for every computation (i.e., every algorithm / Turing machine) in the class. We need an axiom stronger than well-foundedness or regularity. Such an extended notion of computation needs set theory to be stratified. "For every subset there exists a total characteristic function and an enumeration function" might do, maybe. (Those functions represent subsets with respect to the set and string encodings over a domain. There is no need for classes then, and it should be easier to contrast with AC.)

Mathematics needs a variety of incompatible rules, so that important open problems are not provable. Prove me wrong or be a disciple. If you care about mathematics, then you shouldn't play coy. Working in obscurity is tough; I encourage you to learn from others and save yourself some grief.
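For what the proposed axiom would demand, "for every subset, a total characteristic function and an enumeration function," here is a sketch for finite sets, where both functions trivially exist. The names are my own; the interesting and contested case is of course the infinite one, where totality is exactly what cannot be taken for granted.

```python
def characteristic(subset):
    """Total characteristic function: decides membership for every
    element of the ambient domain, with no undefined cases."""
    s = frozenset(subset)
    return lambda x: x in s

def enumeration(subset):
    """Enumeration function: lists the subset's elements by index.
    Any surjection from indices onto the subset would do."""
    elems = sorted(subset)
    return lambda i: elems[i]

evens_to_10 = {0, 2, 4, 6, 8, 10}
chi = characteristic(evens_to_10)
enum = enumeration(evens_to_10)
print(chi(4), chi(5))                 # True False
print([enum(i) for i in range(6)])    # [0, 2, 4, 6, 8, 10]
```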