I have worked on the PNP (P versus NP) problem of computational complexity for ten years. I have made progress on the problem, but I have been unable to attract an audience. While I cannot recommend myself as a teacher, I do warn you that understanding PNP requires familiarity with mathematics beyond the standard computer science curriculum. My claim: PNP is unprovable because the non-existence of bounds and orders is undefinable in Zermelo-Fraenkel set theory (ZF).

First, let's clear up something that seems to confuse everyone at some point. The motivation for PNP is an applied problem: solving an NP-complete problem quickly to save money. That applied problem is often discussed as an optimization problem. It is an optimization problem, but it is not the same optimization problem as the one in PNP, so we need to be careful about what we are optimizing. Even if we had a PTIME algorithm, that algorithm would be superlinear (this much is already proven), so there is a break-even point where the cost of the computation exceeds the savings to be had from an optimal solution to an NP-complete problem. The applied (resource) problem is to get a near-optimal solution, in the traditional sense, adjusted for the cost of the computation. Once we see that the practical problem motivating PNP is an applied problem distinct from PNP itself, we are better prepared to appreciate that PNP is a theoretical problem.

A quick way to understand why PNP is logically independent of ZFC is to see that constructivism and non-constructivism are localized around PNP. (There has to be a good explanation of this!) The traditional foundations of mathematics are not based on the theory of computation, but complexity theory is. Any two computable functions that differ on only finitely many inputs belong to the same complexity class; every aspect of the theory of computation is finite. Because complexity theory is based on computation, proofs of complexity are logically required to be constructive. Nondeterminism, as far as complexity is concerned, is bounded. We would like robust results that are invariant under changes in the model of computation, and since the standard models of computation are PTIME-reducible to one another, what we know up to PTIME we know non-constructively.

A less conceptual way to understand PNP is to use working assumptions. Let us try assuming that P is in LSPACE. By definition, the work done by a computation is quantifiable (and thereby localizable). We can arrange the configurations of any computation into a grid space, with the transition function supplying the edges of a graph (see the configuration-grid sketch below). This lets us speak of graphs and topological spaces in place of strings and tapes, which is convenient because many NP-complete problems are graph-theoretic. A Hamiltonian circuit (cycle) can be thought of as a circular permutation, so a Hamiltonian path can be thought of as a partial function that is a contiguous subset of a circular permutation (see the Hamiltonian sketch below). Paths are continuous subspaces. Since paths are arbitrarily long relative to any fixed algorithm, the working memory of a PTIME algorithm must retain a near-continuous subspace of the input. (No mystical 'extra' information - this is to be formalized as an ordering.) The continuity is 'near' in the sense that the information left out of memory is limited by the storage capacity of the algorithm. The upper bound on the number (and magnitude) of discontinuities for P is a constant, based on the size of the algorithm; the upper bound for NP is a function of the input.
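Here is the configuration-grid sketch mentioned above: a minimal Python illustration, nothing more. The parity DFA, the input string, and the helper name configuration_graph are all made up for this example; the point is only that configurations form a state-by-position grid with the transition function supplying the edges, and the same picture extends to space-bounded Turing machines once the work tape is folded into the configuration.

```python
# A minimal, illustrative sketch: configurations of a machine reading an input
# form a grid (state x input position); the transition function gives the edges.
# The DFA here accepts binary strings with an even number of 1s.

def configuration_graph(delta, states, input_str):
    """Return the grid of configurations and the transition edges between them."""
    nodes = [(q, i) for i in range(len(input_str) + 1) for q in states]
    edges = []
    for q, i in nodes:
        if i < len(input_str):  # one computation step: consume input_str[i]
            edges.append(((q, i), (delta[q][input_str[i]], i + 1)))
    return nodes, edges

states = ["even", "odd"]
delta = {"even": {"0": "even", "1": "odd"},
         "odd":  {"0": "odd",  "1": "even"}}

nodes, edges = configuration_graph(delta, states, "1101")
print(len(nodes), "configurations on a", len(states), "x", len("1101") + 1, "grid")
print(edges[0])  # (('even', 0), ('odd', 1)): reading the first '1' flips the parity
```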
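Here is the Hamiltonian sketch mentioned above, again only an illustration on an arbitrary toy graph: a Hamiltonian cycle is a circular permutation of the vertices whose consecutive pairs are all edges, and a Hamiltonian path is a contiguous arc of some circular permutation of the vertex set (the wrap-around edge need not exist in the graph).

```python
# Illustrative only: Hamiltonian cycle as a circular permutation, Hamiltonian
# path as a contiguous arc of a circular permutation of the vertices.

def is_hamiltonian_cycle(adj, order):
    """True if `order` is a circular permutation of all vertices along edges of `adj`."""
    n = len(adj)
    if sorted(order) != sorted(adj):
        return False
    return all(order[(i + 1) % n] in adj[order[i]] for i in range(n))

def is_hamiltonian_path(adj, order):
    """True if `order` visits every vertex once along edges of `adj` (no wrap-around)."""
    n = len(adj)
    if sorted(order) != sorted(adj):
        return False
    return all(order[i + 1] in adj[order[i]] for i in range(n - 1))

# Example: the 4-cycle 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_hamiltonian_cycle(adj, (0, 1, 2, 3)))  # True: the circular permutation (0 1 2 3)
print(is_hamiltonian_path(adj, (1, 2, 3, 0)))   # True: a contiguous arc of that permutation
```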
Returning to the discontinuity counts: algorithms for NP-complete problems use an increasing number of discontinuities, or else more than linear space. (It is convenient to use a different topological property to prove the lower bound. An assortment of topological spaces is related through the configuration space; instead of a single difference in one topological property, the proof actually has equivalence classes of topological spaces separated by properties.) Under this working assumption, P /= NP.

We can use a double-exponential-time Turing reduction to 'pre-compute' an NP-complete problem in polynomial time, because the asymptotic analysis of complexity theory is based on string length rather than string value (which is what the computation acts on). String values of the same string length are not inherently ordered in ZFC (yet we use such an order; more on that later). The domains of computable functions are chains of anti-chains. Given an NP-complete problem and a correct NPTIME algorithm, this reduction will work as long as enough accepting input strings can be guaranteed to be decided in polynomial time. The proportion of provably fast accepting strings versus total accepting strings (for each string length) has to converge to zero at a polynomial rate (forming a lower bound).

The accepted input strings have an underlying non-repeating pattern, and this pattern contains the prime numbers. (Verify with an enumeration of minimum-size certificates for the subset sum problem; see the enumeration sketch below. Multiplies / factors in more than one dimension.) The new material in the pattern, corresponding to the primes, consists of particularly slow inputs. Next, we use a group action, within each string length, to get a polynomial portion that is fast. (Verify with graphs.) This will work so long as we have a bound (asymptotic, upper and lower) on the primes that is narrow enough; see the prime-counting sketch below. With the Riemann hypothesis (RH), P = NP.

Note that a bound is a binary relation: one thing relative to another. Boundedness requires orderability. I have found this notion very helpful. (It should be used when teaching analysis; think convergence and metrizability, respectively. Check that sigma-algebras are preorders on topologies.) One use is to uncover set-theoretic implications and consequences. For us, the critical factor is whether a discrete set has a unique, pre-existing order. That is the underlying assumption that exceeds the axioms (ZFC).

To my understanding, the continuum hypothesis (CH), as an axiom, fixes transfinite arithmetic with respect to ordinary arithmetic. With CH, the order of the real numbers (a la metrizability) infects the natural numbers. With a cardinality-enhanced notion of bound and order, every subset of the naturals becomes finite and ordered. Moreover, since ZF + CH imposes an order on every subset of the naturals, that order is not accessible constructively (from within ZFC, much less by computation). That conforms with our working assumptions. Notice how the working assumptions we used match up: an order on the domains of NP-computable functions and a bound on the space needed by PTIME algorithms, that is, an order outside NP and a bound inside P, respectively. LSPACE blocked the possibility of extra information and denies the existence of an underlying order. The logical constraints of complexity theory mean we can't just use the axiom of choice (AC); we need the order to explicitly be there, or to explicitly not be there, via a stronger axiom. It seems that ZF + CH -> RH and P = NP, while ZF + ~CH -> P /= NP. The most important principle in mathematics is not to assume more than is needed. You don't believe me. Admit it.
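Here is the enumeration sketch promised above, just a starting point in Python; the weights, the targets, and the helper name minimum_certificates are made up for this example. For each target value it brute-forces a smallest subset of the weights that sums to it; whether the resulting table shows the pattern I describe is for the reader to check.

```python
# Illustrative brute force: for each target, a smallest subset of `weights`
# summing to it (a minimum-size certificate for that subset sum instance).
from itertools import combinations

def minimum_certificates(weights, targets):
    """Map each target to one smallest subset of `weights` summing to it, or None."""
    result = {}
    for t in targets:
        result[t] = None
        for size in range(1, len(weights) + 1):
            hit = next((c for c in combinations(weights, size) if sum(c) == t), None)
            if hit is not None:
                result[t] = hit
                break
    return result

weights = [2, 3, 5, 7, 11]
for target, cert in minimum_certificates(weights, range(1, 16)).items():
    print(target, cert)  # e.g. 10 (3, 7); targets with no certificate print None
```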
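Here is the prime-counting sketch promised above (Python, assuming sympy is installed; illustrative only, not a proof step). How narrowly the primes are pinned down is what the argument needs. Unconditionally pi(x) is roughly x/ln x; under RH, Schoenfeld's bound gives |pi(x) - li(x)| < sqrt(x) ln(x) / (8 pi) for x >= 2657. The snippet just tabulates these quantities so the widths can be compared.

```python
# Tabulate the prime-counting function against the PNT estimate and the
# RH-conditional (Schoenfeld) error window. Requires sympy for primepi and li.
import math
from sympy import primepi, li

for x in [10**4, 10**5, 10**6]:
    actual = int(primepi(x))                                # pi(x): primes <= x
    pnt = x / math.log(x)                                   # Prime Number Theorem estimate
    deviation = abs(actual - float(li(x)))                  # |pi(x) - li(x)|
    rh_window = math.sqrt(x) * math.log(x) / (8 * math.pi)  # allowed under RH (x >= 2657)
    print(f"x={x:>8}  pi(x)={actual:>7}  x/ln x={pnt:>10.1f}  "
          f"|pi(x)-li(x)|={deviation:>7.2f}  RH bound={rh_window:>8.2f}")
```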
You are not impressed with my explanation, but don't overlook that it is true. If PNP is important, then it should not matter that I am not perfect. Note that I only thought of using RH and CH recently; if someone starts there to rediscover the answer, they could save a lot of time. You have read this far. Think on it. I thank you for making the effort to understand my explanation.