PNP is called the most important problem of computer science, but it is also a problem of theoretical mathematics. From what I can tell, complexity researchers are focused on using sub-Turing-complete models of computation to match complexity classes. Such methods are not capable of testing the possibility that the PNP problem does not have a yes-or-no answer. In the case that PNP is not provable, we can expect that it will take broad familiarity and skill with mathematics to finish a proof (of logical independence).

Computational complexity theory formalizes the difficulty of mathematical problems. To do so, complexity needs to measure work, and the theory of computation formalizes work. Complexity is limited in how precisely it can measure the difficulty of problems because it has a different basis from computation: computation uses string values, while complexity uses string lengths. This value/length gap could be why the PNP problem is not provable.

Here I will pause for a few comments. For logically independent problems, there are many ways to describe what is essentially the same thing, and our length-versus-value gap can be described differently. I will not emphasize this other description or rely on it, but I mention it here as it might be instructive. Since complexity is based on analysis, complexity proofs involve limits; since computation quantifies work, it limits us to inductive proofs. In other words, complexity needs infinity, while computation cannot use infinity. Since the logic of PNP is bounded in such a way, the Turing Thesis could be used without assuming computationalism.

In a proof, every step is valid. That will not work where no proof exists. The missing information is always a challenge to preconceptions. The more important an open problem is, the more time and effort has gone into trying to solve it; the built-up expectations add to the difficulty of acknowledging what is wrong with the problem.
To form a proof, the missing information must be added to either the hypothesis or the axioms. The missing information is invalid but satisfiable. We can only know that the missing information is satisfiable when the proof is complete. That is why we might call the missing information a working assumption, or perhaps extra information.

The set of all total orders of string values that are compatible with the partial order of string lengths contains the information of the length-value gap. We can form a working assumption for PNP based on the length-value gap by ordering the input strings within each string length (that is, by choosing one of the total orders) and then using that order in some way.

To use the order, we form a new, larger string by concatenating consecutive strings interspersed with separators. For convenience, we will continue to refer to the old strings within the new string. The number of strings to concatenate is determined by a polynomial in the string length of the first string.

To process the new string, we modify an algorithm that is known to solve an NP-complete problem. The modified algorithm starts by calculating another polynomial to use as a time limit. Then it runs the original algorithm on each of the old strings, up to the time limit. If any string is accepted, then the first string is also accepted.

The modified algorithm depends on the asymmetry of computation: the strings that are not first can only help the first string be computed in polynomial time. This algorithm requires a special kind of order. The chosen order puts the accepting strings together while spacing out the strings that are computed quickly (according to the original algorithm). What matters is that we have enough fast strings. Let us form a ratio of fast strings to accepting strings, divided by the number of strings that we concatenate.
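The modified algorithm described above can be sketched in code. This is only a toy illustration under assumed names: `toy_decider` stands in for a real decider of an NP-complete problem, written as a Python generator so that its steps can be counted against the polynomial time budget, and the separator and polynomial are placeholders of my choosing.

```python
SEP = "#"  # separator, assumed not to occur in the input alphabet

def toy_decider(s):
    """Stand-in for an NP-complete decider: accepts strings containing '11'.
    Written as a generator that yields once per step so steps can be counted."""
    for i in range(len(s) - 1):
        yield
        if s[i:i + 2] == "11":
            return True
    return False

def run_with_budget(base_decider, s, budget):
    """Run base_decider on s for at most `budget` steps.
    Returns True/False if it decides in time, None if the budget runs out."""
    gen = base_decider(s)
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration as stop:
            return stop.value  # the decider's return value
    return None

def modified_algorithm(base_decider, new_string, time_poly):
    """Accept if any old string in the concatenation is accepted within
    the time limit computed from the first string's length."""
    old_strings = new_string.split(SEP)
    budget = time_poly(len(old_strings[0]))  # limit from the first string
    return any(run_with_budget(base_decider, s, budget) is True
               for s in old_strings)

print(modified_algorithm(toy_decider, "00#01#11", lambda n: n ** 2 + 10))  # True
```

The later strings can only help: a budget overrun on one of them is treated as "not decided", never as a rejection of the first string.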
If our ratio is greater than or equal to one for all string lengths, then P equals NP given this working assumption. Here PNP is turned into a combinatorial problem.

Our working assumption has chosen a 'convenient' order that speeds things up. Now we need another working assumption that chooses an 'inconvenient' order to slow things down.

Complexity classes are made up of computable functions. Computable functions have corresponding algorithms that can be run on input strings to form execution traces as sequences of configurations. As an alternative to complexity classes, we will use configuration spaces. This may remind you of what complexity researchers do; I won't go into the details because I don't remember them.

For problems to be of interest to complexity, they must be computable for all strings beyond a constant. As every algorithm is fixed with respect to its input, it is small in relation to most inputs. The states of an algorithm are inconsequential with regard to the information content of how the input is processed. This should hold for any complexity class.

Since NP-complete problems can be converted to each other, we can use graphs in place of strings without loss of generality. We will assume that every NP-complete algorithm can be limited to accessing one path at a time. This is our working assumption.

A path is also an inductive function from vertices to vertices. The relationship between labeled and unlabeled graphs is that of an algebraic group, and every group has an isomorphic permutation group. The permutation group that corresponds to the symmetries of labeling a graph will include every path in the graph, since the inductive function corresponding to a path is a subset of a permutation that is a member of the group. Any two isomorphic graphs have the same number of vertices and edges, so they have the same string length. Therefore, our working assumption depends on the input string order.
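The claim that a path's inductive function is a subset of a permutation can be checked concretely on a tiny example. This is a brute-force sketch with helper names of my own choosing; it only illustrates that the successor map of a path extends to a full permutation of the vertex set.

```python
from itertools import permutations

def path_successor_map(path):
    """The 'inductive function' of a path: each vertex maps to the next one."""
    return {u: v for u, v in zip(path, path[1:])}

def extends_to_permutation(partial, universe):
    """Check whether a partial map is a subset of some permutation
    (bijection) of `universe`. Brute force; fine for tiny examples."""
    for image in permutations(universe):
        full = dict(zip(universe, image))
        if all(full[u] == v for u, v in partial.items()):
            return True
    return False

vertices = [0, 1, 2, 3]
succ = path_successor_map([0, 2, 1, 3])  # a Hamiltonian path on 4 vertices
print(extends_to_permutation(succ, vertices))  # True: close the path into a cycle
```

A map that is not injective, such as {0: 1, 2: 1}, extends to no permutation, which is why only genuine paths embed into the labeling symmetries this way.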
This working assumption turns PNP into a topological problem. As our assumption imposes a structure on NP-completeness, that structure must hold for all of NP. We need to show that, for PTIME, some property, such as compactness, holds for every subspace. It should make sense that simpler computations will have more topological structure. Then we can show that, while P and NP are similar locally, they are not homeomorphic. That would prove that P is not NP given our assumption. Here we see infinite versus finite being emphasized over length versus value. Notice that a path is a one-dimensional, topologically continuous subspace. Our assumption implies that input space is mapped to configuration space continuously.

You may be wondering about self-reference. If my guess turns out to be right, a proof by self-reference should exist. It might be easier, but controversial. A proof by self-reference still needs a working assumption. As the logic of PNP is tied up with the rules of computation, a self-referential proof might not be all that interesting. The issue of the Turing Thesis versus Countable Choice is tied up with it.

If you wish to generate Hamiltonian graphs, here are a few tips. First, string length grows with graph size (the number of edges) rather than graph order (the number of vertices). Hamiltonian circuits/cycles can be rotated and reflected, so they correspond to dihedral groups. Unlabeled structures are generated by exponential generating functions. Hamiltonian graphs have symmetry with respect to each Hamiltonian circuit, so they need a ratio of exponential generating functions, that is, a hypergeometric function.

It is a shame that after so many years I could not finish a proof. The combinatorial and topological (sub)problems remain to be solved. It has been difficult for me to use each kind of math correctly in context. Obviously, I think that this proof strategy is promising. I expect that you are tempted to reject my arguments. I did what I could. I have to leave it to mathematicians.
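The dihedral-symmetry tip above can be made concrete: every Hamiltonian cycle on n vertices is traced by exactly 2n vertex orderings (n rotations times 2 reflections, the dihedral group D_n), so dividing the number of cycle-tracing orderings by 2n counts distinct cycles. A brute-force sketch on K4, with helper names of my own choosing:

```python
from itertools import permutations

def hamiltonian_cycle_sequences(n, edges):
    """All orderings of vertices 0..n-1 that trace a Hamiltonian cycle."""
    edge_set = {frozenset(e) for e in edges}
    return [perm for perm in permutations(range(n))
            if all(frozenset((perm[i], perm[(i + 1) % n])) in edge_set
                   for i in range(n))]

# K4: the complete graph on 4 vertices (every pair is an edge)
k4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
seqs = hamiltonian_cycle_sequences(4, k4_edges)

# Each distinct cycle is traced by one ordering per element of the
# dihedral group D_n: n rotations times 2 reflections = 2n orderings.
print(len(seqs) // (2 * 4))  # 3 distinct Hamiltonian cycles in K4
```

All 24 orderings of K4's vertices trace cycles, and 24 / (2 · 4) = 3, matching the known count of Hamiltonian cycles in K4.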
After all, PNP is a math problem.