A digression within a digression: it is possible to reject the kind of axiomatics on which set theory is grounded without embracing finitism. My own rejection of it owes itself not to a refusal of the Platonic position or, more generally, of the concepts of infinity and the mathematical continuum, but to the manner in which the related abstractions have been obfuscated by an ‘improper semantics’ yet to be disentangled by a complete cycle through the epistemes. Current mathematics, basing its model of the continuum on Peano’s axioms, constructs the natural numbers by taking 0 as a non-logical symbol and adding 1 to it, repeating this indefinitely, generating the entire continuum number by number toward infinity on the basis of a repeated operation of ‘addition’ whose ‘unary function’ reifies the procedural coherence of the infinite iteration of the assumed variable (i.e. we assume it even makes sense to infinitely repeat an operation; we assume the nature of addition does not suddenly change at some exceedingly distant point in the continuum). This function defines the totality of these numbers as an interminable series or ‘set’ à la modern set theory, the set N, leaving behind the Feferman–Schütte or ‘first impredicative’ ordinal associated with arithmetical transfinite recursion as the smallest ordinal that cannot be predicatively generated by iterated operations on 0,- that is, the basis of proof-theoretic mathematics, where one moves on a recursive path from given sets of theorems toward the axioms necessary for their construction, with this impredicative ordinal signifying the minimal axiomatics necessary for the articulation of a statement or theorem with some truth-value.
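The additive construction just described can be sketched in a few lines. This is only an illustrative model, with ordinary Python integers standing in for the formal naturals and the function names my own, not the axiomatic apparatus itself:

```python
def S(n):
    """The 'unary function': the successor operation that generates N from 0."""
    return n + 1

def peano_add(a, b):
    """Addition defined by recursion on the successor:
    a + 0 = a;  a + S(b) = S(a + b)."""
    result = a
    for _ in range(b):   # b-fold iteration of S
        result = S(result)
    return result

# Every natural number is reached by finitely many iterations of S on 0:
three = S(S(S(0)))
print(three)                 # 3
print(peano_add(three, 4))   # 7
```

The sketch makes the assumption visible: the whole construction rests on nothing but the indefinite repetition of a single unary operation.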
This makes it impossible to understand the formation and distribution of something like the prime numbers, for primes arise from an entirely different mathematical operation, namely multiplication, such that simply adding one to a prime will rarely carry us to another prime, and even when it does, it reveals hardly anything useful about the frequency of the distribution of the primes in the continuum.
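The point can be checked directly: among the primes below 200, only 2 has a prime successor, since for every odd prime p the number p + 1 is even and hence composite. A minimal sketch, using naive trial division purely for illustration:

```python
def is_prime(n):
    """Naive trial-division primality test (illustration only)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [p for p in range(2, 200) if is_prime(p)]
# Adding 1 to a prime almost never yields another prime:
successors_prime = [p for p in primes if is_prime(p + 1)]
print(successors_prime)   # [2] -- only 2 + 1 = 3 survives
```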

[An important note must be made here. We cannot ‘disentangle’ the additive and multiplicative simply by conducting primitive recursion from any given set of theorems back to their necessary axioms or ‘impredicative’ ordinal. It is, in other words, of no account how precisely the operations of addition and multiplication are defined,- whether addition is defined through Peano arithmetic, in whose logical signature the multiplication and addition relations are both contained, or through Tarski’s identity by extending first-order Skolem arithmetic (that theory of the natural numbers whose signature contains only multiplication and equality) with the successor predicate,- for in both cases the operations of addition and multiplication become entangled by the unary function, inasmuch as the truth-values of formulas in Skolem arithmetic are reducible to the sequences of non-negative integers that make up their prime-factor decompositions, such that the multiplication operation becomes simply a pointwise addition of these sequences. Furthermore, despite the far greater complexity and presumed foundation of the reals, moving from the natural numbers to the real numbers through yet another extension of the same unary function can be easily negotiated if we consider a geometrical analogy. If one draws a line through a circle in a 2-dimensional plane, there is no single given point where they meet; the mathematical analyst, when tasked with enumerating that point, will simply insert a new point on the plane where he wants the connection to be made, and then use converging additive sequences on that inserted point to describe the intersection of the line and circle. He has enumerated the imaginary point where the line and circle meet as an equivalence class of a family of converging sequences (fundamentally, sequences of additions) in the plane. Real numbers are simply algebraic constructions analogous to these geometrical ones.
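The reduction of multiplication to pointwise addition of prime-exponent sequences can be made concrete. The following sketch (helper names are mine, not standard terminology) verifies it for a pair of sample integers:

```python
from collections import Counter

def prime_exponents(n):
    """Prime-factor decomposition of n as a map: prime -> exponent."""
    exps = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            exps[d] += 1
            n //= d
        d += 1
    if n > 1:
        exps[n] += 1
    return exps

a, b = 360, 756   # 360 = 2^3 * 3^2 * 5,  756 = 2^2 * 3^3 * 7
# Multiplication of numbers = pointwise addition of their exponent sequences:
pointwise_sum = prime_exponents(a) + prime_exponents(b)   # Counter addition
assert pointwise_sum == prime_exponents(a * b)
print(dict(prime_exponents(a)))   # {2: 3, 3: 2, 5: 1}
print(dict(pointwise_sum))        # {2: 5, 3: 5, 5: 1, 7: 1}
```

This is exactly the entanglement the passage describes: the multiplicative structure of a number is re-expressed, coordinate by coordinate, as an additive one.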
A real number is similarly constructed as an equivalence class of Cauchy sequences of rational numbers, with the continuum itself conceived as the continuity of these equivalence classes, viz. the assumption that there exists a continuous sequence of all such equivalence classes, from 1 to infinity, forming a series that can be iterated by a function (namely the ‘unary function’, which reifies the operation of addition) and enclosed by a limit, forming a set,- or, more precisely conceived topologically, a ‘dense set’ in a metric space E from which any uniformly continuous function into another metric space can be extended to a unique continuous function on all of E, as demonstrated in the techniques used by Minkowski to map the quadratic irrationals to the rationals. (“Infinite Ergodic Theory of Numbers”, “The Farey map: definition and topological properties”: “… any uniformly continuous function from a dense set of a metric space E into another metric space can be uniquely extended to a continuous function on all of E.”)
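A Cauchy-sequence construction of this kind can be illustrated with exact rationals. Here Newton's iteration generates a sequence of rationals squeezing toward the square root of 2, a number absent from the rationals themselves; this is an illustrative sketch, not a formal construction of the equivalence class:

```python
from fractions import Fraction

def sqrt2_cauchy(k):
    """First k terms of a Cauchy sequence of exact rationals
    converging to sqrt(2), via Newton's iteration."""
    x = Fraction(2)
    terms = []
    for _ in range(k):
        x = (x + 2 / x) / 2
        terms.append(x)
    return terms

terms = sqrt2_cauchy(5)
# Successive terms squeeze together (the Cauchy property) ...
gaps = [abs(terms[i + 1] - terms[i]) for i in range(len(terms) - 1)]
assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))
# ... yet no term of the sequence is itself the limit:
assert all(t * t != 2 for t in terms)
print(float(terms[-1]))   # approximately 1.4142135623730951
```

The ‘real number’ sqrt(2) is then nothing but the equivalence class of all such mutually-converging sequences, exactly the move the passage describes.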

That sequence might not exist, and these equivalence classes might be discontinuous, such that no formula can be derived that iterates them, undermining our entire model of the continuum,- for it would mean, however surprising and unintuitive, that there are ‘numbers’ that cannot be constructed regardless of how many times the unary function is iterated; there are numbers one can never ‘get to’, no matter how far one counts, in keeping with the modern presumption of ‘inaccessible cardinals’ implied by the axiom of choice. However, it would also mean the continuum hypothesis is false,- a speculation rarely ventured in the literature, though we do find some arguments for it, like that in Woodin’s Ω-conjecture and infinitary Ω-logic, where, given the existence of a proper class of Woodin cardinals satisfying an analogue of the completeness theorem for which some axiom is comprehensive over the structure H(ω2) of sets hereditarily of cardinality less than aleph-2, the continuum by implication is not aleph-1. Instead of developing, like Mochizuki, a non-set-theoretical mathematics in which to account for this discontinuity, which he calls a discontinuous homomorphism, Woodin indulged in it and simply endeavored, in keeping with his overall interest in large cardinalities, to find sufficient axioms of this kind, with which to generalize the theory of the determinacy of pointclasses and gain insight into structures larger than aleph-1, that is, the structure of aleph-2, viz. a structure larger than that covered by the axiom of projective determinacy,- axioms with which an inner model of the large cardinals can hopefully be constructed as a set-theoretical universe within which the continuum hypothesis has a positive truth-value, thereby rescuing mathematical platonism and set theory from themselves.
In this, we see that Woodin echoes the very same faith that had originally moved Gödel to the platonizing side of the transfinite debate when the subject was first taken up by theoreticians,- a faith expressed in the belief that the undecidability or falseness of the continuum hypothesis from within any model of the universe of sets indicated nothing more than a paucity in our ZFC-based understanding of it: internal failures of our own logical systems to fully circumscribe the universe of sets without encumbering themselves with paradoxes, counter-intuitive and often completely meaningless results, and circular arguments, like the fact that, if an inaccessible cardinal is Lévy-collapsed to aleph-2, the resulting model, in keeping with ZFC, is one in which the Kurepa hypothesis fails, the existence of such a cardinal being equiconsistent with that failure. On this faith, we simply need to introduce ever more new axioms (generally, large-cardinal axioms) to our logic with which to construct a new model of the set-theoretical universe, ceaselessly extending and enriching our models with more and more arbitrary axioms drawn up from the ether, until we discover a model in which the truth-value of the continuum hypothesis can finally be decided in the affirmative, owing to the coarsening and enlargement of the notion of definability over pointclasses and their interpolants afforded to us, from within any incomplete system, by the new determinacy and cardinal axioms we see added to our mathematics year after year,- axioms with which we have hoped to surmount the graduated universe of sets in conformation to the Borel hierarchy, the projective hierarchy, the hierarchy of universally Baire sets, etc.,- just as the Cantor-Bendixson theorem demonstrates there is no interpolant in the pointclass of closed sets.

While what has been said here might suggest a finitist philosophy of mathematics, note that someone like Mayberry, championing the finitist anti-Platonic view, rejects the ‘operationalism’ inherent in this construction of the natural numbers (a construction he believes has descended to us, through the set theory of modern mathematics, from the conceptualization of the arithmos found all the way back in Euclid’s fifth common notion), whereby an indefinite process of continued addition is somehow reified as a coherent operation by a unary function that infinitely extends it to the formation of a number-field in which that function can continuously map elements of one set to those in a subset of itself,- or, in Euclidean terminology, an arithmos constituting ‘a whole necessarily greater than any of its parts’,- simply as a consequence of his more stringent epistemological criteria for what constitutes a well-defined mathematical concept, from whose exalted order all indefinite operations and infinite series are to be expunged. Euclid establishes the concept of a ratio by taking two definite geometrical magnitudes and continually adding one of them to itself such that, if this results in a new magnitude exceeding either, they can be said to possess a ratio: here we have affirmed the basic principles of non-contradiction and identity, for two things that express such a ratio have essentially expressed the a priori fact that they are not equal to the third magnitude, and are therefore not equal to one another. Mayberry believes that Euclid took a tragic misstep when he attempted to generalize the concept of ratio by extending the notion of ‘geometrical magnitude’ with which he had been working to a purely arithmetical magnitude with unbounded quantifiers, moving from consideration of the definite magnitude of lines, angles, and figures to that of the indefinite ‘arithmoi’, or what we would today call an empty set.
For, if we similarly take a subset of a set and continually add it to itself, it becomes impossible to create a congruent one-to-one mapping between that set and a subset of itself (an incongruence first revealed by Cantor’s diagonal argument. We also have the famous Banach-Tarski paradox to consider, where it is possible to take an ordinary sphere in Euclidean space, partition it into sets of points, that is, decompose it into some finite number of disjoint subsets,- with the paradox requiring at least five,- and then, manipulating these pieces through nothing more than translation and rotation,- the decomposition grounded firmly in the axiom of choice and in the structure of free groups,- recompose from them not one but two spheres, each congruent to the original, so that we can then redo the operation, duplicating the original sphere out of nothingness repeatedly. Obviously, it does not make sense that one can take an object apart, rearrange its fragments in a certain way,- regardless of how clever our new configuration might be,- and then put it back together as two copies of itself, but mathematically, in set theory, this is possible: each subset, as a collection of points indexed by infinite sequences that can never be written down, is non-measurable and can for exactly that reason never be congruently mapped one-to-one onto a subset of itself, leading to the ex nihilo duplication of the original sphere after reconstructing it out of deviously articulated incommensurable subsets), that is, a ‘set of all sets’, at least when the operation is infinitely iterated as it is in the construction of the real number line, such that the notion of ‘continuity’ itself seems to break down into the kind of paradoxes of irreconcilable and incommensurable infinities Cantor was the first to formally explore.
As Poincaré noted, this apparent ‘breaking down’ essentially means that the basis of arithmetic is not ‘self-evident’ in the way that the truths of geometry are, demanding that we fall back on mere axioms, not self-evident truths, in the development of our mathematics. It means the principle of infinite iteration, viz. the unary function on which the continuum is founded and through which the reals are constructed, is irreducible to the a priori principle of non-contradiction and the basic formalism of logical identity given by the ‘factum’ of Euclidean geometry,- a factum taken as the highest domain accessible to us. This obviation of ‘pure reason’ led, of course, to Euclid’s transition from a logically rigorous and self-evident geometry to a necessarily synthetic generalization of geometrical magnitudes and ratios to the arithmos, or arithmetical operations, imposing upon us, as his intellectual progeny, a reliance on the very same ‘synthetic intuition’ he was forced to adopt, namely the concept of infinite iteration,- an intuition that both conceals from our limited human minds the a priori source of the truths of that geometry and allows us to deploy, from an ‘intuitive’ axiomatics (viz. Peano arithmetic, ZF, set theory, etc.) beyond which we simply cannot hope to venture for want of any more certain ground, precisely the basis of that arithmetic by which we have found ourselves able to navigate, if only pragmatically and not theoretically, a higher mathematics despite those human limitations.
In the terms of predicate logic, we can say that the concept of equality, which can be derived self-evidently from the existence of geometrical ratios, is lost in the movement to a looser equivalence relation than equality (this loss of the uniqueness quantifier up to equality signifies the resulting impossibility of congruently mapping a set to a subset of itself under infinite addition, as noted earlier),- namely, a relation that cannot be expressed in first-order logic and from which is derived the uniqueness quantifier up to isomorphism that we later use to axiomatically ground the infinite quantification of the empty set and, by extension, our arithmetic, whose equivalence relation, designated category-theoretically, holds only up to isomorphism and not up to equality; or, in the model-theoretical terms of elementary embeddings over the universe of sets, up to Vopěnka’s principle, where it is axiomatized that in every proper class over the universe of sets some members are similar to others, that is, that in every proper class of binary relations one is embeddable into another, such that there exists a proper class of extendible cardinals. Here, however, the construction of the continuum through infinite addition,- that is, the kind of number-theoretic functions over the natural numbers involved in Vopěnka and Woodin cardinals, or large cardinals in general, exponentiation, Cauchy sequences, etc., and all other functions of this kind, by means of which we have attempted to transcend the rational numbers and construct the system of real numbers on which set theory and modern analysis depend,- is rejected because it entangles, at increasing levels of abstraction, fundamental operations and concepts in an improper semantics. (The defense of that system has required us to embrace mere axiomatics, estranging the logical core of mathematics given by the original truths of geometry, namely the definite ratios expressible by rational numbers.
In fact, it is precisely this concept of a ratio,- a concept which can be proven self-evidently, affirmed by nothing more than the a priori facts of logical identity, non-contradiction, and equality, unlike our axioms, which require the synthetic reasoning and arbitrary nonconstructive symbolic conventions employed with our adoption of free axioms on infinite sets,- that gives to us the notion of positivity, viz. the idea that one number can be larger than another and therein satisfy the definition of what a number is in the first place, and that thus expresses the rational continuity of ordered pairs, allowing us to construct the sequence of intervals to which the entire continuum of the natural and rational numbers conforms: 1 being less than 2, 2 less than 3, 1 and 2 both being less than 3, and the sum of 1 and 2 being equal or ‘identical’ to 3, and so on.)

[b]Let us summarize what we have said so far. When we generalize, as Euclid did, the concept of a ratio involving definite geometrical figures to that of an empty set, we take two rational numbers and subdivide the interval they express once more; we then reinsert new numbers between the numbers indexed by that interval and locate a mediant between them, thereby creating a series of endless ratios between the completely indefinite, abstract ‘magnitudes’ of an ‘arithmos’, endlessly subdividing the interval into fractions, for example between 0 and 1, e.g. into 1/2, 1/3, 1/4, and so on,-- an infinity of convergent Cauchy sequences or Dedekind cuts freely inserted and indexed between any two numbers in an interval, taken to exist under the axiom of choice regardless of the proximity of that interval to another, establishing not an equality but a virtual isomorphy of cardinality aleph-0 between any subset of a set and the set from which it is amputated, whose equivalence class instantiates an ordering scheme (in the modular-positional arithmetic of our infinitely expandable decimal system, we move from left to right) over the mediant and what Gödel understood to be a ‘computable universe’ (the basis of the Gödel-constructible universe; a complexity value and information density that tells us the computability of large cardinalities) but does not internally map and fully describe that universe (as per the incompleteness theorem), i.e.
an ordering scheme encoded by the hierarchy of cardinals beyond those accessible from within any given universe of sets,- an index from which we cannot actually extract any information, or ‘enumerate’ the encoded sequence of ordinals, without adopting the axiom of choice or that of determinacy, that is, some necessarily synthetic external symbolic convention,- a schematism like that adopted by Kant in his synthesis of arithmetic and temporal order,- that tells us where to begin and terminate any subset over a given universe,-- magnitudes that we then, to be short, simply assert make sense to add to each other forever through the infinite operation of additions, subdivisions, and further additions needed to produce from them exactly that structure we call a ‘set’, or more precisely the power-set of the real numbers, a complete atomic Boolean algebra of cardinality aleph-2 sitting right at the edge of Lebesgue-measurability, an edge beyond which we are forced to give up either Hausdorff or separable topologies in our attempt to define the infinite,-- a power-set with somehow greater density than the set of the rational numbers. This is indefensible save through one of two axiomatics, either that of the axiom of choice, which we have already discussed, or that of the axiom of determinacy; in the first case, we run into the inconsistencies and paradoxes already noted involving inaccessible cardinals, while, in the latter case, we produce an equally bizarre mathematics in which all partially ordered sets can be embedded by cardinalities less than the continuum (viz. the reals), which would mean, in other words, that it was somehow possible to decide the truth-values of all statements expressible in the set-theoretical universe from a small subset of that universe: in either case, we have lost sight of the fundamental nature expressed by the rational numbers at the heart of mathematics, as determined a priori by geometry: ratio, proportion.
When we consider the rational numbers in comparison to the reals, or the set-theoretical model of the continuum, we see that any one interval is completely equal to any other; there is as much information contained between 0 and 1 as there is between 1 and 2, so that we cannot begin to define a hierarchy of transfinite cardinalities of increasing density and complexity through the axiom of choice, i.e. by taking any point within one interval as the beginning of a subset we can add to itself indefinitely as per the unary function; nor can we utilize the axiom of determinacy, whereby equivalence up to equality is coarsened into isomorphic equivalence and subsets of an interval are produced that are incongruent with subsets of themselves, as per Cantor’s diagonal argument. If there is any hope of ‘disentangling’ the mathematical operations, it rests on an exploration of this deep-structure belonging to the rational numbers.
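The mediant subdivision described in this summary can be sketched directly. Starting from the interval between 0 and 1, repeated mediant insertion generates 1/2, then 1/3 and 2/3, then 1/4, and so on; this is a Stern-Brocot/Farey-style sketch, with function names of my own choosing:

```python
from fractions import Fraction

def mediant(p, q):
    """Mediant of a/b and c/d is (a + c)/(b + d)."""
    return Fraction(p.numerator + q.numerator,
                    p.denominator + q.denominator)

def subdivide(lo, hi, depth):
    """Repeatedly insert mediants between lo and hi."""
    row = [lo, hi]
    for _ in range(depth):
        new_row = []
        for a, b in zip(row, row[1:]):
            new_row += [a, mediant(a, b)]
        new_row.append(row[-1])
        row = new_row
    return row

row = subdivide(Fraction(0), Fraction(1), 3)
print([str(f) for f in row])
# ['0', '1/4', '1/3', '2/5', '1/2', '3/5', '2/3', '3/4', '1']
# Every mediant lands strictly between its neighbours, and each new
# interval subdivides again: the process never terminates.
assert all(a < b for a, b in zip(row, row[1:]))
```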

This deep-structure of the rationals seems to defy the kind of ordering scheme, relative to a given mediant, that we have utilized over prefabricated strings organized by external axiomatic systems imposing some linear-positional encoding over binary sequences; for, again, each interval of the rationals is perfectly identical to every other interval, such that we cannot from any position determine where to begin and end a sequence of intervals relative to a mediant,- as we begin from the left in a decimal expansion,- to thereby produce a set from them to which we can assign a value of cardinality. The rational numbers, in other words, explode noncontinuously, as a heterogeneous complexity that we simply cannot hierarchize or reduce to any number of countable pairs, even given an infinite amount of time to complete the operation, since the very positional encoding on binary sequences which the unary function extends to n-countable sets of intervals is made impossible once the rational numbers have been grasped without the baggage of the real number system and the axiomatic, synthetic constructions it has imposed over the rationals in the attempt to render them and their infinities cognizable as something we call, in a word, ‘the real number system’. This attempt, in short, has only further concealed their deep-structure,- a deep-structure within which the true semantic content of the fundamental operations is still contained. To access that content, I believe we must reverse course, so that we can move in a different direction from that taken by Euclid in his generalization of the arithmos; it must be possible to generalize and extend the geometrical concept of a ratio in some other way, without adopting the implicit synthetic reasoning of the unary function, such that numbers can be discussed without having to encode them in cardinalities or impose arbitrary linear-positioning schemes, viz. sets.
First, we simply have to accept the fact that the kind of reasoning we use in discussing finite sequences cannot be applied to infinite sequences. That does not mean we cannot extend that reasoning at all, though it does perhaps mean we will never find a means of meaningfully doing so while still carrying on with the presumption that we can apply our reasoning to infinite sets. Euclid, for example, while perhaps making mistakes in his method of generalization, did not share this presumption with us; he admitted that there were certainly more primes than are found in any finite list of primes, but he did not say that this meant the primes were infinite. This concept of a ‘noncontinuous’ or heterogeneous infinity eludes us,- the concept Euclid was suggesting (though he perhaps did not possess sufficient language to fully articulate it) by making such a rigorous distinction, namely the following: the ‘number’ of primes is not actually a number itself (it cannot be encoded linear-positionally on a sequence of binary pairs) and is therefore not infinite, but instead a ‘metanumber’ (what would, in the terms laid out here, signify some latent global structure at a higher, unintegrated episteme): a kind of machinically produced cluster of numbers, a complex composite structure synthetically generated and ‘held together’ by some extrinsic semiotic through multiplication, and precisely not through the linear-positional arithmetic we use to construct the reals out of addition on a symbolic operator expressed by a binary pair or ‘interval’, that is, the continuum. We here return to the original thesis of this text, namely the semantic confusion and entanglement of fundamental operations like multiplication and addition; the obfuscation of that extrinsic semiotic. Only after that underlying syntax is clarified can we reencode local objects, whose distinctions have been clarified, into new global structures, unpacking their actual semantic content on some higher level of abstraction.
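Euclid's actual argument, as distinguished here from the modern ‘infinite set’ reading, is constructive: given any finite list of primes, the product of the list plus one has a prime factor lying outside the list. A minimal sketch of that construction:

```python
from math import prod

def is_prime(n):
    """Naive primality test (illustration only)."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_outside(finite_list):
    """Euclid's construction: N = (product of the listed primes) + 1
    leaves remainder 1 on division by each listed prime, so its
    smallest prime factor cannot appear in the list."""
    n = prod(finite_list) + 1
    d = 2
    while n % d:
        d += 1
    return d   # smallest prime factor of n

listed = [2, 3, 5, 7, 11, 13]
p = prime_outside(listed)
assert is_prime(p) and p not in listed
print(p)   # 59 -- since 2*3*5*7*11*13 + 1 = 30031 = 59 * 509
```

Note that the construction yields, for each finite list, one more prime; at no point does it invoke, or require, a completed infinite totality of primes.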
It is precisely due to this heterogeneous infinity, a latent semiotic related to multiplication, that we find the nature of the frequency of prime number distribution cannot be determined from within an arithmetic grounded on sets and addition.

[/b]

In essence, when we deal with infinite sequences and infinite sets, we are claiming that they contain an infinite amount of information,- an endless storehouse of data that exists in some other universe entirely disconnected from us,- yet we cannot decode any of that information from them unless we impose an arbitrary symbolic construction on them (the axiom of choice: we can begin to map the unary function at any position within the sequence to a subset of itself, reading it from left to right in the decimal expansion) that allows us to rearrange subsets of the sequence in relation to a mediant (the axiom of determinacy), reconstructing it out of binary pairs that express intervals in the real number system and enumerate the ordinals in conformation to the complexity-hierarchy expressed by the information density of the large cardinals. This means, in actuality, that the information is not contained in the sequence itself; it is being machinically produced or ‘synthesized’ with an external schematism stored, not in some other universe, but in no place other than our own brain.

What these schematisms do is quite simple: they extend the mathematical operations to the number field, in more or less convoluted, entangled ways. When we list the ‘real numbers’, for example, we are extending the geometrical concept of a ratio to an empty set and using a schematism of that extension to continuously map the operation of addition to the number field. Similarly, when we list the primes, we are mapping the multiplicative operation to the number field, and so on. Incompatibilities in the schematisms we utilize result in things like the ABC conjecture or the continuum hypothesis. The ‘infinite series’ associated with these schematisms are not really infinite, because they encode no information by themselves;- that information is synthesized by the schematic form or ‘metanumber’. Thus, to make any progress in the direction suggested here, we need to see how many of these metanumbers there are. Pi would be another example of a schematism/metanumber, as it is just a schema that extends the ratio of a circle’s circumference to its diameter indefinitely, synthesizing more digits the more times we iterate it, and thereby mapping it to the number field. We will also need to investigate alternative ways of mapping the operations without causing such entanglements and incompatibilities within our schematisms.]
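The claim that Pi is an iterated schema rather than a completed infinite object can be illustrated with Machin's formula (a choice of mine, not the author's; any convergent series would serve): each further iteration of the partial sums synthesizes more correct digits, and no finite iteration ever delivers the ‘whole’ number.

```python
from fractions import Fraction

def arctan_inv(x, terms):
    """Partial sum of arctan(1/x) = sum_k (-1)^k / ((2k+1) x^(2k+1)),
    computed in exact rational arithmetic."""
    s = Fraction(0)
    for k in range(terms):
        s += Fraction((-1) ** k, (2 * k + 1) * x ** (2 * k + 1))
    return s

def pi_schema(terms):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    More iterations of the schema synthesize more correct digits."""
    return 16 * arctan_inv(5, terms) - 4 * arctan_inv(239, terms)

for t in (1, 3, 10):
    print(t, float(pi_schema(t)))
# 1 iteration  -> approximately 3.1832...
# 3 iterations -> approximately 3.14162...
# 10 iterations agree with pi to double precision.
```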

The ABC conjecture is precisely about this, the relationship of numbers generated by addition and those by multiplication, and it remains unsolved because that relationship, which is really about the operations themselves from which the two classes of number are produced, is not understood; a state of conceptual conflation or what Mochizuki calls an entanglement of “the two underlying combinatorial dimensions of a number field, which may be thought of as corresponding to the additive and multiplicative structures of a ring or, alternatively, to the group of units and value group of a local field associated to the number field”. This is the case for all of the fundamental mathematical operations,- they are conceptually ambiguated. The local structures (viz. the natural numbers) and global structures (things like prime numbers) encoded by the mathematical operations, in other words, exist in an indeterminate, confused state awaiting disentanglement; the contained ‘semantic content’ that exists between the two levels of abstract structure remains latent at the second episteme, unextracted and yet to be reincorporated at a higher abstract level, namely that of the third episteme,- an incorporation at a higher abstract level attempted in IUT, regarding the operations of addition and multiplication, through the construction of Hodge theaters where deformations of objects (distinctions of local structures) translated from one to another model or ‘theater’ are calculated inside a log shell and removed from Qs and number fields, and then gathered into a kind of ‘inter-universal’ container that can be inserted into and removed from multiple theaters in order to measure and equalize the translational variance of objects through a process of strategic deformation of those objects. The integration of latent semantic content will not be achieved until a new model of the continuum is developed, one extended ‘axiologically’ beyond Peano arithmetic. 
Until then, we will be faced with a never-ending litany of imponderables. As Wittgenstein observed, set-theoretical mathematics has been constructed through a combination of laws, developed from symbolic logic, and axioms, freely chosen programmatically,- two things that have been utilized in parallel, with axioms serving to make up for the conceptual gaps in the laws, and the laws in their turn making up for deficiencies in the axioms, only further entangling semantic layers with each leap toward greater generalization from that convoluted foundation. In other words, the issue is one of circular argumentation: when asked to define a function, the analyst will happily do so in terms of a rule, yet when the same analyst is asked to define a rule, he will define it in terms of a function. The kinds of questions created through this entanglement (like the Riemann hypothesis) are not so much questions as they are manifestations of defects in our fundamental concepts. They are the result of an entangled semantics leading us to formulate questions that, while appearing to make sense syntactically, or at one level of abstraction, do not actually signify anything, containing no latent semantic content that can be reintegrated. They are not questions; they are malformed statements semantically entangled in concepts,- concepts that, once clarified and reified in higher global structures, will lead to the development of an entirely new system in which questions of this sort are not answered but simply disappear into the greater horizon opened up by the expansion of the space of representation available to the first episteme, that is, cease to be questions, as those structures about which they have been ventured are themselves reunified, following local differentiation and sublation, into new singularities.
[Note that Solomon Feferman similarly proposed a novel theory of mathematical definiteness grounded in a semi-intuitionistic sublogic that applies classical logic only to bounded quantifiers, using intuitionistic logic for unbounded quantifiers, such that any proposition ϕ can be said to be ‘definite’, that is, to possess a truth-value, only if the semi-intuitionistic sublogic can prove ϕ ∨ ¬ϕ,- on which view many famous problems, like the continuum hypothesis, would simply be malformed statements lacking any truth-value one way or the other. His use of two logics, one for bounded and one for unbounded quantifiers, again suggests the same semiotic process found in Grothendieck’s use of projective and injective norms and in Mochizuki’s Frobenius-like and étale-like portions of a Frobenioid, though this logical partition is not, in Feferman, later sublated as an arbitrary differentiation on local objects, or re-encoded by a codeterminate form on some higher abstract or ‘global’ structure with which the semantic content of those local objects can be ‘disentangled’ as ‘congruent representations’.

In fact all ‘problems’ in all fields of knowledge, be it mathematics or philosophy or ethics, are the result of an analogous deficiency revealed through an application of Peirce’s semiotics, that is, the very same more fundamental problem related to the existence of concepts entangled at different epistemes. In order to realize a total incorporation of latent semantic content, the distinctions of separate local structures must be clarified, these differences reunified by global structures inside a codeterminate form, and then this codeterminate form must be reencoded in two or more abstract levels between which a congruent representation can be delineated, out of which a new ‘object’ can be defined as a correspondence between multiple abstract levels, fully completing the semiogenetic circuit through the three active epistemes. Returning to mathematics, new axiologies must be introduced, one for each of the operations and foundational concepts, just as addition was grounded in Peano’s axioms, so that models of the operations can be developed in isolation. Once these isolated models are developed, local structures can be mapped from one model to another, a process that will reveal and clarify the distinctions of the local structures inasmuch as the mappings will not be complete and isomorphic; indeterminacies will be created in the attempt to translate local structures across different models.
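The idea of grounding an operation in its own isolated axiomatics can be illustrated with the familiar Peano construction itself, where a natural number is nothing but an iterated successor and addition is defined by primitive recursion on that structure alone. A minimal sketch, with the encoding of numerals as nested tuples being an illustrative choice of my own, not anything in the sources discussed:

```python
# Peano naturals: 0 is ZERO, and every other number is S(n) for some n.
ZERO = ()

def S(n):
    """Successor: wrap n one level deeper."""
    return (n,)

def add(m, n):
    """Addition by primitive recursion on the second argument:
    m + 0 = m;  m + S(n) = S(m + n)."""
    if n == ZERO:
        return m
    return S(add(m, n[0]))

def to_int(n):
    """Decode a Peano numeral back to a Python int, for inspection only."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))  # 5
```

Nothing in this model presupposes multiplication or any other operation; each would require its own recursion, which is the sense in which the operations can be modeled in isolation.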
Once the distinctions of local structures are clarified in this way they can be removed through the ad hoc introduction of an arbitrary ‘artificial differentiation’ that equalizes our results by surjection once it is sublated, to finally be reunified as an ‘anamorphic projection’ by emergent global structures inside codeterminate forms that correlate the different indeterminacies created by a series of translations that can be systematically carried out in a process of intentional disfigurement, much as visual anamorphosis clarifies a distorted image by framing it at multiple arbitrary vantage points, accumulating the differences between those vantage points, and then equalizing them. Only then can such codeterminate forms be reencoded so as to establish a congruent representation between two or more levels of abstraction, (these levels, for Grothendieck, being defined by projective and injective norms; for Mochizuki, whom we will briefly consider momentarily, they are defined by the étale-like and Frobenius-like portions of a Frobenioid as related through a Kummer isomorphism arising from cyclotomic rigidity, or more precisely, from a cyclotomic synchronization isomorphism which permits us to establish a more general relationship or ‘global structure’ between the Frobenius-like portions of independent Frobenioids once their étale-like portions are connected, a relationship whose mono-anabelian transport introduces the kind of indeterminacies noted here) these new representations signifying a novel class of abstract object.
In fact, this is the exact method taken by Mochizuki in his attempt to solve several conjectures, including ABC, through something he calls ‘inter-universal Teichmüller theory’, where the isolated models of operations suggested here, with their independent axiologies, are called ‘mathematical theaters’, and where the distortions and inequalities introduced by the movement across theaters and the translation of structures from one to another are accumulated inside a log-shell by the theta function, whose variance is measured within a Hodge theater to the ends of redeploying new global structures and reconstructing the hopefully ‘disentangled’ continuum once the distinctions of the ‘local structures’ have been fully clarified in this way and then later sublated or ‘dropped’ from the number-field. We see that there is a single process (a movement through the epistemes) occurring in all fields of thought: philosophy, mathematics, ontology, ethics, semiotics, psychoanalysis, etc. A problem in one field corresponds to a problem in another, even if the corresponding problem has not yet been discovered by those working in its field; and a solution in one, of course, corresponds to a solution in another, even if those in that field have not yet become aware of it. All the better, we can assume the solutions in one field for ourselves, and personally reconstruct them in the field we are working in.

In short, by applying my own extension of Peirce’s semiotics to the question, (ie. the quaternary logic of the three active epistemes) we see that the truths of geometry are a priori and self-evident, while the ‘truths’ of arithmetic require us to adopt synthetic argumentation (via the assumption of the coherence of infinite addition and infinite series,- the assumption that it makes sense to infinitely repeat the operation of adding 1 to a starting number to create the number line, which is not a self-evident truth) that falls back to axiomatics under examination, (namely, the axiom of choice) leading eventually to an ‘entangled semantics’ between the two different levels of abstract structure that has further entangled the fundamental mathematical operations and concepts with one another and so prevented the semiotic chain from fully completing the loop through the three epistemes needed to clarify and extend meaningful representations of objects, a failure resulting in the production of imponderable questions that have no actual signification or ‘semantic content’, lacking any truth-value whatsoever, since such questions, like the Riemann hypothesis, arise only from a confused semantics.
One might be naturally led to group theory in the search for some means of disentangling the confused semantics of the mathematical relations encoded by addition, multiplication, etc., given the fact that an element of a direct sum of infinitely many groups must have all but finitely many of its components equal to zero, (for, in keeping with the primary decomposition formula, every finitely generated abelian group is isomorphic to a direct sum of primary and infinite cyclic groups; equally, every Noetherian ring is a Lasker ring, and we can decompose any of its ideals into an intersection of finitely many primary ideals) whereas direct products are not similarly bounded, and Mochizuki has searched precisely this domain, attempting to disentangle these relations mainly through some unique extensions of Teichmüller space and Galois groups. Beyond the ABC conjecture, for whose solution IUT was tentatively conceived, the explication of latent semantic content would give us a deeper understanding of the continuum in far more general ways. All current mathematics is, in essence, simply set-theory, which is to say that it all takes place within a single ‘theater’, a theater whose continuum is based on a model of the fundamental operation of addition. Accordingly, the modern conceptualization of a transcendental number is based on Cantor’s work, a product of set-theoretical proofs. There is some structure analogous to or correspondent with a transcendental number in each of the other theaters. To understand what a transcendental number like π actually is, [Note. We hardly understand what it ‘actually’ is at present.
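The contrast drawn here between direct sums and direct products can be made concrete: an element of an infinite direct sum has only finitely many nonzero components, so it can always be represented by a finite dictionary, whereas an element of the full direct product is an arbitrary function on the index set, which no finite representation can capture. A minimal sketch, with the dictionary encoding being an illustrative assumption of mine:

```python
# An element of the direct sum ⊕_{i∈ℕ} ℤ: finitely many nonzero coordinates,
# stored as {index: value}. The direct product ∏_{i∈ℕ} ℤ would instead require
# an arbitrary function ℕ → ℤ, which cannot in general be written down finitely.
def sum_elements(x, y):
    """Componentwise addition; finite support is preserved under addition."""
    result = dict(x)
    for i, v in y.items():
        result[i] = result.get(i, 0) + v
        if result[i] == 0:
            del result[i]  # keep the representation minimal
    return result

x = {0: 2, 5: -1}   # 2 in slot 0, -1 in slot 5, zero everywhere else
y = {5: 1, 7: 3}
print(sum_elements(x, y))  # {0: 2, 7: 3}: the slot-5 entries cancel
```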
Within the single theater we use, derived axiomatically, in the wake of Gödel, (following the failure of Russell and Whitehead’s Principia, an attempt to ground mathematics in a complete symbolic logic) from Peano and the unary function on which basis we establish the apparent scale invariance, internal consistency, or ‘smoothness’ of the continuum through the operation of addition, it can only be grasped as an irrational number; this is not its true nature, but only the result of a defect in our theory, in just the same way that a singularity arises in the mathematics we use to try and decipher what takes place beneath the event-horizon of a black hole,- a singularity that most physicists agree is not actually there, signifying nothing more than a conceptual hole in our understanding.] we would need to correlate the local differences between each of these analogues and then equalize them through an artificial differentiation that permits their global reincorporation into a codeterminate form that we can reencode on multiple independent abstract levels, with congruent representations of that codeterminate form across these levels giving us our deeper, more complete understanding of the continuum and something like transcendental numbers, the primes, etc. The relative meaninglessness of our conceptualization of irrational numbers like π does not seem to cause us many engineering problems, for most applications only require a few digits, yet this issue does cause us major problems when attempting to articulate the foundations of mathematics. We can write down a formula for π, but we do not know what it means, because we have no framework for performing actual arithmetic with it; if we want to do something like add π and e, all we can do is say it is ‘π plus e’ and write a new, even more ambiguous formula for it. These irrational numbers simply do not ‘fit’ in the mathematical theater we are using.
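That ‘π plus e’ can only be named, never resolved into a closed form, can be mimicked with a toy symbolic layer: adding two symbolic constants merely builds a new expression tree, and the only concrete operation left to us is numerical approximation. A minimal sketch, with the class and names being hypothetical illustrations of mine rather than any existing library:

```python
import math

class Sym:
    """A named constant carrying a float approximation; arithmetic only
    builds new expression trees, it never simplifies them."""
    def __init__(self, name, approx):
        self.name, self.approx = name, approx
    def __add__(self, other):
        # All we can say is 'pi plus e': a new, even more ambiguous formula.
        return Sym(f"({self.name} + {other.name})", self.approx + other.approx)
    def __repr__(self):
        return self.name

PI = Sym("pi", math.pi)
E = Sym("e", math.e)
s = PI + E
print(s)                   # (pi + e): a name, not a value
print(round(s.approx, 6))  # 5.859874, a float approximation only
```

Whether π + e is even irrational remains an open question, which is the sense in which the sum has no standing in the theater beyond its formula.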
However, they might fit better, or perfectly, in another theater. Most importantly, the local distinctions of such numbers, clarified between their appearance in multiple theaters, would bring us closer to the apprehension of global structure and give us the most insight into what an ‘irrational number’ truly is. A transcendental or irrational number might correspond, following this extraction of latent semantic content within the continuum, to what was earlier called a ‘metanumber’, that is, an entirely different class of object,- not a ‘number’ at all, but some object corresponding to a different axiology entirely, just as ‘numbers’ are objects corresponding to the primitive axiology of Peano’s arithmetic: 0, 0+1=1, 1+1=2, 2+1=3, and so on. Equally interesting, in a mathematics grounded on the unary function and addition, (set theory) Tarski’s exponential function problem (which asks whether the first-order theory of the real numbers together with the exponential function is decidable; as noted earlier, the reals are just as much grounded on addition as are the naturals) remains unresolved, along with the question as to whether or not the real version of Schanuel’s conjecture is true, (whose proof would confirm the decidability of that problem) for the same reason that no answer to ABC can be found within set-theoretical mathematics, namely the semantics of addition and multiplication are entangled by an improper syntax on local structures (the naturals and reals) that must be decoded into higher abstract levels between which congruent representations of objects related to global structures (exponents, primes, etc.) can be determined.
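For reference, the conjecture of Schanuel invoked here can be stated as follows; Macintyre and Wilkie showed that its real version implies the decidability of Tarski’s exponential function problem:

```latex
% Schanuel's conjecture: algebraic independence for exponentials.
\text{If } z_1, \dots, z_n \in \mathbb{C} \text{ are linearly independent over } \mathbb{Q}, \text{ then }
\operatorname{trdeg}_{\mathbb{Q}} \mathbb{Q}\bigl(z_1, \dots, z_n, e^{z_1}, \dots, e^{z_n}\bigr) \ge n.
```

Taking n = 2 with z₁ = 1 and z₂ = πi already yields the algebraic independence of e and π, which is itself unproven, a measure of how far the entanglement of addition and exponentiation described above remains undecoded.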