Euclid's fifth common notion and axiomatics

This is a footnote to a larger essay I excerpted in a thread on the semiogenetic loop, but I have extensively added to it, to the point that it is basically its own essay now. Big ideas in here.

Even if you’re not too invested in math, or good at math, I suggest you take it slow and plow through this entire thing anyway, for I assure you: all knowledge is reconciled to all other knowledge, and developments in one field, say math, can be directly translated into any other field, say philosophy, or psychoanalysis, or ethics, or … For this reason, all subjects should be equally interesting.

What I bolded are simply conclusions; it is all equally critical information.

You need to read the whole thing to understand these two sections, but to what is said here I want to add some explicit notation:

So instead of talking about arbitrary infinite series like we do in set theory, we’re only going to consider an infinite series valid, or perhaps a better word would be meaningful (it contains information we can extract from it), when it can be matched to a schematism that produces it digit by digit. It is not so much infinite as that the schematism is mapping an operation it represents to the number field continuously. In fact, that is precisely the information being extracted or ‘schematized’ from it: mappings of operations to the number field. Immediately, because of this, most sets people discuss become meaningless, because they are arbitrary sets of numbers produced by the axiom of choice and do not require generating algorithms to produce them.

I listed a few such schematisms in those two sections, extensions of ratios and multiplications, PI, etc. But those are pretty complex.

So let’s consider a very simple sequence of just endless 1s, which maps a self-identity to the number field. What schematism would correspond to the act of producing an endless sequence of 1s in the number field? Let the symbol [K] be the sequence itself, let m be an index for the sequence (any location in it, any possible digit in that sequence), and we can say:

[K]m = K.

That means the following: for any m (any position in the infinite list of Ks, the index), the value is K. So if K is 1 and m is the index location numbered 10 trillion, that is, if it is the 10-trillionth number in the list, then that number K still equals 1; K is still 1 when m is any possible number.

So we label the sequence itself as [K]. For K↔Nat (if K is any natural number), the constant sequence is c = [K]. For any natural number m (m representing an index or location in the sequence), the m-th element of the sequence is again what we said: [K]m = K.

So, another example: say K is 5 and m is 11. Then [K]m=K works out to [5]11=5; the 11th number in the sequence, when K is 5, is 5. Obviously you can do the same thing with literally any number; that’s the point. This is a definition for a schematism of 5 to itself extended over the number field, in other words, one of many logical atomic facts I must notate. This formula is basically saying that there is no change in information density or cardinality regardless of how many times I map the number 5 to itself on the field; it is still the number 5, so that any interval between it and some other number preserves a congruent one-to-one mapping to a rational number, contra the axiomatics of set theory, where diagonalization reveals the incongruency of sets with subsets of themselves, one necessarily having greater cardinality.

It is important to note that the schematism mapping K to K is not the same thing as the actual number value of K in the formula. The schematism and the number aren’t the same: in the example above, [K] is not itself 5 when K=5. The schematism is expressing the atomic fact I just explained about identity and, most importantly, the fact of congruent mapping of intervals to a rational number, with the schematism itself representing precisely that map.
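To make this concrete, here is a minimal sketch in Python (my own illustration; the function names are assumptions of mine, not part of the notation). The schematism is the rule itself; the number is merely what the rule returns at every index:

```python
def constant_schematism(K):
    """The schematism [K]: queried at any index m, it returns K.
    The rule (this function) is the schematism; the value K is just
    the number it maps to itself at every position."""
    def at(m):
        return K
    return at

five = constant_schematism(5)
assert five(11) == 5        # [5]11 = 5: the 11th entry is 5
assert five(10**13) == 5    # the 10-trillionth entry is still 5
assert five != 5            # the schematism (a rule) is not the number it returns
```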

Now what needs to be done is to go all the way up the scale of increasingly complex schematisms, like the ones I described here, and fully notate symbolic logic for them, as I just did with the most elementary, simple one, the one that maps the concept of identity. In this way, we can use them to construct a whole new mathematical logic! A new system can be produced that is able to disentangle a lot of these destructive, ‘unconscious’, semantically entangled schematisms I have been pointing out are scattered all throughout modern maths. And I will be doing that as a side project. Just as an example, next I would formalize a schematism for the operation of addition, mapping 1 to another number additively, and I would do that for all the basic arithmetical operations. Gradually we get more and more complex. It is rather natural; for example, we can add two schematisms mapping two numbers to themselves, say

[K]m=K when K is 5 and m is any natural number, plus the same schematism, notated [L], when L is 8 and m is any natural number:
[5]m + [8]m = [K] + [L] = [K + L], so [K + L]m = 13 for every m.
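Continuing the illustrative Python sketch from above (names still mine), the sum of two schematisms is the pointwise rule, so [5] + [8] returns 13 at every index:

```python
def constant_schematism(K):
    return lambda m: K                      # [K]m = K, as sketched earlier

def add_schematisms(f, g):
    """Pointwise sum of two schematisms: ([K] + [L])m = [K]m + [L]m."""
    return lambda m: f(m) + g(m)

summed = add_schematisms(constant_schematism(5), constant_schematism(8))
assert summed(1) == 13 and summed(999) == 13   # [5 + 8]m = 13 at every index
```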

But let’s see something really cool. We’re going to introduce variables to it. So a variable I will notate with the symbol ^. The symbol ^ doesn’t represent a number; it represents any possible variable.

For any M↔Nat (let m be any natural number), then [^]m=m. (Let ^ be any object, value, anything; it is an unassigned variable.) So what happens if I decide to put the number 1 where the m is? The formula becomes [^]1=1, so the value is 1. If I put a 2 where the m is, it becomes [^]2=2, so the value is 2. If I put a 3 where the m is, it becomes [^]3=3, so it’s 3. And so on. What does this mean? It is saying you can map any object ^ to any number m.

The formula [K]m=K mapped any number to itself, right? Well this formula I just wrote is mapping any variable ^ to any natural number M↔Nat. The formula is symbolic logic for the idea that you can map any object to any number; there can be 10 dogs or 20 dogs, there can be 10 ^ or 20 ^, and so on. If instead of an unassigned variable ^ we use m in the same spot in the formula to denote any number value, then this becomes a schematism that produces the indefinite sequence 1,2,3,4,5, … and so on, generating the natural numbers, and then recursively the intervals between them, the rational numbers and so on.
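In the same illustrative Python (again my own rendering), the schematism [^]m = m makes the index itself the value, and so enumerates the naturals position by position:

```python
def index_schematism():
    """The schematism [^]m = m: at index m the value is m itself.
    Read one way it maps any object ^ to any count m; read the other
    way it generates 1, 2, 3, ... with no completed infinite set."""
    def at(m):
        return m
    return at

nat = index_schematism()
assert [nat(m) for m in range(1, 6)] == [1, 2, 3, 4, 5]
```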

So we have to keep building up this lexicon of schematisms until we have an entire mathematics that can move from basic arithmetic all the way to addressing the kind of set-theoretical problems I detailed in the original post here.

Going through these first few schematisms, we can already say some interesting things: we see why ‘dividing by zero’ is nonsensical. Check it out. If you could divide a number by zero, that would mean, in this system, that you can map the schematism of a number to itself in such a way that it can be made to divide into itself by an incongruent ratio; it would mean that the number would be unequal to itself; it would mean you can schematize a number, say 1, such that it can go into itself less or more than once, since the ratio being mapped relative to zero could not be determined even with infinite operations, as zero has no value. It would mean you could produce a schematism that violates the first atomic logical identity, that first schematism formula I noted, which preserves congruent maps between ratios and rational numbers and thus affirms the a priori principle of non-contradiction, the principle of A being A. Thus, if you could divide 1 by zero, 1 would not be 1. But 1 obviously is 1; ergo you cannot divide by zero. Compare this to the set-theoretical reason you can’t divide by zero (where division is only the inverse of multiplication), namely a purely axiomatic one, given in the fact that there is no multiplicative inverse of zero. My system proves it through a priori, purely logical reasons, which is obviously a stronger basis than arbitrary axioms.

Now we can put two ideas together, that of a free variable signified by ^ and that of an arbitrary number value signified by m. This combination we will call a polynumber schematism, the schematism of the idea of something called a polynumber, which works like a list of elements.

[p(^)]m=p(m). So the P stands for polynumber, ^ is any object, and m is any numerical value.

Say ^ is dogs. This formula means that the schematism of the polynumber of dogs, when the number of dogs (m) is 4, is simply the polynumber of those dogs, p(4); it is just an extension of the logical identity concept to the generalization of a polynumber, an extension of the first logical atom in the first schematism I notated. The polynumber schematism of ^, if ^ is dogs and m is 4, is a polynumber associated with those dogs: a container p(^) where 4 dogs are indexed, and again those indexes can be assigned to any 4 things, any 4 (^).

Obviously it is very useful to be able to have a container to store multiple numbers in when you’re doing arithmetic. That’s what this schematism does.
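A toy rendering of the polynumber container in the same illustrative Python (the representation as a dictionary is my assumption; the text only requires that the m things be indexed):

```python
def polynumber(obj, m):
    """p(^) with m slots: a container where m instances of any
    object ^ are indexed 1..m."""
    return {i: obj for i in range(1, m + 1)}

dogs = polynumber("dog", 4)                    # p(dog) when m = 4
assert list(dogs) == [1, 2, 3, 4]              # four indexes...
assert all(v == "dog" for v in dogs.values())  # ...each assigned to a dog
```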

If you read the essay, you will understand that things like multiplication, exponentiation, and other mathematical operations are recursively defined in axiomatic set theory, relying on a notion of induction built into the unary function that allows us to perform addition. Without that induction from the arbitrary axiom that the infinite iteration of adding a number to itself makes sense (1+1=2, 1+1+1=3, and so on), set-theoretical math can’t even define what any given number Z is; it can’t articulate an identity; number itself becomes meaningless. By using schematisms we are trying to define the operations without inductive recursion from an arbitrary axiom, not just because the arbitrary axiom is problematic, but also because of the consequences of recursively defining the operations through the one operation of addition: it entangles the mathematical structures encoded by the different operations and so distorts or conceals their semantic content, leading to things like the ABC conjecture or the continuum hypothesis, etc. The technical induction of the addition operation states that every number has a successor, and this is what makes it a number; 1 is a number because it is followed by 2. You see that the idea of an infinite series is built into the function itself.

But in this new system of schematisms, a number is not defined recursively like that. A number is any number M that can be schematized to a map of itself to itself on the number-field; that is, a schematism exists for that number M which satisfies the definition I established with the formula [K]m=K, such that the mapping preserves an identical ratio between that number and any other number, say N, when the schematism of either is computed to whatever digit. (Thus we are dealing in a finite mathematical object, not an infinite, indefinite set.) We cannot pick an arbitrary mediant inside of an undefined interval between M and N, and thus the ratio expressed between them has to also be reducible to the map of a definite geometrical figure.

If no such schematism exists, if we cannot produce a schematism of that form for it, then it is not a number. For example, we cannot produce a schematism of that form for the number of elements in an infinite set; thus that ‘number’ isn’t a number, it is a semantically confused non-concept, gibberish. If we can’t write a schematism for a number, it isn’t a number: that is a very important atomic fact. A schematism maps an operation represented by a sequence of numbers to the number-field; thus the endless string 11111, etc. maps the number 1 to itself continuously, giving us the fact of the logical self-identity of that number, as expressed by [K]m=K. But if we have an infinite sequence of numbers whose number of elements we cannot schematize to the number-field in a similar way, then there is no information actually encoded by the sequence and it doesn’t mean anything; that is, it doesn’t represent any operation or sequence of operations we can map to the number field that preserves a congruent ratio between that number and another over n digits.

So you see that this concept of a schematism is a generalization of the concept of a ratio, but it gets around the problems introduced by the generalization of the ratio to an empty set or arithmos that Euclid made. I suggested a different concept should be used to replace the arithmos in the original post’s essay; schematism is that concept.

So do you see that an entire new mathematical universe becomes possible, if we simply work up from these atomic schematisms toward more complex ones? There are completely new mathematical structures (the complex schematisms I have not thought of yet) just waiting to be discovered. These new structures might very well reveal important insights into many fundamental unanswered problems, even into problems that are unanswerable within set-theoretical math. They could also reveal important new insights into the nature of the physical world. Many of them could also just be interesting or fun to mess around with. Most important of all, they will help reveal something about the deeper structure of the rationals I was talking about, this ‘heterogeneous infinity’ that resists the attempt made to circumscribe it as per set-theory, which I wrote a decent amount on in the essay. It is ‘heterogeneous’ because it can’t be hierarchialized and broken into binary pairs, because the information content of any interval is the same as that in any other interval. There’s as much data between 0 and 1 as between 1 and 2.

In order to schematize numbers, a mediant has to be synthesized from the interval through some operation that maps it to a definite position in the number field, and the interesting thing is this (and this is another sense in which it is heterogeneous): there is not an infinite number of numbers we can write down, meaning there is not an infinite number of ways to specify a mediant. In fact, most numbers between 1 and a tower of ten 10s are impossible to write down on a hard drive the size of the universe because of the information density they would need to have; but a few of them can be, and they pop up randomly, heterogeneously distributed throughout the rational numbers. Like this: z = 10^10^10^10^10^10^10^10^10^10 + 23. One less than that number could not be written down even if I could write a digit on every quark in the observable universe. One more than it could not be written down on all the quarks in the universe either. Nor can 2 less or 2 more than it, 3 less or 3 more, etc. Almost every number that exists, proportionally, between 1 and that number can’t be written down. But that one can be written down; I just did it. It wasn’t even that hard, took a few seconds. That is the kind of unhierarchializable, undefined complexity, the heterogeneous infinity I am talking about: that is the kind of infinity involved in the rational numbers. It jumps from massive information density, to the point you can’t even write a number down, to suddenly a number that can be easily written down, and then back to unfathomable information density, with no rhyme or reason or pattern to it.

For one example of a new object appearing for study, let’s do something interesting with this definition:

[p(^)]m=p(m). So the P stands for polynumber, ^ is any object, and m is any numerical value.

Let’s define a new number from a polynumber using prime factor decomposition. For any polynumber P, take P’s prime factors added together, then add that sum to P; the result gets multiplied by the polynumber P itself. To do it with the polynumber 2:

[P + (sum of P’s prime factors)] P =
[2 + (2)] 2 = 8

So if the P or polynumber is 3, we list the prime factor decomposition of 3 and add the factors together (counting 1 as a trivial factor of a prime, as these examples do), then add that back to 3, so it becomes [3 + (3+1=4)]. Then we multiply that result by the polynumber P, so in this case we multiply by 3. So we are adding the polynumber 3 to 4 so far, that is 7; and then we multiply that result by 3, so 21.

[3 + (3+1=4)] 3 = 21

Do it with polynumber 4. The prime factor decomposition of 4 is 2 and 2; add those and you get 4. So we are adding P, 4, to that result, which is also 4, giving 8, and then we multiply that 8 by P again, which is 4, finally giving 32; 8 times 4 is 32.

[4 + (2+2=4)] 4 = 32

So the prime factors of 5 are 1 and 5. Add those together: six. P is 5, so 5 plus 6 is 11. Then we multiply by 5; that is 55.
[5 + (1+5=6)] 5 = 55

With the number 6. Its prime factor decomposition is 2 and 3, added together is five. Six plus 5 is 11, and 11 multiplied by 6 is 66.
[6 + (2+3=5)] 6 = 66

With 7.

[7 + (1+7=8)] 7 = 105.

So far the results are 21, 32, 55, 66, 105.

Do it with one more number, 8.

[8 + (2+2+2=6)] 8 = 112

With 9

[9 + (3+3=6) ] 9 = 135

With 10

[10 + (2+5=7)] 10 = 170

So far the results are 8, 21, 32, 55, 66, 105, 112, 135, 170.

The difference between each result and the next is
13, 11, 23, 11, 39, 7, 23, 35.

Obviously we can go all the way back and do it for the number 1, which very easily gives a result of 2.

[1 + (1) ] 1 = 2

So all in all we have 2, 8, 21, 32, 55, 66, 105, 112, 135, 170.

The difference from each number to the next is 6, 13, 11, 23, 11, 39, 7, 23, 35.

So we have created a schematism that maps certain operations (prime decomposition, addition, multiplication) to the number field in such a way that it produces that list of numbers. Now let’s make up a new name to call these numbers. How about… primordions! Following that first list going from 2 to 170, we can say that the primordion of 1 is 2. The primordion of 2 is 8. The primordion of 3 is 21. The primordion of 4 is 32. Etc. That other list, the differences between each primordion, we can call… primordion regressions. You can keep going forever to get new primordions, but there isn’t an infinite number of primordions stored in any arbitrary set of primordions. The primordions are synthesized from that schematism by mapping operations to the number field in a certain way. In the formula specifying the structure of a polynumber’s schematism, we can simply call any primordion the P in it. The primordion of 743 would be what happens when we replace the P in that formula with 743; doing that creates a list I symbolized by (^) that indexes the results of the formula into a class, these being the primordions.
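Here is a minimal sketch of this procedure in Python (my own rendering; the helper names are mine, and the factor convention follows the worked examples above, where 1 counts as a trivial factor of 1 and of the odd primes):

```python
def prime_factors(n):
    """Prime factors of n with multiplicity, per the worked examples:
    1 is counted as a factor of 1 and of the odd primes (3 -> 1 and 3,
    5 -> 1 and 5), while 2 decomposes as just 2."""
    fs, m, d = [], n, 2
    while d * d <= m:
        while m % d == 0:
            fs.append(d)
            m //= d
        d += 1
    if m > 1:
        fs.append(m)
    if n == 1:
        return [1]
    if fs == [n] and n != 2:      # an odd prime
        return [1, n]
    return fs

def primordion(P):
    """(P + sum of P's prime factors) * P, the rule worked through above."""
    return (P + sum(prime_factors(P))) * P

vals = [primordion(P) for P in range(1, 11)]
assert vals == [2, 8, 21, 32, 55, 66, 105, 112, 135, 170]
regressions = [b - a for a, b in zip(vals, vals[1:])]  # primordion regressions
assert regressions == [6, 13, 11, 23, 11, 39, 7, 23, 35]
```

The subtractive variant a few paragraphs below works the same way, with the factors folded by subtraction instead of being summed.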

So that is just an example of a new mathematical object someone can study: primordions and their primordion-regressions. I looked in the OEIS and the sequence of primordions is not in the sequence dictionary. The study of these primordions, figuring out why they are what they are and how they work, will yield potentially novel insights into the deep-structure of the rational numbers, as will the study of any number of new mathematical objects we can define using these schematisms. To begin studying them, we can just look at the sequences and see if there are patterns, like with the primordion regressions: why does it keep alternating from smaller number to bigger number? Why are they all odd numbers except the first? Etc. The regressions move in this way: +7, -2, +12, -12, +28, -32, +16, +12, …

Instead of adding the factor decompositions together, you could divide or subtract them, etc., or do several operations together to schematize more and more complicated objects. That is of course the goal. When the factors are added or subtracted we can call it a primordion; when they are multiplied or divided we can call it a primordial.

If, instead of adding the factors together, you subtracted each from the other, and then subtracted the result from the primordion, then the primordions of 3, 4, 5, 6, 7, 8 and 9 would be:

[(3) - (3-1=2) ] 3 =

3-2; 1. 1 times 3, 3.

[4 - (2-2=0)] 4 =

4-0; 4. 4 times 4, 16.

[5 - (1-5=-4)] 5 =

5 minus -4; 9. 9 times 5= 45

[6 - (2-3=-1)] 6 =

6 minus -1; 7. 7 times 6, 42.

[7- (1-7=-6) ] 7=

7 minus -6; 13. 13 times 7, 91.

[8 - (2-2-2=-2) ] 8=

8 minus -2, 10. 10 times 8, 80.

[9 - (3-3=0) ] 9 =

9 minus 0, 9. 9 times 9, 81.

Altogether: 3, 16, 45, 42, 91, 80, 81. So we can say the negative primordion of 3 is 3, the negative primordion of 4 is 16, the negative primordion of 5 is 45, etc.

So you could compare the positive primordions of 3 through 7,

21, 32, 55, 66, 105,

with the negative primordions of 3 through 7:

3, 16, 45, 42, 91

The “number” PI is not a number in this system; it’s another schematism. It is the same thing as the schematism producing primordions, just with different operations encoded in it than those in the schematism for primordions, operations which it is continuously mapping to the number field in order to produce the sequence of digits that it, that PI, has.

Another thing to study is the relationship that schematisms have to one another. What kinds of structures can we classify them into? What class of schematism does the schematism PI belong to, and what else is in that class? And if there is such a well-organized class of related schematisms, how then might one class relate to another whole class? Questions like this.

It is possible for different schematisms to output the same list of digits in this system, so one schematism’s output can potentially be re-encoded by another, simpler schematism. Now we can use information density as a way to classify schematisms and start to understand how they relate to each other. One question we could ask: is it the case that no schematism of a certain complexity level can produce a number sequence whose digits can be re-encoded as the output of another schematism at a lower complexity level? Then there would be a maximally complex schematism, because the sequence it maps cannot be re-encoded by any schematism outside its own complexity class. That could explain the mysterious process of ‘renormalization’ in QED.
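A trivial instance of such re-encoding, in the same illustrative Python: the long-division rule for 1/9 and the constant schematism [1]m=1 are two different rules emitting one and the same digit stream.

```python
from itertools import islice

def digits_of_fraction(num, den):
    """Long-division rule: emit the decimal digits of num/den one at
    a time (assumes 0 < num < den)."""
    r = num
    while True:
        r *= 10
        yield r // den
        r %= den

by_division = list(islice(digits_of_fraction(1, 9), 20))
by_identity = [1] * 20              # the constant schematism [1]m = 1
assert by_division == by_identity   # two different rules, one digit stream
```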



_
Without having read all of your post yet, I wanted to mention that “infinite” is a misnomer, since an “infinite series” isn’t actually infinite at all; it’s just a continual reproduction of errors that are never resolvable given the initial terms. 1/2 is resolvable without error (remainder) because 2 is defined as twice 1 in a base system where “twice” divides the base itself. 1/3 is not resolvable without error because 3 is defined as thrice 1, but “thrice”, three times as much, doesn’t fit perfectly (without error) into a base-10 number system. You get 1 left over after you do three of the 3’s (3x3=9; +1 would be 10). The leftover 1 then needs to be accounted for with respect to the 3, so you get a repeating 1/3, 0.33333… over and over. But this isn’t infinite; all it means is that however many times you try to resolve it you’re always going to get the same error.

The +1 is the error that doesn’t fit into the original 1/3, unless you change the base system to one where the base is a multiple of 3. In that case there would be no 0.3333333… repeating “to infinity”; it would just be like 1/2 is in the base-10 system, as 1/2 would be in any base system whose size happens to be a multiple of 2.

So it is all about ratios. A number is just a ratio of itself as a quantity to the overall size of the base system we happen to be using at that moment. Generally we use base 10: 1-9 are the numbers and 0 is the placeholder term, so 10 digits. “10” is a 1 but one order of magnitude up, and 10 is defined, due to the base-10 system itself, as being “five 2’s” for example, so your 1/2 would be 0.5. Or 1/5 would be 0.2.

Ratios of quantities is all math is. There are no “infinities”. Pi is not representing an infinite sequence, neither is 1/3 or anything else. Change the base number system and these “infinities” resolve, without changing the actual number itself or what it means. Even in other base systems, 1, 2 and 3 still mean the same thing. They are simply representing quantities, nothing more. The ratios between 1, 2 and 3 still hold the same in any base system, but depending on that base system this can either be expressed with or without errors (remainders).
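To make the remainder “error” visible, here is a small sketch of long division in an arbitrary base (my own illustration in Python): the repeating remainder is the unresolvable error described above, and changing the base to a multiple of 3 makes it vanish.

```python
def expand(num, den, base, ndigits):
    """Digits and step-by-step remainders of num/den in a given base."""
    digits, rems, r = [], [], num
    for _ in range(ndigits):
        r *= base
        digits.append(r // den)
        r %= den
        rems.append(r)    # a repeating remainder = the same 'error' forever
    return digits, rems

assert expand(1, 3, 10, 5) == ([3, 3, 3, 3, 3], [1, 1, 1, 1, 1])  # base 10: the error 1 recurs
assert expand(1, 3, 12, 5) == ([4, 0, 0, 0, 0], [0, 0, 0, 0, 0])  # base 12: 1/3 is exactly 0.4
assert expand(1, 2, 10, 3) == ([5, 0, 0], [0, 0, 0])              # 1/2 = 0.5, no error
```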

So I don’t think I can agree with what you said about infinite sequences and infinite sets containing infinite amounts of information. I don’t think there is information in there. Unless a perpetually erroneous “me trying to smash a square peg into a round hole over and over”, recording the results each time without the square peg ever going into the circle, would somehow count as “information”. I guess we would be learning, each step of the way, what the error-remainder is at that particular point in the sequence of failed attempts… maybe that’s somehow interesting, idk.

All this “paradox” means is that set theory is stupid.

If I were to come up with a theory in logic or math or philosophy and you could show that my theory contains blatant contradictions, self-contradictions, that simply proves that my theory is flawed somehow.

The flaw in set theory, as far as I can tell, has to do with the false assumption regarding the supposed meaning of infinity, as I was mentioning in my last post. Infinities in math aren’t real and aren’t representing anything real, they are the result of errors attempting to be resolved but unable to be resolved. Each little 3 in 0.333333… represents the fact that the quantity “1/3” is trying to resolve itself three times into the base 10, but then the next 3 indicates how this was a failure. We don’t need to work through every single 3 in sequence over and over again without end in order to conceptually understand that no matter how many times we try it there will always be another 3 in the sequence. No matter how many times you try to mash three 3’s into a 10 you will always end up with a remainder of 1. 1 is 1/3 of 3, hence the repeating 3 that represents the error/remainder.

The paradox mentioned is committing a fallacy by changing the meaning of its own terms within its own argument. Sure, if I am allowed to change the meaning of the number 3 to actually mean 2, then I can make 1/3 resolve into 0.5. How amazing, wow! A paradox! Yeah, no. I was just stupid enough to change the meaning of my own terms within the same span of my own argument from start to finish. Dumb stuff like Banach-Tarski is simply doing this same thing, but not even being honest enough to admit that’s what is happening. They take advantage of the supposed “uncountability” as “infinities” of “subsets” and then substitute other ones in there which are actually different but can be falsely made to seem equivalent because of the silliness of the supposed infinities somehow meaning something. Pro tip: if you have two things you think are the same, but one leads to a different result than the other in the same operation, they aren’t the same.

_
That is in essence what I said; however, an alternative to set-theory has to be found. The reason set-theory is so unchallenged and prominent is that, even though it logically doesn’t make sense, it is useful to mathematicians. The Ptolemaic model of epicycles was completely and utterly incorrect from its foundation, but people still used it for centuries because, despite being wrong, it did actually work to help people navigate, especially in comparison to trying to navigate by rolling dice. Theories can sometimes still be useful despite being incorrect, and set theory is the same. I devoted a good bit of text to attacking the logical problems of set-theory in the essay; however, no amount of attacking it will dissuade mathematicians from using it; it is too useful to them. But I believe that the more developed set-theory becomes, the harder it will be to extricate ourselves from it, which we are eventually going to have to do in order to answer some of the deeper unanswered mathematical questions out there. And so the other half of my effort, moving from attack, is constructive: to produce the basis of a new system with this idea of a schematism.

You said:

“Ratios of quantities is all math is. There are no “infinities”. Pi is not representing an infinite sequence, neither is 1/3 or anything else.”

That is in essence what I said in the larger essay and the examples post, so I don’t have much to argue with. Just that set-theory comes from Euclid’s too-indefinite generalization of the concept of a ratio to the concept of arithmos, which a schematism is meant to replace: a different way of generalizing the idea of a ratio to something more abstract that doesn’t have the same paradoxes as an empty set, which I explained more of in these comments:

So instead of considering Pi to be a magical transcendental number with an infinite set of digits, we consider PI to be a schematism. The digits in it are encoding a sequence of definite, finite operations and mapping them to the number-field; a series of finite operations reducible to definite geometrical figures. We can write those operations down very easily in the formal structures I gave for the basic schematisms. (If we use the axiom of choice to come up with an arbitrary set and just SAY it has an infinite number of digits, and we provide no underlying algorithm that produces them digit by digit, that means we cannot write a schematism that produces those digits. That means the infinite set doesn’t actually ENCODE any information that can be extracted from it. Which is the same thing as saying that the infinite set DOESN’T EXIST.) Just like the very simple [K]m=K self-identity schematism, that is what PI is: a schematism that is mapping a certain operation to the number field, the same way [K]m=K continuously maps any number to itself on the number field, proving its logical self-identity. With this small change, all the bullshit seems to drop off and everything becomes very clear. But I must now work from the ground up, redefining an entire system of logic through schematisms. Once that is done, this new mathematics will be able to answer questions about the deep-structure of the rational numbers that something like set-theory cannot. Things that are imponderable, confusing, or unknown in set-theory could be easily answerable in the new mathematics.
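Digit-by-digit procedures for π’s decimal expansion do in fact exist in the standard literature; one well-known example is Gibbons’ unbounded spigot algorithm, sketched here in Python (published mathematics offered as an illustration, not the schematism notation itself):

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot: yields the decimal digits of pi one
    at a time, each obtained by finitely many integer operations."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l), k + 1, l + 2)

assert list(islice(pi_digits(), 6)) == [3, 1, 4, 1, 5, 9]
```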

Basically, in set theory, you can talk about infinite sets that aren’t produced by a finite operation or sequence of operations, whose number of elements cannot be matched to any schematism. In the terms I laid out here, that means the number of elements in the set simply isn’t a number, because it doesn’t encode information. As it encodes no information, no information can be decoded or ‘schematized’ from it, so it literally means nothing to talk about it, because you’re talking about something that doesn’t make any sense and can’t even satisfy the most basic formal schematism of self-identity that proves A=A mathematically, that proves a number equals itself. How could a theory so logically bankrupt ever get so far, set theory I mean? Well again, it goes back to its utility. In the magical world of set theory, where you’re allowed to talk about things that don’t exist and up can be down, mathematicians can make their wildest fantasies come true! Why would they be quick to give that up? And don’t think it has something to do with technological utility. Our computers use floating-point math, not set-theory. Nothing your computer does requires set-theory. Floating-point math is completely finite, obviously, or every CPU would turn into a black hole.

Also, a small clarification on something. At some points I relate the axiom of choice and the axiom of determinacy.

When I equate them like that, I am saying that the Zermelo-F axioms, the basis of modern math, without the axiom of choice but with the axiom of determinacy, are equally as consistent as the Zermelo-F axioms with the axiom of choice, IF the existence of infinitely many large cardinals is accepted, with these cardinals representing the kind of information-density hierarchy I mentioned.

Further building on the schematism concepts I was going over in my first reply.

An endless string of 1s can be schematized to [K]m=K when K is 1 and M is any number. This schematism expresses the atomic fact that 1=1, because the schematism can map 1 to itself on the number-field indefinitely and no matter how many times the encoded operation is repeated, it does not ever map the 1 to any other number but 1. Logically, it implies (it means) that 1 has an identity, and that its identity is 1. If it didn’t have that identity, the schematism could map it to something other than 1. But definitionally it can’t, ergo 1 is 1.

This concept of indefinitely doing something, as in the above case where no change takes place no matter how many times a 1 is pointed to itself, we can give a name. We can call it infinity. But it is not the infinity of set-theory. The infinity of set-theory is the infinity of an arbitrary set that requires no generating algorithm, so no information is encoded by the infinite set. But there is information encoded in the schematism [K]m=K, namely a certain mathematical operation where the number 1 (any K) is mapped to itself, returning the same value regardless of how many times you do it. So the infinity of this schematism is completely different. It is well-defined, has a specific meaning, it can be written down, and it can be encoded in this new special structure called a ‘schematism’. The schematism is what makes this infinity concept meaningful; without it, infinity is a vacuous typographic symbol. But now that we have meaningfully defined the idea with the formal structure of a schematism, we can use it to indicate indefinite extension. We can notate it as }. We can integrate it logically by saying:

N (any number) plus infinity = infinity.
1 + } = }

Infinity plus infinity equals infinity.
} + }= }

Infinity times infinity equals infinity.
} X } = }

n plus infinity equals infinity plus n, which is infinity.
n + } = } + n = }

n times infinity equals infinity times n, which is infinity.
n X } = } X n = }

Essentially, any operation involving infinity outputs infinity.
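A sketch of that absorbing behavior (my own illustration, with } rendered as a Python object):

```python
class Infinity:
    """The schematism-infinity }: absorbing under + and x, per the laws above."""
    def __add__(self, other):  return self
    def __radd__(self, other): return self
    def __mul__(self, other):  return self
    def __rmul__(self, other): return self
    def __repr__(self):        return "}"

INF = Infinity()
assert 1 + INF is INF                       # n + } = }
assert INF + INF is INF                     # } + } = }
assert INF * INF is INF                     # } x } = }
assert 7 * INF is INF and INF * 7 is INF    # n x } = } x n = }
```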

Earlier I said that dividing by zero would mean the number being divided by zero violates the schematism that says a number can map to itself indefinitely on the field, [K]m=K, which states that no matter how many times you make a number point to itself on the number field, it never points to a different number. But we can use this new concept of infinity in a different way. We can reformulate infinity as the inverse of the logical atom stating A=A. If we can make a schematism for A=A, then we could make a schematism where A never equals A. A never equals A precisely when A is 0, because if 0 could equal something, or have any value at all, it wouldn’t be zero. Division by zero is that schematism; the ‘never’ part of it is the ‘infinity’. This might actually have some use mathematically.

So 1 divided by 0, or 1/0 as a schematism, would be saying that 1 is, in a certain context, not a valid number; 1/0 is saying that 1 can’t be present in some other structure the schematism is mapping. It’s a way of excising or cutting out the number being divided by 0 from an equation or structure: something involving this schematism can never have that value. If there were something where you wanted to eliminate the possibility of the number 1 appearing, you would use this concept of 1/0. Similarly, a schematism for 0/0 is simply a logical atom stating that there is no value to zero. The same way that A always equals A (A=A), 0 can never equal anything; in fact, that’s what makes it zero.

In other words, A doesn’t equal A precisely (and only) when A is 0.

What else would this mean? What could one use this logical schematism to do, this concept of 1/0 and 0/1 we are going over? Well, we can use it to generalize projective geometry to a theory of the rational numbers.

We do that in this way:

If you draw an x and y axis in the plane, then 1/0 would represent a point that goes to 1 on the x axis of the plane but then just stays where it is, purely horizontal, never going anywhere on the y axis. Similarly, 0/1 would stay where it is on the x axis and go somewhere on the vertical y axis. And 0/0 would be a point that goes to 0 on the x axis and also doesn’t go anywhere on the y axis; it is at the 0 position on both: a point drawn dead center in the plane, in other words, at the origin. A line drawn through a point that only exists on the horizontal x axis (1/0, 2/0, etc.) will never meet a line drawn through a point with any y coordinate, like 0/1, 0/2, 0/3. No matter how many times I rotate a line drawn through a point with a 0 in its y coordinate, like 1/0, it will never meet a line drawn through a point that has a non-zero y coordinate like 0/1. This concept of infinity as a schematism that defines a context in which a value A cannot exist, where A cannot equal A (a logical inverse of A=A, the law of identity: A doesn’t equal A precisely when A is 0, because 0 has no value, and if 0 did have a value, a value that could be mapped to itself with [K]m=K, then it wouldn’t be zero), generalizes the geometric idea of a parallel line, in other words, because a line going through the point 1/0 and one going through 0/1 are exactly that: parallel lines that never meet.

So we’re generalizing or extending geometry to arithmetic, but without using empty sets. We can rewrite all the arithmetical laws as follows (where a/b or c/d and so on is understood as a pair of numbers that can be matched to an x and y coordinate):

a/b + c/d = (ad+bc)/(bd)

a/b x c/d = (ac)/(bd)

a/b-c/d=(ad-bc)/(bd)

a/b divided by c/d = (ad)/(bc).

Check whether these obey the usual commutative, associative, and distributive laws, etc.; I’ll save you the time and tell you they do. Each pair of numbers a/b corresponds to a point in the plane via its x and y coordinates. 0/0 is just the center of that plane, the origin.

So these are extensions of the rational numbers via projective geometry, but we have a few unique ones now that are illegal in normal math, like 1/0 (1 divided by 0) and 0/1, which is our infinity: the inversion of the schematism expressing the atomic logical fact of identity, or A is A. 1/0 and 0/1 correspond to points that exist on only one of the two axes, x or y. A line drawn through a point that only has an x coordinate will never cross a line drawn through points with non-zero values for their y coordinate, meaning such lines are parallel. Just as 1 can be mapped to 1 indefinitely on the number field, since a schematism can be written that encodes that operation, 0/1 can never be mapped to 1/0, meaning lines drawn through those points will never intersect even if you rotate those lines forever. This is also why any operation involving infinity returns a value of infinity. We also have the number 0/0, this last one generalizing the dead-center origin point of an x and y plane.
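Here is a minimal sketch of these pair operations in Python (my own rendering; it simply transcribes the four laws above and spot-checks a couple of them):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pair:
    """A pair a/b read as (x, y) coordinates in the plane; 1/0, 0/1
    and 0/0 are legal here, unlike in ordinary fraction arithmetic."""
    a: int
    b: int
    def __add__(s, o):     return Pair(s.a * o.b + s.b * o.a, s.b * o.b)  # (ad+bc)/(bd)
    def __sub__(s, o):     return Pair(s.a * o.b - s.b * o.a, s.b * o.b)  # (ad-bc)/(bd)
    def __mul__(s, o):     return Pair(s.a * o.a, s.b * o.b)              # (ac)/(bd)
    def __truediv__(s, o): return Pair(s.a * o.b, s.b * o.a)              # (ad)/(bc)

x, y = Pair(1, 2), Pair(3, 4)
assert x + y == y + x and x * y == y * x      # commutative, as claimed
assert x / y == Pair(4, 6)
assert Pair(1, 0) + Pair(0, 1) == Pair(1, 0)  # the infinity-like pairs absorb
```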

**In sum:**

**[K]m=K is a schematism that encodes a certain finite operation: the operation of pointing a number K to itself however many times (m) such that it still returns the original value, K. If K is 1 and m is 593, meaning you are looking at the 593rd position in the index for this schematism, the value returned is still K, 1. The ability to encode this operation in a schematism proves that the number 1 has an identity: if it did not, then this schematism could not be written down. It mathematically proves that A=A.**

**But there is one context where A does not equal A: when A is zero. Zero has no value, so the value of zero cannot be mapped to itself in the number-field even once. A does not equal A when A is 0. Zero can’t equal zero because zero has no value, otherwise it wouldn’t be zero; in other words, it is the logical inverse of A=A. This is a schematism that gives the idea of infinity actual symbolic meaning, one we can understand as a generalization of what in geometry are called parallel lines, lines that never meet even when indefinitely extended forever. I can now encode this indefinite extension in a schematism, and this is what gives it meaning. An arbitrary infinite set has no meaning, because sets don’t require generating functions or algorithms to produce them; if a schematism cannot be written down that encodes a finite sequence of operations producing the repeating digits, then those repeating digits have no meaning as a set, because no information (a sequence of operations mapped to the number-field) can be extracted from them.**

_
Bro… get to the point.

Way to go, in making maths long-winded. =D>

In industry, it ain’t… but I get that you are compiling your life’s works, so ignore me. :icon-rolleyes:

_
I will simplify one passage as much as possible, this one:

How would I say this more simply than that? So we have a sequence of numbers, say 1, 2, 3, …, and we say that this sequence goes on forever. We call them the natural numbers. What we do is abstract a new thing called an arithmos, an empty set. We subdivide that sequence, the natural numbers, and stick the resulting subset of it inside the arithmos, which was an empty set, but which we are now filling. We’re turning it into something they call an ordinal number in set theory (specifically, the first ordinal number). How do we do that? We use the unary function, which is not a self-evident law or an a priori truth, but just a synthetic axiom we are adding to this system (synthetic, meaning we just made it up; it was never demonstrated self-evidently); it is something we’re just saying makes sense because we want it to make sense. This unary function says it makes sense to infinitely iterate an addition of a number to itself forever (by implication, every integer is the successor of another; it has a value that comes before it), and we use it to pick any arbitrary spot in the first sequence and start mapping it to any position inside of the subset, any spot inside the second sequence, as we read it from left to right in the decimal expansion.

The corresponding point mapped between those two numbers, one in the original sequence and one in the subset, is the mediant. Because of the circularity of the successor predicate, a value has to exist before the value of this mediant, the value the mediant is the successor of; this means we can continually subdivide a sequence into subsets and then add those subsets together. Why? Because of the successor predicate, again. Once we establish the mediant between a set and a subset, in the manner described, we can add the mediant as a successor to its own predicate, which it necessarily has because the successor predicate says every integer has a value that comes before it, in a circular argument. This gives us the interpolant, which is what we’re continually adding to itself in order to fill up the arithmos. The mediant added as a successor to its own predicate is the interpolant, and we are using interpolants to fill up new sets, sets bigger than the starting sequence of the natural numbers 1, 2, 3…, producing multiple higher infinities that are larger and larger.

[Note. Now, because this unary function business is not a self-evident truth, we can’t logically justify it; that made us move from failed projects like Russell’s Principia, where they tried to ground math in pure symbolic logic (an attempt called logical positivism; I think that’s as idiotic as set theory is. They tried to fully detach math from all semantic structure and make a pure mathematical syntax. It of course didn’t work.), to something called axiomatics: Zermelo-F axiomatics. The foundation of that is called the axiom of choice, where we just assert that we can arbitrarily pick these points to map to each other in a sequence and a subset of the sequence. There’s also the axiom of determinacy, which grounds a weaker version of Zermelo-F axiomatics, but the two are equivalent to one another when you add large-cardinal axioms into the picture, so the distinction doesn’t really matter. Point is, I am trying to go a third route, one that is neither logical positivism nor axiomatics: a route based on the theory of epistemes, my main work, which is what allowed me to connect my interest in math to philosophy and got me writing about it.]

Now we have lost the quantifier up to equality of first-order logic, and we drop to a second-order logic where we can quantify only up to ‘isomorphy’: this again is not a self-evident truth. We’re saying it makes sense to do the same thing with the subset as we did with the original sequence, and then we add the subsets together inside that arithmos, using the unary function. And do it again. And again. In this way we make the subset bigger than the original sequence; we produce a second infinity bigger than the first one, because we can quantify these two sets only up to isomorphy: the first sequence is quantifiable up to isomorphy with the second one, and the second one can keep being further quantified past that isomorphy (through a process called well-ordering in set theory: again, we’re basically picking an arbitrary point in one set and taking that as the position to begin mapping to the subset we later add back to itself; the two arbitrary points, in their correspondence, specify the mediant, and the mediant added as a successor to its own predicate or preceding value, as per the successor predicate, gives us something called the interpolant, which is what we’re actually adding into the arithmos), because we can’t quantify to equality anymore. In other words, we can keep subdividing it into new subsets and then adding those subsets back to it. This new, second, ‘bigger’ infinity (alef-1) is what we call the real numbers. That is how set-theory works. And people have for years been playing this game where we produce bigger and bigger sets from other sets; you can classify them in terms of their complexity relative to one another, the classes they are held inside of are the ordinals, and people have created various large cardinals based on them, numbers so large they can’t be reconstructed out of the natural numbers.

**These nonsensical numbers are the heart of a lot of modern math, especially stuff like Lie algebra. Since what defines these large cardinals is their information-density/complexity, they are associated with different topologies. The complexity level is the alef hierarchy: alef-0 is the infinity of the natural numbers, 0, 1, 2, 3, and so on. Alef-1 is the infinity of all the enumerable ordinals taken as a single set. Alef-2 is the infinity of the powerset of the reals; and so on. The higher up the alef-scale you go, the less structure the infinity has (the less dense its information content, and by logical implication, the harder it is to get information out of it, that is, the harder it is to understand it and what structure it does have)**, but the bigger it is. Every ordinal number can have an associated alef number defined for it, if the continuum hypothesis is true. That is why alef-1 is very important, the infinity of the enumerable ordinals. Check it out: the enumerable ordinals are what we call the real numbers. If the number of countable real numbers is c, then alef-1 = c. That equation, alef-1 equals c when c is the number of enumerable ordinals, is called the continuum hypothesis. But if this heterogeneous infinity I was talking about exists, then the continuum hypothesis doesn’t even make sense as a question, true or not; it is just a malformed statement grounded in this unary function and arithmos nonsense.

A note on that. The transfinite cardinal number specifies the size of an infinite set (the cardinal alef-0 is the ‘size’ of all the natural numbers 1, 2, 3, 4…), whereas the transfinite ordinal specifies what ordering position a number has in an infinite set. (Omega is the order type of those natural numbers. In a list of infinite sets, the ordinal tells you what place one infinite set occupies relative to the other infinite sets. Omega says the natural numbers have the lowest place in that order; they are, in other words, ‘first’ in the sequence of infinite sets, and we call that first position ‘omega’. The natural numbers are the first infinite set in an infinite list of infinite sets, and the actual size of that infinite set of the natural numbers is the cardinal alef-0.) So the full list of the natural numbers has the ordinal value of omega, and it has the cardinal value of alef-0. Every ordinal can in the same way be assigned a cardinal; if the continuum hypothesis is true, anyway, but as I’ve been saying, it is neither true nor false, it is simply nonsensical. Another way of stating the continuum hypothesis is: there is no cardinal number in between the cardinal number of the natural numbers (alef-0) and the cardinality of the real numbers (alef-1). Recall what I said earlier: the cardinal value of the reals is alef-1. So the continuum hypothesis is saying there is no number between alef-0 and alef-1, no cardinal value between those two alefs. That’s what makes it a continuum; it is continuous, a jump from alef-1 to alef-2 to alef-3. The whole idea of the heterogeneous infinity of the rational numbers I have been talking about completely obviates this formulation of the continuum jumping through these alef levels in strict conformity to a sequence of ordinals that exist in a definite hierarchy of information-density/complexity that is able to measure how well-ordered one set is relative to another. (As I elaborated, information cannot be extracted from arbitrary infinite sets.) In my system, there is just as much information between 0 and 1 as between 1 and 2. You can’t hierarchialize infinity.

So what I am doing is going back to ground zero, and taking this concept of arithmos out of it. Euclid used it to generalize the concept of a ratio, but it has all the problems I just specified, the main one being it doesn’t make sense. There must be a better, more logical way of generalizing the idea of a ratio, something other than arithmos. So I figured out that alternative (what I called a schematism) and now I am reconstructing all of math with it.

I’m pretty certain just about anyone can understand what I said in this post, at any rate; it is as much as the idea can be simplified. We are talking about infinity, after all. Simplified, but not shortened. If we are going to rebuild a new math that is more rigorous than Euclid’s arithmos and set theory, we must be rigorous. We must define every term, every operation, every aspect of it very carefully, knowing precisely what we mean even when stating something as simple as 1=1. A new math is needed because the set-theory I just described will never answer anything about the deep-structure of the rational numbers, you know, the ‘actually real’ real numbers, not the real numbers. Without a new mathematics, the heterogeneous infinity associated with those numbers, the numbers that actually do exist outside of our heads, will forever elude us, and I thought that’s what math was: accessing the higher truths of our reality, truths independent of us and our imaginings.

As you can see, simplifying doesn’t mean shortening. I simplified the passage I excerpted, one paragraph, into all this. In cases where the thing has high information-density, making it simpler actually means making the text longer.

I’ve been trying to think of ways to disentangle the properties of the number-field; the additive and multiplicative, etc.

So in math, a semigroup is just an algebraic structure: a set combined with an associative binary operation. The set of all natural numbers, for example, forms a commutative semigroup.

A monoid is a semigroup with an identity element as well; the set of all natural numbers forms such a commutative monoid under addition with identity element 0, and it forms one under multiplication with identity element 1. Addition and multiplication are entangled by this commutative monoid and its identity element.
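Concretely, a small illustration in Python (my own, not a formal construction): one carrier, the naturals, with two monoid structures, additive with identity 0 and multiplicative with identity 1.

```python
from functools import reduce

add_monoid = (lambda a, b: a + b, 0)   # associative operation, identity 0
mul_monoid = (lambda a, b: a * b, 1)   # associative operation, identity 1

def mconcat(monoid, xs):
    """Fold a list through a monoid, starting from its identity element."""
    op, identity = monoid
    return reduce(op, xs, identity)

assert mconcat(add_monoid, [1, 2, 3, 4]) == 10
assert mconcat(mul_monoid, [1, 2, 3, 4]) == 24
assert mconcat(add_monoid, []) == 0 and mconcat(mul_monoid, []) == 1
```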

Now, taking into account all I have said so far, it should be clear that this mysterious ‘binary operation’ is the unary function. That is what causes the entanglement of the properties of the number-field, an entanglement creating a lot of confusion. We want to disentangle these properties, the multiplicative and the additive.

My idea is:

We use algorithmic results from Krohn–Rhodes decompositions * on semigroups (I am mainly interested in numerical semigroups generated by prime numbers) ** and then find association schemes for the wreath-products they embed into (there are different algorithms for decomposing them into wreaths; the holonomy one is most efficient but is still difficult computationally), that is, the wreath-products of the corresponding aperiodic monoids and simple groups they decompose into, analogous to prime factorization results. (The aperiodic monoid divides a wreath product, and the simple groups can be embedded in that product to reconstruct the semigroup being decomposed. These monoids are where the entanglements are; their identity element causes commutative behavior under both addition and multiplication. The simple groups, however, are essentially to groups what primes are to numbers; they are just that, primal. They are the ‘disentangled’ thing one wishes to isolate.) Involved in this is the method of obtaining the automorphism group of a wreath product detailed in the paper “Wreath products and projective system of non Schurian association schemes”. 1 (We are reversing the process, though, so we need the epimorphism group of a wreath product, not the automorphism group. Still, the paper is useful. I don’t believe the epimorphism groups of wreath-products have been investigated much.)

Once we find association schemes for these wreath-products or ‘cascades’ through surjective epimorphisms (again, you can embed any semigroup into a finite list of simple groups and monoids that combine with each other in a multiplicative feed-forward chain, a wreath), we can generate new imprimitive association schemes from those schemes using Bose-Mesner algebra and Terwilliger algebra. *** So we have multiple association schemes now, and they can be merged using the crossing and nesting operators; **** the former is associative-commutative up to isomorphism, the latter is only associative. We combine them with a paper that details “a poset operator for combining association schemes indexed by an arbitrary finite poset.”

**A wreath product can be used to construct an association scheme from two association schemes. We’re reversing the process out of the wreath-product obtained from Krohn-Rhodes decompositions and then merging: in other words, we’re finding two association schemes that can be constructed as one through their wreath-product, which is also the wreath-product of those decompositions. Hence we have to find those two schemes, but instead of doing that and ‘multiplying’ the two schemes to create a third association scheme from their product, we are merging the two association schemes together with operators. We get a wreath-product from the decompositions of semigroups; we find two association schemes whose product is also that same wreath-product, whose value matches it, and instead of using the wreath-product to construct a new association scheme for them, we are adding the two association schemes through operators, merging them. Another way of saying it: instead of embedding the two schemes inside their wreath-product to produce a new scheme, we are finding two schemes whose product matches the wreath-product obtained by our decompositions, and then we are merging them together with the poset operator I mentioned earlier. Why merge them in this way? Because:**

**The merging of these association-schemes creates non-symmetric resulting association schemes, and these asymmetries can help us find corresponding symmetric Abelian groups** that relate to the semigroups we started with and decomposed originally. I call these asymmetries pseudosymmetric-abelian transports. The asymmetries are sort of ‘pushing out’ the entangled identity-elements in the commutative monoids. They are extracting the purely associative elements from the associative-commutative elements on the number-field, ‘transporting’ them.

They are to scheme theory what pseudoabelian categories are to category theory. Through them, I am applying something analogous to category theory’s Karoubi envelope to schemes, lifting it from category theory to scheme theory. Just as the Karoubi envelope allows one to find a pseudo-abelian category from any preadditive category, this technique allows one to find symmetric Abelian groups from nonsymmetric association schemes generated from the results of the original decomposition of semigroups into monoids and simple groups. Similar to how the Grothendieck group of a commutative monoid is a certain abelian group. 2

Taking the Karoubi envelope (often called the pseudo-abelian completion) of a preadditive category yields a pseudoabelian category; taking the pseudosymmetric-abelian transport of an asymmetric association scheme yields corresponding potential symmetric Abelian groups. The same commutative property ensuring bilinear morphisms in the case of abelian groups is at work in both processes, in the Karoubi envelope and in the pseudosymmetric-abelian transport. These potential symmetric Abelian groups, understood as a new ‘thing’ relative to the asymmetric association schemes that generate them, are what I call tableaux. We do important stuff in them later, in the tableau: we reconstruct objects inside them, and whatever we were not able to reconstruct with the symmetric Abelian groups in them, whatever is left outside the tableau, is the extracted associative element of the number field, outside the associative-commutative. It is what we have disentangled.

I found a little information about finding the wreath-product of two finite association schemes: “The wreath product of finite association schemes is a natural generalization of the notion of the wreath product of finite permutation groups. We determine all irreducible representations (the Jacobson radical) of a wreath product of two finite association schemes over an algebraically closed field in terms of the irreducible representations (Jacobson radicals) of the two factors involved.” *****

Basically the procedure is,

Let Y be an elliptic curve defined over K. Let S be the semigroup of non-zero endomorphisms of Y.

Merge two association schemes (A and B) whose wreath product can embed, in an association scheme C, the monoids and simple groups of S attained by decomposition; call that merged scheme D.

Let T be the semigroup of non-zero epimorphisms of S. Find the association scheme that embeds T and measure its symmetry in comparison to the asymmetrical merged scheme D (by reconstructing the curve: we started with an elliptic curve and we are ending with one, both Abelian groups). The variance of C and D, related to the variance of T and C, is what is important.

We’re intentionally deforming something with these asymmetries, and the measure of deformation is related to the Abelian groups we can find afterward inside these tableaux. We started with an entangling of the multiplicative and additive properties of the number field; then we broke structures apart into domains where only one or the other property holds; we developed them in isolation (‘transporting’ them); then we abstracted features from those isolated entities and reconstructed them with symmetric Abelian groups; and finally we are comparing the reconstruction results to the original semigroups we started with, the original structures we decomposed. In essence, whatever we weren’t able to reconstruct with the Abelian groups inside a tableau is the thing that captures the nature of the number field without the entanglements.

  1. arxiv.org/abs/2207.09205
  2. arxiv.org/abs/1704.02226

** link.springer.com/article/10.10 … 20-10102-9

*** researchgate.net/publicatio … on_Schemes

**** sciencedirect.com/science/a … 980500003X

***** journals.scholarsportal.info/de … ml&sub=all

_
What I got from your post, is an inquiry, on the opening-up and greater inter-connectedness for a better functionality of maths.

Seeing the world through mathematical concepts… the maths behind the existence of existence [the material/objects] itself… That’s my thoughts, on it.

_
I can explain this entangling of the additive and multiplicative properties of numbers pretty simply.

It doesn’t really make sense to ‘add’ three apples and a pear, does it? It certainly makes sense, when adding 3 apples and 2 apples, to say the result is 5 apples. That is clear enough. But when you add three apples and 1 pear, the result is neither an apple nor a pear. It is a different kind of ‘number’: the result is 4… something else. Three apples added to 1 pear is 4 ‘something else’. It just doesn’t work the same.

Given this, we could organize the numbers into families, and say that numbers can only be ‘added’ together when they are in the same family, viz. when they are both ‘apples’. (Mochizuki uses p-adic numbers to do this, to organize them into families.)

Multiplication could then be conceived as working as the intersection of two or more families of numbers. Hence 3 apples multiplied by 2 pears results in six ‘fruits’, ‘fruits’ representing a ‘number’ that is higher-dimensional in some way. We can call this higher-dimensional number a ‘genus’ instead of a ‘family’. In this case we wouldn’t multiply numbers in the same family, because the result wouldn’t be a genus, but only a number of the same family. Multiplication would only work with two genus-numbers or when the output would be a genus-number.

So addition would only work on numbers in the same family (apples and apples); multiplication would only work when the output of the operation is a genus (as the output is ‘fruit’ when multiplying ‘apples’ and ‘pears’). While Mochizuki uses p-adic systems to organize numbers into families and define addition (as I said, this reorganization of the number system into ‘familial’ associations, familial proximity, instead of just organizing the proximity of numbers based on how close they are on the number line, has been thought of before), he defines this new multiplicative operation through complex Galois groups, Frobenioids, Teichmuller spaces, etc. That latter part is the main novelty of his theory, this ‘genus’-based multiplication and how the two operations are inter-related.
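The family/genus rules can be rendered as a toy type discipline (a sketch in Python with labels of my own; an analogy only, not Mochizuki’s actual machinery):

```python
class Quantity:
    """A number tagged with a family: addition is allowed only within
    one family; multiplication must cross families and yields a genus."""
    def __init__(self, n, family):
        self.n, self.family = n, family
    def __add__(self, other):
        if self.family != other.family:
            raise TypeError("can only add within one family")
        return Quantity(self.n + other.n, self.family)
    def __mul__(self, other):
        if self.family == other.family:
            raise TypeError("multiplication must cross families")
        return Quantity(self.n * other.n,
                        frozenset({self.family, other.family}))  # a genus

apples, pears = Quantity(3, "apple"), Quantity(2, "pear")
five = Quantity(3, "apple") + Quantity(2, "apple")   # 5 apples: fine
six = apples * pears                                 # 6 of the genus {apple, pear}
assert five.n == 5 and six.n == 6
```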

Doing this allows the confusing entanglement of our normal mathematics, where addition and multiplication are recursively defined by the same unary function, to be untangled. The main thesis would be that most mysteries of mathematics are simply confusions arising from improperly combining the operations, an entangled semantics. In terms of my own philosophy, exactly this kind of entanglement is conceived semiotically, a semiogenetic loop through interacting epistemes that occurs in all fields of study and in all discourses. I was happy to find an exact example of this entanglement idea at work in mathematics.

_
(Hu)man maketh math, so (hu)man have to maketh the maths work.