Magnus Anderson wrote:If you're saying that my calculations are influenced by my image, you're wrong. The image is nothing but a reflection of my calculations. It's supposed to show how I'm doing my calculations (and it is implied that the way I'm doing them is the way they should be done.)

The first element of the purple rectangle is \(0.9\) and it's constructed from the second element of the green rectangle which is \(0.09\). It is not constructed from the first element of the green rectangle. That's why the purple rectangle does not start at the same place as the green rectangle.

The way in which you're doing the calculations of summation is the way it should be done.

The way in which you're justifying the set-correspondence is a different calculation to this summation, and only seemingly connected by your "reflection of your calculations" via your image. Consider this example:

I have two sets, A and B, which each contain one instance of each integer from 0 upwards, in ascending order \(\{0,1,2,3,...\}\).

I state however that B can be constructed from A by first inserting 0 to start the list, and then populating the rest of B by adding 1 to each element in A.

This construction of B from A is entirely possible, but for each element in the list, there is one-to-one correspondence between the identical sets. The implication that B has an extra element because 0 was added to it separately to the construction of the rest is inconsequential, because for every element in A there is still an identical element in B - literally forever and with no gaps. You never get to any un-matching element in B from its very start and onward forever - it is impossible for there to be one, since the two sets were identical to begin with. It was only the alternative construction of B from A that gave the "appearance" that correspondence was skewed from being one-to-one. Superficial.
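A quick sketch of this construction (a hypothetical illustration using finite prefixes, since we can only ever inspect finitely many elements):

```python
# Finite prefixes of the two "identical" sets A and B.
N = 1000  # any prefix length; the pattern below holds for every N

A = list(range(N))  # 0, 1, 2, 3, ...

# The alternative construction of B from A described above:
# insert 0 at the front, then add 1 to each element of A for the rest.
B = [0] + [a + 1 for a in A]

# Element for element, A and B agree at every shared index - the
# construction never produces a mismatched element anywhere.
assert all(A[i] == B[i] for i in range(N))
```

However far you extend the prefix, you never reach an index where the two lists disagree, which is the point: the alternative construction changes nothing about the element-for-element identity.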

In exactly the same way, the working you do to get from Green to Purple is entirely valid summation in itself, and yet this construction goes nowhere to changing the objective bijection between the two sets, with element-for-element identity from the start and indefinitely onward from there.

Magnus Anderson wrote:Silhouette wrote:There is no "product itself" of a divergent infinite product - you never get there.

There is. The fact that the product never ends does not mean it does not exist. (Indeed, I can use Gib's argument against you: infinite sums have no temporal dimension, so they can be considered complete.)

What's true is that there may be no finite number equal to an infinite product. In the case of \(\frac{1}{10} \times \frac{1}{10} \times \frac{1}{10} \cdots\), there is no finite number equal to it. It has a limit, which is \(0\), but that's not the same thing as its result (its result being greater than its limit.)

Infinite sums being "complete" refers to there being no gaps in the set, in the sense that each jump between one element and the next is defined from the representation itself to be constant throughout.

This doesn't mean that any "product" doesn't always have an extra element after it, or to add to it etc. An infinite set can be completely defined, but it still has no product because there's always more than any product. Any "product" is therefore undefined, it's infinite, it doesn't "equal" its limit - this is something I thought you already indicated you had grasped...
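To make the distinction concrete, here's a hypothetical sketch with exact rational arithmetic: every partial product of \(\frac{1}{10} \times \frac{1}{10} \times \cdots\) stays strictly above the limit of \(0\), yet there is always another factor to apply:

```python
from fractions import Fraction

# Exact partial products of (1/10) × (1/10) × ... - none ever equals 0,
# even though the limit of the sequence of partial products is 0.
p = Fraction(1)
for _ in range(50):
    p *= Fraction(1, 10)
    assert p > 0  # every partial product remains strictly positive

assert p == Fraction(1, 10 ** 50)  # exactly 10^-50 after 50 factors
```

No finite number of steps ever "completes" the product; any value you stop at is only a partial product.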

Magnus Anderson wrote:Silhouette wrote:\(1+\infty\) is almost literally "more boundless" (by the finite quantity of "one").

Something can't be more boundless than boundless, so it's boundless whatever finite thing you do to it along its boundlessness.

Quantities are either finite or they are infinite. There are no degrees. That's where we agree. Where we disagree is that \(\infty + 1\) means "more infinite". It does not. What it means is "larger infinite quantity".

You're trying to tell me that "larger infinite quantity" does not mean "more infinite"?

"Larger" is a synonym for "more" (or it is at least a type of "more"), and you yourself clarified that by "infinite" you mean "infinite quantity".

Is your distinction therefore between the "type" of "moreness" not having anything to do with being "larger"? Adding 1 deals with (finite) quantity, so both "larger" and "more" apply to the specific example you're talking about. Basically nothing in your words distinguishes between "more infinite" and "larger infinite quantity".

The question is whether the output as a whole is distinguishable from the previous input states. I can watch an apple and a banana be put into the same basket and ask someone else who didn't watch the process to describe the product, and they'd be able to accurately describe it as an apple added to a banana. No information is lost upon the completion of the operation.

If you asked the same of someone adding an apple to an infinite line of apples, someone else who didn't know the input process would only recognise the result as an infinite line of apples, and could only guess at any operation previously performed. You yourself can only know the result as "an infinite line of apples plus 1 apple" as a result of your memory of a past state - as opposed to the current state.

It's perhaps a bit of an advanced concept, but the act of operating on an infinite with a finite results in maximum information entropy - with all information guaranteed to be lost. The infinity "added to by 1" already has maximum entropy, though operating on it with the finite quantity of 1 as a separate input at least contains some information that you'd expect to result in a certain outcome - if only the operation of adding 1 wasn't to an infinite quantity. Mixing the two completely eliminates the certainty of the finite operand, and treating the two separately in the result like you would an apple and a banana, which you can't mistake for one another, only serves to hide the fact that you absolutely can mistake infinity for any addition of 1 to it. We're dealing with information here, I'm sure you'll agree, so the information entropy of what you're proposing cannot be disregarded.

Magnus Anderson wrote:Silhouette wrote:That's why adding one results in boundlessness both before and after you do it

The truth value of the statement \(\infty + 1 = \infty\) depends on its meaning.

If what we mean is "Some infinite quantity X + 1 = Some infinite quantity that is not necessarily equal to X" then the statement is true.

If what we mean is "Some infinite quantity X + 1 = The same infinite quantity X" then it is false.

So as above, the problem is not only that the equality or inequality of "X+1" and "X" is indeterminable, undefined and undefinable when X represents an infinity. "X" isn't even determinable as different from any other "X" when X represents an infinity, because in each case the quantity is undefined - no matter how well defined the finites are in its construction or representation, the infinity in the construction or representation is also undefined. There is no "the same infinite quantity X" because neither X is defined. This is why you have to revert back to "Some infinite quantity X + 1 = Some infinite quantity that is not necessarily equal to X", therefore \(\infty + 1 = \infty\) is true.

Magnus Anderson wrote:Silhouette wrote:It's literally impossible to test whether you added, took away or did whatever bounded thing to a boundless length after you've done it and are therefore able to make an equation about it. You can only validly comment on what you did with the finite quantity of 1 apple before it was absorbed into the boundless non-finite mass, you cannot validly comment on that resulting infinity that stays infinite in the same and only way that infinity can be infinite. So with no change in the result, there is therefore no valid equation or statement to make about the result as changed.

It seems to me that like Gib you're not able to distinguish between conceptual and empirical matters.

Gib keeps asking questions such as "How can we empirically determine (specifically, through direct observation) whether any two infinitely long physical objects are equal in length or not?"

The question is irrelevant and it is so precisely because it's empirical and not conceptual.

We're talking about concepts here, and to concepts we should stick. A conceptual matter cannot be resolved empirically.

You keep jumping to the conclusion of inability when people disagree with you.

Whether empirical or conceptual is a matter of Epistemology either way: you are championing the conceptual over the empirical in order to gain knowledge from what you're conceptually attempting. This goes as far as to present situations that seem like they ought to be reasonable, but whether or not they actually are reasonable requires testing. You aren't gaining knowledge from presenting situations conceptually until they've been examined. It turns out that when one tests what you judge ought to be conceptually viable, it can't be done.

This is not insignificant and certainly isn't a mistake by those who offer empirical testing to conceptual matters. Most of my objections are actually logical ones, and occasionally I back them up with empirical objections - either way, in order for you to forward your conceptual propositions as knowledge, they need to be testable and tested. They fail both logically and empirically - both of which are valid approaches to evaluating the conceptual.

Magnus Anderson wrote:Silhouette wrote:Your error is to say that since 1=0 is a contradiction, we ought to be able to treat infinites alongside finites as though they were compatible.

That's not what I'm saying.

The contradiction that you speak of arises as a result of not understanding the implications of the concept of infinity that is being employed.

I've previously said that \(\infty + 1 = \infty\) is true. There is no doubt about it. But that's only the case if the symbol \(\infty\) means "some infinite quantity, not necessarily the same as the one represented by the same symbol elsewhere". In such a case, you can't subtract \(\infty\) from both sides and get \(1 = 0\). This is because \(\infty - \infty\) does not equal \(0\) given that what that expression means is "Take some infinite quantity and subtract from it some other infinite quantity that is not necessarily equal to it". If the two symbols do not necessarily represent one and the same quantity, then the difference between them is not necessarily \(0\).
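As an aside, IEEE-754 floating-point arithmetic (exposed via Python's `math.inf`, used here purely as an illustrative convention, not as a claim about mathematical infinity) encodes exactly this rule: adding a finite number to infinity returns infinity, but subtracting infinity from infinity is indeterminate, not zero.

```python
import math

inf = math.inf

# inf + 1 == inf holds in IEEE-754 arithmetic...
assert inf + 1 == inf

# ...but you cannot then "subtract inf from both sides":
# inf - inf is indeterminate (NaN), not 0.
assert math.isnan(inf - inf)
```

The standard deliberately refuses to assign \(\infty - \infty\) a numeric value, for the same reason given above: the two symbols need not denote the same quantity.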

Some progress - this is what I've been saying all this time.

Magnus Anderson wrote:But that's PRECISELY what mathematicians do when they try to prove that \(0.\dot9 = 1\). There is literally no difference between people proving that \(0 = 1\) by subtracting \(\infty\) from both sides of the obviously true equation \(\infty + 1 = \infty\) and various Wikipedia proofs that \(0.\dot9 = 1\) except that it's much easier to see that the former conclusion is nonsense and that the proof must be invalid.

The terms that I've brought up before of "divergent" and "convergent" make a difference here.

If we compare the two similar-looking infinite sums \(\sum_{i=1}^\infty\frac{9}{10}\) and \(\sum_{i=1}^\infty\frac{9}{10^i}\), the former increases linearly and indefinitely with no finite limit, while the latter's terms shrink geometrically so that its partial sums approach a finite limit (of 1). Both are "complete" in that their progressions are constant and regular so we can project their trajectories very clearly, but while both series have infinitely many terms, the former also tends to infinity whilst the latter tends to a finite value. This difference is why convergent series are useful for things like calculus, and divergent series are not. We can surmise logically from their respective trajectories that:

1) the former tends to any arbitrarily large infinite quantity, for any arbitrarily large infinite quantity of iterations, that cannot be said to equal any other infinite quantity, with an infinite margin for error, and

2) the latter tends towards no other number than 1 (the limit that it never actually gets to), though the margin for error for any arbitrarily large infinite quantity of iterations tends to no other number than zero (any number other than zero can be divided smaller, so logically it can only be zero).

With an infinite margin for error when subtracting an indefinite number of corresponding digits (or elements in a set), versus a logically zero margin for error accompanying the problem of subtracting an indefinite number of decimal places (or elements in a set), we can accept the latter but not the former.
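A sketch of the contrast (assuming, as above, that the divergent series is the constant-term sum \(\sum_{i=1}^\infty\frac{9}{10}\) and the convergent one is \(\sum_{i=1}^\infty\frac{9}{10^i}\)), using exact rationals so no rounding muddies the picture:

```python
from fractions import Fraction

def partial_sums(term, n):
    """Exact partial sums of the first n terms of a series."""
    total, out = Fraction(0), []
    for i in range(1, n + 1):
        total += term(i)
        out.append(total)
    return out

# Divergent: constant terms 9/10 - partial sums grow linearly, no limit.
div = partial_sums(lambda i: Fraction(9, 10), 100)

# Convergent: geometric terms 9/10^i - limit 1, margin for error 1/10^n.
conv = partial_sums(lambda i: Fraction(9, 10 ** i), 100)

assert div[-1] == Fraction(9, 10) * 100        # = 90, and still climbing
assert 1 - conv[-1] == Fraction(1, 10 ** 100)  # error shrinks towards 0
```

After \(n\) terms the divergent sum sits at \(\frac{9n}{10}\) with no ceiling, while the convergent sum's distance from 1 is exactly \(\frac{1}{10^n}\), undercutting any positive bound.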

Magnus Anderson wrote:Silhouette wrote:Sure, if saying what something "isn't" gives it a meaning about what it "is".

You don't seem to understand the extent to which "ends" apply when it comes to definition.

Full definitions that include what something "is" as well as what it "isn't" are separating what it "is" from what it "isn't" by a bound (i.e. an "end") that's as clear as possible.

To define some symbol S is to verbally (or non-verbally) describe the meaning of that symbol S. The usual aim is to communicate to others what meaning is assigned to the symbol.

In other words, the meaning of a symbol precedes its verbal (or non-verbal) representation (which is what definitions are.) They are superficial things, very much in the Freudian "tip of the iceberg" sense.

The meaning of a word does not have to be described in order for it to have a meaning. This means the word "infinity" is a meaningful word so as long there is some kind of meaning assigned to it regardless of what kind and how many descriptions of its meaning exist.

You can describe the meaning of words any way you want, so as long your descriptions do the job they are supposed to do (which is to help others understand what your words mean.) If you can define words using "is not" statements and get the job done, then your definitions are good.

Using a transistor "NOT" gate as an analogy, let's call our input the word/symbol "infinite" and any output is any meaning we understand.

If we supply our input into the circuit, the transistor "switch" is turned on (allowing current to run through it) and the power supply and everything just runs straight to ground with no output provided.

If we turn our input "infinite" off, the transistor no longer completes the circuit between power and the ground, and power can now only flow out the output.

In both cases the input "infinite" is never connected to the output "meaning". The only way we can get an output is to turn "infinite" off, and the meaning we understand is just the default: "NOT" that input. We're recognising and processing the word by throwing it in the bin and switching to what we do know, which is the opposite of that, i.e. "finite".

It's just an analogy, of course, but our brain circuits - just like our dictionary definitions - are "getting the job done" by defining something else than "infinite" and just saying "not that". Describe that input "any way you want" just as you say. Any meaning we interpret is what's assigned to "not that" i.e. the meaning that "finite" has for us. The conflation of this with "infinity is a meaningful word" is that Freudian "tip of the iceberg" that you were talking about - it's the recognition of the symbol, its verbal or non-verbal representation only. The shape/sound of the signifier is defined for us, but the meaning we get from detecting it is the meaning of its opposite, not the meaning of the signified.

I'm just saying don't conflate.

Magnus Anderson wrote:Silhouette wrote:The reason that expressions that include e.g. division by zero are undefined is because you can't clearly bound what it "is" from what it "isn't", since in this case any answer is no more valid than any other:

\(\frac{1}{0}\) has no valid answer (not a single one) since there is no number that gives you \(1\) when you multiply it by \(0\).

The limit of \(\frac{1}{x}\) as the denominator \(x\) tends towards zero (from above) is infinity: \(\frac{1}{0}\) itself is undefined. There's no way to clearly bound exactly what it "is" from what it "isn't". Like I just explained, you know it doesn't work for anything finite, and it's only to that extent that you can know its infinitude. Obviously there's no finite number that gives you \(1\) when you multiply it by \(0\), and even extending these finite numbers as far as you can conceive (which is only ever to finites, by definition) it makes no sense how extending them even further than that could give you \(1\). All we can know for sure is what happens as the denominator tends towards zero - and that can't be pinpointed to tend towards any number.
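A minimal sketch of that trajectory, with exact rationals so the growth is not an artefact of rounding:

```python
from fractions import Fraction

# Illustrative only: as the denominator d shrinks towards zero,
# 1/d grows past every finite bound rather than homing in on a number.
for k in range(1, 8):
    d = Fraction(1, 10 ** k)   # d = 0.1, 0.01, 0.001, ...
    assert 1 / d == 10 ** k    # 1/d = 10, 100, 1000, ... without bound
```

Each tenfold shrink of the denominator multiplies the quotient tenfold; there is no number the quotients settle towards.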

Magnus Anderson wrote:Infinite series and limits is what they taught you at school (and thus what you're familiar with) but they are not relevant to the subject at hand.

Not seeing the relevance to the subject at hand is a result of you needing to go to school.

I'll just leave that non-comment there to match the non-comment content of your baseless and unelaborated quip.

It's adorable that you think I've not continued my learning and critical thinking way beyond school level - I can't wait to see all this next-level stuff that you've obviously been holding back from the discussion so far. So far it's just been weak-as-shit childlike speculation, personal incredulity, and your denial of any need to prove it better than that kind of level. It's not surprising that you have such a negative attitude towards what people learn at school, given this unworkable attitude you have against learning the way people do at school. I loved school, and learning - I still love learning and can't wait for anyone to show me something I've not learned or thought of before. I love being a student. You look down on that as though you never learned to learn a thing. Where's that getting you, exactly? Do you feel competent and confident with your closed mind?

Magnus Anderson wrote:Silhouette wrote:See how you're only going for one side of the "undefined" and concluding that the other side therefore doesn't exist?

\(0.\dot01\) never gets to \(0\) also means no gap can ever come into existence between \(0.\dot01\) and \(0\).

That's not true. It simply means the gap cannot be represented using one of those numbers you can understand.

\(0.\dot01\) must attain \(0\) in order for there to be no gap.

You saying there ought to be some other number/concept/anything without being able to say anything about it doesn't make it true.

I'll grant you anything at all to fill this "gap" and there can always be something smaller, and \(0.\dot01\) will still be a contradiction. I'm waiting for substance here beyond "there must be something" based on that contradiction. Nothing can be close enough to zero to suffice here, numbers or otherwise, and logically only 0 cannot be divided smaller and thus satisfy this "gap" - which just so happens to be the limit that it must tend towards even if it can't literally get there. The only thing you have against 0 is incredulity. You need more!!! I'm sorry, that's just how Epistemology works.

Magnus Anderson wrote:Silhouette wrote:Saying there's always a smaller gap, therefore there always is "a" gap that never fully vanishes, is no different from saying there's never any point at which the gap can be said to come into existence.

This is the problematic statement. It's a non-sequitir.

A non-existent gap would be a gap that has zero size. The size of an infinitely small gap is greater than zero -- by definition. No partial product of \(\frac{1}{10} \times \frac{1}{10} \times \frac{1}{10} \times \cdots\) is equal to \(0\). Note that this is different from \(0 \times 0 \times 0 \times \cdots\) where every partial product is equal to \(0\). Both are infinite, never-ending, products but one evaluates to a finite number that is \(0\) (is equal to it) whereas the other doesn't evaluate to any finite number and is provably greater than \(0\).

*Sequitur. An infinitely small gap must be even smaller than that, if it has any size at all, in order to satisfy being small enough. If it has no size at all, it is indistinguishable from zero. Logically only zero can fit the bill by being literally indivisible. If only zero can be small enough for this gap, then there's no gap. Partial products are all fair and lovely, but they're only partial. It's only by being infinite that this can happen; for finites, only products containing at least one zero would fit the bill. But we're not talking merely partial products or finite progressions, are we - as I keep saying, and as you keep denying with one hand while appealing to treatment as/like finites with the other. If it's impossible to prove the "gap" is greater than zero, then there are no grounds to say there's any gap, nor to distinguish it from zero in any possible way. It can therefore be treated as \(0\) and all the other proofs fall into place like a complete jigsaw. Give yourself a break.
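The contrast between the two never-ending products can be sketched via their partial products (a hypothetical illustration in exact arithmetic):

```python
from fractions import Fraction

# Contrast the two never-ending products discussed above.
a = Fraction(1)  # partial products of (1/10) × (1/10) × ...
b = Fraction(0)  # partial products of 0 × 0 × 0 × ... (first factor is 0)
for _ in range(29):
    a *= Fraction(1, 10)
    b *= 0
    assert a > 0   # every partial product of the first stays above zero...
    assert b == 0  # ...while every partial product of the second is zero

# ...yet a eventually undercuts any positive bound you care to name.
assert a < Fraction(1, 10 ** 28)
```

Every finite stage of the first product is positive, but no positive size survives as a candidate for the "gap" in the limit.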

obsrvr524 wrote:Silhouette wrote:\(0.\dot0{1}\) never gets to \(0\) also means no gap can ever come into existence between \(0.\dot0{1}\) and \(0\).

The undefinability of any "gap" means there are no grounds to distinguish \(0.\dot9\) from \(1\), nor \(0.\dot0{1}\) from \(0\).

Neither of those claims make any sense to me. So you haven't proven anything to me. Why would your first claim be true?

Silhouette wrote:Saying there's always a smaller gap, therefore there always is "a" gap that never fully vanishes, is no different from saying there's never any point at which the gap can be said to come into existence.

That one doesn't make any sense either.

Fortunately, making sense to you isn't a necessary criterion for such things to have already been proven, but I can try to convince you all the same if you're actually open to the possibility. Magnus isn't - are you?

The first claim is rephrasing one side of the argument as the other side.

\(0.\dot0{1}\) is a contradiction, but even if what it's trying to represent wasn't a "fully bounded boundlessness", it's a quantity that is always smaller than it is: it always has to be divided more to suffice as small enough. Logically the only quantity that is small enough to not be divisible further is zero, which not incidentally is the limit of what this gap has to be. So even though we can't get to the limit, we can confirm through logic that the answer is in fact zero, as though we actually could get there by means of the limit.
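The "always divisible further" point can be sketched with a hypothetical helper: the gap between 1 and the \(n\)-digit truncation \(0.\underbrace{9\ldots9}_{n}\) is exactly \(\frac{1}{10^n}\), so any positive candidate gap is eventually undercut.

```python
from fractions import Fraction

# Hypothetical helper: the "gap" left by the n-nines truncation of 0.999...
def gap(n):
    nines = Fraction(10 ** n - 1, 10 ** n)  # 0.99...9 with n nines
    return 1 - nines                        # exactly 1/10^n

# Any positive size proposed for the limiting gap is eventually undercut...
proposed = Fraction(1, 10 ** 100)
assert gap(101) < proposed
# ...yet no finite truncation's gap is itself zero.
assert gap(101) > 0
```

Whatever positive value is proposed, one more digit makes the gap smaller than it; only zero survives as a candidate, which is the limit.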

Therefore, insofar as one way of thinking can say there's always a gap because the limit never reaches 0, the other way of thinking shows that such a gap could never be small enough to come into any existence.

It was an attempt to get you guys to try and see both sides of the argument and not just your own - so it's fair enough if that kind of thinking didn't make sense. Maybe it was just my wording and you are actually able to see both sides, in which case the explanation I just gave should have straightened things out.

obsrvr524 wrote:Silhouette wrote:"They" have a proof that if \(0\times{n}=0\), "n" can be any number you want since all numbers multiplied by zero equal zero. Therefore dividing both sides by 0 to get \(\frac0{0}\) it's not \(\infty\) that this equals, but "undefined".

Except that ∞, by definition is NOT a number ("n").

So the "proof" using n/0 is invalid.

If \(\infty\) is not a number then you can't equate it with the number \(\frac0{0}\)

If it is a number, then \(0\times{n}=0\) is any possible number.

Take your pick - either way the answer is undefined/invalid.
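A trivial hypothetical check of the first horn: every candidate \(n\) satisfies \(0 \times n = 0\), so "\(\frac{0}{0}\)" singles out no particular value and stays undefined.

```python
# Every candidate n satisfies 0 * n == 0, so inverting the operation
# ("0/0") picks out no single value - which is why it's left undefined.
candidates = [0, 1, -7, 3.14, 10 ** 9]
assert all(0 * n == 0 for n in candidates)
```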

Ecmandu wrote:The reason Silhouette sees no difference between .000...1 and zero is because the 1 in 0.000...1 is never arrived at. It’s ALWAYS zero!

So you are saying that you never get to the end??

And that would mean that there is always a "9", never a "0", in 0.999..., wouldn't it?

You never get to the end of a limit, but as I just explained above, logically the answer is the limit anyway.

And yes, there's always another "9" and never a "0" (after the decimal point) in \(0.\dot9\) never allowing any gap to emerge between it and \(1.\dot0\).

The whole issue for non-mathematicians is that they don't look the same because they're represented with different digits. Turns out it's true despite appearances - is that so very hard to accept?