We have previously agreed that infinities can be of different sizes. One particular size can be ("may validly be") set as a standard against which to compare others. Obviously, if there are different sizes, there are differences between those sizes that can potentially be measured.
I am giving one particular infinite size a name, “infA”, to be the size of the set of all integers. InfA does not have to be called a “number” or a “real number” or a “quantity” or anything other than a standard size (aka “infA elements”) of the infinite set of integers.
So we have agreed on the 2nd question of the set of digits to the right of the decimal being of size infA. Or that there are “infA elements in the set”.
Therein lies your fault. You treat something that is not a number as a number. Which makes you think that there is a zero at a particular position on the end. In fact, there is no end.
Yeah, I’m with Phyllo here too. Your reluctance to talk about it as a cardinality rather than a size suggests that you mean something different. Is that right? My concern is that when you call it a ‘size’, there will be a temptation to make an equation like
InfA + 1 > InfA
which is not the case if InfA is the cardinality of the integers (and 1 is the cardinality of a set with 1 element).
And again, I mean to be pedantic here, because I expect the syllogism you are building to be sensitive to the details. Cardinality is a specific mathematical concept; ‘size’ is ambiguous. The set of integers has the same cardinality as the set of decimal places of .333… If you mean something different by size, you need to be explicit.
You cite non-technical sources, so we should expect those definitions to be a bit blunt for many purposes, and they are for this discussion: Infinity isn’t a number, so the cardinality of an infinite set can’t simply be the “number of elements” (though it is for finite sets). And infinite sets and their cardinalities have weird properties that make treating them as numbers problematic; for example,
InfA + 1 = InfA (where InfA is the cardinality of the integers, and 1 is the cardinality of a set with one element).
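A small sketch of my own (not from anyone's post) of why that equation holds: adding one new element to the naturals still leaves a set that pairs off one-to-one with the naturals, by the familiar "shift everything by one" trick. The function name is just for illustration.

```python
# Sketch: InfA + 1 = InfA. Adjoin one extra element "*" to the naturals
# {1, 2, 3, ...}; the combined set still enumerates one-to-one against
# the naturals by giving "*" position 1 and shifting every natural up.

def pair_with_naturals(extra, n):
    """Element at position n of the enumeration of {extra} ∪ {1, 2, 3, ...}:
    position 1 holds the extra element, position n holds natural n - 1."""
    return extra if n == 1 else n - 1

# The first few positions cover the new element and then every natural in turn:
print([pair_with_naturals("*", n) for n in range(1, 7)])  # ['*', 1, 2, 3, 4, 5]
```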
By contrast, if 1 and 3 are the cardinalities of sets with 1 and 3 elements, then
1 + 3 = 4
where the + symbol in relation to cardinalities is the disjoint union operation (if it is a regular union, then 1 + 3 <= 4)
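The finite case above can be sketched with ordinary Python sets; the tagging trick is one standard way to build a disjoint union:

```python
# Sketch of cardinal addition as disjoint union, using finite sets:
# |A| + |B| means the size of the disjoint union A ⊔ B.

A = {"x"}                    # a set with cardinality 1
B = {"p", "q", "r"}          # a set with cardinality 3

# Disjoint union: tag each element with its set of origin so nothing collides.
disjoint_union = {(0, a) for a in A} | {(1, b) for b in B}
print(len(disjoint_union))   # 1 + 3 = 4

# A plain union can be smaller when the sets overlap, hence 1 + 3 <= 4:
C = {"p"}                    # cardinality 1, but shares an element with B
print(len(C | B))            # 3
```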
If all of this is accepted, and you are just using “size” to mean “cardinality”, then proceed. Otherwise, could you clarify how they are different?
Certainly not in the sense that it is being used in general math equations.
An infinite series, or sum, or integral does not specify some particular infinity as opposed to another infinity.
The reason seems intuitively obvious - the very concept of ‘infinity’ does away with all size concerns. What applies for the “lowest” infinity also applies to all “higher” infinities. IOW, the results for all infinities are exactly the same.
I think that intuition is right for limits, series, sums, etc., but there is a sense in which there are ‘more’ real numbers than natural numbers, while there are the same number of natural numbers and integers. You can map uniquely from the natural numbers to the integers and vice versa (e.g. by mapping 1 to 0, 2 to 1, 3 to -1, 4 to 2, 5 to -2, etc.). But the real numbers are uncountably infinite: no such mapping exists (Cantor’s diagonal argument shows that any proposed list of reals must miss some real number). Intuitively, there’s no ‘next’ number after any real number; any two real numbers have infinitely many real numbers between them.
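The back-and-forth pairing described above can be written out directly; this is just a sketch of that mapping, with function names of my own choosing:

```python
# A bijection between the naturals 1, 2, 3, ... and the integers
# 0, 1, -1, 2, -2, ...: evens go to positives, odds to zero and negatives.

def nat_to_int(n):
    """Map natural n >= 1 to an integer."""
    return n // 2 if n % 2 == 0 else -(n // 2)

def int_to_nat(z):
    """Inverse map: recover the unique natural number that maps to integer z."""
    return 2 * z if z > 0 else 2 * (-z) + 1

# The pairing from the text: 1 -> 0, 2 -> 1, 3 -> -1, 4 -> 2, 5 -> -2
print([nat_to_int(n) for n in range(1, 6)])   # [0, 1, -1, 2, -2]

# Round-tripping confirms each natural pairs with exactly one integer.
assert all(int_to_nat(nat_to_int(n)) == n for n in range(1, 1000))
```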
I don’t know why you guys are going off on this tangent.
Certainly, there is no reason for James to do so. It weakens his “you can never get to the end” argument. A number system with cardinality infA will have gaps between numbers as compared to a system with greater cardinality infB. There will obviously be a ‘smallest number’ in the system. That’s something he denied was true for Reals.
I wish to clarify whether or not you mean cardinality when you say size, since as your own source points out, “[t]he cardinality of a set is also called its size, when no confusion with other notions of size is possible” (emphasis added). Just be explicit and we can move on.
You made the point yourself:
The existence of different infinities makes this a possibility. And if it is a different string, with a smaller infinity of decimal places, then James could refute proofs that rely on multiplying .333… by 10.
I brought it up because that’s the kind of ‘logic’ which is used to create doubt about 0.999…=1 in the minds of those who are not familiar with math.
The fact is that if you multiply 0.333… by any power of 10, it doesn’t pick up zeros on the end … it simply shows the 3s that were there but seemed to be ‘invisible’ to you.
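This is easy to check with exact rational arithmetic (a sketch using Python's standard `fractions` module): 0.333… is exactly 1/3, and multiplying by 10 just shifts another 3 into view rather than appending a zero.

```python
# Sketch: multiplying 1/3 (i.e. 0.333...) by 10 gives 10/3 (i.e. 3.333...);
# the fractional part is still exactly 1/3, so no zero ever appears.

from fractions import Fraction

third = Fraction(1, 3)           # the value written as 0.333...
shifted = 10 * third             # 10/3, written as 3.333...

print(shifted)                   # 10/3
print(shifted - 3 == third)      # True: the part after the point is still 0.333...
```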
What would happen if that were not the case?
The symbol 0.333… would represent an infinite number of different numbers. It could be anything from 0.333…333 to 0.333…300 to 0.3330000000000000…
It could mean that 0.333… = 0.333
If you saw it used on a page, you would not know what it represented. A simple multiplication operation, in a set of operations, would change its meaning and value. There would be no logical way to use it.
My understanding is that in the hyperreals, that’s pretty much how it works: .333… doesn’t necessarily fully express the number (possibly because a convention hides certain properties), and multiplying by 10 affects the unexpressed portion such that the result is not 3 + .333…
I think it’s similar to how we express integers without the trailing infinite string of zeros, even though it is there: when we multiply an integer by 10, we move the decimal point to the right and ‘reveal’ a zero where there wasn’t one. I think the hyperreals operate in a similar way, but around infinity instead of the decimal point. Because the hyperreals include infinite and infinitesimal numbers, weird things happen, and I believe they affect this question.
But, to be clear, this is not how the real numbers work (“standard reals” for James’ benefit).