Is 1 = 0.999... ? Really?

I don’t know that Magnus has taken that position, but I think that is something like the usual idea when people object to that line in the proof. But that doesn’t really make sense. Why would we add a zero, as opposed to another 9?

I also think a more rigorous construction of (0.\dot9) would resolve the issue. Suppose we create (0.\dot9) by taking (3 * 0.\dot3), and we construct (0.\dot3) by dividing 3 into 1. Assuming long division is a reliable algorithm, we get an infinite string of threes (similar to my recursive definition of (0.\dot9)).

If we use a construction like that, we have an alternative way of asking whether (10 * 0.\dot9 = 9 + 0.\dot9), because we can multiply 10 elsewhere in the construction (by the associative property) and get to the same repeating long division. I don’t think that construction would actually work, but maybe we could find one (if not, we’re just not talking about the same thing, so there’s no actual disagreement about what equals or doesn’t equal 1).

I see this question of whether one can create a coherent mathematics that has a largest number as a red herring. Even if there is a largest number, (0.\dot9) still equals 1.

Because that’s what happens when you multiply something by 10: 10 x 123 = 1230.

Obviously, this isn’t the rule, it’s just a convenient short-cut for deriving the result. The rule isn’t: shift all the digits to the left and fill the last place with a 0. The rule is: add the multiplicand to itself a number of times equal to the multiplier. So with 10 x 123, you would add 123 to itself 10 times. You start with 123, then get 246, then 369, etc. until we get 1230. Nowhere in that process are we shifting digits to the left. It’s just convenient that, when multiplying by 10, we so happen to get a result that can be derived by a much simpler short-cut: shift the digits to the left and tack on a 0.
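To make the distinction concrete, here is a quick Python sketch (my own illustration, not anyone's official definition) of both procedures: multiplication as repeated addition, and the digit-shift short-cut. They agree on 123, but only the first is the actual rule being appealed to.

```python
def multiply_by_repeated_addition(multiplicand, multiplier):
    """The actual rule: add the multiplicand to itself, once per unit of the multiplier."""
    total = 0
    for _ in range(multiplier):
        total += multiplicand
    return total

def shift_shortcut(n):
    """The short-cut: shift the digits left by tacking a 0 on the end (base 10, integers only)."""
    return int(str(n) + "0")

print(multiply_by_repeated_addition(123, 10))  # 1230
print(shift_shortcut(123))                     # 1230
```

The two only provably coincide for numbers with finite expansions, which is exactly the gap the paragraph above is pointing at.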

This short-cut applies to all numbers with finite decimal expansions, but because it is not the actual rule of multiplication, we cannot necessarily say it carries over to numbers with infinite decimal expansion. We’d have to derive a thorough explanation for why we get this short-cut when multiplying numbers with finite decimal expansions by 10, and then see if that explanation carries over to numbers with infinite decimal expansions.

So instead of asking whether (10 * 0.\dot9 = 9.\dot9), we ask whether (10 * (3 * 0.\dot3) = 10 * (3 * \frac{1}{3}) = 9.\dot9)?

I doubt that would convince the skeptics. Magnus has already ruled out the fact that (\frac{1}{3} = 0.\dot3).

It’s a tangent. This thread is full of tangents. Personally, I’m okay with that as I’m not actually trying to get to the bottom of the main question–does (0.\dot9) really equal 1?–I just enjoy a good debate regardless of where it leads.

(But just to address your point, if there is a largest number, it would fundamentally change the way we understand numbers (it would for me at least), and this could affect the way we understand the question of does (0.\dot9) = 1.)

Doesn’t the “=” sign represent the requirement of a balanced equation? It seems to be tipping ever so slightly, by .0111…, to one side. Or does .0111… = 0? Seems like a smallest-number question as well.

The short-cut isn’t necessarily wrong, multiplication by 10 in base 10 shifts the decimal point to the right. But (123 = 123.0), so we aren’t adding a zero, the zero was already there. That’s not the case for (0.\dot9) (if there are (L) decimal places, there can’t be any number in the (L+1)th place, because (L+1) is undefined).

I also suspect that we can prove the short-cut as a general theorem, since it’s the case that for any number in base x, multiplying by x shifts the decimal point one place to the right. (Tangent: is there a base-agnostic word for the ‘decimal’ point?)
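Here is a rough sketch of that base-agnostic claim in Python (my own construction, using exact fractions): interpret the same digit string in several bases, and check that multiplying by the base equals moving the radix point one place to the right.

```python
from fractions import Fraction

def digits_value(digits, point, base):
    """Exact value of `digits` in `base`, with the radix point `point` places from the left."""
    return sum(Fraction(d) * Fraction(base) ** (point - 1 - i)
               for i, d in enumerate(digits))

digits = [1, 0, 1, 1]  # a digit string valid in every base >= 2
for base in (2, 8, 10, 16):
    value = digits_value(digits, 2, base)    # e.g. "10.11" in that base
    shifted = digits_value(digits, 3, base)  # radix point moved one place right
    assert shifted == base * value
```

This only exercises finite digit strings, so it is evidence for the theorem in the finite case, not a proof that the shift carries over to infinite expansions.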

I was thinking (10 * (3 * 0.\dot3) = (3 * \frac{10}{3}) = 9.\dot9), because (\frac{10}{3}) results in the same repeating decimals in the same way.

I agree this isn’t the construction we need to convince skeptics, but I’m not sure what construction the skeptics are using to define (0.\dot9). Again, without that construction, I’m not sure that we all mean the same thing when we say (0.\dot9).

Absolutely. Numbers aren’t closed on addition? A number where addition is defined for only half the number line (all negative numbers, but no positive numbers)? An end to all decimal expansions, such that there is uncertainty about what happens to any infinite expansion when multiplied by 10? Does that make a whole class of rational numbers that aren’t closed on multiplication? Is multiplication by (L) defined?

My impression is that the existence of (L) is ad hoc, unnecessary, and has a lot of unintended consequences. I don’t believe it can be proven from other standard axioms, so we’d be adding it as an additional axiom, and it isn’t clear why.

Ah, so the red herring is actually sport fishing! Fair enough.

The limit of the sequence is (0.0000… * \infty). My calculus is rusty, but I think that means the sequence doesn’t converge.

So, even here someone or something the equivalent of God is necessary. In other words, an omniscient point of view who/that knows everything that can possibly be known about both apples and math.

In the interim, mere mortals such as ourselves carry on as best we can. I’m just curious as to why those here who obviously do have an enormous amount of understanding with respect to math still can’t pin this down conclusively.

This tells me something about reality [human or otherwise] that doesn’t quite seem to sink in with others. For better or worse.

No. You can find logical flaws without omniscience or knowing everything.

We can know that 6 times 7 does not equal 43 even when we don’t know the true answer. That’s because the product must be an even number. We know something about it.
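That parity argument is easy to sanity-check; a trivial Python sketch (mine):

```python
# One even factor forces an even product, so we can rule out the odd
# candidate 43 without ever computing what 6 * 7 actually is.
assert (6 * 7) % 2 == 0   # the product is even
assert 43 % 2 == 1        # 43 is odd, so 6 * 7 cannot equal 43
```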

That’s because a consistent point of view is not being maintained by some posters. Sometimes they say that infinity is a number and sometimes it’s not. Sometimes they say mathematical division works and sometimes it doesn’t. Sometimes they talk about numbers and sometimes they talk about sets.

It’s like trying to argue against a square-circle. They see a circle when it suits them and a square when it suits them. They don’t recognize that you can’t have both at the same time.

Demonstrating that inconsistency is difficult.

You know my pitch here. Whether in regard to living matter evolving into apples, the definitive understanding of all things mathematical or human minds capable of discussing either one, we are all embedded in that same profoundly mysterious and problematic gap between what we think we know about anything and all that there is to know about everything.

Besides, the discussion revolves not around whether 6 X 7 = 43 – who here would get into a fierce debate about that – but 1 either equaling or not equalling 0.999…

Okay, but as long as the exchanges revolve around words talking about numbers pertaining only to more words still, it’s hard for folks like me to grasp the relevance of the debate as it might be applicable to, say, technology and engineering. Again, it’s actual use value and exchange value in human relationships.

A square circle? Isn’t the whole point of the expression “squaring the circle” to suggest something impossible?

The point is that you don’t need to know everything in order to come to some reasonable and true conclusions.

If you want to engineer something, then you can’t be flip flopping around.

Take a stand and see where it goes. That’s the lesson to be taken away from this argument.

Phyllo,

Engineers only use pi to 6 decimal places.

That’s not what we’re talking about here.

I’m talking about how to solve problems without getting all fucked up.

Something like 1=0.999… is just one example.

You’re preaching to the choir.

How 'bout the “base”?

Magnus likes to use (\sum_{i=1}^{\infty}\frac{9}{10^i}).
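As a side note on that series: a quick Python sketch (my own check, using exact rational arithmetic) shows that the partial sums of (\sum_{i=1}^{N}\frac{9}{10^i}) differ from 1 by exactly (\frac{1}{10^N}), a gap that shrinks toward 0 as (N) grows.

```python
from fractions import Fraction

# Partial sums of sum_{i=1}^{N} 9/10^i, computed exactly.
# The gap between each partial sum and 1 is exactly 1/10^N.
for N in (1, 5, 10):
    s = sum(Fraction(9, 10**i) for i in range(1, N + 1))
    assert 1 - s == Fraction(1, 10**N)
```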

Yeah, I’m not sure why Magnus brought it up, nor did he care to explain what he meant. He did say it was a “special” kind of number. Not a natural, not an integer, not a rational, and not a standard real. He dropped the term “non-standard real” at one point, but he has yet to confirm this is what he meant. We’ve also been discussing hyperreals (infinitely large and infinitely small) and he might have this in mind, but if so, I’ll dispute it as I see no reason to assume there is a largest hyperreal number (hell, they’re already larger than infinity, what would stop them beyond that?).

I don’t think it’s a matter of being able to pin it down, I think it’s a matter of unanimity. As far as I’m concerned, the matter was pinned down the minute someone posted this proof:

(X = 0.\dot9)
(10X = 9.\dot9)
(10X = 9 + 0.\dot9)
(10X = 9 + X)
(9X = 9)
(X = 1)

I didn’t.

(L + 1) refers to a number larger than the largest number which is a contradiction in terms. On the other hand, (L - 1) is perfectly fine (i.e. no contradiction.)

And there’s no need to change the definition of addition.

“If two calculations both result in ∞, that doesn’t mean that these outcomes are equal.” (“Als uit twee berekeningen allebei oneindig komt, dat betekent niet dat die uitkomsten gelijk zijn.”)

  • The Elder Milikowski, VLIW architect and physicist

“Thus ∞ is not a number.”

“Base point”? I don’t see that used anywhere. A bit of wikiing brought me to “radix point”, which I’ve never heard before but seems to be what I’m looking for.

I don’t think there’s a non-question-begging way to proceed from there.

(10 * \sum_{i=1}^{\infty}\frac{9}{10^i} = \sum_{i=1}^{\infty}\frac{90}{10^i} = \sum_{i=1}^{\infty}\frac{9}{10^{i-1}}), but I don’t see that leading to anything that’s not question-begging: if (\infty) is finite (what?), then the latter two sums ‘end’ before the sum on the left side.
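For what it's worth, the finite version of that manipulation can be checked exactly; a Python sketch (mine) of the truncated sums: for every finite (N), (10 \cdot S_N = 9 + S_N - \frac{9}{10^N}), so the whole dispute lives in whether the trailing (\frac{9}{10^N}) term survives the passage to infinity.

```python
from fractions import Fraction

def S(N):
    """Exact partial sum sum_{i=1}^{N} 9/10^i."""
    return sum(Fraction(9, 10**i) for i in range(1, N + 1))

# For any finite N, multiplying by 10 'shifts' the sum but leaves a tail term:
# 10*S(N) = 9 + S(N) - 9/10^N. Only in the limit does the tail term vanish.
for N in (1, 4, 8):
    assert 10 * S(N) == 9 + S(N) - Fraction(9, 10**N)
```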

But what’s the difference between (\infty) and (L)?
(\sum_{i=1}^{\infty}\frac{9}{10^i} \stackrel{?}{=} \sum_{i=1}^L\frac{9}{10^i})

What kind of number is (L)? It can’t be a natural number, an integer, a rational number, or a real number without redefining every basic operation.

And everything just seems to behave strangely around (L):
(\frac{L}{2} + \frac{L}{2} = L )
But multiplying both sides by 2, as we can usually do with an equation, yields
(L + L = 2L)
A contradiction by definition. But this is hard to escape: if we can put (L) into any equation, and if we can apply any standard operator to it at all, we get weird things. Like
( L - 1 + 1 = L + (-1) + 1 = L + 1 - 1 )
That should be permitted, and if it’s not then addition involving (L) doesn’t mean what addition usually means.

But, as someone not able to follow the mathematics here with any real sophistication, how important, in regard to creating technology and accomplishing engineering feats, is it to get this right?

That’s what I can’t wrap my head around. Is the above in some respect the equivalent of this: “You will never reach point B from point A as you must always get half-way there, and half of the half, and half of that half, and so on.”

Intellectually, that seems to make sense. But if point A is my front door and point B is the mail box, I make it every time. And then back again.
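The halving-the-distance setup can be tallied exactly; a quick Python sketch (my own): after (n) halvings the shortfall is exactly (\frac{1}{2^n}), which is why the infinite series (\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots) is said to sum to the whole distance.

```python
from fractions import Fraction

# Distance covered by always going half the remaining way:
# 1/2 + 1/4 + ... + 1/2^n. The shortfall after n steps is exactly 1/2^n.
for n in (1, 10, 20):
    covered = sum(Fraction(1, 2**k) for k in range(1, n + 1))
    assert 1 - covered == Fraction(1, 2**n)
```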

So I’m wondering if, “for all practical purposes”, 1 = .999…has any real meaning in our lives. Such that, for example, if someone assumed they were not equal, and this turned out to be true, it would actually make their life different in some way.

But, here again, I am more than willing to concede this revolves entirely around my ignorance of the math. Analogous, perhaps, in a larger sense, to someone who does understand Einstein’s space-time continuum wondering how his or her life might be different if Einstein turned out to be wrong.

What kind of question is that? What exactly are you asking?

It’s neither of those.

Well, you multiplied (L) by (2). Not sure what you expected.

(L + 1 - 1) is fine.

The “result” isn’t useful in itself. It’s a curiosity and an exercise in problem solving and reasoning.

That particular problem produces a couple of interesting conclusions about mathematics and the universe.

Either infinite series converge - which is useful to know for solving other problems (and it would lay to rest some of the arguments in this thread). Or the universe is not infinitely divisible - at some point trying to go “half the distance” produces a quantum jump to the end point.

In itself, it means nothing in our lives.

Science turns out to be wrong all the time.

Life goes on in spite of that. But there are repercussions in wasted effort, money and lives.

For example, useless medical procedures and treatments that either do nothing or cause harm. Money wasted on medical research that is going in the wrong direction.

He’s asking exactly the same question I was asking, which you refuse to answer. You yourself tell us: L is not a natural, it’s not an integer, it’s not a rational, and it’s not a standard real. Yet you seem to be obstinately elusive when it comes to explaining what kind of number L is.

Can’t speak for Carleas, but I would expect that multiplying L by 2 is impossible. If you can’t even add 1 to it, how can you multiply it by 2? Or is multiplying it by 2 possible despite not being able to add 1? ← Now that would be bizarre indeed!

I think Carleas’s point is that there should be nothing wrong with dividing L by 2. And there should be nothing wrong with adding half L to half L. Combining those allows you to do: (\frac{L}{2} + \frac{L}{2} = L ).

Then there should be nothing wrong with multiplying half L by 2. Thus you should be able to do it to both fractions in the addition, which gives you: L + L.

But then you get a result which, by definition, you shouldn’t get: L + L = 2L.

Think of it this way: I have L cats and L dogs–that’s possible, right?–but then I have 2L pets.

There are numbers that function something like the way you’re describing (L), but you’re avoiding identifying (L) as any of those. It clearly doesn’t function like other numbers (as you’ve acknowledged), but you’re really only providing an account of how it does function when its function allegedly prevents what we’re trying to do.

You’re claiming that a thing exists which makes an equality false, and there are two responses to that: 1) the thing doesn’t exist, or 2) the thing doesn’t make the equality false. To explore either, we need a rigorous definition of the thing you’re positing.

How can that be? L+1 is undefined! You can’t subtract 1 from an undefined quantity and get a defined quantity.

Okay, if everyone sophisticated in math here can agree, it’s good to know.

And the part “the human condition” plays in that universe?

It’s the extent to which these “solved problems” ramify on human interactions at the juncture of science, philosophy and theology, that most interest me. Given the distinctions that I make between the either/or and the is/ought worlds.

Given the extent that you or any one of us here can possibly know this.

Meaning, of course, there is a path that one can take to be right. And, in regard to the either/or world, it always intrigues me when confronted with seeming antinomies…from the existence of something instead of nothing, to quandaries embedded in the determinism/free will debate, to the evolution of lifeless matter into living matter into self-conscious matter, to all of the fascinating speculations about sim worlds and dream worlds and solipsism and a matrix reality.

And, in a way that both intrigues and baffles me, this.