Is 1 = 0.999... ? Really?

Is 0.999… a number or a pattern that can match numbers?

If it is a pattern then does it match only one number?
If it does not, then how can we say it equals 1?
We might be able to say “it is possibly equal to 1”, and only under the condition that it matches 1, but not that it is strictly, unambiguously, equal to 1.

Does it match 1?
It matches numbers 0.9, 0.99, 0.999, etc., but it never matches 1.

So it is neither strictly equal to 1 nor possibly equal to 1.

What can be said is that it is approximately equal to 1.
The error can be said to be 0.111…

The other problem is that the pattern that is 0.999… is not clearly defined. Does it match 0, for example? One is intuitively inclined to say “no” but this isn’t very reliable.

We can define the pattern 0.999… in the following manner:

{0.9, 0.99, 0.999, …}

The list gives us an idea of which numbers the pattern matches and which it does not. For example, defined in this way, we know the pattern does not match 0 and we also know it matches unlisted numbers such as 0.9999.

Patterns can match one or more numbers. There may be a set they can match or there may be no such set.

This pattern matches more than one number, and there is no set of everything it matches (such a set is called an “infinite set” because it does not exist).

Pattern simply means “one of the permitted values”.
It can be represented using sets.

An example:
sqrt(9) = {+3, -3}

2 + 2 + {0, 1} = {4, 5}
2 + 2 + {0, 1, 2, …} = {4, 5, 6, …}
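This set arithmetic can be sketched in code, reading a “pattern” as a set of permitted values and lifting addition over every combination (the add helper below is a made-up illustration of the notation, not anything standard):

```python
# Sketch: treat a "pattern" as a set of permitted values and lift
# addition elementwise over every combination of operands.

def add(a, b):
    """Sum of two "patterns": every x + y with x from a and y from b."""
    return {x + y for x in a for y in b}

# 2 + 2 + {0, 1} = {4, 5}
result = add(add({2}, {2}), {0, 1})
print(sorted(result))  # [4, 5]
```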

So let’s apply this to one of the proofs from the Wiki. The proof goes something like this:

x = 0.999…
10x = 9.999…
10x = 9 + 0.999…
10x = 9 + x
9x = 9
x = 1
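As a purely numeric aside (an illustration, not an argument for either side): comparing 10 * x_n with 9 + x_n on the finite truncations 0.9, 0.99, … shows that at every finite stage the two sides differ by 9 * (x_n - 1), a gap that shrinks tenfold with each extra 9.

```python
# Compare 10*x_n with 9 + x_n for the truncations 0.9, 0.99, 0.999, ...
# Algebraically 10*x_n - (9 + x_n) = 9*(x_n - 1), so the gap shrinks
# by a factor of 10 with every additional 9.

for n in range(1, 6):
    x_n = float("0." + "9" * n)  # 0.9, 0.99, 0.999, ...
    lhs = 10 * x_n
    rhs = 9 + x_n
    print(n, lhs, rhs, lhs - rhs)
```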

Let’s try and follow the steps.

x = {0.9, 0.99, 0.999, …}
10x = 10 * {0.9, 0.99, 0.999, …}
10x = {9, 9.9, 9.99, …}

On the other hand…
9.999… = {9.9, 9.99, 9.999, …}

10x, it appears, is not equal to 9.999…
9.999… does not match 9.
Whereas 10x matches it.

In reality:
10x = {9, 9.999…}

But if we say that 9.999… = {9, 9.9, 9.99, 9.999, …} this will no longer be a problem.
However, the problem will not go away, it will merely change its location.

Now, the problem is that 9 + x does not equal 9.999…
9 + {0.9, 0.99, 0.999, …} = {9.9, 9.99, 9.999, …}
It does not match 9 whereas 9.999… matches 9.

In reality:
{9, 9 + x} = 9.999…

If we redefine 0.999… to match 0 the problem will disappear from this place but it will move back to the first place.

In other words, no matter how we define the two patterns used in the proof (0.999… and 9.999…) the logic remains invalid.

The above can be simplified by saying that in order for the proof to be correct 10x must be equal to 9 + x. Which it isn’t. In reality, 10x > 9 + x.

x = 0.999…

Depending on the definition of the pattern this can be one of the two:
x = {0.9, 0.99, 0.999, …}
or
x = {0, 0.9, 0.99, 0.999, …}

10 * {0.9, 0.99, 0.999, …} = {9, 9.9, 9.99, …}
9 + {0.9, 0.99, 0.999, …} = {9.9, 9.99, 9.999, …}
Clearly, 10x =/= 9 + x.
Instead, 10x = {9, 9 + x}.

10 * {0, 0.9, 0.99, 0.999, …} = {0, 9, 9.9, 9.99, …}
9 + {0, 0.9, 0.99, 0.999, …} = {9, 9.9, 9.99, 9.999, …}
Clearly, 10x =/= 9 + x.
Instead, 10x = {0, 9 + x}.

10x is one term (possibility) bigger than 9 + x.

It’s a representation of a number. A number is an abstraction. You can’t point to the number 4, but you can type the symbol ‘4’ that represents the number 4. Didn’t they teach you in school the difference between a number and a numeral? Maybe that was the old new math and they don’t explain that anymore.

It’s a representation of the number 1, just as ‘1’ is a representation of the number 1.

.999… is a shorthand for the sum of the infinite series 9/10 + 9/100 + 9/1000 + … In freshman calculus they prove that this is a geometric series whose sum is exactly 1.
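Those partial sums are easy to check with exact arithmetic; a small sketch using Python fractions (the numbers here are exact, but the convergence claim of course rests on the calculus, not on this snippet):

```python
from fractions import Fraction

# Partial sums of 9/10 + 9/100 + 9/1000 + ...
# After n terms the sum is exactly 1 - 10**-n, so the shortfall
# can be made smaller than any positive number you name.
s = Fraction(0)
for k in range(1, 8):
    s += Fraction(9, 10**k)
    assert 1 - s == Fraction(1, 10**k)  # gap after k terms is exactly 10**-k

print(s)  # 9999999/10000000 after 7 terms
```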

In math it’s proved that it’s strictly, unambiguously equal to 1.

‘2 + 2’ never matches ‘4’ but they both represent the same number. Right?

It’s perfectly sensible to have two distinct representations of the same number, for example 2 + 2 and 4.

It’s proven in freshman calculus that .999… = 1. In undergrad real analysis, which is taken by math majors, they drill this down to first principles by constructing the real numbers then rigorously defining limits.

It’s clearly defined as the sum of the series 9/10 + 9/100 + … That’s what any decimal expression means. For example pi = 3.14159… = 3 + 1/10 + 4/100 + …
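That digit-by-digit reading can be made mechanical; a toy helper (made up for illustration, with pi truncated to a few digits):

```python
# Read a decimal string as the finite sum it abbreviates:
# the digit d at the k-th decimal place contributes d / 10**k.

def decimal_as_sum(s):
    whole, frac = s.split(".")
    total = float(int(whole))
    for k, d in enumerate(frac, start=1):
        total += int(d) / 10**k
    return total

print(decimal_as_sum("3.14159"))  # approximately 3.14159
```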

2 + 2 doesn’t match 4 but I don’t see you questioning that 2 + 2 = 4. Why is that? It must be that 2 + 2 = 4 is familiar to you, so you don’t realize that you’re agreeing that a given number can have multiple representations. But .999… is less familiar, so it’s confusing until you become familiar with it.

It’s not true that .999… is that particular set, but it’s not too wildly off the mark either so I can live with it, as long as you define what you mean. Can you define what you mean?

Does the pattern 2 + 2 ever match the pattern 4? Yet I’m sure you’d agree they both represent the same number. Can you see that this is no different than .999… = 1 except for the matter of familiarity?

Unclear what you mean. You’ve decided to make up a theory in which real numbers are sets. As I say this is actually true mathematically, but you are picking the wrong sets. You’d have to expand on your theory for it to make sense.

Statement without proof, and without the basic definitions that would make it meaningful.

Trouble ahead.

Mathematically untrue. The square root function by definition is the positive result. sqrt(9) = 3. It’s true that there are two solutions to the equation x^2 = 9, but that’s not what you said and you are showing some confusion here.

I don’t think that’s right. What can you possibly mean here?

I’m afraid I need to ask you to clarify your notation else this doesn’t make sense to me. What is a number plus a set?

This is not really a proof because it uses a fact that is far more sophisticated than the mere fact that .999… = 1. It’s unfortunate that the Wiki page doesn’t make this clear.

This right here is the problem with the Wiki “proof.” What principle of math allows you to multiply an infinite series by a constant and claim that the result is the same on both sides? The distributive law only applies to FINITE sums. You can’t actually apply this as you have until you prove a theorem that says you can do it. And the proof of that theorem is essentially the same proof that .999… = 1. So this proof isn’t wrong but it’s circular. It’s really just a heuristic plausibility argument for beginners.

Now you’re applying your made-up definition of equating real numbers to sets, but you haven’t defined your idea sufficiently. Why should x be that particular set? What if we do the proof in binary instead? Do you get a different set?

Well, 2 + 2 doesn’t match 4 but I don’t see anyone claiming that 2 + 2 isn’t 4. It’s perfectly sensible for a number to have multiple representations.

Reality? Meaning what? What if we do the same proof in binary? Is that a different reality? Or just a different representation?

At this point you are confusing yourself by using made-up concepts that you haven’t defined. What do you mean that .999… is some particular set?

2 + 2 doesn’t match 4. We’ve been through this. Why doesn’t the obvious difference between 2 + 2 and 4 bother you? Isn’t it just because you’re familiar with it?

You’re completely making up notation now.

You’ve gone down a rabbit hole. You need to go back and carefully define your set notation. But why couldn’t there be two set notations for the same number? After all, in binary, 0.1111… = 1, but now you’d have different sets.

Well you just proved that 2 + 2 isn’t 4. They obviously don’t look the same. Right?

“0.999…” is a representation of a quantity that has no end, and thus isn’t really a number at all. It represents an inexact quantity, unlike true numbers which represent exact quantities.

James, I’d be happy to discuss this subject with you if you’d like to make substantive points. But I won’t be responding to arguments based on denial of standard, accepted mathematics. Not to mention arguments from insults, irrelevancies, and non-sequiturs, which were the bulk of your replies to me a few months ago before I simply gave up replying to you.

0.999… is a shorthand for the infinite series 9/10 + 9/100 + … which is shown in freshman calculus to be a geometric series whose sum is 1. If you have a calculus book that says something else, feel free to provide the reference.

.999… is a “quantity that has no end and thus isn’t really a number at all” is your own personal mathematics. You are entitled to the confused contents of your own mind. But if you have any interest in discussing mathematics with people outside your own mind, at some point you have to make contact with standard, universally accepted math.

It’s like someone saying that gravity is the force that makes stuff fall up. Helium filled balloons for example. What would be the point of trying to argue against someone stating such nonsense? Perhaps they have some kind of interesting philosophical point to make; but what obligation does anyone have to try to figure it out?

By the way if .999… “has no end” and therefore isn’t a number, do you think pi is a number? Its decimal representation has no end either. What about 1/3 = .333…? Is that a number? Yes? No?

Do you understand the distinction between a number and a particular representation of a number?

You made a very interesting claim. Most .999… = 1 denialists agree that .999… is SOME number, just not 1. But you are saying that it represents no number at all. But the set {.9, .99, .999, …} is a nonempty set that’s bounded above (by 1, for example). Therefore by the completeness of the real numbers, that set has a least upper bound.
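That least-upper-bound claim is easy to illustrate with exact arithmetic (a sketch; it illustrates the boundedness, it does not substitute for the completeness axiom):

```python
from fractions import Fraction

# The set {9/10, 99/100, 999/1000, ...}: every element is below 1,
# the elements strictly increase, and the gap to 1 shrinks as 10**-n.
terms = [Fraction(10**n - 1, 10**n) for n in range(1, 9)]

assert all(t < 1 for t in terms)                     # 1 is an upper bound
assert all(a < b for a, b in zip(terms, terms[1:]))  # strictly increasing
print(float(1 - terms[-1]))  # gap after 8 terms: 10**-8
```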

Are you denying the completeness of the real numbers? You’ve made a really extraordinary claim here to say that .999… represents no number at all. You are denying the real numbers entirely. en.wikipedia.org/wiki/Completen … al_numbers

X=.999…
10X-X=9X
9.999…-.999…=9X
9=9X
1=X

Not really, but convincing enough for those who don’t know better.

That is a particular type of logical fallacy (assuming the conclusion, i.e. begging the question). There are better arguments than that.

9 = 9 * 0.999… ???

9 times a non-ending number is still a non ending number. Who told you that it magically became a finite, bounded number?

Lol.

How very dogmatic of you.

You’re so smart. You understand everything.

Sure, ‘4’ is a glyph that can represent (i.e. match) things such as a basket that contains an apple, a pineapple, an orange and a banana. Among many other things.

The problem is that 0.999… does not represent any such thing. It does not represent a number. What it represents is an indefinite value. Moreover, this indefinite value is not strictly equal to any number, let alone 1, and it is not even possibly equal to any number, let alone 1.

What is an indefinite (or generic) value?
I’m not sure if such a concept has been formalized in mathematics yet.

It basically means “any one of the permitted values”.
It can be represented using a set.

2 + 2 + {0, 1} means “2 plus 2 plus either 0 or 1”.
The sum equals {4, 5} because it can either be 4 or 5.

Another example is square root of 9, which is not strictly equal to any definite value. Instead, it is strictly equal only to an indefinite value that is {3, -3}, which means it can either be 3 or -3. And it’s not even an indefinite number, because -3 is neither a number nor something that is strictly equal to a number. It’s a pattern. Basically, a shorthand for 0 - 3, which has no relation we call “strict equality” with any number (you can’t subtract something from nothing).

0.999… is an indefinite value that does not even have clearly defined rules i.e. we do not know exactly which values are permitted.
For example, does it allow 0?
I assume it does not.

0.999… allows values such as 0.9, 0.99, 0.999 and so on.
These aren’t even numbers.
Decimal numbers aren’t numbers. They are sums.
None of these values are 1 or equal to 1, so there isn’t even possible equality between 0.999… and 1.

0.000… is different because, although it allows many different values, each of which is a sum, every single one of them is strictly equal to 0. Therefore, 0.000… = 0.

I admit, you’re extremely smart, but not that smart.

It does not matter how many times you repeat it, and how many sources agree with you, it will never become reality.

0.999… does not represent 1. It’s a template for numbers such as 0.9, 0.99, 0.999 and so on. It has nothing to do with 1.

Your intelligence is too much for me.

Any finite decimal number is a shorthand for a finite sum, so yes, an infinite decimal number would be a shorthand for an infinite sum.

The problem is that these are not two types of sums. “Infinite sum” is not a sum. It’s a class or a form or a pattern of sums. Not even a set of sums, because not every class of elements has a corresponding set of elements (when the corresponding set is missing, they call it “infinite” because they cannot face the fact that there is no such set).

The so-called “infinite sum” is an indefinite value and not every indefinite value has an equal definite value.

{1} = 1
Because only one value is possible.

{1, 1, 1} = 1
Because every possible value is one and the same value.

{1.0, 1.00, 1.000} = 1
Because although these are different values, each one equals one and the same number.

However!
{0.9, 0.99, 0.999} =/= 1

And certainly!
{0.9, 0.99, 0.999, …} =/= 1

You can say they are CONVERGING TOWARDS one.
But you CANNOT say they are EQUAL to one.

In reality, it’s not true.

‘2 + 2’ and ‘4’ are two different mathematical objects. However, that does not mean they are not equal. ‘2 + 2’ is a sum – yes, a pattern – that matches only one number and that number is ‘4’. So yes, they are equal.

But 0.999… isn’t equal to 1.
Sorry.

Your brain is on fire.

‘4’ has many representations.
Think of 4.0, 4.00, 4.000 and so on.
Think of 4.000…
Think of 4/1, 8/2, 16/4, 32/8 and so on.
What exactly is your point?

0.999… simply does not equal 1.

No such thing as Pi.

That must be it.

You’re the one who is getting confused, don’t project it on me. The made-up concepts (as opposed to concepts that were not made up but merely picked up from others wholesale) are very clearly defined in my head.

And you’re totally a genius and not at all an arrogant minion of the system furiously trying to preserve the status quo.

Perhaps you should stop telling other people what to do and pay more attention to what they are saying.

Totally.

1=0.999…

This is due to what it means to have an asymptotic regression. Numbers aren’t just brute glyphs for symbolizing simple quantities of stuff in a basket. And I’ve already shown the proof of 1=0.999…, it only takes four lines to prove it.

Case closed.

Yes, they are. They are representations of quantities. And when they are followed by “…”, they relay that no decimals can be presented to represent the value being referenced because it is not a fixed quantity, but rather a ratio that is not representable with decimals.

Yeah, but now try to come up with a valid proof.

What does it mean for an infinite series to be equal to some number?

I understand that a series can have a limit, but we aren’t talking about limits here, are we?

What does it mean for an infinite sum such as 0.9 + 0.99 + 0.999 + … to be equal to some number?

We know that 0.000… is an infinite sum and that it’s equal to 0.

I have tried to answer this question through my concept of indefinite values. Basically, an indefinite value is said to be equal to some definite value if the class that defines its possible (i.e. permitted) values is defined in such a manner that we can deduce that its possible values converge at the same value.

An indefinite value such as {1, 1.0000, 4/4, 8-7, 1*1+0} consists of different values each one of which converges at 1.

0.000… converges at 0.
1.000… converges at 1.
And so on.

But 0.999… does not converge at any value.
It’s strictly speaking divergent.

Is pi a number?

Why do you say this? The maximum difference between 1.000… or 0.999… and 1 is the same at each decimal place.

If we don’t know what follows in subsequent decimal places, we can say that 0.99 is at most 0.01 away from 1, and 1.00 is at most .009999… away from 1.

and similarly 0.9999 is at most .0001 away from 1, and 1.0000 is at most 0.0000999999… away from 1.

The two series converge at the same rate towards the same value: 1. 1.000… converges from above, and 0.999… converges from below, but they are both approaching 1.

Even leaving open the question of whether 0.00999… = 0.01…, it at least seems clear that 0.999… is not divergent.

Yesterday, I was wondering if you were going to get into this (again). :sunglasses:

No. It is actually just a ratio. The truncated approximations of it are numbers.

That didn’t make any sense the last time you stated it. It doesn’t seem to have improved.

No. We can say that it is exactly .01 away from 1.00

Ummm… No. 1.00 is exactly 0.00 away from 1.

No. 0.9999 is exactly 0.0001 away from 1.

Again, there is no “at most” about it.
1.0000 is exactly 0.0000 away from 1.0000.

And to continue;
1.000… is exactly 0.000… away from 1.000…

But you want to claim that 1 - 0.999… is EXACTLY 0.000…

The problem is that you cannot prove it because it isn’t true.

One series is exactly equal at all times, while your proposed series is only “at most” equal at all times. “At most equal”, injecting unknowns and doubt into mathematical identities, is not “EQUAL”.

Can’t believe there are 21 pages of this. Lol.

Yes we are talking exactly about limits.

Excellent question. The sum of an infinite series of real numbers is defined as the limit of the sequence of partial sums .9, .99, .999, .9999, etc.

We know each partial sum exists because it’s the eminently sensible sum of finitely many terms. So now at least we’ve reduced the crazy idea of an “infinite sum,” whatever that is, to the convergence of a sequence of real numbers to a limit.

We now define the limit L of a sequence via the usual idea of saying that given some tiny positive real number epsilon, we can find a whole number N such that if n > N then the n-th term of the sequence is within epsilon of L.

It’s clear that 1 is the limit of the sequence .9, .99, etc. simply by virtue of the definition. So by definition, the sum of the original infinite series is 1.
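The epsilon–N definition can be made concrete for this particular sequence. Since the n-th partial sum falls short of 1 by exactly 10^-n, a suitable N is computable for any epsilon (a sketch; the function name is made up):

```python
from fractions import Fraction

def n_for_epsilon(eps):
    """Smallest N such that n > N implies |s_n - 1| < eps,
    where s_n = 0.99...9 with n nines, so |s_n - 1| = 10**-n."""
    n = 0
    while Fraction(1, 10 ** (n + 1)) >= eps:  # gap after n+1 terms still too big
        n += 1
    return n

for eps in (Fraction(1, 100), Fraction(1, 10**6)):
    print(eps, "->", n_for_epsilon(eps))  # N = 2 and N = 6
```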

Please note, I am NOT saying this is “true with a capital T” or true about the physical world we live in. This is a purely formal exercise in abstract mathematics. I make no ontological claims about mathematics. I find it helpful to think of mathematics as a purely formal game, like chess. And it is a perfectly true, and perfectly inoffensive fact that in the formal game of math, the statement that .999… = 1 is simply a logical consequence of the way we set up the definitions.

It’s really not anything that should trouble anyone.

Now it is interesting that trigonometric series came directly out of Joseph Fourier’s studies of the distribution of heat in an iron rod. He came to invent these series, which are infinite sums of sines and cosines. A lot of equations involving the number pi, as it happens.

And then Fourier series became the theoretical foundation of digital signal processing, which allows us all to communicate online.

So there is some mysterious relationship between our formal, abstract math and the real world.

.999… = 1 is a logical deduction in math, pretty much because the definitions are set up exactly to make it come out this way. I don’t see how anyone can disagree with that since it’s objectively true. You just have to consult a calculus text or website.

What that means in the real world is a philosophical mystery. If you think it’s false in the real world you get no disagreement from me.

What I don’t understand is how anyone can argue with the .999… = 1 formalism itself. It’s like arguing against the way the knight moves in chess. You can’t argue with it, it’s just the way the game’s set up. Why does this upset people so?

X=0.999…

10X-X=9X
9.999… - 0.999… = 9X
9=9X
1=X

Why is this thread not finished already? C’mon, people. Ignore the trolls. Or laugh at them, from time to time.

That’s essentially a circular proof. What allows you to start with x = .999… and conclude that therefore 10x = 9.999…? It can’t be the distributive law because that only applies to finite sums.

To make this legit, you have to prove the theorem on term-by-term multiplication of a convergent series by a constant. And in fact the theorem isn’t even valid UNLESS the series in question converges.

So to multiply by 10 you already need to develop a theory of the convergence of infinite series; of which the fact that .999… = 1 is a trivial consequence.
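For contrast, the finite version of that term-by-term multiplication is just the distributive law and holds for every partial sum; it is only the passage to the limit that needs the convergence theorem. A quick sketch:

```python
from fractions import Fraction

# On finite partial sums the distributive law is unproblematic:
# 10 * (a_1 + ... + a_n) == 10*a_1 + ... + 10*a_n for every n.
# Extending this to the infinite series is the step that requires
# a theorem about convergent series.
terms = [Fraction(9, 10**k) for k in range(1, 10)]
for n in range(1, len(terms) + 1):
    assert 10 * sum(terms[:n]) == sum(10 * t for t in terms[:n])
print("finite distributivity verified for", len(terms), "partial sums")
```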

Your proof is actually circular. It’s really more of a plausibility argument for students.

I note that on this point, James and I are in agreement. That alone should tell you that you’re on shaky ground :slight_smile:

In my opinion the reason this discussion never ends online is related to the fact that it took mathematicians 300 years to work out the theory of limits, dating from Newton’s Principia in 1687 to the late nineteenth century when Weierstrass and others put the finishing touches on our modern understanding of limits.

Or even more than two millennia old, if you want to go back to Archimedes, who well understood limits but lacked the formalism. These are very deep and very old questions in mathematics and physics. It’s natural for there to be enormous curiosity and confusion around the subject of limits.