|
So, having essentially reached 100 posts, I decided to write a blog about constructing a "number system" where 0.9999... isn't the number 1!
One sees a discussion from time to time about whether (or why) 0.9999... = 1, and it often isn't explained well, for various reasons. The problem with the usual proof that 0.9999... = 1 is that it involves a "limit" (in the mathematical sense), which isn't in itself complicated, but takes time to explain. Here, instead of explaining why indeed 0.9999... = 1, we shall see what would happen if we viewed (real) numbers purely as sequences of digits. The construction of this number system is a bit tedious, so patience is needed.
Our working object in this number system is an infinite sequence of digits (0, 1, 2, ..., 9) with a decimal dot somewhere in between, i.e. our object is what is usually used as a representation of a real number. We shall call each such sequence an S-number ('S' stands for sequence). Furthermore, we shall write 1.234 instead of 1.23400000..., 1234 instead of 1234.0000... (and similarly for any other such sequence).
So that we can work more easily with these sequences, we set the notation as follows: if 'a' denotes any such sequence, then let a(0) be the first digit to the left of the decimal dot (of the sequence a), a(1) the second digit to the left of the decimal dot (if there is none, set a(1) to be zero), etc. Analogously, let a(-1) be the first digit to the right of the decimal dot, a(-2) the second digit to the right of the decimal dot, etc. Note that only finitely many a(n) with n > 0 can be non-zero.
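To make the indexing concrete, here's a small Python sketch (the name make_s and the functional representation are just illustrative, not part of the construction): an S-number becomes a function from integer positions to digits.

```python
def make_s(int_part, frac_part=""):
    """Model an S-number from digit strings, e.g. make_s("178", "53") for 178.53.
    a(n): n >= 0 indexes digits left of the dot (a(0) is the units digit),
    n < 0 indexes digits right of the dot; unwritten digits are 0."""
    def a(n):
        if n >= 0:
            return int(int_part[len(int_part) - 1 - n]) if n < len(int_part) else 0
        k = -n - 1  # a(-1) is frac_part[0], a(-2) is frac_part[1], ...
        return int(frac_part[k]) if k < len(frac_part) else 0
    return a

a = make_s("178", "53")               # the S-number 178.53
nines = lambda n: 9 if n < 0 else 0   # 0.9999... has infinitely many non-zero digits, so it needs its own rule
```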
Example: a = 0.99999... => a(n) = 0 for n >= 0 and a(n) = 9 for n < 0, where n ranges over all integers.
Now notice that in our number system 0.99999... and 1.00000... are different S-numbers by definition (since they are different sequences)! You may call this a cheat, and it really would be if it ended here, but it doesn't. One has to realize that we haven't constructed anything yet! We just said: let us consider these objects. To consider a collection of objects a "number system" we have to at least have a (binary) operation on that collection (such as addition on the real numbers).
So now let's define addition on our S-numbers. We shall define it in the most natural way, by column addition. However, first we define "modular addition" and "carry addition", which will aid us a great deal in defining addition.
1) Modular addition
Let a and b denote two arbitrary S-numbers. Now we define the S-number c by defining the digits c(n), where n ranges over the integers:
c(n) := remainder of integer division (a(n) + b(n)) by 10 (notice this is a decimal digit!)
This completely determines c. Notation: c = a + b (mod 10). Notice: if a(n) + b(n) is less than 10 for every n, then this is just regular addition! Otherwise, if it happens for some n that a(n) + b(n) >= 10 then we shall say that „Carry occurred at the position n.“.
Example: a = 178.53, b = 987.654 => c = 55.184 = a + b (mod 10), or written out more nicely:
  178.530
  987.654  +(mod 10)
  055.184
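In code, modular addition is just a digitwise sum mod 10. A quick Python sketch (mod_add is my name for it; the two example numbers are the ones from above, written as functions from positions to digits):

```python
def mod_add(a, b):
    """Digitwise sum mod 10 -- 'a + b (mod 10)' -- with no carrying at all.
    a and b are S-numbers given as functions from integer positions to digits."""
    return lambda n: (a(n) + b(n)) % 10

# the worked example: 178.53 + 987.654 (mod 10) should give 055.184
a = lambda n: {2: 1, 1: 7, 0: 8, -1: 5, -2: 3}.get(n, 0)          # 178.53
b = lambda n: {2: 9, 1: 8, 0: 7, -1: 6, -2: 5, -3: 4}.get(n, 0)   # 987.654
c = mod_add(a, b)
```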
2) Carry addition
Let a and b denote two arbitrary S-numbers. Again we define c by defining c(n):
c(n) := quotient of integer division (a(n-1) + b(n-1)) by 10 (this is 0 or 1)
Notation: c = a + b (carry)
Example: a = 1054.23, b = 9.8 => c = 11 = a + b (carry)
  1054.23
  0009.80  +(carry)
  0011.00
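And the carry sequence in the same functional style (carry_add is my illustrative name; the example is the one just above):

```python
def carry_add(a, b):
    """'a + b (carry)': c(n) = (a(n-1) + b(n-1)) // 10, i.e. a 1 one position
    to the left of every column where a carry occurred."""
    return lambda n: (a(n - 1) + b(n - 1)) // 10

# the worked example: 1054.23 + 9.8 (carry) should give 11
a = lambda n: {3: 1, 2: 0, 1: 5, 0: 4, -1: 2, -2: 3}.get(n, 0)   # 1054.23
b = lambda n: {0: 9, -1: 8}.get(n, 0)                            # 9.8
c = carry_add(a, b)
```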
Now we are ready to start talking about defining addition of two S-numbers.
First recall how we add two "normal" (real) numbers with finitely many non-zero digits, given in decimal notation. One starts at the least significant non-zero digit (i.e. the rightmost), adds the two digits, but writes down only the unit digit of the sum (i.e. modular addition!) and carries a 1 into the column to the left. Then we add the digits (and the carry) in that column, write the new carry in the next column, etc.
The main problem we notice with this algorithm is that we have to start with the least significant non-zero digit, and that may not exist in general (e.g. 0.9999...). We need to modify the algorithm. The idea is as follows. Let a and b be the two sequences we want to add. Define:
c := a+b (mod 10), d := a+b (carry) -> if d is a zero sequence, then define (a+b := c) and we are done, otherwise continue
e := c+d (mod 10), f := c+d (carry) -> if f is a zero sequence, then define (a+b := e) and we are done, otherwise continue
and so on...
(The idea is that the carries (here d and f) act as a sort of correction to our sums (c and e, respectively); the carries also record where "the carry occurred" during modular addition. The question is whether we can make the carries insignificant enough, ideally zero. We shall see that we cannot always make the carries zero, but we can get them as close to zero as we want, and that this is actually enough to define our sum.)
We can immediately see a problem with this algorithm: what if it doesn't end in a finite number of steps? And it really doesn't have to: for an exercise one can easily show that trying to add 0.9999... and 0.01001000100001000001... with this algorithm will yield a never-ending algorithm. Despite this flaw, we shall see that this algorithm is good enough for our purpose of defining the sum of two S-numbers.
For practical purposes (so we can work with our algorithm) we set the notation. If a and b are two S-numbers we define the following sequence of S-numbers:
a_0 := a, b_0 := b
a_1 := a_0 + b_0 (mod 10), b_1 := a_0 + b_0 (carry)
...
a_[k+1] := a_k + b_k (mod 10), b_[k+1] := a_k + b_k (carry)
(Indices are put in square brackets to avoid confusion with the notation a(n). The expression a_[k+1](n) thus denotes the (n+1)-th digit to the left of the decimal point of the S-number a_[k+1].)
Example: Let's add a = 0.989 and b = 0.011; we want to get 1 (as with normal addition).
a_0 = 0.989, b_0 = 0.011
a_1 = 0.990, b_1 = 0.010
a_2 = 0.900, b_2 = 0.100
a_3 = 0.000, b_3 = 1.000
a_4 = 1.000, b_4 = 0.000 -> stop!
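The whole algorithm fits in a few lines of Python. This is a sketch under two loud assumptions: the zero test is only checked on a finite window of positions, and we simply give up if the algorithm doesn't stop within a few steps (add, mod_add and carry_add are my names):

```python
def mod_add(a, b):
    return lambda n: (a(n) + b(n)) % 10            # a + b (mod 10)

def carry_add(a, b):
    return lambda n: (a(n - 1) + b(n - 1)) // 10   # a + b (carry)

def add(a, b, window=range(-8, 8), max_steps=8):
    """The adding algorithm: iterate a_[k+1], b_[k+1] until the carry
    sequence b_k vanishes. Only usable when the algorithm actually stops,
    and the zero test is only checked on a finite window of positions."""
    ak, bk = a, b
    for _ in range(max_steps):
        if all(bk(n) == 0 for n in window):
            return ak
        ak, bk = mod_add(ak, bk), carry_add(ak, bk)
    raise ValueError("the algorithm did not stop within max_steps")

# the worked example: 0.989 + 0.011 comes out as 1.000 after four steps
a = lambda n: {-1: 9, -2: 8, -3: 9}.get(n, 0)   # 0.989
b = lambda n: {-2: 1, -3: 1}.get(n, 0)          # 0.011
s = add(a, b)
```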
We'll call this the adding algorithm (of S-numbers). The sequence of S-numbers a_k, when k is a positive integer, is an approximation to our sum, and the sequence b_k is a sequence containing the information where carries occur.
Our goal is to define the S-number s = a+b. If b_k is a zero sequence for some positive integer k, we previously defined s := a_k (and the algorithm stops there). If there is no such k (i.e. the algorithm is never-ending), we need a concept of a stable digit.
Let a and b be two S-numbers and n an arbitrary fixed integer. We say that the sum of a and b has the n-th position stable (i.e. the n-th digit stable) if there exists a positive integer k such that a_[k](n), a_[k+1](n), a_[k+2](n), ... are all equal (remember, the sequence a_k acts as an approximation of our sum and b_k as the error), and we define the n-th stable digit of the sum to be a_[k](n) (which equals a_[k+1](n), a_[k+2](n), ...).
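Even in the never-ending case from before (0.9999... plus 0.01001000100001...) the digits do stabilise, and we can watch it happen numerically. A sketch (run is my name; padding the bottom of the window is an assumption, needed because carries entering from below the window are unknown, but a carry moves only one position left per step, so digits well above the bottom are exact):

```python
def run(a, b, steps, lo, hi):
    """Run the adding algorithm on positions lo..hi for a fixed number of
    steps and return the last approximation a_k as a dict. Digits near lo
    are only trustworthy at positions the boundary error cannot reach:
    a carry moves one position left per step, so pad lo generously."""
    ak = {n: a(n) for n in range(lo, hi + 1)}
    bk = {n: b(n) for n in range(lo, hi + 1)}
    for _ in range(steps):
        na = {n: (ak[n] + bk[n]) % 10 for n in range(lo, hi + 1)}
        nb = {n: (ak[n - 1] + bk[n - 1]) // 10 for n in range(lo + 1, hi + 1)}
        nb[lo] = 0  # assumption: no carry enters from below the window
        ak, bk = na, nb
    return ak

nines = lambda n: 9 if n < 0 else 0                      # 0.9999...
ones_at = {-(k * (k + 3)) // 2 for k in range(1, 12)}    # -2, -5, -9, -14, ...
sparse = lambda n: 1 if n in ones_at else 0              # 0.01001000100001...

# the carry sequences never die out, yet positions -15..1 stop changing:
late = run(nines, sparse, 40, -60, 2)
later = run(nines, sparse, 41, -60, 2)
```

On my reading the stable digits come out as 1.01001000100001..., i.e. exactly what you would hope the sum to be.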
Finally, the sum a+b is well defined if for every integer n the sum of a and b has the n-th position stable, and we set (a+b) := s, where s(n) is the n-th stable digit. Our main theorem can now be stated:
Theorem. Let a and b be two arbitrary S-numbers, then the sum a+b is always well defined.
The proof is easy if one notices the following:
Lemma. In the adding algorithm of a and b, a carry can occur at a certain position at most once (i.e. if n is a fixed integer, then (a_[k](n) + b_[k](n)) >= 10 for at most one non-negative integer k).
Proof. (Lemma) Assume otherwise: a carry occurred twice at some position. Let K be the first step in the whole algorithm at which a carry occurs for the second time at some position, and let n be that position (notice K is strictly positive). Let k < K be the step where a carry first occurred at position n. Then (a_[k](n) + b_[k](n)) >= 10, so a_[k+1](n) <= 8 (since the sum of two digits is at most 18), and b_[k+1](n+1) = 1. Therefore, for a carry to occur at position n again (at step K), a carry must occur twice at position n-1 at steps before K (the carry sequences contain only 0's and 1's, and to push a digit that is at most 8 back over 10, two 1's have to arrive at that position). But this contradicts the choice of K as the first step where a second carry occurs anywhere. Q.E.D.
The theorem is now easily proved: the digit at position n+1 changes only when a carry occurs at position n, and by the lemma that happens at most once, so from some step on the digit no longer changes and is thus stable.
Done! We have addition on our number system! What about subtraction? We shall not define subtraction directly, but rather define negative S-numbers and then define addition between all the combinations of positive/negative S-numbers. I won't do the technical details, I'll just give a brief description how it's done and afterward we shall see some interesting examples.
A negative S-number is defined to be an S-number with a minus sign as a prefix (from now on, the term S-number includes both negative and positive S-numbers). We forbid the zero sequence from having a minus sign. Now we order S-numbers in the natural way: by comparing corresponding digits, with positive S-numbers greater than zero and zero greater than negative S-numbers. The absolute value of an S-number is the S-number itself if it is positive, and its opposite if it is negative (i.e. the same sequence, just without the minus sign).
We define addition between two negative S-numbers -a and -b as the number -(a+b).
Defining addition between a negative and a positive number is basically defining subtraction, and to do that we would have to repeat the whole previous process with mostly minor tweaks (defining a "modular subtraction" and a "carry subtraction"; both the lemma and the theorem remain valid and are proved analogously). The only noticeable difference is that we can at first define a+(-b) only when a > b (for positive a and b), and then define b+(-a) as -(a+(-b)).
Here are a few examples of the analogous operations: 8 – 4 = 4 (mod subtraction), 0 – 9 = 1 (mod subtraction) (because 10 – 9 = 1), 5 – 2 = 0 (carry subtraction), 2 – 7 = 1 (carry subtraction), etc.
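In Python the two subtraction helpers would look like this (mod_sub/carry_sub are my names; conveniently, Python's % already returns a non-negative remainder, which is exactly what 0 – 9 = 1 needs):

```python
def mod_sub(a, b):
    """Digitwise difference mod 10 -- 'a - b (mod 10)', borrows ignored.
    Python's % is floored, so (0 - 9) % 10 == 1, as required."""
    return lambda n: (a(n) - b(n)) % 10

def carry_sub(a, b):
    """Borrow sequence: a 1 one position to the left of every column
    where a(n) < b(n), i.e. where we had to borrow."""
    return lambda n: 1 if a(n - 1) < b(n - 1) else 0

# single-column checks, with constant sequences standing in for one digit:
const = lambda v: (lambda n: v)
examples = [
    (mod_sub(const(8), const(4))(0), 4),    # 8 - 4 = 4 (mod subtraction)
    (mod_sub(const(0), const(9))(0), 1),    # 0 - 9 = 1 (because 10 - 9 = 1)
    (carry_sub(const(5), const(2))(0), 0),  # 5 - 2: no borrow
    (carry_sub(const(2), const(7))(0), 1),  # 2 - 7: borrow
]
```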
Now finally we have something resembling a number system! Let's see some examples.
Example: a = 3.042, b = 2.943; we want to calculate a – b (and we should get 0.099, as with normal subtraction):
a_0 = 3.042, b_0 = 2.943
a_1 = 1.109, b_1 = 1.010
a_2 = 0.199, b_2 = 0.100
a_3 = 0.099, b_3 = 0.000 -> stop!
Example: This is an interesting one: we shall calculate 1 – 0.9999... Let a = 1, b = 0.9999... It follows:
a_0 = 1.0000..., b_0 = 0.9999...
a_1 = 1.1111..., b_1 = 1.1111...
a_2 = 0.0000..., b_2 = 0.0000... -> stop!
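This example can be replayed in a few lines of Python (the helper names are mine, and only a finite window of digits is inspected):

```python
def mod_sub(a, b):
    return lambda n: (a(n) - b(n)) % 10               # a - b (mod 10)

def carry_sub(a, b):
    return lambda n: 1 if a(n - 1) < b(n - 1) else 0  # borrow sequence

one = lambda n: 1 if n == 0 else 0    # 1.0000...
nines = lambda n: 9 if n < 0 else 0   # 0.9999...

# step 1: a_1 = 1.1111..., b_1 = 1.1111...
a1, b1 = mod_sub(one, nines), carry_sub(one, nines)
# step 2: the borrow sequence vanishes, so the difference is a_2 = 0.0000...
a2, b2 = mod_sub(a1, b1), carry_sub(a1, b1)
```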
Interesting, huh? 1 – 0.9999... = 0. But then it follows that 1 = 0.999..., by adding 0.999... to both sides? Wrong! Let's see what's happening:
1) 1 – 0.999... = 0 (add 0.999... to both sides)
2) (1 – 0.999...) + 0.999... = 0 + 0.999... (notice that the brackets cannot be dropped!)
3) (1 – 0.999...) + 0.999... = 0.999...
Example: Here's another similar one. Let a = 0.9999..., b = 0.9999...; we calculate a+b:
a_0 = 0.9999..., b_0 = 0.9999...
a_1 = 0.8888..., b_1 = 1.1111...
a_2 = 1.9999..., b_2 = 0.0000... -> stop!
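This one too can be replayed directly (a minimal sketch with my own helper names; digits are only checked on a finite window):

```python
def mod_add(a, b):
    return lambda n: (a(n) + b(n)) % 10            # a + b (mod 10)

def carry_add(a, b):
    return lambda n: (a(n - 1) + b(n - 1)) // 10   # a + b (carry)

nines = lambda n: 9 if n < 0 else 0   # 0.9999...

a1, b1 = mod_add(nines, nines), carry_add(nines, nines)  # 0.888..., 1.111...
a2, b2 = mod_add(a1, b1), carry_add(a1, b1)              # 1.999..., zero carry
```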
We got: 0.999... + 0.999... = 1.999... = (easily shown) = 1 + 0.999...
We now see that associativity fails for our addition, i.e. the above proves that (a+b)+c = a+(b+c) is not valid in general for S-numbers:
(1 + (–0.999...)) + 0.999... = 0 + 0.999... = 0.999... != 1 = 1 + 0 = 1 + ((–0.999...) + 0.999...)
This is very unlike what one learns in school. It's actually quite anomalous even within mathematics (the only comparable example that comes to mind is multiplication in Schwartz distribution theory)!
Luckily, at least, commutativity (i.e. a+b=b+a) is valid, as easily seen from our definitions.
One thing to observe though: we have only seen associativity fail when negative S-numbers appear, so we can ask whether it holds when adding only positive S-numbers. I was rather lazy and didn't try too hard to prove or disprove this claim, and thus we have a:
Hypothesis. Associativity holds when adding only positive (or only negative) S-numbers.
If the hypothesis is true, the proof, I assume, would be rather technical.
A remark. (for those versed in higher mathematics) Notice that we didn't have to use decimal notation at all in our construction; we could have used e.g. binary digits. This prompts the question whether these number systems are isomorphic (probably not), or e.g. whether there is a monomorphism from the binary number system to the octal one, or vice versa.
Another thing to observe is that we could model (non-negative) S-numbers as an obvious subgroup of the countable product of finite cyclic groups (all of the same order, in our case 10), indexed by the integers. The associated group operation would then coincide with modular addition (and subtraction) of S-numbers. The question is whether there is a practical interpretation of carry addition; maybe then the above hypothesis could be proved/disproved more elegantly.
Final words. Uh, writing meaningful (well, depends on who you ask I guess hehe) blogs is time-consuming, but it was also fun. Hopefully most of you now understand why real numbers aren't sequences of digits (or at least why one wouldn't want them to be), but a somewhat more abstract mathematical structure (where associativity does hold!). Also, when someone challenges the claim that 0.999... = 1, ask them what the number 1 – 0.999... is.
EDITS: -Another example added -Typo -Associativity clarification
|
That's so much work though. I feel that the easiest explanation for why 1 = .999... involves multiplying both sides of 1/3 = .333... by 3 or 1/9 = .111... by 9.
Cool number system though
|
On August 29 2014 06:05 DarkPlasmaBall wrote: That's so much work though. I feel that the easiest explanation for why 1 = .999... involves multiplying both sides of 1/3 = .333... by 3 or 1/9 = .111... by 9. Cool number system though I wanted to view the problem from another perspective, and in such a way that we don't need to know anything except adding integers. The proof you gave assumes a lot of stuff implicitly: the knowledge of the representation of real numbers as a sequence of decimal digits, which in turn requires the knowledge of limits and series.
EDIT: How do you prove 1/3 = 0.3333...?
|
On August 29 2014 06:42 CoughingHydra wrote: On August 29 2014 06:05 DarkPlasmaBall wrote: That's so much work though. I feel that the easiest explanation for why 1 = .999... involves multiplying both sides of 1/3 = .333... by 3 or 1/9 = .111... by 9. Cool number system though I wanted to view the problem from another perspective and also such where we don't need to know anything except adding integers. The proof you gave assumes a lot of stuff implicitly: the knowledge of representation of real numbers in form of a sequence of decimal digits, which in turn requires the knowledge of limits and series. EDIT: How do you prove 1/3 = 0.3333...?
I wasn't aware you had to prove that equality in any way other than simple division... that's pretty much how you show the conversion between decimals and fractions. In this case, 1.000000000... / 3.
|
On August 29 2014 07:39 DarkPlasmaBall wrote: On August 29 2014 06:42 CoughingHydra wrote: On August 29 2014 06:05 DarkPlasmaBall wrote: That's so much work though. I feel that the easiest explanation for why 1 = .999... involves multiplying both sides of 1/3 = .333... by 3 or 1/9 = .111... by 9. Cool number system though I wanted to view the problem from another perspective and also such where we don't need to know anything except adding integers. The proof you gave assumes a lot of stuff implicitly: the knowledge of representation of real numbers in form of a sequence of decimal digits, which in turn requires the knowledge of limits and series. EDIT: How do you prove 1/3 = 0.3333...? I wasn't aware you had to prove that equality in any way other than simple division... that's pretty much how you show the conversion between decimals and fractions. In this case, 1.000000000... / 3.
|
On August 29 2014 07:59 CoughingHydra wrote: On August 29 2014 07:39 DarkPlasmaBall wrote: On August 29 2014 06:42 CoughingHydra wrote: On August 29 2014 06:05 DarkPlasmaBall wrote: That's so much work though. I feel that the easiest explanation for why 1 = .999... involves multiplying both sides of 1/3 = .333... by 3 or 1/9 = .111... by 9. Cool number system though I wanted to view the problem from another perspective and also such where we don't need to know anything except adding integers. The proof you gave assumes a lot of stuff implicitly: the knowledge of representation of real numbers in form of a sequence of decimal digits, which in turn requires the knowledge of limits and series. EDIT: How do you prove 1/3 = 0.3333...? I wasn't aware you had to prove that equality in any way other than simple division... that's pretty much how you show the conversion between decimals and fractions. In this case, 1.000000000... / 3. That's precisely the point, you're using a black box (the simple division algorithm).
I can sleep at night with that
|
As you might know, one interpretation of carrying is that it is a cocycle in group cohomology.
For example, consider addition of 2-digit numbers. Let T=Z/10 and O=Z/10 (where T stands for "tens" and O for "ones"). Every two digit number is an element of T x O. There is a function Carry:O x O ---> T which takes values in {0,1}. This Carry function is precisely the cocycle in Ext(T,O) that represents the extension Z/100.
Is there a similar interpretation of your carry operation in some cohomology or Ext group?
|
On August 29 2014 11:33 Muirhead wrote: As you might know, one interpretation of carrying is that it is a cocycle in group cohomology.
For example, consider addition of 2-digit numbers. Let T=Z/10 and O=Z/10 (where T stands for "tens" and O for "ones"). Every two digit number is an element of T x O. There is a function Carry:O x O ---> T which takes values in {0,1}. This Carry function is precisely the cocycle in Ext(T,O) that represents the extension Z/100.
Is there a similar interpretation of your carry operation in some cohomology or Ext group? I have only a vague feeling of what you're talking about since the closest I got to homology theory was in algebra classes when doing exact sequences. That being said, I have homological algebra class this semester and will surely investigate your interesting observation within a few months!
|
As someone who knows nothing about maths (and didn't follow the post through), why do you get to discard zeros in abbreviation so 1 = 1.0... as opposed to other digits (0.9 = 0.9...)?
Also, why do you get to stick ... at the end of numbers, but not in the middle (i.e. 0.999... + 0.000...1 = 1.000...)? (We're basically saying that ... is repeating that digit for as long as we need more precision, right?)
|
On August 30 2014 02:32 netherh wrote: As someone who knows nothing about maths (and didn't follow the post through), why do you get to discard zeros in abbreviation so 1 = 1.0... as opposed to other digits (0.9 = 0.9...)? I'm not sure I understand the question but since the value of 0 is nothing, it's implied that there's nothing there. 1.00... = 1, whereas 0.99 is not the same as 0.99999 or 0.99999999...
Also, why do you get to stick ... at the end of numbers, but not in the middle (i.e. 0.999... + 0.000...1 = 1.000...)? (We're basically saying that ... is repeating that digit for as long as we need more precision, right?)
"..." implies that all the next numbers are the same. So 0.999... = 0.9999999999999999 and it goes on forever, whereas 0.000...1 means nothing, because where's that "one"? It's not at infinity because it can't be. If you do 0.999... + 0.00000000000001 you end up with 1.0000000000000999..., not 1.
|
Note that there can only be finite non-zero a(n) when n < 0.
mmh, maybe i misunderstood you, but this is only correct for rational numbers, no?
|
On August 30 2014 03:11 Djzapz wrote: On August 30 2014 02:32 netherh wrote: As someone who knows nothing about maths (and didn't follow the post through), why do you get to discard zeros in abbreviation so 1 = 1.0... as opposed to other digits (0.9 = 0.9...)? I'm not sure I understand the question but since the value of 0 is nothing, it's implied that there's nothing there. 1.00... = 1, whereas 0.99 is not the same as 0.99999 or 0.99999999...
So "1" is the same as saying "1.0..." which is the same as saying "1 followed by an infinite series of zeroes". In which case 1 is not the same as 0.9... ("0.9 followed by an infinite series of nines") just by definition (at any digits you compare in the sequence, you're always going to see a 0 and a 9 -> they're different). I don't see how you can say 1 = 0.9... unless you're actually discarding precision somewhere. Maybe they trend towards the same thing as you look with better precision, but they'll never actually get there.
Also, why do you get to stick ... at the end of numbers, but not in the middle (i.e. 0.999... + 0.000...1 = 1.000...)? (We're basically saying that ... is repeating that digit for as long as we need more precision, right?)
"..." implies that all the next numbers are the same. So 0.999... = 0.9999999999999999 and it goes on forever, whereas 0.000...1 means nothing, because where's that "one"? It's not at infinity because it can't be. If you do 0.999... + 0.00000000000001 you end up with 1.0000000000000999..., not 1.
I'm defining 0.0...1 as "0.0 followed by an infinite series of zeros, then one". I.e. however precise you want to be, go one digit further and add a 1, and you'll get to the same number. You seem to be saying you can't get to infinity, but doesn't that just emphasise that 1 = 0.9... isn't correct? (trending towards is not the same as equal to?)
Maybe my notation is bad; I guess I'm trying to define it as the difference between 0.9... and 1.0... in the first place. But similarily, there'd be a number between 0.8... and 1.0... (0.1...2 I guess?)
I suppose what I'm saying is that 0.9... appears to just be a function of the number of digits you look at, not an actual number, so why can't you define a different function that's the difference between 0.9... and 1.0... depending on the number of digits you look at.
|
On August 30 2014 04:44 netherh wrote: So "1" is the same as saying "1.0..." which is the same as saying "1 followed by an infinite series of zeroes". In which case 1 is not the same as 0.9... ("0.9 followed by an infinite series of nines") just by definition (at any digits you compare in the sequence, you're always going to see a 0 and a 9 -> they're different). I don't see how you can say 1 = 0.9... unless you're actually discarding precision somewhere. Maybe they trend towards the same thing as you look with better precision, but they'll never actually get there.
Well the thing is mathematics doesn't work like that, though I'm no mathematician. We agree that three thirds = 1. One third is 0.333... 3*(1/3) = 3*0.333... = 0.999... Now you might argue that the very last .00001 at the end of the sequence is missing and therefore you're missing that last tiny smidge of precision, but I think that mathematically you can just prove that if you have an infinite sequence of nines, physically it's just the same as if you rounded up.
I'm defining 0.0...1 as "0.0 followed by an infinite series of zeros, then one".
That's not a thing. You can't put numbers after infinite zeroes because by saying "after infinite" you're implying that infinite is actually finite and you can say what comes after it.
I.e. however precise you want to be, go one digit further and add a 1, and you'll get to the same number. You seem to be saying you can't get to infinity, but doesn't that just emphasise that 1 = 0.9... isn't correct? (trending towards is not the same as equal to?)
Maybe my notation is bad; I guess I'm trying to define it as the difference between 0.9... and 1.0... in the first place. But similarily, there'd be a number between 0.8... and 1.0... (0.1...2 I guess?)
I suppose what I'm saying is that 0.9... appears to just be a function of the number of digits you look at, not an actual number, so why can't you define a different function that's the difference between 0.9... and 1.0... depending on the number of digits you look at. The problem you have here is that you're thinking in practical terms and things like that are abstract. I can talk about the mechanics but it takes a mathematician to understand why these things work the way they do. Our brains have a big deal of trouble trying to understand how things like infinite works because we can't really make sense of these unknowns which we never deal with in our daily lives.
|
On August 30 2014 04:12 Paljas wrote: mmh, maybe i misunderstood you, but this is only correct for rational numbers, no? Thanks, I meant n > 0.
On August 30 2014 02:32 netherh wrote: As someone who knows nothing about maths (and didn't follow the post through), why do you get to discard zeros in abbreviation so 1 = 1.0... as opposed to other digits (0.9 = 0.9...)?
Also, why do you get to stick ... at the end of numbers, but not in the middle (i.e. 0.999... + 0.000...1 = 1.000...)? (We're basically saying that ... is repeating that digit for as long as we need more precision, right?)
1 is basically just notation for 1.000..., which in turn is just notation for ...00001.000..., which again is just notation for a (both ways) infinite sequence of digits; if 'a' denotes this sequence then a(0) = 1 and a(n) = 0 for n != 0, n integer. 'a' can be viewed as a function from the integers to the digits 0, 1, 2, 3, ..., 9; exactly as you said:
I suppose what I'm saying is that 0.9... appears to just be a function of the number of digits you look at, not an actual number,...
Although, I don't quite understand this question:
...so why can't you define a different function that's the difference between 0.9... and 1.0... depending on the number of digits you look at.
I'm defining 0.0...1 as "0.0 followed by an infinite series of zeros, then one".
That's a perfectly viable definition, but we're not considering such objects because... it doesn't fit here. I mean, we can define anything we want, but we would like what we define to make some sense or have some workable properties.
For example, let us consider the real numbers, and let us add an additional number that we'll call X. We define X to be greater than 0 but smaller than every positive real number. OK, great, but how do we now define X+a or X*a; is there some natural way to define them? The answer is actually yes, but we would have to add an infinite amount of similar numbers, e.g. 1+X would naturally be a number that is greater than 1 but smaller than every real number greater than 1, etc. (this can all be done rigorously, see Hyperreal numbers and Non-standard analysis), but as a consequence we lose the concept of the natural distance between numbers and a lot of weird stuff appears (e.g. among the hyperreal numbers there exist numbers that are greater than every natural number!).
I should have explained why 0.999... = 1 in the real numbers first, seems that my blog just got more people confused : (
|
On August 30 2014 12:27 CoughingHydra wrote: I should have explained why 0.999... = 1 in the real numbers first, seems that my blog just got more people confused : ( i thought you left it out intentionally to skip the discussion of 0.9.. = 1?!
|
Could you point out 3 S-numbers a, b and c where (a+b)+c != a+(b+c)? It's late here and I'm having trouble seeing it
Oh wait i got it 1 -0.999999... + 0.99999999...
|
On August 31 2014 04:33 Hryul wrote: On August 30 2014 12:27 CoughingHydra wrote: I should have explained why 0.999... = 1 in the real numbers first, seems that my blog just got more people confused : ( i thought you left it out intentionally to skip the discussion of 0.9.. = 1?! Yes, I left it out intentionally, but considering that it just got more people confused, it seems that it wasn't such a good idea.
On August 31 2014 06:48 Geiko wrote: Could you point out 3 S-numbers a, b and c where (a+b)+c != a+(b+c)? It's late here and I'm having trouble seeing it Oh wait i got it 1 -0.999999... + 0.99999999... I'll update it to be more clear.
|