• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 4th, 2023

  • Let me restate: I am of the opinion that repeating decimals are imperfect representations of the values we use them to represent. This imperfection only matters in the case of 0.999… , but I still consider it a flaw.

    I am also of the opinion that focusing on this flaw rather than the incorrectness of the person using it is a better method of teaching.

    I accept that 1/3 is exactly equal to the value typically represented by 0.333…; however, I do not agree that 0.333… is a perfect representation of that value. That is what I mean by 1/3 ≠ 0.333…: the repeating decimal is not exactly equal to that value.


  • Decimals work fine to represent numbers; it’s the decimal system of computing numbers that is flawed. The “carry the 1” system, if you prefer. It’s how we’re taught to add/subtract/multiply/divide numbers first, before we learn algebra and limits.

    This is the flawed system: within it, there is no method by which 0.999… can become 1. All the logic for that is algebraic or better.

    My issue isn’t with 0.999… = 1, nor is it with the inelegance of having multiple representations of some numbers. My issue lies entirely with people who use algebraic or better logic to fight an elementary arithmetic issue.

    People are using the systems they were taught, and those systems are giving an incorrect answer. Instead of telling those people they’re wrong, focus on the flaws of the tools they’re using.


  • In base 10, if we add 1 and 1, we get the next digit, 2.

    In base 2, if we add 1 and 1, there is no 2, so we increment the next place by 1, getting 10.

    We can expand this to numbers with more digits, still in base 2: 111 (decimal 7) + 1 = 112 = 120 = 200 = 1000

    In base 10, with A representing 10 in a single digit: 199 + 1 = 19A = 1A0 = 200

    We could do this with larger carryover too: 999 + 111 = AAA = AB0 = B10 = 1110. Different orders are possible here: AAA = 10AA = 10B0 = 1110

    The “carry the 1” process only starts when a digit exceeds the existing digits. Thus 192 is not 2Z2, nor is 100 = A0. The whole point of carryover is to keep each digit within the 0-9 range. Furthermore, because we only process individual digits, we can’t start a carry in the middle of a chain. 999 doesn’t carry over to 1000 - 1, and while 0.999 does equal 1 - 0.001, (1 - 0.001) isn’t a decimal digit. Thus we can’t know whether any string of 9s will carry over until we find a digit that is already trying to be greater than 9.

    This logic is how basic binary adders work, and some variation of this bitwise logic runs in every mechanical computer ever made. It works great with integers. It’s when we try to have infinite digits that this method falls apart, and then only in the case of infinite 9s. This is because a carry must start at the smallest digit, and a number with infinite decimals has no smallest digit.

    Without changing this logic radically, you can’t fix this flaw. Computers use workarounds to speed up arithmetic functions, like carry-lookahead and carry-save, but they still require the smallest digit to be computed before the result of the operation can be known.
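    To make the digit-wise process above concrete, here is a minimal sketch in Python (my own illustration; the function name and the digit-list representation are choices I made, not anything standardized). It implements the ripple-carry idea described in this comment, and the loop has to begin at the smallest digit, which is the one thing an infinite string of 9s never provides.

```python
def add_digitwise(a, b, base=10):
    """Add two numbers given as digit lists (least significant digit first),
    carrying whenever a place reaches the base. Only works for finite lists."""
    # Pad the shorter number with zeros so both lists are the same length.
    length = max(len(a), len(b))
    a = a + [0] * (length - len(a))
    b = b + [0] * (length - len(b))

    result = []
    carry = 0
    # The carry chain must start at the smallest (rightmost) place.
    for x, y in zip(a, b):
        total = x + y + carry
        result.append(total % base)   # keep this place within 0..base-1
        carry = total // base         # push the overflow to the next place
    if carry:
        result.append(carry)
    return result

# 999 + 111 = 1110, matching the AAA = AB0 = B10 = 1110 walk-through above.
print(add_digitwise([9, 9, 9], [1, 1, 1]))      # [0, 1, 1, 1], i.e. 1110
# Binary: 111 + 1 = 1000.
print(add_digitwise([1, 1, 1], [1], base=2))    # [0, 0, 0, 1], i.e. 1000
```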


  • I’m not saying that math works differently in different bases, I’m using different bases exactly because the values don’t change. Using different bases restates the equation without using repeating decimals, thus sidestepping the flaw altogether.

    My whole point here is that the decimal system is flawed. It’s still useful, but trying to claim it is perfect leads to a conflict with reality. All models are wrong, but some are useful.


  • I never commented on the convenience or usefulness of any method, just tried to explain why so many people get stuck on 0.999… = 1 and are so recalcitrant about it.

    If you can accept that 1/3 is 0.333…, then you can multiply both sides by three and accept that 1 is 0.999… (written out below).

    This is a workaround of the decimal flaw using algebraic logic. Trying to hold both systems as fully correct leads to a conflict, and reiterating the algebraic logic (or any other proof) is just restating the problem.

    The problem goes away easily once we understand the limits of the decimal system, but we need to state that the system is limited! Otherwise we get conflicting answers and nothing makes sense.
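    Written out, the workaround from above is just this one multiplication (ordinary algebra, nothing specific to this thread):

\[
\tfrac{1}{3} = 0.333\ldots
\;\Longrightarrow\;
3 \times \tfrac{1}{3} = 3 \times 0.333\ldots
\;\Longrightarrow\;
1 = 0.999\ldots
\]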


  • Decimal notation is a number system where fractions are accommodated with additional digits representing smaller, more precise parts. It is an extension of the place-value system, where very large tallies can be expressed in a much simpler form.

    One of the core rules of this system is how to handle values larger than the highest digit and lower than the smallest. If any place goes above 9, set that place to 0 and increment the next place by 1. If any place goes below 0, increment that place by 10 and decrement the next place by one (this operation uses a non-existent digit, which is also a common sticking point).

    This is the decimal system as it is taught originally. One of the consequences of its rules is that each digit-wise operation must be performed in order, with a beginning and an end. Thus even getting a repeating decimal is going beyond the system. This is usually taught as special handling, and sometimes as baby’s first limit (each step down results in the same digit, thus it’s that digit all the way down).

    The issue happens when digit-wise calculation is applied to infinite decimals. For most operations it’s fine, but incrementing up can only begin if a digit goes beyond 9, which never happens in the case of 0.999… . Understanding how to resolve this requires ditching the digit-wise method, relearning decimals as a series of terms, and then learning about infinite series. It’s a much more robust and applicable method, but a very different method from what decimals are taught as.

    Thus I say that the original digit-wise method of decimals has a bug in the case of incrementing infinite sequences. There’s really only one number where this is an issue, but telling people they’re wrong for using the tools as they’ve been taught isn’t helpful. Much better to say that the tool they’re using is limited in this way, and then show the more advanced method.

    That’s how we teach Newtonian gravity and then expand to relativity. You aren’t wrong for applying Newtonian gravity to Mercury, but the tool you’re using is limited. All models are wrong, but some are useful.
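    For anyone who wants the “relearning decimals as a series of terms” step spelled out, this is the standard geometric-series argument from any analysis textbook (common knowledge, not something from this thread):

\[
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; \lim_{n \to \infty}\left(1 - \frac{1}{10^{n}}\right)
\;=\; 1
\]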


  • And my argument is that 1/3 ≠ 0.333…

    We’re taught about the decimal system by manipulating whole number representations of fractions, but when that method fails, we get told that we are wrong.

    In chemistry, we’re taught about atoms by manipulating little rings of electrons, and when that system fails to explain bond angles and excitation, we’re told the model is wrong, but still useful.

    This is my issue with the debate. Someone uses decimals as they were taught and everyone piles on saying they’re wrong instead of explaining the limitations of systems and why we still use them.

    For the record, my favorite demonstration is using different bases.

    In base 10: 1/3 = 0.333… and 0.333… × 3 = 0.999…

    In base 12: 1/3 = 0.4 and 0.4 × 3 = 1

    The issue only appears if you resort to infinite decimals. If you instead change your base, everything works fine. Of course, the only base where every whole fraction fits nicely is unary, and there are some very good reasons we don’t use tally marks much anymore, none of which have anything to do with math.
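    To see the same demonstration in code rather than on paper, here is a small Python sketch (my own toy example; the helper name is made up) that expands a fraction digit by digit in a given base. For 1/3, the base-10 expansion never terminates, while the base-12 expansion stops after a single digit:

```python
from fractions import Fraction

def fractional_digits(value, base, max_digits=10):
    """Expand a fraction in [0, 1) into digits after the radix point in the
    given base, stopping early if the expansion terminates exactly."""
    digits = []
    remainder = Fraction(value)
    for _ in range(max_digits):
        remainder *= base
        digit = int(remainder)      # next digit after the radix point
        digits.append(digit)
        remainder -= digit
        if remainder == 0:          # the expansion terminated exactly
            break
    return digits

print(fractional_digits(Fraction(1, 3), 10))  # ten 3s; the real expansion never ends
print(fractional_digits(Fraction(1, 3), 12))  # [4]: exactly 0.4 in base 12
```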


  • Eh, if you need special rules for 0.999… because the special rules for all other repeating decimals failed, I think we should just accept that the system doesn’t work here. We can keep using the workaround, but stop telling people they’re wrong for using the system correctly.

    The deeper understanding of numbers where 0.999… = 1 is obvious needs a foundation of much more advanced math than just decimals, at which point decimals stop being a system and are just a quirky representation.

    Saying decimals are a perfect system is the issue I have here, and I don’t think this will go away any time soon. Mathematicians like to speak in absolute terms where everything is either perfect or discarded, yet decimals seem to be too simple and basal to get that treatment. No one seems to be willing to admit the limitations of the system.





  • I strongly agree with you, and while the people replying aren’t wrong, they’re arguing for something that I don’t think you said.

    1/3 ≈ 0.333… in the same way that approximating a circle with polygons of increasing side count has a circle as its limit, but will never yield a circle with just geometry.

    0.999… ≈ 1 in the same way that shuffling infinite people around an infinite hotel leaves infinite free rooms, but if you try to do the paperwork, no one will ever get anywhere.

    Decimals require you to check the end of the number to see if you can round up, but there never will be an end. Thus we need higher mathematics to avoid the halting problem. People get taught how decimals work, find this bug, and then instead of being told how decimals are broken, get told how they’re wrong for using the tools they’ve been taught.

    If we just accepted that decimals fail with infinite steps, the transition to new tools would be so much easier, and it would mirror the same transition to new tools in other sciences: Bohr’s atom, Newton’s gravity, Linnaean taxonomy, or Comte’s positivism.


  • The rules of decimal notation don’t support infinite decimals properly. In order for a 9 to roll over into a 10, the next smaller decimal place needs to roll over first; therefore, an infinite string of anything will never resolve the needed discrete increment.

    Thus, all arguments that 0.999… = 1 must use algebra, limits, or some other logic beyond decimal notation. I consider this a bug with decimals, and 0.999… = 1 to be a workaround.
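    One standard example of the kind of beyond-decimal argument meant here (a textbook identity, not something from this thread): set x = 0.999… and manipulate the whole string at once,

\[
x = 0.999\ldots,\qquad 10x = 9.999\ldots,\qquad 10x - x = 9 \;\Rightarrow\; 9x = 9 \;\Rightarrow\; x = 1
\]

    Every step treats the infinite string as a single object, which is exactly what digit-wise decimal arithmetic cannot do.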


  • Eh, I wouldn’t call that freeform building and exploring. Unconstrained base building and open-world exploration, sure, but you can’t disassemble the boss dungeon and rebuild it as a boat in hell. You can’t automatically kill enemies in a pit of lava. There’s no getting lost in your own mess of tunnels. And no one is making a working GPU out of Pals.




  • They’re also incentivized to keep the same size packaging (for both logistical and public perception reasons) and ship less product in those packages. People are willing to pay $6 for a big bag of chips, despite the big bag weighing 150 g less than the normal bag did 5 years ago.

    They don’t get paid by the gram, they get paid by the bag. A bigger bag looks more impressive, and thus can be sold for more. Same for those tall skinny beverage cans. They look bigger than the regular cans, but are actually 25ml smaller, and yet go for a similar price.

    This will continue until the price per gram is what people look for (emphasis on this at the point of sale would help), or the mass of each product is standardized. 50g, 100g, 200g, 350g, 500g, 750g, and whole kg sizes only, none of this 489g nonsense.