0.3 does not equal 0.3?!? Codea bug? Lua bug? Ambush bug?

Can anyone tell me how to fix this? Apparently I’ve done something to make 0.3 not equal 0.3.

Here’s my code:

function wrapXAroundY(X, Y)
    -- if X hasn't passed Y, return it as-is; otherwise reflect it back around Y
    if X < Y then
        return X
    end
    return Y - (X - Y)
end

function tests()
    local base, given, intendedResult, calculation = 0.0, 0.0, 0.0, 0.0
    base = 0.35
    given = 0.40
    intendedResult = 0.30 
    calculation = wrapXAroundY(given, base)
    print("calculation is: ", calculation)
    print("intendedResult is: ", intendedResult)
    print("calculation and result are equal: ", intendedResult == calculation)
end

Here’s the output:

calculation is: 0.3
intendedResult is: 0.3
calculation and result are equal: false

???
!!!
???

This is a subtle technical detail that’s present in almost all programming languages. Internally the numbers are stored as floating point (possibly as doubles; not sure for Lua). Technically it is unsafe to compare floating-point values for equality unless certain conditions are met (those conditions hold for whole numbers, for example).

Internally the number of bits is limited, and certain fractional values end up being stored as approximations. 0.3 isn’t exactly representable, so two calculations that should both produce 0.3 can land on slightly different approximations under the hood. That’s why 0.3 is not always exactly equal to “0.3”.
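You can see the approximations by asking for more digits. I haven’t tested this in Codea, but string.format should pass the format straight through to C’s printf:

function setup()
    -- 17 significant digits is enough to tell any two doubles apart
    print(string.format("%.17g", 0.3))             -- 0.29999999999999999
    print(string.format("%.17g", 0.1 + 0.1 + 0.1)) -- 0.30000000000000004
end

Two expressions that both “should” be 0.3, stored as two different numbers.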

I don’t know what best practice would be for Lua here, but one workaround would be to convert them to strings and compare the strings. :slight_smile:
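Something like this, say (untested, and assuming tostring rounds the same way print does):

function stringEqual(a, b)
    -- compare the printed forms instead of the raw floats
    return tostring(a) == tostring(b)
end

function setup()
    print(stringEqual(0.35 - (0.40 - 0.35), 0.30)) -- true, if both round to "0.3"
end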

Do you know of a way to test whether or not your speculation is correct?

I know that floats have all sorts of weird behavior, but not all languages collapse in the face of them. For example I don’t think I could quite as easily get Swift to claim with a straight face that 0.3 doesn’t equal 0.3. I certainly can’t do it with the identical operations; I tried it out and Swift handles those just fine.

Swift: [screenshot of the Swift version of the test; not reproduced here]

Ha, you got me there! I had written the code on a little Swift-simulator on my iPhone, and I thought I had done that exact thing, but I must not have.

Converting to strings before comparing will probably work, but it wouldn’t be super efficient. Better to use a function that uses an epsilon to compare floats or just try to avoid having to do this sort of comparison altogether. :slight_smile:

@UberGoober As @BigZaphod said, this is a typical bug that shows up when you test floating-point numbers for exact equality. It happens because floating-point numbers have limited precision and are not exact.

I’ve modified your code slightly so that you can see what’s going on.

function wrapXAroundY(X, Y)
    if X < Y then
        return X
    end
    return Y - (X - Y)
end

function setup()
    local base, given, intendedResult, calculation = 0.0, 0.0, 0.0, 0.0
    base = 0.35
    given = 0.40
    intendedResult = 0.30 
    calculation = wrapXAroundY(given, base)
    print(string.format("calculation is: %.20f",calculation))
    print(string.format("intendedResult is: %.20f",intendedResult))
    print("calculation and result are equal: ", intendedResult == calculation)
end

The way you fix this is the same as in other languages: instead of testing for exact equality, you test for approximate equality within a certain precision.

function setup()
    print(floatequal(0.101,0.100,0.1))
    print(floatequal(0.101,0.100,0.01))
    print(floatequal(0.101,0.100,0.001))
end

function floatequal(left,right,precision)
    local diff = math.abs(left-right)
    return diff < precision
end

Edit: Just a word of caution about the above approach. Depending on your context it can be good enough, but not always. The following blog post and Stack Overflow question explain it better than I could hope to.

https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/

https://stackoverflow.com/questions/17333/what-is-the-most-effective-way-for-float-and-double-comparison
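For instance, a relative-tolerance comparison along the lines those links describe might look something like this (the name and the default tolerances are just my own picks; tune them for your use case):

function nearlyEqual(a, b, relTol, absTol)
    relTol = relTol or 1e-9
    absTol = absTol or 1e-12
    local diff = math.abs(a - b)
    -- an absolute tolerance handles comparisons near zero,
    -- where a relative tolerance would be uselessly tiny
    if diff <= absTol then return true end
    -- otherwise scale the tolerance with the magnitude of the inputs
    return diff <= relTol * math.max(math.abs(a), math.abs(b))
end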

Oy all that makes my head hurt.

A system for representing numbers that makes simple math unreliable seems like a very odd thing for the world of computer science to embrace.

I know it’s beyond the scope of these forums to discuss questions this broad, but it really leaves me wondering, why are these things a good idea in the first place?

What if I just turned everything into ints by multiplying by 100000 or so? That should easily be enough precision for me, and then all I have to do is good old int math, easy as pie. :slight_smile:
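Something like this is what I’m imagining (the scale factor is pulled out of a hat, and it assumes my values never need more than three decimal places):

SCALE = 1000

function toFixed(x)
    -- scale up and round to the nearest whole number
    return math.floor(x * SCALE + 0.5)
end

function setup()
    print(toFixed(0.35 - (0.40 - 0.35)) == toFixed(0.30)) -- true
end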

Here’s an example that shows different values of .3 and how to do a comparison with .3.

function setup()
    z=.3
    print(string.format("%.30f",z))   -- what's actually stored for .3
    
    z=.1+.1+.1
    print(string.format("%.30f",z))   -- a slightly different approximation of .3
    
    print("z =",z)   -- print rounds, so this still shows 0.3
    
    print("z - .3 = ",z-.3)   -- the tiny difference between the two
    
    if math.abs(z-.3)<.00000001 then
        print("about equal")   -- compare against a small epsilon instead of ==
    end
end

@UberGoober Think of it along the same lines as a data type like int or float having a minimum and maximum representable value. There’s a limited number of bits used to represent it in binary. So in the case of floating point types an approximation is as good as it gets, but it’s good enough.

Also if you start multiplying everything by large numbers then you could run into overflows instead, which would cause similar problems. It’s a surprisingly complex problem depending on your context, but I wouldn’t worry too much unless you’re doing something really specialised.
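For example, I believe Codea’s Lua (5.3) uses 64-bit integers, and they silently wrap around at the limits:

function setup()
    print(math.maxinteger)                         -- 9223372036854775807
    print(math.maxinteger + 1)                     -- wraps to -9223372036854775808
    print(math.maxinteger + 1 == math.mininteger)  -- true
end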

@UberGoober See this link if you want to see what value actually gets stored for what you enter. It’s a 32-bit (single-precision) IEEE-754 converter: type .3 into the “You entered” field, then press return to see the actual value stored.

https://www.h-schmidt.net/FloatConverter/IEEE754.html

Maybe one of them became a string?

So yeah okay sure whatever sure. So floats can’t exactly represent certain values, so getting really really really close has to do, okay fine. So even then, those really really really close numbers can’t be expected to be consistent from one float to the next. So yeah okay sure whatever sure.

Why the heck does every language pretend that they’re accurate representations? Why show me 0.3 and 0.3 instead of 0.300000-whatever and 0.300000-slightly-different-whatever? It seems like unaccountably intentional befuddlement.

@UberGoober They are accurate for the number of bits that are used to represent the numbers. For example, if I said show me $14.23 using just $10 bills, the closest you could come is one $10 bill. If you used $10 and $5 bills, you could get closer with one $10 bill and one $5 bill. If you added $1 bills, you could get closer still. The smaller the denomination, the closer you can get. In the same way, the number of bits a language uses determines how close it can get to a specific number. Codea uses 64 bits. Other languages could use even more.
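If you want to see how the bit count matters right in Codea, you can round-trip a number through 32 bits with string.pack (assuming Lua 5.3’s string.pack/string.unpack, where "f" is a 4-byte float):

function setup()
    -- squeeze 0.3 into 32 bits and back out again
    local as32 = string.unpack("f", string.pack("f", 0.3))
    print(string.format("%.17g", 0.3))   -- 0.29999999999999999 (64 bits)
    print(string.format("%.17g", as32))  -- 0.30000001192092896 (32 bits)
end

The 32-bit value should be the same one the FloatConverter link above shows.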

Exactly my point @dave1707.

If I said show me x in denomination y, and you did, you would be responding correctly to an explicit request.

If I said show me all the money in your pockets, and you had $3.02 but you only pulled out $3.00, you would be responding dishonestly to an explicit request.

If I said here, use this big fancy abacus and tell me what it says 4 minus 1 is, and you went and did all the clicky-clacky-clicky-clacky, and the answer came up 3.00000000002, and you came back and said to me “the abacus says 4 minus 1 is 3,” you would be responding dishonestly to an explicit request.

Representing floats inaccurately is a traditionally unacknowledged yet omnipresent obfuscation that causes lots of trouble, trouble that simple honesty would avoid: lots of mistakes, lots of otherwise-unnecessary lectures by CS professors, and lots of otherwise-unnecessary forum threads like this one.

I don’t understand the purpose of displaying one thing as another when it’s not. It seems like an emperor without clothes. Just sayin’.

@UberGoober The way floats are represented isn’t unacknowledged. The majority of programmers know that floating-point numbers aren’t exact. When they start using them, they realize that fact just as you are now. Normally when you do a print(.3) it prints .3 because the print function rounds to about 14 significant digits. I’m sure that if you did a print(.3) you would want it to print .3 and not .299999999999999988897769753748… .

Below is a little program that prints the numbers from 0 to 4 in .1 intervals. This shows what Codea thinks the numbers are to 25 digits past the decimal point.

displayMode(FULLSCREEN)

function setup()
    textMode(CORNER)
end

function draw()
    background(0)
    fill(255)
    c=0
    for z=0,4,.1 do
        c=c+18
        text(string.format("%.1f     %.25f",z,z),250,HEIGHT-c-20)
    end
end

@dave1707, sorry to be annoying, but it seems like you keep making the opposite of the point you think you’re making: my point, in fact.

“The majority of programmers know…”
Yes, this is a great definition of obscurity: something you have to know in order to know it. That a group of people who are already insiders shares a common understanding is, I think, what they call trade knowledge, and it’s by definition the opposite of something plainly discoverable.

“Normally when you do a print(.3) it prints .3 because the print function rounds to about 14 significant digits.”
Exactly, dave! The print function performs a rounding that it never tells you about. This is exactly my abacus example. If I say “tell me what the abacus comes up with for ___”, and you just say “it came up with 3,” you’re lying to me. You don’t have to say “it came up with .299999999999999988897769753748” every time, but you could at least say “it came up with 3 plus shavings,” or something like that.

To my limited powers of recall, it seems like standard practice in Objective-C used to call for adding an f after all floating-point numbers, so instead of 3.0 you’d write 3.0f. It was mostly a stylistic convention (strictly, the f makes the literal a float rather than a double), not required by the language, and I used to hate it. When they announced that it was now preferred to not use the f, I was very happy. And I continued to be happy that the coinage didn’t even exist in Swift (AFAIK). Older and wiser, I now see the wisdom behind having some obvious sign that a number has been rounded or is incapable of being precisely represented. I never thought I’d say it, but I miss the f.

If you took all the man hours that have been lost to the world because of the lack of an obvious signifier that a floating-point number is not a precise number… uh… you’d have a whole lot of hours! :blush:

@UberGoober Before you started to write programs, did you know how a ‘for’ loop worked, or how a ‘class’ worked? No. You learned how to use them just as you learned everything else throughout your life: you read about it, somebody told you about it, or you learned through trial and error. The same goes for floating-point numbers. You now know that floating-point numbers can’t be held in memory as exact values. Apparently when you started to use them, you didn’t know how they worked. You found out something was different about them through trial and error, and now you’re finding out why.

There are a lot of things in Codea you don’t know about. Are you going to say there’s something wrong with all of them? No, you’re going to learn how to use them, just as everyone else who uses Codea will, including me. The next version of Codea uses Craft. I don’t know much about Craft (well, I know a little since I have the beta version) but I’ll learn more, just as everyone else will. That’s the fun of programming.

EDIT: See this link.

https://en.m.wikipedia.org/wiki/Arbitrary-precision_arithmetic

@dave1707 so, yeah, I get it, you think this is just part of normal learning, and that my points are all dismissible by “that’s what it means to learn things.”

I’m pretty sure that failure to plan for floating-point inaccuracies is a thing that is a thing, you know? I’m not complaining about for loops here. I’m talking about a very common mistake that fouls up even the most experienced programmers from time to time.

Would I like this mistake to be easier to avoid? Yes. Am I fine with shrugging and moving on? You betcha.

@UberGoober I’m not trying to dismiss anything you’re saying. I’m just trying to explain that you can’t express an infinite number of values using only 64 bits. Using only 64 bits affects integer values too. Try the example below.

function setup()
    val=1000000000000000000   -- 1e18, held as a 64-bit integer

    a=val*9   -- 9e18 still fits in a 64-bit integer, so this counts up normally
    for z=1,5 do
        a=a+1
        print(a)
    end

    print()

    a=val*10   -- 10e18 is too big for a 64-bit integer, so it overflows
    for z=1,5 do
        a=a+1
        print(a)
    end
end
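Floats hit a similar wall. A 64-bit double can’t represent every integer above 2^53, so adding 1 up there can be silently lost (in Lua 5.3, 2^53 is a float because ^ always gives a float):

function setup()
    print(2^53 + 1 == 2^53)   -- true: the +1 disappears
    print(2^53 + 2 == 2^53)   -- false: 2^53+2 is representable
end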