0.3 does not equal 0.3?!? Codea bug? Lua bug? Ambush bug?

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating point implementations that made them difficult to use reliably and portably. Many hardware floating point units now use the IEEE 754 standard.

As you can see above, there is a standard for floating point. By using this standard, floating point values in different languages are represented the same way. How a floating point value is displayed varies by language, depending on how that language's print function was written. Some languages may not truncate the value and will print the full number, while others round at a certain number of digits and only print those. The way floating point values are stored isn't going to change anytime soon. Their values aren't going to change until more bits are used to store them. And no matter how many bits are used, there will always be floating point numbers that can't be represented exactly. That's the way it is and it's not going to change. It doesn't matter how many Winnebagos are made.
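
For example, in Lua (which Codea uses) you can see the gap between the stored binary value and what print shows. This is just a minimal sketch assuming standard double-precision Lua, so the exact digits may vary slightly on your setup:

```lua
-- A minimal sketch, assuming standard double-precision Lua (as Codea uses).
-- The binary value stored for 0.3 is not exactly 0.3, even though the
-- default print rounds it to something that looks exact.
local a = 0.1 + 0.2

print(a)                            -- usually shows 0.3 (default print rounds to ~14 digits)
print(a == 0.3)                     -- false: the two stored values differ in the last bits
print(string.format("%.17g", a))    -- 0.30000000000000004 (closer to what is actually stored)
print(string.format("%.17g", 0.3))  -- 0.29999999999999999 (0.3 itself isn't exact either)
```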

As you say, it’s not going to change. If I’ve given the impression in my last post that I want to change it, I’ve given the wrong impression.

@UberGoober I didn’t get the impression that you wanted to change it. I’m just saying it’s not likely to change anytime soon so you might as well get used to the way it works. I suggest you google floating point numbers and read about it so you understand what you might run into as you use them.

I suppose I do often give the impression that I know nothing about all this, but that's not really the case. And admittedly, the act of questioning any entrenched system leaves one open to being told to google it. That's fair enough.

I did initially ask the question “why are they doing it this way” with the intended subtext of “that's dumb and they shouldn't do it that way.” I'm not asking it that way anymore, and perhaps I didn't make that clear.

I am sure I don’t need to explicate the Winnebago/hat analogy for you, but I think I may have seemed to be trying to ridicule the practice, and I wasn’t. I was before, but now I’m actually asking out of sincere dispassionate curiosity. It’s a logical inconsistency to quantify imprecise quantities as Winnebagos in some cases and as hats in others, and I’m actually asking what the rationale for that is. In describing the IEEE standard, and the varying approaches to the print function, you seem to be putting forth the viewpoint that it doesn’t really matter how different languages approach the print function, because they’re all talking about the same underlying storage system, and understanding that system is what really matters.

Of course, to have the deepest facility with floats, you're absolutely right: that system is what really matters. That said, I'm curious to understand the logic behind how they're represented by the print function, because a lot of thought usually goes into these things. At this point I'm neither critical of it nor ignorant of it; I'm just curious about it.
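
To make the question concrete, here's a rough illustration (again assuming standard Lua) of the trade-off a print function is making: fewer significant digits read nicely but hide the stored value, while 17 digits are enough to recover the exact bits of a double.

```lua
-- A rough illustration of the display trade-off, assuming standard Lua.
-- Each line formats the same stored value with a different digit budget.
local x = 0.3

print(string.format("%.6g",  x))   -- 0.3                  (short and human-friendly)
print(string.format("%.14g", x))   -- 0.3                  (roughly what Lua's default print does)
print(string.format("%.17g", x))   -- 0.29999999999999999  (enough digits to round-trip the exact double)
```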