I am new to Codea; I bought it just a few days ago.
I started by studying the examples, and my first program was a Mandelbrot set drawing program.
I noticed that at a not-so-high zoom level, floating-point precision artifacts appear.
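For context, the core of such an explorer is just the escape-time loop below (a minimal sketch, not my actual program). The artifacts show up once the per-pixel step of the zoom window falls near the resolution of the float type around a point: neighbouring pixels collapse to the same value and the image turns blocky.

```lua
-- Escape-time test for one point c = cx + i*cy of the Mandelbrot set.
-- Returns the iteration at which the orbit escapes, or 0 if it never
-- escapes within maxIter (assumed to be inside the set).
local function mandelbrot(cx, cy, maxIter)
    local x, y = 0.0, 0.0
    for i = 1, maxIter do
        local x2, y2 = x*x, y*y
        if x2 + y2 > 4 then return i end    -- orbit escaped
        x, y = x2 - y2 + cx, 2*x*y + cy     -- z = z^2 + c
    end
    return 0
end

print(mandelbrot(-2.5, 0, 100), mandelbrot(0, 0, 100))
```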
What is the floating-point precision of Codea? The Lua manual states that all numbers are of a single type, with double precision, but I see no evidence of that.
Is there some way to get at least double precision for float variables?
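A quick way to check empirically what precision the runtime actually gives (rather than what the manual says) is to probe the machine epsilon:

```lua
-- Halve eps until adding it to 1 no longer changes the result.
-- With 32-bit floats (24-bit significand) this prints about 1.19e-07;
-- with 64-bit doubles (53-bit significand), about 2.22e-16.
local eps = 1.0
while 1.0 + eps/2 ~= 1.0 do
    eps = eps/2
end
print(eps)
```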
Thank you in advance.
P.S.: sorry for my English.
@Rosso5 Codea uses 32-bit precision, which gives about 6 or 7 significant decimal digits. Do a forum search, because this was brought up previously.
Thank you. If I am not wrong, 32 bits corresponds to single precision.
I did search, and I saw that stated once, but I would like to be sure, and to know if there is some way to increase the precision; otherwise a lot of programs would not be realizable.
There isn’t any way to increase the precision. I would like to see an option to use 64-bit for accuracy or 32-bit for speed. See this discussion.
Hi @Rosso5. If you do things that really need double precision (maths or physics), it should be quite easy to implement any precision you need yourself: just define your own +, -, *, / with the number of bits you want.
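For what it’s worth, the usual way to do this without bit twiddling is the “pair of floats” trick: keep each value as hi + lo, so the pair carries roughly twice the native precision. Here is a sketch of addition only, assuming IEEE round-to-nearest arithmetic (multiplication and division follow the same pattern; see Dekker’s classic construction):

```lua
-- Knuth's two-sum: returns s = fl(a + b) and the exact rounding
-- error e, so that a + b == s + e holds exactly in IEEE arithmetic.
local function twoSum(a, b)
    local s = a + b
    local v = s - a
    local e = (a - (s - v)) + (b - v)
    return s, e
end

-- Add two extended-precision values represented as {hi, lo} pairs.
local function ddAdd(p, q)
    local s, e = twoSum(p.hi, q.hi)
    e = e + p.lo + q.lo
    local hi, lo = twoSum(s, e)
    return {hi = hi, lo = lo}
end

local r = ddAdd({hi = 1.0, lo = 0.0}, {hi = 1e-20, lo = 0.0})
print(r.hi, r.lo)  -- the lo part keeps what a plain 1 + 1e-20 would lose
```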
Hello @Rosso5. The page on the Codea Wiki here may be of interest to you.
@mpilgrem: thank you, this gives a definitive confirmation.
@Jmv38: Thank you. I am not sure this can be done in an easy way; I have no experience with that, but I think I could try, in case I really need it. In any case, it seems quite strange to me that something as basic as the precision of numeric variables in a programming language needs to be implemented by the end user; it should be planned in the language specification. And generally it is needed not only for the basic operations (+, -, *, /), but also for powers, exponentials, logarithms, trigonometric functions, etc.
In the discussion he referenced, @dave1707 said, “it would be nice if Codea used 64 bit precision, but I guess 32 bit is good enough for what’s being done now.” My take on this is that 32-bit precision limits what is being done now.
I’ve tried implementing large number arithmetic, and still have trouble with division. It’s harder than you’d expect!
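One standard way around a full long-division routine is Newton’s reciprocal iteration, x ← x·(2 − d·x), which uses only the subtraction and multiplication you already have and roughly doubles the number of correct digits each step. Sketched here on plain floats to show the idea (in a real extended-precision setup, the seed would come from the native reciprocal and the iterations would run in the extended format):

```lua
-- Newton's iteration for 1/d: each step x <- x*(2 - d*x) roughly
-- doubles the number of correct digits, using only * and -.
local function reciprocal(d, steps)
    local x = 1 / d              -- native-precision seed to refine
    for _ = 1, steps or 3 do
        x = x * (2 - d * x)
    end
    return x
end

print(7 * reciprocal(7))
```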
So I’d like to see 64-bit reals made an option too. Most projects don’t need it, but some would benefit.
Yes, it would be very useful to have at least an option to declare double-precision variables when needed.
A Mandelbrot set explorer is one example (for zooms beyond a certain, not-so-high, level); another is a program performing ray tracing.