So - I can make an image() - and I can set and get pixels. And I can copy parts of an image to another image, and I can plot that on the screen.
Is there a way to get an image from the screen? I.e., is the screen an image() as well, one we can copy to/from? It doesn't seem to be, but maybe I'm missing something. What I'd like to do is draw my awesome thing, then grab that drawing and blit it around - possible? (If not, I'm gonna be writing some line-drawing routines!)
Time to go write image decoders…
No way to grab screen contents at the moment. We’re looking at various render-to-texture APIs.
i = image(400, 400)
for c = 1, 400 do
    i:set(c, c, 255, 255, 255)
end
background(0, 0, 0)
sprite(i, 100, 100)
Black screen - I think I should get a diagonal white line, no?
This works for me, but we’re looking into it on a number of devices.
We’ve just tried on a number of iPad 1 and iPad 2 devices and your example seems fine. Does the Mandelbrot example work for you?
Ah - I didn’t do troubleshooting step #1. Killed Codea, started it again, and there’s my line. Not in the orientation I expected, but oh well.
It’s making me sad that screen coordinates start in the lower-left, but image coordinates start in the upper-left. Sure, we can just rotate the image, but bleah.
Image coordinates starting in the lower left would bother me a lot, and I like that screen coordinates start in the lower left. I think this comes down to personal preference. I realise the inconsistency, but nothing else sits well with me.
I just expected them to be the same - I think most people will expect the same. It violates the principle of least surprise…
True, so you would prefer image pixels to be accessed from the lower left? I don’t think we’ll change the default now, but we could add an imageMode(FLIPPED) function in the future, to change the behaviour.
I agree with Bortels. Coordinate system should be consistent.
I don’t actually have a preference, other than that it be consistent. My tendency would be to have image() match the screen, only because the screen came first. BUT - I’m going to write code based on how things are now, so adding FLIPPED probably won’t get used much, by me at least. What I’ll probably do is just ignore the difference, and habitually rotate() the result.
Raster graphics operations typically access pixels from the upper left corner with Y pointing down (Photoshop, and so on). While 2D and 3D drawing coordinate systems typically have Y pointing up (OpenGL). Having raster coordinates accessed from the lower left feels incorrect to me, even if it is inconsistent.
You could probably override image’s set and get to do a Y-inversion, so you don’t have to worry about rotating.
Hey - that’s a thought! Not a bad idea even - just wrap it. I may just do that!
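The suggested wrapper might be sketched like this - a rough version, assuming 1-based coordinates and that `img.height`, `img:get`, and `img:set` work as discussed above; the `flipped` name is just illustrative:

    -- Wrap an image so set/get are addressed from the lower left,
    -- matching screen coordinates. Assumes 1-based pixel coordinates.
    function flipped(img)
        local w = {}
        function w.set(x, y, ...)
            img:set(x, img.height - y + 1, ...)
        end
        function w.get(x, y)
            return img:get(x, img.height - y + 1)
        end
        return w
    end

Then `flipped(i).set(1, 1, 255, 255, 255)` would touch the bottom-left pixel instead of the top-left one.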
I thought image class is just another “screen” to draw upon before it goes to the “real” screen. Consider it as screen buffer or something like that. Besides, Codea is everything but Photoshop.
Image is different in that you touch pixels directly, and don’t have high-level drawing operations.
Heh. Speak for yourself - I already have Bresenham’s line drawing working (it was cheap - Wikipedia has the algorithm right there for you).
Antialiased lines next, and Bee’s polygons (including filled, I hope). Give me a few days, and I’ll either finish it up, or get distracted, or Andrew will beat me to it - either way I’ll post what I have.
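For reference, the integer-only Bresenham algorithm as described on Wikipedia sketches out like this in Lua - not necessarily the code mentioned above; `plot` is whatever pixel-setter you pass in, e.g. one wrapping `img:set`:

    -- Bresenham's line algorithm: integer error accumulation,
    -- steps one pixel at a time along the major axis.
    function line(x0, y0, x1, y1, plot)
        local dx = math.abs(x1 - x0)
        local dy = math.abs(y1 - y0)
        local sx = x0 < x1 and 1 or -1
        local sy = y0 < y1 and 1 or -1
        local err = dx - dy
        while true do
            plot(x0, y0)
            if x0 == x1 and y0 == y1 then break end
            local e2 = 2 * err
            if e2 > -dy then err = err - dy; x0 = x0 + sx end
            if e2 < dx then err = err + dx; y0 = y0 + sy end
        end
    end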
Go Bortels, go!
I remember writing a program to fill a closed arbitrary area with a single color. I used it to color flat map images. I forget what algorithm I used. It was a long time ago, back when Turbo Pascal v5.5 was in its glory days and I was still handsome. Maybe we can use such an algorithm to get filled polygons.
I want to do both filled polygons (faster) and a flood fill of some sort. I’m sorting thru the different implementations now…
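One of the simpler candidates is a queue-based flood fill - a sketch, not anyone’s actual implementation here; `get`, `set`, `w`, and `h` are assumed helpers for pixel access and image bounds, and `get` is assumed to return a value comparable with `==`:

    -- Flood fill: recolor the connected region whose pixels
    -- match the color found at the starting point.
    function floodFill(x, y, newColor, get, set, w, h)
        local target = get(x, y)
        if target == newColor then return end
        local queue = {{x, y}}
        while #queue > 0 do
            local p = table.remove(queue)
            local px, py = p[1], p[2]
            if px >= 1 and px <= w and py >= 1 and py <= h
               and get(px, py) == target then
                set(px, py, newColor)
                queue[#queue + 1] = {px + 1, py}
                queue[#queue + 1] = {px - 1, py}
                queue[#queue + 1] = {px, py + 1}
                queue[#queue + 1] = {px, py - 1}
            end
        end
    end

Scanline variants do less queueing and run faster, but this version is the easiest to get right first.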
There seems to be a bug in the current image implementation with alpha values. We’ll be submitting an update tonight that fixes it.