Drawing offscreen

Wasn’t sure how to classify this, because it’s probably intentional…

If I draw offscreen with, say, line(1000, 1000, 2000, 2000), nothing happens. So - that's cool.

If I try to image:set a pixel outside of the image, it prints an error message.

I guess it’s not a big deal, but it’s inconsistent. I suspect this is because, with scaling, that line() could be valid. And since you can’t scale an image(), meh.

Yes, the screen coordinates will be affected by transforms. We don’t check drawing operations against the view frustum because it can be entirely intentional to draw things off screen (e.g. a sprite that is half off-screen, or a line that continues past the edge). Most games also tend to spawn objects such as enemies entirely off-screen. So it’s valid behaviour.
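To make that concrete (a quick illustrative sketch, not from the discussion above): a translate() can bring apparently off-screen coordinates back into view, which is why clipping draw calls up front would be wrong.

function draw()
    background(40, 40, 50)
    -- A transform can make "off-screen" coordinates visible again
    translate(-800, -800)
    line(1000, 1000, 2000, 2000) -- partially on screen after the translate
end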

Drawing outside of an image is invalid behaviour because the memory isn’t allocated to access those pixels, so you should be warned when you try to access parts of an image that don’t exist.

The choices with drawing off the edge of an image were to either give an error, or to just ignore the set call.

I can see the pros and cons of either behaviour, but throwing an error was more convenient for me at the time. We can always relax the specification to not throw an error later (but it would break existing code if we did the opposite), so I took the more prudent option.
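If you’d rather have the ignore behaviour today, it’s easy to layer on yourself. A minimal sketch, assuming only the existing image:set and image.width/height members:

-- Ignore out-of-range writes instead of erroring.
-- Codea images are 1-indexed, so valid coordinates are 1..width, 1..height.
function safeSet(img, x, y, c)
    if x >= 1 and x <= img.width and y >= 1 and y <= img.height then
        img:set(x, y, c)
    end
end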

Just to give some insight - I think a lot of us are looking at image() and the screen as being “the same thing”; we know the internal representation is different, but that’s not relevant to us. So we expected (in an ideal world) the same coordinate system, the same behaviors, the same commands - i.e. the screen is really just ‘image 0’ to us. They’re not - and that’s cool - but it means whenever we see them differ, we wonder “is this intentional, or is it a bug?” I’d hate to write software thinking a particular quirk is the way it should be, then have it fixed later on. :)

The screen and images are intended to be very different constructs. If we do allow drawing commands in images, it will be through a system that maps the graphics context to an image, or renders into an image through some sort of begin/end construct.

In order to do regular drawing into images, we’re thinking of the following API:

setContext( myImage )
-- Regular drawing operations go here
-- Drawing goes into "myImage", not to screen
ellipse(0,0,100)

setContext() -- Set drawing to go to screen again

sprite( myImage, 100, 100 )

I agree with Bortels. We do know that image() is a different beast. But since its main benefit is as a screen “buffer”, we expect it to behave like a real screen. Well, at least we hope it works that way. :D

I also hope the set/get pixel commands will be available as screen commands as well. It’s the most basic drawing operation. If we had those functions, we could draw virtually anything on the screen, including our own polygons, curves, fonts, and flood fills. It may be a bit slower than native commands, but it’s much better than nothing at all. Any plan for this?

I like the setContext() idea! It’s awesome! Please make it into the next update. If it’s realized, the set/get pixel functions will need to be extracted from the image class. :)

get/set probably won’t ever be available on the screen buffer directly. The rendering uses OpenGL ES 2 for hardware acceleration, which makes it difficult to just draw a pixel directly (as it is a vector-based rendering system). The image class provides a method for doing this, though, by letting you draw to an image, then draw the image to the screen.

That said, we may expose point rendering at some point, which would allow you to draw a single pixel of a certain colour - but that isn’t exactly the same as set (and get is not possible without re-architecting the renderer; it would also be very slow, as OpenGL is more about pushing data to the screen than the other way around).
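To illustrate the workaround described above - a rough sketch that sets pixels into an image and then draws that image each frame (assuming sprite() accepts an image, as in the proposed API earlier in the thread):

function setup()
    -- Offscreen buffer with per-pixel access
    buffer = image(WIDTH, HEIGHT)
    for i = 1, 200 do
        buffer:set(i, i, color(255, 0, 0, 255)) -- plot a diagonal line
    end
end

function draw()
    background(0, 0, 0, 255)
    -- sprite() positions the image by its centre by default
    sprite(buffer, WIDTH/2, HEIGHT/2)
end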

I see. So, we’ve got two options here:

  1. direct drawing to the screen, with more speed but fewer features,
  2. indirect drawing through an image, with less speed but more features.

Well, which one is best depends on the kind of app we’re going to write. Personally, I prefer the second approach because I use Codea for fun and quick coding - more for prototyping code and brainstorming ideas than for creating a full-fledged app. That’s the main purpose of Codea, hence the name, right? I also don’t want to mess around with complex computation just to make simple things (e.g. polygons, gestures, etc.). Besides, the iPad is speedy enough for common computation. :)

Drawing pixels to the screen would basically be noSmooth(); fill(color); rect(x, y, 1, 1);

You can do that now. Just wrap it in a function. You can’t read pixels from the screen, however.
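For example, the wrapper might look like this (a sketch, not an official API; note the pixel is still subject to any active transforms):

-- Draw a single "pixel" as a 1x1 rectangle at (x, y).
function setPixel(x, y, c)
    pushStyle()
    noSmooth()
    noStroke()
    fill(c)
    rect(x, y, 1, 1)
    popStyle()
end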

Edit: Those aren’t really the two options. Indirect drawing has fewer features and less speed, but is more generic (direct pixel access, low level). Direct drawing has more high-level features and more speed (primitive objects, high level).

The image class is designed for things like fractals, image filters (Gaussian blur), procedural textures, and stuff you just can’t do without direct pixel access.

For everything else, like immediately getting shapes, sprites and motion on the screen, you want the regular drawing API.
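As a concrete example of that direct-pixel-access use case (a sketch, assuming only the current image:set API), here is a small procedural checkerboard texture:

-- Build a checkerboard texture pixel-by-pixel.
function makeChecker(size, cell)
    local img = image(size, size)
    for x = 1, size do
        for y = 1, size do
            local on = (math.floor((x-1)/cell) + math.floor((y-1)/cell)) % 2 == 0
            local v = on and 255 or 40
            img:set(x, y, color(v, v, v, 255))
        end
    end
    return img
end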

I see. So, I think we’re abusing this image class. :D Sorry.

Then, if that’s the intention of the image class, I think you need to provide a function to capture the screen into an image. That way we could go back and forth between image and screen to combine their drawing features, and setContext() becomes moot. :)

How about building in Z-buffers and alpha channels, so you can do images overlaid on top of each other, among other things? Neat filter and image-processing-type renderings become possible.

@bee setContext() is better because it doesn’t require you to draw everything to the screen in order to capture it into an image. It gives you an avenue for changing image pixels with high level drawing functions.

@alvelda z-buffers and alpha channels are in there at the moment, though the alpha channel on image() isn’t correctly premultiplying through the colour channels right now - fixed in 1.2.5.

@simeon: Yes, you’re right. But it will create the illusion that the image class is similar to the screen. :D So, with the setContext() function, we get (all of?) the drawing API that is primarily for direct screen manipulation applied to the image class as well, but not the other way around (e.g. get pixel). Am I correct?

I don’t mind having that illusion (I personally don’t see it) if it affords flexibility and provides API simplicity.

setContext() sounds lovely.

But what I really want is a way to grab a chunk of screen into an image().

Hmm - also, I know we can copy image() to image() - but can we copy from a spritepack? If we could copy from the screen it’d be easy - but we can’t right now.

Finally - access to raw image data. That’s not a biggie, but I have some ideas that would be faster than setting each pixel…

This is all wishlist - despite nitpicking, there’s a ton of potential with what’s there already; plenty of room to explore…

To copy from the screen, with setContext, you would simply do this:


function captureScreen()
    -- Create an image the size of the screen
    local screenCap = image( WIDTH, HEIGHT )

    -- Redirect all drawing into the image
    setContext( screenCap )

    -- Re-run the frame's drawing code
    draw()

    -- Restore drawing to the screen
    setContext()

    return screenCap
end

Hmm. I guess to grab a chunk I could use clip(). Yes?

Actually, with setContext, I wouldn’t need to grab from the screen - in theory, I could just draw to an image and then sprite() it to the “real” screen as I saw fit.

You could also apply a transform before calling draw(), so you can capture just a portion of the screen.
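A sketch of that idea (a hypothetical helper, assuming the proposed setContext()): translate before re-running draw(), so only the wanted region lands in the image.

-- Capture the screen region whose lower-left corner is (x, y).
function captureRegion(x, y, w, h)
    local region = image(w, h)
    setContext(region)
    pushMatrix()
    translate(-x, -y) -- map the region's corner to the image origin
    draw()
    popMatrix()
    setContext()
    return region
end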