3D scene, GUI transparency, and static background issues...

Hello fellow Codea fans,

I’m encountering a couple of issues when drawing with different projection modes in Codea (a static background and transparent GUI sprites, combined with a perspective-projected scene).

First issue… In OpenGL, to draw a static background I would disable depth testing, draw the background, then re-enable depth testing and draw the scene.

The fastest way I found to draw a static background in Codea was to draw the background first (Codea seems to default to an orthographic projection, so there’s no need to set it explicitly), then switch to a perspective projection and draw the scene.
However, when I do that “normally”, the background is still drawn over the scene (orthographic content always seems to end up on top of perspective content).

I found that the only fix was to set the z value of my background to -10 (any other value and it goes back to being drawn over the perspective-projected scene).
This works fine, but it seems too odd to be the proper way to do it. Can anyone help? :stuck_out_tongue:
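For context, here is a rough (untested) sketch of what my draw() does now; the background sprite name and the drawScene() helper are placeholders for my own code:

```lua
function draw()
    -- Codea starts each draw() in an orthographic projection,
    -- so the background can be drawn straight away
    pushMatrix()
    translate(0, 0, -10)  -- the odd z = -10 workaround described above
    sprite("Documents:Sky", WIDTH/2, HEIGHT/2, WIDTH, HEIGHT)
    popMatrix()

    -- then switch to perspective for the 3D scene
    perspective(45, WIDTH/HEIGHT)
    camera(camX, camY, camZ, lookX, lookY, lookZ, 0, 1, 0)
    drawScene()  -- placeholder for my terrain meshes
end
```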

The second issue I’m having is overlaying images with alpha. For example, when drawing a virtual stick, the transparent part of the image shows the background colour, not whatever is drawn under it.

For example:

-- not setting an orthographic projection, since it seems to be
-- activated automatically at the start of every draw() call
sprite("Planet Cute:Character Boy")

camera(camX, camY, camZ, lookX, lookY, lookZ, 0, 1, 0)

It seems as if things drawn in a perspective projection and things drawn in an orthographic projection actually end up in two different buffers?

Anyway, I guess manually rendering the image in the perspective projection on the near plane of the view frustum would make transparency work, but is that the only way?

Finally, @Simeon, are there any plans to give the user control over things like polygon fill mode and backface/frontface culling? That would allow some cool “cheap” effects like outlining meshes (à la cartoon rendering), etc…



One way to do the static background might be to use an ortho projection like so:

ortho( 0, WIDTH, 0, HEIGHT, -maxDepth, maxDepth )

This will allow you to render your background at the desired depth without it being clipped. The orthographic projection is set up at the start of every draw() call; however, it uses default near and far clipping planes of -10 and 10.
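A minimal sketch of that approach (the maxDepth value, sprite name, and drawScene() helper are illustrative placeholders, not actual Codea defaults):

```lua
local maxDepth = 1000  -- deep enough to contain the whole scene

function draw()
    background(0)
    -- widen the default -10..10 orthographic clipping range
    ortho(0, WIDTH, 0, HEIGHT, -maxDepth, maxDepth)
    pushMatrix()
    translate(0, 0, -maxDepth + 1)  -- push the background near the far plane
    sprite("Documents:Sky", WIDTH/2, HEIGHT/2, WIDTH, HEIGHT)
    popMatrix()

    -- switch to perspective for the 3D scene
    perspective(45, WIDTH/HEIGHT)
    camera(0, 50, -100, 0, 0, 0, 0, 1, 0)
    drawScene()  -- placeholder
end
```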

Perspective and ortho both draw to the same buffer — those functions simply manipulate the projection matrix (you can read and set this directly with the projectionMatrix() function).
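For example, an untested sketch of reading and restoring the projection matrix directly (purely illustrative; the ortho values are arbitrary):

```lua
-- read the current projection matrix (a 4x4 matrix value)
local savedProjection = projectionMatrix()

-- set up a custom projection for some drawing...
ortho(0, WIDTH, 0, HEIGHT, -1000, 1000)
-- ... draw background here ...

-- then put the previous projection back
projectionMatrix(savedProjection)
```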

We will expose the more advanced features — backface culling, depth testing control and polygon fill mode. Could you add issues for these to the tracker? It’s the best way to ensure that I will remember to do it. https://bitbucket.org/TwoLivesLeft/codea/issues

@Simeon - Thanks for your reply!
Heh, this explains why rendering the background at -10 worked :slight_smile:

However, transparency is not working for objects I want to draw “over” the 3D scene; it only works with other meshes using the orthographic projection.
Actually, transparency only seems to work between meshes using the same projection matrix.
Is this a bug, or am I not doing/understanding it properly?

Currently I need to calculate the near plane of the view frustum and draw my GUI there to get sprite transparency to work over the 3D scene. I’m already doing those calculations for culling, but is that really the only way? :frowning:

I’ll add my requests about polygon mode to the issue tracking now, thanks :slight_smile:



Sort of related to this…

In the 3D example with the basic shapes, if you lower the alpha, different filtering effects occur depending on the viewing direction, none of which look exactly like 2D filtering.

I can’t explain it any better than that run-on sentence.

Turn down the alpha on the two shapes in the 3D example and you’ll get the picture (which is better than 1000 words trying to describe it).

I wonder if this is an ordering or depth buffer issue. Do you have a small piece of sample code that can demonstrate this? That would really help with debugging.

@Ipad41001 - Yeah, I see what you mean. I’m drawing lots of chunks of terrain (so many different meshes), and when using wireframe mode I notice that transparency isn’t always working.
It really feels like a glitch to me :confused:

@Simeon - Well the code I posted above is most of my draw function.

I didn’t have time to try it with demo code. My iPad was about to run out of battery, and the charger is with my sleeping, kungfu-ready wife (it’s 4 in the morning ^^). Here is the full code: http://pastebin.com/jP0CAYSC

I’m sorry I can’t share smaller code for now; I’ll try to reproduce this with the basic shapes example once it’s charged up tomorrow.

I would like transparency to work for both the sky and the terrain, but it’s working for neither :frowning:
I’m drawing two sprites: one shows the background colour through its transparent pixels, while the other lets you see the sky through them. The only difference between them is the z value. Why isn’t the first one letting the sky through?

When you move the terrain over to the sprites’ locations, the first sprite sits on top but shows no transparency, while the second is rendered properly behind the terrain.



You need to draw the transparent objects last, after all other drawing. Are you doing this?

To follow up on Simeon’s comment.

When the renderer (OpenGL) is rendering a triangle, then for each pixel inside the triangle (as it will appear on the screen), it has to decide “Is this pixel’s colour already decided by an object that will appear in front?” If so, it ignores that pixel. If not, it paints it with the colour from the new triangle. To figure this out, it remembers the “depth” for each pixel and compares that with the depth of the pixel coming from the new triangle.

The crucial point is that alpha has no bearing on this whatsoever. As far as the depth algorithm is concerned, the alpha of the colour is ignored: a pixel is either painted or not. This means that if you want a transparent pixel to be drawn with the lower elements (partially) showing through, those lower elements have to be drawn first. If they are drawn afterwards, they will fail the depth test and not be drawn at all.

Sadly, this means that you need to depth-sort your transparent triangles manually and paint them onto the screen in the right order. You can leave all the opaque ones to the GPU providing they are drawn before the transparent ones. If you know that your transparent ones won’t overlap then you can simply draw the opaque ones and then the transparent ones, but if they might overlap you have to manually sort them.
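A rough, untested sketch of that manual back-to-front sort in Codea; the transparentObjects table, its fields, and camPos are my own assumptions, not anything from the original code:

```lua
-- assume transparentObjects is a list of { m = mesh, pos = vec3 }
-- and camPos is the same vec3 position passed to camera()
local function drawTransparent(transparentObjects, camPos)
    -- sort back-to-front: farthest from the camera drawn first
    table.sort(transparentObjects, function(a, b)
        return a.pos:dist(camPos) > b.pos:dist(camPos)
    end)
    for _, obj in ipairs(transparentObjects) do
        pushMatrix()
        translate(obj.pos.x, obj.pos.y, obj.pos.z)
        obj.m:draw()
        popMatrix()
    end
end
```

Note this sorts by object centre, which is only an approximation: triangles within a single large mesh may still blend in the wrong order.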

I tried something like this on my roller coaster, where I had a translucent tube. It worked well with the stars, but because you could see other parts of the tube through the part you were looking at, the order in which they were drawn mattered. As there are several thousand vertices, I decided against depth-sorting them (though the sort could probably have been optimised considerably) and went with an opaque half-tube, which looks quite fun but sadly loses the translucent effect.

The algorithm for handling opacity correctly would be quite tricky. At the moment, the GPU only has to remember the current colour and current depth for each pixel. For correct opacity, it would have to remember a stack of colours and depths for each pixel, so that each new fragment could be slotted into this stack at the correct depth.

@Simeon - oh jeez… I’m so sorry I somehow completely forgot about drawing transparent objects last…

I started on the GUI after getting the background to work, and for some reason I also drew the buttons before drawing my scene…
I guess my brain wasn’t working right…no more coding for me this late at night (hey, that’s a rhyme !)

@Andrew_Stacey - Thanks for the added info, love the roller coaster demo



No problem @Xavier. @Andrew_Stacey’s explanation is perfect; I hope that solves your issues?

@Simeon - oh yes absolutely :slight_smile: