Set mesh opacity globally

(I’ll put this on the issue tracker as well - remind me if I forget.)

The more I use meshes, the more I like them.

I’ve been using them for quickly drawing complicated shapes, but have run into something that I’m turning into a feature request: the ability to set the opacity globally.

As I understand it, mesh:setColors sets all the colours of the current list of vertices, so it’s a convenience function. But setting the alpha of each vertex isn’t always quite right. When using a mesh to draw a complicated shape, it can be computationally irritating to ensure that the triangles don’t overlap. Sometimes, it’s easiest just to draw a shape with overlapping triangles. But when this happens, the opacities add, meaning that the overlaps are obvious whenever the shape isn’t completely opaque. It would be nice to be able to render the whole mesh and set its opacity so that the opacity on overlaps is the same as the opacity on the rest.

I could do this by rendering to an image and then tinting the image with an appropriate alpha, but that adds an extra step to something that oughtn’t to need it.
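The additive effect described above is just standard alpha compositing: where two translucent triangles overlap, the “over” operator gets applied twice. A quick sketch of the arithmetic (plain Python rather than Codea’s Lua, since only the numbers matter here):

```python
# "Over" compositing: resulting alpha after drawing a layer with
# alpha `top` over a background that already has alpha `bottom`.
def over_alpha(top, bottom):
    return top + bottom * (1.0 - top)

single = over_alpha(0.5, 0.0)                    # one 50%-opaque triangle
overlap = over_alpha(0.5, over_alpha(0.5, 0.0))  # two overlapping ones

print(single)   # 0.5
print(overlap)  # 0.75, so the overlap is visibly darker
```

Flattening the mesh to an image first and then tinting with an alpha of 0.5 applies that alpha exactly once everywhere, which is what the request amounts to.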

Mesh this, mesh that. It seems meshes have got my attention and interest. Thanks to @xavier’s code, which I’m still trying to understand. The terminology (vertex, etc.) and my limited English in this area make my study a bit harder. Thanks to Google and Wikipedia! :smiley:

Sorry for the interruption. Just ignore me and keep the discussion going. Thank you. :slight_smile:

@Andrew I’m not sure if I’m clear on this. Do you mean you’d like a way to set the opacity on all vertices to a specific value, while maintaining the RGB components of those vertices?

@bee - My example probably isn’t the best code to learn about the mesh api as there is a lot of other stuff that might cloud what you’re after. I was planning to make a very simple Codea tutorial on 3D projections/transformations, but it will be useless with the next version of Codea :stuck_out_tongue:

@Simeon - That would actually be nice: something like overloading the function setColors(R, G, B, A) with setColors(A)?

No. I’d like to be able to set the opacity of the mesh as a single object.

So if the triangles are at, say, (0,0), (100,0), (100,100) and (0,0), (100,0), (0,100) then when I set the global opacity I see a single shape, not two overlapping triangles.

Take a look at the PGF Manual, page 241 on transparency groups for the idea of what I’m on about.

@xavier: Useless? How come?

@bee - Well right now we need to project our 3D points to our 2D screen, which means quite a few calculations.
Next Codea version will have built-in matrices and vec3 for meshes. You’ll be able to draw a cube in just a few lines of code :slight_smile:

But meshes can take vec3 objects, I believe. The problem is that these are projected orthogonally to the screen. What we really want is stereographic projection and I haven’t seen that in the proposal (as it’s not implemented by a matrix).

I like this idea. It’s like a tint for the mesh. In fact, would applying the current tint to the mesh be a reasonable way of doing this?

@Andrew_Stacey - What do you mean by “stereographic projection”? Wikipedia tells me it’s a specific way of projecting a sphere onto a plane?
Either way, if the Codea update brings matrix transformations, couldn’t you put that in matrix form, or is that not possible?

@Xavier: It isn’t a linear transformation, so no: it can’t be put in matrix form.

It might be a term that is used for many different things, so to be clear: by stereographic projection I mean that you project your object onto a plane from a point (the “eye”). It’s what you do if you look at a scene through a window and then draw on the window exactly what you see (so it is the right way to project 3D to 2D and make it look “realistic”). If you look for the videos of my 3D shape explorer then you might see what I mean as it’s what I use there.

@Nat: Yes, that would be good. But it has to be applied at the right time. Applying it to the vertices won’t work due to the overlap issue. So it has to be applied to the rendered mesh.

@Andrew: I think you mean perspective projection. It can be expressed as a 4x4 matrix if you use homogeneous coordinates (4d vectors) for your points.
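Nat’s construction is easy to check numerically. The sketch below (plain Python, with a deliberately minimal projection matrix, not anything taken from Codea or OpenGL) copies z into w via a 4x4 multiplication; the subsequent divide by w then yields (x/z, y/z). The multiplication itself is linear, and only the final division is not:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Minimal perspective matrix: last row copies z into w.
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 1, 0]]

x, y, z = 3.0, 4.0, 2.0
clip = mat_vec(P, [x, y, z, 1.0])       # -> [3.0, 4.0, 2.0, 2.0]
ndc = [c / clip[3] for c in clip[:3]]   # the perspective divide
print(ndc[:2])  # [1.5, 2.0], i.e. [x/z, y/z]
```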

@Andrew the next update will retain vec3.z on meshes (rather than discarding it). In addition you will be able to set the current 4x4 transform matrix — for example, to integrate a perspective transform.

Regarding the opacity setting — you mention that it needs to be applied to the rendered mesh. So internally we’d need to render all meshes to images just to support this setting, which would hurt performance quite a lot. I think perhaps having a helper function (mesh2image()) might be a better way to go. Though maybe it’s too specific for us to implement, and it’s not too hard to implement on the Lua side.

@Nat: Sorry, I’m going to pull rank here. There is no way to represent stereographic (aka perspective) projection via a matrix. It is possible to simplify it using matrices but you cannot take in a 3-vector, apply a matrix, and produce the right 2-vector as output. It also cannot be done via an affine transformation, so it can’t be done using those “homogeneous coordinates” either.

What you can do is transform your data using affine transformations before and after the stereographic projection. What this gains you is that you only need to encode one projection, and so it might as well be (x,y,z) -> (x/z,y/z). But the point of doing it in the heart of Codea, rather than as a user implementation, is speed: looping over all the vertices in a mesh can take some time, so you probably want to encode the whole thing as a single function rather than as a composition of several.

@Simeon: Hmm, in that case I’ll work around it and make sure that where I really care about this then my mesh doesn’t have overlaps. Rendering to an image each time is going to be expensive if my mesh is dynamic. I can work around this problem - I’d rather you concentrated on better sorting routines.

In addition you will be able to set the current 4x4 transform matrix

Huh? What’s the fourth coordinate? Are we leaving the Galilean system and leaping straight into relativity? Or are you referring to the same bit as Nat on the Wikipedia page on perspective projection? In which case, I refer you both to the image:

final step of projection

which is the non-linear step. For the other steps, you don’t need 4x4 matrices (requiring 16 numbers). You want 3x3 + translation (only 12 numbers). Indeed, most people will only want to combine rotation, scaling, and translation, in which case the entire transformation can be encoded as a unit quaternion, a real number, and a 3-vector: 8 numbers (ignoring the normalisation on the quaternion which would get you down to 7).
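The quaternion part of that encoding is easy to sketch (plain Python here, as Codea has no quaternion type; the scale and translation would then be a separate real number and 3-vector). A rotation acts on a vector v as q v q*, with q a unit quaternion:

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q: q * (0, v) * conj(q)."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    _, rx, ry, rz = q_mul(q_mul(q, (0.0,) + tuple(v)), conj)
    return (rx, ry, rz)

# 90-degree rotation about the z-axis: (1,0,0) should go to (0,1,0).
half = math.radians(90) / 2
q = (math.cos(half), 0.0, 0.0, math.sin(half))
print(rotate(q, (1.0, 0.0, 0.0)))  # approximately (0, 1, 0)
```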

Take a close look at the Space bit on my 3D shape explorer. This is exactly what I do there.

@Andrew that’s exactly it. Rendering an image each time is expensive, and is what we would have to do internally in order to treat the mesh as an image with regards to opacity.

Regarding the 4x4 transformation matrix: OpenGL uses a 4x4 matrix with homogeneous coordinates for points. Irrespective of necessity, it is the fastest way to operate — because the GPU will do the matrix multiplication against each point in the mesh. This is already being done in Codea for every frame (for every primitive’s vertex, not just meshes) — so we might as well allow you to set that matrix.

To make this a little bit clearer, here is what Codea does for every vertex on every frame:

    Position = ModelView * Vertex

Where ModelView is a matrix combining the current View matrix (by default an orthographic projection the width and height of the view) and the Model matrix, which is a 4x4 matrix representing the current transform state.

This is the same matrix that gets copied when you do pushMatrix, and it gets set to identity when you do resetMatrix.

This gets executed on the GPU. It’s extremely fast. I believe the iPad 1 hardware can process 15-30 million vertices per second, the iPad 2 is probably double that.

And in that setup, a 3-vector (x,y,z) is first promoted to the 4-vector (x,y,z,1). That means that a single matrix multiplication can handle an arbitrary affine transformation. Neat. You might need to supply some auxiliary functions for translating “traditional” transformations into 4x4-matrix form. As this is done on the GPU, I see why it’s so fast and why it’s what you would expose. No argument from me there.
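The promotion trick can be checked in a few lines (plain Python, not Codea code): with last row (0,0,0,1), a 4x4 matrix applied to (x,y,z,1) computes exactly Rv + t, so rotation/scale and translation collapse into one multiplication.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Upper-left 3x3 scales by 2; last column translates by (5, 6, 7).
M = [[2, 0, 0, 5],
     [0, 2, 0, 6],
     [0, 0, 2, 7],
     [0, 0, 0, 1]]

v = [1.0, 2.0, 3.0]
out = mat_vec(M, v + [1.0])  # promote to (x, y, z, 1)
print(out[:3])  # [7.0, 10.0, 13.0], i.e. 2*v + (5, 6, 7)
```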

The orthographic projection matrix should be something like [1 0 0 0; 0 1 0 0]. So the total result is to apply a 2x4 matrix to the vector (x,y,z,1).

This still does not get us perspective projections. So while I would welcome the ability, for some projects it wouldn’t help. From what you write, it would appear that I have to apply the matrix last of all. But with perspective projections I want to apply a matrix, then apply the projection, and finally apply another matrix. So I still have to loop over my vertices and apply a transformation to each one, and then the GPU stuff doesn’t really save me a lot.

I was thinking of exposing the View matrix to you as well. This would allow convenient perspective projection, at least the way current 3D games and applications handle it through OpenGL.

There’s clearly a language disconnect here.

I searched for a bit more information. Unfortunately, most explanations seem geared for people who don’t understand mathematics but do understand code, which is the complete opposite of my situation! I did find the following site: which had quite a detailed explanation. In there is the paragraph:

Note that both xp and yp depend on ze; they are inversely proportional to -ze. This is an important fact for constructing the GL_PROJECTION matrix. After the eye coordinates are transformed by the GL_PROJECTION matrix, the clip coordinates are still homogeneous coordinates. They finally become normalized device coordinates (NDC) when divided by the w-component of the clip coordinates.

which might help to explain my confusion.

To do perspective projection (as that’s the OpenGL name) you have to do three steps:

  1. Apply a spatial transformation. This is done using the GL_PROJECTION matrix. The fact that it is a 4x4 matrix acting on so-called homogeneous coordinates is an implementation detail and nothing special.
  2. Apply the projection transformation. This is referred to in the final sentence of the quoted segment.
  3. Apply a view transformation. This is an application of a 2x2 matrix.

Now, when you write something like:

To make this a little bit clearer, here is what Codea does for every vertex on every frame:

   Position = ModelView * Vertex

then I interpret that as implementing at most the first and third steps, because the middle one cannot be implemented as matrix multiplication. I was therefore assuming that it was not implemented at all. Reading around, I find that it can be implemented. That, in effect, is what I’m asking: is it?

A-ha! Clicking further on that site I got to and the diagram at the top of that is just what I need. The point is that the “divide by w” step is not a matrix transformation. However, there are no parameters involved in that step so to specify the entire transformation, one only needs to give the two matrices, the so-called perspective matrix and view matrix.

So what I want to know is this: is the “divide by w” step included in what you are going to implement when you put in true 3D support in meshes (for example)?

If yes, brilliant! Then, if you expose the two matrices I have all the pieces I need. Just please, please, please don’t say that you are “applying a matrix”. You are applying a non-linear transformation whose parameters are specified by a matrix. But the relationship between matrices and linear transformations is so strong that if you don’t specify that you aren’t doing this, the implication is that you are.

If no, please do! Otherwise, I still have to loop over my vertices so I may as well apply the transformation myself.

(The OpenGL commands appear to be glFrustum() and gluPerspective(). Are these involved at all?)

When I say “view” I did not mean the viewport transform, I meant a separate 4x4 matrix that specifies the camera’s transform. This could be incorporated into the model matrix, but it is common to separate them.

The “divide by w” could be incorporated into the projection matrix, which we will allow you to set; we’ll probably also include some utility functions such as perspective( fov, aspect, near, far ). Edit: I think I don’t mean “divide by w”; the perspective division can be incorporated into the projection matrix. OpenGL projects onto the screen, it is currently and has always been doing that in Codea — projecting a 3D scene onto a 2D image plane.
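For concreteness, here is roughly what such a perspective( fov, aspect, near, far ) helper would compute, following the standard gluPerspective formula (a Python sketch under that assumption; the eventual Codea function may differ in details). After the divide by w, points on the near plane land at NDC z = -1 and points on the far plane at z = +1:

```python
import math

def perspective(fov_deg, aspect, near, far):
    """gluPerspective-style projection matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

def project(m, v):
    """Apply the matrix to (x, y, z, 1) and do the perspective divide."""
    h = v + [1.0]
    clip = [sum(m[i][j] * h[j] for j in range(4)) for i in range(4)]
    return [c / clip[3] for c in clip[:3]]

P = perspective(60.0, 4 / 3, 1.0, 100.0)
print(project(P, [0.0, 0.0, -1.0])[2])    # approximately -1 (near plane)
print(project(P, [0.0, 0.0, -100.0])[2])  # approximately +1 (far plane)
```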

In fact, here is the 3D API I had in mind (in addition to the matrix, vec4 and improved vec3 classes). Feel free to suggest improvements.

-- This loads matrix m, replacing the current model matrix
loadMatrix( m ) -- "modelMatrix" below could make this redundant

-- This multiplies matrix m against the current model matrix
-- Yes, it's called "apply", but we could name it "multMatrix";
-- "applyMatrix" is the more consistent name, though
applyMatrix( m )

-- These configure the view matrix so that the camera "looks at" a given point.
-- Similar to gluLookAt. Called without arguments it resets the view matrix to 
-- default parameters.
camera( eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ )

-- These set the projection matrix using perspective projection. 
-- No parameters gives some default projection
perspective( fov, aspect, zNear, zFar )

-- These set the projection matrix using orthographic projection. 
-- No parameters gives the default ortho( 0, WIDTH, 0, HEIGHT, -10, 10 )
ortho( left, right, bottom, top, near, far )

-- These replace the matrix directly with a user matrix for the appropriate stage
-- Called without arguments to get the current matrix.
modelMatrix( m ) -- Could also be called "transformMatrix"
viewMatrix( m )
projectionMatrix( m )

OpenGL projects onto the screen, it is currently and has always been doing that in Codea — projecting a 3D scene onto a 2D image plane.

The difficulty with that phrase is that the word “projects” is ambiguous here! And since no Codea built-in function takes 3D vector objects and does anything with them (just checked mesh: it throws away the z-coordinate), I can’t test what is actually going on.

However, the fact that the code you’ve posted included “perspective” and “ortho” as separate functions gives me hope! Together with what I’ve read about OpenGL, I’m almost sure that this is going to be what I would like.

NB It’s going to be messy trying to explain how it all works.

So at this point, I think I’ll retire from the fray and let you get on with coding it.

Oh, except one more thing: what about z-level ordering? Seems to be called depth-buffering in OpenGL. Can that be enabled on a mesh? Then I wouldn’t have to sort my triangles, which would be fantastic.