Set mesh opacity globally

A depth buffer is already used in Codea. For example, translate(x, y, z) will currently work in Codea 1.3.1, moving things on the z axis.

The Codea renderer is actually rendering a 3D scene locked to an orthographic projection — this is the case in the current version (and in all versions since the beginning).

All these extra functions will do is allow you to change some of the matrices that were previously fixed.

What I meant by depth buffering was to avoid having to manually sort triangles before they were added to the mesh. At the moment, I compute a load of stuff for the vertices and then do table.sort(vertices, function(a,b) blah blah end). If I could avoid that, it would be great.
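For concreteness, that manual sort looks something like this minimal sketch (plain Lua; the triangles table and its midz field are hypothetical stand-ins for whatever the real code computes):

-- Sort triangles back-to-front by a precomputed view-space depth
-- before feeding their vertices to the mesh. "midz" is a hypothetical
-- per-triangle midpoint depth in eye space.
local function sortBackToFront(triangles)
    table.sort(triangles, function(a, b)
        return a.midz < b.midz  -- more negative z is farther from the eye
    end)
end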

The Codea renderer is actually rendering a 3D scene locked to an orthographic projection … All these extra functions will do is allow you to change some of the matrices that were previously fixed. [emphasis added]

Now I'm less hopeful. The orthographic/perspective projection is not one of those matrices.

Will you remove that lock to allow perspective projection?

The orthographic/perspective projection was one of those matrices. I think I misled you when I told you that vertices were computed as follows:

 Position = ModelView * Vertex

ModelView, above, is incorrectly named. It should be ModelViewProjection. The Projection matrix is multiplied into the ModelView matrix on the CPU and then sent to the GPU for multiplication against each vertex.

I simply copied and pasted the code from my OpenGL shader, where I have it named as ModelView — the projection is kind of assumed, otherwise we would not be able to see the vertices on the screen.
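In other words, a minimal sketch (plain Lua, with matrices as row-major nested tables; mat4multiply is an illustrative helper, not Codea API):

-- Multiply two 4x4 matrices (row-major nested tables).
function mat4multiply(A, B)
    local C = {}
    for i = 1, 4 do
        C[i] = {}
        for j = 1, 4 do
            local s = 0
            for k = 1, 4 do
                s = s + A[i][k] * B[k][j]
            end
            C[i][j] = s
        end
    end
    return C
end

-- The projection is folded into the modelview once, on the CPU:
--   MVP = mat4multiply(Projection, ModelView)
-- and the GPU then computes Position = MVP * Vertex for every vertex.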

Edit: by the way, I have a 3D rotating plane with the following code:

function setup()
    parameter("Distance",-100,0,-20)
    parameter("Size",0.1,20,8)
    parameter("Angle",-180, 180, 0)
end

function draw()

    perspective(45, WIDTH/HEIGHT)

    -- This sets a dark background color 
    background(40, 40, 50)

    -- This sets the line thickness
    strokeWidth(5)

    -- Do your drawing here
    translate(0, 0, Distance)
    rotate(Angle,0,1,0)
    
    noSmooth()
    rectMode(CENTER)
    rect(0, 0, Size, Size)
end

As you can see, it creates a perspective effect on the rect as it is rotated around the Y-axis:

[Image: Perspective]

Just to be clear: Codea has always been using a projection matrix multiplied against each vertex. It was just fixed to an orthographic projection before now.
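For reference, that fixed orthographic projection is itself just another 4x4 matrix. A sketch of the standard glOrtho form (plain Lua; the parameters are illustrative):

-- Standard orthographic projection matrix (glOrtho form), mapping the
-- box [l,r] x [b,t] x [n,f] to the unit cube. Note w stays 1, so the
-- perspective divide has no visible effect here.
function orthoMatrix(l, r, b, t, n, f)
    return {
        { 2/(r-l), 0,       0,        -(r+l)/(r-l) },
        { 0,       2/(t-b), 0,        -(t+b)/(t-b) },
        { 0,       0,      -2/(f-n),  -(f+n)/(f-n) },
        { 0,       0,       0,         1           },
    }
end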

The proof of the pudding …, as they say. That looks right. Any chance of pushing that to beta?

But please stop saying things that imply that this is happening by multiplying a vector by a matrix. That just ain’t so. It may be just semantics, but this is my language - mathematics - that you are mangling. The transformation is more complicated than just matrix multiplication. It is true that all of the parameters are encoded by a matrix, but you are not simply applying that matrix.

I know it must seem as though I’ve been making a mountain out of a molehill here. Sorry about that. It’s … mathematics.

@Andrew_Stacey each homogeneous coordinate (vertex) is being multiplied by a matrix that encodes the projection and model transformation. I’m a bit confused about why you say this isn’t the case.

@Andrew_Stacey
When you apply a 4x4 matrix transformation to a homogeneous point (i.e. [x, y, z, w]), you get another homogeneous point in return (called the Clip Coordinate). Dividing this point by the homogeneous coordinate (w) gives you the perspective division (into Normalized Device Coordinates). This is done by the graphics hardware though, so as far as Codea is concerned the perspective transformation is just a matrix multiplication.

Mathematically the perspective divide isn't required for the projection matrix to 'work', as all points in a homogeneous space that differ by a constant multiple - in this case 1/w - represent the same point, i.e. it is scale invariant. The division is required to actually produce the image though (i.e. to extract the points from the homogeneous space into a 2D one).
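A minimal sketch of the divide (plain Lua; clip is a homogeneous point {x, y, z, w} produced by the combined matrix multiply):

-- Perspective division: clip coordinates -> normalized device coordinates.
function toNDC(clip)
    return { clip[1] / clip[4], clip[2] / clip[4], clip[3] / clip[4] }
end

-- Scale invariance: {2, 4, 6, 2} and {1, 2, 3, 1} are the same projective
-- point, and both divide to the NDC point {1, 2, 3}.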

http://www.opengl.org/resources/faq/technical/transformations.htm
Section 9.011 has more details.

This is rapidly getting … ridiculous. The short version of what I want to say is:

  1. Whatever code is needed to make Simeon’s example work, please implement it and get it into beta - I want to work with it.
  2. If I use that code and draw two rectangles which, from the viewer’s point of view, are one in front of the other, but I list them in the wrong order in my code, do they come out the right way around or the wrong way around? What if they are part of the same mesh?
  3. Don’t refer to “matrix multiplication” in the documentation.

Okay, here’s the long version.

I am a mathematician. I know that you know that. I’m just reminding you. Moreover, I am teaching about this stuff this semester. So it’s not that it’s something I half remember from when I learnt it. It’s stuff that I know.

As an end user of Codea, I don’t care where the stuff is taking place: whether it happens in Codea, in GL, or because the iPad sends the data away to a support centre in the middle of Birmingham where people sit and work out the transformations by hand. What I care about is: I specify a coordinate (3,4,5) and something is drawn on the screen, and I want to know what went on in the meantime. Take a look at page 23 of today’s lecture (6th March, beamer version). Question 3 on that slide is what I’m trying to work out here.

What I would like to happen is “Perspective Projection” (which I would call Stereographic Projection). From the OpenGL FAQ that Dylan linked to, this is a composition of several steps (a code sketch of the whole chain follows the list):

  1. Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.
  2. Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.
  3. Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.
  4. Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.
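Here is the promised sketch of the whole chain (plain Lua; transform and the viewport table are illustrative, not Codea API):

-- Apply a 4x4 matrix (row-major nested table) to a homogeneous point.
function transform(M, v)
    local r = {}
    for i = 1, 4 do
        r[i] = M[i][1]*v[1] + M[i][2]*v[2] + M[i][3]*v[3] + M[i][4]*v[4]
    end
    return r
end

-- Object coordinates all the way to window coordinates.
function project(vertex, modelView, projection, viewport)
    local eye  = transform(modelView, vertex)   -- step 1
    local clip = transform(projection, eye)     -- step 2
    local ndc  = { clip[1]/clip[4],             -- step 3: the divide,
                   clip[2]/clip[4],             -- not a matrix multiply
                   clip[3]/clip[4] }
    local wx = viewport.x + (ndc[1] + 1) * viewport.w / 2  -- step 4
    local wy = viewport.y + (ndc[2] + 1) * viewport.h / 2
    return wx, wy
end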

Now, some of those are happening currently. Exactly which, I’m not sure. What is fairly clear is that Step 3, Clip Coordinates to Normalised Device Coordinates, is not happening.

In the discussion, various people have referred to the matrices involved. Simeon has said that he will provide the ability to set them directly. This is great, but - and this is my key point - this has nothing to do with Step 3. Step 3 is not implemented by anything remotely resembling “matrix multiplication”.

As far as implementation is concerned, Simeon’s picture and code demonstrate that whatever he has said, his intention is to enable Step 3. So in terms of code I’m satisfied (though I would like to know about the ordering - that’s a follow-up question, and one that I can test when the above is in beta).

However, code isn’t everything. There’s also documentation to think of. And the link between the geometric transformation and the 4x4 matrix implementing it is, I suspect, not the simplest thing for people to understand. So there is great potential for confusion here, and good documentation could go a long way to alleviating it.

It would be very wrong to refer to the entire process as “matrix multiplication”. And it would be wrong to write the documentation in such a way that it could be misconstrued as saying this (in particular, the end user is not going to care about the distinction between what Codea does and what GL does).

I know this is just “mathematician being pedantic”. But there’s a reason. The ideal way to write mathematics is not in such a way that it be clear, but in such a way that it not be unclear. The idea is that it should be very, very hard to misunderstand the mathematics. That’s not so different from writing documentation.

As Dylan pointed out, step 3 is happening in the current version of Codea. The graphics hardware takes care of this step automatically. It is never explicit in the code (ours or yours), it is just a fundamental part of how OpenGL works to display images on screen.

Regarding the ordering: this is a bit tricky, as it behaves differently for transparent and non-transparent primitives.

Something is considered transparent if it has “blended pixels” — e.g. a transparent texture on a mesh, a primitive such as ellipse (or even a smooth() rect). Basically: anything that doesn’t fill, with opaque pixels, the entirety of the triangles it is composed of.

With that out of the way, every “fragment” (like a pixel) rendered by Codea writes its depth into the depth buffer. When new drawing takes place, Codea decides to draw the new pixel if its depth value is less than or equal to the value stored in the buffer.
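Conceptually, something like this minimal sketch happens per fragment (plain Lua standing in for what the hardware does; not anything you call from Codea):

-- Simulate the per-fragment depth test described above.
function depthTest(depthBuffer, x, y, newDepth)
    if newDepth <= depthBuffer[y][x] then  -- "less than or equal"
        depthBuffer[y][x] = newDepth       -- passes: record the new depth
        return true                        -- and the fragment is drawn
    end
    return false                           -- fails: fragment is discarded
end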

The reason transparent triangles are a special case is that they write to the depth buffer regardless of their opacity. They could be completely transparent and would still write to the depth buffer when rendered, occluding anything drawn behind them afterwards.

So for non-transparent polygons (such as a mesh with a solid texture, or colour) the draw-order does not matter. Polygons will draw in the correct order because their fragments will be tested against the value in the depth buffer before drawing.

With transparent polygons, you must draw them after drawing all non-transparent polygons and order them back-to-front. This will ensure that every fragment that should be visible gets rendered correctly.
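So the recipe looks something like this minimal sketch (the scene tables and their fields are hypothetical; only mesh:draw() is real Codea API):

-- Draw opaque meshes first, in any order; the depth buffer resolves
-- visibility. Then sort transparent meshes back-to-front and draw them last.
function drawScene(opaque, transparent, cameraZ)
    for _, item in ipairs(opaque) do
        item.mesh:draw()
    end
    table.sort(transparent, function(a, b)
        return math.abs(a.z - cameraZ) > math.abs(b.z - cameraZ)  -- farthest first
    end)
    for _, item in ipairs(transparent) do
        item.mesh:draw()
    end
end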

Does that make sense?

As Dylan pointed out, step 3 is happening in the current version of Codea.

Then I’m very confused. I can’t reproduce your picture with your code, or anything looking remotely like your code. It complains about the perspective function. Even when I take that out, I have great difficulty getting your code to produce anything drawn.

So while I accept that OpenGL is technically doing all of the steps, the observable result is that it is somehow crippled. Exactly how to make it so that I, as an end-user, can exploit that is, as the song says, not my department. That it appears to be straightforward to enable is great. Please do so. If trying to figure out what I’m getting at is stopping you, then please ignore me. I’d rather you did the code.

What you say about the depth ordering sounds absolutely fantastic. I don’t think that I do have transparent stuff, but if I did then what I could do is make it opaque when doing rapid changes, and then when things have settled do a one-shot reordering to fix the transparent stuff. Now I’m even more keen to have this capability.

So please get this into beta! I’ll fight you over the documentation at a later date.

I’ll try to get a beta out tonight. I’m keen to get your feedback on the API. And keen to see it put to the test.

@Andrew_Stacey
You are correct, step 3 is not matrix multiplication, we know this so there isn’t anything to fight about :slight_smile:

We only have control over steps 1 and 2 in hardware-accelerated 3D (and the final step with glViewport, but that usually isn’t very interesting), and those consist entirely of matrix multiplication. So it is very common for 3D programmers to refer to the projection matrix as “doing the perspective divide”: it sets the homogeneous coordinates to their correct values, which sets up the system for the hardware to do the divide correctly.
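For illustration, a sketch of the standard perspective matrix (gluPerspective form, plain Lua): the bottom row {0, 0, -1, 0} copies the negated eye-space z into clip w, which is exactly the “setting up” described above.

-- Standard perspective projection matrix (gluPerspective form, row-major).
function perspectiveMatrix(fovDegrees, aspect, near, far)
    local f = 1 / math.tan(math.rad(fovDegrees) / 2)
    return {
        { f/aspect, 0,  0,                     0                     },
        { 0,        f,  0,                     0                     },
        { 0,        0,  (far+near)/(near-far), 2*far*near/(near-far) },
        { 0,        0, -1,                     0                     },
    }
end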

PS
I have a Bachelor’s in Mathematics and Computer Science, so while not technically a practicing mathematician, I am trained in it and understand the need for precision in terms :slight_smile:

Phew!

Whooosh.

That would be the sound of most things in this thread going over my head.

I came up on the business side and was taught the same math (if it could be called that) which claims credit default swaps are nifty and safe.