3D API Overview - 1.3.2 Beta

Thanks @Andrew — looks like it might be a bug when returning the matrix.

Here’s my cubemap code. I’m not doing anything particularly unusual, just using meshes with 3D coordinates and transforming in three dimensions when drawing. @Simeon Am I correct in thinking the normal scale, translate, and rotate methods simply transform the modelMatrix? I’m still very new to this stuff!

function setup()
     displayMode(STANDARD)

    parameter("Size",50,500,200)
    parameter("CamHeight", 0, 1000, 300)
    parameter("Angle",-360, 360, 0)
    
    -- all the unique vertices that make up a cube
    local vertices = {
      vec3(-0.5, -0.5,  0.5), -- Left  bottom front
      vec3( 0.5, -0.5,  0.5), -- Right bottom front
      vec3( 0.5,  0.5,  0.5), -- Right top    front
      vec3(-0.5,  0.5,  0.5), -- Left  top    front
      vec3(-0.5, -0.5, -0.5), -- Left  bottom back
      vec3( 0.5, -0.5, -0.5), -- Right bottom back
      vec3( 0.5,  0.5, -0.5), -- Right top    back
      vec3(-0.5,  0.5, -0.5), -- Left  top    back
    }


    -- now construct a cube out of the vertices above
    local cubeverts = {
      -- Front
      vertices[1], vertices[2], vertices[3],
      vertices[1], vertices[3], vertices[4],
      -- Right
      vertices[2], vertices[6], vertices[7],
      vertices[2], vertices[7], vertices[3],
      -- Back
      vertices[6], vertices[5], vertices[8],
      vertices[6], vertices[8], vertices[7],
      -- Left
      vertices[5], vertices[1], vertices[4],
      vertices[5], vertices[4], vertices[8],
      -- Top
      vertices[4], vertices[3], vertices[7],
      vertices[4], vertices[7], vertices[8],
      -- Bottom
      vertices[5], vertices[6], vertices[2],
      vertices[5], vertices[2], vertices[1],
    }

    -- all the unique texture positions needed
    local texvertices = { vec2(0.03,0.24),
                          vec2(0.97,0.24),
                          vec2(0.03,0.69),
                          vec2(0.97,0.69) }
                
    -- apply the texture coordinates to each triangle
    local cubetexCoords = {
      -- Front
      texvertices[1], texvertices[2], texvertices[4],
      texvertices[1], texvertices[4], texvertices[3],
      -- Right
      texvertices[1], texvertices[2], texvertices[4],
      texvertices[1], texvertices[4], texvertices[3],
      -- Back
      texvertices[1], texvertices[2], texvertices[4],
      texvertices[1], texvertices[4], texvertices[3],
      -- Left
      texvertices[1], texvertices[2], texvertices[4],
      texvertices[1], texvertices[4], texvertices[3],
      -- Top
      texvertices[1], texvertices[2], texvertices[4],
      texvertices[1], texvertices[4], texvertices[3],
      -- Bottom
      texvertices[1], texvertices[2], texvertices[4],
      texvertices[1], texvertices[4], texvertices[3],
    }
    
    -- now we make our 3 different block types
    ms = mesh()
    ms.vertices = cubeverts
    ms.texture = "Planet Cute:Stone Block"
    ms.texCoords = cubetexCoords
    ms:setColors(255,255,255,255)
    
    md = mesh()
    md.vertices = cubeverts
    md.texture = "Planet Cute:Dirt Block"
    md.texCoords = cubetexCoords
    md:setColors(255,255,255,255)
    
    mg = mesh()
    mg.vertices = cubeverts
    mg.texture = "Planet Cute:Grass Block"
    mg.texCoords = cubetexCoords
    mg:setColors(255,255,255,255)   
    
    -- currently doesn't work properly without backfaces
    mw = mesh()
    mw.vertices = cubeverts
    mw.texture = "Planet Cute:Water Block"
    mw.texCoords = cubetexCoords
    mw:setColors(255,255,255,100)
    
    -- stick 'em in a table
    blocks = { mg, md, ms }
    
    -- our scene itself
    -- numbers correspond to block positions in the blockTypes table
    --             bottom      middle      top
    scene = {   { {3, 3, 0}, {2, 0, 0}, {0, 0, 0} },
                { {3, 3, 3}, {2, 2, 0}, {1, 0, 0} },
                { {3, 3, 3}, {2, 2, 2}, {1, 1, 0} } }
        
end

function draw()
   -- First arg is FOV, second is aspect
   perspective(45, WIDTH/HEIGHT)

   -- Position the camera up and back, look at origin
   camera(0,CamHeight,-300, 0,0,0, 0,1,0)

   -- This sets a dark background color 
   background(40, 40, 50)

   -- Make a floor
   translate(0,-Size/2,0)
   rotate(Angle,0,1,0)
   rotate(90,1,0,0)
   sprite("SpaceCute:Background", 0, 0, 300, 300) 

    -- render each block in turn
    for zi,zv in ipairs(scene) do
        for yi,yv in ipairs(zv) do
            for xi, xv in ipairs(yv) do
                -- apply each transform we need - rotate, scale, then translate to the correct place
                resetMatrix()
                rotate(Angle,0,1,0)
                
                local s = Size*0.25
                scale(s,s,s)
                
                translate(xi-2, yi-2, zi-2)    -- renders based on corner
                                               -- so -2 fudges it near center
                
                if xv > 0 then
                    blocks[xv]:draw()
                end
            end
        end
    end
end

That’s correct @frosty. They have actually always transformed the model matrix, we just never exposed the model matrix directly before.
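
Once that matrix bug is sorted out you'll be able to see it directly. A minimal sketch, assuming modelMatrix() returns the top of the model matrix stack:

function setup()
    resetMatrix()
    print(modelMatrix())   -- identity to start with
    translate(10, 20, 30)
    print(modelMatrix())   -- the same matrix with the translation folded in
    rotate(45, 0, 1, 0)
    print(modelMatrix())   -- and again with the rotation multiplied on
end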

By the way, I really want to try adding ambient occlusion (by darkening the “inner” vertices) to your blocks. I think that would look great.

Nice idea! I just had a quick go and ended up with this:

Obviously not proper occlusion or anything, but hey :slight_smile:

Simeon: Ah, okay. That might explain why things didn’t work well when I tried setting the matrix to the value that I got out.

Since the discussion prior to this feature’s implementation started out as something completely different, and quickly degenerated into a fight about matrices(!), I’d like to share a couple of links about the process that might be useful to others, and so that I can refer to them in my questions.

Firstly, from http://www.opengl.org/resources/faq/technical/transformations.htm we have the explanation of what’s going on “under the bonnet” (aka hood).

  • Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.

  • Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.

  • Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.

  • Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.

Here’s a link for those who like formulae (not me - I like armadillos^H^H pictures) http://www.songho.ca/opengl/gl_projectionmatrix.html. If, like me, you prefer pictures then the workflow is in an image on this page: http://www.songho.ca/opengl/gl_transform.html. I found that very useful.
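
To make that pipeline concrete, here is a minimal sketch in plain Lua (hand-rolled row-major tables rather than Codea's matrix type, so nothing here depends on how that type is laid out) that pushes a single point through each stage:

-- multiply a 4x4 matrix (row-major nested table) by a homogeneous point
local function apply(m, p)
    local r = {}
    for i = 1, 4 do
        r[i] = m[i][1]*p[1] + m[i][2]*p[2] + m[i][3]*p[3] + m[i][4]*p[4]
    end
    return r
end

-- object coordinates, promoted to (x, y, z, 1)
local obj = { 1, 2, 3, 1 }

-- modelView and projection would normally come from your transforms and
-- perspective() call; identities here just keep the sketch self-contained
local identity = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} }
local modelView, projection = identity, identity

local eye  = apply(modelView, obj)      -- eye coordinates
local clip = apply(projection, eye)     -- clip coordinates

-- perspective divide gives normalized device coordinates in [-1, 1]
local ndc = { clip[1]/clip[4], clip[2]/clip[4], clip[3]/clip[4] }

-- the viewport transform maps NDC to window (pixel) coordinates
local winX = (ndc[1] * 0.5 + 0.5) * WIDTH
local winY = (ndc[2] * 0.5 + 0.5) * HEIGHT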

Okay, so what does all of this mean?

We start by defining an object in 3-space by specifying a bunch of 3-vectors as coordinates. These are the object coordinates. We don’t assume that we are standing at a particular point in space, so once our object is defined we are free to declare that we are looking at it from a particular place. This choice defines a new set of coordinates with the eye at the centre and so that up is up, left is left, and so on. The coordinates of our object in this system are the eye coordinates. Now comes the magic bit. We want to draw what we see on a piece of paper (iPad screen). So we hold up a piece of glass and draw on it exactly what we see through it. But we have a fair amount of choice as to how we hold up that piece of glass: is it orthogonal to us, skewed, rotated, how far away? The clip coordinates reorient space so that the glass is in a standard position and it is space that is skewed (don’t worry, it’s all relative; no 3D shapes are actually harmed in this process). Now the coordinates are projected onto this piece of glass, and finally drawn on the screen.

Now, what do we have control over? We have control over the first two steps in this. We can control how we are looking at the scene (the “eye coordinates”) and where we are holding the sheet of glass (the “clip coordinates”). What is important here is to define the transformations between them. To control the first, we set the Model and View matrices. To control the second, we set the Projection matrix.

There are two further issues. The first is that we actually have two matrices involved in the first step: the Model and the View matrices. The reason for this is that you often don’t just want to transform the coordinates, you want to transform the normal vectors as well, and they transform slightly differently. Separating the two allows this control. This is used in computing lighting effects.
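
A small made-up 2D example of what “transform slightly differently” means in practice:

local tangent = vec2(1, 1)          -- lies along a slanted surface
local normal  = vec2(1, -1)         -- perpendicular to it: dot product is 0
print(tangent:dot(normal))          -- 0

-- apply a non-uniform scale of 2 along x to the surface
local newTangent = vec2(2 * tangent.x, tangent.y)

-- transforming the normal the same way breaks perpendicularity
local naive = vec2(2 * normal.x, normal.y)
print(newTangent:dot(naive))        -- 3, not perpendicular any more

-- the inverse-transpose transform scales x by 1/2 instead, and that works
local correct = vec2(0.5 * normal.x, normal.y)
print(newTangent:dot(correct))      -- 0 again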

The other issue is that the matrices involved are 4x4 matrices, but the initial coordinates are 3-vectors. They are promoted to 4-vectors by appending a 1 at the end, so (x,y,z) becomes (x,y,z,1). Reading around, I’ve seen a lot of stuff about projective space and homogeneous coordinates. Whilst it isn’t completely unrelated, it’s really a load of high-falutin’ nonsense. What really matters is that this promotion allows the computer to do a transformation involving a translation with a single matrix multiplication.
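
Here is the promotion trick in miniature (numbers made up):

local tx, ty, tz = 10, 20, 30
local T = { {1,0,0,tx}, {0,1,0,ty}, {0,0,1,tz}, {0,0,0,1} }  -- a 4x4 translation matrix, row-major

-- multiply T by the promoted point (1, 2, 3, 1)
local p = { 1, 2, 3, 1 }
local q = {}
for i = 1, 4 do
    q[i] = T[i][1]*p[1] + T[i][2]*p[2] + T[i][3]*p[3] + T[i][4]*p[4]
end
-- q is { 11, 22, 33, 1 }: the translation happened via one matrix multiply

-- promoting with 0 instead of 1 marks a direction rather than a point,
-- and the same matrix then leaves it unchanged, which is what you want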

There’s probably lots of bits here that aren’t quite right, or are a bit vague. But I find I understand stuff better if I try to explain it (for example, I hadn’t twigged about how the Model and View matrices interacted). I’m going to experiment to see what happens. As I find out stuff, I’ll report back.

@Simeon: would it be possible to also get some 3D drawing primitives, like point(x,y,z) for example? A kind of fast meshgrid generator would also be nice. Just some ideas for drawing 3D maths functions or making geometric 3D drawings that account for hidden faces, hidden lines, … point(x,y,z) would also be nice for fluid dynamics simulations, something that is almost impossible today due to lack of speed. Doing the preprocessing on the GPU would speed that up significantly.
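
For now, something like the meshgrid can be approximated with the existing mesh API. A rough sketch, using only mesh() and vec3 as in the cube code above (makeMeshGrid and the ripple function are just made-up names):

function makeMeshGrid(f, n, size)
    local m = mesh()
    local verts = {}
    local step = size / n
    for i = 0, n - 1 do
        for j = 0, n - 1 do
            local x0, x1 = i*step - size/2, (i+1)*step - size/2
            local y0, y1 = j*step - size/2, (j+1)*step - size/2
            -- two triangles per grid cell, with height from the supplied function
            local a = vec3(x0, f(x0, y0), y0)
            local b = vec3(x1, f(x1, y0), y0)
            local c = vec3(x1, f(x1, y1), y1)
            local d = vec3(x0, f(x0, y1), y1)
            verts[#verts+1] = a; verts[#verts+1] = b; verts[#verts+1] = c
            verts[#verts+1] = a; verts[#verts+1] = c; verts[#verts+1] = d
        end
    end
    m.vertices = verts
    m:setColors(255, 255, 255, 255)
    return m
end

-- e.g. a ripple surface, 40x40 cells, 200 units across, drawn with surf:draw()
-- surf = makeMeshGrid(function(x, y) return 20 * math.sin((x*x + y*y) / 500) end, 40, 200)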

@Andrew_Stacey the reason for the separation of Model and View is so that the viewpoint can be changed without altering the model matrix. It’s not strictly necessary; you can do the whole thing in the Model matrix if you prefer — e.g. you might use translate in Codea if you want to shift the whole scene.

The View matrix is just there for your convenience: camera() writes to it, but other than that it’s up to you how it’s used. If you ignore it, it doesn’t affect anything.

Here’s how the modelViewProjection matrix is computed:

modelViewProjection = fixMatrix * projectionMatrix * (viewMatrix * topOfModelMatrixStack)

Ignore fixMatrix — it’s an orthographic unit projection that is only used when screen recording comes into effect to invert rendering on the Y axis (necessary because CoreVideo uses flipped texture data).

modelViewProjection is multiplied against each vertex in your scene.
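
In Lua terms that composition looks like this (a rough sketch, ignoring fixMatrix, and assuming the matrix accessors and * multiplication behave the way the formula reads):

-- somewhere in draw(), after perspective(), camera() and any transforms:
local mvp = projectionMatrix() * (viewMatrix() * modelMatrix())
-- mvp should match the combined matrix above; each vertex, promoted to
-- (x, y, z, 1), gets multiplied by it before the divide by w
print(mvp)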

This is so good! I’m still figuring out all of how it works, but I’ve managed to get something to render on the screen at last.

http://youtu.be/Yk74hJj4sEE

(How does one embed YouTube videos here?)

I’m going to have to rip out the innards of my shape explorer code to integrate this.

@Andrew_Stacey - just type the unmodified youtube.com link

The best way is actually to choose “Share” in YouTube, then “Embed”, then choose “Old Embed Code” with a resolution of 640x480 — if you do that the video comes out nicely sized.

If you just type the link these forums squish the video for some reason.

By the way @Andrew - next build fixes those matrix bugs. It just wasn’t copying them into Lua properly when accessed.

Like this:

<object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/Yk74hJj4sEE?version=3&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Yk74hJj4sEE?version=3&amp;hl=en_US" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>

Looks very nice so far. May I suggest something like GluUnproject(), which is handy for all sorts of things, like extracting the viewing frustum or finding an object in 3D space from a screen coordinate.

True, unProject would be a good one, @gunnar_z. And easy enough for us to include.

@Andrew_Stacey that looks really cool, can’t wait to see what you do with it.

@Zoyt good embed code except you should choose the 640x480 size option — it fits the forum size fairly nicely.

And by the way @Andrew_Stacey - it looks almost four-dimensional with the colors, no shading, and no area edges. And alright Simeon. Fixed it.

And a little fun animation you might want to check out just because I’m so excited about Codea 3D. Really sorry the video part doesn’t work in iOS because it doesn’t have autoplay.
You can find it on my blog here.

@Bortels: Sorry. I meant the video. Sadly, it’s tons and tons of lines of code (actually I did most of the generating in Hype). Sorry @frosty. Mine was zoomed in so it looked like it was a little displaced, but not all the way. If someone could help me figure out how to make this code work without triggering that bug, that would be nice.
<object width="600" height="600"><embed src="http://flurry.name/zoyt/codea/Codea%203D%20Thanks/Codea%203D%20Thanks.html" width="600" height="600" allowscriptaccess="always" allowfullscreen="true"></embed></object>
You can play around on it with my junked up and really old thread here. It has no value and I don’t care if you do something to it.
Thanks!
P.S. Sorry the videos don’t work on iOS. They don’t have auto play.

Nice demo, @Zoyt - the text effects are very old-school demoscene.

But - 27 lines? With the other demos embedded?

I’d like to see them. :slight_smile:

Er, @Zoyt, in my browser (Safari) your animation is fixed to the top left of the window and covers pretty much all of the content in this thread :confused:

I was just about to ask for what @gunnar_z suggested!

I think I might want a little more, though. Here’s the scenario: I have a load of shapes in 3D space which I render onto the screen via this method. The user then touches one, and that shape interacts with the user’s touch in some fashion. To do this, I need to translate between the touch and the object in the Codea program, and I think I need to be able to do it in both directions.

Firstly, I need to figure out which object has been touched. This might be tricky with GluUnproject because that takes a 3-vector in screen coordinates to a 3-vector in space, but I only have a 2-vector: the touch coordinates. That doesn’t correspond to just one point in 3-space but to a line. So what I want to do is loop through my objects, extract their screen coordinates, compare them with the touch coordinate, and figure out which was touched from that. For that, I need a code-level gluproject (or whatever it is called) which returns a 3-vector whose x,y coordinates are the screen coordinates and whose z coordinate is the depth.

Then, when I’ve figured out which object is being used, I need to translate the touch information (most likely the deltas) into information that the object can use. In this case, I know how to resolve the ambiguity inherent in going from 2D to 3D because I know that the original touch position must go to some coordinate relating to the object. So now I want a function that says “Return the 3-vector with the property that if I move the original object by this 3-vector then I get an apparent movement on the screen that matches the touch movement, and it should be parallel (in 3-space) to the screen.”

(Incidentally, I already do this in my 3D Shape Explorer, but I have to do it “by hand” so I lose the speed of using the GLU. However, doing it “by hand” is not so bad since I only have to do this when processing touches, which isn’t every frame. Still, it would make it easier to interact with touches.)
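
To make the first half concrete, here is the sort of thing I have in mind, as a rough sketch: the combined matrix is a plain row-major Lua table (so nothing is assumed about Codea’s matrix internals), project() returns screen x, y plus a depth, and pick() is a made-up helper that compares each object’s projected centre against the touch (obj.position and radius are hypothetical too).

-- project: object-space point -> screen coordinates plus depth
-- mvp is a row-major 4x4 nested table, however you choose to obtain it
local function project(mvp, p)
    local v = { p.x, p.y, p.z, 1 }
    local c = {}
    for i = 1, 4 do
        c[i] = mvp[i][1]*v[1] + mvp[i][2]*v[2] + mvp[i][3]*v[3] + mvp[i][4]*v[4]
    end
    local ndcX, ndcY, ndcZ = c[1]/c[4], c[2]/c[4], c[3]/c[4]
    return vec3((ndcX*0.5 + 0.5) * WIDTH, (ndcY*0.5 + 0.5) * HEIGHT, ndcZ)
end

-- pick: find the object whose projected centre is nearest the touch,
-- preferring the one closest to the camera (smallest depth)
local function pick(mvp, objects, touch, radius)
    local best, bestDepth
    for _, obj in ipairs(objects) do
        local s = project(mvp, obj.position)
        local d = vec2(s.x - touch.x, s.y - touch.y):len()
        if d < radius and (bestDepth == nil or s.z < bestDepth) then
            best, bestDepth = obj, s.z
        end
    end
    return best
end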