3D API Overview - 1.3.2 Beta

@Andrew_Stacey there are a couple of ways to do “picking” in OpenGL.

Unproject: the way to get the “depth” of a screen-space coordinate typically involves reading the depth buffer value for the touched pixel. The depth buffer is not readable in Codea (and even if it were, reading it back would be slow).

Raycasting: If you store your triangle data in a hierarchical data structure, such as an octree or bounding volume hierarchy, you can construct a ray or line segment from the touched point. You can then efficiently check where it intersects with your scene and return the exact triangle. This is probably the most modern way to pick objects — it’s ridiculously easy with a 3D physics engine, since the physics engine will maintain the efficient spatial partitioning of the scene.
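The intersection test at the heart of this is language-agnostic. Here is a minimal sketch in plain Python (not Codea code) of the standard Möller–Trumbore ray/triangle test, without the octree/BVH bookkeeping that makes it fast over a whole scene:

```python
# Möller–Trumbore ray/triangle intersection -- illustrative sketch,
# not Codea API. Vectors are plain (x, y, z) tuples.
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the triangle, or None on a miss."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv     # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv            # distance along the ray
    return t if t > eps else None
```

A spatial structure (octree/BVH) just narrows down which triangles to run this test against; the nearest positive t across the candidates is the picked triangle.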

Pickbuffer: This is a really old-fashioned way to pick objects, but you can probably pull it off in Codea without much hassle. Basically you render all the objects in your scene into an image the size of your screen (the “pickbuffer”), with each mesh rendered in a different solid color. Then when the user touches a point on the screen you simply query the color of the pickbuffer pixel at that location and return the associated mesh. (Thinking about this some more: you could actually render the meshes into a lower-res image for speed, at the cost of pixel accuracy.)
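The pickbuffer bookkeeping is simple enough to sketch. The following plain-Python fragment (hypothetical names, with a dict standing in for the rendered image) shows the colour-ID encoding and the lookup:

```python
# Colour-ID pick buffer -- illustrative sketch of the bookkeeping only,
# not Codea rendering code. Each mesh index gets a unique RGB triple; the
# "render" step is simulated by writing colours into a dict keyed by pixel.
def index_to_color(i):
    # 24-bit encoding: room for ~16.7 million distinct pickable meshes
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def color_to_index(c):
    return (c[0] << 16) | (c[1] << 8) | c[2]

def pick(pickbuffer, x, y):
    """Return the mesh index at pixel (x, y), or None for background."""
    color = pickbuffer.get((x, y))
    return color_to_index(color) if color is not None else None

# "Render" mesh 3 into a couple of pixels of the pick buffer, then query it.
buffer = {(10, 20): index_to_color(3), (11, 20): index_to_color(3)}
```

In a real implementation the dict would be an offscreen image drawn with each mesh tinted to its ID colour, and the query would read a pixel from that image.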

One thing is that I don’t always want to ask “Was the touch in the image of this triangle?” Sometimes I want to ask “Was the touch within a particular radius of this rendered point?”. For example, if I render a load of spheres on the screen and some are far away, I might want to say that their effective touch radius is larger than their actual radius. So it’s not just about picking objects by where they are rendered, but about being able to compare the rendered coordinate with the touch coordinate.
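Once the point has been projected to the screen, that test is just a distance comparison. A minimal sketch (plain Python; the slop factor is an assumed parameter for enlarging the effective touch radius):

```python
# Screen-space pick test -- illustrative sketch, not Codea code.
# center_screen is the projected 2D position of the object; radius_px is
# its rendered radius in pixels; slop (assumed) inflates the hit area so
# small, distant objects stay touchable.
def within_touch_radius(touch, center_screen, radius_px, slop=1.5):
    dx = touch[0] - center_screen[0]
    dy = touch[1] - center_screen[1]
    return dx * dx + dy * dy <= (radius_px * slop) ** 2
```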

GluUnproject gives you a point in 3D space. If you call it twice with different depths, or use your camera position as the second point, you have a vector with which you can then compute intersections with triangles or spheres. There is only so much you can do for picking objects in 3D space from a 2D projection; this method works well for me. Keep in mind that your representation of objects for rendering need not be the same as for picking them: think, for example, bounding spheres.
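gluUnProject is essentially multiplication by the inverse of projection × view followed by the homogeneous divide. A plain-Python sketch (hypothetical names, not the GLU API itself); unprojecting the same pixel at two depths yields the pick ray:

```python
# gluUnProject-style sketch -- illustrative, not Codea or GLU code.
# Matrices are lists of four rows; points transform as column vectors.

def invert4(m):
    """Gauss-Jordan inverse of a 4x4 matrix."""
    n = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(n[r][col]))
        n[col], n[pivot] = n[pivot], n[col]
        p = n[col][col]
        n[col] = [v / p for v in n[col]]
        for r in range(4):
            if r != col:
                f = n[r][col]
                n[r] = [a - f * b for a, b in zip(n[r], n[col])]
    return [row[4:] for row in n]

def unproject(winx, winy, depth, inv_pv, viewport):
    """Map a window coordinate + depth in [0,1] back to world space."""
    vx, vy, vw, vh = viewport
    # window coordinates -> normalised device coordinates in [-1, 1]
    ndc = [2.0 * (winx - vx) / vw - 1.0,
           2.0 * (winy - vy) / vh - 1.0,
           2.0 * depth - 1.0,
           1.0]
    out = [sum(inv_pv[r][c] * ndc[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return (out[0] / w, out[1] / w, out[2] / w)

# Two unprojections of the centre pixel give the pick ray. With an
# identity projection*view this maps straight onto the NDC cube.
viewport = (0, 0, 800, 600)
inv_pv = invert4([[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)])
near = unproject(400, 300, 0.0, inv_pv, viewport)   # on the near plane
far = unproject(400, 300, 1.0, inv_pv, viewport)    # on the far plane
ray_dir = tuple(f - n for n, f in zip(near, far))
```

The resulting origin and direction feed straight into a ray/triangle or ray/sphere test.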

@Andrew_Stacey in that case I would do as @gunnar_z suggests and have bounding spheres that you test against for hit purposes (i.e. not rendered).

That sounds reasonable. Probably also faster than my way as I only do at most two transforms (the touch at different depths). Okay, let’s have GluUnproject then!

Memo to self: to scale the entire scene using matrices (rather than the inbuilt scale function), the method is not to do modelMatrix(s*modelMatrix()), as that does absolutely nothing (a uniform scale of all sixteen entries also scales w, so the homogeneous divide cancels it). The scale factor has to be applied only to the first three coordinates (or one could apply the inverse scale factor to the fourth).
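A quick numeric check of why s*modelMatrix() is a no-op (plain Python rather than Codea; apply() is a hypothetical helper that transforms a point and performs the homogeneous divide):

```python
# Scaling all 16 entries of a homogeneous matrix also scales w, so the
# perspective divide cancels the factor; scaling only the first three
# rows (i.e. only x, y, z) actually grows the scene.

def apply(m, p):
    """Transform point p = (x, y, z) by 4x4 row-list m, then divide by w."""
    x, y, z = p
    v = [x, y, z, 1.0]
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    return tuple(out[i] / out[3] for i in range(3))

identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
s = 2.0

scaled_all = [[s * v for v in row] for row in identity]            # no-op
scaled_xyz = ([[s * v for v in row] for row in identity[:3]]
              + [identity[3][:]])                                  # real scale

p_all = apply(scaled_all, (1.0, 2.0, 3.0))   # unchanged
p_xyz = apply(scaled_xyz, (1.0, 2.0, 3.0))   # doubled
```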

Also, the new improved watch stuff is fantastic. I’ve just been using watch("modelMatrix()") and similar and it is extremely useful.

Another memo to self: I’m drawing my scene, then want to put the UI stuff on top. All the UI stuff is defined in “screen coordinates”. It seems that to restore those, I need to save the viewMatrix from the start, then do resetMatrix(), ortho(), viewMatrix(savedMatrix). This seems like a fairly common thing to want to do, so maybe a resetWorld() function?

Glad you like the improved watch(), it’s much better at actually watching stuff.

Since we don’t know what matrix you saved, your code:

resetMatrix()
ortho()
viewMatrix(savedMatrix)

would be something you would have to do yourself. Any resetWorld() that we implement would have to do the following:

resetMatrix()
ortho()
viewMatrix(matrix()) -- set to identity (default)

Strange, I could have sworn that the first time I tried it, viewMatrix(matrix()) didn’t work, which is why I had to save the “initial” view matrix. But now that I try it, it does work.

Although it’s just a convenience, I do think there’s value in a resetWorld() function. Maybe, to justify its existence, it could also include a resetStyle() so it really is a “put everything back where it came from” function.

Oh, and normalize seems to be missing from the vec3 stuff.

Oh thanks for pointing that out. I must have missed it.

I’ll have to consider the merits of resetWorld() for longer — it probably won’t be a decision I can make before the next update.

Planned additions are: unproject, project, transpose, invert.

By the way, I’m trying to think of a good example project for 3D in the next beta. I’m thinking of borrowing the concept of “Tests” from “Physics Lab”. Perhaps having a “3D Lab” with a number of small scenes. Do any of you fantastic beta testers have a small 3D demo to contribute? Something that can be easily modified to run in a Test framework.

You know what my code is like: big and hacky! Plus, I’m not completely able to replace my Vec3 code by the new-and-improved vec3 data type due to things like normalize() not being present.

But I’m having a lot of fun! Should have something for you soon, though whether or not it fits in a lab will be a case for discussion (probably much of the extra can be pared down as I’m not actually using it in this project - I just cart it around from project to project hoping one day I’ll just be able to do an importFromProject(file,project) to save myself all the bother).

@Simeon just a quick thanks for making this discussion public. Looking forward to the update!

No problem @Blanchot. I normally keep beta discussion private for fear of drowning out helpful discussion / user questions for the current version.

We’re also working hard to make sure this update works for the iPad Retina display. This means some magic has to happen with the image() class and sprite packs.

@Andrew_Stacey the normalize() method is fixed in the next build (actually all vec3 methods were not being called due to a bug).

@Simeon I think a basic ‘drawing a cube’ demo (with sufficient comments) might be a nice example?

@frosty I was about to ask you about using your cube demo, or a variation of it. It’s very pretty with the ground textures applied.

Yeah, I quite like the dirt texture. Go for it :slight_smile: do you want me to clean it up some more?

Thanks! I’ll probably just copy it from the previous page for now.

Just replicating the movie from page 1 here:

I just had a play with a different lighting direction: