A question about efficiency

One more thing: your choice of having 6 meshes, one for each face, won't work: 3D is correctly managed within a mesh, but not between meshes. Once drawn, each mesh is a 2D image with no depth…

@Muffincoder - you can poke holes in faces using a shader (by simply not drawing certain pixels), either by passing through bounding coordinates or else by using a particular colour as a signal not to draw. However, if you want those holes to have a 3D appearance, there may be no alternative to constructing the block out of smaller pieces which fit together seamlessly and leave a hole. But that is a big hit on performance.
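The "signal colour" variant of that idea can be sketched as a Codea fragment shader. The reserved colour here (pure magenta) is my assumption; any colour that never appears in your real textures will do:

```lua
-- Hypothetical sketch: a fragment shader that skips pixels painted in a
-- reserved "signal" colour, so they read as holes in the face.
holeFragmentShader = [[
uniform lowp sampler2D texture;
varying highp vec2 vTexCoord;

void main()
{
    lowp vec4 col = texture2D(texture, vTexCoord);
    // pure magenta is reserved as "don't draw here" (an assumption;
    // pick any colour your textures never use)
    if (col.r > 0.95 && col.g < 0.05 && col.b > 0.95) discard;
    gl_FragColor = col;
}
]]
```

Note that the discarded pixels are simply not drawn, so whatever is behind the face shows through, but the hole has no 3D sides, which is the limitation described above.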

The trick Jmv38 refers to is about reacting to touches, which is a problem you may not have considered yet. If you want to be able to touch a specific object, which may be partly hidden behind another object, it is hard, because Codea can't tell you what you are touching: what is on the screen is just pixels, so the objects cannot be distinguished. But you have at least two choices.

  1. @Andrew_Stacey (our mathematician in residence) has code that does this for regular objects (that are reasonably well behaved mathematically).

  2. I also have a kludge that involves trapping the touch position, then drawing the next frame to an image in memory, replacing all the touchable objects with coded colours, then looking at the colour of the pixel at the touched position to identify the object. It means the drawing skips a beat, but if your FPS is fast enough you won't notice. This works best for irregular or holey objects, e.g. scanned tree images where you want to touch something showing behind the trunk and the math would be pretty impossible.
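The colour-picking kludge in option 2 might look something like this sketch (the function and field names are mine, and it assumes each object can draw itself flat-shaded in a given colour):

```lua
-- Sketch of the colour-picking trick: render one frame off-screen with
-- each touchable object in a unique flat colour, then read back the
-- pixel under the finger to identify the object.
function pickObject(touch, objects)
    local img = image(WIDTH, HEIGHT)
    setContext(img)
    background(0, 0, 0)
    for i, obj in ipairs(objects) do
        -- encode the object's index in the red channel
        -- (supports up to 255 objects in this sketch)
        obj:drawFlat(color(i, 0, 0))  -- hypothetical method
    end
    setContext()
    local r = img:get(touch.x, touch.y)  -- image:get returns r,g,b,a
    if r > 0 then
        return objects[r]   -- r is the index we encoded above
    end
    return nil              -- 0 = background, nothing touched
end
```

Because the off-screen image is drawn without lighting or textures, the pixel colour maps back to the object exactly, however irregular its outline.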

@Muffincoder - I might also try @SkyTheCoder’s suggestion of setting the shader to do back face culling. If it does this well, you could end up creating just one mesh block, attached to a shader that does nothing except cull back faces (ie one extra line of code), and draw this block all over the place using different textures.
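That "one extra line of code" back face cull could be sketched as a Codea fragment shader like this (assuming the standard winding convention, so back faces report `gl_FrontFacing == false`):

```lua
-- Sketch: fragment shader that culls back faces with a single test.
cullFragmentShader = [[
uniform lowp sampler2D texture;
varying highp vec2 vTexCoord;

void main()
{
    // gl_FrontFacing is false for triangles wound away from the camera
    if (!gl_FrontFacing) discard;
    gl_FragColor = texture2D(texture, vTexCoord);
}
]]
```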

I think quite a bit of experimentation is required…


> One more thing: your choice of having 6 meshes, one for each face, won't work: 3D is correctly managed within a mesh, but not between meshes. Once drawn, each mesh is a 2D image with no depth…

Not true. OpenGL remembers the depth of each rendered pixel and only draws on top if the new pixel is closer to the eye than what is there. This is why, if you don't have transparent pixels, it is best to draw from front to back.

@Ignatz I got the backface culling in a shader to work now, so that's one less problem to solve. But when you say that I should make one mesh and just redraw it everywhere it is needed, that is in fact what I've done, only with 6 meshes instead of 1.

And joining them to 1 mesh would result in hidden faces being drawn all over the place.

I was thinking about how I could make one mesh for each chunk, and it could be done as long as I can somehow identify which vertices belong to which block, which is the tricky part. I'm assuming that the vertices you give a mesh do not have to form connected triangles; each group of three vertices is just an independent triangle.
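That assumption holds: a Codea mesh's `vertices` property is a flat list, and every three consecutive vertices form one independent triangle. Appending a quad face could be sketched like this (the helper name is mine):

```lua
-- Sketch: append one square face (two triangles, six vertices) to a
-- flat vertex table. v1..v4 are the face corners as vec3s, given in
-- counter-clockwise order when viewed from outside the block.
function addFace(verts, v1, v2, v3, v4)
    -- triangle 1
    table.insert(verts, v1)
    table.insert(verts, v2)
    table.insert(verts, v3)
    -- triangle 2
    table.insert(verts, v1)
    table.insert(verts, v3)
    table.insert(verts, v4)
end

-- later: m = mesh(); m.vertices = verts
```

Keeping faces in a known order in the table (six vertices per face) also gives you the block-to-vertex mapping mentioned above: face `n` occupies indices `6n-5` to `6n`.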

And one more thing: the performance problem I'm having probably comes from the draw calls bunching up, since I call the draw method once for each visible face.

This could be avoided with batching, a technique I do not know how to implement code-wise, so I would like some input on this matter.

One thing I notice from a comment earlier, re one mesh for a cube drawn with different textures: this may be inefficient. If you change the texture on the mesh, it probably needs to pass the new texture to the shader. You could do multiple textures by putting all the textures in a bigger image and passing a parameter to the shader to select the texture (inside the shader, this would set an offset to the right part of the texture image).
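The offset arithmetic for such a texture atlas is simple enough to sketch on the CPU side (the 4x4 atlas size is an assumption; adjust `tilesPerRow` to your image):

```lua
-- Sketch: map a 0-based tile index plus in-tile coords (u,v in 0..1)
-- to texture coordinates inside a square atlas of tilesPerRow^2 tiles.
local tilesPerRow = 4            -- assumed atlas layout
local tileSize = 1 / tilesPerRow

function tileUV(index, u, v)
    local col = index % tilesPerRow
    local row = math.floor(index / tilesPerRow)
    return (col + u) * tileSize, (row + v) * tileSize
end
```

Baking these coordinates into the mesh's `texCoords` once means the shader never needs a per-block uniform at all, which avoids the memory-transfer cost described below.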

Basically, sending stuff to the shader requires memory transfers to the GPU, and these are expensive. It's why modifying vertices each draw is bad: the vertex array has to be passed to the GPU every frame, rather than once when the mesh is created and never again…

@spacemonkey You're right. I tested with only one preset texture and the 10x10x10 chunk got a great performance boost, to about 36 fps. Thanks!

I’m going to try and create one mesh of the whole chunk now and see what results I’ll get, I’ll keep you updated as I go.

@loopSpace thank you for the remark. I was really unaware of this.

I’m getting there!
When I'm building the whole chunk as one mesh I get around 55-60 fps with a 10x10x10 chunk that has 30% of the blocks removed (which creates more faces than a solid chunk).

EDIT: That's a total of about 9200 faces (about half of them are culled, though).

About sending one texture and setting an offset in the shader for cubes, is it possible to have separate textures and merge them into one at runtime?

Yes, use setContext(A) to draw them into image A, then use A as the texture.
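A sketch of that merge, done once at startup (the function name is mine, and it assumes all the source textures are the same size):

```lua
-- Sketch: stitch separate same-sized block textures into one atlas
-- image at runtime using setContext.
function buildAtlas(textures, tileSize)
    local perRow = math.ceil(math.sqrt(#textures))
    local atlas = image(perRow * tileSize, perRow * tileSize)
    setContext(atlas)
    spriteMode(CORNER)
    for i, tex in ipairs(textures) do
        local col = (i - 1) % perRow
        local row = math.floor((i - 1) / perRow)
        sprite(tex, col * tileSize, row * tileSize, tileSize, tileSize)
    end
    setContext()
    return atlas  -- then: chunkMesh.texture = atlas
end
```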

@Jmv38 Right, I had forgotten about that functionality :stuck_out_tongue:

After some more testing I've recorded 30 fps on a 16x16x30 chunk, which is far more than I could have ever come close to before :slight_smile:

Now I just have to figure out why the textures add an fps drop when viewing the chunk from negative x coordinates…

My findings show that when the camera moves to the negative x side of the blocks, all the other faces, which should be culled away in the shader, become visible.

Do I need to update the gl_FrontFacing variable or some other variable in OpenGL? As this seems to be the issue here…

@Muffincoder - what may be happening is that because you are drawing each face as a separate mesh, you are drawing them in a fixed order, presumably from back to front, assuming you are at zero x.

If you are at the back of the blocks, looking back the other way, the faces will now be drawn in the wrong order, front to back, and the culling and overlapping may be faulty. You need to ensure you always draw from furthest to nearest (relative to the camera). What I did was sort all my meshes at each draw, based on distance from the camera. I would try doing this as a first step, just to confirm that is the problem.
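That per-draw sort could be sketched like this (the table layout is my assumption: each entry carries its mesh plus a `pos` vec3 for its centre):

```lua
-- Sketch: depth-sort a table of face meshes before drawing, so the
-- furthest faces are always drawn first regardless of camera position.
function drawSorted(faces, cameraPos)
    table.sort(faces, function(a, b)
        -- furthest from the camera first
        return a.pos:dist(cameraPos) > b.pos:dist(cameraPos)
    end)
    for _, f in ipairs(faces) do
        f.mesh:draw()
    end
end
```

Sorting every frame is not free, so this is best treated as a diagnostic step, as suggested above; if it fixes the artefact, the draw order was the problem.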

PS You said you didn’t want to use complete blocks because the back faces would be drawn, but isn’t that what back face culling will deal with for you?

@Ignatz As I mentioned earlier, I have now merged all the meshes into one big mesh for the whole chunk, which means that the draw order shouldn't matter, afaik.

And the issue with complete blocks wasn't the back faces but the faces in between blocks. There would always be two faces in the same position, right between the blocks, and one of them would be drawn because it is not a back face.

@Muffincoder - fair enough

@Ignatz I even tested with adding normals to each face without getting rid of this nasty effect…

I’m running out of options here, any ideas?

Edit: Here are some screenshots of the scene; the yellow box marks ~0,0,0

@Muffincoder - can you give us a clearer idea of what is happening, eg record a short video or post a screencap?

I think you are saying that OpenGL is not culling correctly when you go behind your mesh.

But I'm not sure what you're doing when you merge all your faces. Are you saying that at each draw, you figure out which faces will be visible to the camera, and include only those faces in the mesh?

So a bit more detail would be helpful… :slight_smile:

@Ignatz Oh, sorry if I’m a bit unclear.

At start of the app I create the chunk and the blocks as array objects.

Then I check which faces should be removed (i.e. there's a neighbouring block obscuring the face) and keep an array of numbers to record which faces should be drawn and which shouldn't.

After this, I go through all the block objects in the chunk (still only classes with data, no mesh data yet) and their arrays of faces to be drawn, and add two triangles (one face) to the mesh in the proper position. (I add texture coordinates and normals for each face too.)

Lastly, I generate the mesh from the vertex data I generated and add textures to it.
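The build pass described above could be sketched like this. The data layout is my assumption (`chunk[x][y][z]` is true for a solid block), and `addFaceVerts` is a hypothetical helper that appends the six vertices of one face:

```lua
-- Sketch: emit only the faces with no solid neighbour, collecting all
-- vertices into one flat table for a single chunk mesh.
local dirs = {
    {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}
}

local function solidAt(chunk, x, y, z)
    return chunk[x] and chunk[x][y] and chunk[x][y][z]
end

function buildChunkVerts(chunk, size)
    local verts = {}
    for x = 1, size do
        for y = 1, size do
            for z = 1, size do
                if solidAt(chunk, x, y, z) then
                    for _, d in ipairs(dirs) do
                        -- skip faces hidden by a neighbouring block
                        if not solidAt(chunk, x + d[1], y + d[2], z + d[3]) then
                            -- hypothetical helper: appends the face's
                            -- 6 vertices (2 triangles) facing direction d
                            addFaceVerts(verts, x, y, z, d)
                        end
                    end
                end
            end
        end
    end
    return verts  -- then: m = mesh(); m.vertices = verts; m.texture = atlas
end
```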