Disable depth test in shader?

Can you disable the depth test in a shader?

Also, I wonder if there is a performance difference between having 100 meshes that all use the same texture, versus one mesh containing the triangles of all 100 meshes with that same texture?

It’s probably much more efficient to have the 100 meshes combined into a single large mesh.
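
As a rough illustration, combining in Codea could look something like the sketch below, assuming each source mesh exposes its vertices and texCoords tables; combineMeshes and sourceMeshes are just made-up names for this example:

```lua
-- Illustrative only: merge several meshes that share one texture into a
-- single mesh so they can be drawn with one draw call.
function combineMeshes(sourceMeshes, tex)
    local verts, coords = {}, {}
    for _, m in ipairs(sourceMeshes) do
        local v, t = m.vertices, m.texCoords
        for i = 1, #v do
            verts[#verts + 1] = v[i]
            coords[#coords + 1] = t[i]
        end
    end
    local combined = mesh()
    combined.vertices = verts
    combined.texCoords = coords
    combined.texture = tex
    return combined
end

-- bigMesh = combineMeshes(myMeshes, myTexture)  -- then just bigMesh:draw()
```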

You can’t disable the depth test from a shader; it isn’t really a shader feature anyway. It would have to be a global depth-test setting that you could turn off and then restore.

If for some reason you don’t want depth testing, you could use setContext() to render to an image (images don’t have a z-buffer) and then copy the image to the screen. Although this will hopefully stop working if @Simeon adds a z-buffer to images :wink:
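
Something like this rough sketch, where scene is an assumed mesh set up elsewhere and the transparent background is just illustrative:

```lua
function setup()
    offscreen = image(WIDTH, HEIGHT)
end

function draw()
    setContext(offscreen)    -- subsequent drawing goes to the image
    background(0, 0, 0, 0)
    scene:draw()             -- drawn without depth testing
    setContext()             -- back to the screen
    sprite(offscreen, WIDTH/2, HEIGHT/2)
end
```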

Thanks for the answers. That’s right, I can transform that bug into a feature. :slight_smile:

I’ve made some attempts at volume rendering, and wanted to try it without the depth test instead of using discard, but maybe that’s the wrong way to go anyway. I should probably sort my layers so that I draw them in the correct order instead…
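
If you go the sorting route, a simple back-to-front pass might look like this sketch; the layers table, its z field and cameraZ are assumptions for illustration only:

```lua
-- Sort layer meshes by distance from the camera, then draw the farthest first.
function drawLayers(layers, cameraZ)
    table.sort(layers, function(a, b)
        return math.abs(a.z - cameraZ) > math.abs(b.z - cameraZ)
    end)
    for _, layer in ipairs(layers) do
        layer.mesh:draw()
    end
end
```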

Btw, do you know if there are performance penalties for how texture data is read? For example, is it more efficient to read it linearly, or to compress it to a smaller texture, or is there no difference as long as you don’t need to swap between different textures a lot?

And are grayscale textures supported? A bit of a special case, I understand; just curious. :slight_smile:

So… from an OpenGL ES point of view…

The minimum spec doesn’t include texture compression; however, iOS does support it via an extension:

> OpenGL ES for iOS supports the PowerVR Texture Compression (PVRTC) format by implementing the GL_IMG_texture_compression_pvrtc extension.

But, while this would work if you were using Xcode, I don’t believe Codea’s generic OpenGL ES hooks allow you to use it. Compression is supposed to give better performance.

In a similar way, I think you could hook up grayscale textures when using OpenGL ES from Xcode, but I am not sure whether Codea exposes those hooks.

Finally, on reading linearly: the OpenGL ES literature suggests that texture reads in a fragment shader which use the interpolated texture coordinates directly should give better performance, because the parallelisation of fragment shader operations can make some assumptions (the texel can be prefetched). That said, reading textures in a non-linear, dependent way in a fragment shader is fine; it just might be slower.
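
To make the distinction concrete, here is a rough sketch of the two kinds of fragment shader, written as Codea-style shader strings (texture and vTexCoord follow the usual mesh-shader naming; the warped coordinate is just an arbitrary example of a dependent read):

```lua
-- Non-dependent read: the lookup uses the interpolated varying directly, so
-- the hardware can prefetch the texel before the fragment shader runs.
nonDependentFrag = [[
precision highp float;
uniform lowp sampler2D texture;
varying highp vec2 vTexCoord;

void main()
{
    gl_FragColor = texture2D(texture, vTexCoord);
}
]]

-- Dependent read: the coordinate is computed inside the shader, which defeats
-- that prefetch. Still valid, just potentially slower.
dependentFrag = [[
precision highp float;
uniform lowp sampler2D texture;
varying highp vec2 vTexCoord;

void main()
{
    vec2 warped = fract(vTexCoord * 4.0 + vec2(0.1, 0.0));
    gl_FragColor = texture2D(texture, warped);
}
]]
```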

Thanks for the info. The current functionality is just fine. It’s enough for me to write my small prototypes and learn things. I’ve tried to make a grayscale 3D texture by using the r, g, b and a channels as different images, so you can always find workarounds. The code got a bit confusing though, so for now I’ll just keep it uncompressed.
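
For what it’s worth, that channel-packing idea could look roughly like the fragment shader below; slice is a hypothetical uniform (0.0 to 3.0) selecting which packed grayscale layer to sample:

```lua
packedFrag = [[
precision highp float;
uniform lowp sampler2D texture;
uniform float slice;
varying highp vec2 vTexCoord;

void main()
{
    // Four grayscale slices packed into the r, g, b and a channels.
    vec4 texel = texture2D(texture, vTexCoord);
    float g;
    if (slice < 0.5)      g = texel.r;
    else if (slice < 1.5) g = texel.g;
    else if (slice < 2.5) g = texel.b;
    else                  g = texel.a;
    gl_FragColor = vec4(g, g, g, 1.0);
}
]]
```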