I’ve been away a long time, but I just got an iPad Pro, so when my keyboard cover arrives I want to get back into it. With the availability of OpenGL ES 3.0, what’s the goodness I can start playing with?
Anyone read any good books that gave them a leg up on the new shader capabilities?
has a better description and pictures of the approach…
Basically you do a first render pass to a color buffer and a normal buffer, then a second pass that applies lighting. This means a scene with multiple lights potentially becomes a lot less expensive to render… Sounds like fun.
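To make the two-pass idea concrete, here's a minimal sketch of what the second (lighting) pass could look like. The sampler names, the single directional light, and the [0,1] normal packing are all my own assumptions, not from the article:

```glsl
// Hypothetical lighting pass: reads the buffers written in pass one.
uniform sampler2D uAlbedo;   // color buffer from the first pass
uniform sampler2D uNormal;   // view-space normals from the first pass
uniform vec3 uLightDir;      // direction to one light, view space

varying vec2 vTexCoord;

void main()
{
    vec3 albedo = texture2D(uAlbedo, vTexCoord).rgb;
    // normals were stored in [0,1]; unpack back to [-1,1]
    vec3 n = normalize(texture2D(uNormal, vTexCoord).rgb * 2.0 - 1.0);
    float diffuse = max(dot(n, normalize(uLightDir)), 0.0);
    gl_FragColor = vec4(albedo * diffuse, 1.0);
}
```

The nice property is that this pass runs once per screen pixel per light, regardless of scene complexity, which is where the savings with many lights comes from.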
For the first pass you need multiple output buffers linked to the fragment shader.
The issue isn’t deferred rendering, it’s multiple render targets. In MMGames’ code he is effectively doing deferred lighting, very similar to the article I’d read (although his lights are voxel based, but I haven’t delved into that fully yet).
So his first pass renders mesh color and depth, and the second renders the normals. The final pass takes these as textures and lights the scene.
Multiple render targets would mean the first and second renders could be done as a single pass rather than two separate passes.
I’m going to dig into MMGames’ code more, but broadly that’s what I’m interested in playing with as a first project.
I guess if Codea were to do multiple render targets in the future it’d be an extension to setContext, so you could do, for instance:
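Something like this — to be clear, this overload of `setContext` is purely hypothetical; today Codea’s `setContext` takes a single image:

```lua
-- Hypothetical API sketch: setContext taking several images,
-- each becoming one color attachment of the framebuffer.
local albedo  = image(WIDTH, HEIGHT)
local normals = image(WIDTH, HEIGHT)

setContext(albedo, normals)  -- bind both as render targets
-- draw the scene once; the fragment shader routes its
-- outputs to whichever target it names
setContext()                 -- back to the screen
```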
@Simeon any thoughts on multiple render targets or stencils in a future version. I know they are fairly niche and probably aren’t interesting to the wider audience…
@spacemonkey I like your overload of setContext. So you give it a bunch of textures, then the output locations would have to be specified in your fragment shader, is that right?
I assume right now, if you set a context with a depth buffer, you are doing the 1st and 3rd attachment calls or something similar. If the overload of setContext attached multiple buffers in some OpenGL way similar to this (but iOS of course), then in the fragment shader you could do:
/** GBuffer format
* [0] RGB: Albedo
* [1] RGB: VS Normal
* [2] R: Depth
*/
gl_FragData[0] = vec4(albedo, 1.0);
gl_FragData[1] = encode(normal);
Where you are explicitly saying which render target the result goes to with gl_FragData[X], and presumably the depth buffer gets handled automatically by OpenGL.
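Worth noting: `gl_FragData[X]` is the GLSL ES 1.00 / `EXT_draw_buffers` style. Since the thread started with OpenGL ES 3.0 — in GLSL ES 3.00 you instead declare your own output variables and pick the attachment with a layout qualifier. A sketch (the varying names are mine):

```glsl
#version 300 es
precision highp float;

in vec3 vNormal;   // assumed varyings from the vertex shader
in vec3 vAlbedo;

// No gl_FragData in GLSL ES 3.00: you declare outputs yourself,
// and the layout location selects the color attachment.
layout(location = 0) out vec4 fragAlbedo;  // -> COLOR_ATTACHMENT0
layout(location = 1) out vec4 fragNormal;  // -> COLOR_ATTACHMENT1

void main()
{
    fragAlbedo = vec4(vAlbedo, 1.0);
    // pack [-1,1] normals into [0,1] for storage
    fragNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```

On the app side the framebuffer would need one texture attached per location (and `glDrawBuffers` called to enable them), which is presumably what a multi-image setContext would do under the hood. The depth buffer is still written implicitly, as you say.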