Need help with simple 3D shading

Yep, that looks like the one I already have, but thank you, every example helps.

@spacemonkey - yes, it is a static terrain. I’m not renormalising in the fragment shader, maybe that is the problem. (What on earth is gubbins, or was that the iPad spellcheck being helpful?)

Thanks to all of you. I’ll battle on until I get it, then I’ll do some tutorials on it for other people, if I can simplify it enough…

Okay, I threw together an EXTREMELY quick-and-dirty test shader to demonstrate this in Codea. This shader only uses directional lighting, and you have to calculate the actual angle of the light yourself. I did that by simply rotating the cube and reading back its model matrix.

For some reason, I’m not managing to rotate the light angle correctly, so I’ll post the Lua part of this after I’ve figured out what’s going on there…

Now, the vertex shader:


//
// A basic vertex shader
//

//This is the current model * view * projection matrix
// Codea sets it automatically
uniform mat4 modelViewProjection;
uniform vec3 lightAngle;

//This is the current mesh vertex position, color and tex coord
// Set automatically
attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;
attribute vec3 normal;

//This is an output variable that will be passed to the fragment shader
varying lowp vec4 vColor;
varying highp vec2 vTexCoord;
varying highp vec3 vNormal;

void main()
{
    //Pass the mesh color to the fragment shader
    vColor =  color;
    vTexCoord = texCoord;
    vNormal = normal;
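    //dot() runs from -1 (facing away) to 1 (facing the light);
    //the *0.5 + 0.5 below remaps that to 0..1, so faces pointing
    //away fade to dim rather than clamping to black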
    float vLightLevel = dot(normal, normalize(lightAngle)) * 0.5 + 0.5;
    vColor = vColor * vLightLevel;
    vColor.a = 1.0;
    
    //Multiply the vertex position by our combined transform
    gl_Position = modelViewProjection * position;
}

And the fragment shader:


//
// A basic fragment shader
//

//Default precision qualifier
precision highp float;

//This represents the current texture on the mesh
uniform lowp sampler2D texture;

varying lowp vec4 vColor;
varying highp vec2 vTexCoord;
varying highp vec3 vNormal;

void main()
{
    //Sample the texture at the interpolated coordinate
    lowp vec4 col = texture2D( texture, vTexCoord ) * vColor;

    //Set the output color to the texture color
    gl_FragColor = col;
}
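In the meantime, the Lua side for this first version only needs to hand the shader a light direction each frame. Something like this (a sketch with placeholder names, not the actual project code):

function draw()
    background(40)
    perspective()
    camera(0, 100, 300, 0, 0, 0)

    -- lightAngle points from the surface toward the light,
    -- so (0,1,0) means the light is straight overhead
    boxMesh.shader.lightAngle = vec3(0, 1, 0)
    boxMesh:draw()
end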

Thanks, Tom. You guys are great. I have to master it now!

@tomxp411 - what did you mean by “simply rotating the cube and reading back its model matrix”? (Remember I have absolutely no training in 3D graphics!)

Right, sorry…

It goes like this: OpenGL uses a 4x4 matrix of numbers to do all the heavy lifting of rotating, translating, and scaling vertices. Every single vertex in your object has to be transformed at least 3 times: you have to put it in the right place in the world, move the world around the camera, and then convert the 3D point to 2D screen coordinates.

The model matrix is what we use to move models around in the world. Every time you use the rotate() or translate() command, you’re manipulating the model matrix.

So what I’m doing is taking advantage of the model matrix to get the relative angle of the object I’m drawing and the light that’s illuminating it.

boxMesh.shader.lightAngle = vec4(0,1,0,0) -- light comes from straight up
rotate(rx,1,0,0) -- this tips the box forward by rx degrees
boxMesh.shader.modelMatrix = modelMatrix()  -- tells the shader what the box's orientation is

This saves the current model matrix to the shader. The shader then takes the matrix and uses it to rotate the light angle. I then compare the light and the normal of each vertex. When the normal is exactly the same as the light angle, the face is 100% illuminated. When the normal faces 90 degrees away from the light, the face is dark.


//
// A basic vertex shader
//

//This is the current model * view * projection matrix
// Codea sets it automatically
uniform mat4 modelViewProjection;
uniform mat4 modelMatrix;
uniform vec4 lightAngle;

//This is the current mesh vertex position, color and tex coord
// Set automatically
attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;
attribute vec3 normal;

//This is an output variable that will be passed to the fragment shader
varying lowp vec4 vColor;
varying highp vec2 vTexCoord;
varying highp vec3 vNormal;

void main()
{
    //Pass the mesh color to the fragment shader
    vColor =  color;
    vTexCoord = texCoord;
    vNormal = normal;

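    //note: with the vector on the left, GLSL multiplies by the
    //transpose of the matrix; for a pure rotation the transpose is
    //also the inverse, so this carries the world-space light
    //direction back into object space, where the normals live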
    vec3 l = vec3(lightAngle * modelMatrix);
    l=normalize(l);
    //declared locally this time; max() stops back faces going negative
    float vLightLevel = max(dot(normal, l), 0.0);
    vColor = vColor * vLightLevel;
    vColor.a = 1.0;
    
    //Multiply the vertex position by our combined transform
    gl_Position = modelViewProjection * position;
}

Notice the modelMatrix and lightAngle variables at the top. You have to set both of those for each mesh in every draw() cycle.
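In a draw() cycle that looks something like this (again a sketch - boxMesh and rx are placeholder names):

function draw()
    background(40)
    perspective()
    camera(0, 0, 300, 0, 0, 0)

    boxMesh.shader.lightAngle = vec4(0, 1, 0, 0) -- light from straight up
    rotate(rx, 1, 0, 0)                          -- tip the box forward by rx degrees
    boxMesh.shader.modelMatrix = modelMatrix()   -- pass the box's orientation
    boxMesh:draw()

    rx = rx + 1 -- keep rotating so you can see the shading change
end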

Again, I’ve been away from 3D programming for a while, and I’ve never used shaders for lighting - I’ve always had OpenGL’s built-in lighting engine to do the work for me. So this is kind of new to me. There may be a simpler way to do what I’m doing… I’d love it if someone showed me how.

Oh, and if you think that’s complicated… you don’t even want to know what I did to make this work back in the days before hardware 3D acceleration. Drawing one vertex took about 30 mathematical steps.

Thanks!! Much appreciated :smiley:

@tomxp411, @spacemonkey -

what I’ve done is to write up one of spacemonkey’s lighting demos in a blog post, and break the code into a step-by-step project, so I can add the lighting elements one by one and see the code required for each.

blog post: http://coolcodea.wordpress.com/2013/09/17/3d-lighting/
(doesn’t cover specular, that’s for the next post)
code: https://gist.github.com/dermotbalson/6589383

I chose spacemonkey’s project as a demo because it is complete, because it has some interesting features, and because he has another one with shiny balls that uses the same shader, which I’d like to cover too. I like the way lighting is used for a 3D effect in that one.

I’d appreciate any comments you have. I’m still getting across this stuff, and I’m not sure the step-by-step code approach works (except to identify the code required for different types of light), because the learning curve suddenly gets very steep - and you can’t split the inclusion of diffuse or specular light into pieces across several tabs; it’s all or nothing.

I think this really needs to be explained outside of the code, so my blog post is a first attempt, and I’ll tidy it into an ebook version later.

This is only a start, though. Now I want to compare the differences between @spacemonkey’s code and yours, @tomxp411, and make sure I understand all of this. I like the idea of several light sources, too.

Thanks again for your help…

Deleted - answered my own question…

=)

There are multiple types of light, but your tutorial confuses lighting and materials.

Specular and diffuse are not lighting types; they’re material properties. Think about specular highlighting as chrome or shiny stuff… the light doesn’t make it shiny, the material itself makes it shiny.

So the basic types of lights are:
ambient - all-around light. Nothing in the scene can be darker than the ambient lighting.
directional - this is the sun. Directional light is infinitely far away, so there’s no falloff and its rays are all perfectly parallel.
point - point lights create a relatively small lit area, and point light falls off with distance (see the sketch below). Point lights are good for highlighting areas, and that’s the basic type of light for lamps, fire, etc.
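To make the point-light falloff concrete, the shader math looks something like this (a sketch, not taken from either of our examples):

//sketch of point-light falloff in a fragment shader
precision highp float;

uniform vec3 lightPos;    //light position, in the same space as vPosition
uniform vec3 lightColor;

varying vec3 vPosition;   //surface position, passed in from the vertex shader
varying vec3 vNormal;

void main()
{
    vec3 toLight = lightPos - vPosition;
    float dist = length(toLight);

    //diffuse term: how squarely the surface faces the light
    float diffuse = max(dot(normalize(vNormal), normalize(toLight)), 0.0);

    //attenuation: brightness falls away with distance
    float atten = 1.0 / (1.0 + 0.001 * dist * dist);

    gl_FragColor = vec4(lightColor * diffuse * atten, 1.0);
}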

My code calculates a directional light. Spacemonkey’s code calculates a point light. They’re both valuable and necessary, but they have different artistic uses.

Now, your materials are going to have their own properties: diffuse shading, specular highlights, and maybe an emissive component. Emissive textures are great for things like car taillights, where the light needs to be visible but doesn’t necessarily need to illuminate an area.
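In shader terms, the material contributions just add up at the end. The last line of a fragment shader might combine them very roughly like this (a pseudocode-style sketch; all the names are placeholders):

//texture lit by the scene, plus a shiny highlight, plus self-lit glow
gl_FragColor = texColor * (ambient + diffuse)
             + specularColor * specularTerm
             + emissiveColor;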

That makes sense, thank you - it’s just that everything I read talked about ambient, diffuse and specular, with very little mention of directional vs point.

I’ll rethink it and rework it, thanks again.

Suppose I build a lighting library called LL.

Then it can be configured by a user something like this. What I’m focussing on is what information is needed for each element, structuring them the way you suggested. I’m passing named parameters to avoid confusion.

Each LL function will store the info provided and set up the appropriate lighting. At this stage, I’m just trying to get the UI as simple and logical as possible.

    --1. Lighting that applies to the whole scene

    --general background light
    LL.ambient({strength=0.5, color=color(255)})

    --directional light (infinite distance, parallel rays)
    LL.directional({strength=0.6, direction=vec3(-.5,1,3), color=color(255)})

    --point light (attenuates and is affected by angle of incidence)
    LL.point({strength=0.3, source=vec3(-.5,1,3), range=300, color=color(255)})

    --specular reflection (depends on each mesh's shiny property below)
    LL.specular({eyePosition = vec4(0,0,3,1)})  
    
    --2. Lighting that applies to each mesh

    LL.properties({reflective=1, shiny=1})
    --reflective for diffuse light, shiny for specular
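Internally, each function might do nothing more than stash what it’s given, for the shader setup to use later - just a sketch of the idea:

LL = {}

function LL.ambient(t)
    --record the ambient strength and colour
    LL.ambientLight = {strength = t.strength, color = t.color}
end

function LL.directional(t)
    --store a normalised direction so the shader can use it directly
    LL.directionalLight = {strength = t.strength,
        direction = t.direction:normalize(), color = t.color}
end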

@tomxp411 - nice succinct description of the difference between lights and material properties. I think you’ve nailed the 3 basic light types - although I’d add a fourth: spotlights, which cast a directional cone of light with falloff - though this would obviously be harder to code.

@Ignatz - I like the idea of your lighting class, although my inclination would be to completely decouple light definitions from material properties (as most 3D systems tend to). It doesn’t seem logical to me to have a ‘specular’ light or a ‘reflective’ light - these would generally all be defined within the material (maybe an MM class to complement your LL one?) - you’d then have the option of multiple material definitions for multiple meshes/objects.

thanks @andymac3d, that’s helpful. I’ll refine some more.

The only reason I had a specular item is that eye position is only required for specular, and (I think) it is not mesh-specific, so it is a global setting. I was just trying to find a place to define it, that’s all…

Just to add a little.

Point lights can fall off… but mine don’t. The difference there is about light position vs light characteristics.

Pure directional light just says light comes in EVERYWHERE at a consistent intensity from a single angle, e.g. 30 degrees from horizontal or some such.

With my point lights, light comes in at a consistent intensity, at an angle running from the light source point to the surface.
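In shader terms the difference is just where the light vector comes from - a rough sketch:

//directional: one fixed direction, the same for every vertex
vec3 toLight = normalize(lightDirection);

//point: the direction depends on where the surface point is
//vec3 toLight = normalize(lightPosition - vPosition);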

Those are position characteristics. Then you can have light characteristics, where light falls off based on distance travelled. This is commonly done for point lights: if you are closer to the light it is brighter, and if you are further away it is dimmer. This simulates the light spreading - imagine you shoot something with a shotgun from 1m: you get high density and a lot of damage. Shoot it from 10m and you get a wide spread doing a little damage.

Finally, some systems model spotlights, where the light is modelled as a cone from the point rather than a 360 degree sphere of light (360 and sphere is probably wrong, but I’m sure Andrew Stacey will correct me).

As to diffuse, specular and ambient: while tomxp411 is right that diffuse and specular are material characteristics, from a lighting point of view the calculations are also different. Diffuse is about the relative angle of the surface to the light - if the surface is perpendicular to the light it’s at maximum brightness, but if it’s edge-on to the light it’s dark. Specular is about the light reflecting off the surface into your eye, so it depends on whether the surface is angled such that the surface-to-eye and surface-to-light directions represent a reflection. So while they are material properties (how reflective, how diffuse), the lighting calculations are quite different.
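In code the two calculations come out something like this (a sketch; note that the positions, normal and eye all have to be measured in the same space):

vec3 n = normalize(vNormal);
vec3 toLight = normalize(lightPosition - vPosition);
vec3 toEye = normalize(eyePosition - vPosition);

//diffuse: how squarely the surface faces the light
float diffuse = max(dot(n, toLight), 0.0);

//specular: how closely the bounced light lines up with the eye
vec3 bounced = reflect(-toLight, n);
float specular = pow(max(dot(bounced, toEye), 0.0), shininess);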

@spacemonkey, I concur, and I think this naturally leads to what are technically known as ‘shading models’, which pretty much define how a material relates to a light source and produces the resulting shaded pixel. Lambert (purely diffuse shading) and Phong/Blinn (diffuse with a specular contribution) tend to be the most common algorithms. There are many others (Cook-Torrance, Oren-Nayar etc…) - but all have slightly different approaches to the same problem. :slight_smile:

@Ignatz Haha, for over a month now I’ve been working on my own lighting library for @spacemonkey’s LitMesh class, which allows multiple lights, ambient/diffuse/specular materials, better BumpMaps, built-in generatable BumpMaps, preset square & circle classes, and a model class to put in your own vertices and texture coordinates and let the class handle all the LitMesh stuff.

@spacemonkey yes, diffuse and specular lighting are calculated differently, but they typically use the same light source. You are not going to specify one source for diffuse lighting and a separate source for specular highlighting. In theory, you could do so, of course, but in practice, it would look strange.

@andymac3d yes, spotlights, and you can also include a few other things. However, those are more complex to render in real time, and they’re definitely not beginner material. It is certainly something I’ve seen game engines do in real time, but it is going to require more math to figure out whether a fragment is inside the conical region covered by the spotlight.

Quick question - mainly for @spacemonkey - the code above, and some of your projects, work fine as long as you don’t move the object. But as soon as you translate it, the specular lighting goes haywire. I presume you need to modify the eye position, but I’m not sure what to use.

@Ignatz my litmesh class “should” work fine with translations etc. The light position should be given to the class in world terms and similarly eyeposition should just match whatever you put in the Camera(x,y,z) call.

The stuff it does in LitMesh:draw

self.litMesh.shader.mInvModel = modelMatrix():inverse():transpose()

should give it the inverse transform (i.e. adjusting for translation) to make it work correctly for translated (and scaled) meshes.

It’s possible it’s wrong, of course - I’m no expert :wink: If you post some code, I’m happy to take a look at what you’ve built and see if I can figure out what’s up.

But in a nutshell: the shader thinks in vertex coordinate terms, which are untranslated, and it also needs to know where the eye and the light are. If you have translated the object, then you’ll need to tell the shader either how to translate the eye and light (what I do in LitMesh), or translate the vertices by the model matrix before you do your math, or a mix of both.

Basically, imagine your spaces: object space (the raw vertex coordinates), model space (the vertex coordinates adjusted by the modelMatrix - translates, rotates and scales), and eye space (model space adjusted by the viewMatrix), before everything finally gets projected by the projectionMatrix. You have to calculate lighting in one of these spaces, and whichever space you are thinking in, you have to get the vertex coordinates, normals, light position and eye position all into that same space. If you have them in different spaces, it will behave very unexpectedly.

In my approach, I translate everything down into object space for the calculation, because then the normals can be baked into the mesh up front and you can make nice assumptions for bump mapping and the like.
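So the vertex shader end of that looks roughly like this (a sketch of the idea, not the actual LitMesh code):

uniform mat4 mInvModel;     //modelMatrix():inverse():transpose(), from Lua
uniform vec4 lightPosition; //light position in world space
uniform vec4 eyePosition;   //camera position in world space

//inside main(): putting the vector on the left multiplies by the
//transpose, cancelling the transpose applied in Lua - the net effect
//is the plain inverse model matrix, which carries the world-space
//light and eye into object space, where the vertices and baked
//normals already live
vec4 lightObj = lightPosition * mInvModel;
vec4 eyeObj = eyePosition * mInvModel;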