What do professionals use for 3D animated characters in iOS apps?

If you do go full 3D, I’ve written on my blog and in my ebooks about a lot of tricks I learned that may save you time. I think 3D is absolutely the best part of Codea.

@Ignatz there is just one issue I’ve had with your .obj importer: it doesn’t seem to correctly recognise when the .obj file contains normals. You have code for reading the “vn” normal lines in your importer, and I’ve checked that the .obj file does include the “vn” lines, but the normals don’t seem to get added to the mesh.

Of course, you include two routines for calculating the normals in Codea, depending on whether we want the object to appear smooth or to have sharp-edged faces. But sometimes it would be useful to let Blender control the normals (using Recalculate Normals on different parts of the object), so you can have, for instance, an object that looks smooth on some sections but sharp-edged on others.

This is probably a mistake on my part, but have you been able to get your .obj reader to import the normals from the .obj file correctly?

The whole 3D workflow in Codea is still fairly cumbersome and certainly not optimised for large meshes.

Hopefully the ability to import 3D models will eventually be incorporated into the main Asset/Library system, which would probably make things significantly easier, together with an optional default shader binding. It would be great to see where @Simeon thinks this fits into his long-term roadmap for Codea. :smiley:

@yojimbo2000 - I don’t think I wrote anything to import normals, because I wasn’t using them myself.

I’ll have a look tomorrow to see what’s involved.

@andymac3d - I don’t think we can expect a 3D import facility any time soon, because Simeon’s resources are very limited and because very, very few people will import 3D models, so it’s not a priority.

However, there is nothing to stop us building our own libraries, as we’ve done in the past.

@Ignatz it certainly looks as if your importer is meant to read normals: you have the code for recognising the “vn” normal lines. When I have time I’ll try to troubleshoot those sections and see if I can work out where it’s going wrong. (When you export the .obj from Blender, it’s the “export normals” checkbox in the left-hand panel that you need to tick if you want the normal data in the object file.)
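In the meantime, here’s roughly the shape I’d expect the “vn” handling to take — a minimal sketch of my own (the function and variable names are placeholders, not your actual importer code), assuming triangulated faces and positive indices:

```
-- Minimal .obj reader sketch: positions, normals and faces only.
-- Assumes triangulated faces in "f v", "f v//vn" or "f v/vt/vn" form.
function parseOBJ(text)
    local verts, norms = {}, {}
    local mverts, mnorms = {}, {}   -- flattened per-face data for the mesh
    for line in text:gmatch("[^\r\n]+") do
        local x, y, z = line:match("^v%s+(%S+)%s+(%S+)%s+(%S+)")
        if x then
            table.insert(verts, vec3(tonumber(x), tonumber(y), tonumber(z)))
        end
        local nx, ny, nz = line:match("^vn%s+(%S+)%s+(%S+)%s+(%S+)")
        if nx then
            table.insert(norms, vec3(tonumber(nx), tonumber(ny), tonumber(nz)))
        end
        if line:match("^f%s") then
            -- each face entry looks like v, v/vt, v//vn or v/vt/vn
            for v, vt, vn in line:gmatch("(%d+)/?(%d*)/?(%d*)") do
                table.insert(mverts, verts[tonumber(v)])
                if vn ~= "" then
                    table.insert(mnorms, norms[tonumber(vn)])
                end
            end
        end
    end
    local m = mesh()
    m.vertices = mverts
    -- return the per-vertex normals too; to use them in a shader you'd copy
    -- them into a custom buffer (e.g. m:buffer("normal")) once a shader with
    -- a matching attribute has been assigned to the mesh
    return m, mnorms
end
```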

@yojimbo2000 - ta.

If I put in code to read normals, it may only have been because I was using other people’s existing OBJ files which had normals, so I had to deal with them. I didn’t use them for anything myself. But I’ll have a look tomorrow (it’s late evening here).

If you are just using geometry, then @Ignatz’s smooth or edgy normal generators are going to be fine. If you want to use normals from another app such as Blender, you are much better off using normal maps, which are textures that contain the normal values; I believe Blender can generate the images for these.

The main benefit of normal maps is that your normal resolution can then be much greater than the level of detail in your mesh, so via lighting you can give the impression of a much higher-resolution object without the performance hit you’d take to achieve that through geometry alone.

In this article http://www.3dkingdoms.com/tutorial.htm the warrior image near the top is a good example of a relatively low-detail mesh and of what normal mapping via a texture can achieve.
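To make that concrete, here’s a rough fragment-shader sketch of the idea, written as a Codea shader string (the uniform names, and the simplifying assumption that the map’s normals are already in the same space as the light, are mine for illustration — a proper version would do the lighting in tangent space). It samples the normal map per pixel, remaps the colours from 0..1 back to -1..1, and uses the result for diffuse lighting:

```
bumpFrag = [[
precision highp float;

uniform lowp sampler2D texture;    // the usual diffuse texture
uniform lowp sampler2D normalMap;  // normals encoded as RGB
uniform vec3 lightDir;             // normalised light direction, set from Lua

varying highp vec2 vTexCoord;

void main()
{
    lowp vec4 col = texture2D(texture, vTexCoord);
    // colours are 0..1, normals are -1..1, so remap before lighting
    vec3 n = normalize(texture2D(normalMap, vTexCoord).rgb * 2.0 - 1.0);
    float diffuse = max(dot(n, normalize(lightDir)), 0.0);
    gl_FragColor = vec4(col.rgb * (0.2 + 0.8 * diffuse), col.a);
}
]]
```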

@spacemonkey I guess you’d need to write a custom normalMap buffer for the shader, as the built-in normals only allow one normal per vertex… I’m certainly intrigued… I copied a bump-mapping example from somewhere, a rotating cube that you could steer around a simple maze. @spacemonkey, was that your code?

@yojimbo2000 - I haven’t used bump mapping, but AFAIK it consists of an image which you pass into the fragment shader; you then look up pixel values and interpret them.

Although, didn’t I read somewhere that we can now have textures in the vertex shader?

It might have been one of my very old experiments. But the concept is that it’s a texture whose values you use in the fragment shader, on a per-pixel basis, as the normal for lighting.

@spacemonkey I just got back to my iPad, and yes, it’s your litmesh class. That’s very clever; I hadn’t realised that the fragment shader can access more than one texture. (I just checked, and according to the wiki, vertex shaders can’t access textures, but fragment shaders can have up to 8.)

https://bitbucket.org/TwoLivesLeft/core/wiki/GLSLES
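For anyone wondering what this looks like from the Lua side, here’s a minimal sketch (bumpVert/bumpFrag, the normalMap uniform name and the asset names are just placeholders, not the real litmesh names): the extra textures are simply assigned to the sampler uniforms on the shader object, alongside the mesh’s usual texture.

```
-- Minimal sketch of feeding two textures to one shader.
-- The sampler uniform names must match the ones declared in the fragment shader.
local m = mesh()
m.vertices = myVertices           -- your geometry
m.texCoords = myTexCoords
m.texture = readImage("Dropbox:woodDiffuse")          -- bound to uniform "texture"
m.shader = shader(bumpVert, bumpFrag)
m.shader.normalMap = readImage("Dropbox:woodNormals") -- the second texture
m.shader.lightDir = vec3(0, 1, 1):normalize()

function draw()
    background(40)
    perspective()
    camera(0, 10, 30, 0, 0, 0)
    m:draw()
end
```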

I must admit, the bump-mapping does look incredible. As always, it’s the question you have to answer on a per-object basis: “will the player even see/notice it?” My video above is just diffuse lighting, and it looks great (if I may say so myself). In that case, I don’t think the chest would be improved much with a wood-grain bumpmap.

I’m tempted to try bumpmapping the cobbled floor though…

@spacemonkey that’s an interesting article that you linked to. If I’ve understood correctly, the normal map is a kind of bump map generated from the high-poly version of the model, wrapped around the low-poly geometry?

So, first, the docs are wrong: since iOS 8 you can actually access textures in the vertex shader as well, which is very cool. You can use it for things such as a heightmap deforming a flat mesh.
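Something like this (a rough sketch of my own — the uniform names and heightScale are just illustrative): the vertex shader samples a heightmap texture and pushes each vertex of a flat grid upwards by the sampled value.

```
-- Rough vertex-shader sketch: displace a flat grid mesh upwards by the
-- red channel of a heightmap texture. Needs iOS 8+, since earlier drivers
-- exposed no vertex texture units.
heightVert = [[
uniform mat4 modelViewProjection;
uniform sampler2D heightMap;   // sampled in the VERTEX shader
uniform float heightScale;

attribute vec4 position;
attribute vec2 texCoord;

varying highp vec2 vTexCoord;

void main()
{
    vTexCoord = texCoord;
    float h = texture2D(heightMap, texCoord).r;   // 0..1
    vec4 displaced = position + vec4(0.0, h * heightScale, 0.0, 0.0);
    gl_Position = modelViewProjection * displaced;
}
]]
```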

On the bumpmap thing, yes, it is simply an image where you’ve encoded the normals as colours, and you use them in the fragment shader to affect your lighting. One path to this, as the various articles on the web describe, is to create a highly detailed model, produce a normal map from it, and then produce a lower-poly model to use in game; paired with the normal map, it gives a visually highly detailed model with much reduced runtime processing.

But the same shaders can be used with a normal map you’ve produced any way you like, for some interesting effects. In litmesh (if I recall correctly) there’s a helper method that does a (fairly shoddy) edge detection on your texture to make it a little 3D-ish. You could also generate them with noise or whatever to create a textured feel to the surfaces.
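Something along these lines (a sketch of the idea, not the actual litmesh helper): treat pixel brightness as height, difference neighbouring pixels, and encode the resulting normal as a colour.

```
-- Quick-and-dirty normal map generation from any texture.
function makeNormalMap(img, strength)
    strength = strength or 2.0
    local w, h = img.width, img.height
    local out = image(w, h)
    -- brightness of a pixel, clamped to the image edges, as a 0..1 "height"
    local function height(x, y)
        x = math.min(math.max(x, 1), w)
        y = math.min(math.max(y, 1), h)
        local r, g, b = img:get(x, y)
        return (r + g + b) / (3 * 255)
    end
    for x = 1, w do
        for y = 1, h do
            local dx = (height(x + 1, y) - height(x - 1, y)) * strength
            local dy = (height(x, y + 1) - height(x, y - 1)) * strength
            local n = vec3(-dx, -dy, 1):normalize()
            -- remap from -1..1 into the 0..255 colour range
            out:set(x, y, (n.x * 0.5 + 0.5) * 255,
                          (n.y * 0.5 + 0.5) * 255,
                          (n.z * 0.5 + 0.5) * 255, 255)
        end
    end
    return out
end
```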

@yojimbo2000 Bumpmap normals can also be based on the texture rather than a higher-resolution mesh. There are some programs that can generate bumpmaps, such as CrazyBump or Photoshop.

My suggestion is to keep your game as simple as possible until you have all the pieces in place, then start adding stuff like this.

(Also, you perhaps don’t want the floor to be too distracting, it is just the canvas on which all the interesting stuff is drawn).

I held out for as long as I could… But then I had to give in and start putting in the eye candy… Want more candy…

But yes, bump mapping is definitely on the back-burner for now. And you’re right, it would be stupid if the most fascinating thing in your game was the floor.

I think the vertex normals (vn) provided in .obj files are pretty worthless, because the only thing I can imagine using them for is realtime lighting, which, again, is very processor-intensive. You could easily overcome this by baking the lighting directly into the (diffuse) textures.

However, what would really make sense are face normals, which you could use for things like backface and occlusion culling.
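Computing a face normal from the three corners of a triangle is cheap, and the backface test is just a dot product — a quick sketch (the winding convention and camPos are assumptions on my part):

```
-- Sketch: face normal of a triangle and a simple backface test.
-- Assumes counter-clockwise winding for front faces, and that camPos is the
-- camera position in the same space as the vertices.
function faceNormal(a, b, c)
    return (b - a):cross(c - a):normalize()
end

function isBackFacing(a, b, c, camPos)
    local n = faceNormal(a, b, c)
    local toCam = (camPos - a):normalize()
    -- facing away from the camera if the normal points away from it
    return n:dot(toCam) < 0
end

-- e.g. skip triangles before adding them to the mesh:
-- if not isBackFacing(v1, v2, v3, camPos) then addTriangle(v1, v2, v3) end
```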

@Ignatz Btw, could you share your .obj importer? I’m currently trying to write my own, but I try to optimize things where I can, and it would help me to see alternative approaches…