Reproducing Blender animations in Codea (Now with code link, new video)

Check this out:

http://youtu.be/4YnW2ZiqNOw

I’ve animated a simple walk cycle from Blender in Codea! This dude has 32,000 vertices, so not exactly low-poly (I think you could get a fully articulated figure with 1,000 to 2,000 verts and have it still look really good). It’s a model I found online; I wanted to check that I could get this working in Codea before spending time creating a low-poly model.

I animated the walk cycle myself in Blender. I only spent a few minutes on it, and this is my first shot at animating a figure, so it could probably use some work in terms of getting a fluid and believable effect.

I’m pretty happy with the performance in Codea though. I can animate 3 copies of him at once and get 60 fps on the iPad Air (so that’s 100K vertices being animated). If I add a fourth copy, the fps drops to 45 or so. This bodes well, I think. If your characters were around 1500 verts each, you could have around 60 dudes running round at 60 fps I reckon.

I’ll post code and explanations at some point, if people are interested. Most of the work was the importer, which is a modified version of @Ignatz's OBJ importer.

Whilst I’m cleaning up the code, anyone want to guess how I did it? :-"

@Ignatz haha, same here! I didn’t realise you had done that; did you post it? I saw the post you did where you essentially built an entire posing and rigging system, with inverse kinematics etc. (which is immensely impressive of course), this one:

http://codea.io/talk/discussion/5946/rigging-3d-models-in-codea-first-animation

There’s also @spacemonkey 's skeletal shader here:

http://codea.io/talk/discussion/2901/skeletal-animation-shader

And @matkatmusic, in your 3D rigging thread, discusses the MD5 format as another possibility.

I figured I would try frame interpolation first, as that seemed easiest. I didn’t realise you had done frame interpolation as well. Would you be interested in comparing code?

I’ve not looked into the MD5 format yet. If it were just something relatively “simple”, like a list of cascading bone transforms, then it could be possible to write an importer and a runtime for it, something similar to @spacemonkey's shader, where you have an array of transforms in the vertex shader. But if it’s something more complex, involving bone weights, real-time kinematics in the runtime etc., then I think it would be immensely complex to code, and I doubt it would be practical to use in a game from a performance point of view.
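
To illustrate what I mean by an array of transforms in the vertex shader: the simplest version (one rigid bone per vertex, no weighting) would look roughly like this. It’s just a sketch with my own names for the uniforms and attributes, and it assumes you can pass an array of bone matrices to the shader:

    // Sketch: one rigid bone per vertex, no weighting.
    uniform mat4 modelViewProjection;
    uniform mat4 boneMatrix[16];   // one transform per bone, updated each frame

    attribute vec4 position;
    attribute float boneIndex;     // which bone this vertex follows

    void main()
    {
        vec4 skinned = boneMatrix[int(boneIndex)] * position;
        gl_Position = modelViewProjection * skinned;
    }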

Well, after researching and furiously scrawling cryptic notes all over hundreds of papers and making one of those pinboards with the strings connecting important points, I’ve come to the conclusion that you did this with Wizardry. It is the only reasonable explanation.

What I did to animate my Blender model was to create several separate frames and use Codea to interpolate between them.

Here: http://codea.io/talk/discussion/5946/rigging-3d-models-in-codea-first-animation

@yojimbo2000 - I doubt my code would make much sense; there is a lot of it.

As I recall, I used joints as the key elements, like Blender, and simply interpolated between their positions in successive frames; nothing fancy.

Yeah, my code is also a complete mess just now; it’ll take a day or two to write something up.

I wrote mine up here.

One thing I need to investigate: at the moment each point is interpolated between 2 keyframes in a linear mix (just using the GLSL mix function, actually). Because the above animation only uses 4 keyframes, it does look a little, well, robotic (not so much from the front, but from the side it’s more obvious). In this case, as the model is in fact a robot, that’s quite appropriate, but I was thinking I could interpolate using some kind of curve (cubic maybe?), either between 2, or even 3 keyframes. There are hundreds of curve algorithms out there; can anyone recommend one that’s good for keyframing and is cheap to implement (as it would be calculated on the vertex shader 100,000 times per frame)?
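
For anyone curious, the linear version is about as simple as a vertex shader gets. This is a sketch rather than my exact code (the attribute names are just illustrative): each vertex carries its position in two keyframes, and a uniform blend factor slides between them.

    // Linear blend between two keyframe positions.
    uniform mat4 modelViewProjection;
    uniform float blend;        // 0.0 = keyframe A, 1.0 = keyframe B

    attribute vec4 position;    // vertex position in keyframe A
    attribute vec4 position2;   // same vertex in keyframe B

    void main()
    {
        gl_Position = modelViewProjection * mix(position, position2, blend);
    }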

I’ve just read that GLSL smoothstep does Hermite cubic interpolation; I’ll try that first…
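
If it works, it should only be a one-line change: ease the blend factor before feeding it to mix. Something like this (again, just a sketch slotting into the shader above):

    // smoothstep(0.0, 1.0, t) is the Hermite curve 3t^2 - 2t^3 for t in [0,1]
    float eased = smoothstep(0.0, 1.0, blend);
    gl_Position = modelViewProjection * mix(position, position2, eased);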

OK, I’ve added a Catmull-Rom spline that interpolates between all 4 keyframes in the vertex shader, and now the walking motion is silky smooth! B-) I’ll try to do a side-by-side video, and maybe throw in how the animation looks in Blender for comparison purposes.
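
The heart of it is just the standard Catmull-Rom formula evaluated per vertex. Roughly like this (not my exact code; attribute names are illustrative, and the four attributes are the keyframes surrounding the current point in the cycle):

    // Catmull-Rom through four keyframe positions; t runs from 0.0 to 1.0
    // over the segment between p1 and p2.
    uniform mat4 modelViewProjection;
    uniform float t;

    attribute vec4 position;    // keyframe p0
    attribute vec4 position2;   // keyframe p1
    attribute vec4 position3;   // keyframe p2
    attribute vec4 position4;   // keyframe p3

    void main()
    {
        vec4 p0 = position;
        vec4 p1 = position2;
        vec4 p2 = position3;
        vec4 p3 = position4;
        vec4 q = 0.5 * ((2.0 * p1)
                      + (p2 - p0) * t
                      + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                      + (3.0 * (p1 - p2) + p3 - p0) * t * t * t);
        gl_Position = modelViewProjection * q;
    }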

Now you’re just showing off :wink:

@Ignatz who, me? :-"

Here’s a video comparing linear interpolation to Catmull-Rom. In the side-view Catmull-Rom is noticeably smoother:

https://youtu.be/3g4UWW4Tnas

Awesome!

OK, the explanation for this is quite involved, so… I’ve decided I’m going to blog it. The introductory post is here; comments and criticism are welcome:

https://puffinturtle.wordpress.com/2015/05/14/animating-a-3d-blender-model-in-codea-part-1/

Part 2 will follow shortly, with the code.

@yojimbo2000 - very interesting, keep going!

My approach is much closer to the way Blender does it.

  1. I broke the Blender model into pieces (separate upper arms, lower arms, upper legs, lower legs, feet, head, body) in Blender, which was quite messy.

  2. In Codea, I “fitted” a joint to each mesh, essentially a start and end position (and the joints linked together would form a stick figure if you drew them).

  3. I could then move and rotate the joints (within constraints), cascading down the limb, so a rotation of the hip affected the knee and ankle.

Then I set the joint positions for the animation keyframes, and interpolated between them.

This runs pretty fast, and is flexible in that you can create new positions dynamically, hold objects, etc., but there is a fair bit of setup work in breaking up the model and fitting the joints, and creating positions is not easy either.

I see you have large bulging joints like I did, so that you don’t notice the mesh getting distorted or tearing as the joints rotate. Animating a superhero with a skin-tight costume must be pretty difficult!

Actually, I think keyframe interpolation would be able to handle a skin-tight costume very well. The reason the joints appear to tear is that you’re not taking weighting into account. Each vertex is affected by its bone to varying degrees, and can also be influenced by neighbouring bones, i.e. a vertex isn’t necessarily skinned to just one bone. Have a look at this image of the weighting. You can see that the neighbouring bones also influence the vertices around the joints, which is what keeps the joints round throughout the animation:

[Image: bone weighting]

So the advantage of keyframe interpolation is that you don’t have to try to recreate the weighting/skinning system in a Codea importer and runtime. I can see that you’re sceptical (scepticism is good!), so I’ll try to work up an example with a much more organic model. Maybe a T-Rex or something.

The model in my bones example was originally intended to do this, but the mesh was pretty rubbish, with few vertices. But conceptually you have a bones-to-vertex buffer where you can store each vertex’s weights for a couple of bones. Then it’s a single mesh for the whole model, and you pass the set of “bone” matrices to the shader at each call.
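
Something along these lines in the vertex shader (a rough sketch rather than the actual code; the names and the two-bones-per-vertex limit are illustrative, and it assumes you can pass an array of bone matrices in, which is the catch I mention below):

    // Weighted two-bone skinning, one mesh for the whole model.
    uniform mat4 modelViewProjection;
    uniform mat4 bones[16];       // bone transforms, updated every frame

    attribute vec4 position;
    attribute vec2 boneIndices;   // indices of the two influencing bones
    attribute vec2 boneWeights;   // their weights, which should sum to 1.0

    void main()
    {
        vec4 skinned = bones[int(boneIndices.x)] * position * boneWeights.x
                     + bones[int(boneIndices.y)] * position * boneWeights.y;
        gl_Position = modelViewProjection * skinned;
    }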

My code is very messy, mainly because at the time I couldn’t pass a single array of bones to the shader; I had to pass a uniform for each individual bone. I’m not sure if Codea has moved forward on that element or not…

I separated the meshes so I could rotate them individually with no need to manipulate individual vertices. This is fast and doesn’t require any weight calcs, but it creates gaps between meshes as they rotate; as long as you have bulky joints, though, it’s not obvious.

@spacemonkey no, GLSL ES 1.0 doesn’t allow vertex attributes to be arrays. So rather than attribute vec3 position[4], you have to write:

attribute vec3 position;
attribute vec3 position2;
attribute vec3 position3;
attribute vec3 position4;

I’ll post my code soon.

As promised, something a little more organic. I give you:

CAPTAIN CODEA!

https://youtu.be/sfvCbToC82I

This guy is 5,900 vertices, so around a sixth as many verts as the robot in the first video.

One thing I’ve discovered: animating a run cycle is way harder than a walk cycle. I think you’d need more than 4 keyframes, because the extremes of the motion (head at its highest point, arms and legs at full extent, etc.) don’t all coincide like they do with a steady walk.