3D dress-up game?

@Yojimbo thanks for snooping my code! Wow!

@Yojimbo I’m not sure I follow you on commenting out the colorCoordinates loop. The number of calls to touched() per call to draw() can be as high as 20 to 1. By capturing every moving touch in colorCoordinates and then drawing them all at once in draw(), I got much better results than I had before. I was basically able to doodle on the texture, with very natural-looking curves and everything.

Maybe, with your framerate boost, the ratio of touches to draws gets a lot smaller, so the difference isn’t noticeable. I can’t say, since I haven’t had the chance to run the code yet. Eager to as soon as I can, though.

B-)

Er, am I missing something? I just ran it, and it only draws touches to the 2D overlay image, not to the texture at all. When I comment out line 123, there’s no drawing to the screen at all. Is there something else I’m supposed to change?

Well, maybe that’s an option if the frame rate drops. But at 60fps, touching with a single finger, you mostly see just one touch event per draw cycle, sometimes two. I connect the dots with a line in the code above. I don’t think I actually deleted anything, so if you remove the comments you can restore the colorCoordinates table loop. I doubt it makes much difference to the performance, and you’re right that it probably does create a more fluid line.
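For anyone following along, the buffer-then-draw approach we’ve been going back and forth on could be sketched roughly like this. This is a hypothetical simplification, not the actual project code: colorCoordinates is the table from the discussion, but textureImage and the rest are assumed names.

```lua
-- Hypothetical sketch: capture every moving touch, then draw them all
-- at once in draw(), connecting the dots with lines for a fluid stroke.
function touched(touch)
    if touch.state == MOVING then
        -- touched() can fire many times per draw cycle
        table.insert(colorCoordinates, vec2(touch.x, touch.y))
    end
end

function draw()
    setContext(textureImage)  -- draw into the texture, not the screen
    for i = 2, #colorCoordinates do
        local a, b = colorCoordinates[i-1], colorCoordinates[i]
        line(a.x, a.y, b.x, b.y)  -- connect consecutive touch points
    end
    -- keep only the last point, so the next batch joins up with this one
    colorCoordinates = { colorCoordinates[#colorCoordinates] }
    setContext()
    -- ...then draw the textured mesh as usual
end
```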

@UberGoober I repasted the code in my comment above, this time with the makePlane class (I didn’t make any changes to it, I don’t think), just in case I pasted it wrong, or you changed that class since you last posted it. Let me know if it’s still not working.

@Yojimbo2000
There we go! Awesome. Very cool!

And try adding

    local newX = 400
    tween( 10, makePlane.plane.rotate, { x = newX }, { easing = tween.easing.linear, loop = tween.loop.pingpong } )

at the end of setup(). It’s so much fun to see it working at such a great, smooth rate.

Nice! And good work putting this together. Wait, wasn’t a finger-fatness parameter added? You should use that to vary the thickness of the line.

@Yojimbo, I think I ought to work first on abstracting this to accept multiple meshes.

The end purpose, after all, is to let my girls draw directly on the clothes of a 3D model. I have to be able to handle the pants, shirt, shoes, etc., separately.

This is hard work for me but it’s rewarding to work through it. You guys have been so much help!

@yojimbo2000, @Ignatz, I now have Yojimbo’s refinement implemented with one of the dice models from Ignatz’s 3D demos. I must say I’m really pleased with the results.

Still very much a work in progress, but even at this stage it’s fun to draw right on a 3D model.

Gist:
https://gist.github.com/GlueBalloon/9d81558f78f984ca30b1

You’ll notice a couple of glitches:

  • the texture for the dice is mapped to the wrong vertices somehow, making it look very strange as it rotates. I just slapped a standard Codea texture on the original dice model. I don’t know how to fix this, and I’m not sure I even should; it won’t be a problem with the .obj files.
  • since Yojimbo switched from drawing ellipses to drawing lines, the plus is that the drawing feels much more natural and pleasing. The minus is that occasionally a big straight line gets drawn where it shouldn’t be. I’m also not sure if this is worth trying to fix before moving on.

I think next I need to figure out how to get this working with an .obj file.

Just updated the gist, it’s better now:

  • uses the suuuuuuuuuuper simple color picker I just shared in the forum
  • doesn’t draw touches to the 2D plane, only to the 3D model
  • doesn’t draw the buffer cameo in the corner

It’s now much more fun to play with, since it’s free of all that for-debugging-only clutter, and since you can actually control the color you draw.

It also has a fun side effect: all touches on the 3D model get drawn to it, even touches that started on the color picker. So if you put your finger on the color picker and then slide it over to the model and swirl it around, the line changes color as you draw! Fun.

@yojimbo2000: it seems like there should be some easy math that can make the stroke width on the mesh look the same as the current strokeWidth setting, but I can’t figure it out.

Something like the ratio of the screen to the texture and then the ratio of the stroke to the screen size? But then some addition for the distance of the mesh from the camera?

If the math for that comes easily to you, could you share it with me?
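To sketch what I’m imagining (this is just my guess at the math, untested, with texW and meshScreenW as made-up placeholder names): scale the stroke by the texture-to-screen pixel ratio, and use the w that falls out of the combined transform to account for distance from the camera.

```lua
-- GUESS, not working code: stroke width in texture pixels that might
-- look like screenStroke pixels on screen.
function textureStrokeWidth(screenStroke, texW, meshScreenW)
    -- texW: texture width in pixels; meshScreenW: how wide the mesh
    -- currently appears on screen, in pixels
    return screenStroke * (texW / meshScreenW)
end

-- The perspective part: transform a 3D point by the combined matrix and
-- look at the resulting w, which grows as the point recedes from the
-- camera (so on-screen size shrinks by 1/w).
function perspectiveScale(p)
    local m = modelMatrix() * viewMatrix() * projectionMatrix()
    local w = m[4]*p.x + m[8]*p.y + m[12]*p.z + m[16]
    return 1 / w
end
```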

@UberGoober - This should help

https://coolcodea.wordpress.com/2014/12/30/189-locating-the-2d-position-of-a-3d-point/

Look at the section “What are the matrices for?”, and especially the explanation of m[16].

Let me know if it’s unclear.

@Ignatz, that is the clearest explanation of modelMatrix vs. projectionMatrix vs. viewMatrix I’ve read. SO helpful. I’m still working through the math for sizing, but I wanted to mention that.

@Yojimbo2000, did you ever get your super-fast object loader into shareable shape?

I seem to be really close to a 0.1 version of this, having created projects that do the core functions:

  • I can load a female figure, with clothes on, and use an alpha channel to modify the lengths of the sleeves/dress/whatever
  • I can draw directly to a 3D object with my finger

I now just need to put them together, and I’ll have a clothed female figure whose outfit can be modified extensively by hand. Cool!

The big bottleneck is how long it takes the female figure to load. It’s surprisingly quick, to me, for what it’s doing, but way too slow for coding with. I need something I can run and modify several times a minute.

So I’m doing two things: looking online for simpler female models (with clothes!) and trying to figure out what you guys were talking about when you were discussing how to make loading .obj files go much, much faster.

For testing purposes, I would store the obj data in whatever form loads most quickly, and the obvious choice is its final form: tables of vertices and texture coords. You still have to serialise the tables into strings, but I think we discussed this earlier.

Then when you’re done programming, you can go back to loading from obj.
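Something along these lines, maybe (an untested sketch; the function names and the cache key are made up, but saveText/readText are the standard Codea calls):

```lua
-- Hypothetical caching sketch: parse the .obj once, serialise the final
-- vertex table to a string, and reload that string on later runs.
function serialiseVec3s(t)
    local parts = {}
    for i, v in ipairs(t) do
        parts[i] = table.concat({v.x, v.y, v.z}, ",")
    end
    return table.concat(parts, ";")  -- "x,y,z;x,y,z;..."
end

function deserialiseVec3s(s)
    local t = {}
    for x, y, z in s:gmatch("([^,;]+),([^,;]+),([^,;]+)") do
        table.insert(t, vec3(tonumber(x), tonumber(y), tonumber(z)))
    end
    return t
end

-- After the slow .obj parse:
--   saveText("Project:modelCache", serialiseVec3s(verts))
-- On subsequent runs:
--   verts = deserialiseVec3s(readText("Project:modelCache"))
```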

@UberGoober I did experiment with storing 3D obj data in an intermediary format (as @Ignatz is suggesting: comma-separated values of the final tables), and there was a slight speed increase (about a third?), but not enough to justify the extra steps and code complexity (plus the larger data size). I think the main thing slowing it down is calculating the average normals (of course, you could also store the normals in the intermediary format, but then the data would be about 60% larger).

I ended up going back to reading obj files. As @Ignatz and I discussed earlier in this thread, I think you could perhaps save some time by incorporating the average-normal calculation into the vertex-winding section of the .obj decoder (you would at least save one iteration through all of the vertices). But @Ignatz pointed out that it was probably just all of the cross-product calculations that were taking the time, so I never bothered testing this.

I think the thing that I said was much faster was the workflow. Because I’m working with 3D animations, interpolating between keyframes, I have lots of .obj files. I rewrote the .obj importer so that it would accept multiple files from the Dropbox/Apps/Codea/ folder, so that you can export from Blender straight into the Dropbox, and then just run the Codea importer, with no intermediary steps of editing the obj files, combining them into one etc.

Loading from the local dropbox is faster than http.requests (but did you implement loading from the local dropbox anyway?)

Because you’re not doing asynchronous loading via http.request, the obj loader becomes much simpler. It’s just a few hundred lines, and it’s really just a single function plus a few helper functions (it no longer has to be a class, which you really only need for asynchronous file loading). The idea is: I can make some edits to a model in Blender, hit export, sync the Dropbox in Codea, and then just run the project and the new model is there. That all makes a big difference.
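A minimal sketch of that synchronous workflow (the file names and the parseObj function are placeholders, not my actual code; readText with the "Dropbox:" prefix is the standard Codea call):

```lua
-- Hypothetical sketch: read .obj text straight from the synced local
-- Dropbox folder, one file per keyframe, with no http.request and no
-- hand-editing of the exported files.
local frames = {}
for _, name in ipairs({ "walk01", "walk02", "walk03" }) do
    local objText = readText("Dropbox:" .. name)  -- synchronous read
    table.insert(frames, parseObj(objText))       -- your .obj decoder
end
```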

The other thing I only recently discovered in Blender is that the .obj exporter lets you save sets of export settings, which really helps smooth the workflow (especially if you return to a project after some time away from it).

I’ll see if I can make this presentable enough to share (the idea was eventually to write a fifth part to my blog series on animating 3D models, but other things, like Soda, got in the way).

I’m not sure how useful my code would be for you. The animation-related parts presumably aren’t relevant to your project. Because the models I work with have just one texture per model, I took out the whole mechanism that @Ignatz has for splitting the .obj file up into multiple models and multiple textures. So, depending on how your model and the clothes are organised, you’d have to export each texture as a separate model (this isn’t hard: choose the texture, press Select in the texture window to select everything that has that texture, and in the export window select “export selection only”). But this might not be worth the hassle if you’ve already got your code working with @Ignatz’s loader.

How slow are the startup times anyway? I have the loader in a coroutine so I can animate a progress bar.
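The coroutine arrangement looks roughly like this (a simplified sketch, not my actual loader; objLines and the parsing step are stand-ins):

```lua
-- Hypothetical sketch: run the slow parse inside a coroutine, yielding
-- periodically so draw() can keep animating a progress bar.
function setup()
    loader = coroutine.create(loadModel)
    progress = 0
end

function loadModel()
    for i = 1, #objLines do
        -- ...parse line i of the .obj data...
        if i % 500 == 0 then
            progress = i / #objLines
            coroutine.yield()  -- hand control back to draw()
        end
    end
    progress = 1
end

function draw()
    background(40)
    if coroutine.status(loader) ~= "dead" then
        coroutine.resume(loader)  -- do another slice of parsing
        rect(100, 100, progress * (WIDTH - 200), 20)  -- progress bar
    else
        -- ...draw the loaded model...
    end
end
```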

@Ignatz, that’s a great idea. Hard-code the model. I’ll do that and see how it affects speed.

@Yojimbo2000, yes, I’m reading from the local folder, not making http requests. Speeding up the workflow by not having to edit the files sounds really good to me, though! Could you share that part of your code? As for your other questions: the model is its own mesh, and the clothes and hair are their own meshes; loading takes long enough that I stop counting how long it takes! :slight_smile: …which is to say, between 45 and 60 seconds.