3D dress-up game?

@yojimbo2000 @Ignatz You’ll also want to use discard instead of just setting the gl_FragColor’s alpha, because OpenGL does some weird things with transparency. If the texture has transparent areas and you look at the mesh from different angles, you’ll get some pretty odd results: parts of the mesh not rendering because OpenGL believes they’re hidden, or colours behind a transparent image not mixing correctly.
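
For reference, a minimal sketch of what that looks like in a Codea fragment shader (identifiers follow Codea’s default shader; the 0.1 alpha threshold is just an illustrative cutoff):

fragmentShader = [[
uniform lowp sampler2D texture;
varying highp vec2 vTexCoord;

void main()
{
    lowp vec4 col = texture2D(texture, vTexCoord);
    if (col.a < 0.1) discard; //drop the fragment entirely instead of writing a low alpha
    gl_FragColor = col;
}
]]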

discard did the trick. You guys are gods.

http://i49.photobucket.com/albums/f271/jwc15/Photo%20Aug%2012%209%2054%2056%20PM_zpsbvvjytks.png

If that’s a new style, I don’t like it much.

PS I can assure you that we (or at least I) struggle as much as anyone else with this stuff. We just started a little sooner, that’s all.

@SkyTheCoder - I find discard works fine as long as you draw transparent objects last, and in order from back to front

EDIT - I meant to say setting all the colour values works fine, not just discard.

So, I’m chuffed. In terms of core functionality, this is already really close. I showed it to my girls, and of course they wanted to touch it and move it. And I realized what they’d like best is to be able to draw right on the model.

That seems really hard, but this seemed really hard at first too. And it actually was significant effort, but nowhere near what I’d feared. Talking with you guys, I narrowed it down to five things that seemed doable, and then I was able to actually do it.

So, trying to continue in the same vein, to implement draw-on-model it seems like I need two core functionalities:

  1. Be able to live-update a texture
  2. Be able to detect where on the texture a person is touching when they tap or drag on the screen.

…not sure where to start on those. Any tips or sample projects or tutorials you could point me towards?

@Ignatz Yes, but in most cases it’s difficult to sort the order you draw objects in, and even then you can get some weird results, e.g. from one angle you look through the transparent object and only see the scene behind it, but from another you also see the transparent back of the object.

Live updating is simply a matter of replacing one texture image with another.
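
Something like this, as a rough sketch (m is your mesh, tex its texture image, and tx,ty the texture position you want to paint at - see the detection steps below):

setContext(tex) --redirect drawing into the texture image
fill(255,0,0)
ellipse(tx, ty, 20) --paint a dot at the decoded texture position
setContext() --back to the screen
m.texture = tex --reassign, and the mesh shows the edited image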

I can show you how to detect a touch.

  1. Capture the touch x,y position

  2. Draw the next frame to an image in memory, and for all meshes on which you want to trap touches, use a coded version of the image for that mesh. By this, I mean an image whose RGB values are set in a way that identifies the touch position.

For example, suppose you have several images, any of which could be touched. Create a duplicate image for each of them, and replace the normal colours (for non-transparent pixels) with the following:
R = image #
G = X position of pixel
B = Y position of pixel

So for the first image, R will be set to 1 for each pixel, for the second image, R will be set to 2, etc.

G and B have to be set as a value 0-255, so your resolution (the limit of your accuracy) is 1/255 of the image width/height, which is good enough for fat fingers.
So if a pixel is at 200,300 on an image which is 400x1200, you set G=255*200/400 and B=255*300/1200.
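
As a rough (untested) sketch, building one of these coded duplicates at setup, where img is the original texture and n is its image number - the values are floored because the pixel functions want whole numbers:

local coded = image(img.width, img.height)
for x = 1, img.width do
    for y = 1, img.height do
        local r, g, b, a = img:get(x, y)
        if a > 0 then --only code the non-transparent pixels
            coded:set(x, y, n, math.floor(255*x/img.width), math.floor(255*y/img.height), 255)
        end
    end
end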

Before you draw, set the clipping area so Codea only bothers drawing this image to the tiny area around the touch - for speed. If your touch is at x,y, I would just write clip(x-1,y-1,2,2). After you’re done, reset clip with clip()

Now set your special duplicate as the mesh texture and draw, for each mesh. Only bother drawing the meshes that can be touched.

When you’ve drawn this image in memory with setContext, get the colour of the pixel at (x,y), decode the RGB values, and now you know which image was touched and where.
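
Put together, the detection pass might look something like this (pickImg, codedTex, realTex, touchableMeshes and texWidth/texHeight are illustrative names, and you’d set up the same camera/perspective as in your main draw):

setContext(pickImg)
background(0, 0, 0, 0) --clear the pick image
clip(x-1, y-1, 2, 2) --only rasterise the tiny area around the touch
for _, m in ipairs(touchableMeshes) do
    m.texture = m.codedTex --swap in the coded duplicate
    m:draw()
    m.texture = m.realTex --restore the normal texture
end
clip()
setContext()
local r, g, b = pickImg:get(math.floor(x), math.floor(y))
--decode: r = image number, g and b = texture position scaled to 0-255
local tx, ty = g/255 * texWidth, b/255 * texHeight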

You set the special duplicates up once-only at the beginning, so this detection process will be extremely fast.

If you want a more dynamic approach, you can write a special shader to encode the pixels for you on the fly.

Update. This post of mine

https://coolcodea.wordpress.com/2015/01/04/191-locating-the-3d-position-of-a-2d-point/

shows how to use a shader to insert position values into the pixel colours. That post suggests using the third colour channel to get more accuracy for x,y, but in your case you will need it to identify which mesh was touched, and pass the mesh id into the shader for this purpose.

If I’m confusing you, just say. #-o

@Ignatz 's idea is a good one. In your case it would be simpler, because you’re trying to find the texCoord at which the model was touched, not the position, so you wouldn’t need to pack extra positional data into the blue channel or take the final step of extrapolating the position from the texCoord (and with a complex form like this one, I’m not sure you could extrapolate the position from the texCoord anyway). I would go with something like
col = vec4(texCoord.x, texCoord.y, modelNumber, pixel.a);
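
In context, the whole picking shader might look something like this sketch (with modelNumber passed in as a uniform already scaled to 0.0-1.0, e.g. m.shader.modelNumber = id/255, since output colour components are clamped to that range):

pickShader = [[
uniform lowp sampler2D texture;
uniform lowp float modelNumber;
varying highp vec2 texCoord;

void main()
{
    lowp vec4 pixel = texture2D(texture, texCoord);
    //pack the texture coordinate and the mesh id into the output colour
    lowp vec4 col = vec4(texCoord.x, texCoord.y, modelNumber, pixel.a);
    gl_FragColor = col;
}
]]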

What I’m saying is that if you are using a pixel colour to store an x,y position, there are only 256 values for width and height, but that should be enough.

My approach gives you back the place on the texture that was touched, and if you want to change the texture image, that’s exactly what you need.

EDIT - I agree with yojimbo2000, I was saying the same thing but I guess I wasn’t clear!

On the subject of efficient vertex storage and recovery, I had a go at using simple strings, as shown below. Decimals are truncated for this test.

EDIT - On my iPad3, this encodes 500,000 (vec3) vertices in about 10 seconds and decodes them in 7 seconds.

function setup()
    m=CreateTestSet(500000) 
    t=os.time() txt=Encode(m)  print("Encoding: "..os.time()-t)
    t=os.time() m2=Decode(txt) print("Decoding: "..os.time()-t)
    for i=1,#m do
        if (m[i]-m2[i]):len()>0.00001 then print("ERRORS!") break end
    end
    print("All done")
end

function CreateTestSet(n)
    local verts={}
    local rand=function() return math.floor((math.random()-0.5)*20000)/100 end
    for i=1,n do
        verts[i]=vec3(rand(),rand(),rand()) --between -100 and +100
    end
    return verts
end

--this method is faster than concatenating strings
--or using tostring to collapse each vector to a string
function Encode(tbl)
    local tbl2={}
    for i=1,#tbl do
        tbl2[#tbl2+1]=tbl[i].x
        tbl2[#tbl2+1]=tbl[i].y
        tbl2[#tbl2+1]=tbl[i].z
    end
    return table.concat(tbl2,",")
end

function Decode(txt)
    --loadstring parses the comma-separated numbers straight into a Lua table
    local tbl=loadstring("return {"..txt.."}")()
    local tbl2={}
    for i=1,#tbl,3 do
        tbl2[#tbl2+1]=vec3(tbl[i],tbl[i+1],tbl[i+2])
    end
    return tbl2
end

@Ignatz I’m glad you posted that test. I added json to it, and the json is ten times slower! I think it’s because you have to make separate sub-calls to json.encode every time you encounter an unsupported type. Incidentally, if you make m a local variable, the tests run 1.5 to 2 times as fast.

function setup()
    jsonMethods()
    local m=CreateTestSet(500000) 
    print("starting table.concat test")
    local t=os.time() txt=Encode(m)  print("Encoding: "..os.time()-t) --5-6 seconds (iPad Air). 3-4 secs (local)
    t=os.time() m2=Decode(txt) print("Decoding: "..os.time()-t) --2-5 seconds. 2-3 secs (local)
    for i=1,#m do
        if (m[i]-m2[i]):len()>0.00001 then print("ERRORS!") break end
    end
    print("starting json test")
    t=os.time() txt=json.encode(m)  print("Encoding: "..os.time()-t) --63-70 seconds!!  29-45 secs (local) {exception = jsonVec3}
    t=os.time() m2=decodeJson(txt) print("Decoding: "..os.time()-t) --23 seconds. 13-20 secs (local)
    print("All done")
end

function CreateTestSet(n)
    local verts={}
    local rand=function() return math.floor((math.random()-0.5)*20000)/100 end
    for i=1,n do
        verts[i]=vec3(rand(),rand(),rand()) --between -100 and +100
    end
    return verts
end

--this method is faster than concatenating strings
--or using tostring to collapse each vector to a string
function Encode(tbl)
    local tbl2={}
    for i=1,#tbl do
        tbl2[#tbl2+1]=tbl[i].x
        tbl2[#tbl2+1]=tbl[i].y
        tbl2[#tbl2+1]=tbl[i].z
    end
    return table.concat(tbl2,",")
end

function Decode(txt)
    --loadstring parses the comma-separated numbers straight into a Lua table
    local tbl=loadstring("return {"..txt.."}")()
    local tbl2={}
    for i=1,#tbl,3 do
        tbl2[#tbl2+1]=vec3(tbl[i],tbl[i+1],tbl[i+2])
    end
    return tbl2
end

function jsonMethods()
    local meta = getmetatable(vec3())
    meta.__tojson = function(t)
        return json.encode({t.x,t.y,t.z})
    end
end

--[[ --as an alternative to setting the meta-method, add this as the exception function
function jsonVec3(_, v)
    return json.encode({v.x,v.y,v.z})
end
  ]]

function decodeJson(txt)
    local tab = json.decode(txt)
    local vert = {}
    for i,v in ipairs(tab) do
        vert[i] = vec3(v[1], v[2], v[3])
    end
    return vert
end

That’s a much bigger difference than I’d expect, but I guess being able to use loadstring, and working with tables rather than strings, helps a lot.

json is useful for tables with lots of dimensions, but I guess it’s overkill for a one-dimensional array.

I’ve been doing lots of testing and comparisons, and I’m still undecided as to whether an intermediary mesh format is needed. Once you take all the asynchronous loading stuff out of the obj loader, it becomes a lot simpler: it doesn’t have to be a class with methods and state management any more, it can just be a function.

Also, I realised that one part of why my load times were getting slow with lots of models is the CalculateAverageNormals function, which I’ve been using regardless of whether I’m loading from the .obj file or the intermediary format. This means that the overall project load speeds are faster with the intermediary format, but only by a third, not a massive difference. You could save the normals (either in the interim file, or have Blender export them for you) so that they’re only calculated once, but that would nearly double the size of the 3D assets. It’s a question of hitting the right balance between asset size and asset load speed.

I’m wondering whether calculating the average normals could be done faster as part of processing the .obj file, seeing as the .obj file already contains a set of unique points for the model, which is one of the things you need for the calculation. It would mean one less iteration through all of the vertices.
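
For what it’s worth, the core of that calculation over the unique points might look like this sketch (faces holding triangles of indices into the points array):

local normals = {}
for i = 1, #points do normals[i] = vec3(0,0,0) end
for _, f in ipairs(faces) do
    local a, b, c = points[f[1]], points[f[2]], points[f[3]]
    local n = (b-a):cross(c-a) --face normal from the cross product
    normals[f[1]] = normals[f[1]] + n --accumulate on each corner
    normals[f[2]] = normals[f[2]] + n
    normals[f[3]] = normals[f[3]] + n
end
for i = 1, #normals do normals[i] = normals[i]:normalize() end --average by normalising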

Pros of using an intermediary format:

  • load times are around a third faster (4 seconds instead of 6 on an iPad Air for 130,368 vertices worth of models). Not as big a gain as I’d hoped for, because both methods still need the average normals to be calculated. And if average normal calculation can be done at the .obj processing stage, then .obj might have an advantage over the intermediary format.

  • for animation via keyframe interpolation, I pack the vertices from several .obj files into one file (I put colours and texCoords into a separate file, as these don’t change from frame to frame). Not sure if this is a pro, but it keeps your Dropbox folder tidy.

  • by saving it as a text asset, it automatically gets included in your project when you export. Although you can achieve a similar effect by changing the extension of the obj file in Dropbox from .obj to .txt (you can do this in the Codea obj loading program using os.rename), and then loading the file with readText instead of io.read. So again, perhaps not such a unique pro.

Cons

  • 3D asset sizes can be almost twice as big (1404 kb vs 2639 kb for 130,368 verts worth of models). This is using @Ignatz 's minimal comma-separated value format too.

  • it does involve an extra step of importing

I’m not sure you’ll get much gain from doing the normals at the obj processing stage, I suspect the effort comes in the cross product calcs.

I think I’d start off using native obj files, because

  1. that keeps the project simpler

  2. it can be added later

  3. other things are more important right now, like getting the animation working effectively

@Ignatz, @Yojimbo2000, thanks for the pointers! And code! I’ll get working on it. The whole thing is making my head spin, but the blog helps a lot.

One question, to which the answer might be obvious if I understood the whole thing better: will I be able to save the drawn-on texture as a new file?

@Ignatz, I just pasted the sample code from your blog into Codea and got this error:

error: [string "function setup()..."]:84: bad argument #1 to 'get' (number has no integer representation)

Any advice?

Yes, using saveImage.
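
For example (the asset name is just illustrative):

saveImage("Documents:dressUpTexture", tex)
--and in a later session:
local tex = readImage("Documents:dressUpTexture")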

@UberGoober - Curses! That’s caused by another change in Codea versions.

It means the function on that line requires integers, so any parameters that might be fractional need to have math.floor/ceil applied.

I added this at line 77 (untested)

    t.x,t.y=math.floor(t.x+0.5),math.floor(t.y+0.5)

@Ignatz: Rats. The new error:

error: [string "function setup()..."]:78: attempt to index a userdata value (local 't')