3D dress-up game?

@yojimbo2000, that sure worked! Fantastic!

I’d like to keep as much as possible of the custom code inside the DrawOnMeshTexture class, so what I’ve done is put those lines at the top of the 2D drawing code in that class, which seems to work just as well. Is there any problem there?

I guess not - just make sure you don’t call that function and then try to do more 3D drawing afterwards. In terms of keeping code tidy, though, I prefer a clear distinction in the draw loop: 1. draw the game world (3D); 2. do the 2D drawing.

If the draw texture function is only going to be called from the touched routine, then you could just put the ortho/viewMatrix calls right at the very end of your draw loop, as all touched calls happen after draw.

That makes total sense to me, at least in the context of a self-contained project. I’m building the class as a library, though. So I want to put as much functionality as I can inside the class itself.

@Ignatz, can you break down what’s happening in this part of the shader:

    highp vec2 T = vTexCoord * 4096.0;
    highp float x1 = mod(T.x, 16.0);
    highp float x2 = (T.x - x1) / 16.0;
    highp float y1 = mod(T.y, 16.0);
    highp float y2 = (T.y - y1) / 16.0;
    gl_FragColor = vec4(x2 / 255.0, y2 / 255.0, (x1 * 16.0 + y1) / 255.0, 1.0);

I think I understand what’s going on conceptually, but the rationale behind the specific mathematical operations, as well as some of the C syntax, is fuzzy to me.

@UberGoober - I usually do my 2D drawing at the end of the draw function, but the beginning should be fine too.

The reason you need those extra lines of code is that they control whether you are looking at a flat 2D scene or a 3D scene with z depth, while pushMatrix and popMatrix only store/restore translations and rotations within your scene.
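
A minimal sketch of that pattern, just to illustrate the idea (drawScene3D and drawHUD2D are placeholder names, not functions from the project):

    function draw()
        -- 3D pass: perspective projection and a camera give the scene z depth
        perspective()
        camera(0, 0, 900, 0, 0, 0)
        drawScene3D()
        -- 2D pass: switch back to a flat orthographic projection for overlays
        ortho()
        viewMatrix(matrix())
        drawHUD2D()
    end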

@Ignatz, in the shader code, can you tell me specifically:

  1. Where does the number 4096.0 come from?
  2. All the individual coordinates are either divided by or mod()'d by 16.0 - why?

The shader code takes the texture x,y position (a fraction 0-1) and encodes it in the r,g,b colours of the pixel.

The x position is encoded in the red colour and half of the blue colour. The y value is encoded in the green colour and half of the blue colour.

Practically, this means that the x and y position can each be encoded to an accuracy of 1/(256x16).

So the code takes the x texture position (0-1), multiplies by 4096 (=256x16), divides the result by 16 and puts the remainder in blue, and the rest of it in red. Similar for y.
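
To decode that back in Codea you just reverse the split. A minimal sketch, assuming the shaded scene was rendered into an image called img and the touch position is touchX, touchY (those names are mine, not from the demo):

    -- r, g, b come back from img:get as whole numbers 0-255
    local r, g, b = img:get(touchX, touchY)
    local xLow = math.floor(b / 16)       -- top half of blue holds the x remainder (x1)
    local yLow = b % 16                   -- bottom half of blue holds the y remainder (y1)
    local texX = (r * 16 + xLow) / 4096   -- rebuild T.x = x2*16 + x1, then back to a 0-1 fraction
    local texY = (g * 16 + yLow) / 4096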

If you don’t need more accuracy than 1/256, then simply use this instead - which also frees up the blue colour so you can store the id of the texture that was touched.

    //this replaces all the lines above
    gl_FragColor = vec4(vTexCoord,0.0, 1.0);

If you want to store the id of the touched item, use this instead:

    //convert ID to fraction 0-1, Codea will multiply it by 255
    gl_FragColor = vec4(vTexCoord, id/255.0, 1.0);

    //and put this line with the other "uniform" items above the main function
    uniform float id;   

    --and back in Codea, set the ID when you create the mesh for each part 
    --of the body, eg
    Pants=mesh()
    --your code to create mesh
    Pants.shader = ..... (your code)
    Pants.shader.id=2

And when you touch the pants, the blue colour will be 2.
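
Reading it back on the Codea side is then just a matter of checking the blue channel. A rough sketch under the same assumptions (img, x and y stand for the buffer image the shader rendered into and the touch position):

    local r, g, b = img:get(x, y)
    if b == 2 then
        -- the Pants mesh was touched; r and g still hold the texture coordinate, scaled 0-255
        local texX, texY = r / 255, g / 255
    end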

@Ignatz, thanks for clarifying, I think I almost get it.

Is there a reason you don’t use the alpha value as well? Why use r, g, and b but not a?

Alpha gets messed up by Codea’s anti-aliasing, which interpolates the alpha around colour transitions, and there’s no way to turn it off. So alpha is unreliable.

EDIT - see minor correction to code in my previous post to add decimal point to 255 (shader doesn’t like integers without decimals)

@Ignatz: so, here’s an odd question. The answer might be obvious to someone who understood everything better, so please forgive me if it’s dumb.

My question is based on the fact that a shader can’t directly output information to the Codea environment, but Codea can directly put information in, right? The clipping window, for example, is a parameter directly set by Codea.

So would it be possible to encode even more information by putting it in more than one pixel?

The process I’m imagining goes like this: Codea tells the shader to render a 3 x 3 pixel region, centered around the touch. Now, in the shader, we’re currently encoding individual coordinate information in every one of those pixels. But since we know that the only one we are going to need is the middle one, is there some way to tell the shader to use, for instance, all of the first three pixels for the x value, and all of the last three pixels for the y? Basically we would only use the first two places of each r, g, and b of the color values, so that each used digit could go from one to nine, but since we would be using three sets of colors that would give us accuracy down to 18 decimal places for both x and y. We could represent fairly precisely the actual decimal number that the shader is using.

Would that be possible?

@Ignatz, the code at this link also no longer runs: https://gist.githubusercontent.com/dermotbalson/7443057/raw/cc47c8c0a7bfaced35feb90e824aced904cba8dd/gistfile1.txt
When you tap play, every tab lights up red, complaining about calling some nil value or other. Sorry! Would you rather not know?

I’ll fix the code. It’s another software update breaking something.

Wrt your question, my understanding is that the shader still has to process all the pixels even if they will be clipped, because at the time the fragment shader runs, it doesn’t know where on the screen the pixel is going to end up - the camera angle hasn’t been applied yet.

So the shader isn’t just processing 9 pixels.

But I don’t see a problem with an accuracy of 1/256, bearing in mind your finger has an accuracy of about 1/50! So isn’t one pixel colour good enough?

@Ignatz: to be embarrassingly honest, I don’t fully understand the math behind the current calculations. My attempts to repurpose the code aren’t working, and in trying to debug it I’m running up against my own math limitations. So I was trying to think of a way to do the same thing but have it be more parseable to me. In the end I think I just have to force myself to understand the existing code.

But as an aside: if the shader has to process all the pixels anyway, why do we clip the buffered image at all?

@Ignatz, you may find these results interesting.

I was working with your touch demo, trying to dissect it.

First, to make the math simpler for me to understand, I changed the fragment shader to this:

    void main()
    {
        gl_FragColor = vec4(vTexCoord.x, vTexCoord.y, 0.0, 1.0);
    }

…what surprised me is that, without changing any other code at all, it still seemed to work just as well. So I guess your point about necessary precision is a good one.

Next, keeping this simplified shader, I applied a texture to the plane and tried to use this code to draw to it. Inside GetPlaneTouchPoint I added this:

    local codedX,codedY =shaderImg:get(touchX,touchY)
    local proportionalX = codedX / 255
    local proportionalY = codedY / 255
    local convertedX = proportionalX * shaderImg.width
    local convertedY = proportionalY * shaderImg.height
    textureTouched = vec2(convertedX, convertedY)

…and at the end of draw I added:

    if textureTouched then
        ortho()
        viewMatrix(matrix())
        setContext(plane.mesh.texture)
        fill(0, 255, 255, 255)
        ellipse(textureTouched.x,textureTouched.y,10)
        textureTouched = nil
        setContext()
    end

…which works okay, but oddly it behaves slightly differently from the other code.

If you try it out, you’ll see that the textureTouched point is drawn slightly above and off-center from the p point. Which seems odd. Both are using the simplified shader, so any loss of accuracy should extend to both of them, shouldn’t it?

And between the two, the p point feels more right - it seems to appear closer to where I actually touch.

It may come down to the fact that your code multiplies the shader result by 255, while mine uses 4096 (256*16). Mine may be slightly incorrect because the colours go from 0 to 255, not 256. But I haven’t checked to be sure.

Anyway, what you’ve done is a great way of understanding what is happening, and I have done that many times myself. So it is well worthwhile!

@UberGoober - I fixed that broken code you mentioned above, here.

It is for advanced Codea users, and for female players - I never know how to play this type of game.

@Ignatz, @Yojimbo2000 - drawing to texture (with visible buffer image & pseudo-unit-tests):


--# Main
-- MakePlane

function setup()
    textMode(CORNER)
    setUpTests()
    makePlane = MakePlane()
    texture = makePlane.plane.mesh.texture
    colorCoordinates = {}
    overlayImage = image(WIDTH, HEIGHT)
    setUpBufferDisplay()
end

function setUpBufferDisplay()
    bufferImage = image(WIDTH, HEIGHT)
    local bufferFract = 1 / 6
    cameoSize = vec2(WIDTH * bufferFract, HEIGHT * bufferFract)
    camoPos = vec2((cameoSize.x/2)+20, (cameoSize.y/2)+20)
    setContext(bufferImage) --fill image with yellow for visibility
    strokeWidth(0)
    fill(220, 210, 122, 255)
    rect(0,0,WIDTH,HEIGHT)
    setContext()
end

function setUpTests()
    local parameterTable = {} --series of buttons to confirm visuals
    local assignableAction = function() end
    local nextParameter = function ()
        parameter.clear()
        if #parameterTable == 0 then
            return
        end
        local testDefinition = parameterTable[1]
        local passAction = function()
            print(testDefinition..": PASS")
            assignableAction()
        end
        local failAction = function()
            print(testDefinition..": FAIL")
            assignableAction()
        end
        parameter.action("PASS if "..testDefinition, passAction)
        parameter.action("FAIL", failAction)
        table.remove(parameterTable, 1)
    end
    assignableAction = nextParameter
    parameterTable = {"plane is showing", "touching draws red dots", "dragging leaves trail", "touch activates shader", "buffer image in corner", "clipping drawn in buffer", "mesh clips in buffer", "buffer perspective matches", "buffer coordinates read", "touch draws to texture", "draws in right place", "all touches draw"}
    nextParameter()
end

function draw()
    backingMode(STANDARD)
    background(69, 119, 104, 255)
    drawPlane()
    draw2D()
end

function drawPlane()
    pushMatrix()
    makePlane:draw()
    popMatrix() 
end

function draw2D()
    ortho()
    viewMatrix(matrix())
    sprite(overlayImage, WIDTH/2, HEIGHT/2, WIDTH, HEIGHT)
    sprite(bufferImage, camoPos.x, camoPos.y, cameoSize.x, cameoSize.y)
    if #colorCoordinates > 0 then
        drawTouchesToTexture()
    end
end

function drawTouchesToTexture()
    local textureX, textureY, baseX, baseY, marker, intCoord, fX, fY
    textureX, textureY = 0, 0
    for _, coord in ipairs(colorCoordinates) do
        intCoord = vec2(math.floor(coord.x), math.floor(coord.y))
        baseX, baseY, marker = bufferImage:get(intCoord.x, intCoord.y)
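        -- the fragment shader writes 1.0/255.0 into blue, which reads back here as 1,
        -- so marker == 1 distinguishes shader-rendered pixels from the yellow background fill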
        if marker ==  1 then
            fX, fY = baseX / 255, baseY / 255
            textureX = texture.width * fX
            textureY = texture.height * fY
            setContext(texture)
            fill(0, 43, 255, 255)
            ellipse(textureX, textureY, 25)
            setContext()
        end
    end
    text("x: "..textureX..", y: "..math.floor(textureY, 10, 10))
    colorCoordinates = {}
end

function touched(touch)
    setContext(overlayImage)
    fill(255, 4, 0, 255)
    ellipse(touch.x, touch.y, 10)
    setContext()
    if touch.state == MOVING then
        toggleShader(true)
        drawClipToBufferPosition(touch)
        table.insert(colorCoordinates, vec2(touch.x, touch.y))
    else
        toggleShader(false)
    end
end

function drawClipToBufferPosition(touch)
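        -- redraw the shaded plane into the buffer image, clipped to a small square
        -- around the touch, so only the pixels near the touch get re-encoded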
        setContext(bufferImage)
        pushMatrix()
        clip(touch.x - 10, touch.y - 10, 20, 20)
        makePlane:draw()
        popMatrix()
        setContext()
end

function toggleShader(onOrOff)
    if onOrOff == true then
        local planeShader = shader(PlaneShader.v, PlaneShader.f)
        makePlane.plane.mesh.shader = planeShader
    else 
        makePlane.plane.mesh.shader = nil
    end
end

--# MakePlane
MakePlane = class()

function MakePlane:init()
    self:createSimpleMesh()
end

function MakePlane:draw()
    self:setupPerspective()
    self.plane.mesh:draw()
end

function MakePlane:createSimpleMesh()
    --settings for our rectangle (named "plane" below)
    self.plane={}
    self.plane.centre=vec3(0,0,0)
    self.plane.rotate=vec3(45,0,20)
    self.plane.size=vec3(500,500,0) --vec3, because s.z is read below
    strokeWidth(10)
    stroke(255, 0, 0, 255)
    --set up mesh, needed for shader
    self.plane.mesh=mesh()
    self.plane.mesh.texture = readImage("SpaceCute:Background")
    local s=self.plane.size
    local x1,y1,x2,y2,z=-s.x/2,-s.y/2,s.x/2,s.y/2,s.z
    self.plane.mesh.vertices={vec3(x1,y1,z),vec3(x2,y1,z),vec3(x2,y2,z),vec3(x2,y2,z),vec3(x1,y2,z),vec3(x1,y1,z)}
    self.plane.mesh.texCoords={vec2(0,0),vec2(1,0),vec2(1,1),vec2(1,1),vec2(0,1),vec2(0,0)}
    self.plane.mesh:setColors(color(255,255,0))
    self.img=image(WIDTH, HEIGHT)
    self.shaderImg=image(WIDTH, HEIGHT)
end

function MakePlane:setupPerspective()
    perspective()
    camera(0,0,900,0,0,-1)
    translate(self.plane.centre.x, self.plane.centre.y, self.plane.centre.z)
    rotate(self.plane.rotate.x,1,0,0)
    rotate(self.plane.rotate.y,0,1,0)
    rotate(self.plane.rotate.z,0,0,1)
end

PlaneShader = {

v = [[
uniform mat4 modelViewProjection;
attribute vec4 position;
attribute vec2 texCoord;
varying highp vec2 vTexCoord;

void main()
{
    vTexCoord = texCoord;
    gl_Position = modelViewProjection * position;
}
]],
f = [[
precision highp float;
varying highp vec2 vTexCoord;

void main()
{
    lowp float marker = 1.0 /255.0;
    gl_FragColor = vec4(vTexCoord.x,vTexCoord.y,marker,1.0);
}
]]}

Progress!

For extra fun add this to the end of setup:

    newX = 400
    tween( 10, makePlane.plane.rotate, { x = newX }, { easing = tween.easing.linear, loop = tween.loop.pingpong } )

…it’s a testament to Ignatz’s code that it works so well during rotation, with no extra programming needed!

Ah, rotations, my favourite nightmare! @-)