touches on transformed matrices (rotate, translate, scale) [solved]

I'm trying to create a simple, minimalistic touch library that takes care of all touches and the corresponding shapes. I wrote a little prototype. It only works with polygons (e.g. a mesh), and only when they are not transformed by rotation, translation or scale. To check the ‘hit / collision point’ on the shape itself, I use the pointInPoly() function…

How can I apply my concept to transformed matrices? (How can I check whether the polygon was hit by a touch when it is rotated, etc… - i.e. the matrix has changed, but its vertices actually remain the same as before?)

I hope you understand what I'm after. Sorry for any confusion.

Main

function setup()
    body = {mesh = mesh()}
    body.vertices = {
        vec2(0,0),
        vec2(60,0),
        vec2(0,40),
        
        vec2(60,0),
        vec2(60,40),
        vec2(0,40)
    }
    body.mesh.vertices = body.vertices
    
    head = {mesh = mesh()}
    head.callback = function() print("touched head") end
    head.vertices = {
            vec2(10,20),
            vec2(50,20),
            vec2(10,80),
            
            vec2(50,20),
            vec2(50,80),
            vec2(10,80)
    }
    head.mesh.vertices = head.vertices
    
    monitor = StatusBar()
    
    addTouchListener(body, function(touch) --notice ordering!
        if touch.state == ENDED then
            print("touched body")
        end
    end)
    addTouchListener(head)
    addTouchListener(monitor, function() print("NONONO!!!") end)
end

function draw()
    background(80, 86, 87, 255)
    
    translate(WIDTH/2, HEIGHT/2)
    rotate(45)
    
    fill(174, 174, 40, 255)
    body.mesh:draw()
    
    fill(255, 72, 0, 255)
    head.mesh:draw()
    
    resetMatrix()
    monitor:draw()
end

function touched(touch)
    updateTouchListeners(touch)
end

Facilities

--even-odd (crossing number) test: shift each edge so the touch sits at the
--origin, then count crossings of the positive x-axis; an odd count = inside
local function pointInPoly(touch, vertices)
    local intersectionCount = 0
    
    local x0 = vertices[#vertices].x - touch.x
    local y0 = vertices[#vertices].y - touch.y
    
    for i = 1, #vertices do
        local x1 = vertices[i].x - touch.x
        local y1 = vertices[i].y - touch.y
        
        if y0 > 0 and y1 <= 0 and x1 * y0 > y1 * x0 then
            intersectionCount = intersectionCount + 1
        end
        
        if y1 > 0 and y0 <= 0 and x0 * y1 > y0 * x1 then
            intersectionCount = intersectionCount + 1
        end
        
        x0 = x1
        y0 = y1    
    end
    
    return (intersectionCount % 2) == 1
end

function addTouchListener(obj, callback)
    if not _touchesStack then --create a stack, if needed
        _touchesStack = {}
    end
    
    if callback then --bind callback to obj, if given
        obj.callback = callback
    end
    
    table.insert(_touchesStack, obj)
end

function updateTouchListeners(touch)
    for i=#_touchesStack, 1, -1 do --travel stack in reverse and compare touch with each obj
        local obj = _touchesStack[i]
        
        if obj.vertices and pointInPoly(touch, obj.vertices) then --check collision
            if obj.callback then
                obj.callback(touch)
            end
            
            return true
        end
        
        if obj.touched then --fallback to 'original' method, if any
            obj:touched(touch)
        end
    end
    
    return false
end

A few hints:
1/ Your body should know the situation in which it is drawn. So your body:draw() function should have a self.myMatrix = modelMatrix() statement.
2/ Your body:touched(touch) function should convert the touch coordinates into your object's coordinates, so:
local mat = self.myMatrix:inverse()
Then you have to multiply mat by vec2(touch.x, touch.y); you have to do that by hand, I'm afraid.
Then the new touch coordinates should be comparable to your object's initial coordinates.
Not checked. A sketch of the idea follows below.
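
A minimal sketch of those two hints put together (assuming the body table from Main above, that pointInPoly is reachable from it, and Codea's column-major matrix layout; toLocal is an invented helper name):

~~~
function body:draw()
    self.myMatrix = modelMatrix() --capture the transform in effect at draw time
    self.mesh:draw()
end

--convert a screen-space touch into the object's untransformed coordinates
function body:toLocal(touch)
    local m = self.myMatrix:inverse()
    --multiply the inverse matrix by (x, y, 0, 1) by hand
    local x = m[1]*touch.x + m[5]*touch.y + m[13]
    local y = m[2]*touch.x + m[6]*touch.y + m[14]
    return vec2(x, y)
end

function body:touched(touch)
    --the converted point can now be tested against the original vertices
    return pointInPoly(self:toLocal(touch), self.vertices)
end
~~~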

Thank you very much for the hints.

PS: Has anyone else accomplished what I'm trying to do? Maybe there is already a useful library like that?
I just don't like the idea of shaders and z-ordering. In my opinion, touches should be evaluated by drawing order.

You say:

touches should be evaluated by drawing order.

Well… since things drawn last sit on top, I would say inverted drawing order?
Then you can have a table where each object registers itself in the draw:
order[#order+1] = self.
Then you check your touches from #order down to 1? (A rough sketch follows below.)
Maybe the order table should only be updated when an object appears or leaves, because resetting and repopulating a table is expensive and shouldn't be done in every draw…
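
A rough sketch of that registry (the names order and register are invented for illustration; objects register once at setup when the scene is static, or re-register as they draw):

~~~
order = {}

function register(obj) --each object calls this when it is drawn
    order[#order + 1] = obj
end

function touched(touch)
    for i = #order, 1, -1 do --top-most (last drawn) object first
        if order[i]:touched(touch) then
            return --stop at the first object that claims the touch
        end
    end
end
~~~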

I've done this, fairly recently in fact. Each of my objects keeps its own transform info (position, angle, scale) and computes (and caches) a transform matrix in its draw function (but only when the transform info changes, because there's no need to recalculate the matrix if it doesn't change). I can then use this matrix in the next touch event to calculate touch positions relative to each object. My objects process their touches in the opposite order to which they are drawn, so objects drawn last get first crack at the touch events. Unfortunately, a lot of the code surrounding the transform handling is very prototypical at the moment, and the transform code itself is pretty specific to the needs of the project, so I can't share it just yet. However, if you take a look at the CCNode class in CocosCodea (https://github.com/apendley/CocosCodea), a similar method is used there, and you might be able to figure out what you need from that.
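
The core of that caching idea, as a hedged sketch rather than the actual project code (field names invented; the matrix is rebuilt only when position, angle, or scale changes):

~~~
Node = class()

function Node:init(x, y)
    self.pos, self.angle, self.size = vec2(x, y), 0, 1
    self.dirty = true --cached matrix is stale until first computed
end

function Node:setAngle(a)
    self.angle, self.dirty = a, true --invalidate the cache on change
end

function Node:transform()
    if self.dirty then --recompute only when the transform info changed
        self.matrix = matrix():translate(self.pos.x, self.pos.y, 0)
            :rotate(self.angle, 0, 0, 1):scale(self.size, self.size, 1)
        self.dirty = false
    end
    return self.matrix
end
~~~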

The problem with doing things “the right way” is that the code becomes very complex. I used a relatively simple shader to locate the touch and while it is not an elegant solution, it gives the right answer every time, regardless of rotations or anything else.

For example, if one object can be touched through or behind another object, the other solutions above will tell you the front object was touched, whereas my solution will correctly give you the object behind. A classic example is a tree, which is dense at the top and very thin below, so things can be seen past the trunk.
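
For reference, color-ID picking of this sort goes roughly like so (a hedged sketch, not @Ignatz's actual code; drawFlat is a hypothetical method that draws an object with its usual transforms but in one flat color, and the image should be cached in practice):

~~~
function pickObject(objects, touch)
    local pick = image(WIDTH, HEIGHT)
    setContext(pick) --render IDs offscreen instead of to the screen
    background(0, 0, 0, 255) --red channel 0 means 'nothing hit'
    for id, obj in ipairs(objects) do
        fill(id, 0, 0, 255) --encode the index in the red channel (max 255 objects)
        obj:drawFlat()
    end
    setContext()
    --image coordinates are 1-based integers, so clamp and round the touch
    local x = math.min(WIDTH, math.max(1, math.floor(touch.x)))
    local y = math.min(HEIGHT, math.max(1, math.floor(touch.y)))
    local r = pick:get(x, y) --read the pixel under the touch
    return objects[r] --nil when the background was touched
end
~~~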

I have the touch functionality already in place, as you may have seen in my first post. There is no problem with that. The only thing to manage now is the pixel-perfect touch detection on the object itself - especially in rotated and scaled states. And I'm trying to apply your advice, @Jmv38. But how should I multiply the inverse matrix with touch.x and touch.y?

Could I somehow get the current matrix of the touch, and then just multiply matrix:inverse() by the touch matrix? Or should I use the individual entries of the 4x4 matrix to multiply the touch through it?

Sorry, but I've never had to deal with this complicated stuff until now…

@Ignatz
Yes, I saw your code and it's great! But I don't like the idea of passing IDs through shaders and color information. It seems like a hack to me. And I like clean things))))))
Also, there is a lot of setup going on. In my solution I only have to call two functions to get things to work: addTouchListener() in setup(), and updateTouchListeners(touch) in touched()…

@se24vad there were lots of discussions about that in 3D some months ago. Andrew Stacey showed some tutorials and formulas. 2D is simpler, so it could be: (not sure, this is off the top of my head; I can't find the code among my 400-projects-in-the-same-folder :frowning: )

local m11, m21, m12, m22, t1, t2 = m[1], m[2], m[5], m[6], m[13], m[14]
local newX = m11*x + m12*y + t1
local newY = m21*x + m22*y + t2

The matrix conventions in OpenGL are not exactly the same as the usual matrix computations we learn in math class, so I always have to do a little trial and error to find it again.
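
One quick way to do that trial and error (a throwaway round-trip check under the column-major assumption; run it inside draw() after some transforms):

~~~
local m = modelMatrix()
local x, y = 10, 20
--forward: object coordinates to screen coordinates
local wx = m[1]*x + m[5]*y + m[13]
local wy = m[2]*x + m[6]*y + m[14]
--back again through the inverse; should print 10  20 if the indices are right
local inv = m:inverse()
print(inv[1]*wx + inv[5]*wy + inv[13], inv[2]*wx + inv[6]*wy + inv[14])
~~~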

But I think the ‘hack’ of @Ignatz is probably the best general solution: once implemented, it will work in any case (3D too) with fixed cost and no headache! And you can implement it in the ‘clean’ way you want, too.

@Jmv38 + @Ignatz : Thank you so much guys. All that info was so valuable!

I will look at both solutions - explore mine and look into @Ignatz 's. Thanks again!

One more question, @Ignatz: do I understand correctly that the object count is limited to 255?

If your objects know their own transformations, then it's probably easiest for each to also know its own inverse transformation. By that I mean that whenever you add a transformation to an object (such as a translation, a rotation or a scaling), you also add the inverse transformation to an inverse matrix. This will produce a more reliable inverse matrix than computing the inverse when you need it.
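
In code, that incremental bookkeeping could look like this (a sketch with invented method names; each forward operation is appended while its inverse is prepended, so inv * fwd stays the identity):

~~~
Obj = class()

function Obj:init()
    self.fwd, self.inv = matrix(), matrix() --both start as the identity
end

function Obj:translateBy(x, y)
    self.fwd = self.fwd:translate(x, y, 0)
    self.inv = matrix():translate(-x, -y, 0) * self.inv
end

function Obj:rotateBy(a)
    self.fwd = self.fwd:rotate(a, 0, 0, 1)
    self.inv = matrix():rotate(-a, 0, 0, 1) * self.inv
end
~~~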

Much as I’m a fan of Ignatz in general, I really don’t like his method for finding out which object was touched. I don’t think that “pixel perfect” is actually a desirable target for touch objects. I certainly can’t touch the screen “pixel perfectly” so any touch mechanism that works out the exact point where I touched is still not going to be absolutely certain that that is where I meant to touch. Rather I would allow each object to specify a touch region and query each in turn. I also don’t like the idea that the order in which things are drawn on the screen is the order in which they should be touched. This can make it very hard to select an object that is mostly obscured. There should be a way to override the “obvious” choice of object.

Andrew, I don’t pretend my solution is elegant, but it lets me select objects through tree branches or through windows, or past very irregular objects, or irrespective of any kind of rotation or any other transformation. It just works… That is the real value of being pixel perfect, and there is no other way to do that.

STOP! Don't start it again, you two! The kids are watching! :wink:

sorry, I can’t help it. I’m an actuary, and for us, near enough is good enough. We cut corners on theory whenever we can. :wink:

Ok, thanks everyone, in this discussion, for help.

With all your help, I came up with another pixel-perfect touch detection method for meshes, without any shaders or the like. It works with rotated and scaled shapes as well. It works on polygon boundaries, which are created automatically for any type of object - be it a rect or an image. You don't even have to check touch boundaries to see whether the touch was inside the shape's polygon. Everything is taken care of for you!

I also ported the rect(), ellipse() and sprite() functions to act as meshes, and used @Ignatz's shared code for physics on any image to get images working, too.

You can find the code on Pastie, right here.


**Here's how to use shapes and images with the re-implemented functions:** ([] = optional)

~~~
Rect(x, y, width, height)
Ellipse(x, y, size)
Ellipse(x, y, width [, height, ofAngle, toAngle, cutOut])
Image(x, y, img [, size])
Image(x, y, img [, width, height])
~~~

**All of those functions return a table with a mesh object in it, plus additional properties:** (There are some more props and you can add your own, but these are the most useful, imho. You can inspect the table for more detailed info, if you want...)

~~~
mesh.x
mesh.y
mesh.vertices (which are the actual boundaries of the shape)
~~~
**Example:**

~~~
function setup()
    body = Image(0, 0, "Platformer Art:Guy Jump")
end

function draw()
    background(140, 130, 120, 255)
    
    body:draw()
end

function touched(touch)
end
~~~


**Using the TouchController** is amazingly simple - just add two lines to set it up...

~~~
function setup()
    body = Image(0, 0, "Platformer Art:Guy Jump")
    addListener(body)
end

function draw()
    background(140, 130, 120, 255)
    
    body:draw()
end

function touched(touch)
    updateListeners(touch)
end
~~~


Now it's already listening and reporting the touch activity to the object. Add some logic...

~~~
function setup()
    body = Image(0, 0, "Platformer Art:Guy Jump")
    
    local function moveBody(touch)
        if touch.state == MOVING then
            body.x, body.y = body.x + touch.deltaX, body.y + touch.deltaY
        end
    end
    
    addListener(body, moveBody)
end

function draw()
    background(140, 130, 120, 255)
    
    body:draw()
end

function touched(touch)
    updateListeners(touch)
end
~~~


You can even use Codea's built-in styling and transformation modifiers:

~~~
function setup()
    body = Image(0, 0, "Platformer Art:Guy Jump")
    
    local function moveBody(touch)
        if touch.state == MOVING then
            body.x, body.y = body.x + touch.deltaX, body.y + touch.deltaY
        end
    end
    
    addListener(body, moveBody)
end

function draw()
    background(140, 130, 120, 255)
    
    translate(WIDTH/2, HEIGHT/2)
    scale(2)
    rotate(25)
    
    fill(113, 255, 0, 255)
    
    body:draw()
end

function touched(touch)
    updateListeners(touch)
end
~~~


http://youtu.be/n8dZX18fsZA

That is very good!
I was surprised by the statement about scale and translate.
So I checked with 2 bodies: it actually works perfectly!
Thanks for sharing.

Very interesting, nice work

thank you) very glad you like it)