I’ve been grappling with this for days. The coordinate system for touch is so different from that of 3D space, but it’s hard to put my finger on exactly why. Is there no way to convert your touch coordinates to 3D space coordinates? I know z would require something special, but how do you justify touch.x not lining up with the 3D x placement?

No. The touchscreen is a 2D plane.

check here:

http://twolivesleft.com/Codea/Talk/discussion/comment/7059#Comment_7059

gluUnproject() was supposed to be available from Codea as unproject(), but I cannot find it in the docs… basically you can roll your own; here is what to do:

http://www.opengl.org/sdk/docs/man/xhtml/gluUnProject.xml

Check the Codea docs for how to get the projection and modelview matrices, how to multiply them, and how to invert the result; it’s all there.

@IJazmith I think what you are after is “selection” or “picking” in 3D. That is, the problem of translating a 2D touch (x,y) into a 3D point (x,y,z) so that you can tell what is under the user’s finger.

There are a lot of different ways to solve this. I’ve outlined some very high level algorithms below.

*Raycasting* - cast a ray from the touched point into the scene. The first object that the ray intersects is the one the user touched.

*Selection buffer* - render the entire scene into a texture, using different colours for each object. Look up the touch position in the texture and see what colour was under the user’s finger. Relate that colour back to the object.
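To make the first of these concrete, here is a hedged, self-contained raycasting sketch in plain Lua. The vectors are hand-rolled tables and `raySphere`/`pick` are made-up names; in Codea you would use `vec3`, and you would obtain the ray itself by unprojecting the touch (which is what the rest of this thread is about).

```lua
-- Raycasting sketch: find the first sphere hit by a ray.
-- Ray: origin o + t*d (d normalised); spheres: {c = centre, r = radius}.
local function dot(a, b) return a.x*b.x + a.y*b.y + a.z*b.z end

local function raySphere(o, d, c, r)
    -- Solve |o + t*d - c|^2 = r^2 for the nearest t >= 0.
    local oc = {x = o.x - c.x, y = o.y - c.y, z = o.z - c.z}
    local b = dot(oc, d)
    local disc = b*b - (dot(oc, oc) - r*r)
    if disc < 0 then return nil end      -- ray misses the sphere
    local t = -b - math.sqrt(disc)
    if t < 0 then return nil end         -- sphere is behind the ray origin
    return t
end

local function pick(o, d, spheres)
    -- Return the index of the closest intersected sphere, or nil.
    local best, bestT = nil, math.huge
    for i, s in ipairs(spheres) do
        local t = raySphere(o, d, s.c, s.r)
        if t and t < bestT then best, bestT = i, t end
    end
    return best
end

-- Two spheres straight ahead of the origin; the nearer one is picked.
local origin = {x = 0, y = 0, z = 0}
local dir    = {x = 0, y = 0, z = -1}
local scene  = {
    {c = {x = 0, y = 0, z = -10}, r = 1},
    {c = {x = 0, y = 0, z = -5},  r = 1},
}
print(pick(origin, dir, scene))  -- 2 (the closer sphere)
```

The same `pick` loop works with any other intersection test (triangles, bounding boxes) in place of the sphere test.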

Sorry @gunnar_z, I never got around to adding that function. I’ll put it on the issue tracker to remind myself.

No problem; as I wrote, you can make your own version. I just think the built-in one would be faster. @IJazmith, what I described would be the raycasting method.

From what I’ve read, raycasting requires two passes and that one point be lower than the other. So if you wanted to touch a vertex that’s in front of another, would you pass through it and touch the point behind it?

Is there any way to set a 3D vertex inside a texture, and can we maybe have a tutorial for it?

From what I’ve seen, all the available tutorials deal with 2D points in space and 2D vertices, not 3D points and 3D vertices, and certainly not selection buffering.

I’m not quite able to seal the deal with this one: getting the screen to return useful coords. I think I have something wrong with the viewport, but overall it’s just not returning values that make sense.

This is the touched() routine. All I am doing is rendering a 3d scene, and when I touch the screen, I want to place a mesh ANYWHERE near the region I touched. Currently, it seems to only work in a very small area, even smaller than the NDC.

I wrote this code as a port from the default C# glUnproject() code that I found online on the OpenGL page.

Any help appreciated!

```
if touch.state == BEGAN then
    A = projectionMatrix() * modelMatrix()
    --print(A)
    m = A:inverse() --inverse of model * projection matrices
    --print(m)
    ndc = vec4(0, 0, 0, 0)
    out = vec4(0, 0, 0, 0)
    coord = vec4(0, 0, 0, 0)
    v = vec4(0, 0, WIDTH, HEIGHT) --window...ignoring camera pos? Must be wrong...
    --NDC here
    ndc[1] = (touch.x - v[1]) / (v[3]*2.0 - 1.0)
    ndc[2] = (touch.y - v[2]) / (v[4]*2.0 - 1.0)
    ndc[3] = 2.0 * 5 - 1.0 --touch.z, faked with lookAt.z
    ndc[4] = 1
    print(ndc)
    --generate obj coords
    out[1] = m[1]*ndc[1] + m[5]*ndc[2] + m[9]*ndc[3]  + m[13]*ndc[4]
    out[2] = m[2]*ndc[1] + m[6]*ndc[2] + m[10]*ndc[3] + m[14]*ndc[4]
    out[3] = m[3]*ndc[1] + m[7]*ndc[2] + m[11]*ndc[3] + m[15]*ndc[4]
    out[4] = m[4]*ndc[1] + m[8]*ndc[2] + m[12]*ndc[3] + m[16]*ndc[4]
    if out[4] == 0 then return end
    out[4] = 1.0/out[4] ---why?
    coord[1] = out[1]*out[4]
    coord[2] = out[2]*out[4]
    coord[3] = out[3]*out[4]
    print(coord) ---very small!
    touchlocation = vec3(coord[1], coord[2], coord[3])
    print("touched")
end
```

No takers? I thought for sure Andrew would have!

Sorry! Hadn’t looked. Will do so later.

I was going to hoard this to myself, but here you go:

```
--touch 3d
--nil 3D setup
function draw()
    -- Use this nil condition to perform your initial setup
    if Angle == nil then
        clearOutput()
        displayMode(STANDARD)
        display = "STANDARD"
        --displayMode(FULLSCREEN)
        aspect = 0.482422
        iparameter("FieldOfView", 1, 140, 120)
        iparameter("CamHeight", -1000, 1000, 0)
        iparameter("Angle", -180, 180, 0)
        iparameter("bgr", 0, 255, 255)
        iparameter("bgg", 0, 255, 255)
        iparameter("bgb", 0, 255, 255)
        iparameter("bga", 0, 255, 255)
        --iparameter("hndx", -500, 500, -471)
        --iparameter("hndy", -720, 1030, -720)
        the3DViewMatrix = viewMatrix()
    end
    if hndy ~= nil then
        coordx = 200 + hndx + CurrentTouch.x
        coordy = 200 + hndy + CurrentTouch.y
        coordsforthemesh = {
            vec3(coordx, coordy, 0),
            vec3(coordx + 5, coordy, 0),
            vec3(coordx + 5, coordy + 5, 0)
        }
        whereyoutouched = mesh()
        whereyoutouched.vertices = coordsforthemesh
        whereyoutouched:setColors(255, 0, 0, 255)
    end
    -- This sets a background color
    background(bgr, bgg, bgb, bga)
    perspective(FieldOfView, aspect)
    camera(0, CamHeight, 300, 0, 0, 0, 0, 1, 0)
    the3DViewMatrix = viewMatrix()
    pushMatrix()
    pushStyle()
    translate(0, 0, 0)
    rotate(Angle, 0, 1, 0)
    -- Do your drawing here
    if whereyoutouched ~= nil then whereyoutouched:draw() end
    popStyle()
    popMatrix()
    -- Restore orthographic projection
    ortho()
    -- Restore the view matrix to the identity
    viewMatrix(matrix())
    stroke(0, 0, 0, 255)
    font("AmericanTypewriter")
    fontSize(13)
    text("your touch: "..tostring(CurrentTouch.x).." "..tostring(CurrentTouch.y), WIDTH/2, HEIGHT/2)
    if coordx ~= nil then
        stroke(0, 0, 0, 255)
        font("AmericanTypewriter")
        fontSize(13)
        text("its location: "..tostring(coordx).." "..tostring(coordy), WIDTH/2, HEIGHT/2 - 12)
    end
end

function touched(touch)
    hndx = -471
    hndy = -720
end
```

With this code, wherever you touch on screen, the red mesh cursor will find (and, if you’re too fast, follow) your touch. It works fine in STANDARD, but FULLSCREEN needs work. Any takers?

Side note: at an Angle of 0 degrees it works fine.

I’m not exactly sure how that code tries to represent depth, except for the HEIGHT/2-12 code, which seems wonky.

gluUnProject in OpenGL needs three coordinates: x and y as the window position, and z as the value in the z-buffer at that location. With the inverted matrices it then calculates back to the scene coordinates. In Codea you have all the matrices for the way back, but you are missing the z-buffer value, which is necessary. So the only way to find the 3D location is to shoot a ray into your scene and see where it hits something.
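That ray can be built directly from the matrices Codea does give you: unproject the touch twice, once on the near plane (NDC z = -1) and once on the far plane (NDC z = 1), and the two resulting world points define the ray. This is an untested sketch against Codea’s API; `touchRay` is a made-up name, and it assumes the row-vector matrix order and the same m[1..16] indexing used elsewhere in this thread:

```lua
-- Sketch: build a picking ray from a touch by unprojecting at z = -1 and z = 1.
function touchRay(touch)
    local A = modelMatrix() * viewMatrix() * projectionMatrix()
    local m = A:inverse()
    local nx = 2*touch.x/WIDTH - 1
    local ny = 2*touch.y/HEIGHT - 1
    local function unproject(nz)
        -- row vector (nx, ny, nz, 1) times m, then perspective divide
        local x = nx*m[1] + ny*m[5] + nz*m[9]  + m[13]
        local y = nx*m[2] + ny*m[6] + nz*m[10] + m[14]
        local z = nx*m[3] + ny*m[7] + nz*m[11] + m[15]
        local w = nx*m[4] + ny*m[8] + nz*m[12] + m[16]
        if w == 0 then return nil end
        return vec3(x/w, y/w, z/w)
    end
    local near, far = unproject(-1), unproject(1)
    return near, (far - near):normalize()   -- ray origin and direction
end
```

Feed the returned origin and direction into whatever intersection test you use for picking.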

I’ve found this:

http://stackoverflow.com/questions/1834648/world-coordinate-issues-with-gluunproject

@aciolino what example were you referring to, link please?

Also what is the viewport?

@iJazmith @aciolino Speaking of viewports, it should be noted that in STANDARD you only have 494 pixels that you can see, whereas in fullscreen you have your full 768. Oh, and aciolino, the HEIGHT/2-12 was just to display the text for its coordinates.

I’ve updated my code since then, but this is not an end-all solution to our problem; more like a work-in-progress workaround.

```
--touch 3d
--nil 3D setup
function draw()
    -- Use this nil condition to perform your initial setup
    if Angle == nil then
        clearOutput()
        displayMode(STANDARD)
        display = "STANDARD"
        --displayMode(FULLSCREEN)
        aspect = 0.482422
        iparameter("FieldOfView", 1, 140, 120)
        iparameter("CamHeight", -1000, 1000, 0)
        iparameter("Angle", -180, 180, 90)
        iparameter("bgr", 0, 255, 255)
        iparameter("bgg", 0, 255, 255)
        iparameter("bgb", 0, 255, 255)
        iparameter("bga", 0, 255, 255)
        --iparameter("hndy", -1000, 500, -720)
        --iparameter("hndz", -720, 1030, -720)
        --touched(touch)
        the3DViewMatrix = viewMatrix()
    end
    if hndy ~= nil then
        coordx = 200 + hndx
        coordy = 200 + hndy + CurrentTouch.y
        coordz = hndz
        coordsforthemesh = {
            vec3(coordx, coordy, coordz),
            vec3(coordx + 5, coordy, coordz),
            vec3(coordx + 5, coordy + 5, coordz),
            vec3(coordx, coordy, coordz),
            vec3(coordx, coordy, 5 + coordz),
            vec3(coordx + 5, coordy + 5, coordz),
        }
        whereyoutouched = mesh()
        whereyoutouched.vertices = coordsforthemesh
        whereyoutouched:setColors(255, 0, 0, 255)
    end
    -- This sets a background color
    background(bgr, bgg, bgb, bga)
    perspective(FieldOfView, aspect)
    camera(0, CamHeight, 300, 0, 0, 0, 0, 1, 0)
    the3DViewMatrix = viewMatrix()
    pushMatrix()
    pushStyle()
    translate(0, 0, 0)
    rotate(Angle, 0, 1, 0)
    -- Do your drawing here
    if whereyoutouched ~= nil then whereyoutouched:draw() end
    popStyle()
    popMatrix()
    -- Restore orthographic projection
    ortho()
    -- Restore the view matrix to the identity
    viewMatrix(matrix())
    stroke(0, 0, 0, 255)
    font("AmericanTypewriter")
    fontSize(13)
    text("your touch: "..tostring(CurrentTouch.x).." "..tostring(CurrentTouch.y), WIDTH/2, HEIGHT/2)
    if coordx ~= nil then
        stroke(0, 0, 0, 255)
        font("AmericanTypewriter")
        fontSize(13)
        text("its location: "..tostring(coordx).." "..tostring(coordy).." "..tostring(coordz), WIDTH/2, HEIGHT/2 - 12)
    end
end

function touched(touch)
    if Angle == 0 then
        hndx = -471 + CurrentTouch.x
        hndy = -720
        hndz = 5
    elseif Angle == 90 then
        hndx = 5
        hndy = -821
        hndz = -253 + CurrentTouch.x
    end
end
```

@Kiliammalik, I solved the z-buffer issue by selecting the z coordinate of the lookAt - where I am looking is good enough for a z coord. So that’s a non-issue.

@ijazmith I was using the glUnProject code located on the OpenGL site.

http://www.opengl.org/wiki/GluProject_and_gluUnProject_code, look at GlhUnProjectf().

What it comes down to is that there’s something with the order of my math that ain’t right since my code is a direct port of that link.

There are a few errors in your code. I’ll go through it.

```
if (touch.state ==BEGAN) then
A = projectionMatrix() * modelMatrix()
```

You should include `viewMatrix()` here. Also, the order is wrong since vectors in Codea are row vectors. So it should be:

```
A = modelMatrix() * viewMatrix() * projectionMatrix()
```

We continue …

```
--print(A)
m = A:inverse() --inverse of model * projection matrices
```

Very Bad Idea. **Never** invert a matrix.

```
--print (m)
ndc = vec4(0,0,0,0)
out = vec4(0,0,0,0)
coord = vec4(0,0,0,0)
```

No real need to initialise these variables, plus you should make them local to avoid leakage.

```
v = vec4(0,0,WIDTH,HEIGHT) --window...ignoring camera pos? Must be wrong...
```

This doesn’t really need to be a `vec4`. You could just use the explicit values in the following code. The camera position has been taken into account in the `viewMatrix()` so isn’t involved at this stage.

```
--NDC here
ndc[1] = (touch.x - v[1]) / (v[3]*2.0-1.0)
ndc[2] = (touch.y - v[2]) / (v[4]*2.0-1.0)
ndc[3] = 2.0 * 5 - 1.0 --touch.z , fake with lookAt.z
```

These formulae are wrong. If we use explicit values (since Codea’s viewport is fixed), they should be:

```
ndc[1] = 2*touch.x/WIDTH - 1
ndc[2] = 2*touch.y/HEIGHT - 1
```

Then `ndc[3]` should be some number between `-1` and `1`. Exactly what to choose here is complicated.
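As a quick sanity check on the x and y formulae, a standalone snippet (the WIDTH and HEIGHT values are hard-coded stand-ins for Codea’s globals; 1024 x 768 is an assumed landscape size):

```lua
-- Map a window position in pixels to normalised device coordinates in [-1, 1].
local WIDTH, HEIGHT = 1024, 768   -- stand-ins for Codea's globals

local function toNDC(x, y)
    return 2*x/WIDTH - 1, 2*y/HEIGHT - 1
end

print(toNDC(WIDTH/2, HEIGHT/2))  -- centre of the screen maps to 0, 0
print(toNDC(0, 0))               -- bottom-left corner maps to -1, -1
print(toNDC(WIDTH, HEIGHT))      -- top-right corner maps to 1, 1
```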

```
ndc[4] = 1
--
print(ndc)
--generate obj coords
out[1] = m[1]*ndc[1] + m[5]*ndc[2] + m[09]*ndc[3] + m[13]*ndc[4]
out[2] = m[2]*ndc[1] + m[6]*ndc[2] + m[10]*ndc[3] + m[14]*ndc[4]
out[3] = m[3]*ndc[1] + m[7]*ndc[2] + m[11]*ndc[3] + m[15]*ndc[4]
out[4] = m[4]*ndc[1] + m[8]*ndc[2] + m[12]*ndc[3] + m[16]*ndc[4]
if (out[4] == 0) then return end
out[4] = 1.0/out[4] ---why?
```

Code optimisation, that’s why. It’s quicker to do one reciprocal and three multiplications than to do three divisions.
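To see the trick in isolation, here is the perspective divide written both ways; the results are identical, only the operation count differs:

```lua
-- Perspective divide of a homogeneous point (x, y, z, w).
local out = {8, 4, 2, 4}

-- three divisions
local a = {out[1]/out[4], out[2]/out[4], out[3]/out[4]}

-- one reciprocal plus three multiplications (the style in the ported code)
local invW = 1.0/out[4]
local b = {out[1]*invW, out[2]*invW, out[3]*invW}

print(a[1], a[2], a[3])  -- 2, 1, 0.5
print(b[1], b[2], b[3])  -- 2, 1, 0.5 (same result, cheaper)
```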

```
coord[1] = out[1]*out[4]
coord[2] = out[2]*out[4]
coord[3] = out[3]*out[4]
print(coord) ---very small!
touchlocation = vec3(coord[1], coord[2], coord[3])
print("touched")
end
```

With the above modifications, and ignoring my own advice to never invert a matrix (I’m a mathematician - I’m allowed) then I get reasonable answers out again.

However, if I try to do things in a more sophisticated way then I run into severe problems. I think these are due to the fact that the matrix `A` can be very nearly singular, and this produces accuracy issues when inverting it. Even if you solve the linear system rather than inverting `A`, these will still be present.

The first problem is that when going from the model to the screen we go from 3 to 2 dimensions so we lose information. This means that inverting the touch position is always going to be somewhat complicated. But it gets worse since on our journey from 3 to 2 we go via 4 dimensions so we add redundant information and then lose it again. This means that it is not necessary for the 4 dimensional transformation (the matrix) to be non-singular. It just has to ensure that it doesn’t lose *important* information - it can happily lose the redundant information.

As an example, the projection matrix that I get for `perspective(40,WIDTH/HEIGHT)` (the example call) is:

```
5.7 0 0 0
0 2.75 0 0
0 0 -1 -1
0 0 -.2 0
```

So those last two columns are almost the same: the matrix is almost singular.
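For what it’s worth, those entries can be reproduced from the standard perspective formulae. The parameter values below are my guesses that happen to reproduce the printed matrix (a fovy of 40 degrees, the 494/1024 STANDARD-viewport aspect, near 0.1, and a large far plane), so treat them as illustrative only:

```lua
-- Reconstruct the printed perspective matrix entries from assumed parameters.
local fovy, aspect, near, far = 40, 494/1024, 0.1, 1000

local f = 1/math.tan(math.rad(fovy)/2)
print(f/aspect)                   -- ~5.7 : the top-left entry
print(f)                          -- ~2.75: the second diagonal entry
print((far + near)/(near - far))  -- ~-1  : tends to -1 as far grows
print(2*far*near/(near - far))    -- ~-0.2: tends to -2*near as far grows
```

The last two lines show where the near-singularity comes from: as `far` grows, the last two columns approach each other, differing only in the small -2*near entry.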

Some of the things you mention I figured out, such as the locals and the bad math trying to make the NDC coords. I fixed that code (differently), as I didn’t think Codea’s viewport was fixed… I thought that as the camera “moved”, so would the viewport.

So… as you are allowed to invert a matrix, and I am not, but the code from glUnProject DOES invert a matrix (they were mathematicians, too, right?)… I believe that I still need to do that step, right? I didn’t see you remove it…

I understood why 1/x was faster. The “why” was “why do I need to do that?”, because the values I had been seeing were useless, and I wasn’t really sure how the fourth element was “fixing” things. I still don’t think I get that (yet).

I’ll make some changes and see if I can get reasonable results. I don’t need perfection, just something that generally picks a spot and uses the current location as another spot to create a ray. Bounding boxes will get involved later.

Thanks @andrew_stacey!

> So… as you are allowed to invert a matrix, and I am not, but the code from glUnProject DOES invert a matrix (they were mathematicians, too, right?)… I believe that I still need to do that step, right? I didn’t see you remove it…

Inverting matrices is Bad because it is unnecessary work and inherently unstable. But without a proper linear system solver then, yes, you can invert the matrix for the time being. I doubt very much that the folks who wrote glUnProject were mathematicians! (For one, they got their vectors the wrong way around.)
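For anyone wanting to follow the “solve rather than invert” advice, here is a minimal Gaussian-elimination solver sketch in plain Lua (`solve` is a made-up name; a production version would add scaling and a singularity check). It solves A*x = b for a single right-hand side, which is all that unprojecting one touch needs:

```lua
-- Solve A*x = b by Gaussian elimination with partial pivoting.
-- A is a table of n rows, each a table of n numbers; b is a table of n numbers.
local function solve(A, b)
    local n = #b
    local M, x = {}, {}
    -- work on an augmented copy so the caller's tables are untouched
    for i = 1, n do
        M[i] = {}
        for j = 1, n do M[i][j] = A[i][j] end
        M[i][n+1] = b[i]
    end
    for col = 1, n do
        -- partial pivot: swap in the row with the largest entry in this column
        local p = col
        for r = col+1, n do
            if math.abs(M[r][col]) > math.abs(M[p][col]) then p = r end
        end
        M[col], M[p] = M[p], M[col]
        -- eliminate below the pivot
        for r = col+1, n do
            local f = M[r][col] / M[col][col]
            for j = col, n+1 do M[r][j] = M[r][j] - f*M[col][j] end
        end
    end
    -- back substitution
    for i = n, 1, -1 do
        local s = M[i][n+1]
        for j = i+1, n do s = s - M[i][j]*x[j] end
        x[i] = s / M[i][i]
    end
    return x
end

-- 2x + y = 5, x + 3y = 10  =>  x = 1, y = 3
local x = solve({{2, 1}, {1, 3}}, {5, 10})
print(x[1], x[2])  -- 1  3
```

For unprojection you would solve the 4x4 system with the touch’s clip-space vector as the right-hand side instead of forming `A:inverse()`.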

I can get sort of reasonable results in that I can put a mark on the screen in the model coordinates (by which I mean that I specify it relative to the model coordinates) and it appears where I touched the screen. But if I try to do more sophisticated things then the errors in the inversion kick in and things get wildly inaccurate.

`view` and `viewport` are two different things. The camera affects the `viewMatrix()`, but this is part of the transformation matrix that you are inverting. The `viewport` refers to the screen rectangle, which is (more or less) fixed.

gotcha.

Yes, I can get it to render where I touch too, though it is, as you say, in a weird place.

The code that shows glUnProject uses a long mathematical formula to do the matrix inversion, so it’s possible that the concerns you are expressing are actually resolved in OpenGL differently; I saw matrix:inverse as a shortcut to making sure I could even get anything on the screen. Even with a more precise inversion solver, the same net-net happens, I’m sure: the placement of the object in space is sort of weird.

It probably could work for something like a fly flying away, anything non-static.

I noticed that the good values for z are between .99 and .999999, each decimal seeming to move the object further “away” by what appears to be a factor of 10 or so! It’s a pretty wild swing.
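That factor-of-10-per-decimal behaviour is exactly what the nonlinear perspective depth predicts. Assuming the projection printed earlier in the thread (near of roughly 0.1, large far), NDC z relates to camera distance d by roughly ndc_z = 1 - 0.2/d, which inverts to:

```lua
-- Approximate camera distance recovered from an NDC depth value, assuming
-- the near-0.1 projection above (so ndc_z = 1 - 0.2/d).
local function depthFromNDC(ndcZ)
    return 0.2 / (1 - ndcZ)
end

for _, z in ipairs({0.99, 0.999, 0.9999, 0.99999}) do
    print(z, depthFromNDC(z))  -- ~20, ~200, ~2000, ~20000: 10x per extra 9
end
```

Almost the entire depth range beyond a distance of 20 is squeezed into the last hundredth of NDC z, which is why tiny changes in that digit fling the object so far.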