@Ignatz: it’s mysterious to me, but I fixed it. I declared local variables touchX and touchY and then defined them as t.x and t.y. Then I pasted in the new variable names wherever t.x and t.y were. And now it works. Huh.
The working code:
function GetPlaneTouchPoint(t)
    -- if your rectangle never changes position, you could draw the shader image
    -- just once instead of here; if it moves around, you need to re-draw the
    -- shader image for each touch, which is what this function does
    local touchX, touchY = t.x, t.y
    shaderImg = image(WIDTH, HEIGHT)
    setContext(shaderImg)
    pushMatrix()
    SetupPerspective()
    touchX, touchY = math.floor(touchX + 0.5), math.floor(touchY + 0.5)
    clip(touchX - 1, touchY - 1, 3, 3)
    plane.mesh.shader = shader(PlaneShader.v, PlaneShader.f)
    plane.mesh:draw()
    setContext()
    plane.mesh.shader = nil
    popMatrix()
    -- decode the texture x,y that the shader packed into the pixel's colour
    local x2, y2, x1y1 = shaderImg:get(touchX, touchY)
    local y1 = math.fmod(x1y1, 16)
    local x1 = (x1y1 - y1) / 16
    local x, y = (x1 + x2*16)/4096 * plane.size.x, (y1 + y2*16)/4096 * plane.size.y
    return vec2(x, y)
end
@Yojimbo2000, the lines preceded by just “v” are the vertices, right?
If that is the case, there are 1,305. Some of that is the figure model underneath the clothes, which I’m not sure is getting rendered wherever it’s fully covered by the clothes.
Those are the unique points. The mesh will have quite a few more vertices than that. In the obj class, try printing the length of the vertices array after the mesh has been parsed. print("vertices:", #self.v) or whatever the array is called that holds the vertices. I was just curious about how detailed the models were from makehuman, it’s not something that you particularly need to check, unless you’re having performance issues.
@Yojimbo2000, I’m not sure if that would give you the data you want - MakeHuman exports squares, and I have to take the model into Blender and manually convert everything to triangles.
@Ignatz any chance you can tell me (or screen cap even) where that option is? The Blender UI is one of the most impenetrable barrages of menus and icons I’ve ever seen.
Yes, touch is working on the demo now. Did you see my fix? It seemed like I didn’t really do anything at all. Can you explain why that worked / was necessary?
Blender UI is #%^£€¥?!%€#%%###€%%##*# IMHO. I often lost menus and couldn’t get them back. Even simple things like zooming were hard to find.
As I recall, the triangulate option comes up in the form that appears once you have selected export and the obj format, but it is out of sight at the bottom and you have to scroll down. This is from memory and I will check when I am able to get out of bed (but you should understand that I am currently keeping a very grateful kitty warm, and do not have permission to get up yet).
I haven’t looked at that strange error yet, but will do so.
I think the problem with that code is another subtle change to Codea that I wasn’t aware of.
I am passing the touch object through to my function and reassigning the x and y values of that object. Previously that was allowed, but now it appears the touch is either read-only, or its type has changed in some way. Using temporary local variables instead fixes the problem.
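For what it’s worth, a minimal sketch of the difference (assuming the touch object is now effectively read-only, as described):

function GetPlaneTouchPoint(t)
    -- previously, reassigning the touch’s own fields worked:
    -- t.x, t.y = math.floor(t.x + 0.5), math.floor(t.y + 0.5)
    -- copying into plain local numbers avoids modifying the touch object at all:
    local touchX, touchY = t.x, t.y
    touchX, touchY = math.floor(touchX + 0.5), math.floor(touchY + 0.5)
    -- ...the rest of the function then uses touchX/touchY instead of t.x/t.y
end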
…everybody’s hating on reference values these days, amirite.
Ok so I’m trying to grok this.
1. I need to keep a screen-sized image in memory, usually totally blank
2. When a touch happens, define a small clipping area around the touch and render that small area into the image in memory with a shader
3. Somehow the shader knows which point on the texture I’m touching, and I can store that point’s x, y in two color values [the main code can’t grab those values directly somehow?]
4. Using that x and y I can modify the texture image on the actual screen, which somehow propagates instantly? I don’t have to do some kind of saving and reloading?
Yeah, a lot I’m confused about there. I think 1 and 2 I could manage with some work. But 3 and 4 have me flummoxed.
If you understand shaders, you will understand this process completely.
You don’t need to keep an image in memory - just create a temporary one when a touch happens.
The temporary image is full size, but we only draw the part we want, using clip.
The shader is given the texture image and, for each vertex, the x,y position on the texture that applies to it (this is true of any shader - and any mesh using an image texture).
For any point on the mesh, the shader then simply interpolates the texture x,y position from the given positions for each of the three vertices surrounding the point.
The main code can’t grab this point, because - well, I’ll let an earlier post explain.
Looking up the touched pixel position on the image in memory gives you the exact x,y position of the touch (in encoded form as a colour) on your texture (and nothing else).
What you do with it is up to you. You can poke a hole in your image and save it, or put a red dot on it, or whatever you like. Then redraw it.
So steps 1 to 3 simply give you an x,y position of the touch. That’s all - and it may not seem like much - but if you read my post above, you’ll see that this is extremely difficult!
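To make the encoding concrete, here is a sketch of the kind of shader involved (not necessarily the exact PlaneShader from the demo, but consistent with the decoding arithmetic in GetPlaneTouchPoint above): r and g carry the high 8 bits of a 12-bit texture x and y, and b packs the two low nibbles together.

PlaneShader = {
v = [[
uniform mat4 modelViewProjection;

attribute vec4 position;
attribute vec2 texCoord;

varying highp vec2 vTexCoord;

void main()
{
    // pass the texture coordinate through; the GPU interpolates it per pixel
    vTexCoord = texCoord;
    gl_Position = modelViewProjection * position;
}
]],
f = [[
precision highp float;

varying vec2 vTexCoord;

void main()
{
    // scale the 0..1 texture coordinates up to 12-bit integers (0..4095)
    float x = min(floor(vTexCoord.x * 4096.0), 4095.0);
    float y = min(floor(vTexCoord.y * 4096.0), 4095.0);
    // split each into a high byte (0..255) and a low nibble (0..15)
    float x2 = floor(x / 16.0);
    float y2 = floor(y / 16.0);
    float x1 = x - x2 * 16.0;
    float y1 = y - y2 * 16.0;
    // r = high bits of x, g = high bits of y, b = the two low nibbles packed
    gl_FragColor = vec4(x2 / 255.0, y2 / 255.0, (x1 * 16.0 + y1) / 255.0, 1.0);
}
]]
}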
@Ignatz - that eBook is very helpful. Zero to sixty in seconds flat.
So, ok, to summarize the theory (and you’ve proved it works):
shaders get passed every visible pixel of a 3D model and can change how they’re drawn on the screen
so shaders naturally contain the exact information we need, i.e. two different x, y sets:
the first set is the position on the model texture of every pixel of the model that is to be drawn to the screen
the second set is the position on screen to which each pixel is drawn
the only problem is that, as implemented, shaders cannot directly expose the pairing of these two x, y sets
so we’re being very sneaky. When the shader draws to the screen, i.e. draws using the second x, y set, it changes the r, g, b information of each pixel so that the color channels actually encode the x and y values of the first set
and instead of drawing those pixels to the visible screen, it draws them to a virtual screen (an image) in memory
so when a touch happens:
Codea sends us the pixel touched
We define a small square around that pixel
We use the special sneaky shader and render just that square, but we render it to the virtual image in memory, not the actual screen
We inspect the g and b values of the pixel at the coordinate of the touch on the virtual image
That gives us the x and y values we need to draw directly on the model texture at the exact spot that is ‘under’ the current touch
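Putting those steps together, a rough sketch (names are illustrative; planeTexture is assumed to be the image assigned to plane.mesh.texture):

function touched(t)
    if t.state == BEGAN or t.state == MOVING then
        local p = GetPlaneTouchPoint(t)     -- touch position in plane.size units
        -- convert to pixel coordinates on the texture image
        local tx = p.x / plane.size.x * planeTexture.width
        local ty = p.y / plane.size.y * planeTexture.height
        setContext(planeTexture)            -- draw directly onto the texture
        fill(255, 0, 0)
        noStroke()
        ellipse(tx, ty, 10)                 -- red dot 'under' the touch
        setContext()
    end
end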
Something I don’t understand here: drawing to the texture seems to only work if done before the first time draw() is called.
Here’s some code I’m using for tests:
function DrawToMeshTextureTests:randomDraw()
    local texture = self.plane.mesh.texture
    local randomX, randomY = math.random(texture.width), math.random(texture.height)
    local randomPosition = vec2(randomX, randomY)
    self.drawer:drawAt(randomPosition, self.drawColor, self.drawWidth)
end
Explanation: I’ve abstracted drawing on textures with a class called DrawOnMeshTexture, which is initialized with a mesh, and thereafter can be used to draw to its texture. self.drawer above is an instance of this class. self.drawColor and self.drawWidth are just convenience variables.
If I call randomDraw() (the method above) in the setup() function, it works. But if I call it from the touched(touch) function, no go. What’s up why huh whaaa?
We’d need to see more code - the draw loop, the drawAt method, etc. (you could also be a bit more specific about what’s going wrong!)
One thing it could be: you’re trying to draw to a 2 dimensional image, but you’re calling this function from within a 3D space.
Do you switch back to an orthogonal, 2D space at the end of the draw loop? This is the code you need:
ortho()
viewMatrix(matrix())
If you are already doing this, then calling drawAt from touched should be fine (as touched is called at the end of the draw loop), and your problem is something else.
If you do need to update some 2D textures from within a 3D draw loop, you can force the drawing to take place at the end of the draw loop by putting a tiny delay on it, tween.delay(0.001, do2Ddrawing).
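For example, something like this (update2DTexture is just a placeholder for whatever 2D drawing you need to do):

function touched(touch)
    if touch.state == ENDED then
        -- defer the 2D drawing by a fraction of a frame, so it runs after
        -- the 3D part of the current draw loop has finished
        tween.delay(0.001, function() update2DTexture(touch) end)
    end
end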
@yojimbo2000: sorry to be unclear: when I say it works when called from setup(), I mean that a random dot gets drawn on the texture. Conversely, when called from touched(touch), nothing happens at all.
This is my draw() method:
function draw()
    pushMatrix()
    drawToMeshTextureTests:setupPerspective()
    drawToMeshTextureTests:drawDrawables()
    popMatrix()
end
Right now I’m working off of Ignatz’s demo, so setupPerspective() is the same as his. And drawDrawables simply iterates through a table of objects with draw() methods and calls them.
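Per yojimbo2000’s suggestion, I guess the thing to try is switching back to 2D at the end of draw(), something like this (untested sketch):

function draw()
    pushMatrix()
    drawToMeshTextureTests:setupPerspective()
    drawToMeshTextureTests:drawDrawables()
    popMatrix()
    -- switch back to an orthographic 2D projection so that anything drawn
    -- after this point (e.g. from touched()) lands in 2D screen/image space
    ortho()
    viewMatrix(matrix())
end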