After looking at @dave1707 's little plane sign and dice and skybox and sphere … I can often make an image or image portion go where I want, kind of. What I’m not clear on, at least not well enough to explain to a child, is how the UV points relate to the 3D shape. I’d love to be pointed to a bit of a write-up that relates Codea indices, vertices, and UVs.
I’ll put here what I sort of think I know.
UV “coordinates” refer to sections of the image: each one is a pair of numbers from (0,0) to (1,1), giving a position as a fraction of the image’s width and height.
This code:
local uvs = {}
table.insert(uvs,vec2(0.5,1))
table.insert(uvs,vec2(1,1))
table.insert(uvs,vec2(1,0))
table.insert(uvs,vec2(0.5,0))
Refers to the right half of whatever the image is. In the case of a plane, it maps that half of the image onto the whole visible part of the plane.
Beyond that I’m flailing …
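Here is roughly the kind of setup I mean, so we’re talking about the same thing. This is my own reconstruction, not anyone’s actual code: a quad made of two triangles, with one UV per vertex, in the same order as the vertices. The texture name is just a placeholder.

local m = mesh()
-- two triangles forming a quad; one vec3 per vertex
m.vertices = {
    vec3(-1,  1, 0), vec3(1,  1, 0), vec3(1, -1, 0),  -- triangle 1
    vec3(-1,  1, 0), vec3(1, -1, 0), vec3(-1, -1, 0)  -- triangle 2
}
-- one vec2 per vertex, same order: the right half of the image,
-- u from 0.5 to 1, v from 0 to 1
m.texCoords = {
    vec2(0.5, 1), vec2(1, 1), vec2(1, 0),    -- triangle 1
    vec2(0.5, 1), vec2(1, 0), vec2(0.5, 0)   -- triangle 2
}
m.texture = "Cargo Bot:Codea Icon"  -- placeholder asset name

As I understand it, the whole trick is that vertex i gets texCoord i, and the GPU interpolates between them across each triangle. Is that the right mental model?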
When I do a simple cube with the text map image, it shows up on all the faces. Dave’s die display has a loop that selects 1/6 of the flat die picture and adds it to the UVs. So it seems there is an implicit unwrapping of the cube, apparently into rectangular faces, even though each face is really (at least) two triangles.
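If I’m reading the die loop right, it amounts to something like this (my reconstruction of the idea, not Dave’s actual code): the image is six faces side by side, so face f gets the horizontal slice of the image from (f-1)/6 to f/6.

local uvs = {}
for f = 1, 6 do
    local u0, u1 = (f - 1) / 6, f / 6  -- left and right edge of this face's slice
    -- two triangles per face, six UVs, all drawn from the same slice
    table.insert(uvs, vec2(u0, 0)); table.insert(uvs, vec2(u1, 0)); table.insert(uvs, vec2(u1, 1))
    table.insert(uvs, vec2(u0, 0)); table.insert(uvs, vec2(u1, 1)); table.insert(uvs, vec2(u0, 1))
end
-- uvs now has 36 entries, matching the cube's 36 vertices in face order

Is that the “implicit unwrapping” — that there is no unwrapping at all, just 36 UVs lined up one-to-one with the 36 vertices?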
I do mostly understand how to do UV mapping in Blender. FWIW.
I’m happy to read posts but so far haven’t found much explanation. I can see from this code (and the delightful sphere one) how to do mappings to the plane, cube, and sphere … but darned if I can get a sense of what makes those scripts the right answer.
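For the sphere, my best guess at the rule behind the script is longitude-to-u, latitude-to-v, assuming the texture is a standard equirectangular image. Something like (again, my sketch, not the actual code):

-- my guess at the sphere rule: p is a unit vec3 on the sphere
local function sphereUV(p)
    local u = 0.5 + math.atan(p.z, p.x) / (2 * math.pi)  -- longitude around the equator
    local v = 0.5 + math.asin(p.y) / math.pi             -- latitude from pole to pole
    return vec2(u, v)
end

If that’s the shape of the idea, then each of the three scripts is just a different answer to the same question: “for this vertex, which point of the image belongs here?”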
Advise me, please. Thanks!