Craft AR rotations & positions of models

I wanted to try to wrap my head around ARKit in Codea and have a question for you guys.

How can I place a craft.model.plane in front of my device and rotate it towards me?
(The idea is to take a photo, map it onto a 3D plane and place it in the air at the device's current position and rotation, so that I could take a few snapshots and walk around them.)

I think I figured out that I can get the device position in real world space by sending a ray into the world. That is where I would place the model:

local cam = scene.camera:get(craft.camera)
-- ray through the screen center: its origin is (roughly) the device position in world space
local position, direction = cam:screenToRay(vec2(WIDTH/2, HEIGHT/2))

Maybe someone could provide a little but complete example?

I seem to have solved it:

local cam = scene.camera:get(craft.camera)
-- ray through the screen center: its origin is the device position in world space
local cam_position, cam_direction = cam:screenToRay(vec2(WIDTH/2, HEIGHT/2))
-- rotation facing back towards the camera; both variants below end up with the same yaw
--local inverse_cam_rotation = quat.lookRotation(-cam_direction, vec3(0, 1, 0)):angles()
local inverse_cam_rotation = quat.lookRotation(cam_direction, vec3(0, 1, 0)):angles() + vec3(0, 180, 0)
-- keep only the yaw so the model stays upright
local model_direction = quat.eulerAngles(0, inverse_cam_rotation.y, 0)
local model_size = vec3(.1, .2, .025)

scene:entity():add(Cube, cam_position, model_size, model_direction)

Note: the above code has one commented-out line. That line works the same as the line directly beneath it…
The reason I add 180° instead of negating the quat is that I noticed some unexpected “drifts” of feature points (anchors), and when that happens the model rotation gets messed up. Mirroring the rotation by adding 180 degrees seems to work better.

PS: The “Cube” class is a copy of the one from the “AR” example project (without the rigid body components).
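
For completeness, here's roughly what that component looks like (a sketch from memory, not the exact “AR” example code; it assumes Codea craft's convention that entity:add(Class, ...) calls Class:init(entity, ...) with the entity as the first argument):

    -- rough sketch of the Cube component (not the exact "AR" example code)
    Cube = class()

    function Cube:init(entity, position, size, rotation)
        self.entity = entity
        entity.position = position            -- vec3, world-space position
        entity.rotation = rotation            -- quat, here just the camera yaw
        entity.model = craft.model.cube(size) -- vec3, dimensions of the box
        entity.material = craft.material("Materials:Standard")
    end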

@se24vad Just to take your quote, “Maybe someone could provide a little but complete example”, could you do that for others who might want to see a small working example of AR code, since you solved your problem?

@dave1707 sure,

First off, here are some of my observations:

  • You can’t “place” models into real world space until the device has collected at least a few feature points (anchors) to calculate its real position and rotation inside the real world (see the sketch after this list).
  • That’s also the reason for the “drifting” effect that can occur sometimes. The more you walk around and rotate your device, the better it can sense the environment and the quicker it will detect new feature points and planes.
  • As long as AR is not ready, the real world origin is mapped right onto your device’s camera position. Meaning, if you placed an object right in front of you, in the air, and tried to walk around it, that object would actually “stick” relative(!) to you and you could not get behind it.
  • As soon as AR is ready, the real world origin is dropped wherever your device currently is. (All objects that were “sticky” to your camera are dropped as well.) From then on, objects can be placed in the real world and remain where they got dropped. Now you can walk around them freely.
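
Here's a minimal sketch of how I'd gate placement on that. The scene.ar property names (run, points, planes) are how I remember them from the built-in AR example, so double-check them against your Codea version:

    -- minimal sketch; assumes scene.ar:run(), scene.ar.points and scene.ar.planes
    function setup()
        scene = craft.scene()
        scene.ar:run()                      -- start the ARKit session
        arReady = false
    end

    function update(dt)
        scene:update(dt)
        -- only treat AR as "ready" once some structure has been detected,
        -- otherwise placed objects will stick to the camera as described above
        arReady = #scene.ar.points > 20 or #scene.ar.planes > 0
    end

    function draw()
        update(DeltaTime)
        scene:draw()
    end

    function touched(touch)
        if touch.state == ENDED and arReady then
            -- safe to place a model into world space here
        end
    end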

I don’t know if I’m doing it wrong, but the craft.model (which is basically a mesh() on steroids) seems to be inverted somehow. No matter how I tried to set the normals, the front faces were inside-out and got culled by OpenGL. I had to change the model.indices to a clockwise winding order to have the front faces show up. But in that case the UV coordinates seem to be flipped on y as well (see the sketch below). I commented it in my code and would love to hear your feedback on this topic.
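
For illustration, here's a stripped-down sketch of the textured plane model (not the exact gist code), with the clockwise indices and the y-flipped UVs:

    -- stripped-down sketch of the textured plane (not the exact gist code)
    local function makePhotoPlane(width, height)
        local m = craft.model()
        local w, h = width / 2, height / 2
        m.positions = {
            vec3(-w, -h, 0), vec3(w, -h, 0),    -- bottom-left, bottom-right
            vec3( w,  h, 0), vec3(-w,  h, 0)    -- top-right, top-left
        }
        m.normals = { vec3(0,0,1), vec3(0,0,1), vec3(0,0,1), vec3(0,0,1) }
        -- UVs flipped on y (v), otherwise the texture showed up upside down for me
        m.uvs = { vec2(0,1), vec2(1,1), vec2(1,0), vec2(0,0) }
        m.colors = { color(255), color(255), color(255), color(255) }
        -- clockwise winding; with the usual counter-clockwise order the
        -- front faces were culled on my device
        m.indices = { 1,3,2, 1,4,3 }
        return m
    end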

Well, the code is a bit messy, but the important parts are commented.
https://gist.github.com/jack0088/86f379e6ffe8a2a99c3a0e74fb20c83c

@se24vad Thanks for posting the example code. Just a few notes: AR only works with A9 or greater CPUs, so my iPad Air can’t run it. Looking at the code, I see this: cam.mask = ~0. Is the ~0 for real or a typo? See your code below.

        parameter.boolean("DrawPlanes", true, function(flag)
            local cam = scene.camera:get(craft.camera)
            if flag then
                cam.mask = ~0
            else
                cam.mask = 1
            end
        end)

EDIT: I found it, the ~ is the bitwise operator for ones’ complement.
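
In Lua 5.3 (which Codea uses) that’s the bitwise NOT, so ~0 simply sets every bit and works as an “everything” mask:

    print(~0)                                     -- -1: every bit set, as a signed integer
    print(string.format("%x", ~0 & 0xFFFFFFFF))   -- ffffffff
    -- so cam.mask = ~0 turns every mask bit on (render everything),
    -- while cam.mask = 1 leaves only the first bit/layer on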

@se24vad When I run this on my iPad Pro, I get a solid plane and if I run the AR example, I get a grid. I guess that’s what you’re after.

@dave1707 that’s weird, because I also run it on an iPad Pro. The solid plane should have a texture (the CAMERA feed). You can see the code in the touched method.
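
For anyone who doesn’t want to dig through the gist, grabbing the camera feed into a texture looks roughly like this (a simplified sketch, not the exact gist code; planeEntity is just a placeholder name and the 512x512 size is arbitrary):

    -- sketch only: bake the current camera frame into an image and use it as the texture
    local function snapshotCamera()
        local img = image(512, 512)
        setContext(img)
        sprite(CAMERA, 256, 256, 512, 512)  -- draw the live camera feed into the image
        setContext()
        return img
    end

    -- e.g. inside touched(), after creating the plane entity:
    -- planeEntity.material.map = snapshotCamera()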

I see the camera feed on the plane, though in black and white.

@se24vad Once I tap on the screen, I get a CAMERA image for the plane.

@se24vad - I see the camera input with a coloured cube overlay in wire-frame. After a few touches I see a point cloud of blue “+” characters, and that was followed by a black and white camera image. What is project:GridWhite?

@Bri_G If you look at the example project, AR, line 45, you’ll see project:GridWhite. If you tap that, an asset popup will show. The first entry is AR. If you tap that you’ll see the asset GridWhite which is a white grid.

Guys, thanks for reporting!

Here’s what I’ve coded and what I actually see on my device:

  • While ARKit is sensing the environment, it displays a cloud of points (blue crosses).
  • When ARKit has gathered enough feature points to form planes (horizontal surfaces), it does so. In that case I see a blue grid. (The grid currently has no purpose, I just want to see what ARKit detects…)
  • Each time I touch the display, a flat plane is placed into the air, right in front of my iPad. That plane gets the CAMERA image as its texture. Meaning everything the device camera sees is displayed on each of the flat rectangles I placed in space.
    The image, however, should NOT be tinted, since all vertices are white (and textures are multiplied by vertex colors in the shader, afaik).

So, considering all your reports, my conclusion is that Codea craft doesn’t seem to work identically across different devices. I have to play with it a little more to make sense of it, though.

@se24vad From what you just posted, that’s about what I see. It’s just that when I first tried it, I didn’t know I was supposed to tap the screen to capture the CAMERA image. I’m on a 12.9” iPad Pro 1.

@dave1707 oh well, good then. I’m on the same iPad, so our devices work the same, but @Bri_G and @piinthesky share the same experience, which seems to be different from ours… which shouldn’t happen, of course.