On iPad 2, it could use the front-facing camera to detect head movement. On iPad 1, it could use the accelerometers to detect device movement. I’m not a game programmer, though I like visual programming, so I don’t really know where to start to make such a demo. Is anyone interested in making one?
The demo app is called i3D. It’s available in the App Store for free. Too bad I can’t test it, because it requires an iPad 2 (front-facing camera) and I only have an iPad 1.
We’d need camera access, which has been requested.
Having said that - I don’t know if you’ve seen this particular illusion in real life; I have, and while it’s eerily convincing in a video like the one above, in real life it’s, well, not convincing. The movement is cool, but with binocular vision you still read the screen as flat, so the depth cues don’t come across. Indeed, it’s more convincing in real life if you close one eye.
And accelerometers don’t cut it; they’re not accurate enough. It actually has to head-track you, which is fun and awesome in and of itself.
I’d much rather have camera access for other camera shenanigans - augmented reality, motion processing, and doing horrible things to pictures of my friends.
I downloaded this and took a look. It’s nice, but I wasn’t particularly impressed. Perhaps that’s partly because at this time of year there isn’t all that much light around this far north, so the head-tracking didn’t work too well. When the sun comes back in February I’ll take another look…
Seriously, I agree with Bortels. I find the illusion very difficult to “see” on the iPad because there are so many visual cues that tell me how the iPad itself is oriented. Indeed, I’ve been playing around with similar things and decided against this effect simply because it didn’t work visually.
You can have a go at this using Codea by taking a look at my latest project. If you download the Cube program from http://www.math.ntnu.no/~stacey/HowDidIDoThat/iPad/Codea.html then there’s a simple modification to the code which makes tilting the iPad work like this. Go to the Main.lua file and locate line 42 (naturally). It reads e = Vec3.eye -- :absCoords(). Delete the -- so that it reads e = Vec3.eye:absCoords(). What this does (though it is by no means perfect) is show a cube on the screen and, as you tilt the iPad, adjust the display so that the cube is always “upright”. The two different versions of line 42 correspond to two different assumptions about where your eye is. Without the modification, the assumption is that your eye is always above the midpoint of the iPad and looking orthogonally onto it. With the modification, the assumption is that you are looking horizontally (with respect to the ground) at the iPad, no matter how it is tilted.
It’s not perfect, but I found it very hard to even see the illusion when I knew what I was looking for, which is why I went for the “looking directly at the iPad” viewpoint.
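If you want to play with the tilt-compensated behaviour in isolation, here’s a rough 2D sketch of the same idea (nothing to do with the Cube code itself): it just uses Gravity to keep a square “upright” however the iPad is turned in its own plane.

    -- minimal Codea sketch: keep the drawing upright using Gravity
    function draw()
        background(40, 40, 50)
        translate(WIDTH/2, HEIGHT/2)
        -- Gravity is a vec3 in screen coordinates; its x and y components
        -- tell us how far the screen's y-axis is from the real-world vertical
        local angle = math.deg(math.atan2(Gravity.x, -Gravity.y))
        rotate(angle)
        -- everything drawn from here on stays level relative to the ground
        rectMode(CENTER)
        rect(0, 0, 200, 200)
    end

Hold the iPad upright and turn it like a steering wheel and the square stays level; it’s the simplest version of the absCoords idea.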
(Nonetheless, the head-tracking would be quite cool.)
I also agree that using the accelerometers wouldn’t suffice. My experiments, which weren’t extensive, suggested that you get just three pieces of information, and these appeared to be the linear acceleration in each of the principal directions. That’s not enough to detect rotation. So even by keeping very careful track of the readings, you couldn’t tell the difference between a rotation and a linear translation. That’s why I asked for access to the compass: with that you could track rotation.
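To make that concrete, here’s roughly all you can recover from the gravity reading on its own (just an illustration; g is any vector with x, y and z components, such as Codea’s Gravity):

    -- pitch and roll fall out of the gravity direction, but there is no third
    -- angle: rotating about the vertical leaves gravity unchanged
    function tiltFromGravity(g)
        local pitch = math.deg(math.atan2(-g.z, -g.y))  -- tilt towards/away from you
        local roll  = math.deg(math.atan2(g.x, math.sqrt(g.y^2 + g.z^2)))  -- tilt left/right
        return pitch, roll
    end

Spin the iPad flat on a table and neither number changes, and that missing rotation is exactly what the compass would give you.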
Thanks Andrew. I’ll take a look at the code later. There might be some useful things to learn from it.
Well, I don’t know whether such a demo has any real advantage in real life, but it looks cool to me. I thought it could be a pretty good demo program for Codea as well, at least to show off Codea’s 3D ability.
Bee, I’m interested in the image stuff too, so once we’ve got camera access, if you want to pair up on some virtual reality stuff, give me a shout. I’d guess that on the iPad you’ll have to optimize the hell out of it.
I’ve tried writing head tracking before on the laptop and, well, it’s kind of hard. What worked best for me was to find the big blob of orange stuff on the screen and assume it’s the head, but that wasn’t great (and obviously didn’t work well across skin tones). Then I found an open library (OpenCV, from Intel I think; really cool, by the way) that basically just gives you an API for tracking objects like faces, so I’d guess porting some of that is a good starting point. The algorithm it uses is simple and neat.
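For a flavour of how crude the blob approach was, it boiled down to something like this (a from-memory sketch, not my original code; pixels[y][x] is a table with r, g, b fields in the 0–255 range):

    -- average the positions of "skin-ish" pixels and call that the head
    function skinCentroid(pixels)
        local sumX, sumY, count = 0, 0, 0
        for y, row in ipairs(pixels) do
            for x, p in ipairs(row) do
                -- crude skin test: red clearly dominant over green and blue
                if p.r > 95 and p.r > p.g + 15 and p.r > p.b + 15 then
                    sumX, sumY, count = sumX + x, sumY + y, count + 1
                end
            end
        end
        if count == 0 then return nil end         -- no head found
        return sumX / count, sumY / count, count  -- centroid and blob size
    end

OpenCV’s face detection is in a different league to that, which is why I’d look at porting it rather than rolling our own.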
@ruilov: Actually I’m not too interested in head tracking using the camera since I only have an iPad 1 (which has no camera). I prefer to use the accelerometers and compass to detect device movement and orientation and render the 3D effect from that. But, of course, I can’t stop anyone from using the camera for a real head-tracking program.
The thing about camera access is that it’s still a fair way away from head/face tracking. To do that well you need access to something like OpenCV, and even then it’s a little rough to do on iOS.
jonbro is correct, there would have to be a face tracking C library exposed with a Lua API for it to be feasible.
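Purely as a sketch of what the Lua side might look like if we ever did it (none of this exists; camera.detectFaces is made up):

    function draw()
        background(0)
        -- hypothetical call into a native face-tracking library
        local faces = camera.detectFaces()
        noFill()
        stroke(255)
        strokeWidth(3)
        for _, f in ipairs(faces) do
            -- f.x, f.y, f.width, f.height would be face bounds in screen coordinates
            rect(f.x, f.y, f.width, f.height)
        end
    end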
I actually prefer the accelerometer/gravity style of 3D that I see in some apps, where tilting the iPad gives you a little bit of perspective shift into the level, like you’re looking through a window.
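Something like this gives a rough feel for that effect with nothing but Gravity and a few layers (a quick sketch, not taken from any particular app):

    -- shift each layer by a different multiple of the tilt, so nearer layers
    -- move more, which reads as depth behind the screen
    function draw()
        background(20, 20, 30)
        rectMode(CENTER)
        local layers = { {depth = 0.2, size = 300},
                         {depth = 0.5, size = 200},
                         {depth = 1.0, size = 100} }
        for _, layer in ipairs(layers) do
            pushMatrix()
            translate(WIDTH/2 + Gravity.x * 80 * layer.depth,
                      HEIGHT/2 + Gravity.y * 80 * layer.depth)
            fill(80 + 160 * layer.depth)
            rect(0, 0, layer.size, layer.size)
            popMatrix()
        end
    end

Hold the iPad flat and tilt it around and the squares slide over each other, a bit like peering through a window.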
I think we all still want camera access, so please don’t drop that idea. Also, some of you have heard me talk about Basic!, an app. It has some… strange… features they decided to add before adding more important ones. They added a face detection feature that uses the iOS 5 face detection library.