I want to determine the user's speed from the acceleration.
I wrote this code:
speed.x = speed.x + UserAcceleration.x * DeltaTime -- integrate over the frame time
speed.y = speed.y + UserAcceleration.y * DeltaTime
But even when I don't move, my speed is not 0. It depends on the tilt of my device, which means gravity is affecting the acceleration reading. Can I get PURE acceleration?
Nope. You are always under acceleration due to gravity. I guess if you got into orbit somehow… (and even then you'd be under acceleration due to gravity; it's exactly the centripetal force that keeps you on the circular path.)
But even that won't help: acceleration is the rate of change of velocity. Trying to work out absolute speed from it won't work (it could in theory, but in practice everything is far too inaccurate).
If you could access the GPS, you'd be able to calculate average speed over large distances, but you can't access the GPS from Codea at this time. It's been suggested as a "nice to have" feature for the future.
What you can do is work out the gravitational component of the acceleration. For example, if your pad is horizontal, gravity contributes only to the vertical vector; any force felt in other directions is not gravity. So you can take the tilt and use it to factor out gravity.
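The "factor out gravity" suggestion can be sketched very simply: if you know the gravity vector (direction given by the tilt, magnitude 1 g) in the same frame as the raw accelerometer reading, you subtract it component-wise. Here is a minimal illustration in plain Python (Codea itself is Lua; the function name is made up for this sketch):

```python
# Hedged sketch (plain Python, not Codea Lua): remove gravity from a raw
# accelerometer reading by subtracting the known 1-g gravity vector.
# Both vectors must be in the SAME coordinate frame for this to work.

def pure_acceleration(raw, gravity):
    """raw: accelerometer reading in G units; gravity: the 1-g vector
    implied by the device's tilt, in the same frame. Returns raw - gravity."""
    return tuple(r - g for r, g in zip(raw, gravity))

# Device lying flat and at rest: the raw reading is just gravity,
# so the user acceleration comes out as zero.
flat = pure_acceleration((0.0, 0.0, -1.0), (0.0, 0.0, -1.0))
print(flat)  # (0.0, 0.0, 0.0)
```

Note that the subtraction only makes sense when both vectors are expressed in the same frame; if one is in screen coordinates and the other in device coordinates, one has to be rotated into the other first.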
Maybe I could solve this problem via the gravity component (just apply a correction, because I know the gravity vector), BUT it is impossible, because Gravity and UserAcceleration have different orientations (screen orientation and device orientation, respectively).
So this is my point: maybe it's not such a bad idea to let the user set the orientation of these two values.
@gen4 I can change UserAcceleration to be rotated, like Gravity, in the next update. Would this help?
@Simeon yeah, I think so, provided they both work in sync.
Maybe a better way is to leave both unrotated, but add another vector, screenOrientation, to do the computation with.
I got interested in the acceleration and gravity readings enough to create a scrolling trend graph control to look at the values. It’s really interesting to see what happens when the system is tilted or given a good shake. The acceleration sensor is accurate enough to make a decent seismograph.
WARNING: this program may encourage you to drop, shake, or whack your iPad. Try to resist.
Oh, and if some of the code looks a bit funky in TrendGraph, it's because I wanted an infinitely updatable graph without creating an infinite data set or constantly shuffling the array. So the view is actually iterated in two pieces to knit the image together. It's kind of clumsy, but it works.
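The "two pieces" trick described here is essentially a circular buffer read out in two slices. A rough sketch in Python (class and method names are invented for this sketch, not taken from TrendGraph):

```python
# Sketch of an "infinitely updatable" graph buffer: a fixed-size circular
# buffer whose contents are read out in two slices (oldest..end, then
# start..newest), so the graph scrolls forever with no array shuffling.

class TrendBuffer:
    def __init__(self, size):
        self.data = [0.0] * size
        self.head = 0          # index of the next slot to overwrite
        self.count = 0

    def push(self, value):
        self.data[self.head] = value
        self.head = (self.head + 1) % len(self.data)
        self.count = min(self.count + 1, len(self.data))

    def ordered(self):
        """Samples oldest-to-newest, knitted together from the two pieces."""
        if self.count < len(self.data):
            return self.data[:self.count]
        return self.data[self.head:] + self.data[:self.head]

buf = TrendBuffer(4)
for v in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
    buf.push(v)
print(buf.ordered())  # oldest-to-newest: [3.0, 4.0, 5.0, 6.0]
```

Drawing then just walks `ordered()` left to right, so the newest sample always sits at the right edge of the graph.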
OMG. This is superb, @Mark! I wonder what it would register in free fall? :). Or spinning flight…
I saw a science fair project where someone threw an iPhone into the air (in a heavily padded box) and had the various parameters sent to a nearby iPad to be displayed on a graph, so it has been done.
However, remember that the iPad is an object with a definite size, so describing its acceleration fully takes six components (three linear and three rotational) to truly figure out what it is doing. With only three, there isn't enough information to distinguish between all the possible types of movement. I tried experimenting with displaying the acceleration, as I wanted to detect whether the iPad was being turned, and I decided that it wasn't giving me the right information, and that it was too sensitive to integrate into any sort of actual motion detection. I think it is designed to figure out whether the iPad is being shaken, rather than anything precise. Mind you, I didn't pursue my experiments, so a more detailed analysis (such as Mark's) might show that it can be used for more.
(On a similar vein, note that the gravity vector simply points in the direction of the nearest Big Mass. It does not measure the actual force due to gravity.)
I’d much rather use the inbuilt compass … except that we don’t have access to that yet. That would give one a direct reference point against which to measure, rather than integration which is … subject to error.
I suspect, based on the compass app, that access to the compass information is… slow. Likely too slow to be of real use in positional calculations. It's nice for absolute orientation, though; I have a starmap that uses it.
What would be handy, but not high enough resolution for this, is gps.
This might be a cool feature to have. People could do some cool things. For instance, a kind of hide-and-go-seek: one person virtually hides an object somewhere. As the hider walks away to give the iPad to the finder, the iPad calculates how far the person moved, building a virtual map. When the finder has the iPad, he follows an on-screen arrow that leads to the hidden object. When he finds it, he wins. Ideas, ideas.
P.S. Am I talking about the right thing? (I'm not talking about the GPS, but that would be cool too.) I'm thinking about the acceleration of the iPad. Is this already a feature? I always get confused among the terms for how the iPad is tilted, its acceleration, and a few other things.
@Mark and others. Do you think UserAcceleration should be rotated into view coordinates like gravity currently is? I was going to change it at @gen4’s suggestion, are there any objections?
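For what "rotated into view coordinates" could look like: a sketch that rotates the device-frame x/y reading by the interface orientation angle. This is plain Python with an assumed sign convention, not the actual Codea implementation:

```python
import math

# Hypothetical sketch: rotate a device-frame x/y acceleration into view
# (screen) coordinates. The sign convention, and orientation being a
# multiple of 90 degrees, are assumptions made for illustration.

def to_view_frame(ax, ay, orientation_deg):
    t = math.radians(orientation_deg)
    c, s = math.cos(t), math.sin(t)
    # round() cleans up floating-point dust at exact multiples of 90 degrees;
    # adding 0.0 normalizes any -0.0 results for tidy printing
    return (round(ax * c + ay * s, 9) + 0.0,
            round(-ax * s + ay * c, 9) + 0.0)

print(to_view_frame(1.0, 0.0, 0))   # (1.0, 0.0): portrait, unchanged
print(to_view_frame(1.0, 0.0, 90))  # (0.0, -1.0): landscape, axes swap
```

If both Gravity and UserAcceleration went through the same transform, subtracting one from the other would be straightforward, which seems to be the point of the request.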
Mark, that’s a lot of fun! Unfortunately, you forgot the “no warranty” clause so I can sue you for a new iPad.
It also provides evidence for my suspicions: that UserAcceleration is pretty useless in a quantitative situation. I added graphs for velocity and distance (integrated from the acceleration), and got my iPad quite some distance away without actually moving it!
Incidentally, the acceleration graphs gave the impression that gravity is already taken into account and that these readings are additional acceleration.
There are two problems that I can see with using the acceleration quantitatively. The first is the sample rate. I guess that UserAcceleration is set to whatever the readings are at the start of the draw cycle. So it's sensitive to small changes on that timescale, and looking at the values, it is very noisy. The second is based on the conjecture that there is only one sensor for each direction in the iPad. This means that it can only tell what is happening to that particular point on the iPad. In particular, if that spot doesn't move but the rest does, then no change is registered, yet the iPad is now pointing in a new direction, which is important to know if you are trying to work out the location by "dead reckoning".
Two major issues:
- cumulative error: the noise in each reading will get you more and more "off course" over time.
- no rotational inertia sensing: you can rotate the iPad about a vertical axis (i.e. the gravity axis) and register no acceleration, so you have no good way of knowing how far you rotated. That means subsequent horizontal measurements could be in any direction. Presumably the compass could help here, but the compass is slow, and again you get back to cumulative error.
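To put a number on the cumulative-error point: even a tiny constant bias in the acceleration reading, integrated twice, grows quadratically with time. A toy Python calculation with made-up numbers (a 0.01 m/s^2 bias at 60 samples per second):

```python
# Toy dead-reckoning sketch: a constant bias b in the acceleration reading
# yields a velocity error of roughly b*t and a position error of roughly
# b*t^2/2, so the position estimate walks off quadratically with time.

def dead_reckon(bias, dt, steps):
    v = x = 0.0
    for _ in range(steps):
        v += bias * dt  # integrate acceleration into velocity
        x += v * dt     # integrate velocity into position
    return x

# "Standing still" for 60 seconds with a 0.01 m/s^2 bias at 60 Hz:
drift = dead_reckon(0.01, 1 / 60, 60 * 60)
print(drift)  # about 18 metres of drift while not moving at all
```

Real sensor noise is zero-mean rather than a pure bias, but calibration offsets and any error in the gravity estimate behave like one, which is why integrated velocity and distance graphs wander off so quickly.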
FWIW, this isn't iPad (or iOS) specific: absolute fine measurement of position without an artificial external reference is basically an unsolved (and perhaps unsolvable) problem. The person who can make a small box of any sort that can measure its absolute position to within centimeters without an external artificial frame of reference will be a VERY RICH PERSON. (On a larger scale, meters or tens of meters, GPS is it. Note that it also uses an artificial external reference…)
It seems reasonable to me that UserAcceleration and Gravity should have the same directions relative to the “glass.”
If we had access to the camera, we could use optical flow to recalibrate the inertial sensors every now and then.
re. camera: good point, but you'd still want something reasonable for the camera to focus on, like a nice black-and-white pattern or a yardstick.
Actually, I semi-kinda take it back. I believe the Google car auto-drive stuff combines cameras looking at the lines painted on the roads with absolute measurements of the rotation of the wheels, plus GPS for gross position, to get a pretty good idea of absolute position. I'm willing to call things like road lines non-artificial for the purposes of this argument. (I should say "not made for the purpose of enabling the absolute position measuring system," but that's a mouthful.)
Point is - it’s not a trivial problem to solve.
In my day job, I work on several projects, including one that tracks miners underground. Believe me, I’d love to have a self-contained navigation solution. The best I can do is put out an array of radio beacons and triangulate (no GPS underground) and even that’s problematic since most wavelengths don’t penetrate rock pillars. So to locate miners to the 200’ required by the MINER Act in mines that can be many miles long and many entries wide, you end up with a LOT of access points.
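The beacon triangulation idea can be sketched in miniature: in 2-D, distances to three fixed beacons pin down a position, because subtracting the circle equations pairwise leaves a linear 2x2 system. Plain Python with made-up beacon positions (nothing here is taken from the actual mine tracking system):

```python
# 2-D trilateration sketch: given distances d1..d3 to three fixed beacons,
# subtract the circle equations pairwise to get a linear 2x2 system in (x, y).
# Beacon positions and distances below are invented for illustration.

def trilaterate(b1, b2, b3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    # Linearize: (circle 1 minus circle 2) and (circle 1 minus circle 3)
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x1); E = 2 * (y3 - y1)
    F = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = A * E - B * D          # non-zero when beacons are not collinear
    return ((C * E - B * F) / det, (A * F - C * D) / det)

# Beacons at three corners of a 100 m square, target actually at (30, 40):
p = trilaterate((0, 0), (100, 0), (0, 100),
                (30**2 + 40**2) ** 0.5,   # distance from (0, 0)
                (70**2 + 40**2) ** 0.5,   # distance from (100, 0)
                (30**2 + 60**2) ** 0.5)   # distance from (0, 100)
print(p)  # close to (30.0, 40.0)
```

In practice the measured distances are noisy, so a real system uses more beacons than strictly necessary and solves the over-determined system by least squares.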
I’ve tried mixtures of UHF, VHF, wifi, and even pulsed magnetics. The most practical solution seems to be low power 900mhz systems.
Anyway, that’s all an aside. Just confirmation that as @Bortels said, without external reference, navigation is a challenge.