Is there a plan to provide APIs for gestures, e.g. tap-and-hold for moving, two-finger pinch for zooming, two-finger rotation, multi-finger swipes, drag-and-drop, etc., for Box2D physics objects? I think that would be a great addition to the physics API.
For example, if a user wants to give an object a default, user-controlled rotation capability, they could simply enable it with something like:
circle.gestureRotation = true, or
rect.gesturePosition = true to enable default user-controlled movement, or
poly.gestureZoom = true to enable default user-controlled resizing, etc. To override the default behaviour, the user could replace the function used as the gesture callback. These APIs could save Codea users coding time on basic/default object-manipulation gestures. Great for quick game prototyping.
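In use, the proposal might look something like the sketch below. Everything here is hypothetical: the `gesture*` properties and the `onGestureRotate` callback do not exist in Codea, and the names are just illustrations of the idea.

```lua
-- Hypothetical API sketch: none of the gesture* properties exist in Codea.
local circle = physics.body(CIRCLE, 50)
circle.gestureRotation = true   -- enable default two-finger rotation
circle.gesturePosition = true   -- enable default drag-to-move

-- Overriding the default behaviour with a custom callback (hypothetical name)
function circle:onGestureRotate(angle)
    self.angle = self.angle + angle * 2  -- e.g. rotate twice as fast
end
```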
What do you think?
I think gestures in general would be a great addition. Though I think they would be better off separate from the physics API: generic gesture detection that you can use however you please.
When you talk about rotation and scaling gestures for physics objects, it almost sounds as if you are describing a level editor or similar system. Is this the case?
A game level editor is a good example of gesture API usage. But no, I'm not trying to make a level editor or similar system. I'm planning to make a 2D block-building game for my kids, similar to Play Bricks or Alpha Baby. Physics and gesture APIs would be very helpful.
I like the idea of a gesture API, but I feel gestures shouldn't be directly attached to the physics API.
I think we can implement an API that reflects the gesture recognizers Apple builds in: versions of pinch, long-press, swipe, tap, rotate, and pan. Then you could easily combine them with physics.
The API could mirror Apple's. So:

```lua
gesture = LongPressGesture()
gesture.numFingers = 1
gesture.minimumTouchDelay = 0.5
addGesture( gesture, myGestureCallback )
```
Or we could take a more functional approach (no gesture objects).
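A functional variant of the same configuration might look like the sketch below. Again, `onLongPress` and its options table are hypothetical names, not part of Codea.

```lua
-- Hypothetical functional API: one call, no gesture object.
onLongPress(function(touch)
    print("long press at", touch.x, touch.y)
end, { numFingers = 1, minimumTouchDelay = 0.5 })
```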
I did some experiments with pinch/zoom today and got it to work, up to a point. Out of curiosity, how would a pinch be done with the API so that its properties could be handed off to translate() and scale()?
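Without any gesture API, a pinch can be hand-rolled today on top of Codea's `touched()` callback and then fed into `translate()` and `scale()`. The sketch below tracks two active touches, derives a scale factor from the change in finger distance, and zooms about the pinch midpoint; it assumes Codea's standard globals (`touched`, `draw`, `ENDED`, `vec2`, `WIDTH`, `HEIGHT`) and an example sprite name.

```lua
-- Hand-rolled pinch: no gesture API assumed.
local touches = {}       -- touch.id -> latest touch
local startDist = nil    -- finger distance when the pinch began
local pinchScale = 1
local pinchMid = vec2(0, 0)

local function dist(a, b)
    return math.sqrt((a.x - b.x)^2 + (a.y - b.y)^2)
end

function touched(touch)
    if touch.state == ENDED then
        touches[touch.id] = nil
        startDist = nil          -- pinch over; keep the last scale
    else
        touches[touch.id] = touch
    end

    -- Collect the active touches; a pinch needs exactly two.
    local t = {}
    for _, v in pairs(touches) do t[#t + 1] = v end
    if #t == 2 then
        local d = dist(t[1], t[2])
        startDist = startDist or d
        pinchScale = d / startDist
        pinchMid = vec2((t[1].x + t[2].x) / 2, (t[1].y + t[2].y) / 2)
    end
end

function draw()
    background(40)
    -- Scale about the pinch midpoint, then draw as usual.
    translate(pinchMid.x, pinchMid.y)
    scale(pinchScale)
    translate(-pinchMid.x, -pinchMid.y)
    sprite("Planet Cute:Character Boy", WIDTH / 2, HEIGHT / 2)
end
```

A hypothetical pinch-gesture object would presumably expose the same two values (a scale factor and a focal point) so they could be handed straight to translate()/scale() like this.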
On a different point, I can see the utility of having persistent visual objects with properties, for instance a pSprite with visible and touched properties. But this gets into making things appear that are not spelled out in the draw loop. That changes Codea's model and may make things harder to control.
I prefer to have the gesture API attached to Box2D objects because I want gestures on a per-object basis. Say one rect is movable and resizable while another rect is only movable, or one circle is resizable and another isn't, etc. I'm thinking of desktop programming (Delphi), where every object (button, shape, etc.) has its own mouse event handler. But I don't know whether this approach would suit Codea's programming model, so I'll let the Codea devs decide the best approach.
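Per-object handlers in the Delphi style can already be layered on a single `touched()` by hit-testing bodies. In the sketch below, `body:testPoint` is Codea's Box2D point test; the `onTouch` field is purely our own convention, not a Codea API.

```lua
-- Sketch: dispatch touches to whichever physics body was hit.
local bodies = {}

local function makeBlock(x, y, w, h)
    local b = physics.body(POLYGON,
        vec2(-w/2, -h/2), vec2(w/2, -h/2), vec2(w/2, h/2), vec2(-w/2, h/2))
    b.x, b.y = x, y
    -- Default "movable" behaviour; replace per object for other gestures.
    b.onTouch = function(self, touch)
        self.x, self.y = touch.x, touch.y
    end
    bodies[#bodies + 1] = b
    return b
end

function touched(touch)
    for _, b in ipairs(bodies) do
        if b.onTouch and b:testPoint(vec2(touch.x, touch.y)) then
            b:onTouch(touch)
            return
        end
    end
end
```

One block could get both a move and a resize handler while another gets only a move handler, which is the per-object flexibility described above.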
I would much rather have a gesture API that works with the physics API, as opposed to one that depends on it.
Any plans for when the gesture API will be supported? v1.4? v1.5?
Possibly. I can't give a timeline yet.