beta 1.3

Lol, I made the beta notes. Ah well - I was having server-side issues with the touchpad thing anyway. There’s a reason I prefer Lua to JavaScript… (don’t get me wrong - node.js is awesome, I just wish it were a sane language…) (They took sockets out of the beta and I weep… meh. I’ve had some time to get used to the concept and say goodbye. Perhaps we’ll meet again, dear sockets… Oh, hi - I’m jailbroken, perl does them. :slight_smile: )

I like the physics stuff at first glance - there are a few effects (“filter” and “raycasting”) that I’ve never seen and can see great utility for, and of course collision detection, w00t! Is #9 on the example just incomplete? (It must be - I’m just noting that #9, “bullets”, isn’t there?) Is it still the case that the intent is to use Box2D as a sample implementation as much as possible? I guess what I’m really asking is, for an online API and tutorial of the stuff, will the Box2D docs be close enough to be useful?

No text example - but it seems simple enough to me that the inline docs suffice. I do see you added “How Codea Draws”, that’s good.

Semi-Off-Topic: Saw this today from Penny Arcade https://www.youtube.com/user/thedicetower?feature=watch#p/search/0/uW_6qAUr7EM and it inspired me to make a project I’m calling “Trek” for now… and now the beta comes along with juicy new features, that’s going to put me off of the new project. An embarrassment of riches.

Good points @ruilov. A really simple box-on-a-plane physics example would be good. Left / right keys are unfortunately not possible to do without getting pretty hacky since iOS only reports “selection” changes, not key codes.

Sockets will be back if Apple allows me to include them. Strangely, iLuaBox has Sockets available as in-app purchase, so it seems as if Apple has reviewed that IAP and allowed it.

Filtering and raycasting (and AABB queries) are very important to a game physics API. Consider the example where you have a player character and monsters. You might want the player to be able to collide with monsters, but monsters should be able to move through each other. That’s what filtering gives you.
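The player/monster case above could look something like this in Codea-flavored Lua - a sketch only, since the exact property names for Box2D-style category/mask filtering on Codea bodies are assumptions here:

```lua
-- Hedged sketch of collision filtering. Box2D filters fixtures with
-- category and mask bits; the `categories`/`mask` property names on
-- Codea bodies are guesses.
PLAYER  = 1
MONSTER = 2

player = physics.body(CIRCLE, 10)
player.categories = {PLAYER}
player.mask = {MONSTER}         -- the player collides with monsters

monster = physics.body(CIRCLE, 10)
monster.categories = {MONSTER}
monster.mask = {PLAYER}         -- monsters collide with the player,
                                -- but pass through each other
```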

Raycasting and sensors let you pick up information about the world from a particular point. Raycasting is especially handy.
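As an illustration of that, a raycast query from a point might be shaped like this - the name `physics.raycast` and the fields on its result are assumptions based on Box2D, not confirmed API:

```lua
-- Hypothetical raycast: cast from the player's position along a facing
-- direction and react to the first fixture the ray crosses.
local origin = vec2(player.x, player.y)
local target = origin + facing * 500   -- `facing` is a unit vec2

local hit = physics.raycast(origin, target)
if hit then
    -- hit.body, hit.point and hit.normal would describe the first hit,
    -- following Box2D's raycast callback data
    line(origin.x, origin.y, hit.point.x, hit.point.y)
end
```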

Rather than have a specific text example I will probably add little bits of text throughout the existing example project such as frame-rate counters and titles.

Ok - I’m officially loving the physics, in that it’s irritating me, in a good way - it’s bothering me the way the cute girls back in high school did, kinda sorta. Well, not exactly the same, but you get the idea. My brain has these physics muscles from college (I have an AS degree in Laser Electro-Optic technology - mostly engineering work involving plumbing and optics and electronics that will ZAP your ass across the room), and that involved your normal college physics as a prereq - and that was a loooong time ago, and those brain muscles are saying “What? You want to remember things?”. GRAND FUN!

The demos are surprisingly short - you get an awful lot of bang for the buck there. Programs that use the engine are going to be mostly setup, and then you just let things happen.

Gravity is unidirectional, yes? If I was to, say, want to do orbiting bodies stuff, I’d have to apply the gravity myself as individual forces, right?
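If that’s the case, applying the gravity yourself each frame would look roughly like this - a sketch assuming world gravity can be zeroed and that bodies expose `mass` and `applyForce` (the `lenSqr`/`normalize` vec2 methods are also assumptions):

```lua
-- Fake orbital gravity: zero the world gravity, then pull each planet
-- toward a central body with an inverse-square force every frame.
-- G is just a tuning constant, not the real gravitational constant.
physics.setGravity(vec2(0, 0))   -- call name per this beta; may differ

function applyOrbitalGravity(sun, planet, G)
    local d = vec2(sun.x - planet.x, sun.y - planet.y)
    local r2 = d:lenSqr()        -- squared distance; or d.x^2 + d.y^2
    if r2 > 0 then
        local f = d:normalize() * (G * sun.mass * planet.mass / r2)
        planet:applyForce(f)
    end
end
```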

LOL - I didn’t even NOTICE the physics demo also uses text. How cool. :slight_smile:

So - it looks like the actual world, and its simulation steps, are “understood”, in that I am guessing/assuming that the world simulation is processed once each draw() call?

Yeah, we wanted to make it as simple as possible. You can pause and resume the simulation, but it is stepped automatically if you are using physics.

However the physics uses an accumulated time-step. So it is not linked to the frame rate of the scene. What this means is that physics will evaluate at the same rate on iPad 1 and iPad 2.

(The way it does this is to accumulate the frame delta each frame. If the accumulation exceeds the fixed time step value it steps the physics simulation forward. Conversely if the delta accumulates so fast as to cover multiple time steps it will step forward multiple times in one frame.)
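That accumulator pattern, written out as plain Lua for illustration (Codea does this internally - the `world:step` call here is hypothetical):

```lua
-- Shape of the accumulated fixed time-step described above.
local FIXED_DT = 1/60      -- fixed simulation step
local accumulator = 0

function stepPhysics(frameDelta)
    accumulator = accumulator + frameDelta
    -- A slow frame can cover several fixed steps; a fast one, none.
    while accumulator >= FIXED_DT do
        world:step(FIXED_DT)            -- hypothetical step call
        accumulator = accumulator - FIXED_DT
    end
end
```

This is why the physics evaluates at the same rate on iPad 1 and iPad 2: the step size never varies, only how many steps run per frame.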

For those following along at home - I’m going thru the examples and looking at the corresponding Box2D docs at the same time - and it looks like TLL did a really good job of tucking away a lot of “busywork” you’d normally have to do yourself in, say, a C++ program using Box2D. You don’t need to explicitly set up a world or call its simulation - it just happens. Bravo! Physics is always semi-painful to get into because of its complexity - but this should make it less so.

In fact - a lot of the routines in Main in the physics demo are likely to make it, intact, into people’s projects, as well as the PhysicsDebugDraw stuff.

I tend to tell people who want to code “go look at the examples, find something close, and start modifying it” - this goes double for the physics stuff. The engine takes care of the animation, so the part the programmer needs to worry about is setup - and you have some perfect templates to model off of here.

Only a few people can see the beta section of the forums. @alvelda, @ruilov, @Andrew_Stacey and you.

I’ve set it up like this so beta discussion doesn’t confuse users who don’t have access.

Thanks for the comments on the physics, that’s all John’s hard work. I’m sure he’ll appreciate your feedback. PhysicsDebugDraw is a file we want people to copy all over the place - I’m thinking to add the cross-project file import feature just for that file.

Heh - wasn’t sure. That’s good - I kinda felt guilty posting about goodies before they can get their hands on them - now I’m guilt-free!

Tell John he officially Wins. The physics stuff is such a neat toy I may be able to forget sockets for a while :slight_smile:

I honestly think the physics stuff is one of those things where it’ll take a bit to get momentum (ha!) and then take off. There are some insanely fun things we can do with it. I’m actually marveling at some of the add-on stuff that might be useful for physics, but is also useful for other things - you mentioned the raycasting and AABB stuff, holy cow! I did raycasting in OpenGL back when it was “picking”, and the AABB stuff solves some things I’d been trying to figure out how best to write, so w00t! The best way for me to write code is to find out someone else already wrote it. (Indeed - the region query alone I know I’m going to use for the Trek thing I was looking at - being able to say, quickly and easily, “ok - where are the enemies close enough that I should actually look at interacting with them?” is really handy.) And you know I’ve wanted collisions for ages - I was delighted to see them in the demos. A lot of the stuff I want to do won’t use “normal” physics (like the orbital stuff, and the game I’m just starting), but the adjuncts to it make the engine so valuable anyway. Can’t say enough good things.

And yes - when I saw the debug tab, I thought “what we really need is to just ‘require’ it and go on”. But it’s nice to be able to see how it’s implemented as well, so it’s good it isn’t just built-in and hidden. Some of the stuff in main like createCircle and such will also be used all the time. (mmm - I actually want to mess around making the draw for some of this be sprites from the spritepacks… I wonder if the asteroids in the space pack are properly centered…)
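For reference, a helper in the spirit of the demo’s createCircle might look like this - the `physics.body(CIRCLE, radius)` call is taken from the example project, but treat the details (property names, defaults) as approximate:

```lua
-- Make a dynamic circle body at a point; the kind of helper from the
-- demo's Main that projects will reuse constantly.
function createCircle(x, y, radius)
    local circle = physics.body(CIRCLE, radius)
    circle.x = x
    circle.y = y
    circle.restitution = 0.25   -- a little bounce; tune to taste
    return circle
end
```

Usage would be something like spawning a ball wherever the user touches: `function touched(t) createCircle(t.x, t.y, 20) end`.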

Oh boo - sockets are gone, and took mime with them - specifically the base64 encode/decode, which I liked to use for binary persistent data.

Any chance we can get mime back?

And zlib too, while you’re at it :slight_smile:

I can roll my own - did for b64 - but they’ll be slow. And ugly.
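For the curious, a rolled-your-own base64 encoder in pure Lua really is short, just slow - something like this sketch (decoding is the symmetric exercise):

```lua
-- Minimal pure-Lua base64 encoder (Lua 5.1, no bit library): pack each
-- 3-byte group into a 24-bit number, then peel off four 6-bit indices.
local b64chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

function b64encode(data)
    local out = {}
    for i = 1, #data, 3 do
        local a, b, c = data:byte(i, i + 2)
        local n = a * 65536 + (b or 0) * 256 + (c or 0)
        local s = ""
        for j = 18, 0, -6 do
            local idx = math.floor(n / 2^j) % 64
            s = s .. b64chars:sub(idx + 1, idx + 1)
        end
        -- Pad when the final group has fewer than 3 bytes
        if not c then
            s = s:sub(1, b and 3 or 2) .. ("="):rep(b and 1 or 2)
        end
        out[#out + 1] = s
    end
    return table.concat(out)
end

-- b64encode("Man") → "TWFu", b64encode("Ma") → "TWE=", b64encode("M") → "TQ=="
```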

If we allowed general file I/O, would that be preferable to mime?

Or just the ability to write and read PNG from the project bundle?

That’s orthogonal - what I really want is base64 encoding as a way to save binary as copy/pastable ASCII blobs. General binary file I/O would eliminate a class of places I’d use it (marshaling data to/from persistent storage), but until we can do I/O to the real world (either sockets or file import/export), there will still be utility in that.

PNG is an interesting problem - we want, in the end, to be able to easily share, including custom graphics. That means either spritepacks, and the ability to manually import/export them, or perhaps the ability to fetch a PNG from a URL and save it. What goes unsaid in both is that someone (not me, not in public) can use PNG as a means of code sharing. I don’t see a way around it other than moderation, which strikes me as unworkable and unacceptable. Maybe the removal from actual code will be acceptable enough for Apple to allow this sort of sharing. Baby steps I guess.

Physics: is there a way to see/set the world gravity vector? I want to slave it to the accelerometer, but I don’t see access to the world object? Maybe I missed something.

Just had a chance to play with the physics, it looks awesome!

Something like this would be easy to make right?

http://megaswf.com/serve/102223

On file I/O: one advantage of having it would be the ability to read/write code and data, so 1) we can do a user version of library management, and 2) we can write a project loader in case .codea does stay out.

If we had file I/O to projects/tabs (including the ability to make new ones), then yes - we’d at least be able to re-create projects, including persistent data (??? I think - we’d need to be able to write to a plist file), from a single cut-and-paste loader. Hmm - could a project nuke itself to clean up afterward?

Not to argue against something I suggested, but it’s a kludgey workaround for the whole can’t-import issue; there’s real utility to a filesystem aside from that, but I’m still not 100% satisfied with it as a solution to importing - I guess nobody is. Plus - do we want to allow write access to another project’s files? I had envisioned write access to the local project, and read-only access to other projects (and data). Idea being that a project gone berserk could only corrupt its own files.

Physics again: Is this Box2D under the covers, and if so what version, or is it based on or adapted from Box2D, or something else? It strikes me it’s real Box2D, version 2.something, with some adjustments to fit the Codea run loop. I ask because I’m trying to pin down docs and features in versions, so that I’m not confused when I try something and it doesn’t work as expected.

I think it’s the latest Box2D version. @John will be able to confirm that.

There should be a physics.setGravity( vec2 ) and physics.setGravity( x, y ). It looks like it hasn’t made it into the docs yet.

The version of box2d is from svn about a month ago, I’ll have to check the exact version. You can set gravity with physics.setGravity(vec2) which is missing from the docs (oops). In terms of API completeness we are missing a few joint types and one shape type as well as low level access to fixtures. There’s no support for composite bodies (more than one fixture attached to a single body), which will likely be in an update.
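Given that call, slaving gravity to the accelerometer (the question above) could be as simple as this sketch - Codea exposes the device tilt as the global `Gravity` vector, and the scale factor here is just a guess at reasonable units:

```lua
-- Feed the accelerometer into world gravity every frame, using the
-- physics.setGravity call described above (undocumented in this beta).
function draw()
    physics.setGravity(vec2(Gravity.x, Gravity.y) * 500)
    background(0)
    -- ... draw your bodies ...
end
```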

Another thing I’m working on which might make it into 1.3 is a class called mesh2d. Basically it allows you to render an arbitrary list of triangles by specifying vertices, colors and texture coordinates. You can use images or sprites as the texture. The intent is to allow simple but powerful access to the hardware but without the tedious setup required.
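From that description, using mesh2d might look something like the following - to be clear, this is guesswork: the class hasn’t shipped, and every name here (`vertices`, `colors`, `texCoords`, `texture`, `draw`) is an assumption from the prose above:

```lua
-- Speculative mesh2d usage: one colored, textured triangle.
local m = mesh2d()
m.vertices  = { vec2(0, 0), vec2(100, 0), vec2(50, 100) }
m.colors    = { color(255, 0, 0), color(0, 255, 0), color(0, 0, 255) }
m.texCoords = { vec2(0, 0), vec2(1, 0), vec2(0.5, 1) }
m.texture   = "Planet Cute:Character Boy"   -- an image or sprite name

function draw()
    background(0)
    m:draw()    -- the GPU interpolates colors/texcoords across the triangle
end
```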

So - love the font picker. Love it! But - now on using it, my brain wants the picker and font() call to specify both font and size, and I want to see that in the picker - indeed, I want to drag my selected font bigger and smaller (maybe with left and right drag). Perhaps font size can simply be an optional 2nd parameter in the font() call? Just a suggestion.

I’m confused about mesh2d - in 3d I understand you’d draw a triangle mesh and let the GPU do the perspective and shading and whatnot; but what would a 2d mesh get you over a plain old sprite or composited image (which I hope are textures on GL billboards or some such). Or - why not just go 3d? I’d love to be able to render a textured mesh.

There’s a few advantages. You can construct Tilemaps and render them much more efficiently. You can also construct a grid and use it to render a distorted image. You can render arbitrary shapes, gradients, etc. You can also set texture coordinates independently so you can have scrolling textures, and other similar effects.

The intent is to also support 3d meshes but for that we also need perspective projection support, depth testing and 3d transforms. All of that is going to require a significant overhaul of the current render pipeline.

Depth testing is in there, as are 3D transformations.

All we actually need is a matrix class (wrap glm::matrix in a Lua userdata). And an applyMatrix(m) function.