beta 1.3

That looks brilliant. I have quite a few text-based programs that I can now update with that! Hopefully, I can just provide a drop-in replacement for my Font library and be done with it (oh the joys of OO programming!).

Incidentally, is there a page with a picture of all the fonts that one can get in iOS? For a couple of programs, I want a font that is as close as possible to the kind of letters that children learn - for example, the 'a' shouldn't have the top part (i.e. a single-storey 'a').

http://iosfonts.com has some, but not all are rendered in the fonts themselves. When you install Codea you’ll see most of them in the font picker tool.

Also note that iOS 5 adds a huge number of fonts over iOS 4. So anyone on iOS 4 will miss a lot of the bold-italic and italic variations of fonts.

Woof - I go to sleep for 5 hours and you add major new features! I should sleep more often :slight_smile:

A+ on font metrics. I wonder - is there a way to determine if a given glyph position in a font has a valid character? I'm asking that weirdly, but I just woke up. My thought was being able to enumerate all visible characters in a font, largely for exploration.

And I hear bounding box is useless, but is there a way to say “render a string, and tell me the minimal bounding box of the result”?

The bounding box we’re talking about is some information encoded in the font. It’s meant as some sort of “standard bounding box” of characters, if individual characters don’t override it. So it’s not the bounding box of a piece of rendered text. Simeon’s put that in already.
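A minimal sketch of what measuring a rendered string could look like - assuming the measuring call is named textSize() and returns width and height (the exact name in this beta is an assumption, so check the docs):

function setup()
    font("Helvetica")
    fontSize(32)
    -- measure the string as it would be rendered with the current font settings
    local w, h = textSize("Hello, Codea")
    print(w, h)
end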

I second the “missing glyphs” feature. Sort of like how HTML/CSS does it with a list of fonts saying “Take it from this one if it has it, if not then this one, or that one, or fall back on emoji”.

Portrait mode! (imagine my surprise) J/K, I saw the tweet, but I was still surprised - I didn’t think it was in this build.

Is there a way to force modes? Or did I miss it?

I notice that it auto-rotates and preserves center-screen (or… no, I guess I'm doing that myself by using HEIGHT/WIDTH. That's cool. :slight_smile:)

Portrait only: supportedOrientations(PORTRAIT_ANY)

To preserve the old behaviour: supportedOrientations(LANDSCAPE_ANY)

Standard portrait only: supportedOrientations(PORTRAIT)

Inverted portrait only: supportedOrientations(PORTRAIT_UPSIDE_DOWN)

Standard portrait and landscape left: supportedOrientations(PORTRAIT, LANDSCAPE_LEFT)

And so on. The default is to use ANY orientation.

Note that it is recommended you call supportedOrientations outside of your setup() function - at the top of Main, usually. The reason is that by the time setup() is evaluated, the view is already displaying. So if you call it in setup() it will still work, but the view will assume whatever rotation the editor was initially in until you next rotate the device, and then it will lock. The documentation explains this too.

When it’s called outside of setup() it gets evaluated before the view is displayed, allowing it to force an initial orientation. This is just the way iOS works. You can’t programmatically set a view orientation once it is displayed. And since we want to allow drawing in setup(), setup must be evaluated after display.
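For example, at the top of Main (outside setup()), something like this locks the project to portrait before the view appears - PORTRAIT_ANY is the constant from the list above:

-- Main: this runs before the view is displayed, so the orientation is locked up front
supportedOrientations(PORTRAIT_ANY)

function setup()
    print(WIDTH, HEIGHT)   -- already reflects the portrait orientation
end

function draw()
    background(0)
end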

Hmmm - I thought drawing was prohibited outside of draw()? Maybe I’m thinking of Love2D…

One of the updates added the ability to draw in setup().

You can do this for example:

function setup()
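    -- RETAINED backing keeps what was drawn in previous frames on screen,
    -- so anything drawn here in setup() will stay visible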
    backingMode(RETAINED)
    background(0)
    -- Draw some stuff
end

function draw()
    -- Draw nothing
end

In the latest version 1.3 (4), when I'm in a game and turn the iPad, the screen goes black while it rotates. Is that intended?

It does that because it looks weird if the GL view is temporarily stretched. So I black it out and fade it back in. This only happens when you rotate to a different orientation (portrait->landscape or landscape->portrait). If you rotate portrait->portrait or landscape->landscape it should be fine.

Just sent out a new beta - 1.3 (5). It contains a sound() function picker that lets you audition sound effects directly in the code editor. Let me know what you think.

At the moment, if you put a seed in as an argument it will let you preview that sound (it plays when you tap it the first time). It will also pre-fill the picker with the random seed you chose, so you can keep playing the sound by tapping the table cell.
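In code, auditioning a picked sound could look roughly like this (SOUND_EXPLODE stands in for whichever preset the picker inserts, and the seed is the number you chose):

function touched(touch)
    if touch.state == ENDED then
        -- the same preset + seed always plays the same randomized effect
        sound(SOUND_EXPLODE, 1234)
    end
end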

Fred might be a good beta tester for sound stuff (but again, I'm not sure how freely you're giving those betas out - well, you gave it to me, so it can't be a very restrictive club :))

Aw - not to look a gift horse in the mouth, this is better than nothing, but I had hoped the sound picker was more like http://thirdcog.eu/apps/cfxr

The problem I have with this kind of sound picker is that you can't tweak things - if you get a sound similar to what you want, but not exactly there, you have no way to say "that's perfect, just sustain a bit longer" or "OK - I want that tone, but a note up". I think what people are looking for is a way to program the sfxr thing directly, and then save off the sounds. I know it's a lot of parameters - I'm thinking you could pack them all into a base64 string, so when someone found the sound they wanted, it could be something like:

explosion = "83BA93744CC294F3AD4"
sfx(explosion)

We tried, but the base64 string is really long, causing a lot of text wrapping, which looks pretty horrible.

Edit: You can actually pass sound() a base64 encoded string of all the SFXR values, or a table of them (see Sounds Plus).
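A rough sketch of those two forms (the blob is just a placeholder, and the DATA constant, table keys, and waveform constant are my best guess - the Sounds Plus example has the real ones):

-- base64-encoded string of all the SFXR values (placeholder blob)
sound(DATA, "ZgBAfwBAf0BAQEBAQEA=")

-- or a table of the values directly (key names guessed)
sound({ Waveform = SOUND_NOISE, StartFrequency = 0.3, DecayTime = 0.4 })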

Perhaps it could save the resulting sounds to local storage?

But - if I read right, you can already do this? Where is Sounds Plus?

Oho - found it!

OK - that'll do. You're talking to a man who passed his image through netpbm and copied it into Lua code to make an image from it - a few hundred random bytes don't scare me. :slight_smile:

Hey if you can find a way to compress the 21 float parameters + 1 waveform parameter of SFXR into a short string like your example, I’ll look at implementing an advanced picker.

I’m happy to throw away most of the precision for a system like that (i.e. 1 byte, or even half a byte per float would be fine).
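A rough sketch of that idea (plain Lua, nothing Codea-specific; the helper names are just for illustration): one byte per 0..1 value, hex-encoded, so 22 parameters come out to a 44-character string (base64 would be a bit shorter):

-- pack 0..1 floats into one byte each, hex-encoded
local function encodeParams(t)
    local out = {}
    for i, v in ipairs(t) do
        out[i] = string.format("%02X", math.floor(v * 255 + 0.5))
    end
    return table.concat(out)
end

-- unpack the string back into 0..1 floats (with the precision loss)
local function decodeParams(s)
    local t = {}
    for byte in s:gmatch("%x%x") do
        t[#t + 1] = tonumber(byte, 16) / 255
    end
    return t
end

print(encodeParams({0.5, 0.25, 1.0, 0.0}))   --> 8040FF00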

The cut and paste from cfxr is 29 lines of data - let's call it 30 for the math. 4 bytes per value, 120 bytes total. Base64-encoded, you're at 160 bytes. That's perfectly reasonable to save to persistent storage, or to move around with admittedly nasty data lines. If you throw away precision (I wouldn't) or compress (I would! there are a lot of 'nil's in my sample), you could get it smaller, but if you're doing an advanced picker, do it like Spritely - have it get a name, and save it to persistent storage. (I am still optimistic we'll get some legitimate way to move around blobs - .codea worked; if they allow export we can get closer.) Worst case is to spit it all out as a base64 blob of text.

I see this the same way I see images (gif/jpeg/png) - they’re too big to be convenient for sure, but they’re things we want/need.

An alternative would be building in an ABC-compatible tone generator - that's the chief reason we (or I, at least) want this type of control: so we can set the frequency directly. I personally am still hoping for proper sound sampling, and that's nasty in terms of the volume of data.

I would throw out precision for the picker version of the encoded string. You’re only going to have sliders that are about 200 pixels wide. 128, or even 64 discrete values along those sliders is more than enough to toy with to generate sounds.

Good point… but if you're saving to persistent storage, there is no need. (My concern about throwing out precision is purely the frequency - a little change means the notes sound off.)

Not that it matters, really - for pure note generation we can plug the values in directly; a picker wouldn't be used for that.