A song!

So - maybe this came up and I missed it, but:

Being able to make noises is pretty cool - explosions, laser shots and the like - right on.

But my soul yearns to make music.

I’m considering going into the sound function and running thru random seeds until I get various tones and sorting them out and making a map where I can assign them to a scale and… WHAT AM I THINKING. This way lies madness. (“The only difference between myself and a madman is that I am not Mad!” – Salvador Dali)

So - two semi-related requests:

  1. Let me play MP3 (or AAC) files, either imported to the project as project data, or from the iTunes library. The first would be for background game music or sound effects (do you remember “Sinistar”? Ha, you’re old. http://www.digitpress.com/dpsoundz/beware_i.wav ), and the second would let us do “name that tune” kinda stuff, or just let the user play music while they’re messing around (or even use a single sound file as a library, playing sub-snippets of it at the right times…)

  2. Tone generation - I want to be able to do the old-school MML play command. http://en.wikipedia.org/wiki/Music_Macro_Language - so I can get my chiptunes rampage on. Ideally it’d be compatible with the same format a thousand BASIC interpreters used in the bad old days, so we can lift their old music data. This is presumably what I’ll do if I descend into madness and roll my own using the existing sound effects…

I’d proposed similar ideas to @Simeon a while ago, perhaps somewhere in this forum as well. I forgot. :smiley:

Given that Codea uses SFXR to generate sounds, could you see any parameters you’d like to be able to modify within that system that would give you the desired effect?

You can play with a flash version here to see the available parameters: http://www.superflashbros.net/as3sfxr/

Hmm… “all of them”? I dunno. I managed to get what sounds a lot like a piano note, and with that the only thing I’d need to change is the frequency. The flash demo doesn’t have a way to export the numeric settings that I can see, so it’s difficult to share. (it says copy/paste, but I can’t figure out how…)

But yes - if we had deeper control over this (waveform, attack/decay, frequency, and so on) we’d be able to do chiptune stuff. I’ll write a wrapper to use MML-format music if I must! As for which to expose - expose ’em all, really. I don’t think anyone can anticipate what someone will need to tweak to correctly adjust the ambiance of the buffalo sneeze they need…

Hmmm… Aha!

Same thing, lower:

So - 6th parameter is frequency. I could presumably do a lookup and some timing and “Turkey in the Straw” is stuck in your brain for a week. SEE IF I DON’T! (http://www.youtube.com/watch?v=JOt-CYyfQ74 - I bet I could even grab and rebuild the shape-tables! SHAPE TABLES! If you know those, you’re old. I have “Apple II+ Programming” on my shelf somewhere…)

That’s the piano note (right click on “settings”, copy). If it just took that string, it’d be ugly, but whatever. You might want to wrap it up and do named parameters so you don’t have to pass things that are unchanged.

didn’t realize that would embed. For the record - that’s not Codify, obviously. But you can bet yer sweet bippy the codify version will look identical, even to slowing things down and drawing lines by plotting points. CHE RETRO.

That’s odd - my comment “all of them” post-dates Simeon’s question above - dunno why it’s out of order.

After messing with sfxr, I believe making an MML player (or whatever encoding class) would be straightforward. The pitch (frequency) is the 6th cut-and-paste parameter, and appears to be the actual frequency (1/Hz) * 100. Because Music is Math, this article: http://en.wikipedia.org/wiki/Piano_key_frequencies - tells me how to map the usual notes to frequencies - so writing a thing to take notation, compute frequencies and delays, and call sfxr should be simple enough.
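That mapping boils down to the equal-temperament formula from the article, f(n) = 440 * 2^((n-49)/12) for piano key n. A quick Lua sketch (the helper names are mine; none of this is a Codea API):

```lua
-- Frequency of piano key n (1..88, where key 49 is A4 = 440 Hz),
-- per the equal-temperament formula in the Wikipedia article.
local function keyFrequency(n)
    return 440 * 2 ^ ((n - 49) / 12)
end

-- Offsets of the natural notes within an octave, in semitones from C.
local semitone = { C = 0, D = 2, E = 4, F = 5, G = 7, A = 9, B = 11 }

-- Note name + octave to frequency; key 40 is middle C (C4).
local function noteFrequency(name, octave)
    local n = 40 + semitone[name] + (octave - 4) * 12
    return keyFrequency(n)
end

print(noteFrequency("A", 4))  --> 440
print(noteFrequency("C", 4))  --> ~261.63 (middle C)
```

From there it’s just delays between calls to get a melody.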

So - how many voices do you have? It sounds like a single channel - if I play a sound, then another sound, it clobbers the first one. We’d ideally have at least 2, and 4 or more (for chords and harmony, not to mention melody, bassline, and percussion) would be better. When I was looking into Objective-C programming for the iPhone, I never touched on sound, so I have no idea what the base hardware can do.

I’d suggest, as an API, you set up an sfxr on a given channel, and pass it the full comma-delimited settings string you can copy from the flash site you posted. Then you can either play the sfx, or modify it (fx.pitch=440 or such) and then play it again.
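To sketch that shape: assuming the copied settings come out as a comma-separated list of numbers (which is what the flash tool appears to produce), a small parser would let us tweak individual fields before playing. The example string and the commented play call are placeholders, not a real API:

```lua
-- Split a comma-separated sfxr settings string into a table of numbers.
-- Empty fields (",,") become 0 so parameter positions are preserved.
local function parseSettings(str)
    local t = {}
    for v in (str .. ","):gmatch("([^,]*),") do
        t[#t + 1] = tonumber(v) or 0
    end
    return t
end

-- Placeholder settings string, just to show the mechanics:
local fx = parseSettings("0,,0.3,,0.45,0.42,,,,,,,,,,,,,1,,,,,0.5")
fx[6] = 0.44   -- tweak the 6th parameter (the frequency, per the above)
-- play(fx)    -- then hand it to whatever play call the API ends up with
```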

(By the way - I don’t EXPECT any of this. It’s awesome that you’re adding to the capabilities, and I wouldn’t suggest something I won’t use - but I also know that this all takes real work, and people have lives, and “meh!”. Add it to the list, see what people want most, and if we never see a sound API above what we have now, we’ll live with it.)

A much simpler alternative to full control would be to add a “tone” sound type, set as I mention above (sine wave, fast attack, reasonable decay, and not a ton of other effects), and let us just set frequency where we set seed now. It’d leave me wanting more, to be sure (I’d LIKE sampled instruments!), but it would be a quick band-aid that’d probably be easy to implement.

Yeah the out-of-order thing is weird. Seems to be a bug with the forums.

The hardware can do a lot of channels, but I suspect it’s because the implementation uses the same memory space for all generated sounds. So generating a new one overwrites the last.

The main issue I had with exposing all the parameters is that the sound() function call could look really, really long.

So we hide the complexity:

s = sfx("tone") -- or "jump" or "pickup" or whatever

Now s is a table with the 15 or so named parameters; then we tweak whatever we like (s.freq = 440, say) and play it.


Or even

Play("tone", { freq=440 }) -- use tone preset and override

I.e., force a table of some sort as the argument, and provide a convenience function for presets. Document the table params for geeks like me.
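A sketch of that calling convention, with a made-up "tone" preset and parameter names (none of this is the real sound() signature):

```lua
-- Named presets holding full parameter tables.
-- Parameter names and values here are invented for illustration.
local presets = {
    tone = { waveform = "sine", attack = 0.01, decay = 0.3, freq = 261.6 },
}

-- Play(name, overrides): start from the preset, apply overrides on top.
local function Play(name, overrides)
    local params = {}
    for k, v in pairs(presets[name]) do params[k] = v end
    for k, v in pairs(overrides or {}) do params[k] = v end
    -- sound(params)  -- hand the merged table to the engine
    return params
end

local p = Play("tone", { freq = 440 })  -- tone preset, but at A4
```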

In the end, we can in turn make this work:

Chiptune("abcdefg") -- play the scale
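Internally, Chiptune() could be little more than a lookup plus timing. A sketch (the frequencies and default note length are placeholders, and actually triggering the sound is left as a comment, since that call is exactly what’s being asked for):

```lua
-- Natural-note frequencies for one octave starting at middle C.
local noteFreq = { c = 261.6, d = 293.7, e = 329.6, f = 349.2,
                   g = 392.0, a = 440.0, b = 493.9 }

-- Walk the note string and build a schedule of { time, freq } pairs.
function Chiptune(tune, noteLength)
    noteLength = noteLength or 0.25  -- seconds per note
    local schedule, t = {}, 0
    for note in tune:gmatch("%a") do
        local freq = noteFreq[note:lower()]
        if freq then
            schedule[#schedule + 1] = { time = t, freq = freq }
            -- at time t: play the "tone" preset at freq
        end
        t = t + noteLength
    end
    return schedule
end

Chiptune("abcdefg")  -- seven entries, one per note of the scale
```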

When you get down to it, it’s all about hiding complexity - there’s a ton of work that goes into line(10,20,400,300), but nobody cares (I do) because they don’t see it.

To paraphrase Men In Black: “only difference between them and us? We make it look good.”

Currently it should actually play two sounds over the top of each other and mix them. If it doesn’t then that is a bug because I tested it before release and it worked :wink:

As far as I know we can have up to 16 (maybe 32) sound sources at once, but the more you mix, the slower it gets.

I like the chiptune interface. We wanted to avoid exposing results from the sound() function (so that people don’t have to keep track of them) and having a separate play function, and we didn’t like passing in a table with a lot of parameters, so we settled on the simple interface we have now. The chiptune interface you propose fits with the simple API design we were going for. I do understand that adding a more complex API as well would let people like you create awesome things like the font stuff! It’s something to think about.

Simeon was also thinking about adding an interface similar to the colour picker for sounds (which would let you scroll through and preview random sounds, and perhaps set values for the other SFXR parameters). That would help I think.

I think as long as simple APIs are available (like the current one, or Chiptune above), it’s no sin to expose the full nasty 20-argument version as well, saying “here’s the full interface, it’s ugly and complex; if you’re looking for an explosion sound, go use the simple interface”. That gives you the best of both worlds.

Still wanna be able to play back samples, say in mp3 format. No game is complete until it can shout silly things at the player.

Ooh… A hook into the text-to-speech iOS stuff could be fun as well…

Hi Bortels,

I, too, think our creations could be enhanced by music as well as sound effects. This is the earlier discussion we had on this topic: http://twolivesleft.com/Codea/Talk/discussion/75/sound-synthesis#Item_8

I have already gone down the path of madness and am mapping the random seeds from the sound() function… :slight_smile: you can see my first attempt code in a link from that thread.

I have had no trouble generating polyphonic music, so I’m not sure what bug you might have found, unless that is because I’m interleaving the notes.

I’ll put up my piano keyboard code too, although it’s pretty rough. I’ve been diverted from getting the pitches exactly right to trying to implement a class that plays the ABC notation format. This, combined with better seed/pitch mapping, could give us rudimentary music without further APIs, although I would love them too. BTW, ABC format is like MIDI in ASCII - and there are plenty of tunes and software for it.

Simple keyboard code: https://gist.github.com/1343158
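For what it’s worth, the note-reading half of an ABC player can be tiny. A rough Lua tokenizer - it ignores headers, bar lines, and most of the real spec, and the field names are mine:

```lua
-- Count occurrences of a character in a string (gsub's second return).
local function count(s, ch)
    local _, n = s:gsub(ch, "")
    return n
end

-- Pull letter/octave-mark/length triples out of an ABC-ish melody string.
-- In ABC, ' raises a note an octave and , lowers it; a trailing digit
-- multiplies the default note length. Everything else is skipped here.
local function parseABC(melody)
    local notes = {}
    for letter, marks, len in melody:gmatch("([A-Ga-gz])([',]*)(%d*)") do
        notes[#notes + 1] = {
            pitch  = letter,                          -- z = rest
            octave = count(marks, "'") - count(marks, ","),
            length = tonumber(len) or 1,
        }
    end
    return notes
end

local tune = parseABC("CDEF GABc")
-- each entry now has pitch/octave/length, ready to map onto seeds/pitches
```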

/boggle - Fred, that’s awesome. I’m surprised you got the pitches as close as you did - I was messing with it this morning, and was despairing that using the seed may not be viable (as it’s clearly controlling more than pitch).

As for a chiptune-type format - anything that has readily-available songs should be reasonable, and it looks like for Codify you get to be the trend-setter. :slight_smile:

OK, you convinced me to add an interface for the full sfxr parameters!

Sampling MP3s would have to come later due to issues on how to import them. We are still trying to figure out how sprite packs will work, so I assume it would be something similar.

Fred, that is awesome!