How is sfxr implemented? Or: how do I send a single amplitude value to the speakers?

How exactly does sfxr send the current value of the waveform to the iPad speakers, and is there any way to have Codea do this directly?

I’m getting a bit tired of utter silence in the music department of my apps, so I’m thinking of making a chiptune player.

The basic question is this: is there some way to send a single number value to the speakers? I can handle the conversion of music into frequencies and such; what I don’t know is how to just send a single speaker position.

I was thinking in terms of what we have, but you’re right: SFXR is a bad choice for making music, or for any sound that isn’t arrived at by experimenting.

SFXR itself doesn’t play sounds at all, to be precise; it just fills a buffer and passes it to the next layer (in SoundCommands.mm, if you’re interested), which is exactly what you want to do yourself. It would be nice if Codea offered a plain audio buffer, and perhaps MIDI access, in the future. Or a basic synth with predictable behavior.

I managed to play a straight sound on my Linux system; see your other music thread. I’m having some difficulty recreating this in Codea, though: I’ve only managed to play a somewhat wavy sound. I’ll keep looking at it; it must have something to do with how all the various values are bent (by Codea?).

Well, I’m not looking to use Sfxr at all.

(If you already know the following, I’m not trying to be patronizing.)
A speaker works by moving a cone back and forth along one axis. A sound may be a complex thing, but at any single instant in time it is only a single value, or two if it is in stereo.

I only want to know if there is a way to directly tell the iPad what value to use. Sfxr must output a number somewhere; that’s how audio is transmitted.

If that output value can be accessed by Codea directly, then I can write a MIDI reader. All I need to know is a function that tells the speakers what amplitude they should be at.

I think there have been some attempts at poking numbers directly into a soundbuffer: creating a buffer of PCM waveform data and passing it as the argument to the sound() function, along with the format of the PCM data (Mono-8, Stereo-16) and the frequency. You need quite a few numbers in the buffer before you’ll hear anything, to be sure. But yes, it’s entirely doable.
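For example, something along these lines ought to produce a one-second tone (a minimal sketch assuming the soundbuffer/sound API described above; the 22050 Hz rate and 440 Hz pitch are arbitrary choices):

function playSine()
    local rate = 22050   -- samples per second
    local pitch = 440    -- Hz
    local samples = {}
    for i = 0, rate - 1 do
        -- scale the sine from [-1, 1] into the 0..255 range of FORMAT_MONO8
        local v = math.floor(128 + 127 * math.sin(2 * math.pi * pitch * i / rate))
        samples[#samples + 1] = string.char(v)
    end
    sound(soundbuffer(table.concat(samples), FORMAT_MONO8, rate))
end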

I’ve tried it too, but I get hisses, clicks, and pops on occasion, depending on how I adjust the waveform or frequency. I just haven’t had the time or need to experiment more yet, and have no results worth posting myself.

See the ‘Theremin’ thread and other ‘sound’-tagged discussions for some leads. I just don’t know that there’s a way to continuously create and ‘stream’ that sound buffer smoothly. I can foresee creating buffers in advance and ‘sound’-ing them as needed, but maybe not creating and playing simultaneously on the fly. If I send a few numbers at a time in a buffer to sound(), I suspect I’ll just get clicks, no matter how fast I try to vary those numbers in the program.

I think it would be harder to work out how to control pitch/frequency this way than to just use sfxr with what has already been discovered (adjusting it to tune to a particular note of the scale has been described elsewhere).

Yes, I know it has; I’ve done it, lol. I discovered, in my quest, that the sfxr frequency 0.1902 corresponds almost perfectly to the note C3.
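For anyone wanting to generalize that: if sfxr’s internal pitch mapping is frequency = k * (p^2 + 0.001), as in the original sfxr source, then the 0.1902/C3 pair fixes k, and any equal-tempered note can be inverted back to a slider value. A sketch (the function name is made up here, and the assumption is worth verifying by ear):

local C3_HZ = 130.81
local K = C3_HZ / (0.1902^2 + 0.001)   -- ~3519, derived from the C3 anchor

-- semis = semitones above C3 (12 for C4, -12 for C2, etc.)
function sfxrFreqForNote(semis)
    local hz = C3_HZ * 2^(semis / 12)
    return math.sqrt(hz / K - 0.001)
end

print(sfxrFreqForNote(0))    -- ~0.1902, i.e. C3
print(sfxrFreqForNote(12))   -- the parameter for C4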

But sfxr just isn’t made for music. It’s like trying to eat peas with chopsticks: sure, it CAN be done, but it’s horribly inefficient.

Seems like the best solution is soundbuffers, then. The big problem I see, though, is that Codea’s frame rate fluctuates up and down by about four frames per second. If the buffered sound is too short, the waves overlap, but if it’s too long, you get gaps.

I have to admit that the main reason I bought Codea was that I wanted to write apps designed for musicians. Not only because musicians are eager to buy and try nearly anything that comes out (just check out KVR and other music-app websites), but because I, as a hobbyist musician, would love to create an app that might spark some creative juices within myself. And to know that I created it (with a little help from my friends) would be icing on the cake.

Perhaps off topic, but definitely along the lines of this and other music-related threads: my hope is that Codea can be integrated with standard music-making facilities such as MIDI input/output, MIDI values transmitted over WiFi, MIDI and audio patch support, audio recording and editing, etc.

As a newbie, I’m not sure if this is possible, but from the threads I have read, it appears to be a work in progress among the more knowledgeable talent here.

I am not sitting back and waiting to sponge off other people’s hard work (though it is much appreciated when their knowledge becomes available); I am busy learning and experimenting with several ideas.

Thanks to those that are eager to accomplish similar goals.

And of course thanks to TLL for making what we have today possible and for continuing to improve it.

Oh, I forgot: there is already a plain audio buffer, but you have no way of adding sound data to it (SFXR can, you can’t). If you play two sounds, one is put on top of the other, as far as I understand it. Codea could provide an interface to a user-accessible BufferCache; this would allow for buffer continuations without pops and clicks.

Well, one possibility would be to generate, say, 64 samples, each a sixtieth of a second long and each corresponding to a single DC level, then have it interpret the internal waveform to decide which to play.

This would sound absolutely terrible given the slightest lag, however, and is the absolute worst way to go about it…

Hopefully Simeon will drop by, I bet he knows the answer.

Actually @Dylan might have some ideas about this - he did all the work behind the sound integration. I’ll ask him to comment on this thread.

There is no way currently to send a continuous stream of audio to the speakers. Sound buffer is as close as you can get at the moment.

We have discussed ways to send continuous data. It shouldn’t be too hard, seeing as OpenAL does support streaming buffers. It’s just a matter of coming up with a good API for it. Perhaps using a callback which returns the next buffer segment, something like:

function callback(size)
    return "some buffer, size bytes in length"
end

soundcallback(callback)

Though that is off the top of my head.
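Purely hypothetically, fleshing that out: such a callback could keep its phase across calls and return size bytes of an 8-bit sine each time (soundcallback is just the name floated above, not an existing Codea function):

local phase = 0
local rate = 44100
local pitch = 440

function callback(size)
    local out = {}
    for i = 1, size do
        out[i] = string.char(math.floor(128 + 127 * math.sin(phase)))
        phase = phase + 2 * math.pi * pitch / rate
    end
    return table.concat(out)
end

-- soundcallback(callback)   -- would register the generator, if the API existed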

Frankly, I’m not sure that Codea/Lua is fast enough to do DSP effects in real time, though (generally you use the vector units on the CPU to speed these things up, but Lua has no access to those). Generating samples at 44.1 kHz would be a big ask (especially on an iPad 1, though it may be more achievable on an iPad 2). Obviously, if you don’t need that resolution, it becomes a lot more feasible.

I honestly hadn’t thought about the frequency thing. GAH, I feel silly. What I had in mind wouldn’t be capable of any note higher than A#1.

Perhaps the function could be as simple as defining channels that each play a single waveform at a single amplitude and a single frequency.

Anything more complex can be handled by the programmer; the API simply plays the wave.

I’d imagine it working well as createChannel(number, wave), where wave can be saw, sine, square, or noise, or possibly even a function that takes in x (what point on the waveform it is, in the range 0-1) and returns an amplitude between 1 and 256.

Then there could be these functions:

Globalvolume() - sets or returns the global volume of all channels

Channelvolume(channel, v) - sets or returns the volume of one channel, depending on whether v is nil or not

Channeltone() - sets or returns the current absolute frequency of a channel

Channelnote() - sets the absolute frequency based on a frequency table, using either a MIDI number or strings like “b2” or “f#6”

This could be used to create chiptunes that would mix VERY well with the sound style of sfxr.
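Just as a rough sketch of how that could be approximated today on top of soundbuffer (every name below is invented for illustration, and it would suffer the same click/gap issues discussed above):

channels = {}

-- wave is a function mapping x in [0, 1) to an amplitude in [0, 1]
function createChannel(n, wave)
    channels[n] = { wave = wave, volume = 1, freq = 440 }
end

function channelVolume(n, v)
    if v == nil then return channels[n].volume end
    channels[n].volume = v
end

-- render one channel into a 'seconds'-long MONO8 buffer and play it
function playChannel(n, seconds)
    local ch = channels[n]
    local rate = 22050
    local out = {}
    for i = 0, math.floor(rate * seconds) - 1 do
        local x = (i * ch.freq / rate) % 1      -- position within the cycle
        local a = ch.wave(x) * ch.volume        -- 0..1 amplitude
        out[#out + 1] = string.char(math.floor(a * 255))
    end
    sound(soundbuffer(table.concat(out), FORMAT_MONO8, rate))
end

-- e.g. a square-wave channel:
createChannel(1, function(x) return x < 0.5 and 1 or 0 end)
playChannel(1, 0.5)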

But the waveform is going to be tightly coupled with the format, so an amplitude of 255 is going to be quite quiet in 16-bit formats, where the range is 0-65535. Amplitude is going to determine volume, then, but the possible range of amplitudes will depend on the format chosen.

You can also alter frequency by changing the ‘period’ of the samples (how many times a given level repeats before moving to the next level), e.g. 255-0 vs. 255-255-255-255-0-0-0-0. The waveform itself would have to be altered to change volume, though, if using sound() with a sound buffer, that is.
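To make that period trick concrete (a sketch using the soundbuffer API from earlier in the thread): at a fixed sample rate, repeating each level more times lowers the pitch.

local rate = 8000
-- 2-sample cycle: 255-0 repeated -> a 4000 Hz square wave
local fast = string.rep("\255\0", rate / 2)
-- 8-sample cycle: 255 x4, 0 x4 -> a 1000 Hz square wave, same buffer length
local slow = string.rep("\255\255\255\255\0\0\0\0", rate / 8)
sound(soundbuffer(fast, FORMAT_MONO8, rate))
-- sound(soundbuffer(slow, FORMAT_MONO8, rate))   -- play this one to compare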

A different approach might be possible, but I can see why it’s not an easy matter to come up with an API.

Not really what I want yet, just some experiments (copied from an example by @Dylan, I believe; just playing with ‘datum’ and freq as a parameter). It doesn’t really add anything, just a particular ‘shape’ to the wave (datum).

It only actually sounds when I release the touch, as you can see/hear:


-- Use this function to perform your initial setup
function setup()
    tap = false
    iparameter("freq", 1, 14400, 800)   -- playback rate in Hz
    parameter("length", 0.1, 1, 0.5)    -- buffer length in seconds
end

function makeBuffer()
    local data = ""

    -- one cycle of a rough wave shape as raw 8-bit samples,
    -- centered on 128 (the midpoint of FORMAT_MONO8)
    datum = ""
    datum = datum.."\128\135\157\165"
    datum = datum.."\180\195\210\240"
    datum = datum.."\240\210\195\180\165"
    datum = datum.."\157\135"
    datum = datum.."\115\105\90\75\60\45\31\15"
    datum = datum.."\15\31\45\60\75\90"
    datum = datum.."\105\115\128"

    -- repeat the cycle enough times to fill 'length' seconds at 'freq' Hz
    numSamples = math.floor(freq * length)
    for i = 1, numSamples / #datum do
        data = data .. datum
    end
    -- (I was also experimenting with rounding freq down to a multiple of
    -- #datum, i.e. freq = freq - (freq % #datum), to avoid a truncated
    -- final cycle, and with modifying freq up and down in the draw loop)
    print(freq, length, numSamples, freq / numSamples, freq / #datum)
    return soundbuffer(data, FORMAT_MONO8, freq)
end

function touched(touch)
    if touch.state == ENDED and touch.tapCount == 1 then
        tap = true
    end
end

function draw()
    if tap == true then
        b = makeBuffer()
        sound(b)
        tap = false
    end
end

Apparently, I was still working on this. I commented out what I was thinking of and what wasn’t working; sorry it’s not really clear what I was doing. I was really only trying to make a noise with the soundbuffer example, and to work it into Theremin, but it was still clicky and poppy, and @Fred beat me to it there anyway. (And besides, there’s that whole ABC player of his. By the time I figure this out, I could have learned ABC notation.) But the sounding only when the touch is released has me thinking of some other notion now… keeping a number of balloons afloat…

But I’m all for being able to create music-like apps ultimately. I just read of a Pitch Painter app, designed for 3-5 year olds to discover music. How I’d love my own iPad version of SimTunes (I hardly play it on the PC), which seems readily doable even now with Codea. For personal use only, of course; I’m not talking about releasing a knock-off of it, obviously. But as for commercial apps: Kaossilator, TNR-i (Tenori-On), MS-20, Fairlight CMI… yeah, musicians and music aficionados are big app consumers!