I have been thinking a lot about sound since I started using Codea.
I have searched the forum and found many others who share my interest, but for the most part what was needed remained unclear. I have put a great deal of thought into a simple way to extend Codea that would allow low-level DSP and much better control over sound, and it requires one main thing:
access to the implicit sound thread, so we can supply the output buffer directly.
I hear you say: “but that already exists, the sound.buffer object”, and you are partially correct.
But there is no way to access it at the sample level.
It would look like this:

    function setup()
        -- setup here
    end

    function draw()
        -- draw here
    end

    function touched(touch)
        -- touch here
    end

    function audio()
        -- AUDIO HERE
    end
This audio() function would operate at the buffer level (analogous to the frame level of the draw thread), and calculations within it would use an API analogous to image:get()/set(): setting a single sample in the output buffer with something like buffer:set(i, value), and reading a sample from the audio input with something like input:get(i). The existing sound object could easily remain part of this; the point is direct access to the input and output buffers.
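To make the idea concrete, here is a sketch of what a body for that callback could look like. Everything here is hypothetical: audio(), the outBuffer parameter, its size field, and its set() method are proposed names, not part of Codea's current API, and the 44100 Hz rate is an assumption.

```lua
-- Sketch only: audio(), outBuffer, outBuffer.size, and outBuffer:set()
-- are proposed names, not Codea's current API.
local phase = 0
local freq  = 440      -- A4, in Hz
local rate  = 44100    -- assumed output sample rate

function audio(outBuffer)
    for i = 1, outBuffer.size do
        -- write one 32-bit float sample in the range -1.0 .. 1.0
        outBuffer:set(i, math.sin(2 * math.pi * phase))
        phase = phase + freq / rate
        if phase >= 1 then phase = phase - 1 end
    end
end
```

The per-sample loop is the whole point: oscillators, filters, and effects all reduce to reading and writing individual samples like this, exactly the way per-pixel work reduces to image:get()/set().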
A buffer is just an array of 32-bit floats in the range -1.0 to 1.0,
or, if desired, integers for fixed-point 8/16/24-bit audio, which would be translated to the native output format.
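For illustration, the translation step for the 16-bit case is just a clamp and a scale. This is my own sketch of the conversion, not anything Codea does today:

```lua
-- Convert one float sample (-1.0 .. 1.0) to signed 16-bit fixed point.
local function floatToInt16(s)
    if s > 1.0 then s = 1.0 elseif s < -1.0 then s = -1.0 end  -- clamp
    return math.floor(s * 32767 + 0.5)                         -- scale and round
end

-- and back again
local function int16ToFloat(n)
    return n / 32767
end
```

The 8- and 24-bit cases are the same idea with different scale factors, which is why exposing the float buffer alone would be enough.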
I assume this would expose the framework Codea already uses to establish audio output, just at a lower level, in a way that fits the established paradigms.
It would only require the addition of an input stream from the mic, and even that is not critical up front.
But that input stream would add another thing many people would definitely like: CAMERA + AUDIO.
It would also potentially make SCREEN CAPTURE with voiceover and sound effects possible.
This thread is already there and working; we just need access to it. Then inspired people like myself can build things like the SODA / Cider of audio.
The audio() callback would not be essential: anyone happy with the current sound techniques could ignore it entirely, so backward compatibility is preserved.
We have frame level, pixel level, gpu level, interaction via touch level, why not audio level?
I have done extensive experiments calling sound from the draw function with buffers of 44100/60 = 735 samples, but that breaks down because of the nature of the draw thread: it has to be flexible in length and timing, and when draw requests pile up the frame rate drops. I tried using DeltaTime in various ways to size the buffer, but that also falls short, largely because it relies on the previous frame's DeltaTime. I will keep trying for my own fun, because I like making things do stuff they are not usually meant to do (I'm a hacker :P).
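To make the failure mode concrete, here is the core of the DeltaTime approach in plain Lua (my own sketch of the bookkeeping, independent of any Codea call):

```lua
-- Why sizing buffers from DeltaTime falls short: we always size *this*
-- frame's buffer from the *previous* frame's duration.
local rate  = 44100
local carry = 0   -- fractional samples left over from previous frames

-- Called once per frame with the previous frame's duration (DeltaTime).
local function samplesForFrame(dt)
    local exact = dt * rate + carry
    local n = math.floor(exact)
    carry = exact - n   -- keep the fraction so the count doesn't drift long-term
    return n
end
```

Even with the fractional carry keeping the long-run average correct, a single long frame means the previous (shorter) buffer has already run dry before the next one is generated, and that gap is audible. No bookkeeping inside draw() can fix that, which is why a dedicated audio callback is needed.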
The fact is there is currently no place to hook in sample-level sound code that can manipulate audio, and that is OK: Codea was made for games and for simplifying the process of creating them, and sound is usually considered a secondary element in that endeavor. But there is a per-frame draw function, a per-touch function, and so on, and there is no logical place for a sound function within that framework, because the reality is it requires a root-level sound callback.
The current sound API also does not appear to be fully working, which I have experienced directly and seen confirmed in many threads. I believe this is because things are structured in an atypical way, and it is something I can fix, want to fix, and could fix easily once I have access to the thread. For that matter, I know DSP inside and out, I am loving Codea, and I want to help it grow. This is the only thing I need.