How can I record sound?

I want to record voice and replay it in my app.
I guess I should use mic.start() and soundbuffer(), but I'm not sure.
Could someone show me simple code that records voice from the mic and replays the sound?

To learn about soundbuffer, I tried the example code for soundbuffer in the Codea reference, but
line 10, datum="\0\xAD", gets a syntax error in the code editor.

@Makoto At the moment you can't record sound (at least not easily; it may be possible with mic and soundbuffer, but that would be pretty tricky).

Interesting feature request, I’ll have to think about the API design

@Simeon, thank you for your quick response and your interest.
I have been developing a vocabulary-building app where I need to replay a recorded voice and compare it with the teacher's voice. Please allow me to explain why I need such a feature.

The core of the app's learning method is to have the student pronounce a pair, an English word and the corresponding Japanese word, twice in a row. The student then establishes a connection between the English word and its meaning through sound, since sound, not spelling, is the key to retrieving the meaning of a word from memory. However, sometimes the student pronounces the English word incorrectly, in sound or in accent. It is very hard for the student to detect such a mistake on his own, since he believes the word should be read his way. The only way to make him aware of the mistake is to record his voice and compare it with the correct sound, e.g. from Siri. The current workaround is to run a voice-recording app in the background.
This is the reason I want to have voice recording feature in Codea to make an integrated app.

Please show me how to correct line 10 of the example code for soundbuffer in the Codea reference. The code editor gives an error for line 10: unfinished string near “”.

@Makoto Try something like this. I guess you can change what’s in the datum to create different sounds.
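A minimal sketch of that kind of fix, assuming the soundbuffer(data, format, frequency) and sound() APIs from the Codea reference; the byte 0xAD is written with the decimal escape \173, which works even in Lua versions whose string literals don't accept hex escapes like \xAD:

```lua
-- sketch: build a tiny repeating waveform and play it on touch
-- (soundbuffer, sound, FORMAT_MONO8 are assumed from the Codea reference)
function setup()
    datum = "\0\173"                 -- same bytes as "\0\xAD"
    buffer = soundbuffer(datum:rep(500), FORMAT_MONO8, 9000)
end

function touched(touch)
    if touch.state == ENDED then
        sound(buffer)                -- play the buffer
    end
end
```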


@Makoto Maybe you can use the speech say function for them to repeat how a word sounds. Here’s a link to a program I wrote that lets you try a phrase in different accents.
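A rough sketch of that idea, assuming Codea's speech API exposes speech.voices and speech.say (the names here are assumptions, so check them against the reference):

```lua
-- sketch: speak the same word with each available voice/accent
-- (speech.voices and speech.say are assumed from Codea's speech API)
function setup()
    for i, v in ipairs(speech.voices) do
        print(i, v.language)         -- list the available accents
    end
    speech.voice = speech.voices[1]  -- pick one voice
    speech.say("vocabulary")
end
```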

@dave1707 Thanks. It works. “AD” looks like hex data, but “xAD” (as literal characters) makes a different sound.

@Simeon Please correct the example code.
In the Codea reference, the explanation of mic.start() says “Use this function to start tracking input from the device microphone”, but it doesn't show how to capture the input. I thought soundbuffer was the way. But according to the example code, it looks like mic.start() only activates the soundbuffer and has nothing to do with its input data. What is the purpose of the mic.start() command?

@Makoto From what I've found, soundbuffer is only used with the sound() function and has nothing to do with the microphone. When using the microphone, the only things you can read are mic.amplitude and mic.frequency.

mic.start() turns the microphone on and mic.stop() turns the microphone off.

Below is an example of using the microphone.


function setup()
    mic.start()              -- turn the microphone on
    backingMode(RETAINED)    -- keep earlier frames on screen
    background(0)
    x = 0
end

function draw()
    -- plot mic.amplitude once per draw cycle, scrolling across the screen
    stroke(0, 255, 0) strokeWidth(2)
    line(x, HEIGHT/2, x, HEIGHT/2 + mic.amplitude * HEIGHT/2)
    x = x + 2
    if x > WIDTH then x = 0 background(0) end
end

Thank you. Now I can see what I can do with the microphone in Codea.
It looks like we can check whether there is sound or not.
But since the sampling frequency is the same as the frame rate, it is not suitable for sound recording.

@Makoto I’m just checking it each draw cycle above. I’m sure it can be sampled at a faster rate. I’ll give it a try later and see what happens.

It doesn't appear that the mic can be accessed faster than the frame rate. I tried sampling multiple times per draw cycle, but the values were always the same within each draw cycle. So it seems the mic is sampled once per draw cycle.
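That check can be sketched like this; per the observation above, every read inside one draw cycle returns the same value (mic.amplitude is Codea's API, the loop count is arbitrary):

```lua
function draw()
    -- read the mic several times within a single draw cycle
    local first = mic.amplitude
    local allSame = true
    for i = 1, 10 do
        if mic.amplitude ~= first then allSame = false end
    end
    print(allSame)   -- stays true: the value updates only once per frame
end
```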

@Makoto, by the way, Simeon removed AudioKit in the last iteration of the Xcode export, so I think the microphone is currently not available for App Store apps.

@Makoto Yeah, I will need to extend the microphone API to record into soundbuffers for it to be useful in that way.

Something like:


mic.record(function (buffer)
    -- buffer is a sound buffer delivered after mic.stop() is called
end)
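If the API took that shape, usage might look like this end to end (mic.record is hypothetical; it does not exist yet):

```lua
-- hypothetical flow for the proposed recording API
mic.record(function (buffer)
    sound(buffer)        -- e.g. replay the captured audio
end)
mic.start()              -- begin capturing from the microphone
-- ... later, e.g. on a button tap ...
mic.stop()               -- stop capturing; the callback receives the buffer
```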

@piinthesky is correct that AudioKit is not included in Xcode export; it provides the amplitude and frequency parts of the mic API (this is due to it causing problems with App Store submission). But I feel an actual record-to-soundbuffer implementation wouldn't require the AudioKit library; it's only used for frequency and amplitude analysis.