advance only when a new camera capture frame exists

I'm making some optical flow shader experiments, and I noticed something that appears to be a bit of an issue:
If the draw routine is running at 60fps, is the camera potentially updating slower than that? My entire program depends on comparing the difference between the current camera frame and the last, and as far as I can tell I'm not getting a different image from the camera each draw cycle.

Is there a way to wait at the end or beginning of draw until I have a new image from CAMERA?

You could try only drawing the shader every other frame, so that you run at 30fps. What device are you running on? And are you monitoring delta time to see what fps you currently run at?
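A frame-skipping sketch of that suggestion (updateFlow is a hypothetical stand-in for whatever your shader pass is):

    local frameCount = 0

    function draw()
        frameCount = frameCount + 1
        -- run the expensive shader pass only every other frame (~30fps at a 60fps draw rate)
        if frameCount % 2 == 0 then
            updateFlow()    -- hypothetical: your optical flow shader pass
        end
        -- the rest of the drawing runs every frame
    end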

I ended up making lemonade: something cool came out of processing the camera through another shader and using that capture as the difference texture rather than the raw last camera frame.

I'm getting a rather large fluctuation: it starts off fast at ~60fps and slows to ~30-40 after a minute or so… not sure what that's about. I'm just using print(DeltaTime).

I'm using an iPad Pro at the moment. I'll post some test clips if I can figure out how to get the video files out of Codea; they're currently not showing up in my photo library.

@AxiomCrux it slows down because when you use print(), it eventually overloads the output and starts slowing down the FPS. Instead, write this in setup: parameter.watch("1/math.floor(DeltaTime)")

If you’re using a print statement in the draw function, or in a function called from draw, it will slow the program down after a while and eventually crash Codea. All of those print statements build up in the print buffer, and the program slows down as the buffer grows.
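Instead of print(), a watch expression registered once in setup avoids the growing buffer entirely (a minimal sketch using Codea's parameter.watch; note the floor goes around the whole quotient so it doesn't divide by zero):

    function setup()
        -- evaluated continuously and shown in the sidebar; nothing accumulates
        parameter.watch("math.floor(1/DeltaTime)")
    end

    function draw()
        background(0)
        -- no print() calls here
    end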

@AxiomCrux I ran a test program that added 1 to a counter and displayed the text value each draw cycle. I ran the camera for a few seconds and when I looked at the video, I could drag the video frame to show each increment of the counter. So that tells me the camera is capturing each draw cycle.

@AxiomCrux After more testing, it looks like even though the draw function might slow down due to heavy calculations, the camera still takes its pictures at its own rate. So several frames of the camera could have the same image until the next draw cycle.

@AxiomCrux If you’re comparing camera images, one way to tell if you get a different camera image is to put a small colored rectangle in the upper left corner of the image. When the image changes, draw a different color. If the color changes, then the image was updated and you can compare the images. That way it won’t matter how often the image changes compared to the camera.
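That indicator idea might be sketched like this (assuming each new capture is snapshotted with image(CAMERA); the marker position, size, and colors are arbitrary):

    local flip = false

    function captureCamera()
        local img = image(CAMERA)   -- snapshot the current camera frame
        flip = not flip             -- toggle on every new capture
        -- stamp a small marker square in the corner so a fresh frame is visible
        setContext(img)
        fill(flip and color(255,0,0) or color(0,255,0))
        rect(0, img.height - 20, 20, 20)
        setContext()
        return img
    end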

@AxiomCrux Interesting - a while back I was looking at doing frame by frame comparisons trying to emulate a green screen.

A major issue I found was that there is preprocessing of the camera frame for automatic light adjustment - this caused a problem when trying a frame-to-frame comparison, as any new object in the frame would rebalance the lighting. Did this cause an issue for you?

@west yeah, the built-in camera processing is a bit of a tricky deal on this, but not a deal breaker.

@dave1707 how would I compare the camera images to see if the data is different? I tried if oldCAMERA == CAMERA, with oldCAMERA being an image set to CAMERA after the check, but it didn't seem to work. I didn't know if == was even implemented for images.

@CamelCoder I found watch just after posting. cheers :slight_smile:

By the way, what I'm doing on this project is actually coming out pretty badass now: instead of trying to use the last frame of the camera, I'm using the feedback from my optical flow displacement, so the comparison texture is ALWAYS different and the imagery gets pushed around like fluid based on the movement. This is what I was after anyway, so it's not crucial to keep figuring out the precise disparity between each frame of CAMERA, though it could still be useful for other things. I'll post some things in another thread soon.
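The feedback approach described above is commonly built by ping-ponging between two offscreen buffers (a sketch; the mesh m and the uniform name feedback are assumptions about your setup):

    function setup()
        bufA = image(WIDTH, HEIGHT)
        bufB = image(WIDTH, HEIGHT)
    end

    function draw()
        -- render into bufB, feeding last frame's result (bufA) back into the shader
        setContext(bufB)
        m.shader.feedback = bufA    -- hypothetical uniform name
        m:draw()
        setContext()
        sprite(bufB, WIDTH/2, HEIGHT/2)
        bufA, bufB = bufB, bufA     -- swap buffers for the next frame
    end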

Also @CamelCoder, I just tried your version of watch and it is showing inf as the value.

@CamelCoder Turned out the math.floor needs to go around the 1/ as well.

@AxiomCrux There were some posts long ago about comparing images, but I don’t think they worked out. I’ll see if I can find them. Forget about my suggestion of drawing a colored square. When you capture an image, set a flag to true. If that flag is true, set it to false and compare the current image to the previous image. As long as the flag is false, don’t try to compare images. That way you’ll only compare images when you get a new image and the flag is true.

EDIT: do a forum search on compare images and see if anything there will help.
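In code, that flag pattern might look something like this (gotNewCapture and compareImages are hypothetical stand-ins for however you detect a fresh camera frame and do the comparison):

    local newFrame = false
    local current, previous

    function draw()
        if gotNewCapture() then                 -- hypothetical new-frame check
            previous, current = current, image(CAMERA)
            newFrame = true
        end
        if newFrame and previous then
            newFrame = false
            compareImages(previous, current)    -- hypothetical comparison routine
        end
    end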

@dave1707 hehehe: if that flag is true, set it to false and <>

I'm trying to see precisely where the frames are different, so the flag doesn't do much.

I'm noticing my program seems to crash eventually, and I'm not sure why. I use the same quad and keep flipping to different contexts and shaders. Should I be using different mesh quads for each? Or is garbage collection or some sort of memory issue a possibility? I'll clean up my code and post it if I can't figure it out.

Add collectgarbage. Anytime Codea crashes, it's most likely a memory issue. In your first post, you said you didn't think you were getting a different image to compare. The flag was to let you know when you got a new image so you could do the compare.
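A minimal version of that suggestion, with a memory readout added so leaks are visible:

    function setup()
        -- show Lua memory use (in KB) in the sidebar
        parameter.watch("collectgarbage('count')")
    end

    function draw()
        -- ... shader passes that create temporary images ...
        collectgarbage()    -- reclaim Lua-side memory each frame
    end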

Right @dave1707, but how do we know if the images are different? My issue was that often when I subtracted the two camera frames in my shader I didn't get a resulting difference frame, which would indicate the frames were the same. My question is: how can one know with certainty that the frames are different? I tried comparing OLDCAM (captured to an image at the end of the draw loop) with the CAMERA built-in image constant using != or == in an if statement, but that didn't seem to do anything.
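Two notes on that: Lua spells "not equal" as ~= (!= is a syntax error), and == on two Codea image objects appears to compare object identity rather than pixel data, so it will essentially never be true for separate captures. A pixel-level check has to read the data, e.g. sampling a grid of points with image:get (sample sparsely — reading every pixel each frame is slow):

    -- returns true if the two images differ at any sampled point
    function imagesDiffer(a, b, step)
        step = step or 16   -- check every 16th pixel to keep this cheap
        for x = 1, math.min(a.width, b.width), step do
            for y = 1, math.min(a.height, b.height), step do
                local r1, g1, b1 = a:get(x, y)
                local r2, g2, b2 = b:get(x, y)
                if r1 ~= r2 or g1 ~= g2 or b1 ~= b2 then
                    return true
                end
            end
        end
        return false
    end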

Also, I have a feature request that shouldn't be too difficult to implement: I would love to be able to resize the shader lab's real-time preview window, maybe using the now-familiar split-screen style of dragging the divider. Is there a place I should post such a request to potentially get it on the docket?

I had a few other requests that may be relatively straightforward to add but would net a large gain in functionality: mainly adding MIDI/OSC and proper audio DSP libraries to the main API. A quick Google search for Lua MIDI and Lua audio/DSP showed at least a few great open source libraries on git, and I searched the forum and noticed many other people have asked for this over the years. Getting the thread sidetracked, but I can post those in an appropriate feature request subforum if I find one. :slight_smile:

@AxiomCrux Can you post any shader code to show what you're doing?

@AxiomCrux See this link and my code near the bottom of it. Would anything there help?

@AxiomCrux Actually, here’s the code with a lot of the not needed code removed.

function setup()
    parameter.watch("1/DeltaTime//1")
    img1=readImage("Planet Cute:Character Boy")
    img2=readImage("Planet Cute:Character Pink Girl")
    m=mesh()
    m:addRect(WIDTH/2,HEIGHT/2,img1.width,img1.height)
    m.shader=shader(vs,fs)
    m.shader.texture=img1
    m.shader.texture2=img2
end

function draw()
    background(44, 44, 117, 255)
    m:draw()
end

vs= [[  uniform mat4 modelViewProjection;
        attribute vec4 position;
        attribute vec2 texCoord;
        varying highp vec2 vTexCoord;
        void main()
        {   vTexCoord = texCoord;
            gl_Position = modelViewProjection * position;
        }   ]]

fs= [[  precision highp float;
        uniform lowp sampler2D texture;
        uniform lowp sampler2D texture2;
        varying highp vec2 vTexCoord;
        void main()
        {   lowp vec4 col1 = texture2D( texture,  vTexCoord );
            lowp vec4 col2 = texture2D( texture2, vTexCoord );
            if (abs(col2.r-col1.r)<0.01 && abs(col2.g-col1.g)<0.01 &&
                abs(col2.b-col1.b)<0.01) discard;   // skip pixels that match
            gl_FragColor = abs(col1-col2);          // otherwise show the difference
        }   ]]