Making calculations on the GPU

Hi. Is it possible to do calculations on vector data on the GPU to speed things up?

For example, take two arrays of vec3 as input and calculate an output array? I want to do some calculations on 3D model data.

Wait for v1.5 (very soon) => you’ll have access to OpenGL. (YEEEESSSSS!!!)

@tnlogy this is exactly what a vertex shader is for: computing in parallel on the GPU. So it’s definitely something you can experiment with using shaders.
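Something like this rough sketch shows the idea (untested, and I’m assuming the 1.5 mesh/shader API from memory, i.e. mesh.shader = shader(vertexSource, fragmentSource)). The vertex shader body runs once for every vertex, in parallel:

-- rough sketch, untested: a per-vertex calculation done on the GPU
-- assumes the 1.5 API: mesh.shader = shader(vertexSource, fragmentSource)
local vsrc = [[
uniform mat4 modelViewProjection;
uniform float scale;        // example parameter for the per-vertex calculation
attribute vec4 position;
attribute vec4 color;
varying lowp vec4 vColor;

void main()
{
    vColor = color;
    // this body runs once for every vertex, in parallel
    gl_Position = modelViewProjection * vec4(position.xyz * scale, 1.0);
}
]]

local fsrc = [[
varying lowp vec4 vColor;

void main()
{
    gl_FragColor = vColor;
}
]]

function setup()
    m = mesh()
    m.vertices = {vec3(-1,0,0), vec3(1,0,0), vec3(0,1,0)}
    m:setColors(255,255,255,255)
    m.shader = shader(vsrc, fsrc)
    m.shader.scale = 2.0    -- uniforms are set as properties on the shader
end

function draw()
    background(40)
    perspective()
    camera(0, 0, 5, 0, 0, 0)
    m:draw()
end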

Do you have an example of how to write one and get the result back? I’ve used buffers in 1.5 for normals.

And are you limited in GLSL to calculating one vertex at a time, rather than looping over and indexing the array arbitrarily?

I’m curious whether I can implement CSG as a GLSL shader. I converted a CSG JavaScript library to Codea, which works fine but is a bit slow.

https://github.com/tnlogy/csg.js

The output part is tricky, now that you mention it. Shaders are designed to output to the screen (or a texture). With pixel shaders it’s fairly straightforward, as you can output to a texture and that’s your result.

You could do your vec3 buffer transformations this way, rendering the output to a texture where the R, G, B values correspond to the x, y, z values of your vectors. The overhead of reading the data back is probably too high, though.
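Roughly like this (just a sketch, untested; it assumes you can render a shaded mesh into an image with setContext() and read it back with image:get(), and "packMesh" is a hypothetical mesh whose fragment shader writes the computed x, y, z into the R, G, B channels):

-- rough sketch, untested: render the results into an offscreen image
-- and read them back on the CPU
function computeOnGPU(packMesh, count)
    local output = image(count, 1)
    setContext(output)          -- draw into the image instead of the screen
    packMesh:draw()
    setContext()                -- back to normal drawing
    local results = {}
    for i = 1, count do
        local r, g, b = output:get(i, 1)    -- 0..255 per channel, ~8 bits of precision
        results[i] = vec3(r, g, b)
    end
    return results              -- this readback loop is where the overhead is
end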

This is the closest I can find for CSG through GLSL: http://www.comp.nus.edu.sg/~lowkl/publications/realtime_csg_vrst2010.pdf — but the method looks quite complex, and I doubt it would be possible in GLSL ES (the variant used by mobile devices).

I have a bit to learn before I know the limitations of GLSL ES. Thanks for the link; it might be about rendering CSG rather than creating a mesh? But interesting! I’ve stored meshes to images in Codea, so I guess that would be possible, though it might lose some precision in the process.

And I think there is a limitation of only getting one vertex at a time, so maybe this kind of algorithm isn’t a good fit for a shader. Here’s the code I use to save and load meshes as images:


function saveMesh(name, vertices)
    -- pack each vertex into one pixel: x,y,z -> r,g,b (so only 8-bit precision)
    local im = image(#vertices, 1)
    -- find the bounding box of the vertices
    local max = vec3(-math.huge, -math.huge, -math.huge)
    local min = vec3(math.huge, math.huge, math.huge)
    for i,v in ipairs(vertices) do
        max = vec3(math.max(v.x,max.x),
                   math.max(v.y,max.y),
                   math.max(v.z,max.z))
        min = vec3(math.min(v.x,min.x),
                   math.min(v.y,min.y),
                   math.min(v.z,min.z))
    end
    -- largest extent along any axis; used to normalise all components to 0..255
    local span = math.max(math.abs(max.x - min.x),
                          math.abs(max.y - min.y),
                          math.abs(max.z - min.z))
    for i,v in ipairs(vertices) do
        v = (v - min) / span * 255
        im:set(i,1, v.x,v.y,v.z)
    end
    saveImage("Documents:" .. name, im)
end

function loadMesh(name, origin, scl)
    -- origin and scl must match the min and span used when saving
    local im = readImage("Documents:" .. name)
    if not im then return nil end
    local w,h = spriteSize(im)
    local vertices = {}

    for i = 1,w do
        local r,g,b = im:get(i,1)
        -- undo the normalisation: pixel values 0..255 back to world coordinates
        local v = origin + (vec3(r,g,b) * scl) / 255
        table.insert(vertices, v)
    end
    return vertices
end
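
For example, a round trip looks like this. Note that saveMesh doesn’t store min and span, so you have to remember them yourself and pass them back in as origin and scl:

-- usage sketch: round-trip a few vertices through an image
local verts = {vec3(0,0,0), vec3(1,2,3), vec3(-1,0.5,2)}
saveMesh("testmesh", verts)

-- origin must be the min vector and scl the span that saveMesh computed
-- (for the vertices above: min = (-1,0,0), span = 3)
local restored = loadMesh("testmesh", vec3(-1,0,0), 3)
for i,v in ipairs(restored) do
    print(i, v.x, v.y, v.z)     -- only ~8-bit precision per component survives
end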

That’s true, the one-vertex-at-a-time (or fragment-at-a-time) limit arises from the parallel nature of the GPU. You don’t know when neighbouring or other vertices will be computed, so you are limited to the current vertex and any inputs (such as a past state).
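For example, a fragment shader can read a texture holding a previous pass’s results as input, but it can never see what another fragment computes in the same pass (sketch only, untested):

-- sketch: each fragment sees only its own coordinate plus the inputs you supply
-- (uniforms and textures, e.g. a previous state); never other fragments' results
local fsrc = [[
varying highp vec2 vTexCoord;
uniform sampler2D previousState;    // e.g. last pass's output fed back in

void main()
{
    highp vec3 old = texture2D(previousState, vTexCoord).rgb;
    gl_FragColor = vec4(old * 0.9, 1.0);    // new value depends only on inputs
}
]]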

Yes, that CSG algorithm looks to be a raycasting-based one rather than a vertex-based one.

Anyway, I changed the code to use Codea’s vec3 class instead of the Lua one, and it got a bit faster. :slight_smile: