Very good work and thanks for sharing.
I like v5 with the kaleidoscopic effect
I would like to create a face-recognition shader (with the camera) for my AI project, but it’s very difficult for me.
hmmm, after a bunch of testing, the only shader that worked for me (although only at 20 fps) was the first one by @yojimbo2000. It seems to work well for my needs, and the blur looks good. When I tried the other ones, they just behaved like the blur shader that comes with Codea: a slightly offset copy of the image appeared toward all four corners. In the demo program, though, it worked well, so I’m wondering how to implement it like that (the code with the 3 different demos confused me). Any help is appreciated, and thanks again for all the shaders!
This one adds the built-in shader for comparison. It runs quickly on the Air by doing 10 samples per pixel in a single pass. My last one also does 10 samples, but because it does 2 passes of 5, that results in 25 effective samples. I’m biased of course, but I think mine looks way better than the built-in one.
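That arithmetic works because a Gaussian kernel is separable: a horizontal pass followed by a vertical pass is equivalent to convolving once with the 5×5 outer-product kernel, so 5 + 5 = 10 texture reads buy 25 effective samples. A quick Python sketch of that claim (my own illustration, σ picked arbitrarily, not part of the original project):

```python
import math

# Build a normalised 5-tap 1-D Gaussian (sigma is arbitrary for this demo)
sigma = 2.0
w = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-2, 3)]
w = [v / sum(w) for v in w]

# Two 1-D passes are equivalent to one 2-D pass with the outer-product kernel
kernel2d = [[wx * wy for wx in w] for wy in w]

print(sum(len(row) for row in kernel2d))  # 25 effective samples for 10 fetches
print(abs(sum(sum(row) for row in kernel2d) - 1.0) < 1e-9)  # True: still normalised
```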
--# Main
--Gaussian blur
--adapted by Yojimbo2000 from http://xissburg.com/faster-gaussian-blur-in-glsl/ and http://www.sunsetlakesoftware.com/2013/10/21/optimizing-gaussian-blurs-mobile-gpu
function setup()
    --2-pass Gaussian blur:
    local downSample = 0.5 -- going down to 0.25 actually looks pretty good, but, weirdly, slower than 0.5
    local dimensions = {
        vec2(WIDTH, HEIGHT), --full size
        vec2(WIDTH, HEIGHT) * downSample --down-sampled
    }
    blurRadii = {
        vec2(0.002, 0), --horizontal pass, increase x value for more horizontal blur
        vec2(0, 0.002) --vertical pass, increase y value for more vertical blur
    }
    blur = {} --images
    blurred = {} --meshes
    for i = 1, 2 do --2 passes, one horizontal, one vertical
        blur[i] = image(dimensions[i].x, dimensions[i].y) --image 1 is full-sized, image 2 is down-sampled
        blurred[i] = mesh()
        blurred[i].texture = blur[i]
        local j = 3 - i --invert i so that...
        blurred[i]:addRect(dimensions[j].x/2, dimensions[j].y/2, dimensions[j].x, dimensions[j].y) --...mesh 1's rect is down-sampled and mesh 2's rect is full-sized (ie, the opposite of their images)
        blurred[i].shader = shader(Gaussian.vs, Gaussian.fs)
        blurred[i].shader.blurRadius = blurRadii[i]
    end
    --mesh w/o the blur shader:
    unblurred = mesh()
    unblurred.texture = blur[1]
    unblurred:addRect(WIDTH/2, HEIGHT/2, WIDTH, HEIGHT)
    --built-in shader
    builtin = mesh()
    builtin.texture = blur[1]
    builtin:addRect(WIDTH/2, HEIGHT/2, WIDTH, HEIGHT)
    builtin.shader = shader("Filters:Blur")
    local blurriness = 3
    builtin.shader.conPixel = vec2(blurriness/WIDTH, blurriness/HEIGHT)
    builtin.shader.conWeight = 1/9
    showBlur = 1 --blur state
    --some animations:
    pos = {x1 = 200, y2 = 200, x3 = 800}
    radii = {r1 = vec2(0.03, 0.01), r2 = vec2(-0.01, -0.03)}
    tween(2, pos, {x1 = 800, y2 = 800, x3 = 200}, {easing = tween.easing.cubicInOut, loop = tween.loop.pingpong})
    tween(3, radii, {r1 = vec2(-0.03, -0.01), r2 = vec2(0.01, 0.03)}, {easing = tween.easing.sineInOut, loop = tween.loop.pingpong})
    movement = true
    profiler.init()
    print("tap screen to cycle blur effects")
end
function draw()
    if movement then
        setContext(blur[1])
        background(50)
        -- background(0,0)
        sprite("Platformer Art:Guy Jump", pos.x1, HEIGHT-200)
        sprite("Space Art:Red Ship", pos.x3, 200, 300)
        sprite("Platformer Art:Icon", WIDTH/2, pos.y2)
    end
    if showBlur < 3 then --draw blur if required
        if showBlur == 2 then
            blurred[1].shader.blurRadius = radii.r1
            blurred[2].shader.blurRadius = radii.r2
        else
            blurred[1].shader.blurRadius = blurRadii[1]
            blurred[2].shader.blurRadius = blurRadii[2]
        end
        setContext(blur[2])
        -- background(50) --nice after-image effect if you don't clear the intermediary layer
        blurred[1]:draw() --pass one, offscreen
        setContext()
        blurred[2]:draw() --pass two, onscreen
    elseif showBlur == 3 then
        setContext()
        -- background(50) --doesn't seem to be necessary to clear the screen, for some reason
        unblurred:draw()
    elseif showBlur == 4 then
        setContext()
        -- background(50) --doesn't seem to be necessary to clear the screen, for some reason
        builtin:draw()
    end
    profiler.draw()
end
--touching cycles through the blur modes
local states = {"2-pass blur, 5+5 taps, 25 effective samples", "2-pass insect eye", "off", "built-in blur, 1 pass, 10 samples"}

function touched(t)
    if t.state == ENDED then
        showBlur = showBlur + 1
        if showBlur == 5 then showBlur = 1 end
        output.clear()
        print(states[showBlur])
    end
end
profiler = {}

function profiler.init(quiet)
    profiler.del = 0
    profiler.c = 0
    profiler.fps = 0
    profiler.mem = 0
    if not quiet then
        parameter.watch("profiler.fps")
        parameter.watch("profiler.mem")
    end
end

function profiler.draw()
    profiler.del = profiler.del + DeltaTime
    profiler.c = profiler.c + 1
    if profiler.c == 10 then
        profiler.fps = profiler.c / profiler.del
        profiler.del = 0
        profiler.c = 0
        profiler.mem = collectgarbage("count", 2)
    end
end
--# Gaussian
Gaussian = {vs = [[
uniform mat4 modelViewProjection;
uniform vec2 blurRadius;

attribute vec4 position;
attribute vec2 texCoord;

varying vec2 vBlurCoords[5];

void main()
{
    gl_Position = modelViewProjection * position;
    vBlurCoords[0] = texCoord;
    vBlurCoords[1] = texCoord + blurRadius * 1.407333;
    vBlurCoords[2] = texCoord - blurRadius * 1.407333;
    vBlurCoords[3] = texCoord + blurRadius * 3.294215;
    vBlurCoords[4] = texCoord - blurRadius * 3.294215;
}
]],
--optimised fragment shader
fs = [[
precision mediump float;

uniform lowp sampler2D texture;

varying vec2 vBlurCoords[5];

void main()
{
    //weights of a 9-texel Gaussian (sigma = 2) folded into 5 linearly-interpolated taps; they sum to 1
    // gl_FragColor = vec4(0.0);
    gl_FragColor = texture2D(texture, vBlurCoords[0]) * 0.204164;
    gl_FragColor += texture2D(texture, vBlurCoords[1]) * 0.304005;
    gl_FragColor += texture2D(texture, vBlurCoords[2]) * 0.304005;
    gl_FragColor += texture2D(texture, vBlurCoords[3]) * 0.093913;
    gl_FragColor += texture2D(texture, vBlurCoords[4]) * 0.093913;
}]]}
Here’s a Gaussian kernel calculator, if you want to experiment with weightings:
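For reference, the shader's offset and weight constants can be reproduced in a few lines of Python. This is my own sketch, not from the original post: it assumes a 9-texel Gaussian with σ = 2, normalised and then folded into a centre tap plus two pairs of taps placed between texels so that the GPU's linear filtering reads two texels per fetch.

```python
import math

def linear_gaussian_taps(sigma, radius):
    """Fold a (2*radius+1)-texel Gaussian into a centre tap plus paired
    between-texel taps for linear sampling. Returns (offset, weight)
    pairs for one side of the kernel; the centre tap comes first."""
    g = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(radius + 1)]
    total = g[0] + 2.0 * sum(g[1:])
    g = [v / total for v in g]          # normalise: all 2*radius+1 texels sum to 1
    taps = [(0.0, g[0])]                # centre tap
    for i in range(1, radius, 2):       # merge neighbouring texels i and i+1
        w = g[i] + g[i + 1]
        taps.append(((i * g[i] + (i + 1) * g[i + 1]) / w, w))
    return taps

for offset, weight in linear_gaussian_taps(2.0, 4):
    print(round(offset, 6), round(weight, 6))
# prints:
# 0.0 0.204164
# 1.407333 0.304005
# 3.294215 0.093913
```

Change `sigma` or `radius` to experiment with other weightings; the weight of each mirrored pair counts twice, so centre + 2 × (pair weights) always sums to 1.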
@yojimbo2000 I agree with you, your blur looks better. For doing the 2 passes, would you draw the blur on a single axis, write it to an image, then blur that image on the other axis?
yes, that’s what the code is doing. First the scene you want blurred is drawn to the blur[1] image. That is then drawn, at a quarter of the size, with the horizontal blur shader, to the blur[2] image (the blurRadii table determines whether the blurring is horizontal or vertical). blur[2] is then drawn back to the screen, at full size, with the vertical blurring added. I suspect you could optimize it further by drawing blur[2] to a third buffer, also a quarter the size of the screen, and then drawing that third buffer back to the screen, full-size. I think at the moment the processing saving from going down to a quarter of the size is only gained on the first pass, not the second.
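If anyone wants to try that, here's a minimal Codea-style sketch of the idea. It's my own untested variant, not from the original project, and `blur3` and `final` are hypothetical names; note that blurred[2]'s rect would also need to be sized to the small buffer for this to work.

```lua
-- Hypothetical variant: both blur passes stay at quarter size, and only
-- the finished, already-blurred image is scaled up to the screen.
local ds = 0.5
blur3 = image(WIDTH * ds, HEIGHT * ds) -- third buffer, also down-sampled
final = mesh()                         -- plain mesh, no blur shader
final.texture = blur3
final:addRect(WIDTH/2, HEIGHT/2, WIDTH, HEIGHT) -- stretches blur3 to full screen

-- in draw(), replacing the current second pass:
setContext(blur3)
blurred[2]:draw() -- vertical pass, still at quarter size
setContext()
final:draw()      -- one full-size draw with no blur shader
```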