How Codea composites when drawing

I’ve read the in-app reference material on premultiplied alpha format images and Codea’s use of premultiplied drawing, but I’ve not yet got to grips with the behaviour illustrated by the example code below:

```lua
function setup()
    spriteMode(CORNER)
    iparameter("ambient", 0, 255)
    short = math.min(WIDTH, HEIGHT)
    length = math.floor(short / 3)
    img1 = image(length, length)
    img2 = image(length, length)

    c = color(127, 0, 0, 127) -- ~50% red, ~50% opacity

    for j = 1, length do
        for i = 1, length do
            img1:set(i, j, c)
        end
    end

    setContext(img2)
    sprite(img1, 0, 0)
    setContext()
    print("Premultiplied?")
    print("img1:", img1.premultiplied)
    print("img2:", img2.premultiplied)

    print("img2 color:")
    print(img2:get(1, 1)) -- ~25% red (premultiplied), ~25% opacity
end

function draw()
    background(ambient)
    sprite(img1, length/3, HEIGHT/2 - length/2)
    sprite(img2, WIDTH - length - length/3, HEIGHT/2 - length/2)
end
```

Any pointers to educational material would be greatly appreciated.

I do not understand the following. Can anyone help by explaining?

In the algebra below, I use the range 0 to 1 for each channel (r, g, b, a), rather than the range 0 to 255.

I understand that a new `image` userdata value has transparent black pixels (0, 0, 0, 0) and `.premultiplied = false`.

I would expect that overlaying any colour S = (r, g, b, a) on transparent black D = (0, 0, 0, 0) would have the result: S (irrespective of whether S is taken to be in straight format or premultiplied format).

For example (using the notation here):

```
Da'  = Sa  + Da  * (1 - Sa)
Dca' = Sca + Dca * (1 - Sa)
```

If Da = 0 and Dca = 0 then:

```
Da'  = Sa
Dca' = Sca
```
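This expectation is easy to check with the arithmetic alone. Here is a minimal plain-Lua sketch (no Codea API; `over` is my name for the compositing step):

```lua
-- 'over' compositing with both colours in premultiplied format,
-- channels in the range 0 to 1
local function over(sca, sa, dca, da)
    local newDca = sca + dca * (1 - sa)
    local newDa  = sa + da * (1 - sa)
    return newDca, newDa
end

-- ~50% red at ~50% opacity over transparent black (0, 0, 0, 0)
dca, da = over(0.25, 0.5, 0, 0)
print(dca, da) -- 0.25  0.5: the source colour comes through unchanged
```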

In the following two examples, I create an image `img1 = image(100, 100)`, use `img1:set(..., color(r, g, b, a))` to set its pixels. I then use `setContext()` and `sprite()` to draw `img1` on top of a transparent black image.

(1) If I create `img1` and set `img1.premultiplied = true` before or after using `img1:set()`, then the result is `(r, g, b, a)` - as expected.

(2) If I create `img1` and do not set `img1.premultiplied` (false by default), then the result is `(r*a, g*a, b*a, a*a)`. I could understand straight `(r, g, b, a)` being first converted to premultiplied `(r*a, g*a, b*a, a)`, but I do not understand the ‘alpha squared’.

After a bit of research, I think I now understand how Codea composites and why I get ‘alpha squared’ in the second example in my comment above - thanks to the Codea Runtime Library and the OpenGL ES 2.0 reference. If a more experienced user of Codea/OpenGL ES could check that I am on the right lines, I would be grateful.

As a shorthand, I’ll use ‘x’ and ‘y’ to represent a colour (in straight or premultiplied format), where ‘x’ is the red, green and blue channels and ‘y’ is the alpha channel. I’m also still using the range 0 to 1 rather than the range 0 to 255. So, for a straight format source colour (x, y) = (Sc, Sa), and for a premultiplied format source colour (x, y) = (Sca, Sa), where Sca = Sc * Sa.

In each case, the destination (canvas) colour and the output of blending are in a pre-multiplied format - that is, (Dca, Da) and (Dca’, Da’).

If the source colour is in a premultiplied format, then the blending function for [‘over’ alpha compositing](http://en.wikipedia.org/wiki/Alpha_compositing) is:

```
Dca' = 1 * x + (1 - Sa) * Dca
Da'  = 1 * y + (1 - Sa) * Da
```

If the source colour is in a straight format, then the equivalent blending function is:

```
Dca' = Sa * x + (1 - Sa) * Dca
Da'  = 1  * y + (1 - Sa) * Da
```
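Both variants can be modelled in a few lines of plain Lua (my function names; this sketches the blend arithmetic only, not the GPU pipeline), and over a transparent black destination they agree:

```lua
-- premultiplied source: factors (1, 1 - Sa), with y = Sa
local function blendPremult(x, y, dca, da)
    return 1 * x + (1 - y) * dca, 1 * y + (1 - y) * da
end

-- straight source: factor Sa for the colour channels, 1 for alpha
local function blendStraight(x, y, dca, da)
    return y * x + (1 - y) * dca, 1 * y + (1 - y) * da
end

-- the same ~50% red, ~50% opacity colour in each format,
-- drawn over transparent black
p1, p2 = blendPremult(0.25, 0.5, 0, 0) -- premultiplied (Sca, Sa)
s1, s2 = blendStraight(0.5, 0.5, 0, 0) -- straight (Sc, Sa)
print(p1, p2) -- 0.25  0.5
print(s1, s2) -- 0.25  0.5
```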

In the Codea Runtime Library, the implementation of the `sprite()` function sets
a blend mode of BLEND_MODE_PREMULT if the source image’s premultiplied flag is set and
a blend mode of BLEND_MODE_NORMAL otherwise.

In the case of BLEND_MODE_PREMULT, this results in a call to OpenGL:

`glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`

That is consistent with the blending function above: GL_ONE corresponds to the
factor 1 and GL_ONE_MINUS_SRC_ALPHA corresponds to the factor (1 - Sa).

In the case of BLEND_MODE_NORMAL, this results in a call to OpenGL:

`glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)`

That is not consistent with the blending function above: GL_SRC_ALPHA
corresponds to the factor Sa, but for Da’ the factor required is 1 (not Sa). Using the factor Sa results in:

```
Dca' = Sa * x + (1 - Sa) * Dca
Da'  = Sa * y + (1 - Sa) * Da
```

In cases where (Dca, Da) = (0, 0), this results in the output (Sca, Sa * Sa) - with the ‘alpha squared’ that I had noted.
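Plugging the colour from the earlier example into that blending function reproduces the observation. A plain-Lua check (my function name; 127/255 for the ~50% channel values):

```lua
-- BLEND_MODE_NORMAL as actually issued: GL_SRC_ALPHA applied to
-- the colour channels *and* the alpha channel (arithmetic only)
local function blendNormal(x, y, dca, da)
    return y * x + (1 - y) * dca, y * y + (1 - y) * da
end

local sc = 127 / 255 -- straight red channel
local sa = 127 / 255 -- alpha
dca2, da2 = blendNormal(sc, sa, 0, 0)
-- dca2 = Sc * Sa: the premultiplied red one would expect
-- da2  = Sa * Sa: the 'alpha squared'
print(dca2, da2)
```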

What I do not understand is why the following call is not made in the case of BLEND_MODE_NORMAL:

`glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`

OpenGL’s `glBlendFuncSeparate` allows different (separate) factors to be applied to the red, green and blue channels from those applied to the alpha channel.
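Modelling those separate factors in plain Lua shows why this would remove the anomaly (my function name; a sketch of the arithmetic only):

```lua
-- glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
--                     GL_ONE,       GL_ONE_MINUS_SRC_ALPHA)
local function blendSeparate(x, y, dca, da)
    return y * x + (1 - y) * dca, -- colour: source factor Sa
           1 * y + (1 - y) * da   -- alpha:  source factor 1
end

-- straight ~50% red, ~50% opacity over transparent black
c1, c2 = blendSeparate(0.5, 0.5, 0, 0)
print(c1, c2) -- 0.25  0.5: premultiplied colour, un-squared alpha
```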

That’s a good find on the use of SRC_ALPHA / 1 - SRC_ALPHA blending. I can see how it reduces to the result you point out.

The reason we use `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)` is simply that it is the “recommended” way to blend transparent fragments.

E.g. http://www.opengl.org/sdk/docs/man/xhtml/glBlendFunc.xml contains the following:

> Transparency is best implemented using blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) with primitives sorted from farthest to nearest.

Though re-reading that now, I suspect it doesn’t take texture render targets into account!

Is the `Da' = Sa * y + (1 - Sa) * Da` computation causing a visual issue when rendering to the screen? The destination framebuffer generally has an alpha of 1, but when rendering to an image context this could result in an incorrectly computed alpha.

I’m not sure whether `glBlendFuncSeparate` is slower than `glBlendFunc` (rendering blended fragments is one of the biggest performance hits on the iPad), but we could certainly enable it conditionally when rendering to an image context with `BLEND_MODE_NORMAL`. Would this fix your issue?

My issue was one of principle rather than practice (for me). It seemed to me that the following should be equivalent: (1) using `sprite()` to draw a non-premultiplied (‘straight’) format partially-transparent image directly to the screen; and (2) drawing the same image onto an empty ‘(0,0,0,0)’ image and then drawing the result to the screen. I did not understand why they were behaving differently (the end result had different opacity).

As far as I can see, the only time you might have a straight format image is when you build one in code from scratch (using `image()` and `myImage:set()`). (I believe that `readImage()`, for example, converts straight format image files into premultiplied images.) If that is right, all I have to do is build such images in a premultiplied format from the outset.

As an aside, I have been thinking whether there is an efficient way to convert an existing straight format image into a premultiplied format image using some combination of `setContext(img)`, `tint()`, `.premultiplied` and `sprite()` - but I have not found one.

The following method takes about 5.4E-06 seconds per pixel:

```lua
function toPremultiplied(imgIn)
    if imgIn.premultiplied then return imgIn end
    local imgOut = image(imgIn.width, imgIn.height)
    for j = 1, imgIn.height do
        for i = 1, imgIn.width do
            local r, g, b, a = imgIn:get(i, j)
            -- get()/set() use 0-255 channel values, so scale the
            -- premultiplied products back down by 255
            imgOut:set(i, j, r * a / 255, g * a / 255, b * a / 255, a)
        end
    end
    imgOut.premultiplied = true
    return imgOut
end
```

You’re right that the two use cases you mention should result in identical images. It’s a bug that they don’t. I’ll change to `glBlendFuncSeparate` for the normal blend mode and see how it performs.