Suppose I want to make a mesh of a rectangular tree image with a transparent background. I really just want the tree.
I can either
(a) use addRect / create two triangles covering the entire image, or
(b) create a large set of vertices that accurately trace the contours of the tree and omit the surrounding blank pixels.
So (a) has fewer vertices but (b) has fewer pixels
If the mesh is only used once to create a pixel map, after which the vertices are disposed of, then I’m guessing (b) is better, because it rasterizes fewer pixels. But if the vertices are needed for every redraw, then (a) may put less strain on the processor.
Does anyone know enough about this to give me some advice?
I’ve done some rough testing and it seems that the number of vertices does have a direct impact on speed, so there is quite a price to pay for greater accuracy.
That’s what I would expect. Every vertex must go through the vertex shader and every covered pixel must go through the fragment shader, so adding vertices to remove pixels trades more vertex shader invocations against fewer fragment shader invocations. However, the hardware and drivers are probably tuned for workloads weighted heavily towards fewer vertices, so the balance generally favours simpler meshes with fewer vertices.
That’s where things like bump maps come in: you can simulate the lighting of a higher-resolution mesh in the fragment shader while keeping the actual mesh simple, with fewer vertices.
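A minimal GLSL fragment-shader sketch of that idea, assuming a tangent-space normal map and uniform/varying names that are purely illustrative (`baseColor`, `normalMap`, `lightDir`, `vTexCoord` are not from any specific engine):

```glsl
uniform sampler2D baseColor;   // the tree texture
uniform sampler2D normalMap;   // tangent-space normals encoded into [0,1]
uniform vec3 lightDir;         // normalized, in the same space as the normals
varying vec2 vTexCoord;

void main() {
    // Decode the per-pixel normal from [0,1] back to [-1,1].
    vec3 n = normalize(texture2D(normalMap, vTexCoord).rgb * 2.0 - 1.0);
    // Simple Lambertian term: the surface detail comes from the map,
    // not from extra mesh vertices.
    float diffuse = max(dot(n, lightDir), 0.0);
    vec4 color = texture2D(baseColor, vTexCoord);
    gl_FragColor = vec4(color.rgb * diffuse, color.a);
}
```

The mesh can stay as the two-triangle quad from option (a); all the apparent geometric detail is paid for in the fragment shader instead.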
Also, if the tree has a reasonably hard edge at its border (my example images in the other thread didn’t), then a discard in the fragment shader drops the pixels outside the image very well.
Perhaps in the fragment shader you can discard all pixels that are close enough to white.
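A rough sketch of both variants in a GLSL fragment shader, assuming a `baseColor` sampler and illustrative thresholds (0.01 for alpha, 0.95 for “close enough to white” — tune for the actual image):

```glsl
uniform sampler2D baseColor;
varying vec2 vTexCoord;

void main() {
    vec4 color = texture2D(baseColor, vTexCoord);
    // Transparent background: drop the quad's fully transparent pixels.
    if (color.a < 0.01) discard;
    // Baked-in white background: drop pixels whose RGB is all near 1.0.
    if (all(greaterThan(color.rgb, vec3(0.95)))) discard;
    gl_FragColor = color;
}
```

Note that a discarded pixel still runs the fragment shader up to the `discard`, so this saves blending and writes, not the shader invocation itself.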