Please can we have more graphical stuff!

out of order:

I didn’t do anti-aliasing for speed reasons - it is, as implemented, quite slow enough already! :slight_smile: Besides, as you’re finding, it turns out there’s no single “there, now it’s anti-aliased” step - it depends immensely on the context of where you’re drawing, and how you composite things.

The “can we one-way scale” was aimed at non-right triangles, specifically those with acute angles (although, thinking about it, a triangle with an acute angle is, by definition, a triangle with at least one obtuse angle that could be drawn by 2 overlapping rectangles, no?). If I can draw a rect, rotate it 45 degrees, transform it so it’s bisected by the edge of the image() - then scale in one axis - I can get an arbitrary angle. Again, it’s the math I’ve not bothered to work out, but that’s simple geometry, or easier.
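For what it’s worth, a quick sketch of that geometry, assuming the scale is applied along the y axis only: the 45-degree edge has slope 1, and scaling y by a factor k turns it into an edge of slope k, i.e. an angle of arctan(k) with the x axis - so letting k run over (0, ∞) covers every angle between 0° and 90°.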

Of course, if you can shear, then any triangle can be obtained by shearing a 45-90-45 triangle, but then it’s a question of how nicely the shearing algorithm plays with things like anti-aliasing.

heh - first things first. Triangles, then anti-aliased triangles. As you’ve noted, for a single-color polygon, just blurring the edges a bit does the trick for quick-and-dirty.

(or - more accurately: Me doing a lame implementation, someone else doing it “right”, TLL implementing it natively :slight_smile: - and I’m ok with that!)

Oh, and not “getting” GLSL - yeah, it’s a weird concept, because it’s another drastic architectural change in how code is written.

In GLSL, you’re writing code to modify a single pixel (or vertex - vertex code is still done in “shader language”, because the two historic pipelines have merged into one coding environment). The idea is that your single-pixel (or single-vertex) code is run massively in parallel, on all pixels (or vertices) at the same time.
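A minimal sketch of what that per-pixel code looks like (a GLSL ES fragment shader; the names u_color and v_texCoord are placeholders I’ve picked, nothing Codea-specific):

```glsl
// Runs once for EVERY pixel being drawn, all invocations in parallel.
precision mediump float;

uniform vec4 u_color;     // a value set once from the CPU side
varying vec2 v_texCoord;  // interpolated per pixel by the pipeline

void main()
{
    // gl_FragColor is the one pixel this invocation is responsible for:
    // here, fade the color from black to full strength across the x axis.
    gl_FragColor = vec4(u_color.rgb * v_texCoord.x, u_color.a);
}
```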

It’s easier for me to think about doing it with a vertex - how would you code, say, rotation about the origin? For a given vertex, easy - calculate the angle, sin/cos, multiply - boom, new vertex coordinate. Do that for every vertex at once, and your bunny rabbit is rotating about the origin.
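As a hedged sketch, the vertex version of “rotate about the origin” might look like this (u_angle and a_position are names I’ve made up; the body runs once per vertex, for all vertices in parallel):

```glsl
uniform float u_angle;      // rotation angle, set by the CPU per frame
attribute vec4 a_position;  // the single vertex this invocation handles

void main()
{
    float s = sin(u_angle);
    float c = cos(u_angle);
    // Standard 2D rotation of (x, y) about the origin.
    vec2 p = vec2(c * a_position.x - s * a_position.y,
                  s * a_position.x + c * a_position.y);
    gl_Position = vec4(p, a_position.z, a_position.w);
}
```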

It’s harder to grasp because it’s a departure from normal procedure-oriented code - you do not control the main execution loop (much like Codea and Processing, actually). You’re just writing “the guts” - the part that’s different.

Here’s the kicker - that GLSL shading language? Essentially a small, C-like language, with some pre-defined environmental things. Point being - you can write whatever you want in it, and it’ll get executed in parallel a gajillion times a second. For some things (any calculation that is “local” in a sense I’m having trouble describing, and that would normally be iterated), you get this cool five-orders-of-magnitude speedup, which is fun.
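One example of a “local” calculation in that sense - converting an image to grayscale, where each output pixel depends only on the matching input pixel. On the CPU you’d loop over every pixel; in a fragment shader the loop is implicit and parallel (u_texture and v_texCoord are assumed names):

```glsl
precision mediump float;

uniform sampler2D u_texture;  // the source image
varying vec2 v_texCoord;      // where this pixel sits in that image

void main()
{
    vec4 src = texture2D(u_texture, v_texCoord);
    // Standard luminance weights; the result depends only on this one pixel.
    float gray = dot(src.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(gray), src.a);
}
```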

Here’s another example other than Life - GPUs and GLSL are used to massively parallelize the cryptographic work needed for Bitcoin (Bitcoins are created by doing a massive amount of computation in a provable manner). No graphics at all - just a calculation you need to repeat on different data a massive number of times.

“But GLSL is for graphics!” - and I laugh and laugh and laugh.

More external links - http://mrob.com/pub/comp/screensavers/index.html

Quartz Composer is awesome - and I didn’t realize it also exposed GLSL (I don’t think it always did?) - so, point being, there’s a Mandelbrot fractal implemented as a GLSL shader.

Remember when drawing Mandelbrot fractals was slow? Not when a GPU is drawing thousands of pixels in parallel (and each instance is still an order of magnitude or more faster than the CPU).

Point being - THIS is what we can do with GLSL if they can expose it to us. We go from “ooh, drawing is slow because we have to iterate through it pixel by pixel” to “wow - zooming into the Mandelbrot in realtime…”
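To make that concrete, here’s a rough sketch of a Mandelbrot fragment shader (not the one mentioned above - just the general shape of the thing; u_center, u_scale, and u_resolution are uniforms I’ve assumed for the zoom):

```glsl
precision highp float;  // float precision limits how deep you can zoom

uniform vec2  u_center;     // point in the complex plane we're centered on
uniform float u_scale;      // size of one pixel in the complex plane
uniform vec2  u_resolution; // viewport size in pixels

void main()
{
    // Map this pixel to a point c in the complex plane.
    vec2 c = u_center + (gl_FragCoord.xy - 0.5 * u_resolution) * u_scale;
    vec2 z = vec2(0.0);
    float escape = 0.0;

    // Escape-time iteration, run independently for every pixel.
    for (int i = 0; i < 64; i++) {
        // z = z*z + c, in complex arithmetic
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
        if (dot(z, z) > 4.0 && escape == 0.0) {
            escape = float(i) / 64.0;
        }
    }

    // Points inside the set stay black; outside is shaded by escape time.
    gl_FragColor = vec4(vec3(escape), 1.0);
}
```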

(Thought experiment: Draw a circle in GLSL? “set pixel if dist to center = radius”. filled? “set pixel if dist to center < radius”. And it’s all in parallel. You can see why this is exciting…)
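In hedged shader form (u_center and u_radius are names I’ve chosen; coordinates are in pixels):

```glsl
precision mediump float;

uniform vec2  u_center; // circle center, in pixel coordinates
uniform float u_radius; // radius, in pixels

void main()
{
    // "set pixel if dist to center < radius", evaluated for every pixel at once
    if (distance(gl_FragCoord.xy, u_center) < u_radius) {
        gl_FragColor = vec4(1.0);  // inside: white
    } else {
        discard;                   // outside: leave the pixel alone
    }
}
```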

Seems cool indeed. But does it really not need to use the CPU to calculate, say, the pixel’s distance to the center in your example?

It does not, and that’s what’s so exciting!

You have a CPU (general-purpose computing) and a GPU (graphical computing). In GLSL, the program (that small C-like language I talked about above) is running on the GPU - not on the CPU! The GPU is optimized for just this sort of processing, so it’s usually very parallel (i.e. it has many cores all running your code), and it’s been set up for certain types of calculations (i.e. things like “what’s the distance from here to here?”). So you may have 100 (or 1000 or more - I don’t know the GPU on the iPad) copies running at once, and each may be faster than the CPU by a factor of 10 or more - AND while this is happening, the CPU is free to go on its way and do things the GPU can’t (I/O and access to large amounts of memory being examples of that).

Upshot being, by offloading work to the GPU, you can often accomplish the same goal thousands of times (or more) faster than a traditional iterative CPU-based approach.

I should mention more about the GPU - it’s not a general purpose CPU. It is, for these sorts of calculations, much much faster.

Certain types of math (trigonometric calculations, matrix math, and such) are commonly used in graphics; the GPU is optimized for those sorts of calculations, even to the point where it has common operations hardwired (i.e. it’s not executing microcode - it has hardware that can do the calculation directly). The tradeoff is that other types of work (access to main memory, for example) are either slow or impossible - but that’s what the CPU is for.
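A tiny illustration of what “hardwired” buys you: in GLSL, things like distances, dot products, and matrix-times-vector are single built-in operations, not loops you write yourself (this fragment is contrived - it just stuffs the results into a color so they’re used somewhere):

```glsl
precision mediump float;

uniform vec2  u_a;
uniform vec2  u_b;
uniform float u_angle;

void main()
{
    float d = distance(u_a, u_b);      // "what's the distance from here to here?"
    float s = sin(u_angle);
    float c = cos(u_angle);
    mat2  rot = mat2(c, s, -s, c);     // 2D rotation matrix (column-major constructor)
    vec2  p = rot * u_a;               // matrix * vector in one expression
    gl_FragColor = vec4(d, p.x, p.y, 1.0);
}
```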

Another common example: Draw a triangle. In Codea, we’d do it by hand by looking at the vertices of the triangle and iterating through the pixels we’ve decided are inside it, plotting them. Or, shenanigans with drawing and clipping rectangles. If you ask the GPU to draw a triangle, it doesn’t even do the math - it has “triangle drawing” hardware; it says “triangle here” and poof, triangle. Crazy, crazy fast.

So - you’re parallel, and faster.

Popping back out - a fast way to, say, draw a circle may be “hand this GLSL program to the GPU, render a frame, and now composite that onto my image”, rather than manipulating the pixels by hand.

That all sounds fantastic. So … when do I get to do this via Codea???

Heh - this is why I say “no, work on GLSL, not on polygon drawing”. Give us powerful, low-level tools, and we’ll build the rest of it. And it doesn’t get much more powerful (or lower level, frankly) than GLSL.