Shaders?

Hey All, I am super excited for 1.5, but I do have one question…
What exactly is a shader? How do you use it? What language is it in?
I have played around with paragraf and created some cool effects; will it be like that?

Here is someone else’s answer. Yes jordan, I am lazy; I could answer you, but I don’t have time, sorry :slight_smile:

The shortest way of saying it is that they’re (usually small) programs that perform calculations on individual elements and produce some information related to that element. An “element” can be a vertex, a pixel, or a single polygon. The reason we need lots of power is simply that these elements can be really small and there can be a lot of them.
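To make that concrete, here’s about the smallest useful example I can sketch: a GLSL ES fragment program that runs once for every pixel it covers and just darkens whatever it reads from a texture. (The variable names are illustrative, not from any particular library.)

```glsl
// Minimal fragment program: executed once per pixel (fragment).
// Illustrative sketch only; the names are not tied to any engine.
precision mediump float;

uniform sampler2D texture;   // the input image
varying vec2 vTexCoord;      // this pixel's interpolated texture coordinate

void main()
{
    // Read this pixel's color and darken it by 20%.
    vec4 col = texture2D(texture, vTexCoord);
    gl_FragColor = vec4(col.rgb * 0.8, col.a);
}
```

Multiply that tiny bit of work by every pixel on the screen, dozens of times a second, and you can see where the power goes.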

In the context of GPUs, it’s a little more accurate to refer to them as vertex and fragment programs (as opposed to vertex and pixel shaders), which is the OpenGL terminology, because the term “shader” can have some alternate meanings in the offline rendering world. The Unreal engine, too, used the word “shader” in much that same sense in the era prior to programmable hardware.

Quote:

I always get the impression shaders are an icing-on-the-cake type of thing. That they do small effects, little flashy things, side effects, heat haze coming off a gun a la Gears of War. Nothing major seems to be attributed to them, only minor things, so why are they pretty much the centerpiece of GPUs today, then?

Umm… actually, everything you see on the screen is run through a handful of shaders – each and every pixel on every frame, including many you can’t see. All illumination, all transformation, all shadowing, all post-processing, etc. What you mentioned is just an example of a post-processing effect, and it’s hardly the most outstanding one. Technically speaking, this was always true; it’s just that all the “shaders” used to be predetermined, and we as programmers could only do things like adjust their parameters. Nowadays we have the ability to explicitly program them to our liking, and though this programmability isn’t without limitations, we pretty much never do without them whenever rendering matters. With each new generation, the flexibility increases, as does the power.

And the example you bring up is the sort of thing that comes up in discussion a little more often because so many other things are very much the status quo. For instance, you have normal mapping, which is a feature that rests on per-pixel illumination, for which pixel shading is an absolute requirement… but it’s one of those things that is pretty much expected of every game that comes out, so nobody’s going to say that somebody has just revolutionized normal mapping.
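If you’re curious what that looks like, here’s a rough sketch of the fragment half of normal mapping in GLSL ES. It assumes the vertex program has already handed over the light direction in tangent space as vLightDir – a common setup, but the names and details are illustrative, not any particular engine’s.

```glsl
// Per-pixel diffuse lighting with a normal map (sketch).
precision mediump float;

uniform sampler2D diffuseMap;   // base color texture
uniform sampler2D normalMap;    // tangent-space normals stored as RGB
varying vec2 vTexCoord;
varying vec3 vLightDir;         // light direction in tangent space (assumed)

void main()
{
    // Unpack the stored normal from the [0,1] range back into [-1,1].
    vec3 n = normalize(texture2D(normalMap, vTexCoord).rgb * 2.0 - 1.0);

    // Standard Lambert term, evaluated per pixel instead of per vertex.
    float diffuse = max(dot(n, normalize(vLightDir)), 0.0);

    vec4 base = texture2D(diffuseMap, vTexCoord);
    gl_FragColor = vec4(base.rgb * diffuse, base.a);
}
```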

Quote:

Once I heard shaders referred to as a way to “fake” textures without actually using textures. This made sense to me, because this was something big and important, at least enough to justify building GPUs around them.

While that’s theoretically possible, no GPU is powerful enough to make it really worthwhile for a wide variety of cases. They’re not that flexible, either, so again, there’s not much you can do procedurally that would hold a candle to an artist’s skill. Pretty much 100% of the realtime, running-on-GPU shaders you find in practical use utilize texture images (when they are of concern – obviously, there are cases where textures don’t matter) to acquire at least some of their information.
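For a sense of what “procedural” means here, this is about the simplest possible example: a checkerboard computed purely from coordinates, with no texture image anywhere. (A toy sketch; real procedural shading – wood, marble, noise – gets expensive very quickly, which is the point above.)

```glsl
// Procedural "texture": an 8x8 checkerboard computed from
// coordinates alone - no texture image is sampled at all.
precision mediump float;

varying vec2 vTexCoord;

void main()
{
    // Scale into an 8x8 grid and test the parity of the cell.
    vec2 cell = floor(vTexCoord * 8.0);
    float checker = mod(cell.x + cell.y, 2.0);
    gl_FragColor = vec4(vec3(checker), 1.0);
}
```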

In the offline rendering world, though, you’ll see a lot more procedural texturing because they have the flexibility to use more complex and better-looking models to simulate certain things.

Quote:

And another thing: I have heard shaders described as simply a way to color pixels. This makes me wonder, since pixels are so small that they could only possibly be one or two colors you could actually discern, so it just doesn’t make sense that you could spend massive amounts of power on coloring a pixel.

That’s exactly WHY you want power to color pixels. Because they’re small, yes, you can only output one color (not that the data location for a “color” actually needs to hold a color)… but of course, you can gather information about pixels of all sorts and use that to drive how you color each pixel. And the point of coloring something that small is that it’s the smallest thing you can color (sort of)… so you’re able to perform operations that modify color at the finest possible level of detail. The massive power is there not so you can compute the color of a pixel, but the colors of ALL the pixels on the screen. And the more you want to put into that computation, the more power you need, because there are millions of pixels to color several dozen times a second.
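As a sketch of that “gather information” idea, here’s a fragment program that samples its neighbors and averages them – a crude 3x3 box blur. The texelSize uniform (one pixel’s size in texture coordinates) is an assumption: something the host app would have to supply.

```glsl
// Crude 3x3 box blur: each pixel's output is driven by
// information gathered from the pixels around it.
precision mediump float;

uniform sampler2D texture;
uniform vec2 texelSize;   // (1/width, 1/height); assumed to be set by the app
varying vec2 vTexCoord;

void main()
{
    vec4 sum = vec4(0.0);
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            sum += texture2D(texture,
                vTexCoord + vec2(float(dx), float(dy)) * texelSize);
        }
    }
    gl_FragColor = sum / 9.0;   // average of the 9 samples
}
```

Nine texture reads per pixel, times a few million pixels, times several dozen frames a second – that’s the arithmetic behind “massive amounts of power”.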

Thank you so much for that, but I also want to know how to use them in Codea.

Yes - the code of the shaders is very, very much like paragraf - the differences will be in how Codea passes parameters into and out of the shader.

The “easy” way to think of it is that on newer Graphics Processing Units (GPUs), you can write a program in a little C-like language (“what language is it in” is kinda circular - it’s GLSL, the “GL Shading Language”; that’s its actual name) that gets executed, in parallel, for every vertex in a 3D model (the ‘vertex shader’) and for every pixel in a given triangle (the ‘fragment shader’), super super fast. This little program you write becomes part of the rendering pipeline, so (for example) you don’t need to do everything in Lua on the CPU (which is, by comparison, vastly more flexible, with much more memory and features - and hundreds or thousands of times slower).
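Here’s a minimal vertex/fragment pair to show the two halves. The attribute and uniform names follow common GLSL ES conventions, not necessarily what Codea will bind them to - treat this as a sketch.

```glsl
// --- Vertex program: runs once per vertex ---
attribute vec4 position;            // vertex position (name assumed)
attribute vec2 texCoord;            // vertex texture coordinate (name assumed)
uniform mat4 modelViewProjection;   // transform supplied by the host app
varying vec2 vTexCoord;             // handed on to the fragment program

void main()
{
    vTexCoord = texCoord;
    gl_Position = modelViewProjection * position;
}

// --- Fragment program: runs once per pixel in each triangle ---
precision mediump float;
uniform sampler2D texture;
varying vec2 vTexCoord;

void main()
{
    gl_FragColor = texture2D(texture, vTexCoord);
}
```

The varying (vTexCoord here) is how the two halves talk: the vertex program writes it, the GPU interpolates it across the triangle, and the fragment program reads the interpolated value.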

In practice, it looks like Codea 1.5 will have a library of stock shaders - ripple effects, shading (like “sepia tone”, for example - think Instagram effects), lighting, and so on - and you’ll also be able to preview them and put them in your code in much the same way you do with the stock sprites now. Then, if you want, you can code your own shaders, or take the stock ones and copy/modify them. In many cases, you’ll probably be able to use a stock shader - so you won’t need to know GLSL at all if you’re trying to do a stock kind of normal thing (I’m thinking like 3D lighting) - you’ll set some parameters, pick the shader from the library, and boom.
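To give a taste of what one of those stock effects boils down to, here’s the standard sepia-tone recipe as a GLSL ES fragment program - luminance first, then a tint. This is the textbook version, not necessarily what will ship in Codea’s library.

```glsl
// Sepia tone: convert to luminance, then tint toward brown.
precision mediump float;

uniform sampler2D texture;
varying vec2 vTexCoord;

void main()
{
    vec4 col = texture2D(texture, vTexCoord);

    // Perceptual luminance weights (Rec. 601).
    float lum = dot(col.rgb, vec3(0.299, 0.587, 0.114));

    // Push the gray value toward a warm sepia brown.
    vec3 sepia = lum * vec3(1.2, 1.0, 0.8);
    gl_FragColor = vec4(clamp(sepia, 0.0, 1.0), col.a);
}
```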

The nice part about GLSL is that it’s not really “Apple” or “Codea” - it’s part of the OpenGL standard, so what you learn (and the resources available) should be very applicable to writing and editing shaders in Codea. The difference between this and, say, paragraf is that the ability to write shaders is paired with a first-class, self-hosted “normal” programming environment - a first on the iPad, from what I can tell. Shaders are really powerful, and so is normal CPU programming, but pairing them - letting the GPU do what it does well, and the CPU do what it does well - is a “force multiplier”: you get far more out of it than 1 + 1. It is pretty darn cool. It’s also a giant addition, and a ton of work for TLL - I gotta give them a hand; they somehow manage to keep topping my highest expectations. I would have settled for some simple hooks into GLSL, but they’re supporting it as a full-blown programming environment.