This is rapidly getting … ridiculous. The short version of what I want to say is:

- Whatever code is needed to make Simeon’s example work, please implement it and get it into the beta - I want to work with it.
- If I use that code and draw two rectangles, one in front of the other from the viewer’s point of view, but list them in the *wrong* order in my code, do they come out the right way around or the wrong way around? What if they are part of the same mesh?
- Don’t refer to “matrix multiplication” in the documentation.

Okay, here’s the long version.

I am a mathematician. I know that you know that. I’m just reminding you. Moreover, I am teaching about this stuff *this semester*. So it’s not that it’s something I half remember from when I learnt it. It’s stuff that I *know*.

As an end user of Codea, I don’t care *where* the stuff is taking place - whether it is in Codea, in GL, or whether the iPad sends the data away to a support centre in the middle of Birmingham where people sit and work out the transformation by hand. What I care about is this: I specify a coordinate `(3,4,5)` and something is drawn on the screen, and I want to know *what went on in the meantime*. Take a look at page 23 of today’s lecture (6th March, *beamer* version). Question 3 on that slide is what I’m trying to work out here.

What I would like to happen is “Perspective Projection” (which I would call *Stereographic Projection*). According to the OpenGL FAQ that Dylan linked to, this is a composition of several steps:

- Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.

- Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.

- Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.

- Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.
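For concreteness, the four steps above can be sketched numerically. The matrices here are hypothetical placeholders (a simple translation and a textbook perspective matrix), not whatever Codea or GL actually uses:

```python
import numpy as np

# Hypothetical ModelView matrix: translate the object 10 units along -z.
modelview = np.array([
    [1.0, 0.0, 0.0,  0.0],
    [0.0, 1.0, 0.0,  0.0],
    [0.0, 0.0, 1.0, -10.0],
    [0.0, 0.0, 0.0,  1.0],
])

# Hypothetical Projection matrix: a standard perspective frustum,
# near = 1, far = 100 (placeholder values).
n, f = 1.0, 100.0
projection = np.array([
    [n,   0.0,  0.0,             0.0],
    [0.0, n,    0.0,             0.0],
    [0.0, 0.0, (f + n) / (n - f), 2 * f * n / (n - f)],
    [0.0, 0.0, -1.0,             0.0],
])

def pipeline(obj_xyz, viewport_w=1024, viewport_h=768):
    obj = np.append(np.asarray(obj_xyz, dtype=float), 1.0)  # homogeneous, w = 1
    eye = modelview @ obj      # Step 1: Object -> Eye (matrix multiplication)
    clip = projection @ eye    # Step 2: Eye -> Clip (matrix multiplication)
    ndc = clip[:3] / clip[3]   # Step 3: perspective divide (NOT a matrix)
    # Step 4: NDC (in [-1,1]) scaled and translated by the viewport.
    win_x = (ndc[0] + 1) / 2 * viewport_w
    win_y = (ndc[1] + 1) / 2 * viewport_h
    return win_x, win_y, ndc[2]

print(pipeline((3, 4, 5)))
```

Note that only Steps 1 and 2 are matrix multiplications; Step 3 is a division by the fourth coordinate, and Step 4 is a fixed scale-and-shift.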

Now, some of those are happening currently. Exactly which, I’m not sure. What is fairly clear is that Step 3, Clip Coordinates to Normalised Device Coordinates, is **not happening**.

In the discussion, various people have referred to the *matrices* involved. Simeon has said that he will provide the ability to set them directly. This is great, but - and this is my key point - *this has nothing to do with Step 3*. Step 3 is not implemented by anything remotely resembling “matrix multiplication”.
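A quick sanity check of that claim: any matrix multiplication is a linear map, so it must satisfy M(a + b) = M(a) + M(b). The divide-by-w step fails that test (a toy check, nothing to do with Codea’s internals):

```python
def perspective_divide(v):
    """Step 3: map clip coordinates (x, y, z, w) to NDC (x/w, y/w, z/w)."""
    x, y, z, w = v
    return (x / w, y / w, z / w)

a = (1.0, 0.0, 0.0, 1.0)
b = (1.0, 0.0, 0.0, 1.0)
summed = tuple(p + q for p, q in zip(a, b))  # (2.0, 0.0, 0.0, 2.0)

# Linearity would require these two to agree. They don't:
lhs = perspective_divide(summed)  # (1.0, 0.0, 0.0)
rhs = tuple(p + q for p, q in zip(perspective_divide(a),
                                  perspective_divide(b)))  # (2.0, 0.0, 0.0)
print(lhs, rhs, lhs == rhs)
```

So no choice of 4x4 matrix, however cleverly set, can implement the perspective divide on its own.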

As far as implementation is concerned, Simeon’s picture and code demonstrate that whatever he has *said*, his intention is to enable Step 3. So in terms of *code* I’m satisfied (though I would like to know about the ordering - that’s a follow-up question, and one that I can test when the above is in beta).

However, code isn’t everything. There’s also the documentation to think of. Moreover, the link between a geometric transformation and the 4x4 matrix implementing it is not, I suspect, the simplest thing for people to understand. So there is great potential for confusion here, and good documentation could go a long way towards alleviating it.
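As an example of the kind of thing the documentation would need to spell out: even a plain translation - which is not a linear map on 3-space - only becomes a matrix multiplication after passing to homogeneous coordinates with w = 1. This is a standard construction, sketched here with made-up numbers:

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """4x4 homogeneous matrix translating points by (tx, ty, tz)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

p = np.array([3.0, 4.0, 5.0, 1.0])     # the point (3,4,5), with w = 1
q = translation_matrix(1, 2, 3) @ p    # matrix multiplication does the shift
print(q[:3])
```

The w = 1 convention is exactly what the later perspective divide exploits, which is why conflating “the matrices” with the whole process is so easy to do and so misleading.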

It would be very wrong to refer to the *entire* process as “matrix multiplication”. And it would be equally wrong to write the documentation in such a way that it could be misconstrued as saying this (in particular, the end user is not going to care about the distinction between what Codea does and what GL does).

I know this is just “mathematician being pedantic”. But there’s a reason. The ideal way to write mathematics is not in such a way that it be clear, but in such a way that it not be unclear. The idea is that it should be very, very hard to misunderstand the mathematics. That’s not so different from writing documentation.