Why do we need perspective division? - opengl

I know perspective division is done by dividing x,y, and z by w, to get normalized device coordinates. But I am not able to understand the purpose of doing that. Also, does it have anything to do with clipping?

Some details that complement the general answers:
The idea is to project a point (x,y,z) on screen to have (xs,ys,d).
The next figure shows this for the y coordinate.
We know from school that
tan(alpha) = ys / d = y / z
This means that the projection is computed as
ys = d*y/z = y/w
w = z / d
This is enough to apply a projection.
However, in OpenGL you want (xs, ys, zs) to be normalized device coordinates in [-1, 1], and yes, this has something to do with clipping.
The extreme values of (xs, ys, zs) delimit the unit cube, and everything outside it will be clipped.
So a projection matrix usually takes the clipping limits (the frustum) into consideration to build a single transformation that, together with the perspective division, simultaneously applies the projection and maps the projected coordinates, along with z, to normalized device coordinates.
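To make this concrete, here is a minimal C++ sketch of the two steps just described (the function name and the choice of d are mine, purely for illustration):

#include <array>

// Similar-triangles projection: a point at eye-space depth z lands on the
// screen plane at distance d according to ys = d*y/z (equivalently y/w with w = z/d).
std::array<float, 3> project(float x, float y, float z, float d) {
    float w = z / d;             // the divisor a projection matrix would place in w
    return { x / w, y / w, d };  // (xs, ys) on the plane at depth d
}

In a real pipeline this division is not done by hand: the projection matrix writes the appropriate value into w, and the hardware performs the divide (and the clipping against the unit cube) afterwards.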

I mean why do we need that?
In layman's terms: to make perspective distortion work. In a perspective projection matrix, the Z coordinate gets "mixed" into the W output component. So the smaller the value of the Z coordinate, i.e. the closer to the origin, the more things get scaled up, i.e. appear bigger on screen.
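A quick way to observe this "mixing" with GLM and its default OpenGL conventions (just a sketch; the numbers are arbitrary):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 P = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::vec4 clip = P * glm::vec4(1.0f, 1.0f, -5.0f, 1.0f);
    // clip.w == 5.0f: the bottom row (0, 0, -1, 0) of the projection matrix
    // copied -z_eye into w, so the later division by w shrinks x and y
    // in proportion to the point's depth.
}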

To really distill it to the basic concept, and to see why the operation is division (instead of e.g. square root or some such), consider that an object twice as far away should appear with dimensions exactly one half as large. Obtain 1/2 from 2 by... division.
There are many geometric ways to arrive at the same conclusion. A diagram serves as visual proof for this, really.

Dividing x, y, z by w is a "trick" you do with "homogeneous coordinates": it converts an R⁴ vector back to R³ by dividing by the 4th component (or w component, as you said), a process called dehomogenizing.
Why do we use homogeneous coordinates? That topic is a little more involved; I'll try to explain it and hope I do it justice.
However, I will use x1, x2, x3, x4 as the components of a vector instead of x, y, z, w:
Consider a 3x3 Matrix M and column vectors x, a, b, c of R³. x=(x1, x2, x3) and x1,x2,x3 being scalars or components of x.
With the 3x3 matrix you can do all the linear transformations on a vector x that you could do with the linear combination:
x' = x1*a + x2*b + x3*c (1).
(x' is the transformed vector that holds the result of transforming x).
Khan Academy's Linear Algebra course has a section explaining the fact that every linear transformation can be written as a matrix product with a vector.
You can try this out, for example, by putting the column vectors a, b, c into the columns of the matrix M = [a b c].
So with the matrix product you essentially get the upper linear combination:
x' = M * x = [a b c] * x = a*x1 + b*x2 + c*x3 (2).
However, this operation only accounts for rotation, scaling and shearing transformations. The origin (0, 0, 0) will always stay at (0, 0, 0).
To move the origin you need another kind of transformation, called a "translation" (adding a vector to the vector).
Consider the translation column vector t = (t1, t2, t3) and the linear combination
x' = x1*a + x2*b + x3*c + t (3).
With this linear combination you can translate, rotate, scale and shear a vector. As you can see this Linear Combination does actually move the origin vector (0, 0, 0) to (0+t1, 0+t2, 0+t3).
However you can't put this translation into a 3x3 Matrix.
So what Graphics Programmers or Mathematicians came up with is adding another dimension to the Matrix and Vectors like this:
M is a 4x4 matrix and x~ a vector in R⁴ with x~ = (x1, x2, x3, x4). a, b, c, t are also column vectors, now of R⁴ (the last component of a, b, c being 0 and the last component of t being 1; I keep the names the same to show later the similarity between the homogeneous linear combination and (3)). x~ is the homogeneous coordinate of x.
Now watch what happens if we take a vector x of R³ and put it into x~ of R⁴.
This vector will be in homogeneous coordinates in R⁴: x~ = (x1, x2, x3, 1). The last component is simply 1 if it is a point and 0 if it is merely a direction (which couldn't be translated anyway).
So you have the linear combination:
x~' = M * x~ = [a b c t] * x~ = x1*a + x2*b + x3*c + x4*t (4).
(x~' is the result vector when transforming the homogeneous vector x~)
Since we took a vector from R³ and put it into R⁴, the x4 component is 1 and we have:
x~' = x1*a + x2*b + x3*c + 1*t
<=> x~' = x1*a + x2*b + x3*c + t (5).
which is exactly the linear transformation (3) above with the translation t. This is called an affine transformation (linear transformation + translation).
So with a 3x3 matrix and a vector of R³ you couldn't do translations; but by adding another dimension, with a vector in R⁴ and a matrix in R^(4x4), you actually can.
However, when you want to return to R³ you have to divide the first components by the last one, the x4 component (or, in your variable naming, the w component). This is called "dehomogenizing". So let x be the original coordinate in R³, x~ its homogeneous vector in R⁴, and x' the dehomogenized vector in R³:
x' = (x1/x4, x2/x4, x3/x4) (6).
Then x' is the dehomogenized vector of x~.
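A small numeric sketch of (4) and (6) in C++ (the helper types and names are mine):

#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;  // stored as rows, so column j is (M[0][j], ..., M[3][j])

// x~' = M * x~, the linear combination (4)
Vec4 mul(const Mat4& M, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += M[i][j] * v[j];
    return r;
}

// Dehomogenize as in (6): divide the first components by x4.
std::array<float, 3> dehomogenize(const Vec4& v) {
    return { v[0] / v[3], v[1] / v[3], v[2] / v[3] };
}

// Columns a, b, c are the standard basis with a 0 appended;
// the last column is t = (5, 0, 0) with a 1 appended.
const Mat4 T = {{ {1, 0, 0, 5},
                  {0, 1, 0, 0},
                  {0, 0, 1, 0},
                  {0, 0, 0, 1} }};
// mul(T, {1, 2, 3, 1}) == {6, 2, 3, 1}
// dehomogenize(mul(T, {1, 2, 3, 1})) == {6, 2, 3}: the point was translated by t.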
Coming back to perspective division:
(I will leave this part out, because many here have explained the divide by z already. It comes from similar right triangles: with a given focal length f, a point at depth z with y coordinate y lands at y' = f*y/z. Also, since you stated (I hope I didn't misread) that you already know why this is done, I simply leave a link to a video here; I find it very well explained in the course lecture CMU 15-462/662.)
When dehomogenizing, the division by the w component is a pretty handy property for returning to R³. When you apply a homogeneous 4x4 perspective matrix to a vector, you simply put the z component into the w component and let the dehomogenizing process (as in (6)) perform the perspective divide. So you can set up the w component in such a way that the division by w divides by z and also maps the depth values into a fixed range (basically you map the z-near to z-far range into a range where floating-point values are precise).
This is also described by Ravi Ramamoorthi in his Course CSE167 when he explains how to set up the perspective projection matrix.
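As a rough sketch of that trick (reusing the Mat4 helpers from the sketch above; this variant follows the standard OpenGL glFrustum layout, which maps depth to [-1, 1] rather than [0, 1]):

// The bottom row (0, 0, -1, 0) copies -z into w, so dehomogenize()
// performs the perspective divide; the third row maps eye-space depth
// between -n and -f into the NDC range [-1, 1].
float n = 1.0f, f = 100.0f;  // example near/far planes
const Mat4 P = {{ {n, 0, 0, 0},
                  {0, n, 0, 0},
                  {0, 0, -(f + n) / (f - n), -2 * f * n / (f - n)},
                  {0, 0, -1, 0} }};
// For an eye-space point (x, y, z, 1) with the camera looking down -z,
// w_clip = -z, so x and y end up divided by the depth.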
I hope this helped you understand the rationale of putting z into the w component. Sorry for my horrible formatting and lengthy text; still, I hope it helped more than it confused.
Best of luck!

Actually, with the standard notational convention, in a 4x4 perspective matrix with the sightline along the z direction, w differs by 1 from the distance ratio. Also, that ratio, though interpreted correctly above, is normally expressed as -z/d, where z is negative (therefore producing the correct ratio), because, again by common notational convention, the camera looks in the negative z direction.
The reason for the offset by 1 needs explaining. Many references put the origin at the image plane rather than at the center of projection. With that convention (again with the camera looking along the negative z direction) the distance labeled z in the similar-triangles diagram is replaced by (d - z). Substituting that for z, the expression for w becomes, instead of z/d, (d - z)/d = 1 - z/d. To some these conventions may seem unorthodox, but they are quite popular among analysts.

Related

creating my own projection matrix in opengl

Here is my reasoning:
openGL draws everything within a 2x2x2 cube
the x,y values inside this cube determine where the point is drawn on the screen. The z value is used for other stuff...
if you want the z value to have some effect on perspective you need to mutate the scene (usually with a matrix) so that it gives an illusion of distant objects being smaller.
the z values of the cube go from -1 to 1.
Now I want it so that objects that are at z = 1 are infinitely zoomed, and objects that are at z = 0 are normal size, and objects that are at z = -1 are 1/2 size.
When I say an object is zoomed, I mean that the (x, y) coordinates of its points are multiplied by a scalar zoom factor, which is based on its z coordinate.
If a point lies outside the 2x2x2 cube I want the calculations to still be done on it if it is between z = 1 and z = -1. Since the z value doesn't change I don't care what happens to any points that are not within this range, as long as their z value is not changed.
Generalized point transformation:
If I have a point P = (x, y, z), and -1 <= z <= 1 then:
the Zoom Factor, S = 1 / (1 - z)
so the translation is as follows:
(x, y, z) ==> (x * S, y * S, z)
Creating the matrix?
This is where I am having issues. I don't know how to create a matrix so that it will transform a generalized point to have the desired effect.
I am considering not using a matrix and applying this transformation via a function in glsl...
If someone has insight on how to create such a matrix I would like to know.
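For reference, here is the described transformation written out directly as a function (C++; the name is mine). Note that a single 4x4 matrix cannot reproduce it exactly: a matrix that puts 1 - z into w would, after the division, also turn z into z / (1 - z) rather than leaving it unchanged, which is why a pure per-point function (e.g. in a shader) is a reasonable way out:

// Apply the zoom S = 1 / (1 - z) to x and y, leaving z untouched.
void zoomPoint(float& x, float& y, float z) {
    if (z >= -1.0f && z <= 1.0f) {
        float S = 1.0f / (1.0f - z);  // z = -1 -> 1/2, z = 0 -> 1, z -> 1 -> infinity
        x *= S;
        y *= S;
    }
}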

Understand Translation Matrix in OpenGL

Assume we want to translate a point p(1, 2, 3, w=1) with a vector v(a, b, c, w=0) to a new point p'
Note: w=0 represents a vector and w=1 represent a point in OpenGL, please correct me if I'm wrong.
In Affine transformation definition, we have:
p + v = p'
=> p(1, 2, 3, 1) + v(a, b, c, 0) = p(1 + a, 2 + b, 3 + c, 1)
=> point + vector = point (everything works as expected)
In OpenGL, the translation matrix is as following:
1 0 0 a
0 1 0 b
0 0 1 c
0 0 0 1
I assume (a, b, c, 1) is the vector from the affine transformation definition.
Why do we have w=1, and not w=0, as in:
1 0 0 a
0 1 0 b
0 0 1 c
0 0 0 0
Note: w=0 represents a vector and w=1 represent a point in OpenGL, please correct me if I'm wrong.
You are wrong. First of all, this doesn't really have anything to do with OpenGL; it is about homogeneous coordinates, a purely mathematical concept. It works by embedding an n-dimensional vector space into an (n+1)-dimensional vector space. In the 3D case, we use 4D homogeneous coordinates, with the definition that the homogeneous vector (x, y, z, w) represents the 3D point (x/w, y/w, z/w) in Cartesian coordinates.
As a result, for any w != 0 you get a certain finite point, and for w = 0 you are describing an infinitely far away point in a specific direction. This means that homogeneous coordinates are more powerful in that they can actually describe infinitely far away points with finite coordinates (which comes in very handy for perspective transformations, where infinitely far away points are mapped to finite points, and vice versa).
You can, as a shortcut, imagine (x, y, z, 0) as some direction vector. But for a point, it is not just w=1: any w value not equal to 0 will do. Conceptually, this means that any Cartesian 3D point is represented by a line in homogeneous space (we did go up one dimension, so this actually makes sense).
I assume (a, b, c, 1) is the vector from the affine transformation definition. Why do we have w=1, but not w=0?
Your assumption is wrong. One thing about homogeneous coordinates is that we do not apply a translation in the 4D space; we get the effect of a translation in the 3D space by actually doing a shearing operation in 4D space.
So what we really want to do in homogenous space is
(x + w*a, y + w*b, z + w*c, w)
since the 3D interpretation of the resulting vector will then be
(x + w*a) / w == x/w + a
(y + w*b) / w == y/w + b
(z + w*c) / w == z/w + c
which will represent the translation that we were after.
So to try to make this even more clear:
What you wrote in your question:
p(1, 2, 3, 1) + v(a, b, c, 0) = p(1 + a, 2 + b, 3 + c, 1)
is explicitly not what we want to do. What you describe is an affine translation with respect to the 4D vector space.
But what we actually want is a translation in the 3D cartesian coordinates, so
(1, 2, 3) + (a, b, c) = (1 + a, 2 + b, 3 + c)
Applying your formula would actually mean doing a translation in the homogeneous space, which would have the effect of a translation scaled by the w coordinate, while the formula I gave will always translate the point by (a, b, c), no matter what w we choose for the point.
This is of course not true if we choose w=0: then we get no change at all, which is also correct, because a translation never changes directions (your formula would change the direction). Your formula is correct only for w=1, which is only a special case. But the key point here is that we are not doing a vector addition at all, but a matrix * vector multiplication. Homogeneous coordinates just allow us (among other, more powerful things) to represent a translation via a matrix multiplication. But this does not mean that we can simply interpret the last column as a translation vector as if we were doing vector addition.
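A small GLM sketch of that point (the numbers are arbitrary): the translation matrix moves every homogeneous representative of the same 3D point by (a, b, c), regardless of its w, and leaves directions alone:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));

    glm::vec4 p(1.0f, 2.0f, 3.0f, 1.0f);  // the point, with w = 1
    glm::vec4 q(2.0f, 4.0f, 6.0f, 2.0f);  // the same 3D point, with w = 2
    glm::vec4 d(1.0f, 0.0f, 0.0f, 0.0f);  // a direction

    glm::vec4 p2 = T * p;  // (2, 4, 6, 1)  -> dehomogenized: (2, 4, 6)
    glm::vec4 q2 = T * q;  // (4, 8, 12, 2) -> dehomogenized: (2, 4, 6), the same point
    glm::vec4 d2 = T * d;  // (1, 0, 0, 0): the direction is unchanged
}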
Simple Answer
The reason is the way matrix multiplication works. If you multiply a matrix by a vector, the w component of the result is the inner product of the 4th row of the matrix with the vector. After applying the transformation, a point should still be a point and a direction should be a direction. If you set that 4th row to the 0-vector, the resulting w will always be 0, and thus the resulting vector would change from a position (w=1) to a direction (w=0).
More detailed answer
The definition of an affine transformation is:
x' = A * x + t,
where A is a linear map and t a translation. Traditionally, linear maps are written by mathematicians in matrix form. Note that t is here, like x, a 3-dimensional vector. It would be cumbersome (and less general, thinking of projective mappings) if we always had to handle the linear mapping matrix and the translation vector separately. This can be solved by introducing an additional dimension to the mapping, the so-called homogeneous coordinate, which allows us to store the linear map as well as the translation vector in a combined 4x4 matrix. This is called an augmented matrix, and by definition,
[ x' ]   [ A | t ]   [ x ]
[    ] = [ --+-- ] * [   ]
[ 1  ]   [ 0 | 1 ]   [ 1 ]
It should also be noted that affine transformations can now be combined very easily by just multiplying their augmented matrices, which would be hard to do in matrix-plus-vector notation; a sketch follows below.
One should also note that the bottom-right 1 is not part of the translation vector (which is still 3-dimensional) but belongs to the matrix augmentation.
You might also want to read the section about "Augmented matrix" here: https://en.wikipedia.org/wiki/Affine_transformation#Augmented_matrix
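A brief sketch of that composition property, using GLM (the particular transforms are arbitrary):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // A rotates 90 degrees about z, B translates by (1, 0, 0).
    glm::mat4 A = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0, 0, 1));
    glm::mat4 B = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));

    glm::vec4 p(1.0f, 0.0f, 0.0f, 1.0f);

    glm::vec4 q1 = A * (B * p);  // apply the two affine maps one after the other
    glm::vec4 q2 = (A * B) * p;  // or pre-multiply the augmented matrices once
    // q1 == q2 == (0, 2, 0, 1), up to floating-point rounding
}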

OpenGL Frustum mathematical understanding

I have a question pertaining to the mathematical aspect of the frustum, namely the matrix constructed by glFrustum(l, r, b, t, n, f). I understand that l, r essentially bound the x axis, b, t the y axis, and n, f the z axis. Now my question: given the classic frustum matrix, http://csc.lsu.edu/~kooima/articles/genperspective/eq5.svg
I want to find the view coordinates X,Y and Z of point p=(x,y,z)
and, in the special case that l = b = -1 and r = t = 1, to find X, Y, Z for p = (x, y, -n) and p = (x, y, -f).
I'm just having a hard time understanding how to do this with such abstract numbers. I know that there is an error if n = f, or if n or f is less than zero. I just don't understand what exactly I'm being asked.
Let r be a coordinate vector in eye space of the form r = (x,y,z,1)
Let P be a frustum transformation matrix. This allows you to transform r into a position r' in clip space by applying the transformation, which for vectors and matrices is done using a matrix-vector multiplication: r' = P · r
The normalized viewport coordinate r'' (without clipping) is then obtained by applying the homogeneous perspective division: r'' = r' / r'.w
That's it, there's nothing more to it than that.
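For concreteness, the same two steps with GLM (just a sketch; the frustum values and the test point are arbitrary):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // P corresponds to glFrustum(l, r, b, t, n, f)
    glm::mat4 P = glm::frustum(-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 10.0f);

    glm::vec4 r(0.5f, 0.5f, -2.0f, 1.0f);        // eye-space position
    glm::vec4 rClip = P * r;                      // r' = P * r
    glm::vec3 rNdc = glm::vec3(rClip) / rClip.w;  // r'' = r' / r'.w
}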

An inconsistency in my understanding of the GLM lookAt function

Firstly, if you would like an explanation of the GLM lookAt algorithm, please look at the answer provided on this question: https://stackoverflow.com/a/19740748/1525061
mat4x4 lookAt(vec3 const & eye, vec3 const & center, vec3 const & up)
{
    vec3 f = normalize(center - eye);  // forward
    vec3 u = normalize(up);
    vec3 s = normalize(cross(f, u));   // side
    u = cross(s, f);                   // recomputed up, orthogonal to s and f

    mat4x4 Result(1);
    // Rotation part: s, u, -f go into the rows of the upper 3x3
    // (GLM indexing is Result[column][row]).
    Result[0][0] =  s.x;
    Result[1][0] =  s.y;
    Result[2][0] =  s.z;
    Result[0][1] =  u.x;
    Result[1][1] =  u.y;
    Result[2][1] =  u.z;
    Result[0][2] = -f.x;
    Result[1][2] = -f.y;
    Result[2][2] = -f.z;
    // Translation part: brings the eye position to the origin.
    Result[3][0] = -dot(s, eye);
    Result[3][1] = -dot(u, eye);
    Result[3][2] =  dot(f, eye);
    return Result;
}
Now I'm going to tell you why I seem to be having a conceptual issue with this algorithm. There are two parts to this view matrix, the translation and the rotation. The translation does the correct inverse transformation, bringing the camera position to the origin instead of the origin to the camera. Similarly, you would expect the rotation that the camera defines to be inverted before being put into this view matrix as well. I can't see that happening here; that's my issue.
Consider the forward vector: this is where your camera looks. Consequently, this forward vector needs to be mapped to the -Z axis, which is the forward direction used by OpenGL. The way this view matrix is supposed to work is by creating an orthonormal basis in the columns of the view matrix, so when you multiply a vertex on the right-hand side of this matrix, you are essentially just converting its coordinates to those of different axes.
When I play the rotation that occurs as a result of this transformation in my mind, I see a rotation that is not the inverse rotation of the camera, like what's supposed to happen; rather, I see the non-inverse. That is, instead of finding the camera forward being mapped to the -Z axis, I find the -Z axis being mapped to the camera forward.
If you don't understand what I mean, consider a 2D example of the same type of thing that is happening here. Let's say the forward vector is (sqrt(2)/2, sqrt(2)/2), i.e. sin/cos of 45 degrees, and let's also say a side vector for this 2D camera is sin/cos of -45 degrees. We want to map this forward vector to (0,1), the positive Y axis; the positive Y axis can be thought of as the analogue of the -Z axis in OpenGL space. Consider a vertex in the same direction as our forward vector, namely (1,1). By the logic of GLM.lookAt, we should be able to map (1,1) to the Y axis by using a 2x2 matrix that consists of the forward vector in the first column and the side vector in the second column. This is equivalent to the following calculation: http://www.wolframalpha.com/input/?i=%28sqr%282%29%2F2+%2C+sqr%282%29%2F2%29++1+%2B+%28sqr%282%29%2F2%2C+-sqr%282%29%2F2+%29+1.
Note that you don't get your (1,1) vertex mapped to the positive Y axis like you wanted; instead it is mapped to the positive X axis. You might also consider what happens to a vertex on the positive Y axis under this transformation: sure enough, it is transformed to the forward vector.
Therefore it seems like something very fishy is going on with the GLM algorithm. However, I doubt this algorithm is incorrect since it is so popular. What am I missing?
Have a look at GLU source code in Mesa: http://cgit.freedesktop.org/mesa/glu/tree/src/libutil/project.c
First, in the implementation of gluPerspective, notice that the -1 uses the indices [2][3] and the -2 * zNear * zFar / (zFar - zNear) uses [3][2]. This implies that the indexing is [column][row].
Now in the implementation of gluLookAt, the first row is set to side, the next one to up and the final one to -forward. This gives you the rotation matrix which is post-multiplied by the translation that brings the eye to the origin.
GLM seems to be using the same [column][row] indexing (from the code). And the piece you just posted for lookAt is consistent with the more standard gluLookAt (including the translational part). So at least GLM and GLU agree.
Let's then derive the full construction step by step, writing C for the center position and E for the eye position.
1. Move the whole scene to put the eye position at the origin, i.e. apply a translation of -E.
2. Rotate the scene to align the axes of the camera with the standard (x, y, z) axes.
2.1 Compute a positive orthonormal basis for the camera:
f = normalize(C - E) (pointing towards the center)
s = normalize(f x u) (pointing to the right side of the eye)
u = s x f (pointing up)
with this, (s, u, -f) is a positive orthonormal basis for the camera.
2.2 Find the rotation matrix R that maps the (s, u, -f) axes to the standard ones (x, y, z). The inverse rotation matrix R^-1 does the opposite and aligns the standard axes to the camera ones, which by definition means that:
       ( sx  ux  -fx )
R^-1 = ( sy  uy  -fy )
       ( sz  uz  -fz )
Since R^-1 = R^T, we have:
    (  sx  sy  sz )
R = (  ux  uy  uz )
    ( -fx -fy -fz )
3. Combine the translation with the rotation. A point M is mapped by the "look at" transform to R (M - E) = R M - R E = R M + t. So the final 4x4 transform matrix for "look at" is indeed:
    (  sx  sy  sz  tx )   (  sx  sy  sz  -s.E )
L = (  ux  uy  uz  ty ) = (  ux  uy  uz  -u.E )
    ( -fx -fy -fz  tz )   ( -fx -fy -fz   f.E )
    (  0   0   0   1  )   (  0   0   0    1  )
So when you write:
That is, instead of finding the camera forward being mapped to the -Z
axis, I find the -Z axis being mapped to the camera forward.
it is very surprising, because by construction the "look at" transform maps the camera forward axis to the -z axis. This "look at" transform should be thought of as moving the whole scene to align the camera with the standard origin/axes; that is really what it does.
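A quick check of this with GLM itself (a sketch; the specific vectors are arbitrary, and they mirror the 45-degree 2D example below):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::vec3 eye(0.0f), center(1.0f, 1.0f, 0.0f), up(0.0f, 1.0f, 0.0f);
    glm::mat4 V = glm::lookAt(eye, center, up);

    // A point one unit straight ahead of the camera, along its forward direction:
    glm::vec4 ahead = glm::vec4(glm::normalize(center - eye), 1.0f);

    glm::vec4 v = V * ahead;
    // v == (0, 0, -1, 1) up to rounding: the forward direction really is
    // mapped onto the -z axis, not the other way around.
}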
Using your 2D example:
By using the logic of GLM.lookAt, we should be able to map (1,1) to the Y
axis by using a 2x2 matrix that consists of the forward vector in the
first column and the side vector in the second column.
That's the opposite: following the construction I described, you need a 2x2 matrix with the forward and side vectors as rows, not columns, to map (1, 1) and the other vector to the y and x axes. To use the definition of the matrix coefficients, you need the images of the standard basis vectors under your transform; these directly give the columns of the matrix. But since what you are looking for is the opposite (mapping your vectors to the standard basis vectors), you have to invert the transformation (transpose it, since it's a rotation). Your reference vectors then become rows, not columns.
This question might give some further insight into your fishy issue:
glm::lookAt vertical camera flips when z <= 0
The answer there might be of interest to you.

Perspective projection - how to convert coordinates

I'm studying perspective projections and I stumbled upon this concept:
Basically it says that if I have a point (x,y,z) I can project it into my perspective screen (camera space) by doing
x' = x/z
y' = y/z
z' = f(z-n) / z(f-n)
I can't understand why x' = x/z or y' = y/z
Geometrically, it is a matter of similar triangles.
In your diagram, because (x,y,z) is on the same dotted line as (x',y',z'):
triangle [(0,0,0), (0,0,z), (x,y,z)]
is similar to
triangle [(0,0,0), (0,0,z'), (x',y',z')]
This means that the corresponding sides have a fixed ratio. And, further, the original vector is proportional to the projected vector. Finally, note that the notional projection plane is at z' = 1:
(x,y,z) / z = (x',y',z') / z'
-> so, since z' = 1:
x'/z' = x' = x/z
y'/z' = y' = y/z
[Warning: note that the z' in my answer is different from its occurrence in the question. The question's z' = f(z-n) / z(f-n) doesn't correspond directly to a physical point: it is a "depth value", which is used to do things like hidden surface removal.]
One way to look at this is that what you are trying to do is intersect a line, passing through both the viewer position (assumed to be at the origin, (0,0,0)) and the point in space you wish to project (P), with the projection plane.
So you take the equation of the line, which is P' = P * a, where a is simply a scalar value, and solve for P'.Z = 1 (which is where your projection plane is). This is trivially true when the scalar multiple is 1 / P.Z, so the projected point is (P.X, P.Y, P.Z) * (1 / P.Z).
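A tiny C++ sketch of this construction (the function name is mine; the plane is at z = 1):

#include <array>

// Intersect the line through the viewer (origin) and P with the plane z = 1:
// P' = P * a is on the plane when a = 1 / P.z.
std::array<float, 3> projectToPlane(std::array<float, 3> P) {
    float a = 1.0f / P[2];
    return { P[0] * a, P[1] * a, 1.0f };
}
// projectToPlane({4, 2, 2}) == {2, 1, 1}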
Homogeneous coordinates give us the power to represent a point/line at infinity.
We append a 1 to the vector representation. The greater the distance of a point in 3D space, the more its projection tends toward the optical centre.
Cartesian to homogeneous: p = (x, y) -> (x, y, 1)
Homogeneous to Cartesian: (X, Y, Z) -> (X/Z, Y/Z)
For instance:
1. When you are travelling in an aeroplane and look down, points do not seem to move much from one instant to the next. This is because the distance is very large: distance = 1/disparity (the drift of the same point between two frames).
2. Substitute infinity for the disparity, and it means the distance is 0.