I'm following the tutorial at http://alfonse.bitbucket.org/oldtut/Positioning/Tut04%20Perspective%20Projection.html, which I believe was written by the user Nicol Bolas. Under the Camera Perspective topic, I am getting stuck.
"Our perspective projection transform will be specific to this space. As previously stated, the projection plane shall be a region [-1, 1] in the X and Y axes, and at a Z value of -1. The projection will be from vertices in the -Z direction onto this plane; vertices that have a positive Z value are behind the projection plane.
Now, we will make one more simplifying assumption: the location of the center of the perspective plane is fixed at (0, 0, -1) in camera space. Therefore, since the projection plane is pointing down the -Z axis, eye's location relative to the plane of projection is (0, 0, -1). Thus, the Ez value, the offset from the projection plane to the eye, is always -1. This means that our perspective term, when phrased as division rather than multiplication, is simply Pz/-1: the negation of the camera-space Z coordinate."
Isn't the eye's location relative to the plane of projection a positive 1 in the z direction?
My understanding is that the eye is at (0, 0, 0) and the plane's center is at (0, 0, -1). Relative to the plane, the eye is +1 in the z direction.
I was doing really well understanding this tutorial, with this one exception. I can even look past it and follow the rest of the tutorial by just taking this on faith, but that's not what I want to do.
Therefore, since the projection plane is pointing down the -Z axis, eye's location relative to the plane of projection is (0, 0, -1).
It's probably best to pretend that this sentence doesn't exist. In fact, it doesn't exist. I defy you to prove me wrong.
No fair looking at the Bitbucket repo either ;)
Related
I was quite confused about how the projection matrix worked, so I researched it for a few days and discovered a few other things along the way; now I just want to confirm that my understanding is correct. I might use a few wrong terms because my brain was exhausted after writing this. A few topics, like screen coordinates and the window transform, I only researched briefly, so I didn't write much about them and my knowledge there might be incorrect. Is everything I've written here correct or mostly correct? Correct me on anything if I'm wrong.
What does the projection matrix do?
So the perspective projection matrix defines a frustum, which is a truncated pyramid. Anything outside of that frustum will be clipped; I'll get to that later. The perspective projection matrix also adds perspective. To make the vertices follow the rules of perspective, the matrix sets the vertex's w component (the homogeneous component) based on how far the vertex is from the viewer (the farther the vertex, the larger the w component becomes).
Why and how does the w component make the world look like it has perspective?
The w component makes the world look like it has perspective because in the perspective division (which happens in the vertex post-processing stage), the x, y, and z components are divided by the w component, so the vertex coordinate is scaled down by an amount that depends on how big w is. So essentially, the w component scales the object smaller the farther away the object is.
Example:
Vertex position (1, 1, 2, 2).
Here, the vertex is 2 away from the viewer. In perspective division the x, y, and z will be divided by 2 because 2 is the w component.
(1/2, 1/2, 2/2) = (0.5, 0.5, 1).
As shown here, the vertex coordinate has been scaled by half.
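Just to make that arithmetic concrete, here is a tiny sketch of the divide itself (my own illustration; the pipeline does this for you, and glm is only used for convenience):

    #include <glm/glm.hpp>
    #include <cstdio>

    int main() {
        // A clip-space vertex: x, y, z plus the homogeneous w component.
        glm::vec4 clip(1.0f, 1.0f, 2.0f, 2.0f);

        // Perspective division: divide x, y, z by w to get the NDC position.
        glm::vec3 ndc = glm::vec3(clip) / clip.w;

        std::printf("NDC: (%.2f, %.2f, %.2f)\n", ndc.x, ndc.y, ndc.z); // (0.50, 0.50, 1.00)
        return 0;
    }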
How does the projection matrix decide what will be clipped?
The near and far planes are the limits of what the viewer can see (anything beyond the far plane or in front of the near plane will be clipped). Every coordinate also has to go through a clipping check. The clipping check tests whether the vertex coordinate is within the range -w to w; if it is outside that range, it will be clipped.
Let's say I have a vertex with a position of (2, 130, 90, 90).
x value is 2
y value is 130
z value is 90
w value is 90
This vertex must be within the range of -90 to 90. The x and z values are within the range, but the y value goes beyond it, so the vertex will be clipped.
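A quick sketch of that check (an illustrative helper of my own, not an OpenGL call; the real clipping runs in fixed-function hardware on whole primitives rather than on single vertices in user code):

    #include <glm/glm.hpp>

    // True if a clip-space vertex lies inside the clip volume, i.e. -w <= x, y, z <= w.
    bool insideClipVolume(const glm::vec4& v) {
        return -v.w <= v.x && v.x <= v.w &&
               -v.w <= v.y && v.y <= v.w &&
               -v.w <= v.z && v.z <= v.w;
    }

    // insideClipVolume(glm::vec4(2.0f, 130.0f, 90.0f, 90.0f)) == false, because y > w.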
So after the vertex shader is finished, the next step is vertex post-processing. In vertex post-processing, clipping happens, and then perspective division happens, converting clip space into NDC (normalized device coordinates). Finally, the viewport transform happens, converting NDC to window space.
What does perspective division do?
Perspective division essentially divides the x, y, and z components of a vertex by the w component. Doing this actually does two things: it converts clip space to normalized device coordinates, and it adds perspective by scaling the vertices.
What is Normalized Device Coordinates?
Normalized Device Coordinates is the coordinate system where all coordinates are condensed into an NDC box where each axis is in the range of -1 to +1.
After the NDC conversion, the viewport transform happens, where all the NDC coordinates are converted to screen coordinates. NDC space becomes window space.
If an NDC coordinate is (0.5, 0.5, 0.3), it will be mapped onto the window based on what the programmer provided in the call to glViewport. If the viewport is 400x300, that NDC coordinate will be placed at pixel 300 on the x axis and 225 on the y axis (NDC (0, 0) would map to the center, pixel (200, 150)).
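A rough sketch of that mapping, assuming a viewport placed at (0, 0) via glViewport(0, 0, width, height) and the default glDepthRange of [0, 1]:

    #include <glm/glm.hpp>

    // Maps an NDC coordinate to window coordinates for a viewport at (0, 0) with
    // the given size, using the default depth range [0, 1].
    glm::vec3 ndcToWindow(const glm::vec3& ndc, float width, float height) {
        float wx = (ndc.x * 0.5f + 0.5f) * width;   // -1..1  ->  0..width
        float wy = (ndc.y * 0.5f + 0.5f) * height;  // -1..1  ->  0..height
        float wz = ndc.z * 0.5f + 0.5f;             // -1..1  ->  0..1 (depth)
        return glm::vec3(wx, wy, wz);
    }

    // ndcToWindow(glm::vec3(0.5f, 0.5f, 0.3f), 400.0f, 300.0f) -> (300, 225, 0.65)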
The perspective projection matrix does not decide what is clipped. After transforming a world coordinate by the projection, you get a clip-space coordinate. This is a homogeneous coordinate. Based on this coordinate, the rendering pipeline clips the scene. The clipping rule is -w < x, y, z < w. In the following stage of the rendering pipeline, the clip-space coordinates are transformed into normalized device space by the perspective divide, (x, y, z)' = (x/w, y/w, z/w). This division by the w component gives the perspective effect. (See also What exactly are eye space coordinates? and Transform the modelMatrix)
According to a number of sources, NDC differs from clip space in that NDC is just clip space AFTER division by the W component. Primitives are clipped in clip space, which in OpenGL is -1 to 1 along the X, Y, and Z axes (Edit: this is wrong, see answer). In other words, clip space is a cube, and clipping is done within this cube: if a point falls inside, it's visible; if it falls outside, it's not visible.
So let's take this simple example: we're looking at a viewing frustum from the top down, along the negative Y axis. The HALFFOV is 45 degrees, which means that NEAR and RIGHT are the same (in this case, length 2). The example point is (6, 0, -7).
Now, here is the perspective projection matrix:
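(I'm assuming the matrix meant here is the standard glFrustum form; writing l = LEFT, r = RIGHT, b = BOTTOM, t = TOP, n = NEAR, f = FAR, it is:)

\[
P = \begin{bmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{bmatrix}
\]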
For simplicity we'll use an aspect ratio of 1:1. So:
RIGHT = 2
LEFT = -2
TOP = 2
BOTTOM = -2
NEAR = 2
FAR = 8
So filling in our values we get a projection matrix of:
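(Still assuming the glFrustum form, plugging in the values above gives:)

\[
P = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -5/3 & -16/3 \\
0 & 0 & -1 & 0
\end{bmatrix}
\]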
Now we add the homogeneous W to our point, which was (6, 0, -7), and get (6, 0, -7, 1).
Now we multiply our matrix by our point, which results in approximately (6, 0, 6.33, 7).
This point (after being multiplied by the projection matrix) is now supposed to lie in "clip space". Supposedly the clipping is done at this stage, figuring out whether a point lies inside or outside the clipping cube, and supposedly BEFORE division by W. Here is how it looks in "clip space":
From the sources I've seen, the clipping is done at this stage, as shown above, BEFORE dividing by W. If you divide by W NOW, the point ends up in the right area of the clip-space cube. This is why I don't understand why everyone says that perspective division is done AFTER clipping. In this space, prior to perspective division, the point lies completely outside the clipping cube and would be judged not visible. However, after the perspective division (division by W), here is how it looks:
Now the point lies within the clip-space cube and can be judged to be inside, and visible. This is why I think perspective division is done BEFORE clipping: if clip space is -1 to +1 on each axis, and the clipping stage checks against these dimensions, then for a point to be inside this cube it must have already undergone division by W; otherwise almost ANY point lies outside the clip-space cube and is never visible.
So why does everyone say that first comes clip space, which is the result of the projection matrix, and ONLY then comes perspective division (division by W), which results in NDC?
In clip space, clipping is not done against a unit cube. It is done against the cube that extends from -w to +w on each axis. Points are inside the visible area if each of their x, y, z coordinates has a magnitude smaller than their w coordinate.
In the example you have, the point [6, 0, 6.33, 7] is visible because all three coordinates (x, y, z) are smaller in magnitude than 7.
Note that for points inside the visible area, this is exactly equivalent to testing x/w < 1. The problems start with points behind the projection center, since their w value is negative and they might get projected into the visible area by the homogeneous divide. As we all know, dividing by a negative number in an inequality flips the comparison operator, which is impractical to handle in hardware.
Further readings:
OpenGL sutherland-hodgman polygon clipping algorithm in homogeneous coordinates
Why clipping should be done in CCS, not NDCS
Why does GL divide gl_Position by W for you rather than letting you do it yourself?
I am confused about the position of objects in OpenGL. The eye position is (0, 0, 0) and the projection plane is at z = -1. In that case, will the objects be in between the eye position and the plane (Z from 0 to -1), or behind the projection plane? Also, is there any particular reason for it being so?
First of all, there is no eye in modern OpenGL. There is also no camera. There is no projection plane. You define these concepts by yourself; the graphics library does not give them to you. It is your job to transform your object from your coordinate system into clip space in your vertex shader.
I think you are thinking about projection wrong. Projection doesn't move the objects in the same sense that a translation or rotation matrix might. If you take a look at the link above, you can see that to render a perspective projection, you calculate the x and y components of the projected coordinate with R = V(ez/pz), where ez is the depth of the projection plane, pz is the depth of the object, V is the coordinate vector, and R is the projection. Almost always you will use ez = 1, which reduces that equation to R = V/pz, allowing you to place pz in the w coordinate so that OpenGL does the "perspective divide" for you. Assuming you have your eye and plane in the correct places, projecting a coordinate is almost as simple as dividing by its z coordinate. Your objects can be anywhere in 3D space (even behind the eye), and you can project them onto your plane so long as you don't divide by zero or invalidate the z coordinate that you use for depth testing.
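As a very small illustration of that last point (a sketch only, under the usual convention that the camera looks down -Z so the divisor is -pz; your own sign conventions may differ):

    #include <glm/glm.hpp>

    // Put the point's depth into the w component. After the hardware's
    // perspective divide (x/w, y/w, z/w), the x and y you end up with are the
    // projected coordinates, the same as computing R = V / (-pz) by hand.
    glm::vec4 placeDepthInW(const glm::vec3& cameraSpacePoint) {
        return glm::vec4(cameraSpacePoint, -cameraSpacePoint.z);
    }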
There is no "projection plane" at z=-1. I don't know where you got this from. The classic GL perspective matrix assumes an eye space where the camera is located at origin and looking into -z direction.
However, there is a near plane at some z < 0, and everything in front of the near plane is going to be clipped. You cannot put the near plane at z = 0, because then you would end up with a division by zero when trying to project points on that plane. So that is one reason why the viewing volume isn't a pyramid with the eye point at the top, but a pyramid frustum.
This is, by the way, also true for real-world eyes and cameras. The projection center lies behind the lens, so no object can get infinitely close to the optical center in either case.
The other reason why you want a big near clipping distance is the precision of the depth buffer. The whole depth range between the near and the far plane has to be mapped to a depth value with a limited number of bits, typically 24. So you want to keep the far plane as close as possible and push the near plane out as far as possible. The non-linear mapping of the screen-space z coordinate makes this even more important, as the precision is non-uniformly distributed over that range.
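To see how uneven that distribution is, here is a small standalone sketch that evaluates the standard perspective depth mapping for an illustrative near/far pair of 0.1 and 100 (these specific numbers are my own example, not anything from the question):

    #include <cstdio>

    // Prints the NDC depth produced by a standard perspective projection for a
    // few eye-space depths, to show how unevenly the precision is distributed.
    int main() {
        const float n = 0.1f, f = 100.0f;                         // near/far distances
        const float dists[] = {0.1f, 1.0f, 10.0f, 50.0f, 100.0f};
        for (float d : dists) {
            float zEye = -d;                                      // camera looks down -Z
            float zNdc = (f + n) / (f - n) + 2.0f * f * n / ((f - n) * zEye);
            std::printf("eye depth %6.1f -> NDC z %+.4f\n", d, zNdc);
        }
        return 0;
    }

Roughly speaking, everything farther than a tenth of the way to the far plane gets squeezed into the last couple of percent of the NDC depth range, which is where the precision argument comes from.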
I want to test whether any given point in the world is on a quad/plane. The quad/plane can be translated/rotated/scaled by any values, but I should still be able to detect whether the given point is on it. I also need to get the location where the point would have been if no rotation/scale/translation had been applied to the quad.
For example, consider a quad at (0, 0, 0) with size 100x100, rotated at an angle of 45 degrees about the z axis. If my mouse location in the world is at (x, y, 0), I need to know whether that point falls on that quad in its current transformation. If it does, then I need to know where that point would have been on the quad if no transformations had been applied to it. Any code sample would be of great help.
A ray-casting approach is probably simplest:
Use gluUnProject() to get the world-space direction of the ray to cast into the scene. The ray's origin is the camera position.
Put this ray into object space by transforming it by the inverse of your rectangle's transform. Note that you need to transform both the ray's origin point and direction vector.
Compute the intersection point between this ray and the XY plane with a standard ray-plane intersection test.
Check that the intersection point's x and y values are within your rectangle's bounds, if they are then that's your desired result.
A math library such as GLM will be very helpful if you aren't confident about some of the math involved here, it has corresponding functions such as glm::unProject() as well as functions to invert matrices and do all the other transformations you'd need.
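If it helps, here is a rough GLM-based sketch of those steps. It assumes the quad is a 100x100 rectangle centred at the origin in its own XY plane and that quadTransform is its model matrix (both of those are my assumptions from your example), so treat it as an outline rather than drop-in code:

    #include <glm/glm.hpp>

    // rayOrigin/rayDir are in world space: the origin is the camera position and
    // the direction comes from unprojecting the mouse (e.g. glm::unProject).
    // On a hit, localHit receives the point's location on the untransformed quad.
    bool pointOnQuad(const glm::vec3& rayOrigin, const glm::vec3& rayDir,
                     const glm::mat4& quadTransform, glm::vec2& localHit) {
        // Move the ray into the quad's object space (inverse of its transform).
        glm::mat4 inv = glm::inverse(quadTransform);
        glm::vec3 o = glm::vec3(inv * glm::vec4(rayOrigin, 1.0f)); // point: w = 1
        glm::vec3 d = glm::vec3(inv * glm::vec4(rayDir, 0.0f));    // vector: w = 0

        // Intersect with the quad's plane, which is z = 0 in object space.
        if (d.z == 0.0f) return false;   // ray parallel to the plane
        float t = -o.z / d.z;
        if (t < 0.0f) return false;      // plane is behind the ray origin

        glm::vec3 hit = o + t * d;

        // Is the hit inside the untransformed 100x100 quad centred at the origin?
        if (hit.x < -50.0f || hit.x > 50.0f || hit.y < -50.0f || hit.y > 50.0f)
            return false;

        localHit = glm::vec2(hit.x, hit.y);
        return true;
    }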
I am confused about the view volume that is generated by the glOrtho method.
I know that the last two parameters are for the Z axis;
the first one represents the distance between the viewer and the near plane, and the second one represents the distance between the viewer and the far plane.
My question is: where exactly does the viewer (camera) lie on the Z axis?
Also, in this link there is a program whose code makes the near plane positive and the far plane negative; in this case, can we say that -Z is behind the viewer and +Z is in front of the viewer?
If yes, try making the Z coordinate negative for all vertices of one of the triangles; you will notice that it still appears even though it is behind the viewer. Why?
the first one represents the distance between the viewer and the near plane, and the second one represents the distance between the viewer and the far plane
No, it isn't. An orthographic projection defines a box. The zNear and zFar are the positions of the box, not the distance from the "viewer".
Orthographic projections don't have a "viewer" in the same way that perspective projections do. They have a direction of view, not a view position. And the direction of the view is always the direction that puts zFar the farthest away and zNear being closest. If zNear is larger than zFar, then the direction of view is in the positive Z; otherwise, it's the negative Z.
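A small sketch of what that means numerically, using the standard glOrtho z mapping (the specific zNear/zFar values here are just an example I picked):

    #include <cstdio>

    // NDC z produced by glOrtho(l, r, b, t, zNear, zFar) for an eye-space z value.
    float orthoNdcZ(float zEye, float zNear, float zFar) {
        return -2.0f / (zFar - zNear) * zEye - (zFar + zNear) / (zFar - zNear);
    }

    int main() {
        // With glOrtho(..., 1, -1), zNear > zFar, so the view direction is +Z:
        // eye-space z = -1 lands on the near plane (NDC -1) and z = +1 on the far
        // plane (NDC +1), i.e. larger Z values are farther away from the viewer.
        std::printf("z=-1 -> %+.1f, z=+1 -> %+.1f\n",
                    orthoNdcZ(-1.0f, 1.0f, -1.0f),
                    orthoNdcZ(+1.0f, 1.0f, -1.0f));
        return 0;
    }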
Actually, your question is a little bit confusing. I think you can try glLookAt() to make the object appear from a different angle and see the difference. Here is a link:
http://mycodelog.com/2010/05/28/glcamera/