OpenGL clip coordinates - opengl

I have a question about OpenGL clip coordinates. For example, a triangle's three vertices have been transformed to camera coordinates and then multiplied by the perspective projection matrix into clip coordinates, where clipping begins:
-w <= x <= w, -w <= y <= w, -w <= z <= w
Do x, y, z, w refer to each vertex's own clip coordinates? So w may not be the same across the three vertices?

Yes, that w will vary per vertex. Most people imagine clip space as the cube [-1,1]^3. However, that is not clip space but normalized device coordinates (NDC). You get from clip space to NDC by doing the perspective divide, i.e. dividing each vertex by its w component. So, in NDC, that clip condition becomes -1 <= x/w <= 1. However, the clipping cannot be done in NDC (without extra information).
The problem here is that points which lie behind the camera would appear in front of the camera in NDC space. Think about it: x/w is the same as -x/-w. With a typical GL projection matrix, w_clip == -z_eye of the vertex. Also, a point that lies in the camera plane (the plane parallel to the projection plane, but going through the camera itself) will have w=0, and you can't do any clipping after that divide. The solution is to always do the clipping before the divide, hence the name "clip space"...
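A tiny standalone sketch of that symmetry (the numbers are made up for illustration):

#include <cstdio>

int main() {
    float x = 0.5f, w = 2.0f;          // point in front of the camera (w > 0)
    float xb = -x, wb = -w;            // mirrored point behind the camera (w < 0)
    std::printf("front:  x/w = %f\n", x / w);    // 0.250000
    std::printf("behind: x/w = %f\n", xb / wb);  // also 0.250000
    // In clip space the two are still distinguishable: the clip test
    // -w <= x <= w fails for the second point, since -(-2) <= -0.5 is false.
    return 0;
}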

Related

My understanding on the projection matrix, perspective division, NDC and viewport transform

I was quite confused about how the projection matrix works, so I researched it for a few days, and I just want to confirm that my understanding is correct. I might use a few wrong terms because my brain was exhausted after writing this. A few topics, like screen coordinates and the window transform, I only researched briefly, so I didn't write much about them and my knowledge there might be incorrect. Is everything I've written here correct or mostly correct? Correct me on anything if I'm wrong.
What does the projection matrix do?
So the perspective projection matrix defines a frustum, which is a truncated pyramid. Anything outside of that frustum will be clipped; I'll get to that later. The perspective projection matrix also adds perspective: to make the vertices follow the rules of perspective, it manipulates the vertex's w component (the homogeneous component) depending on how far the vertex is from the viewer (the farther the vertex, the larger the w coordinate).
Why and how does the w component create the perspective effect?
The w component creates the perspective effect because in the perspective division (which happens in the vertex post-processing stage), when x, y, and z are divided by the w component, the vertex coordinate is scaled down according to how big the w component is. So essentially, the w component scales objects smaller the farther away they are.
Example:
Vertex position (1, 1, 2, 2).
Here, the vertex is 2 away from the viewer. In perspective division the x, y, and z will be divided by 2 because 2 is the w component.
(1/2, 1/2, 2/2) = (0.5, 0.5, 1).
As shown here, the vertex coordinate has been scaled by half.
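A minimal sketch of that divide (the Vec4/Vec3 types here are just illustrative, not from any particular library):

#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// Perspective division: divide x, y, z by the homogeneous w component.
Vec3 perspectiveDivide(const Vec4& clip) {
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
}

int main() {
    Vec4 v = { 1.0f, 1.0f, 2.0f, 2.0f };          // the example vertex above
    Vec3 ndc = perspectiveDivide(v);
    std::printf("(%g, %g, %g)\n", ndc.x, ndc.y, ndc.z); // (0.5, 0.5, 1)
    return 0;
}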
How does the projection matrix decide what will be clipped?
The near and far planes are the limits of what the viewer can see (anything beyond the far plane or in front of the near plane will be clipped). Every coordinate also has to go through a clipping check to see whether it must be clipped. The clipping check tests whether the vertex coordinate is within the range -w to w. If it is outside that range, it will be clipped.
Let's say I have a vertex with a position of (2, 130, 90, 90).
x value is 2
y value is 130
z value is 90
w value is 90
This vertex must be within the range -90 to 90. The x and z values are within the range, but the y value goes beyond it, so the vertex will be clipped.
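A sketch of how I understand the check (insideClipVolume is just an illustrative name; the real pipeline clips whole primitives, not isolated vertices):

#include <cstdio>

struct Vec4 { float x, y, z, w; };

// The clip test: a vertex is inside the clip volume if -w <= x, y, z <= w.
bool insideClipVolume(const Vec4& v) {
    return -v.w <= v.x && v.x <= v.w &&
           -v.w <= v.y && v.y <= v.w &&
           -v.w <= v.z && v.z <= v.w;
}

int main() {
    Vec4 v = { 2.0f, 130.0f, 90.0f, 90.0f };      // the example vertex above
    std::printf("%s\n", insideClipVolume(v) ? "inside" : "outside"); // outside: y > w
    return 0;
}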
So after the vertex shader is finished, the next step is vertex post-processing. In vertex post-processing, the clipping happens, and the perspective division happens, converting clip space into NDC (normalized device coordinates). The viewport transform also happens there, converting NDC to window space.
What does perspective division do?
Perspective division essentially divides the x, y, and z components of a vertex by the w component. Doing this actually does two things: it converts clip space to normalized device coordinates, and it adds perspective by scaling the vertices.
What is Normalized Device Coordinates?
Normalized Device Coordinates is the coordinate system where all coordinates are condensed into an NDC box where each axis is in the range of -1 to +1.
After the perspective division, the viewport transform happens, where all the NDC coordinates are converted to screen coordinates. NDC space becomes window space.
If an NDC coordinate is (0.5, 0.5, 0.3), it will be mapped onto the window based on what the programmer provided in the function glViewport. If the viewport is 400x300, the NDC coordinate will be placed at pixel 300 on the x axis and 225 on the y axis, since the viewport transform maps [-1, 1] to [0, width] and [0, height].
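A sketch of that mapping as I understand it (assuming glViewport(0, 0, 400, 300), i.e. a viewport with its origin at (0, 0)):

#include <cstdio>

// Standard NDC-to-window mapping for a viewport of size (width, height)
// whose lower-left corner is at (0, 0).
void ndcToWindow(float ndcX, float ndcY, int width, int height,
                 float& winX, float& winY) {
    winX = (ndcX + 1.0f) * 0.5f * width;
    winY = (ndcY + 1.0f) * 0.5f * height;
}

int main() {
    float wx, wy;
    ndcToWindow(0.5f, 0.5f, 400, 300, wx, wy);
    std::printf("(%g, %g)\n", wx, wy); // (300, 225)
    return 0;
}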
The perspective projection matrix does not decide what is clipped. After transforming a world coordinate by the projection, you get a clip-space coordinate. This is a homogeneous coordinate. Based on this coordinate, the rendering pipeline clips the scene. The clipping rule is -w <= x, y, z <= w. In the following stage of the rendering pipeline, the clip-space coordinates are transformed into normalized device space by the perspective divide: (x', y', z') = (x/w, y/w, z/w). This division by the w component gives the perspective effect. (See also What exactly are eye space coordinates? and Transform the modelMatrix.)

OpenGL projection - where is the mapping to NDC?

Where is the mapping to NDC?
According to my understanding, the projection matrix does two things: First, it clips the view space to form the frustum. It only keeps the vertices that fall into the frustum but clips others that fall outside. Second, it maps the coordinates within the clipped space to NDC [-1, 1].
But I only see that we create a frustum (to use for clipping) by defining a perspective matrix with glm::perspective or glm::ortho. Where are the steps that do the mapping between the frustum and NDC?
Or we just need to define the frustum and OpenGL will do the two steps for us automatically?
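For concreteness, this is roughly the kind of setup I mean (a sketch; the parameters are arbitrary):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // All I do is build the matrix...
    glm::mat4 projection = glm::perspective(
        glm::radians(45.0f), // vertical field of view
        4.0f / 3.0f,         // aspect ratio
        0.1f,                // near plane
        100.0f);             // far plane

    // ...and output a clip-space position in the vertex shader:
    //     gl_Position = projection * view * model * vec4(aPos, 1.0);
    // I never see an explicit clipping or NDC-mapping step anywhere.
    (void)projection;
    return 0;
}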
According to my understanding, the projection matrix does two things: first, it clips the view space to form the frustum. It only keeps the vertices that fall into the frustum and clips the others that fall outside. Second, it maps the coordinates within the clipped space to NDC [-1, 1].
It does neither of these things.
A 4x4 matrix is just a transformation. A transformation by itself cannot clip anything. And while a matrix can transform coordinates into a [-1, 1] space, that's not necessarily what the clip space provided by the vertex shader will be. And it's certainly not the clip space used by 3D projection matrices.
The only mandated job of the vertex shader (or the last vertex processing shader stage) is to generate a clip-space position for each vertex. Clip space is defined by OpenGL as the 4D space in which positions whose XYZ components are within the range [-W, W] are considered "visible". Here, "W" is the W component of the position itself. So for each XYZW position, "visible" means that its XYZ components are in the range of its W.
The actual clipping of primitives happens after the vertex shader. Each primitive's vertices are clipped against the previously-defined 4D clip space.
A projection frustum is created by transforming positions into a clip-space such that those 4D positions outside of the frustum are outside of the [-W, W] range, and those inside of the frustum are inside of that range.
So it is not that the transformation clips anything; it merely sets up the data so that OpenGL's clipping system will clip things correctly.
Similarly, clip-space is not NDC space (unless the W component for the position is 1). NDC space is defined by taking the XYZ of a clip-space position and dividing it by that position's W component (and if W is 1, then this obviously changes nothing). Clip and NDC space are two separate spaces, and clipping happens before NDC space. You can conceptually think of clipping as being done against the [-1, 1] range of NDC space. After all, clip-space is on the [-W, W] range, so if you divide that by W, you get the range [-1, 1].
But it's still important to remember that clip space isn't NDC space.
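A small GLM sketch that makes the distinction concrete (the matrix parameters are arbitrary; with the classic GL perspective matrix, w_clip equals -z_eye):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 1.0f, 0.1f, 100.0f);

    glm::vec4 eyePos(0.0f, 0.0f, -5.0f, 1.0f);   // 5 units in front of the camera
    glm::vec4 clip = proj * eyePos;

    // w_clip == -z_eye == 5 here, so the clip-space XYZ are generally
    // not in [-1, 1]; clipping is done against [-w, w] at this stage.
    std::printf("clip = (%g, %g, %g, %g)\n", clip.x, clip.y, clip.z, clip.w);

    glm::vec3 ndc = glm::vec3(clip) / clip.w;    // only now do we reach NDC
    std::printf("ndc  = (%g, %g, %g)\n", ndc.x, ndc.y, ndc.z);
    return 0;
}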

Perspective Projection - OpenGL

I am confused about the position of objects in OpenGL. The eye position is (0, 0, 0) and the projection plane is at z = -1. At this point, will the objects be in between the eye position and the plane (z from 0 to -1)? Or behind the projection plane? And is there any particular reason for it being so?
First of all, there is no eye in modern OpenGL. There is also no camera. There is no projection plane. You define these concepts by yourself; the graphics library does not give them to you. It is your job to transform your object from your coordinate system into clip space in your vertex shader.
I think you are thinking about projection wrong. Projection doesn't move the objects in the same sense that a translation or rotation matrix might. If you take a look at the link above, you can see that in order to render a perspective projection, you calculate the x and y components of the projected coordinate with R = V(ez/pz), where ez is the depth of the projection plane, pz is the depth of the object, V is the coordinate vector, and R is the projection. Almost always you will use ez = 1, which turns that equation into R = V/pz, letting you place pz in the w coordinate so that OpenGL does the "perspective divide" for you. Assuming you have your eye and plane in the correct places, projecting a coordinate is almost as simple as dividing by its z coordinate (a sketch follows below). Your objects can be anywhere in 3D space (even behind the eye), and you can project them onto your plane so long as you don't divide by zero or invalidate the z coordinate that you use for depth testing.
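A sketch of that equation (projectOntoPlane is a made-up helper; it assumes the usual eye-space convention of looking down -z):

#include <cstdio>

struct Vec3 { float x, y, z; };

// R = V * (ez / pz): project an eye-space point V onto the plane at depth ez.
Vec3 projectOntoPlane(const Vec3& v, float ez) {
    float pz = -v.z;                  // distance in front of the eye
    float s = ez / pz;                // undefined for pz == 0, i.e. w = 0
    return { v.x * s, v.y * s, ez };
}

int main() {
    Vec3 p = { 2.0f, 1.0f, -4.0f };               // 4 units in front of the eye
    Vec3 r = projectOntoPlane(p, 1.0f);           // ez = 1
    std::printf("(%g, %g, %g)\n", r.x, r.y, r.z); // (0.5, 0.25, 1)
    return 0;
}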
There is no "projection plane" at z=-1. I don't know where you got this from. The classic GL perspective matrix assumes an eye space where the camera is located at the origin and looking in the -z direction.
However, there is the near plane at z<0, and everything in front of the near plane is going to be clipped. You cannot put the near plane at z=0, because then you would end up with a division by zero when trying to project points on that plane. So that is one reason the viewing volume isn't a pyramid with the eye point at the apex but a pyramid frustum.
This is, by the way, also true for real-world eyes or cameras. The projection center lies behind the lens, so no object can get infinitely close to the optical center in either case.
The other reason why you want a big near clipping distance is the precision of the depth buffer. The whole depth range between the near and the far plane has to be mapped to some depth value with a limited number of bits, typically 24. So you want to keep the far plane as close as possible and push the near plane out as far as possible. The non-linear mapping of the screen-space z coordinate makes this even more important, since the precision is non-uniformly distributed over that range (see the sketch below).
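A sketch that makes the non-linearity concrete (this uses the depth mapping of the classic GL perspective matrix, with near plane n and far plane f):

#include <cstdio>

// NDC depth produced by the classic GL perspective matrix for a point
// at eye-space distance d in front of the camera:
//     z_ndc = (f + n) / (f - n) - 2*f*n / ((f - n) * d)
float ndcDepth(float d, float n, float f) {
    return (f + n) / (f - n) - 2.0f * f * n / ((f - n) * d);
}

int main() {
    const float n = 0.1f, f = 100.0f;
    const float distances[] = { 0.1f, 1.0f, 10.0f, 50.0f, 100.0f };
    for (float d : distances)
        std::printf("d = %6.1f  ->  z_ndc = %f\n", d, ndcDepth(d, n, f));
    // Most of the [-1, 1] range is spent close to the near plane:
    // d = 1 already maps to ~0.80, and d = 10 to ~0.98.
    return 0;
}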

When does the transition from clip space to screen coordinates happen?

I was studying the rendering pipeline, and when I got to the clipping stage it was explained that from view (eye or camera) space we have to pass to clip space, also called normalized device space (NDC), which is a cubic space from -1 to 1.
However, now I don't understand when the passage from this space to the screen coordinates space happens:
Right after clipping and before rasterization?
After rasterization and before scissor and z-test?
At the end just before writing on the frame buffer?
No, clip space and NDC space are not the same thing.
Clip space is actually one step away from NDC: all coordinates are divided by Clip.W to produce NDC. Anything outside the range [-1, 1] in the resulting NDC space corresponds to a point outside of the clipping volume. There is a reason the coordinate space before NDC is called clip space ;)
Strictly speaking, however, NDC space is not necessarily cubic. It is true that NDC space is a cube in OpenGL, but in Direct3D it is not. In D3D the Z coordinate in NDC space ranges from 0.0 to 1.0, while it ranges from -1.0 to 1.0 in GL. X and Y behave the same in GL and D3D (that is, they range from -1.0 to 1.0). NDC is a standard coordinate space, but it has different representations in different APIs.
Lastly, the NDC-to-screen-space (AKA window space) transform occurs during rasterization and is defined by your viewport and depth range. Fragment locations really would not make sense in any other coordinate space, and this is what rasterization produces: fragments.
Update:
The ARB_clip_control extension, which was made core in OpenGL 4.5, allows you to adopt D3D's NDC convention in GL.
Traditional OpenGL behavior is:
glClipControl(GL_LOWER_LEFT, GL_NEGATIVE_ONE_TO_ONE);
Direct3D behavior can be achieved through:
glClipControl(GL_UPPER_LEFT, GL_ZERO_TO_ONE); // Y-axis is inverted in D3D
Clip space and NDC (normalized device coordinates) are not the same thing, otherwise they wouldn't have different names.
Clip space is the space points are in after the transformation by the projection matrix, but before the normalisation by w.
NDC space is the space points are in after the normalisation by w.
http://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/projection-matrix-GPU-rendering-pipeline-clipping
Camera space --->
x projection matrix --->
Clip space (before normalisation) --->
Clipping --->
Normalisation by w (x/w, y/w, z/w) --->
NDC space (in the range [-1, 1] in x and y)
Apparently, according to Apple, clip space is the same as NDC.
https://developer.apple.com/documentation/metal/hello_triangle
Quote:
"The main task of a vertex function (also known as a vertex shader) is to process incoming vertex data and map each vertex to a position in the viewport. This way, subsequent stages in the pipeline can refer to this viewport position and render pixels to an exact location in the drawable. The vertex function accomplishes this task by translating arbitrary vertex coordinates into normalized device coordinates, also known as clip-space coordinates."
Another quote from comments in sample code:
"The output position of every vertex shader is in clip space (also known as normalized device coordinate space, or NDC)."
Perhaps this is because the tutorial is in 2D? Misleading statements...
You can think of the transition from clip space (-1 to +1 on every axis, for anything inside your image) to screen coordinates, also called viewport space (0 to ResX in X, 0 to ResY in Y, and 0 to 1 in Z, a.k.a. depth), as occurring just before rasterization, after the vertex processor.
When you write a vertex shader, your output is the projected position of the vertex in clip space, but in the fragment shader, each fragment comes with its own screen coordinates and depth.
About Clip Space vs. NDC
Clip Space is, as the name implies, a Space, that is, a Reference Frame, a Coordinate System: a particular choice of origin and set of three axes that you use to specify points and vectors.
Its origin is in the middle of the clip volume, and its three axes are aligned as specified by the API. For example, the point with Cartesian coordinates (+1, 0, 0) of this Space appears at the right edge of the image, and the point with Cartesian coordinates (-1, 0, 0) at the left edge.
NDC (Normalized Device Coordinates) are, as the name implies, a set of coordinates: they are the three Cartesian coordinates of a point in Clip Space. For example, take a point in Clip Space with homogeneous coordinates (3, 0, 0, 3), which you can also express as (30, 0, 0, 30) and in many other ways, and which has Cartesian coordinates (1, 0, 0): its NDC are (1, 0, 0).
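A tiny sketch of that scale-invariance of homogeneous coordinates (Vec4/Vec3 are illustrative types, not from a library):

#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// Homogeneous coordinates name the same point up to a scale factor;
// dividing by w recovers the Cartesian (NDC) coordinates.
Vec3 cartesian(const Vec4& h) {
    return { h.x / h.w, h.y / h.w, h.z / h.w };
}

int main() {
    Vec4 a = {  3.0f, 0.0f, 0.0f,  3.0f };
    Vec4 b = { 30.0f, 0.0f, 0.0f, 30.0f };        // same point, scaled by 10
    Vec3 ca = cartesian(a), cb = cartesian(b);
    std::printf("(%g, %g, %g) and (%g, %g, %g)\n",
                ca.x, ca.y, ca.z, cb.x, cb.y, cb.z); // both (1, 0, 0)
    return 0;
}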
NDC space is clip space after the perspective divide; the NDC-to-window-space transform is done by the hardware, and it happens after NDC and before rasterization.
There is an API to set the viewport width and height; the default is the same size as the window.
// metal
func setViewport(_ viewport: MTLViewport)
// OpenGL
void glViewport(GLint x, GLint y, GLsizei width, GLsizei height);
In OpenGL's NDC space, x, y, and z range over [-1, 1]. In Metal, z ranges from 0 to 1.
NDC space is usually a left-handed coordinate system.

About OpenGL Matrix Multiplications

http://www.cs.uaf.edu/2007/spring/cs481/lecture/01_23_matrices.html
I have just finished reading this, but I have 2 questions about the multiplications.
gl_Position = gl_ProjectionMatrix*gl_ModelViewMatrix*gl_Vertex
This gives the final screen coordinates (x, y) when I multiply them all. What if I only multiply gl_ModelViewMatrix * gl_Vertex, or gl_ProjectionMatrix * gl_Vertex? And what does gl_Vertex mean alone?
gl_Vertex is the vertex coordinate as it enters the pipeline, i.e. in object (model) space; if the model transform is identity, that coincides with world space.
vertices and the eye may be arbitrarily placed and oriented in world space.
After multiplication with the ModelViewMatrix, you get a vertex coordinate in 'eye space', a coordinate system with the eye at (0, 0, 0). Multiplying by the projection matrix (and doing the homogeneous divide) gives you coordinates in normalized device space. Those are not pixel coordinates yet, but a normalized coordinate system with (0, 0, 0) in the center of the screen/window. The viewport transform comes last; it maps the image to window (pixel) coordinates.
An explanation is given in:
http://www.srk.fer.hr/~unreal/theredbook/
chapter 3.
What if i only multiply gl_ModelViewMatrix*gl_Vertex or gl_ProjectionMatrix*gl_Vertex ?
Then the projection matrix (in the first case) or the model-view matrix (in the second case) will not take effect. If they are identity matrices, you will not see any difference. If they are not, then your view angle or position might be wrong.
And what does gl_Vertex mean alone ?
gl_Vertex is the vertex coordinate that is passed to the input of the pipeline.
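A hedged sketch of the whole chain using GLM in place of the fixed-function matrices (the variable names and parameters are mine):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 modelView  = glm::translate(glm::mat4(1.0f),
                                          glm::vec3(0.0f, 0.0f, -5.0f));
    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                            4.0f / 3.0f, 0.1f, 100.0f);
    glm::vec4 vertex(1.0f, 1.0f, 0.0f, 1.0f);   // the raw input position

    glm::vec4 eye  = modelView * vertex;        // eye space: camera at origin
    glm::vec4 clip = projection * eye;          // clip space (before the divide)
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;  // normalized device coordinates

    // Using only one of the matrices skips a stage: projection * vertex
    // would treat the raw position as if it were already in eye space.
    (void)ndc;
    return 0;
}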