opengl projection - where is the mapping to NDC - c++

Where is the mapping to NDC?
According to my understanding, the projection matrix does two things: First, it clips the view space to form the frustum. It only keeps the vertices that fall into the frustum but clips others that fall outside. Second, it maps the coordinates within the clipped space to NDC [-1, 1].
But I only see that we create a frustum (to use for clipping) by defining a perspective matrix with glm::perspective or glm::ortho. Where are the steps that do the mapping between the frustum and NDC?
Or we just need to define the frustum and OpenGL will do the two steps for us automatically?

According to my understanding, the projection matrix does two things: first, it clips the view space to form the frustum. It only keeps the vertices that fall into the frustum but clips others that fall outside. Second, it maps the coordinates within the clipped space to NDC [-1, 1].
It does neither of these things.
A 4x4 matrix is just a transformation. A transformation by itself cannot clip anything. And while a matrix can transform coordinates into a [-1, 1] space, that's not necessarily what the clip space provided by the vertex shader will be. And it's certainly not the clip space used by 3D projection matrices.
The only mandated job of the vertex shader (or the last vertex processing shader stage) is to generate a clip-space position for each vertex. Clip space is defined by OpenGL to be the 4D space in which positions whose XYZ components are within the range [-W, W] are considered "visible". Here, "W" is the W component of the position itself. So for each XYZW position, "visible" means that its XYZ components lie within the range [-W, W].
The actual clipping of primitives happens after the vertex shader. Each primitive's vertices are clipped against the previously-defined 4D clip space.
A projection frustum is created by transforming positions into a clip-space such that those 4D positions outside of the frustum are outside of the [-W, W] range, and those inside of the frustum are inside of that range.
So it is not that the transformation clips anything; it merely sets up the data so that OpenGL's clipping system will clip things correctly.
Similarly, clip-space is not NDC space (unless the W component for the position is 1). NDC space is defined by taking the XYZ of a clip-space position and dividing it by that position's W component (and if W is 1, then this obviously changes nothing). Clip and NDC space are two separate spaces, and clipping happens before NDC space. You can conceptually think of clipping as being done against the [-1, 1] range of NDC space. After all, clip-space is on the [-W, W] range, so if you divide that by W, you get the range [-1, 1].
But it's still important to remember that clip space isn't NDC space.
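As a minimal sketch (assuming GLM; the values are placeholders), this is the distinction in code: the projection matrix only produces a clip-space position, and the divide by w that yields NDC is something OpenGL performs after clipping.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::vec4 eyePos(0.5f, 0.5f, -2.0f, 1.0f);   // a point in eye space
    glm::vec4 clip = proj * eyePos;              // clip space: w is generally not 1
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;   // the divide OpenGL does after clipping
    std::printf("clip = (%f, %f, %f, %f)\n", clip.x, clip.y, clip.z, clip.w);
    std::printf("ndc  = (%f, %f, %f)\n", ndc.x, ndc.y, ndc.z);
}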

Related

My understanding on the projection matrix, perspective division, NDC and viewport transform

I was quite confused about how the projection matrix worked, so I researched it and discovered a few other things; after researching for a few days, I just wanted to confirm that my understanding is correct. I might use a few wrong terms, but my brain was exhausted after writing this. A few topics, like screen coordinates and the window transform, I only researched briefly, so I didn't write much about them and my knowledge might be incorrect. Is everything I've written here correct or mostly correct? Correct me on anything if I'm wrong.
What does the projection matrix do?
So the perspective projection matrix defines a frustum, which is a truncated pyramid. Anything outside of that frustum will be clipped; I'll get more on that later. The perspective projection matrix also adds perspective. To make the vertices follow the rules of perspective, the perspective projection matrix manipulates each vertex's w component (the homogeneous component) depending on how far the vertex is from the viewer (the farther the vertex, the larger the w component becomes).
Why and how does the w component create the perspective effect?
The w component creates the perspective effect because in the perspective division (which happens in the vertex post-processing stage), the x, y, and z components are divided by the w component, so the vertex coordinate is scaled down depending on how big the w component is. So essentially, the w component scales an object smaller the farther away it is.
Example:
Vertex position (1, 1, 2, 2).
Here, the vertex is 2 away from the viewer. In perspective division the x, y, and z will be divided by 2 because 2 is the w component.
(1/2, 1/2, 2/2) = (0.5, 0.5, 1).
As shown here, the vertex coordinate has been scaled by half.
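The same arithmetic as a tiny GLM sketch (the vertex is the one from the example above):
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    glm::vec4 clip(1.0f, 1.0f, 2.0f, 2.0f);    // the example vertex in clip space
    glm::vec3 ndc = glm::vec3(clip) / clip.w;  // perspective division
    std::printf("(%g, %g, %g)\n", ndc.x, ndc.y, ndc.z);  // prints (0.5, 0.5, 1)
}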
How does the projection matrix decide what will be clipped?
The near and far planes are the limits of what the viewer can see (anything beyond the far plane or in front of the near plane will be clipped). Every coordinate also has to go through a clipping check to see if it has to be clipped. The clipping check tests whether the vertex coordinate is within the range -w to w; if it is outside of that range, it will be clipped.
Let's say I have a vertex with a position of (2, 130, 90, 90).
x value is 2
y value is 130
z value is 90
w value is 90
This vertex must be within the range of -90 to 90. The x and z values are within the range, but the y value goes beyond the range, thus the vertex will be clipped.
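That per-component check can be written out directly. A hedged sketch in C++ with GLM (the helper name is made up for illustration; note that the GPU actually clips primitives against this volume rather than simply discarding whole vertices):
#include <glm/glm.hpp>
#include <cstdio>

// Hypothetical helper: true if a clip-space position lies inside the clip volume.
bool insideClipVolume(const glm::vec4& v) {
    return -v.w <= v.x && v.x <= v.w &&
           -v.w <= v.y && v.y <= v.w &&
           -v.w <= v.z && v.z <= v.w;
}

int main() {
    glm::vec4 v(2.0f, 130.0f, 90.0f, 90.0f);  // the example vertex
    std::printf("%s\n", insideClipVolume(v) ? "kept" : "clipped");  // clipped (y > w)
}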
So after the vertex shader is finished, the next step is vertex post-processing. In vertex post-processing, clipping happens, and perspective division happens, where clip space is converted into NDC (normalized device coordinates). The viewport transform also happens there, where NDC is converted to window space.
What does perspective division do?
Perspective division essentially divides the x, y, and z components of a vertex by the w component. Doing this actually does two things: it converts clip space to normalized device coordinates, and it adds perspective by scaling the vertices.
What is Normalized Device Coordinates?
Normalized Device Coordinates is the coordinate system where all coordinates are condensed into an NDC box where each axis is in the range of -1 to +1.
After the NDC stage, the viewport transform happens, where all the NDC coordinates are converted to screen coordinates. NDC space becomes window space.
If an NDC coordinate is (0.5, 0.5, 0.3), it will be mapped onto the window based on what the programmer provided in the function glViewport. If the viewport is 400x300, the NDC coordinate will be placed at pixel 300 on the x axis ((0.5 + 1)/2 × 400) and pixel 225 on the y axis ((0.5 + 1)/2 × 300).
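The mapping itself is a linear rescale; a minimal sketch (ignoring the viewport's x/y offset and the depth range for brevity):
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    glm::vec3 ndc(0.5f, 0.5f, 0.3f);
    float width = 400.0f, height = 300.0f;     // from glViewport(0, 0, 400, 300)
    float xw = (ndc.x + 1.0f) / 2.0f * width;  // -> 300
    float yw = (ndc.y + 1.0f) / 2.0f * height; // -> 225
    std::printf("window = (%g, %g)\n", xw, yw);
}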
The perspective projection matrix does not decide what is clipped. After transforming a view-space coordinate by the projection matrix, you get a clip-space coordinate. This is a homogeneous coordinate. Based on this coordinate, the rendering pipeline clips the scene. The clipping rule is -w <= x, y, z <= w. In the following stage of the rendering pipeline, the clip-space coordinates are transformed into normalized device space by the perspective divide (x', y', z') = (x/w, y/w, z/w). This division by the w component gives the perspective effect. (See also What exactly are eye space coordinates? and Transform the modelMatrix.)
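Putting that pipeline together as a hedged GLM sketch (the matrices and the vertex are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
    glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),
                                  glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj  = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

    glm::vec4 local(1.0f, 1.0f, 0.0f, 1.0f);      // model-space vertex
    glm::vec4 clip = proj * view * model * local; // what the vertex shader outputs
    // The pipeline clips against -w <= x, y, z <= w, then divides by w:
    glm::vec3 ndc = glm::vec3(clip) / clip.w;
    (void)ndc;
}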

Coordinate System [-1, 1]

I am confused about how the OpenGL coordinate system works. I know you start with object coordinates -- everything defined in its own system. Then by applying a matrix, the coordinates change to world coordinates. By applying another matrix, you have view coordinates. Then if you're working in 3D, you can apply a perspective matrix. In the end, you are left with a set of coordinates which likely are not in [-1, 1]. How does OpenGL know how to normalize them to [-1, 1]? How does it know what to clip out? In the shader, gl_Position is just given your coordinates; it doesn't know that they have been through several transformations. I know that a view-to-normalized-coordinates matrix involves a translation and a scale, but we never explicitly make such a matrix in OpenGL. Does OpenGL use its own hidden matrix to translate the coordinates passed to gl_Position into normalized coordinates?
The deprecated fixed-function vertex transformations are explained in https://www.opengl.org/wiki/Vertex_Transformation
Shader-based rendering is likely to use the same or very similar math for each transformation step. The missing step between gl_Position and device coordinates is the perspective divide (as LJ commented), where xyzw coordinates are converted to xyz coordinates. xyzw coordinates are homogeneous coordinates for 3-dimensional coordinates that use 4 components to represent a location.
https://en.wikipedia.org/wiki/Homogeneous_coordinates

When does the transition from clip space to screen coordinates happen?

I was studying the rendering pipeline, and when I got to the clipping stage it was explained that from the view (eye or camera) space we pass to the clip space, also called normalized device space (NDC), which is a cubic space from -1 to 1.
However, I don't understand when the passage from this space to the screen coordinate space happens:
Right after clipping and before rasterization?
After rasterization and before scissor and z-test?
At the end just before writing on the frame buffer?
No, clip space and NDC space are not the same thing.
Clip space is actually one step away from NDC: all coordinates are divided by Clip.W to produce NDC. Anything outside of the range [-1, 1] in the resulting NDC space corresponds to a point that is outside of the clipping volume. There is a reason the coordinate space before NDC is called clip space ;)
Strictly speaking, however, NDC space is not necessarily cubic. It is true that NDC space is a cube in OpenGL, but in Direct3D it is not. In D3D the Z coordinate in NDC space ranges from 0.0 to 1.0, while it ranges from -1.0 to 1.0 in GL. X and Y behave the same in GL and D3D (that is, they range from -1.0 to 1.0). NDC is a standard coordinate space, but it has different representations in different APIs.
Lastly, the transformation from NDC space to screen space (AKA window space) occurs during rasterization and is defined by your viewport and depth range. Fragment locations really would not make sense in any other coordinate space, and this is what rasterization produces: fragments.
Update:
Introduced in OpenGL 4.5, the extension GL_ARB_clip_control allows you to adopt D3D's NDC convention in GL.
Traditional OpenGL behavior is:
glClipControl(GL_LOWER_LEFT, GL_NEGATIVE_ONE_TO_ONE);
Direct3D behavior can be achieved through:
glClipControl(GL_UPPER_LEFT, GL_ZERO_TO_ONE); // Y-axis is inverted in D3D
Clip space and NDC (normalized device coordinates) are not the same thing, otherwise they wouldn't have different names.
Clip space is the space points are in after transformation by the projection matrix, but before the normalisation by w.
NDC space is the space points are in after the normalisation by w.
http://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/projection-matrix-GPU-rendering-pipeline-clipping
Camera space --->
× projection matrix --->
Clip space (before normalisation) --->
Clipping --->
Normalisation by w (x/w, y/w, z/w) --->
NDC space (in the range [-1, 1] in x and y)
Apparently, according to Apple, clip space is the same as NDC.
https://developer.apple.com/documentation/metal/hello_triangle
Quote:
"The main task of a vertex function (also known as a vertex shader) is to process incoming vertex data and map each vertex to a position in the viewport. This way, subsequent stages in the pipeline can refer to this viewport position and render pixels to an exact location in the drawable. The vertex function accomplishes this task by translating arbitrary vertex coordinates into normalized device coordinates, also known as clip-space coordinates."
Another quote from comments in sample code:
"The output position of every vertex shader is in clip space (also known as normalized device coordinate space, or NDC)."
Perhaps this is because the tutorial is in 2D, where w is 1 and the two spaces coincide? Misleading statements.
You can think of the transition from clip space (-1 to +1 on every axis, for anything inside your image) to screen coordinates, also called viewport space (0 to ResX in X, 0 to ResY in Y, and 0 to 1 in Z, aka depth), as occurring just before rasterization, after the vertex processor.
When you write a vertex shader, your output is the projected position of the vertex in clip space, but in the fragment shader, each fragment comes with its own screen coordinates and depth.
About Clip Space VS NDC
Clip Space is, as the name implies, a Space, that is, a Reference Frame, a Coordinate System: a particular choice of origin and a set of three axes that you use to specify points and vectors.
Its origin is in the middle of the clip volume, and its three axes are aligned as specified by the API. For example, the point with Cartesian coordinates (+1,0,0) of this Space appears at the right end of the image, and the point with Cartesian coordinates (-1,0,0) at the left.
NDC (Normalized Device Coordinates) is, as the name implies, a set of coordinates: they are the three Cartesian coordinates of a point in Clip Space. For example, take a point in Clip space of homogeneous coordinates (3,0,0,3), which you can also express as (30,0,0,30), and in many other ways, and which has Cartesian coordinates (1,0,0): its NDC are (1,0,0).
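The scale-invariance of homogeneous coordinates is easy to verify; a tiny sketch assuming GLM:
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    glm::vec4 a(3.0f, 0.0f, 0.0f, 3.0f);
    glm::vec4 b(30.0f, 0.0f, 0.0f, 30.0f);  // the same point, scaled by 10
    glm::vec3 ndcA = glm::vec3(a) / a.w;     // (1, 0, 0)
    glm::vec3 ndcB = glm::vec3(b) / b.w;     // (1, 0, 0) as well
    std::printf("%g %g\n", ndcA.x, ndcB.x);
}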
NDC space follows clip space: the NDC-to-window-space transform is done by hardware, and it happens after the perspective divide and before rasterization.
There is an API to set the viewport width and height; the default is the same size as the window.
// metal
func setViewport(_ viewport: MTLViewport)
// OpenGL
void glViewport(GLint x, GLint y, GLsizei width, GLsizei height);
In OpenGL's NDC space, x, y, and z range over [-1, 1]. In Metal, z ranges from 0 to 1.
NDC space is usually a left-handed system.

OPENGL clip coordinate

I have a question about OpenGL clip coordinates. For example, a triangle has three vertices, which have been transformed to camera coordinates and then multiplied with the perspective projection matrix into clip coordinates; now clipping begins:
-w <= x <= w, -w <= y <= w, -w <= z <= w.
Do x, y, z, w refer to each vertex's own clip coordinates? So w may not be the same for these three vertices?
Yes, that w will vary per vertex. Most people imagine clip space as the cube [-1,1]^3. However, that is not the clip space, but the normalized device space (NDC). You get from clip space to NDC by doing the perspective divide, i.e. dividing each vertex by its w component. So, in NDC, that clip condition would transform to -1 <= x/w <= 1. However, the clipping cannot be done in NDC (without extra information).
The problem here is that points which lie behind the camera would appear in front of the camera in NDC space. Think about it: x/w is the same as -x/-w. With a typical GL projection matrix, w_clip == -z_eye of the vertex. Also, a point that lies in the camera plane (the plane parallel to the projection plane, but going through the camera itself) will have w = 0, and you can't do any clipping after that divide. The solution is to always do the clipping before the divide; hence the clip space is called "clip space"...
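The sign ambiguity is easy to demonstrate numerically; a small sketch (assuming GLM) of why clipping has to precede the divide:
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    glm::vec4 inFront(1.0f, 0.0f, 0.0f, 2.0f);  // w > 0: in front of the camera
    glm::vec4 behind = -inFront;                // (-1, 0, 0, -2): behind the camera
    glm::vec3 ndc1 = glm::vec3(inFront) / inFront.w;
    glm::vec3 ndc2 = glm::vec3(behind) / behind.w;
    // Both divides yield (0.5, 0, 0): after the divide the two points are
    // indistinguishable, which is why clipping is done in clip space.
    std::printf("%g == %g\n", ndc1.x, ndc2.x);
}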

What exactly are eye space coordinates?

As I am learning OpenGL I often stumble upon so-called eye space coordinates.
If I am right, you typically have three matrices: the model matrix, the view matrix, and the projection matrix. Though I am not entirely sure how the mathematics behind them works, I do know that they convert coordinates to world space, view space, and screen space.
But where is the eye space, and which matrices do I need to convert something to eye space?
Perhaps the following illustration showing the relationship between the various spaces will help:
Depending on whether you're using the fixed-function pipeline (you are if you call glMatrixMode(), for example) or using shaders, the operations are identical - it's just a matter of whether you code them directly in a shader or the OpenGL pipeline aids in your work.
While there's distaste in discussing things in terms of the fixed-function pipeline, it makes the conversation simpler, so I'll start there.
In legacy OpenGL (i.e., versions before OpenGL 3.1, or using compatibility profiles), two matrix stacks are defined: model-view and projection, and when an application starts, the matrix at the top of each stack is an identity matrix (1.0 on the diagonal, 0.0 for all other elements). If you draw coordinates in that space, you're effectively rendering in normalized device coordinates (NDCs), which clips out any vertices outside of the range [-1,1] in X, Y, and Z. The viewport transform (as set by calling glViewport()) is what maps NDCs into window coordinates (well, viewport coordinates, really, but most often the viewport and the window are the same size and location), and maps the depth value into the depth range (which is [0,1] by default).
Now, in most applications, the first transformation that's specified is the projection transform, which comes in two varieties: orthographic and perspective projections. An orthographic projection preserves parallel lines, and is usually used in scientific and engineering applications, since it doesn't distort the relative lengths of parallel line segments. In legacy OpenGL, orthographic projections are specified by either glOrtho or gluOrtho2D. More commonly used are perspective transforms, which mimic how the eye works (i.e., objects far from the eye appear smaller than those close), and are specified by either glFrustum or gluPerspective. A perspective projection defines a viewing frustum, which is a truncated pyramid anchored at the eye's location and specified in eye coordinates. In eye coordinates, the "eye" is located at the origin, looking down the -Z axis. Your near and far clipping planes are specified as distances along the -Z axis. If you render in eye coordinates, any geometry between the near and far clipping planes and inside of the viewing frustum will not be culled, and will be transformed to appear in the viewport. Here's a diagram of a perspective projection and its relationship to the image plane.
The eye is located at the apex of the viewing frustum.
The last transformation to discuss is the model-view transform, which is responsible for moving coordinate systems (and not objects; more on that in a moment) such that they are well positioned relative to the eye and the viewing frustum. Common modeling transforms are translations, scales, rotations, and shears (for which there's no native support in OpenGL).
Generally speaking, 3D models are modeled around a local coordinate system (e.g., specifying a sphere's coordinates with the origin at the center). Modeling transforms are used to move the "current" coordinate system to a new location so that when you render your locally-modeled object, it's positioned in the right place.
There's no mathematical difference between a modeling transform and a viewing transform. It's just that, usually, modeling transforms are applied to specific models and are controlled by glPushMatrix() and glPopMatrix() operations, while a viewing transformation is usually specified first and affects all of the subsequent modeling operations.
Now, if you're doing this in modern OpenGL (core profile versions 3.1 and forward), you have to do all these operations logically yourself (you might specify only one transform, folding both the model-view and projection transformations into a single matrix multiply). Matrices are usually specified as shader uniforms. There are no matrix stacks and no separation of model-view and projection transformations, and you need to get your math correct to emulate the pipeline. (BTW, the perspective division and viewport transform steps are performed by OpenGL after the completion of your vertex shader - you don't need to do that math [you can; it doesn't hurt anything unless you fail to set w to 1.0 in your gl_Position vertex shader output].)
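A hedged sketch of the modern-GL side of this (the program object and the uniform name "uMVP" are assumptions for illustration, and a current GL context is required):
#include <GL/glew.h>  // or another GL loader
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadMVP(GLuint program) {
    glm::mat4 model = glm::mat4(1.0f);
    glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),
                                  glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj  = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 mvp   = proj * view * model;  // fold everything into one matrix

    GLint loc = glGetUniformLocation(program, "uMVP");  // "uMVP" is hypothetical
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
    // The vertex shader would write gl_Position = uMVP * vec4(position, 1.0);
    // the perspective divide and viewport transform are done by OpenGL afterwards.
}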
Eye space, view space, and camera space are all synonyms for the same thing: the world relative to the camera.
In a rendering, each mesh of the scene usually is transformed by the model matrix, the view matrix and the projection matrix. Finally the projected scene is mapped to the viewport.
The projection, view and model matrix interact together to present the objects (meshes) of a scene on the viewport.
The model matrix defines the position, orientation, and scale of a single object (mesh) in the world space of the scene.
The view matrix defines the position and viewing direction of the observer (viewer) within the scene.
The projection matrix defines the area (volume) with respect to the observer (viewer) which is projected onto the viewport.
Coordinate Systems:
Model coordinates (Object coordinates)
The model space is the coordinate system which is used to define or model a mesh. The vertex coordinates are defined in model space.
World coordinates
The world space is the coordinate system of the scene. Different models (objects) can be placed multiple times in world space to form a scene together.
The model matrix defines the location, orientation and the relative size of a model (object, mesh) in the scene. The model matrix transforms the vertex positions of a single mesh to world space for a single specific positioning. There are different model matrices, one for each combination of a model (object) and a location of the object in the world space.
View space (Eye coordinates)
The view space is the local system which is defined by the point of view onto the scene.
The position of the view, the line of sight, and the upwards direction of the view define a coordinate system relative to the world coordinate system. The objects of a scene have to be drawn in relation to the view coordinate system in order to be "seen" from the viewing position. The inverse of the view coordinate system's matrix is named the view matrix. This matrix transforms from world coordinates to view coordinates.
In general, world coordinates and view coordinates are Cartesian coordinates.
The view coordinates system describes the direction and position from which the scene is looked at. The view matrix transforms from the world space to the view (eye) space.
If the coordinate system of the view space is a right-handed system, where the X-axis points to the right and the Y-axis points up, then the Z-axis points out of the view (note: in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
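With GLM you typically don't build that inverse by hand; glm::lookAt constructs the view matrix directly from the position, target, and up direction described above (the values here are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::vec3 eye(0.0f, 2.0f, 5.0f);     // position of the viewer
    glm::vec3 target(0.0f, 0.0f, 0.0f);  // point the viewer looks at
    glm::vec3 up(0.0f, 1.0f, 0.0f);      // upwards direction of the view
    // The view matrix is the inverse of the camera's world transform;
    // it maps world coordinates to view (eye) coordinates.
    glm::mat4 view = glm::lookAt(eye, target, up);
    (void)view;
}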
Clip space coordinates are Homogeneous coordinates. In clip space the clipping of the scene is performed.
A point is in clip space if the x, y and z components are in the range defined by the inverted w component and the w component of the homogeneous coordinates of the point:
-w <= x, y, z <= w.
The projection matrix describes the mapping from 3D points of a scene, to 2D points of the viewport. The projection matrix transforms from view space to the clip space. The coordinates in the clip space are transformed to the normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing with the w component of the clip coordinates.
With orthographic projection, this area (volume) is defined by 6 distances (left, right, bottom, top, near, and far) from the viewer's position.
If the left, bottom, and near distances are negative and the right, top, and far distances are positive (as in normalized device space), this can be imagined as a box around the viewer.
All the objects (meshes) which are inside the volume are "visible" on the viewport. All the objects (meshes) which are (partly) outside of this space are clipped at the borders of the volume.
This means that with orthographic projection, objects "behind" the viewer may be "visible". This may seem unnatural, but this is how orthographic projection works.
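As a sketch, such a box around the viewer could be set up with GLM like this (the distances are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // left, right, bottom, top, near, far: a negative near distance puts the
    // near plane behind the viewer, so geometry behind the viewer is visible.
    glm::mat4 proj = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f);
    (void)proj;
}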
With perspective projection, the viewing volume is a frustum (a truncated pyramid), where the top of the pyramid is the viewing position.
The direction of view (line of sight) and the near and far distances define the planes which truncate the pyramid to a frustum (the direction of view is the normal vector of these planes).
The left, right, bottom, and top distances define the distance from the intersection of the line of sight with the near plane to the side faces of the frustum (on the near plane).
This causes the scene to look as if it were seen from a pinhole camera.
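A corresponding GLM sketch for the perspective case (parameter values are placeholders): glm::perspective derives the left/right/bottom/top extents on the near plane from a field of view and an aspect ratio, while glm::frustum takes them explicitly.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Field-of-view form: vertical FOV, aspect ratio, near and far distances.
    glm::mat4 p1 = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
    // Explicit form: left, right, bottom, top on the near plane, then near and far.
    glm::mat4 p2 = glm::frustum(-0.1f, 0.1f, -0.06f, 0.06f, 0.1f, 100.0f);
    (void)p1; (void)p2;
}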
One of the most common mistakes when an object is not visible on the viewport (the screen is all "black") is that the mesh is not within the view volume which is defined by the projection and view matrices.
Normalized device coordinates
The normalized device space is a cube, with the left, bottom, near corner at (-1, -1, -1) and the right, top, far corner at (1, 1, 1).
The normalized device coordinates are the clip space coordinates divided by the w component of the clip coordinates. This is called the perspective divide.
Window coordinates (Screen coordinates)
The window coordinates are the coordinates of the viewport rectangle. The window coordinates are decisive for the rasterization process.
The normalized device coordinates are linearly mapped to the viewport rectangle (Window Coordinates / Screen Coordinates) and to the depth for the depth buffer.
The viewport rectangle is defined by glViewport. The depth range is set by glDepthRange and is by default [0, 1].
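A sketch of the complete NDC-to-window mapping, including the depth range (the viewport and depth-range values are placeholders):
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    glm::vec3 ndc(0.0f, 0.0f, 0.0f);  // the center of the NDC cube
    // From glViewport(x0, y0, w, h) and glDepthRange(n, f):
    float x0 = 0.0f, y0 = 0.0f, w = 800.0f, h = 600.0f, n = 0.0f, f = 1.0f;
    float xw = (ndc.x + 1.0f) / 2.0f * w + x0;
    float yw = (ndc.y + 1.0f) / 2.0f * h + y0;
    float zw = (ndc.z + 1.0f) / 2.0f * (f - n) + n;
    std::printf("window = (%g, %g), depth = %g\n", xw, yw, zw);  // (400, 300), 0.5
}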