What exactly are eye space coordinates? - opengl

As I am learning OpenGL I often stumble upon so-called eye space coordinates.
If I am right, you typically have three matrices: the model matrix, view matrix and projection matrix. Though I am not entirely sure how the mathematics behind them works, I do know that they convert coordinates to world space, view space and screen space.
But where is the eye space, and which matrices do I need to convert something to eye space?

Perhaps the following illustration showing the relationship between the various spaces will help:
Whether you're using the fixed-function pipeline (you are if you call glMatrixMode(), for example) or shaders, the operations are identical; it's just a matter of whether you code them directly in a shader or let the OpenGL pipeline do the work for you.
While discussing things in terms of the fixed-function pipeline is frowned upon these days, it makes the conversation simpler, so I'll start there.
In legacy OpenGL (i.e., versions before OpenGL 3.1, or using compatibility profiles), two matrix stacks are defined: model-view and projection. When an application starts, the matrix at the top of each stack is an identity matrix (1.0 on the diagonal, 0.0 for all other elements). If you draw coordinates in that space, you're effectively rendering in normalized device coordinates (NDCs), which clips out any vertices outside of the range [-1, 1] in X, Y, and Z. The viewport transform (as set by calling glViewport()) is what maps NDCs into window coordinates (well, viewport coordinates, really, but most often the viewport and the window are the same size and location), and the depth value to the depth range (which is [0, 1] by default).
Now, in most applications, the first transformation that's specified is the projection transform, which comes in two varieties: orthographic and perspective projections. An orthographic projection preserves parallel lines, and is usually used in scientific and engineering applications, since it doesn't distort the relative lengths of line segments. In legacy OpenGL, orthographic projections are specified by either glOrtho or gluOrtho2D. More commonly used are perspective transforms, which mimic how the eye works (i.e., objects far from the eye appear smaller than those close to it), and are specified by either glFrustum or gluPerspective. A perspective projection defines a viewing frustum, which is a truncated pyramid anchored at the eye's location and specified in eye coordinates. In eye coordinates, the "eye" is located at the origin, looking down the -Z axis. Your near and far clipping planes are specified as distances along the -Z axis. If you render in eye coordinates, any geometry specified between the near and far clipping planes and inside the viewing frustum will not be culled, and will be transformed to appear in the viewport. Here's a diagram of a perspective projection and its relationship to the image plane.
The eye is located at the apex of the viewing frustum.
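For concreteness, here's a minimal fixed-function sketch of a projection plus a model-view transform (discussed next); the field of view, clip distances and transform values are only placeholders, not anything mandated by OpenGL:
// Legacy (fixed-function) OpenGL: projection first, then the model-view transform.
#include <GL/gl.h>
#include <GL/glu.h>

void setupTransforms(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // 60-degree vertical field of view; near/far are distances along -Z in eye coordinates.
    gluPerspective(60.0, (double)width / (double)height, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Viewing transform first (positions the world relative to the eye at the origin) ...
    gluLookAt(0.0, 0.0, 5.0,   // eye position
              0.0, 0.0, 0.0,   // point looked at
              0.0, 1.0, 0.0);  // up direction
    // ... then modeling transforms for each object.
    glTranslated(1.0, 0.0, 0.0);
}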
The last transformation to discuss is the model-view transform, which is responsible for moving coordinate systems (and not objects; more on that in a moment) such that they are well positioned relative to the eye and the viewing frustum. Common modeling transforms are translations, scales, rotations, and shears (for which there's no native support in OpenGL).
Generally speaking, 3D models are modeled around a local coordinate system (e.g., specifying a sphere's coordinates with the origin at the center). Modeling transforms are used to move the "current" coordinate system to a new location so that when you render your locally-modeled object, it's positioned in the right place.
There's no mathematical difference between a modeling transform and a viewing transform. It's just that modeling transforms are usually applied to specific models and are controlled by glPushMatrix() and glPopMatrix() operations, whereas a viewing transformation is usually specified first and affects all of the subsequent modeling operations.
Now, if you're doing this in modern OpenGL (core profile versions 3.1 and forward), you have to do all these operations logically yourself (you might fold the model-view and projection transformations into a single matrix and do a single multiply). Matrices are usually specified as shader uniforms. There are no matrix stacks, no separation of model-view and projection transformations, and you need to get your math correct to emulate the pipeline. (BTW, the perspective division and viewport transform steps are performed by OpenGL after the completion of your vertex shader; you don't need to do that math. You can, and it doesn't hurt anything, unless you fail to set w to 1.0 in your gl_Position vertex shader output.)
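A rough sketch of that modern approach, assuming GLM for the matrix math, an OpenGL loader providing GLuint, and a uniform named u_mvp (all choices of this sketch, not part of OpenGL itself):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build and upload the combined transform; the values are illustrative only.
void uploadTransforms(GLuint program, float aspect)
{
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
    glm::mat4 view       = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // eye
                                       glm::vec3(0.0f, 0.0f, 0.0f),   // target
                                       glm::vec3(0.0f, 1.0f, 0.0f));  // up
    glm::mat4 model      = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
    glm::mat4 mvp        = projection * view * model;

    // The vertex shader then does: gl_Position = u_mvp * vec4(position, 1.0);
    // OpenGL performs the perspective division and viewport transform afterwards.
    glUniformMatrix4fv(glGetUniformLocation(program, "u_mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
}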

Eye space, view space, and camera space are all synonyms for the same thing: the world relative to the camera.

In a rendering, each mesh of the scene is usually transformed by the model matrix, the view matrix and the projection matrix. Finally, the projected scene is mapped to the viewport.
The projection, view and model matrix interact together to present the objects (meshes) of a scene on the viewport.
The model matrix defines the position, orientation and scale of a single object (mesh) in the world space of the scene.
The view matrix defines the position and viewing direction of the observer (viewer) within the scene.
The projection matrix defines the area (volume) with respect to the observer (viewer) which is projected onto the viewport.
Coordinate Systems:
Model coordinates (Object coordinates)
The model space is the coordinate system which is used to define or model a mesh. The vertex coordinates are defined in model space.
World coordinates
The world space is the coordinate system of the scene. Different models (objects) can each be placed in the world space multiple times; together they form the scene.
The model matrix defines the location, orientation and the relative size of a model (object, mesh) in the scene. The model matrix transforms the vertex positions of a single mesh to world space for a single specific positioning. There are different model matrices, one for each combination of a model (object) and a location of the object in the world space.
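A sketch of such a model matrix with GLM (the concrete scale, rotation and translation values are only examples):
// Model matrix: scale first, then rotate, then translate (matrices apply right to left).
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 10.0f, 0.0f))
                * glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0.0f, 0.0f, 1.0f))
                * glm::scale(glm::mat4(1.0f), glm::vec3(5.0f));

// A vertex position of the mesh (w = 1 for positions) transformed to world space:
glm::vec4 worldPosition = model * glm::vec4(1.0f, 0.0f, 0.0f, 1.0f);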
View space (Eye coordinates)
The view space is the local system which is defined by the point of view onto the scene.
The position of the view, the line of sight and the upward direction of the view define a coordinate system relative to the world coordinate system. The objects of a scene have to be drawn in relation to this view coordinate system in order to be "seen" from the viewing position. The inverse of the matrix that describes the view coordinate system is called the view matrix; it transforms from world coordinates to view coordinates.
In general, world coordinates and view coordinates are Cartesian coordinates.
The view coordinates system describes the direction and position from which the scene is looked at. The view matrix transforms from the world space to the view (eye) space.
If the coordinate system of the view space is a right-handed system where the X-axis points to the right and the Y-axis points up, then the Z-axis points out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
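A common way to build such a view matrix is glm::lookAt, which takes the position of the view, the target point and the up direction (the values below are only examples):
glm::vec3 eye(0.0f, 2.0f, 10.0f);    // position of the viewer in world space
glm::vec3 target(0.0f, 0.0f, 0.0f);  // point the viewer looks at
glm::vec3 up(0.0f, 1.0f, 0.0f);      // approximate up direction

// lookAt returns the inverse of the matrix that places the viewer in the world,
// i.e. the matrix that transforms world coordinates to view (eye) coordinates.
glm::mat4 view = glm::lookAt(eye, target, up);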
Clip space coordinates are Homogeneous coordinates. In clip space the clipping of the scene is performed.
A point is in clip space if the x, y and z components are in the range defined by the inverted w component and the w component of the homogeneous coordinates of the point:
-w <= x, y, z <= w.
The projection matrix describes the mapping from the 3D points of a scene to 2D points on the viewport. The projection matrix transforms from view space to clip space. The coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
In an orthographic projection, this volume is defined by 6 distances (left, right, bottom, top, near and far) relative to the viewer's position.
If the left, bottom and near distances are negative and the right, top and far distances are positive (as in normalized device space), this can be imagined as a box around the viewer.
All the objects (meshes) which are inside this volume are "visible" on the viewport. All the objects (meshes) which are (partly) outside of this volume are clipped at the borders of the volume.
This means that with an orthographic projection, objects "behind" the viewer may still be "visible". This may seem unnatural, but this is how orthographic projection works.
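With GLM, such an orthographic volume can be set up as below (the six distances are example values):
// Orthographic box around the viewer: x in [-10, 10], y in [-10, 10],
// near distance -1 (slightly behind the viewer) and far distance 100 in front.
glm::mat4 projection = glm::ortho(-10.0f, 10.0f,   // left, right
                                  -10.0f, 10.0f,   // bottom, top
                                   -1.0f, 100.0f); // near, far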
In a perspective projection, the viewing volume is a frustum (a truncated pyramid) whose apex is the viewing position.
The direction of view (line of sight) and the near and far distances define the planes which truncate the pyramid to a frustum (the direction of view is the normal vector of these planes).
The left, right, bottom and top distances define the distances from the intersection of the line of sight with the near plane to the side faces of the frustum (on the near plane).
This causes the scene to look as if it were seen through a pinhole camera.
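A typical perspective volume is specified by a field of view, the aspect ratio of the viewport and the near and far distances (example values again):
float aspect = 1280.0f / 720.0f;        // width / height of the viewport
glm::mat4 projection = glm::perspective(
    glm::radians(60.0f),                // vertical field of view
    aspect,
    0.1f,                               // near distance (must be > 0)
    100.0f);                            // far distance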
One of the most common mistakes, when an object is not visible on the viewport (screen is all "black"), is that the mesh is not within the view volume which is defined by the projection and view matrix.
Normalized device coordinates
The normalized device space is a cube, with the left, bottom, near corner at (-1, -1, -1) and the right, top, far corner at (1, 1, 1).
The normalized device coordinates are the clip space coordinates divided by the w component of the clip coordinates. This is called the perspective divide.
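In code, the perspective divide is just a division by w (a sketch; projection, view, model and the vertex position are assumed from the steps above):
glm::vec4 clip = projection * view * model * glm::vec4(1.0f, 0.0f, 0.0f, 1.0f);
glm::vec3 ndc  = glm::vec3(clip) / clip.w;   // normalized device coordinates in [-1, 1]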
Window coordinates (Screen coordinates)
The window coordinates are the coordinates of the viewport rectangle; they are the coordinates used in the rasterization process.
The normalized device coordinates are linearly mapped to the viewport rectangle (Window Coordinates / Screen Coordinates) and to the depth for the depth buffer.
The viewport rectangle is defined by glViewport. The depth range is set by glDepthRange and is by default [0, 1].
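The fixed mapping from normalized device coordinates to window coordinates can be written out as follows; x0, y0, w, h stand for the glViewport parameters and n, f for the glDepthRange values. OpenGL does this itself, the sketch is only illustrative:
float xw    = x0 + (ndc.x * 0.5f + 0.5f) * w;        // window x
float yw    = y0 + (ndc.y * 0.5f + 0.5f) * h;        // window y
float depth = n  + (ndc.z * 0.5f + 0.5f) * (f - n);  // value written to the depth buffer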

Related

opengl projection- where is the mapping to NDC

Where is the mapping to NDC?
According to my understanding, the projection matrix does two things: First, it clips the view space to form the frustum. It only keeps the vertices that fall into the frustum but clips others that fall outside. Second, it maps the coordinates within the clipped space to NDC [-1, 1].
But I only see that we create a frustum (to use for clipping) by defining a perspective matrix with glm::perspective or glm::ortho. Where are the steps that do the mapping between the frustum and NDC?
Or do we just need to define the frustum, and OpenGL will do the two steps for us automatically?
According to my understanding, the projection matrix does two things: first, it clips the view space to form the frustum. It only keeps the vertices that fall into the frustum but clips the others that fall outside. Second, it maps the coordinates within the clipped space to NDC [-1, 1].
It does neither of these things.
A 4x4 matrix is just a transformation. A transformation by itself cannot clip anything. And while a matrix can transform coordinates into a [-1, 1] space, that's not necessarily what the clip space provided by the vertex shader will be. And it's certainly not the clip space used by 3D projection matrices.
The only mandated job of the vertex shader (or the last vertex processing shader stage) is to generate a clip-space position for each vertex. Clip space is defined by OpenGL to be the 4D space in which positions whose XYZ components are within the range [-W, W] are considered "visible". Here, "W" is the W component of that same position. So for each XYZW position, "visible" means that its XYZ components are within the range defined by its own W.
The actual clipping of primitives happens after the vertex shader. Each primitive's vertices are clipped against the previously-defined 4D clip space.
A projection frustum is created by transforming positions into a clip-space such that those 4D positions outside of the frustum are outside of the [-W, W] range, and those inside of the frustum are inside of that range.
So it is not that the transformation clips anything; it merely sets up the data so that OpenGL's clipping system will clip things correctly.
Similarly, clip-space is not NDC space (unless the W component for the position is 1). NDC space is defined by taking the XYZ of a clip-space position and dividing it by that position's W component (and if W is 1, then this obviously changes nothing). Clip and NDC space are two separate spaces, and clipping happens before NDC space. You can conceptually think of clipping as being done against the [-1, 1] range of NDC space. After all, clip-space is on the [-W, W] range, so if you divide that by W, you get the range [-1, 1].
But it's still important to remember that clip space isn't NDC space.
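A small sketch of that ordering for a single point (a hypothetical helper, not OpenGL API; real clipping of primitives also creates new vertices at the volume boundary, which this check does not do):
#include <cmath>
#include <glm/glm.hpp>

// Classify a model-space point against the clip volume, then derive its NDC.
bool pointVisible(const glm::mat4& mvp, const glm::vec3& position, glm::vec3& ndcOut)
{
    glm::vec4 clip = mvp * glm::vec4(position, 1.0f);

    // Clipping tests against the 4D clip-space range [-W, W] ...
    bool inside = std::abs(clip.x) <= clip.w &&
                  std::abs(clip.y) <= clip.w &&
                  std::abs(clip.z) <= clip.w;

    // ... and only afterwards does the division by W produce NDC in [-1, 1].
    ndcOut = glm::vec3(clip) / clip.w;
    return inside;
}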

OpenGL where to process vertices

I want to use Java and OpenGL (through LWJGL) to render a 3D object. I also use GLFW to set up windows, contexts, etc.
I have a custom class to represent a unit sphere, which stores coordinates of every vertex as well as an array of integers to represent the triangle mesh. It also stores the translation, scale factor and rotation (so that any sphere can be built from the unit sphere model).
The vertices are world coordinates e.g. (1, 0, 0) with s.f. as 5, translation (0, 10, 0) and rotation (0,0,0).
I also have a custom camera object which stores the position of the camera and the orientation (in radians) to each axis (yaw, roll, pitch). It also stores the distance to the projection plane to allow a changeable FOV.
I know that in order to display the sphere, I need to apply a series of transformations to all vertices. My question is, where should I apply each transformation?
OpenGL screen coordinates range from (-1,-1) to (1, 1).
My current solution (which I would like to verify is optimal) is as follows:
In CPU:
Apply model transform on the sphere (so some vertex is now [5, 10, 0]). My model transform is in the following order: scale, rotation [z,y,x] then translation. Buffer these into vbo/vao and load into shader along with camera position, rotation and distance to PP.
In vertex shader:
apply camera transform on world coordinates (build the transformation matrix here or load it in from CPU too?)
calculate 2D screen coordinates by similar triangles
calculate OpenGL coordinates by ratios of screen resolution
pass resulting vertex coordinates into the fragment shader
Have I understood the process correctly? Should I rearrange any processes? I am reading guides online but they aren't always specific enough - most already store the vertices (in Java) in the OpenGL coordinate system but not world coordinates like me. Some say that the camera is fixed at the origin in OpenGL, though I assume that I need to apply the camera transform for it to be so and display shapes properly.
Thanks

Where is the default camera?

I've read that if I don't initialize it, OpenGL uses the default camera.
What configuration does this camera have? I've been trying with some basic triangles and it looks like the camera is at (0, 0, 1) facing (0, 0, 0); I assume that the up vector is (0, 1, 0).
Can someone tell me if this is correct? And is there any way to change the default camera settings?
OpenGL has no "camera". If you do not use any transformations (or all transformation matrices are the Identity matrix), then you have to specify the vertex coordinates in normalized device space.
The normalized device space is a cube, with the left, bottom, near corner at (-1, -1, -1) and the right, top, far corner at (1, 1, 1).
Hence the first component of the vertex coordinate (x) defines the position from the left (-1) to the right (1). The 2nd component (y) defines the position from the bottom (-1) to the top (1), and the 3rd component (z) defines the depth from near (-1) to far (1).
Thus the "up-vector" is (0, 1, 0).
In general, each mesh of the scene is transformed by the model matrix, the view matrix and the projection matrix. Finally, the projected scene is mapped to the viewport.
The projection, view and model matrix interact together to present the objects (meshes) of a scene on the viewport.
The model matrix defines the position, orientation and scale of a single object (mesh) in the world space of the scene.
The view matrix defines the position and viewing direction of the observer (viewer) within the scene.
The projection matrix defines the area (volume) with respect to the observer (viewer) which is projected onto the viewport.
If you talk about the "camera", then you mean the view matrix. The view matrix transforms a vertex coordinate from world space to view space. If you want a different point of view onto the scene, you have to define an appropriate view matrix.

What is the difference between ProjectionTransformMatrix of VTK and GL_PROJECTION of OpenGL?

I am having profound issues regarding understanding the transformations involved in VTK. OpenGL has fairly good documentation and I was of the impression that VTK is very similar to OpenGL (it is, in many ways). But when it comes to transformations, it seems to be an entirely different story.
This is a good OpenGL documentation about transforms involved:
http://www.songho.ca/opengl/gl_transform.html
The perspective projection matrix in OpenGL (in the symmetric, gluPerspective-style form, with vertical field of view fovy, aspect ratio a, near distance n and far distance f, and c = cot(fovy/2)) is:
c/a  0    0              0
0    c    0              0
0    0   -(f+n)/(f-n)   -2*f*n/(f-n)
0    0   -1              0
I wanted to see if this formula applied in VTK will give me the projection matrix of VTK (by cross-checking with VTK projection matrix).
Relevant Camera and Renderer Parameters:
camera->SetPosition(0,0,20);
camera->SetFocalPoint(0,0,0);
double crSet[2] = {10, 1000};
renderer->GetActiveCamera()->SetClippingRange(crSet);
double windowSize[2];
renderWindow->SetSize(1280,720);
renderWindowInteractor->GetSize(windowSize);
proj = renderer->GetActiveCamera()->GetProjectionTransformMatrix(windowSize[0]/windowSize[1], crSet[0], crSet[1]);
The projection transform matrix I got for this configuration is:
The (3,3) and (3,4) values of the projection matrix (let's say it is indexed 1 to 4 for rows and columns) should be -(f+n)/(f-n) and -2*f*n/(f-n) respectively. In my VTK camera settings, nearz is 10 and farz is 1000, and hence I should get -1.020 and -20.20 respectively in the (3,3) and (3,4) locations of the matrix. But it is -1010 and -10000.
I have changed my clipping range values to see the changes, and the (3,3) position is always nearz+farz, which makes no sense to me. Also, it would be great if someone could explain why it is 3.7320 in the (1,1) and (2,2) positions. And this value DOES NOT change when I change the window size of the renderer window. Quite perplexing to me.
I see in VTKCamera class reference that GetProjectionTransformMatrix() returns the transformation matrix that maps from camera coordinates to viewport coordinates.
VTK Camera Class Reference
This is a nice depiction of the transforms involved in OpenGL rendering:
OpenGL Projection Matrix is the matrix that maps from eye coordinates to clip coordinates. It is beyond doubt that eye coordinates in OpenGL are the same as camera coordinates in VTK. But are the clip coordinates in OpenGL the same as the viewport coordinates of VTK?
My aim is to simulate a real webcam camera (already calibrated) in VTK to render a 3D model.
Well, the documentation you linked to actually explains this (emphasis mine):
vtkCamera::GetProjectionTransformMatrix:
Return the projection transform matrix, which converts from camera
coordinates to viewport coordinates. This method computes the aspect,
nearz and farz, then calls the more specific signature of
GetCompositeProjectionTransformMatrix
with:
vtkCamera::GetCompositeProjectionTransformMatrix:
Return the concatenation of the ViewTransform and the
ProjectionTransform. This transform will convert world coordinates to
viewport coordinates. The 'aspect' is the width/height for the
viewport, and the nearz and farz are the Z-buffer values that map to
the near and far clipping planes. The viewport coordinates of a point located inside the frustum are in the range
([-1,+1],[-1,+1], [nearz,farz]).
Note that this matches neither OpenGL's window space nor its normalized device space. I find the term "viewport coordinates" for this a poor choice, but be that as it may. What bugs me more is that the matrix actually does not transform to that "viewport space", but to some clip-space equivalent. Only after the perspective divide will the coordinates be in the range given in the above definition of the "viewport space".
But are the clip coordinates in OpenGL the same as the viewport coordinates of VTK?
So the answer to that is a clear no. But it is close. Basically, that projection matrix is just scaled and shifted along the z dimension, and it is easy to convert between the two. You can simply take znear and zfar out of VTK's matrix and put them into the OpenGL projection matrix formula you linked above, replacing just those two matrix elements.
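To illustrate "scaled and shifted along the z dimension" (this is the general relationship, not VTK's actual code): remapping the depth output of an OpenGL-style projection matrix from [-1, 1] to some range [zmin, zmax] only changes its third row, and the conversion can be inverted the same way.
// Remap the depth output of an OpenGL-style projection matrix from [-1, 1] to [zmin, zmax].
// glm::mat4 is column-major, so the third row of column c is m[c][2].
glm::mat4 remapDepthRange(const glm::mat4& proj, float zmin, float zmax)
{
    const float scale = (zmax - zmin) * 0.5f;
    const float bias  = (zmax + zmin) * 0.5f;
    glm::mat4 m = proj;
    for (int c = 0; c < 4; ++c)
        m[c][2] = scale * proj[c][2] + bias * proj[c][3];
    return m;
}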

Perspective Projection - OpenGL

I am confused about the position of objects in OpenGL. The eye position is (0, 0, 0) and the projection plane is at z = -1. At this point, will the objects be in between the eye position and the plane (z = 0 to -1)? Or behind the projection plane? And is there any particular reason for it being so?
First of all, there is no eye in modern OpenGL. There is also no camera. There is no projection plane. You define these concepts by yourself; the graphics library does not give them to you. It is your job to transform your object from your coordinate system into clip space in your vertex shader.
I think you are thinking about projection wrong. Projection doesn't move the objects in the same sense that a translation or rotation matrix might. If you take a look at the link above, you can see that in order to render a perspective projection, you calculate the x and y components of the projected coordinate with R = V(ez/pz), where ez is the depth of the projection plane, pz is the depth of the object, V is the coordinate vector, and R is the projection. Almost always you will use ez = 1, which turns that equation into R = V/pz and allows you to place pz in the w coordinate so that OpenGL does the "perspective divide" for you. Assuming you have your eye and plane in the correct places, projecting a coordinate is almost as simple as dividing by its z coordinate. Your objects can be anywhere in 3D space (even behind the eye), and you can project them onto your plane so long as you don't divide by zero or invalidate the z coordinate that you use for depth testing.
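Written out with the ez = 1 convention from above (just the arithmetic for an arbitrary eye-space point; the sign handling assumes the usual looking-down-negative-Z eye space):
// Project an eye-space point onto the plane one unit in front of the eye (z = -1).
// A projection matrix typically places -pz into the w component instead, so that
// OpenGL performs this division for you.
glm::vec3 p(2.0f, 1.0f, -4.0f);     // eye-space point in front of the eye (negative z)
float projectedX = p.x / -p.z;      // 2 / 4 = 0.5
float projectedY = p.y / -p.z;      // 1 / 4 = 0.25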
There is no "projection plane" at z=-1. I don't know where you got this from. The classic GL perspective matrix assumes an eye space where the camera is located at origin and looking into -z direction.
However, there is the near plane at z < 0, and everything in front of the near plane is going to be clipped. You cannot put the near plane at z = 0, because then you would end up with a division by zero when trying to project points on that plane. That is one reason why the viewing volume isn't a pyramid with the eye point at the apex, but a pyramid frustum.
This is, by the way, also true for real-world eyes and cameras. The projection center lies behind the lens, so no object can get infinitely close to the optical center in either case.
The other reason why you want a big near clipping distance is the precision of the depth buffer. The whole depth range between the near and the far plane has to be mapped to depth values with a limited number of bits, typically 24. So you want to keep the far plane as close as possible, and push the near plane away as far as possible. The non-linear mapping of the screen-space z coordinate makes this even more important, as the precision is non-uniformly distributed over that range.
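A small numeric sketch of that non-linearity, using the standard perspective depth mapping and example near/far values (not tied to any particular application):
// Window-space depth in [0, 1] for a point at eye-space distance d, given near n and far f.
float windowDepth(float d, float n, float f)
{
    float zNdc = (f + n) / (f - n) - 2.0f * f * n / ((f - n) * d);
    return zNdc * 0.5f + 0.5f;
}
// With n = 0.1 and f = 100: windowDepth(1, n, f) is about 0.90 and windowDepth(10, n, f)
// about 0.99, i.e. roughly 90% of the depth values are spent on the first 1% of the
// visible distance range.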