GLSL coordinate space? - opengl

Do I have to map gl_Position to be within (-1,-1),(1,1) for it to appear on screen, or what? i.e., is (-1,-1) the top left, and (1,1) the bottom right?
Before I was just using my library's CreatePerspectiveFieldOfView and it took care of all the conversions for me, but I'm writing my own now and I'm not quite sure what I need to map the coords to...

gl_Position is a 4-vector, so it has more than just those two elements, usually named x, y, z and w. When clipping, the coordinates are clipped to [-w, w], or in other words, after normalization (dividing each coordinate by w), to [-1, 1]. So the normalized coordinates are clipped by the cube (-1, -1, -1) - (1, 1, 1) (unless you have defined more clip planes). As for orientation: by default (-1, -1) is the bottom-left corner of the viewport and (1, 1) the top-right, not top-left and bottom-right.
For more information on clipping, read
http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node28.html
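To make that concrete, here is a minimal vertex shader sketch (the attribute name ndcPos is made up for illustration) that outputs positions which are already in normalized device coordinates, with w = 1 so clipping to [-w, w] is just clipping to [-1, 1]:

#version 330 core
// Positions are assumed to already be in NDC: (-1,-1) is the bottom-left
// of the viewport, (1,1) the top-right; anything outside is clipped.
layout(location = 0) in vec2 ndcPos;   // hypothetical attribute
void main()
{
    // w = 1.0, so the perspective divide leaves x and y unchanged.
    gl_Position = vec4(ndcPos, 0.0, 1.0);
}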

Related

OpenGL converting between different right hand notations

I'm displaying an array of 3D points with OpenGL. The problem is the 3D points are from a sensor where X is forward, Y is to the left, Z is up. From my understanding OpenGL has X to the right, Y up, Z out of the screen. So when I use a lot of the example projection matrices and cameras, the points are obviously not viewed the right way, or at least not in a way that makes sense.
So to compare the two (S for sensor, O for OpenGL):
Xs == -Zo, Ys == -Xo, Zs == Yo.
Now my questions are:
How can I rotate the points from S to O? I tried rotating by 90 degrees around X, then Z, but it doesn't appear to be working.
Do I even need to rotate to the OpenGL convention? Can I make up my own axes (use the sensor's orientation) and change the camera code instead? Or will some assumptions break somewhere in the graphics pipeline?
My implementation based on the answer below:
glm::mat4 model = glm::mat4(0.0f);
model[0][1] = -1;
model[1][2] = 1;
model[2][0] = -1;
// My input to the shader was a mat4 for the model matrix so need to
// make sure the bottom right element is 1
model[3][3] = 1;
The one line in the shader:
// Note that the above matrix is OpenGL to Sensor frame conversion
// I want Sensor to OpenGL so I need to take the inverse of the model matrix
// In the real implementation I will change the code above to
// take inverse before sending to shader
" gl_Position = projection * view * inverse(model) * vec4(lidar_pt.x, lidar_pt.y, lidar_pt.z, 1.0f);\n"
In order to convert the sensor data's coordinate system into OpenGL's right-handed world space, where the X axis points to the right, Y points up and Z points towards the user in front of the screen (i.e. "out of the screen"), you can very easily come up with a 3x3 rotation matrix that will perform what you want:
Since you said that in the sensor's coordinate system X points forward into the screen (which is equivalent to OpenGL's -Z axis), we will map the sensor's (1, 0, 0) axis to (0, 0, -1).
And your sensor's Y axis points to the left (as you said), so that will be OpenGL's (-1, 0, 0). And likewise, the sensor's Z axis points up, so that will be OpenGL's (0, 1, 0).
With this information, we can build the rotation matrix:
/  0  -1   0 \
|  0   0   1 |
\ -1   0   0 /
Simply multiply your sensor data vertices with this matrix before applying OpenGL's view and projection transformation.
So, when you multiply that out with a vector (Sx, Sy, Sz), you get:
Ox = -Sy
Oy = Sz
Oz = -Sx
(where Ox/y/z is the point in OpenGL coordinates and Sx/y/z is the sensor coordinates).
Now, you can just build a transformation matrix (right-multiply it against your usual model-view-projection matrix) and let a shader transform the vertices by it, or you can simply pre-transform the sensor vertices before uploading them to OpenGL.
You hardly ever need angles in OpenGL when you know your linear algebra math.
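For completeness, here is a small sketch (assuming GLM; projection, view and the vertex data are whatever you already use, and the name sensorToOpenGL is made up here) that builds the above rotation as a mat4 and shows where it would slot into the transform chain:

#include <glm/glm.hpp>

// Returns the sensor-to-OpenGL rotation from the answer above,
// embedded in a 4x4 matrix. GLM stores matrices column-major.
glm::mat4 sensorToOpenGL()
{
    glm::mat4 m(1.0f);
    m[0] = glm::vec4( 0.0f, 0.0f, -1.0f, 0.0f); // sensor +X (forward) -> OpenGL -Z
    m[1] = glm::vec4(-1.0f, 0.0f,  0.0f, 0.0f); // sensor +Y (left)    -> OpenGL -X
    m[2] = glm::vec4( 0.0f, 1.0f,  0.0f, 0.0f); // sensor +Z (up)      -> OpenGL +Y
    return m;
}

// In the vertex shader this would then look something like:
//   gl_Position = projection * view * sensorToOpenGL * vec4(lidar_pt, 1.0);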

glm::lookat, perspective clarification in OpenGL

All yalls,
I set up my camera eye on the positive z axis (0, 0, 10), up pointing towards positive y (0, 1, 0), and center towards positive x (2, 0, 0). If y is up and the camera is staring down the negative z axis, then x points left in screen coordinates, in right-handed OpenGL world coordinates.
I also have an object centered at the world origin. As the camera looks more to the left (positive x direction), I would expect my origin-centered object to move right in the resulting screen projection. But I see the opposite is the case.
Am I lacking a fundamental understanding? If so, what? If not, can anyone explain how to properly use glm to generate view and projection matrices, in the default OpenGL right-handed world model, which are sent to shaders?
glm::vec3 _eye(0, 0, 10), _center(2, 0, 0), _up(0, 1, 0);
viewMatrix = glm::lookAt(_eye, _center, _up);
projectionMatrix = glm::perspective(glm::radians(45.0), 6./8., 0.1, 200.);
Another thing I find interesting is that the red line in the image points in the positive x-direction. It literally is the [eye -> (forward + eye)] vector of another camera in the scene, which I extract from the inverse of the viewMatrix. What melts my brain is that when I use that camera's VP matrices, it looks in the direction opposite to the forward direction that was extracted from the inverse of its viewMatrix. I'd really appreciate any insight into this discrepancy as well.
Also worth noting: I built glm 0.9.9 via cmake, and I verified it uses the right-handed, [-1, 1] variants of lookAt and perspective.
resulting image: (not reproduced here)
I would expect my origin-centered object to move right in the resulting screen projection. But I see the opposite is the case.
glm::lookAt defines a view matrix. Its parameters are given in world space, and the center parameter defines the position you are looking at.
The view space is the local system which is defined by the point of view onto the scene.
The position of the view, the line of sight and the upwards direction of the view define a coordinate system relative to the world coordinate system. The view matrix transforms from world space to view (eye) space.
If the coordinate system of the view space is a right-handed system, then the X-axis points to the right, the Y-axis up and the Z-axis out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
The line of sight is the vector from the eye position to the center position:
eye = (0, 0, 10)
center = (2, 0, 0)
up = (0, 1, 0)
los = center - eye = (2, 0, -10)
In this case, if the center of the object is at (0, 0, 0) and you look at (2, 0, 0), then you look at a position to the right of the object, which means that the object is shifted to the left.
This will change if you change the point of view, e.g. to (0, 0, -10), or flip the up vector, e.g. to (0, -1, 0).
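If it helps to see this directly, here is a small sketch (assuming GLM 0.9.9, as in the question) that prints the camera basis encoded in the view matrix returned by glm::lookAt; the rows of its upper-left 3x3 part are the camera's right, up and backward axes expressed in world space:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::vec3 eye(0, 0, 10), center(2, 0, 0), up(0, 1, 0);
    glm::mat4 view = glm::lookAt(eye, center, up);

    // GLM is column-major, so row i of the rotation part is
    // (view[0][i], view[1][i], view[2][i]).
    glm::vec3 right(view[0][0], view[1][0], view[2][0]);
    glm::vec3 camUp(view[0][1], view[1][1], view[2][1]);
    glm::vec3 back (view[0][2], view[1][2], view[2][2]); // opposite of the line of sight

    std::printf("right = (%.3f, %.3f, %.3f)\n", right.x, right.y, right.z);
    std::printf("up    = (%.3f, %.3f, %.3f)\n", camUp.x, camUp.y, camUp.z);
    std::printf("back  = (%.3f, %.3f, %.3f)\n", back.x,  back.y,  back.z);
}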

Display recursively rendered scene into a plane

I have to render 2 scenes separately and embed one of them into the other scene as a plane. The sub scene that is rendered as a plane will use a view matrix calculated from the relative camera position, and a perspective matrix that accounts for the distance and the calculated skew, so that the sub scene looks as if it were actually placed at that point.
To describe this in more detail, here is a figure of the simpler case
(in this case, the sub scene sits on the center line of the main frustum).
It is easy to calculate the perspective matrix, visualized as the red frustum, by using these parameters.
However, the other case is very difficult for me to solve. If the sub scene were off the center line, I would have to skew the projection matrix to match the off-center scene.
I think this is a kind of oblique perspective projection, and it is also very similar to rendering a mirror. How do I calculate this perspective matrix?
As @Rabbid76 already pointed out, this is just a standard asymmetric frustum. For that, you just need to know the coordinates of the rectangle on the near plane you are going to use, in eye space.
However, there is also another option: you can also modify the existing projection matrix. That approach is easier if you know the position of your rectangle in window coordinates or normalized device coordinates. You can simply pre-multiply scale and translation matrices to select any sub-region of your original frustum.
Let's assume that your viewport is w * h pixels wide, and starts at (0,0) in the window. And you want to create a frustum which just renders a sub-rectangle which starts at the lower left corner of pixel (x,y), and which is a pixels wide and b pixels tall.
Convert to NDC:
x_ndc = (x / w) * 2 - 1 and y_ndc = (y / h) * 2 - 1
a_ndc = (a / w) * 2 and b_ndc = (b / h) * 2
Create a scale and translation transform which maps the range [x_ndc, x_ndc+a_ndc] to [-1,1], and similarly for y:
    ( 2/a_ndc     0       0   -2*x_ndc/a_ndc - 1 )
M = (    0     2/b_ndc    0   -2*y_ndc/b_ndc - 1 )
    (    0        0       1            0         )
    (    0        0       0            1         )
(Note that the factor 2 is going to be cancelled out. Instead of going to [-1,1] NDC space in step 1, we could also just have used the normalized [0,1] range; I just wanted to use the standard spaces.)
Pre-Multiply M to the original projection matrix P:
P' = M * P
Note that even though we defined the transformation in NDC space, and P works in clip space before the division, the math will still work out. Thanks to the homogeneous coordinates, the translation part of M will be scaled by w accordingly. The resulting matrix is just a general asymmetric projection matrix.
Now this does not adjust the near and far clipping planes of the original projection. But you can adjust them in the very same way by adding appropriate scale and translation to the z coordinate.
Also note that using this approach, you are not even restricted to selecting an axis-parallel rectangle, you can also rotate or skew it arbitrarily, so basically, you can select an arbitrary parallelogram in window space.
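As a sketch of this approach (assuming GLM; the function name subRegionProjection is made up for illustration), the matrix M can be built and pre-multiplied onto an existing projection matrix P like this:

#include <glm/glm.hpp>

// Restrict an existing projection P to the sub-rectangle that starts at
// pixel (x, y) and is a x b pixels in a w x h viewport.
glm::mat4 subRegionProjection(const glm::mat4& P,
                              float x, float y, float a, float b,
                              float w, float h)
{
    float x_ndc = (x / w) * 2.0f - 1.0f;
    float y_ndc = (y / h) * 2.0f - 1.0f;
    float a_ndc = (a / w) * 2.0f;
    float b_ndc = (b / h) * 2.0f;

    glm::mat4 M(1.0f);                       // start from identity
    M[0][0] = 2.0f / a_ndc;                  // scale x
    M[1][1] = 2.0f / b_ndc;                  // scale y
    M[3][0] = -2.0f * x_ndc / a_ndc - 1.0f;  // translate x (4th column)
    M[3][1] = -2.0f * y_ndc / b_ndc - 1.0f;  // translate y

    return M * P;                            // P' = M * P
}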
How do I calculate this perspective matrix?
An asymmetric perspective (column major order) projection matrix is set up like this:
m[16] = [ 2*n/(r-l),    0,            0,             0,
          0,            2*n/(t-b),    0,             0,
          (r+l)/(r-l),  (t+b)/(t-b),  -(f+n)/(f-n),  -1,
          0,            0,            -2*f*n/(f-n),   0 ];
Where l, r, b, and t are the left, right, bottom and top distances to the frustum planes on the near plane. n and f are the distances to the near and far plane.
Commonly, in a framework or a library, a projection matrix like this is set up by a function called frustum.
e.g.
OpenGL Mathematics: glm::frustum
OpenGL fixed function pipeline: glFrustum
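For example, a small sketch with glm::frustum (the l/r/b/t/n/f values below are placeholders, not derived from your scene):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// An asymmetric (off-center) frustum: the rectangle [l, r] x [b, t] on the
// near plane at distance n is mapped to the full viewport.
glm::mat4 obliqueProjection()
{
    float l = -0.2f, r = 0.5f;   // left/right on the near plane (eye space)
    float b = -0.1f, t = 0.4f;   // bottom/top on the near plane
    float n =  0.1f, f = 100.0f; // near/far plane distances
    return glm::frustum(l, r, b, t, n, f);
}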

what is the coordinate of the camera in opengl

I want to determine what the coordinates of the camera are in OpenGL.
So I simply draw a sphere in a window, the code is like this:
glutSolidSphere (1.0, 20, 16); //draw a sphere, its radius is 1
//I use glOrtho to set the x,y coordinate
//1
glOrtho(-1,1,-1,1,-0.99,-1.0);
//2
glOrtho(-1,1,-1,1,-1.0,-0.99);
//3
glOrtho(-1,1,-1,1,1.0,0.99);
//5
glOrtho(-1,1,-1,1,1.0,1.0);
//6
glOrtho(-1,1,-1,1,10,10);
//7
glOrtho(-1,1,-1,1,0.0,0.0);
//8
glOrtho(-1,1,-1,1,-0.5,0.5);
//9
//glOrtho(-1,1,-1,1,0.0,0.1);
In cases 1, 2, 3 and 4, the picture looks like this: a small circle.
In cases 5, 6 and 7, the sphere is just the same size as the window.
In case 8, the picture looks like this: like a torus, strange.
According to glOrtho description:
void glOrtho( GLdouble left,
GLdouble right,
GLdouble bottom,
GLdouble top,
GLdouble nearVal,
GLdouble farVal);
Let's assume that the position of the camera is fixed in OpenGL.
From case 1, it seems that the camera is at (0, 0, 0).
1) But if so, how can cases 2, 3 and 4 be the same as case 1?
2) How do cases 5, 6 and 7 come out?
3) How does case 8 come out?
You seem to be confusing several things.
Conceptually, the default glOrtho() and glFrustum()/gluPerspective() functions assume that the camera is at the eye-space origin and looking in the negative z direction. If you have left the ModelView matrix at identity (the default), your object space will be identical to the eye space, so you are drawing directly in eye space.
OpenGL defines a three-dimensional viewing volume. This means that there is not only a 2D rectangle limited by your viewport/window size, but there are also near and far clipping planes. That viewing volume is described as an axis-aligned cube, -1 <= x,y,z <= 1, in normalized device coordinates.
The purpose of the projection matrix is to transform some viewing volume to that normalized cube. With an orthogonal projection, there will be no perspective effect; objects which are far away will not appear smaller. So you can interpret the ortho matrix as defining an axis-aligned cuboid in eye space, which defines the part of the space that will be visible on the screen. Note that you can set up that projection such that you can see things which are actually behind your "camera" (by using negative values for near or far).
Your cases 1-4 all appear identical because you cut out only a tiny slice, z in [0.99, 1] or z in [-1, -0.99], where the intersection with the sphere just appears as a disc. It doesn't matter if you flip the ranges, since that will only flip what is in front and what is behind. Without lighting, you basically see only the silhouette, so you can't see the differences.
Your cases 5, 6 and 7 are just invalid: the near and far parameters must not be identical. That code will just generate a GL error and create no ortho matrix at all, which means that the projection matrix is left at identity, and then you get exactly the [-1,1]^3 viewing volume. Since you draw a sphere with radius 1 centered at the origin, it will exactly fit.
Case 8 is just a cut of the sphere, the intersection within -0.5 <= z <= 0.5.
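As a sanity check, here is a minimal fixed-function sketch (GLUT assumed, as in the question) of a valid glOrtho call whose eye-space cuboid contains the whole unit sphere; note that there is still no camera position involved, only an eye-space volume:

#include <GL/glut.h>   // GLUT and the GL headers are assumed, as in the question

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Visible eye-space z range is [-far, -near] = [-1, 1], so the
    // whole radius-1 sphere at the origin fits exactly.
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glutSolidSphere(1.0, 20, 16);
    glutSwapBuffers();
}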

A Depth buffer with two different projection matrices

I am using the default OpenGL values like glDepthRangef(0.0, 1.0);, glDepthFunc(GL_LESS); and glClearDepthf(1.f); because my projection matrices change the right-handed coordinates into left-handed ones. I mean, my near-plane and far-plane z-values are supposed to map to [-1, 1] in NDC.
The problem is that when I draw two objects into the same FBO with the same RBOs attached, for example with the code below,
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.f);
glClearColor(0.0,0.0,0.0,0.0);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
drawObj1(); // this uses 1) the orthogonal projection below
drawObj2(); // this uses 2) the perspective projection below
glDisable(GL_DEPTH_TEST);
object1 always ends up above object2.
1) orthogonal projection (matrix image not shown)
2) perspective projection (matrix image not shown)
However, when both use the same projection, whichever one it is, it works fine.
Which part do you think I should go over?
--Updated--
Converting eye coordinates to NDC to screen coordinates, what really happens?
My understanding was that, because after both projections the NDC shape is the same as in the images below, the z-value after multiplying by the 2) perspective matrix doesn't have to be distorted. However, according to derbass's good answer, if the z-value in the view coordinates is multiplied by the perspective matrix, the z-value will be hyperbolically distorted in NDC.
If so, if one vertex position is, for example, [-240.0, 0.0, -100.0] in eye (view) coordinates with [w: 480.0, h: 320.0], and I clip it with [-0.01, -100], would it be [-1, 0, -1] or [something >= -1, 0, -1] in NDC? And its z value is still -1, isn't it, even when the z-value is distorted?
1) Orthogonal (NDC diagram not shown)
2) Perspective (NDC diagram not shown)
You can't expect that the z values of your vertices are projected to the same window-space z value just because you use the same near and far values for a perspective and an orthogonal projection matrix.
In the perspective case, the eye-space z value will be hyperbolically distorted to the NDC z value. In the orthogonal case, it is just linearly scaled and shifted.
If your "Obj2" lies just in a flat plane z_eye=const, you can pre-calulate the distorted depth it should have in the perspective case. But if it has a non-zero extent into depth, this will not work. I can think of different approaches to deal with the situation:
"Fix" the depth of object two in the fragment shader by adjusting the gl_FragDepth according to the hyperbolic distortion your z buffer expects.
Use a linear z-buffer, aka. a w buffer.
These approaches are conceptually the inverse of each other. In both cases, you have to play with gl_FragDepth so that it matches the conventions of the other render pass.
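As a sketch of the first approach (assuming the depth buffer is meant to hold perspective-style depths, and that the application supplies the near/far values and the eye-space z of each fragment; the uniform and varying names are made up here), a pass whose projection does not already produce those depths, e.g. the orthographic one, could recompute its depth like this:

#version 330 core
uniform float n;    // near plane distance
uniform float f;    // far plane distance
in float vEyeZ;     // eye-space z of this fragment (negative in front of the camera)
out vec4 fragColor;

void main()
{
    // Hyperbolic mapping performed by the perspective projection:
    // z_ndc = (f + n)/(f - n) + 2*f*n / ((f - n) * z_eye)
    float z_ndc = (f + n) / (f - n) + (2.0 * f * n) / ((f - n) * vEyeZ);
    // With the default glDepthRange(0, 1), window-space depth is:
    gl_FragDepth = 0.5 * z_ndc + 0.5;
    fragColor = vec4(1.0);
}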
UPDATE
My understanding was that, because after both projections the NDC shape is the same as in the images below, the z-value after multiplying by the 2) perspective matrix doesn't have to be distorted.
Well, these images show the conversion from clip space to NDC, and that transformation is what the projection matrix followed by the perspective divide does. Once a value is in normalized device coordinates, no further distortion occurs; it is just linearly transformed to window-space z according to the glDepthRange() setup.
However, according to derbass's good answer, if the z-value in the view coordinates is multiplied by the perspective matrix, the z-value will be hyperbolically distorted in NDC.
The perspective matrix is applied to the complete 4D homogeneous eye-space vector, so it is applied to z_eye as well as to x_eye, y_eye and also w_eye (which is typically just 1, but doesn't have to be).
So the resulting NDC coordinates for the perspective case are hyberbolically distorted to
f + n 2 * f * n B
z_ndc = ------- + ----------------- = A + -------
n - f (n - f) * z_eye z_eye
while in the orthogonal case it is just linearly transformed:
z_ndc = (-2 / (f - n)) * z_eye - (f + n)/(f - n) = C * z_eye + D
For n = 1 and f = 10, it looks like the plot below (not reproduced here). Note that I plotted the range partly outside of the frustum; clipping will prevent these values from occurring in the GL, of course.
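Since the plot is not reproduced here, the following small sketch (plain C++, written only for illustration) evaluates both mappings for n = 1 and f = 10 at a few eye-space depths; near and far map to -1 and 1 under both, while everything in between differs:

#include <cstdio>

int main()
{
    const double n = 1.0, f = 10.0;
    const double z_eye[] = { -1.0, -2.0, -5.0, -10.0 };

    for (double z : z_eye) {
        // Hyperbolic (perspective) and linear (orthogonal) mappings from above.
        double persp = (f + n) / (f - n) + (2.0 * f * n) / ((f - n) * z);
        double ortho = (-2.0 / (f - n)) * z - (f + n) / (f - n);
        std::printf("z_eye = %6.2f   perspective z_ndc = %7.4f   ortho z_ndc = %7.4f\n",
                    z, persp, ortho);
    }
    return 0;
}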
If so, if one vertex position is, for example, [-240.0, 0.0, -100.0] in eye (view) coordinates with [w: 480.0, h: 320.0], and I clip it with [-0.01, -100], would it be [-1, 0, -1] or [something >= -1, 0, -1] in NDC? And its z value is still -1, isn't it, even when the z-value is distorted?
Points at the far plane are always transformed to z_ndc=1, and points at the near plane to z_ndc=-1. This is how the projection matrices were constructed, and this is exactly where the two graphs in the plot above intersect. So for these trivial cases, the different mappings do not matter at all. But for all other distances, they will.