Use top-left origin in OpenGL - c++

I am trying to set my co-ordinate system such that the y-axis points down the screen.
// Determine view-projection matrix
glm::mat4 projection = glm::ortho(
    -3.0f,  // left
     3.0f,  // right
     3.0f,  // bottom
    -3.0f); // top
// Right handed rule:
// x points right
// y points down
// z points into the screen
glm::mat4 view = glm::lookAt(
    glm::vec3(0, 0, -1), // camera position
    glm::vec3(0, 0, 0),  // look at
    glm::vec3(0, -1, 0)  // up vector
);
glm::mat4 viewProjMatrix = projection * view;
However, when I try to render 2 objects:
A at (0, 0)
B at (1, 1)
A appears in the centre of the screen, and B appears in the top-right. I would expect it to appear in the bottom-right.
What am I missing here?

tl;dr
My camera was positioned correctly (in a manner of speaking), but I was flipping the y-axis in the call to glm::ortho.
Let's visualize this
Imagine a right-handed co-ordinate system such as OpenGL uses by convention:
The x-axis points right
The y-axis points up
The z-axis points out of the screen
If my objects were positioned in this world and we viewed them from the front, we would see this:
B
A
This is the same as what I was seeing, but we weren't viewing from the front. As @Scheff pointed out in the comments, my call to glm::lookAt was flipping my camera upside-down and positioning it "behind" the screen.
Now, if you imagine that you are doing a handstand behind your monitor, A and B would now look like this:
∀
𐐒
Wait, but isn't this what I wanted - B in the bottom-right?
Right, but in addition to my camera being positioned strangely, my y-axis was being flipped by the call to glm::ortho. Normally the value for bottom should be less than the value for top (as in this example).
Thus producing, once again:
B
A
So, one solution would have been to swap the top and bottom parameters in glm::ortho:
glm::mat4 view = glm::lookAt(
    glm::vec3(0, 0, -1), // camera position
    glm::vec3(0, 0, 0),  // look at
    glm::vec3(0, -1, 0)  // up vector
);
glm::mat4 projection = glm::ortho(
    -3.0f,  // left
     3.0f,  // right
    -3.0f,  // bottom (less than top -> y-axis is not inverted!)
     3.0f); // top
How to flip the y-axis "correctly"
The solution above works, but there is a much simpler solution - or at least much simpler to visualize! And that is just to position the camera in front of the scene and flip the y-axis using glm::ortho (by keeping bottom greater than top):
glm::mat4 view = glm::lookAt(
    glm::vec3(0, 0, 1), // camera position
    glm::vec3(0, 0, 0), // look at
    glm::vec3(0, 1, 0)  // up vector
);
glm::mat4 projection = glm::ortho(
    -3.0f,  // left
     3.0f,  // right
     3.0f,  // bottom (greater than top -> y-axis is inverted!)
    -3.0f); // top
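As a quick sanity check (a hypothetical standalone snippet, not part of the original answer), you can push B's position through the combined matrix and confirm that it lands in the bottom-right of normalized device coordinates:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 view = glm::lookAt(
        glm::vec3(0, 0, 1),  // camera position
        glm::vec3(0, 0, 0),  // look at
        glm::vec3(0, 1, 0)); // up vector
    glm::mat4 projection = glm::ortho(-3.0f, 3.0f, 3.0f, -3.0f); // bottom > top
    glm::mat4 viewProjMatrix = projection * view;

    // B is at (1, 1); in NDC it should come out right of centre (x > 0)
    // and below centre (y < 0), i.e. bottom-right of the viewport.
    glm::vec4 b = viewProjMatrix * glm::vec4(1.0f, 1.0f, 0.0f, 1.0f);
    std::printf("B in NDC: (%.2f, %.2f)\n", b.x, b.y); // roughly (0.33, -0.33)
    return 0;
}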
Potential problems
Texture orientation
Flipping the y-axis may result in upside-down textures, which can be fixed by swapping your top and bottom texture co-ordinates.
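For instance (a hypothetical quad's texture coordinates, not code from the answer), swapping the top and bottom v values looks like this:
// Texture coordinates for a quad with v = 0 at the bottom of the image:
float uvs[]        = { 0.0f, 0.0f,   1.0f, 0.0f,   1.0f, 1.0f,   0.0f, 1.0f };
// After flipping the y-axis, swap top and bottom (v -> 1 - v):
float uvsFlipped[] = { 0.0f, 1.0f,   1.0f, 1.0f,   1.0f, 0.0f,   0.0f, 0.0f };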
Back-face culling
Viewing the scene from a different angle may result in faces being culled, if face culling is enabled. This can be fixed by changing the winding order of your vertices, or by changing which faces get culled.
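For example, either of the following standard OpenGL calls addresses this (shown as an illustration; which one you want depends on your scene):
// Option 1: declare clockwise-wound triangles to be front faces (default is GL_CCW)
glFrontFace(GL_CW);
// Option 2: keep the winding convention and cull front faces instead (default is GL_BACK)
glCullFace(GL_FRONT);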

Related

glm rotation with scaling stretched aspect ratio

I am trying to rotate the texture image using glm, but the output looks stretched and not properly rotated about the z-axis. What might be a possible solution for this?
float imgAspectRatio = imageWidth / (float) imageHeight;
float viewAspectRatio = viewWidth / (float) viewHeight;
float xScale = 1.0f, yScale = 1.0f; // declared here so the snippet is self-contained
if (imgAspectRatio > viewAspectRatio) {
    yScale = viewAspectRatio / imgAspectRatio;
} else {
    xScale = imgAspectRatio / viewAspectRatio;
}
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(0, 0, 0));
model = glm::rotate(model, glm::radians(angle), glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScale, yScale, 1.0f));
Some suggested multiplying by a glm::ortho projection, but that gives a squashed, square-looking result:
glm::mat4 projection(1.0f);
projection = glm::ortho(
    -imgAspectRatio,
     imgAspectRatio,
    -1.0f,
     1.0f,
    -1.0f,
     1.0f
);
The code that you currently have would actually work if you had a square window.
By combining the image aspect ratio scaling and the view aspect ratio scaling into a single step, the order of operations you currently have is essentially as follows:
Scale image plane by image aspect ratio.
Scale all xy coordinates in your scene by the view aspect ratio.
Rotate image plane.
Translate image plane.
However, you really want to scale by the view aspect ratio at the very end. An easy exercise to help visualize why: imagine you have a perfectly square window in which you render a triangle with xy coordinates (-1, -1), (1, -1), (0, 1). Now suppose your window doubles in height, but you want the shape to stay the same: obviously, you just multiply each y-coordinate by 1/2. Now suppose you want to rotate the triangle by 90 degrees ccw, but again still keep the shape the same. Do you rotate first and then scale by the view aspect ratio? Or vice versa? By running through both options, it becomes clear that you have to scale by the view aspect ratio very last.
In other words, you need the following order:
Scale image plane by image aspect ratio.
Rotate image plane.
Translate image plane.
Scale all xy coordinates in your scene by the view aspect ratio.
Which is achieved by code that looks something like this:
float xScaleImg = 1.0f;
float yScaleImg = xScaleImg / imgAspectRatio;
float xScaleView = 1.0f;
float yScaleView = viewAspectRatio;
glm::mat4 model(1.0f);
model = glm::scale(model, glm::vec3(xScaleView, yScaleView, 1.0f));
model = glm::translate(model, glm::vec3(0, 0, 0));
model = glm::rotate(model, glm::radians(angle), glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScaleImg, yScaleImg, 1.0f));
The first four lines assume, as is the case in your example, that both the view and image widths are greater than the respective heights. You'll probably want to add some logic to change this around in the event that the reverse is true.
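A possible sketch of that extra logic (assuming the same variable names as above; this is not part of the original answer) could be:
// Fit the image plane into a unit square, whichever dimension is larger:
float xScaleImg = 1.0f, yScaleImg = 1.0f;
if (imgAspectRatio >= 1.0f)
    yScaleImg = 1.0f / imgAspectRatio; // wide image: shrink y
else
    xScaleImg = imgAspectRatio;        // tall image: shrink x

// Compensate for the window's aspect ratio as the very last step:
float xScaleView = 1.0f, yScaleView = 1.0f;
if (viewAspectRatio >= 1.0f)
    yScaleView = viewAspectRatio;        // wide window: scale y back up
else
    xScaleView = 1.0f / viewAspectRatio; // tall window: scale x back up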
Update:
Depending on the effect you want to achieve for translation, you may want to move the glm::translate(...) command up one line. The order I gave in my original answer keeps the units of translation equal in pixels. E.g. if you pass in glm::vec3(1.0f, 1.0f, 0.0f), and the width of the window is, say, 1280 pixels, then the image will be translated 640 pixels to the left and 640 pixels upwards.
However, you may want to keep the OpenGL [-1, 1] range on both axes. That is, when you pass in glm::vec3(1.0f, 1.0f, 0.0f) for the translation, you may want the image to be translated right by half of the window's width and up by half of the window's height. In that case, you need to make the translation the last operation performed on the image, and this is done by making the glm::translate(...) the first line of code. This is what Nile Qor wanted. In this case, the last lines of the code become something like:
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(1.0f, 1.0f, 0));
model = glm::scale(model, glm::vec3(xScaleView, yScaleView, 1.0f));
model = glm::rotate(model, glm::radians(angle), glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScaleImg, yScaleImg, 1.0f));
For more context, you can see the discussion in the comments section of this answer.

LookAt matrix when casting shadows parallel to up vector

I am following the LearnOpenGL tutorials and have been tinkering with shadow casting. So far everything is working correctly, but there is this very specific problem where I can't cast shadows from a purely vertical directional light. Let's add some code. My light space matrix looks like this:
glm::mat4 view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
return glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f) * view;
direction is the direction of the directional light, of course. Everything works well until that direction vector is set to (0,-1,0), because it is parallel to the up vector (0,1,0). To construct the lookAt matrix, glm performs the cross product between the up vector and the difference between the center and the eye (so in that case it's basically the direction), but that cross product won't give any usable result since the two vectors are parallel.
Knowing all of this, my question would be: how should I build my lookAt view matrix when the up vector and the direction of the light are parallel?
Edit: Thank you for your answer, I changed my code to this:
if (abs(direction.x) < FLT_EPSILON && abs(direction.z) < FLT_EPSILON)
    view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 0.0f, 1.0f));
else
    view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
return glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f) * view;
and now everything works fine!
When the up-vector and the line of sight are parallel, the view matrix is undefined, because the cross product is (0, 0, 0).
The view matrix is an orthogonal matrix, which means each of the 3 axes is perpendicular to the plane formed by the other 2 axes; in other words, the angle between the axes is 90°.
The view matrix is the inverse of the matrix which defines the viewing position and orientation. This matrix is defined by the parameters to glm::lookAt. 2 of the axes are specified by the line of sight and the up-vector. The 3rd axis is calculated by the cross product.
This all means you have to specify the matrix by 2 orthogonal directions. If the angle between the direction vectors is not exactly 90°, then this is corrected by glm::lookAt. But the algorithm fails to do that if the vectors are parallel.
Define a line of sight (direction) vector and an up-vector with an angle of 90° to each other. If you rotate one of them, then you have to rotate the other vector in the same way.
e.g. let's assume you have a direction vector (line of sight) and an up-vector:
direction: (0, 0, 1)
up : (0, 1, 0)
If the direction vector is rotated by 90° then the up-vector has to be rotated by 90°, too:
direction: (0, -1, 0)
up : (0, 0, 1)
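One way to express this defensively in code (a hypothetical helper along the lines of the questioner's edit above, not the only possible approach) is to fall back to a different up-vector whenever the light direction is vertical:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cmath>

// Hypothetical helper: build a light-space view matrix that stays valid
// even when the light points straight up or straight down.
glm::mat4 lightView(const glm::vec3& direction)
{
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    // If the direction is (nearly) parallel to the world up-vector,
    // pick another axis so the cross product inside lookAt is non-zero.
    if (std::abs(glm::dot(glm::normalize(direction), up)) > 0.999f)
        up = glm::vec3(0.0f, 0.0f, 1.0f);
    return glm::lookAt(-direction, glm::vec3(0.0f), up);
}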

Zoom in on current mouse position in OpenGL using GLM functionality

I'm despairing over the task of zooming in on the current mouse position in OpenGL. I've tried a lot of different things and read other posts on this, but I couldn't adapt the possible solutions to my specific problem. So as far as I understand it, you have to get the current window coordinates of the mouse cursor, then unproject them to get world coordinates and finally translate to those world coordinates.
To find the current mouse positions, I use the following code in my GLUT mouse callback function every time the right mouse button is clicked.
if(button == 2)
{
    mouse_current_x = x;
    mouse_current_y = y;
    ...
Next up, I unproject the current mouse positions in my display function before setting up the ModelView and Projection matrices, which also seems to work perfectly fine:
// Unproject Window Coordinates
float mouse_current_z;
glReadPixels(mouse_current_x, mouse_current_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouse_current_z);
glm::vec3 windowCoordinates = glm::vec3(mouse_current_x, mouse_current_y, mouse_current_z);
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, (float)width, (float)height);
glm::vec3 worldCoordinates = glm::unProject(windowCoordinates, modelViewMatrix, projectionMatrix, viewport);
printf("(%f, %f, %f)\n", worldCoordinates.x, worldCoordinates.y, worldCoordinates.z);
Now the translation is where the trouble starts. Currently I'm drawing a cube with dimensions (dimensionX, dimensionY, dimensionZ) and translate to the center of that cube, so my zooming happens to the center point as well. I'm achieving zooming by translating in z-direction (dolly):
// Set ModelViewMatrix
modelViewMatrix = glm::mat4(1.0); // Start with the identity as the transformation matrix
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(0.0, 0.0, -translate_z)); // Zoom in or out by translating in z-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_x, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the whole scene in x-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_y, glm::vec3(0.0f, 1.0f, 0.0f)); // Rotate the whole scene in y-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, -90.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the camera by 90 degrees in negative x-direction to get a frontal look on the scene
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(-dimensionX/2.0f, -dimensionY/2.0f, -dimensionZ/2.0f)); // Translate the origin to be the center of the cube
glBindBuffer(GL_UNIFORM_BUFFER, globalMatricesUBO);
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
I tried to replace the translation to the center of the cube by translating to the worldCoordinates vector, but this didn't work. I also tried to scale the vector by width or height.
Am I missing out on some essential step here?
Maybe this won't work in your case. But to me this seems like the best way to handle this. Use gluLookAt() to look at the xyz position of the mouse click that you have already found. Then change the gluPerspective() to a smaller angle of view to achieve the actual zoom.
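Since the question already builds its matrices with GLM, that suggestion could be sketched as follows (cameraPosition and zoomFactor are hypothetical variables; the other names come from the question's code):
// zoomFactor grows with each zoom step (e.g. *= 1.1 on every click)
projectionMatrix = glm::perspective(glm::radians(45.0f) / zoomFactor, // smaller angle of view = zoom in
                                    (float)width / (float)height,
                                    0.1f, 1000.0f);
// Keep the eye where it is, but look at the unprojected point under the cursor
modelViewMatrix = glm::lookAt(cameraPosition,
                              worldCoordinates,
                              glm::vec3(0.0f, 1.0f, 0.0f));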

Moving Camera in OpenGL

I am working on rendering a terrain in OpenGL.
My code is the following:
void Render_Terrain(int k)
{
    GLfloat angle = (GLfloat) (k/40 % 360);
    //PROJECTION
    glm::mat4 Projection = glm::perspective(45.0f, 1.0f, 0.1f, 100.0f);
    //VIEW
    glm::mat4 View = glm::mat4(1.);
    //ROTATION
    //View = glm::rotate(View, angle * -0.1f, glm::vec3(1.f, 0.f, 0.f));
    //View = glm::rotate(View, angle * 0.2f, glm::vec3(0.f, 1.f, 0.f));
    //View = glm::rotate(View, angle * 0.9f, glm::vec3(0.f, 0.f, 1.f));
    View = glm::translate(View, glm::vec3(0.f, 0.f, -4.0f)); // x, y, z position ?
    //MODEL
    glm::mat4 Model = glm::mat4(1.0);
    glm::mat4 MVP = Projection * View * Model;
    glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MVP_matrix"), 1, GL_FALSE, glm::value_ptr(MVP));
    //Transfer additional information to the vertex shader
    glm::mat4 MV = Model * View;
    glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MV_matrix"), 1, GL_FALSE, glm::value_ptr(MV));
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDrawArrays(GL_LINE_STRIP, terrain_start, terrain_end);
}
I can do a rotation around the X,Y,Z axis, scale my terrain but I can't find a way to move the camera. I am using OpenGL 3+ and I am kinda new to graphics.
The best way to move the camera would be through the use of gluLookAt(); it simulates camera movement since the camera itself cannot be moved at all. The function takes 9 parameters. The first 3 are the XYZ coordinates of the eye, which is where the camera is located. The next 3 parameters are the XYZ coordinates of the center, which is the point the camera is looking at from the eye; it is always going to be the center of the screen. The last 3 parameters are the XYZ coordinates of the up vector, which points vertically upwards from the eye. By manipulating those 3 sets of XYZ coordinates you can simulate any camera movement you want.
Check out this link.
Further details:
- If, for example, you want to rotate around an object, you rotate your eye around the up vector.
- If you want to move forward or backwards, you add to or subtract from both the eye and the center points (see the sketch after this list).
- If you want to tilt the camera left or right, you rotate your up vector around your look vector, where your look vector is center - eye.
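A minimal sketch of the forward/backward movement (hypothetical eye and center variables, GLM for the vector maths, with gluLookAt applied each frame as described above):
#include <glm/glm.hpp>

// Hypothetical camera state
glm::vec3 eye(0.0f, 0.0f, 5.0f);    // where the camera is
glm::vec3 center(0.0f, 0.0f, 0.0f); // what the camera looks at

// Move forward (positive amount) or backwards (negative amount):
// add the same offset to the eye and to the center.
void moveForward(float amount)
{
    glm::vec3 look = glm::normalize(center - eye);
    eye    += look * amount;
    center += look * amount;
}

// Each frame:
// gluLookAt(eye.x, eye.y, eye.z, center.x, center.y, center.z, 0.0, 1.0, 0.0);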
gluLookAt operates on the deprecated fixed function pipeline, so you should use glm::lookAt instead.
You are currently using a constant vector for translation. In the commented out code (which I assume you were using to test rotation), you use angle to adjust the rotation. You should have a similar variable for translation. Then, you can change the glm::translate call to:
View = glm::translate(View, glm::vec3(x_transform, y_transform, z_transform)); // x, y, z position ?
and get translation.
You should probably pass in more than one parameter into Render_Terrain, as translation and rotation need at least six parameters.
In OpenGL the camera is always at (0, 0, 0). You need to set the matrix mode to GL_MODELVIEW, and then modify or set the model/view matrix using things like glTranslate, glRotate, glLoadMatrix, etc. in order to make it appear that the camera has moved. If you're using GLU, you can use gluLookAt to point the camera in a particular direction.

Rotation of camera used for perspective projection

I've just started playing with OpenGL to render a number of structures, each comprising a number of polygons.
Basically I want to perform the equivalent of setting a camera at (0,0,z) in the world (structure) coordinates and rotating it about the x-, y- and z-axes of the world (in that order!) to render a view of each structure (as I understand it, it is common practice to use the inverse camera matrix). Thus, as I understand it, I need: translate (to world origin, i.e. (0,0,-z)) * rotateZ * rotateY * rotateX * translate (re-define world origin, see below).
So I think I need something like:
//Called when the window is resized
void handleResize(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(9.148, (double)w / (double)h, 800.0, 1500.0);
}

float _Zangle = 10.0f;
float _cameraAngle = 90.0f;

//Draws the 3D scene
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -z); //Move forward Z (mm) units
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f); //Rotate "camera" about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the "camera" by camera_angle about y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f); // rotate "camera" by 90 degrees about x-axis
    glTranslatef(-11.0f, 189.0f, 51.0f); //re-define origin of world coordinates to be (11,-189,-51) - applied to all polygon vertices
    glPushMatrix(); //Save the transformations performed thus far
    glBegin(GL_POLYGON);
    glVertex3f(4.91892, -225.978, -50.0009);
    glVertex3f(5.73534, -225.978, -50.0009);
    glVertex3f(6.55174, -225.978, -50.0009);
    glVertex3f(7.36816, -225.978, -50.0009);
    .......// etc
    glEnd();
    glPopMatrix();
However, when I compile and run this, _Zangle and _cameraAngle seem to be reversed, i.e. _Zangle seems to rotate about the y-axis (vertical) of the viewport and _cameraAngle about the z-axis (into the plane of the viewport). What am I doing wrong?
Thanks for taking the time to read this
The short answer is: Use gluLookAt(). This utility function creates the proper viewing matrix.
The longer answer is that each OpenGL transformation call takes the current matrix and multiplies it by a matrix built to accomplish the transformation. By calling a series of OpenGL transformation functions you build one transformation matrix that will apply the combination of transformations. Effectively, the matrix will be M = M1 * M2 * M3 * ... Mathematically, the transformations are applied from right to left in the above equation.
Your code doesn't move the camera. It stays at the origin, and looks down the negative z-axis. Your transformations move everything in model space to (11,-189,-51), rotates everything 90 degrees about the x-axis, rotates everything 90 degrees about the y-axis, rotates everything 10 degrees about the z-axis, then translates everything -z along the z-axis.
EDIT: More information
I'm a little confused about what you want to accomplish, but I think you want to have elements at the origin, and have the camera look at those elements. The eye coordinates would be where you want the camera, and the center coordinates would be where you want the objects to be. I'd use a little trigonometry to calculate the position of the camera, and point it at the origin.
In this type of situation I usually keep track of the camera position using longitude, latitude, and elevation centered on the origin. Calculating x, y, z for the eye coordinates is simply x = elv * cos(lat) * sin(lon), y = elv * sin(lat), z = elv * cos(lat) * cos(lon).
My gluLookAt call would be gluLookAt(x, y, z, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
You could rotate the camera's up vector by changing the last three coordinates passed to gluLookAt.
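A rough sketch of that bookkeeping (hypothetical globals, using the formulas above with lon/lat in radians):
#include <cmath>
#include <GL/glu.h>

float lon = 0.0f;  // longitude around the origin
float lat = 0.0f;  // latitude around the origin
float elv = 10.0f; // elevation (distance from the origin)

void applyCamera()
{
    float x = elv * cosf(lat) * sinf(lon);
    float y = elv * sinf(lat);
    float z = elv * cosf(lat) * cosf(lon);
    gluLookAt(x, y, z,        // eye on a sphere around the origin
              0.0, 0.0, 0.0,  // always look at the origin
              0.0, 1.0, 0.0); // world up
}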
The z axis is coming from the center of the monitor into you. So, rotating around the z-axis should make the camera spin in place (like a 2D rotation on just the xy plane). I can't tell, but is that what's happening here?
It's possible that you are encountering Gimbal Lock. Try removing one of the rotations and see if things work the way they should.
While it's true that you can't actually move the camera in OpenGL, you can simulate camera motion by moving everything else. This is why you hear about the inverse camera matrix. Instead of moving the camera by (0, 0, 10), we can move everything in the world by (0, 0, -10). If you expand those out into matrices, you will find that they are inverses of each other.
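To make that concrete (a tiny illustrative check, not code from the question):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// "Moving the camera" by (0, 0, 10) ...
glm::mat4 camera = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 10.0f));
// ... is equivalent to moving everything else by (0, 0, -10):
glm::mat4 world = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -10.0f));
// world == glm::inverse(camera); the two matrices are inverses of each other.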
I also noticed that, given the code presented, you don't need the glPushMatrix()/glPopMatrix() calls. Perhaps there is code that you haven't shown that requires them.
Finally, can you provide an idea of what it is you are trying to render? Debugging rotations can be hard without some context.
Short answer: Good tip.
Longer answer: Yes, the order of matrix multiplication is clear... that's what I meant by the inverse camera matrix: moving all the world coordinates of the structures into camera coordinates (hence the use of "camera" in my comments ;-)) instead of actually translating and rotating the camera into the world coordinates.
So if I read between the lines correctly you suggest something like:
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    gluLookAt(0.0, 0.0, z, 11.0, -189.0, -51.0, 0.0, 1.0, 0.0); //eye(0,0,z), look at re-defined world origin (11,-189,-51), up (0,1,0)
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f); //Rotate "camera" (actually structures) about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the "camera" (actually structures!) by camera_angle about y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f); // rotate "camera" (actually structures) by 90 degrees about x-axis
    glPushMatrix();
Or am I still missing something?
I think you are mixing the axes of your world with the axes of the camera. glRotatef only uses the axes of the camera; they are not the same as your world axes once the camera is rotated.