Using glm::ortho projection in OpenGL

I'm trying to simulate dropping a ball from 200 meters. I know I have to map the coordinate range from [-1.0, 1.0] to [0, 200].
I draw my vertices of my ball like so:
for (int i = 0; i < NUM_VERTICES; i++)
{
    GLfloat angle = 2 * M_PI / NUM_VERTICES * i;
    GLfloat x = 10 * cos(angle);
    GLfloat y = 10 * sin(angle);
    vertices.push_back(x);
    vertices.push_back(y);
}
then I have an orthographic projection like so:
glm::mat4 projection;
projection = glm::ortho(0.0f, 200.0f, 0.0f, 200.0f, 0.1f, 100.0f);
and a translation
glm::mat4 view;
view = glm::translate(view, glm::vec3(100.0f, 200.0f, 0.0f));
but nothing appears in my viewport.

You draw your geometry at z = 0, but with glm::ortho(..., 0.1f, 100.0f) the visible z range in view space is [-0.1, -100]. The ball therefore lies in front of the near plane and is clipped away.
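A minimal sketch of two possible fixes (not part of the original answer), assuming the ball really is drawn at z = 0:

// Option 1: make the orthographic volume include z = 0.
glm::mat4 projection = glm::ortho(0.0f, 200.0f, 0.0f, 200.0f, -1.0f, 1.0f);
glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(100.0f, 200.0f, 0.0f));

// Option 2: keep the [0.1, 100] range, but move the geometry behind the near plane.
// glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(100.0f, 200.0f, -10.0f));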

Related

How to calculate the required scale of a rectangle to fit the screen

I want to scale my rectangle so that it exactly fits the screen, taking the camera position into account.
This is the VBO data for the rectangle.
width = 50.0f;
height = 50.0f;
verticesRect = {
    // Positions              // Normals           // Texture Coords
     width,  height, 0.0f,    0.0f, 0.0f, 1.0f,    1.0f, 0.0f,  // Top Right
     width, -height, 0.0f,    0.0f, 0.0f, 1.0f,    1.0f, 1.0f,  // Bottom Right
    -width, -height, 0.0f,    0.0f, 0.0f, 1.0f,    0.0f, 1.0f,  // Bottom Left
    -width,  height, 0.0f,    0.0f, 0.0f, 1.0f,    0.0f, 0.0f   // Top Left
};
The Matrix for Projection.
float angle = 45.0f;
glm::mat4 projection = glm::perspective(glm::radians(angle), 1920.0f / 1080.0f, 0.1f, 1000.0f);
The Matrix for View
glm::vec3 Position( 0.0 , 0.0 , 500.0);
glm::vec3 Front( 0.0 , 0.0 , 0.0);
glm::vec3 Up( 0.0 , 1.0 , 0.0);
glm::mat4 view = glm::lookAt(Position, Front , Up);
The matrix for the rectangle object is:
glm::mat4 model = PositionMatrix * RotationMatrix * ScalingMatrix;
How can I calculate what the scaling of the object should be so that it exactly fits the screen?
The rectangle object can translate along z, and the camera can also move along z.
With a perspective projection, the projected size on the viewport depends on the distance to the camera (the depth):
aspect    = width / height
height_vp = depth * 2 * tan(fov_y / 2)
width_vp  = height_vp * aspect
In your case the object is drawn around (0, 0, 0) and the distance of the camera to the origin is 500. The field of view (fov_y) is glm::radians(angle).
With the above formulas you can compute the size of the area that is visible in the viewport at the depth of the rectangle; a rectangle with the bottom left at (-1, -1) and the top right at (1, 1), scaled by half that size, would exactly fill it. Since the bottom left of your rectangle is (-50, -50) and the top right is (50, 50), i.e. it is 100 units wide and high, you have to divide the viewport size by 100.
Hence the scale is:
float scale_y = 500.0f * 2.0f * tan(glm::radians(angle) / 2.0f); // viewport height at depth 500
float scale_x = scale_y * 1920.0f / 1080.0f;                     // viewport width at depth 500
glm::mat4 ScalingMatrix = glm::scale(glm::mat4(1.0f),
    glm::vec3(scale_x / 100.0f, scale_y / 100.0f, 1.0f));
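The scale then goes into the model matrix from the question; for a screen-filling rectangle the position and rotation parts can be left as identity (a sketch, not from the original answer):

// Sketch: plug the computed ScalingMatrix into the question's model matrix.
glm::mat4 PositionMatrix = glm::mat4(1.0f); // assumed identity for this example
glm::mat4 RotationMatrix = glm::mat4(1.0f); // assumed identity for this example
glm::mat4 model = PositionMatrix * RotationMatrix * ScalingMatrix;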

OpenGL draw a rectangle filling window

I'm trying to understand the OpenGL MVP matrices, and as an exercise I'd like to draw a rectangle filling my window using the matrices. I thought I would easily find a tutorial for that, but all those I found simply seem to put random values into their MVP matrix setup.
Say my rectangle has these coordinates:
GLfloat vertices[] = {
    -1.0f,  1.0f, 0.0f, // Top-left
     1.0f,  1.0f, 0.0f, // Top-right
     1.0f, -1.0f, 0.0f, // Bottom-right
    -1.0f, -1.0f, 0.0f, // Bottom-left
};
Here are my 2 triangles:
GLuint elements[] = {
    0, 1, 2,
    2, 3, 0
};
If I draw the rectangle with identity MVP matrices, it fills the screen as expected. Now I want to use a frustum. Here are its settings:
float m_fov = 45.0f;
float m_width = 3840;
float m_height = 2160;
float m_zNear = 0.1f;
float m_zFar = 100.0f;
From this I can compute the width / height of my window at z-near & z-far:
float zNearHeight = tan(m_fov) * m_zNear * 2;
float zNearWidth = zNearHeight * m_width / m_height;
float zFarHeight = tan(m_fov) * m_zFar * 2;
float zFarWidth = zFarHeight * m_width / m_height;
Now I can create my view & projection matrices:
glm::mat4 projectionMatrix = glm::perspective(glm::radians(m_fov), m_width / m_height, m_zNear, m_zFar);
glm::mat4 viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -m_zNear));
I'd now expect this to make my rectangle to fill the window:
glm::mat4 identity = glm::mat4(1.0f);
glm::mat4 rectangleModelMatrix = glm::scale(identity, glm::vec3(zNearWidth, zNearHeight, 1));
But doing so, my rectangle is way too big. What did I miss?
SOLUTION: as #Rabbid76 pointed out, the problem was the computation of my z-near size, which must be:
float m_zNearHeight = tan(glm::radians(m_fov) / 2.0f) * m_zNear * 2.0f;
float m_zNearWidth = m_zNearHeight * m_width / m_height;
Also, I now need to specify my object coordinates in normalized view space ([-0.5, 0.5]) rather than device space ([-1, 1]). Thus my vertices must now be:
GLfloat vertices[] = {
    -0.5f,  0.5f, 0.0f, // Top-left
     0.5f,  0.5f, 0.0f, // Top-right
     0.5f, -0.5f, 0.0f, // Bottom-right
    -0.5f, -0.5f, 0.0f, // Bottom-left
};
The projected height of an object on a plane that is parallel to the xy plane of the view space is
h' = h / (tan(m_fov / 2) * -z)
where h is the height of the object on the plane, -z is the depth and m_fov is the field of view angle.
In your case m_fov is 45° and the depth -z is 0.1 (m_zNear), thus 1 / (tan(m_fov / 2) * 0.1) is ~24.14.
Since the height of the quad is 2, its projected height is ~48.3, far larger than the [-1, 1] range of normalized device coordinates.
To create a quad which fits exactly in the viewport, use a field of view angle of 90° and a distance to the object of 1, because tan(90° / 2) * 1 is 1, e.g.:
float m_fov = 90.0f;
glm::mat4 projectionMatrix = glm::perspective(glm::radians(m_fov), m_width / m_height, m_zNear, m_zFar);
glm::mat4 viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -1.0f));
If tan(m_fov / 2) * -z == 1, then an object with a bottom of -1 and a top of 1 fits exactly into the viewport.
Because of the division by -z, the projected size of an object on the viewport is inversely proportional to its distance to the camera: doubling the distance halves the projected size.
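As an illustration (a minimal sketch, not from the original answer; the helper name projectedHeight is made up), the relation can be expressed as a small function:

#include <cmath>        // std::tan
#include <glm/glm.hpp>  // glm::radians

// Projected height in normalized device coordinates of an object of height h
// at view-space depth d (distance to the camera), for a vertical field of view fov_y.
float projectedHeight(float h, float d, float fov_y_radians)
{
    return h / (std::tan(fov_y_radians / 2.0f) * d);
}

// projectedHeight(2.0f, 0.1f, glm::radians(45.0f)) ~ 48.3 -> far too big for the [-1, 1] range
// projectedHeight(2.0f, 1.0f, glm::radians(90.0f)) == 2.0 -> exactly fills the viewport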

Switching from orthogonal to perspective projection

I am trying to add a parallax effect to an existing engine. So far the engine has worked with an orthographic projection, and objects are placed in pixel coordinates on the screen. The problem is that I cannot figure out how to replicate the same mapping with a perspective projection matrix etc., so that I can add a Z coordinate for depth.
I tried various combinations of matrices and z coordinates already and the result was always a black screen.
The matrix I am trying to replace:
glm::mat4 projection = glm::ortho(0.0f, static_cast<GLfloat>(1280.0f), static_cast<GLfloat>(720.0f), 0.0f, 0.0f, -100.0f);
The vertex shader:
// Shader code (I tested this while having identity matrices for view and model)
#version 330 core
layout (location = 0) in vec2 vertex;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

void main() {
    gl_Position = projection * view * model * vec4(vertex.xy, 1.0f, 1.0f);
}
The projection code I thought might work:
glm::mat4 model = glm::mat4(1.0f);
model = glm::translate(model, glm::vec3(-640, -310.0f, 0.0f));
model = glm::scale(model, glm::vec3(1.0f / 1280.0f, 1.0f / 720.0f, 1.0f));
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, -100.0f);
Expected: the rectangle is still displayed at a similar position (I can correct the details once something works) instead of a black screen.
The specification of the Perspective projection matrix is wrong.
glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, -100.0f);
glm::perspective defines a viewing frustum by a field of view angle along the y axis, an aspect ratio and the distances to the near and far planes.
So the near and far distances have to be positive values (> 0) and near has to be less than far:
0 < near < far
e.g.:
glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, 100.0f);
The geometry has to be in between the near and the far plane, else it is clipped.
The ratio between the size of the projected area and the depth is constant and can be calculated; it depends only on the field of view angle:
float fov_y = glm::radians(45.0f);
float ratio_size_depth = tan(fov_y / 2.0f) * 2.0f;
Note, if an object should appear with half the size in the projection on the viewport, the distance from the object to the camera (depth) has to be doubled.
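For example, with fov_y = 45° the ratio is 2 * tan(22.5°) ≈ 0.828, so at a depth of 10 the visible height of the view is about 8.28 units, and at a depth of 20 it is about 16.57 units.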
So the corrected model matrix, and the depth required in the shader so that the pixel coordinates match on that plane, are as follows:
int width = 1280;
int height = 720;
glm::mat4 model = glm::mat4(1.0f);
model = glm::scale(model, glm::vec3(-1.0f / width, -1.0f / height, 1.0f));
model = glm::translate(model, glm::vec3(-((float)width / 2.0f), -((float)height / 2.0f), 0.0f));
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, 1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, 100.0f);
Shader with Z-Value:
#version 330 core
layout (location = 0) in vec2 vertex;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

void main() {
    gl_Position = projection * view * model * vec4(vertex.xy, 1.208f, 1.0f);
}
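This appears to be where the depth value 1.208 comes from: the model matrix maps the 1280x720 pixel range to a span of 1 unit (from -0.5 to 0.5 on each axis), and the visible half-height of the frustum at depth d is d * tan(fov_y / 2), so

d = 0.5 / tan(glm::radians(45.0f) / 2.0f); // ~1.207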
This setup is then equivalent to this orthographic matrix:
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 view = glm::mat4(1.0f);
glm::mat4 projection = glm::ortho(0.0f, static_cast<GLfloat>(this->width), static_cast<GLfloat>(this->height), 0.0f, 0.0f, -100.0f);
The matrices can also be multiplied together so that only one combined matrix is passed to the shader. This makes it easier to pass an actual model matrix along with each mesh etc.
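For example, a minimal sketch of that combination, assuming the shader program handle is called shaderProgram and the combined uniform is named projectionView (both names are illustrative, not from the original code):

// Sketch: pre-multiply projection and view on the CPU and upload one matrix.
glm::mat4 projectionView = projection * view;
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "projectionView"),
                   1, GL_FALSE, glm::value_ptr(projectionView));
// In the vertex shader the position would then be:
// gl_Position = projectionView * model * vec4(vertex.xy, 1.208, 1.0);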

How do I rotate my camera around my object?

I want to rotate my camera around the scene and an object which is in the center. I've tried doing it this way:
glm::mat4 view;
float radius = 10.0f;
float camX = sin(SDL_GetTicks()/1000.0f) * radius;
float camZ = cos(SDL_GetTicks()/1000.0f) * radius;
view = glm::lookAt(glm::vec3(camX, 0.0f, camZ), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glUniformMatrix4fv(glGetUniformLocation(shader, "viewingMatrix"), 1, false, &view[0][0]);
but my object appears further away on the screen, and it is the object that rotates around the scene rather than the camera.
That's my vertex shader:
void main()
{
    FragPos = vec3(modelMatrix * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(modelMatrix))) * aPos;
    TexCoord = aTexture;
    vec4 transformedPosition = projectionMatrix * viewingMatrix * vec4(FragPos, 1.0f);
    gl_Position = transformedPosition;
}
How do I make it so that the camera rotates around the object in the scene, without the object itself moving around?
I'm following this tutorial and I'm trying to work out what happens in the first animation.
https://learnopengl.com/Getting-started/Camera
modelMatrix
glm::mat4 modelMat(1.0f);
modelMat = glm::translate(modelMat, parentEntity.position);
modelMat = glm::rotate(modelMat, parentEntity.rotation.z, glm::vec3(0.0f, 0.0f, 1.0f));
modelMat = glm::rotate(modelMat, parentEntity.rotation.y, glm::vec3(0.0f, 1.0f, 0.0f));
modelMat = glm::rotate(modelMat, parentEntity.rotation.x, glm::vec3(1.0f, 0.0f, 0.0f));
modelMat = glm::scale(modelMat, parentEntity.scale);
int modelMatrixLoc = glGetUniformLocation(shader, "modelMatrix");
glUniformMatrix4fv(modelMatrixLoc, 1, false, &modelMat[0][0]);
The target of the view (the 2nd parameter of glm::lookAt) should be the center of the object. The position of the object (and thus its center) is changed by the model matrix (modelMatrix) in the vertex shader.
You have to add the world position of the object to the 1st parameter of glm::lookAt and use it as the 2nd parameter. The position of the object is the translation part of the model matrix.
Furthermore, the object is too far away from the camera, because the radius is too large.
To solve your issue, the code has to look something like this:
glm::mat4 view;
float radius = 2.0f;
float camX = sin(SDL_GetTicks()/1000.0f) * radius;
float camZ = cos(SDL_GetTicks()/1000.0f) * radius;
view = glm::lookAt(
    glm::vec3(camX, 0.0f, camZ) + parentEntity.position,
    parentEntity.position,
    glm::vec3(0.0f, 1.0f, 0.0f));
glUniformMatrix4fv(
    glGetUniformLocation(shader, "viewingMatrix"), 1, false, glm::value_ptr(view));
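Since the position of the object is the translation part of the model matrix (its 4th column), it could also be read back from modelMat instead of using parentEntity.position; a minimal sketch:

// Sketch: take the object's world position from the translation column of the
// model matrix and orbit the camera around it.
glm::vec3 objectPos = glm::vec3(modelMat[3]);
view = glm::lookAt(
    glm::vec3(camX, 0.0f, camZ) + objectPos, // camera circles the object
    objectPos,                               // and always looks at its center
    glm::vec3(0.0f, 1.0f, 0.0f));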

OpenGL: Orbiting with Orthogonal projection

I'm modelling the solar system and testing the orbit of the Earth around the Sun, using an orthographic projection.
The starting point of the Earth is to the right of the Sun, and it moves towards the left-hand side. But the Earth is not rendered completely when it is in front of the Sun and when it goes further behind the Sun, as you can see from the 3 images below:
Starting point:
Next frame:
Final frame:
Can someone explain what exactly is happening here? I think this is a face culling problem, so I tried:
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
but it doesn't actually work.
This is my code:
// global vars
int width = 1820, height = 960;
#define CENTRE_X static_cast<float>(width/2)
#define CENTRE_Z static_cast<float>(-height/2)
#define EARTH_SIZE glm::vec3(30.0f, 40.0f, 30.0f)
#define SUN_SIZE EARTH_SIZE*3.0f
void renderSun(int i){
    glPushMatrix();
    glLoadIdentity();
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture_ID[0]);
    glUniform1i(texture_Location, 0);
    glm::mat4 Projection = glm::ortho(0.0f, static_cast<float>(width), 0.0f, static_cast<float>(height), 0.0f, 100.0f);
    glm::mat4 View = glm::lookAt(
        glm::vec3(0, 120, 1),
        glm::vec3(0, 0, 0),
        glm::vec3(0, 1, 0)
    );
    /* Animations */
    GLfloat angle = (GLfloat) (i);
    View = glm::translate(View, glm::vec3(static_cast<float>(width/2), 0.0f, static_cast<float>(-height/2)));
    View = glm::scale(View, SUN_SIZE);
    View = glm::rotate(View, angle * 0.5f, glm::vec3(0.0f, 0.0f, 1.0f));
    /* ******* */
    glm::mat4 Model = glm::mat4(1.0f);
    glm::mat4 MVP = Projection * View * Model;
    glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "mvpMatrix"), 1, GL_FALSE, glm::value_ptr(MVP));
    glDrawElements(GL_TRIANGLES, numsToDraw, GL_UNSIGNED_INT, NULL);
    glPopMatrix();
}
void renderEarth(int i){
    glPushMatrix();
    glLoadIdentity();
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture_ID[3]);
    glUniform1i(texture_Location, 0);
    glm::mat4 Projection = glm::ortho(0.0f, static_cast<float>(width), 0.0f, static_cast<float>(height), 0.0f, 100.0f);
    glm::mat4 View = glm::lookAt(
        glm::vec3(0, 150, 1),
        glm::vec3(0, 0, 0),
        glm::vec3(0, 1, 0)
    );
    /* Animations */
    GLfloat angle = (GLfloat) (i);
    View = glm::translate(View, glm::vec3(cos(orbitPos)*150, sin(orbitPos)*150, 0.0f));
    View = glm::translate(View, glm::vec3(CENTRE_X, 0.0f, CENTRE_Z));
    View = glm::scale(View, EARTH_SIZE);
    View = glm::rotate(View, angle * 0.5f, glm::vec3(0.0f, 0.0f, 1.0f));
    /* ******* */
    glm::mat4 Model = glm::mat4(1.0f);
    orbitPos += 0.005;
    glm::mat4 MVP = Projection * View * Model;
    glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "mvpMatrix"), 1, GL_FALSE, glm::value_ptr(MVP));
    glDrawElements(GL_TRIANGLES, numsToDraw, GL_UNSIGNED_INT, NULL);
    glPopMatrix();
}
You are using inconsistent View matrices: your code essentially rotates the view for the Earth, which makes it move relative to the Sun. This is a very unusual way to do things and is probably the cause of your problems; it is (presumably) making the models collide in clip space and intersect with one another. You should instead use the same View matrix for both and modify the Model matrix for the Earth model. Keeping the renderSun method the same, you could do this by modifying renderEarth:
void renderEarth(int i){
    //...
    /* Animations */
    GLfloat angle = (GLfloat) (i);
    // build the Earth's transformations relative to identity, not relative to View
    glm::mat4 M0 = glm::translate(glm::mat4(1.0f), glm::vec3(cos(orbitPos)*150, sin(orbitPos)*150, 0.0f));
    glm::mat4 M1 = glm::translate(glm::mat4(1.0f), glm::vec3(CENTRE_X, 0.0f, CENTRE_Z));
    glm::mat4 M2 = glm::scale(glm::mat4(1.0f), EARTH_SIZE);
    glm::mat4 M3 = glm::rotate(glm::mat4(1.0f), angle * 0.5f, glm::vec3(0.0f, 0.0f, 1.0f));
    /* ******* */
    orbitPos += 0.005;
    glm::mat4 MVP = Projection * View * M0 * M1 * M2 * M3; // same order as before, but View stays untouched
    // ...
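For completeness, here is a minimal sketch of how a shared Projection and View could be computed once per frame and used for both bodies (passing them as parameters is an assumption; the original functions compute them internally):

// Sketch: one camera for the whole frame (the parameter passing is assumed,
// it is not part of the original code).
glm::mat4 Projection = glm::ortho(0.0f, static_cast<float>(width),
                                  0.0f, static_cast<float>(height), 0.0f, 100.0f);
glm::mat4 View = glm::lookAt(glm::vec3(0, 120, 1),
                             glm::vec3(0, 0, 0),
                             glm::vec3(0, 1, 0));
renderSun(i, Projection, View);   // hypothetical signatures taking the shared matrices
renderEarth(i, Projection, View);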