Orthographic Projection Matrix - Triangle in wrong place - c++

I'm trying to implement an orthographic projection matrix. Currently, all I am trying to do is draw a triangle to the screen, which works fine without the matrix, but as soon as I multiply the coordinates by the matrix, the triangle no longer fits on the screen: one point is in the centre, and the other two are far off-screen. I've tried it with a different matrix whose range includes negative coordinates (so the centre of the screen is the origin) and it works fine. Am I doing something obviously wrong here? Relevant code is below:
GLfloat vertices[] =
{
-1.0f, -1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
};
glm::mat4 projectionMatrix = glm::ortho(0.0f, 960.0f, 640.0f, 0.0f, 1.0f, -1.0f);
glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0,0,5),glm::vec3(0,0,0),glm::vec3(0,1,0));
glm::mat4 modelMatrix = glm::mat4(1.0f);
glm::mat4 PVM = projectionMatrix * viewMatrix * modelMatrix;
GLuint matrixID = glGetUniformLocation(shader.getShaderID(), "PVM");
And then the vertex shader:
#version 130
in vec4 vertexPosition;
uniform mat4 PVM;
out vec4 position;
void main()
{
gl_Position = vertexPosition * PVM;
position = vertexPosition;
}
I've just included the code I think is relevant.

If you use an orthographic projection with the screen resolution, as you did here, the vertex coordinates become pixel coordinates. Your triangle is therefore only 2 pixels wide and tall, and most of it lies off-screen at negative pixel coordinates.
So try making it a bit bigger.
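To see why, you can run the numbers. The sketch below is a hypothetical helper, not GLM code: it applies the same affine map that glm::ortho(0, 960, 640, 0, ...) encodes for x and y, and reports where a vertex lands in normalized device coordinates, where only [-1, 1] is visible.

```cpp
#include <cassert>
#include <cmath>

// Where a point lands in NDC under an orthographic projection.
// Only the range [-1, 1] on each axis is visible on screen.
struct Ndc { float x, y; };

Ndc orthoMap(float px, float py,
             float left, float right, float bottom, float top) {
    // Same affine map that glm::ortho encodes in its x and y rows.
    return {
        2.0f * (px - left) / (right - left) - 1.0f,
        2.0f * (py - bottom) / (top - bottom) - 1.0f
    };
}
```

With the question's bounds, the vertex (-1, -1) lands at roughly (-1.002, 1.003), just outside the top-left corner, and the apex (0, 1) sits on the left edge: the whole triangle collapses into a 2x2-pixel region around the screen's top-left origin, which matches the answer above.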

Related

Having trouble when using 2d orthographic matrix with glm

After I've set up the (orthographic) projection matrix for my simple 2D game, nothing renders on the screen. I am using cglm (glm, but in C) and compared the results of cglm with the normal glm ortho projection implementation, which renders fine; the resulting projection matrices match. Here is my render loop:
void RenderSprite(const struct Sprite *sprite) {
struct Shader *shader = GetSpriteShader(sprite);
UseShader(shader);
/* cglm starts here */
mat4 proj;
glm_ortho(0.0f, 800.0f, 600.0f, 0.0f, -1.0f, 1.0f, proj); /* screen width: 800, height: 600 */
mat4 model;
glm_mat4_identity(model); /* an identity model matrix - does nothing */
/* cglm ends here */
SetShaderUniformMat4(shader, "u_Projection", proj); /* set the relevant uniforms */
SetShaderUniformMat4(shader, "u_Model", model);
/* finally, bind the VAO and call the draw call (note that I am not using batch rendering - I am using just a simple plain rendering system) */
glBindVertexArray(ezGetSpriteVAO(sprite));
glDrawElements(GL_TRIANGLES, ezGetSpriteIndexCount(sprite), GL_UNSIGNED_INT, 0);
}
However, this results in a blank screen. I believe I've done everything in the right order, but nothing renders.
For anyone interested, here is my vertex shader:
#version 330 core
layout (location = 0) in vec3 pos;
layout (location = 1) in vec2 uv;
uniform mat4 u_Model;
uniform mat4 u_Projection;
void main() {
gl_Position = u_Projection * u_Model * vec4(pos, 1.0f);
}
And here is my fragment shader:
#version 330 core
out vec4 color;
void main() {
color = vec4(1.0f);
}
As far as I am aware, cglm matrices are ordered column-major, which OpenGL wants.
Any help would be appreciated.
Thanks in advance.
EDIT
The sprite coordinates (the vertex data, in this case) are:
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, 0.5f, 0.0f
EDIT 2
After #BDL's comment, I adjusted the vertex data as follows:
float vertices[] = {
/* Position UV */
5.0f, 5.0f, 0.0f, 0.0f, 0.0f,// bottom left
10.0f, 5.0f, 0.0f, 1.0f, 0.0f, // bottom right
10.0f, 10.0f, 0.0f, 1.0f, 1.0f, // top right
5.0f, 10.0f, 0.0f, 0.0f, 1.0f // top left
};
But I still can't see anything on the screen; nothing is rendered at this point.
As #BDL and others have suggested, here is my final render loop that works like a charm right now:
(posting as a reference)
void RenderSprite(const struct Sprite *sprite) {
ASSERT(sprite, "[ERROR]: Can't render a sprite which is NULL\n");
struct Shader *shader = GetSpriteShader(sprite);
UseShader(shader);
mat4 proj;
glm_ortho(0.0f, 800.0f, 600.0f, 0.0f, -1.0f, 1.0f, proj);
mat4 model, view;
glm_mat4_identity(model);
glm_mat4_identity(view);
SetShaderUniformMat4(shader, "u_Projection", proj);
SetShaderUniformMat4(shader, "u_Model", model);
SetShaderUniformMat4(shader, "u_View", view);
glBindVertexArray(GetSpriteVAO(sprite));
glDrawElements(GL_TRIANGLES, GetSpriteIndexCount(sprite), GL_UNSIGNED_INT, 0);
}
And here are my coordinates:
float vertices[] = {
/* Position UV */
350.0f, 350.0f, 0.0f, 0.0f, 0.0f,// bottom left
450.0f, 350.0f, 0.0f, 1.0f, 0.0f, // bottom right
450.0f, 250.0f, 0.0f, 1.0f, 1.0f, // top right
350.0f, 250.0f, 0.0f, 0.0f, 1.0f // top left
};
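As a sanity check, these pixel coordinates can be mapped through the same ortho(0, 800, 600, 0, ...) projection by hand. The helper below is a hypothetical sketch (plain C++, no cglm) of the x/y part of that matrix:

```cpp
#include <cassert>
#include <cmath>

// Map a pixel coordinate to NDC under ortho(0, w, h, 0, ...).
// NDC values in [-1, 1] are visible.
void pixelToNdc(float px, float py, float w, float h,
                float* nx, float* ny) {
    *nx = 2.0f * px / w - 1.0f;   // left = 0, right = w
    *ny = 1.0f - 2.0f * py / h;   // top = 0, bottom = h (y flipped)
}
```

For an 800x600 screen, x = 350 maps to -0.125, x = 450 to 0.125, y = 350 to about -0.167, and y = 250 to about 0.167, so the quad above is a 100x100-pixel rectangle near the centre of the screen, well inside the visible range.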
Also here are my shaders:
const char *default_vs = "#version 330 core\n"
"layout (location = 0) in vec3 pos;\n"
"layout (location = 1) in vec2 uv;\n"
"uniform mat4 u_Model;\n"
"uniform mat4 u_Projection;\n"
"uniform mat4 u_View;\n"
"void main() {\n"
" gl_Position = u_Projection * u_View * u_Model * vec4(pos, 1.0f);\n"
"}";
const char *default_fs = "#version 330 core\n"
"out vec4 color;\n"
"void main() {\n"
" color = vec4(1.0f);\n"
"}";
Hope this helps anyone that is having the same problem!

vertex shader uniform distortion

I have a quad, composed by two triangles, defined like so:
glm::vec3 coords[] = {
glm::vec3(-1.0f, -1.0f, -0.1f),
glm::vec3( 1.0f, -1.0f, -0.1f),
glm::vec3( 1.0f, 1.0f, -0.1f),
glm::vec3(-1.0f, 1.0f, -0.1f)
};
glm::vec3 normals[] = {
glm::vec3(0.0f, 0.0f, 1.0f),
glm::vec3(0.0f, 0.0f, 1.0f),
glm::vec3(0.0f, 0.0f, 1.0f),
glm::vec3(0.0f, 0.0f, 1.0f)
};
glm::vec2 texCoords[] = {
glm::vec2(0.0f, 0.0f),
glm::vec2(1.0f, 0.0f),
glm::vec2(1.0f, 1.0f),
glm::vec2(0.0f, 1.0f)
};
unsigned int indices[] = {
0, 1, 2,
2, 3, 0
};
I'm trying to change the quad's 'height' via a black-and-white JPG, so I wrote a vertex shader to do this; however, the transformation is not applied evenly to all the points of the quad. Here's the JPG I'm using:
I expect a sudden, constant bump where the image turns white, but this is what I'm getting: https://i.gyazo.com/639a699e7aa12cda2f644201d787c507.gif. It appears that only the top-left corner reaches the maximum height, and somehow the whole left triangle is distorted.
My vertex shader:
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_normal;
layout(location = 2) in vec2 vertex_texCoord;
layout(location = 3) in vec4 vertex_color;
out vec2 v_TexCoord;
out vec4 v_Color;
out vec3 v_Position;
out vec3 v_Normal;
//model view projection matrix
uniform mat4 u_MVP;
uniform mat4 u_ModelMatrix;
uniform sampler2D u_Texture1_Height;
void main()
{
v_TexCoord = vertex_texCoord;
v_Color = vertex_color;
v_Normal = mat3(u_ModelMatrix) * vertex_normal;
vec4 texHeight = texture(u_Texture1_Height, v_TexCoord);
vec3 offset = vertex_normal * (texHeight.r + texHeight.g + texHeight.b) * 0.33;
v_Position = vec3(u_ModelMatrix * vec4(vertex_position + offset, 1.0f));
gl_Position = u_MVP * vec4(vertex_position + offset, 1.0f);
}
The bump map is evaluated per vertex rather than per fragment, because you do the computation in the vertex shader. The vertex shader is executed only once for each vertex (for each corner of the quad). Compare Vertex Shader and Fragment Shader.
It is not possible to displace the clip-space coordinate for each fragment. You have to tessellate the geometry (the quad) into a lot of small quads. Since the vertex shader is then executed for each corner point in the mesh, the geometry is displaced at each of those points. This is the common approach. See this simulation.
Another possibility is to implement parallax mapping, where a depth effect is achieved by displacing the texture coordinates and distorting the normal vectors in the fragment shader. See Normal, Parallax and Relief mapping, LearnOpenGL - Parallax Mapping, or Bump Mapping with GLSL.
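A sketch of the tessellation idea (a hypothetical helper, not part of the question's code): subdivide the unit quad into an n x n grid of cells, two triangles each, so the vertex shader has many vertices to displace instead of just four corners.

```cpp
#include <cstddef>
#include <vector>

// A quad subdivided into an n x n grid: (n+1)^2 vertices,
// n*n cells, two triangles (6 indices) per cell.
struct GridMesh {
    std::vector<float> positions;   // x, y pairs in [-1, 1]
    std::vector<unsigned> indices;
};

GridMesh makeGrid(unsigned n) {
    GridMesh m;
    for (unsigned j = 0; j <= n; ++j)
        for (unsigned i = 0; i <= n; ++i) {
            m.positions.push_back(-1.0f + 2.0f * i / n);
            m.positions.push_back(-1.0f + 2.0f * j / n);
        }
    for (unsigned j = 0; j < n; ++j)
        for (unsigned i = 0; i < n; ++i) {
            unsigned a = j * (n + 1) + i;        // bottom-left corner of cell
            unsigned b = a + 1;                  // bottom-right
            unsigned c = a + n + 1;              // top-left
            unsigned d = c + 1;                  // top-right
            unsigned tri[6] = { a, b, d, d, c, a };
            m.indices.insert(m.indices.end(), tri, tri + 6);
        }
    return m;
}
```

The UVs (and hence the sampled heights) then vary across the interior of the grid, not just at the four outer corners, so the displacement follows the texture much more closely.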

Switching from orthogonal to perspective projection

I am trying to add a parallax effect to an existing engine. So far the engine has used an orthographic projection; objects are placed in pixel coordinates on the screen. The problem is that I cannot figure out how to replicate the same projection with a perspective projection matrix etc., so that I can add a Z coordinate for depth.
I tried various combinations of matrices and z coordinates already and the result was always a black screen.
The matrix I am trying to replace:
glm::mat4 projection = glm::ortho(0.0f, static_cast<GLfloat>(1280.0f), static_cast<GLfloat>(720.0f), 0.0f, 0.0f, -100.0f);
The vertex shader:
// Shader code (I tested this while having identity matrices for view and model
#version 330 core
layout (location = 0) in vec2 vertex;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main() {
gl_Position = projection * view * model * vec4(vertex.xy, 1.0f, 1.0f);
}
The projection code I thought might work:
glm::mat4 model = glm::mat4(1.0f);
model = glm::translate(model, glm::vec3(-640, -310.0f, 0.0f));
model = glm::scale(model, glm::vec3(1.0f / 1280.0f, 1.0f / 720.0f, 1.0f));
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, -100.0f);
Expected: a rectangle is still displayed at a similar position (I can correct the details once something works) instead of a black screen.
The specification of the Perspective projection matrix is wrong.
glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, -100.0f);
glm::perspective defines a viewing frustum by a field of view angle along the y-axis, an aspect ratio, and the distances to the near and far planes.
So the near and far plane distances have to be positive values (> 0), and near has to be less than far:
0 < near < far
e.g.:
glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, 100.0f);
The geometry has to be in between the near and the far plane, else it is clipped.
The relation between the size of the projected area and the depth is linear, and the ratio can be calculated. It depends on the field of view angle:
float fov_y = glm::radians(45.0f);
float ratio_size_depth = tan(fov_y / 2.0f) * 2.0f;
Note, if an object should appear with half the size in the projection on the viewport, the distance from the object to the camera (depth) has to be doubled.
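That relation can be written down directly (plain C++, no GLM; a small sketch of the formula above):

```cpp
#include <cassert>
#include <cmath>

// Ratio between the size of the projected area and its depth,
// for a perspective projection with vertical field of view fov_y.
float sizeDepthRatio(float fov_y_radians) {
    return std::tan(fov_y_radians / 2.0f) * 2.0f;
}
```

For fov_y = 45 degrees the ratio is about 0.828, so the depth at which one unit of geometry spans one unit of the projected area is 1 / 0.828 = 1.207, which appears to be where the 1.208 depth value in the shader later in this answer comes from.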
So the corrected model translation matrix and required depth in the shader to have the coordinates match on the plane are as follows:
int width = 1280.0f;
int height = 720.0f;
glm::mat4 model = glm::mat4(1.0f);
model = glm::scale(model, glm::vec3(-1.0f / width, -1.0f / height, 1.0f));
model = glm::translate(model, glm::vec3(-((float)width / 2.0f), -((float)height / 2.0f), 0.0f));
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, 1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, 100.0f);
Shader with Z-Value:
#version 330 core
layout (location = 0) in vec2 vertex;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main() {
gl_Position = projection * view * model * vec4(vertex.xy, 1.208f, 1.0f);
}
This will be equivalent to this orthographic matrix:
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 view = glm::mat4(1.0f);
glm::mat4 projection = glm::ortho(0.0f, static_cast<GLfloat>(this->width), static_cast<GLfloat>(this->height), 0.0f, 0.0f, -100.0f);
The matrices can also be multiplied together so that only one projection matrix is passed to the shader. This makes it easier to pass an actual model matrix with each mesh, etc.

Change camera position and direction in OpenGL?

I have a code (game) with a fixed camera in an ortho projection. It runs smoothly until I change the camera position from (0,0,1) to (0,0,-1).
In a nutshell, I have 2 textures:
{ //texture 1
960.0f, 0.0f, -5.0f, 0.0f, 0.0f,
960.0f, 1080.0f, -5.0f, 1.0f, 0.0f,
1920.0f, 0.0f, -5.0f, 0.0f, 1.0f,
1920.0f, 1080.0f, -5.0f, 1.0f, 1.0f
}
{ // texture 2
1290.0f, 390.0f, -7.0f, 0.0f, 0.0f,
1290.0f, 690.0f, -7.0f, 1.0f, 0.0f,
1590.0f, 390.0f, -7.0f, 0.0f, 1.0f,
1590.0f, 690.0f, -7.0f, 1.0f, 1.0f
}
the transformation matrices:
view = glm::lookAt
(
glm::vec3( 0.0f, 0.0f, 1.0f ),
glm::vec3( 0.0f, 0.0f, 0.0f ),
glm::vec3( 0.0f, 1.0f, 0.0f )
);
projection = glm::ortho
(
0.0f,
1920.0f,
0.0f,
1080.0f,
1.0f, // zNear
10.0f // zFar
);
the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4( aPos, 1.0 );
TexCoord = vec2( aTexCoord.x, aTexCoord.y );
}
If I run this code, it properly displays both textures, does depth testing,...
However, if I change the camera position to (0, 0, -1) and the textures' Z-coordinates to their inverses, +5 and +7, and keep the same target (0, 0, 0), no texture is displayed (rendered). Shouldn't it display the same as before the changes?
The issue is related to the orthographic projection matrix, because it is not centered. When the z-axis of the view is inverted, the x-axis is inverted, too. Note that the right-hand rule still has to be fulfilled, and the x-axis is the cross product of the y-axis and the z-axis.
When the geometry is at z = -5 and the view and projection matrices are as follows:
view = glm::lookAt(
glm::vec3(0.0f, 0.0f, 1.0f),
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(0.0f, 1.0f, 0.0f));
projection = glm::ortho(0.0f, 1920.0f, 0.0f, 1080.0f, 1.0f, 10.0f);
then the object is projected to the viewport:
But if you switch the z position of the geometry and the view, then you get the following situation:
view = glm::lookAt(
glm::vec3(0.0f, 0.0f, -1.0f),
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(0.0f, 1.0f, 0.0f));
then the object is beside the viewport:
Shift the orthographic projection along the x-axis to solve your issue:
projection = glm::ortho(-1920.0f, 0.0f, 0.0f, 1080.0f, 1.0f, 10.0f);
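The cross-product argument can be checked numerically. Below is a minimal sketch without GLM; cameraRight is a hypothetical helper mirroring the first step glm::lookAt performs internally:

```cpp
#include <cassert>

// The camera's right axis is cross(forward, up), where
// forward = center - eye. Flipping the view direction along z
// also flips the right axis along x.
struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 cameraRight(Vec3 eye, Vec3 center, Vec3 up) {
    Vec3 f = { center.x - eye.x, center.y - eye.y, center.z - eye.z };
    return cross(f, up);   // not normalized; only the sign matters here
}
```

A camera at (0, 0, 1) looking at the origin has right axis (1, 0, 0); moving it to (0, 0, -1) flips that to (-1, 0, 0). With a non-centered ortho range of [0, 1920] on x, the geometry therefore ends up on the invisible side of the viewport, which is exactly what the shifted projection above compensates for.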

How to position an object in model / view / world space?

I have a cube which is defined with the centre at 0,0,0 and the edges reaching out to -1/+1 (i.e. the cube has width, height, depth of 2).
I then setup the following matrices:
glm::mat4 modelMat4(1.0f);
modelMat4 = glm::translate(modelMat4, glm::vec3(0.0f, 0.0f, 200.0f));
modelMat4 = glm::scale(modelMat4, glm::vec3(200.0f, 200.0f, 200.0f));
glm::mat4 viewMat4;
viewMat4 = glm::lookAt(glm::vec3(0.0f, 0.0f, zNear),
glm::vec3(0.0f, 0.0f, zFar),
glm::vec3(0.0f, 1.0f, 0.0f));
// initialWidth = window width
// initialHeight = window height
// zNear = 1.0f
// zFar = 1000.0f
glm::mat4 projectionMat4;
projectionMat4 = glm::frustum(-initialWidth / 2.0f, initialWidth / 2.0f, -initialHeight / 2.0f, initialHeight / 2.0f, zNear, zFar);
But the middle of my object appears at the near z-plane (i.e. I can only see the back half of my cube, from the inside).
If I adjust the model transform to be:
glm::translate(modelMat4, glm::vec3(0.0f, 0.0f, 204.0f));
Then I can see the front side of my cube as well.
If I change the model transform to be:
glm::translate(modelMat4, glm::vec3(0.0f, 0.0f, 250.0f));
Then the cube only rasterises at approx 2x2x2 pixels.
What am I misunderstanding about model, view, and projection matrices? I was expecting the transform to be linear, but the cube disappears between z = 200 and z = 250, even though the near and far planes are defined at 1.0f and 1000.0f.
Edit: My shader code is below:
#version 100
layout(location = 0) in vec3 v_Position;
layout(location = 1) in vec4 v_Colour;
layout(location = 2) uniform mat4 v_ModelMatrix;
layout(location = 3) uniform mat4 v_ViewMatrix;
layout(location = 4) uniform mat4 v_ProjectionMatrix;
out vec4 f_inColour;
void main()
{
gl_Position = v_ProjectionMatrix * v_ViewMatrix * v_ModelMatrix * vec4(v_Position, 1.0);
f_inColour = v_Colour;
}
You didn't show how you are multiplying your matrices, but from what you describe, it seems the order could be wrong. Be sure to do it in this order:
MVP = projectionMat4 * viewMat4 * modelMat4;
UPDATED
Looking more carefully at your code, it seems that you are lacking a multiplication to concatenate your transformations:
modelMat4 = glm::translate(modelMat4, glm::vec3(0.0f, 0.0f, 200.0f));
modelMat4 *= glm::scale(modelMat4, glm::vec3(200.0f, 200.0f, 200.0f)); // <-- here
so modelMat4 will be the result of a scale and then a translation
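To see concretely what the concatenation order does, here is a minimal column-major 4x4 sketch (plain C++, no GLM; the hypothetical helpers mirror GLM's conventions) that applies translate-then-scale to the cube corner (1, 1, 1):

```cpp
#include <cassert>

// Column-major 4x4 matrix, like GLM: m[column][row].
struct Mat4 { float m[4][4]; };

Mat4 identity() {
    Mat4 r = {};
    for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
    return r;
}

Mat4 mul(const Mat4& a, const Mat4& b) {           // returns a * b
    Mat4 r = {};
    for (int c = 0; c < 4; ++c)
        for (int rw = 0; rw < 4; ++rw)
            for (int k = 0; k < 4; ++k)
                r.m[c][rw] += a.m[k][rw] * b.m[c][k];
    return r;
}

Mat4 translate(float x, float y, float z) {
    Mat4 r = identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z;   // translation column
    return r;
}

Mat4 scale(float s) {
    Mat4 r = identity();
    r.m[0][0] = r.m[1][1] = r.m[2][2] = s;
    return r;
}

// Transform a point (w = 1) by a matrix.
void apply(const Mat4& a, const float in[3], float out[3]) {
    for (int rw = 0; rw < 3; ++rw)
        out[rw] = a.m[0][rw] * in[0] + a.m[1][rw] * in[1]
                + a.m[2][rw] * in[2] + a.m[3][rw];
}
```

The combined matrix T * S maps (1, 1, 1) to (200, 200, 400): the cube is scaled around its own origin first and then pushed 200 units along z, i.e. a scale followed by a translation, as the answer states. Reversing the order, S * T, maps the same corner to (200, 200, 40200), so the order genuinely matters.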