I have a projection matrix in my C++ OpenGL application:
glm::mat4 projection = glm::perspective(45.0f, 16.0f / 9.0f, 1.0f, 100.0f);
This matrix is later sent as a uniform to the vertex shader:
Nade::Shader::SetMat4(app.shader->GetProgram(), "p", app.projection);
and used inside the vertex shader:
gl_Position = m * p * vec4(pos,1.0);
The rendered quad is then moved along the Z axis:
object.Translate(0, 0, -0.05);
Observed behavior: the rendered mesh behaves as if it were under an orthographic projection: it stays the same size but clips away at the far plane.
Expected behavior: the rendered mesh shrinks with distance and eventually clips away.
How can I fix this?
gl_Position = m * p * vec4(pos,1.0); is equivalent to gl_Position = m * (p * vec4(pos,1.0));, which means that the position is transformed by p before being transformed by m.
Assuming p means "projection" and m means "modelview", it should be:
gl_Position = p * m * vec4(pos,1.0);
You might be wondering: Why didn't this cause issues earlier?
With an orthographic projection, and a camera looking down the z axis, the original code could still look like it works. That's because a zero-centered orthographic projection is basically just a scaling matrix.
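As a quick standalone check (a minimal sketch, not code from the question), printing a zero-centered orthographic matrix built with GLM shows that it is purely diagonal, i.e. just a scale:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    // For these bounds the matrix is diag(1/8, 1/4.5, -1/10, 1).
    glm::mat4 ortho = glm::ortho(-8.0f, 8.0f, -4.5f, 4.5f, -10.0f, 10.0f);
    // Print column by column; every off-diagonal entry is zero.
    for (int col = 0; col < 4; ++col)
        std::printf("%7.3f %7.3f %7.3f %7.3f\n",
                    ortho[col][0], ortho[col][1], ortho[col][2], ortho[col][3]);
}

Because an orthographic matrix leaves clip w at 1, there is no perspective divide either way, so swapping the multiplication order never produces visible foreshortening and the bug stays hidden.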
I have been getting unexpected behavior while trying to rotate a basic cube. It may be helpful to know that translating the cube works correctly in the y and z directions. However, translating along the x axis is backwards (I have to negate only x for proper results), and I haven't been able to figure out why.
Furthermore, rotating the cube has been a mess. Without any transform the cube appears correctly. Once I add a rotation transformation, the cube is not displayed until I change one of the x, y, z rotation values from 0 (putting all values back to 0 makes it disappear again). Once it appears, the cube won't rotate around whichever axis I first changed unless I change two or more of the values, and it wobbles around its origin when rotating.
Below are the snippets of my code that I believe have incorrect math.
/* Here's how I set up the matrices for an MVP matrix */
proj = glm::perspective(glm::radians(90.0f), (960.0f / 540.0f), 0.1f, 400.0f);
view = glm::lookAt(glm::vec3(0, 0, -200), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
glm::mat4 model = glm::mat4(1.0f);
/* Here's how I transform the model matrix; note that
translating works properly once the cube is visible */
model = glm::translate(model, glm::vec3(-translation[0], translation[1], translation[2])); //negative x value
model = glm::rotate(model, 30.0f, rotation);
glm::mat4 mvp = proj * view * model;
shader->Bind();
shader->SetUniformMat4f("MVP", mvp);
renderer.Draw(*c_VAO, *c_EBO, *shader);
/* Here's how I use these values in my vertex shader */
layout(location = 0) in vec4 position;
...
uniform mat4 MVP;
...
void main()
{
    gl_Position = MVP * position;
    ...
}
I've checked both the translation and rotation vectors' values and they are as expected, but I am still going mad trying to figure out this problem.
The unit of the angle argument of glm::rotate is radians. Use glm::radians to convert from degrees to radians:
model = glm::rotate(model, 30.0f, rotation);                // wrong: 30.0f is interpreted as radians
model = glm::rotate(model, glm::radians(30.0f), rotation);  // correct
My understanding is that you can convert gl_FragCoord to a point in world coordinates in the fragment shader if you have the inverse of the view-projection matrix, the screen width, and the screen height. First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates by dividing by the width and height respectively, then scaling and offsetting them into the range [-1, 1]. Next, you transform by the inverse view-projection matrix to get a world-space point, which you can use only after dividing by the w component.
Below is the fragment shader code I have that isn't working. Note inverse_proj is actually set to the inverse view projection matrix:
#version 450
uniform mat4 inverse_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2,
        (gl_FragCoord.y / screen_height - 0.5) * 2,
        0,
        1);

    // Convert NDC through inverse clip coordinates to view coordinates
    vec4 clip = inverse_proj * ndc;
    vec3 view = (1 / ndc.w * clip).xyz;

    // ...
}
First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates
While simultaneously ignoring the fact that NDC space is three-dimensional (as is window space). You also forgot that the transformation from clip-space to NDC space involved a division, which you did not undo. Well, you did kinda try to undo it, but after transforming by the inverse clip transformation.
Undoing the vertex post-processing transformations uses all four components of gl_FragCoord (though you could make do with just 3). The first step is undoing the viewport transform, which requires access to the parameters given to glViewport and glDepthRange.
That gives you the NDC coordinate. Then you have to undo the perspective divide. gl_FragCoord.w is given the value 1/clipW. And clipW was the divisor in that operation. So you divide by gl_FragCoord.w to get back into clip space.
From there, you can multiply by the inverse of the projection matrix. Though if you want world-space, the projection matrix you invert must be a world-to-projection, rather than just pure projection (which is normally camera-to-projection).
In code:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
Where viewport is a uniform containing the four parameters specified by the glViewport function, in the same order as given to that function.
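On the application side, a minimal sketch of filling that uniform might look like this (the program handle and uniform name are assumptions):

GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp); // x, y, width, height, as passed to glViewport
glUseProgram(program);
glUniform4f(glGetUniformLocation(program, "viewport"),
            (float)vp[0], (float)vp[1], (float)vp[2], (float)vp[3]);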
I figured out the problems with my code. First, as Nicol pointed out, gl_FragCoord.z (depth) needs to be remapped from window coordinates to NDC as well. Also, there was a mistake in the original code where I wrote 1 / ndc.w * clip instead of clip / clip.w.
As noted by BDL, it would be more efficient to pass the world position as a varying to the fragment shader. Still, the code below is a short way to achieve the desired result entirely in the fragment shader (e.g. for screen-space programs that don't have a world position per fragment but want the view vector per fragment).
#version 450
uniform mat4 inverse_view_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2.0,
        (gl_FragCoord.y / screen_height - 0.5) * 2.0,
        (gl_FragCoord.z - 0.5) * 2.0,
        1.0);

    // Convert NDC through the inverse view-projection matrix to world coordinates
    vec4 clip = inverse_view_proj * ndc;
    vec3 vertex = (clip / clip.w).xyz;

    // ...
}
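As a usage example, the per-fragment view direction mentioned above could be derived from this result with a hypothetical camera_position uniform:

vec3 view_dir = normalize(vertex - camera_position); // camera_position: assumed uniform vec3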
I'm using spherically billboarded sprites along with 3D objects. Because the quad leans backwards to match the camera angle, it intersects with 3D objects immediately behind it. This is more noticeable when the camera angle is very large. The following link provides a very clear visual:
http://answers.unity3d.com/questions/582680/billboard-issue-in-front-of-3d-object.html
Is there an efficient way to resolve this?
The best solution I could come up with was to use cylindrical billboarding for depth calculations and spherical for the quad's actual position. This allows you to use spherical billboarding while ensuring the quad's depth remains constant.
For reference, here are the billboarding modelview matrices; [x] means the value is left as-is.

Cylindrical mvMatrix      Spherical mvMatrix
[1][x][0][x]              [1][0][0][x]
[0][x][0][x]              [0][1][0][x]
[0][x][1][x]              [0][0][1][x]
[x][x][x][x]              [x][x][x][x]
First, modify the modelview matrix for cylindrical billboarding and generate a depth vertex, like so:
depthV = projectionMatrix * (mvm * vertex);
Next set the second column values for spherical billboarding and create the quad as usual:
mvm[1][0] = 0; mvm[1][2] = 0; mvm[1][1] = 1;
gl_Position = projectionMatrix * (mvm * vertex);
Finally send depthV to the fragment shader and use it for the depth calculation.
float ndcDepth = depthV.z / depthV.w;
gl_FragDepth = ((gl_DepthRange.diff * ndcDepth ) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
Scaling should be done before applying the modelview matrices.
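Putting the steps together, a minimal vertex-shader sketch might look like this (uniform and attribute names are assumptions, following the snippets above):

#version 330 core
in vec4 vertex;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
out vec4 depthV; // consumed by the fragment-shader depth code above

void main()
{
    mat4 mvm = modelViewMatrix;
    // Cylindrical: reset the x- and z-axis columns, leave the y column as-is.
    mvm[0][0] = 1.0; mvm[0][1] = 0.0; mvm[0][2] = 0.0;
    mvm[2][0] = 0.0; mvm[2][1] = 0.0; mvm[2][2] = 1.0;
    depthV = projectionMatrix * (mvm * vertex);
    // Spherical: additionally reset the y-axis column for the quad's position.
    mvm[1][0] = 0.0; mvm[1][1] = 1.0; mvm[1][2] = 0.0;
    gl_Position = projectionMatrix * (mvm * vertex);
}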
I am trying to render a 3D model using OpenGL, and I am using GLM for the projection and transformation matrices. I've got my model on the screen and it works just as I intended, except for one small problem.
I am setting the model's translation matrix as
glm::translate(glm::vec3(0, 0, 4))
to move the model a little bit forward so that I can see it. Since in OpenGL, by default, negative z is out towards the 'camera' and positive z is forward, I expected this to work but it doesn't. It only works if I set it to
glm::translate(glm::vec3(0, 0, -4))
But this seems weird to me, as I am setting my zNear to 0.01 and zFar to 1000. Are GLM's z values flipped, or am I doing something wrong here?
Here is my code:
glm::mat4 rotation = glm::mat4(1.0f);
glm::mat4 translation = glm::translate(glm::vec3(0, 0, -4));
glm::mat4 scale = glm::mat4(1.0f);
glm::mat4 modelMatrix = translation * rotation * scale;
glm::mat4 projectionMatrix = glm::perspective(70.0f, aspectRatio, 0.01f, 1000.0f);
glm::mat4 transformationMatrix = projectionMatrix * modelMatrix;
When you call perspective() with near = 0.01 and far = 1000.0, what it actually means is that the visible z range in eye space runs from -0.01 to -1000.0, so you should put the object's z value into the range [-0.01, -1000.0].
Picture a right-handed coordinate system and assume your eye is at z = 0.0 by default, looking down the negative z axis.
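A small standalone check (assumed setup, not the asker's code) makes the sign convention visible: a point in front of the eye at z = -4 gets a positive clip-space w, while a point behind the eye at z = +4 gets a negative w and is clipped:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::mat4 proj = glm::perspective(glm::radians(70.0f), 16.0f / 9.0f, 0.01f, 1000.0f);
    glm::vec4 inFront = proj * glm::vec4(0.0f, 0.0f, -4.0f, 1.0f); // w = +4.0: visible
    glm::vec4 behind  = proj * glm::vec4(0.0f, 0.0f,  4.0f, 1.0f); // w = -4.0: clipped
    std::printf("w in front: %f, w behind: %f\n", inFront.w, behind.w);
}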
I am working on a 3D project using vertices. I started it with a simple gluLookAt to get a first-person camera moving in an environment, and I use it this way:
gluLookAt(_position.x,_position.y,_position.z,
_target.x,_target.y,_target.z,
0,0,1);
Everything was working fine: I was calculating my target according to the position of the mouse and two angles (theta and phi). My project then moved to using vertices for performance reasons, so I had to use the same camera for these new objects. To do this I used the GLM library this way:
glm::mat4 Projection = glm::perspective(90.0f, 800.0f / 600.0f, 0.1f, 100.f);
glm::mat4 View = glm::lookAt(
glm::vec3(position.x,position.y,position.z),
glm::vec3(target.x,target.y,target.z),
glm::vec3(0,0,1)
);
glm::mat4 Model = glm::mat4(1.0f);
// Our ModelViewProjection : multiplication of our 3 matrices
glm::mat4 MVP = Projection * View * Model;
GLuint MatrixID = glGetUniformLocation(this->shaderProgram, "MVP");
Here is the shader I use:
const GLchar* default_vertexSource =
"#version 150 core\n"
"in vec2 position;"
"in vec3 color;"
"out vec3 Color;"
"uniform vec3 translation;"
"uniform mat4 rotation;"
"uniform mat4 MVP;"
"void main() {"
" Color = color;"
" gl_Position = MVP*rotation*vec4(position.x + translation.x, position.y + translation.y, 0.0 + translation.z, 1.0);"
"}";
What happens is that my object's coordinate reference is not the same as my camera's: the object is drawn above it on the current x/z plane, whereas it should be facing the camera on the x/y plane.
From my point of view, it seems that you are misunderstanding how translation in OpenGL works.
glm::lookAt returns a modelview matrix: it describes how the objects in the scene are transformed relative to the camera. In OpenGL you do not "move" the camera; you move all the objects around the camera.
So if you have two objects at (0, 0, 0) (the origin of your world coordinate system), your camera at eye(0,0,-3), center(0,0,0), up(0,1,0), and you want to move one object to the left and one to the right, you need a matrix stack from GLM:
std::stack<glm::mat4> glm_ProjectionMatrix;
Here you push the modelview matrix from your camera as the top element.
To move one object to the left, just use glm::translate and upload the resulting mat4 as the modelview matrix to your shader, then draw that object (not the whole scene).
Then reset the top matrix to the camera modelview (either pop(), if you pushed a copy of the top element before, or just rebuild it with glm::lookAt), and glm::translate in the other direction for the second object.
In your vertex shader you then need no vec3 translation or rotation uniforms.
You just write:
gl_Position = perspective * modelview * vec4(position, 1);
This will position the two objects according to the translations you gave.
If you want to move one object around, just update the modelview matrix for that object.
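A minimal sketch of that flow (the upload and draw helpers are assumed placeholders):

#include <stack>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void uploadModelView(const glm::mat4& mv); // assumed helpers, defined elsewhere
void drawLeftObject();
void drawRightObject();

std::stack<glm::mat4> modelViewStack;

void drawScene()
{
    // Camera modelview as the top element.
    modelViewStack.push(glm::lookAt(glm::vec3(0, 0, -3),
                                    glm::vec3(0, 0, 0),
                                    glm::vec3(0, 1, 0)));

    // Left object: push a translated copy, upload, draw, pop.
    modelViewStack.push(glm::translate(modelViewStack.top(), glm::vec3(-2.0f, 0.0f, 0.0f)));
    uploadModelView(modelViewStack.top());
    drawLeftObject();
    modelViewStack.pop();

    // Right object: same pattern in the other direction.
    modelViewStack.push(glm::translate(modelViewStack.top(), glm::vec3(2.0f, 0.0f, 0.0f)));
    uploadModelView(modelViewStack.top());
    drawRightObject();
    modelViewStack.pop();
}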
http://www.songho.ca/opengl/gl_transform.html
I hope you can understand my answer.
The basics of movement (rotation and translation) in OpenGL are matrix multiplication. You do not need to add a translation vector to your modelview in the shader; GLM can do all of that in your C++ code.
Math:
The modelview is a 4x4 matrix.
If your object should not be rotated or translated, the modelview equals the identity.
If you want to rotate the object, you multiply the modelview matrix by the appropriate 4x4 rotation matrix (see homogeneous coordinates).
If you want to translate the vertex, you multiply the appropriate translation matrix in as well.
Let's say v = (x, y, z, w) is your vertex,
T is a translation and R is a rotation.
A valid operation may look like this:
modelview * R * T * v = v'
v' is the new position of the point in your 3D coordinate system.
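As a minimal GLM sketch of that chain (all values chosen arbitrarily):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 modelview = glm::lookAt(glm::vec3(0, 0, -3),
                                  glm::vec3(0, 0, 0),
                                  glm::vec3(0, 1, 0));
glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0, 1, 0));
glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(2, 0, 0));
glm::vec4 v(1, 0, 0, 1);
glm::vec4 vPrime = modelview * R * T * v; // T applies to v first, then R, then the camera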
You can find some example code here: http://incentivelabs.de/Sourcecode/ (see "Teil 13" or later). If you understand German, you can find my OpenGL with C++ tutorials on YouTube under the channel name "incentivelabs".
Edit:
If I want to move or rotate an object using C++ with GLM:
glm_ProjectionMatrix.top() = camera.getProjectionMat();
glUniformMatrix4fv(uniformLocations["projection"], 1, false,glm::value_ptr(glm_ProjectionMatrix.top()));
glm_ModelViewMatrix.top() = camera.getModelViewMat();
glm_ModelViewMatrix.top() = glm::translate(
glm_ModelViewMatrix.top(),
glm::vec3(0.0f, 0.0f, -Translate));
glUniformMatrix4fv(uniformLocations["modelview"], 1, false,glm::value_ptr(glm_ModelViewMatrix.top()));
In the GLSL vertex shader:
gl_Position = projection * modelview * vec4(vertexPos,1);
vertexPos is an attribute (vec3 for the position)
This code moves all vertices drawn after the upload of the modelview and projection matrices by the same translation.