Scene rendering wonky when camera transformations occur - c++

I've recently switched over to GLM for managing my matrices and vectors; however, when I change variables such as camera angles or position, the whole rendered scene goes haywire.
I really don't know how to describe it other than stretching and moving all over the place.
Problem:
"Camera" transformations such as panning produce strange, unexpected results. It typically happens once the camera pan variables X and Y deviate from 0.
Note:
I used to perform these same transformations with Qt's QMatrix4x4 and QVector3D types rather than glm::mat4x4 and glm::vec4, and everything worked fine.
Here is how I'm implementing the camera in my render function (alpha and beta are rotation variables, camX and camY are panning variables; all default to 0):
glm::mat4x4 mMatrix;
glm::mat4x4 vMatrix;
glm::mat4x4 cameraTransformation;
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(alpha)/*alpha*(float)M_PI/180*/, glm::vec3(0, 1 ,0));
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(beta)/*beta*(float)M_PI/180*/, glm::vec3(1, 0, 0));
glm::vec4 cameraPosition = (cameraTransformation * glm::vec4(camX, camY, distance, 0));
glm::vec4 cameraUpDirection = cameraTransformation * glm::vec4(0, 1, 0, 0);
vMatrix = glm::lookAt(glm::vec3(cameraPosition[0],cameraPosition[1],cameraPosition[2]), glm::vec3(camX, camY, 0.0), glm::vec3(cameraUpDirection[0],cameraUpDirection[1],cameraUpDirection[2]));
glm::mat4x4 glmat = pMatrix * vMatrix * mMatrix;
QMatrix4x4 qmat = QMatrix4x4(glmat[0][0],glmat[0][1],glmat[0][2],glmat[0][3],
glmat[1][0],glmat[1][1],glmat[1][2],glmat[1][3],
glmat[2][0],glmat[2][1],glmat[2][2],glmat[2][3],
glmat[3][0],glmat[3][1],glmat[3][2],glmat[3][3]);
shaderProgram.bind();
shaderProgram.setUniformValue("mvpMatrix", qmat);
I set up my projection matrix like so (fov = 30 degrees):
pMatrix = glm::perspective( glm::radians(fov), (float)width/(float)height, (float)0.001, (float)10000 );
My matrices look like this at the time they are used (screenshots omitted): one capture before any changes, with all values at 0, and one after camX changes to 14 (note, I didn't rotate my camera!).

glm::mat4x4 cameraTransformation;
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(alpha)/*alpha*(float)M_PI/180*/, glm::vec3(0, 1 ,0));
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(beta)/*beta*(float)M_PI/180*/, glm::vec3(1, 0, 0));
This can be simplified by using matrix multiplication and a different overload of glm::rotate:
glm::mat4x4 cameraTransformation =
glm::rotate(glm::radians(alpha), glm::vec3(0,1,0)) *
glm::rotate(glm::radians(beta), glm::vec3(1,0,0));
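Note: this angle/axis overload of glm::rotate (the one that builds a new matrix rather than multiplying one you pass in) lives in an extension, so if the above doesn't compile you likely need the following include (and, in newer GLM versions, GLM_ENABLE_EXPERIMENTAL defined):
#include <glm/gtx/transform.hpp>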
Next:
glm::vec4 cameraPosition = (cameraTransformation * glm::vec4(camX, camY, distance, 0));
glm::vec4 cameraUpDirection = cameraTransformation * glm::vec4(0, 1, 0, 0);
A zero in the w component of a vector indicates that the vector is a direction, not a position, yet here the output you want is a position. This happens to work because cameraTransformation contains only rotations and no translations, but it's better to be explicit:
glm::vec3 cameraPosition = glm::vec3(cameraTransformation * glm::vec4(camX, camY, distance, 1));
Note: I use a vec3 rather than a vec4 simply because I prefer it (and glm::lookAt takes vec3s anyway).
For the next part you actually do want a direction vector, not a position, so the zero in the w component is correct. Still convert it to a vec3, which in my opinion is clearer:
glm::vec3 cameraUpDirection = glm::vec3(cameraTransformation * glm::vec4(0, 1, 0, 0));
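If the role of w isn't intuitive, here is a quick self-contained illustration with made-up values: a translation moves a point (w = 1) but leaves a direction (w = 0) untouched.
glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(5, 0, 0));
glm::vec4 p = T * glm::vec4(1, 2, 3, 1); // position: becomes (6, 2, 3, 1)
glm::vec4 d = T * glm::vec4(1, 2, 3, 0); // direction: stays (1, 2, 3, 0)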
Next:
vMatrix=
glm::lookAt(glm::vec3(cameraPosition[0],cameraPosition[1],cameraPosition[2]),
glm::vec3(camX, camY, 0.0),
glm::vec3(cameraUpDirection[0],cameraUpDirection[1],cameraUpDirection[2]));
GLM lets you pass a vec4 to the vec3 constructor (the w component is dropped), so you can shorten your code like this:
vMatrix=
glm::lookAt(glm::vec3(cameraPosition),
glm::vec3(camX, camY, 0.0),
glm::vec3(cameraUpDirection));
But we don't even need to do that, because I changed the variables to vec3s above:
vMatrix = glm::lookAt(cameraPosition, glm::vec3(camX, camY, 0.0), cameraUpDirection);
And finally, you can access the components of a glm vector with .x, .y, .z, .w instead of the [] operator, which is easier to read and less error-prone.
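For example (hypothetical values):
glm::vec3 pos(1.0f, 2.0f, 3.0f);
float height = pos.y; // equivalent to pos[1], but self-documenting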

I made a very stupid error!
In my attempt to convert my glm::mat4x4 to a QMatrix4x4, I accidentally swapped the rows and columns: GLM stores matrices column-major, while the 16-float QMatrix4x4 constructor expects its values in row-major order.
I needed to change:
QMatrix4x4 qmat = QMatrix4x4(glmat[0][0],glmat[0][1],glmat[0][2],glmat[0][3],
glmat[1][0],glmat[1][1],glmat[1][2],glmat[1][3],
glmat[2][0],glmat[2][1],glmat[2][2],glmat[2][3],
glmat[3][0],glmat[3][1],glmat[3][2],glmat[3][3]);
to:
QMatrix4x4 qmat = QMatrix4x4(glmat[0][0],glmat[1][0],glmat[2][0],glmat[3][0],
glmat[0][1],glmat[1][1],glmat[2][1],glmat[3][1],
glmat[0][2],glmat[1][2],glmat[2][2],glmat[3][2],
glmat[0][3],glmat[1][3],glmat[2][3],glmat[3][3]);
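Alternatively, a small helper can do the transpose for you. This is just a sketch of mine, not from the original post: it relies on QMatrix4x4's raw-pointer constructor (which reads row-major data) and on glm::value_ptr from the type_ptr extension.
#include <QMatrix4x4>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// glm stores matrices column-major, but QMatrix4x4(const float *)
// expects row-major data, so transpose before handing it over.
static QMatrix4x4 toQMatrix4x4(const glm::mat4 &m)
{
    return QMatrix4x4(glm::value_ptr(glm::transpose(m)));
}
With that, the conversion above becomes QMatrix4x4 qmat = toQMatrix4x4(glmat);.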

Related

Rotation in OpenGL has different effect each time

I am trying to learn OpenGL by coding some things, but I still can't understand the concept of rotation.
Here is my code:
glm::mat4 projection1 = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
glm::mat4 view1 = camera.GetViewMatrix();
ourShader.setMat4("projection", projection1);
ourShader.setMat4("view", view1);
ourShader.setInt("pass1", 1);
glm::mat4 model1 = glm::mat4(1.0f);
vangle += 0.1;
float cvangle = (vangle - 90) * PI / 180;
model1 = glm::translate(model1, glm::vec3(cos(cvangle) * 50, 0, sin(cvangle) * 50));
model1 = glm::scale(model1, glm::vec3(1, 1, 1));
model1 = glm::rotate(model1, 3.0f, glm::vec3(1, 0, 0));
model1 = glm::rotate(model1, 2.0f, glm::vec3(0, 1, 0));
ourShader.setMat4("model", model1);
ourModel.Draw(ourShader);
The helicopter should rotate around the camera, but my problem is that the rotation has a different effect at each angle (screenshots at angle 0 and angle 90 omitted).
My goal is to rotate the helicopter around the camera showing always the same side.
Any help is appreciated.
If you want to rotate an object in place, then you have to apply the rotation before the translation:
model = translate * rotate;
If you want to rotate around a point, then you have to translate the object (by the rotation radius) and then rotate the translated object:
model = rotate * translate;
Note that operations like glm::rotate, glm::scale and glm::translate define a new matrix and multiply the input matrix by it, i.e. the new transformation is applied on the right, in the local space of the input matrix.
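To make that concrete, here is a small illustration of my own (not from the original answer): a and b below end up as the same matrix, because glm::rotate(M, ...) post-multiplies M by the new rotation.
glm::mat4 M = glm::translate(glm::mat4(1.0f), glm::vec3(50.0f, 0.0f, 0.0f));
glm::mat4 a = glm::rotate(M, glm::radians(30.0f), glm::vec3(0, 0, 1));
glm::mat4 b = M * glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0, 0, 1));
// a and b are identical up to floating-point error.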
So in your case the translation has to be done after a rotation around the z axis:
vangle += 0.1;
glm::mat4 model1 = glm::mat4(1.0f);
model1 = glm::rotate(model1, glm::radians(vangle), glm::vec3(0, 0, 1));
model1 = glm::translate(model1, glm::vec3(50.0f, 0.0f, 0.0f));
model1 = glm::scale(model1, glm::vec3(1, 1, 1));

Perspective Projection OPENGL and Compute Shaders

I am following this site to learn ray tracing using compute shaders: https://github.com/LWJGL/lwjgl3-wiki/wiki/2.6.1.-Ray-tracing-with-OpenGL-Compute-Shaders-%28Part-I%29
My question: the tutorial details a procedure to get the perspective projection. I think I followed the steps correctly, but I am getting the wrong result and I believe I made a mistake in my matrix computations.
My code for the perspective projection-
//Getting the perspective projection using glm::perspective
glm::mat4 projection = glm::perspective(60.0f, 1024.0f/768.0f, 1.0f, 2.0f);
//My Camera Position
glm::vec3 camPos=glm::vec3(3.0, 2.0, 7.0);
//My View matrix using glm::lookAt
glm::mat4 view = glm::lookAt(camPos, glm::vec3(0.0, 0.5, 0.0),glm::vec3(0.0, 1.0, 0.0));
//Calculating inverse of the view*projection
glm::mat4 inv = glm::inverse(view*projection);
//Calculating the rays from camera position to the corners of the frustum as detailed in the site.
glm::vec4 ray00=glm::vec4(-1, -1, 0, 1) * inv;
ray00 /= ray00.w;
ray00 -= glm::vec4(camPos,1.0);
glm::vec4 ray10 = glm::vec4(+1, -1, 0, 1) * inv;
ray10 /= ray10.w;
ray10 -= glm::vec4(camPos,1.0);
glm::vec4 ray01=glm::vec4(-1, 1, 0, 1) * inv;
ray01 /= ray01.w;
ray01 -= glm::vec4(camPos,1.0);
glm::vec4 ray11 = glm::vec4(+1, +1, 0, 1) * inv;
ray11 /= ray11.w;
ray11 -= glm::vec4(camPos,1.0);
Result of the above transformations: (screenshot omitted)
As additional information, I am calling my compute shaders using:
//Dispatch Shaders.
glDispatchCompute ((GLuint)1024.0/16, (GLuint)768.0f/8 , 1);
I am also passing the values to the shader using:
//Querying the location for ray00 and assigning the value. Similarly for the rest
GLuint ray00Id = glGetUniformLocation(computeS, "ray00");
glUniform3f(ray00Id, ray00.x, ray00.y, ray00.z);
GLuint ray01Id = glGetUniformLocation(computeS, "ray01");
glUniform3f(ray01Id, ray01.x, ray01.y, ray01.z);
GLuint ray10Id = glGetUniformLocation(computeS, "ray10");
glUniform3f(ray10Id, ray10.x, ray10.y, ray10.z);
GLuint ray11Id = glGetUniformLocation(computeS, "ray11");
glUniform3f(ray11Id, ray11.x, ray11.y, ray11.z);
GLuint camId = glGetUniformLocation(computeS, "eye");
glUniform3f(camId, camPos.x, camPos.y, camPos.z);
Update following derhass's suggestion: my image now looks like this (screenshot omitted).
The glm library uses the standard OpenGL matrix conventions, meaning that the matrices are created with the multiplication order Matrix * Vector in mind. So the following code is wrong:
//Calculating inverse of the view*projection
glm::mat4 inv = glm::inverse(view*projection);
The composition of the view matrix (transforming from world space to eye space) and the projection matrix (transforming from eye space to clip space) is projection * view, not view * projection as you put it (which would apply the projection before the view).
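Applying the Matrix * Vector convention consistently, a corrected sketch would look like this (my own sketch, reusing the question's variable names; note that the corner vectors must likewise be multiplied from the left):
glm::mat4 inv = glm::inverse(projection * view);
glm::vec4 ray00 = inv * glm::vec4(-1, -1, 0, 1);
ray00 /= ray00.w;
ray00 -= glm::vec4(camPos, 1.0);
// ...and the same pattern for ray10, ray01 and ray11.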

OpenGl glm local rotation

I need to rotate an object in its local coordinate system, like you can in 3ds Max, Maya, etc.
My current code is:
ModelMatrix = glm::mat4(1.0f);
TransformMatrix = glm::mat4(1.0f);
ScaleMatrix = glm::mat4(1.0f);
RotateMatrix = glm::mat4(1.0f);
ScaleMatrix = glm::scale(ScaleMatrix, glm::vec3(scalex, scalez, scaley));
TransformMatrix = glm::translate(TransformMatrix, glm::vec3(x, z, y));
RotateMatrix = glm::rotate(RotateMatrix, anglex, glm::vec3(1, 0, 0));
RotateMatrix= glm::rotate(RotateMatrix, angley, glm::vec3(0, 0, 1));
RotateMatrix = glm::rotate(RotateMatrix, anglez, glm::vec3(0, 1, 0));
ModelMatrix = TransformMatrix * ScaleMatrix * RotateMatrix;
MVP = Projection * View * ModelMatrix;
anglex, angley and anglez come from the keyboard.
Right now only the last axis works as local (in my example it's glm::vec3(0, 1, 0), the Z axis). In the image I linked, I show what I need (2) and what I've got (3). If I change anglez it always works as roll, but anglex and angley work in the world coordinate system.
My second attempt was to use quaternions:
quat MyQuaternion= glm::quat(cos(glm::radians(xangle / 2)), 0, sin(glm::radians(xangle / 2)), 0);
quat MyQuaternion2 = glm::quat(cos(glm::radians(yangle/ 2)), sin(glm::radians(yangle / 2)), 0, 0);
quat MyQuaternion3 = glm::quat(cos(glm::radians(zangle / 2)), 0,0,sin(glm::radians(zangle / 2)));
glm::mat4 RotationMatrix = toMat4(MyQuaternion*MyQuaternion2*MyQuaternion3);
But I get the same result.
You should modify the entire ModelMatrix instead of the angles. Initialize ModelMatrix to the identity matrix. Then, when you process keyboard input:
if (rotateAboutX) // rotateAboutX/Y/Z are hypothetical flags set by your keyboard handler
    ModelMatrix = glm::rotate(ModelMatrix, angle, glm::vec3(1, 0, 0));
if (rotateAboutY)
    ModelMatrix = glm::rotate(ModelMatrix, angle, glm::vec3(0, 1, 0));
if (rotateAboutZ)
    ModelMatrix = glm::rotate(ModelMatrix, angle, glm::vec3(0, 0, 1));
if (rotateAboutX || rotateAboutY || rotateAboutZ)
    MVP = Projection * View * ModelMatrix;
You can do this modification at any level. Either the MVP level, the ModelMatrix level (as shown here) or the RotateMatrix level.
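One more distinction worth spelling out (my addition, not part of the original answer): because glm::rotate post-multiplies, the code above rotates about the object's local axes, which is what the question asks for. If you wanted rotation about the world axes instead, you would pre-multiply by a fresh rotation matrix:
// Local-axis rotation (what the code above does):
ModelMatrix = glm::rotate(ModelMatrix, angle, glm::vec3(1, 0, 0));
// World-axis rotation:
ModelMatrix = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(1, 0, 0)) * ModelMatrix;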

Why does QMatrix4x4::lookAt() result in an upside-down camera

I have got a simple OpenGL program which sets up the camera as follows:
void
SimRenderer::render() {
glDepthMask(true);
glClearColor(0.5f, 0.5f, 0.7f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glFrontFace(GL_CW);
glCullFace(GL_FRONT);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
QMatrix4x4 mMatrix;
QMatrix4x4 vMatrix;
QMatrix4x4 cameraTransformation;
cameraTransformation.rotate(mAlpha, 0, 1, 0); // mAlpha = 25
cameraTransformation.rotate(mBeta, 1, 0, 0); // mBeta = 25
QVector3D cameraPosition = cameraTransformation * QVector3D(0, 0, mDistance);
QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);
vMatrix.lookAt(cameraPosition, QVector3D(0, 0, 0), cameraUpDirection);
mProgram.bind();
mProgram.setUniformValue(mMatrixUniformLoc, mProjMatrix * vMatrix * mMatrix );
// render a grid....
}
But the result is an upside-down camera! (screenshot omitted)
When I change the view matrix to be set up as:
QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, -1, 0);
It works! But why should I need to set my up direction to negative Y when my real up direction is positive Y?
Complete class here : https://code.google.com/p/rapid-concepts/source/browse/trunk/simviewer/simrenderer.cpp
Other info: I am rendering to a QQuickFramebufferObject, which binds an FBO to a widget's surface before calling the rendering function. I don't think that would be an issue, but anyway. And this is not a texturing issue at all; there aren't any textures to be flipped. It seems the camera is interpreting the up direction in the opposite way!
http://doc.qt.digia.com/qt-maemo/qmatrix4x4.html#lookAt
Update:
So, since using lookAt and cameraTransformation together may not work, I am trying:
QMatrix4x4 mMatrix;
QMatrix4x4 vMatrix;
QMatrix4x4 cameraTransformation;
cameraTransformation.rotate(mAlpha, 0, 1, 0); // 25
cameraTransformation.rotate(mBeta, 1, 0, 0); // 25
cameraTransformation.translate(0, 0, mDistance);
vMatrix = cameraTransformation.inverted();
That produces exactly the same result :)
I think the camera up axis needs to be accounted for in some way.
It is actually not the camera that is upside down; the texture was rendered to the QML surface upside down. That is really confusing, because you do get the correct direction (Y up) if you are using the widget-based stack (QOpenGLWidget) or simply QOpenGLWindow.
Basically the same as this question; some explanation can be found on the forum or in the bug tracker.
I think the best solution is the one in the bug tracker, which requires no additional transformation on either the QML item or in the matrix: override updatePaintNode and set the texture coordinates transform to mirror vertically.
QSGNode *MyQQuickFramebufferObject::updatePaintNode(QSGNode *node, QQuickItem::UpdatePaintNodeData *nodeData)
{
    if (!node) {
        node = QQuickFramebufferObject::updatePaintNode(node, nodeData);
        QSGSimpleTextureNode *n = static_cast<QSGSimpleTextureNode *>(node);
        if (n)
            n->setTextureCoordinatesTransform(QSGSimpleTextureNode::MirrorVertically);
        return node;
    }
    return QQuickFramebufferObject::updatePaintNode(node, nodeData);
}
Typically this effect is caused by one of several things:
- Mixing up radians and degrees
- Forgetting to set the modelview matrix to the inverse of the camera transform
- Screwing up the inputs to lookAt
I suspect the issue here is the last one.
QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);
Why are you multiplying the up vector by this transformation? I can understand multiplying the distance so that the camera position is transformed, but rotating the up axis sent to a lookAt function will, I suspect, result in weirdness.
Generally, doing transforms using a camera matrix AND using lookAt is a little odd. If you already have a camera matrix with the proper rotation, you can just translate that matrix by the distance required, expressed as a Z vector of the appropriate length, probably QVector3D(0, 0, mDistance), and then set the view matrix to the inverse of the camera matrix:
vMatrix = cameraTransformation.inverted();

OpenGL multiple cube, rotate, move

I'm learning from these tutorials:
http://en.wikibooks.org/wiki/Category:OpenGL_Programming
http://www.opengl-tutorial.org/
I have modified the 7th lesson from http://www.opengl-tutorial.org/ so that the cube rotates. Now I want to have two or three cubes, each in a different place, and make them all rotate, but I really don't know how to do that. So I'm asking and hoping for some help.
The rotation is made by this code:
glm::vec3 axis_y(0, 1, 0);
glm::mat4 anim = glm::rotate(glm::mat4(1.0f), angle, axis_y);
...
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix * anim;
I didn't go through the details of the tutorial, but in principle, you need to create a model matrix for each of the cubes, and then render each cube with its own value of MVP constructed from the cube's model matrix (and the global view & projection matrices).
The above can give you three identical cubes in different positions, rotations and scales. If you want three different objects, you'll need to load each of them separately, preferably into its own buffer object.
EDIT
I don't know the libraries the tutorial uses, but the principle of coding this could be along these lines:
for (int idxCube = 0; idxCube < 3; ++idxCube) {
    glm::mat4 offset = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f * idxCube, 0.0f, 0.0f));
    glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix * offset * anim;
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
    glDrawArrays(...);
}
This would give 3 cubes at positions (0, 0, 0), (10, 0, 0) and (20, 0, 0).
More generally, you'd just have one ModelMatrix for each cube.
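For instance, here is a minimal sketch of the one-model-matrix-per-cube approach (the positions and vertexCount are made up; MatrixID, anim, ViewMatrix and ProjectionMatrix are assumed to be set up as in the tutorial, with <vector> and the GLM transform headers included):
std::vector<glm::mat4> models = {
    glm::translate(glm::mat4(1.0f), glm::vec3( 0.0f, 0.0f, 0.0f)),
    glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f)),
    glm::translate(glm::mat4(1.0f), glm::vec3(20.0f, 0.0f, 0.0f)),
};
for (const glm::mat4 &model : models) {
    // Each cube gets its own MVP built from its own model matrix.
    glm::mat4 MVP = ProjectionMatrix * ViewMatrix * model * anim;
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount); // vertexCount: vertices per cube (hypothetical)
}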