GLM view matrix causing model matrix to have no effect - c++

I'm trying to get an object to always face the camera. I looked up a way to do this, but the problem is that when I put this part into the view matrix, nothing is affected by the model matrix any more. How can I still translate the object using the model matrix? Code:
GLuint transformLocation=glGetUniformLocation(textureShaders,"transform");
glm::mat4 transform;
glm::mat4 model;
glm::vec3 playerPosition=user.getPosition();
model=glm::translate(model,glm::vec3(xpos,0.0f,zpos));
glm::mat4 view;
view=glm::lookAt(cam.getPositionVector(),cam.getPositionVector()+cam.getFrontVector(),cam.getUpVector());
glm::mat4 rotationMatrix=glm::transpose(glm::lookAt(glm::vec3(xpos,0.0f,zpos),playerPosition,glm::vec3(0.0f,1.0f,0.0f)));
view*=rotationMatrix;
glm::mat4 projection;
projection=glm::perspective(45.0f,(float)900/(float)600,0.1f,100.0f);
transform=projection*view*model;

Your "rotation" matrix doesn't really make sense:
rotationMatrix=glm::transpose(glm::lookAt(glm::vec3(xpos,0.0f,zpos),playerPosition,glm::vec3(0.0f,1.0f,0.0f)));
This will not result in a rotation matrix (except when both xpos and zpos happen to be zero). lookAt creates a transform matrix that can be decomposed as R * T(-eye) (for whatever eye position you call it with). Taking the transpose of this matrix moves the translation column into the fourth row, which completely breaks the final w coordinate.
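If the goal is a billboard that always faces the player, one option (a sketch built from the question's own variables, not a drop-in fix) is to keep only the rotation part of that lookAt matrix and apply it in the model matrix rather than the view matrix:
glm::vec3 objectPos(xpos, 0.0f, zpos);
glm::mat4 lookAtPlayer = glm::lookAt(objectPos, playerPosition, glm::vec3(0.0f, 1.0f, 0.0f));
// Keep only the upper-left 3x3 (the pure rotation) before transposing,
// so no translation ends up in the fourth row.
glm::mat4 rotationMatrix = glm::transpose(glm::mat4(glm::mat3(lookAtPlayer)));
// Put the rotation into the model matrix instead of the view matrix,
// so the object's own translation still applies:
model = glm::translate(glm::mat4(1.0f), objectPos) * rotationMatrix;
transform = projection * view * model;
// Depending on which local axis your model treats as "front",
// you may need an extra 180-degree flip around Y.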

Related

How to use LookAt matrix in vertex shader

Let's say I have the vertex shader code below:
attribute vec4 vPos;
uniform mat4 MVP;
uniform mat4 LookAt;
void main() {
gl_Position = MVP * vPos;
}
How do I use the LookAt matrix in this shader to position the eye of the camera? I have tried LookAt * MVP * vPos but that didn't seem to work as my triangle just disappeared off screen!
I would suggest moving the LookAt outside the shader to avoid unnecessary per-vertex calculation. The shader still does
gl_Position = MVP * vPos;
and you manipulate MVP in the application with glm. For example:
projection = glm::perspective(fov, aspect, 0.1f, 10000.0f);
view = glm::lookAt(eye, center, up);
model = matrix of the model, with all the dynamic transforms.
MVP = projection * view * model;
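Concretely, the CPU side could look like this (a sketch: fov, aspect, eye, center, up and shaderProgram are illustrative placeholders; "MVP" matches the uniform name in the shader above):
#include <glm/gtc/matrix_transform.hpp> // glm::perspective, glm::lookAt
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

glm::mat4 projection = glm::perspective(glm::radians(fov), aspect, 0.1f, 10000.0f); // recent GLM expects radians
glm::mat4 view = glm::lookAt(eye, center, up);
glm::mat4 model = glm::mat4(1.0f); // the object's own dynamic transforms go here
glm::mat4 MVP = projection * view * model;

glUseProgram(shaderProgram);
GLint mvpLocation = glGetUniformLocation(shaderProgram, "MVP");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(MVP));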
A LookAt matrix is in general called a View matrix and is concatenated with a model-to-world transform matrix to form the WorldView matrix. This is then multiplied by the projection matrix, which is often orthographic or perspective. Vertex positions in model space are multiplied with the resulting matrix in order to be transformed to clip space (kinda... I skipped a couple of steps here that you don't have to do and that are performed by the hardware/driver).
In your case, make sure that you're using the correct 'handedness' for your transformations. Also, you can try multiplying the position in the reverse order with the transposes of your transformation matrices, like so: vPos * T_MVP * T_LookAt.

OpenGl local coordinate rotation

I have been attempting to rotate an object around its local coordinates and then move it based on the rotated coordinates, but I have not been able to achieve the desired results.
To explain the problem in more depth: I have an object at a certain point in space, and I need to rotate it around its own origin (not the global origin) and then translate it along the newly rotated axes. After much experimenting I have discovered that I can either rotate the object around its origin, but then the coordinates are not rotated with it, or have the object's local coordinates transformed with it, but then it rotates around the global origin.
Currently my rotation/translation/scaling code looks like this:
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f),trans);
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(sx, sy ,sz);
glm::vec3 myRotationAxis( 0, 1, 0);
glm::mat4 myRotationMatrix =glm::rotate(glm::mat4(1.0f),rot, myRotationAxis);
Model= myScalingMatrix* myRotationMatrix*myMatrix;
glm::mat4 MVP = Projection* View * Model;
I believe this is the problem code, specifically the second line from the bottom, but I could be wrong and will post more code if it's needed.
I have also attempted to create an inverse matrix and use that at the start of the calculation, but that appears to do nothing (I can add the code I attempted this with if needed).
If any elaboration is needed regarding this issue, feel free to ask and I will expand on the question.
Thanks.
EDIT 1:
Slightly modified code, as suggested in the answers section; it still gives the same bug though.
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(sx, sy ,sz);
glm::vec3 myRotationAxis( 0, 1, 0);
glm::mat4 myRotationMatrix =glm::rotate(glm::mat4(1.0f),rot, myRotationAxis);
glm::vec4 trans(x,y,z,1);
glm::vec4 vTrans = myRotationMatrix* trans ;
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f),vTrans.x,vTrans.y,vTrans.z);
Model= myScalingMatrix* myRotationMatrix*myMatrix;
You need to apply your rotation matrix to the translation vector (trans).
So, assuming trans is a vec4, your code will be:
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(sx, sy ,sz);
glm::vec3 myRotationAxis( 0, 1, 0);
glm::mat4 myRotationMatrix =glm::rotate(glm::mat4(1.0f),rot, myRotationAxis);
glm::vec4 vTrans = myRotationMatrix * trans;
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(vTrans));
Model= myScalingMatrix* myRotationMatrix*myMatrix;
glm::mat4 MVP = Projection* View * Model;
(glm::vec3(vTrans) converts the vec4 to a vec3; a vTrans.xyz swizzle would also work, but only with GLM's swizzle support enabled.)
So to complete the answer: if the model's center is not at (0,0,0), you will have to compute the bounds of your model and translate it by half of that extent minus the model's bottom-left vertex, so that it rotates about its own center.
It's explained well here:
model local origin
Given the supplied code, this answer is the best available... if you want more details, supply some screenshots and details on your Projection and View matrix calculations.
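For reference, a common way to express "scale, then rotate about the object's own origin, then move it along the rotated axes" is to compose the model matrix as translation * rotation * scale, rotating the offset into world space first. A sketch using the question's variable names (recent GLM expects the angle in radians and a matrix as the first argument to glm::scale):
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), rot, glm::vec3(0, 1, 0));
glm::mat4 myScalingMatrix = glm::scale(glm::mat4(1.0f), glm::vec3(sx, sy, sz));
glm::vec3 localOffset(x, y, z); // the offset expressed in the object's own axes
glm::vec3 worldOffset = glm::vec3(myRotationMatrix * glm::vec4(localOffset, 1.0f));
// The rightmost transform is applied to the vertices first: scale, then rotate, then translate.
glm::mat4 Model = glm::translate(glm::mat4(1.0f), worldOffset) * myRotationMatrix * myScalingMatrix;
glm::mat4 MVP = Projection * View * Model;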

Apply transformation to object

I'm creating a basic OpenGL scene and I have a problem with manipulating my objects. Each has a different transformation matrix, and there are also modelview/translation/scaling matrices for the whole scene.
How do I bind this data to my object before executing the calculations in the vertex shader? I've read about gl(Push|Pop)Matrix(), but from what I understand these functions are deprecated.
A bit of my code. Position from vertex shader:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
And C++ function to display objects:
// Clear etc...
mat4 lookAt = glm::lookAt();
glLoadMatrixf(&lookAt[0][0]);
mat4 combined = lookAt * (mat4) sceneTranslation * (mat4) sceneScale;
glLoadMatrixf(&combined[0][0]);
mat4 objectTransform(1.0);
// Transformations...
// No idea if it works, but objects are affected by camera position but not individually scaled, moved etc.
GLuint gl_ModelViewMatrix = glGetUniformLocation(shaderprogram, "gl_ModelViewMatrix");
glUniformMatrix4fv(gl_ModelViewMatrix, 1, GL_FALSE, &objectTransform[0][0]);
// For example
glutSolidCube(1.0);
glutSwapBuffers();
Well, you don't have to use glLoadMatrix and the other built-in matrix functions, because that can be even more difficult than handling your own matrices.
A simple camera example without any controls, i.e. a static camera:
glm::mat4x4 view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector, //or the focus point the camera is pointing to
upVector);
It returns a 4x4 matrix, this is the view matrix.
glm::mat4x4 projection_matrix =
glm::perspective(60.0f, float(screenWidth)/float(screenHeight), 1.0f, 1000.0f);
This is the projection matrix.
So now you have the view and projection matrices, and you can send them to the shader:
gShader->bindShader();
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
bindShader is simply glUseProgram(shaderprog);
and the uniform helper is:
void sendUniform4x4(const string& name, const float* matrix, bool transpose=false)
{
GLint location = getUniformLocation(name);
glUniformMatrix4fv(location, 1, transpose, matrix);
}
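getUniformLocation here is just a thin wrapper around glGetUniformLocation; a minimal sketch, assuming the shader class stores its linked program handle in a member called program (a hypothetical name):
GLint getUniformLocation(const string& name)
{
// glGetUniformLocation returns -1 if the uniform is not active in the linked program
return glGetUniformLocation(program, name.c_str());
}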
Your model matrix is individual for each of your objects:
glm::mat4x4 model_matrix= glm::mat4(1); //this is the identity matrix, so its static
model_matrix= glm::rotate(model_matrix,
rotationdegree,
vec3(axis)); //same as opengl function.
This creates a model matrix, and you can send it to your shader too:
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
glm::value_ptr(...) returns a pointer to the matrix's underlying float data, which is what glUniformMatrix4fv expects.
In your shader code don't use gl_ModelViewMatrix and gl_ProjectionMatrix;
the matrices are sent via uniforms:
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(){
gl_Position = projection_matrix*view_matrix*model_matrix*gl_Vertex;
//i wrote gl_Vertex because of the glutSolidTeapot.
}
I've never used these built-in mesh functions, so I don't know exactly how they work; assuming they send the vertices to the shader in immediate mode, use gl_Vertex.
If you create your own meshes, use a VBO with glVertexAttribPointer and glDrawArrays/glDrawElements, as sketched below.
Don't forget to bind the shader before sending uniforms.
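A minimal VBO setup could look like this (a sketch for a core-profile shader that declares its own vertex attribute at location 0 instead of using gl_Vertex; the names and data are illustrative):
GLfloat vertices[] = { -0.5f,-0.5f,0.0f, 0.5f,-0.5f,0.0f, 0.0f,0.5f,0.0f }; // one triangle
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);
// when drawing: bind the shader, send the uniforms, then
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);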
So with a complete example:
glm::mat4x4 view_matrix = glm::lookAt(glm::vec3(2,4,2), glm::vec3(-1,-1,-1), glm::vec3(0,1,0));
glm::mat4x4 projection_matrix =
glm::perspective(60.0f, float(screenWidth)/float(screenHeight), 1.0f, 10.0f);
glm::mat4x4 model_matrix= glm::mat4(1); //this remains unchanged
glm::mat4x4 teapot_model_matrix= glm::rotate(model_matrix, //this is the teapots model matrix, apply transformations to this
45,
glm::vec3(1,1,1));
teapot_model_matrix = glm::scale(teapot_model_matrix, glm::vec3(2,2,2));
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
glutSolidTeapot(1.0); //the argument is the overall size of the teapot
glutSwapBuffers();
///////////////////////////////////
in your shader:
uniform mat4 projection_matrix; //these are the matrixes you've sent
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(){
gl_Position = projection_matrix*view_matrix*model_matrix*vec4(gl_Vertex.xyz,1);
}
Now you should have a camera positioned at (2,4,2), focusing on (-1,-1,-1), with the up vector pointing up :)
A teapot is rotated by 45 degrees around the (1,1,1) vector, and scaled by 2 in every direction.
After changing a model matrix, send it to the shader; so if you have more objects to render, send each object's model matrix before its draw call if you want different transformations applied to each mesh.
Pseudocode for this looks like:
camera.lookat(camerapostion,focuspoint,updirection); //sets the view
camera.project(fov,aspect ratio,near plane, far plane) //and projection matrix
camera.sendviewmatrixtoshader;
camera.sendprojectionmatrixtoshader;
obj1.rotate(45 degrees, 1,1,1); //these functions should transform the model matrix of the object. Make sure each one has its own.
obj1.sendmodelmatrixtoshader;
obj2.scale(2,1,1);
obj2.sendmodelmatrixtoshader;
If it doesn't work, try it with a vertex buffer and a simple triangle or cube created by yourself.
You should use a math library; I recommend GLM. It has matrix functions just like OpenGL's and uses column-major matrices, so you can calculate your own and apply them to objects.
First, you should have a matrix class for your scene which calculates your view matrix and projection matrix (glm::lookAt and glm::perspective). They work the same way as in OpenGL. You can send them as uniforms to the vertex shader.
For the objects, you calculate your own matrices and send them as the model matrix to the shader(s).
In the shader or on the CPU you calculate the view-projection matrix:
vp = proj * view
You send your individual model matrices to the shader and calculate the final position:
gl_Position = vp * m * vec4(vertex.xyz, 1);
MODEL MATRIX
With GLM you can easily calculate and transform your matrices. You create a simple identity matrix:
glm::mat4x4(1) //identity
Then you can translate, rotate and scale it:
glm::scale
glm::rotate
glm::translate
They work like the immediate-mode matrix functions in OpenGL.
After you have your matrix, send it via a uniform:
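For example, a short sketch of the whole chain (recent GLM takes the rotation angle in radians; older versions take degrees):
glm::mat4 model(1.0f); // identity
model = glm::translate(model, glm::vec3(1.0f, 0.0f, 0.0f)); // move it
model = glm::rotate(model, glm::radians(45.0f), glm::vec3(0, 1, 0)); // spin it around Y
model = glm::scale(model, glm::vec3(2.0f)); // double its size
shader->senduniform("model", model);
Each successive call post-multiplies the matrix, just like glTranslate/glRotate/glScale did in immediate mode.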
MORE MODEL MATRIX
shader->senduniform("proj", camera.projectionmatrix);
shader->senduniform("view", camera.viewmatrix);
glm::mat4 model(1);
obj1.modelmatrix = glm::translate(model,vec3(1,2,1));
shader->senduniform("model", obj1.modelmatrix);
objectloader.render(obj1);
obj2.modelmatrix = glm::rotate(model,obj2.degrees,vec3(obj2.rotationaxis));
shader->senduniform("model", obj2.modelmatrix);
objectloader.render(obj2);
This is just one way to do it. You can write a class for push/pop-style matrix handling and automate the method above like this:
obj1.rotate(degrees,vec3(axis)); //this calculates the obj1.modelmatrix for example rotating the identity matrix.
obj1.translate(vec3(x,y,z))//more transform
obj1.render();
//continue with object 2
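A minimal sketch of such a wrapper (the names here, including the Shader type and its senduniform method, are placeholders for whatever your own classes provide):
struct SceneObject {
glm::mat4 modelmatrix = glm::mat4(1.0f);
void rotate(float degrees, const glm::vec3& axis) {
modelmatrix = glm::rotate(modelmatrix, glm::radians(degrees), axis);
}
void translate(const glm::vec3& offset) {
modelmatrix = glm::translate(modelmatrix, offset);
}
void render(Shader& shader) {
shader.senduniform("model", modelmatrix); // send this object's matrix...
// ...then issue the draw call for this object's mesh
}
};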
VIEW MATRIX
The view matrix is almost the same as a model matrix. Use it to control the global "model matrix", the camera. This transforms your scene globally, while you can still have individual model matrices for your objects.
In my camera class I calculate this with glm::lookAt (the same as in OpenGL), then send it via a uniform to all the shaders I use.
Then when I render something I can manipulate its model matrix, rotating or scaling it, but the view matrix is global.
If you want a static object, you don't have to use a model matrix for it; you can calculate the position with just:
gl_Position = projmatrix*viewmatrix*staticobjectvertex;
GLOBAL MODEL MATRIX
You can have a global model matrix too.
Use it like
renderer.globmodel.rotate(axis,degree);
renderer.globmodel.scale(x,y,z);
Send it as a uniform too, and apply it after the objects' model matrices, as in the sketch below.
(I've used it to render ocean reflections to texture.)
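A sketch of what that could look like (the "global_model" uniform name and the angle/scale values are assumptions):
glm::mat4 globmodel(1.0f);
globmodel = glm::rotate(globmodel, glm::radians(degrees), glm::vec3(0, 1, 0)); // rotate the whole scene
globmodel = glm::scale(globmodel, glm::vec3(2.0f)); // and scale it
shader->senduniform("global_model", globmodel);
// in the vertex shader the chain then becomes:
// gl_Position = projection_matrix * view_matrix * global_model * model_matrix * vec4(vertex.xyz, 1);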
To sum up:
create a global view(camera) matrix
create a model matrix for each of your scenes, meshes or objects
transform the objects' matrixes individually
send the projection, model and view matrixes via uniforms to the shader
calculate the final position: proj*camera*model*vertex
move your objects, and move your camera
I'm not saying there isn't a better way to do this, but this works well for me.
PS: if you'd like some camera class tutorials, I have a pretty good one ;).

OpenGL rotation and translation done correctly

I'm a bit stuck when it comes to rotation and translation in OpenGL.
I've got 3 matrices: projection, view and model.
My vertex shader:
gl_Position = projection * model * view * vec4(vertexData, 1);
What is the best way to translate and rotate objects?
Should I multiply my model matrix with a translation and/or rotation matrix, or pass the data (rotation and translation) to the shader and do the math there?
Also, I need to know "the final object position" for my mouse-picking implementation.
What I've done so far is something like this:
object.Transformation = Matrix.CreateTransLation(x,y,z) * Matrix.CreateRotation(x,y,z);
...
ForEach object to Draw
{
modelMatrix.Push();
modelMatrix.Mult(object.Transformation); // this also updates the matrix for the shader
object.Draw();
modelMatrix.Pop();
}
This works, but it doesn't feel right. What's the best way to do this?
This
gl_Position = projection * model * view * vec4(vertexData, 1);
is wrong. Matrix multiplication is not commutative, i.e. the order of operations matters. The transformations on a vertex's position, in order, are:
model
view
projection
With column vectors, as used by OpenGL, the transformations are applied from right to left. Hence the expression on the right-hand side of the statement should be
gl_Position = projection * view * model * vec4(vertexPosition, 1);
However you can contract view and model transform into a compound modelview (first model, then view) transform. This saves a full matrix multiplication
gl_Position = projection * modelview * vec4(vertexPosition, 1);
The projection should be kept separate as other shading steps may require the eye space position of the vertex which is the result of modelview * position without projection applied.
BTW: you're transforming the vertex position, not the data. A vertex consists of a larger number of attributes (not just the position), hence calling it "Data" is semantically wrong.
What is the best way to translate and rotate objects?
Those are part of the modelview transform. You should create a transformation matrix exactly one time on the CPU and pass it to the GPU. Doing this in the shader would force the GPU to redo the whole calculation for each and every vertex. You don't want to do this.
Update due to comment
Let's say you're using my →linmath.h. Then in your drawing function you'd set up the scaffolding for your scene, i.e. set the viewport and build the projection and view matrices:
#include <linmath.h>
/* ... */
void display(void)
{
mat4x4 projection;
mat4x4 view;
glClear(…),
glViewport(…);
mat4x4_frustum(projection, …);
// linmath.h doesn't have a look_at function... yet
// I'll add it soon
mat4x4_look_at(view, …);
Then for each object you have a position and an orientation (translation and rotation). Orientations are stored most conveniently in a quaternion, but for processing vectors a matrix representation works better. So we iterate over the objects in the scene:
for(int i_object = 0; i_object < scene->n_objects; i_object++) {
Object * const obj = scene->objects + i_object;
mat4x4 translation, orientation, model_view;
mat4x4_translate(translation, obj->pos.x, obj->pos.y, obj->pos.z);
mat4x4_from_quat(orientation, obj->orientation);
mat4x4_mul(model_view, translation, orientation);
model_view now contains the model matrix. Next we multiply the view matrix onto it. Remember, transformations are applied right to left (mat4x4_mul can output onto one of its input operands).
mat4x4_mul(model_view, view, model_view);
Now model_view contains the full compound model orientation and translation and view matrix. All we need to do now is bind the shader program used for the object
glUseProgram(obj->shader->program);
Set the uniforms
glUniformMatrix4fv(obj->shader->location.projection, 1, GL_FALSE, (const GLfloat*) projection);
glUniformMatrix4fv(obj->shader->location.modelview, 1, GL_FALSE, (const GLfloat*) model_view);
// and a few others...
And draw the object
object_draw(obj);
}
/* ... */
}
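For comparison, roughly the same per-object step with GLM instead of linmath.h could look like this (a sketch, assuming an Object struct holding a glm::vec3 pos and a glm::quat orientation, with projection and view built as glm::mat4 via glm::perspective/glm::lookAt):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::translate
#include <glm/gtc/quaternion.hpp>       // glm::quat, glm::mat4_cast
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

glm::mat4 translation = glm::translate(glm::mat4(1.0f), obj->pos);
glm::mat4 orientation = glm::mat4_cast(obj->orientation);   // quaternion -> rotation matrix
glm::mat4 model_view = view * (translation * orientation);  // first model, then view

glUseProgram(obj->shader->program);
glUniformMatrix4fv(obj->shader->location.projection, 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(obj->shader->location.modelview, 1, GL_FALSE, glm::value_ptr(model_view));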

glm combine rotation and translation

I have an object which I first want to rotate (about its own center) and then translate to some point. I have a glm::quat that holds the rotation and a glm::vec3 that holds the point to which it needs to be translated.
glm::vec3 position;
glm::quat orientation;
glm::mat4 modelmatrix; <-- want to combine them both in here
modelmatrix = glm::translate(glm::toMat4(orientation),position);
Then in my render function, I do:
pvm = projectionMatrix*viewMatrix*modelmatrix;
glUniformMatrix4fv(pvmMatrixUniformLocation, 1, GL_FALSE, glm::value_ptr(pvm));
..and render...
Unfortunately, the object just orbits around the origin when I apply a rotation (the farther the "position" from the origin, the larger the orbit).
When I apply only the position, it translates fine. When I apply only the rotation, it stays at the origin and rotates about its center (as expected). So why does it go wrong when I apply them both? Am I missing something basic?
Because you're applying them in the wrong order. By doing glm::translate(glm::toMat4(orientation),position), you are doing the equivalent of this:
glm::mat4 rot = glm::toMat4(orientation);
glm::mat4 trans = glm::translate(glm::mat4(1.0f), position);
glm::mat4 final = rot * trans;
Note that the translation ends up on the right side of the multiplication, not the left. With column vectors that means the translation is applied to the vertex first, and the rotation is then applied to the already-translated position, so the rotation swings the object around the origin.
You want the rotation to be applied first, so reverse the order of the matrix multiplication.
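In code, reversing the order looks like this (same variable names as in the question):
glm::mat4 rot = glm::toMat4(orientation);
glm::mat4 trans = glm::translate(glm::mat4(1.0f), position);
modelmatrix = trans * rot; // rotate about the object's own center first, then translate to position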