I'm trying to apply multiple rotations around the x, y and z axes to an object using glm::rotate, but for some reason it only rotates around one axis and seems to ignore the other rotations completely.
Here is how I apply rotation:
glm::mat4 rotateTransform = glm::mat4(1.0f);
rotateTransform = glm::rotate(rotateTransform, this->rotation.x, glm::vec3(1, 0, 0));
rotateTransform = glm::rotate(rotateTransform, this->rotation.y, glm::vec3(0, 1, 0));
rotateTransform = glm::rotate(rotateTransform, this->rotation.z, glm::vec3(0, 0, 1));
return glm::translate(glm::mat4(1.0f), this->position) * rotateTransform * glm::scale(glm::mat4(1.0f), this->scale);
The method returns the modelToWorldMatrix, which I then pass to my vertex shader, where I perform the standard transformation of each vertex:
vec4 vertexPositionInModelSpace = vec4(Position, 1);
vec4 vertexInWorldSpace = gModelToWorldTransform * vertexPositionInModelSpace;
vec4 vertexInViewSpace = gWorldToViewTransform * vertexInWorldSpace;
vec4 vertexInHomogeneousClipSpace = gProjectionTransform * vertexInViewSpace;
gl_Position = vertexInHomogeneousClipSpace;
So how do you apply multiple rotations using glm::mat4 matrices?
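For reference, here is a minimal sketch of composing the three rotations into one model matrix. The member names just mirror the question; the glm::radians calls are an assumption that the angles are stored in degrees (current GLM expects radians):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: compose X, Y and Z rotations into a single model matrix.
// rotation/position/scale mirror the question's members; glm::radians assumes
// the angles are stored in degrees, since current GLM wants radians.
glm::mat4 makeModelToWorld(const glm::vec3& rotation,
                           const glm::vec3& position,
                           const glm::vec3& scale)
{
    glm::mat4 rotate = glm::mat4(1.0f);
    rotate = glm::rotate(rotate, glm::radians(rotation.x), glm::vec3(1, 0, 0));
    rotate = glm::rotate(rotate, glm::radians(rotation.y), glm::vec3(0, 1, 0));
    rotate = glm::rotate(rotate, glm::radians(rotation.z), glm::vec3(0, 0, 1));

    return glm::translate(glm::mat4(1.0f), position)
         * rotate
         * glm::scale(glm::mat4(1.0f), scale);
}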
I create a cube in the usual way, with 8 vertices that outline the cube and indices to draw each individual triangle. However, when I build my camera matrix with glm::lookAt and rotate it, the rotation appears to be applied in screen space rather than to world positions.
glm::mat4 Projection = glm::mat4(1);
Projection = glm::perspective(glm::radians(60.0f), (float)window_width / (float)window_hight, 0.1f, 100.0f);
const float radius = 10.0f;
float camX = sin(glfwGetTime()) * radius;
float camZ = cos(glfwGetTime()) * radius;
glm::mat4 View = glm::mat4(1);
View = glm::lookAt(
glm::vec3(camX, 0, camZ),
glm::vec3(0, 0, 0),
glm::vec3(0, 1, 0)
);
glm::mat4 Model = glm::mat4(1);
glm::mat4 mvp = Projection * View * Model;
Then in GLSL:
uniform mat4 camera_mat4;
void main()
{
vec4 pos = vec4(vertexPosition_modelspace, 1.0) * camera_mat4;
gl_Position.xyzw = pos;
}
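This may or may not be the cause here, but note the multiplication order: with GLM's column-vector convention the matrix belongs on the left of the vector. A minimal C++ sketch of that convention (the function and parameter names are placeholders):
#include <glm/glm.hpp>

// GLM (and the usual GLSL convention) treats positions as column vectors, so
// the matrix goes on the LEFT of the vector. Writing vec4(...) * matrix
// instead treats the position as a row vector, which is equivalent to
// multiplying by the transposed matrix.
glm::vec4 transformVertex(const glm::mat4& mvp, const glm::vec3& vertexPosition)
{
    return mvp * glm::vec4(vertexPosition, 1.0f); // matrix * vector
}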
Example: GLM rotating screen coordinates not cube
glm::mat4 Model = glm::mat4(1.0f);
float dir_x = 0.0f, dir_y = 1.0f, dir_z = 0.0f;
do {
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT);
// Use our shader
glUseProgram(programID);
GLuint MatrixID = glGetUniformLocation(programID, "MVP");
glm::mat4 Projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
//glm::mat4 Projection = glm::ortho(-1.0f,1.0f,-1.0f,1.0f,0.0f,100.0f); // In world coordinates
// Camera matrix
glm::mat4 View = glm::lookAt(
glm::vec3(0.5, 0.5, 3), // Camera is at (0.5, 0.5, 3), in World Space
glm::vec3(0.5, 0.5, 0), // and looks at (0.5, 0.5, 0)
glm::vec3(0, 1, 0) // Head is up (set to 0,-1,0 to look upside-down)
);
float rot_angle = 0.0f;
const float speed = 0.01f;
glm::vec3 dir = glm::vec3(dir_x, dir_y, dir_z);
if (glfwGetKey(window, GLFW_KEY_LEFT) == GLFW_PRESS)
{
rot_angle = -1.0f;
Model = glm::translate(tri_center)* glm::rotate(glm::mat4(), glm::radians(rot_angle), glm::vec3(0, 0, -1))*glm::translate(-tri_center)*Model;
//dir left
...
When I rotate the object (a car), I want it to move toward the car's head. At the moment, regardless of where the head is pointing, the car only moves upward.
How can I make dir rotate along with the object?
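One way is to transform dir by the same rotation that is applied to the model. A minimal sketch under that assumption (rotateDirection and angleDegrees are made-up names; rot_angle, dir and the (0, 0, -1) axis mirror the question):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate the movement direction by the same rotation applied to the model, so
// the object keeps moving toward its "head". The w = 0 component means the
// translation part of a mat4 is ignored (directions don't translate).
glm::vec3 rotateDirection(const glm::vec3& dir, float angleDegrees)
{
    glm::mat4 rot = glm::rotate(glm::mat4(1.0f), glm::radians(angleDegrees),
                                glm::vec3(0, 0, -1)); // same axis as the model rotation
    return glm::vec3(rot * glm::vec4(dir, 0.0f));
}
So after each key press you would update the direction with something like dir = rotateDirection(dir, rot_angle); to keep it in sync with Model.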
Changing the center of rotation can be achieved with the following:
Remember that matrices multiply right to left: the first transform goes on the rightmost side, the last one on the left.
First, create a translation that brings the center of rotation to the origin of the scene (0, 0, 0); this is basically negating each of x, y and z. So the translation for the example center vec3(1.0, 2.3, -5.2) is glm::mat4 origin = glm::translate(glm::mat4(1.0f), glm::vec3(-1.0, -2.3, 5.2));
Store this matrix; we are going to use it for ALL points in the mesh.
Now apply the desired rotation(s) to this translation matrix and store the result in a new mat4:
glm::mat4 final = glm::rotate(..) * origin
Finally, bring the center (and the rest of the model) back to its original position by creating a translation by the original center vector:
glm::mat4 relocate = glm::translate(glm::mat4(1.0f), center) and then
glm::mat4 final = relocate * glm::rotate(..) * origin
Essentially what we are doing here is bringing the center of the model to the origin, translating all points relative to that, then rotating them around the center (which is now the origin), then bringing them back the same distance they came.
Now apply this transformation to ALL of the model's points; do this in the vertex shader. If the model is really small you could do it in your own code, but for most meshes that wastes time and memory. This mat4 can also be folded into the model matrix if you don't want to pass another matrix: model = model * final (note: do the transformations first, then the scale, for the model).
The full code looks something like this (you could also multiply the matrices manually, but GLM lets you pass a matrix as the first argument of translate(), and it applies the translation to that matrix):
glm::vec3 center = glm::vec3(1.0, 2.3, -5.2);
glm::mat4 origin = glm::translate(glm::mat4(1.0f), -center); //first bring everything to the origin
glm::mat4 relocate = glm::translate(glm::mat4(1.0f), center); //and back to the center afterwards
glm::mat4 finalTransform = relocate * glm::rotate(glm::mat4(1.0f), ...) * origin; //rotate how you want; read right to left
model = model * finalTransform; //apply this transformation to the model matrix used by the vertex shader
glUniformMatrix4fv(glGetUniformLocation(sp, "model"), 1, GL_FALSE, glm::value_ptr(model)); //pass model matrix into shader program
Also, in your current code it appears that you have the right idea, but you are using the translate function incorrectly. It should be called as glm::translate(mat4, vec3). At the very least, start from an identity matrix built with glm::mat4(1.0f) and translate that.
How can I rotate a camera around an axis? Which matrix do I have to multiply by?
I am using glm::lookAt to construct the view matrix, but when I tried multiplying it by a rotation matrix, nothing happened.
glm::mat4 GetViewMatrix()
{
return glm::lookAt(this->Position, this->Position + this->Front, glm::vec3(0.0f, 5.0f, 0.0f));
}
glm::mat4 ProjectionMatrix = glm::perspective(actual_camera->Zoom, (float)g_nWidth / (float)g_nHeight, 0.1f, 1000.0f);
glm::mat4 ViewMatrix = actual_camera->GetViewMatrix();
glm::mat4 ModelMatrix = glm::mat4(1.0);
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
Rotate the front and up vectors of your camera using glm::rotate:
glm::mat4 GetViewMatrix()
{
// the vec3 overload of glm::rotate comes from <glm/gtx/rotate_vector.hpp>
auto front = glm::rotate(this->Front, angle, axis);
auto up = glm::rotate(glm::vec3(0, 1, 0), angle, axis);
return glm::lookAt(this->Position, this->Position + front, up);
}
Alternatively, you can add a multiplication with your rotation matrix to your MVP construction:
glm::mat4 MVP = ProjectionMatrix * glm::transpose(Rotation) * ViewMatrix * ModelMatrix;
It is important that the rotation is applied after the view matrix (i.e. to its left in the multiplication), so all objects are rotated relative to the camera's position. Furthermore, you have to use transpose(Rotation) (the inverse of a rotation matrix is its transpose), since rotating the camera clockwise, for example, is equivalent to rotating all objects counter-clockwise.
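A minimal sketch of that second option, with angle and axis left as parameters (buildMVP is just a placeholder name):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the extra camera rotation and insert its transpose between the
// projection and view matrices.
glm::mat4 buildMVP(const glm::mat4& Projection, const glm::mat4& ViewMatrix,
                   const glm::mat4& ModelMatrix, float angle, const glm::vec3& axis)
{
    glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), angle, axis);
    return Projection * glm::transpose(Rotation) * ViewMatrix * ModelMatrix;
}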
I'm attempting to set up an orthographic projection in OpenGL, but can't seem to find why this triangle is not rendering correctly (it isn't visible). I have used perspective projection with the same code (apart from my vertex coordinates and projection matrix, of course) and it works fine. I construct the triangle vertices as:
Vertex vertices[] = { Vertex(glm::vec3(0, 600, 0.0), glm::vec2(0.0, 0.0)),
Vertex(glm::vec3(300, 0, 0.0), glm::vec2(0.5, 1.0)),
Vertex(glm::vec3(800 , 600, 0.0), glm::vec2(1.0, 0.0)) };
My camera constructor is:
Camera::Camera(const glm::vec3& pos, int width, int height) {
ortho = glm::ortho(0, width, height, 0, 0, 1000);
this->position = pos;
this->up = glm::vec3(0.0f, 1.0f, 0.0f);
this->forward = glm::vec3(0.0f, 0.0f, 1.0f);
}
I call this as:
camera = Camera(glm::vec3(0, 0, 2), window->getSize().x, window->getSize().y);
Where the window is 800 by 600 pixels. I am uploading a transform to the shader via the function:
void Shader::update(const Transform& transform, const Camera& camera) {
glm::mat4 model = camera.getProjection() * transform.getModel();
glUniformMatrix4fv(uniforms[TRANSFORM_U], 1, GL_FALSE, &model[0][0]);
}
In which camera.getProjection() is:
glm::mat4 Camera::getProjection() const {
return ortho * glm::lookAt(position, glm::vec3(0, 0, 0), up);
}
And transform.getModel() is:
glm::mat4 Transform::getModel() const {
glm::mat4 posMat = glm::translate(pos);
glm::quat rotQuat = glm::quat(glm::radians(rot));
glm::mat4 rotMat = glm::toMat4(rotQuat);
glm::mat4 scaleMat = glm::scale(scl);
return posMat * rotMat * scaleMat;
}
Though I suspect the problem lies in my setup of the orthographic projection rather than in my transforms, as the same code worked fine with perspective projection. Can anyone see why a triangle with these coordinates is not visible? I am binding my shader and uploading the projection matrix to it before rendering the mesh. If it helps, my vertex shader is:
#version 120
attribute vec3 position;
attribute vec2 texCoord;
varying vec2 texCoord0;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(position, 1.0);
texCoord0 = texCoord;
}
For anyone interested, the issue was with:
ortho = glm::ortho(0, width, height, 0, 0, 1000);
The arguments are supplied as integers, not floats, so the integer arithmetic inside glm::ortho produced an incorrect orthographic projection matrix.
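For reference, a version of the call with floating-point arguments avoids this (width and height are the int parameters from the constructor, so they are cast):
ortho = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.0f, 1000.0f); // all arguments deduce to float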
I can draw a spinning cube using OpenGL 3.2+ and translate it away from the origin and to the left, but when I try to draw a second one (towards the right), it doesn't render...
This is my display function:
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(myShader.handle());
GLuint matLocation = glGetUniformLocation(myShader.handle(), "ProjectionMatrix");
glUniformMatrix4fv(matLocation, 1, GL_FALSE, &ProjectionMatrix[0][0]);
spinY+=0.03;
if(spinY>360) spinY = 0;
glm::mat4 viewMatrix;
viewMatrix = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix = glm::translate(viewMatrix,glm::vec3(-30,0,0)); //translate object from the origin
ModelViewMatrix = glm::rotate(ModelViewMatrix,spinY, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
//Add the following line just before the line to draw the cube to
//check that the origin of the cube in eye space is (-30, 0, -100);
result = glm::vec3(ModelViewMatrix * glm::vec4(0,0,0,1));
std::cout<<glm::to_string(result)<<std::endl; //print matrix to get coordinates.
myCube.render();
glUseProgram(0);
}
I want to be able to use the same Cube class / size etc, but just render it again (I assume that's the most efficient / best way to do it).
I tried this
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(myShader.handle());
GLuint matLocation = glGetUniformLocation(myShader.handle(), "ProjectionMatrix");
glUniformMatrix4fv(matLocation, 1, GL_FALSE, &ProjectionMatrix[0][0]);
spinY+=0.03;
if(spinY>360) spinY = 0;
glm::mat4 viewMatrix;
viewMatrix = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix = glm::translate(viewMatrix,glm::vec3(-30,0,0)); //translate object from the origin
ModelViewMatrix = glm::rotate(ModelViewMatrix,spinY, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
//Add the following line just before the line to draw the cube to
//check that the origin of the cube in eye space is (-30, 0, -100);
result = glm::vec3(ModelViewMatrix * glm::vec4(0,0,0,1));
std::cout<<glm::to_string(result)<<std::endl; //print matrix to get coordinates.
myCube.render();
glm::mat4 viewMatrix_TWO;
viewMatrix_TWO = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix_TWO = glm::translate(viewMatrix_TWO,glm::vec3(30,0,0)); //translate object from the origin
ModelViewMatrix_TWO = glm::rotate(ModelViewMatrix_TWO,spinY, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix_TWO"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
myCube.render();
glUseProgram(0);
}
Obviously, I've implemented it wrong... How can I get a cube on either side of the screen? Thanks.
UPDATE
I realised I hadn't created a second cube object, but with that now implemented it still doesn't work... Am I confusing how the view/model matrices interact? I've created a new one for each object...
New Code:
myCube.render();
spinX+=0.03;
if(spinX>360) spinX = 0;
glm::mat4 viewMatrix_Two,ModelViewMatrix_Two;
viewMatrix_Two = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix_Two = glm::translate(viewMatrix_Two,glm::vec3(30,0,0)); //translate object from the origin
ModelViewMatrix_Two = glm::rotate(ModelViewMatrix_Two,spinX, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix_Two"), 1, GL_FALSE, &ModelViewMatrix_Two[0][0]); //pass matrix to shader
myCube_Two.render();
UPDATE
Shader:
uniform mat4 ModelViewMatrix;
//uniform mat4 ModelViewMatrix_Two; //NOT NEEDED - USED SAME SHADER OBJECT
uniform mat4 ProjectionMatrix;
in vec3 in_Position; // Position coming in
in vec3 in_Color; // colour coming in
out vec3 ex_Color; // colour leaving the vertex, this will be sent to the fragment shader
void main(void)
{
gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(in_Position, 1.0);
//gl_Position = ProjectionMatrix * ModelViewMatrix_Two * vec4(in_Position, 1.0);
ex_Color = in_Color;
}
In the end, I created a second Cube object and a second viewing matrix, and used them with the already established model-view uniform in my shader, since the two cubes are rendered by separate draw calls.
The correct code is:
glm::mat4 viewMatrix_Two, ModelViewMatrix_Two;
viewMatrix_Two = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-200));
ModelViewMatrix = glm::translate(viewMatrix_Two,glm::vec3(30,0,0));
ModelViewMatrix = glm::rotate(ModelViewMatrix,spinX, glm::vec3(1,0,0));
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
myCube_Two.render();
Unless your shader has a uniform called ModelViewMatrix_Two, this won't work. I don't see a reason why your shader would need another uniform for the model-view matrix, since you are not drawing both cubes in the same call. If that's not the problem, can you post your shader code?
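For reference, a minimal sketch of the working approach: keep the single ModelViewMatrix uniform and re-upload it between the two draw calls (myCube, myCube_Two, myShader, spinY and the offsets are taken from the question):
GLint mvLocation = glGetUniformLocation(myShader.handle(), "ModelViewMatrix");
glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, -100));

// First cube, 30 units to the left.
glm::mat4 mv = glm::rotate(glm::translate(view, glm::vec3(-30, 0, 0)), spinY, glm::vec3(0, 1, 0));
glUniformMatrix4fv(mvLocation, 1, GL_FALSE, &mv[0][0]);
myCube.render();

// Second cube, 30 units to the right: same uniform, new value, separate draw call.
mv = glm::rotate(glm::translate(view, glm::vec3(30, 0, 0)), spinY, glm::vec3(0, 1, 0));
glUniformMatrix4fv(mvLocation, 1, GL_FALSE, &mv[0][0]);
myCube_Two.render();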