This question already has answers here:
How to move objects after rotation in Qt?
How to rotate this mesh correctly?
(2 answers)
Closed 8 months ago.
I have a cube consisting of 6 different planes (meshes). I generate all these planes in XY coordinates and then place them with matrix transformations.
I need to rotate this cube around the global axes and then move a plane correctly.
So, I'll show what I need and what I have now.
I can rotate the cube.
Then I need to move one of the planes correctly, depending on the cube's rotation. But the planes still move along the global axes, and I can't implement movement along the rotated axes.
The red line shows how it moves now; the green line shows how it should move.
How I create the cube (all plane vertices lie in the range (0, 0) to (2, 2)):
planeXY.setupMesh();
planeXY.setOrigin({1, 1, 1});
planeXY1.setupMesh();
planeXY1.setOrigin({1, 1, 1});
planeXY1.moveAlongGlobalAxis(QVector3D(0.0, 0.0, 2.0));
planeZY.setupMesh();
planeZY.setOrigin({1, 1, 1});
planeZY.rotate(QVector3D(0.0f, -90.0f, 0.0f));
planeZY.moveAlongGlobalAxis(QVector3D(-2.0, 0.0, 0.0));
planeZY1.setupMesh();
planeZY1.setOrigin({1, 1, 1});
planeZY1.rotate(QVector3D(0.0f, -90.0f, 0.0f));
planeXZ.setupMesh();
planeXZ.setOrigin({1, 1, 1});
planeXZ.rotate(QVector3D(90.0f, 0.0f, 0.0f));
planeXZ.moveAlongGlobalAxis(QVector3D(0.0, -2.0, 0.0));
planeXZ1.setupMesh();
planeXZ1.setOrigin({1, 1, 1});
planeXZ1.rotate(QVector3D(90.0f, 0.0f, 0.0f));
Mesh.cpp
void Mesh::moveAlongGlobalAxis(QVector3D coordinates)
{
QMatrix4x4 identityMatrix;
identityMatrix.translate(coordinates);
position += coordinates;
this->translationMatrix = identityMatrix * translationMatrix;
}
void Mesh::moveAlongLocalAxis(QVector3D coordinates)
{
this->moveAlongGlobalAxis(coordinates);
}
void Mesh::rotate(QVector3D rotation)
{
QMatrix4x4 identityMatrix;
identityMatrix.translate((-1) * this->position + this->origin);
identityMatrix.rotate(rotation.x(), QVector3D(1.0, 0.0, 0.0));
identityMatrix.rotate(rotation.y(), QVector3D(0.0, 1.0, 0.0));
identityMatrix.rotate(rotation.z(), QVector3D(0.0, 0.0, 1.0));
identityMatrix.translate(this->position - this->origin);
this->rotationMatrix = identityMatrix * this->rotationMatrix;
}
void Mesh::setOrigin(QVector3D origin)
{
this->origin = origin;
}
const QMatrix4x4 Mesh::getModelMatrix() const
{
return translationMatrix * rotationMatrix;
}
This is the function I want to implement and don't know how to:
void Mesh::moveAlongLocalAxis(QVector3D coordinates)
{
this->moveAlongGlobalAxis(coordinates);
}
I know that to move the way I want, I need to rotate the cube back, move, and then rotate again. But I can't do this, because I can't keep track of each mesh's rotation: the initial rotations that orient the planes into a cube get mixed in. So I want to know how to transform objects in OpenGL correctly, and how to achieve the result described above.
Related
For practice I am setting up a 2D/orthographic rendering pipeline in OpenGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2D shapes, and I cannot figure out why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers, but the most relevant one (2D opengl rotation causes sprite distortion) indicates that the problem was an incorrect ordering of transformations; for now I am using just a view matrix and projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(a_position, 1.0); //(The model is just the identity matrix.)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
-wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
-wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
0, 1, 2, // first Triangle
2, 3, 0, // second Triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is correct to do.)
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use the width and height instead of 1s, but that seems to break the system or display nothing.
I rotate with a static method that modifies a struct containing a glm::quaternion (dividing the time by 1000 to get seconds):
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: glm::angleAxis(angle, glm::vec3(x, y, z) * orientation)
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates on its center (0, 0 as I wanted), but its length and width distort, which means that I didn't set something correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on stack overflow or elsewhere that might help debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question is--should I not set my coordinates to 1/-1 in the context of a 2d game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to gl::ortho by width and height, then how do I transform coordinates so v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinates system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Try accounting for the aspect ratio of the window you are displaying on in the first two parameters of glm::ortho:
GLfloat aspectRatio = (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT; // cast to avoid integer division
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);
I have 3 functions which should rotate a triangle correctly, but they don't. My thinking was: if I want to rotate my triangle around its center, I should translate the triangle's origin to (0, 0, 0) (which translateToOrigen does), then perform the rotation with rotate, and then translate the triangle back to its original position with translateBacktoPosition:
void Triangle::rotate(float angle,glm::vec3 axis) {
translateToOrigen();
modelMatrix = glm::rotate(modelMatrix, glm::radians(angle), axis);
translateBacktoPosition();
}
void Triangle::translateToOrigen() {
modelMatrix = glm::translate(modelMatrix, -_origen);
}
void Triangle::translateBacktoPosition() {
modelMatrix = glm::translate(modelMatrix, _origen);
}
I call the rotate function like this:
t1.rotate(-90.0f, glm::vec3(0, 0, 1));
As I understand it, this should rotate my triangle around the z-axis by -90 degrees.
But what actually happens is that my triangle gets stretched and is not displayed at the correct position.
I have already tested translateToOrigen() and translateBacktoPosition(), and they work fine until I try to rotate.
I accumulate all the matrices in the variable modelMatrix and, when finished translating and rotating, send it via a uniform to the shader, where I multiply this matrix with my vertices.
I don't get what I am missing.
I've been trying to work out how to rotate a plane in 3D space, but I keep hitting dead ends. The situation is the following:
I have a physics engine where I simulate a moving sphere inside a cube. To make things simpler, I have only drawn the top and bottom plane and moved the sphere vertically. I have defined my two planes as follows:
CollisionPlane* p = new CollisionPlane(glm::vec3(0.0, 1.0, 0.0), -5.0);
CollisionPlane* p2 = new CollisionPlane(glm::vec3(0.0, -1.0, 0.0), -5.0);
Here the vec3 defines the normal of the plane, and the second parameter defines the distance of the plane from the origin along that normal. The reason I defined their distance as -5 is that I have scaled the model that represents my two planes by 10 on all axes, so the distance from the origin is now 5 to the top and bottom, if that makes sense.
To give you some reference, I am creating my two planes as two line loops, and I have a model that holds them:
top plane:
std::shared_ptr<Mesh> f1 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts = { Vertex(glm::vec3(0.5, 0.5, 0.5)), Vertex(glm::vec3(0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, 0.5)) };
f1->BufferVertices(verts);
bottom plane:
std::shared_ptr<Mesh> f2 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts2 = { Vertex(glm::vec3(0.5, -0.5, 0.5)), Vertex(glm::vec3(0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, 0.5)) };
f2->BufferVertices(verts2);
std::shared_ptr<Model> faceModel = std::make_shared<Model>(std::vector<std::shared_ptr<Mesh>> {f1, f2 });
And as I said, I scale the model by 10.
Now I have a sphere that moves up and down, and collides with each face, and the collision response is implemented as well.
The problem I am facing is when I try to rotate my planes. It seems to work fine when I rotate around the Z-axis, but when I rotate around the X axis it doesn't seem to work. The following shows the result of rotating around Z:
However If I try to rotate around X, the ball penetrates the bottom plane, as if the collisionplane has moved down:
The following is the code I've tried to rotate the normals and the planes:
for (int i = 0; i < m_entities.size(); ++i)
{
glm::mat3 normalMatrix = glm::mat3_cast(glm::angleAxis(glm::radians(6.0f), glm::vec3(0.0, 0.0, 1.0)));
CollisionPlane* p = (CollisionPlane*)m_entities[i]->GetCollisionVolume();
glm::vec3 normalDivLength = p->GetNormal() / glm::length(p->GetNormal());
glm::vec3 pointOnPlane = normalDivLength * p->GetDistance();
glm::vec3 newNormal = normalMatrix * normalDivLength;
glm::vec3 newPointOnPlane = newNormal * (normalMatrix * (pointOnPlane - glm::vec3(0.0)) + glm::vec3(0.0));
p->SetNormal(newNormal);
float newDistance = newPointOnPlane.x + newPointOnPlane.y + newPointOnPlane.z;
p->SetDistance(newDistance);
}
I've done the same thing for rotating around X, except with glm::vec3(0.0, 0.0, 1.0) changed to glm::vec3(1.0, 0.0, 0.0).
m_entities are basically my physics entities that hold the different collision shapes (spheres, planes, etc.). I based my code on the answer here: Rotating plane with normal and distance.
I can't figure out why it works when I rotate around Z but not when I rotate around X. Am I missing something crucial?
I am working on rendering a terrain in OpenGL.
My code is the following:
void Render_Terrain(int k)
{
GLfloat angle = (GLfloat) (k/40 % 360);
//PROJECTION
glm::mat4 Projection = glm::perspective(45.0f, 1.0f, 0.1f, 100.0f);
//VIEW
glm::mat4 View = glm::mat4(1.);
//ROTATION
//View = glm::rotate(View, angle * -0.1f, glm::vec3(1.f, 0.f, 0.f));
//View = glm::rotate(View, angle * 0.2f, glm::vec3(0.f, 1.f, 0.f));
//View = glm::rotate(View, angle * 0.9f, glm::vec3(0.f, 0.f, 1.f));
View = glm::translate(View, glm::vec3(0.f,0.f, -4.0f)); // x, y, z position ?
//MODEL
glm::mat4 Model = glm::mat4(1.0);
glm::mat4 MVP = Projection * View * Model;
glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MVP_matrix"), 1, GL_FALSE, glm::value_ptr(MVP));
//Transfer additional information to the vertex shader
glm::mat4 MV = View * Model; // note: View * Model, not Model * View
glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MV_matrix"), 1, GL_FALSE, glm::value_ptr(MV));
glClearColor(0.0, 0.0, 0.0, 1.0);
glDrawArrays(GL_LINE_STRIP, terrain_start, terrain_end );
}
I can rotate around the X, Y, and Z axes and scale my terrain, but I can't find a way to move the camera. I am using OpenGL 3+ and I am kinda new to graphics.
The best way to move the camera is through gluLookAt(): it simulates camera movement, since in OpenGL the camera itself never moves. The function takes 9 parameters. The first 3 are the XYZ coordinates of the eye, which is where the camera is located. The second 3 are the XYZ coordinates of the center, the point the camera is looking at from the eye; it always ends up at the center of the screen. The last 3 are the XYZ coordinates of the up vector, which points vertically upwards from the eye. By manipulating these three XYZ triples you can simulate any camera movement you want.
Check out this link.
Further details:
-If you want, for example, to rotate around an object, rotate your eye around the up vector.
-If you want to move forward or backwards, add to or subtract from both the eye and the center points.
-If you want to tilt the camera left or right, rotate your up vector around your look vector, where the look vector is center - eye.
gluLookAt operates on the deprecated fixed function pipeline, so you should use glm::lookAt instead.
You are currently using a constant vector for translation. In the commented out code (which I assume you were using to test rotation), you use angle to adjust the rotation. You should have a similar variable for translation. Then, you can change the glm::translate call to:
View = glm::translate(View, glm::vec3(x_transform, y_transform, z_transform)); // x, y, z position ?
and get translation.
You should probably pass more than one parameter into Render_Terrain, as translation and rotation together need at least six.
In OpenGL the camera is always at (0, 0, 0). You need to set the matrix mode to GL_MODELVIEW, and then modify or set the model/view matrix using things like glTranslate, glRotate, glLoadMatrix, etc. in order to make it appear that the camera has moved. If you're using GLU, you can use gluLookAt to point the camera in a particular direction.
I've implemented an FPS style camera, with the camera consisting of a position vector, and Euler angles pitch and yaw (x and y rotations).
After setting up the projection matrix, I transform to camera coordinates by rotating and then translating by the inverse of the camera position:
// Load projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Set perspective
gluPerspective(m_fFOV, m_fWidth/m_fHeight, m_fNear, m_fFar);
// Load modelview matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Position camera
glRotatef(m_fRotateX, 1.0, 0.0, 0.0);
glRotatef(m_fRotateY, 0.0, 1.0, 0.0);
glTranslatef(-m_vPosition.x, -m_vPosition.y, -m_vPosition.z);
Now I've got a few viewports set up, each with its own camera, and from every camera I render the position of the other cameras (as a simple box).
I'd like to also draw the view vector for these cameras, except I haven't a clue how to calculate the lookat vector from the position and Euler angles.
I've tried multiplying the original camera vector (0, 0, -1) by a matrix representing the camera rotations, then adding the camera position to the transformed vector, but that doesn't work at all (most probably because I'm way off base):
vector v1(0, 0, -1);
matrix m1 = matrix::IDENTITY;
m1.rotate(m_fRotateX, 0, 0);
m1.rotate(0, m_fRotateY, 0);
vector v2 = v1 * m1;
v2 = v2 + m_vPosition; // add camera position vector
glBegin(GL_LINES);
glVertex3fv(m_vPosition);
glVertex3fv(v2);
glEnd();
What I'd like is to draw a line segment from the camera towards the lookat direction.
I've looked all over the place for examples of this, but can't seem to find anything.
Thanks a lot!
I just figured it out. When I went back to add the answer, I saw that Ivan had just told me the same thing :)
Basically, to draw the camera vector, I do this:
glPushMatrix();
// Apply inverse camera transform
glTranslatef(m_vPosition.x, m_vPosition.y, m_vPosition.z);
glRotatef(-m_fRotateY, 0.0, 1.0, 0.0);
glRotatef(-m_fRotateX, 1.0, 0.0, 0.0);
// Then draw the vector representing the camera
glBegin(GL_LINES);
glVertex3f(0, 0, 0);
glVertex3f(0, 0, -10);
glEnd();
glPopMatrix();
This draws a line from the camera position for 10 units in the lookat direction.