I'm trying to rotate an object on its own axis using matrices, but I've run into a problem. Rotating about the X or Y axis produces the expected results, but when rotating about the Z axis, the entire model pivots around the Z axis as if there were an invisible pole beneath it; when rotating about the X or Y axis, the invisible pole runs through the model.
glm::mat4 rotateMatrix(GLfloat angle, glm::vec3 axis)
{
axis = glm::normalize(axis);
GLfloat s = sin(angle);
GLfloat c = cos(angle);
GLfloat oc = 1.0f - c;
return glm::mat4(
oc * axis.x * axis.x + c, oc * axis.x * axis.y - axis.z * s, oc * axis.z * axis.x + axis.y * s, 0.0,
oc * axis.x * axis.y + axis.z * s, oc * axis.y * axis.y + c, oc * axis.y * axis.z - axis.x * s, 0.0,
oc * axis.z * axis.x - axis.y * s, oc * axis.y * axis.z + axis.x * s, oc * axis.z * axis.z + c, 0.0,
0.0, 0.0, 0.0, 1.0
);
}
Within the draw function:
_prog.setUniform("Translation", glm::vec3(0.0f, 20.0f, -20.0f));
_prog.setUniform("Rotate", rotateMatrix(45.0f, glm::vec3(0.0f, 0.0f, 1.0f)));
_model.render();
This would happen if you rotate an already-translated model. Check that your transform order is correct:
Scale the model;
Rotate the model;
Translate the model.
Also, make sure the model itself is centered at the origin of its coordinate system.
I'm working on a project that requires a 3D cube to be rotated along 3 axes. The cube is made up of 12 triangles, each represented by an instance of the Triangle class. Each triangle has a p0, p1, and p2 of type sf::Vector3f. The triangles also have a float* position and a float* rotation. The position and rotation of a triangle are updated using this method:
void Triangle::update() {
p0 = originalP0;
p1 = originalP1;
p2 = originalP2;
sf::Vector3f rotatedP0;
sf::Vector3f rotatedP1;
sf::Vector3f rotatedP2;
// along z
rotatedP0.x = p0.x * cos((*rotation).z * 0.0174533) - p0.y * sin((*rotation).z * 0.0174533);
rotatedP0.y = p0.x * sin((*rotation).z * 0.0174533) + p0.y * cos((*rotation).z * 0.0174533);
rotatedP0.z = p0.z;
rotatedP1.x = p1.x * cos((*rotation).z * 0.0174533) - p1.y * sin((*rotation).z * 0.0174533);
rotatedP1.y = p1.x * sin((*rotation).z * 0.0174533) + p1.y * cos((*rotation).z * 0.0174533);
rotatedP1.z = p1.z;
rotatedP2.x = p2.x * cos((*rotation).z * 0.0174533) - p2.y * sin((*rotation).z * 0.0174533);
rotatedP2.y = p2.x * sin((*rotation).z * 0.0174533) + p2.y * cos((*rotation).z * 0.0174533);
rotatedP2.z = p2.z;
p0 = rotatedP0;
p1 = rotatedP1;
p2 = rotatedP2;
// along y
rotatedP0.x = p0.x * cos((*rotation).y * 0.0174533) + originalP0.z * sin((*rotation).y * 0.0174533);
rotatedP0.y = p0.y;
rotatedP0.z = p0.x * -sin((*rotation).y * 0.0174533) + originalP0.z * cos((*rotation).y * 0.0174533);
rotatedP1.x = p1.x * cos((*rotation).y * 0.0174533) + originalP1.z * sin((*rotation).y * 0.0174533);
rotatedP1.y = p1.y;
rotatedP1.z = p1.x * -sin((*rotation).y * 0.0174533) + originalP1.z * cos((*rotation).y * 0.0174533);
rotatedP2.x = p2.x * cos((*rotation).y * 0.0174533) + originalP2.z * sin((*rotation).y * 0.0174533);
rotatedP2.y = p2.y;
rotatedP2.z = p2.x * -sin((*rotation).y * 0.0174533) + originalP2.z * cos((*rotation).y * 0.0174533);
p0 = rotatedP0;
p1 = rotatedP1;
p2 = rotatedP2;
// along x
rotatedP0.x = p0.x;
rotatedP0.y = p0.y * cos((*rotation).x * 0.0174533) - p0.z * sin((*rotation).x * 0.0174533);
rotatedP0.z = p0.y * sin((*rotation).x * 0.0174533) + p0.z * cos((*rotation).x * 0.0174533);
rotatedP1.x = p1.x;
rotatedP1.y = p1.y * cos((*rotation).x * 0.0174533) - p1.z * sin((*rotation).x * 0.0174533);
rotatedP1.z = p1.y * sin((*rotation).x * 0.0174533) + p1.z * cos((*rotation).x * 0.0174533);
rotatedP2.x = p2.x;
rotatedP2.y = p2.y * cos((*rotation).x * 0.0174533) - p2.z * sin((*rotation).x * 0.0174533);
rotatedP2.z = p2.y * sin((*rotation).x * 0.0174533) + p2.z * cos((*rotation).x * 0.0174533);
p0 = rotatedP0 + *position;
p1 = rotatedP1 + *position;
p2 = rotatedP2 + *position;
}
This method works well for all axes except the X axis. The cube has two red faces intersecting the Z axis, two green faces intersecting the Y axis, and two blue faces intersecting the X axis. Rotating the cube along the Z and Y axes works fine: the cube rotates around the red and green faces. When rotating along the X axis, however, the cube does not rotate around the blue faces but around the global X axis.
Am I doing something wrong? Is it supposed to be this way? Is there any way to fix it? I searched all over and couldn't find anything helpful.
You've gone about it the hard way. Use this 3D point rotation algorithm; I know it is JavaScript, but the math is still the same.
I want to take advantage of the SSE intrinsics that DirectXMath provides (mostly for the speed advantage).
How would I do it? This is the current code I'm using:
DirectX::XMFLOAT4 clipCoordinates;
DirectX::XMFLOAT4X4 WorldMatrix4x4;
DirectX::XMStoreFloat4x4(&WorldMatrix4x4, WorldMatrix);
clipCoordinates.x = pos.x * WorldMatrix4x4._11 + pos.y * WorldMatrix4x4._12 + pos.z * WorldMatrix4x4._13 + WorldMatrix4x4._14;
clipCoordinates.y = pos.x * WorldMatrix4x4._21 + pos.y * WorldMatrix4x4._22 + pos.z * WorldMatrix4x4._23 + WorldMatrix4x4._24;
clipCoordinates.z = pos.x * WorldMatrix4x4._31 + pos.y * WorldMatrix4x4._32 + pos.z * WorldMatrix4x4._33 + WorldMatrix4x4._34;
clipCoordinates.w = pos.x * WorldMatrix4x4._41 + pos.y * WorldMatrix4x4._42 + pos.z * WorldMatrix4x4._43 + WorldMatrix4x4._44;
If by 2D position you mean screenspace coordinates:
First transform from local space to world space:
XMVECTOR world_pos = XMVector4Transform(pos, world_matrix);
Then transform from world space to camera space:
XMVECTOR view_pos = XMVector4Transform(world_pos, view_matrix);
Then transform from camera space to clip space:
XMVECTOR clip_pos = XMVector4Transform(view_pos , proj_matrix);
Remember that you can concatenate all of these into one transform: multiply the matrices together with XMMATRIX matrix = world_matrix * view_matrix * proj_matrix; and then transform the point by matrix.
Now divide by w coordinate to get NDC:
XMVECTOR w_vector = XMVectorSplatW(clip_pos);
XMVECTOR ndc = XMVectorDivide(clip_pos, w_vector);
You can combine the transform and the divide by using XMVECTOR ndc = XMVector3TransformCoord(pos, matrix); note that it takes the original position, not clip_pos, and performs the w-divide for you.
Your NDC x and y components are in the interval [-1, 1]. To map them to [0, ScreenWidth] and [0, ScreenHeight], use the following formulas:
float ndc_x = XMVectorGetX(ndc);
float ndc_y = XMVectorGetY(ndc);
float pixel_x = (1.0f + ndc_x) * 0.5f * ScreenWidth;
float pixel_y = (1.0f - ndc_y) * 0.5f * ScreenHeight;
And those are your screen coordinates.
I'm trying to calculate the view frustum corners in world space. I've implemented it using the FOV, the width/height of the near and far planes, and some vector math.
However, a lot of examples simply state that you can multiply an NDC corner like (1, 1, 1) by the inverse view-projection matrix. But when I do this I get somewhat different results. This is the code I'm using right now to test things:
float nearHeight = 2 * tan(mFOV / 2) * mNear;
float nearWidth = mNear * mRatio;
float farHeight = 2 * tan(mFOV / 2) * mFar;
float farWidth = mFar * mRatio;
glm::vec3 fc = mPos + mFront * mFar;
glm::vec3 nc = mPos + mFront * mNear;
mFrustum.frustumCorners[0] = fc + (mUp * farHeight / 2.0f) - (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[1] = fc + (mUp * farHeight / 2.0f) + (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[2] = fc - (mUp * farHeight / 2.0f) - (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[3] = fc - (mUp * farHeight / 2.0f) + (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[4] = nc + (mUp * nearHeight / 2.0f) - (mRight * nearWidth / 2.0f);
mFrustum.frustumCorners[5] = nc + (mUp * nearHeight / 2.0f) + (mRight * nearWidth / 2.0f);
mFrustum.frustumCorners[6] = nc - (mUp * nearHeight / 2.0f) - (mRight * nearWidth / 2.0f);
mFrustum.frustumCorners[7] = nc - (mUp * nearHeight / 2.0f) + (mRight * nearWidth / 2.0f);
glm::vec4 test(1.0f, 1.0f, 1.0f,1.0f);
glm::vec4 test2(-1.0f, -1.0f, -1.0f, 1.0f);
glm::mat4 testingMatrix = glm::inverse(mProjectionMatrix * getViewMatrix());
test = testingMatrix*test;
test2 = testingMatrix*test2;
test2.x /= test2.w;
test2.y /= test2.w;
test2.z /= test2.w;
test.x /= test.w;
test.y /= test.w;
test.z /= test.w;
Now both of these give an accurate z value for [near, far] = [1, 10000], but the x values are off by quite a bit while the y values are pretty much the same. I was just wondering which way is the most accurate one:
Inverse viewProjection
Regular calculation
I don't know why, but it seems my z-axis is bugged (it seems to be doubling the value or something).
This is supposed to be a cube
https://scontent-mad1-1.xx.fbcdn.net/hphotos-xtp1/v/t34.0-12/11998230_879538255460372_61658668_n.jpg?oh=dd08fee0a66e37bf8f3aae2fba107fa1&oe=55F6170C
However, it seems "bigger in depth", which looks wrong.
My pr_Matrix:
mat4 mat4::prespective(float fov, float aspectRatio, float near, float far){
mat4 result;
float yScale = 1.0f / tan(toRadians(fov/2.0f));
float xScale = yScale / aspectRatio;
float frustumLength = far - near;
result.elements[0 + 0 * 4] = xScale;
result.elements[1 + 1 * 4] = yScale;
result.elements[2 + 2 * 4] = -(far + near) / frustumLength;
result.elements[3 + 2 * 4] = -1.0f;
result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
return result;
}
My ml_Matrix:
maths::mat4 ProjectionMatrix(){
maths::mat4 m_ProjM = maths::mat4::identity();
m_ProjM *= maths::mat4::translation(m_Position);
m_ProjM *= maths::mat4::rotation(m_Rotation.x, maths::vec3(1, 0, 0));
m_ProjM *= maths::mat4::rotation(m_Rotation.y, maths::vec3(0, 1, 0));
m_ProjM *= maths::mat4::rotation(m_Rotation.z, maths::vec3(0, 0, 1));
maths::mat4 scale_matrix = maths::mat4(m_Scale);
scale_matrix.elements[3 + 3 * 4] = 1.0f;
m_ProjM *= scale_matrix;
return m_ProjM;
}
My vw_matrix (camera):
void update(){
maths::mat4 newMatrix = maths::mat4::identity();
newMatrix *= maths::mat4::rotation(m_Pitch, maths::vec3(1, 0, 0));
newMatrix *= maths::mat4::rotation(m_Yaw, maths::vec3(0, 1, 0));
newMatrix *= maths::mat4::translation(maths::vec3(-m_Pos.x, -m_Pos.y, -m_Pos.z));
m_ViewMatrix = newMatrix;
}
My Matrix Multiplication in the glsl code:
vec4 worldPosition = ml_matrix * vec4(position, 1.0);
vec4 positionRelativeToCamera = vw_matrix * worldPosition;
gl_Position = pr_matrix * positionRelativeToCamera;
Edit: I think I got it! (I will double-check the maths once I get time.) Most tutorials (and so the source of my code) use a mat4::prespective(float fov, float aspectRatio, float near, float far); the thing they don't say is that "fov" means "fovy" (the vertical field of view). The results now replicate the "usual fov in games" with this simple change:
float xScale = 1.0f / tan(toRadians(fov/2.0f));
float yScale = xScale * aspectRatio;
Thanks for pointing it out as an FOV problem, Henk De Boer.
This is simply a result of FOV; the FOV you supply greatly alters the way an image looks, and what you choose is a matter of personal taste. For a natural-feeling scene anything between 45 and 60 degrees is good, but you can widen it to make the scene feel more immediate or the action more fast-paced. Observe how the following image contains the exact same geometry twice, but the dock on the right protrudes much further into the viewport with a 90-degree FOV.
My guess would be that in declaring:
mat4 result;
You're invoking a default constructor that's initialising the matrix to identity.
Then in the subsequent code that sets up the matrix, you need a call to:
result.elements[3 + 3 * 4] = 0.0f;
Also, for what it's worth, in my code the lines:
result.elements[3 + 2 * 4] = -1.0f;
result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
are not negated. TBH, I'm not sure if that's anything to worry about.
I'm drawing a quad using a Geometry Shader, but I can't figure out how to rotate it by an angle.
void main(void)
{
float scaleX = 2.0f / u_resolution.x;
float scaleY = 2.0f / u_resolution.y;
float nx = (u_position.x * scaleX) - 1.0f;
float ny = -(u_position.y * scaleY) + 1.0f;
float nw = u_size.x * scaleX;
float nh = u_size.y * scaleY;
gl_Position = vec4( nx+nw, ny, 0.0, 1.0 );
texcoord = vec2( 1.0, 0.0 );
EmitVertex();
gl_Position = vec4(nx, ny, 0.0, 1.0 );
texcoord = vec2( 0.0, 0.0 );
EmitVertex();
gl_Position = vec4( nx+nw, ny-nh, 0.0, 1.0 );
texcoord = vec2( 1.0, 1.0 );
EmitVertex();
gl_Position = vec4(nx, ny-nh, 0.0, 1.0 );
texcoord = vec2( 0.0, 1.0 );
EmitVertex();
EndPrimitive();
}
Should I use a rotation matrix or sin and cos? I'm not too great at math.
You don't have to use matrices, but you do need sine and cosine. Here is a way to rotate a 3D position about an arbitrary axis by a given angle:
// This is the 3D position that we want to rotate:
vec3 p = position.xyz;
// Specify the axis to rotate about:
float x = 0.0;
float y = 0.0;
float z = 1.0;
// Specify the angle in radians:
float angle = radians(90.0); // 90 degrees
vec3 q;
q.x = p.x * (x*x * (1.0 - cos(angle)) + cos(angle))
+ p.y * (x*y * (1.0 - cos(angle)) + z * sin(angle))
+ p.z * (x*z * (1.0 - cos(angle)) - y * sin(angle));
q.y = p.x * (y*x * (1.0 - cos(angle)) - z * sin(angle))
+ p.y * (y*y * (1.0 - cos(angle)) + cos(angle))
+ p.z * (y*z * (1.0 - cos(angle)) + x * sin(angle));
q.z = p.x * (z*x * (1.0 - cos(angle)) + y * sin(angle))
+ p.y * (z*y * (1.0 - cos(angle)) - x * sin(angle))
+ p.z * (z*z * (1.0 - cos(angle)) + cos(angle));
gl_Position = vec4(q, 1.0);
If you know that you are rotating about some standard x-, y-, or z-axis, you can simplify the "algorithm" a lot by defining it explicitly for that standard axis.
Notice how we rotate about the z-axis in the above code. Rotation about the x-axis, for example, would be (x, y, z) = (1, 0, 0). You can set the axis to anything, but the values must form a unit vector; otherwise the rotation comes out scaled and skewed.
Then again, you might as well use matrices:
vec3 n = vec3(0.0, 0.0, 1.0); // the axis to rotate about
// Specify the rotation transformation matrix:
mat3 m = mat3(
n.x*n.x * (1.0f - cos(angle)) + cos(angle),        // column 1, row 1
n.x*n.y * (1.0f - cos(angle)) + n.z * sin(angle),  // column 1, row 2
n.x*n.z * (1.0f - cos(angle)) - n.y * sin(angle),  // column 1, row 3
n.y*n.x * (1.0f - cos(angle)) - n.z * sin(angle),  // column 2, row 1
n.y*n.y * (1.0f - cos(angle)) + cos(angle),        // ...
n.y*n.z * (1.0f - cos(angle)) + n.x * sin(angle),  // ...
n.z*n.x * (1.0f - cos(angle)) + n.y * sin(angle),  // column 3, row 1
n.z*n.y * (1.0f - cos(angle)) - n.x * sin(angle),  // ...
n.z*n.z * (1.0f - cos(angle)) + cos(angle)         // ...
);
// Apply the rotation to our 3D position:
vec3 q = p * m;
gl_Position = vec4(q, 1.0);
Notice how the elements of the matrix are laid out so that we first fill the first column, then the second, and so on; the matrix is in column-major order. This matters when you transfer a matrix written in mathematical notation into a data type in your code: you basically need to transpose it (flip the elements across the diagonal). Note also that we multiply with the vector on the left (a row vector) and the matrix on the right, which makes each result component the dot product of p with one column of m, exactly matching the per-component code above.
If you need a 4-by-4 homogeneous matrix, simply add an extra column and an extra row to the matrix above, so that both the new rightmost column and the new bottom row read [0 0 0 1]:
vec4 p = position.xyzw; // new dimension
vec3 n = ...; // same
mat4 m = mat4( // new dimension
...
0.0,
...
0.0,
...
0.0,
0.0,
0.0,
0.0,
1.0
);
vec4 q = p * m;
gl_Position = q;
Again, notice the order of the multiplication when applying the rotation; it is important because it affects the end result. With the vector on the left, each coordinate of the resulting vector is the dot product of the original vector and a single column of the matrix (see the first code example).
The line
q.x = p.x * (x*x * (1.0 - cos(angle)) + cos(angle))
+ p.y * (x*y * (1.0 - cos(angle)) + z * sin(angle))
+ p.z * (x*z * (1.0 - cos(angle)) - y * sin(angle));
from the first example is the same as:
q.x = dot(p, m[0]);
One could even compose the matrix with itself: m = m*m;. Composing doubles the angle, so squaring the 90-degree matrix above yields a 180-degree rotation matrix.