I'm trying to implement my own OpenGL rotation around the Y axis. Here is my code:
void mglRotateY(float angle)
{
    float radians = angle * (PI / 180);
    GLfloat t[4][4] =
    {
        {cosf(angle), 0, -sinf(angle), 0},
        {0, 1, 0, 0},
        {sinf(angle), 0, cosf(angle), 0},
        {0, 0, 0, 1}
    }; // rotation matrix about the Y axis
    glMultMatrixf(*t);
}
The effect is a rotation around the Y axis, but the degrees don't seem to correspond.
Does anyone know why?
Use radians, not angle, when calculating the sine and cosine.
In your code you reference angle instead of radians. Also, you may want to precalculate the values, as you currently make four trig calls to populate the matrix t.
Perhaps something like:
void mglRotateY(float angle)
{
    float radians = angle * (PI / 180);
    float cosVal = cosf(radians);
    float sinVal = sinf(radians);
    GLfloat t[4][4] =
    {
        {cosVal, 0, -sinVal, 0},
        {0, 1, 0, 0},
        {sinVal, 0, cosVal, 0},
        {0, 0, 0, 1}
    }; // rotation matrix about the Y axis
    glMultMatrixf(*t);
}
This is my perspective projection matrix code:
inline m4
Projection(float WidthOverHeight, float FOV)
{
    float Near = 1.0f;
    float Far = 100.0f;
    float f = 1.0f / (float)tan(DegToRad(FOV / 2.0f));
    float fn = 1.0f / (Near - Far);
    float a = f / WidthOverHeight;
    float b = f;
    float c = Far * fn;
    float d = Near * Far * fn;
    m4 Result =
    {
        {{a, 0, 0,  0},
         {0, b, 0,  0},
         {0, 0, c, -1},
         {0, 0, d,  0}}
    };
    return Result;
}
And here is the main code:
m4 Project = Projection(ar, 90);
m4 Move = {};
CreateMat4(&Move,
           1, 0, 0,  0,
           0, 1, 0,  0,
           0, 0, 1, -2,
           0, 0, 0,  1);
m4 Rotate = Rotation(Scale);
Scale += 0.01f;
m4 FinalTransformation = Project * Move * Rotate;
SetShaderUniformMat4("Project", FinalTransformation, ShaderProgram);
Here are some pictures of the cube rotating.
In the shader code I just multiply the transformation by the position (with the transformation being on the left).
I am not sure if it's helpful, but here is the rotation code:
m4 Rotation(float Angle) // signature inferred from the call Rotation(Scale) above
{
    float c = cos(Angle);
    float s = sin(Angle);
    m4 R =
    {
        {{ c, 0, s, 0},
         { 0, 1, 0, 0},
         {-s, 0, c, 0},
         { 0, 0, 0, 1}}
    };
    return R;
}
I tried multiplying the matrices in the shader code instead of on the C++ side, but then everything disappeared.
OpenGL matrices are stored in column-major order: you read the columns from left to right. For example, the 1st column of the matrix R is { c, 0, s, 0}, the 2nd is { 0, 1, 0, 0}, the 3rd is {-s, 0, c, 0}, and the 4th is { 0, 0, 0, 1}. The lines in your code are actually columns, not rows.
Therefore you need to transpose your projection matrix (Project) and translation matrix (Move).
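For illustration, a minimal transpose helper, assuming m4 wraps a nested float array member (called E here; the actual definition of m4 is not shown in the question):

    // Sketch only: the member name E is an assumption, since the question
    // never shows how m4 is defined. Transposing swaps rows and columns, so
    // a matrix written row by row in source ends up in the column-major
    // layout that OpenGL/GLSL read.
    m4 Transpose(const m4& M)
    {
        m4 R = {};
        for (int Row = 0; Row < 4; ++Row)
            for (int Col = 0; Col < 4; ++Col)
                R.E[Row][Col] = M.E[Col][Row];
        return R;
    }

Transposing Project and Move individually before multiplying keeps the multiplication order intact; transposing the final product instead would also reverse the order of the factors, since (AB)^T = B^T A^T.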
I performed an MVP transformation on the vertices of the model. In theory, I must apply the inverse transpose matrix of the MVP transformation to the normal.
This is the derivation process:
Let (A, B, C) be the normal of the plane on which the point (x, y, z) lies.
For a vector such as (x0, y0, z0), the homogeneous form is (x0, y0, z0, 0). After transformation it should still be a vector, like (x1, y1, z1, 0). This requires that the last row of the 4x4 transformation matrix is all 0 except for the element in the last column; otherwise the result becomes (x1, y1, z1, n) after the transformation.
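Writing out the last component of the product makes this explicit (a small worked step, with m_{ij} denoting the entries of the transformation matrix M):

\[
M \begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 0 \end{pmatrix}
= \begin{pmatrix} \ast \\ \ast \\ \ast \\ m_{41} x_0 + m_{42} y_0 + m_{43} z_0 \end{pmatrix},
\]

so the fourth component stays 0 for every vector only if \(m_{41} = m_{42} = m_{43} = 0\).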
In fact, my MVP transformation matrix does not satisfy this after the inverse transpose is applied.
Code:
Mat<4, 4> View(const Vec3& pos) {
    Mat<4, 4> pan{1, 0, 0, -pos.x,
                  0, 1, 0, -pos.y,
                  0, 0, 1, -pos.z,
                  0, 0, 0, 1};
    Vec3 v = Cross(camera.lookAt, camera.upDirection).Normalize();
    Mat<4, 4> rotate{v.x, v.y, v.z, 0,
                     camera.upDirection.x, camera.upDirection.y, camera.upDirection.z, 0,
                     -camera.lookAt.x, -camera.lookAt.y, -camera.lookAt.z, 0,
                     0, 0, 0, 1};
    return rotate * pan;
}
Mat<4, 4> Projection(double near, double far, double fov, double aspectRatio) {
    double angle = fov * PI / 180;
    double t = -near * tan(angle / 2);
    double b = -t;
    double r = t * aspectRatio;
    double l = -r;
    Mat<4, 4> zoom{2 / (r - l), 0, 0, 0,
                   0, 2 / (t - b), 0, 0,
                   0, 0, 2 / (near - far), 0,
                   0, 0, 0, 1};
    Mat<4, 4> pan{1, 0, 0, -(l + r) / 2,
                  0, 1, 0, -(t + b) / 2,
                  0, 0, 1, -(near + far) / 2,
                  0, 0, 0, 1};
    Mat<4, 4> extrusion{near, 0, 0, 0,
                        0, near, 0, 0,
                        0, 0, near + far, -near * far,
                        0, 0, 1, 0};
    Mat<4, 4> ret = zoom * pan * extrusion;
    return ret;
}
Mat<4, 4> modelMatrix = Mat<4, 4>::identity();
Mat<4, 4> viewMatrix = View(camera.position);
Mat<4, 4> projectionMatrix = Projection(-0.1, -50, camera.fov, camera.aspectRatio);
Mat<4, 4> mvp = projectionMatrix * viewMatrix * modelMatrix;
Mat<4, 4> mvpInverseTranspose = mvp.Inverse().Transpose();
mvp:
-2.29032   0          0.763441  -2.68032e-16
 0        -2.41421    0          0
-0.317495  0         -0.952486   2.97455
 0.316228  0          0.948683  -3.16228
mvpInverseTranspose:
-0.392957  0           0.130986   0
 0        -0.414214    0          0
-4.99      0         -14.97      -4.99
-4.69377   0         -14.0813    -5.01
I seem to understand the problem now: the lighting should be calculated in world space, so I only need to apply the inverse transpose of the model transformation to the normal.
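A short sketch of that fix, reusing the Mat<4, 4> type and the Inverse()/Transpose() methods from the snippet above:

    // Normals are lit in world space, so build the normal matrix from the
    // model matrix alone rather than from the full MVP.
    Mat<4, 4> normalMatrix = modelMatrix.Inverse().Transpose();
    // A normal (nx, ny, nz) then transforms as the vector (nx, ny, nz, 0),
    // while positions keep going through mvp as before.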
I am practicing DirectX 11 following Frank Luna's book.
I have implemented a demo that renders a cube, but the result is not correct.
https://i.imgur.com/2uSkEiq.gif
As I hope you can see from the image (I apologize for the low quality), it seems like the camera is "trapped" inside the cube even when I move it away. There is also a camera frustum clipping problem.
I think the problem is therefore in the definition of the projection matrix.
Here is the cube vertex definition:
std::vector<Vertex> vertices =
{
    {XMFLOAT3(-1, -1, -1), XMFLOAT4(1, 1, 1, 1)},
    {XMFLOAT3(-1, +1, -1), XMFLOAT4(0, 0, 0, 1)},
    {XMFLOAT3(+1, +1, -1), XMFLOAT4(1, 0, 0, 1)},
    {XMFLOAT3(+1, -1, -1), XMFLOAT4(0, 1, 0, 1)},
    {XMFLOAT3(-1, -1, +1), XMFLOAT4(0, 0, 1, 1)},
    {XMFLOAT3(-1, +1, +1), XMFLOAT4(1, 1, 0, 1)},
    {XMFLOAT3(+1, +1, +1), XMFLOAT4(0, 1, 1, 1)},
    {XMFLOAT3(+1, -1, +1), XMFLOAT4(1, 0, 1, 1)},
};
Here is how I calculate the view and projection matrices.
void TestApp::OnResize()
{
    D3DApp::OnResize();
    mProj = XMMatrixPerspectiveFovLH(XM_PIDIV4, AspectRatio(), 1, 1000);
}

void TestApp::UpdateScene(float dt)
{
    float x = mRadius * std::sin(mPhi) * std::cos(mTheta);
    float y = mRadius * std::cos(mPhi);
    float z = mRadius * std::sin(mPhi) * std::sin(mTheta);

    XMVECTOR EyePosition = XMVectorSet(x, y, z, 1);
    XMVECTOR FocusPosition = XMVectorZero();
    XMVECTOR UpDirection = XMVectorSet(0, 1, 0, 0);

    mView = XMMatrixLookAtLH(EyePosition, FocusPosition, UpDirection);
}
And here is how I update the camera position on mouse move.
glfwSetCursorPosCallback(mMainWindow, [](GLFWwindow* window, double xpos, double ypos)
{
    TestApp* app = reinterpret_cast<TestApp*>(glfwGetWindowUserPointer(window));

    if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS)
    {
        float dx = 0.25f * XMConvertToRadians(xpos - app->mLastMousePos.x);
        float dy = 0.25f * XMConvertToRadians(ypos - app->mLastMousePos.y);
        app->mTheta += dx;
        app->mPhi += dy;
        app->mPhi = std::clamp(app->mPhi, 0.1f, XM_PI - 0.1f);
    }
    else if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_RIGHT) == GLFW_PRESS)
    {
        float dx = 0.05f * XMConvertToRadians(xpos - app->mLastMousePos.x);
        float dy = 0.05f * XMConvertToRadians(ypos - app->mLastMousePos.y);
        app->mRadius += (dx - dy);
        app->mRadius = std::clamp(app->mRadius, 3.f, 15.f);
    }

    app->mLastMousePos = XMFLOAT2(xpos, ypos);
});
Thanks.
The root problem here was in the constant-buffer update on the CPU side.
HLSL defaults to column-major matrix layout per the Microsoft Docs, while DirectXMath uses row-major matrices, so you have to transpose when updating the constant buffer.
Alternatively, you can declare the HLSL matrix with the row_major keyword, use #pragma pack_matrix, or compile with the /Zpr switch.
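A minimal sketch of the transpose-on-upload approach; the CBPerObject struct, the buffer, and the variable names below are illustrative, not taken from the book's demo:

    #include <d3d11.h>
    #include <DirectXMath.h>

    // Illustrative constant-buffer layout; the real demo's struct may differ.
    struct CBPerObject
    {
        DirectX::XMFLOAT4X4 WorldViewProj;
    };

    void UploadWVP(ID3D11DeviceContext* context, ID3D11Buffer* cbuffer,
                   DirectX::FXMMATRIX worldViewProj)
    {
        CBPerObject cb;
        // DirectXMath builds row-major matrices; HLSL defaults to
        // column-major, so transpose once here so the shader reads the
        // matrix it expects.
        DirectX::XMStoreFloat4x4(&cb.WorldViewProj,
                                 DirectX::XMMatrixTranspose(worldViewProj));
        context->UpdateSubresource(cbuffer, 0, nullptr, &cb, 0, 0);
    }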
I have a task to draw 3D objects on the ground using OpenGL. I use a left-handed OpenGL coordinate system where the Y axis is up. But the 3D objects and the camera orbiting them should use a different coordinate system with the following properties:
XY plane is a ground plane;
Z-axis is up;
Y-axis is "North";
X-axis is "East";
The azimuth (or horizontal) angle is [0, 360] degrees;
The elevation (or vertical) angle is [0, 90] degrees from the XY plane.
The end user uses azimuth and elevation to rotate the camera around some center. So I made the following code to convert from spherical coordinates to a quaternion:
// polar: x - radius, y - horizontal angle, z - vertical angle
QQuaternion CoordinateSystemGround::fromPolarToQuat(const QVector3D& polar) const {
    QQuaternion dest = QQuaternion::fromEulerAngles({0, polar.y(), polar.z()});
    // convert the user coord system back to OpenGL by a rotation around the X axis
    QQuaternion orig = QQuaternion::fromAxisAndAngle(1, 0, 0, -90);
    return dest * orig;
}
and back:
QVector3D CoordinateSystemGround::fromQuatToPolar(const QQuaternion& q) const {
    // convert the OpenGL coord system to the destination by a rotation around the X axis
    QQuaternion dest = QQuaternion::fromAxisAndAngle(1, 0, 0, 90);
    QQuaternion out = q * dest;
    QVector3D euler = out.toEulerAngles();

    float hor = euler.y();
    if (hor < 0.0f)
        hor += 360.0f;

    float ver = euler.z();
    if (ver > 90.0f)
        ver = 90.0f;
    else if (ver < 0.0f)
        ver = 0.0f;

    // x changes later
    return QVector3D(0, hor, ver);
}
But it doesn't work right. I suppose the fromPolarToQuat conversion has a mistake somewhere, and I can't understand where.
Seems like I found the solution. So, to get the polar angles from a quaternion:
QVector3D CoordinateSystemGround::fromQuatToPolar(const QQuaternion& q) const {
    QQuaternion coord1 = QQuaternion::fromAxisAndAngle(0, 1, 0, -180);
    QQuaternion coord2 = QQuaternion::fromAxisAndAngle(1, 0, 0, -90);
    QQuaternion out = coord1 * coord2 * q;
    QVector3D euler = out.toEulerAngles();

    float hor = euler.y();
    if (hor < 0.0f)
        hor += 360.0f;

    float ver = euler.x();
    return QVector3D(0, hor, -ver);
}
And back:
QQuaternion CoordinateSystemGround::fromPolarToQuat(const QVector3D& polar) const {
    QQuaternion dest = QQuaternion::fromEulerAngles({-polar.z(), polar.y(), 0});
    QQuaternion coord1 = QQuaternion::fromAxisAndAngle(1, 0, 0, 90);
    QQuaternion coord2 = QQuaternion::fromAxisAndAngle(0, 1, 0, 180);
    return coord1 * coord2 * dest;
}
Not sure it's an optimal solution, but it works as it should.
Edit:
After some research I found a few mistakes and made an optimized and, I hope, correct version of the conversion:
QVector3D CoordinateSystemGround::fromQuatToPolar(const QQuaternion& q) const {
    // back to the OpenGL coord system: just multiply by the inverted destination coord system
    QQuaternion dest = QQuaternion::fromAxes({-1, 0, 0}, {0, 0, 1}, {0, 1, 0}).inverted();
    QVector3D euler = (dest * q).toEulerAngles();

    float hor = euler.y();
    if (hor < 0.0f)
        hor += 360.0f;

    float ver = euler.x();
    return QVector3D(0, hor, -ver);
}
And back:
QQuaternion CoordinateSystemGround::fromPolarToQuat(const QVector3D& polar) const {
    // just a rotation as if we were in the OpenGL coord system
    QQuaternion orig = QQuaternion::fromEulerAngles({-polar.z(), polar.y(), 0});
    // then multiply by the destination coord system
    QQuaternion dest = QQuaternion::fromAxes({-1, 0, 0}, {0, 0, 1}, {0, 1, 0});
    return dest * orig;
}
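A quick round-trip check of the pair (hypothetical usage; coords stands in for some CoordinateSystemGround instance):

    // 'coords' is an assumed CoordinateSystemGround instance.
    QVector3D polar(0, 135.0f, 30.0f);             // azimuth 135 deg, elevation 30 deg
    QQuaternion q  = coords.fromPolarToQuat(polar);
    QVector3D back = coords.fromQuatToPolar(q);    // expect roughly (0, 135, 30)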
I'm trying to calculate a lookat matrix myself instead of using gluLookAt().
My problem is that my matrix doesn't work; using the same parameters with gluLookAt does work, however.
My way of creating a lookat matrix:
Vector3 Eye, At, Up; // these should be parameters =)

Vector3 zaxis = At - Eye;                     zaxis.Normalize();
Vector3 xaxis = Vector3::Cross(Up, zaxis);    xaxis.Normalize();
Vector3 yaxis = Vector3::Cross(zaxis, xaxis); yaxis.Normalize();

float r[16] =
{
    xaxis.x, yaxis.x, zaxis.x, 0,
    xaxis.y, yaxis.y, zaxis.y, 0,
    xaxis.z, yaxis.z, zaxis.z, 0,
    0,       0,       0,       1,
};
Matrix Rotation;
memcpy(Rotation.values, r, sizeof(r));

float t[16] =
{
     1,      0,      0,     0,
     0,      1,      0,     0,
     0,      0,      1,     0,
    -Eye.x, -Eye.y, -Eye.z, 1,
};
Matrix Translation;
memcpy(Translation.values, t, sizeof(t));

View = Rotation * Translation; // I tried reversing this as well (Translation * Rotation)
Now, when I try to use this matrix by calling glMultMatrixf, nothing shows up in my engine, while using the same eye, lookat, and up values with gluLookAt works perfectly, as I said before.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(View);
The problem must be somewhere in the code I posted here; I know the problem is not in my Vector3/Matrix classes, because they work fine when creating a projection matrix.
I assume you have a right-handed coordinate system (the default in OpenGL).
Try the following code. I think you forgot to normalize Up, and you have to put -zaxis in the matrix.
Vector3 Eye, At, Up; // these should be parameters =)

Vector3 zaxis = At - Eye;                     zaxis.Normalize();
Up.Normalize();
Vector3 xaxis = Vector3::Cross(Up, zaxis);    xaxis.Normalize();
Vector3 yaxis = Vector3::Cross(zaxis, xaxis); yaxis.Normalize();

float r[16] =
{
    xaxis.x, yaxis.x, -zaxis.x, 0,
    xaxis.y, yaxis.y, -zaxis.y, 0,
    xaxis.z, yaxis.z, -zaxis.z, 0,
    0,       0,        0,       1,
};
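The translation half is unchanged from the question, so assembling the view matrix stays the same (sketched here with the asker's own Matrix class and memcpy pattern):

    // Same assembly as in the question; only the rotation block above changed.
    Matrix Rotation;
    memcpy(Rotation.values, r, sizeof(r));
    View = Rotation * Translation; // Translation built exactly as in the question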