How to calculate rotation properly in OpenGL using GLM - opengl

I have this script to get an entity's MVP, all in GLM. I want the object to rotate according to its orientation value, the same one I use for movement, but visually the object rotates about a hundred times slower. This has to be accurate so that the direction is represented properly on screen. I'm no expert, still just learning and trying to get my head around it, so once again I'm hoping the wise ones can help me out? :)
void Entity::turnRight()
{
m_Orientation += m_TurnSpeed;
}
The turn speed is 0.02 by default.
Then I get the MVP:
m_Direction[0] = cos(0.0f) * sin(m_Orientation);
m_Direction[1] = sin(0.0f);
m_Direction[2] = cos(0.0f) * cos(m_Orientation);
glm::mat4 projectionMatrix = glm::perspective(90.0f, (GLfloat)640 / (GLfloat)480, 0.001f, 60.0f);
glm::mat4 translationMatrix = glm::translate(glm::mat4(), glm::vec3(m_Position[0], m_Position[1], m_Position[2]));
glm::mat4 Rotation = glm::rotate(glm::mat4(), (m_Orientation / m_TurnSpeed), glm::vec3(0, 1, 0));
m_MVP = (projectionMatrix * viewMatrix * glm::mat4() * translationMatrix * Rotation);
This is all there is to it, really... the player rotates and translates properly, but visually it's off, as if it turns far less than the actual orientation value (m_Orientation).
What am I not calculating in there?
Thanks in advance!

I think you are using an older version of GLM that uses degrees instead of radians. This is an unfortunate design flaw in older versions of GLM.
You can probably use a preprocessor macro to fix things:
#define GLM_FORCE_RADIANS
#include <glm/...>
#include <glm/...>
However, it might be better to upgrade to a more recent version of GLM. Versions 0.9.6 and newer use radians by default.
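For example, with radians forced (or with GLM 0.9.6+), the matrices from the question could be built like this. This is only a sketch based on the code above; note that the angle is passed straight through and the division by m_TurnSpeed goes away:
// field of view and rotation angle are now both expressed in radians
glm::mat4 projectionMatrix = glm::perspective(glm::radians(90.0f), 640.0f / 480.0f, 0.001f, 60.0f);
glm::mat4 rotationMatrix = glm::rotate(glm::mat4(1.0f), m_Orientation, glm::vec3(0.0f, 1.0f, 0.0f));
m_MVP = projectionMatrix * viewMatrix * translationMatrix * rotationMatrix;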

Related

OpenGL Camera Strafing doesn't work

I'm currently working on an OpenGL framework/engine, and as far as the OpenGL part goes, I'm quite satisfied with my results.
On the other hand, I have a serious problem getting a camera to work.
Moving along the Z axis works well, but as soon as I start to strafe (moving along the X axis), the whole scene gets screwed up.
You can see the result of strafing in the image below.
The left part shows the actual scene, the right part shows the scene resulting from a strafe movement.
My code is the following.
In Constructor:
//Info is a Struct with Default values
m_projectionMatrix = glm::perspective(
info.fieldOfView, width / height, //info.fov = 90
info.nearPlane, info.farPlane // info.near = 0.1f, info.far = 1000
);
//m_pos = glm::vec3(0.0f,0.0f,0.0f), info.target = glm::vec3(0.0f, 0.0f, -1.0f)
m_viewMatrix = glm::lookAt(m_pos, m_pos + info.target, Camera::UP);
//combine projection and view
m_vpMatrix = m_projectionMatrix * m_viewMatrix;
In the "Update"-Method I'm currently doing the following:
glm::mat4x4 Camera::GetVPMatrix()
{
m_vpMatrix = glm::translate(m_vpMatrix, m_deltaPos);
return m_vpMatrix;
}
As far as I know:
The projection matrix produces the actual perspective view. The view matrix, initially, translates and rotates the whole scene so that it is centered on the camera?
So why does translating the VP matrix by any Z value work just fine, while translating by an X value doesn't?
I would like to achieve a camera behaviour like this:
Initial cam pos is (0,0,0) and "center" is e.g. (0,0,-1).
Then, after translation by X = 5: cam pos is (5,0,0) and center is (5,0,-1).
Edit: Additional question.
Why is the Z coordinate affected by the VP transformation?
Thanks for any help!
Best regards, Christoph.
Okay, I finally got the solution... As you can see, I am using GLM for my matrix math. GLM stores its matrix values in column-major order. OpenGL wants column-major matrices, too. The native C/C++ 2D array layout is row-major, so most of the available OpenGL/C++ tutorials state that one should use
glUniformMatrix4fv(location, 1, GL_TRUE, &mat[0][0]);
with GL_TRUE meaning that the matrix should be converted (transposed) from row-major to column-major order. Because my matrices are already in column-major format, that makes absolutely no sense...
Changing the above to
glUniformMatrix4fv(location, 1, GL_FALSE, &mat[0][0]);
fixed my problem...
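As a side note, if you prefer not to take the address of mat[0][0] yourself, GLM also ships glm::value_ptr (in <glm/gtc/type_ptr.hpp>), which does the same thing. A small sketch:
#include <glm/gtc/type_ptr.hpp>
// GLM matrices are already column major, so no transposition by OpenGL is needed
glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(mat));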
Matrix math is not my strong point, so I can't explain why your current approach doesn't work, though I have my suspicions (translating the combined view-projection matrix doesn't seem right). The following should allow you to move the camera though:
// update your camera position
m_pos = new_pos;
// update the view matrix
m_viewMatrix = glm::lookAt(m_pos, m_pos + info.target, Camera::UP);
// update the view-projection matrix, projection matrix never changes
m_vpMatrix = m_projectionMatrix * m_viewMatrix;
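If the goal is specifically a strafe, the new position can be derived from the camera's right vector before rebuilding the view matrix. A rough sketch, assuming info.target holds the normalized forward direction and strafeAmount is a hypothetical value from your input handling:
// the right vector is perpendicular to the forward and up vectors
glm::vec3 right = glm::normalize(glm::cross(info.target, Camera::UP));
m_pos += right * strafeAmount; // positive strafes right, negative strafes left
m_viewMatrix = glm::lookAt(m_pos, m_pos + info.target, Camera::UP);
m_vpMatrix = m_projectionMatrix * m_viewMatrix;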

Minko - camera and rotation angle

Using Minko version 3.0, I am creating a camera as in the samples:
auto camera = scene::Node::create("camera")
->addComponent(Renderer::create(0x000000ff))
->addComponent(Transform::create(
//test
Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 3.f)) //ori
//Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 30.f))
))
->addComponent(PerspectiveCamera::create(canvas->aspectRatio()));
Then I load my obj using a method similar to this:
void RotateMyobj(const char *objName, float rotX, float rotY, float rotZ)
{
...
auto myObjModel = sceneMan->assets()->symbol(objName);
auto clonedobj = myObjModel->clone(CloneOption::DEEP);
...
clonedobj->component<Transform>()->matrix()->prependRotationX(rotX); //test - ok
clonedobj->component<Transform>()->matrix()->prependRotationY(rotY);
clonedobj->component<Transform>()->matrix()->prependRotationZ(rotZ);
...
//include adding child to rootnode
}
Calling it from the asset-complete callback:
auto _ = sceneManager->assets()->loader()->complete()->connect([=](file::Loader::Ptr loader)
{
...
RotateMyobj(0,0,0);
...
}
The obj does load, however it is rotated "to the left" (compared to when loaded in Blender, for example).
If I call my method using RotateMyobj(0, 1.5, 0); the obj is displayed at the right angle, however I think this shouldn't be needed.
PS: tested with many obj files, all giving the same result.
PS2: commenting out / turning off Matrix4x4::create()->lookAt leads to the same result.
PS3: shouldn't creating the cam with a position of 30 on the Z axis feel like looking at the ground from the top of a building?
Any idea whether this comes from the camera creation code or the obj loading one?
Thx.
Update :
I found the source of my problem: it is caused by calling this method inside the enterFrame callback: UpdateSceneOnMouse( camera );
void UpdateSceneOnMouse( std::shared_ptr<scene::Node> &cam ){
yaw += cameraRotationYSpeed;
cameraRotationYSpeed *= 0.9f;
pitch += cameraRotationXSpeed;
cameraRotationXSpeed *= 0.9f;
if (pitch > maxPitch)
{
pitch = maxPitch;
}
else if (pitch < minPitch)
{
pitch = minPitch;
}
cam->component<Transform>()->matrix()->lookAt(
lookAt,
Vector3::create(
lookAt->x() + distance * cosf(yaw) * sinf(pitch),
lookAt->y() + distance * cosf(pitch),
lookAt->z() + distance * sinf(yaw) * sinf(pitch)
)
);}
with the following initialization parameters :
float CallbackManager::yaw = 0.f;
float CallbackManager::pitch = (float)M_PI * 0.5f;
float CallbackManager::minPitch = 0.f + 1e-5;
float CallbackManager::maxPitch = (float)M_PI - 1e-5;
std::shared_ptr<Vector3> CallbackManager::lookAt = Vector3::create(0.f, .8f, 0.f);
float CallbackManager::distance = 10.f;
float CallbackManager::cameraRotationXSpeed = 0.f;
float CallbackManager::cameraRotationYSpeed = 0.f;
If I turn off the call (inspired by the clone example), the object loads more or less correctly
(still a bit rotated to the left, but better than previously). I am no math guru; can anyone suggest better
default parameters so the object / cameras aren't rotated at startup?
Thx.
Many 3D/CAD tools will export files - including OBJ - with a coordinate system different from the one used by Minko:
Minko 3 beta 2 uses a left-handed coordinates system
Minko 3 beta 3 uses the OpenGL coordinates system, which is right-handed.
For example, in a right handed coordinates system x+ goes "right", y+ goes "up" and z+ goes "out from the screen".
Your Minko app likely loads 3D files using the ASSIMP library through the Minko/ASSIMP plugin. If the 3D file (format) provides information about the coordinate system used when it was exported, then ASSIMP will convert the coordinates to the right-handed system. But the OBJ file format standard does not include such information.
Solution 1 : Try exporting your 3D models using a more versatile format such as Collada (*.dae).
Commenting/turning off Matrix4x4::create()->lookAt() does not affect the orientation of your 3D model because there is no reason for it to do so: changing the position of the camera has no reason to affect the orientation of a mesh. If you want to change the up axis of the camera, you have to use the 3rd parameter of the Matrix4x4::lookAt() method.
Solution 2 : The best thing to do is to properly rotate your mesh using RotateMyobj(0, PI / 2, 0) (see the sketch below).
I also recommend using the dev branch (Minko beta 3 instead of beta 2), because there are some API changes (especially math-wise) and a tremendous performance boost.
Solution 3 : Try adding the symbol without cloning it, see if it makes any difference.
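For Solution 2, the rotation can be applied with the same transform call already used in RotateMyobj. A minimal sketch (M_PI comes from <cmath>, and on some compilers may require _USE_MATH_DEFINES):
// quarter turn around Y to compensate for the exporter's coordinate system
clonedobj->component<Transform>()->matrix()->prependRotationY(static_cast<float>(M_PI) * 0.5f);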

First Person Camera movement issues

I'm implementing a first person camera using the GLM library, which provides me with some useful functions that calculate perspective and 'lookAt' matrices. I'm also using OpenGL, but that shouldn't make a difference in this code.
Basically, what I'm experiencing is that I can look around, much like in a regular FPS, and move around. But the movement is constrained to the three world axes, in such a way that if I rotate the camera, I still move in the same direction as if I had not rotated it... Let me illustrate (in 2D, to simplify things).
In this image, you can see four camera positions.
Those marked with a one are before movement, those marked with a two are after movement.
The red triangles represent a camera that is oriented straight forward along the z axis. The blue triangles represent a camera that has been rotated to look backward along the x axis (to the left).
When I press the 'forward movement key', the camera moves forward along the z axis in both cases, disregarding the camera orientation.
What I want is a more FPS-like behaviour, where pressing forward moves me in the direction the camera is facing. I thought that with the arguments I pass to glm::lookAt, this would be achieved. Apparently not.
What is wrong with my calculations?
// Calculate the camera's orientation
float angleHori = -mMouseSpeed * Mouse::x; // Note that (0, 0) is the center of the screen
float angleVert = -mMouseSpeed * Mouse::y;
glm::vec3 dir(
cos(angleVert) * sin(angleHori),
sin(angleVert),
cos(angleVert) * cos(angleHori)
);
glm::vec3 right(
sin(angleHori - M_PI / 2.0f),
0.0f,
cos(angleHori - M_PI / 2.0f)
);
glm::vec3 up = glm::cross(right, dir);
// Calculate projection and view matrix
glm::mat4 projMatrix = glm::perspective(mFOV, mViewPortSizeX / (float)mViewPortSizeY, mZNear, mZFar);
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);
glm::lookAt takes three parameters: eye, centre and up. The first two are positions while the last is a vector. If you're planning on using this function, it's better that you maintain only these three parameters consistently.
Coming to the issue with the calculation: I see that the position variable is unchanged throughout the code. All that changes is the look-at point, i.e. the centre. The right thing to do is to first do position += dir, which will move the camera (position) along the direction pointed to by dir. To update the centre, the second parameter can then be left as-is: position + dir. This works because the position has already been updated, and from there we have a point farther along dir to look at.
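In code, the suggestion would look roughly like this (a sketch only; forwardAmount is a hypothetical value coming from your key handling):
// advance the camera along its facing direction, then look out from the new position
mPosition += forwardAmount * dir;
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);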
The issue was actually in another method. When moving the camera, I needed to do this:
void Camera::moveX(char s)
{
mPosition += s * mSpeed * mRight;
}
void Camera::moveY(char s)
{
mPosition += s * mSpeed * mUp;
}
void Camera::moveZ(char s)
{
mPosition += s * mSpeed * mDirection;
}
To make the camera move across the correct axes.
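For that to work, mRight, mUp and mDirection have to be kept in sync with the mouse angles. A sketch of how they could be updated, assuming those members are simply meant to mirror the dir, right and up vectors computed in the question:
// recompute the camera basis whenever the mouse angles change
mDirection = glm::vec3(cos(angleVert) * sin(angleHori),
                       sin(angleVert),
                       cos(angleVert) * cos(angleHori));
mRight = glm::vec3(sin(angleHori - M_PI / 2.0f), 0.0f, cos(angleHori - M_PI / 2.0f));
mUp = glm::cross(mRight, mDirection);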

Rotating the LookAt vector of gluLookAt

I'm a student new to OpenGL. Currently, I'm doing a project that creates a scene.
Right now, my team is using gluLookAt() for the camera. What I want to accomplish is to rotate the LookAt vector around a certain point, namely the point the camera is looking at.
This accomplishes a sort of "swaying in a circle". I need this because I am making a dart game for the scene: my camera stays still, but I need it to move in a circle while still allowing the user's mouse to influence it. It also needs to produce a drunken sort of movement. That is why I am not considering rotating the Up or Eye vectors.
Currently, my look at code is like this.
int deltax = x - mouse.mX;
int deltay = y - mouse.mY;
cameradart.mYaw -= ((deltax/360.0) * 3.142) * 0.5;
cameradart.mPitch -= deltay * 0.02;
mouse.mX = x;
mouse.mY = y;
cameradart.lookAt.x = sin (cameradart.mYaw);
cameradart.lookAt.y = cameradart.mPitch ;
cameradart.lookAt.z = cos (cameradart.mYaw);
gluLookAt (cameradart.eye.x, cameradart.eye.y, cameradart.eye.z,
cameradart.eye.x + cameradart.lookAt.x, cameradart.eye.y + cameradart.lookAt.y,
cameradart.eye.z + cameradart.lookAt.z,
cameradart.up.x, cameradart.up.y, cameradart.up.z);
I know that it could be done more easily using a different camera, but I really don't want to mess with my team's code by not using gluLookAt().
There are a couple of solutions in my mind; I'll tell you the easiest one to understand/implement as a new graphics student.
Assuming at first you're looking at (0,0,1) - store that vector:
Think of a point that's drawing a circle while you're looking at it.
- Do it first for turning right and left (2D on X & Z)
- Let HDiff be the horizontal difference between the old mouse position and the new one
- Update x = cos(HDiff)
- Update z = sin(HDiff)
I didn't try it, but it should work :)
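A minimal sketch of that idea, layered on top of the existing yaw code; swayRadius and swayAngle are hypothetical names, with swayAngle advanced a little every frame to trace the circle:
// add a small circular offset to the look-at direction to get the swaying effect
cameradart.lookAt.x = sin(cameradart.mYaw) + swayRadius * cos(swayAngle);
cameradart.lookAt.y = cameradart.mPitch;
cameradart.lookAt.z = cos(cameradart.mYaw) + swayRadius * sin(swayAngle);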
If you want to be able to manipulate the camera, using a camera matrix is a much more effective mechanism than working with gluLookAt. GLM is a good library for matrix and vector math and includes a lookAt mechanism you can use to initialize the matrix, or you can just initialize it with a series of operations. However, remember that lookAt produces a view matrix, and the view matrix is the inverse of the camera matrix.
This piece of code has a demonstration of what I'm talking about. Specifically, look at the player member variable and how it's manipulated.
glm::mat4 player;
...
glm::vec3 playerPosition(0, eyeHeight, ipd * 4.0f);
player = glm::inverse(glm::lookAt(playerPosition, glm::vec3(0, eyeHeight, 0), GlUtils::Y_AXIS));
This approach lets you apply changes like rotation and translation directly to the player matrix
// Rotate on the Y axis
player = glm::rotate(player, angle, glm::vec3(0, 1, 0));
This is much more intuitive than manipulating the view matrix, since changes to the view matrix always have to be the inverse of what you'd do to the player matrix.
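Translation works the same way. For example, moving the player forward along whatever direction it currently faces is just a local-space translate (a small sketch; speed is a hypothetical scalar):
// translate in the player's local space: -Z is "forward" in OpenGL conventions
player = glm::translate(player, glm::vec3(0.0f, 0.0f, -speed));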
When you're ready to render, you need to convert the player matrix to a view matrix by taking its inverse. In my example it's done like this:
gl::Stacks::modelview().top() = riftOrientation * glm::inverse(player);
This is because I'm using an application based modelview matrix stack that gets applied to the Shader programs I'm running.
For an OpenGL 1.x program, you'd instead use glLoadMatrix:
glMatrixMode(GL_MODELVIEW);
glm::mat4 modelview = glm::inverse(player);
glLoadMatrixf(&modelview[0][0]);

OpenGL/Glut draw Pyramid - Skeleton Bones

I would like to draw a human skeleton (pose estimation project) in OpenGL, using whichever toolkit might be helpful.
I already have something simple working, drawing the joints with
glVertex3d // and/or
glutWireSphere
and the bones with
glBegin(GL_LINES)
glVertex3d // coordinates of the starting point
glVertex3d // coordinates of the ending point
glEnd()
This looks very similar to the next picture; however, this bone depiction doesn't give any intuition about bone rotation.
I would like to have something looking more similar to the following picture, with bones drawn as elongated pyramids.
GLUT doesn't seem to help with this.
glutWireCone // better than simple line, but still no intuition about rotation
glutWireTetrahedron // not really useful / parametrizable
Is there any tool available or should this be a custom solution?
GLUT objects will be drawn using the current OpenGL modelview matrix. This can be used to control the position, rotation, or even the scaling of the rendered object. With a cone, you would first translate (using glTranslate()) to the position of one end point, and then rotate (using glRotate() or glMultMatrix()) so that your cone was pointed towards the other endpoint. Finally you call your glutWireCone() method so that the height is equal to the distance from one endpoint to the other.
The tricky bit is finding the glRotate() parameters. GLUT will draw the cone by default along the Z axis, so your rotation needs to be whatever rotation would be required to rotate the Z axis to the axis defined by your end point minus your start point. The GLM math library can be really useful for this kind of thing. It even has a function for it
quat rotation(vec3 const & orig, vec3 const & dest)
So you could do something like this:
// these are filled in with your start and end points
glm::vec3 start, end;
glm::vec3 direction = glm::normalize(end - start); // glm::rotation expects normalized input vectors
glm::quat rotation = glm::rotation(glm::vec3(0, 0, 1), direction);
glm::mat4 rotationMatrix = glm::mat4_cast(rotation);
glMultMatrixf(&rotationMatrix[0][0]);
You can get the tetrahedron effect by specifying 3 for the number of slices in your glut cone. In fact I wouldn't be very surprised if glutWireTetrahedron wasn't implemented in terms of glutWireCone
As mentioned in the comment above, Jherico's answer really helped me solve this problem.
This answer builds upon his post, just to make things a bit more complete for future reference.
You can find the GLM library here; it's a header-only library, so you don't need to build anything, just add it to your include path and include the necessary header files. It has some great tools, as can be seen below.
#define GLM_FORCE_RADIANS // for compatibility between GLM and OpenGL API legacy functions
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp> // for glm::value_ptr // mat4 -> const float*
#include <glm/gtx/quaternion.hpp> // for glm::rotation
#include <glm/gtx/string_cast.hpp> // for glm::to_string
...
...
...
// _startBoneVec3 used below is Eigen::Vector3d
// _endddBoneVec3 used below is Eigen::Vector3d
...
...
...
//
// for JOINTS
//
glPushMatrix();
glTranslated( _startBoneVec3(0),_startBoneVec3(1),_startBoneVec3(2) );
glutWireSphere(1.0,30,30);
glPopMatrix();
...
...
//
// for BONES
//
glPushMatrix();
float boneLength = ( _endddBoneVec3 - _startBoneVec3 ).norm();
glTranslated( _startBoneVec3(0),_startBoneVec3(1),_startBoneVec3(2) );
glm::vec3 _glmStart( _startBoneVec3(0),_startBoneVec3(1),_startBoneVec3(2) );
glm::vec3 _glmEnddd( _endddBoneVec3(0),_endddBoneVec3(1),_endddBoneVec3(2) );
glm::vec3 _glmDirrr = glm::normalize( _glmEnddd - _glmStart ); // super important to normalize!!!
glm::quat _glmRotQuat = glm::rotation( glm::vec3(0,0,1),_glmDirrr); // calculates rotation quaternion between 2 normalized vectors
//glm::mat4 _glmRotMat = glm::mat4_cast(_glmRotQuat); // quaternion -> mat4
glm::mat4 _glmRotMat = glm::toMat4 (_glmRotQuat); // quaternion -> mat4
//std::cout << glm::to_string(_glmDirrr) << std::endl;
glMultMatrixf(glm::value_ptr(_glmRotMat));
glutWireCone(4.2,boneLength,4,20); // cone with 4 slices = pyramid-like
glPopMatrix();
Result: