D3DXMatrixRotationAxis rotates the wrong axis - C++

I'm writing a simple 3D application with DirectX. I'm a newbie, but I think I understand how D3DX rotation works.
While creating collision-detection functionality I noticed that the ball bounces in the wrong direction. The code should flip the sign of the axis given in the "direction" vector. Instead it flips the two others:
speed = D3DXVECTOR3(1.0f, 2.0f, 3.0f);
direction = D3DXVECTOR3(1.0f, 0.0f, 0.0f);
D3DXMATRIX temp;
D3DXMatrixRotationAxis(&temp, &direction, 3.14f);
D3DXVec3TransformCoord(&speed, &speed, &temp);
From a breakpoint I know that speed changed from (1, 2, 3) to:
_D3DVECTOR {x=0.999999762 y=-2.00477481 z=-2.99681091 }
What am I doing wrong here? The idea is to invert the axis specified in the direction vector.

You have created a rotation transformation of 180° around the X axis. Applying it to (1, 2, 3) yields (1, -2, -3), which is exactly what you specified. (The small deviations in your breakpoint values come from passing 3.14f instead of a more precise value of π, e.g. D3DX_PI.)
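Written out, the matrix D3DXMatrixRotationAxis builds here and its effect on the speed vector:
R_x(\pi) =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\pi & -\sin\pi \\
0 & \sin\pi & \cos\pi
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{pmatrix},
\qquad
R_x(\pi)
\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}
=
\begin{pmatrix} 1 \\ -2 \\ -3 \end{pmatrix}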
"Bouncing" your "speed" S vector with a plane that has Normal N:
angle = acos(S*N); // provided that * is an operator for D3DXVec3Dot; S and N must be unit length for acos to be valid
axis = S^N; // provided that ^ is an operator for D3DXVec3Cross
D3DXMatrixRotationAxis(&temp, &axis, angle*2); // note the angle is doubled
D3DXVec3TransformCoord(&S, &S, &temp);
S=-S; // negate direction
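For what it's worth, the same bounce can be computed without building a rotation matrix, using the standard reflection formula S' = S - 2(S·N)N. A minimal sketch, assuming N is unit length (the helper name Reflect is illustrative, not from the original code):
#include <d3dx9math.h>

// Reflect a velocity S off a surface with unit normal N: S' = S - 2(S.N)N
D3DXVECTOR3 Reflect(const D3DXVECTOR3& S, const D3DXVECTOR3& N)
{
    float d = D3DXVec3Dot(&S, &N); // component of S along N
    return S - 2.0f * d * N;       // remove twice the normal component
}
This avoids the acos/axis computation entirely and has no special case when S and N are parallel.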

Related

How to rotate a cube by its center

I am trying to rotate a "cube" full of little cubes using the keyboard, which works, but not very well.
I am struggling with setting the pivot point of the rotation to the very center of the big "cube" / world. As you can see on this video, the center of the front (initial) face of the big cube is currently the pivot point for my rotation, which is a bit confusing once I rotate the world a little.
To explain it better: it looks like I am moving the initial face of the cube when using keys to rotate it. So the pivot point might be okay from that point of view, but what is wrong in my code? I don't understand why it rotates around the front face instead of rotating the entire cube around its very center.
To generate all the little cubes, I call a function inside 3 for loops (x, y, z); the function returns cubeMat, so all the cubes are generated as you can see on the video.
cubeMat = scale(cubeMat, {0.1f, 0.1f, 0.1f});
cubeMat = translate(cubeMat, {positioning...});
For the rotation itself, a short example of rotating to the left looks like this:
mat4 total_rotation; // global variable - never resets
mat4 rotation;       // local variable
if (keysPressed[GLFW_KEY_LEFT]) {
    timer -= delta;
    rotation = rotate(mat4{}, -delta, {0, 1, 0});
}
... // rest of key controls
total_rotation *= rotation;
And inside those 3 for loops there is also this:
program.setUniform("ModelMatrix", total_rotation * cubeMat);
cube.render();
I have read that I should use a transformation to set the pivot point to the middle, but in that case, how can I set the pivot point to the little cube at the center of the world? That cube is at x=2, y=2, z=2, since in the for loops I generate cubes starting at x=0.
You are accumulating the rotation matrices by right-multiplication. That way, every new rotation is performed in the local coordinate system produced by all previous transformations, which is why a right-rotation after an up-rotation turns the cube (it is a right-rotation in the local coordinate system).
But you want your rotations to act in the global coordinate system. So simply reverse the multiplication order:
total_rotation = rotation * total_rotation;
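A minimal GLM sketch of the difference, assuming delta is the frame-time step (the function name is illustrative):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 total_rotation(1.0f); // accumulated orientation, starts as identity

void onLeftKey(float delta)
{
    glm::mat4 rotation = glm::rotate(glm::mat4(1.0f), -delta, glm::vec3(0, 1, 0));
    // total_rotation = total_rotation * rotation; // rotates about local axes (the bug)
    total_rotation = rotation * total_rotation;    // rotates about global axes (the fix)
}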

Detection of head nod pose from pitch angle

I am making an Android application that will detect driver fatigue from the head-nod pose.
What I did:
I detected 68 facial landmarks in the image using the dlib library.
I used solvePnP to find the rotation matrix, and from that matrix I obtained the roll, pitch and yaw angles.
Now I have to detect a head nod from the pitch angle.
My problem:
How do I set a threshold value so that an angle below or above the threshold counts as a head nod?
The computation sometimes yields negative angles. What does a negative angle mean in a 3-dimensional coordinate system?
My code:
void getEulerAngles(Mat &rotCamerMatrix, Vec3d &eulerAngles)
{
    Mat cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ;
    double* _r = rotCamerMatrix.ptr<double>();
    double projMatrix[12] = { _r[0], _r[1], _r[2], 0,
                              _r[3], _r[4], _r[5], 0,
                              _r[6], _r[7], _r[8], 0 };
    decomposeProjectionMatrix(Mat(3, 4, CV_64FC1, projMatrix),
                              cameraMatrix, rotMatrix, transVect,
                              rotMatrixX, rotMatrixY, rotMatrixZ,
                              eulerAngles);
}
int renderToMat(std::vector<full_object_detection>& dets, Mat& dst)
{
    Scalar color;
    std::vector<cv::Point2d> image_points;
    std::vector<cv::Point3d> model_points;
    string disp;
    int sz = 3, l;
    color = Scalar(0, 255, 0);
    double p1, p2, p3, leftear, rightear, ear = 0, yawn = 0.00, yaw = 0.00, pitch = 0.00, roll = 0.00;
    l = dets.size();
    // I am calculating only for one face, so assuming l = 1
    for (unsigned long idx = 0; idx < l; idx++)
    {
        // 2D landmark positions (note: x() and y(), not x() twice)
        image_points.push_back(Point2d(dets[idx].part(30).x(), dets[idx].part(30).y())); // nose tip
        image_points.push_back(Point2d(dets[idx].part(8).x(),  dets[idx].part(8).y()));  // chin
        image_points.push_back(Point2d(dets[idx].part(36).x(), dets[idx].part(36).y())); // left eye corner
        image_points.push_back(Point2d(dets[idx].part(45).x(), dets[idx].part(45).y())); // right eye corner
        image_points.push_back(Point2d(dets[idx].part(48).x(), dets[idx].part(48).y())); // left mouth corner
        image_points.push_back(Point2d(dets[idx].part(54).x(), dets[idx].part(54).y())); // right mouth corner
    }
    double focal_length = dst.cols;
    Point2d center = cv::Point2d(dst.cols / 2.00, dst.rows / 2.00);
    cv::Mat camera_matrix = (cv::Mat_<double>(3, 3) << focal_length, 0, center.x,
                                                       0, focal_length, center.y,
                                                       0, 0, 1);
    cv::Mat dist_coeffs = cv::Mat::zeros(4, 1, cv::DataType<double>::type);
    cv::Mat rotation_vector; // rotation in axis-angle form
    cv::Mat translation_vector;
    cv::Mat rotCamerMatrix1;
    if (l != 0)
    {
        // 3D model points of a generic head
        model_points.push_back(cv::Point3d(0.0f, 0.0f, 0.0f));          // nose tip
        model_points.push_back(cv::Point3d(0.0f, -330.0f, -65.0f));     // chin
        model_points.push_back(cv::Point3d(-225.0f, 170.0f, -135.0f));  // left eye corner
        model_points.push_back(cv::Point3d(225.0f, 170.0f, -135.0f));   // right eye corner
        model_points.push_back(cv::Point3d(-150.0f, -150.0f, -125.0f)); // left mouth corner
        model_points.push_back(cv::Point3d(150.0f, -150.0f, -125.0f));  // right mouth corner
        cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
        Rodrigues(rotation_vector, rotCamerMatrix1);
        Vec3d eulerAngles;
        getEulerAngles(rotCamerMatrix1, eulerAngles);
        yaw   = eulerAngles[1];
        pitch = eulerAngles[0];
        roll  = eulerAngles[2];
        /* My problem begins here. I don't know how to set a threshold value for pitch
           so that I can say a value below or above the pitch is a head nod! */
    }
    return 0;
}
First you have to understand how the 3 angles are used in a 3D context. Basically they rotate an object in 3D space with respect to an origin (the origin can change depending on the context), but how are the 3 angles applied?
That question can be expressed as: in which order do they rotate the object? Applying yaw, then pitch, then roll may give a different orientation than applying them in the opposite order. Having said that, you must understand what those values represent in order to know what to do with them.
Now, you ask what would be a good threshold. Well, it depends. On what? On the order in which the angles are applied. For instance, if you first apply pitch with 45 degrees so the head looks down, and then apply a roll of 180 degrees, the head ends up looking up, so it is hard to define a threshold in isolation.
Since you have your model points, you can create a 3D rotation matrix, apply it to them with different pitch angles (the rest of the angles will be 0, so the order will not matter here), visualize the results, and choose the value you consider a nod. This is a little subjective, so you should be the one doing it.
Now, to the second question. The answer again is: it depends. On what this time, you may ask? Simple: is your system left-handed or right-handed? In one the rotation is applied clockwise, in the other counterclockwise, and the negative sign changes the direction. So in a left-handed system a positive angle is clockwise and a negative angle counterclockwise; in a right-handed system a positive angle is counterclockwise and a negative angle clockwise.
As a suggestion, you can make a vector (0, 0, -1), assuming your z axis looks towards the back. Then apply the rotation to it, project it onto a 2D plane parallel to the z axis, take the tip of the vector, and measure the angle there. This way you are sure of what you are getting.
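A minimal OpenCV sketch of that suggestion (the function name and the forward-axis convention are illustrative assumptions): rotate a reference forward vector by the matrix recovered from solvePnP/Rodrigues, then measure its elevation in the Y-Z plane.
#include <opencv2/core.hpp>
#include <cmath>

// Elevation (degrees) of the head's forward direction, given the 3x3
// rotation matrix from Rodrigues. 0 means level; the sign tells up vs down.
double forwardElevationDegrees(const cv::Mat& rotMatrix)
{
    cv::Mat forward = (cv::Mat_<double>(3, 1) << 0, 0, -1); // assumed forward axis
    cv::Mat r = rotMatrix * forward;                        // rotated reference vector
    double y = r.at<double>(1);
    double z = r.at<double>(2);
    return std::atan2(y, -z) * 180.0 / CV_PI; // angle of the tip in the Y-Z plane
}
A nod could then be flagged when this elevation dips below a threshold you pick by visualizing a few known poses.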

Using atan2 to face an object towards its destination

I've created an object that moves towards its destination with inertia. I am having a lot of trouble getting the object to face its destination. My code is simple: it calculates the angle, converts it to degrees, and passes that angle to the Matrix4 Rotate function, which adjusts the localTransform (scene graph).
The problem is that the object spawns and then just rotates endlessly. It slowly progresses towards its target, but keeps spinning. I've tested it without translation: it spins on the spot regardless. All I need is for the object to face its destination. My Translate/Rotate functions work correctly; I've used them to rotate an object, and to have an object spawn with its parent's rotation and head in that direction. They produce results identical to the GLM library.
I've tried swapping the argument order in atan2, removing the degrees conversion (though that does nothing; the Rotate function takes degrees), and swapping the translation/rotation order.
localTransform is the combined rotation/scale/translation matrix. data[3][1] is the Y position and data[3][0] is the X position.
float fAngle = atan2(v3Destination[1] - localTransform.data[3][1] , v3Destination[0] - localTransform.data[3][0]);
float fAngleDegrees = fAngle * 180 / PI;
localTransform = Matrix4::Rotate(localTransform, fAngleDegrees, Vector3(0.0f, 0.0f, 1.0f));
Vector3 Movement;
Movement[0] = v3Destination[0] - localTransform.data[3][0];
Movement[1] = v3Destination[1] - localTransform.data[3][1];
Movement = Movement * fSpeed * Application.GetTimeStep();
localTransform = Matrix4::Translate(localTransform, Movement);
Any advice on how to handle this? This is all in 2D coordinates; the rotation is done on the Z axis.
Just a hunch, but is the localTransform matrix completely recomputed each time step? Or could you be adding a rotation to a matrix that has already been rotated in the previous iteration? That would explain the continuous rotation.
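A minimal sketch of that fix, under the assumption that the position is stored separately (v3Position here is hypothetical) and that a default-constructed Matrix4 is the identity; only Rotate and Translate appear in the question:
// Rebuild localTransform from scratch each frame, so the absolute angle
// from atan2 is applied exactly once instead of accumulating.
float fAngle = atan2(v3Destination[1] - v3Position[1],
                     v3Destination[0] - v3Position[0]);
float fAngleDegrees = fAngle * 180 / PI;

localTransform = Matrix4();                        // reset to identity (assumed ctor)
localTransform = Matrix4::Translate(localTransform, v3Position);
localTransform = Matrix4::Rotate(localTransform, fAngleDegrees,
                                 Vector3(0.0f, 0.0f, 1.0f));
Because atan2 already returns the absolute facing angle, applying it to an identity matrix each frame orients the object directly, with no accumulation to spin it.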

First Person Camera movement issues

I'm implementing a first-person camera using the GLM library, which provides some useful functions for calculating perspective and 'lookAt' matrices. I'm also using OpenGL, but that shouldn't make a difference in this code.
Basically, what I'm experiencing is that I can look around, much like in a regular FPS, and move around. But the movement is constrained to the three axes in such a way that if I rotate the camera, I still move in the same direction as if I had not rotated it... Let me illustrate (in 2D, to simplify things).
In this image, you can see four camera positions.
Those marked with a one are before movement; those marked with a two are after movement.
The red triangles represent a camera oriented straight forward along the z axis. The blue triangles represent a camera that has been rotated to look backward along the x axis (to the left).
When I press the 'forward movement' key, the camera moves forward along the z axis in both cases, disregarding the camera orientation.
What I want is a more FPS-like behaviour, where pressing forward moves me in the direction the camera is facing. I thought that the arguments I pass to glm::lookAt would achieve this. Apparently not.
What is wrong with my calculations?
// Calculate the camera's orientation
float angleHori = -mMouseSpeed * Mouse::x; // Note that (0, 0) is the center of the screen
float angleVert = -mMouseSpeed * Mouse::y;
glm::vec3 dir(
cos(angleVert) * sin(angleHori),
sin(angleVert),
cos(angleVert) * cos(angleHori)
);
glm::vec3 right(
sin(angleHori - M_PI / 2.0f),
0.0f,
cos(angleHori - M_PI / 2.0f)
);
glm::vec3 up = glm::cross(right, dir);
// Calculate projection and view matrix
glm::mat4 projMatrix = glm::perspective(mFOV, mViewPortSizeX / (float)mViewPortSizeY, mZNear, mZFar);
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);
glm::lookAt takes 3 parameters: eye, centre and up. The first two are positions while the last is a vector. If you're planning on using this function, it's best to maintain only these three parameters consistently.
Coming to the issue with the calculation: I see that the position variable is unchanged throughout the code. All that changes is the look-at point, i.e. the centre. The right thing to do is to first do position += dir, which moves the camera (position) along the direction pointed to by dir. The second parameter can then be left as-is, position + dir; this works because position was already updated, and from there position + dir is a point farther along dir to look at, as in the sketch below.
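In code, that suggestion amounts to something like this (mSpeed is assumed to be a scalar movement speed; dir and up come from the question's snippet):
// Move along the current view direction, then look farther along it.
mPosition += dir * mSpeed;
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);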
The issue was actually in another method. When moving the camera, I needed to do this:
void Camera::moveX(char s)
{
    mPosition += s * mSpeed * mRight;
}
void Camera::moveY(char s)
{
    mPosition += s * mSpeed * mUp;
}
void Camera::moveZ(char s)
{
    mPosition += s * mSpeed * mDirection;
}
to make the camera move along the correct axes.
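For completeness, a sketch of how the movement basis used above can be kept in sync with the orientation math from the question (the member names mDirection, mRight, mUp mirror the snippets; the update function itself is illustrative):
#include <glm/glm.hpp>
#include <cmath>

void Camera::updateOrientation(float angleHori, float angleVert)
{
    // Same spherical-coordinate basis as the question's dir/right/up.
    mDirection = glm::vec3(cos(angleVert) * sin(angleHori),
                           sin(angleVert),
                           cos(angleVert) * cos(angleHori));
    mRight = glm::vec3(sin(angleHori - M_PI / 2.0f),
                       0.0f,
                       cos(angleHori - M_PI / 2.0f));
    mUp = glm::cross(mRight, mDirection);
}
Recomputing the basis whenever the mouse angles change means the move functions always translate along the camera's current axes.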

OpenGL - Rotating around a sphere using vectors and NOT gluLookAt

I'm having an issue with drawing a model and rotating it using the mouse. I'm pretty sure there's a problem with the mathematics, but I'm not sure where. The object just rotates in a weird way.
I want the object to keep rotating from its current orientation on each new click, not reset; but since the begin/current vectors change with every click, the calculation starts all over again.
void DrawHandler::drawModel(Model* model)
{
    unsigned int l_index;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // modeling transformation
    glLoadIdentity();
    Point tempCross;
    crossProduct(tempCross, model->getBeginRotate(), model->getCurrRotate());
    float tempInner = innerProduct(model->getBeginRotate(), model->getCurrRotate());
    float tempNormA = normProduct(model->getBeginRotate());
    float tempNormB = normProduct(model->getCurrRotate());
    glTranslatef(0.0, 0.0, -250.0);
    glRotatef(acos(tempInner / (tempNormA * tempNormB)) * 180.0 / M_PI,
              tempCross.getX(), tempCross.getY(), tempCross.getZ());
    glColor3d(1, 1, 1);
    glBegin(GL_TRIANGLES);
    for (l_index = 0; l_index < model->getTrianglesDequeSize(); l_index++)
    {
        Triangle t = model->getTriangleByPosition(l_index);
        Vertex a1 = model->getVertexByPosition(t.getA());
        Vertex a2 = model->getVertexByPosition(t.getB());
        Vertex a3 = model->getVertexByPosition(t.getC());
        glVertex3f(a1.getX(), a1.getY(), a1.getZ());
        glVertex3f(a2.getX(), a2.getY(), a2.getZ());
        glVertex3f(a3.getX(), a3.getY(), a3.getZ());
    }
    glEnd();
}
This is the mouse function, which saves the beginning vector for the rotation formula:
void Controller::mouse(int btn, int state, int x, int y)
{
    x = x - WINSIZEX / 2;
    y = y - WINSIZEY / 2;
    if (btn == GLUT_LEFT_BUTTON) {
        switch (state) {
        case GLUT_DOWN:
            if (!_rotating) {
                // z = sqrt(r^2 - x^2 - y^2), clamped to 0 outside the sphere
                _model->setBeginRotate(Point(float(x), float(y),
                    (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0
                    : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
                _rotating = true;
            }
            break;
        case GLUT_UP:
            _rotating = false;
            break;
        }
    }
}
And finally the following function, which keeps the current vector up to date (the beginning vector is where the mouse was clicked, and the current vector is where the mouse is at the moment):
void Controller::getMousePosition(int x, int y)
{
    x = x - WINSIZEX / 2;
    y = y - WINSIZEY / 2;
    if (_rotating) {
        _model->setCurrRotate(Point(float(x), float(y),
            (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0
            : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
    }
}
where SPHERERADIUS is the sphere radius (70).
Is any calculation wrong? I can't seem to find the problem.
Thanks
Why so complicated? Either you change the view matrix or you change the model matrix of your focused object. If you choose to change the model matrix, and your object is centered at (0,0,0) of the world coordinate system, computing the rotation-around-a-sphere illusion is trivial: you just rotate in the opposite direction. If you want to change the view matrix (which is what actually happens when you move the camera), you have to compute the surface points on the chosen sphere. For that you could introduce two parameters specifying two angles. Every time you move the mouse while dragging, you update the angles and compute the new location on the sphere. There are some useful equations at http://en.wikipedia.org/wiki/Sphere.
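A minimal sketch of that two-angle idea (all names are illustrative):
#include <cmath>

// Camera position on a sphere of radius r around the origin,
// parameterized by azimuth (theta) and elevation (phi).
void sphericalCameraPosition(float r, float theta, float phi,
                             float& x, float& y, float& z)
{
    x = r * std::cos(phi) * std::sin(theta);
    y = r * std::sin(phi);
    z = r * std::cos(phi) * std::cos(theta);
}
Mouse deltas map directly onto theta and phi, and the camera is pointed back at the origin each frame.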
Without knowing what library (or libraries) you're using, your code is rather difficult to read. It seems you're setting up your camera at (0, 0, -250), looking towards the origin, then rotating around the origin by the angle between two vectors, model->getCurrRotate() and model->getBeginRotate().
The problem seems to be that in "mouse down" events you explicitly set BeginRotate to the point on the sphere under the mouse, then in "mouse move" events you set CurrRotate to the point under the mouse. So every time you click somewhere else, you lose the previous state of rotation, because BeginRotate and CurrRotate are simply overwritten.
Combining multiple rotations around arbitrary, different axes is not a trivially simple task. The proper way to do it is with quaternions. You may find this primer on quaternions and other 3D math concepts useful.
You might also want a more robust algorithm for converting screen coordinates to model coordinates on the sphere. The one you are using assumes the sphere appears 70 pixels in radius on the screen and that the projection matrix is orthographic.
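A minimal quaternion sketch of the accumulation idea (GLM is used here only for brevity; all names are illustrative): keep a persistent orientation and compose each drag's incremental rotation into it, so a new click no longer resets the accumulated rotation.
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <cmath>

glm::quat orientation(1.0f, 0.0f, 0.0f, 0.0f); // persistent identity orientation (w, x, y, z)

// Called on each mouse move while dragging; 'from' and 'to' are unit
// vectors on the virtual sphere (the begin and current rotate vectors).
void applyDrag(const glm::vec3& from, const glm::vec3& to)
{
    glm::vec3 axis = glm::cross(from, to);
    if (glm::length(axis) < 1e-6f) return;                    // no rotation this frame
    float angle = std::acos(glm::clamp(glm::dot(from, to), -1.0f, 1.0f));
    glm::quat drag = glm::angleAxis(angle, glm::normalize(axis));
    orientation = drag * orientation;                         // accumulate, don't overwrite
}

// When rendering: glm::mat4 model = glm::mat4_cast(orientation);
After applying the drag, set the begin vector to the current one, so the next mouse motion continues from where this one ended instead of starting over.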