Skeletal animation, transformation multiplication - OpenGL

I have implemented a skeletal animation system where I seem to be missing one last detail for it to work properly.
I have made an animation in which only part of the character has bones. In the image, the stickman has a waving arm, but the arm waves at the wrong place compared to the rest of the stickman (you can barely see it between his legs).
I will try to outline the basics of my matrix computation to see if I am doing something wrong.
Computation of the bone-specific absolute and relative animation matrices (based on my keyframe matrix data):
if (b == this->root) {
    b->absoluteMatrix = M;
} else {
    b->absoluteMatrix = b->parent->absoluteMatrix * M;
}
b->skinningMatrix = b->absoluteMatrix * inverse(b->offsetMatrix);

if (this->currentAnimationTime == 0) {
    cout << "Bone '" << b->name << "' skinningMatrix:\n";
    printMatrix(b->skinningMatrix);
    cout << "Bone '" << b->name << "' absoluteMatrix:\n";
    printMatrix(b->absoluteMatrix);
    cout << "Bone '" << b->name << "' offsetMatrix:\n";
    printMatrix(b->offsetMatrix);
    cout << "---------------------------------\n";
}
skinningMatrix is the matrix I send to the GPU; offsetMatrix is a transform that goes from mesh space to bone space in the bind pose.
In my shader I then do:
layout(location = 0) in vec4 v; // normal vertex data
newVertex = (skinningMatrix[boneIndex.x] * v) * weights.x;
newVertex = (skinningMatrix[boneIndex.y] * v) * weights.y + newVertex;
newVertex = (skinningMatrix[boneIndex.z] * v) * weights.z + newVertex;
Any hints on what could be wrong with my computations?

I am currently working through skeletal animation myself, and the only thing I noticed which may be an issue is how you use the offset matrix from ASSIMP. The matrix in question is a matrix which "transforms from mesh space to bone space in bind pose".
To my knowledge this matrix is intended to be used as-is, which will essentially take your vertices into the bone's local space, which you will then multiply by a 'new' global joint pose which will take the vertices from bone space to model space.
When you invert the matrix, you are transforming the vertices into model space again, and then, with your current animation frame's global joint pose, pushing the vertices even further.
I believe your solution will be to remove the inverting of your offset matrix, which will result in your vertices moving from model space to joint space and back to model space.
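In terms of the code from the question, that would be a one-line change (a sketch only, reusing the question's member names):

// offsetMatrix already maps mesh space -> bone space in the bind pose, so it is used as-is;
// absoluteMatrix then takes the vertex from bone space back into model space.
b->skinningMatrix = b->absoluteMatrix * b->offsetMatrix;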

Related

Computation of the matrix inverse using the Eigen C++ library introduces noise

I have a publish-subscribe type of node that receives pose information (position and orientation) from the subscribed data stream, and it should compute the inverse and publish it out.
In order to do so I'm creating a 4-by-4 homogeneous transformation matrix from the original pose data, inverting it using the Eigen C++ template library, converting the transformation matrix back to position and orientation form, and publishing it.
When I plotted the published data stream I noticed some noise, so I ended up publishing the original data too for comparison. Here is what I did:
1. convert original_pose to TF matrix, named as original_TF
2. convert original_TF back to pose, named as original_pose_
3. publish original_pose_
4. inverse original_TF, assign to inverted_TF
5. convert inverted_TF to pose, named as inverted_pose_
6. publish inverted_pose_
When I plot the X, Y, Z position fields, I'm seeing a significant amount of noise (spikes and notches in the visual below) in the inverted pose data. Since I'm using the same functions to convert the original pose to TF and back, I know that those equations aren't the source of the noise.
Blue is the original, whereas red is the inverted.
Here is the code. Really nothing extraordinary.
bool inverse_matrix(std::vector<std::vector<double> > & input, std::vector<std::vector<double> > & output)
{
    // TODO: Currently only supports 4-by-4 matrices, I can make this configurable.
    // see https://eigen.tuxfamily.org/dox/group__TutorialMatrixClass.html
    Eigen::Matrix4d input_matrix;
    Eigen::Matrix4d output_matrix;
    Eigen::VectorXcd input_eivals;
    Eigen::VectorXcd output_eivals;

    input_matrix << input[0][0], input[0][1], input[0][2], input[0][3],
                    input[1][0], input[1][1], input[1][2], input[1][3],
                    input[2][0], input[2][1], input[2][2], input[2][3],
                    input[3][0], input[3][1], input[3][2], input[3][3];

    cout << "Here is the matrix input:\n" << input_matrix << endl;
    input_eivals = input_matrix.eigenvalues();
    cout << "The eigenvalues of the input_eivals are:" << endl << input_eivals << endl;

    if (input_matrix.determinant() == 0) { return false; }

    output_matrix = input_matrix.inverse();
    cout << "Here is the matrix output:\n" << output_matrix << endl;
    output_eivals = output_matrix.eigenvalues();
    cout << "The eigenvalues of the output_eivals are:" << endl << output_eivals << endl;

    // Copy output_matrix to output
    for (int i = 0; i < 16; ++i)
    {
        int in = i / 4;
        int im = i % 4;
        output[in][im] = output_matrix(in, im);
    }
    return true;
}
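For context, a minimal usage sketch of this function (my assumption: the caller must pre-size output to 4x4, since the function writes output[in][im] without resizing):

// Hypothetical call site matching the vector<vector<double>> storage mentioned in the comments below.
std::vector<std::vector<double> > original_TF(4, std::vector<double>(4, 0.0));
std::vector<std::vector<double> > inverted_TF(4, std::vector<double>(4, 0.0));
// ... fill original_TF from the incoming pose ...
bool isSuccess = inverse_matrix(original_TF, inverted_TF);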
-- Edit 1 --
I printed out the eigenvalues of the input and output matrices of the inverse_matrix function.
Here is the matrix input:
0.99916 -0.00155684 -0.0409514 0.505506
0.00342358 -0.992614 0.121267 0.19625
-0.0408377 -0.121305 -0.991775 1.64257
0 0 0 1
The eigenvalues of the input_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
Here is the matrix output:
0.99916 0.00342358 -0.0408377 -0.438674
-0.00155684 -0.992614 -0.121305 0.39484
-0.0409514 0.121267 -0.991775 1.62597
-0 -0 0 1
The eigenvalues of the output_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
-- Edit 2 --
I don't quite understand what you are plotting. Is it original_pose.{X,Y,Z} and inverted_pose.{X,Y,Z}? Then the "spikes" will really depend on the orientation-part of the matrix.
I am plotting original_pose_{position.x, position.y, position.z} and inverted_pose_{position.x, position.y, position.z} where the complete data that's published is <variable_name>{position.x, position.y, position.z, orientation.w, orientation.x, orientation.y, orientation.z}.
Can you elaborate on "the "spikes" will really depend on the orientation-part of the matrix."?
Also, how is your description related to the code-snippet? (I don't see any matching variable names).
I've identified that the source of the noise is the inversion, which is item number 4 in my description: inverse original_TF, assign to inverted_TF. To relate one to the other, I'm calling the function as follows:
isSuccess = inverse_matrix(original_TF, inverted_TF);
How do you store "poses" (is that the vector<vector> in your snippet)?
Yes, I'm storing them in 2-dimensional vectors of type double.
At any point, do you use Eigen::Transform to store transformations, or just plain Eigen::Matrix4d?
No, I'm only using Eigen::Matrix4d locally in the inverse_matrix function to be able to make use of the Eigen library for computation.
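For illustration, a minimal sketch of the Eigen::Transform route mentioned above (the values are placeholders): an Eigen::Isometry3d stores the same pose, and its inverse() exploits rigidity (R^T and -R^T * t) rather than performing a general 4x4 inversion.

#include <Eigen/Geometry>

void invert_pose_example()
{
    // Placeholder pose values (translation from the printed input matrix; orientation is arbitrary).
    Eigen::Isometry3d pose = Eigen::Isometry3d::Identity();
    pose.translation() = Eigen::Vector3d(0.505506, 0.19625, 1.64257);
    pose.linear() = Eigen::Quaterniond(1.0, 0.0, 0.0, 0.0).normalized().toRotationMatrix();

    // For an Isometry, inverse() uses R^T and -R^T * t instead of a general 4x4 inversion.
    Eigen::Isometry3d inverted = pose.inverse();
    Eigen::Quaterniond inv_q(inverted.linear());
    Eigen::Vector3d inv_t = inverted.translation();
    (void)inv_q; (void)inv_t;
}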

Skinning Animation - Weights Destroy Mesh

I am in the process of writing an animation system with my own Collada parser and am running into an issue that I can't wrap my head around.
I have collected my mesh/skin information (vertices, normals, jointIds, weights, etc), my skeleton information (joints, their local transforms, inverse bind position, hierarchy structure), and my animation (keyframe transform position for each joint, timestamp).
My issue is that with everything calculated and then implemented in the shader (the summation of weights multiplied by the joint transform and vertex position) - I get the following:
When I remove the weight multiplication, the mesh remains fully intact - however the skin doesn't actually follow the animation. I am at a loss, as I feel the math is correct, but very obviously I am going wrong somewhere. Would someone be able to shine a light on the aspect I have misinterpreted?
Here is my current understanding and implementation:
After collecting all of the joints' localTransforms and the hierarchy, I calculate their inverse bind transformation matrices. To do this I multiply each joint's localTransform with its parent's localTransform to get a bindTransform. Inverting that bindTransform results in its inverseBindTransform. Below is my code for that:
// Recursively collect each Joints InverseBindTransform -
// root joint's local position is an identity matrix.
// Function is only called once after data collection.
void Joint::CalcInverseBindTransform(glm::mat4 parentLocalPosition)
{
    glm::mat4 bindTransform = parentLocalPosition * m_LocalTransform;
    m_InverseBindPoseMatrix = glm::inverse(bindTransform);
    for (Joint child : Children) {
        child.CalcInverseBindTransform(bindTransform);
    }
}
Within my animator, during an animation, for each joint I take the two JointTransforms for the two frames my currentTime is in between and I calculate the interpolated JointTransform. (A JointTransform simply has a vec3 for position and a quaternion for rotation.) I do this for every joint and then apply those interpolated values to each joint by again recursively multiplying the new frameLocalTransform by its parentLocalTransform. I take that bindTransform, multiply it by the invBindTransform, and then transpose the matrix. Below is the code for that:
std::unordered_map<int, glm::mat4> Animator::InterpolatePoses(float time) {
    std::unordered_map<int, glm::mat4> poses;
    if (IsPlaying()) {
        for (std::pair<int, JointTransform> keyframe : m_PreviousFrame.GetJointKeyFrames()) {
            JointTransform previousFrame = m_PreviousFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform nextFrame = m_NextFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform interpolated = JointTransform::Interpolate(previousFrame, nextFrame, time);
            poses[keyframe.first] = interpolated.getLocalTransform();
        }
    }
    return poses;
}

void Animator::ApplyPosesToJoints(std::unordered_map<int, glm::mat4> newPose, Joint* j, glm::mat4 parentTransform)
{
    if (IsPlaying()) {
        glm::mat4 currentPose = newPose[j->GetJointId()];
        glm::mat4 modelSpaceJoint = parentTransform * currentPose;
        for (Joint child : j->GetChildren()) {
            ApplyPosesToJoints(newPose, &child, modelSpaceJoint);
        }
        modelSpaceJoint = glm::transpose(j->GetInvBindPosition() * modelSpaceJoint);
        j->SetAnimationTransform(modelSpaceJoint);
    }
}
I then collect all the newly AnimatedTransforms for each joint and send them to the shader:
void AnimationModel::Render(bool& pass)
{
    [...]
    std::vector<glm::mat4> transforms = GetJointTransforms();
    for (int i = 0; i < transforms.size(); ++i) {
        m_Shader->SetMat4f(transforms[i], ("JointTransforms[" + std::to_string(i) + "]").c_str());
    }
    [...]
}

void AnimationModel::AddJointsToArray(Joint current, std::vector<glm::mat4>& matrix)
{
    glm::mat4 finalMatrix = current.GetAnimatedTransform();
    matrix.push_back(finalMatrix);
    for (Joint child : current.GetChildren()) {
        AddJointsToArray(child, matrix);
    }
}
In the shader, I simply follow the summation formula that can be found all over the web when researching this topic:
for (int i = 0; i < total_weight_amnt; ++i) {
    mat4 jointTransform = JointTransforms[jointIds[i]];
    vec4 newVertexPos = jointTransform * vec4(pos, 1.0);
    total_pos += newVertexPos * weights[i];
    [...]
---------- Reply to Normalizing Weights ------------
There were a few weights summing above 1, but after solving the error in my code the model looked like this:
For calculating the weights, I loop through all pre-added weights in the vector, and if I find a weight that is less than the weight I'm looking to add, I replace the weight in that position. Otherwise, I append the weight onto the end of the vector. If there are fewer weights in my vector than my specified max_weights (which is 4), I fill in the remaining weights/jointIds with 0.
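Since the reply above is about normalizing weights, here is a minimal sketch of padding to max_weights and renormalizing so each vertex's weights sum to 1 (the VertexInfluence type and function name are mine, not from the original code):

#include <vector>
#include <cstddef>

struct VertexInfluence {
    std::vector<int>   jointIds;
    std::vector<float> weights;
};

// Pad a vertex's influences to maxWeights entries and renormalize so the
// weights sum to 1 (a vertex with no influences is left untouched).
void NormalizeInfluences(VertexInfluence& v, std::size_t maxWeights = 4)
{
    while (v.weights.size() < maxWeights) {
        v.jointIds.push_back(0);
        v.weights.push_back(0.0f);
    }
    float sum = 0.0f;
    for (float w : v.weights) sum += w;
    if (sum > 0.0f) {
        for (float& w : v.weights) w /= sum;
    }
}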
I understand that when something is going wrong in skinning animations, there can be a lot of different areas the problem is occurring in. As such, for future googlers experiencing the same issue I was - take this as more of a list of suggestions of what you could be doing wrong, rather than things you are absolutely doing wrong.
For my problem - I had the right idea but the wrong approach in a lot of minor areas, which brought me fairly close but, as they say, no cigar.
I had no need to calculate the Inverse Bind Pose myself; Collada's Inverse Bind Pose (sometimes/often declared as an "offsetMatrix") is more than perfect. This wasn't a problem so much as me doing unnecessary calculations.
In a Collada file, they often provide you more "joints" or "nodes" in the hierarchy than what is needed for the animation. Prior to the start of your actual animated "joints", there are the scene and an initial armature "node" type. The scene is typically an identity matrix that was manipulated based on your "up axis" upon reading in the Collada file. The node type will determine the overall size of each joint in the skeleton - so if it wasn't resized, it's probably the identity matrix. Make sure your hierarchy still contains ALL nodes/joints listed in the hierarchy. I very much was not doing so - which greatly distorted my globalPosition (bind pose).
If you are representing your joints' rotations with quaternions (which is highly recommended), make sure the resulting quaternion is normalized after interpolating between two rotated positions (see the sketch after these points).
On the same note - when combining the rotation and translation into your final matrix - make sure your order of multiplication and the final output are correct.
Finally - your last skinning matrix is comprised of your joint's InvBindMatrix * GlobalPosition * GlobalInverseRootTransform (<- this is the inverse of the local transform from your "scene" node mentioned in (1), remember?).
Based on your prior matrix multiplications up to this point, you may or may not need to transpose this final matrix.
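A minimal GLM sketch of the quaternion-normalization and rotation/translation-combination points above (variable names are mine; the translate-then-rotate product assumes GLM's usual column-vector convention):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Interpolate between two keyframe rotations and renormalize the result.
glm::quat InterpolateRotation(const glm::quat& a, const glm::quat& b, float t)
{
    return glm::normalize(glm::slerp(a, b, t));
}

// Combine an interpolated translation and rotation into a local joint transform.
glm::mat4 BuildLocalTransform(const glm::vec3& position, const glm::quat& rotation)
{
    return glm::translate(glm::mat4(1.0f), position) * glm::mat4_cast(rotation);
}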
And with that - I was able to successfully animate my model!
One final note - my mesh and animation files are added in separately. If your animations are in separate files from your mesh, make sure you collect the skinning/joint information from the files with an animation rather than the file with the mesh. I list my steps for loading in a model and then giving it multiple animations through different files:
1. Load in the mesh (this contains Vertices, Normals, TexCoords, JointIds, Weights).
2. Load in the animation file (this gives the Skeleton, InverseBindPositions, and other info needed to bind the skeleton to the mesh). Once the skeleton and binding info are collected, gather the first animation's info from that file as well.
3. For another animation, the above skeleton should work fine for any other animation on the same mesh/model - just read in the animation information and store it in your chosen data structure. Repeat step 3 until happy.

Space carving of tetrahedra [duplicate]

I have the following problem as shown in the figure. I have a point cloud and a mesh generated by a tetrahedral algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?
Pseudo code of the algorithm:
for every 3D feature point
convert it 2D projected coordinates
for every 2D feature point
cast a ray toward the polygons of the mesh
get intersection point
if zintersection < z of 3D feature point
for ( every triangle vertices )
cull that triangle.
Here is a follow-up implementation of the algorithm mentioned by the Guru Spektre :)
Update code for the algorithm:
int i;
for (i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector3 ray_pos = pos; // camera position
    Ogre::Vector3 ray_dir = (Ogre::Vector3(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy(); // vertex - camera pos
    Ogre::Ray ray;
    ray.setOrigin(Ogre::Vector3(ray_pos.x, ray_pos.y, ray_pos.z));
    ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
    Ogre::Vector3 result;
    unsigned int u1;
    unsigned int u2;
    unsigned int u3;
    bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
    if (rayCastResult)
    {
        Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
        float distanceTargetFocus = targetVertex.squaredDistance(pos);
        float distanceIntersectionFocus = result.squaredDistance(pos);
        if (abs(distanceTargetFocus) >= abs(distanceIntersectionFocus))
        {
            if (u1 != -1 && u2 != -1 && u3 != -1)
            {
                std::cout << "Remove index " << "u1 ==> " << u1 << " u2 ==> " << u2 << " u3 ==> " << u3 << std::endl;
                updatedIndices.erase(updatedIndices.begin() + u1);
                updatedIndices.erase(updatedIndices.begin() + u2);
                updatedIndices.erase(updatedIndices.begin() + u3);
            }
        }
    }
}

if (updatedIndices.size() <= out.numberoftrifaces)
{
    std::cout << "current face list===> " << out.numberoftrifaces << std::endl;
    std::cout << "deleted face list===> " << updatedIndices.size() << std::endl;
    manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
    for (int n = 0; n < out.numberofpoints; n++)
    {
        Ogre::Vector3 vertexTransformed = Ogre::Vector3(out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
        vertexTransformed *= 1000.0;
        vertexTransformed = mDeltaYaw * vertexTransformed;
        manual->position(vertexTransformed);
    }
    for (int n = 0; n < updatedIndices.size(); n++)
    {
        int n0 = updatedIndices[n+0];
        int n1 = updatedIndices[n+1];
        int n2 = updatedIndices[n+2];
        if (n0 < 0 || n1 < 0 || n2 < 0)
        {
            std::cout << "negative indices" << std::endl;
            break;
        }
        manual->triangle(n0, n1, n2);
    }
    manual->end();
Follow-up with the algorithm:
I now have two versions: one is the triangulated one and the other is the carved version.
It's not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you got an image from a camera with known matrix, FOV, and focal length.
From that you know exactly where the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel, and the focal point lie on the same line.
So for each view cast a ray from the focal point to each visible vertex of the point cloud, and test if any face of the mesh is hit before hitting the face containing the target vertex. If yes, remove it, as it would block the visibility.
Landmark in this context is just a feature point corresponding to a vertex from the point cloud. It can be anything detectable (change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as that is the input for point cloud generation. If not, you can peek at the pixel corresponding to each vertex and test for background color.
Not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density points or correspondence to a planar face...), or the algorithm is changed somehow for finding unused vertices inside the mesh.
To cast a ray do this:
ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);
where tm_eye is the camera direct transform matrix, and imgx,imgy is the 2D pixel position in the image normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and the aspect ratio is the ratio of the image resolution image_ys/image_xs.
Ray triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
vec3 v0, v1, v2;        // input triangle vertexes
vec3 e1, e2, n, p, q, r;
float t, u, v, det, idet;

// compute ray triangle intersection
e1 = v1 - v0;
e2 = v2 - v0;
// Calculate planes normal vector
p = cross(ray[i0].dir, e2);
det = dot(e1, p);
// Ray is parallel to plane
if (abs(det) < 1e-8) no intersection;
idet = 1.0 / det;
r = ray[i0].pos - v0;
u = dot(r, p) * idet;
if ((u < 0.0) || (u > 1.0)) no intersection;
q = cross(r, e1);
v = dot(ray[i0].dir, q) * idet;
if ((v < 0.0) || (u + v > 1.0)) no intersection;
t = dot(e2, q) * idet;
if ((t > _zero) && (t <= tt)) // tt is distance to target vertex
{
    // intersection
}
Follow ups:
To move between normalized image coordinates (imgx,imgy) and raw image coordinates (rawx,rawy) for an image of size (imgxs,imgys), where (0,0) is the top left corner and (imgxs-1,imgys-1) is the bottom right corner, you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
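The same conversions as a small self-contained C++ sketch (the function names are mine):

// Hypothetical helpers implementing the formulas above.
// Normalized coords are in <-1,+1> with (0,0) at the image center, +y up;
// raw coords are pixel indices with (0,0) at the top-left corner.
void RawToNormalized(double rawx, double rawy, int imgxs, int imgys, double& imgx, double& imgy)
{
    imgx = (2.0 * rawx / (imgxs - 1)) - 1.0;
    imgy = 1.0 - (2.0 * rawy / (imgys - 1));
}

void NormalizedToRaw(double imgx, double imgy, int imgxs, int imgys, double& rawx, double& rawy)
{
    rawx = (imgx + 1.0) * (imgxs - 1) / 2.0;
    rawy = (1.0 - imgy) * (imgys - 1) / 2.0;
}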
[progress update 1]
I finally got to the point where I can compile sample test input data for this to even get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and point cloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you got. The second part of the app will use only these images and matrices with the point cloud (no mesh surface anymore), tetragonize it (already finished), and then just cast a ray through each landmark in each view (aqua dot) and remove all tetragonals before the target vertex in the point cloud is hit (this part is not even started yet, maybe on the weekend)... And lastly store only the surface triangles (easy, just use all triangles which are used just once; also already finished except the save part, but writing a wavefront obj from it is easy...).
[Progress update 2]
I added landmark detection and matching with the point cloud.
As you can see, only valid rays are cast (those that are visible in the image), so some points in the point cloud do not cast rays (singular aqua dots). So now just the ray/triangle intersection and tetrahedron removal from the list is what is missing...

Apply rotation to Eigen::Affine3f

I'm using an Eigen::Affine3f to represent a camera matrix. (I've already figured out how to setup the view matrix/Affine3f from an initial "lookAt" and "up" vector)
Now, I want to support changing the camera's orientation. Simple question: what's the best way to apply rotations to this Affine3f, i.e. pitch, yaw, roll?
It's quite simple using the built-in functionality. You can use an AngleAxis object to multiply the existing Affine3f. Just note that the axis needs to be normalized:
Vector3f rotationAxis;
rotationAxis.setRandom(); // I don't really care, you determine the axis
rotationAxis.normalize(); // This is important, don't forget it
Affine3f randomAffine3f, rotatedAffine;
// Whatever was left in memory in my case,
// whatever your transformation is in yours
std::cout << randomAffine3f.matrix() << std::endl;
// We'll now apply a rotation of 0.256*M_PI around the rotationAxis
rotatedAffine = (AngleAxisf(0.256*M_PI, rotationAxis) * randomAffine3f);
std::cout << rotatedAffine.matrix() << std::endl; // Ta dum!!
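Applied to the question's pitch/yaw/roll phrasing, a minimal sketch (assuming rotations about the camera's local X/Y/Z axes; the axis assignments and order are my assumptions, so adjust them to your own convention):

#include <Eigen/Geometry>
using namespace Eigen;

// Post-multiplying rotates about the camera's local axes;
// pre-multiplying would rotate about the world axes instead.
Affine3f applyYawPitchRoll(const Affine3f& cam, float yaw, float pitch, float roll)
{
    return cam * AngleAxisf(yaw,   Vector3f::UnitY())
               * AngleAxisf(pitch, Vector3f::UnitX())
               * AngleAxisf(roll,  Vector3f::UnitZ());
}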

How to get local rotation from a reference 'zero' quaternion and the global rotation quaternion

I am displaying characters on a screen connected to a flystick (3D tracking object). My goal is to move the characters according to device input.
I noticed that the 'zero' of the device (corresponding to the default position) does not correspond to an unrotated quaternion for my characters. When moving the device, my character rotates around some other axes (probably the world's) than the device's. So I added a synchronising function which registers the quaternion rotation of my device when it should be at the default position, but now I have no idea how to combine this reference quaternion with the actual quaternion I receive from the device when it moves, in order to rotate my character as I intend.
Here is how I am using the quaternions with OpenGL:
glm::quat rotation = QAccumulative.getQuat();
glm::mat4 matrix_rotation = glm::mat4_cast(rotation);
object_transform *= matrix_rotation;
Model *= object_transform;
glm::mat4 MVP = Projection * View * Model;
It works fine with keyboard and mouse if I rotate my objects with this method:
rotate(float angleX, float angleY, float angleZ) {
    Quaternion worldRotationx(1.0, 0, 0, angleZ);
    Quaternion worldRotationy(0, 1.0, 0, angleX);
    Quaternion worldRotationz(0, 0, 1.0, angleY);
    QAccumulative = worldRotationx * worldRotationy * worldRotationz * QAccumulative;
    QAccumulative.normalise();
}
Here is the main loop, where I compute the data from the tracking device:
void VRPN_CALLBACK handle_tracker(void* userData, const vrpn_TRACKERCB t) {
    Vrpn_tracker* current = Vrpn_tracker::current_vrpn_device;
    if (current->calibrating == true) {
        current->reference = Quaternion(t.quat[0], t.quat[1], t.quat[2], t.quat[3]);
        current->reference.normalise();
    } else {
        // translation:
        current->newPosition = sf::Vector3f(t.pos[0], t.pos[1], t.pos[2]);
        current->world->current_object->translate(
            current->newPosition.x - current->oldPosition.x,
            current->newPosition.y - current->oldPosition.y,
            current->newPosition.z - current->oldPosition.z);
        current->oldPosition = current->newPosition;
        // rotation:
        current->newQuat = Quaternion(t.quat[0], t.quat[1], t.quat[2], t.quat[3]);
        current->newQuat.normalise();
        current->world->current_object->QAccumulative =
            (current->reference.getConjugate() * current->newQuat);
        Quaternion result = current->world->current_object->QAccumulative;
        cout << "result? : " << result.x << result.y << result.z << result.w << endl;
        // prints something like:
        // x : -0.00207066 y : 0.00186546 z : -0.00165524 w : 0.999995
        // when position of flystick = default position
    }
}
The simple test I do to check the code: I start my program without moving the flystick (the reference quaternion gets captured), so after the synchronisation the character should be in the default position (not moved), as I haven't touched the flystick.
I did try multiplying my reference by the quaternion received, but it seems my character moves according to local axes.
If someone could shed some light on how I can get the rotation around the global axes from these two quaternions, it would be great.
There are different calibrations possible for the flystick that may correspond to what you are expecting; however, I suggest you read the documentation about the flystick carefully, your answer must be there.
If you are using ART tracking, page 124 of the User manual will help you.
Change the multiplication order. If you rotate first, then do it last; if last, then do it first.
The hidden problem with your question is that we don't know your transformation notation.
Your rotation can transform from the world frame to the object frame or from object to world. Depending on the notation, the exact transformation will be different.
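A minimal GLM sketch of that point (illustrative only; which of the two products is right depends on whether your quaternions map device-local vectors into world space or the reverse):

#include <glm/gtc/quaternion.hpp>

// qRef: device orientation captured at the 'zero' pose; qNow: current device orientation.
glm::quat relativeRotation(const glm::quat& qRef, const glm::quat& qNow, bool inWorldFrame)
{
    // The relative rotation can be formed on either side of the conjugate:
    //   qNow * conjugate(qRef)  -> delta expressed in the world/global frame
    //   conjugate(qRef) * qNow  -> delta expressed in the reference's local frame
    // (assuming the quaternions map device-local vectors into world space).
    glm::quat delta = inWorldFrame ? qNow * glm::conjugate(qRef)
                                   : glm::conjugate(qRef) * qNow;
    return glm::normalize(delta);
}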