Bones rotate around parent - OpenGL Animation/Skinning - c++

I am having an issue with my bone skinning where, instead of my bones rotating around their local origin, they rotate around their parent. Left: my engine. Right: Blender.
There are two locations in the pipeline that I suspect are at fault, but I can't tell exactly where. First, I have this bit of code that actually calculates the bone matrices:
void CalculateBoneTransform(DSkeletonBone sb, dmat4x4f parentTransform, std::vector<DSkeletonBone> boneOverrides)
{
    if (boneDataIndex.count(sb.name) == 0) return;

    Transform transform = sb.localTransform;
    for (auto& bo : boneOverrides)
    {
        if (bo.name == sb.name)
        {
            transform = bo.localTransform;
        }
    }

    dmat4x4f globalTransform = dmat4x4f::Mul(parentTransform, transform.GetMatrix());
    dmat4x4f finalTransform = dmat4x4f::Mul(globalTransform, sb.inverseTransform);
    finalTransforms[boneDataIndex[sb.name]] = finalTransform;

    for (auto& pbC : sb.children)
    {
        CalculateBoneTransform(pbC, globalTransform, boneOverrides);
    }
}
The boneOverrides come from anything that would like to override a bone (such as an animation).
finalTransforms is the array that is pushed to the shader.
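For reference, this is roughly how the palette gets pushed; a minimal sketch assuming the matrices are contiguous column-major floats and a hypothetical uniform name u_Bones (not my actual shader interface):
// Sketch only: upload the bone palette to the vertex shader.
// Assumes finalTransforms is a contiguous std::vector of column-major 4x4 float matrices
// and the vertex shader declares something like: uniform mat4 u_Bones[64];
GLint loc = glGetUniformLocation(program, "u_Bones"); // "u_Bones" and "program" are placeholder names
glUniformMatrix4fv(loc, (GLsizei)finalTransforms.size(), GL_FALSE,
    reinterpret_cast<const GLfloat*>(finalTransforms.data()));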
In my mesh/anim parser, I have this code that calculates the inverse bind pose matrices (using OpenFBX, but this is not relevant):
glm::mat4 local = glm::inverse(GetMatFromVecs(
    boneData[skin->getCluster(i)->name].pos,
    boneData[skin->getCluster(i)->name].rot,
    boneData[skin->getCluster(i)->name].scl));

for (Object* parent = skin->getCluster(i)->getLink()->getParent(); parent != NULL; parent = parent->getParent())
{
    glm::mat4 parentInverseTransform = glm::inverse(GetMatFromVecs(
        parent->getLocalTranslation(),
        parent->getLocalRotation(),
        parent->getLocalScaling()));
    local = parentInverseTransform * local;
}

boneData[skin->getCluster(i)->name].offset = local;
The getLocalXX functions get the item in local space.
And the GetMatFromVecs function referenced above:
glm::mat4 GetMatFromVecs(Vec3 pos, Vec3 rot)
{
    glm::mat4 m(1);
    m = glm::scale(m, glm::vec3(1, 1, 1));
    m = glm::rotate(m, (float)rot.x * 0.0174532925f, glm::vec3(1, 0, 0));
    m = glm::rotate(m, (float)rot.y * 0.0174532925f, glm::vec3(0, 1, 0));
    m = glm::rotate(m, (float)rot.z * 0.0174532925f, glm::vec3(0, 0, 1));
    m = glm::translate(m, glm::vec3(pos.x, pos.y, pos.z));
    return m;
}
Notice I am rotating first, then translating (this is also the case with the GetMatrix() function referenced in the first snippet, by the way).
So my question is, why is this happening? Also, what are some causes of this type of thing in mesh skinning? My translations are local, but rotations are not.

My issue was bad matrix math. I swapped out my matrix Mul function for glm's and got the intended results.
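For completeness, the traversal rewritten directly against glm looks roughly like this (a sketch; it assumes localTransform.GetMatrix() and inverseTransform are exposed as glm::mat4, which is not exactly how my types are laid out):
// Sketch only: same recursion as above, but with glm doing all the matrix math.
void CalculateBoneTransform(DSkeletonBone sb, glm::mat4 parentTransform, std::vector<DSkeletonBone> boneOverrides)
{
    if (boneDataIndex.count(sb.name) == 0) return;

    glm::mat4 local = sb.localTransform.GetMatrix(); // assumed to return glm::mat4 here
    for (auto& bo : boneOverrides)
        if (bo.name == sb.name)
            local = bo.localTransform.GetMatrix();

    glm::mat4 global = parentTransform * local;                              // parent first, then this bone's local transform
    finalTransforms[boneDataIndex[sb.name]] = global * sb.inverseTransform;  // inverse bind pose applied last

    for (auto& child : sb.children)
        CalculateBoneTransform(child, global, boneOverrides);
}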

Related

Trying to load animations in OpenGL from an MD5 file using Assimp GLM

I'm trying to follow the tutorial here (at ogldev) mentioned in this answer.
I am however facing a few issues which I believe are related to Assimp's row-major order vs. GLM's column-major order, although I am not quite sure.
I've tried a few variations and orders to see if any of those work, but to no avail.
Here (Gist) is the class I use to load the complete MD5 file, and the current result I have.
And, this is the part where I think it is going wrong, when I try to update the bone transformation matrices.
void SkeletalModel::ReadNodeHierarchyAnimation(float _animationTime, const aiNode* _node,
                                               const glm::mat4& _parentTransform)
{
    std::string node_name = _node->mName.data;
    const aiAnimation* p_animation = scene->mAnimations[0];

    glm::mat4 node_transformation(1.0f);
    convert_aimatrix_to_glm(node_transformation, _node->mTransformation);
    // Transpose it.
    node_transformation = glm::transpose(node_transformation);

    const aiNodeAnim* node_anim = FindNodeAnim(p_animation, node_name);
    if (node_anim) {
        //glm::mat4 transformation_matrix(1.0f);
        glm::mat4 translation_matrix(1.0f);
        glm::mat4 rotation_matrix(1.0f);
        glm::mat4 scaling_matrix(1.0f);

        aiVector3D translation;
        CalcInterpolatedPosition(translation, _animationTime, node_anim);
        translation_matrix = glm::translate(translation_matrix, glm::vec3(translation.x, translation.y, translation.z));

        aiQuaternion rotation;
        CalcInterpolatedRotation(rotation, _animationTime, node_anim);
        // Transpose the matrix after this.
        convert_aimatrix_to_glm(rotation_matrix, rotation.GetMatrix());
        //rotation_matrix = glm::transpose(rotation_matrix);

        aiVector3D scaling;
        CalcInterpolatedScaling(scaling, _animationTime, node_anim);
        scaling_matrix = glm::scale(scaling_matrix, glm::vec3(scaling.x, scaling.y, scaling.z));

        node_transformation = scaling_matrix * rotation_matrix * translation_matrix;
        //node_transformation = translation_matrix * rotation_matrix * scaling_matrix;
    }

    glm::mat4 global_transformation = node_transformation * _parentTransform;

    if (boneMapping.find(node_name) != boneMapping.end()) {
        // Update the Global Transformation.
        auto bone_index = boneMapping[node_name];
        //boneInfoData[bone_index].finalTransformation = globalInverseTransform * global_transformation * boneInfoData[bone_index].boneOffset;
        boneInfoData[bone_index].finalTransformation = boneInfoData[bone_index].boneOffset * global_transformation * globalInverseTransform;
        //boneInfoData[bone_index].finalTransformation = globalInverseTransform;
    }

    for (auto i = 0; i < _node->mNumChildren; i++) {
        ReadNodeHierarchyAnimation(_animationTime, _node->mChildren[i], global_transformation);
    }
}
My Current Output:
I tried going through each matrix used in the code to check whether I should transpose it or not, and whether I should change the matrix multiplication order or not. I could not find my issue.
If anyone can point out my mistakes here or direct me to a different tutorial that would help me load animations, that would be great.
Also, I see suggestions to use a basic model in the initial stages of learning this. But I was told the OBJ format doesn't support animations, and I have been using just OBJ before this. Can I use any other formats that Blender exports in a manner similar to MD5 as shown in this tutorial?
I built an animated scene a few years ago using the Assimp library, basically following these tutorials: http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html and http://sourceforge.net/projects/assimp/forums/forum/817654/topic/3880745
While I was using an old X format (Blender can work with X, using an extension), I can definitely confirm you need to transpose the Assimp animation matrices for use with GLM.
Regarding other formats, you can use whatever you like provided it is supported by Blender (import, editing, export) and by Assimp. Be prepared for a fair bit of trial and error when changing formats!
Rather than me trying to understand your code, I will post the relevant fragments from my working system that show the calculation of bone matrices. Hopefully this will help you, as I remember having the same problem as you describe and taking some time to track it down. Code is plain 'C'.
You can see where the transposition takes place at the end of the code.
// calculateAnimPose() calculates the bone transformations for a mesh at a particular time in an animation (in scene)
// Each bone transformation is relative to the rest pose.
void calculateAnimPose(aiMesh* mesh, const aiScene* scene, int animNum, float poseTime, mat4* boneTransforms) {
    if (mesh->mNumBones == 0 || animNum < 0) {       // animNum = -1 for no animation
        boneTransforms[0] = mat4(1.0);               // so, just return a single identity matrix
        return;
    }
    if (scene->mNumAnimations <= (unsigned int)animNum)
        failInt("No animation with number:", animNum);

    aiAnimation* anim = scene->mAnimations[animNum]; // animNum = 0 for the first animation

    // Set transforms from bone channels
    for (unsigned int chanID = 0; chanID < anim->mNumChannels; chanID++) {
        aiNodeAnim* channel = anim->mChannels[chanID];
        aiVector3D curPosition;
        aiQuaternion curRotation;                    // interpolation of scaling purposefully left out for simplicity.

        // find the node which the channel affects
        aiNode* targetNode = scene->mRootNode->FindNode(channel->mNodeName);

        // find current positionKey
        size_t posIndex = 0;
        for (posIndex = 0; posIndex + 1 < channel->mNumPositionKeys; posIndex++)
            if (channel->mPositionKeys[posIndex + 1].mTime > poseTime)
                break;                               // the next key lies in the future - so use the current key
        // This assumes that there is at least one key
        if (posIndex + 1 == channel->mNumPositionKeys)
            curPosition = channel->mPositionKeys[posIndex].mValue;
        else {
            float t0 = channel->mPositionKeys[posIndex].mTime;     // Interpolate position/translation
            float t1 = channel->mPositionKeys[posIndex + 1].mTime;
            float weight1 = (poseTime - t0) / (t1 - t0);
            curPosition = channel->mPositionKeys[posIndex].mValue * (1.0f - weight1) +
                          channel->mPositionKeys[posIndex + 1].mValue * weight1;
        }

        // find current rotationKey
        size_t rotIndex = 0;
        for (rotIndex = 0; rotIndex + 1 < channel->mNumRotationKeys; rotIndex++)
            if (channel->mRotationKeys[rotIndex + 1].mTime > poseTime)
                break;                               // the next key lies in the future - so use the current key
        if (rotIndex + 1 == channel->mNumRotationKeys)
            curRotation = channel->mRotationKeys[rotIndex].mValue;
        else {
            float t0 = channel->mRotationKeys[rotIndex].mTime;     // Interpolate using quaternions
            float t1 = channel->mRotationKeys[rotIndex + 1].mTime;
            float weight1 = (poseTime - t0) / (t1 - t0);
            aiQuaternion::Interpolate(curRotation, channel->mRotationKeys[rotIndex].mValue,
                                      channel->mRotationKeys[rotIndex + 1].mValue, weight1);
            curRotation = curRotation.Normalize();
        }

        aiMatrix4x4 trafo = aiMatrix4x4(curRotation.GetMatrix());                     // now build a rotation matrix
        trafo.a4 = curPosition.x; trafo.b4 = curPosition.y; trafo.c4 = curPosition.z; // add the translation
        targetNode->mTransformation = trafo;                                          // assign this transformation to the node
    }

    // Calculate the total transformation for each bone relative to the rest pose
    for (unsigned int a = 0; a < mesh->mNumBones; a++) {
        const aiBone* bone = mesh->mBones[a];
        aiMatrix4x4 bTrans = bone->mOffsetMatrix;    // start with mesh-to-bone matrix to subtract rest pose

        // Find the bone, then loop through the nodes/bones on the path up to the root.
        for (aiNode* node = scene->mRootNode->FindNode(bone->mName); node != NULL; node = node->mParent)
            bTrans = node->mTransformation * bTrans; // add each bone's current relative transformation

        boneTransforms[a] = mat4(vec4(bTrans.a1, bTrans.a2, bTrans.a3, bTrans.a4),
                                 vec4(bTrans.b1, bTrans.b2, bTrans.b3, bTrans.b4),
                                 vec4(bTrans.c1, bTrans.c2, bTrans.c3, bTrans.c4),
                                 vec4(bTrans.d1, bTrans.d2, bTrans.d3, bTrans.d4));   // Convert to mat4
    }
}
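If you prefer to keep everything in GLM rather than the mat4/vec4 wrapper above, the transposition can live in a small conversion helper. A minimal sketch (my own illustration, assuming Assimp's row-major aiMatrix4x4 and GLM's column-major storage):
// Sketch only: put Assimp's rows into GLM's columns. Since Assimp stores row-major and GLM
// stores column-major, this ordering gives the same mathematical matrix - the "transpose"
// relative to copying the raw memory across.
glm::mat4 aiToGlm(const aiMatrix4x4& m)
{
    return glm::mat4(
        m.a1, m.b1, m.c1, m.d1,  // GLM column 0
        m.a2, m.b2, m.c2, m.d2,  // GLM column 1
        m.a3, m.b3, m.c3, m.d3,  // GLM column 2
        m.a4, m.b4, m.c4, m.d4); // GLM column 3
}
With matrices converted this way, the multiplication order from the ogldev tutorial (global = parent * node, final = globalInverseTransform * globalTransformation * boneOffset) should apply unchanged.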

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection but after moving onto Lambertian and Blinn-Phong Shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture that is output when I run simply perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so light's coords minus hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I notice something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect to get a vector where all of the values lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units apart along the z axis? Obviously these two numbers are within 1 of each other in absolute value, which is what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist);//.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6) with rad 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part * part) - (part3) * ((part2 * part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
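For reference, here is a rough sketch of the corrected intersection math with illustrative names (using the same convention as my code above, where operator* between two vectors is a dot product):
// Sketch only: t = ( -d·(e-c) ± sqrt( (d·(e-c))^2 - (d·d)·((e-c)·(e-c) - r^2) ) ) / (d·d)
Vector3f e_min_c = (e - c).toVector();   // from the sphere center to the ray origin
float dd   = d * d;                      // d·d
float d_ec = d * e_min_c;                // d·(e-c)
float discriminant = d_ec * d_ec - dd * (e_min_c * e_min_c - r * r);
if (discriminant >= 0)
{
    float t_add = (-d_ec + sqrt(discriminant)) / dd;
    float t_sub = (-d_ec - sqrt(discriminant)) / dd;
    float t = fmin(t_add, t_sub);        // nearest hit; only accept it when t > 0
}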
Lambertian + Blinn-Phong

How can I get the correct position of fbx mesh?

I want to load a model from an FBX file into my own graphics engine and render the meshes. But when I load the file and try to render it, the meshes of some nodes do not display in the right position or with the correct rotation. Just like this:
I have tried almost all the methods for transforming the control points of the meshes, such as using the code from the FBX SDK sample:
FbxAMatrix CalculateGlobalTransform(FbxNode* pNode)
{
    FbxAMatrix lTranlationM, lScalingM, lScalingPivotM, lScalingOffsetM, lRotationOffsetM, lRotationPivotM, lPreRotationM, lRotationM, lPostRotationM, lTransform;
    FbxAMatrix lParentGX, lGlobalT, lGlobalRS;

    if (!pNode)
    {
        lTransform.SetIdentity();
        return lTransform;
    }

    // Construct translation matrix
    FbxVector4 lTranslation = pNode->LclTranslation.Get();
    lTranlationM.SetT(lTranslation);

    // Construct rotation matrices
    FbxVector4 lRotation = pNode->LclRotation.Get();
    FbxVector4 lPreRotation = pNode->PreRotation.Get();
    FbxVector4 lPostRotation = pNode->PostRotation.Get();
    lRotationM.SetR(lRotation);
    lPreRotationM.SetR(lPreRotation);
    lPostRotationM.SetR(lPostRotation);

    // Construct scaling matrix
    FbxVector4 lScaling = pNode->LclScaling.Get();
    lScalingM.SetS(lScaling);

    // Construct offset and pivot matrices
    FbxVector4 lScalingOffset = pNode->ScalingOffset.Get();
    FbxVector4 lScalingPivot = pNode->ScalingPivot.Get();
    FbxVector4 lRotationOffset = pNode->RotationOffset.Get();
    FbxVector4 lRotationPivot = pNode->RotationPivot.Get();
    lScalingOffsetM.SetT(lScalingOffset);
    lScalingPivotM.SetT(lScalingPivot);
    lRotationOffsetM.SetT(lRotationOffset);
    lRotationPivotM.SetT(lRotationPivot);

    // Calculate the global transform matrix of the parent node
    FbxNode* lParentNode = pNode->GetParent();
    if (lParentNode)
    {
        lParentGX = CalculateGlobalTransform(lParentNode);
    }
    else
    {
        lParentGX.SetIdentity();
    }

    // Construct Global Rotation
    FbxAMatrix lLRM, lParentGRM;
    FbxVector4 lParentGR = lParentGX.GetR();
    lParentGRM.SetR(lParentGR);
    lLRM = lPreRotationM * lRotationM * lPostRotationM;

    // Construct Global Shear*Scaling
    // FBX SDK does not support shear, to patch this, we use:
    // Shear*Scaling = RotationMatrix.Inverse * TranslationMatrix.Inverse * WholeTranformMatrix
    FbxAMatrix lLSM, lParentGSM, lParentGRSM, lParentTM;
    FbxVector4 lParentGT = lParentGX.GetT();
    lParentTM.SetT(lParentGT);
    lParentGRSM = lParentTM.Inverse() * lParentGX;
    lParentGSM = lParentGRM.Inverse() * lParentGRSM;
    lLSM = lScalingM;

    // Do not consider translation now
    FbxTransform::EInheritType lInheritType = pNode->InheritType.Get();
    if (lInheritType == FbxTransform::eInheritRrSs)
    {
        lGlobalRS = lParentGRM * lLRM * lParentGSM * lLSM;
    }
    else if (lInheritType == FbxTransform::eInheritRSrs)
    {
        lGlobalRS = lParentGRM * lParentGSM * lLRM * lLSM;
    }
    else if (lInheritType == FbxTransform::eInheritRrs)
    {
        FbxAMatrix lParentLSM;
        FbxVector4 lParentLS = lParentNode->LclScaling.Get();
        lParentLSM.SetS(lParentLS);
        FbxAMatrix lParentGSM_noLocal = lParentGSM * lParentLSM.Inverse();
        lGlobalRS = lParentGRM * lLRM * lParentGSM_noLocal * lLSM;
    }
    else
    {
        FBXSDK_printf("error, unknown inherit type! \n");
    }

    // Construct translation matrix
    // Calculate the local transform matrix
    lTransform = lTranlationM * lRotationOffsetM * lRotationPivotM * lPreRotationM * lRotationM * lPostRotationM * lRotationPivotM.Inverse()
               * lScalingOffsetM * lScalingPivotM * lScalingM * lScalingPivotM.Inverse();
    FbxVector4 lLocalTWithAllPivotAndOffsetInfo = lTransform.GetT();

    // Calculate global translation vector according to:
    // GlobalTranslation = ParentGlobalTransform * LocalTranslationWithPivotAndOffsetInfo
    FbxVector4 lGlobalTranslation = lParentGX.MultT(lLocalTWithAllPivotAndOffsetInfo);
    lGlobalT.SetT(lGlobalTranslation);

    // Construct the whole global transform
    lTransform = lGlobalT * lGlobalRS;

    return lTransform;
}
which works well in Unity3D:
But in my engine, the meshes of two or three nodes still have an incorrect transform.
I've opened the FBX file in Autodesk FBX Review and Unity3D and it works well. This problem has bothered me for a whole day, so it would be awesome if someone could help. Thanks
This happens because of the node hierarchy, and your answer is not completely correct.
Rewrite your function like this:
FbxVector4 CMeshLoader::multT(FbxNode* pNode, FbxVector4 vector)
{
    FbxAMatrix matrixGeo;
    matrixGeo.SetIdentity();
    if (pNode->GetNodeAttribute())
    {
        const FbxVector4 lT = pNode->GetGeometricTranslation(FbxNode::eSourcePivot);
        const FbxVector4 lR = pNode->GetGeometricRotation(FbxNode::eSourcePivot);
        const FbxVector4 lS = pNode->GetGeometricScaling(FbxNode::eSourcePivot);
        matrixGeo.SetT(lT);
        matrixGeo.SetR(lR);
        matrixGeo.SetS(lS);
    }

    FbxAMatrix localMatrix = pNode->EvaluateLocalTransform();

    FbxNode* pParentNode = pNode->GetParent();
    FbxAMatrix parentMatrix = pParentNode->EvaluateLocalTransform();
    while ((pParentNode = pParentNode->GetParent()) != NULL)
    {
        parentMatrix = pParentNode->EvaluateLocalTransform() * parentMatrix;
    }

    FbxAMatrix matrix = parentMatrix * localMatrix * matrixGeo;
    FbxVector4 result = matrix.MultT(vector);
    return result;
}
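To use it, push each control point of the node's mesh through multT before it goes into your vertex buffer; a rough sketch (illustrative, not from your engine, and it assumes the node actually carries a mesh attribute):
// Sketch only: bake the node's full transform into its mesh's control points.
FbxMesh* mesh = pNode->GetMesh();
if (mesh)
{
    int count = mesh->GetControlPointsCount();
    FbxVector4* controlPoints = mesh->GetControlPoints();
    for (int i = 0; i < count; ++i)
    {
        FbxVector4 p = loader.multT(pNode, controlPoints[i]); // 'loader' stands in for your CMeshLoader instance
        // copy p[0], p[1], p[2] into your own vertex structure here
    }
}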
Today I tried to get the "Local Transform", "Global Transform" and "Geometric Transform" from all the nodes of the FBX file, and I found that some of the nodes hold a "Geometric Transform"; those are the ones in the wrong position. So I multiplied the node's global transform by its geometric transform, which generated a new matrix. Then I used this matrix to transform all the vertices of the node's mesh, and finally I got the proper model.
Here's the code to transform the vertices
FbxVector4 multT(FbxNode* node, FbxVector4 vector)
{
    FbxAMatrix matrixGeo;
    matrixGeo.SetIdentity();
    if (node->GetNodeAttribute())
    {
        const FbxVector4 lT = node->GetGeometricTranslation(FbxNode::eSourcePivot);
        const FbxVector4 lR = node->GetGeometricRotation(FbxNode::eSourcePivot);
        const FbxVector4 lS = node->GetGeometricScaling(FbxNode::eSourcePivot);
        matrixGeo.SetT(lT);
        matrixGeo.SetR(lR);
        matrixGeo.SetS(lS);
    }
    FbxAMatrix globalMatrix = node->EvaluateLocalTransform();
    FbxAMatrix matrix = globalMatrix * matrixGeo;
    FbxVector4 result = matrix.MultT(vector);
    return result;
}
And the model rendered in my engine

Why is DirectX skipping every second shader call?

It is getting a bit frustrating trying to figure out why I have to call a DirectX 11 shader twice to see the desired result.
Here's my current state:
I have a 3d object built from vertex and index buffer. This object is then instanced several times. Later, there will be a lot more than just one object, but for now, I'm testing it with this one only. In my render routine, I iterate over all instances, change the world matrices (so that all object instances are put together and form "one big, whole object") and call the shader method to render the data to the screen.
Here's the code so far, that doesn't work:
m_pLevel->Simulate(0.1f);

std::list<CLevelElementInstance*>& lst = m_pLevel->GetInstances();

float x = -(*lst.begin())->GetPosition().x, y = -(*lst.begin())->GetPosition().y, z = -(*lst.begin())->GetPosition().z;

int i = 0;
for (std::list<CLevelElementInstance*>::iterator it = lst.begin(); it != lst.end(); it++)
{
    // Extract base element from current instance
    CLevelElement* elem = (*it)->GetBaseElement();

    // Write vertex and index buffer to video memory
    elem->Render(m_pDirect3D->GetDeviceContext());

    // Call shader
    m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), XMMatrixTranslation(x, y, z + (i * 8)), viewMatrix, projectionMatrix, elem->GetTexture());

    ++i;
}
My std::list consists of 4 3d objects, which are all the same. They only differ in their position in 3d space. All of the objects are 8.0f x 8.0f x 8.0f, so for simplicity I just line them up. (As can be seen on the shader render line, where I just add 8 units to the Z dimension)
The result is the following: I only see two elements rendered on the screen. And between them, there's an empty space the size of an element.
At first, I thought I did some errors with the 3d math, but after a lot of time spent debugging my code, I could not find any errors.
And here is now the confusing part:
If I change the content of the for-loop and add another call to the shader, I suddenly see all four elements, and they're all in their correct positions in 3d space:
m_pLevel->Simulate(0.1f);

std::list<CLevelElementInstance*>& lst = m_pLevel->GetInstances();

float x = -(*lst.begin())->GetPosition().x, y = -(*lst.begin())->GetPosition().y, z = -(*lst.begin())->GetPosition().z;

int i = 0;
for (std::list<CLevelElementInstance*>::iterator it = lst.begin(); it != lst.end(); it++)
{
    // Extract base element from current instance
    CLevelElement* elem = (*it)->GetBaseElement();

    // Write vertex and index buffer to video memory
    elem->Render(m_pDirect3D->GetDeviceContext());

    // Call shader
    m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), XMMatrixTranslation(x, y, z + (i * 8)), viewMatrix, projectionMatrix, elem->GetTexture());

    // Call shader a second time - this seems to have no effect but to allow the next iteration to perform its shader rendering...
    m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), XMMatrixTranslation(x, y, z + (i * 8)), viewMatrix, projectionMatrix, elem->GetTexture());

    ++i;
}
Does anybody have an idea of what is going on here?
If it helps, here's the code of the shader:
bool CTextureShader::Render(ID3D11DeviceContext* _pDeviceContext, const int _IndexCount, XMMATRIX& _pWorldMatrix, XMMATRIX& _pViewMatrix, XMMATRIX& _pProjectionMatrix, ID3D11ShaderResourceView* _pTexture)
{
    bool result = SetShaderParameters(_pDeviceContext, _pWorldMatrix, _pViewMatrix, _pProjectionMatrix, _pTexture);
    if (!result)
        return false;
    RenderShader(_pDeviceContext, _IndexCount);
    return true;
}

bool CTextureShader::SetShaderParameters(ID3D11DeviceContext* _pDeviceContext, XMMATRIX& _WorldMatrix, XMMATRIX& _ViewMatrix, XMMATRIX& _ProjectionMatrix, ID3D11ShaderResourceView* _pTexture)
{
    HRESULT result;
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    MatrixBufferType* dataPtr;
    unsigned int bufferNumber;

    _WorldMatrix = XMMatrixTranspose(_WorldMatrix);
    _ViewMatrix = XMMatrixTranspose(_ViewMatrix);
    _ProjectionMatrix = XMMatrixTranspose(_ProjectionMatrix);

    result = _pDeviceContext->Map(m_pMatrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    if (FAILED(result))
        return false;

    dataPtr = (MatrixBufferType*)mappedResource.pData;
    dataPtr->world = _WorldMatrix;
    dataPtr->view = _ViewMatrix;
    dataPtr->projection = _ProjectionMatrix;

    _pDeviceContext->Unmap(m_pMatrixBuffer, 0);

    bufferNumber = 0;
    _pDeviceContext->VSSetConstantBuffers(bufferNumber, 1, &m_pMatrixBuffer);
    _pDeviceContext->PSSetShaderResources(0, 1, &_pTexture);
    return true;
}

void CTextureShader::RenderShader(ID3D11DeviceContext* _pDeviceContext, const int _IndexCount)
{
    _pDeviceContext->IASetInputLayout(m_pLayout);
    _pDeviceContext->VSSetShader(m_pVertexShader, NULL, 0);
    _pDeviceContext->PSSetShader(m_pPixelShader, NULL, 0);
    _pDeviceContext->PSSetSamplers(0, 1, &m_pSampleState);
    _pDeviceContext->DrawIndexed(_IndexCount, 0, 0);
}
If it helps, I can also post the code from the shaders here.
Any help would be appreciated - I'm totally stuck here :-(
The problem is that you are transposing the caller's matrices in place on every call, so they are only "right" every other call:
_WorldMatrix = XMMatrixTranspose(_WorldMatrix);
_ViewMatrix = XMMatrixTranspose(_ViewMatrix);
_ProjectionMatrix = XMMatrixTranspose(_ProjectionMatrix);
Instead, you should be doing something like:
XMMATRIX worldMatrix = XMMatrixTranspose(_WorldMatrix);
XMMATRIX viewMatrix = XMMatrixTranspose(_ViewMatrix);
XMMATRIX projectionMatrix = XMMatrixTranspose(_ProjectionMatrix);
result = _pDeviceContext->Map(m_pMatrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (FAILED(result))
return false;
dataPtr = (MatrixBufferType*)mappedResource.pData;
dataPtr->world = worldMatrix;
dataPtr->view = viewMatrix;
dataPtr->projection = projectionMatrix;
_pDeviceContext->Unmap(m_pMatrixBuffer, 0);
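As a follow-up hardening step (my suggestion, not part of the original class), you can take the matrices by const reference so the in-place transpose cannot creep back in:
// Sketch only: const-correct signature; the transposed copies stay local to the function.
bool CTextureShader::SetShaderParameters(ID3D11DeviceContext* _pDeviceContext,
    const XMMATRIX& _WorldMatrix, const XMMATRIX& _ViewMatrix,
    const XMMATRIX& _ProjectionMatrix, ID3D11ShaderResourceView* _pTexture)
{
    XMMATRIX worldMatrix = XMMatrixTranspose(_WorldMatrix);
    XMMATRIX viewMatrix = XMMatrixTranspose(_ViewMatrix);
    XMMATRIX projectionMatrix = XMMatrixTranspose(_ProjectionMatrix);
    // ... Map/Unmap the constant buffer using the local copies, exactly as above ...
    return true;
}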
I wonder if the problem is that the temporary created by the call to XMMatrixTranslation(x, y, z + (i * 8)) is being passed into a function by reference, and then passed into another function by reference where it is modified.
My knowledge of the C++ spec isn't complete enough for me to tell whether that's undefined behaviour or not (but I know that there are tricky rules in this area - binding a temporary to a non-const reference is not allowed, for example). Regardless, it's close enough to being suspicious that even if it is well-defined C++, it still might be a dusty corner that could trip up a non-compliant compiler.
To rule out this possibility, try doing it like:
XMMATRIX worldMatrix = XMMatrixTranslation(x, y, z + (i * 8));
m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), worldMatrix, viewMatrix, projectionMatrix, elem->GetTexture());

OpenGL skeleton animation

I am trying to add animation to my program.
I have human model created in Blender with skeletal animation, and I can skip through the keyframes to see the model walking.
Now I've exported the model to an XML (Ogre3D) format, and in this XML file I can see the rotation, translation and scale assigned to each bone at a specific time (t=0.00000, t=0.00040, ... etc.)
What I've done is find which vertices are assigned to each bone. Now I'm assuming all I need to do is apply the transformations defined for the bone to each one of these vertices. Is this the correct approach?
In my OpenGL draw() function (rough pseudo-code):
for (Bone b : bones){
    gl.glLoadIdentity();
    List<Vertex> v = b.getVertices();
    rotation = b.getRotation();
    translation = b.getTranslation();
    scale = b.getScale();
    gl.glTranslatef(translation);
    gl.glRotatef(rotation);
    gl.glScalef(scale);
    gl.glDrawElements(v);
}
Vertices are usually affected by more than one bone -- it sounds like you're after linear blend skinning. My code's in C++ unfortunately, but hopefully it'll give you the idea:
void Submesh::skin(const Skeleton_CPtr& skeleton)
{
    /*
    Linear Blend Skinning Algorithm:

    P = (\sum_i w_i * M_i * M_{0,i}^{-1}) * P_0 / (\sum_i w_i)

    Each M_{0,i}^{-1} matrix gets P_0 (the rest vertex) into its corresponding bone's coordinate frame.
    We construct matrices M_n * M_{0,n}^{-1} for each n in advance to avoid repeating calculations.
    I refer to these in the code as the 'skinning matrices'.
    */
    BoneHierarchy_CPtr boneHierarchy = skeleton->bone_hierarchy();
    ConfiguredPose_CPtr pose = skeleton->get_pose();
    int boneCount = boneHierarchy->bone_count();

    // Construct the skinning matrices.
    std::vector<RBTMatrix_CPtr> skinningMatrices(boneCount);
    for(int i=0; i<boneCount; ++i)
    {
        skinningMatrices[i] = pose->bones(i)->absolute_matrix() * skeleton->to_bone_matrix(i);
    }

    // Build the vertex array.
    RBTMatrix_Ptr m = RBTMatrix::zeros();   // used as an accumulator for \sum_i w_i * M_i * M_{0,i}^{-1}

    int vertCount = static_cast<int>(m_vertices.size());
    for(int i=0, offset=0; i<vertCount; ++i, offset+=3)
    {
        const Vector3d& p0 = m_vertices[i].position();
        const std::vector<BoneWeight>& boneWeights = m_vertices[i].bone_weights();
        int boneWeightCount = static_cast<int>(boneWeights.size());

        Vector3d p;
        if(boneWeightCount != 0)
        {
            double boneWeightSum = 0;

            for(int j=0; j<boneWeightCount; ++j)
            {
                int boneIndex = boneWeights[j].bone_index();
                double boneWeight = boneWeights[j].weight();
                boneWeightSum += boneWeight;
                m->add_scaled(skinningMatrices[boneIndex], boneWeight);
            }

            // Note: This is effectively p = m*p0 (if we think of p0 as (p0.x, p0.y, p0.z, 1)).
            p = m->apply_to_point(p0);
            p /= boneWeightSum;

            // Reset the accumulator matrix ready for the next vertex.
            m->reset_to_zeros();
        }
        else
        {
            // If this vertex is unaffected by the armature (i.e. no bone weights have been assigned to it),
            // use its rest position as its real position (it's the best we can do).
            p = p0;
        }

        m_vertArray[offset] = p.x;
        m_vertArray[offset+1] = p.y;
        m_vertArray[offset+2] = p.z;
    }
}

void Submesh::render() const
{
    glPushClientAttrib(GL_CLIENT_VERTEX_ARRAY_BIT);
    glPushAttrib(GL_ENABLE_BIT | GL_POLYGON_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_DOUBLE, 0, &m_vertArray[0]);

    if(m_material->uses_texcoords())
    {
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_DOUBLE, 0, &m_texCoordArray[0]);
    }

    m_material->apply();

    glDrawElements(GL_TRIANGLES, static_cast<GLsizei>(m_vertIndices.size()), GL_UNSIGNED_INT, &m_vertIndices[0]);

    glPopAttrib();
    glPopClientAttrib();
}
Note in passing that real-world implementations usually do this sort of thing on the GPU to the best of my knowledge.
Your code assumes that each bone has an independent transformation matrix (you reset your matrix at the start of each loop iteration). But in reality, bones form a hierarchical structure that you must preserve when you do your rendering. Consider that when your upper arm rotates your forearm rotates along, because it is attached. The forearm may have its own rotation, but that is applied after it is rotated with the upper arm.
The rendering of the skeleton then is done recursively. Here is some pseudo-code:
function renderBone(Bone b) {
    setupTransformMatrix(b);
    draw(b);
    foreach c in b.getChildren()
        renderBone(c);
}

main() {
    gl.glLoadIdentity();
    renderBone(rootBone);
}
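In immediate-mode OpenGL the matrix stack does this bookkeeping for you. A quick C++ sketch of the same idea (the Bone fields and drawBoneGeometry are made up for illustration):
// Sketch only: recursive hierarchical rendering with the legacy matrix stack.
struct Bone {
    float tx, ty, tz;            // local translation
    float angle, ax, ay, az;     // local rotation (angle-axis, in degrees)
    float sx, sy, sz;            // local scale
    std::vector<Bone> children;
};

void renderBone(const Bone& b)
{
    glPushMatrix();                        // save the parent's transform
    glTranslatef(b.tx, b.ty, b.tz);
    glRotatef(b.angle, b.ax, b.ay, b.az);
    glScalef(b.sx, b.sy, b.sz);
    drawBoneGeometry(b);                   // hypothetical draw call for this bone's vertices
    for (const Bone& child : b.children)
        renderBone(child);                 // children inherit everything applied so far
    glPopMatrix();                         // restore before moving on to a sibling
}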
I hope this helps.