I've been trying to load my skinned Collada mesh using Assimp for the last few days, but I am having an extremely hard time. Currently I am just trying to render it in its normal "pose" without any transformations or animations, but the mesh comes out very deformed. I think the problem lies in my bones, not my weights, so I am only posting the bone section.
Here is my bone loading code:
// This is done for every bone in a for loop:
Bone new_bone;
new_bone.name = std::string(bone->mName.data);
aiMatrix4x4 b_matrix = bone->mOffsetMatrix; // the bone's inverse bind pose from Assimp
aiMatrix4x4 g_inv = scene->mRootNode->mTransformation;
g_inv.Inverse(); // according to a tutorial, you have to multiply by the inverse root transform
b_matrix = b_matrix * g_inv;
memcpy(new_bone.matrix, &b_matrix, sizeof(float) * 16);
Bones.push_back(new_bone);
Then I simply send this to my shader with
glUniformMatrix4fv(MatrixArray, bone_count, GL_FALSE, &MATRIX_BUFFER[0][0][0]);
And apply it in the vertex shader with:
mat4 Bone = V_MatrixArray[int(Indicies[0])] * Weights[0];
Bone += V_MatrixArray[int(Indicies[1])] * Weights[1];
Bone += V_MatrixArray[int(Indicies[2])] * Weights[2];
Bone += V_MatrixArray[int(Indicies[3])] * Weights[3];
vec4 v = Bone * vec4(Vertex, 1);
gl_Position = MVP * vec4(v.xyz, 1);
This code mostly works, however my mesh is very deformed. It looks like this:
According to the research I've done so far:

I do not need to transpose my matrices, since Assimp uses OpenGL's column-major layout.

I do not need to read the nodes of the scene yet, since they are for animation.

Please correct me if I am mistaken about these last two things.
I managed to solve it. It turns out you DO need to read the nodes, even for a simple bind pose without animation. For any future readers with this issue: each bone's matrix = root_node_inverse * matrix concatenated from the node hierarchy * bone offset matrix.
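To make that formula concrete, here is a minimal sketch of the composition with Assimp's API, assuming a hypothetical helper GlobalTransformOf and the bone/scene variables from the loop above; your structures may differ:

// Walk up the node hierarchy, concatenating each local transform
// to get the node's global (model-space) transform.
aiMatrix4x4 GlobalTransformOf(const aiNode* node)
{
    aiMatrix4x4 global = node->mTransformation;
    for (const aiNode* p = node->mParent; p != nullptr; p = p->mParent)
        global = p->mTransformation * global;
    return global;
}

// Per bone: root_node_inverse * node hierarchy transform * bone offset matrix.
aiMatrix4x4 root_inv = scene->mRootNode->mTransformation;
root_inv.Inverse();
const aiNode* bone_node = scene->mRootNode->FindNode(bone->mName);
aiMatrix4x4 final_matrix = root_inv * GlobalTransformOf(bone_node) * bone->mOffsetMatrix;
// Depending on your matrix conventions, a transpose may still be needed before uploading to GL.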
I recently implemented vertex skinning in my own Vulkan engine. Pretty much all models render properly; however, I find that I get skinning artifacts with Mixamo models. The main difference I found between "regular" glTF models and Mixamo models is that the Mixamo models share inverse bind matrices between multiple meshes, but I highly doubt that this causes the issue.
Here you can see that for some reason the vertices are pulled towards one specific point which seems to be at (0, 0, 0). I know for sure that this is not caused by the vertex and index loading, as the model renders properly without vertex skinning.
Calculation of joint matrices
void Object::updateJointsByNode(Node *node) {
    if (node->mesh && node->skinIndex > -1) {
        auto inverseTransform = glm::inverse(node->getLocalMatrix());
        auto skin = this->skinLookup[node->skinIndex];
        auto numberOfJoints = skin->jointsIndices.size();
        std::vector<glm::mat4> jointMatrices(numberOfJoints);
        for (size_t i = 0; i < numberOfJoints; i++) {
            jointMatrices[i] =
                this->getNodeByIndex(skin->jointsIndices[i])->getLocalMatrix() * skin->inverseBindMatrices[i];
            jointMatrices[i] = inverseTransform * jointMatrices[i];
        }
        this->inverseBindMatrices = jointMatrices;
    }
    for (auto &child : node->children) {
        this->updateJointsByNode(child);
    }
}
Calculation of vertex displacement in GLSL Vertex Shader
mat4 skinMat =
    inWeight0.x * model.jointMatrix[int(inJoint0.x)] +
    inWeight0.y * model.jointMatrix[int(inJoint0.y)] +
    inWeight0.z * model.jointMatrix[int(inJoint0.z)] +
    inWeight0.w * model.jointMatrix[int(inJoint0.w)];

localPosition = model.model * model.local * skinMat * vec4(inPosition, 1.0);
So basically the mistake I made was relatively simple: I was casting the joint indices to the wrong type. When a glTF model stores the vertex joints with a byte stride of 4 (one byte per component), the implicit component type is uint8_t (unsigned byte), not the uint16_t (unsigned short) I was using.
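For illustration, a minimal sketch of branching on the accessor's component type when reading the joint indices; the readJoints helper and its parameters are hypothetical stand-ins for whatever your glTF loader exposes:

#include <cstdint>
#include <glm/glm.hpp>

// Hypothetical sketch: read the i-th JOINTS_0 attribute, honoring the
// accessor's component type (glTF allows UNSIGNED_BYTE or UNSIGNED_SHORT).
glm::uvec4 readJoints(const uint8_t* data, size_t i, int componentType)
{
    if (componentType == 5121) {   // UNSIGNED_BYTE: 4 bytes per vertex
        const uint8_t* j = data + i * 4;
        return glm::uvec4(j[0], j[1], j[2], j[3]);
    } else {                       // 5123 == UNSIGNED_SHORT: 8 bytes per vertex
        const uint16_t* j = reinterpret_cast<const uint16_t*>(data) + i * 4;
        return glm::uvec4(j[0], j[1], j[2], j[3]);
    }
}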
I am in the process of writing an animation system with my own Collada parser and am running into an issue that I can't wrap my head around.
I have collected my mesh/skin information (vertices, normals, jointIds, weights, etc), my skeleton information (joints, their local transforms, inverse bind position, hierarchy structure), and my animation (keyframe transform position for each joint, timestamp).
My issue is that with everything calculated and then implemented in the shader (the summation of weights multiplied by the joint transform and vertex position) - I get the following:
When I remove the weight multiplication, the mesh remains fully intact, however the skin doesn't actually follow the animation. I am at a loss, as I feel the math is correct, but very obviously I am going wrong somewhere. Would someone be able to shed light on the aspect I have misinterpreted?
Here is my current understanding and implementation:
After collecting all of the joints' localTransforms and hierarchy, I calculate their inverse bind transformation matrix. To do this I multiply each joint's localTransform with its parent's localTransform to get a bindTransform. Inverting that bindTransform results in the inverseBindTransform. Below is my code for that:
// Recursively collect each Joint's InverseBindTransform -
// the root joint's parent transform is an identity matrix.
// Function is only called once after data collection.
void Joint::CalcInverseBindTransform(glm::mat4 parentLocalPosition)
{
    glm::mat4 bindTransform = parentLocalPosition * m_LocalTransform;
    m_InverseBindPoseMatrix = glm::inverse(bindTransform);
    for (Joint& child : Children) { // by reference, or the recursion only mutates copies
        child.CalcInverseBindTransform(bindTransform);
    }
}
Within my animator, during an animation, for each joint I take the two JointTransforms for the two frames my currentTime is between and calculate the interpolated JointTransform. (A JointTransform simply has a vec3 for position and a quaternion for rotation.) I do this for every joint and then apply those interpolated values to each Joint by again recursively multiplying the new frameLocalTransform by its parentLocalTransform. I take that bindTransform, multiply it by the invBindTransform, and then transpose the matrix. Below is the code for that:
std::unordered_map<int, glm::mat4> Animator::InterpolatePoses(float time) {
    std::unordered_map<int, glm::mat4> poses;
    if (IsPlaying()) {
        for (std::pair<int, JointTransform> keyframe : m_PreviousFrame.GetJointKeyFrames()) {
            JointTransform previousFrame = m_PreviousFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform nextFrame = m_NextFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform interpolated = JointTransform::Interpolate(previousFrame, nextFrame, time);
            poses[keyframe.first] = interpolated.getLocalTransform();
        }
    }
    return poses;
}
void Animator::ApplyPosesToJoints(std::unordered_map<int, glm::mat4> newPose, Joint* j, glm::mat4 parentTransform)
{
    if (IsPlaying()) {
        glm::mat4 currentPose = newPose[j->GetJointId()];
        glm::mat4 modelSpaceJoint = parentTransform * currentPose;
        for (Joint& child : j->GetChildren()) { // by reference, so the recursion writes into the real joints
            ApplyPosesToJoints(newPose, &child, modelSpaceJoint);
        }
        modelSpaceJoint = glm::transpose(j->GetInvBindPosition() * modelSpaceJoint);
        j->SetAnimationTransform(modelSpaceJoint);
    }
}
I then collect all the newly AnimatedTransforms for each joint and send them to the shader:
void AnimationModel::Render(bool& pass)
{
    [...]
    std::vector<glm::mat4> transforms = GetJointTransforms();
    for (int i = 0; i < transforms.size(); ++i) {
        m_Shader->SetMat4f(transforms[i], ("JointTransforms[" + std::to_string(i) + "]").c_str());
    }
    [...]
}
void AnimationModel::AddJointsToArray(Joint current, std::vector<glm::mat4>& matrix)
{
    glm::mat4 finalMatrix = current.GetAnimatedTransform();
    matrix.push_back(finalMatrix);
    for (Joint child : current.GetChildren()) {
        AddJointsToArray(child, matrix);
    }
}
In the shader, I simply follow the summation formula that can be found all over the web when researching this topic:
for (int i = 0; i < total_weight_amnt; ++i) {
    mat4 jointTransform = JointTransforms[jointIds[i]];
    vec4 newVertexPos = jointTransform * vec4(pos, 1.0);
    total_pos += newVertexPos * weights[i];
    [...]
}
---------- Reply to Normalizing Weights ------------
There were a few weights summing above 1, but after solving the error in my code the model looked like this:
For calculating the weights, I loop through all pre-added weights in the vector, and if I find a weight that is less than the weight I'm looking to add, I replace the weight at that position. Otherwise, I append the weight onto the end of the vector. If there are fewer weights in my vector than my specified max_weights (which is 4), I fill in the remaining weights/jointIds with 0.
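For anyone wanting to compare, here is a minimal sketch of that selection plus a normalization pass, assuming hypothetical per-vertex weights/jointIds arrays of size max_weights; the names are not from my actual code:

#include <vector>

// Hypothetical sketch: keep the 4 largest weights per vertex, then renormalize
// so they sum to 1 (otherwise the skinned vertex drifts).
const int max_weights = 4;
std::vector<float> weights(max_weights, 0.0f);
std::vector<int>   jointIds(max_weights, 0);

void addWeight(int jointId, float weight)
{
    // Replace the smallest stored weight if the new one is larger.
    int smallest = 0;
    for (int i = 1; i < max_weights; ++i)
        if (weights[i] < weights[smallest]) smallest = i;
    if (weight > weights[smallest]) {
        weights[smallest]  = weight;
        jointIds[smallest] = jointId;
    }
}

void normalizeWeights()
{
    float sum = 0.0f;
    for (float w : weights) sum += w;
    if (sum > 0.0f)
        for (float& w : weights) w /= sum;
}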
I understand that when something goes wrong in skinning animations, there can be a lot of different areas where the problem occurs. As such, for future googlers experiencing the same issue I was, take this as a list of suggestions of what you could be doing wrong, rather than what you are absolutely doing wrong.

For my problem, I had the right idea but the wrong approach in a lot of minor areas, which brought me fairly close but, as they say, no cigar.
I had no need to calculate the inverse bind pose myself; Collada's inverse bind pose (sometimes/often declared as an "offsetMatrix") is more than adequate. This wasn't so much a problem as unnecessary calculation.
In a Collada file, you are often given more "joints" or "nodes" in the hierarchy than are needed for the animation. Prior to the start of your actual animated "joints", there is the scene and an initial armature "node". The scene is typically an identity matrix that was manipulated based on your "up axis" when the Collada file was read in. The armature node determines the overall size of each joint in the skeleton, so if it wasn't resized, it's probably the identity matrix. Make sure your hierarchy still contains ALL nodes/joints listed in the hierarchy. I very much was not doing so, which greatly distorted my globalPosition (bind pose).
If you are representing your joints' rotations with quaternions (which is highly recommended), make sure the resulting quaternion is normalized after interpolating between two rotations.

On the same note, when combining the rotation and translation into your final matrix, make sure your order of multiplication and the final output are correct.
Finally, your last skinning matrix is composed of your joint's InvBindMatrix * GlobalPosition * GlobalInverseRootTransform (<- this is the inverse of the local transform of your "scene" node mentioned in (1), remember?). There is a sketch of this below.

Based on your prior matrix multiplications up to this point, you may or may not need to transpose this final matrix.
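To make the last two points concrete, a minimal sketch in GLM's column-vector convention (in which case no transpose is needed); the matrix and function names are stand-ins for whatever your parser produces:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// Interpolate a joint's local transform between two keyframes,
// normalizing the quaternion after interpolation (as noted above).
glm::mat4 InterpolateLocal(const glm::vec3& p0, const glm::quat& r0,
                           const glm::vec3& p1, const glm::quat& r1, float t)
{
    glm::vec3 pos = glm::mix(p0, p1, t);
    glm::quat rot = glm::normalize(glm::slerp(r0, r1, t));
    return glm::translate(glm::mat4(1.0f), pos) * glm::mat4_cast(rot); // translation, then rotation
}

// Final skinning matrix for one joint (column-vector convention):
glm::mat4 SkinMatrix(const glm::mat4& globalInverseRootTransform,
                     const glm::mat4& globalJointTransform,
                     const glm::mat4& invBindMatrix)
{
    return globalInverseRootTransform  // undo the scene/root transform
         * globalJointTransform       // joint's pose with the parent chain applied
         * invBindMatrix;             // bring the vertex into the joint's local space
}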
And with that - I was able to successfully animate my model!
One final note: my mesh and animation files are loaded in separately. If your animations are in separate files from your mesh, make sure you collect the skinning/joint information from the files with an animation, rather than from the file with the mesh. I list my steps for loading in a model and then giving it multiple animations through different files:
1. Load in the mesh (this contains Vertices, Normals, TexCoords, JointIds, Weights).
2. Load in the animation file (this gives the Skeleton, InverseBindPositions, and other info needed to bind the skeleton to the mesh). Once the skeleton and binding info are collected, gather the first animation's info from that file as well.
3. For another animation, the above skeleton should work fine for any other animation on the same mesh/model; just read in the animation information and store it in your chosen data structure. Repeat step 3 until happy.
I am currently working on my first OpenGL based game engine. I need normal mapping as a feature, but it isn't working correctly.
Here is an animation of what is happening:
The artifacts are affected by the angle between the light and the normals on the surface. Camera movement does not affect it in any way. I am also (at least for now) going the route of the less efficient method where the normal extracted from the normal map is converted into view space rather than converting everything to tangent space.
Here are the relevant pieces of my code:
Generating Tangents and Bitangents
for(int k = 0; k < (int)mb->getIndexCount(); k += 3)
{
    unsigned int i1 = mb->getIndex(k);
    unsigned int i2 = mb->getIndex(k+1);
    unsigned int i3 = mb->getIndex(k+2);

    JGE_v3f v0 = mb->getVertexPosition(i1);
    JGE_v3f v1 = mb->getVertexPosition(i2);
    JGE_v3f v2 = mb->getVertexPosition(i3);

    JGE_v2f uv0 = mb->getVertexUV(i1);
    JGE_v2f uv1 = mb->getVertexUV(i2);
    JGE_v2f uv2 = mb->getVertexUV(i3);

    JGE_v3f deltaPos1 = v1 - v0;
    JGE_v3f deltaPos2 = v2 - v0;
    JGE_v2f deltaUV1 = uv1 - uv0;
    JGE_v2f deltaUV2 = uv2 - uv0;

    // determinant of the UV delta matrix; skip degenerate UV triangles
    float ur = deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x;
    if(ur != 0)
    {
        float r = 1.0 / ur;
        JGE_v3f tangent;
        JGE_v3f bitangent;
        tangent = ((deltaPos1 * deltaUV2.y) - (deltaPos2 * deltaUV1.y)) * r;
        tangent.normalize();
        bitangent = ((deltaPos1 * -deltaUV2.x) + (deltaPos2 * deltaUV1.x)) * r;
        bitangent.normalize();

        // accumulate the per-face tangents so shared vertices get an average
        tans[i1] += tangent;
        tans[i2] += tangent;
        tans[i3] += tangent;
        btans[i1] += bitangent;
        btans[i2] += bitangent;
        btans[i3] += bitangent;
    }
}
Calculating the TBN matrix in the Vertex Shader
(mNormal corrects the normal for non-uniform scales)
vec3 T = normalize((mVW * vec4(tangent, 0.0)).xyz);
tnormal = normalize((mNormal * n).xyz);
vec3 B = normalize((mVW * vec4(bitangent, 0.0)).xyz);
tmTBN = transpose(mat3(
    T.x, B.x, tnormal.x,
    T.y, B.y, tnormal.y,
    T.z, B.z, tnormal.z));
Finally here is where I use the sampled normal from the normal map and attempt to convert it to view space in the Fragment Shader
fnormal = normalize(nmapcolor.xyz * 2.0 - 1.0);
fnormal = normalize(tmTBN * fnormal);
"nmapcolor" is the sampled color from the normal map.
"fnormal" is then used like normal in the lighting calculations.
I have been trying to solve this for so long and have absolutely no idea how to get this working. Any help would be greatly appreciated.
EDIT - I slightly modified the code to work in world space and output the results. The big platform does not have normal mapping (and it works correctly), while the smaller platform does.

I added in what direction the normals are facing. They should both be generally the same color, but they're clearly different. It seems the tmTBN matrix isn't transforming the tangent-space normal into world (and normally view) space properly.
Well... I solved the problem. It turns out my normal mapping implementation was perfect. The problem was actually in my texture class. This is, of course, my first time writing an OpenGL rendering engine, and I did not realize that the unlock() function in my texture class saved ALL my textures as GL_SRGB_ALPHA, including normal maps. Only diffuse map textures should be GL_SRGB_ALPHA. Temporarily forcing all textures to load as GL_RGBA fixed the problem.
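For future readers, a rough sketch of the distinction, with a hypothetical isColorData flag; only textures authored in sRGB (diffuse/albedo) should get an sRGB internal format, since normal maps store linear vectors and decoding them as sRGB bends every normal:

// Hypothetical sketch: choose the internal format by what the texture stores.
GLenum InternalFormatFor(bool isColorData)
{
    return isColorData ? GL_SRGB8_ALPHA8 : GL_RGBA8;
}

// Usage when uploading (width/height/pixels come from your image loader):
glTexImage2D(GL_TEXTURE_2D, 0, InternalFormatFor(isDiffuseMap),
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);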
Can't believe I had this problem for 11 months, only to find it was something so small.
I'm using the OpenGL OBJ loader, which can be downloaded here!
I exported an OBJ model from Blender.
The problem is that I want to achieve smooth shading.

As you can see, it is not smooth here.

How do I achieve this? Maybe something is wrong with the normal vector calculation?
#include <cmath> // for sqrt

/* Computes the normalized face normal of the triangle (coord1, coord2, coord3)
   and writes it into 'norm'. The original returned a pointer to a local array,
   which is undefined behavior - the caller must own the storage. */
void Model_OBJ::calculateNormal( float *coord1, float *coord2, float *coord3, float *norm )
{
    /* calculate Vector1 and Vector2 */
    float va[3], vb[3], vr[3], val;
    va[0] = coord1[0] - coord2[0];
    va[1] = coord1[1] - coord2[1];
    va[2] = coord1[2] - coord2[2];
    vb[0] = coord1[0] - coord3[0];
    vb[1] = coord1[1] - coord3[1];
    vb[2] = coord1[2] - coord3[2];

    /* cross product */
    vr[0] = va[1] * vb[2] - vb[1] * va[2];
    vr[1] = vb[0] * va[2] - va[0] * vb[2];
    vr[2] = va[0] * vb[1] - vb[0] * va[1];

    /* normalization factor */
    val = sqrt( vr[0]*vr[0] + vr[1]*vr[1] + vr[2]*vr[2] );

    norm[0] = vr[0] / val;
    norm[1] = vr[1] / val;
    norm[2] = vr[2] / val;
}
And glShadeModel( GL_SMOOTH ) is set.
Any ideas?
If you want to do smooth shading, you can't simply calculate the normal for each triangle vertex on a per-triangle basis as you're doing now. That would yield flat shading.
To do smooth shading, you want to sum up the normals you calculate for each triangle to the associated vertices and then normalize the result for each vertex. That will yield a kind of average vector which points in a smooth direction.
Basically take what you're doing and add the resulting triangle normal to all of its vertices. Then normalize the summed vectors for each vertex. It'd look something like this (pseudocode):
for each vertex, vert:
    vert.normal = vec3(0, 0, 0)
for each triangle, tri:
    tri_normal = calculate_normal(tri)
    for each vertex in tri, vert:
        vert.normal += tri_normal
for each vertex, vert:
    normalize(vert.normal)
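In C++, with hypothetical Vertex/Triangle structs (not the loader's actual types), that might look like:

#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Vertex { Vec3 position, normal; };
struct Triangle { unsigned i0, i1, i2; };  // indices into the vertex array

// Accumulate each face normal into its three vertices, then normalize.
void smoothNormals(std::vector<Vertex>& verts, const std::vector<Triangle>& tris)
{
    for (auto& v : verts) v.normal = Vec3{};
    for (const auto& t : tris) {
        const Vec3 &a = verts[t.i0].position, &b = verts[t.i1].position,
                   &c = verts[t.i2].position;
        Vec3 e1{b.x - a.x, b.y - a.y, b.z - a.z};
        Vec3 e2{c.x - a.x, c.y - a.y, c.z - a.z};
        Vec3 n{e1.y * e2.z - e1.z * e2.y,   // cross product e1 x e2
               e1.z * e2.x - e1.x * e2.z,
               e1.x * e2.y - e1.y * e2.x};
        for (unsigned i : {t.i0, t.i1, t.i2}) {
            verts[i].normal.x += n.x;
            verts[i].normal.y += n.y;
            verts[i].normal.z += n.z;
        }
    }
    for (auto& v : verts) {
        float len = std::sqrt(v.normal.x * v.normal.x +
                              v.normal.y * v.normal.y +
                              v.normal.z * v.normal.z);
        if (len > 0) { v.normal.x /= len; v.normal.y /= len; v.normal.z /= len; }
    }
}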
However, this should normally not be necessary when loading from an OBJ file like this, and a hard-surface model like this typically needs its share of creases and sharp corners to look right, where the normals are not completely smooth and continuous everywhere. OBJ files typically store per-vertex normals exactly as the artist intended the model to look. So I'd have a look at your OBJ loader and see how to properly fetch the normals contained in the file.
Thanks, pal! I found the option in Blender to export normal vectors, so now I have completely rewritten the data structure to manage normal vectors too. Now it looks smooth!
I'm trying to implement skeletal animation in a small program I'm writing. The idea is to calculate the transformation matrix on the CPU every frame by interpolating keyframe data, then feeding this data to my vertex shader which multiplies my vertices by this matrix like this:
vec4 v = animationMatrices[int(boneIndices.x)] * gl_Vertex * boneWeights.x;
Where boneWeights and boneIndices are attributes and animationMatrices is a uniform array of transformation matrices updated every frame before drawing. (The idea is to have multiple bones affecting one vertex eventually, but right now I'm testing with one bone per vertex so just taking the weight.x and indices.x is enough).
Now the problem is calculating the transformation matrix for each bone. My transformation matrix for the single joint is good; the problem is that it always takes (0,0,0) as the pivot instead of the joint's actual pivot. I took the joint matrices from the COLLADA file, which correctly shows my skeleton when I draw them like this:
public void Draw()
{
    GL.PushMatrix();
    drawBone(Root);
    GL.PopMatrix();
}

private void drawBone(Bone b)
{
    GL.PointSize(50);
    GL.MultMatrix(ref b.restMatrix);

    GL.Begin(BeginMode.Points);
    GL.Color3((byte)0, (byte)50, (byte)0);
    if (b.Name == "Blades")
    {
        GL.Vertex3(0, 0, 0);
    }
    GL.End();

    foreach (Bone bc in b.Children)
    {
        GL.PushMatrix();
        drawBone(bc);
        GL.PopMatrix();
    }
}
So now to calculate the actual matrix I've tried:
Matrix4 jointMatrix = b.restMatrixInv * boneTransform * b.restMatrix;
or according to the collada documentation (this doesn't really make sense to me):
Matrix4 jointMatrix = b.restMatrix * b.restMatrixInv * boneTransform;
And I know I also have to put the parent matrix in here somewhere. I'm guessing something like this:

Matrix4 jointMatrix = b.restMatrixInv * boneTransform * b.restMatrix * b.Parent.jointMatrix;

But at the moment I'm mostly just confused, and any push in the right direction would help. I really need to get this order right...
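Not a definitive answer, but for comparison, the conventional composition (it matches the root_node_inverse * node_global * offset formula from the Assimp thread above), sketched in C++/GLM; parentWorld, localAnim, and inverseBind are hypothetical names for the quantities discussed here. Note that OpenTK's Matrix4 multiplies row vectors, so in C# the order of each product flips:

#include <glm/glm.hpp>

// Column-vector (GLM) convention: a joint's world transform is its parent's
// world transform times its own local (animated) transform, and the skinning
// matrix re-bases vertices from bind space into that world transform.
glm::mat4 jointMatrix(const glm::mat4& parentWorld,
                      const glm::mat4& localAnim,
                      const glm::mat4& inverseBind)
{
    glm::mat4 jointWorld = parentWorld * localAnim; // walk down the hierarchy
    return jointWorld * inverseBind;                // inverse bind pose applied first to the vertex
}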