Set bone rotations for a poseable mesh in local space - C++

Greetings to everyone. : )
I'm implementing a VR experience in UE4 that detects the movements of the user and then updates the corresponding pose of the avatar that represents the user inside the virtual world.
To do so, I've implemented this behavior in C++, based on what is said in this topic:
https://forums.unrealengine.com/showthread.php?25264-Pose-a-skeletal-mesh-in-runtime
That is, I have a USkeletalMeshComponent (whose skeletal mesh is the one seen on screen) and a UPoseableMeshComponent (set as the "master pose component" for the USkeletalMeshComponent), which receives the rotation data as quaternions from each tracking sensor.
On one hand, the mesh of the UPoseableMeshComponent is set to the mesh of the USkeletalMeshComponent. On the other hand, the bones of the poseable mesh are updated with the corresponding rotation data coming from the tracking sensors for each bone, which automatically makes the skeletal mesh update accordingly, since the UPoseableMeshComponent was set as the "master pose component" of the USkeletalMeshComponent.
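In code, that setup looks roughly like this (a sketch; the skeletal mesh component's variable name is mine):
m_pPoseableMeshComp->SetSkeletalMesh(m_pSkeletalMeshComp->SkeletalMesh);
m_pSkeletalMeshComp->SetMasterPoseComponent(m_pPoseableMeshComp); // the skeletal mesh now follows the poseable mesh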
This works perfectly when working with all the body parts that are not finger phalanges, because rotations on those body parts are set in world space, in this way:
m_pPoseableMeshComp->SetBoneRotationByName(strBoneName, qBoneRotation.Rotator(), EBoneSpaces::WorldSpace);
However, for technical reasons, it is mandatory to set the bone rotations for each finger phalanx in LOCAL space, but poseable meshes seem to work only in world space and component space; checking out 'SkinnedMeshComponent.h', there is a 'LocalSpace' enum value in EBoneSpaces::Type... but it's commented out.
/** Values for specifying bone space. */
UENUM()
namespace EBoneSpaces
{
    enum Type
    {
        /** Set absolute position of bone in world space. */
        WorldSpace UMETA(DisplayName = "World Space"),
        /** Set position of bone in components reference frame. */
        ComponentSpace UMETA(DisplayName = "Component Space"),
        /** Set position of bone relative to parent bone. */
        //LocalSpace UMETA( DisplayName = "Parent Bone Space" ),
    };
}
I cannot understand the phrase 'Set position of bone in components reference frame'. It doesn't seem to work as a "replacement" for local space (the bone behaves as if it were rotating around the avatar's pivot point). Is there any other way, or a workaround, to set rotations in local space for a poseable mesh?
Thanks in advance. : )

I had this issue as well. Sadly, the feature was never implemented in the official Unreal Engine code base.
But you can solve it very easily by extending the poseable mesh component, like so:
// MyPoseableMeshComponent.h
#pragma once

#include "CoreMinimal.h"
#include "Components/PoseableMeshComponent.h"
#include "MyPoseableMeshComponent.generated.h"

UCLASS(meta = (BlueprintSpawnableComponent))
class ALSUTILS_API UMyPoseableMeshComponent : public UPoseableMeshComponent
{
    GENERATED_BODY()

public:
    UFUNCTION(BlueprintCallable, Category = "Components|PoseableMesh")
    void SetBoneLocalTransformByName(const FName& BoneName, const FTransform& InTransform);

    UFUNCTION(BlueprintCallable, Category = "Components|PoseableMesh")
    const FTransform& GetBoneLocalTransformByName(const FName& BoneName) const;
};
// MyPoseableMeshComponent.cpp
#include "MyPoseableMeshComponent.h"

void UMyPoseableMeshComponent::SetBoneLocalTransformByName(const FName& BoneName, const FTransform& InTransform)
{
    if (!SkeletalMesh || !RequiredBones.IsValid())
    {
        return;
    }

    const int32 boneIndex = GetBoneIndex(BoneName);
    if (boneIndex >= 0 && boneIndex < BoneSpaceTransforms.Num())
    {
        BoneSpaceTransforms[boneIndex] = InTransform;
        MarkRefreshTransformDirty();
    }
}

const FTransform& UMyPoseableMeshComponent::GetBoneLocalTransformByName(const FName& BoneName) const
{
    // Default-constructed FTransform is the identity transform.
    static const FTransform identityTransform;

    if (!SkeletalMesh || !RequiredBones.IsValid())
    {
        return identityTransform;
    }

    const int32 boneIndex = GetBoneIndex(BoneName);
    if (boneIndex >= 0 && boneIndex < BoneSpaceTransforms.Num())
    {
        return BoneSpaceTransforms[boneIndex];
    }

    return identityTransform;
}
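With that in place, a finger phalanx can be driven in parent-bone (local) space. A hypothetical usage, assuming m_pPoseableMeshComp is now a UMyPoseableMeshComponent* and the bone name is just an example:
FTransform phalanxLocal = m_pPoseableMeshComp->GetBoneLocalTransformByName(TEXT("index_01_l"));
phalanxLocal.SetRotation(qBoneRotation); // quaternion from the tracking sensor, relative to the parent bone
m_pPoseableMeshComp->SetBoneLocalTransformByName(TEXT("index_01_l"), phalanxLocal);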

Related

Skinning Animation - Weights Destroy Mesh

I am in the process of writing an animation system with my own Collada parser and am running into an issue that I can't wrap my head around.
I have collected my mesh/skin information (vertices, normals, jointIds, weights, etc), my skeleton information (joints, their local transforms, inverse bind position, hierarchy structure), and my animation (keyframe transform position for each joint, timestamp).
My issue is that, with everything calculated and then implemented in the shader (the summation of the weights multiplied by the joint transform and vertex position), I get the following:
When I remove the weight multiplication, the mesh remains fully intact - however, the skin doesn't actually follow the animation. I am at a loss; I feel as though the math is correct, but very obviously I am going wrong somewhere. Would someone be able to shed light on the aspect I have misinterpreted?
Here is my current understanding and implementation:
After collecting all of the joints' localTransforms and hierarchy, I calculate their inverse bind transformation matrices. To do this, I multiply each joint's localTransform with its parent's localTransform to get a bindTransform. Inverting that bindTransform results in its inverseBindTransform. Below is my code for that:
// Recursively collect each Joint's InverseBindTransform -
// the root joint's local position is an identity matrix.
// Function is only called once after data collection.
void Joint::CalcInverseBindTransform(glm::mat4 parentLocalPosition)
{
    glm::mat4 bindTransform = parentLocalPosition * m_LocalTransform;
    m_InverseBindPoseMatrix = glm::inverse(bindTransform);

    // Iterate by reference so the results are stored on the actual
    // children rather than on temporary copies.
    for (Joint& child : Children) {
        child.CalcInverseBindTransform(bindTransform);
    }
}
Within my animator, during an animation, for each joint I take the JointTransforms of the two frames my currentTime is between, and I calculate the interpolated JointTransform. (A JointTransform simply has a vec3 for position and a quaternion for rotation.) I do this for every joint and then apply those interpolated values to each joint by again recursively multiplying the new frameLocalTransform by its parent's localTransform. I take that bindTransform, multiply it by the invBindTransform, and then transpose the matrix. Below is the code for that:
std::unordered_map<int, glm::mat4> Animator::InterpolatePoses(float time) {
    std::unordered_map<int, glm::mat4> poses;
    if (IsPlaying()) {
        for (std::pair<int, JointTransform> keyframe : m_PreviousFrame.GetJointKeyFrames()) {
            JointTransform previousFrame = m_PreviousFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform nextFrame = m_NextFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform interpolated = JointTransform::Interpolate(previousFrame, nextFrame, time);
            poses[keyframe.first] = interpolated.getLocalTransform();
        }
    }
    return poses;
}
void Animator::ApplyPosesToJoints(std::unordered_map<int, glm::mat4> newPose, Joint* j, glm::mat4 parentTransform)
{
    if (IsPlaying()) {
        glm::mat4 currentPose = newPose[j->GetJointId()];
        glm::mat4 modelSpaceJoint = parentTransform * currentPose;

        // Iterate by reference so the recursion updates the real children.
        for (Joint& child : j->GetChildren()) {
            ApplyPosesToJoints(newPose, &child, modelSpaceJoint);
        }

        modelSpaceJoint = glm::transpose(j->GetInvBindPosition() * modelSpaceJoint);
        j->SetAnimationTransform(modelSpaceJoint);
    }
}
I then collect all of the new AnimatedTransforms for each joint and send them to the shader:
void AnimationModel::Render(bool& pass)
{
    [...]
    std::vector<glm::mat4> transforms = GetJointTransforms();
    for (int i = 0; i < transforms.size(); ++i) {
        m_Shader->SetMat4f(transforms[i], ("JointTransforms[" + std::to_string(i) + "]").c_str());
    }
    [...]
}

void AnimationModel::AddJointsToArray(Joint current, std::vector<glm::mat4>& matrix)
{
    glm::mat4 finalMatrix = current.GetAnimatedTransform();
    matrix.push_back(finalMatrix);
    for (Joint child : current.GetChildren()) {
        AddJointsToArray(child, matrix);
    }
}
In the shader, I simply follow the summation formula that can be found all over the web when researching this topic:
for (int i = 0; i < total_weight_amnt; ++i) {
    mat4 jointTransform = JointTransforms[jointIds[i]];
    vec4 newVertexPos = jointTransform * vec4(pos, 1.0);
    total_pos += newVertexPos * weights[i];
    [...]
}
---------- Reply to Normalizing Weights ------------
There were a few weights summing above 1, but after solving the error in my code, the model looked like this:
For calculating the weights, I loop through all pre-added weights in the vector, and if I find a weight that is less than the weight I'm looking to add, I replace that weight at that position. Otherwise, I append the weight onto the end of the vector. If there are fewer weights in my vector than my specified max_weights (which is 4), I fill in the remaining weights/jointIds with 0.
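For reference, here is a minimal sketch of one way to keep the four largest influences and renormalize them so they sum to 1. This is my own illustration, not the poster's code; all names are invented:
#include <algorithm>
#include <numeric>
#include <vector>

void CapAndNormalize(std::vector<int>& jointIds, std::vector<float>& weights, size_t maxWeights = 4)
{
    // Sort influence indices by descending weight so ids and weights stay paired.
    std::vector<size_t> order(weights.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](size_t a, size_t b) { return weights[a] > weights[b]; });

    // Keep at most maxWeights influences, padding with zeros if there are fewer.
    std::vector<int> ids;
    std::vector<float> w;
    for (size_t k = 0; k < order.size() && k < maxWeights; ++k) {
        ids.push_back(jointIds[order[k]]);
        w.push_back(weights[order[k]]);
    }
    while (w.size() < maxWeights) { ids.push_back(0); w.push_back(0.0f); }

    // Renormalize so the kept weights sum to 1.
    float sum = std::accumulate(w.begin(), w.end(), 0.0f);
    if (sum > 0.0f)
        for (float& x : w) x /= sum;

    jointIds = ids;
    weights = w;
}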
I understand that when something goes wrong in skinning animations, there can be a lot of different areas where the problem could be occurring. As such, for future googlers experiencing the same issue as me - take this as more of a list of suggestions of what you could be doing wrong, rather than what you are absolutely doing wrong.
For my problem, I had the right idea but the wrong approach in a lot of minor areas, which brought me fairly close but, as they say, no cigar.
1. I had no need to calculate the Inverse Bind Pose myself; Collada's Inverse Bind Pose (sometimes/often declared as an "offsetMatrix") is more than perfect. This wasn't so much a problem as unnecessary calculation.
2. In a Collada file, you are often provided more "joints" or "nodes" in the hierarchy than what is needed for the animation. Prior to the start of your actual animated "joints", there is the scene and an initial armature "node" type. The scene is typically an identity matrix that was manipulated based on your "up axis" upon reading in the Collada file. The node type will determine the overall size of each joint in the skeleton - so if it wasn't resized, it's probably the identity matrix. Make sure your hierarchy still contains ALL nodes/joints listed in the hierarchy. I very much was not doing so - which greatly distorted my globalPosition (bind pose).
3. If you are representing your joints' transform rotations with quaternions (which is highly recommended), make sure the resulting quaternion is normalized after interpolating between two rotated positions.
4. On the same note - when combining the rotation and translation into your final matrix, make sure your order of multiplication and the final output are correct.
5. Finally, your last skinning matrix is composed of your joint's InvBindMatrix * GlobalPosition * GlobalInverseRootTransform (<- this is the inverse of the local transform from your "scene" node mentioned in (2), remember?).
Based on your prior matrix multiplications up to this point, you may or may not need to transpose this final matrix (see the sketch after this list).
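To make points 3-5 concrete, here is a minimal sketch using glm. All variable names are mine, and the composition order assumes column-major glm matrices with no transpose (the reversed order plus a transpose, as described above, is the row-major equivalent):
// (3) Interpolate, then renormalize the quaternion.
glm::quat rot = glm::normalize(glm::slerp(prevRot, nextRot, t));
// (4) One common order: translate, then rotate.
glm::mat4 localPose = glm::translate(glm::mat4(1.0f), pos) * glm::mat4_cast(rot);
// Walk the hierarchy: parent's global transform times this joint's local pose.
glm::mat4 globalJoint = parentGlobal * localPose;
// (5) Final skinning matrix for this joint.
glm::mat4 skinning = globalInverseRootTransform * globalJoint * inverseBindMatrix;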
And with that - I was able to successfully animate my model!
One final note: my mesh and animation files are loaded in separately. If your animations are in separate files from your mesh, make sure you collect the skinning/joint information from the files with an animation, rather than from the file with the mesh. Here are my steps for loading in a model and then giving it multiple animations through different files (a sketch follows the list):
1. Load in the mesh (this contains Vertices, Normals, TexCoords, JointIds, Weights).
2. Load in the animation file (this gives the Skeleton, InverseBindPositions, and other info needed to bind the skeleton to the mesh). Once the skeleton and binding info are collected, gather the first animation's info from that file as well.
3. For another animation, the above skeleton should work fine for any other animation on the same mesh/model - just read in the animation information and store it in your chosen data structure. Repeat step 3 until happy.
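A hypothetical sketch of that flow (loader and type names are invented for illustration):
Mesh mesh = LoadMesh("model.dae");            // vertices, normals, texCoords, jointIds, weights
Skeleton skeleton = LoadSkeleton("walk.dae"); // joints and inverse bind matrices come from an animated file
Animation walk = LoadAnimation("walk.dae");
Animation run = LoadAnimation("run.dae");     // the same skeleton serves any animation on this mesh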

Is there a way to return the collision pairs which are in collision in Bullet?

I am currently building simulation software for robotics. I am using Bullet Physics to perform collision detection and to simulate robot rigid body dynamics.
I am able to successfully detect when two triangle collision meshes collide with each other. Now I would like to highlight the colliding pair by changing the colour of the meshes to red when a collision happens. To do so, I need to know which two meshes are colliding. Is there a way to return the collision pair which is in collision in Bullet?
I have looked through the Bullet Physics documentation and could not find anything useful.
At collision object creation, you're able to set CollisionObject.UserObject to an id, a name, ... or the mesh itself.
If you're using a custom ContactResultCallback, you will get the collided objects and the collision points as parameters of the AddSingleResult() method. The previously set user object could now help you to identify the collided meshes:
/// <inheritdoc cref="ContactResultCallback"/>
public override double AddSingleResult(ManifoldPoint manifoldPoint,
    CollisionObjectWrapper collisionObjectWrapper1,
    int partId1,
    int index1,
    CollisionObjectWrapper collisionObjectWrapper2,
    int partId2,
    int index2)
{
    var mesh1 = collisionObjectWrapper1.CollisionObject.UserObject as Mesh;
    var mesh2 = collisionObjectWrapper2.CollisionObject.UserObject as Mesh;
    // ...
    return 0;
}
If you're working with CollisionWorld.StepSimulation(...) and CollisionWorld.SetInternalTickCallback(OnSimulationTick) for "live" collision detection, you will also get the user objects:
private void OnSimulationTick(DynamicsWorld world, float timeStep)
{
    for (int i = 0; i < world.Dispatcher.NumManifolds; i++)
    {
        PersistentManifold contactManifold = world.Dispatcher.GetManifoldByIndexInternal(i);
        var mesh1 = contactManifold.Body0.UserObject as Mesh;
        var mesh2 = contactManifold.Body1.UserObject as Mesh;
        // ...
    }
    // ...
}
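For readers on the native C++ API rather than the BulletSharp bindings shown above, the equivalent hooks are btCollisionObject::setUserPointer and the dispatcher's manifold list. A rough sketch, where Mesh is a stand-in for your own type and each body's user pointer was set at creation with body->setUserPointer(myMesh):
void CheckContacts(btDynamicsWorld* world)
{
    btDispatcher* dispatcher = world->getDispatcher();
    for (int i = 0; i < dispatcher->getNumManifolds(); ++i)
    {
        btPersistentManifold* manifold = dispatcher->getManifoldByIndexInternal(i);
        if (manifold->getNumContacts() == 0)
            continue; // AABBs overlap, but the shapes are not actually touching

        Mesh* mesh1 = static_cast<Mesh*>(manifold->getBody0()->getUserPointer());
        Mesh* mesh2 = static_cast<Mesh*>(manifold->getBody1()->getUserPointer());
        // ... e.g. tint both meshes red
    }
}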

Weird artefact while rotating mesh UVs

I created a Unity sphere and applied a standard material with an albedo texture. Now I'm trying to rotate the mesh UVs (it looks like this is the simplest way to rotate the texture).
Here is the code:
using UnityEngine;

public class GameController : MonoBehaviour
{
    public GameObject player;
    public float rotationValue;

    void Start()
    {
        Mesh mesh = player.GetComponent<MeshFilter>().mesh;
        Vector2[] uvs = mesh.uv;
        mesh.uv = changeUvs(uvs);
    }

    private Vector2[] changeUvs(Vector2[] originalUvs)
    {
        for (int i = 0; i < originalUvs.Length; i++)
        {
            originalUvs[i].x = originalUvs[i].x + rotationValue;
            if (originalUvs[i].x > 1)
            {
                originalUvs[i].x = originalUvs[i].x - 1f;
            }
        }
        return originalUvs;
    }
}
This gives me this strange artifact. What am I doing wrong?
It can't be done the way you're trying to do it. Even if you go outside the [0,1] range, as pleluron suggests, there will be a line on the sphere where the texture interpolates from high to low, and you get the entire texture in a single band, as you see now.
The way the original sphere solves the problem is by having a duplicated seam: one version has x = 0 and the other has x = 1. You don't see it because the vertices are at the same location. If you want to solve the problem with UV trickery, then the only option is to move the seam, which involves creating a new mesh.
Actually, the simplest way to rotate the planet is to leave the texture alone and just rotate the object! If for some reason this is not an option, then go into the material and find the tiling and offset. If you're using the standard shader, make sure you use the top one, just below the emission checkbox. If you modify that X value, you get the effect you're trying to create with the script you posted.

How do I accomplish a proper trajectory with a Cocos2d-x node using Chipmunk 2D impulses and rotation?

I'm building a game with Cocos2d-x version 3.13.1 and I've decided to go with the built-in physics engine (Chipmunk 2D) to accomplish animations and collision detection. I have a simple projectile called BulletUnit that inherits from cocos2d::Node. It has a child sprite that displays artwork, and a rectangular physics body with the same dimensions as the artwork.
The BulletUnit has a method called fireAtPoint, which determines the angle between itself and the point specified, then sets the initial velocity based on the angle. On each update cycle, acceleration is applied to the projectile. This is done by applying impulses to the body based on an acceleration variable and the angle calculated in fireAtPoint. Here's the code:
bool BulletUnit::init() {
    if (!Unit::init()) return false;

    displaySprite_ = Sprite::createWithSpriteFrameName(frameName_);
    this->addChild(displaySprite_);

    auto physicsBody = PhysicsBody::createBox(displaySprite_->getContentSize());
    physicsBody->setCollisionBitmask(0);
    this->setPhysicsBody(physicsBody);

    return true;
}

void BulletUnit::update(float dt) {
    auto mass = this->getPhysicsBody()->getMass();
    this->getPhysicsBody()->applyImpulse({
        acceleration_ * mass * cosf(angle_),
        acceleration_ * mass * sinf(angle_)
    });
}

void BulletUnit::fireAtPoint(const Point &point) {
    angle_ = Trig::angleBetweenPoints(this->getPosition(), point);

    auto physicsBody = this->getPhysicsBody();
    physicsBody->setVelocityLimit(maxSpeed_);
    physicsBody->setVelocity({
        startingSpeed_ * cosf(angle_),
        startingSpeed_ * sinf(angle_)
    });
}
This works exactly as I want it to. As you can see in the image below, my bullets are accelerating as planned and traveling directly towards my mouse clicks.
But, there's one obvious flaw: the bullet is remaining flat instead of rotating to "point" towards the target. So, I adjust fireAtPoint to apply a rotation to the node. Here's the updated method:
void BulletUnit::fireAtPoint(const Point &point) {
    angle_ = Trig::angleBetweenPoints(this->getPosition(), point);

    // This rotates the node to make it point towards the target
    this->setRotation(angle_ * -180.0f / M_PI);

    auto physicsBody = this->getPhysicsBody();
    physicsBody->setVelocityLimit(maxSpeed_);
    physicsBody->setVelocity({
        startingSpeed_ * cosf(angle_),
        startingSpeed_ * sinf(angle_)
    });
}
This almost works. The bullet is pointing in the right direction, but the trajectory is now way off and seems to be arcing away from the target as a result of the rotation: the more drastic the rotation, the more drastic the arcing. The following image illustrates what's happening:
So, it seems that setting the rotation is causing the physics engine to behave in a way I hadn't originally expected. I've been racking my brain for ways to correct the flight path, but so far, no luck! Any suggestions would be greatly appreciated. Thanks!

Box2D creating rectangular bounding boxes around angled line bodies

I'm having a lot of trouble detecting collisions in a zero-G space game. Hopefully this image will help me explain:
http://i.stack.imgur.com/f7AHO.png
The white rectangle is a static body with a b2PolygonShape fixture attached, as such:
// Create the line physics body definition
b2BodyDef wallBodyDef;
wallBodyDef.position.Set(0.0f, 0.0f);
// Create the line physics body in the physics world
wallBodyDef.type = b2_staticBody; // Set as a static body
m_Body = world->CreateBody(&wallBodyDef);
// Create the vertex array which will be used to make the physics shape
b2Vec2 vertices[4];
vertices[0].Set(m_Point1.x, m_Point1.y); // Point 1
vertices[1].Set(m_Point1.x + (sin(angle - 90*(float)DEG_TO_RAD)*m_Thickness), m_Point1.y - (cos(angle - 90*(float)DEG_TO_RAD)*m_Thickness)); // Point 2
vertices[2].Set(m_Point2.x + (sin(angle - 90*(float)DEG_TO_RAD)*m_Thickness), m_Point2.y - (cos(angle - 90*(float)DEG_TO_RAD)*m_Thickness)); // Point 3
vertices[3].Set(m_Point2.x, m_Point2.y); // Point 3
int32 count = 4; // Vertex count
b2PolygonShape wallShape; // Create the line physics shape
wallShape.Set(vertices, count); // Set the physics shape using the vertex array above
// Define the dynamic body fixture
b2FixtureDef fixtureDef;
fixtureDef.shape = &wallShape; // Set the line shape
fixtureDef.density = 0.0f; // Set the density
fixtureDef.friction = 0.0f; // Set the friction
fixtureDef.restitution = 0.5f; // Set the restitution
// Add the shape to the body
m_Fixture = m_Body->CreateFixture(&fixtureDef);
m_Fixture->SetUserData("Wall");[/code]
You'll have to trust me that that makes the shape in the image. The physics simulation works perfectly, the player (small triangle) collides with the body with pixel perfect precision. However, I come to a problem when I try to determine when a collision takes place so I can remove health and what-not. The code I am using for this is as follows:
/*------ Check for collisions ------*/
if (m_Physics->GetWorld()->GetContactCount() > 0)
{
    if (m_Physics->GetWorld()->GetContactList()->GetFixtureA()->GetUserData() == (void*)"Player" &&
        m_Physics->GetWorld()->GetContactList()->GetFixtureB()->GetUserData() == (void*)"Wall")
    {
        m_Player->CollideWall();
    }
}
I'm aware there are probably better ways to do collisions, but I'm just a beginner and haven't found anywhere that explains how to do listeners and callbacks well enough for me to understand. The problem I have is that GetContactCount() shows a contact whenever the player body enters the purple box above. Obviously, there is a rectangular bounding box being created that encompasses the white rectangle.
I've tried making the fixture an EdgeShape, and the same thing happens. Does anyone have any idea what is going on here? I'd really like to get collision nailed down so I can move on to other things. Thank you very much for any help.
The bounding box is an AABB (axis-aligned bounding box), which means it will always be aligned with the Cartesian axes. AABBs are normally used for broad-phase collision detection because overlap testing is a relatively simple (and inexpensive) computation.
You need to make sure that you're testing against the OBB (oriented bounding box) for the objects if you want more accurate (but not pixel perfect, as Micah pointed out) results.
Also, I agree with Micah's answer that you will most likely need a more general system for handling collisions. Even if you only ever have just walls and the player, there's no guarantee of which object will be A and which will be B. And as you add other object types, this will quickly unravel.
Creating the contact listener isn't terribly difficult. From the docs (adapted to attempt to handle your situation):
class MyContactListener : public b2ContactListener
{
private:
    PlayerClass *m_Player;

public:
    MyContactListener(PlayerClass *player) : m_Player(player)
    { }

    void BeginContact(b2Contact* contact)
    { /* handle begin event */ }

    void EndContact(b2Contact* contact)
    {
        if (contact->GetFixtureA()->GetUserData() == m_Player
            || contact->GetFixtureB()->GetUserData() == m_Player)
        {
            m_Player->CollideWall();
        }
    }

    /* we're not interested in these for the time being */
    void PreSolve(b2Contact* contact, const b2Manifold* oldManifold)
    { /* handle pre-solve event */ }

    void PostSolve(b2Contact* contact, const b2ContactImpulse* impulse)
    { /* handle post-solve event */ }
};
This requires you to assign m_Player to the player's fixture's user data field. Then you can use the contact listener like so:
m_Physics->GetWorld()->SetContactListener(new MyContactListener(m_Player));
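For completeness, wiring up the user data beforehand might look like this (m_PlayerFixture is a hypothetical handle to the player's fixture):
m_PlayerFixture->SetUserData(m_Player); // store the player pointer on the fixture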
How do you know GetFixtureA is the player and B is the wall? Could it be reversed? Could there be a FixtureC? I would think you would need a more generic solution.
I've used a similar graphics framework (Qt), and it had something so you could grab any two objects and call something like 'hasCollided', which would return a bool. You could get away with not using a callback and just calling it in drawScene(), or checking it periodically.
In Box2D, the existence of a contact just means that the AABBs of two fixtures overlap. It does not necessarily mean that the shapes of the fixtures themselves are touching.
You can use the IsTouching() function of a contact to check if the shapes are actually touching, but the preferred way to deal with collisions is to use the callback feature to have the engine tell you whenever two fixtures start/stop touching. Using callbacks is much more efficient and easier to manage in the long run, though it may be a little more effort to set up initially and there are a few things to be careful about - see here for an example: http://www.iforce2d.net/b2dtut/collision-callbacks
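For example, a minimal polling version using the question's own names (m_Physics, m_Player), and assuming the player fixture's user data was set to m_Player as described above, might look like the sketch below; the callback approach is still the better long-term structure:
for (b2Contact* c = m_Physics->GetWorld()->GetContactList(); c; c = c->GetNext())
{
    if (!c->IsTouching())
        continue; // the fixtures' AABBs overlap, but the shapes do not

    if (c->GetFixtureA()->GetUserData() == m_Player || c->GetFixtureB()->GetUserData() == m_Player)
    {
        m_Player->CollideWall();
    }
}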