I have a simple model with a simple skeletal structure that I made in blender. Here's how it looks:
And here's the hierarchy in blender:
As you can see it has two bones: one that goes halfway up the rectangular box ("Bone") and is completely stationary, and another ("Bone.001") that goes from the halfway point up to the top and rotates.
I've imported the mesh using AssImpNet, and extracted the rotation, scaling and position keys from the animation node channels. When I apply those transformations to the mesh, I get this result (colored by bone weight):
The motion/animation seems to play correctly, so I believe this part works. Now this is where my understanding starts to break down, but I believe the crucial part I'm missing is calculating the "inverse bind pose" (I've seen a few names for it) and applying that to the bone transformations I feed into my shader as well. But so far I haven't been able to find exactly what it is that I need to extract from AssImp's format, and in what order to multiply it together, to get the correct final transformation. I've only found vague explanations about traversing the node tree and "undoing" transformations from each parent node.
Here's what I tried, which doesn't seem to be working:
I thought that, for the base bone ("Bone"), I would need:
- the global inverse transform
- the node transform of the node with name "Bone"
- the rotation/position/scale keys from the animation channel with name "Bone"
- the bone offset from Meshes.Bones.OffsetMatrix from the bone named "Bone"
and then multiply them together in that order.
Similarly for "Bone.001":
- the global inverse transform
- the node transform of the node with name "Bone"
- the rotation/position/scale keys from the animation channel with name "Bone"
- the node transform of the node with name "Bone.001"
- the rotation/position/scale keys from the animation channel with name "Bone.001"
- the bone offset from Meshes.Bones.OffsetMatrix from the bone named "Bone.001"
My attempt at implementing this (hard-coding the indexes for now, just to try to get things working) is below. Note that it's using C#/AssImpNet, so the naming conventions are a bit different from C++/AssImp:
// "Bone"
public Matrix4 Bone0Transform(double time)
{
var bone0 = MathUtils.ConvertMatrix(Scene.RootNode.Children[1].Children[0].Transform);
var frame0 = GetTransformedFrame(1, TimeToFrame(1, time));
var global = MathUtils.ConvertMatrix(Scene.RootNode.Transform).Inverted();
var offset0 = MathUtils.ConvertMatrix(Scene.Meshes[0].Bones[0].OffsetMatrix);
var total = global * bone0 * frame0 * offset0;
return total;
}
// "Bone.001"
public Matrix4 Bone1Transform(double time)
{
var bone0 = MathUtils.ConvertMatrix(Scene.RootNode.Children[1].Children[0].Transform);
var bone1 = MathUtils.ConvertMatrix(Scene.RootNode.Children[1].Children[0].Children[0].Transform);
var frame0 = GetTransformedFrame(1, TimeToFrame(1, time));
var frame1 = GetTransformedFrame(2, TimeToFrame(2, time));
var global = MathUtils.ConvertMatrix(Scene.RootNode.Transform).Inverted();
var offset1 = MathUtils.ConvertMatrix(Scene.Meshes[0].Bones[1].OffsetMatrix);
var total = global * bone0 * frame0 * bone1 * frame1 * offset1;
return total;
}
GetTransformedFrame returns a Matrix4 combining the scale, rotation and position keys for the frame that corresponds to the current time; on its own it gives the result you can see in the gif where the box is colored red/green.
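For context, here is a rough sketch of what GetTransformedFrame does conceptually, written against the C++ Assimp types since they map closely to AssImpNet's (my actual code is C#, and the key lookup here is simplified to the nearest preceding key rather than interpolating):

#include <assimp/anim.h>

// Sketch: build the local animation matrix for one channel (aiNodeAnim) at a
// given time, by taking the last position/rotation/scaling key at or before
// that time and composing them into a single local transform.
aiMatrix4x4 SampleChannel(const aiNodeAnim* channel, double timeInTicks)
{
    unsigned int p = 0;
    while (p + 1 < channel->mNumPositionKeys &&
           channel->mPositionKeys[p + 1].mTime <= timeInTicks) ++p;
    unsigned int r = 0;
    while (r + 1 < channel->mNumRotationKeys &&
           channel->mRotationKeys[r + 1].mTime <= timeInTicks) ++r;
    unsigned int s = 0;
    while (s + 1 < channel->mNumScalingKeys &&
           channel->mScalingKeys[s + 1].mTime <= timeInTicks) ++s;

    // aiMatrix4x4 has a constructor that composes scaling, rotation and
    // translation into one matrix.
    return aiMatrix4x4(channel->mScalingKeys[s].mValue,
                       channel->mRotationKeys[r].mValue,
                       channel->mPositionKeys[p].mValue);
}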
All this gives me is an obviously incorrect result:
So my question is this: Is my understanding of how to calculate the final bone transformations wrong? If so, what is the correct way of doing it?
I am trying to get animations to work for complex glTF models; at the moment I am trying the RiggedFigure sample.
In particular I am trying to calculate the globalTransformOfJointNode as described here: https://github.com/KhronosGroup/glTF-Tutorials/blob/master/gltfTutorial/gltfTutorial_020_Skins.md
For that I have this function:
std::vector<Eigen::Matrix4f> GetGlobalJointMatrices(
    const std::vector<Eigen::Matrix4f>& local_matrices,
    const std::vector<Eigen::Matrix4f>& animation_matrices,
    const std::map<int, int>& skeleton,
    const int start_joint)
{
    std::vector<Eigen::Matrix4f> skeleton_matrices(skeleton.size());
    skeleton_matrices[0] = Eigen::Matrix4f::Identity();

    for (auto [child, parent] : skeleton)
    {
        skeleton_matrices[child - start_joint] = skeleton_matrices[parent - start_joint] *
            local_matrices[child - start_joint] * animation_matrices[child - start_joint];
    }

    return skeleton_matrices;
}
Here, local_matrices refers to the local transform of each node in the skeleton, as described in the nodes array.
animation_matrices are only the animations that target nodes in the skin.
For both arrays, since the nodes are contiguous, the index i contains the value corresponding to node i + start_joint. I have verified that the animations are loaded correctly and in the correct order.
This is the result of doing:
skeleton_matrices[child - start_joint] = skeleton_matrices[parent - start_joint] * local_matrices[child - start_joint];
If I try to apply the animation matrices instead, I get:
Putting the animation matrices at other points results in similar behaviour.
I am sure I am not loading the matrices incorrectly; for example, these are the matrices that target nodes 6 and 10:
animation targetting node:10
rotation: 0.0245355 -0.319997 0.9446 0.0687827
translation: -0.00234646 -0.0661734 0.0278567
scales: 1 1 1
node 6:
animation targetting node:6
rotation: -0.0341418 -0.319178 0.946171 -0.0414678
translation: -0.00145852 -0.0661988 0.0278567
I went and verified that those values are correct manually, they are.
I don't understand what is wrong.
I feel the spec should be clearer about this. As it turns out, you should NOT apply the transform of the nodes and should only calculate the animation matrix: the animation matrices already contain the node matrix premultiplied.
TL;DR:
Do this:
skeleton_matrices[child - start_joint] = skeleton_matrices[parent - start_joint] *
    animation_matrices[child - start_joint];
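In case it helps, here is a minimal sketch of how each entry of animation_matrices can be built from the sampled translation/rotation/scale values (Eigen, matching the types above); the helper name and the assumption that the TRS values have already been interpolated for the current time are mine:

#include <Eigen/Dense>
#include <Eigen/Geometry>

// Build a node's local animation matrix from sampled TRS values. The order is
// translation, then rotation, then scale (T * R * S), which is how a glTF
// node transform is defined.
Eigen::Matrix4f LocalAnimationMatrix(const Eigen::Vector3f& translation,
                                     const Eigen::Quaternionf& rotation,
                                     const Eigen::Vector3f& scale)
{
    Eigen::Affine3f transform = Eigen::Affine3f::Identity();
    transform.translate(translation);
    transform.rotate(rotation.normalized());
    transform.scale(scale);
    return transform.matrix();
}

Note that, per the above, for animated nodes this matrix is used in place of the node's local matrix: it is not multiplied by local_matrices again.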
I am in the process of writing an animation system with my own Collada parser and am running into an issue that I can't wrap my head around.
I have collected my mesh/skin information (vertices, normals, jointIds, weights, etc), my skeleton information (joints, their local transforms, inverse bind position, hierarchy structure), and my animation (keyframe transform position for each joint, timestamp).
My issue is that with everything calculated and then implemented in the shader (the summation of weights multiplied by the joint transform and vertex position) - I get the following:
When I remove the weight multiplication, the mesh remains fully intact - however the skin doesn't actually follow the animation. I am at a loss, as I feel the math is correct, but very obviously I am going wrong somewhere. Would someone be able to shed light on the aspect I have misinterpreted?
Here is my current understanding and implementation:
After collecting all of the joints' localTransforms and hierarchy, I calculate their inverse bind transformation matrix. To do this I multiply each joint's localTransform with its parent's localTransform to get a bindTransform. Inverting that bindTransform results in the inverseBindTransform. Below is my code for that:
// Recursively collect each Joint's InverseBindTransform -
// the root joint's local position is an identity matrix.
// This function is only called once after data collection.
void Joint::CalcInverseBindTransform(glm::mat4 parentLocalPosition)
{
    glm::mat4 bindTransform = parentLocalPosition * m_LocalTransform;
    m_InverseBindPoseMatrix = glm::inverse(bindTransform);

    for (Joint child : Children) {
        child.CalcInverseBindTransform(bindTransform);
    }
}
Within my animator, during an animation, for each joint I take the JointTransforms for the two frames my currentTime is between and calculate the interpolated JointTransform. (A JointTransform simply has a vec3 for position and a quaternion for rotation.) I do this for every joint and then apply those interpolated values to each joint by again recursively multiplying the new frameLocalTransform by its parentLocalTransform. I take that bindTransform, multiply it by the invBindTransform, and then transpose the matrix. Below is the code for that:
std::unordered_map<int, glm::mat4> Animator::InterpolatePoses(float time) {
    std::unordered_map<int, glm::mat4> poses;
    if (IsPlaying()) {
        for (std::pair<int, JointTransform> keyframe : m_PreviousFrame.GetJointKeyFrames()) {
            JointTransform previousFrame = m_PreviousFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform nextFrame = m_NextFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform interpolated = JointTransform::Interpolate(previousFrame, nextFrame, time);
            poses[keyframe.first] = interpolated.getLocalTransform();
        }
    }
    return poses;
}

void Animator::ApplyPosesToJoints(std::unordered_map<int, glm::mat4> newPose, Joint* j, glm::mat4 parentTransform)
{
    if (IsPlaying()) {
        glm::mat4 currentPose = newPose[j->GetJointId()];
        glm::mat4 modelSpaceJoint = parentTransform * currentPose;

        for (Joint child : j->GetChildren()) {
            ApplyPosesToJoints(newPose, &child, modelSpaceJoint);
        }

        modelSpaceJoint = glm::transpose(j->GetInvBindPosition() * modelSpaceJoint);
        j->SetAnimationTransform(modelSpaceJoint);
    }
}
I then collect the new AnimatedTransforms for each joint and send them to the shader:
void AnimationModel::Render(bool& pass)
{
    [...]
    std::vector<glm::mat4> transforms = GetJointTransforms();
    for (int i = 0; i < transforms.size(); ++i) {
        m_Shader->SetMat4f(transforms[i], ("JointTransforms[" + std::to_string(i) + "]").c_str());
    }
    [...]
}

void AnimationModel::AddJointsToArray(Joint current, std::vector<glm::mat4>& matrix)
{
    glm::mat4 finalMatrix = current.GetAnimatedTransform();
    matrix.push_back(finalMatrix);

    for (Joint child : current.GetChildren()) {
        AddJointsToArray(child, matrix);
    }
}
In the shader, I simply follow the summation formula that can be found all over the web when researching this topic:
for (int i = 0; i < total_weight_amnt; ++i) {
    mat4 jointTransform = JointTransforms[jointIds[i]];
    vec4 newVertexPos = jointTransform * vec4(pos, 1.0);
    total_pos += newVertexPos * weights[i];
    [...]
---------- Reply to Normalizing Weights ------------
There were a few weights summing above 1, but after solving the error in my code the model looked like this:
For calculating the weights, I loop through all pre-added weights in the vector, and if I find a weight that is less than the weight I'm looking to add, I replace that weight at that position. Otherwise, I append the weight to the end of the vector. If there are fewer weights in my vector than my specified max_weights (which is 4), I fill in the remaining weights/jointIds with 0.
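For anyone who wants that weight handling in code, here is a rough sketch of the idea - keep the largest max_weights influences per vertex and renormalize them so they sum to 1 (the struct and function names are illustrative, not my actual code):

#include <algorithm>
#include <vector>

struct VertexInfluence { int jointId; float weight; };

// Keep the maxWeights largest influences and renormalize so they sum to 1.
// Pads with jointId 0 / weight 0 when a vertex has fewer influences.
void LimitAndNormalize(std::vector<VertexInfluence>& influences, int maxWeights = 4)
{
    std::sort(influences.begin(), influences.end(),
              [](const VertexInfluence& a, const VertexInfluence& b) {
                  return a.weight > b.weight;
              });
    if ((int)influences.size() > maxWeights)
        influences.resize(maxWeights);

    float sum = 0.0f;
    for (const VertexInfluence& inf : influences) sum += inf.weight;
    if (sum > 0.0f)
        for (VertexInfluence& inf : influences) inf.weight /= sum;

    while ((int)influences.size() < maxWeights)
        influences.push_back({0, 0.0f});
}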
I understand that when something goes wrong in skinning animations, there can be a lot of different areas the problem is occurring in. As such, for future googlers experiencing the same issue I was - take this as more of a list of suggestions of what you could be doing wrong rather than what you are absolutely doing wrong.
For my problem - I had the right idea but the wrong approach in a lot of minor areas, which brought me fairly close but, as they say, no cigar.
I had no need to calculate the inverse bind pose myself; Collada's inverse bind pose (sometimes/often declared as an "offsetMatrix") is more than adequate. This wasn't so much a problem as unnecessary calculation.
In a Collada file, you are often given more "joints" or "nodes" in the hierarchy than are needed for the animation. Prior to the start of your actual animated "joints", there is the scene and an initial armature "node". The scene is typically an identity matrix that was adjusted based on your "up axis" when the Collada file was read in. The armature node determines the overall size of each joint in the skeleton - so if it wasn't resized, it's probably the identity matrix. Make sure your hierarchy still contains ALL nodes/joints listed in the hierarchy. I was not doing so - which greatly distorted my globalPosition (bind pose).
If you are representing your joints' rotations with quaternions (which is highly recommended), make sure the resulting quaternion is normalized after interpolating between two rotated positions.
On the same note - when combining the rotation and translation into your final matrix - make sure your order of multiplication and the final output are correct.
Finally - your last skinning matrix is comprised of your joint's InvBindMatrix * GlobalPosition * GlobalInverseRootTransform (<- this last one is the inverse of the local transform of your "scene" node mentioned above, remember?).
Based on your prior matrix multiplications up to this point, you may or may not need to transpose this final matrix.
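To make that last point concrete, here is a minimal glm sketch of the final skinning matrix (variable names are mine; I show the usual column-vector order, and as noted you may or may not need a transpose, or the reverse order, depending on your conventions):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// globalInverseRootTransform: inverse of the "scene" node's local transform.
// globalJointTransform: the joint's animated model-space transform (the
//   parent's global transform * the joint's interpolated local pose,
//   accumulated down the hierarchy).
// inverseBindMatrix: the joint's inverse bind pose from the Collada file.
glm::mat4 SkinningMatrix(const glm::mat4& globalInverseRootTransform,
                         const glm::mat4& globalJointTransform,
                         const glm::mat4& inverseBindMatrix)
{
    return globalInverseRootTransform * globalJointTransform * inverseBindMatrix;
}

// And when interpolating rotations, normalize before turning the quaternion
// into a matrix, e.g.:
//   glm::quat q = glm::normalize(glm::slerp(prevRotation, nextRotation, t));
//   glm::mat4 rotationPart = glm::mat4_cast(q);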
And with that - I was able to successfully animate my model!
One final note - my mesh and animation files are added in separately. If your animations are in separate files from your mesh, make sure you collect the skinning/joint information from the files with an animation rather than the file with the mesh. I list my steps for loading in a model and then giving it multiple animations through different files:
Load in the Mesh (This contains Vertices, Normals, TexCoords, JointIds, Weights)
Load in the animation file (This gives Skeleton, InverseBindPositions, and other needed info to bind skeleton to mesh) - Once skeleton and binding info is collected, gather first animation info from that file as well.
For another animation, the above Skeleton should work fine for any other animation on the same mesh/model - just read in the animation information and store in your chosen data structure. Repeat step 3 til happy.
I am trying to write a file save application using the Autodesk FBXSDK. I have this working fine using Euler rotations, but I need to update it to use quaternions.
The relevant function is:
bool CreateScene(FbxScene* pScene, double lFocalLength, int startFrame)
{
    // Create Camera
    FbxNode* lMyCameraNode = FbxNode::Create(pScene, "p_camera");

    // connect camera node to root node
    FbxNode* lRootNode = pScene->GetRootNode();
    lRootNode->ConnectSrcObject(lMyCameraNode);

    FbxCamera* lMyCamera = FbxCamera::Create(pScene, "Root_camera");
    lMyCameraNode->SetNodeAttribute(lMyCamera);

    // Create an animation stack
    FbxAnimStack* myAnimStack = FbxAnimStack::Create(pScene, "My stack");

    // Create the base layer (this is mandatory)
    FbxAnimLayer* pAnimLayer = FbxAnimLayer::Create(pScene, "Layer0");
    myAnimStack->AddMember(pAnimLayer);

    // Get the camera's curve node for local translation.
    FbxAnimCurveNode* myAnimCurveNodeRot = lMyCameraNode->LclRotation.GetCurveNode(pAnimLayer, true);

    // create curve nodes
    FbxAnimCurve* myRotXCurve = NULL;
    FbxAnimCurve* myRotYCurve = NULL;
    FbxAnimCurve* myRotZCurve = NULL;

    FbxTime lTime;      // For the start and stop keys.
    int lKeyIndex = 0;  // Index for the keys that define the curve.

    // Get the animation curve for local rotation of the camera.
    myRotXCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_X, true);
    myRotYCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Y, true);
    myRotZCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Z, true);

    // This to add keys, per frame.
    float frameNumber = startFrame;
    for (int i = 0; i < rec.size(); i++)
    {
        lTime.SetFrame(frameNumber); // frame number

        // rx
        lKeyIndex = myRotXCurve->KeyAdd(lTime);
        myRotXCurve->KeySet(lKeyIndex, lTime, recRotX[i], FbxAnimCurveDef::eInterpolationLinear);
        // ry
        lKeyIndex = myRotYCurve->KeyAdd(lTime);
        myRotYCurve->KeySet(lKeyIndex, lTime, recRotY[i], FbxAnimCurveDef::eInterpolationLinear);
        // rz
        lKeyIndex = myRotZCurve->KeyAdd(lTime);
        myRotZCurve->KeySet(lKeyIndex, lTime, recRotZ[i], FbxAnimCurveDef::eInterpolationLinear);

        frameNumber += 1;
    }

    return true;
}
I would ideally like to pass in quaternion data here, instead of the Euler x, y, z values. Is this possible with the FBX SDK, or do I need to convert my quaternion data first and continue to pass in Euler angles?
Thank you.
You always need to go back to Euler angles, as you can only get animation curves for the XYZ rotation. The only thing you have control over is the rotation order.
However, you can use FbxQuaternion for your calculations, then use .DecomposeSphericalXYZ() to get XYZ Euler angles.
The accepted answer does not work, although the documentation definitely implies it should ("Create an Euler XYZ equivalent to the current quaternion."). An Autodesk employee claims that it does not ("DecomposeSphericalXYZ does not convert to Euler angles"), and this is borne out by my testing. In the current FBX SDK, there are at least two relatively easy ways you can convert a quat to what they call an euler, or to something suitable for LclRotation. The first is via FbxAMatrix:
FbxQuaternion fq = ...;
FbxAMatrix fa;
fa.SetQ(fq);
FbxVector4 fe = fa.GetR();
The second is via FbxVector4::SetXYZ:
FbxVector4 fe2;
fe2.SetXYZ(fq);
I've successfully gone from an XYZ rotation sequence → quaternion → Euler with both methods, and retrieved the same rotation sequence. When I use DecomposeSphericalXYZ I get a slightly different FbxVector4. I haven't tried to figure out what they mean by "euler in spherical coordinates".
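Tying this back to the key-setting loop in the question, here is a minimal sketch using the FbxAMatrix route. It assumes a std::vector<FbxQuaternion> called recRotQuat paralleling the original recRotX/Y/Z arrays (that container is mine, not part of the original code), and reuses lTime, lKeyIndex, frameNumber and the three curves from CreateScene:

// Convert each per-frame quaternion to XYZ Euler angles (degrees) via
// FbxAMatrix, then key the existing X/Y/Z rotation curves as before.
for (int i = 0; i < (int)recRotQuat.size(); i++)
{
    lTime.SetFrame(frameNumber);

    FbxAMatrix rotationMatrix;
    rotationMatrix.SetQ(recRotQuat[i]);
    FbxVector4 euler = rotationMatrix.GetR();

    lKeyIndex = myRotXCurve->KeyAdd(lTime);
    myRotXCurve->KeySet(lKeyIndex, lTime, (float)euler[0], FbxAnimCurveDef::eInterpolationLinear);
    lKeyIndex = myRotYCurve->KeyAdd(lTime);
    myRotYCurve->KeySet(lKeyIndex, lTime, (float)euler[1], FbxAnimCurveDef::eInterpolationLinear);
    lKeyIndex = myRotZCurve->KeyAdd(lTime);
    myRotZCurve->KeySet(lKeyIndex, lTime, (float)euler[2], FbxAnimCurveDef::eInterpolationLinear);

    frameNumber += 1;
}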
Years later, I hit this issue again, and found a simple answer.
After you have set all the keys that you need, just use this filter:
FbxAnimCurveFilterUnroll filter;
filter.Apply(*myAnimCurveNodeRot);
This seems to function the same as the 'Euler Filter' in Maya, or the 'Gimbal Killer' filter in Motionbuilder.
I have set up a scene in SceneKit and have issued a hit-test to select an item. However, I want to be able to move that item along a plane in my scene. I continue to receive mouse drag events, but don't know how to transform those 2D coordinates into 3D coordinate in the scene.
My case is very simple. The camera is located at 0, 0, 50 and pointed at 0, 0, 0. I just want to drag my object along the z-plane with a z-value of 0.
The hit-test works like a charm, but how do I translate the mouse point from a drag event into a new position in the scene for the 3D object I am dragging?
You don't need to use invisible geometry — Scene Kit can do all the coordinate conversions you need without having to hit test invisible objects. Basically you need to do the same thing you would in a 2D drawing app for moving an object: find the offset between the mouseDown: location and the object position, then for each mouseMoved:, add that offset to the new mouse location to set the object's new position.
Here's an approach you could use...
1. Hit-test the initial click location as you're already doing. This gets you an SCNHitTestResult object identifying the node you want to move, right?
2. Check the worldCoordinates property of that hit test result. If the node you want to move is a child of the scene's rootNode, this is the vector you want for finding the offset. (Otherwise you'll need to convert it to the coordinate system of the parent of the node you want to move — see convertPosition:toNode: or convertPosition:fromNode:.)
3. You're going to need a reference depth for this point so you can compare mouseMoved: locations to it. Use projectPoint: to convert the vector you got in step 2 (a point in the 3D scene) back to screen space — this gets you a 3D vector whose x- and y-coordinates are a screen-space point and whose z-coordinate tells you the depth of that point relative to the clipping planes (0.0 is on the near plane, 1.0 is on the far plane). Hold onto this z-coordinate for use during mouseMoved:.
4. Subtract the position of the node you want to move from the mouse location vector you got in step 2. This gets you the offset of the mouse click from the object's position. Hold onto this vector — you'll need it until dragging ends.
5. On mouseMoved:, construct a new 3D vector from the screen coordinates of the new mouse location and the depth value you got in step 3. Then, convert this vector into scene coordinates using unprojectPoint: — this is the mouse location in your scene's 3D space (equivalent to the one you got from the hit test, but without needing to "hit" scene geometry).
6. Add the offset you got in step 4 to the new location you got in step 5 - this is the new position to move the node to. (Note: for live dragging to look right, you should make sure this position change isn't animated. By default the duration of the current SCNTransaction is zero, so you don't need to worry about this unless you've changed it already.)
(This is sort of off the top of my head, so you should probably double-check the relevant docs and headers. And you might be able to simplify this a bit with some math.)
As an experiment I implemented Mr Bishop's helpful answer. The drag doesn't quite work (the object - a chess piece - jumps off screen) because of differences in the coordinate magnitudes between the mouse click and the 3-D world. I've inserted log outputs here and there among the code.
I asked on the Apple forums if anyone knew the secret sauce to homogenize the coordinates but didn't get a decisive answer. One thing, I had made some experimental changes to Mr Bishop's method and the forum members advised me to return to his technique.
Despite my code's failings, I thought someone might find it a useful starting point. I suspect there are only one or two small problems with the code.
Note that the log of the world transform matrix of the object (chess piece) is not part of the process but one Apple forum member advised me that the matrix often offers a useful 'sanity check' - which indeed it did.
- (NSPoint) viewPointForEvent: (NSEvent *) event_
{
    NSPoint windowPoint = [event_ locationInWindow];
    NSPoint viewPoint = [self.view convertPoint: windowPoint
                                       fromView: nil];
    return viewPoint;
}

- (SCNHitTestResult *) hitTestResultForEvent: (NSEvent *) event_
{
    NSPoint viewPoint = [self viewPointForEvent: event_];
    CGPoint cgPoint = CGPointMake (viewPoint.x, viewPoint.y);
    NSArray * points = [(SCNView *) self.view hitTest: cgPoint
                                              options: @{}];
    return points.firstObject;
}
- (void) mouseDown: (NSEvent *) theEvent
{
    SCNHitTestResult * result = [self hitTestResultForEvent: theEvent];
    SCNVector3 clickWorldCoordinates = result.worldCoordinates;
    // log output: clickWorldCoordinates x 208.124578, y -12827.223365, z 3163.659073

    SCNVector3 screenCoordinates = [(SCNView *) self.view projectPoint: clickWorldCoordinates];
    // log output: screenCoordinates x 245.128906, y 149.335938, z 0.985565

    // save the z coordinate for use in mouseDragged
    mouseDownClickOnObjectZCoordinate = screenCoordinates.z;

    selectedPiece = result.node; // save selected piece for use in mouseDragged
    SCNVector3 piecePosition = selectedPiece.position;
    // log output: piecePosition x -18.200000, y 6.483060, z 2.350000

    offsetOfMouseClickFromPiece.x = clickWorldCoordinates.x - piecePosition.x;
    offsetOfMouseClickFromPiece.y = clickWorldCoordinates.y - piecePosition.y;
    offsetOfMouseClickFromPiece.z = clickWorldCoordinates.z - piecePosition.z;
    // log output: offsetOfMouseClickFromPiece x 226.324578, y -12833.706425, z 3161.309073
}
- (void) mouseDragged: (NSEvent *) theEvent
{
    NSPoint viewClickPoint = [self viewPointForEvent: theEvent];

    SCNVector3 clickCoordinates;
    clickCoordinates.x = viewClickPoint.x;
    clickCoordinates.y = viewClickPoint.y;
    clickCoordinates.z = mouseDownClickOnObjectZCoordinate;
    // log output: clickCoordinates x 246.128906, y 0.000000, z 0.985565

    // log output: pieceWorldTransform:
    // m11 = 242.15889219510001, m12 = -0.000045609300002524833, m13 = -0.00000721691076126, m14 = 0,
    // m21 = 0.0000072168760805499971, m22 = -0.000039452697396149999, m23 = 242.15890446329999, m24 = 0,
    // m31 = -0.000045609300002524833, m32 = -242.15889219510001, m33 = -0.000039452676995750002, m34 = 0,
    // m41 = -4268.2349924762348, m42 = -12724.050221935429, m43 = 4852.6652710104272, m44 = 1)

    SCNVector3 newPiecePosition;
    newPiecePosition.x = offsetOfMouseClickFromPiece.x + clickCoordinates.x;
    newPiecePosition.y = offsetOfMouseClickFromPiece.y + clickCoordinates.y;
    newPiecePosition.z = offsetOfMouseClickFromPiece.z + clickCoordinates.z;
    // log output: newPiecePosition x 472.453484, y -12833.706425, z 3162.294639

    selectedPiece.position = newPiecePosition;
}
I used the code written by Steve and with little modification it worked for me.
On mouseDown I save clickWorldCoordinates on a property called startClickWorldCoordinates.
On mouseDragged I calculate the selectedPiece position in this way:
SCNVector3 worldClickCoordinate = [(SCNView *) self.view unprojectPoint: clickCoordinates];
newPiecePosition.x = selectedPiece.position.x + worldClickCoordinate.x - startClickWorldCoordinates.x;
newPiecePosition.y = selectedPiece.position.y + worldClickCoordinate.y - startClickWorldCoordinates.y;
newPiecePosition.z = selectedPiece.position.z + worldClickCoordinate.z - startClickWorldCoordinates.z;
selectedPiece.position = newPiecePosition;
startClickWorldCoordinates = worldClickCoordinate;
There are many answers to this problem, but I'm not sure they all work with XTK; for example, I've seen multiple answers for this in Three.js, but of course XTK and Three.js don't have the same API. Using a ray and a matrix seems very similar to many solutions for other frameworks, but I'm still not grasping a possible solution here. For now, just finding the X, Y and Z coordinates and recording them to console.log is fine; later I was hoping to create a caption/tooltip to display the information, but there are other ways to display it as well. Can someone at least tell me if it is possible to use a ray to collide with the objects? I'm not sure how collision works in XTK with meshes or any other files. Any hints right now would be great!
Here is my function to unproject in xtk. Please tell me if you see mistakes. With the resulting point and the camera position I should be able to find my intersections. To make the computation faster for the following step, I'll call it in a pick event, so I'll only have to try the intersections with a given object. If I have time I'll also try testing the bounding boxes.
Nota bene: the last lines are not required; I could work on the ray instead of the point.
X.camera3D.prototype.unproject = function (x, y) {
    // get the 4x4 model-view matrix
    var mvMatrix = this._view;

    // create the 4x4 projection matrix from the flattened gl version
    var pMatrix = new X.matrix(4, 4);
    for (var i = 0; i < 16; i++) {
        pMatrix.setValueAt(i - 4 * Math.floor(i / 4), Math.floor(i / 4), this._perspective[i]);
    }

    // compute the product and invert it
    var mvpMatrix = pMatrix.multiply(mvMatrix); /** Edit: wrong product corrected **/
    var inverse_mvpMatrix = mvpMatrix.getInverse();
    if (!goog.isDefAndNotNull(inverse_mvpMatrix))
        throw new Error("Could not inverse the transformation matrix.");

    // check that x & y are mapped to the [-1,1] interval (required for the computations)
    if (x < -1 || x > 1 || y < -1 || y > 1)
        throw new Error("Invalid x or y coordinate, it must be between -1 and 1");

    // fill the 4x1 normalized (in [-1,1]⁴) vector for the screen point in the camera's basis
    var point4f = new X.matrix(4, 1);
    point4f.setValueAt(0, 0, x);
    point4f.setValueAt(1, 0, y);
    point4f.setValueAt(2, 0, -1.0); // 2*?-1, with ?=0 for the near plane and ?=1 for the far plane
    point4f.setValueAt(3, 0, 1.0);  // homogeneous coordinate, arbitrarily set to 1

    // compute the picked ray in the world's basis in homogeneous coordinates
    var ray4f = inverse_mvpMatrix.multiply(point4f);
    if (ray4f.getValueAt(3, 0) == 0)
        throw new Error("Ray is not valid.");

    // return non-homogeneous coordinates to compute the 3D direction vector
    var point3f = new X.matrix(3, 1);
    point3f.setValueAt(0, 0, ray4f.getValueAt(0, 0) / ray4f.getValueAt(3, 0));
    point3f.setValueAt(1, 0, ray4f.getValueAt(1, 0) / ray4f.getValueAt(3, 0));
    point3f.setValueAt(2, 0, ray4f.getValueAt(2, 0) / ray4f.getValueAt(3, 0));
    return point3f;
};
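For the bounding-box test mentioned above, the math itself is independent of xtk's API. Here is a generic slab-test sketch written as plain C++ (names are illustrative); the ray direction would be the normalized vector from the camera position to the unprojected point returned above:

#include <algorithm>
#include <cmath>
#include <limits>

// Slab test: does the ray origin + t*dir (t >= 0) hit the axis-aligned box
// [boxMin, boxMax]?
bool RayHitsAABB(const float origin[3], const float dir[3],
                 const float boxMin[3], const float boxMax[3])
{
    float tNear = 0.0f;
    float tFar = std::numeric_limits<float>::max();
    for (int axis = 0; axis < 3; ++axis) {
        if (std::fabs(dir[axis]) < 1e-8f) {
            // Ray parallel to this slab: miss if the origin lies outside it.
            if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis])
                return false;
            continue;
        }
        float t1 = (boxMin[axis] - origin[axis]) / dir[axis];
        float t2 = (boxMax[axis] - origin[axis]) / dir[axis];
        tNear = std::max(tNear, std::min(t1, t2));
        tFar  = std::min(tFar,  std::max(t1, t2));
    }
    return tNear <= tFar;
}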
Edit
Here, in my repo, you can find functions in camera3D.js and renderer3D.js for efficient 3D picking in xtk.
Right now this is not easily possible. I guess you could grab the view matrix of the camera to calculate the position. If you do so, it would be great to bring it back into XTK as built-in functionality!
Currently, only object picking is possible like this (r is an X.renderer3D):
/**
 * Picks an object at a position defined by display coordinates. If
 * X.renderer3D.config['PICKING_ENABLED'] is FALSE, this function always returns
 * -1.
 *
 * @param {!number} x The X-value of the display coordinates.
 * @param {!number} y The Y-value of the display coordinates.
 * @return {number} The ID of the found X.object or -1 if no X.object was found.
 */
var pick = r.pick($X, $Y);