What do I want to do?
I work with a Franka Emika Panda and use the "cartesian_impedance_example_controller" with its "equilibrium_pose" topic to move the panda arm.
I want to use a command to rotate the arm around the axes of the "panda_rightfinger" joint (the axes of the interactive marker seen in the picture). The rotation should only happen around one axis at a time and is triggered by pressing a specific button.
(Right finger frame with the interactive marker around it and panda_link0 frame on the left)
How do I do it?
The rotation quaternion gets created by a function that uses the following script:
import tf
from geometry_msgs.msg import Quaternion

axis = {
    "roll": 0,
    "pitch": 0,
    "yaw": 0
}

def pyr_producer(self, gesture_msg):
    global axis
    # increase the angle for the axis named by the gesture class by 0.01 rad
    axis[gesture_msg.cls] += 1 * 0.01
    return list(axis.values())

def get_quaternion(self, gesture_msg):
    roll, pitch, yaw = self.pyr_producer(gesture_msg)
    q_rot = tf.transformations.quaternion_from_euler(roll, pitch, yaw)
    return Quaternion(*q_rot)
Afterwards, this rotation quaternion is used by another script and gets published to the corresponding equilibrium_pose topic.
This part of the script calculates the rotation:
eq_pose: the new pose that will be published to the topic
current_goal_pose: the pose that contains the current rotation
last_goal_pose: the pose that contains the last rotation
eq_pose.pose.position = last_goal_pose.pose.position
eq_pose.pose.orientation = orientation_producer.get_quaternion(goal_pose.gesture)
# calculate the relative quaternion from the last pose to the new pose
# (see http://wiki.ros.org/tf2/Tutorials/Quaternions)
# add relative rotation quaternion to the new equilibrium orientation by multiplying
q_equilibrium = [eq_pose.pose.orientation.x, eq_pose.pose.orientation.y,
eq_pose.pose.orientation.z, eq_pose.pose.orientation.w]
q_2 = [current_goal_pose.pose.orientation.x, current_goal_pose.pose.orientation.y,
current_goal_pose.pose.orientation.z, current_goal_pose.pose.orientation.w]
# Negate w value for inverse
q_1_inv = [last_goal_pose.pose.orientation.x, last_goal_pose.pose.orientation.y,
last_goal_pose.pose.orientation.z, (-1)*last_goal_pose.pose.orientation.w]
q_relative = tf.transformations.quaternion_multiply(q_2, q_1_inv)
q_equilibrium = tf.transformations.quaternion_multiply(q_relative, q_equilibrium)
eq_pose.pose.orientation.x = q_equilibrium[0]
eq_pose.pose.orientation.y = q_equilibrium[1]
eq_pose.pose.orientation.z = q_equilibrium[2]
eq_pose.pose.orientation.w = q_equilibrium[3]
# update last pose
last_goal_pose = current_goal_pose
# Only publish poses when there is an interaction
eq_publisher.publish(eq_pose)
The initial eq_pose gets set from the robot state by this part:
import numpy as np
import rospy
import tf
from franka_msgs.msg import FrankaState

# eq_pose (a PoseStamped) and initial_eq_pose_found are module-level globals
# defined elsewhere in the script.

def franka_state_callback(msg):
    global eq_pose
    global initial_eq_pose_found
    # the initial pose has to be retrieved only once
    if initial_eq_pose_found:
        return
    # O_T_EE is stored column-major, so reshape and transpose before use
    initial_quaternion = \
        tf.transformations.quaternion_from_matrix(
            np.transpose(np.reshape(msg.O_T_EE,
                                    (4, 4))))
    initial_quaternion = initial_quaternion / np.linalg.norm(initial_quaternion)
    eq_pose.pose.orientation.x = initial_quaternion[0]
    eq_pose.pose.orientation.y = initial_quaternion[1]
    eq_pose.pose.orientation.z = initial_quaternion[2]
    eq_pose.pose.orientation.w = initial_quaternion[3]
    eq_pose.pose.position.x = msg.O_T_EE[12]
    eq_pose.pose.position.y = msg.O_T_EE[13]
    eq_pose.pose.position.z = msg.O_T_EE[14]
    initial_eq_pose_found = True
    rospy.loginfo("Initial panda pose found: " + str(initial_eq_pose_found))
    rospy.loginfo("Initial panda pose: " + str(eq_pose))

if __name__ == "__main__":
    # (rospy.init_node(...) is assumed to have been called elsewhere in the script)
    state_sub = rospy.Subscriber("/panda/franka_state_controller/franka_states",
                                 FrankaState, franka_state_callback)
    while not initial_eq_pose_found:
        rospy.sleep(1)
    state_sub.unregister()
What actually happens
The rotation itself works, but it happens around the axes of "panda_link0", which is the fixed base frame of the Panda. The rotation should be the same as the one around the interactive marker in the interactive marker example.
Final Question
So I want to know how to calculate the quaternions for this rotation.
I am quite new to robotics and hope my description was clear.
Okay, I just found my mistake; as expected, it was very simple:
Quaternion multiplication is not commutative. Because of that, I just had to change the calculation of the quaternion from
q_equilibrium = tf.transformations.quaternion_multiply(q_relative, q_equilibrium)
to
q_equilibrium = tf.transformations.quaternion_multiply(q_equilibrium,q_relative)
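For anyone hitting the same issue, here is a minimal sketch (assuming tf.transformations is available; the angle values are made-up examples) of why the order matters: pre-multiplying by the relative quaternion applies it about the fixed base axes (panda_link0), while post-multiplying applies it about the current end-effector axes.
import tf.transformations as tft

q_eq = tft.quaternion_from_euler(0.0, 0.0, 1.57)   # current equilibrium orientation (example value)
q_rel = tft.quaternion_from_euler(0.1, 0.0, 0.0)   # small relative rotation (example value)

# q_rel * q_eq: q_rel is applied about the fixed base axes (panda_link0)
q_world = tft.quaternion_multiply(q_rel, q_eq)

# q_eq * q_rel: q_rel is applied about the current end-effector axes
q_local = tft.quaternion_multiply(q_eq, q_rel)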
The task
I need to convert the coordinate system to +X forward, +Y right and +Z up (left-handed, like the one in Unreal Engine). The crucial part is that I want my camera to face its forward axis (again, like in Unreal Engine). Here is how it works in Unreal:
The problem
I have managed to achieve both things: my camera now faces its forward direction and world coordinates are the same as in UE, HOWEVER, I've stumbled upon a big issue. For each object:
pitch rotation (around RightVector) is now clockwise
yaw rotation (around UpVector) is now clockwise
roll rotation (around ForwardVector) is counter-clockwise (as it was before)
I need these rotations to be all counter-clockwise (as per standard) and keep the camera facing the forward vector.
My current solution attempt
My current setup relies on rotating the projection and camera view matrices (model matrix in my code is called entity matrix):
#define GLM_FORCE_LEFT_HANDED // it's defined somewhere else but so you're aware
// Transform::VectorForward = (1, 0, 0) // these are used below
// Transform::VectorRight = (0, 1, 0)
// Transform::VectorUp = (0, 0, 1)
// The entity matrix update function which updates the model matrix for each object
virtual void UpdateEntityMatrix()
{
    auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[2]), Transform::VectorUp); // yaw rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[1]), Transform::VectorRight); // pitch rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), Transform::VectorForward); // roll rotation
    m_EntityMatrix = glm::scale(m, m_Transform.Scale);
}

// The entity matrix update in my camera class (child of entity) which also updates the view matrix
void Camera::UpdateEntityMatrix()
{
    SceneEntity::UpdateEntityMatrix();

    auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
    m = glm::rotate(m, glm::radians(180.f), GetRightVector()); // NOTE: I'm getting the current Forward/Right/Up vectors of the entity
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[2]), GetUpVector()); // yaw rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[1]), GetRightVector()); // pitch rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), GetForwardVector()); // roll rotation
    m = glm::scale(m, m_Transform.Scale);

    m_ViewMatrix = glm::inverse(m);
    m_ViewProjectionMatrix = m_ProjectionMatrix * m_ViewMatrix;
}

void PerspectiveCamera::UpdateProjectionMatrix()
{
    m_ProjectionMatrix = glm::perspective(glm::radians(m_FOV), m_Width / m_Height, m_NearClip, m_FarClip);
    // we want the camera to face +x:
    m_ProjectionMatrix = glm::rotate(m_ProjectionMatrix, glm::radians(-90.f), Transform::VectorUp);
    m_ProjectionMatrix = glm::rotate(m_ProjectionMatrix, glm::radians(90.f), Transform::VectorRight);
}
This is my result so far (the camera visualization shows the rotation of the currently used camera):
What else I tried
I tried modifying the model matrix (note the -):
virtual void UpdateEntityMatrix()
{
    auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
    m = glm::rotate(m, glm::radians(-m_Transform.Rotation[2]), Transform::VectorUp); // yaw rotation
    m = glm::rotate(m, glm::radians(-m_Transform.Rotation[1]), Transform::VectorRight); // pitch rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), Transform::VectorForward); // roll rotation
    m_EntityMatrix = glm::scale(m, m_Transform.Scale);
}
But it makes my camera not face forward anymore (the view is the camera's rotation but mirrored on the Y and Z axes):
I tried to fix it by applying the same change when calculating the camera view matrix, but it didn't help (still mirrored, or other issues).
On top of that, I tried experimenting with the glm::lookAt function to create the view matrix but never achieved anything.
Update: I've found a solution
I think my problem was actually how to define a counter-clockwise rotation. It turns out Unreal's rotations are not consistently counter-clockwise. I think it's best to define a counter-clockwise rotation as seen when looking in the direction the axis arrow points - as if you were behind it. Here is my conclusion:
Conclusion
Unless somebody finds an error in the system I defined, I'll stick to it. The code is contained in the "My current solution attempt" section. It seems I was correct from the get-go.
However, the answer provided by @paddy does solve my original issue - converting clockwise to counter-clockwise. Upon further attempts, I was able to correctly replicate Unreal's system while keeping my camera facing forward.
I am trying to write a file save application using the Autodesk FBXSDK. I have this working fine using Euler rotations, but I need to update it to use quaternions.
The relevant function is:
bool CreateScene(FbxScene* pScene, double lFocalLength, int startFrame)
{
    //Create Camera
    FbxNode* lMyCameraNode = FbxNode::Create(pScene, "p_camera");
    //connect camera node to root node
    FbxNode* lRootNode = pScene->GetRootNode();
    lRootNode->ConnectSrcObject(lMyCameraNode);
    FbxCamera* lMyCamera = FbxCamera::Create(pScene, "Root_camera");
    lMyCameraNode->SetNodeAttribute(lMyCamera);

    // Create an animation stack
    FbxAnimStack* myAnimStack = FbxAnimStack::Create(pScene, "My stack");

    // Create the base layer (this is mandatory)
    FbxAnimLayer* pAnimLayer = FbxAnimLayer::Create(pScene, "Layer0");
    myAnimStack->AddMember(pAnimLayer);

    // Get the camera's curve node for local rotation.
    FbxAnimCurveNode* myAnimCurveNodeRot = lMyCameraNode->LclRotation.GetCurveNode(pAnimLayer, true);

    //create curve nodes
    FbxAnimCurve* myRotXCurve = NULL;
    FbxAnimCurve* myRotYCurve = NULL;
    FbxAnimCurve* myRotZCurve = NULL;
    FbxTime lTime;     // For the start and stop keys.
    int lKeyIndex = 0; // Index for the keys that define the curve

    // Get the animation curve for local rotation of the camera.
    myRotXCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_X, true);
    myRotYCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Y, true);
    myRotZCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Z, true);

    //This to add keys, per frame.
    float frameNumber = startFrame;
    for (int i = 0; i < rec.size(); i++)
    {
        lTime.SetFrame(frameNumber); //frame number
        //rx
        lKeyIndex = myRotXCurve->KeyAdd(lTime);
        myRotXCurve->KeySet(lKeyIndex, lTime, recRotX[i], FbxAnimCurveDef::eInterpolationLinear);
        //ry
        lKeyIndex = myRotYCurve->KeyAdd(lTime);
        myRotYCurve->KeySet(lKeyIndex, lTime, recRotY[i], FbxAnimCurveDef::eInterpolationLinear);
        //rz
        lKeyIndex = myRotZCurve->KeyAdd(lTime);
        myRotZCurve->KeySet(lKeyIndex, lTime, recRotZ[i], FbxAnimCurveDef::eInterpolationLinear);
        frameNumber += 1;
    }
    return true;
}
I would ideally like to pass in quaternion data here, instead of the euler x,y,z values. Is this possible with the fbxsdk? or do I need to convert my quaternion data first, and continue to pass in eulers?
Thank you.
You always need to go back to Euler angles, as you can only get animation curves for the XYZ rotation. The only thing you have control over is the rotation order.
However, you can use FbxQuaternion for your calculations, then use .DecomposeSphericalXYZ() to get XYZ Euler angles.
The accepted answer does not work. Although the documentation definitely implies it should ("Create an Euler XYZ equivalent to the current quaternion."), an Autodesk employee claims that it does not ("DecomposeSphericalXYZ does not convert to Euler angles"), and this is borne out by my testing. In the current FBX SDK, there are at least two relatively easy ways you can convert a quaternion to what they call an euler, or to something suitable for LclRotation. The first is via FbxAMatrix:
FbxQuaternion fq = ...;
FbxAMatrix fa;
fa.SetQ(fq);
FbxVector4 fe = fa.GetR();
The second is via FbxVector4::SetXYZ:
FbxVector4 fe2;
fe2.SetXYZ(fq);
I've successfully gone from an XYZ rotation sequence → quaternion → euler from both methods, and retrieved the same rotation sequence. When I use DecomposeSphericalXYZ I get a slightly different FbxVector4. I haven't tried to figure out what they mean by "euler in spherical coordinates".
Years later, I hit this issue again, and found a simple answer.
After you have set all the keys that you need, just use this filter:
FbxAnimCurveFilterUnroll filter;
filter.Apply(*myAnimCurveNodeRot);
This seems to function the same as the 'Euler Filter' in Maya, or the 'Gimbal Killer' filter in Motionbuilder.
I have a sample of images and would like to detect an object among others in the image/video, already knowing the real physical dimensions of that object in advance. I have one of the image samples (it's an airplane door) and would like to find the window in the airplane door knowing its physical dimensions (let's say it has an inner radius of 20cm and an outer radius of 23cm) and its real-world position in the door (for example, its minimal distance to the door frame is 15cm). I can also know my camera resolution in advance. Is there any MATLAB code or OpenCV C++ that can do that automatically with image processing?
Here is my image sample
And more complex image with round logos.
I ran the code on the second, more complex image and do not get the same results. Here is the resulting image.
You are looking for a circle in the image, so I suggest you use the Hough circle transform.
Convert the image to grayscale.
Find edges in the image.
Use the Hough circle transform to find circles in the image.
For each candidate circle, sample the values along the circle and accept it if the values correspond to predefined values.
The code:
clear all

% Parameters
minValueWindow = 90;
maxValueWindow = 110;

% Read file
I = imread('image1.jpg');
Igray = rgb2gray(I);
[row,col] = size(Igray);

% Edge detection
Iedge = edge(Igray,'canny',[0 0.3]);

% Hough circle transform
rad = 40:80; % The approximate radius in pixels
detectedCircle = {};
detectedCircleIndex = 1;
for radIndex=1:1:length(rad)
    % houghcircle is not a built-in MATLAB function (e.g. a File Exchange
    % implementation of the Hough circle transform)
    [y0detect,x0detect,Accumulator] = houghcircle(Iedge,rad(1,radIndex),rad(1,radIndex)*pi/2);
    if ~isempty(y0detect)
        circles = struct;
        circles.X = x0detect;
        circles.Y = y0detect;
        circles.Rad = rad(1,radIndex);
        detectedCircle{detectedCircleIndex} = circles;
        detectedCircleIndex = detectedCircleIndex + 1;
    end
end

% For each detection run a color filter
ang=0:0.01:2*pi;
finalCircles = {};
finalCircleIndex = 1;
for i=1:1:detectedCircleIndex-1
    rad = detectedCircle{i}.Rad;
    xp = rad*cos(ang);
    yp = rad*sin(ang);
    for detectedPointIndex=1:1:length(detectedCircle{i}.X)
        % Take each detected center and sample the gray image
        samplePointsX = round(detectedCircle{i}.X(detectedPointIndex) + xp);
        samplePointsY = round(detectedCircle{i}.Y(detectedPointIndex) + yp);
        sampleValueInd = sub2ind([row,col],samplePointsY,samplePointsX);
        sampleValueMean = mean(Igray(sampleValueInd));
        % Check if the circle color is good
        if(sampleValueMean > minValueWindow && sampleValueMean < maxValueWindow)
            circle = struct();
            circle.X = detectedCircle{i}.X(detectedPointIndex);
            circle.Y = detectedCircle{i}.Y(detectedPointIndex);
            circle.Rad = rad;
            finalCircles{finalCircleIndex} = circle;
            finalCircleIndex = finalCircleIndex + 1;
        end
    end
end

% Find main circle by merging close hypotheses together
for finaCircleInd=1:1:length(finalCircles)
    circleCenter(finaCircleInd,1) = finalCircles{finaCircleInd}.X;
    circleCenter(finaCircleInd,2) = finalCircles{finaCircleInd}.Y;
    circleCenter(finaCircleInd,3) = finalCircles{finaCircleInd}.Rad;
end
[ind,C] = kmeans(circleCenter,2);
c = [length(find(ind==1));length(find(ind==2))];
[~,maxInd] = max(c);
xCircle = median(circleCenter(ind==maxInd,1));
yCircle = median(circleCenter(ind==maxInd,2));
radCircle = median(circleCenter(ind==maxInd,3));

% Plot circle
imshow(Igray);
hold on
ang=0:0.01:2*pi;
xp=radCircle*cos(ang);
yp=radCircle*sin(ang);
plot(xCircle+xp,yCircle+yp,'Color','red', 'LineWidth',5);
The resulting image:
Remarks:
For other images you will still have to fine-tune several parameters like the radius range that you search, the color values, the Hough circle threshold and the Canny edge thresholds.
In the code I searched for circles with radii from 40 to 80 pixels. Here you can use your prior information about the real-world radius of the window and the resolution of the camera. If you know approximately the distance from the camera to the airplane, the camera resolution/focal length and the window radius in cm, you can use this to compute the expected radius in pixels and use it for the Hough circle transform.
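For example, under a simple pinhole-camera assumption the expected radius in pixels is roughly focal_length_px * radius / distance. A small Python sketch of that arithmetic (all values are made-up placeholders, not taken from the question):
import math

# Hypothetical values - replace with your own camera data
image_width_px = 1920.0      # sensor resolution (horizontal)
horizontal_fov_deg = 60.0    # camera horizontal field of view
window_radius_m = 0.23       # known outer radius of the window (23 cm)
camera_distance_m = 3.0      # rough distance from the camera to the door

# Pinhole model: focal length in pixels from resolution and field of view
focal_px = (image_width_px / 2.0) / math.tan(math.radians(horizontal_fov_deg) / 2.0)

# Expected radius of the window in pixels, to use as the Hough search range
expected_radius_px = focal_px * window_radius_m / camera_distance_m
print(expected_radius_px)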
I wouldn't worry too much about the exact geometry and calibration, and would rather find the window by its own characteristics.
Binarization works relatively well, be it on the whole image or in a large region of interest.
Then you can select the most likely blob based on its approximate area and/or circularity, as in the sketch below.
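A minimal sketch of that idea in Python/OpenCV (the file name and thresholds are placeholders; the question asked for MATLAB or OpenCV C++, but the same calls exist there):
import cv2
import numpy as np

gray = cv2.imread("door.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Binarize (Otsu); invert if the window is darker than the surrounding door
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# OpenCV 4.x signature; 3.x returns an extra image as the first value
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

best = None
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    circularity = 4.0 * np.pi * area / (perimeter * perimeter)  # 1.0 for a perfect circle
    # Keep reasonably large, reasonably round blobs; tune both thresholds per image
    if area > 1000 and circularity > 0.8:
        if best is None or area > cv2.contourArea(best):
            best = c

if best is not None:
    (x, y), r = cv2.minEnclosingCircle(best)
    print("window candidate at", (x, y), "radius", r)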
I am displaying characters on a screen connected to a flystick (3D tracking object). My goal is to move the characters according to device input.
I noticed that the 'zero' of the device (corresponding to the default position) does not correspond to an unrotated quaternion of my characters. When moving the device, my character rotates around some axes other than the device's (probably the world's). So I added a synchronising function which registers the quaternion rotation of my device when it should be at the default position, but now I have no idea how to combine this reference quaternion with the actual quaternion I receive from the device when it moves, in order to rotate my character as intended.
Here is how I am using the quaternions with OpenGL:
glm::quat rotation = QAccumulative.getQuat();
glm::mat4 matrix_rotation = glm::mat4_cast(rotation);
object_transform *= matrix_rotation;
Model *= object_transform;
glm::mat4 MVP = Projection * View * Model;
It works fine with keyboard and mouse if I rotate my objects with this method:
rotate(float angleX, float angleY, float angleZ) {
    Quaternion worldRotationx( 1.0,0,0, angleZ);
    Quaternion worldRotationy( 0,1.0,0, angleX);
    Quaternion worldRotationz( 0,0,1.0, angleY);
    QAccumulative = worldRotationx * worldRotationy * worldRotationz * QAccumulative;
    QAccumulative.normalise();
}
Here is the main loop, where I compute the data from the tracking device:
void VRPN_CALLBACK handle_tracker(void* userData, const vrpn_TRACKERCB t ) {
    Vrpn_tracker* current = Vrpn_tracker::current_vrpn_device;

    if (current->calibrating == true) {
        current->reference = Quaternion(t.quat[0],t.quat[1],t.quat[2],t.quat[3]);
        current->reference.normalise();
    } else {
        // translation :
        current->newPosition = sf::Vector3f(t.pos[0],t.pos[1],t.pos[2]);
        current->world->current_object->translate(
            current->newPosition.x - current->oldPosition.x,
            current->newPosition.y - current->oldPosition.y,
            current->newPosition.z - current->oldPosition.z);
        current->oldPosition = current->newPosition;

        // rotation:
        current->newQuat = Quaternion(t.quat[0],t.quat[1],t.quat[2],t.quat[3]);
        current->newQuat.normalise();

        current->world->current_object->QAccumulative =
            (current->reference.getConjugate() * current->newQuat);

        Quaternion result = current->world->current_object->QAccumulative;
        cout << "result? : " << result.x << result.y << result.z << result.w << endl;
        // prints something like:
        // x : -0.00207066 y : 0.00186546 z : -0.00165524 w : 0.999995
        // when position of flystick = default position
    }
}
The simple test I do to check the code: I start my program without moving the flystick (so the reference quaternion gets captured); after the synchronisation the character should be in the default position (not moved), as I haven't touched the flystick.
I did try to multiply my reference by the quaternion received, but it seems my character is rotating around its local axes.
If someone could shed some light on how I can get the rotation in global axis from these two quaternions it would be great.
There are different calibrations possible for the flystick that may correspond to what you are expecting; however, I suggest you carefully read the documentation about the flystick, your answer must be there.
If you are using ART tracking, page 124 of the User manual will help you.
Change the multiplication order. If you rotate first, then do it last; if last, then do it first.
The hidden problem of your question is that we don't know your transformation notation.
Your rotation can transform from the world frame to the object frame, or from the object frame to the world frame. Depending on the notation, the exact transformation will be different.
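As an illustration of what changing the order does, here is a Python sketch using tf.transformations rather than the poster's C++ Quaternion class; it assumes both quaternions map the device frame to the world frame, and the angle values are made up:
import tf.transformations as tft

q_ref = tft.quaternion_from_euler(0.0, 0.0, 0.3)   # captured at the "zero" pose (example value)
q_new = tft.quaternion_from_euler(0.1, 0.2, 0.3)   # current tracker reading (example value)
q_ref_inv = tft.quaternion_conjugate(q_ref)        # inverse of a unit quaternion

# conj(q_ref) * q_new: relative rotation expressed in the device's local axes
q_rel_local = tft.quaternion_multiply(q_ref_inv, q_new)

# q_new * conj(q_ref): relative rotation expressed in the fixed world axes
q_rel_world = tft.quaternion_multiply(q_new, q_ref_inv)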
I have set up a scene in SceneKit and have issued a hit-test to select an item. However, I want to be able to move that item along a plane in my scene. I continue to receive mouse drag events, but don't know how to transform those 2D coordinates into 3D coordinate in the scene.
My case is very simple. The camera is located at 0, 0, 50 and pointed at 0, 0, 0. I just want to drag my object along the z-plane with a z-value of 0.
The hit-test works like a charm, but how do I translate the mouse point from a drag event into a new position in the scene for the 3D object I am dragging?
You don't need to use invisible geometry — Scene Kit can do all the coordinate conversions you need without having to hit test invisible objects. Basically you need to do the same thing you would in a 2D drawing app for moving an object: find the offset between the mouseDown: location and the object position, then for each mouseMoved:, add that offset to the new mouse location to set the object's new position.
Here's an approach you could use...
Hit-test the initial click location as you're already doing. This gets you an SCNHitTestResult object identifying the node you want to move, right?
Check the worldCoordinates property of that hit test result. If the node you want to move is a child of the scene's rootNode, this is the vector you want for finding the offset. (Otherwise you'll need to convert it to the coordinate system of the parent of the node you want to move; see convertPosition:toNode: or convertPosition:fromNode:.)
You're going to need a reference depth for this point so you can compare mouseMoved: locations to it. Use projectPoint: to convert the vector you got in step 2 (a point in the 3D scene) back to screen space — this gets you a 3D vector whose x- and y-coordinates are a screen-space point and whose z-coordinate tells you the depth of that point relative to the clipping planes (0.0 is on the near plane, 1.0 is on the far plane). Hold onto this z-coordinate for use during mouseMoved:.
Subtract the position of the node you want to move from the mouse location vector you got in step 2. This gets you the offset of the mouse click from the object's position. Hold onto this vector — you'll need it until dragging ends.
On mouseMoved:, construct a new 3D vector from the screen coordinates of the new mouse location and the depth value you got in step 3. Then, convert this vector into scene coordinates using unprojectPoint: — this is the mouse location in your scene's 3D space (equivalent to the one you got from the hit test, but without needing to "hit" scene geometry).
Add the offset you got in step 4 to the new location you got in step 5 - this is the new position to move the node to. (Note: for live dragging to look right, you should make sure this position change isn't animated. By default the duration of the current SCNTransaction is zero, so you don't need to worry about this unless you've changed it already.)
(This is sort of off the top of my head, so you should probably double-check the relevant docs and headers. And you might be able to simplify this a bit with some math.)
As an experiment I implemented Mr Bishop's helpful answer. The drag doesn't quite work (the object - a chess piece - jumps off screen) because of differences in the coordinate magnitudes between the mouse click and the 3-D world. I've inserted log outputs here and there among the code.
I asked on the Apple forums if anyone knew the secret sauce to homogenize the coordinates but didn't get a decisive answer. One thing, I had made some experimental changes to Mr Bishop's method and the forum members advised me to return to his technique.
Despite my code's failings, I thought someone might find it a useful starting point. I suspect there are only one or two small problems with the code.
Note that the log of the world transform matrix of the object (chess piece) is not part of the process but one Apple forum member advised me that the matrix often offers a useful 'sanity check' - which indeed it did.
- (NSPoint)
viewPointForEvent: (NSEvent *) event_
{
    NSPoint windowPoint = [event_ locationInWindow];
    NSPoint viewPoint = [self.view convertPoint: windowPoint
                                       fromView: nil];
    return viewPoint;
}

- (SCNHitTestResult *)
hitTestResultForEvent: (NSEvent *) event_
{
    NSPoint viewPoint = [self viewPointForEvent: event_];
    CGPoint cgPoint = CGPointMake (viewPoint.x, viewPoint.y);
    NSArray * points = [(SCNView *) self.view hitTest: cgPoint
                                              options: @{}];
    return points.firstObject;
}

- (void)
mouseDown: (NSEvent *) theEvent
{
    SCNHitTestResult * result = [self hitTestResultForEvent: theEvent];
    SCNVector3 clickWorldCoordinates = result.worldCoordinates;
    // log output: clickWorldCoordinates x 208.124578, y -12827.223365, z 3163.659073

    SCNVector3 screenCoordinates = [(SCNView *) self.view projectPoint: clickWorldCoordinates];
    // log output: screenCoordinates x 245.128906, y 149.335938, z 0.985565

    // save the z coordinate for use in mouseDragged
    mouseDownClickOnObjectZCoordinate = screenCoordinates.z;

    selectedPiece = result.node; // save selected piece for use in mouseDragged
    SCNVector3 piecePosition = selectedPiece.position;
    // log output: piecePosition x -18.200000, y 6.483060, z 2.350000

    offsetOfMouseClickFromPiece.x = clickWorldCoordinates.x - piecePosition.x;
    offsetOfMouseClickFromPiece.y = clickWorldCoordinates.y - piecePosition.y;
    offsetOfMouseClickFromPiece.z = clickWorldCoordinates.z - piecePosition.z;
    // log output: offsetOfMouseClickFromPiece x 226.324578, y -12833.706425, z 3161.309073
}

- (void)
mouseDragged: (NSEvent *) theEvent
{
    NSPoint viewClickPoint = [self viewPointForEvent: theEvent];

    SCNVector3 clickCoordinates;
    clickCoordinates.x = viewClickPoint.x;
    clickCoordinates.y = viewClickPoint.y;
    clickCoordinates.z = mouseDownClickOnObjectZCoordinate;
    // log output: clickCoordinates x 246.128906, y 0.000000, z 0.985565

    // log output: pieceWorldTransform:
    //   m11 = 242.15889219510001, m12 = -0.000045609300002524833, m13 = -0.00000721691076126, m14 = 0,
    //   m21 = 0.0000072168760805499971, m22 = -0.000039452697396149999, m23 = 242.15890446329999, m24 = 0,
    //   m31 = -0.000045609300002524833, m32 = -242.15889219510001, m33 = -0.000039452676995750002, m34 = 0,
    //   m41 = -4268.2349924762348, m42 = -12724.050221935429, m43 = 4852.6652710104272, m44 = 1)

    SCNVector3 newPiecePosition;
    newPiecePosition.x = offsetOfMouseClickFromPiece.x + clickCoordinates.x;
    newPiecePosition.y = offsetOfMouseClickFromPiece.y + clickCoordinates.y;
    newPiecePosition.z = offsetOfMouseClickFromPiece.z + clickCoordinates.z;
    // log output: newPiecePosition x 472.453484, y -12833.706425, z 3162.294639

    selectedPiece.position = newPiecePosition;
}
I used the code written by Steve and, with a little modification, it worked for me.
On mouseDown I save clickWorldCoordinates on a property called startClickWorldCoordinates.
On mouseDragged I calculate the selectedPiece position in this way:
SCNVector3 worldClickCoordinate = [(SCNView *) self.view unprojectPoint: clickCoordinates]; // unprojectPoint: takes the full SCNVector3, not just the x component
newPiecePosition.x = selectedPiece.position.x + worldClickCoordinate.x - startClickWorldCoordinates.x;
newPiecePosition.y = selectedPiece.position.y + worldClickCoordinate.y - startClickWorldCoordinates.y;
newPiecePosition.z = selectedPiece.position.z + worldClickCoordinate.z - startClickWorldCoordinates.z;
selectedPiece.position = newPiecePosition;
startClickWorldCoordinates = worldClickCoordinate;