Using Minko version 3.0, I am creating a camera as in the samples:
auto camera = scene::Node::create("camera")
    ->addComponent(Renderer::create(0x000000ff))
    ->addComponent(Transform::create(
        //test
        Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 3.f)) //ori
        //Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 30.f))
    ))
    ->addComponent(PerspectiveCamera::create(canvas->aspectRatio()));
Then I load my OBJ using a similar method:
void RotateMyobj(const char *objName, float rotX, float rotY, float rotZ)
{
...
auto myObjModel = sceneMan->assets()->symbol(objName);
auto clonedobj = myObjModel->clone(CloneOption::DEEP);
...
clonedobj->component<Transform>()->matrix()->prependRotationX(rotX); //test - ok
clonedobj->component<Transform>()->matrix()->prependRotationY(rotY);
clonedobj->component<Transform>()->matrix()->prependRotationZ(rotZ);
...
//include adding child to rootnode
}
Calling it from the assets-complete callback:
auto _ = sceneManager->assets()->loader()->complete()->connect([=](file::Loader::Ptr loader)
{
    ...
    RotateMyobj(0,0,0);
    ...
});
The OBJ does load, however it is rotated "to the left" (compared to when it is loaded in Blender, for example).
If I call my method using RotateMyobj(0, 1.5, 0); the OBJ is displayed at the right angle, however I think this shouldn't be needed.
PS: tested with many OBJ files, all giving the same results.
PS2: commenting out / turning off Matrix4x4::create()->lookAt leads to the same result.
PS3: shouldn't creating the camera with a Z position of 30 feel like looking at the ground from the top of a building?
Any idea whether this comes from the camera creation code or the OBJ loading one?
Thx.
Update:
I found the source of my problem: it is caused by calling this method inside the enterFrame callback: UpdateSceneOnMouse( camera );
void UpdateSceneOnMouse( std::shared_ptr<scene::Node> &cam )
{
    // Integrate the mouse-driven rotation speeds and damp them each frame.
    yaw += cameraRotationYSpeed;
    cameraRotationYSpeed *= 0.9f;
    pitch += cameraRotationXSpeed;
    cameraRotationXSpeed *= 0.9f;

    // Clamp the pitch so the camera cannot flip over the poles.
    if (pitch > maxPitch)
    {
        pitch = maxPitch;
    }
    else if (pitch < minPitch)
    {
        pitch = minPitch;
    }

    // Orbit the camera around the lookAt point at the given distance.
    cam->component<Transform>()->matrix()->lookAt(
        lookAt,
        Vector3::create(
            lookAt->x() + distance * cosf(yaw) * sinf(pitch),
            lookAt->y() + distance * cosf(pitch),
            lookAt->z() + distance * sinf(yaw) * sinf(pitch)
        )
    );
}
with the following initialization parameters:
float CallbackManager::yaw = 0.f;
float CallbackManager::pitch = (float)M_PI * 0.5f;
float CallbackManager::minPitch = 0.f + 1e-5f;
float CallbackManager::maxPitch = (float)M_PI - 1e-5f;
std::shared_ptr<Vector3> CallbackManager::lookAt = Vector3::create(0.f, .8f, 0.f);
float CallbackManager::distance = 10.f;
float CallbackManager::cameraRotationXSpeed = 0.f;
float CallbackManager::cameraRotationYSpeed = 0.f;
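For what it's worth, plugging those defaults into the eye-position formula in UpdateSceneOnMouse shows where the orbit starts (a quick back-of-the-envelope sketch, using only the values listed above):
// With yaw = 0, pitch = PI/2, distance = 10 and lookAt = (0, 0.8, 0):
//   x = 0   + 10 * cos(0)     * sin(PI/2) = 10
//   y = 0.8 + 10 * cos(PI/2)              = 0.8
//   z = 0   + 10 * sin(0)     * sin(PI/2) = 0
// so the camera starts on the +X axis, a quarter turn away from the (0, 0, 3) eye
// used at creation time - which would explain the "rotated to the left" symptom and
// why RotateMyobj(0, 1.5, 0) (~PI/2) compensates. Starting the orbit on +Z instead
// would only require a different initial yaw, e.g.:
float CallbackManager::yaw = (float)M_PI * 0.5f; // eye then starts at (0, 0.8, 10)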
If I turn off the call (inspired by the clone example), the object loads more or less correctly
(still a bit rotated to the left, but better than previously). I am no math guru; can anyone suggest better
default parameters so the object / camera aren't rotated at startup?
Thx.
Many 3D/CAD tools will export files - including OBJ - with a coordinate system different from the one used by Minko:
Minko 3 beta 2 uses a left-handed coordinate system.
Minko 3 beta 3 uses the OpenGL coordinate system, which is right-handed.
For example, in a right-handed coordinate system x+ goes "right", y+ goes "up" and z+ goes "out of the screen".
Your Minko app likely loads 3D files using the ASSIMP library through the Minko/ASSIMP plugin. If the 3D file (format) provides information about the coordinate system that was used upon export, then ASSIMP will convert the coordinates to the right-handed system. But the OBJ file format standard does not include such information.
Solution 1: Try exporting your 3D models using a more versatile format such as Collada (*.dae).
Commenting/turning off Matrix4x4::create()->lookAt() does not affect the orientation of your 3D model because there is no reason for it to do so. Changing the position of the camera has no reason to affect the orientation of a mesh. If you want to change the up axis of the camera you have to use the 3rd parameter of the Matrix4x4::lookAt() method.
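For example, a hedged sketch of passing an explicit up vector (assuming Matrix4x4::lookAt() takes the same (target, eye) arguments as the calls shown above, plus the up vector as the 3rd parameter; the up value here is just the default +Y for illustration):
auto camera = scene::Node::create("camera")
    ->addComponent(Renderer::create(0x000000ff))
    ->addComponent(Transform::create(
        Matrix4x4::create()->lookAt(
            Vector3::zero(),                  // target
            Vector3::create(0.f, 0.f, 3.f),   // eye
            Vector3::create(0.f, 1.f, 0.f)    // up axis - change this to re-orient the camera
        )
    ))
    ->addComponent(PerspectiveCamera::create(canvas->aspectRatio()));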
Solution 2: The best thing to do is to properly rotate your mesh using RotateMyobj(0, PI / 2, 0).
I also recommend using the dev branch (Minko beta 3 instead of beta 2) because there are some API changes - especially math-wise - and a tremendous performance boost.
Solution 3: Try adding the symbol without cloning it and see if it makes any difference.
Related
Problem:
I have an object in 3D space that exists at a given orientation. I need to reorient the object to a new orientation. I'm currently representing the orientations as quaternions, though this is not strictly necessary.
I essentially need to determine the angular velocity needed to orient the body into the desired orientation.
What I'm currently working with looks something like the following:
Pseudocode:
// 4x4 Matrix containing rotation and translation
Matrix4 currentTransform = GetTransform();
// Grab the 3x3 matrix containing orientation only
Matrix3 currentOrientMtx = currentTransform.Get3x3();
// Build a quat based on the rotation matrix
Quaternion currentOrientation(currentOrientMtx);
currentOrientation.Normalize();
// Build a new matrix describing our desired orientation
Vector3f zAxis = desiredForward;
Vector3f yAxis = desiredUp;
Vector3f xAxis = yAxis.Cross(zAxis);
Matrix3 desiredOrientMtx(xAxis, yAxis, zAxis);
// Build a quat from our desired rotation matrix
Quaternion desiredOrientation(desiredOrientMtx);
desiredOrientation.Normalize();
// Slerp from our current orientation to the new orientation based on our turn rate and time delta
Quaternion slerpedQuat = currentOrientation.Slerp(desiredOrientation, turnRate * deltaTime);
// Determine the axis and angle of rotation
Vector3f rotationAxis = slerpedQuat.GetAxis();
float rotationAngle = slerpedQuat.GetAngle();
// Determine angular displacement and angular velocity
Vector3f angularDisplacement = rotationAxis * rotationAngle;
Vector3f angularVelocity = angularDisplacement / deltaTime;
SetAngularVelocity(angularVelocity);
This essentially just sends my object spinning to oblivion. I have verified that the desiredOrientMtx I constructed via the axes is indeed the correct final rotation transformation. I feel like I'm missing something silly here.
Thoughts?
To calculate angular velocity, your turnRate already provides the magnitude (rad/s), so all you really need is the axis of rotation. That is just given by GetAxis( B * Inverse(A) ). GetAngle of that same quantity gives the total angle to travel between the two. See 'Difference' between two quaternions for further explanation.
SetAngularVelocity( Normalize( GetAxis( B * Inverse(A)) ) * turnRate )
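For illustration only, the same computation sketched with GLM quaternions (the library choice, function name and epsilon are mine, not from the code above):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Angular velocity that turns orientation A toward orientation B at turnRate rad/s.
glm::vec3 angularVelocityToward(const glm::quat& A, const glm::quat& B, float turnRate)
{
    glm::quat diff = B * glm::inverse(A);               // rotation taking A onto B
    if (diff.w < 0.0f) diff = -diff;                    // take the shorter arc (quat double cover)
    diff = glm::normalize(diff);

    float remaining = glm::angle(diff);                 // total angle left to travel (radians)
    if (remaining < 1e-4f)                              // close enough: stop spinning
        return glm::vec3(0.0f);

    return glm::normalize(glm::axis(diff)) * turnRate;  // axis scaled by the turn rate
}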
You need to set the angular velocity to 0 at some point (when you reach your goal orientation). One way to do this is by using a quaternion distance. Another simpler way is by checking against the amount of time taken. Finally, you can check the angle between two quats (as discussed above) and check if that is close to 0.
float totalAngle = GetAngle( Normalize( endingPose * Inverse( startingPose ) ) );
if( fabs( totalAngle ) > 0.0001 ) // some epsilon
{
// your setting angular velocity code here
SetAngularVelocity(angularVelocity);
}
else
{
SetAngularVelocity( Vector3f(0) );
// Maybe, if you want high accuracy, call SetTransform here too
}
But, really, I don't see why you don't just use the Slerp to its fullest. Instead of relying on the physics integrator (which can be imprecise) and relying on knowing when you've reached your destination (which is somewhat awkward), you could just move the object frame-by-frame since you know the motion.
Quaternion startingPose;
Quaternion endingPose;
// As discussed earlier...
float totalAngle = GetAngle( Normalize( endingPose * Inverse( startingPose ) ) );
// t is set to 0 whenever you start a new motion
t += deltaT;
float howFarIn = (turnRate * t) / totalAngle;
SetCurrentTransform( startingPose.Slerp( endingPose, howFarIn ) );
See Smooth rotation with quaternions for some discussion on that.
I'm implementing a first person camera using the GLM library that provides me with some useful functions that calculate perspective and 'lookAt' matrices. I'm also using OpenGL but that shouldn't make a difference in this code.
Basically, what I'm experiencing is that I can look around, much like in a regular FPS, and move around. But the movement is constrained to the three world axes: if I rotate the camera, I still move in the same direction as if I had not rotated it... Let me illustrate (in 2D, to simplify things).
In this image, you can see four camera positions.
Those marked with a one are before movement, those marked with a two are after movement.
The red triangles represent a camera that is oriented straight forward along the z axis. The blue triangles represent a camera that has been rotated to look backward along the x axis (to the left).
When I press the 'forward movement key', the camera moves forward along the z axis in both cases, disregarding the camera orientation.
What I want is a more FPS-like behaviour, where pressing forward moves me in the direction the camera is facing. I thought that with the arguments I pass to glm::lookAt, this would be achieved. Apparently not.
What is wrong with my calculations?
// Calculate the camera's orientation
float angleHori = -mMouseSpeed * Mouse::x; // Note that (0, 0) is the center of the screen
float angleVert = -mMouseSpeed * Mouse::y;
glm::vec3 dir(
cos(angleVert) * sin(angleHori),
sin(angleVert),
cos(angleVert) * cos(angleHori)
);
glm::vec3 right(
sin(angleHori - M_PI / 2.0f),
0.0f,
cos(angleHori - M_PI / 2.0f)
);
glm::vec3 up = glm::cross(right, dir);
// Calculate projection and view matrix
glm::mat4 projMatrix = glm::perspective(mFOV, mViewPortSizeX / (float)mViewPortSizeY, mZNear, mZFar);
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);
gluLookAt takes 3 parameters: eye, centre and up. The first two are positions while the last is a vector. If you're planning on using this function, it's better that you maintain only these three parameters consistently.
Coming to the issue with the calculation: I see that the position variable is unchanged throughout the code. All that changes is the look-at point, i.e. the centre. The right thing to do is to first do position += dir, which will move the camera (position) along the direction pointed to by dir. To update the centre, the second parameter can then be left as-is: position + dir; this works since position was already updated, and from there we have a point farther along the dir direction to look at.
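In code, that suggestion would look roughly like the following (a sketch only; the key-state flags and speed are placeholders, the other names follow the question's snippet):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Advance the position along the current view direction, then build the
// view matrix from the *updated* position.
glm::mat4 updateView(glm::vec3& mPosition, const glm::vec3& dir, const glm::vec3& up,
                     bool forwardPressed, bool backwardPressed, float speed)
{
    if (forwardPressed)
        mPosition += speed * dir;
    if (backwardPressed)
        mPosition -= speed * dir;
    return glm::lookAt(mPosition, mPosition + dir, up);
}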
The issue was actually in another method. When moving the camera, I needed to do this:
void Camera::moveX(char s)
{
mPosition += s * mSpeed * mRight;
}
void Camera::moveY(char s)
{
mPosition += s * mSpeed * mUp;
}
void Camera::moveZ(char s)
{
mPosition += s * mSpeed * mDirection;
}
To make the camera move across the correct axes.
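For completeness, here is one way the member vectors used above could be refreshed each frame from the same angles as in the question (a sketch; the member names are assumed to mirror the snippet, and M_PI needs <cmath>):
#include <glm/glm.hpp>
#include <cmath>

// Recompute the camera basis from the current mouse angles so that
// moveX/moveY/moveZ translate along the rotated axes rather than the world axes.
void Camera::updateBasis(float angleHori, float angleVert)
{
    mDirection = glm::vec3(
        cos(angleVert) * sin(angleHori),
        sin(angleVert),
        cos(angleVert) * cos(angleHori));
    mRight = glm::vec3(
        sin(angleHori - (float)M_PI / 2.0f),
        0.0f,
        cos(angleHori - (float)M_PI / 2.0f));
    mUp = glm::cross(mRight, mDirection);
}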
I'm trying to figure out how to make the camera in directx move based on the direction it's facing.
Right now the way I move the camera is by passing the camera's current position and rotation to a class called PositionClass. PositionClass takes keyboard input from another class called InputClass and then updates the position and rotation values for the camera, which is then passed back to the camera class.
I've written some code that seems to work great for me; using the camera's pitch and yaw I'm able to get it to go in the direction I've pointed the camera.
However, when the camera is looking straight up (pitch = 90) or straight down (pitch = -90), it still changes the camera's X and Z position (depending on the yaw).
The expected behavior is while looking straight up or down it will only move along the Y axis, not along the X or Z axis.
Here's the code that calculates the new camera position
void PositionClass::MoveForward(bool keydown)
{
    float radiansY, radiansX;

    // Update the forward speed movement based on the frame time
    // and whether the user is holding the key down or not.
    if(keydown)
    {
        m_forwardSpeed += m_frameTime * m_acceleration;
        if(m_forwardSpeed > (m_frameTime * m_maxSpeed))
        {
            m_forwardSpeed = m_frameTime * m_maxSpeed;
        }
    }
    else
    {
        m_forwardSpeed -= m_frameTime * m_friction;
        if(m_forwardSpeed < 0.0f)
        {
            m_forwardSpeed = 0.0f;
        }
    }

    // ToRadians() just multiplies degrees by 0.0174532925f
    radiansY = ToRadians(m_rotationY); //yaw
    radiansX = ToRadians(m_rotationX); //pitch

    // Update the position.
    m_positionX += sinf(radiansY) * m_forwardSpeed;
    m_positionY += -sinf(radiansX) * m_forwardSpeed;
    m_positionZ += cosf(radiansY) * m_forwardSpeed;

    return;
}
The significant portion is where the position is updated at the end.
So far I've only been able to deduce that I have horrible math skills.
So, can anyone help me with this dilemma? I've created a fiddle to help test out the math.
Edit: The fiddle uses the same math I used in my MoveForward function; if you set pitch to 90 you can see that the Z axis is still being modified.
Thanks to Chaosed0's answer, I was able to figure out the correct formula to calculate movement in a specific direction.
The fixed code below is basically the same as above but now simplified and expanded to make it easier to understand.
First we determine the amount by which the camera will move, in my case this was m_forwardSpeed, but here I will define it as offset.
float offset = 1.0f;
Next you will need to get the camera's X and Y rotation values (in degrees!)
float pitch = camera_rotationX;
float yaw = camera_rotationY;
Then we convert those values into radians
float pitchRadian = pitch * (PI / 180); // X rotation
float yawRadian = yaw * (PI / 180); // Y rotation
Now here is where we determine the new position:
float newPosX = offset * sinf( yawRadian ) * cosf( pitchRadian );
float newPosY = offset * -sinf( pitchRadian );
float newPosZ = offset * cosf( yawRadian ) * cosf( pitchRadian );
Notice that we only multiply the X and Z positions by the cosine of pitchRadian; this scales the yaw-driven horizontal movement down to zero when the camera is looking straight up (90) or straight down (-90).
And finally, you need to tell your camera the new position, which I won't cover because it largely depends on how you've implemented your camera. Apparently doing it this way is out of the norm, and possibly inefficient. However, as Chaosed0 said, it's what makes the most sense to me!
To be honest, I'm not entirely sure I understand your code, so let me try to provide a different perspective.
The way I like to think about this problem is in spherical coordinates, basically just polar in 3D. Spherical coordinates are defined by three numbers: a radius and two angles. One of the angles is yaw, and the other should be pitch, assuming you have no roll (I believe there's a way to get phi if you have roll, but I can't think of how currently). In conventional mathematics notation, theta is your yaw and phi is your pitch, with radius being your move speed, as shown below.
Note that phi and theta are defined differently, depending on where you look.
Basically, the problem is to obtain a point m_forwardSpeed away from your camera, with the right pitch and yaw. To do this, we set the "origin" to your camera position, obtain a spherical coordinate, convert it to cartesian, and then add it to your camera position:
float radius = m_forwardSpeed;
float theta = m_rotationY;
float phi = m_rotationX;
//These equations are from the wikipedia page, linked above
float xMove = radius*sinf(phi)*cosf(theta);
float yMove = radius*sinf(phi)*sinf(theta);
float zMove = radius*cosf(phi);
m_positionX += xMove;
m_positionY += yMove;
m_positionZ += zMove;
Of course, you can condense a lot of this code, but I expanded it for clarity.
You can think about this like drawing a sphere around your camera. Each of the points on the sphere is a potential position in the next timestep, depending on the camera's rotation.
This is probably not the most efficient way to do it, but in my opinion it's certainly the easiest way to think about it. It actually looks like this is nearly exactly what you're trying to do in your code, but the operations on the angles are just a little bit off.
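One caveat worth spelling out: the formulas above use the mathematics convention (z up, phi measured from the +z axis), while the question measures pitch from the horizon with y up. Rewritten in the question's conventions, the same conversion becomes (a sketch; the sign of the y term depends on whether positive pitch means looking up or down in your engine):
// Spherical-to-cartesian with yaw/pitch in radians, pitch measured from the
// horizon and the y axis as "up" - equivalent to the fixed code in the edit above.
float xMove = radius * cosf(pitch) * sinf(yaw);
float yMove = radius * sinf(pitch);
float zMove = radius * cosf(pitch) * cosf(yaw);

m_positionX += xMove;
m_positionY += yMove;
m_positionZ += zMove;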
I'm currently using a JavaCV software called procamcalib to calibrate a Kinect-Projector setup, which has the Kinect RGB camera as origin. This setup consists solely of a Kinect RGB camera (I'm roughly using the Kinect just as an ordinary camera at the moment) and one projector. This calibration software uses LibFreenect (OpenKinect) as the Kinect driver.
Once the software completes its process, it gives me the intrinsic and extrinsic parameters of both the camera and the projector, which are fed to an OpenGL application to validate the calibration, and this is where a few problems begin. Once the projection and modelview matrices are correctly set, I should be able to fit what is seen by the Kinect with what is being projected, but in order to achieve this I have to do a manual translation on all 3 axes, and this last part doesn't make any sense to me! Could you guys please help me sort this out?
The SDK used to retrieve Kinect data is OpenNI (not the latest 2.x version; it should be 1.5.x).
I'll explain exactly what I'm doing to reproduce this error. The calibration parameters are used as follows:
The Projection matrix is set as ( based on http://sightations.wordpress.com/2010/08/03/simulating-calibrated-cameras-in-opengl/ ):
r = width/2.0f; l = -width/2.0f;
t = height/2.0f; b = -height/2.0f;
alpha = fx; beta = fy;
xo = cx; yo = cy;
X = kinectCalibration.c_near + kinectCalibration.c_far;
Y = kinectCalibration.c_near*kinectCalibration.c_far;
d = kinectCalibration.c_near - kinectCalibration.c_far;
float* glOrthoMatrix = (float*)malloc(16*sizeof(float));
glOrthoMatrix[0] = 2/(r-l); glOrthoMatrix[4] = 0.0f; glOrthoMatrix[8] = 0.0f; glOrthoMatrix[12] = (r+l)/(l-r);
glOrthoMatrix[1] = 0.0f; glOrthoMatrix[5] = 2/(t-b); glOrthoMatrix[9] = 0.0f; glOrthoMatrix[13] = (t+b)/(b-t);
glOrthoMatrix[2] = 0.0f; glOrthoMatrix[6] = 0.0f; glOrthoMatrix[10] = 2/d; glOrthoMatrix[14] = X/d;
glOrthoMatrix[3] = 0.0f; glOrthoMatrix[7] = 0.0f; glOrthoMatrix[11] = 0.0f; glOrthoMatrix[15] = 1;
printM( glOrthoMatrix, 4, 4, true, "glOrthoMatrix" );
float* glCameraMatrix = (float*)malloc(16*sizeof(float));
glCameraMatrix[0] = alpha; glCameraMatrix[4] = skew; glCameraMatrix[8] = -xo; glCameraMatrix[12] = 0.0f;
glCameraMatrix[1] = 0.0f; glCameraMatrix[5] = beta; glCameraMatrix[9] = -yo; glCameraMatrix[13] = 0.0f;
glCameraMatrix[2] = 0.0f; glCameraMatrix[6] = 0.0f; glCameraMatrix[10] = X; glCameraMatrix[14] = Y;
glCameraMatrix[3] = 0.0f; glCameraMatrix[7] = 0.0f; glCameraMatrix[11] = -1; glCameraMatrix[15] = 0.0f;
float* glProjectionMatrix = algMult( glOrthoMatrix, glCameraMatrix );
And the Modelview matrix is set as:
proj_loc = new Vec3f( proj_RT[12], proj_RT[13], proj_RT[14] );
proj_fwd = new Vec3f( proj_RT[8], proj_RT[9], proj_RT[10] );
proj_up = new Vec3f( proj_RT[4], proj_RT[5], proj_RT[6] );
proj_trg = new Vec3f( proj_RT[12] + proj_RT[8],
proj_RT[13] + proj_RT[9],
proj_RT[14] + proj_RT[10] );
gluLookAt( proj_loc[0], proj_loc[1], proj_loc[2],
proj_trg[0], proj_trg[1], proj_trg[2],
proj_up[0], proj_up[1], proj_up[2] );
And finally the camera is displayed and moved around with:
glPushMatrix();
glTranslatef(translateX, translateY, translateZ);
drawRGBCamera();
glPopMatrix();
where the translation values are manually adjusted with the keyboard until I have a visual match (I'm projecting on the calibration board what the Kinect-rgb camera is seeing, so I manually adjust the opengl-camera until the projected pattern matches the printed pattern).
My question here is WHY do I have to make this manual adjustment? The modelview and projection setup should take care of it.
I was also wondering if there are any problems when switching drivers like that, since OpenKinect is used for calibration and OpenNI for validation. This came to mind when researching another popular calibration tool called RGBDemo, where it says that if the LibFreenect backend is used, a Kinect calibration is needed.
So, will a calibration go wrong if it is made with one driver and displayed with another?
Does anyone think it'll be easier to achieve success if this is done with OpenCV rather than OpenGL?
JavaCV Reference: https://code.google.com/p/javacv/
Procamcalib "short paper": http://www.ok.ctrl.titech.ac.jp/~saudet/research/procamcalib/
Procamcalib source code: https://code.google.com/p/javacv/source/browse?repo=procamcalib
RGBDemo calibration Reference: http://labs.manctl.com/rgbdemo/index.php/Documentation/Calibration
I can upload more things if necessary, just let me know what you guys need to be able to help me out :)
I'm the author of the article you linked to, and I think I can help.
The problem is in how you're setting your modelview matrix. You're using the translation part of proj_RT (its last column) as the camera's position when you call gluLookAt(), but it isn't the camera's position, it's the position of the world origin in camera coordinates. I wrote an article for my new blog that might help clear this up. It describes three different (equivalent) ways of interpreting the extrinsic matrix, with WebGL demos of each:
http://ksimek.github.io/2012/08/22/extrinsic/
If you must use gluLookAt, this article will show you how, but it's much simpler to just call glLoadMatrix(proj_RT).
tl;dr: replace gluLookAt() with glLoadMatrix(proj_RT)
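(If gluLookAt() really is required, the eye position first has to be recovered from the extrinsic; a rough sketch, assuming proj_RT is a column-major OpenGL-style [R|t] matrix as the indexing above suggests:)
// Recover the camera centre from a column-major [R|t] extrinsic: C = -R^T * t,
// because t (elements 12..14) is the world origin expressed in camera coordinates.
float tx = proj_RT[12], ty = proj_RT[13], tz = proj_RT[14];
float camX = -(proj_RT[0] * tx + proj_RT[1] * ty + proj_RT[2]  * tz);
float camY = -(proj_RT[4] * tx + proj_RT[5] * ty + proj_RT[6]  * tz);
float camZ = -(proj_RT[8] * tx + proj_RT[9] * ty + proj_RT[10] * tz);
// (camX, camY, camZ) is the eye point that could then be fed to gluLookAt().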
For Kinect calibration, take a look at the latest 0.7 release of RGBDemo http://labs.manctl.com/rgbdemo and corresponding Freenect calibration source.
From the v0.7.0 ChangeLogs:
New features since v0.6.1:
New demo to acquire object models using markers
Simple calibration mode for rgbd-multikinect
Much faster grabbing in rgbd-multikinect
Add timestamps and camera serials when saving to disk
Compatibility with PCL 1.4
Various bug fixes
A very good book to follow is Jason McKesson's Learning Modern 3D Graphics Programming. You may also read the Kinect's ROS page and Nicolas' Kinect Calibration Page.
I'm drawing an object (say, a cube) in OpenGL that a user can rotate by clicking / dragging the mouse across the window. The cube is drawn like so:
void CubeDrawingArea::redraw()
{
    Glib::RefPtr<Gdk::GL::Drawable> gl_drawable = get_gl_drawable();
    gl_drawable->gl_begin(get_gl_context());
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glPushMatrix();
    {
        glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z);
        glCallList(m_cubeID);
    }
    glPopMatrix();

    gl_drawable->swap_buffers();
    gl_drawable->gl_end();
}
and rotated with this function:
bool CubeDrawingArea::on_motion_notify_event(GdkEventMotion* motion)
{
    if (!m_leftButtonDown)
        return true;

    _3V cur_pos;
    get_trackball_point((int) motion->x, (int) motion->y, cur_pos);

    const double dx = cur_pos.x - m_lastTrackPoint.x;
    const double dy = cur_pos.y - m_lastTrackPoint.y;
    const double dz = cur_pos.z - m_lastTrackPoint.z;

    if (dx || dy || dz)
    {
        // Update angle, axis of rotation, and redraw
        m_angle = 90.0 * sqrt((dx * dx) + (dy * dy) + (dz * dz));

        // Axis of rotation comes from cross product of last / cur vectors
        m_rotAxis.x = (m_lastTrackPoint.y * cur_pos.z) - (m_lastTrackPoint.z * cur_pos.y);
        m_rotAxis.y = (m_lastTrackPoint.z * cur_pos.x) - (m_lastTrackPoint.x * cur_pos.z);
        m_rotAxis.z = (m_lastTrackPoint.x * cur_pos.y) - (m_lastTrackPoint.y * cur_pos.x);

        redraw();
    }

    return true;
}
There is some GTK+ stuff in there, but it should be pretty obvious what it's for. The get_trackball_point() function projects the window coordinates X Y onto a hemisphere (the virtual "trackball") that is used as a reference point for rotating the object. Anyway, this more or less works, but after I'm done rotating and I go to rotate again, the cube snaps back to the original position, obviously, since m_angle is reset back to near 0 the next time I rotate. Is there any way to avoid this and preserve the rotation?
Yeah, I ran into this problem too.
What you need to do is keep a rotation matrix around that "accumulates" the current state of rotation, and use it in addition to the rotation matrix that comes from the current dragging operation.
Say you have two matrices, lastRotMx and currRotMx. Make them members of CubeDrawingArea if you like.
You haven't shown us this, but I assume that m_lastTrackPoint is initialized whenever the mouse button goes down for dragging. When that happens, copy currRotMx into lastRotMx.
Then in on_motion_notify_event(), after you calculate m_rotAxis and m_angle, create a new rotation matrix draggingRotMx based on m_rotAxis and m_angle; then multiply lastRotMx by draggingRotMx and put the result in currRotMx.
Finally, in redraw(), instead of
glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z);
rotate by currRotMx.
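A rough sketch of that bookkeeping, using GLM for the matrix math (GLM is my choice here, not part of the original code; the names mirror the description above):
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 lastRotMx(1.0f);  // rotation state when the current drag started
glm::mat4 currRotMx(1.0f);  // accumulated rotation, used for drawing

// On mouse-button press: remember where this drag starts from.
void onDragStart() { lastRotMx = currRotMx; }

// In on_motion_notify_event(), after m_angle and m_rotAxis are computed:
void onDragMotion(double angleDeg, double ax, double ay, double az)
{
    glm::mat4 draggingRotMx = glm::rotate(glm::mat4(1.0f),
                                          glm::radians((float)angleDeg),
                                          glm::normalize(glm::vec3(ax, ay, az)));
    currRotMx = draggingRotMx * lastRotMx;  // this drag applied on top of the stored state
}

// In redraw(), instead of glRotated(m_angle, ...):
void applyRotation() { glMultMatrixf(glm::value_ptr(currRotMx)); }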
Update: Or instead of all that... I haven't tested this, but I think it would work:
Make cur_pos a class member so it stays around, but it's initialized to zero, as is m_lastTrackPoint.
Then, whenever a new drag motion is started, before you initialize m_lastTrackPoint, let _3V dpos = cur_pos - m_lastTrackPoint (pseudocode).
Finally, when you do initialize m_lastTrackPoint based on the mouse event coords, subtract dpos from it.
That way, your cur_pos will already be offset from m_lastTrackPoint by an amount based on the accumulation of offsets from past arcball drags.
Probably error would accumulate as well, but it should be gradual enough so as not to be noticeable. But I'd want to test it to be sure... composed rotations are tricky enough that I don't trust them without seeing them.
P.S. your username is demotivating. Suggest picking another one.
P.P.S. For those who come later searching for answers to this question, the keywords to search on are "arcball rotation". A definitive article is Ken Shoemake's section in Graphics Gems IV. See also this arcball tutorial for JOGL.