Incrementing my spherical coordinates clockwise - C++

I am launching a projectile around a sphere. My code moves it in a counterclockwise direction just fine. However, I would like it to move in a clockwise direction instead.
I'm guessing that it's a matter of tuning my math.
// these are my stepping and incrementing variables
int goose1_egg1_step = 1;
int &r_goose1_egg1_step = goose1_egg1_step;
float goose1_egg1_divider = 17500;
// the starting theta/phi values are: 5 and 5
int goose1_egg1_theta=5;
int goose1_egg1_phi=5;
// the ending theta/phi values are: 7 and 1
// there is a difference of 2 between the start and end theta values
// there is a difference of 4 between the start and end phi values
float goose1_egg1_theta_increment = 2/goose1_egg1_divider;
float goose1_egg1_phi_increment = 4/goose1_egg1_divider;
This is my function that displays the updated coordinates each frame with a sphere:
if (goose1_egg1_step < goose1_egg1_divider)
{
    float goose1_egg1_theta_math = (goose1_egg1_theta + (goose1_egg1_theta_increment * r_goose1_egg1_step)) / 10.0 * M_PI;
    float goose1_egg1_phi_math = (goose1_egg1_phi - (goose1_egg1_phi_increment * r_goose1_egg1_step)) / 10.0 * 2 * M_PI;
    r_goose1_egg1_x = Radius * sin(goose1_egg1_theta_math) * cos(goose1_egg1_phi_math);
    r_goose1_egg1_y = Radius * sin(goose1_egg1_theta_math) * sin(goose1_egg1_phi_math);
    r_goose1_egg1_z = Radius * cos(goose1_egg1_theta_math);
    glPushMatrix();
    glTranslatef(r_goose1_egg1_x, r_goose1_egg1_y, r_goose1_egg1_z);
    glColor3f(1.0, 0.0, 0.0);
    glutSolidSphere(0.075, 5, 5); // no glBegin/glEnd pair needed for glutSolidSphere
    glPopMatrix();
}
And here is how I increment the step value:
if (r_goose1_egg1_step < goose1_egg1_divider)
{
    ++r_goose1_egg1_step;
}
else
{
    r_goose1_egg1_step = 1;
}

Even though you're talking about "clockwise motion" on a sphere, which only makes sense to me in a plane, it seems to me that what you want can be done just by changing the signs in the two lines where you compute goose1_egg1_theta_math and goose1_egg1_phi_math, like this:
float goose1_egg1_theta_math = (goose1_egg1_theta - (goose1_egg1_theta_increment * r_goose1_egg1_step)) / 10.0 * M_PI;
float goose1_egg1_phi_math = (goose1_egg1_phi + (goose1_egg1_phi_increment * r_goose1_egg1_step)) / 10.0 * 2 * M_PI;
This should reverse the way you increment your spherical coordinates, giving you the "clockwise" motion you're looking for.
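If you would rather keep one copy of the math and make the direction explicit, the sign on the phi term can be isolated into a single factor, since rotation about the z axis is governed by phi alone (z depends only on theta). A sketch only; the direction variable is my own addition, and which sign reads as "clockwise" depends on where you view the sphere from:
float direction = 1.0f; // -1 reproduces the original orbit; +1 reverses it
float goose1_egg1_theta_math = (goose1_egg1_theta + (goose1_egg1_theta_increment * r_goose1_egg1_step)) / 10.0 * M_PI;
float goose1_egg1_phi_math = (goose1_egg1_phi + direction * (goose1_egg1_phi_increment * r_goose1_egg1_step)) / 10.0 * 2 * M_PI;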

Related

Rotate vector to new base vector

In a raycaster I am developing, I am trying to implement hemisphere random sampling, with the option to rotate the hemisphere toward a given direction and then take a random point.
The first version worked fine because the sampling was uniform, and changing direction was just a matter of flipping to the other hemisphere, which was simple.
Vec3f UniformSampleSphere() {
    const Vec2f& u = GetVec2f(); // get two random numbers
    float z = 1 - 2 * u.x;
    float r = std::sqrt(std::max((float)0, (float)1 - z * z));
    float phi = 2 * PI_F * u.y;
    return Vec3f(r * std::cos(phi), r * std::sin(phi), z);
}
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
    auto toReturn = GetRandomOnSphere();
    if (Dot(toReturn - direction, toReturn) < 0)
        toReturn = -toReturn;
    return toReturn;
}
But with cosine-weighted hemisphere sampling I am having trouble rotating the hemisphere properly and finding a random direction within the correctly rotated hemisphere.
The picture's left side shows what is working now; the right side shows the result after applying the magic rotation that is the big deal I want.
So the final function will be something like this:
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
    auto toReturn = CosineSampleHemisphere();
    /*
    Some magic here that rotates to correct direction of hemisphere
    */
    return toReturn;
}
I used code from Cosine weighted hemisphere sampling.
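One standard way to supply the missing rotation is to build an orthonormal basis around the target direction and re-express the local cosine-weighted sample in it. A minimal sketch, not the poster's code: it assumes direction is normalized, that Vec3f supports scalar multiplication, and that Cross/Normalize helpers exist alongside the Dot already used above:
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
    // Sample in the hemisphere's local frame, where z is "up".
    Vec3f local = CosineSampleHemisphere();
    // Build an orthonormal basis (t, b, n) with n = direction.
    Vec3f n = direction;
    Vec3f helper = std::fabs(n.x) > 0.9f ? Vec3f(0, 1, 0) : Vec3f(1, 0, 0);
    Vec3f t = Normalize(Cross(helper, n)); // tangent, perpendicular to n
    Vec3f b = Cross(n, t);                 // bitangent, perpendicular to both
    // Rotate the local sample into the basis: x along t, y along b, z along n.
    return t * local.x + b * local.y + n * local.z;
}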

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture of the output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so light's coords minus the hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I noticed something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect to get a vector where all of the values lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere's center and the hit position are ~16 units apart along the z axis? These two numbers are within 1 of each other in absolute value, and that's what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist);//.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0, so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part * part) - (part3) * ((part2 * part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect: d should be negated. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into negative z, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
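For reference, a sketch of the corrected root computation, reusing the operator conventions from TestViewRay above (the variable names here are mine):
// Roots of the ray-sphere quadratic: t = (-d·(e-c) ± sqrt(disc)) / (d·d)
Position3f e_min_c = (e - c);
float dd = d * d;
float b = d * e_min_c;
float discriminant = (b * b) - dd * ((e_min_c * e_min_c) - (r * r));
if (discriminant >= 0)
{
    float t_sub = (-b - sqrt(discriminant)) / dd; // nearer intersection
    float t_add = (-b + sqrt(discriminant)) / dd; // farther intersection
    float t = fmin(t_add, t_sub);
    // accept the hit only if t > 0, as before
}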

Making balls bounce off each other (openGL)

I'm trying to make an application where balls bounce off the walls and also off each other. The bouncing off the walls works fine, but I'm having some trouble getting them to bounce off each other. Here's the code I'm using to make them bounce off another ball (for testing I only have 2 balls)
// Calculate the distance using the Pythagorean theorem
GLfloat x1, y1, x2, y2, xd, yd, distance;
x1 = balls[0].xPos;
y1 = balls[0].yPos;
x2 = balls[1].xPos;
y2 = balls[1].yPos;
xd = x2 - x1;
yd = y2 - y1;
distance = sqrt((xd * xd) + (yd * yd));
if (distance < (balls[0].ballRadius + balls[1].ballRadius))
{
    std::cout << "Collision\n";
    balls[0].xSpeed = -balls[0].xSpeed;
    balls[0].ySpeed = -balls[0].ySpeed;
    balls[1].xSpeed = -balls[1].xSpeed;
    balls[1].ySpeed = -balls[1].ySpeed;
}
What happens is that they randomly bounce, or pass through each other. Is there some physics that I'm missing?
EDIT: Here's the full function
// Callback handler for window re-paint event
void display()
{
    glClear(GL_COLOR_BUFFER_BIT); // Clear the color buffer
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    // FOR LOOP
    for (int i = 0; i < numOfBalls; i++)
    {
        glLoadIdentity(); // Reset model-view matrix
        int numSegments = 100;
        GLfloat angle = 0;
        glTranslatef(balls[i].xPos, balls[i].yPos, 0.0f); // Translate to (xPos, yPos)
        // Use triangular segments to form a circle
        glBegin(GL_TRIANGLE_FAN);
        glColor4f(balls[i].colorR, balls[i].colorG, balls[i].colorB, balls[i].colorA);
        glVertex2f(0.0f, 0.0f); // Center of circle
        for (int j = 0; j <= numSegments; j++)
        {
            // Last vertex same as first vertex
            angle = j * 2.0f * PI / numSegments; // 360 deg for all segments
            glVertex2f(cos(angle) * balls[i].ballRadius, sin(angle) * balls[i].ballRadius);
        }
        glEnd();
        // Animation control - compute the location for the next refresh
        balls[i].xPos += balls[i].xSpeed;
        balls[i].yPos += balls[i].ySpeed;
        // Calculate the distance using the Pythagorean theorem
        GLfloat x1, y1, x2, y2, xd, yd, distance;
        x1 = balls[0].xPos;
        y1 = balls[0].yPos;
        x2 = balls[1].xPos;
        y2 = balls[1].yPos;
        xd = x2 - x1;
        yd = y2 - y1;
        distance = sqrt((xd * xd) + (yd * yd));
        if (distance < (balls[0].ballRadius + balls[1].ballRadius))
        {
            std::cout << "Collision\n";
            balls[0].xSpeed = -balls[0].xSpeed;
            balls[0].ySpeed = -balls[0].ySpeed;
            balls[1].xSpeed = -balls[1].xSpeed;
            balls[1].ySpeed = -balls[1].ySpeed;
        }
        else
        {
            std::cout << "No collision\n";
        }
        // Check if the ball exceeds the edges
        if (balls[i].xPos > balls[i].xPosMax)
        {
            balls[i].xPos = balls[i].xPosMax;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        else if (balls[i].xPos < balls[i].xPosMin)
        {
            balls[i].xPos = balls[i].xPosMin;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        if (balls[i].yPos > balls[i].yPosMax)
        {
            balls[i].yPos = balls[i].yPosMax;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
        else if (balls[i].yPos < balls[i].yPosMin)
        {
            balls[i].yPos = balls[i].yPosMin;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
    }
    glutSwapBuffers(); // Swap front and back buffers (of double buffered mode)
}
Note: most of the function uses a for loop with numOfBalls as the counter, but to test collision I'm only using 2 balls, hence the balls[0] and balls[1].
Here are some things to consider.
If the length of (xSpeed, ySpeed) is roughly comparable to .ballRadius, it is possible for two balls to travel "through" each other between "ticks" of the simulation's clock (one step). Consider two balls traveling perfectly vertically, one up, one down, 1 .ballRadius apart horizontally. In real life they would clearly collide, but it would be easy for your simulation to miss this event if .ySpeed ~ .ballRadius.
Second, your change to the balls' velocity vectors results in each ball coming to rest, since
balls[0].xSpeed -= balls[0].xSpeed;
is a really exotic way of writing
balls[0].xSpeed = 0;
For approximately correct physics, you need to invert only the velocity component perpendicular to the plane of contact.
In other words, take collision_vector to be the vector between the centers of the balls (just subtract one center's coordinates from the other's). Because you have spheres, this also happens to be the normal of the collision plane.
Now, for each ball in turn, you need to decompose its speed. The A component is the one aligned with collision_vector; you can obtain it with some vector arithmetic: A = dot(Speed, collision_vector) * collision_vector, with collision_vector normalized. This is the component you want to invert. You also want to extract the B component, which is parallel to the collision plane; because it's parallel, it doesn't change in the collision. You obtain it by subtracting A from the speed vector.
Finally, the new speed will be something like B - A. If you want the balls to spin, you will need angular momentum in the direction of A - B. If the balls have different masses, you will need to use the mass ratio as a multiplier for A in the first formula.
This will make the collision look legit. The detection still needs to happen correctly: make sure the speeds are significantly smaller than the radius of the balls. For comparable or larger speeds you will need more complex algorithms.
Note: most of the stuff above is vector arithmetic. Also, it's late here, so I might have mixed up some signs (sorry). Work a simple example out on paper; it will also help you understand the solution better.
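A minimal sketch of the decomposition just described, assuming equal masses and reusing xd, yd, and distance from the question's code (the loop and variable names are mine):
// Unit collision normal, pointing from ball 0 toward ball 1.
GLfloat nx = xd / distance;
GLfloat ny = yd / distance;
for (int k = 0; k < 2; k++)
{
    // A = dot(speed, n) * n is the component along the normal;
    // B = speed - A lies in the contact plane and is unchanged.
    GLfloat along = balls[k].xSpeed * nx + balls[k].ySpeed * ny;
    // New speed = B - A, i.e. speed - 2A.
    balls[k].xSpeed -= 2 * along * nx;
    balls[k].ySpeed -= 2 * along * ny;
}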

How do I manually apply an OpenGL translation matrix to a vertex?

I have a specific need to apply a stored openGL matrix to a vertex by hand. I admit a weak spot with regards to matrix math, but I have read through all the documentation I can find and I'm reasonably sure I'm doing this correctly, but I'm getting an unexpected result. What am I missing?
(Note that this may be a math question, but I suspect I'm actually misunderstanding how to apply the translation matrix, so I thought I'd try here)
In the code snippet below, #1 works fine, #2 fails...
float x=1;
float y=1;
float z=1;
float w=1;
float x2=0;
float y2=0;
float z2=0;
float w2=1;
// 1 THIS WORKS:
glLoadIdentity();
// Convert from NSArray to C float
float modelMatrix[16];
for(int x=0;x<16;x++){modelMatrix[x]=[[cs.modelView objectAtIndex:x] floatValue];}
// Load the matrix the openGL way
glLoadMatrixf(modelMatrix);
// Custom function takes two coordinates and draws a box
[self drawBoxFromX:x FromY:y FromZ:z ToX:x2 ToY:y2 ToZ:z2];
//2 THIS DOES NOT WORK: Apply the matrix by hand
glLoadIdentity();
float new_x = (x*modelMatrix[0])+(y*modelMatrix[4])+(z*modelMatrix[8])+(w*modelMatrix[12]);
float new_y = (x*modelMatrix[1])+(y*modelMatrix[5])+(z*modelMatrix[9])+(w*modelMatrix[13]);
float new_z = (x*modelMatrix[2])+(y*modelMatrix[6])+(z*modelMatrix[10])+(w*modelMatrix[14]);
float new_x2 = (x2*modelMatrix[0])+(y2*modelMatrix[4])+(z2*modelMatrix[8])+(w2*modelMatrix[12]);
float new_y2 = (x2*modelMatrix[1])+(y2*modelMatrix[5])+(z2*modelMatrix[9])+(w2*modelMatrix[13]);
float new_z2 = (x2*modelMatrix[2])+(y2*modelMatrix[6])+(z2*modelMatrix[10])+(w2*modelMatrix[14]);
// Should draw a box identical to the one above, but gives a strange result
[self drawBoxFromX:new_x FromY:new_y FromZ:new_z ToX:new_x2 ToY:new_y2 ToZ:new_z2];
Update:
Based on a helpful comment below, I realized I was only rotating two of the vertexes rather than all 8 of the cube. The following code works as expected; I'm posting it here for anyone who runs into a similar problem wrapping their head around 3d/opengl stuff. (Note: in case it is not obvious, this is not production code. There are many more efficient and less manual ways to multiply matrices and describe cubes (see comments). The purpose of this code is simply to explicitly illustrate a behavior.)
struct Cube myCube;
myCube.a1.x=-1;
myCube.a1.y=-1;
myCube.a1.z=-1;
myCube.b1.x=-1;
myCube.b1.y=-1;
myCube.b1.z=1;
myCube.c1.x=1;
myCube.c1.y=-1;
myCube.c1.z=1;
myCube.d1.x=1;
myCube.d1.y=-1;
myCube.d1.z=-1;
myCube.a2.x=-1;
myCube.a2.y=1;
myCube.a2.z=-1;
myCube.b2.x=-1;
myCube.b2.y=1;
myCube.b2.z=1;
myCube.c2.x=1;
myCube.c2.y=1;
myCube.c2.z=1;
myCube.d2.x=1;
myCube.d2.y=1;
myCube.d2.z=-1;
//1 Load modelview and draw a box (this works fine)
glLoadIdentity();
float modelMatrix[16];
for(int x=0;x<16;x++){modelMatrix[x]=[[cs.modelView objectAtIndex:x] floatValue];}
glLoadMatrixf(modelMatrix);
[self drawCube:myCube];
//2 Load the matrix by hand (identical to above)
glLoadIdentity();
float w=1;
float new_Ax = (myCube.a1.x*modelMatrix[0])+(myCube.a1.y*modelMatrix[4])+(myCube.a1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Ay = (myCube.a1.x*modelMatrix[1])+(myCube.a1.y*modelMatrix[5])+(myCube.a1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Az = (myCube.a1.x*modelMatrix[2])+(myCube.a1.y*modelMatrix[6])+(myCube.a1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Bx = (myCube.b1.x*modelMatrix[0])+(myCube.b1.y*modelMatrix[4])+(myCube.b1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_By = (myCube.b1.x*modelMatrix[1])+(myCube.b1.y*modelMatrix[5])+(myCube.b1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Bz = (myCube.b1.x*modelMatrix[2])+(myCube.b1.y*modelMatrix[6])+(myCube.b1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Cx = (myCube.c1.x*modelMatrix[0])+(myCube.c1.y*modelMatrix[4])+(myCube.c1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Cy = (myCube.c1.x*modelMatrix[1])+(myCube.c1.y*modelMatrix[5])+(myCube.c1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Cz = (myCube.c1.x*modelMatrix[2])+(myCube.c1.y*modelMatrix[6])+(myCube.c1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Dx = (myCube.d1.x*modelMatrix[0])+(myCube.d1.y*modelMatrix[4])+(myCube.d1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Dy = (myCube.d1.x*modelMatrix[1])+(myCube.d1.y*modelMatrix[5])+(myCube.d1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Dz = (myCube.d1.x*modelMatrix[2])+(myCube.d1.y*modelMatrix[6])+(myCube.d1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_A2x = (myCube.a2.x*modelMatrix[0])+(myCube.a2.y*modelMatrix[4])+(myCube.a2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_A2y = (myCube.a2.x*modelMatrix[1])+(myCube.a2.y*modelMatrix[5])+(myCube.a2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_A2z = (myCube.a2.x*modelMatrix[2])+(myCube.a2.y*modelMatrix[6])+(myCube.a2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_B2x = (myCube.b2.x*modelMatrix[0])+(myCube.b2.y*modelMatrix[4])+(myCube.b2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_B2y = (myCube.b2.x*modelMatrix[1])+(myCube.b2.y*modelMatrix[5])+(myCube.b2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_B2z = (myCube.b2.x*modelMatrix[2])+(myCube.b2.y*modelMatrix[6])+(myCube.b2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_C2x = (myCube.c2.x*modelMatrix[0])+(myCube.c2.y*modelMatrix[4])+(myCube.c2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_C2y = (myCube.c2.x*modelMatrix[1])+(myCube.c2.y*modelMatrix[5])+(myCube.c2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_C2z = (myCube.c2.x*modelMatrix[2])+(myCube.c2.y*modelMatrix[6])+(myCube.c2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_D2x = (myCube.d2.x*modelMatrix[0])+(myCube.d2.y*modelMatrix[4])+(myCube.d2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_D2y = (myCube.d2.x*modelMatrix[1])+(myCube.d2.y*modelMatrix[5])+(myCube.d2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_D2z = (myCube.d2.x*modelMatrix[2])+(myCube.d2.y*modelMatrix[6])+(myCube.d2.z*modelMatrix[10])+(w*modelMatrix[14]);
myCube.a1.x=new_Ax;
myCube.a1.y=new_Ay;
myCube.a1.z=new_Az;
myCube.b1.x=new_Bx;
myCube.b1.y=new_By;
myCube.b1.z=new_Bz;
myCube.c1.x=new_Cx;
myCube.c1.y=new_Cy;
myCube.c1.z=new_Cz;
myCube.d1.x=new_Dx;
myCube.d1.y=new_Dy;
myCube.d1.z=new_Dz;
myCube.a2.x=new_A2x;
myCube.a2.y=new_A2y;
myCube.a2.z=new_A2z;
myCube.b2.x=new_B2x;
myCube.b2.y=new_B2y;
myCube.b2.z=new_B2z;
myCube.c2.x=new_C2x;
myCube.c2.y=new_C2y;
myCube.c2.z=new_C2z;
myCube.d2.x=new_D2x;
myCube.d2.y=new_D2y;
myCube.d2.z=new_D2z;
[self drawCube:myCube];
Drawing a rotated box is not the same as rotating two of the box corners and then drawing an axis-parallel box. The simplest way to draw a transformed 3D box is to transform all 8 vertices.
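The repetition above can also be factored into a helper so all 8 corners go through the identical column-major multiply; a sketch, where Vertex stands in for whatever point type the Cube struct's members use:
// Apply a column-major OpenGL 4x4 matrix to a point, with implicit w = 1.
Vertex transformPoint(const float m[16], Vertex p)
{
    Vertex out;
    out.x = p.x * m[0] + p.y * m[4] + p.z * m[8] + m[12];
    out.y = p.x * m[1] + p.y * m[5] + p.z * m[9] + m[13];
    out.z = p.x * m[2] + p.y * m[6] + p.z * m[10] + m[14];
    return out;
}
// Usage: myCube.a1 = transformPoint(modelMatrix, myCube.a1); and likewise
// for the other seven corners before calling drawCube.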

Getting a Virtual Trackball to work from any viewing angle

I am currently trying to work on getting my virtual trackball to work from any angle. When I am looking at it from the z axis, it seems to work fine. I hold my mouse down, and move the mouse up... the rotation will move accordingly.
Now, if I change my viewing angle / position of my camera and try to move my mouse. The rotation will occur as if I were looking from the z axis. I cannot come up with a good way to get this to work.
Here is the code:
void Renderer::mouseMoveEvent(QMouseEvent *e)
{
    // Get coordinates
    int x = e->x();
    int y = e->y();
    if (isLeftButtonPressed)
    {
        // project current screen coordinates onto hemisphere
        Point sphere = projScreenCoord(x, y);
        // find axis by taking cross product of current and previous hemi points
        axis = Point::cross(previousPoint, sphere);
        // angle can be found from magnitude of cross product
        double length = sqrt(axis.x * axis.x + axis.y * axis.y + axis.z * axis.z);
        // Normalize
        axis = axis / length;
        double lengthPrev = sqrt(previousPoint.x * previousPoint.x + previousPoint.y * previousPoint.y + previousPoint.z * previousPoint.z);
        double lengthCur = sqrt(sphere.x * sphere.x + sphere.y * sphere.y + sphere.z * sphere.z);
        angle = asin(length / (lengthPrev * lengthCur));
        // Convert into degrees
        angle = angle * 180 / M_PI;
        // 'add' this rotation matrix to our 'total' rotation matrix
        glPushMatrix(); // save the old matrix so we don't mess anything up
        glLoadIdentity();
        glRotatef(angle, axis[0], axis[1], axis[2]); // our newly calculated rotation
        glMultMatrixf(rotmatrix); // our previous rotation matrix
        glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*) rotmatrix); // we've let OpenGL do our matrix mult for us, now get this result & store it
        glPopMatrix(); // return modelview to its old value
    }
}
// Project screen coordinates onto a unit hemisphere
Point Renderer::projScreenCoord(int x, int y)
{
    // find projected x & y coordinates
    double xSphere = ((double)x / width) * 2.0 - 1.0;
    double ySphere = (1 - ((double)y / height)) * 2.0 - 1.0;
    double temp = 1.0 - xSphere * xSphere - ySphere * ySphere;
    // Guard against taking the sqrt of a negative number
    double zSphere;
    if (temp < 0) { zSphere = 0.0; }
    else { zSphere = sqrt(temp); }
    Point sphere(xSphere, ySphere, zSphere);
    // return the point on the sphere
    return sphere;
}
I am still fairly new at this. Sorry for the trouble and thanks for all the help =)
The usual way involves quaternions; see, e.g., the trackball sample code originally from SGI.
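The specific symptom described (the rotation always responds as if the camera were on the z axis) typically means the trackball axis is computed in eye space but applied in world space. A sketch of the usual correction, under the assumption that only the camera transform is on the modelview stack when the matrix is captured; since the rotation block of a pure rotation matrix has its transpose as its inverse, applying the transpose maps the axis from eye space back to world space:
GLfloat view[16];
glGetFloatv(GL_MODELVIEW_MATRIX, view); // assumes only the camera transform is applied here
// Multiply the eye-space axis by the transposed 3x3 rotation block.
Point worldAxis(
    view[0] * axis.x + view[1] * axis.y + view[2] * axis.z,
    view[4] * axis.x + view[5] * axis.y + view[6] * axis.z,
    view[8] * axis.x + view[9] * axis.y + view[10] * axis.z);
glRotatef(angle, worldAxis.x, worldAxis.y, worldAxis.z);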