Preserving rotations in OpenGL

I'm drawing an object (say, a cube) in OpenGL that a user can rotate by clicking / dragging the mouse across the window. The cube is drawn like so:
void CubeDrawingArea::redraw()
{
    Glib::RefPtr<Gdk::GL::Drawable> gl_drawable = get_gl_drawable();
    gl_drawable->gl_begin(get_gl_context());
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glPushMatrix();
    {
        glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z);
        glCallList(m_cubeID);
    }
    glPopMatrix();

    gl_drawable->swap_buffers();
    gl_drawable->gl_end();
}
and rotated with this function:
bool CubeDrawingArea::on_motion_notify_event(GdkEventMotion* motion)
{
    if (!m_leftButtonDown)
        return true;

    _3V cur_pos;
    get_trackball_point((int) motion->x, (int) motion->y, cur_pos);

    const double dx = cur_pos.x - m_lastTrackPoint.x;
    const double dy = cur_pos.y - m_lastTrackPoint.y;
    const double dz = cur_pos.z - m_lastTrackPoint.z;

    if (dx || dy || dz)
    {
        // Update angle, axis of rotation, and redraw
        m_angle = 90.0 * sqrt((dx * dx) + (dy * dy) + (dz * dz));

        // Axis of rotation comes from cross product of last / cur vectors
        m_rotAxis.x = (m_lastTrackPoint.y * cur_pos.z) - (m_lastTrackPoint.z * cur_pos.y);
        m_rotAxis.y = (m_lastTrackPoint.z * cur_pos.x) - (m_lastTrackPoint.x * cur_pos.z);
        m_rotAxis.z = (m_lastTrackPoint.x * cur_pos.y) - (m_lastTrackPoint.y * cur_pos.x);
        redraw();
    }
    return true;
}
There is some GTK+ stuff in there, but it should be pretty obvious what it's for. The get_trackball_point() function projects the window coordinates X Y onto a hemisphere (the virtual "trackball") that is used as a reference point for rotating the object. Anyway, this more or less works, but after I'm done rotating and I go to rotate again, the cube snaps back to its original position, obviously, since m_angle will be reset to near 0 the next time I rotate. Is there any way to avoid this and preserve the rotation?

Yeah, I ran into this problem too.
What you need to do is keep a rotation matrix around that "accumulates" the current state of rotation, and use it in addition to the rotation matrix that comes from the current dragging operation.
Say you have two matrices, lastRotMx and currRotMx. Make them members of CubeDrawingArea if you like.
You haven't shown us this, but I assume that m_lastTrackPoint is initialized whenever the mouse button goes down for dragging. When that happens, copy currRotMx into lastRotMx.
Then in on_motion_notify_event(), after you calculate m_rotAxis and m_angle, create a new rotation matrix draggingRotMx based on m_rotAxis and m_angle; then multiply lastRotMx by draggingRotMx and put the result in currRotMx.
Finally, in redraw(), instead of
glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z);
rotate by currRotMx.
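Here is a minimal sketch of that approach using the fixed-function matrix stack (the member names m_lastRotMx / m_currRotMx and the helper functions are assumptions, not part of your existing code; both matrices start out as the identity):

// Hypothetical members added to CubeDrawingArea:
//   GLdouble m_lastRotMx[16], m_currRotMx[16];   // both initialised to the identity matrix

// On button-press (wherever m_lastTrackPoint is initialised): save the rotation so far.
void CubeDrawingArea::begin_drag()
{
    for (int i = 0; i < 16; ++i)
        m_lastRotMx[i] = m_currRotMx[i];
}

// In on_motion_notify_event(), after m_angle and m_rotAxis are computed:
void CubeDrawingArea::update_rotation()
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z);  // draggingRotMx
    glMultMatrixd(m_lastRotMx);                                 // compose with the saved rotation
    glGetDoublev(GL_MODELVIEW_MATRIX, m_currRotMx);             // currRotMx = dragging * last
    glPopMatrix();
}

// In redraw(), replace the glRotated() call with:
//   glMultMatrixd(m_currRotMx);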
Update: Or instead of all that... I haven't tested this, but I think it would work:
Make cur_pos a class member so it stays around, but it's initialized to zero, as is m_lastTrackPoint.
Then, whenever a new drag motion is started, before you initialize m_lastTrackPoint, let _3V dpos = cur_pos - m_lastTrackPoint (pseudocode).
Finally, when you do initialize m_lastTrackPoint based on the mouse event coords, subtract dpos from it.
That way, your cur_pos will already be offset from m_lastTrackPoint by an amount based on the accumulation of offsets from past arcball drags.
Error would probably accumulate as well, but it should be gradual enough not to be noticeable. But I'd want to test it to be sure... composed rotations are tricky enough that I don't trust them without seeing them.
P.S. your username is demotivating. Suggest picking another one.
P.P.S. For those who come later searching for answers to this question, the keywords to search on are "arcball rotation". A definitive article is Ken Shoemake's section in Graphics Gems IV. See also this arcball tutorial for JOGL.

Related

How to find Relative Offset of a point inside a non axis aligned box (box that is arbitrarily rotated)

I'm trying to solve a problem where I cannot find the Relative Offset of a Point inside a Box that exists inside of a space that can be arbitrarily rotated and translated.
I know the WorldSpace Location of the Box (and its 4 Corners, the Coordinates on the Image are Relative) as well as its Rotation. These can be arbitrary (its actually a 3D Trigger Volume within a game, but we are only concerned with it in a 2D plane from top down).
Looking at it Aligned to an Axis the Red Point Relative position would be
0.25, 0.25
If the Box is Rotated arbitrarily, I cannot figure out how to ensure that, given we sample the same Point (its World Location will have changed), its Relative Position doesn't change even though the World Rotation of the Box has.
For reference, the Red Point represents an Object that exists in the scene that the Box is encompassing.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker)
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        float RelativeX = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X), FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.X
        );
        float RelativeY = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y), FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.Y
        );
        OutPosition.X = FMath::Abs(RelativeX);
        OutPosition.Y = FMath::Abs(RelativeY);
        bResult = true;
    }
    return bResult;
}
Currently, you can see with the above code that I'm only using the Top Left and Bottom Right corners of the Box to try and calculate the offset. I know this is not a sufficient solution, as doing this does not allow for Rotation (I'd need to use the other 2 corners as well), but I cannot for the life of me work out what I need to do to reach the solution.
FMath::GetMappedRangeValueClamped
This converts one range onto another. (20 - 50) becomes (0 - 1) for example.
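For instance, a quick sketch of what that mapping looks like with the same UE4 call used above:

// Maps 35.f from the input range [20, 50] onto [0, 1]; the result is 0.5f.
float t = FMath::GetMappedRangeValueClamped(FVector2D(20.f, 50.f), FVector2D(0.f, 1.f), 35.f);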
Any assistance/advice on how to approach this problem would be much appreciated.
Thanks.
UPDATE
@Voo's comment helped me realize that the solution was much simpler than anticipated.
By knowing the Location of 3 of the Corners of the Box, I'm able to find the points on the 2 lines these 3 Locations create, then simply mapping those points into a 0-1 range gives the appropriate value regardless of how the Box is Translated.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker && GetMapVolume())
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        const FVector TopLeftLocation = GetMapVolume()->GetCornerTopLeftLocation();
        const FVector TopRightLocation = GetMapVolume()->GetCornerTopRightLocation();
        const FVector BottomLeftLocation = GetMapVolume()->GetCornerBottomLeftLocation();
        FVector XPlane = FMath::ClosestPointOnLine(TopLeftLocation, TopRightLocation, MapMarkerLocation);
        FVector YPlane = FMath::ClosestPointOnLine(TopLeftLocation, BottomLeftLocation, MapMarkerLocation);
        // Convert the X axis into a 0-1 range.
        float RelativeX = FMath::GetMappedRangeValueUnclamped(
            FVector2D(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerTopRightLocation().X),
            FVector2D(0.f, 1.f),
            XPlane.X
        );
        // Convert the Y axis into a 0-1 range.
        float RelativeY = FMath::GetMappedRangeValueUnclamped(
            FVector2D(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomLeftLocation().Y),
            FVector2D(0.f, 1.f),
            YPlane.Y
        );
        OutPosition.X = RelativeX;
        OutPosition.Y = RelativeY;
        bResult = true;
    }
    return bResult;
}
The above code is the amended code from the original question with the correct solution.
Assume the origin corner is at (x0, y0), the other three corners are at (x_x_axis, y_x_axis), (x_y_axis, y_y_axis) and (x1, y1), and the object is at (x_obj, y_obj).
Do these operations to all five points:
(1) Translate all five points by (-x0, -y0), so that the origin is moved to (0, 0) (after that, (x_x_axis, y_x_axis) is moved to (x_x_axis - x0, y_x_axis - y0));
(2) Rotate all five points around (0, 0) by -arctan((y_x_axis - y0)/(x_x_axis - x0)), so that (x_x_axis - x0, y_x_axis - y0) is moved onto the x axis;
(3) Assume the new coordinates are (0, 0), (x_x_axis', 0), (0, y_y_axis'), (x_x_axis', y_y_axis') and (x_obj', y_obj'); then the object's zero-one coordinate is (x_obj'/x_x_axis', y_obj'/y_y_axis').
Rotation formula: (x_new, y_new) = (x_old * cos(theta) - y_old * sin(theta), x_old * sin(theta) + y_old * cos(theta))
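A small plain-C++ sketch of those three steps (the Vec2 type and the function name are just for illustration, not from the question's Unreal code):

#include <cmath>

struct Vec2 { double x, y; };

// origin: the corner chosen as (0, 0); xCorner / yCorner: the corners adjacent to it
// along the box's local X and Y edges; obj: the point whose 0-1 coordinates we want.
Vec2 RelativePositionInBox(Vec2 origin, Vec2 xCorner, Vec2 yCorner, Vec2 obj)
{
    // (1) translate so the origin corner sits at (0, 0)
    const double ax = xCorner.x - origin.x, ay = xCorner.y - origin.y;
    const double bx = yCorner.x - origin.x, by = yCorner.y - origin.y;
    const double ox = obj.x - origin.x,     oy = obj.y - origin.y;

    // (2) rotate everything by -theta so the X edge lies along the x axis
    const double theta = std::atan2(ay, ax);
    const double c = std::cos(-theta), s = std::sin(-theta);
    const double xEdgeLen = ax * c - ay * s;   // length of the X edge after the rotation
    const double yEdgeLen = bx * s + by * c;   // length of the Y edge after the rotation
    const double oxr = ox * c - oy * s;
    const double oyr = ox * s + oy * c;

    // (3) divide by the edge lengths to get the zero-one coordinates
    return { oxr / xEdgeLen, oyr / yEdgeLen };
}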
Update:
Note:
If you use the distance method, you have to take care of the sign of the coordinate if the object might go out of the scene in the future;
If there will be other transformations on the scene in the future (like symmetry transformation if you have mirror magic in the game, or transvection transformation if you have shockwaves, heatwaves or gravitational waves in the game), then the distance method no longer applies and you still have to reverse all the transformations your scene has in order to get the object's coordinate.

OpenGL Raycasting with any object

I'm just wondering if there is any way one can perform mouse picking detection on any object, whether it is a generated object or an imported object.
[Idea] -
The idea I have in mind is that, there would be iterations with every object in the scene. Checking if the mouse ray has intersected with an object. For checking the intersection, it would check the mouse picking ray with the triangles that make up the object.
[Pros] -
I believe the benefit of this approach is that, every object can be detected with mouse picking since they all inherit from the detection method.
[Cons] -
I believe the drawbacks are mainly the speed, and the method being very expensive, so it would need some fine tuning and optimization.
[Situation] -
In the past I have read about mouse picking and I too have implemented some basic form of mouse picking. But all of that was crappy work which I am not proud of. So again today, I have re-read some of the material online. Nowadays I see a lot of mouse picking done using color IDs and shaders. I'm not too keen on this method; I'm more into the mathematical side.
So here is my mouse picking ray thingamajig.
maths::Vector3 Camera::Raycast(s32 mouse_x, s32 mouse_y)
{
    // Normalized Device Coordinates
    maths::Vector2 window_size = Application::GetApplication().GetWindowSize();
    float x = (2.0f * mouse_x) / window_size.x - 1.0f;
    float y = 1.0f - (2.0f * mouse_y) / window_size.y;
    float z = 1.0f;
    maths::Vector3 normalized_device_coordinates_ray = maths::Vector3(x, y, z);

    // Homogeneous Clip Coordinates
    maths::Vector4 homogeneous_clip_coordinates_ray = maths::Vector4(normalized_device_coordinates_ray.x, normalized_device_coordinates_ray.y, -1.0f, 1.0f);

    // 4D Eye (Camera) Coordinates
    maths::Vector4 camera_ray = maths::Matrix4x4::Invert(projection_matrix_) * homogeneous_clip_coordinates_ray;
    camera_ray = maths::Vector4(camera_ray.x, camera_ray.y, -1.0f, 0.0f);

    // 4D World Coordinates
    maths::Vector3 world_coordinates_ray = maths::Matrix4x4::Invert(view_matrix_) * camera_ray;
    world_coordinates_ray = world_coordinates_ray.Normalize();

    return world_coordinates_ray;
}
I have this ray plane intersection function which calculates whether a certain ray has intersected with a certain plane. DUH!
Here is the code for that.
bool Camera::RayPlaneIntersection(const maths::Vector3& ray_origin, const maths::Vector3& ray_direction, const maths::Vector3& plane_origin, const maths::Vector3& plane_normal, float& distance)
{
    float denominator = plane_normal.Dot(ray_direction);
    if (denominator >= 1e-6) // 1e-6 = 0.000001
    {
        maths::Vector3 vector_subtraction = plane_origin - ray_origin;
        distance = vector_subtraction.Dot(plane_normal) / denominator; // t along the ray
        return (distance >= 0);
    }
    return false;
}
There are many more out there, e.g. plane-sphere intersection, plane-disk intersection. These tests are all very specific, so it feels very hard to do mouse picking intersections on a global scale. I feel this way because, for this very RayPlaneIntersection function, what I would have to do is retrieve the objects in the scene and retrieve all the normals for each object (which is a pain in the ass). So now to re-emphasize my question.
Is there already a method out there which I don't know, that does mouse picking in one way for all objects? Or am I just being stupid and not knowing what to do when I have everything?
Thank you. Thank you.
Yes, it is possible to do mouse-picking with OpenGL: you render all the geometry into a special buffer that stores a unique id of the object instead of its shaded color, then you just look at what value you got at the pixel below the mouse and know the object by its id that is written there. However, although it might be simpler, it is not a particularly efficient solution if your camera or geometry constantly moves.
Instead, doing an analytical ray-object intersection is the way to go. However, you don't need to check the intersection of every triangle of every object against the ray; that would be inefficient indeed. You should first cull entire objects by their bounding boxes, or even whole portions of the scene. Game engines have their own spatial index data structures to speed up ray-object intersections. They need them not only for mouse picking, but also for collision detection, physics simulations, AI, and what-not.
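As a rough illustration of the bounding-box culling step, here is a standalone slab-test sketch (the Vec3 type and function name are assumptions, not part of your maths library); only objects whose box passes this test need their triangles checked:

#include <algorithm>
#include <limits>

struct Vec3 { float x, y, z; };

// Returns true if the ray origin + t * dir (t >= 0) hits the axis-aligned box [bmin, bmax].
bool RayIntersectsAABB(const Vec3& origin, const Vec3& dir, const Vec3& bmin, const Vec3& bmax)
{
    float tmin = 0.0f, tmax = std::numeric_limits<float>::max();
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x, dir.y, dir.z };
    const float lo[3] = { bmin.x, bmin.y, bmin.z };
    const float hi[3] = { bmax.x, bmax.y, bmax.z };

    for (int i = 0; i < 3; ++i)
    {
        const float inv = 1.0f / d[i];        // +/-infinity if the ray is parallel to this slab
        float t0 = (lo[i] - o[i]) * inv;
        float t1 = (hi[i] - o[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;        // the slab intervals no longer overlap
    }
    return true;
}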
Also note that the geometry used for the picking might be different from the one used for rendering. One example that comes to mind is that of semi-transparent objects.

First Person Camera movement issues

I'm implementing a first person camera using the GLM library that provides me with some useful functions that calculate perspective and 'lookAt' matrices. I'm also using OpenGL but that shouldn't make a difference in this code.
Basically, what I'm experiencing is that I can look around, much like in a regular FPS, and move around. But the movement is constrained to the three axes in a way that if I rotate the camera, I would still move in the same direction as if I had not rotated it... Let me illustrate (in 2D, to simplify things).
In this image, you can see four camera positions.
Those marked with a one are before movement, those marked with a two are after movement.
The red triangles represent a camera that is oriented straight forward along the z axis. The blue triangles represent a camera that has been rotated to look backward along the x axis (to the left).
When I press the 'forward movement key', the camera moves forward along the z axis in both cases, disregarding the camera orientation.
What I want is a more FPS-like behaviour, where pressing forward moves me in the direction the camera is facing. I thought that with the arguments I pass to glm::lookAt, this would be achieved. Apparently not.
What is wrong with my calculations?
// Calculate the camera's orientation
float angleHori = -mMouseSpeed * Mouse::x; // Note that (0, 0) is the center of the screen
float angleVert = -mMouseSpeed * Mouse::y;

glm::vec3 dir(
    cos(angleVert) * sin(angleHori),
    sin(angleVert),
    cos(angleVert) * cos(angleHori)
);
glm::vec3 right(
    sin(angleHori - M_PI / 2.0f),
    0.0f,
    cos(angleHori - M_PI / 2.0f)
);
glm::vec3 up = glm::cross(right, dir);

// Calculate projection and view matrix
glm::mat4 projMatrix = glm::perspective(mFOV, mViewPortSizeX / (float)mViewPortSizeY, mZNear, mZFar);
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);
gluLookAt takes 3 parameters: eye, centre and up. The first two are positions while the last is a vector. If you're planning on using this function it's better that you maintain only these three parameters consistently.
Coming to the issue with the calculation. I see that the position variable is unchanged throughout the code. All that changes is the look at point I.e. centre only. The right thing to do is to first do position += dir, which will move the camera (position) along the direction pointed to by dir. Now to update the centre, the second parameter can be left as-is: position + dir; this will work since the position was already updated to the new position and from there we've a point farther in dir direction to look at.
The issue was actually in another method. When moving the camera, I needed to do this:
void Camera::moveX(char s)
{
    mPosition += s * mSpeed * mRight;
}

void Camera::moveY(char s)
{
    mPosition += s * mSpeed * mUp;
}

void Camera::moveZ(char s)
{
    mPosition += s * mSpeed * mDirection;
}
To make the camera move across the correct axes.
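To tie it together, here is a hedged sketch of the per-frame update under the same naming assumptions (mDirection, mRight, mUp and mViewMatrix are assumed members; only mPosition, mMouseSpeed and Mouse::x/y come from the original code):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void Camera::update()
{
    float angleHori = -mMouseSpeed * Mouse::x;
    float angleVert = -mMouseSpeed * Mouse::y;

    // Rebuild the camera basis from the current angles...
    mDirection = glm::vec3(cos(angleVert) * sin(angleHori),
                           sin(angleVert),
                           cos(angleVert) * cos(angleHori));
    mRight = glm::vec3(sin(angleHori - M_PI / 2.0f),
                       0.0f,
                       cos(angleHori - M_PI / 2.0f));
    mUp = glm::cross(mRight, mDirection);

    // ...so that moveX/moveY/moveZ displace mPosition along the rotated axes,
    // and the view matrix follows whatever direction the camera is facing.
    mViewMatrix = glm::lookAt(mPosition, mPosition + mDirection, mUp);
}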

opengl - Rotating around a sphere using vectors and NOT glulookat

I'm having an issue with drawing a model and rotating it using the mouse.
I'm pretty sure there's a problem with the mathematics, but I'm not sure where.
The object just rotates in a weird way.
I want the object to keep rotating from its current spot on each click, not to reset, but because the
vectors have now changed, the calculation starts all over again.
void DrawHandler::drawModel(Model* model){
    unsigned int l_index;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // Modeling transformation
    glLoadIdentity();

    Point tempCross;
    crossProduct(tempCross, model->getBeginRotate(), model->getCurrRotate());
    float tempInner = innerProduct(model->getBeginRotate(), model->getCurrRotate());
    float tempNormA = normProduct(model->getBeginRotate());
    float tempNormB = normProduct(model->getCurrRotate());

    glTranslatef(0.0, 0.0, -250.0);
    glRotatef(acos(tempInner / (tempNormA * tempNormB)) * 180.0 / M_PI, tempCross.getX(), tempCross.getY(), tempCross.getZ());

    glColor3d(1, 1, 1);
    glBegin(GL_TRIANGLES);
    for (l_index = 0; l_index < model->getTrianglesDequeSize(); l_index++)
    {
        Triangle t = model->getTriangleByPosition(l_index);
        Vertex a1 = model->getVertexByPosition(t.getA());
        Vertex a2 = model->getVertexByPosition(t.getB());
        Vertex a3 = model->getVertexByPosition(t.getC());
        glVertex3f(a1.getX(), a1.getY(), a1.getZ());
        glVertex3f(a2.getX(), a2.getY(), a2.getZ());
        glVertex3f(a3.getX(), a3.getY(), a3.getZ());
    }
    glEnd();
}
This is the mouse function which saves the beginning vector of the rotating formula
void Controller::mouse(int btn, int state, int x, int y)
{
    x = x - WINSIZEX / 2;
    y = y - WINSIZEY / 2;
    if (btn == GLUT_LEFT_BUTTON){
        switch (state){
            case (GLUT_DOWN):
                if (!_rotating){
                    _model->setBeginRotate(Point(float(x), float(y),
                        (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0 : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
                    _rotating = true;
                }
                break;
            case (GLUT_UP):
                _rotating = false;
                break;
        }
    }
}
and finally the following function, which holds the current vector
(the beginning vector is where the mouse was clicked,
and the curr vector is where the mouse is at the moment):
void Controller::getMousePosition(int x, int y){
    x = x - WINSIZEX / 2;
    y = y - WINSIZEY / 2;
    if (_rotating){
        _model->setCurrRotate(Point(float(x), float(y),
            (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0 : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
    }
}
where SPHERERADIUS is the sphere radius of 70.
Is any calculation wrong? I can't seem to find the problem...
Thanks.
Why so complicated? Either you change the view matrix or you change the model matrix of your focused object. If you choose to change the model matrix and your object is centered at (0,0,0) of the world coordinate system, computing the rotating-around-a-sphere illusion is trivial: you just rotate in the opposite direction. If you want to change the view matrix (which is what actually happens when you change the position of the camera), you have to approximate the surface points on the chosen sphere. Therefore, you could introduce two parameters specifying two angles. Every time you click and move your mouse, you update the parameters and compute the new location on the sphere. There are some useful equations in http://en.wikipedia.org/wiki/Sphere.
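A minimal sketch of that two-angle idea (names and structure are just illustrative, not from your code): the camera orbits its target at a fixed radius, and the mouse deltas update the two angles.

#include <cmath>

struct Orbit {
    float azimuth = 0.0f;    // rotation around the vertical axis, in radians
    float elevation = 0.0f;  // rotation above/below the horizon, in radians
    float radius = 250.0f;   // matches the -250 translation in drawModel()
};

// Computes the camera (eye) position on the sphere around 'target'.
void OrbitToEye(const Orbit& o, const float target[3], float eye[3])
{
    eye[0] = target[0] + o.radius * std::cos(o.elevation) * std::sin(o.azimuth);
    eye[1] = target[1] + o.radius * std::sin(o.elevation);
    eye[2] = target[2] + o.radius * std::cos(o.elevation) * std::cos(o.azimuth);
    // Feed 'eye' and 'target' into whatever view-matrix construction you use.
}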
Without knowing what library (or libraries) you're using your code is rather difficult to read. It seems you're setting up your camera at (0, 0, -250), looking towards the origin, then rotating around the origin by the angle between two vectors, model->getCurrRotate() and model->getBeginRotate().
The problem seems to be that in "mouse down" events you explicitly set BeginRotate to the point on the sphere under the mouse, then in "mouse move" events you set CurrRotate to the point under the mouse, so every time you click somewhere else, you lose the previous state of rotation because BeginRotate and CurrRotate are simply overwritten.
Combining multiple rotations around arbitrary different axes is not a trivially simple task. The proper way to do it is to use quaternions. You may find this primer on quaternions and other 3D math concepts useful.
You might also want a more robust algorithm for converting screen coordinates to model coordinates on the sphere. The one you are using is assuming the sphere appears 70 pixels in radius on the screen and that the projection matrix is orthographic.
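For illustration, here is a hedged sketch of quaternion accumulation using GLM (GLM and the names below are assumptions, not part of your Model/Controller classes); the in-progress drag is composed with everything committed by previous drags, so nothing resets:

#include <cmath>
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::quat g_accumulated(1.0f, 0.0f, 0.0f, 0.0f);  // rotation from all finished drags (identity at start)
glm::quat g_current(1.0f, 0.0f, 0.0f, 0.0f);      // g_accumulated combined with the drag in progress

// Call from getMousePosition() with the trackball vectors for the press point and the current point.
void UpdateDrag(const glm::vec3& beginRotate, const glm::vec3& currRotate)
{
    glm::vec3 from = glm::normalize(beginRotate);
    glm::vec3 to   = glm::normalize(currRotate);
    glm::vec3 axis = glm::cross(from, to);
    if (glm::length(axis) < 1e-6f) return;        // no measurable drag yet
    float angle = std::acos(glm::clamp(glm::dot(from, to), -1.0f, 1.0f));
    glm::quat drag = glm::angleAxis(angle, glm::normalize(axis));
    g_current = drag * g_accumulated;             // compose with what previous drags produced
}

// Call from mouse() on GLUT_UP: commit the finished drag.
void EndDrag() { g_accumulated = g_current; }

// In drawModel(), replace the acos()/glRotatef() pair with the full matrix:
void ApplyRotation()
{
    glm::mat4 m = glm::mat4_cast(g_current);
    glMultMatrixf(glm::value_ptr(m));
}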

Determining Resting contact between sphere and plane when using external forces

This question has one major question, and one minor question. I believe I am right in either question from my research, but not both.
For my physics loop, the first thing I do is apply a gravitational force to my TotalForce for a rigid body object. I then check for collisions using my TotalForce and my Velocity. My TotalForce is reset to (0, 0, 0) after every physics loop, although I will keep my velocity.
I am familiar with doing a collision check between a moving sphere and a static plane when using only velocity. However, what if I have other forces besides velocity, such as gravity? I put the other forces into TotalForces (right now I only have gravity). To compensate for that, when I determine that the sphere is not currently overlapping the plane, I do
Vector3 forces = (sphereTotalForces + sphereVelocity);
Vector3 forcesDT = forces * fElapsedTime;
float denom = Vec3Dot(&plane->GetNormal(), &forces);
However, this can be problematic for how I thought was suppose to be resting contact. I thought resting contact was computed by
denom * dist == 0.0f
Where dist is
float dist = Vec3Dot(&plane->GetNormal(), &spherePosition) - plane->d;
(For reference, the obvious denom * dist > 0.0f meaning the sphere is moving away from the plane)
However, this can never be true, even when there appears to be "resting contact". This is because my forces calculation above always has at least a y of -9.8 (my gravity), so moving towards a plane with a normal of (0, 1, 0) will produce a denom of -9.8.
My questions are:
1) Am I calculating resting contact correctly, as shown in my first two code snippets?
If so,
2) How should my "other forces", such as gravity, be used? Is my use of TotalForces incorrect?
For reference, my timestep is
mAcceleration = mTotalForces / mMass;
mVelocity += mAcceleration * fElapsedTime;
Vector3 translation = (mVelocity * fElapsedTime);
EDIT
Since it appears that some suggested changes will change my collision code, here is how i detect my collision states
if (fabs(dist) <= sphereRadius)
{
    // There already is a collision
}
else
{
    Vector3 forces = (sphereTotalForces + sphereVelocity);
    float denom = Vec3Dot(&plane->GetNormal(), &forces);

    // Resting contact
    if (dist == 0) { }
    // Sphere is moving away from plane
    else if (denom * dist > 0.0f) { }
    // There will eventually be a collision
    else
    {
        float fIntersectionTime = (sphereRadius - dist) / denom;
        float r;
        if (dist > 0.0f)
            r = sphereRadius;
        else
            r = -sphereRadius;
        Vector3 collisionPosition = spherePosition + fIntersectionTime * sphereVelocity - r * planeNormal;
    }
}
You should use if (fabs(dist) < 0.0001f) { /* collided */ }. This is to account for floating point inaccuracies; you almost certainly will not get an exact 0.0f at most angles of contact.
The value of dist, if negative, is in fact the amount you need to shift the body back onto the surface of the plane in case it goes through the plane surface: sphere.position = sphere.position - plane.Normal * fabs(dist);
Once you have moved it back to the surface, you can optionally make it bounce in the opposite direction about the plane normal; or just stay on the plane.
parallel_vec = plane.normal * Vec3.dot(plane.normal, sphere.velocity);
perpendicular_vec = sphere.velocity - parallel_vec;
bounce_velocity = perpendicular_vec - parallel_vec;
you cannot blindly do totalforce = external_force + velocity unless everything has unit mass.
EDIT:
To fully define a plane in 3D space, your plane structure should store a plane normal vector and a point on the plane. http://en.wikipedia.org/wiki/Plane_(geometry) .
Vector3 planeToSphere = sphere.point - plane.point;
float dist = Vector3.dot(plane.normal, planeToSphere) - sphere.radius;
if (dist < 0)
{
    // collided.
}
I suggest you study more Maths first if this is the part you do not know.
EDIT 2:
Based on my understanding of your code, either you are naming your variables badly or, as I mentioned earlier, you need to revise your maths and physics theory. This line does not do anything useful:
float denom = Vec3Dot(&plane->GetNormal(), &forces);
At any instant of time, a force on the sphere can be in any direction at all, unrelated to the direction of travel. So denom essentially calculates the amount of force in the direction of the plane surface, but tells you nothing about whether the ball will hit the plane. E.g. gravity is downwards, but a ball can have upward velocity and hit a plane above. With that in mind, you need Vec3Dot(plane.normal, velocity) instead.
Alternatively, Mark Phariss and Gerhard Powell have already given you the physics equations for linear kinematics; you can use those to directly calculate future positions, velocity and the time of impact.
E.g. s = 0.5 * (u + v) * t gives the displacement after future time t. Compare that displacement with the distance from the plane and you get whether the sphere will hit the plane. So again, I suggest you read up on http://en.wikipedia.org/wiki/Linear_motion and the easy stuff first, then http://en.wikipedia.org/wiki/Kinematics .
Yet another method, if you expect or assume no other forces to act on the sphere, then you do a ray / plane collision test to find the time t at which it will hit the plane, in that case, read http://en.wikipedia.org/wiki/Line-plane_intersection .
There will always be -9.8y of gravity acting on the sphere. In the case of a suspended sphere this will result in downwards acceleration (net force is non-zero). In the case of the sphere resting on the plane this will result in the plane exerting a normal force on the sphere. If the plane was perfectly horizontal with the sphere at rest this normal force would be exactly +9.8y which would perfectly cancel the force of gravity. For a sphere at rest on a non-horizontal plane the normal force is 9.8y * cos(angle) (angle is between -90 and +90 degrees).
Things get more complicated when a moving sphere hits a plane as the normal force will depend on the velocity and the plane/sphere material properties. Depending what your application requirements are you could either ignore this or try some things with the normal forces and see how it works.
For your specific questions:
I believe contact is more specifically just when dist == 0.0f, that is the sphere and plane are making contact. I assume your collision takes into account that the sphere may move past the plane in any physics time step.
Right now you don't appear to have any normal forces being put on the sphere from the plane when they are making contact. I would do this by checking for contact (dist == 0.0f) and if true adding the normal force to the sphere. In the simple case of a falling sphere onto a near horizontal plane (angle between -90 and +90 degrees) it would just be sphereTotalForces += Vector3D(0, 9.8 * cos(angle), 0).
Edit:
From here your equation for dist to compute the distance from the edge of sphere to the plane may not be correct depending on the details of your problem and code (which isn't given). Assuming your plane goes through the origin the correct equation is:
dist = Vec3Dot(&spherePosition, &plane->GetNormal()) - sphereRadius;
This is the same as your equation if plane->d == sphereRadius. Note that if the plane is not at the origin then use:
D3DXVECTOR3 vecTemp(spherePosition - pointOnPlane);
dist = Vec3Dot(&vecTemp, &plane->GetNormal()) - sphereRadius;
The exact solution to this problem involves some pretty serious math. If you want an approximate solution I strongly recommend developing it in stages.
1) Make sure your sim works without gravity. The ball must travel through space and have inelastic (or partially elastic) collisions with angled frictionless surfaces.
2) Introduce gravity. This will change ballistic trajectories from straight lines to parabolae, and introduce sliding, but it won't have much effect on collisions.
3) Introduce static and kinetic friction (independently). These will change the dynamics of sliding. Don't worry about friction in collisions for now.
4) Give the ball angular velocity and a moment of inertia. This is a big step. Make sure you can apply torques to it and get realistic angular accelerations. Note that realistic behavior of a spinning mass can be counter-intuitive.
5) Try sliding the ball along a level surface, under gravity. If you've done everything right, its angular velocity will gradually increase and its linear velocity gradually decrease, until it breaks into a roll. Experiment with giving the ball some initial spin ("draw", "follow" or "english").
6) Try the same, but on a sloped surface. This is a relatively small step.
If you get this far you'll have a pretty realistic sim. Don't try to skip any of the steps, you'll only give yourself headaches.
Answers to your physics problems:
f = mg + other_f; // m = mass, g = gravity (9.8)
a = f / m; // a = acceleration
v = u + at; // v = new speed, u = old speed, t = delta time
s = 0.5 * (u + v) * t;
When you have a collision, you change both speeds to 0 (or set v and u to -(u * 0.7) if you want it to bounce).
Because speed = 0, the ball is standing still.
If it is 2D or 3D, then you just change the speed in the direction of the normal of the surface to 0, and keep the parallel speed the same. That will result in the ball rolling on the surface.
You must move the ball to the surface if it cuts the surface. You can set the collision distance to a small amount (for example 0.001) to make sure it stays still.
http://www.physicsforidiots.com/dynamics.html#vuat
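As an illustration of those steps, here is a minimal self-contained sketch (the Vec3 type and StepSphere function are assumptions, not the question's classes): integrate, push the sphere back onto the plane if it sank in, and zero the normal component of the velocity so the ball rests or rolls.

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator*(float s)       const { return { x * s, y * s, z * s }; }
};
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

void StepSphere(Vec3& pos, Vec3& vel, const Vec3& gravity, float dt,
                const Vec3& planeNormal, float planeD, float radius)
{
    vel = vel + gravity * dt;                 // v = u + a * t  (a = g here; forces and velocity are never summed)
    pos = pos + vel * dt;                     // semi-implicit Euler position update

    float dist = Dot(planeNormal, pos) - planeD - radius;  // signed gap between sphere surface and plane
    if (dist < 0.0f)
    {
        pos = pos - planeNormal * dist;       // push back onto the surface (dist is negative here)
        float vn = Dot(planeNormal, vel);
        if (vn < 0.0f)
            vel = vel - planeNormal * vn;     // remove the normal component, keep the parallel one
    }
}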
Edit:
NeHe is an amazing source of game engine design:
Here is a page on collision detection with very good descriptions:
http://nehe.gamedev.net/tutorial/collision_detection/17005/
Edit 2: (From NeHe)
double DotProduct=direction.dot(plane._Normal); // Dot Product Between Plane Normal And Ray Direction
Dsc=(plane._Normal.dot(plane._Position-position))/DotProduct; // Find Distance To Collision Point
Tc= Dsc*T / Dst
Collision point= Start + Velocity*Tc
I suggest after that to take a look at Erin Catto's articles (the author of Box2D) and Glenn Fiedler's articles as well.
Gravity is a strong acceleration and results in strong forces. It is very easy to end up with faulty simulations because of floating-point imprecision, variable timesteps and Euler integration.
The repositioning of the sphere at the plane surface, in case it starts to bury itself past the plane, is mandatory. I noticed myself that it is better to do it only if the velocity of the sphere opposes the plane normal (this can be compared to face culling in 3D rendering: do not take backfacing planes into account).
Also, most physics engines stop simulating idle bodies, and most games never take gravity into account while moving, only when falling. They use "navigation meshes" and custom systems as long as they are sure the simulated object is sticking to its "ground".
I don't know of a flawless physics simulator out there; there will always be an integration explosion, a missed collision (look for "swept collision")... it takes a lot of empirical fine-tweaking.
Also I suggest you look at "impulses", which is a method to avoid manually tweaking the velocity when encountering a collision.
Also take a look at "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
Good luck, you have entered a minefield: the randomly incomprehensible, finger-biting area of numerical computer science :)
For higher fidelity (wouldn't solve your main problem), I'd change your timestep to
mAcceleration = mTotalForces / mMass;
Vector3 translation = (mVelocity * fElapsedTime) + 0.5 * mAcceleration * pow(fElapsedTime, 2);
mVelocity += mAcceleration * fElapsedTime;
You mentioned that the sphere was a rigid body; are you also modeling the plane as rigid? If so, you'd have an infinite point force at the moment of contact & perfectly elastic collision without some explicit dissipation of momentum.
Force & velocity cannot be summed (incompatible units); if you're just trying to model the kinematics, you can disregard mass and work with acceleration & velocity only.
Assuming the sphere is simply dropped onto a horizontal plane with a perfectly inelastic collision (no bounce), you could do [N.B., I don't really know C syntax, so this'll be Pythonic]
mAcceleration = if isContacting then (0, 0, 0) else (0, -9.8, 0)
If you add some elasticity (say half momentum conserved) to the collision, it'd be more like
mAcceleration = (0, -9.8, 0) + (if isContacting then (0, 4.9, 0) else (0, 0, 0))