Calculating the Angle Between a 3D Object and a Point - c++

I have a 3D object in DirectX11 with a position vector and a rotation angle for the direction it's facing (it only rotates around the Y axis).
D3DXVECTOR3 m_position;
float m_angle;
If I then want to rotate the object to face something, I need to find the angle between the direction it's facing and the direction it needs to face, using the dot product of the two normalised vectors.
The thing I'm having a problem with is how I find the direction the object is currently facing with just its position and the angle. What I currently have is:
D3DXVECTOR3 normDirection, normTarget;
D3DXVec3Normalize( &normDirection, ????);
D3DXVec3Normalize( &normTarget, &(m_position-target));
// I store the values in degrees because I prefer it
float angleToRotate = D3DXToDegree( acos( D3DXVec3Dot( &normDirection, &normTarget)));
Does anyone know how I get the vector for the current direction it's facing from the values I have, or do I need to re-write it so I keep track of the object's direction vector?
EDIT: Changed 'cos' to 'acos'.
SOLUTION (with the assistance of user2802841):
// assuming these are member variables
D3DXVECTOR3 m_position;
D3DXVECTOR3 m_rotation;
D3DXVECTOR3 m_direction;
// and these are local variables
D3DXVECTOR3 target; // passed in as a parameter
D3DXVECTOR3 targetNorm;
D3DXVECTOR3 upVector;
float angleToRotate;
// find the amount to rotate by
D3DXVec3Normalize( &targetNorm, &(target-m_position));
angleToRotate = D3DXToDegree( acos( D3DXVec3Dot( &targetNorm, &m_direction)));
// calculate the up vector between the two vectors
D3DXVec3Cross( &upVector, &m_direction, &targetNorm);
// switch the angle to negative if the up vector is going down
if( upVector.y < 0)
    angleToRotate *= -1;
// add the rotation to the object's overall rotation
m_rotation.y += angleToRotate;

If you store the orientation as an angle then you must assume some default orientation for an angle of 0. Express that default orientation as a vector (for example (1.0, 0.0, 0.0) if you take the +X direction as the default), then rotate that vector by your angle degrees (either directly with sin/cos or by using a rotation matrix created by one of the D3DXMatrixRotation* functions). The resulting vector is your current direction.
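For example, a minimal sketch (assuming the default facing is +Z at 0 degrees and that m_angle is stored in degrees, as in the question):
D3DXVECTOR3 defaultDir( 0.0f, 0.0f, 1.0f ); // assumed default facing at 0 degrees
D3DXMATRIX rotY;
D3DXMatrixRotationY( &rotY, D3DXToRadian( m_angle ) ); // D3DXMatrixRotationY expects radians
D3DXVECTOR3 normDirection;
D3DXVec3TransformNormal( &normDirection, &defaultDir, &rotY ); // the current facing vector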
EDIT:
In answer to your comment, you can determine the rotation direction with a cross product, which in your case translates to something like (untested):
D3DXVECTOR3 cross;
D3DXVec3Cross(&cross, &normDirection, &normTarget);
float dot = D3DXVec3Dot( &cross, &normal); // D3DXVec3Dot returns the float directly
if (dot < 0) {
    // angle is negative
} else {
    // angle is positive
}
Where normal is most likely the (0, 1, 0) vector (because you said your objects rotate around the Y axis).

If, as you say, 0 degrees == (0,0,1), you can use:
normDirection = ( sin( angle ), 0, cos( angle ) )
because the rotation is always around the Y axis.
You will have to change the sign of sin(angle) depending on whether your system is left-handed or right-handed.
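In C++ that could look like this (a sketch, assuming m_angle is stored in degrees as in the question):
float rad = D3DXToRadian( m_angle ); // convert the stored degrees to radians
D3DXVECTOR3 normDirection( sinf( rad ), 0.0f, cosf( rad ) ); // facing vector, 0 degrees == (0,0,1)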

Related

Determining rotation matrix about an axis for a given angle

I've been trying to understand matrices and vectors and implemented Rodrigues' rotation formula to determine the rotation matrix about an axis for a given angle. I've got a function Transform which calls out to a function rotate.
// initial values of eye = {0,0,7}
// initial values of up = {0,1,0}
void Transform(float degrees, vec3& eye, vec3& up) {
    vec3 axis = glm::cross(glm::normalize(eye), glm::normalize(up));
    glm::normalize(axis);
    mat3 resultRotate = rotate(degrees, axis);
    eye = eye * resultRotate;
    glm::normalize(eye);
    up = up * resultRotate;
    glm::normalize(up);
}
mat3 rotate(const float degrees, const vec3& axis) {
    // Implement Rodrigues' axis-angle rotation formula
    float radDegree = glm::radians(degrees);
    float cosValue = cosf(radDegree);
    float minusCos = 1 - cosValue;
    float sinValue = sinf(radDegree);
    float cartesianX = axis.x;
    float cartesianY = axis.y;
    float cartesianZ = axis.z;
    mat3 myFinalResult = mat3(
        cosValue + (cartesianX*cartesianX*minusCos), ((cartesianX*cartesianY*minusCos) - (cartesianZ*sinValue)), ((cartesianX*cartesianZ*minusCos) + (cartesianY*sinValue)),
        ((cartesianX*cartesianY*minusCos) + (cartesianZ*sinValue)), (cosValue + (cartesianY*cartesianY*minusCos)), ((cartesianY*cartesianZ*minusCos) - (cartesianX*sinValue)),
        ((cartesianX*cartesianZ*minusCos) - (cartesianY*sinValue)), ((cartesianY*cartesianZ*minusCos) + (cartesianX*sinValue)), ((cartesianZ*cartesianZ*minusCos) + cosValue));
    return myFinalResult;
}
All the values, the resultant rotation matrix and the changed vectors are as expected for a positive angle of rotation, but wrong for negative angles, and from then on the error cascades until all the vectors are re-initialised. Can someone please help me figure out the problem? I cannot use any inbuilt functions like glm::rotate.
I do not use Rodrigues' rotation formula because it needs to solve a system of equations at runtime and gets very complicated in higher dimensions.
Instead I use axis-aligned incremental rotations along with 4x4 homogeneous transform matrices, which are easily portable to higher dimensions (like 4D rotors).
Now, there are local and global rotations. Local rotations rotate around the local axes of your matrix's coordinate system; global rotations rotate around the axes of the world (or main) coordinate system.
What you want is to create a transform matrix around some point, axis and angle. To do that:
1. Create a transform matrix A that has one axis aligned to the axis of rotation and its origin at the center of rotation. To construct such a matrix you need two perpendicular vectors, which are easily obtainable from cross products.
2. Rotate A around its local axis aligned to the axis of rotation by angle, by simple multiplication of A by the axis-aligned incremental rotation R:
A*R;
3. Revert the original transform of A before the rotation, by simply multiplying the result by the inverse of A:
A*R*Inverse(A);
4. Apply this to the matrix M you want to rotate, also by simple multiplication:
M*=A*R*Inverse(A);
And that is it... In 3D OBB approximation you can find this function:
template <class T> _mat4<T> rotate(_mat4<T> &m, T ang, _vec3<T> p0, _vec3<T> dp)
{
    int i;
    T c = cos(ang), s = sin(ang);
    _vec3<T> x, y, z;
    _mat4<T> a, _a, r = mat4(
        1, 0, 0, 0,
        0, c, s, 0,
        0,-s, c, 0,
        0, 0, 0, 1);
    // basis vectors
    x = normalize(dp);     // axis of rotation
    y = _vec3<T>(1, 0, 0); // any vector not parallel to x
    if (fabs(dot(x, y)) > 0.75) y = _vec3<T>(0, 1, 0);
    z = cross(x, y);       // z is perpendicular to x,y
    y = cross(z, x);       // y is perpendicular to x,z
    y = normalize(y);
    z = normalize(z);
    // feed the matrix
    for (i = 0; i < 3; i++)
    {
        a[0][i] = x[i];
        a[1][i] = y[i];
        a[2][i] = z[i];
        a[3][i] = p0[i];
        a[i][3] = 0;
    }
    a[3][3] = 1;
    _a = inverse(a);
    r = m*a*r*_a;
    return r;
}
That does exactly that. Here m is the original matrix to transform (the rotated matrix is returned), ang is the signed angle in [rad], p0 is the center of rotation and dp is the direction vector of the axis of rotation.
This approach does not have any singularities, nor any problem rotating by negative angles...
If you want to use this with glm or any other GLSL-like math, just change the templated types to the ones you use, so float, vec3, mat4 instead of T, _vec3<T>, _mat4<T>.
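For example, a rough port of the function above to plain glm types might look like this (an untested sketch that keeps the same conventions as the template version):
#include <glm/glm.hpp>
#include <cmath>

glm::mat4 rotate(const glm::mat4 &m, float ang, const glm::vec3 &p0, const glm::vec3 &dp)
{
    float c = cosf(ang), s = sinf(ang);
    glm::mat4 r(1, 0, 0, 0,
                0, c, s, 0,
                0,-s, c, 0,
                0, 0, 0, 1);                              // incremental rotation around the local x axis
    glm::vec3 x = glm::normalize(dp);                     // axis of rotation
    glm::vec3 y(1, 0, 0);                                 // any vector not parallel to x
    if (fabsf(glm::dot(x, y)) > 0.75f) y = glm::vec3(0, 1, 0);
    glm::vec3 z = glm::normalize(glm::cross(x, y));       // z is perpendicular to x,y
    y = glm::normalize(glm::cross(z, x));                 // y is perpendicular to x,z
    glm::mat4 a(glm::vec4(x, 0.0f), glm::vec4(y, 0.0f),
                glm::vec4(z, 0.0f), glm::vec4(p0, 1.0f)); // basis vectors and origin as columns
    return m * a * r * glm::inverse(a);                   // M * A * R * Inverse(A)
}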

Arcball camera locked when parallel to up vector

I'm currently in the process of finishing the implementation for a camera that functions in the same way as the camera in Maya. The part I'm stuck on is the tumble functionality.
The problem is the following: the tumble feature works fine so long as the position of the camera is not parallel with the up vector (currently defined to be (0, 1, 0)). As soon as the camera becomes parallel with this vector (so it is looking straight up or down), the camera locks in place and will only rotate around the up vector instead of continuing to roll.
This question has already been asked here; unfortunately there is no actual solution to the problem. For reference, I also tried updating the up vector as I rotated the camera, but the resulting behaviour is not what I require (the view rolls as a result of the new orientation).
Here's the code for my camera:
using namespace glm;
// point is the position of the cursor in screen coordinates from GLFW
float deltaX = point.x - mImpl->lastPos.x;
float deltaY = point.y - mImpl->lastPos.y;
// Transform from screen coordinates into camera coordinates
Vector4 tumbleVector = Vector4(-deltaX, deltaY, 0, 0);
Matrix4 cameraMatrix = lookAt(mImpl->eye, mImpl->centre, mImpl->up);
Vector4 transformedTumble = inverse(cameraMatrix) * tumbleVector;
// Now compute the two vectors to determine the angle and axis of rotation.
Vector p1 = normalize(mImpl->eye - mImpl->centre);
Vector p2 = normalize((mImpl->eye + Vector(transformedTumble)) - mImpl->centre);
// Get the angle and axis
float theta = 0.1f * acos(dot(p1, p2));
Vector axis = cross(p1, p2);
// Rotate the eye.
mImpl->eye = Vector(rotate(Matrix4(1.0f), theta, axis) * Vector4(mImpl->eye, 0));
The vector library I'm using is GLM. Here's a quick reference on the custom types used here:
typedef glm::vec3 Vector;
typedef glm::vec4 Vector4;
typedef glm::mat4 Matrix4;
typedef glm::vec2 Point2;
mImpl is a PIMPL that contains the following members:
Vector eye, centre, up;
Point2 lastPos;
Here is what I think: it has something to do with gimbal lock, which occurs with Euler angles (and thus with spherical coordinates).
If you exceed your minimum (0, -zoom, 0) or maximum (0, zoom, 0) you have to toggle a boolean. This boolean tells you whether you must treat deltaY as positive or not.
It could also just be caused by a singularity, so alternatively just limit your polar angle values between 89.99° and -89.99°.
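For instance (a one-line sketch, assuming the polar angle is tracked in a float called pitch, in degrees):
pitch = glm::clamp(pitch, -89.99f, 89.99f); // keep the camera just short of the poles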
Your problem could be solved like this.
So if your camera is exactly above (0, zoom, 0) or beneath (0, -zoom, 0) your object, the camera only rolls.
(I am also assuming your object is at (0,0,0) and the up-vector is set to (0,1,0).)
There might be some mathematical trick to resolve this; I would do it with linear algebra, though.
You need to introduce a new right-vector. You can get it with a cross product: right-vector = up-vector x camera-vector. Imagine these vectors starting at (0,0,0); then, to get your camera position, just take the subtraction (0,0,0) - (camera-vector).
So if you get some deltaX, you rotate towards the right-vector (around the up-vector) and update it.
Any influence of deltaX should not change your up-vector.
If you get some deltaY, you rotate towards the up-vector (around the right-vector) and update it. (This has no influence on the right-vector.)
At https://en.wikipedia.org/wiki/Rotation_matrix, under "Rotation matrix from axis and angle", you can find an important formula.
Say u is the vector you want to rotate around and theta is the amount you want to pivot; the size of theta is proportional to deltaX/deltaY.
For example: we get an input from deltaX, so we rotate around the up-vector.
up-vector := (0,1,0)
right-vector := (0,0,-1)
cam-vector := (1,0,0)
theta := -1*30° // -1 due to the positive mathematical direction of rotation
R = {[cos(-30°), 0, -sin(-30°)], [0, 1, 0], [sin(-30°), 0, cos(-30°)]}
new-cam-vector = R * cam-vector // normal matrix multiplication
One thing is left to be done: update the right-vector.
right-vector = up-vector x camera-vector.
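A minimal glm sketch of the two rotations (assumed conventions: the object sits at (0,0,0), eye is the camera position, deltaX/deltaY are already scaled to radians, and glm::rotate comes from <glm/gtc/matrix_transform.hpp>; flip the signs to match your setup):
glm::vec3 cam = glm::normalize(-eye);                              // camera-vector points at the object
glm::vec3 right = glm::normalize(glm::cross(up, cam));             // right-vector = up-vector x camera-vector
glm::mat4 yawRot = glm::rotate(glm::mat4(1.0f), -deltaX, up);      // deltaX: rotate around the up-vector
glm::mat4 pitchRot = glm::rotate(glm::mat4(1.0f), -deltaY, right); // deltaY: rotate around the right-vector
eye = glm::vec3(pitchRot * yawRot * glm::vec4(eye, 1.0f));
up = glm::normalize(glm::vec3(pitchRot * glm::vec4(up, 0.0f)));    // deltaX leaves the up-vector unchanged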

Quaternion-Based-Camera unwanted roll

I'm trying to implement a quaternion-based camera, but when moving around the X and Y axis, the camera produces an unwanted roll on the Z axis. I want to be able to look around freely on all axis.
I've read other topics about this problem (for example: http://www.flipcode.com/forums/thread/6525 ), but I'm not getting what is meant by "Fix this by continuously rebuilding the rotation matrix by rotating around the WORLD axis, i.e around <1,0,0>, <0,1,0>, <0,0,1>, not your local coordinates, whatever they might be."
//Camera.hpp
glm::quat rotation;
//Camera.cpp
void Camera::rotate(glm::vec3 vec)
{
    glm::quat paramQuat = glm::quat(vec);
    rotation = paramQuat * rotation;
}
I call the rotate function like this:
cam->rotate(glm::vec3(0, 0.5, 0));
This must have to do with local/world coordinates, right? I'm just not getting it, since I thought quaternions are always based on each other (thus a quaternion can't be in "world" or "local" space?).
Also, should I use more than one quaternion for a camera?
As far as I understand it, and from looking at the code you provided, what they mean is that you shouldn't store and apply the rotation incrementally by applying rotate on the rotation quat all the time. Instead, keep track of one angle per world axis (X and Y), rebuild a quaternion for each axis from scratch every frame, and calculate the rotation quaternion as the product of those.
[edit: some added (pseudo)code]
// Camera.cpp
void Camera::SetRotation(glm::quat value)
{
    rotation = value;
}
// controller.cpp --> probably a place where you'd want to translate user input and store your rotational state
xAngle += deltaX;
yAngle += deltaY;
// rebuild both quaternions from scratch around the world axes (glm::angleAxis expects radians in recent glm)
glm::quat rotationX = glm::angleAxis(xAngle, glm::vec3(1, 0, 0));
glm::quat rotationY = glm::angleAxis(yAngle, glm::vec3(0, 1, 0));
camera.SetRotation(rotationX * rotationY);
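A usage note: once the orientation is stored this way, a view matrix can be derived from the quaternion. This is only a sketch; it assumes a glm::vec3 position member and glm's <glm/gtc/quaternion.hpp> and <glm/gtc/matrix_transform.hpp> headers:
glm::mat4 Camera::GetViewMatrix() const
{
    glm::mat4 view = glm::mat4_cast(glm::conjugate(rotation)); // inverse rotation (conjugate of a unit quaternion)
    return glm::translate(view, -position);                    // followed by the inverse translation
}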

How to properly move the camera in the direction it's facing

I'm trying to figure out how to make the camera in directx move based on the direction it's facing.
Right now the way I move the camera is by passing the camera's current position and rotation to a class called PositionClass. PositionClass takes keyboard input from another class called InputClass and then updates the position and rotation values for the camera, which is then passed back to the camera class.
I've written some code that seems to work great for me; using the camera's pitch and yaw I'm able to get it to go in the direction I've pointed the camera.
However, when the camera is looking straight up (pitch = 90) or straight down (pitch = -90), it still changes the camera's X and Z position (depending on the yaw).
The expected behavior is while looking straight up or down it will only move along the Y axis, not along the X or Z axis.
Here's the code that calculates the new camera position
void PositionClass::MoveForward(bool keydown)
{
    float radiansY, radiansX;
    // Update the forward speed movement based on the frame time
    // and whether the user is holding the key down or not.
    if(keydown)
    {
        m_forwardSpeed += m_frameTime * m_acceleration;
        if(m_forwardSpeed > (m_frameTime * m_maxSpeed))
        {
            m_forwardSpeed = m_frameTime * m_maxSpeed;
        }
    }
    else
    {
        m_forwardSpeed -= m_frameTime * m_friction;
        if(m_forwardSpeed < 0.0f)
        {
            m_forwardSpeed = 0.0f;
        }
    }
    // ToRadians() just multiplies degrees by 0.0174532925f
    radiansY = ToRadians(m_rotationY); // yaw
    radiansX = ToRadians(m_rotationX); // pitch
    // Update the position.
    m_positionX += sinf(radiansY) * m_forwardSpeed;
    m_positionY += -sinf(radiansX) * m_forwardSpeed;
    m_positionZ += cosf(radiansY) * m_forwardSpeed;
    return;
}
The significant portion is where the position is updated at the end.
So far I've only been able to deduce that I have horrible math skills.
So, can anyone help me with this dilemma? I've created a fiddle to help test out the math.
Edit: The fiddle uses the same math I used in my MoveForward function; if you set pitch to 90 you can see that the Z axis is still being modified.
Thanks to Chaosed0's answer, I was able to figure out the correct formula to calculate movement in a specific direction.
The fixed code below is basically the same as above but now simplified and expanded to make it easier to understand.
First we determine the amount by which the camera will move, in my case this was m_forwardSpeed, but here I will define it as offset.
float offset = 1.0f;
Next you will need to get the camera's X and Y rotation values (in degrees!)
float pitch = camera_rotationX;
float yaw = camera_rotationY;
Then we convert those values into radians
float pitchRadian = pitch * (PI / 180); // X rotation
float yawRadian = yaw * (PI / 180); // Y rotation
Now here is where we determine the new position:
float newPosX = offset * sinf( yawRadian ) * cosf( pitchRadian );
float newPosY = offset * -sinf( pitchRadian );
float newPosZ = offset * cosf( yawRadian ) * cosf( pitchRadian );
Notice that we only multiply the X and Z positions by the cosine of pitchRadian; this scales the horizontal movement towards zero as the pitch approaches straight up (90) or straight down (-90), so the yaw no longer causes X/Z drift there.
And finally, you need to tell your camera the new position, which I won't cover because it largely depends on how you've implemented your camera. Apparently doing it this way is out of the norm, and possibly inefficient. However, as Chaosed0 said, it's what makes the most sense to me!
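Put together, a minimal sketch of the fixed update (reusing the member variables from the original MoveForward; the speed handling is unchanged):
void PositionClass::MoveForward(bool keydown)
{
    // ... update m_forwardSpeed exactly as before ...
    float pitchRadian = ToRadians(m_rotationX); // X rotation
    float yawRadian = ToRadians(m_rotationY);   // Y rotation
    m_positionX += m_forwardSpeed * sinf(yawRadian) * cosf(pitchRadian);
    m_positionY += m_forwardSpeed * -sinf(pitchRadian);
    m_positionZ += m_forwardSpeed * cosf(yawRadian) * cosf(pitchRadian);
}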
To be honest, I'm not entirely sure I understand your code, so let me try to provide a different perspective.
The way I like to think about this problem is in spherical coordinates, basically just polar coordinates in 3D. Spherical coordinates are defined by three numbers: a radius and two angles. One of the angles is yaw, and the other should be pitch, assuming you have no roll (I believe there's a way to get phi if you have roll, but I can't think of how currently). In conventional mathematics notation, theta is your yaw and phi is your pitch, with the radius being your move speed.
Note that phi and theta are defined differently, depending on where you look.
Basically, the problem is to obtain a point m_forwardSpeed away from your camera, with the right pitch and yaw. To do this, we set the "origin" to your camera position, obtain a spherical coordinate, convert it to cartesian, and then add it to your camera position:
float radius = m_forwardSpeed;
float theta = m_rotationY;
float phi = m_rotationX;
//These equations are from the wikipedia page, linked above
float xMove = radius*sinf(phi)*cosf(theta);
float yMove = radius*sinf(phi)*sinf(theta);
float zMove = radius*cosf(phi);
m_positionX += xMove;
m_positionY += yMove;
m_positionZ += zMove;
Of course, you can condense a lot of this code, but I expanded it for clarity.
You can think about this like drawing a sphere around your camera. Each of the points on the sphere is a potential position in the next timestep, depending on the camera's rotation.
This is probably not the most efficient way to do it, but in my opinion it's certainly the easiest way to think about it. It actually looks like this is nearly exactly what you're trying to do in your code, but the operations on the angles are just a little bit off.

Ray tracing vectors

So I decided to write a ray tracer the other day, but I got stuck because I forgot all my vector math.
I've got a point behind the screen (the eye/camera, 400,300,-1000) and then a point on the screen (a plane, from 0,0,0 to 800,600,0), which I'm getting just by using the x and y values of the current pixel I'm looking for (using SFML for rendering, so it's something like 267,409,0)
Problem is, I have no idea how to cast the ray correctly. I'm using this for testing sphere intersection(C++):
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
    // operator * between 2 vec3s is a dot product
    Vec3 dist = ray.start - sphere.pos; // both vec3s
    float B = -1 * (ray.dir * dist);
    float D = B*B - dist * dist + sphere.radius * sphere.radius; // radius is float
    if(D < 0.0f)
        return false;
    float t0 = B - sqrtf(D);
    float t1 = B + sqrtf(D);
    bool ret = false;
    if((t0 > 0.1f) && (t0 < t))
    {
        t = t0;
        ret = true;
    }
    if((t1 > 0.1f) && (t1 < t))
    {
        t = t1;
        ret = true;
    }
    return ret;
}
So I get that the start of the ray would be the eye position, but what is the direction?
Or, failing that, is there a better way of doing this? I've heard of some people using the ray start as (x, y, -1000) and the direction as (0,0,1) but I don't know how that would work.
On a side note, how would you do transformations? I'm assuming that to change the camera angle you just adjust the x and y of the camera (or the screen if you need a drastic change)
The parameter "ray" in the function,
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
...
}
should already contain the direction information, and with this direction you need to check whether the ray intersects the sphere or not. (The incoming "ray" parameter is the vector between the camera point and the pixel the ray is sent through.)
Therefore the local "dist" variable seems obsolete.
One thing I can see is that when you create your rays you are not using the center of each pixel on the screen as the point for building the direction vector. You do not want to use just the (x, y) coordinates on the grid for building those vectors.
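For instance (a sketch using the question's types; x and y are the current pixel indices, and a free normalize function for Vec3 is assumed):
Vec3 eye(400, 300, -1000);            // the camera point from the question
Vec3 pixel(x + 0.5f, y + 0.5f, 0.0f); // center of the current pixel on the z=0 screen plane
Ray ray;
ray.start = eye;
ray.dir = normalize(pixel - eye);     // direction from the eye through the pixel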
I've taken a look at your sample code and the calculation is indeed incorrect. This is what you want.
http://www.csee.umbc.edu/~olano/435f02/ray-sphere.html (I took this course in college, this guy knows his stuff)
Essentially it means you have this ray, which has an origin and direction. You have a sphere with a center point and a radius. You use the ray equation, plug it into the sphere equation, and solve for t. That t is the distance between the ray origin and the intersection point on the sphere's surface. I do not think your code does this.
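Concretely, with ray origin $o$, unit direction $d$, sphere center $c$ and radius $r$, substituting the ray $o + t\,d$ into the sphere equation gives
$\|o + t\,d - c\|^2 = r^2 \;\Rightarrow\; t^2 + 2t\,d\cdot(o-c) + \|o-c\|^2 - r^2 = 0$
$t = -d\cdot(o-c) \pm \sqrt{(d\cdot(o-c))^2 - \|o-c\|^2 + r^2}$
which, with dist = o - c, is exactly the B and D in the question's code, valid as long as ray.dir is normalized.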
So I get that the start of the ray would be the eye position, but what is the direction?
You have a camera defined by vectors front, up and right (perpendicular to each other and normalized) and by "position" (the eye position).
You also have the width and height of the viewport (in pixels), and the vertical field of view (vfov) and horizontal field of view (hfov) in degrees or radians.
There are also the 2D x and y coordinates of the pixel. The X axis (2D) points to the right, the Y axis (2D) points down.
For a flat screen the ray can be calculated like this:
startVector = eyePos;
endVector = startVector
    + front
    + right * tan(hfov/2) * (((x + 0.5)/width)*2.0 - 1.0)
    + up * tan(vfov/2) * (1.0 - ((y + 0.5)/height)*2.0);
rayStart = startVector;
rayDir = normalize(endVector - startVector);
That assumes that the screen plane is flat. For extreme field of view angles (fov >= 180 degrees) you might want to make the screen plane spherical and use different formulas.
how would you do transformations
Matrices.
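To expand on that (a sketch with illustrative names, not any specific library's API): build a camera-to-world matrix from the camera's position and orientation, generate each ray in camera space, and transform it into world space before intersecting:
Mat4 camToWorld = makeCameraMatrix(eyePos, yaw, pitch);     // hypothetical helper building rotation + translation
ray.start = transformPoint(camToWorld, Vec3(0, 0, 0));      // transform with w = 1: rotation and translation
ray.dir = normalize(transformDir(camToWorld, camSpaceDir)); // transform with w = 0: rotation only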