Quaternion rotation does not work - C++

I want to make a quaternion-based camera. On the internet I found this:
https://www.gamedev.net/resources/_/technical/math-and-physics/a-simple-quaternion-based-camera-r1997
from which I took this code:
typedef struct { float w, x, y, z; } quaternion;

double length(quaternion quat)
{
    return sqrt(quat.x * quat.x + quat.y * quat.y +
                quat.z * quat.z + quat.w * quat.w);
}

quaternion normalize(quaternion quat)
{
    double L = length(quat);
    quat.x /= L;
    quat.y /= L;
    quat.z /= L;
    quat.w /= L;
    return quat;
}

quaternion conjugate(quaternion quat)
{
    quat.x = -quat.x;
    quat.y = -quat.y;
    quat.z = -quat.z;
    return quat;
}

quaternion mult(quaternion A, quaternion B)
{
    quaternion C;
    C.x = A.w*B.x + A.x*B.w + A.y*B.z - A.z*B.y;
    C.y = A.w*B.y - A.x*B.z + A.y*B.w + A.z*B.x;
    C.z = A.w*B.z + A.x*B.y - A.y*B.x + A.z*B.w;
    C.w = A.w*B.w - A.x*B.x - A.y*B.y - A.z*B.z;
    return C;
}

void RotateCamera(double Angle, double x, double y, double z)
{
    quaternion temp, quat_view, result;
    temp.x = x * sin(Angle/2);
    temp.y = y * sin(Angle/2);
    temp.z = z * sin(Angle/2);
    temp.w = cos(Angle/2);
    quat_view.x = View.x;
    quat_view.y = View.y;
    quat_view.z = View.z;
    quat_view.w = 0;
    result = mult(mult(temp, quat_view), conjugate(temp));
    View.x = result.x;
    View.y = result.y;
    View.z = result.z;
}
But I'm having problems when trying to implement this line:
gluLookAt(Position.x, Position.y, Position.z,
          View.x, View.y, View.z, Up.x, Up.y, Up.z);
because I have no idea what to use as 'Up'. I tried (0, 0, 0), but that only showed a black screen. Any help is greatly appreciated!
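A common approach (a sketch, not the article's code, and assuming the calling code keeps a persistent Up vector) is to maintain an explicit Up and rotate it with the same quaternion used for View, instead of passing (0, 0, 0) -- a zero Up makes gluLookAt's basis degenerate, which would explain the black screen. Reusing the question's own helpers:

```cpp
#include <cmath>

typedef struct { float w, x, y, z; } quaternion;

// Same multiplication and conjugate as in the question.
quaternion mult(quaternion A, quaternion B) {
    quaternion C;
    C.x = A.w*B.x + A.x*B.w + A.y*B.z - A.z*B.y;
    C.y = A.w*B.y - A.x*B.z + A.y*B.w + A.z*B.x;
    C.z = A.w*B.z + A.x*B.y - A.y*B.x + A.z*B.w;
    C.w = A.w*B.w - A.x*B.x - A.y*B.y - A.z*B.z;
    return C;
}

quaternion conjugate(quaternion q) {
    q.x = -q.x; q.y = -q.y; q.z = -q.z;
    return q;
}

// Rotate a direction vector (vx, vy, vz) by the unit quaternion rot:
// v' = rot * v * conj(rot). Apply this to both View and Up each frame.
void rotateVector(quaternion rot, float &vx, float &vy, float &vz) {
    quaternion v;
    v.w = 0; v.x = vx; v.y = vy; v.z = vz;
    quaternion r = mult(mult(rot, v), conjugate(rot));
    vx = r.x; vy = r.y; vz = r.z;
}
```

With Up rotated alongside View, it stays perpendicular to the view direction and gluLookAt gets a valid frame.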
EDIT:
Somewhere on this site I found code that converts a quaternion to a matrix. How can I use this matrix with glMultMatrixf()?
void quat_to_matrix(quaternion quat, float *matrix) {
    double qx = quat.x;
    double qy = quat.y;
    double qz = quat.z;
    double qw = quat.w;
    const double n = 1.0 / sqrt(qx*qx + qy*qy + qz*qz + qw*qw);
    qx *= n;
    qy *= n;
    qz *= n;
    qw *= n;
    // Fill a caller-provided float[16]; a pointer to a local array must
    // not be returned, and an array cannot be assigned with a brace list.
    matrix[0]  = 1.0f - 2.0f*qy*qy - 2.0f*qz*qz;
    matrix[1]  = 2.0f*qx*qy - 2.0f*qz*qw;
    matrix[2]  = 2.0f*qx*qz + 2.0f*qy*qw;
    matrix[3]  = 0.0f;
    matrix[4]  = 2.0f*qx*qy + 2.0f*qz*qw;
    matrix[5]  = 1.0f - 2.0f*qx*qx - 2.0f*qz*qz;
    matrix[6]  = 2.0f*qy*qz - 2.0f*qx*qw;
    matrix[7]  = 0.0f;
    matrix[8]  = 2.0f*qx*qz - 2.0f*qy*qw;
    matrix[9]  = 2.0f*qy*qz + 2.0f*qx*qw;
    matrix[10] = 1.0f - 2.0f*qx*qx - 2.0f*qy*qy;
    matrix[11] = 0.0f;
    matrix[12] = 0.0f;
    matrix[13] = 0.0f;
    matrix[14] = 0.0f;
    matrix[15] = 1.0f;
}
EDIT 2:
I used glMultMatrixf() and it worked. But I finally found out that the output of RotateCamera() makes my quaternion zero. Does anybody know what's wrong with this method?
void RotateCamera(double Angle, double x, double y, double z)
{
    quaternion temp, quat_view, result;
    temp.x = x * sin(Angle/2);
    temp.y = y * sin(Angle/2);
    temp.z = z * sin(Angle/2);
    temp.w = cos(Angle/2);
    quat_view.x = View.x;
    quat_view.y = View.y;
    quat_view.z = View.z;
    quat_view.w = 0;
    result = mult(mult(temp, quat_view), conjugate(temp));
    View.x = result.x;
    View.y = result.y;
    View.z = result.z;
}

It doesn't really make sense to me, but I will try to answer anyway :D ... why don't you just rotate it using glRotatef(angle, 0, 0, 1) for a rotation about the z axis? The function's signature is glRotatef(angle, x, y, z), where (x, y, z) is the axis of rotation (OpenGL normalizes it for you).
For the second question, from what I know you should decrement the angle; you can experiment with the function to see for yourself ;).
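A likely cause of the "quaternion becomes zero" symptom (an assumption, since the calling code isn't shown): if the axis (x, y, z) passed to RotateCamera is not unit length, temp is not a unit quaternion, so every call scales View and repeated calls can shrink it toward zero; Angle must also be in radians for sin/cos. A sketch that normalizes the axis before building the rotation quaternion:

```cpp
#include <cmath>

typedef struct { float w, x, y, z; } quaternion;

// Build a unit rotation quaternion from an axis-angle pair.
// The axis is normalized here so repeated rotations preserve |View|.
quaternion fromAxisAngle(double angle, double x, double y, double z) {
    double len = sqrt(x*x + y*y + z*z);
    if (len > 0.0) { x /= len; y /= len; z /= len; }
    quaternion q;
    double s = sin(angle / 2.0);  // angle must be in radians
    q.x = (float)(x * s);
    q.y = (float)(y * s);
    q.z = (float)(z * s);
    q.w = (float)cos(angle / 2.0);
    return q;
}
```

With this, RotateCamera would build temp via fromAxisAngle(Angle, x, y, z) and the rotated View keeps its original length.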

Related

Function to calculate angle to a point in unusual 2D space

I'm looking for a robust function to calculate the difference (delta) between an object and a point.
For example, if there were an object at point A with an orientation of 1.2 rad, what would be the required angle for the object to turn in order to face point B?
Furthermore, I'm working in an odd coordinate system where north (0 rad) faces towards +X; the image below shows this.
I understand the fundamentals, but I'm struggling to produce something robust.
My C++ function template looks like this:
float Robot::getDeltaHeading(float _x1, float _y1, float _x2, float _y2, float _currentHeading) {
    //TODO:
    return xxxxxxx;
}
Any help would be appreciated.
Cheers in advance.
Here's the answer.
float Robot::getDeltaHeading(float _x1, float _y1, float _x2, float _y2, float _currentHeading) {
    _currentHeading -= 90;
    double Ux = 0.0, Uy = 0.0, Vx = 0.0, Vy = 0.0, d = 0.0;
    d = sqrtf(powf(_x1 - _x2, 2) + powf(_y1 - _y2, 2));
    Ux = (_x2 - _x1) / d;
    Uy = (_y2 - _y1) / d;
    Vx = cos(_currentHeading * (3.14159f / 180.0));
    Vy = sin(_currentHeading * (3.14159f / 180.0));
    auto ans = 90 + (atan2(((Ux * Vy) - (Uy * Vx)), ((Ux * Vx) + (Uy * Vy))) * (180.0 / 3.14159f));
    while (ans > 180) ans -= 360;
    while (ans < -180) ans += 360;
    return ans;
}
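For comparison, here is a more direct formulation (a sketch under the simpler convention that headings are measured in radians from +X, counter-clockwise -- adjust the offset for the question's rotated system): take the bearing to the target with atan2 and wrap the difference into (-pi, pi]. The function name is standalone, not the Robot method:

```cpp
#include <cmath>

// Signed turn (radians) needed to face (x2, y2) from (x1, y1),
// assuming heading is measured from the +X axis, counter-clockwise.
float deltaHeading(float x1, float y1, float x2, float y2, float heading) {
    const float PI = 3.14159265f;
    float bearing = std::atan2(y2 - y1, x2 - x1);  // angle to the target
    float delta = bearing - heading;
    while (delta > PI)   delta -= 2.0f * PI;       // wrap into (-pi, pi]
    while (delta <= -PI) delta += 2.0f * PI;
    return delta;
}
```

Working in radians throughout and converting to degrees only at the edges avoids the repeated 90-degree shifts in the answer above.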

Determine affine transformation that transform one plane into a parallel plane to another

How can I determine the CGAL affine transformation (Aff_transformation_3) that transforms one plane (plane1) into a plane parallel to another (plane2)?
Suppose I have two plane objects:
Plane_3 pl1;
Plane_3 pl2;
and they are not parallel; how do I determine this kind of affine transformation?
Aff_transformation_3 t3 = ??? (pl1, pl2);
I consulted this question and your answer: CGAL: Transformation Matrix for Rotation given two lines/vectors/directions, but I don't know how it can help me. I have two planes, but in three dimensions.
Thanks.
I don't know how a 2D affine transformation (Aff_transformation_2) could help me apply a 3D affine transformation (Aff_transformation_3).
However, I found the solution to my question. This is my bit of code, which I hope helps someone.
typedef CGAL::Cartesian<double> KC;
typedef KC::Line_3 Line3;
typedef KC::Vector_3 Vector3;
typedef KC::Plane_3 Plane3;
typedef CGAL::Aff_transformation_3<KC> Transform3;
// forwards
struct axis_angle;
typedef boost::shared_ptr<axis_angle> RAxisAngle;
struct axis_angle
{
    axis_angle()
    {
        angle = 0;
        axis = Vector3(0.0, 0.0, 0.0);
    }
    double angle;
    Vector3 axis;
};

Vector3 normalize(const Vector3 &v)
{
    double len = ::sqrt(v.squared_length());
    if (len == 0.0)
        return v;
    return v / len;
}
// Return the rotation angle and axis between two planes that are not parallel
RAxisAngle axis_angle_from_planes(const Plane3 &pln1, const Plane3 &pln2)
{
    RAxisAngle result = RAxisAngle(new axis_angle());
    Vector3 norm1 = pln1.orthogonal_vector();
    Vector3 norm2 = pln2.orthogonal_vector();
    double dot_r = norm1 * norm2;
    double len_r = ::sqrt(norm1.squared_length() * norm2.squared_length());
    if (len_r)
        result->angle = ::acos(dot_r / len_r);
    else
        result->angle = 0.0;
    Line3 l1;
    CGAL::Object obj_cgal = CGAL::intersection(pln1, pln2);
    if (CGAL::assign(l1, obj_cgal))
    {
        result->axis = normalize(l1.to_vector());
    }
    else
    {
        // when the planes are parallel, fall back to a basic axis
        result->axis = Vector3(1.0, 0.0, 0.0);
    }
    return result;
}
// Return a CGAL affine transformation built from a 3x3 rotation matrix.
// The transformation rotates an object about the given axis by the given angle.
// http://en.wikipedia.org/wiki/Transformation_matrix
// http://en.wikipedia.org/wiki/Rotation_matrix
// http://www.euclideanspace.com/maths/geometry/rotations/conversions/angleToMatrix/index.htm
Transform3 axis_angle_to_matrix(const RAxisAngle &aa)
{
    double tmp1, tmp2;
    double c = ::cos(aa->angle);
    double s = ::sin(aa->angle);
    double t = 1.0 - c;
    double m00 = c + aa->axis.x() * aa->axis.x() * t;
    double m11 = c + aa->axis.y() * aa->axis.y() * t;
    double m22 = c + aa->axis.z() * aa->axis.z() * t;
    tmp1 = aa->axis.x() * aa->axis.y() * t;
    tmp2 = aa->axis.z() * s;
    double m10 = tmp1 + tmp2;
    double m01 = tmp1 - tmp2;
    tmp1 = aa->axis.x() * aa->axis.z() * t;
    tmp2 = aa->axis.y() * s;
    double m20 = tmp1 - tmp2;
    double m02 = tmp1 + tmp2;
    tmp1 = aa->axis.y() * aa->axis.z() * t;
    tmp2 = aa->axis.x() * s;
    double m21 = tmp1 + tmp2;
    double m12 = tmp1 - tmp2;
    return Transform3(m00, m01, m02, m10, m11, m12, m20, m21, m22);
}
Then I can use them like this:
RAxisAngle aa = axis_angle_from_planes(plane1, plane2);
Transform3 t3 = axis_angle_to_matrix(aa);
Plane3 new_transform_plane = plane1.transform(t3);
or transform a point of the plane:
Point3 new_transform_point = point_of_plane1.transform(t3);
Thanks for giving me the possibility to post my little solution.
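The axis-angle-to-matrix step above is the standard Rodrigues construction; here is a dependency-free sketch of the same math (hypothetical names, no CGAL) that can be checked numerically against known rotations:

```cpp
#include <cmath>

// Build a 3x3 rotation matrix (row-major) from a unit axis and an angle,
// following the same Rodrigues formula as axis_angle_to_matrix above.
void rodrigues(const double axis[3], double angle, double m[9]) {
    double c = std::cos(angle), s = std::sin(angle), t = 1.0 - c;
    double x = axis[0], y = axis[1], z = axis[2];
    m[0] = c + x*x*t;    m[1] = x*y*t - z*s;  m[2] = x*z*t + y*s;
    m[3] = x*y*t + z*s;  m[4] = c + y*y*t;    m[5] = y*z*t - x*s;
    m[6] = x*z*t - y*s;  m[7] = y*z*t + x*s;  m[8] = c + z*z*t;
}
```

For example, the axis (0, 0, 1) with angle pi/2 should map (1, 0, 0) to (0, 1, 0), which makes a handy sanity check before wiring the matrix into Transform3.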

What's wrong with my BRDF program in CG

Now I'm working on a basic CG program about BRDFs. After rendering the image, it seems that all the points facing the light are too bright, and I don't know the reason. Here's my code, where I try to invoke the lookup_brdf_val function.
Vec3f hitNormal = ray.hit->getNormal(ray);
if(hitNormal * ray.dir > 0)
    hitNormal = -hitNormal;
result = Vec3f(0, 0, 0);
Ray lightRay;
lightRay.org = ray.org + ray.dir * ray.t;
Vec3f intensity;
for(unsigned int l = 0; l < scene->lights.size(); l++)
{
    scene->lights[l]->illuminate(lightRay, intensity);
    if(!scene->isOccluded(lightRay))
    {
        double theta1, theta2;
        // Calculate theta1 and theta2.
        theta1 = acosf(-(ray.dir * hitNormal));
        theta2 = acosf(lightRay.dir * hitNormal);
        // Calculate fi1 and fi2.
        double fi1 = 0;
        Vec3f O = ray.org + ray.dir * ray.t;
        Vec3f A = O - ray.dir;
        Vec3f C = (ray.dir * hitNormal) * hitNormal + A;
        Vec3f B = lightRay.dir + O;
        Vec3f D = ((-lightRay.dir) * hitNormal) * hitNormal + B;
        Vec3f OC = C - O;
        Vec3f OD = D - O;
        double fi2 = acosf((OD * OC) / (length(OD) * length(OC)));
        double x = 0;
        double y = 0;
        double z = 0;
        double &r = x;
        double &g = y;
        double &b = z;
        read->lookup_brdf_val(theta1, fi1, theta2, fi2, r, g, b);
        result += Vec3f(r * scale.x * intensity.x, g * scale.y * intensity.y, b * scale.z * intensity.z);
    }
}
I suggest starting from a simpler BRDF to make sure that your main loop is not broken -- try something simple like Lambert: max(0, dot(lightRay, hitNormal)), and be sure those are normalized vectors. Divide by scene->lights.size() if it's simply too bright because you have too many lights.
If the image looks correct with a simple BRDF, then try variations of your other components. You don't give the code for lookup_brdf_val() at all, so beyond that one can only speculate.
It's just like any other debugging, though: reduce the number of variables until you find the one that's awry.
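The debugging suggestion above can be sketched like this (names are hypothetical, not from the question's renderer): a Lambert-only term to substitute for lookup_brdf_val while checking the main loop.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
    return v;
}

// Lambert term: the cosine between the (normalized) direction toward the
// light and the surface normal, clamped to [0, 1].
float lambert(Vec3 lightDir, Vec3 normal) {
    float d = dot(normalize(lightDir), normalize(normal));
    return d > 0.0f ? d : 0.0f;
}
```

If the image still blows out with this term, the problem is in the loop or the light accumulation, not in the BRDF lookup.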

Projected Shadow with shadow matrix, simple test fails

I wrote a little program to test how projected shadows work.
I wanted to check in particular the case where the point to project (it could be the vertex of a triangle) is not situated between the light source and the plane but behind the light itself, that is, the light is between the point and the plane.
The problem is that my little program is not even working in the case where the point is between the light and the plane. I checked the calculations tens of times, so I guess the error must be a logic one, but I can't find it.
Here is the code:
public class test {
    int x = 0;
    int y = 1;
    int z = 2;
    int w = 3;
    float floor[][] = {
        {-100.0f, -100.0f, 0.0f},
        {100.0f, -100.0f, 0.0f},
        {100.0f, 100.0f, 0.0f},
        {-100.0f, 100.0f, 0.0f}};
    private float shadow_floor[] = new float[16];
    float light_position[] = {0.0f, 0.0f, 10.0f, 1.0f};

    public test() {
        //Find floor plane based on three known points
        float plane_floor[] = calculatePlane(floor[1], floor[2], floor[3]);
        //Store shadow matrix for floor
        shadow_floor = shadowMatrix(plane_floor, light_position);
        float[] point = new float[]{1.0f, 0.0f, 5.0f, 1.0f};
        float[] projectedPoint = pointFmatrixF(point, shadow_floor);
        System.out.println("point: (" + point[x] + ", " + point[y] + ", " + point[z] + ", "
                + point[w] + ")");
        System.out.println("projectedPoint: (" + projectedPoint[x] + ", " + projectedPoint[y]
                + ", " + projectedPoint[z] + ", " + projectedPoint[w] + ")");
    }

    public static void main(String args[]) {
        test test = new test();
    }
    // make shadow matrix
    public float[] shadowMatrix(float plane[], float light_pos[]) {
        float shadow_mat[] = new float[16];
        float dot;
        dot = plane[x] * light_pos[x] + plane[y] * light_pos[y]
                + plane[z] * light_pos[z] + plane[w] * light_pos[w];
        shadow_mat[0] = dot - light_pos[x] * plane[x];
        shadow_mat[4] = -light_pos[x] * plane[y];
        shadow_mat[8] = -light_pos[x] * plane[z];
        shadow_mat[12] = -light_pos[x] * plane[w];
        shadow_mat[1] = -light_pos[y] * plane[x];
        shadow_mat[5] = dot - light_pos[y] * plane[y];
        shadow_mat[9] = -light_pos[y] * plane[z];
        shadow_mat[13] = -light_pos[y] * plane[w];
        shadow_mat[2] = -light_pos[z] * plane[x];
        shadow_mat[6] = -light_pos[z] * plane[y];
        shadow_mat[10] = dot - light_pos[z] * plane[z];
        shadow_mat[14] = -light_pos[z] * plane[w];
        shadow_mat[3] = -light_pos[w] * plane[x];
        shadow_mat[7] = -light_pos[w] * plane[y];
        shadow_mat[11] = -light_pos[w] * plane[z];
        shadow_mat[15] = dot - light_pos[w] * plane[w];
        return shadow_mat;
    }
    public float[] calculatePlane(float p1[], float p2[], float p3[]) {
        //Array for the plane equation
        float plane[] = new float[4];
        //Given two vectors (three points) in the plane, the normal can be computed
        //We want absolute values
        plane[x] = Math.abs(((p2[y] - p1[y]) * (p3[z] - p1[z])) - ((p2[z] - p1[z])
                * (p3[y] - p1[y])));
        plane[y] = Math.abs(((p2[z] - p1[z]) * (p3[x] - p1[x])) - ((p2[x] - p1[x])
                * (p3[z] - p1[z])));
        plane[z] = Math.abs(((p2[x] - p1[x]) * (p3[y] - p1[y])) - ((p2[y] - p1[y])
                * (p3[x] - p1[x])));
        plane[w] = -(plane[x] * p1[x] + plane[y] * p1[y] + plane[z] * p1[z]);
        return plane;
    }
    public float[] pointFmatrixF(float[] point, float[] matrix) {
        int x = 0;
        int y = 1;
        int z = 2;
        float[] transformedPoint = new float[4];
        transformedPoint[x] =
                matrix[0] * point[x]
                + matrix[4] * point[y]
                + matrix[8] * point[z]
                + matrix[12];
        transformedPoint[y] =
                matrix[1] * point[x]
                + matrix[5] * point[y]
                + matrix[9] * point[z]
                + matrix[13];
        transformedPoint[z] =
                matrix[2] * point[x]
                + matrix[6] * point[y]
                + matrix[10] * point[z]
                + matrix[14];
        transformedPoint[w] = 1;
        return transformedPoint;
    }
}
If the plane is the xy plane, the light is at (0, 0, 10) and the point at (1, 0, 5), then the projected point on the plane should be (2, 0, 0), but the program returns (400000.0, 0.0, 0.0, 1.0).
Solved: I was incorrectly assuming that the last coordinate of the projected point was 1, but it wasn't.
https://math.stackexchange.com/questions/320527/projecting-a-point-on-a-plane-through-a-matrix

Separating Axis Theorem is driving me nuts!

I am working on an implementation of the Separating Axis Theorem for use in 2D games. It kind of works, but just kind of.
I use it like this:
bool penetration = sat(c1, c2) && sat(c2, c1);
Where c1 and c2 are of type Convex, defined as:
class Convex
{
public:
    float tx, ty;
public:
    std::vector<Point> p;

    void translate(float x, float y) {
        tx = x;
        ty = y;
    }
};
(Point is a structure of float x, float y.)
The points are entered in clockwise order.
My current code (ignore the Qt debug calls):
bool sat(Convex c1, Convex c2, QPainter *debug)
{
    //Debug
    QColor col[] = {QColor(255, 0, 0), QColor(0, 255, 0), QColor(0, 0, 255), QColor(0, 0, 0)};
    bool ret = true;
    int c1_faces = c1.p.size();
    int c2_faces = c2.p.size();
    //For every face in c1
    for(int i = 0; i < c1_faces; i++)
    {
        //Grab a face (face x, face y)
        float fx = c1.p[i].x - c1.p[(i + 1) % c1_faces].x;
        float fy = c1.p[i].y - c1.p[(i + 1) % c1_faces].y;
        //Create a perpendicular axis to project on (axis x, axis y)
        float ax = -fy, ay = fx;
        //Normalize the axis
        float len_v = sqrt(ax * ax + ay * ay);
        ax /= len_v;
        ay /= len_v;
        //Debug graphics (ignore)
        debug->setPen(col[i]);
        //Draw the face
        debug->drawLine(QLineF(c1.tx + c1.p[i].x, c1.ty + c1.p[i].y, c1.p[(i + 1) % c1_faces].x + c1.tx, c1.p[(i + 1) % c1_faces].y + c1.ty));
        //Draw the axis
        debug->save();
        debug->translate(c1.p[i].x, c1.p[i].y);
        debug->drawLine(QLineF(c1.tx, c1.ty, ax * 100 + c1.tx, ay * 100 + c1.ty));
        debug->drawEllipse(QPointF(ax * 100 + c1.tx, ay * 100 + c1.ty), 10, 10);
        debug->restore();
        //Carve out the min and max values
        float c1_min = FLT_MAX, c1_max = FLT_MIN;
        float c2_min = FLT_MAX, c2_max = FLT_MIN;
        //Project every point in c1 on the axis and store min and max
        for(int j = 0; j < c1_faces; j++)
        {
            float c1_proj = (ax * (c1.p[j].x + c1.tx) + ay * (c1.p[j].y + c1.ty)) / (ax * ax + ay * ay);
            c1_min = min(c1_proj, c1_min);
            c1_max = max(c1_proj, c1_max);
        }
        //Project every point in c2 on the axis and store min and max
        for(int j = 0; j < c2_faces; j++)
        {
            float c2_proj = (ax * (c2.p[j].x + c2.tx) + ay * (c2.p[j].y + c2.ty)) / (ax * ax + ay * ay);
            c2_min = min(c2_proj, c2_min);
            c2_max = max(c2_proj, c2_max);
        }
        //Return if the projections do not overlap
        if(!(c1_max >= c2_min && c1_min <= c2_max))
            ret = false; //return false;
    }
    return ret; //return true;
}
What am I doing wrong? It registers collisions perfectly but is over-sensitive on one edge (in my test using a triangle and a diamond):
//Triangle
push_back(Point(0, -150));
push_back(Point(0, 50));
push_back(Point(-100, 100));
//Diamond
push_back(Point(0, -100));
push_back(Point(100, 0));
push_back(Point(0, 100));
push_back(Point(-100, 0));
I am getting mega-ADHD over this, please help me out :)
http://u8999827.fsdata.se/sat.png
OK, I was wrong the first time. Looking at your picture of a failure case, it is obvious a separating axis exists, and it is one of the normals (the normal to the long edge of the triangle). The projection is correct; however, your bounds are not.
I think the error is here:
float c1_min = FLT_MAX, c1_max = FLT_MIN;
float c2_min = FLT_MAX, c2_max = FLT_MIN;
FLT_MIN is the smallest normal positive number representable by a float, not the most negative number. In fact you need:
float c1_min = FLT_MAX, c1_max = -FLT_MAX;
float c2_min = FLT_MAX, c2_max = -FLT_MAX;
or, even better for C++:
float c1_min = std::numeric_limits<float>::max(), c1_max = -c1_min;
float c2_min = std::numeric_limits<float>::max(), c2_max = -c2_min;
because you're probably seeing negative projections onto the axis.
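The corrected initialization can be wrapped in a small helper (a sketch, not the asker's code) that computes the projection interval of one polygon on an axis; initializing the maximum with the lowest finite float keeps all-negative projections from breaking the bound:

```cpp
#include <limits>
#include <utility>
#include <vector>

struct Point { float x, y; };

// Project a polygon's vertices onto the axis (ax, ay) and return [min, max].
// Note the max starts at numeric_limits<float>::lowest(), the most negative
// finite float -- not FLT_MIN, which is the smallest positive normal value.
std::pair<float, float> projectOntoAxis(const std::vector<Point> &pts,
                                        float ax, float ay) {
    float lo = std::numeric_limits<float>::max();
    float hi = std::numeric_limits<float>::lowest();
    for (const Point &p : pts) {
        float proj = ax * p.x + ay * p.y;
        if (proj < lo) lo = proj;
        if (proj > hi) hi = proj;
    }
    return {lo, hi};
}
```

Two shapes then overlap on the axis exactly when their intervals overlap, which is the per-axis test the sat() loop performs.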