Finding coordinates of parallel lines inside a boundary - C++

I am given a line PQ and a boundary. I have to find two lines parallel to the given line, and those lines should intersect the boundary. I also know the distance from the given line to each parallel line. I need to find P'Q' and P"Q".
Can anyone suggest a simple solution?
float vx = x2 - x1;                    // direction vector of PQ
float vy = y2 - y1;
float mag = sqrt(vx * vx + vy * vy);   // length of PQ
float t = (mag / 2.0) / mag;           // always 0.5, i.e. the midpoint
float px = (1 - t) * x1 + t * x2;
float py = (1 - t) * y1 + t * y2;
The code above just finds the centre point of PQ. My plan was to draw a perpendicular line through (px, py) with the known distance, then draw lines perpendicular to that new line (those lines will be parallel to PQ) through its end points. But I could not get it to work. Can anyone who knows the maths help me or suggest a way?

Finally I found a solution.
The steps are:
First, I get the centre point of PQ.
// Returns the point at distance `len` from (x1, y1) along the segment towards (x2, y2).
// If len is 0, the midpoint of the segment is returned.
POINT find_a_point_in_distance(float x1, float y1, float x2, float y2, float len = 0) {
    float vx = x2 - x1;
    float vy = y2 - y1;
    float mag = sqrt(vx * vx + vy * vy);
    float t = len == 0 ? ((mag / 2.0) / mag) : (len / mag);
    float px = (1 - t) * x1 + t * x2;
    float py = (1 - t) * y1 + t * y2;
    POINT res = { px, py };
    return res;
}
Here (px, py) is the centre point of PQ.
Then I find the perpendicular line through (px, py).
As mentioned in the question, I know the distance between PQ and P'Q' and between PQ and P"Q", so I get two points on that perpendicular line using that distance. Finally, I know the angle of the line PQ, and P'Q' and P"Q" must have the same angle, so with these details I can construct P'Q' and P"Q" at whatever length I want. In the code below I build P'Q' and P"Q" with the length of the diagonal of the rectangular box.
// Midpoint of PQ, and a point at distance halflen from it towards Q
POINT res = find_a_point_in_distance(x1, y1, x2, y2);
POINT res2 = find_a_point_in_distance(res.x, res.y, x2, y2, halflen);

// Recover the angle of PQ in degrees from that second point
float cosA = acos((res2.x - res.x) / halflen) * 180 / PI;
float sinA = asin((res2.y - res.y) / halflen) * 180 / PI;

// Offset the midpoint by halflen along the perpendicular (angle +/- 90 degrees), one point on each side of PQ
float cosAngle = cos((cosA + 90.0) * PI / 180.0);
float sinAngle = sin((sinA + 90.0) * PI / 180.0);
float cx1 = res.x + halflen * cosAngle;
float cy1 = res.y + halflen * sinAngle;
float cosAngle2 = cos((cosA - 90.0) * PI / 180.0);
float sinAngle2 = sin((sinA - 90.0) * PI / 180.0);
float cx2 = res.x + halflen * cosAngle2;
float cy2 = res.y + halflen * sinAngle2;

// Extend each offset point by half the rectangle's diagonal in both directions along PQ's angle
float diagonal = sqrt(width * width + height * height);
float halfdiagonal = diagonal / 2.0;
float cosAngleT = cos(cosA * PI / 180.0);
float sinAngleT = sin(sinA * PI / 180.0);
float cosAngleTD = cos((cosA + 180) * PI / 180.0);
float sinAngleTD = sin((sinA + 180) * PI / 180.0);
float cx10 = cx1 + halfdiagonal * cosAngleT;
float cy10 = cy1 + halfdiagonal * sinAngleT;
float cx11 = cx1 + halfdiagonal * cosAngleTD;
float cy11 = cy1 + halfdiagonal * sinAngleTD;
float cx20 = cx2 + halfdiagonal * cosAngleT;
float cy20 = cy2 + halfdiagonal * sinAngleT;
float cx21 = cx2 + halfdiagonal * cosAngleTD;
float cy21 = cy2 + halfdiagonal * sinAngleTD;
Here (cx10, cy10) and (cx11, cy11) define the line P'Q', and (cx20, cy20) and (cx21, cy21) define the line P"Q".
Then finally I find the intersection points of P'Q' and P"Q" with all sides of the rectangle.
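For reference, a minimal sketch of the same construction done directly with the unit normal of PQ, with no trigonometry; the Point2 type, the distance parameter d and the output arrays are assumptions and not part of the code above, and clipping against the boundary would still be done afterwards as described:
#include <cmath>

struct Point2 { float x, y; };   // hypothetical simple 2D point type

// Fills out1 and out2 with two lines parallel to PQ at distance d on either side,
// each given by two points (P and Q shifted along the unit normal of PQ).
void parallel_lines(Point2 p, Point2 q, float d, Point2 out1[2], Point2 out2[2]) {
    float vx = q.x - p.x, vy = q.y - p.y;
    float mag = std::sqrt(vx * vx + vy * vy);
    float nx = -vy / mag, ny = vx / mag;       // unit normal of PQ

    out1[0] = { p.x + d * nx, p.y + d * ny };  // P'Q' on one side
    out1[1] = { q.x + d * nx, q.y + d * ny };
    out2[0] = { p.x - d * nx, p.y - d * ny };  // P"Q" on the other side
    out2[1] = { q.x - d * nx, q.y - d * ny };
}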

Related

How to correctly convert from Euler angles and multiply quaternions?

We are trying to convert from a local space incremental rotation in Euler (X,Y,Z) = Pitch Yaw Roll to an absolute world space rotation Quaternion.
We are doing this by converting each incremental rotation to a quaternion and accumulating (through multiplication) Quaternion rotations to give a world space quaternion.
However our result rotations show the object rotating around world space axes rather than local object axes.
I am following the pseudocode loop and C++ below, and have built the same in Unity3D (which works correctly, since the Quaternion operations are already provided).
Does anyone have any pointers as to what is going wrong here?
Quaternion qacc;
Quaternion q1;
loop
{
    qacc = qacc * q1.degreeToQuaternion(xRot, yRot, zRot);
}
...
void Quaternion::degreeToQuaternion(double yaw, double pitch, double roll) // yaw (Z), pitch (Y), roll (X)
{
    yaw = yaw * M_PI / 180.;
    pitch = pitch * M_PI / 180.;
    roll = roll * M_PI / 180.;
    radianToQuaternion(yaw, pitch, roll);
}

void Quaternion::radianToQuaternion(double yaw, double pitch, double roll) // yaw (Z), pitch (Y), roll (X)
{
    double cy = cos(yaw * 0.5);
    double sy = sin(yaw * 0.5);
    double cp = cos(pitch * 0.5);
    double sp = sin(pitch * 0.5);
    double cr = cos(roll * 0.5);
    double sr = sin(roll * 0.5);
    this->w = cy * cp * cr + sy * sp * sr;
    this->x = cy * cp * sr - sy * sp * cr;
    this->y = sy * cp * sr + cy * sp * cr;
    this->z = sy * cp * cr - cy * sp * sr;
}
Quaternion operator * (Quaternion q0, Quaternion q1)
{
    Quaternion q2;
    double mag;
    q2.w = q0.w * q1.w - q0.x * q1.x - q0.y * q1.y - q0.z * q1.z;
    q2.x = q0.w * q1.x + q0.x * q1.w + q0.y * q1.z - q0.z * q1.y;
    q2.y = q0.w * q1.y - q0.x * q1.z + q0.y * q1.w + q0.z * q1.x;
    q2.z = q0.w * q1.z + q0.x * q1.y - q0.y * q1.x + q0.z * q1.w;
    mag = sqrt(q2.w * q2.w + q2.x * q2.x + q2.y * q2.y + q2.z * q2.z);
    q2.w /= mag;
    q2.x /= mag;
    q2.y /= mag;
    q2.z /= mag;
    return q2;
}
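A minimal sketch of the two possible accumulation orders, assuming the Quaternion type above and an identity-initialised qacc; with the Hamilton product implemented by the operator above, post-multiplying applies the increment about the object's local axes, while pre-multiplying applies it about the fixed world axes:
Quaternion qacc;                              // assumed initialised to identity (w = 1, x = y = z = 0)
Quaternion qinc;
qinc.degreeToQuaternion(zRot, yRot, xRot);    // the member above expects (yaw Z, pitch Y, roll X)

Quaternion worldSpace = qinc * qacc;          // increment interpreted about fixed world axes
Quaternion localSpace = qacc * qinc;          // increment interpreted about the object's own (local) axes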

Calculate position and rotation of camera with mouse events

My plan:
1. Calculate mouse direction [x, y] [success]
In my Mouse Move event:
int directionX = lastPosition.x - position.x;
int directionY = lastPosition.y - position.y;
2. Calculate angles [theta, phi] [success]
float theta = fmod(lastTheta + sensibility * directionY, M_PI);
float phi = fmod(lastPhi + sensibility * directionX * -1, M_PI * 2);
Edit {
bug fix:
float theta = lastTheta + sensibility * directionY * -1;
if (theta < M_PI / -2)theta = M_PI / -2;
else if (theta > M_PI / 2)theta = M_PI / 2;
float phi = fmod(lastPhi + sensibility * directionX * -1, M_PI * 2);
}
Now, given theta, phi, the centerpoint and the radius, I want to calculate the position and the rotation [so that the camera looks at the centerpoint].
3. Calculate position coordinates [X,Y,Z] [failed]
float newX = radius * sin(phi) * cos(theta);
float newY = radius * sin(phi) * sin(theta);
float newZ = radius * cos(phi);
Solution [by meowgoesthedog]:
float newX = radius * cos(theta) * cos(phi);
float newY = radius * sin(theta);
float newZ = radius * cos(theta) * sin(phi);
4. Calculate rotation [failed]
float pitch = ?;
float yaw = ?;
Solution [by meowgoesthedog]:
float pitch = -theta;
float yaw = -phi;
Thanks for your solutions!
Your attempt was almost (kinda) correct:
In OpenGL the "vertical" direction is conventionally taken to be Y, whereas your formulas assume it is Z
phi and theta are in the wrong order
Very simple conversion: yaw = -phi, pitch = -theta (from the perspective of the camera)
Fixed formulas:
float position_X = radius * cos(theta) * cos(phi);
float position_Y = radius * sin(theta);
float position_Z = radius * cos(theta) * sin(phi);
(There may also be some sign issues with the mouse deltas but they should be easy to fix.)
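As a quick sanity check, here is a minimal sketch of driving the view with the fixed formulas; the centerX/centerY/centerZ names and the use of GLU's gluLookAt are assumptions, and any equivalent look-at helper works the same way:
// Sketch: orbiting camera that always looks at the centerpoint.
// Assumes theta is clamped to (-pi/2, pi/2) as in the edited code above.
float eyeX = centerX + radius * cos(theta) * cos(phi);
float eyeY = centerY + radius * sin(theta);
float eyeZ = centerZ + radius * cos(theta) * sin(phi);

// gluLookAt comes from the GLU library (<GL/glu.h>)
gluLookAt(eyeX, eyeY, eyeZ,            // camera position on the sphere
          centerX, centerY, centerZ,   // target: the centerpoint
          0.0, 1.0, 0.0);              // Y is "up" in the usual OpenGL convention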

Algorithm to draw a sphere using quadrilaterals

I am attempting to draw a sphere from scratch using OpenGL. The function must be defined as void drawSphere(float radius, int nSegments, int nSlices); the sphere must be centred at the origin (0, 0, 0) and must be created using GL_QUADS.
Firstly, are the "slices" the sort of tapered cylinder shapes that are stacked on top of each other to create the sphere, and the "segments" are the quads that are generated in a circle to generate the wall/side of each of these tapered cylinder slices?
Secondly, I cannot seem to find any algorithms or examples of how to make the calculations to generate this sphere using quadrilaterals; most examples seem to be generated from triangles instead.
EDIT
Here is what I have just tried, which is definitely in the right direction, but my coordinate calculations are off somewhere:
void drawSphere(float radius, int nSegments, int nSlices) {
    /*
     * TODO
     * Draw sphere centered at the origin using GL_QUADS
     * Compute and set normal vectors for each vertex to ensure proper shading
     * Set texture coordinates
     */
    for (float slice = 0.0; slice < nSlices; slice += 1.0) {
        float lat0 = M_PI * (((slice - 1) / nSlices) - 0.5);
        float z0 = sin(lat0);
        float zr0 = cos(lat0);
        float lat1 = M_PI * ((slice / nSlices) - 0.5);
        float z1 = sin(lat1);
        float zr1 = cos(lat1);
        glBegin(GL_QUADS);
        for (float segment = 0.0; segment < nSegments; segment += 1.0) {
            float long0 = 2 * M_PI * ((segment - 1) / nSegments);
            float x0 = cos(long0);
            float y0 = sin(long0);
            float long1 = 2 * M_PI * (segment / nSegments);
            float x1 = cos(long1);
            float y1 = sin(long1);
            glVertex3f(x0 * zr0, y0 * zr0, z0);
            glVertex3f(x1 * zr1, y1 * zr1, z0);
            glVertex3f(x0 * zr0, y0 * zr0, z1);
            glVertex3f(x1 * zr1, y1 * zr1, z1);
        }
        glEnd();
    }
}
I'm not seeing radius being used; probably a simple omission. Let's assume that for the rest of your computation the radius is 1. You should generate your 4 vertices by using the 0/1 indices consistently on both (x, y) and (z, zr), but not mix indices within those pairs.
So x1 * zr1, y1 * zr1, z0 is not right because you're mixing zr1 and z0; you can see that the norm of this vector is no longer 1. Your 4 vertices should be:
x0 * zr0, y0 * zr0, z0
x1 * zr0, y1 * zr0, z0
x0 * zr1, y0 * zr1, z1
x1 * zr1, y1 * zr1, z1
I'm not too sure about the order since I don't use Quads but triangles.
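Putting the answer's points together (apply radius, and keep the 0/1 indices paired per ring), a corrected sketch might look like the following; the quad winding order, reusing the unit-sphere position as the normal, and leaving out texture coordinates are assumptions beyond what the answer states:
#include <cmath>
#include <GL/gl.h>

void drawSphere(float radius, int nSegments, int nSlices) {
    for (int slice = 0; slice < nSlices; ++slice) {
        // Two adjacent latitude rings, both in [-pi/2, pi/2]
        float lat0 = M_PI * ((float)slice / nSlices - 0.5f);
        float lat1 = M_PI * ((float)(slice + 1) / nSlices - 0.5f);
        float z0 = sinf(lat0), zr0 = cosf(lat0);
        float z1 = sinf(lat1), zr1 = cosf(lat1);

        glBegin(GL_QUADS);
        for (int segment = 0; segment < nSegments; ++segment) {
            // Two adjacent longitudes in [0, 2*pi]
            float lng0 = 2.0f * M_PI * (float)segment / nSegments;
            float lng1 = 2.0f * M_PI * (float)(segment + 1) / nSegments;
            float x0 = cosf(lng0), y0 = sinf(lng0);
            float x1 = cosf(lng1), y1 = sinf(lng1);

            // Unit-sphere positions double as normals; scale by radius for the vertex itself.
            float p[4][3] = {
                { x0 * zr0, y0 * zr0, z0 },
                { x1 * zr0, y1 * zr0, z0 },
                { x1 * zr1, y1 * zr1, z1 },
                { x0 * zr1, y0 * zr1, z1 },
            };
            for (int k = 0; k < 4; ++k) {
                glNormal3f(p[k][0], p[k][1], p[k][2]);
                glVertex3f(radius * p[k][0], radius * p[k][1], radius * p[k][2]);
            }
        }
        glEnd();
    }
}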

Visualize MPU6050 using openGL

After getting accelerometer/gyroscope data from the MPU6050, I use this function:
void AHRSupdate(float gx, float gy, float gz, float ax, float ay, float az, float mx, float my, float mz, double * quad) {
    // q0..q3, exInt/eyInt/ezInt, Kp, Ki and halfT are presumably globals defined elsewhere
    float norm;
    float hx, hy, hz, bx, bz;
    float vx, vy, vz, wx, wy, wz;
    float ex, ey, ez;
    double* tempQuat = quad;
    // auxiliary variables to reduce number of repeated operations
    float q0q0 = q0*q0;
    float q0q1 = q0*q1;
    float q0q2 = q0*q2;
    float q0q3 = q0*q3;
    float q1q1 = q1*q1;
    float q1q2 = q1*q2;
    float q1q3 = q1*q3;
    float q2q2 = q2*q2;
    float q2q3 = q2*q3;
    float q3q3 = q3*q3;
    // normalise the measurements
    norm = sqrt(ax*ax + ay*ay + az*az);
    ax = ax / norm;
    ay = ay / norm;
    az = az / norm;
    norm = sqrt(mx*mx + my*my + mz*mz);
    mx = mx / norm;
    my = my / norm;
    mz = mz / norm;
    // compute reference direction of flux
    hx = 2*mx*(0.5 - q2q2 - q3q3) + 2*my*(q1q2 - q0q3) + 2*mz*(q1q3 + q0q2);
    hy = 2*mx*(q1q2 + q0q3) + 2*my*(0.5 - q1q1 - q3q3) + 2*mz*(q2q3 - q0q1);
    hz = 2*mx*(q1q3 - q0q2) + 2*my*(q2q3 + q0q1) + 2*mz*(0.5 - q1q1 - q2q2);
    bx = sqrt((hx*hx) + (hy*hy));
    bz = hz;
    // estimated direction of gravity and flux (v and w)
    vx = 2*(q1q3 - q0q2);
    vy = 2*(q0q1 + q2q3);
    vz = q0q0 - q1q1 - q2q2 + q3q3;
    wx = 2*bx*(0.5 - q2q2 - q3q3) + 2*bz*(q1q3 - q0q2);
    wy = 2*bx*(q1q2 - q0q3) + 2*bz*(q0q1 + q2q3);
    wz = 2*bx*(q0q2 + q1q3) + 2*bz*(0.5 - q1q1 - q2q2);
    // error is sum of cross product between reference direction of fields and direction measured by sensors
    ex = (ay*vz - az*vy) + (my*wz - mz*wy);
    ey = (az*vx - ax*vz) + (mz*wx - mx*wz);
    ez = (ax*vy - ay*vx) + (mx*wy - my*wx);
    // integral error scaled by integral gain
    exInt = exInt + ex*Ki;
    eyInt = eyInt + ey*Ki;
    ezInt = ezInt + ez*Ki;
    // adjusted gyroscope measurements
    gx = gx + Kp*ex + exInt;
    gy = gy + Kp*ey + eyInt;
    gz = gz + Kp*ez + ezInt;
    // integrate quaternion rate and normalise
    q0 = q0 + (-q1*gx - q2*gy - q3*gz)*halfT;
    q1 = q1 + (q0*gx + q2*gz - q3*gy)*halfT;
    q2 = q2 + (q0*gy - q1*gz + q3*gx)*halfT;
    q3 = q3 + (q0*gz + q1*gy - q2*gx)*halfT;
    // normalise quaternion
    norm = sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3);
    q0 = q0 / norm;
    q1 = q1 / norm;
    q2 = q2 / norm;
    q3 = q3 / norm;
    *tempQuat++ = q0;
    *tempQuat++ = q1;
    *tempQuat++ = q2;
    *tempQuat++ = q3;
}
to get the quaternion values q0, q1, q2, q3. Then I use this function:
void MPU6050_getYawPitchRoll(double * ypr, double *qu)
{
    // yaw, pitch, roll, psi, theta and phi are presumably globals defined elsewhere
    double gx, gy, gz;
    double *tempQ = qu;
    double q0, q1, q2, q3;
    q0 = *tempQ++;
    q1 = *tempQ++;
    q2 = *tempQ++;
    q3 = *tempQ++;
    float sqw = q0 * q0;
    float sqx = q1 * q1;
    float sqy = q2 * q2;
    float sqz = q3 * q3;
    gx = 2 * (q1 * q3 - q0 * q2);
    gy = 2 * (q0 * q1 + q2 * q3);
    gz = q0 * q0 - q1 * q1 - q2 * q2 + q3 * q3;
    yaw = atan2(2 * (q1 * q2 - q0 * q3), 2 * (q0 * q0 + q1 * q1) - 1) * 180 / 3.1416; // YAW
    pitch = atan(gx / sqrt(gy * gy + gz * gz)) * 180 / 3.1416; // PITCH
    roll = atan(gy / sqrt(gx * gx + gz * gz)) * 180 / 3.1416; // ROLL
    psi = atan2(2 * q1 * q2 - 2 * q0 * q3, 2 * q0 * q0 + 2 * q1 * q1 - 1) * 180 / 3.1416; // psi
    theta = -asin(2 * q1 * q3 + 2 * q0 * q2) * 180 / 3.1416; // theta
    phi = atan2(2 * q2 * q3 - 2 * q0 * q1, 2 * q0 * q0 + 2 * q3 * q3 - 1) * 180 / 3.1416; // phi
}
to get yaw, pitch, roll, psi, theta and phi.
On the PC, I use OpenGL to draw a 3D object. How can I use yaw, pitch, roll, psi, theta and phi to control that 3D object (rotation, translation)? Is this right?
glRotatef(-Yaw, 0.0f, 1.0f, 0.0f);
glRotatef(-Pitch, 0.0f, 0.0f, 1.0f);
glRotatef(-Roll, 1.0f, 0.0f, 0.0f);
You're doing it all the old way; why don't you use GLM?
I did it this way in my project with Cinder.
I read _quat from the MPU and Arduino ...
vec4 axis = toAxisAngle(_quat);
// ...
rotate(axis[0], -axis[1], axis[3], axis[2]); // Note the orders and signs
and toAxisAngle:
/**
 * Converts the quaternion into a float array consisting of: rotation angle
 * in radians, rotation axis x, y, z
 *
 * @return 4-element float array
 */
static const float EPS = 1.1920928955078125E-7f;
vec4 toAxisAngle(const quat& q) {
    vec4 res;
    float sa = (float) sqrt(1.0f - q.w * q.w);
    if (sa < EPS) {
        sa = 1.0f;
    } else {
        sa = 1.0f / sa;
    }
    res[0] = (float) acos(q.w) * 2.0f;
    res[1] = q.x * sa;
    res[2] = q.y * sa;
    res[3] = q.z * sa;
    return res;
}
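If you stay with the fixed-function calls from the question instead of Cinder/GLM, the same axis-angle result can be fed to glRotatef; this is only a sketch (glRotatef expects degrees, and any axis reordering or sign flips needed for your particular sensor mounting are an assumption you would have to tune, as the answer above hints):
vec4 axis = toAxisAngle(_quat);
float angleDeg = axis[0] * 180.0f / 3.14159265f;   // toAxisAngle returns the angle in radians

// Apply the orientation; swap or negate axis components here to match your sensor's frame.
glRotatef(angleDeg, axis[1], axis[2], axis[3]);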

Incomplete sphere OpenGL

I want to draw a sphere using a VBO for the vertex, color and UV texture coordinates. My problem is that the sphere is not 'closed'; there is a hole at the origin. I know that this is because my code depends on a (1/segments) spacing between vertices; I am working with segments = 40.
I know that if I raise that value the hole gets smaller, but the program is slower. I don't know if there's a way to eliminate the hole without raising the variable.
Here's the code:
for (int i = 0; i <= segments; i++) {
    double lat0 = pi * (-0.5 + (double)(i - 1) / segments);
    double z0 = sin(lat0);
    double zr0 = cos(lat0);
    // lat1 = [-pi/2..pi/2]
    double lat1 = pi * (-0.5 + (double)i / segments);
    double z1 = sin(lat1);
    double zr1 = cos(lat1);
    for (int j = 0; j <= segments; j++) { // Longitude
        // lng = [0..2*pi]
        double lng = 2 * pi * (double)(j - 1) / segments;
        double x = cos(lng);
        double y = sin(lng);
        //glNormal3f(x * zr0, y * zr0, z0); // Normals
        ballVerts.push_back(x * zr0); //X
        ballVerts.push_back(y * zr0); //Y
        ballVerts.push_back(z0);      //Z
        ballVerts.push_back(0.0f);
        ballVerts.push_back(0.0f);
        ballVerts.push_back(0.0f);
        ballVerts.push_back(1.0f);    //R,G,B,A
        texX = abs(1 - (0.5f + atan2(z0, x * zr0) / (2.0 * pi)));
        texY = 0.5f - asin(y * zr0) / pi;
        ballVerts.push_back(texX);    // Texture coords
        ballVerts.push_back(texY);    // U, V
        //glNormal3f(x * zr1, y * zr1, z1); // Normals
        ballVerts.push_back(x * zr1); //X
        ballVerts.push_back(y * zr1); //Y
        ballVerts.push_back(z1);      //Z
        ballVerts.push_back(0.0f);
        ballVerts.push_back(0.0f);
        ballVerts.push_back(1.0f);
        ballVerts.push_back(1.0f);    //R,G,B,A
        texX = abs(1 - (0.5f + atan2(z1, x * zr1) / (2.0 * pi)));
        texY = 0.5f - asin(y * zr1) / pi;
        ballVerts.push_back(texX);    // Texture coords
        ballVerts.push_back(texY);
    }
}
// Create VBO....
And this is the output I have:
I don't think that's a hole. You're drawing one segment too many, and causing it to draw additional triangles at the south pole, with the texture wrapped around:
for(int i = 0; i <= segments; i++){
double lat0 = pi * (-0.5 + (double)(i - 1) / segments);
In the first loop iteration, with i = 0, the angle will be less than -0.5 * pi, resulting in the extra triangles shown in your picture.
If you want to split the latitude range into segments pieces, you only need to run through the outer loop segments times. With the code above, with the loop from 0 up to and including segments, you're iterating segments + 1 times.
The easiest way to fix this is to start the loop at 1:
for (int i = 1; i <= segments; i++) {
    double lat0 = pi * (-0.5 + (double)(i - 1) / segments);
I would probably loop from 0 and make the end exclusive, and change the angle calculations. But that's really equivalent:
for (int i = 0; i < segments; i++) {
    double lat0 = pi * (-0.5 + (double)i / segments);
    ...
    double lat1 = pi * (-0.5 + (double)(i + 1) / segments);