I've got a gyro hooked up to an Arduino and I'm getting angular rate out in rad/sec on all three axes.
I want to get yaw, pitch, and roll in body coordinates, so that the three axes of rotation are fixed to the body. The problem I'm having now is that when I roll the sensor, the yaw and pitch I get out become swapped: as I roll the sensor through 90 degrees, yaw and pitch change places, and anywhere in between they are a mixture of the two.
Instead, I want to keep the pitch and yaw relative to the new body orientation rather than the initial position.
Here is my code:
void loop() {
  currentTime = millis();
  dt = (currentTime - prevTime) / 1000.0;

  // Puts gyro data into data[3], data[4], data[5]
  readBMI();

  if (firstPass == false) {
    omega[0] = data[3];
    omega[1] = data[4];
    omega[2] = data[5];

    wLength = sqrt(sq(omega[0]) + sq(omega[1]) + sq(omega[2]));
    theta = wLength * dt;

    if (wLength > 1e-6) {  // skip the update when the rate is essentially zero, to avoid dividing by zero
      // Incremental rotation over this time step as a unit quaternion
      q_new[0] = cos(theta / 2);
      q_new[1] = (omega[0] / wLength) * sin(theta / 2);
      q_new[2] = (omega[1] / wLength) * sin(theta / 2);
      q_new[3] = (omega[2] / wLength) * sin(theta / 2);

      // Compose q = q * q_new using temporaries so q is not overwritten
      // while it is still being read
      float q0 = q[0] * q_new[0] - q[1] * q_new[1] - q[2] * q_new[2] - q[3] * q_new[3];
      float q1 = q[0] * q_new[1] + q[1] * q_new[0] + q[2] * q_new[3] - q[3] * q_new[2];
      float q2 = q[0] * q_new[2] - q[1] * q_new[3] + q[2] * q_new[0] + q[3] * q_new[1];
      float q3 = q[0] * q_new[3] + q[1] * q_new[2] - q[2] * q_new[1] + q[3] * q_new[0];
      q[0] = q0;  q[1] = q1;  q[2] = q2;  q[3] = q3;
    }

    // Quaternion to Euler angles (degrees)
    float sinr_cosp = 2 * (q[0] * q[1] + q[2] * q[3]);
    float cosr_cosp = 1 - 2 * (sq(q[1]) + sq(q[2]));
    roll = atan2(sinr_cosp, cosr_cosp) * 180 / PI;

    pitch = asin(2 * (q[0] * q[2] - q[3] * q[1])) * 180 / PI;

    double siny_cosp = 2 * (q[0] * q[3] + q[1] * q[2]);
    double cosy_cosp = 1 - 2 * (sq(q[2]) + sq(q[3]));
    yaw = atan2(siny_cosp, cosy_cosp) * 180 / PI;
  }

  Serial.print(roll);
  Serial.print(" ");
  Serial.print(pitch);
  Serial.print(" ");
  Serial.print(yaw);
  Serial.print(" ");
  Serial.println();

  delay(20);
  prevTime = currentTime;
}
I'm getting the angles out, but my only problem is that yaw and pitch swap when it rolls. So I'm guessing I need a way to convert from world to body coordinates?
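For what it's worth, this is roughly what I mean by converting a world-frame vector into the body frame with the orientation quaternion. Just a sketch, not tested; whether q or its conjugate goes on the left depends on the convention the quaternion was integrated with:

// Hamilton product r = a * b (components: [0] = w, [1..3] = x, y, z)
void quatMultiply(const float a[4], const float b[4], float r[4]) {
  r[0] = a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3];
  r[1] = a[0] * b[1] + a[1] * b[0] + a[2] * b[3] - a[3] * b[2];
  r[2] = a[0] * b[2] - a[1] * b[3] + a[2] * b[0] + a[3] * b[1];
  r[3] = a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + a[3] * b[0];
}

// Rotate a world-frame vector v into the body frame with the unit
// orientation quaternion q: v_body = q^-1 * (0, v) * q
void worldToBody(const float q[4], const float v[3], float out[3]) {
  float p[4]  = { 0.0f, v[0], v[1], v[2] };        // pure quaternion (0, v)
  float qc[4] = { q[0], -q[1], -q[2], -q[3] };     // conjugate of q
  float t[4], r[4];
  quatMultiply(qc, p, t);
  quatMultiply(t, q, r);
  out[0] = r[1];  out[1] = r[2];  out[2] = r[3];
}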
I have tried to follow the instructions here but I get wild results compared to this site.
Here is my code.
#include <cmath>

double solveNR(double latitude, double epsilon) {
    if (abs(latitude) == M_PI / 2) {
        return latitude;
    }
    double theta = latitude;
    while (true) {
        double nextTheta = theta - (2 * theta * std::sin(2 * theta) - M_PI * std::sin(latitude)) / (2 + 2 * std::cos(2 * theta));
        if (abs(theta - nextTheta) < epsilon) {
            break;
        }
        theta = nextTheta;
    }
    return theta;
}

void convertToXY(double radius, double latitude, double longitude, double* x, double* y) {
    latitude = latitude * M_PI / 180;
    longitude = longitude * M_PI / 180;
    double longitudeZero = 0 * M_PI / 180;
    double theta = solveNR(latitude, 1);

    *x = radius * 2 * sqrt(2) * (longitude - longitudeZero) * std::cos(theta) / M_PI;
    *y = radius * sqrt(2) * std::sin(theta);
}
For instance:
180 longitude gives 21
90 latitude gives 8.1209e+06
assuming a radius of 5742340.81.
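This is roughly how I am calling it (a sketch; I'm assuming latitude 0 for the longitude test and longitude 0 for the latitude test):

#include <cstdio>

// Hypothetical driver for the two sample points above.
int main() {
    double x = 0.0, y = 0.0;
    convertToXY(5742340.81, 0.0, 180.0, &x, &y);   // longitude 180 (on the equator)
    std::printf("x = %g\n", x);
    convertToXY(5742340.81, 90.0, 0.0, &x, &y);    // latitude 90 (the pole)
    std::printf("y = %g\n", y);
    return 0;
}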
I found this resource which seems to calculate the right answer, but I cannot work out how it differs from mine.
In your solveNR() function, why do you use
double nextTheta = theta - (2 * theta * std::sin(2 * theta) - PI *
std::sin(latitude)) / (2 + 2 * std::cos(2 * theta));
instead
double nextTheta = theta - (2 * theta + std::sin(2 * theta) - PI *
std::sin(latitude)) / (2 + 2 * std::cos(2 * theta));
It seems like you should use "+" instead of "*" (after 2 * theta in the numerator), to match the Wikipedia instructions.
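For reference, a minimal corrected sketch of the iteration with that sign change (same structure as the question's solveNR, assuming <cmath> is included as above):

// Newton-Raphson iteration for the Mollweide auxiliary angle theta:
// 2*theta + sin(2*theta) = pi * sin(latitude)
double solveNR(double latitude, double epsilon) {
    if (std::abs(latitude) == M_PI / 2) {
        return latitude;   // the denominator of the iteration vanishes at the poles
    }
    double theta = latitude;
    while (true) {
        double nextTheta = theta
            - (2 * theta + std::sin(2 * theta) - M_PI * std::sin(latitude))
              / (2 + 2 * std::cos(2 * theta));
        if (std::abs(theta - nextTheta) < epsilon) {
            return nextTheta;
        }
        theta = nextTheta;
    }
}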
I just want to rotate image #1 and write the result into memory #2 (#1 is Body, #2 is TurnBody), rotating around the center of the image.
KI and KJ are just (i - radius) and (j - radius). SIN and COS are just the sine and cosine of the turn angle.
radius is just half of the image side (my image is square).
6.28 = pi * 2
Example of what I need to get:
Example of what I have:
(I don't rotate the whole image, just a small square in the center, which I then add to the big screen image.)
TurnAngle is just a global value (it stores the angle the image is currently rotated by).
void Turn(double angle, int radius, COLORREF* Body, COLORREF* TurnBody)
{
    if (abs(TurnAngle += angle) > 6.28)
    {
        TurnAngle = 0;
    }
    int i, ki, j, kj;
    const double SIN = sin(TurnAngle), COS = cos(TurnAngle);
    for (i = 0, ki = -radius; i < 2 * radius; i++, ki++)
    {
        for (j = 0, kj = -radius; j < 2 * radius; j++, kj++)
        {
            if (Body[i * 2 * radius + j]) // if Pixel not black
            {
                TurnBody[static_cast<int>(kj * COS - ki * SIN + radius + (ki * COS + kj * SIN + radius) * 2 * radius)] = Body[i * 2 * radius + j];
            }
        }
    }
}
This works; something was wrong with the parentheses or the double values, I really don't know. Thank you guys.
this->TurnBody[(int)(kj * COS - ki * SIN) + this->radius + ((int)(ki * COS + kj * SIN) + this->radius) * 2 * this->radius] = this->Body[i * 2 * this->radius + j];
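If I understand the fix right, the key difference is that the column and row offsets are truncated to int separately, so the row term stays an exact multiple of the image stride (2 * radius). A small sketch of the same indexing with hypothetical dstX/dstY names, plus a bounds check:

// Destination of the source pixel (ki, kj) after rotating about the center.
// Truncating x and y separately keeps the row offset an exact multiple of the stride.
int dstX = (int)(kj * COS - ki * SIN) + radius;   // column in TurnBody
int dstY = (int)(ki * COS + kj * SIN) + radius;   // row in TurnBody
if (dstX >= 0 && dstX < 2 * radius && dstY >= 0 && dstY < 2 * radius)
    TurnBody[dstY * 2 * radius + dstX] = Body[i * 2 * radius + j];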
I think this is wrong:
TurnBody[static_cast<int>(kj * COS - ki * SIN + radius + (ki * COS + kj * SIN + radius) * 2 * radius)] = Body[i * 2 * radius + j];
I think it should be more like this:
TurnBody[(int)(kj * COS) + radius + ((int)(kj * SIN) + radius) * 2*radius] = Body[i * 2 * radius + j];
I need to draw the borders of the observation zone of a satellite on an equirectangular projection. I found these formulas (1) and this figure:
sin(fi) = cos(alpha) * sin(fiSat) - sin(alpha) * sin(Beta) * cos(fiSat);
sin(lambda) = (cos(alpha) * cos(fiSat) * sin(lambdaSat)) / cos(asin(sin(fi))) +
(sin(alpha) * sin(Beta) * sin(fiSat) * sin(lambdaSat)) / cos(asin(sin(fi))) -
(sin(alpha) * cos(Beta) * cos(lambdaSat))/cos(asin(sin(fi)));
cos(lambda) = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sin(fi))) +
(sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sin(fi))) -
(sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sin(fi)));
Cross-sections of the Earth in various planes:
And the system of equations (2), with figure:
if sin(lambda) > 0, cos(lambda) > 0 then lambda = asin(sin(lambda));
if sin(lambda) > 0, cos(lambda) < 0 then lambda = 180 - asin(sin(lambda));
if sin(lambda) < 0, cos(lambda) < 0 then lambda = 180 - asin(sin(lambda));
if sin(lambda) < 0, cos(lambda) > 0 then lambda = asin(sin(lambda));
Scheme of reference angles for the longitude of the Earth:
Where: alpha – polar angle;
fiSat, lambdaSat – latitude and longitude of the satellite;
Beta – angle which changes from 0 to 2*Pi and helps to draw the observation zone;
fi, lambda – latitude and longitude of point B on the border of the observation zone.
I evaluate both formulas (1) and (2) in a loop from 0 to 2*Pi to draw the border of the observation zone, but I am not quite sure about the system of equations (2).
Inside the intervals [-180;-90], [-90;90], [90;180] the zone is drawn correctly.
Center at -35;45:
Center at 120;60:
Center at -120;-25:
But near the -90 and 90 degree borders it gets messy:
Center at -95;-50:
Center at 95;30:
Can you help me with formulas (1) and (2), or suggest different ones?
double deltaB = 1.0 * M_PI / 180;
observerZone.clear();

for (double Beta = 0.0; Beta <= (M_PI * 2); Beta += deltaB) {
    double sinFi = cos(alpha) * sin(fiSat) - sin(alpha) * sin(Beta) * cos(fiSat);

    double sinLambda = (cos(alpha) * cos(fiSat) * sin(lambdaSat)) / cos(asin(sinFi)) +
                       (sin(alpha) * sin(Beta) * sin(fiSat) * sin(lambdaSat)) / cos(asin(sinFi)) -
                       (sin(alpha) * cos(Beta) * cos(lambdaSat)) / cos(asin(sinFi));

    double cosLambda = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                       (sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) -
                       (sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sinFi));

    if (sinLambda > 0) {
        if (cosLambda > 0) {
            sinLambda = asin(sinLambda);
            sinFi = asin(sinFi);
        }
        else {
            sinLambda = M_PI - asin(sinLambda);
            sinFi = asin(sinFi);
        }
    }
    else if (cosLambda > 0) {
        sinLambda = asin(sinLambda);
        sinFi = asin(sinFi);
    }
    else {
        sinLambda = -M_PI - asin(sinLambda);
        sinFi = asin(sinFi);
    }

    Point point;
    point.latitude = qRadiansToDegrees(sinFi);
    point.longitude = qRadiansToDegrees(sinLambda);
    observerZone.push_back(point);
}
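As an aside, the four-branch quadrant selection from system (2) is what atan2 computes directly, so it can be collapsed into one call; a minimal sketch of what the end of the loop body would look like, reusing the sinLambda/cosLambda/sinFi values from above:

    // atan2 returns lambda in (-pi, pi] from the sine and cosine directly,
    // which covers all four quadrant cases of system (2).
    double lambda = atan2(sinLambda, cosLambda);   // longitude in radians
    double fi     = asin(sinFi);                   // latitude in radians

    Point point;
    point.latitude  = qRadiansToDegrees(fi);
    point.longitude = qRadiansToDegrees(lambda);
    observerZone.push_back(point);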
I solved my problem. In equation (1), when calculating cosLambda, the last term should have + instead of -.

double cosLambda = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                   (sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                   (sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sinFi));

Sorry for the trouble.
Here is the code for an oval drawing method I am working on. I am applying the Bresenham method to plot its co-ordinates, and taking advantage of the ellipse's symmetrical properties to draw the same pixel in four different places.
void cRenderClass::plotEllipse(int xCentre, int yCentre, int width, int height, float angle, float xScale, float yScale)
{
    if ((height == width) && (abs(xScale - yScale) < 0.005))
        plotCircle(xCentre, yCentre, width, xScale);

    std::vector<std::vector <float>> rotate;
    if (angle > 360.0f)
    {
        angle -= 180.0f;
    }
    rotate = maths.rotateMatrix(angle, 'z');
    //rotate[0][0] = cos(angle)
    //rotate[0][1] = sin(angle)

    float theta = atan2(-height * rotate[0][1], width * rotate[0][0]);
    if (angle > 90.0f && angle < 180.0f)
    {
        theta += PI;
    }

    //add scaling in at a later date
    float xShear = (width * (cos(theta) * rotate[0][0])) - (height * (sin(theta) * rotate[0][1]));
    float yShear = (width * (cos(theta) * rotate[0][1])) + (height * (sin(theta) * rotate[0][0]));

    float widthAxis = abs(sqrt(((rotate[0][0] * width) * (rotate[0][0] * width)) + ((rotate[0][1] * height) * (rotate[0][1] * height))));
    float heightAxis = (width * height) / widthAxis;

    int aSquared = widthAxis * widthAxis;
    int fourASquared = 4 * aSquared;
    int bSquared = heightAxis * heightAxis;
    int fourBSquared = 4 * bSquared;

    x0 = 0;
    y0 = heightAxis;
    int sigma = (bSquared * 2) + (aSquared * (1 - (2 * heightAxis)));
    while ((bSquared * x0) <= (aSquared * y0))
    {
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        if (sigma >= 0)
        {
            sigma += (fourASquared * (1 - y0));
            y0--;
        }
        sigma += (bSquared * ((4 * x0) + 6));
        x0++;
    }

    x0 = widthAxis;
    y0 = 0;
    sigma = (aSquared * 2) + (bSquared * (1 - (2 * widthAxis)));
    while ((aSquared * y0) <= (bSquared * x0))
    {
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        if (sigma >= 0)
        {
            sigma += (fourBSquared * (1 - x0));
            x0--;
        }
        sigma += (aSquared * ((4 * y0) + 6));
        y0++;
    }

    //the above algorithm hasn't been quite completed
    //there are still a few things I want to enquire Andy about
    //before I move on

    //this other algorithm definitely works
    //however
    //it is computationally expensive
    //and the line drawing isn't as refined as the first one
    //only use this as a last resort
    /* std::vector<std::vector <float>> rotate;
    rotate = maths.rotateMatrix(angle, 'z');
    float s = rotate[0][1];
    float c = rotate[0][0];
    float ratio = (float)height / (float)width;
    float px, py, xNew, yNew;
    for (int theta = 0; theta <= 360; theta++)
    {
        px = (xCentre + (cos(maths.degToRad(theta)) * (width / 2))) - xCentre;
        py = (yCentre - (ratio * (sin(maths.degToRad(theta)) * (width / 2)))) - yCentre;
        x0 = (px * c) - (py * s);
        y0 = (px * s) + (py * c);
        drawPixel(x0 + xCentre, y0 + yCentre);
    }*/
}
Here's the problem. When testing the rotation matrix on my oval drawing function, I expect it to draw an ellipse at a slant from its original horizontal position as signified by 'angle'. Instead, it makes a heart shape. This is sweet, but not the result I want.
I have managed to get the other algorithm (as seen in the bottom part of that code sample) working successfully, but it takes more time to compute, and doesn't draw lines quite as nicely. I only plan to use that if I can't get this Bresenham one working.
Can anyone help?
I'm essentially working on a function for slerping, and while it kind of works, it has a weird perspective-warping issue that I'm stuck trying to work out right now.
Quaternion sLerp(Quaternion start, Quaternion end, float s)
{
    float dot = qDot(start, end);
    float theta = std::acos(dot);
    float sTheta = std::sin(theta);

    float w1 = sin((1.0f - s) * theta) / sTheta;
    float w2 = sin(s * theta) / sTheta;

    Quaternion Temp(0, 0, 0, 0);
    Temp = start * w1 + end * w2;
    return Temp;
}
Essentially what it's doing (or should be doing) is just slerping between two values to provide a rotation, and the result is being converted to a rotation matrix. But what's going wrong is a horribly, horribly stretched view... for some reason during the rotation everything gets stretched: it starts too long and thin, reaches a midpoint where it's much shorter, and then goes back to being thin. Any help would be great.
Your slerp code seems fine, although one would normally make sure that dot>=0 because otherwise, you're rotating the long way around the circle. In general, it's also important to make sure that dot!=1 because you'll run into divide-by-zero problems.
A proper quaternion should never stretch the view. Either you're passing in non-unit-length quaternions for start or end, or your quaternion-to-matrix code is suspect (or you're getting funky behavior because the angle between the two quaternions is very small and you're dividing by almost zero).
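A minimal sketch of a slerp with those two guards added (assuming <cmath>, the same Quaternion type with operator*, operator+, and qDot as in the question; qNormalize is a hypothetical helper, and a plain lerp-and-normalise fallback is used when the angle is tiny):

Quaternion sLerpSafe(Quaternion start, Quaternion end, float s)
{
    float dot = qDot(start, end);

    // If the dot product is negative, flip one end so we take the short way around.
    if (dot < 0.0f) {
        end = end * -1.0f;
        dot = -dot;
    }

    // Nearly parallel quaternions: sin(theta) ~ 0, so fall back to
    // normalised linear interpolation to avoid dividing by almost zero.
    if (dot > 0.9995f) {
        Quaternion result = start * (1.0f - s) + end * s;
        return qNormalize(result);   // qNormalize: hypothetical unit-length helper
    }

    float theta  = std::acos(dot);
    float sTheta = std::sin(theta);
    float w1 = std::sin((1.0f - s) * theta) / sTheta;
    float w2 = std::sin(s * theta) / sTheta;
    return start * w1 + end * w2;
}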
My code for converting from quaternion to a matrix for use in OpenGL:
// First row
glMat[ 0] = 1.0f - 2.0f * ( q[1] * q[1] + q[2] * q[2] );
glMat[ 1] = 2.0f * (q[0] * q[1] + q[2] * q[3]);
glMat[ 2] = 2.0f * (q[0] * q[2] - q[1] * q[3]);
glMat[ 3] = 0.0f;
// Second row
glMat[ 4] = 2.0f * ( q[0] * q[1] - q[2] * q[3] );
glMat[ 5] = 1.0f - 2.0f * ( q[0] * q[0] + q[2] * q[2] );
glMat[ 6] = 2.0f * (q[2] * q[1] + q[0] * q[3] );
glMat[ 7] = 0.0f;
// Third row
glMat[ 8] = 2.0f * ( q[0] * q[2] + q[1] * q[3] );
glMat[ 9] = 2.0f * ( q[1] * q[2] - q[0] * q[3] );
glMat[10] = 1.0f - 2.0f * ( q[0] * q[0] + q[1] * q[1] );
glMat[11] = 0.0f;
// Fourth row
glMat[12] = 0.0;
glMat[13] = 0.0;
glMat[14] = 0.0;
glMat[15] = 1.0f;
Do you need to normalise the quaternion?
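For reference, a minimal normalisation sketch (assuming <cmath> and the same q[0..3] layout as the matrix code above):

// Scale q to unit length before converting it to a matrix; a non-unit
// quaternion scales (stretches) the view.
float len = std::sqrt(q[0] * q[0] + q[1] * q[1] + q[2] * q[2] + q[3] * q[3]);
if (len > 0.0f) {
    q[0] /= len;  q[1] /= len;  q[2] /= len;  q[3] /= len;
}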
I think the following:
float sTheta = std::sin(theta);
should be:
float sTheta = sqrt(1.0f - sqr(theta));