Draw satellite coverage zone on equirectangular projection - C++

I need to draw the border of a satellite's observation zone on an equirectangular projection. I found these formulas (1) and figure:
sin(fi) = cos(alpha) * sin(fiSat) - sin(alpha) * sin(Beta) * cos(fiSat);
sin(lambda) = (cos(alpha) * cos(fiSat) * sin(lambdaSat)) / cos(asin(sin(fi))) +
(sin(alpha) * sin(Beta) * sin(fiSat) * sin(lambdaSat)) / cos(asin(sin(fi))) -
(sin(alpha) * cos(Beta) * cos(lambdaSat))/cos(asin(sin(fi)));
cos(lambda) = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sin(fi))) +
(sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sin(fi))) -
(sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sin(fi)));
Cross-sections of the Earth in various planes:
And equations system (2) with figure:
if sin(lambda) > 0, cos(lambda) > 0 then lambda = asin(sin(lambda));
if sin(lambda) > 0, cos(lambda) < 0 then lambda = 180 - asin(sin(lambda));
if sin(lambda) < 0, cos(lambda) < 0 then lambda = 180 - asin(sin(lambda));
if sin(lambda) < 0, cos(lambda) > 0 then lambda = asin(sin(lambda));
Scheme of reference angles for the longitude of the Earth:
Where: alpha – polar angle;
fiSat, lambdaSat – latitude, longitude of satellite;
Beta – angle which changes from 0 to 2*Pi and helps to draw the observation zone;
fi, lambda – latitude, longitude of point B on the border of observation zone;
I apply both the (1) and (2) formulas in a loop from 0 to 2*Pi to draw the border of the observation zone. But I am not quite sure about the (2) system of equations.
Inside the intervals [-180;-90], [-90;90], [90;180] the zone is drawn correctly.
Center at -35;45:
Center at 120;60:
Center at -120;-25:
But at the -90 and 90 degree boundaries it gets messy:
Center at -95;-50
Center at 95;30
Can you help me with formulas (1) and (2), or suggest other ones?
double deltaB = 1.0 * M_PI / 180;
observerZone.clear();
for (double Beta = 0.0; Beta <= (M_PI * 2); Beta += deltaB) {
    double sinFi = cos(alpha) * sin(fiSat) - sin(alpha) * sin(Beta) * cos(fiSat);
    double sinLambda = (cos(alpha) * cos(fiSat) * sin(lambdaSat)) / cos(asin(sinFi)) +
                       (sin(alpha) * sin(Beta) * sin(fiSat) * sin(lambdaSat)) / cos(asin(sinFi)) -
                       (sin(alpha) * cos(Beta) * cos(lambdaSat)) / cos(asin(sinFi));
    double cosLambda = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                       (sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) -
                       (sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sinFi));
    // system (2): resolve the longitude quadrant
    // (sinLambda and sinFi are reused below to hold the resulting angles)
    if (sinLambda > 0) {
        if (cosLambda > 0) {
            sinLambda = asin(sinLambda);
            sinFi = asin(sinFi);
        } else {
            sinLambda = M_PI - asin(sinLambda);
            sinFi = asin(sinFi);
        }
    } else if (cosLambda > 0) {
        sinLambda = asin(sinLambda);
        sinFi = asin(sinFi);
    } else {
        sinLambda = -M_PI - asin(sinLambda);
        sinFi = asin(sinFi);
    }
    Point point;
    point.latitude = qRadiansToDegrees(sinFi);
    point.longitude = qRadiansToDegrees(sinLambda);
    observerZone.push_back(point);
}

I solved my problem. In equation (1), the last term when calculating cosLambda should have + instead of -.
double cosLambda = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                   (sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                   (sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sinFi));
Sorry for the trouble.
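As a side note, the whole quadrant case analysis of system (2) can be collapsed into a single atan2 call, which also sidesteps the ±90° boundary issues. A minimal sketch, not the asker's exact code: the function name traceZone and the Point struct are mine, and the cosLambda term uses the corrected + sign from the fix above. Since atan2 only needs the ratio of its arguments, the common 1/cos(fi) factor can be dropped entirely.

```cpp
#include <cmath>
#include <vector>

struct Point { double latitude; double longitude; };

// Sketch: trace the border of the observation zone. atan2 resolves the
// longitude quadrant in one call, replacing the if/else ladder of system (2).
std::vector<Point> traceZone(double alpha, double fiSat, double lambdaSat) {
    std::vector<Point> zone;
    const double deltaB = M_PI / 180.0;
    for (double Beta = 0.0; Beta <= 2.0 * M_PI; Beta += deltaB) {
        double sinFi = std::cos(alpha) * std::sin(fiSat)
                     - std::sin(alpha) * std::sin(Beta) * std::cos(fiSat);
        double fi = std::asin(sinFi);
        // Numerators of sin(lambda) and cos(lambda); the 1/cos(fi) factor
        // cancels inside atan2, so it is omitted.
        double sinL = std::cos(alpha) * std::cos(fiSat) * std::sin(lambdaSat)
                    + std::sin(alpha) * std::sin(Beta) * std::sin(fiSat) * std::sin(lambdaSat)
                    - std::sin(alpha) * std::cos(Beta) * std::cos(lambdaSat);
        double cosL = std::cos(alpha) * std::cos(fiSat) * std::cos(lambdaSat)
                    + std::sin(alpha) * std::sin(Beta) * std::sin(fiSat) * std::cos(lambdaSat)
                    + std::sin(alpha) * std::cos(Beta) * std::sin(lambdaSat);
        double lambda = std::atan2(sinL, cosL);  // already in (-pi, pi]
        zone.push_back({fi * 180.0 / M_PI, lambda * 180.0 / M_PI});
    }
    return zone;
}
```

Because atan2 returns a value in (-pi, pi], no extra wrapping is needed near the ±90° meridians.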

Related

How do I generate Bezier curves and NURBS in C++ and export them as an IGS?

I am new to the C++ NURBS library. I learnt to generate a line (with CLine, from nurbs.h) and save it as an IGS. But in the case of
multiple control points, how do I generate a curve? Every other tutorial uses graphics.h
(putpixel), but I couldn't find anything about IGS.
This should be a simple problem. But I have no idea which function can help me here.
Thanks in advance.
We have 4 control points here to begin with.
for (float t = 0.0; t <= 1.0; t += 0.2) {
    double xt = 0.0, yt = 0.0;
    xt = pow(1 - t, 3) * x[0] + 3 * t * pow(1 - t, 2) * x[1] + 3 * pow(t, 2) * (1 - t) * x[2]
         + pow(t, 3) * x[3];
    yt = pow(1 - t, 3) * y[0] + 3 * t * pow(1 - t, 2) * y[1] + 3 * pow(t, 2) * (1 - t) * y[2]
         + pow(t, 3) * y[3];
    count = count + 1;
    //Math::Vector4f c(xt, yt, 0);
    for (int i = 1; i < 3; i++) {
        listt[i][0] = xt;
        listt[i][1] = yt;
        Math::Vector4f a(listt[i][0], listt[i][1], 0);
        myvector.push_back(&a);
    }
}
......
.....
igs.Write("test.igs");
This is to create the points, but after that I don't know how to use the points to create a Bezier curve.

Convert lat, long to x, y on Mollweide

I have tried to follow the instructions here but I get wild results compared to this site.
Here is my code.
#include <cmath>

double solveNR(double latitude, double epsilon) {
    if (std::fabs(latitude) == M_PI / 2) {
        return latitude;
    }
    double theta = latitude;
    while (true) {
        double nextTheta = theta - (2 * theta * std::sin(2 * theta) - M_PI * std::sin(latitude)) / (2 + 2 * std::cos(2 * theta));
        if (std::fabs(theta - nextTheta) < epsilon) {
            break;
        }
        theta = nextTheta;
    }
    return theta;
}

void convertToXY(double radius, double latitude, double longitude, double* x, double* y) {
    latitude = latitude * M_PI / 180;
    longitude = longitude * M_PI / 180;
    double longitudeZero = 0 * M_PI / 180;
    double theta = solveNR(latitude, 1);
    *x = radius * 2 * sqrt(2) * (longitude - longitudeZero) * std::cos(theta) / M_PI;
    *y = radius * sqrt(2) * std::sin(theta);
}
For instance, assuming a radius of 5742340.81, longitude 180 gives x = 21 and latitude 90 gives y = 8.1209e+06.
I found this resource which seems to calculate the right answer. But I cannot parse how it is different.
In your solveNR() function, why do you use

double nextTheta = theta - (2 * theta * std::sin(2 * theta) - PI *
std::sin(latitude)) / (2 + 2 * std::cos(2 * theta));

instead of

double nextTheta = theta - (2 * theta + std::sin(2 * theta) - PI *
std::sin(latitude)) / (2 + 2 * std::cos(2 * theta));

It seems you should use "+" instead of "*" (after 2 * theta in the numerator), to match the Wikipedia instructions.
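Putting the fix into context, here is a sketch with the "+" sign in place, plus a tighter tolerance: the question's call passes `epsilon = 1`, which lets the Newton loop exit after a single step. The names solveTheta and mollweideXY are mine, and the pole guard is widened to a small tolerance (exact floating-point equality with pi/2 is fragile, and the denominator vanishes at the poles).

```cpp
#include <cmath>

// Newton-Raphson for the Mollweide auxiliary angle theta:
//   2*theta + sin(2*theta) = pi * sin(latitude)
double solveTheta(double latitude, double epsilon) {
    // Poles: closed form, and avoids the vanishing denominator at theta = pi/2
    if (std::fabs(latitude) > M_PI / 2 - 1e-12)
        return std::copysign(M_PI / 2, latitude);
    double theta = latitude;
    while (true) {
        double next = theta
            - (2 * theta + std::sin(2 * theta) - M_PI * std::sin(latitude))
            / (2 + 2 * std::cos(2 * theta));
        if (std::fabs(theta - next) < epsilon) return next;
        theta = next;
    }
}

void mollweideXY(double radius, double latDeg, double lonDeg, double* x, double* y) {
    double lat = latDeg * M_PI / 180, lon = lonDeg * M_PI / 180;
    double theta = solveTheta(lat, 1e-9);  // tight tolerance, unlike epsilon = 1
    *x = radius * 2 * std::sqrt(2.0) * lon * std::cos(theta) / M_PI;
    *y = radius * std::sqrt(2.0) * std::sin(theta);
}
```

With this, the full map spans x in ±2√2·radius and y in ±√2·radius, which matches the expected Mollweide extents.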

Getting Quaternions From Gyro Data. How do I get body coordinates?

I've got a gyro hooked up to an Arduino and I'm getting angular rate out in rad/sec on all three axes.
I want to be able to get out yaw, pitch, roll in body coordinates so the three axes of rotation are fixed to the body. The problem I'm having now is that when I roll the sensor, the yaw and pitch I get out become swapped. As I roll the sensor 90 degrees, the yaw and pitch change places. Anywhere in between, the yaw and pitch are a mixture between the two.
Instead, I want to keep the pitch and yaw relative to the new body rotation rather than the initial position.
Here is my code:
void loop() {
    currentTime = millis();
    dt = ((currentTime - prevTime) / 1000.0);
    // Puts gyro data into data[3], data[4], data[5]
    readBMI();
    if (firstPass == false) {
        omega[0] = data[3];
        omega[1] = data[4];
        omega[2] = data[5];
        wLength = sqrt(sq(omega[0]) + sq(omega[1]) + sq(omega[2]));
        theta = wLength * dt;
        // incremental rotation quaternion for this timestep
        q_new[0] = cos(theta / 2);
        q_new[1] = (omega[0] / wLength) * sin(theta / 2);
        q_new[2] = (omega[1] / wLength) * sin(theta / 2);
        q_new[3] = (omega[2] / wLength) * sin(theta / 2);
        // accumulate: q = q * q_new
        q[0] = q[0] * q_new[0] - q[1] * q_new[1] - q[2] * q_new[2] - q[3] * q_new[3];
        q[1] = q[0] * q_new[1] + q[1] * q_new[0] + q[2] * q_new[3] - q[3] * q_new[2];
        q[2] = q[0] * q_new[2] - q[1] * q_new[3] + q[2] * q_new[0] + q[3] * q_new[1];
        q[3] = q[0] * q_new[3] + q[1] * q_new[2] - q[2] * q_new[1] + q[3] * q_new[0];
        // quaternion -> Euler angles (degrees)
        float sinr_cosp = 2 * (q[0] * q[1] + q[2] * q[3]);
        float cosr_cosp = 1 - 2 * (sq(q[1]) + sq(q[2]));
        roll = atan2(sinr_cosp, cosr_cosp) * 180 / PI;
        pitch = asin(2 * (q[0] * q[2] - q[3] * q[1])) * 180 / PI;
        double siny_cosp = 2 * (q[0] * q[3] + q[1] * q[2]);
        double cosy_cosp = 1 - 2 * (sq(q[2]) + sq(q[3]));
        yaw = atan2(siny_cosp, cosy_cosp) * 180 / PI;
    }
    Serial.print(roll);
    Serial.print(" ");
    Serial.print(pitch);
    Serial.print(" ");
    Serial.print(yaw);
    Serial.print(" ");
    Serial.println();
    delay(20);
    prevTime = currentTime;
}
I'm getting the angles out correctly, but my only problem is that the yaw and pitch swap when it rolls. So I'm guessing I need a way to convert from world to body coordinates?

What did I do wrong in my image rotation algorithm?

I just want to rotate image #1 and write it into memory #2 (#1 Body, #2 TurnBody), rotating around the center of the image.
ki and kj are just (i - radius) and (j - radius). SIN and COS are just the sin and cos of the rotation angle.
radius is just half of the image side (my image is square).
6.28 = pi*2
Example of what I need to get:
Example of what I have:
(I rotate not the whole image, just a small square in the center, and add it to the big screen image.)
TurnAngle is just a global value (it tracks the angle the image is currently rotated by).
void Turn(double angle, int radius, COLORREF* Body, COLORREF* TurnBody)
{
    if (abs(TurnAngle += angle) > 6.28)
    {
        TurnAngle = 0;
    }
    int i, ki, j, kj;
    const double SIN = sin(TurnAngle), COS = cos(TurnAngle);
    for (i = 0, ki = -radius; i < 2 * radius; i++, ki++)
    {
        for (j = 0, kj = -radius; j < 2 * radius; j++, kj++)
        {
            if (Body[i * 2 * radius + j]) // if pixel not black
            {
                TurnBody[static_cast<int>(kj * COS - ki * SIN + radius + (ki * COS + kj * SIN + radius) * 2 * radius)] = Body[i * 2 * radius + j];
            }
        }
    }
}
This works; something was wrong with the parentheses or the double values, I really don't know... Thank you guys.
this->TurnBody[(int)(kj * COS - ki * SIN) + this->radius + ((int)(ki * COS + kj * SIN) + this->radius) * 2 * this->radius] = this->Body[i * 2 * this->radius + j];
I think this is wrong:
TurnBody[static_cast<int>(kj * COS - ki * SIN + radius + (ki * COS + kj * SIN + radius) * 2 * radius)] = Body[i * 2 * radius + j];
I think it should be more like this:
TurnBody[(int)(kj * COS) + radius + ((int)(kj * SIN) + radius) * 2*radius] = Body[i * 2 * radius + j];
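To make the index arithmetic explicit, here is a sketch of the destination-index computation with the two casts applied separately, as in the accepted fix above (the helper name rotatedIndex is mine; side length 2 * radius and row-major layout assumed):

```cpp
// Sketch: destination index when rotating a square image of side 2*radius
// about its centre. Casting each rotated coordinate separately (instead of
// casting the whole flattened expression) keeps the row term an exact
// multiple of the stride, which is what the original one-big-cast broke.
int rotatedIndex(int ki, int kj, double SIN, double COS, int radius) {
    int col = static_cast<int>(kj * COS - ki * SIN) + radius;  // x in rotated image
    int row = static_cast<int>(ki * COS + kj * SIN) + radius;  // y in rotated image
    return row * 2 * radius + col;
}
```

One caveat: a forward map like this can still leave holes in the destination image; iterating over destination pixels and sampling the source with the inverse rotation avoids that.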

Half of my ellipse drawn in the wrong place

Here is the code for an oval drawing method I am working on. I am applying the Bresenham method to plot its coordinates, and taking advantage of the ellipse's symmetrical properties to draw the same pixel in four different places.
void cRenderClass::plotEllipse(int xCentre, int yCentre, int width, int height, float angle, float xScale, float yScale)
{
    if ((height == width) && (fabs(xScale - yScale) < 0.005))
        plotCircle(xCentre, yCentre, width, xScale);

    std::vector<std::vector <float>> rotate;
    if (angle > 360.0f)
    {
        angle -= 180.0f;
    }
    rotate = maths.rotateMatrix(angle, 'z');
    //rotate[0][0] = cos(angle)
    //rotate[0][1] = sin(angle)
    float theta = atan2(-height * rotate[0][1], width * rotate[0][0]);
    if (angle > 90.0f && angle < 180.0f)
    {
        theta += PI;
    }
    //add scaling in at a later date
    float xShear = (width * (cos(theta) * rotate[0][0])) - (height * (sin(theta) * rotate[0][1]));
    float yShear = (width * (cos(theta) * rotate[0][1])) + (height * (sin(theta) * rotate[0][0]));
    float widthAxis = fabs(sqrt(((rotate[0][0] * width) * (rotate[0][0] * width)) + ((rotate[0][1] * height) * (rotate[0][1] * height))));
    float heightAxis = (width * height) / widthAxis;
    int aSquared = widthAxis * widthAxis;
    int fourASquared = 4 * aSquared;
    int bSquared = heightAxis * heightAxis;
    int fourBSquared = 4 * bSquared;

    x0 = 0;
    y0 = heightAxis;
    int sigma = (bSquared * 2) + (aSquared * (1 - (2 * heightAxis)));
    while ((bSquared * x0) <= (aSquared * y0))
    {
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        if (sigma >= 0)
        {
            sigma += (fourASquared * (1 - y0));
            y0--;
        }
        sigma += (bSquared * ((4 * x0) + 6));
        x0++;
    }

    x0 = widthAxis;
    y0 = 0;
    sigma = (aSquared * 2) + (bSquared * (1 - (2 * widthAxis)));
    while ((aSquared * y0) <= (bSquared * x0))
    {
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        if (sigma >= 0)
        {
            sigma += (fourBSquared * (1 - x0));
            x0--;
        }
        sigma += (aSquared * (4 * y0) + 6);
        y0++;
    }

    //the above algorithm hasn't been quite completed
    //there are still a few things I want to enquire Andy about
    //before I move on

    //this other algorithm definitely works
    //however, it is computationally expensive
    //and the line drawing isn't as refined as the first one
    //only use this as a last resort
    /* std::vector<std::vector <float>> rotate;
    rotate = maths.rotateMatrix(angle, 'z');
    float s = rotate[0][1];
    float c = rotate[0][0];
    float ratio = (float)height / (float)width;
    float px, py, xNew, yNew;
    for (int theta = 0; theta <= 360; theta++)
    {
        px = (xCentre + (cos(maths.degToRad(theta)) * (width / 2))) - xCentre;
        py = (yCentre - (ratio * (sin(maths.degToRad(theta)) * (width / 2)))) - yCentre;
        x0 = (px * c) - (py * s);
        y0 = (px * s) + (py * c);
        drawPixel(x0 + xCentre, y0 + yCentre);
    }*/
}
Here's the problem. When testing the rotation matrix on my oval drawing function, I expect it to draw an ellipse at a slant from its original horizontal position, as signified by 'angle'. Instead, it makes a heart shape. This is sweet, but not the result I want.
I have managed to get the other algorithm (as seen in the bottom part of that code sample) working successfully, but it takes more time to compute, and doesn't draw lines quite as nicely. I only plan to use that if I can't get this Bresenham one working.
Can anyone help?
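For comparison, the parametric fallback at the bottom of the code can be written as a small standalone sketch, with rotation, scaling, and translation applied in one clear sequence. The names Pixel and ellipsePoints are mine; this is a sanity check for the Bresenham version, not a replacement for it.

```cpp
#include <cmath>
#include <vector>

struct Pixel { int x, y; };

// Sketch: plot an ellipse of semi-axes (a, b), rotated by angleDeg about
// its centre. Each parametric point (a*cos t, b*sin t) is rotated, then
// translated to the centre.
std::vector<Pixel> ellipsePoints(int xCentre, int yCentre,
                                 double a, double b, double angleDeg) {
    std::vector<Pixel> pts;
    double rad = angleDeg * M_PI / 180.0;
    double c = std::cos(rad), s = std::sin(rad);
    for (int t = 0; t < 360; ++t) {
        double th = t * M_PI / 180.0;
        double px = a * std::cos(th), py = b * std::sin(th);
        pts.push_back({ xCentre + static_cast<int>(std::lround(px * c - py * s)),
                        yCentre + static_cast<int>(std::lround(px * s + py * c)) });
    }
    return pts;
}
```

If this draws a clean slanted ellipse for the same centre, axes, and angle while the Bresenham path draws the heart shape, the fault is in the shear/axis computation rather than in the rotation matrix itself.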