transform syntax and structures containing vectors - C++

I have a problem with the syntax of std::transform. I have a structure AirportInfo that contains information about an airport. Each structure is stored in a map, so every airport has a unique ID. The structure holds a vector of pairs m_routes, which contains the ID of the destination airport and whether the flight is direct or not. (In this case only direct flights need to be considered, because all non-direct flights have already been deleted, so the second item of the pair is always 0.) The function calculateDistanceBetween returns the distance between two airports from their coordinates, which are also stored in the structure, in pos. Now I have to calculate the distance for every route, but I cannot get the syntax right. Any help will be appreciated, thank you!
This piece of code works
// Calculates the distance between two points on earth specified by longitude/latitude.
// Function taken and adapted from http://www.codeproject.com/Articles/22488/Distance-using-Longitiude-and-latitude-using-c
float calculateDistanceBetween(float lat1, float long1, float lat2, float long2)
{
    // main code inside the class
    float dlat1 = lat1 * ((float)M_PI / 180.0f);
    float dlong1 = long1 * ((float)M_PI / 180.0f);
    float dlat2 = lat2 * ((float)M_PI / 180.0f);
    float dlong2 = long2 * ((float)M_PI / 180.0f);
    float dLong = dlong1 - dlong2;
    float dLat = dlat1 - dlat2;
    float aHarv = pow(sin(dLat / 2.0f), 2.0f) + cos(dlat1) * cos(dlat2) * pow(sin(dLong / 2), 2);
    float cHarv = 2 * atan2(sqrt(aHarv), sqrt(1.0f - aHarv));
    // earth's radius from wikipedia varies between 6,356.750 km and 6,378.135 km
    // The IUGG value for the equatorial radius of the Earth is 6378.137 km
    const float earth = 6378.137f;
    return earth * cHarv;
}
struct AirportInfo
{
    std::string m_name;
    std::string m_city;
    std::string m_country;
    float pos[2];                               // x: latitude, y: longitude
    std::vector<std::pair<int, int>> m_routes;  // dest_id + numStops
    std::vector<float> m_routeLengths;
    float m_averageRouteLength;
};
Here is what causes the trouble:
//- For each route in AirportInfo::m_routes, calculate the distance between start and destination. Store the results in AirportInfo::m_routeLengths. Use std::transform() and calculateDistanceBetween().
void calculateDistancePerRoute(std::map<int, AirportInfo>& airportInfo)
{
    // loop all structures
    for(int i = 0; i < airportInfo.size(); i++ ){
        // START  END  SAVE
        std::transform(airportInfo[i].pos[0], airportInfo[i].pos[1], /*...*/ , airportInfo[i].m_routeLengths.begin(),
                       calculateDistanceBetween);
    }
    std::cout << "Calculate distance for each route" << std::endl;
}

Use std::back_inserter(airportInfo[i].m_routeLengths) (and, if performance is important, reserve the vector sizes in advance) instead of airportInfo[i].m_routeLengths.begin(). Also, iterating by index when nothing enforces that the keys in the map run from 0 to map.size() - 1 is not safe; you should prefer a vector for the shown use case.
I think this is something like what you want:
void calculateDistancePerRoute(std::map<int, AirportInfo>& airportInfo)
{
    for(int i = 0; i < airportInfo.size(); i++ )
    {
        float currentPosX = airportInfo.at(i).pos[0];
        float currentPosY = airportInfo.at(i).pos[1];
        std::transform(airportInfo.begin(), airportInfo.end(), std::back_inserter(airportInfo.at(i).m_routeLengths),
                       [&](const auto& otherAirport)
                       {
                           return calculateDistanceBetween(currentPosX, currentPosY,
                                                           otherAirport.second.pos[0], otherAirport.second.pos[1]);
                       });
    }
}
Example in Godbolt
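If the lengths are meant to follow AirportInfo::m_routes literally (one length per route, looking the destination up by its ID), a sketch along the following lines should also work. It assumes <algorithm> and <iterator> are included, and it reserves the output vector up front, as suggested above:
void calculateDistancePerRoute(std::map<int, AirportInfo>& airportInfo)
{
    for (auto& entry : airportInfo)
    {
        AirportInfo& airport = entry.second;
        airport.m_routeLengths.clear();
        airport.m_routeLengths.reserve(airport.m_routes.size());
        std::transform(airport.m_routes.begin(), airport.m_routes.end(),
                       std::back_inserter(airport.m_routeLengths),
                       [&](const std::pair<int, int>& route)
                       {
                           // route.first holds the ID of the destination airport
                           const AirportInfo& dest = airportInfo.at(route.first);
                           return calculateDistanceBetween(airport.pos[0], airport.pos[1],
                                                           dest.pos[0], dest.pos[1]);
                       });
    }
}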

Related

How to, given UV on a triangle, find XYZ?

I have a triangle, each point of which is defined by a position (X,Y,Z) and a UV coordinate (U,V):
struct Vertex
{
    Vector mPos;
    Point mUV;
    inline Vector& ToVector() {return mPos;}
    inline Point& ToUV() {return mUV;}
};
With this function, I am able to get the UV coordinate at a specific XYZ position:
Point Math3D::TriangleXYZToUV(Vector thePos, Vertex* theTriangle)
{
    Vector aTr1 = theTriangle->ToVector() - (theTriangle+1)->ToVector();
    Vector aTr2 = theTriangle->ToVector() - (theTriangle+2)->ToVector();
    Vector aF1 = theTriangle->ToVector() - thePos;
    Vector aF2 = (theTriangle+1)->ToVector() - thePos;
    Vector aF3 = (theTriangle+2)->ToVector() - thePos;
    float aA  = aTr1.Cross(aTr2).Length();
    float aA1 = aF2.Cross(aF3).Length() / aA;
    float aA2 = aF3.Cross(aF1).Length() / aA;
    float aA3 = aF1.Cross(aF2).Length() / aA;
    Point aUV = (theTriangle->ToUV()*aA1) + ((theTriangle+1)->ToUV()*aA2) + ((theTriangle+2)->ToUV()*aA3);
    return aUV;
}
I attempted to reverse-engineer this to make a function that gets the XYZ coordinate from a specific UV position:
Vector Math3D::TriangleUVToXYZ(Point theUV, Vertex* theTriangle)
{
    Point aTr1 = theTriangle->ToUV() - (theTriangle+1)->ToUV();
    Point aTr2 = theTriangle->ToUV() - (theTriangle+2)->ToUV();
    Point aF1 = theTriangle->ToUV() - theUV;
    Point aF2 = (theTriangle+1)->ToUV() - theUV;
    Point aF3 = (theTriangle+2)->ToUV() - theUV;
    float aA  = gMath.Abs(aTr1.Cross(aTr2)); // NOTE: Point::Cross looks like this: const float Cross(const Point &thePoint) const {return mX*thePoint.mY-mY*thePoint.mX;}
    float aA1 = aF2.Cross(aF3) / aA;
    float aA2 = aF3.Cross(aF1) / aA;
    float aA3 = aF1.Cross(aF2) / aA;
    Vector aXYZ = (theTriangle->ToVector()*aA1) + ((theTriangle+1)->ToVector()*aA2) + ((theTriangle+2)->ToVector()*aA3);
    return aXYZ;
}
This works MOST of the time. However, it seems to exponentially "approach" the right-angled corner of the triangle-- or something. I'm not really sure what's going on except that the result gets wildly inaccurate the closer it gets to the right-angle.
What do I need to do to this TriangleUVtoXYZ function to make it return accurate results?
I haven't tested your implementation, but you only need to compute two parametric coordinates - the third being redundant since they should sum to 1.
Vector Math3D::TriangleUVToXYZ(Point theUV, Vertex* theTriangle)
{
    // T2-T1, T3-T1, P-T1
    Point aTr12 = theTriangle[1].ToUV() - theTriangle[0].ToUV();
    Point aTr13 = theTriangle[2].ToUV() - theTriangle[0].ToUV();
    Point aP1   = theUV - theTriangle[0].ToUV();
    // don't need Abs() for the denominator
    float aA23 = aTr12.Cross(aTr13);
    // parametric coordinates [s,t] with P = T1 + s*(T2-T1) + t*(T3-T1)
    // t = (P-T1)x(T2-T1) / (T3-T1)x(T2-T1)
    // s = (P-T1)x(T3-T1) / (T2-T1)x(T3-T1)
    float aA12 = aP1.Cross(aTr12) / -aA23; // t
    float aA13 = aP1.Cross(aTr13) /  aA23; // s
    // XYZ = V1 + s(V2-V1) + t(V3-V1)
    return theTriangle[0].ToVector()
        + aA13 * (theTriangle[1].ToVector() - theTriangle[0].ToVector())
        + aA12 * (theTriangle[2].ToVector() - theTriangle[0].ToVector());
}
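As a quick sanity check, here is a self-contained sketch of the same parametric computation with minimal stand-ins for Point and Vector (the original classes are not shown, so these types and their operators are assumptions). For the triangle below, UV = (0.3, 0.6) should map to V1 + 0.3*(V2-V1) + 0.6*(V3-V1) = (0.6, 0, 1.8):
#include <cstdio>

struct Point {
    float mX, mY;
    Point operator-(const Point& p) const { return {mX - p.mX, mY - p.mY}; }
    float Cross(const Point& p) const { return mX * p.mY - mY * p.mX; }
};

struct Vector {
    float mX, mY, mZ;
    Vector operator-(const Vector& v) const { return {mX - v.mX, mY - v.mY, mZ - v.mZ}; }
    Vector operator+(const Vector& v) const { return {mX + v.mX, mY + v.mY, mZ + v.mZ}; }
    Vector operator*(float s) const { return {mX * s, mY * s, mZ * s}; }
};

int main()
{
    // Triangle positions and their UV coordinates
    Vector v[3]  = {{0, 0, 0}, {2, 0, 0}, {0, 0, 3}};
    Point  uv[3] = {{0, 0}, {1, 0}, {0, 1}};
    Point  theUV = {0.3f, 0.6f};

    Point aTr12 = uv[1] - uv[0];
    Point aTr13 = uv[2] - uv[0];
    Point aP1   = theUV - uv[0];
    float aA23  = aTr12.Cross(aTr13);
    float s = aP1.Cross(aTr13) / aA23;   // multiplies (V2-V1), aA13 above
    float t = aP1.Cross(aTr12) / -aA23;  // multiplies (V3-V1), aA12 above
    Vector xyz = v[0] + (v[1] - v[0]) * s + (v[2] - v[0]) * t;
    std::printf("s=%.2f t=%.2f xyz=(%.2f, %.2f, %.2f)\n", s, t, xyz.mX, xyz.mY, xyz.mZ);
}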

Using The Dot Product to determine whether an object is on the left hand side or right hand side of the direction of the object

So I currently am trying to create a method which, given a simulated vehicle's position, its direction, and an object's position, will determine whether the object lies on the right hand side or left hand side of that vehicle's direction. This is what I have implemented so far (note that I am in a 2D coordinate system):
This is the code block that uses the method
void Class::leftOrRight()
{
    // Clearing both _lhsCones and _rhsCones vectors
    _rhsCones.clear();
    _lhsCones.clear();
    for (int i = 0; i < _cones.size(); i++)
    {
        if (dotAngleFromYaw(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) > 0)
        {
            _lhsCones.push_back(_cones[i]);
        }
        else
        {
            _rhsCones.push_back(_cones[i]);
        }
    }
    return;
}
This is the code block which computes the angle
double Class::dotAngleFromYaw(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // ensure to normalise vector two
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    double theta = acos((iOne * iTwo) + (jOne * jTwo)); // in radians
    return theta;
}
My issue with this is that dotAngleFromYaw(0,0,0,1,0) = +pi/2 and dotAngleFromYaw(0,0,0,-1,0) = +pi/2 hence the if statements fail to sort the cones.
Any help would be great
*Adjustments made from comment suggestions
I have changed the sort method as follows:
double Class::indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    // Compute the i and j components of the yaw measurement as a unit vector, i.e. vector magnitude = 1
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    // Create the car-to-cone vector
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // ensure to normalise the car-to-cone vector
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    // // Using the transformation matrix with theta = yaw (angle in radians), transform the axis to the augmented 2D space
    // double Ex = cos(yawCar)*iOne - sin(yawCar)*jOne;
    // double Ey = sin(yawCar)*iOne + cos(yawCar)*jOne;
    // Take the cross product of <Ex, 0> x <x', y'> where x', y' have the same location in the simulation space.
    double result = iOne*jTwo - jOne*iTwo;
    return result;
}
However, I am still having issues defining left and right. Note that I have also become aware that objects behind the vehicle are still passed to every evaluation of the array of objects, so I have implemented a dot product check elsewhere that seems to work fine for now, which is why I have not included it here; I can edit the post to include that code. I did try to implement the coordinate-system transformation, but I did not see any improvement compared to leaving the added lines commented out.
Any further feedback is greatly appreciated
If the angle does not matter and you only want to know "left or right", I'd go for another approach.
Set up a plane that has xCar and yCar on its surface. When setting it up, it's up to you how to define the plane's normal, i.e. the side it is facing.
After that you can apply the dot product to determine the 'sign', indicating which side the object is on.
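In 2D, that "plane" is simply the line through the car along its heading, and a convenient normal is the heading rotated 90° to the left. A minimal sketch of the idea (the function name and the left-pointing normal are illustrative choices, not from the original code):
#include <cmath>

// Returns > 0 if the cone is on the left of the car's heading,
// < 0 if it is on the right, and 0 if it lies exactly on the heading line.
double sideOfHeading(double xCar, double yCar, double yawCar,
                     double xCone, double yCone)
{
    // Normal of the heading line, chosen to point to the car's left
    // (the heading itself is (cos(yaw), sin(yaw)) in a CCW coordinate system).
    double nx = -std::sin(yawCar);
    double ny =  std::cos(yawCar);
    // Dot product of that normal with the car-to-cone vector.
    return nx * (xCone - xCar) + ny * (yCone - yCar);
}
Algebraically this is the same test as the cross product described in the next answer, since nx*dx + ny*dy = cos(yaw)*dy - sin(yaw)*dx.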
Note that the dot product does not provide information about the left/right position.
The sign of the dot product says whether the position is ahead or behind.
To get the left/right side, you need to check the sign of the cross product
cross = iOne * jTwo - jOne * iTwo
(note the subtraction and the i/j alternation)
To see the difference between the dot and cross product information:
Quick test. The mathematical coordinate system (CCW) is used (left/right depends on CW/CCW).
BTW, in kinematics simulations it is worth storing the components of the direction vector rather than the angle.
#define _USE_MATH_DEFINES // for C++ (exposes M_PI on MSVC)
#include <cmath>
#include <iostream>

void check_target(float carx, float cary, float dirx, float diry, float tx, float ty) {
    float cross = (tx - carx) * diry - (ty - cary) * dirx;
    float dot = (tx - carx) * dirx + (ty - cary) * diry;
    if (cross >= 0) {
        if (dot >= 0)
            std::cout << "ahead right\n";
        else
            std::cout << "behind right\n";
    }
    else {
        if (dot >= 0)
            std::cout << "ahead left\n";
        else
            std::cout << "behind left\n";
    }
}

int main()
{
    float carx, cary, car_dir_angle, dirx, diry;
    float tx, ty;
    carx = 1;
    cary = 1;
    car_dir_angle = M_PI / 4;
    dirx = cos(car_dir_angle);
    diry = sin(car_dir_angle);
    check_target(carx, cary, dirx, diry, 2, 3);
    check_target(carx, cary, dirx, diry, 2, 1);
    check_target(carx, cary, dirx, diry, 1, 0);
    check_target(carx, cary, dirx, diry, 0, 1);
}
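For the four test targets above, this should print, in order: "ahead left", "ahead right", "behind right", "behind left" (the car sits at (1,1) heading at 45°, so (2,3) is ahead and to its left, (2,1) ahead and to its right, (1,0) behind and to its right, and (0,1) behind and to its left).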

Sorting RHS/LHS Objects in Vehicle Path C++

So I currently am trying to create a method which, given a simulated vehicle's position, its direction, and an object's position, will determine whether the object lies on the right hand side or left hand side of that vehicle's direction. A simple diagram of the problem situation accompanied the original post.
So far I have tried to use the cross product and some other methods to solve the problem; I will include the relevant code blocks here:
void Class::sortCones()
{
    // Clearing both _lhsCones and _rhsCones vectors
    _rhsCones.clear();
    _lhsCones.clear();
    for (int i = 0; i < _cones.size(); i++)
    {
        if (indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) > 0)
        {
            _lhsCones.push_back(_cones[i]);
        }
        if (indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) == 0)
        {
            return;
        }
        else
        {
            _rhsCones.push_back(_cones[i]);
        }
    }
    return;
}
double Class::indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    // Compute the i and j components of the yaw measurement as a unit vector, i.e. vector magnitude = 1
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    // Create the car-to-cone vector
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // ensure to normalise the car-to-cone vector
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    // - old method
    // Using the transformation matrix with theta = yaw (angle in radians), transform the axis to the augmented 2D space
    // Take the cross product of <Ex, 0> x <x', y'> where x', y' have the same location in the simulation space.
    // double Ex = cos(yawCar)*iOne - sin(yawCar)*jOne;
    // double Ey = sin(yawCar)*iOne + cos(yawCar)*jOne;
    double result = iOne*jTwo - jOne*iTwo;
    return result;
}
The car currently just seems to run off in a straight line, and one of the suspect elements is the left/right sorting method. Any direction is GREATLY appreciated.
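One concrete issue in the sortCones() shown above is the if / if / else chain: when indicateSide() returns a positive value, the first if pushes the cone into _lhsCones and the second if's else branch then pushes the same cone into _rhsCones as well, while a result of exactly zero returns out of the whole function. A sketch of the intended control flow, keeping the original member names, might look like this:
void Class::sortCones()
{
    _rhsCones.clear();
    _lhsCones.clear();
    for (std::size_t i = 0; i < _cones.size(); i++)
    {
        double side = indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw);
        if (side > 0)
            _lhsCones.push_back(_cones[i]);   // left of the heading
        else if (side < 0)
            _rhsCones.push_back(_cones[i]);   // right of the heading
        // side == 0: the cone lies exactly on the heading line; skip it
        // (or assign it to one side by convention) instead of returning.
    }
}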

Is there a method to recalculate an equation in terms of a different variable?

I am currently a senior in AP Calculus BC and have taken the challenge of replicating a topic in C++ Qt. This topic covers integrals as area beneath a curve, and rotations of said areas to form a solid model with a definite volume.
I have successfully rotated a custom equation defined as:
double y = abs(qSin(qPow(graphXValue,graphXValue))/qPow(2, (qPow(graphXValue,graphXValue)-M_PI/2)/M_PI))
My question is how to rotate such an equation around the Y-Axis instead of the X-Axis. Are there any methods to approximate the solving of this equation in terms of y instead of x? Are there any current implementations of such a task?
Keep in mind, I am calculating each point for the transformation in a 3D coordinate system:
for (float x = 0.0f; x < t_functionMaxX - t_projectionStep; x += t_projectionStep)
{
    currentSet = new QSurfaceDataRow;
    nextSet = new QSurfaceDataRow;
    float x_pos_mapped = x;
    float y_pos_mapped = static_cast<float>(ui->customPlot->graph(0)->data()->findBegin(static_cast<double>(x), true)->value);
    float x_pos_mapped_ahead = x + t_projectionStep;
    float y_pos_mapped_ahead = static_cast<float>(graph1->data()->findBegin(static_cast<double>(x + t_projectionStep), true)->value);
    QList<QVector3D> temp_points;
    for (float currentRotation = static_cast<float>(-2*M_PI); currentRotation < static_cast<float>(2*M_PI); currentRotation += static_cast<float>((1) * M_PI / 180))
    {
        float y_pos_calculated = static_cast<float>(qCos(static_cast<qreal>(currentRotation))) * y_pos_mapped;
        float z_pos_calculated = static_cast<float>(qSin(static_cast<qreal>(currentRotation))) * y_pos_mapped;
        float y_pos_calculated_ahead = static_cast<float>(qCos(static_cast<qreal>(currentRotation))) * y_pos_mapped_ahead;
        float z_pos_calculated_ahead = static_cast<float>(qSin(static_cast<qreal>(currentRotation))) * y_pos_mapped_ahead;
        QVector3D point(x_pos_mapped, y_pos_calculated, z_pos_calculated);
        QVector3D point_ahead(x_pos_mapped_ahead, y_pos_calculated_ahead, z_pos_calculated_ahead);
        *currentSet << point;
        *nextSet << point_ahead;
        temp_points << point;
    }
    *data << currentSet << nextSet;
    points << temp_points;
}
Essentially, you rotate the vector (x, f(x), 0) around the Y axis, so the Y value remains the same while the X and Z parts vary according to the rotation.
I also replaced all the static_cast<float> parts by explicit invocations of the float constructor, which (I find) reads a bit better.
// Render the upper part, grow from the inside
for (float x = 0.0f; x < t_functionMaxX - t_projectionStep; x += t_projectionStep)
{
    currentSet = new QSurfaceDataRow;
    nextSet = new QSurfaceDataRow;
    float x_pos_mapped = x;
    float y_pos_mapped = float(ui->customPlot->graph(0)->data()->findBegin(double(x), true)->value);
    float x_pos_mapped_ahead = x + t_projectionStep;
    float y_pos_mapped_ahead = float(graph1->data()->findBegin(double(x + t_projectionStep), true)->value);
    QList<QVector3D> temp_points;
    for (float currentRotation = float(-2*M_PI); currentRotation < float(2*M_PI); currentRotation += float((1) * M_PI / 180))
    {
        float x_pos_calculated = float(qCos(qreal(currentRotation))) * x_pos_mapped;
        float z_pos_calculated = float(qSin(qreal(currentRotation))) * x_pos_mapped;
        float x_pos_calculated_ahead = float(qCos(qreal(currentRotation))) * x_pos_mapped_ahead;
        float z_pos_calculated_ahead = float(qSin(qreal(currentRotation))) * x_pos_mapped_ahead;
        QVector3D point(x_pos_calculated, y_pos_mapped, z_pos_calculated);
        QVector3D point_ahead(x_pos_calculated_ahead, y_pos_mapped_ahead, z_pos_calculated_ahead);
        *currentSet << point;
        *nextSet << point_ahead;
        temp_points << point;
    }
    *data << currentSet << nextSet;
    points << temp_points;
}
Next, you need to add the bottom "plate". This is simply a bunch of triangles that connect (0,0,0) with two adjacent points of the rotation of (1,0,0) around the Y axis, just like we did above.
Finally, if f(t_functionmaxX) is not zero, you need to add a side that connects (t_functionmaxX, f(t_functionmaxX), 0) to (t_functionmaxX, 0, 0), again rotating in steps around the Y axis.
Note that this will do weird things if y < 0. How you want to solve that is up to you.
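Here is a sketch of the bottom cap idea, assuming a plain triangle list (QList<QVector3D>) rather than a QSurfaceDataRow; makeBottomCap, R (for example t_functionMaxX) and angleStepDeg are hypothetical names, not part of the original code:
#include <QtMath>
#include <QVector3D>
#include <QList>

// Hypothetical helper: builds the flat cap at y = 0 as a triangle fan around
// the Y axis, returned as consecutive vertex triples (centre, rim a, rim a+step).
QList<QVector3D> makeBottomCap(float R, float angleStepDeg = 1.0f)
{
    QList<QVector3D> tris;
    const QVector3D centre(0.0f, 0.0f, 0.0f);
    for (float a = 0.0f; a < 360.0f; a += angleStepDeg)
    {
        const float a0 = qDegreesToRadians(a);
        const float a1 = qDegreesToRadians(a + angleStepDeg);
        tris << centre
             << QVector3D(R * float(qCos(a0)), 0.0f, R * float(qSin(a0)))
             << QVector3D(R * float(qCos(a1)), 0.0f, R * float(qSin(a1)));
    }
    return tris;
}
The side wall can be built the same way, pairing each rim point (R*cos a, f(t_functionMaxX), R*sin a) with the point directly below it, (R*cos a, 0, R*sin a), for adjacent angles.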

C++/SDL angle from one object to another

I want to rotate a 2-D image in the direction of where I click, in all quadrants. To do this, I need to calculate the angle relative to the object, and for that I need 2 vectors.
I have tried to do this: one vector would be the "click" point; the other would be an "imaginary" horizontal vector departing from the object, with the same X as the "click" point but with the Y of the object. That would serve as the second vector, against which I would measure the angle from the object.
I have made a test program with 3 objects to see if I can get those angles. b6 is the object, b7 is a "click point" approximately 45º from b6, and b8 is another "click point" approximately 135º from b6.
This is the code I'm using:
#define PI 3.14159265
int main(int argc, char** argv) {
    Button b6(100,100);
    Button b7(150,50);
    Button b8(150,150);
    int dot1 = b7.getX() * b7.getX() + b7.getY() * b6.getY();
    int det1 = b7.getX() * b6.getY() - b7.getY() * b7.getX();
    double angle1 = atan2(det1, dot1) * 180/PI;
    int dot2 = b8.getX() * b8.getX() + b8.getY() * b6.getY();
    int det2 = b8.getX() * b6.getY() - b8.getY() * b8.getX();
    double angle2 = atan2(det2, dot2) * 180/PI;
}
The results don't correspond to the actual position of b7 and b8. angle1 is 15.25, and angle2 is -11.31.
I'm a novice in this, and I don't know if what I'm doing is a total mess. Can anyone help me compute these angles?
As Sam already wrote in a comment, it is not clear what the OP wants to achieve with dot and det. It looks a bit like a dot product, but that is not necessary here.
A vector from one point to the other is simply the subtraction of points (point vectors).
Subtraction of point vectors is simply the subtraction of vector components.
Using the components of these vectors in atan2() provides the slope of these vectors:
#include <iostream>
#include <cmath>

const double Pi = 3.14159265;

struct Vec2 {
    const double x, y;
    Vec2(double x, double y): x(x), y(y) { }
    ~Vec2() = default;
    Vec2(const Vec2&) = default;
    Vec2& operator=(const Vec2&) = delete;
};

int main()
{
    const Vec2 b6(100, 100);
    const Vec2 b7(150, 50);
    const Vec2 b8(150, 150);
    // vector b6->b7
    const Vec2 b67(b7.x - b6.x, b7.y - b6.y);
    // vector b6->b8
    const Vec2 b68(b8.x - b6.x, b8.y - b6.y);
    // slope b67
    const double angle1 = atan2(b67.y, b67.x) * 180 / Pi;
    // slope b68
    const double angle2 = atan2(b68.y, b68.x) * 180 / Pi;
    // output
    std::cout
        << "angle1: " << angle1 << '\n'
        << "angle2: " << angle2 << '\n';
    // done
    return 0;
}
Output:
angle1: -45
angle2: 45
Live Demo on coliru
A Sketch of the Vec2 instances: