Rotating a 2D polygon shape algorithm - c++

I have searched Stack Overflow, found this question useful, and learned about 2D shape rotation.
I have the coordinates in the following format:
int x1=-30, x2=-15, x3=20, x4=30;
int my1=-30,y2=-15,y3=0,y4=15,y5=20,y6=30;
and have some center and pivot points like this
int xc=320, yc=240;//Center of the figure
int xp=0, yp=0;//Pivot point for this figure
I used this function to draw the shape
void draw_chair()
{
int loc_xc = xc+xp;
int loc_yc = yc+yp;
line(x2+loc_xc,my1+loc_yc,x2+loc_xc,y5+loc_yc);
line(x3+loc_xc,my1+loc_yc,x3+loc_xc,y5+loc_yc);
line(x2+loc_xc,my1+loc_yc,x3+loc_xc,my1+loc_yc);
line(x2+loc_xc,y2+loc_yc,x3+loc_xc,y2+loc_yc);
line(x2+loc_xc,y3+loc_yc,x3+loc_xc,y3+loc_yc);
line(x1+loc_xc,y4+loc_yc,x4+loc_xc,y4+loc_yc);
line(x2+loc_xc,y3+loc_yc,x1+loc_xc,y4+loc_yc);
line(x3+loc_xc,y3+loc_yc,x4+loc_xc,y4+loc_yc);
line(x1+loc_xc,y4+loc_yc,x1+loc_xc,y6+loc_yc);
line(x4+loc_xc,y4+loc_yc,x4+loc_xc,y6+loc_yc);
}
The problem is that now I am confused about how to compute the rotated x and y values.
I searched Google and found this piece of code to do the rotation:
int tempx=x1;
x1=tempx*cos(angle)-my1*sin(angle);
my1=tempx*sin(angle)+my1*cos(angle);
tempx=x2;
x2=tempx*cos(angle)-y2*sin(angle);
y2=tempx*sin(angle)+y2*cos(angle);
tempx=x3;
x3=tempx*cos(angle)-y3*sin(angle);
y3=tempx*sin(angle)+y3*cos(angle);
tempx=x4;
x4=tempx*cos(angle)-y4*sin(angle);
y4=tempx*sin(angle)+y4*cos(angle);
I tried this, but it did not rotate the shape properly; instead it turned the shape into some other strange shape. Also, I have 4 x points and 6 y points, so how do I compute a new value for each point?
Any idea or hint?
Thanks

You cannot technically rotate a coordinate, as it is just a point with no notion of direction.
The code you found rotates vectors, which is indeed what you'll need, but first you have to treat your coordinates as vectors. You can think of a vector as the invisible line connecting the center of the figure to one of your points, so it consists of two points: in your case you can assume one of them to be (0,0), since you later offset everything by the center of the figure, and the other corresponds to the pairs such as (x2,my1), (x2,y5), ... used in your line-drawing function.
Your code should actually become something like this:
(PS: unless you pass in only perfect angles, you cannot expect the figure to always land on integer coordinates. You would need them to be doubles.)
int point1x, point1y;
point1x = (int) round(x2*cos(angle)-my1*sin(angle));
point1y = (int) round(x2*sin(angle)+my1*cos(angle));
int point2x, point2y;
point2x = (int) round(x2*cos(angle)-y5*sin(angle));
point2y = (int) round(x2*sin(angle)+y5*cos(angle));
...
line(point1x+loc_xc, point1y+loc_yc, point2x+loc_xc, point2y+loc_yc);
and so on.
Also, make sure your angle value is in radians, as both sin() and cos() functions assume that. If you are passing down degrees, convert them to radians first with the following formula:
double pi = acos(-1);
double rotation_angle = (double) angle / 180.0 * pi;
and use rotation_angle instead of angle in the code above.
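Putting the pieces together, a minimal helper for rotating one (x, y) pair around the figure's local origin might look like this (a sketch, not the asker's original code; every pair that draw_chair() passes to line() would go through it):
#include <cmath>

// Rotate the point (x, y) around (0,0) by 'angle' radians and round back to ints.
void rotatePoint(int x, int y, double angle, int &outX, int &outY)
{
    outX = (int) std::round(x * std::cos(angle) - y * std::sin(angle));
    outY = (int) std::round(x * std::sin(angle) + y * std::cos(angle));
}

// Usage, mirroring the first line() call in draw_chair():
// int ax, ay, bx, by;
// rotatePoint(x2, my1, rotation_angle, ax, ay);
// rotatePoint(x2, y5,  rotation_angle, bx, by);
// line(ax + loc_xc, ay + loc_yc, bx + loc_xc, by + loc_yc);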

Related

Find the distance between a 3D point and an Orientated Ellipse in 3D space (C++)

To give some background to this question, I'm creating a game that needs to know whether the 'Orbit' of an object is within tolerance to another Orbit. To show this, I plot a Torus-shape with a given radius (the tolerance) using the Target Orbit, and now I need to check if the ellipse is within that torus.
I'm getting lost in the equations on Math Stack Exchange, so I'm asking for a more specific solution. For clarification, here's an image of the game with the Torus and an Orbit (the red line). Quite simply, I want to check if that red orbit is within that Torus shape.
What I believe I need to do, is plot four points in World-Space on one of those orbits (easy enough to do). I then need to calculate the shortest distance between that point, and the other orbits' ellipse. This is the difficult part. There are several examples out there of finding the shortest distance of a point to an ellipse, but all are 2D and quite difficult to follow.
If that distance is then less than the tolerance for all four points, then I think that equates to the orbit being inside the target torus.
For simplicity, the origin of all of these orbits is always at the world Origin (0, 0, 0) - and my coordinate system is Z-Up. Each orbit has a series of parameters that defines it (Orbital Elements).
Here is a simple approach:
Sample each orbit into a set of N points.
Let the points from the first orbit be A and the points from the second orbit be B.
const int N=36;
float A[N][3],B[N][3];
Find the 2 closest points, so that d=|A[i]-B[j]| is minimal (a brute-force sketch of this search is shown below). If d is less than or equal to your margin/threshold then the orbits are too close to each other.
speed vs. accuracy
Unless you are using some advanced method for #2, its computation will be O(N^2), which is a bit scary. The bigger the N, the better the accuracy of the result, but also the more time it takes to compute. There are ways to remedy both. For example:
first sample with a small N
when you have found the closest points, sample both orbits again, but only near those points in question (with a higher N)
you can recursively increase accuracy by looping #2 until you have the desired precision
Test d to decide whether the ellipses are too close to each other.
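The brute-force closest-point search from #2 might look like this (a plain C++ sketch, assuming the A and B arrays declared above have already been filled by the sampler):
#include <cmath>
#include <cfloat>

// O(N*N) search for the smallest distance between the two sampled orbits.
float closestDistance(float A[][3], float B[][3], int n)
{
    float best = FLT_MAX;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            float dx = A[i][0] - B[j][0];
            float dy = A[i][1] - B[j][1];
            float dz = A[i][2] - B[j][2];
            float d = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (d < best) best = d;
        }
    return best; // compare this against your margin/threshold
}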
I think I may have a new solution.
Plot the four points on the current orbit (the ellipse).
Project those points onto the plane of the target orbit (the torus).
Using the Target Orbit inclination as the normal of a plane, calculate the angle between each (normalized) point and the argument of periapse on the target orbit.
Use this angle as the mean anomaly, and compute the equivalent eccentric anomaly.
Use those eccentric anomalies to plot the four points on the target orbit - which should be the nearest points to the other orbit.
Check the distance between those points.
The difficulty here comes from computing the angle and converting it to the anomaly on the other orbit. This should be more accurate and faster than a recursive function though. Will update when I've tried this.
EDIT:
Yep, this works!
// The Four Locations we will use for the checks
TArray<FVector> CurrentOrbit_CheckPositions;
TArray<FVector> TargetOrbit_ProjectedPositions;
CurrentOrbit_CheckPositions.SetNum(4);
TargetOrbit_ProjectedPositions.SetNum(4);
// We first work out the plane of the target orbit.
const FVector Target_LANVector = FVector::ForwardVector.RotateAngleAxis(TargetOrbit.LongitudeAscendingNode, FVector::UpVector); // Vector pointing to Longitude of Ascending Node
const FVector Target_INCVector = FVector::UpVector.RotateAngleAxis(TargetOrbit.Inclination, Target_LANVector); // Vector pointing up the inclination axis (orbit normal)
const FVector Target_AOPVector = Target_LANVector.RotateAngleAxis(TargetOrbit.ArgumentOfPeriapsis, Target_INCVector); // Vector pointing towards the periapse (closest approach)
// Geometric plane of the orbit, using the inclination vector as the normal.
const FPlane ProjectionPlane = FPlane(Target_INCVector, 0.f); // Plane of the orbit. We only need the 'normal', and the plane origin is the Earths core (periapse focal point)
// Plot four points on the current orbit, using an equally-divided eccentric anomaly.
const float ECCAngle = PI / 2.f;
for (int32 i = 0; i < 4; i++)
{
// Plot the point, then project it onto the plane
CurrentOrbit_CheckPositions[i] = PosFromEccAnomaly(i * ECCAngle, CurrentOrbit);
CurrentOrbit_CheckPositions[i] = FVector::PointPlaneProject(CurrentOrbit_CheckPositions[i], ProjectionPlane);
// TODO: Distance from the plane is the 'Depth'. If the Depth is > Acceptance Radius, we are outside the torus and can early-out here
// Normalize the point to find its direction in world-space (origin in our case is always 0,0,0)
const FVector PositionDirectionWS = CurrentOrbit_CheckPositions[i].GetSafeNormal();
// Using the Inclination as the comparison plane - find the angle between the direction of this vector, and the Argument of Periapse vector of the Target orbit
// TODO: we can probably compute this angle once, using the Periapse vectors from each orbit, and just multiply it by the Index 'I'
float Angle = FMath::Acos(FVector::DotProduct(PositionDirectionWS, Target_AOPVector));
// Compute the 'Sign' of the Angle (-180.f - 180.f), using the Cross Product
const FVector Cross = FVector::CrossProduct(PositionDirectionWS, Target_AOPVector);
if (FVector::DotProduct(Cross, Target_INCVector) > 0)
{
Angle = -Angle;
}
// Using the angle directly will give us the position at the eccentric anomaly. We want to take advantage of the Mean Anomaly, and use it as the ecc anomaly
// We can use this to plot a point on the target orbit, as if it was the eccentric anomaly.
Angle = Angle - TargetOrbit.Eccentricity * FMathD::Sin(Angle);
TargetOrbit_ProjectedPositions[i] = PosFromEccAnomaly(Angle, TargetOrbit);
}
I hope the comments describe how this works. Finally solved after several months of head-scratching. Thanks all!
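PosFromEccAnomaly isn't shown in the post; purely as an illustration (not the poster's implementation), a helper like it could compute the in-plane position of a point on the ellipse from an eccentric anomaly, assuming the orbit stores a semi-major axis a and eccentricity e, with the focus at the origin and the rotation into world space applied afterwards:
#include <cmath>

struct Vec3 { float x, y, z; };

// E = eccentric anomaly in radians, a = semi-major axis, e = eccentricity.
// Returns the point in the orbit's own plane with the focus at the origin;
// the LAN/inclination/AoP rotation would be applied on top of this.
Vec3 PosFromEccAnomalyInPlane(float E, float a, float e)
{
    Vec3 p;
    p.x = a * (std::cos(E) - e);                        // along the periapsis direction
    p.y = a * std::sqrt(1.0f - e * e) * std::sin(E);    // perpendicular, in-plane
    p.z = 0.0f;
    return p;
}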

Rotate 2D vector with glm proper

In a game I'm making, guns have a spread (float) and I want to give each bullet's angle a random value from range [-spread, spread]. For this I thought I could use glm::rotate, but the problem is that the bullets spread in almost every direction.
The code I use is:
void Gun::fire(const glm::vec2& direction, const glm::vec2& position, std::vector<Bullet>& bullets) {
static std::mt19937 randomEngine(time(nullptr));
// For offsetting the accuracy
std::uniform_real_distribution<float> randRotate(-_spread, _spread);
for (int i = 0; i < _bulletsPerShot; i++) {
// Add a new bullet
bullets.emplace_back(position,
glm::rotate(direction, randRotate(randomEngine)),
_bulletDamage,
_bulletSpeed);
}
}
(At the top I included vector and glm/gtx/rotate_vector.hpp)
I don't recall if GLM uses Radians or Degrees for calculating rotation, but 2 Radians is nearly a third of a full circle, which means that bullets will vary in direction by as much as 2 thirds of a whole circle. You may wish to test with smaller numbers, or else verify that GLM does indeed use Degrees to calculate rotation.
EDIT: In the most recent version of GLM, I looked through the source code. There's a commented out version of Rotate that explicitly converts Degrees to Radians, but the accessible source code has no such explicit conversion. So I'm left to presume that it is expecting Radians, not Degrees, as your inputs for Rotation.
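If _spread is meant to be in degrees, one possible fix (a sketch using the names from the question) is to convert at the call site with glm::radians:
#include <glm/glm.hpp>
#include <glm/gtx/rotate_vector.hpp>

// Inside Gun::fire, assuming _spread is stored in degrees:
bullets.emplace_back(position,
                     glm::rotate(direction, glm::radians(randRotate(randomEngine))),
                     _bulletDamage,
                     _bulletSpeed);
Alternatively, keep _spread itself in radians (e.g. a few hundredths of a radian) and pass the drawn value straight through.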

Best way to get the nearest intersection point in a grid

I'm using Cocos2D for iPhone to build a game. I have a grid on the screen drawn with horizontal and vertical lines (I did it with CCDrawNode). As you might guess, there are lots of intersection points, i.e. the points where the horizontal and vertical lines intersect. With every touchBegan-Moved-Ended routine I draw a line, a bolder line in a different color. In the touchesMoved method I need to find the intersection point nearest to the current end point of the line and snap the line's end to that point. How can I do that? One idea I have is to add all the intersection points to an array when drawing the grid, then iterate through that array to find the closest one. But I don't think this is the best approach. Do you have any better ideas?
Assuming it is a normal grid with evenly spaced lines (e.g. every 10 pixels apart), you are much better off using a formula to tell you where an intersection should be.
E.g. given an end point X/Y of 17,23: x(17)/x-spacing(10) = 1.7, which rounds to 2, and 2*x-spacing = 20. Likewise y(23)/y-spacing(10) = 2.3, which rounds to 2, and 2*y-spacing = 20. Thus your intersection is 20,20.
EDIT: here's a more detailed example, in C# as that's what I use; if I get time I'll write an Objective-C sample.
// defined somewhere and used to draw the grid
private int _spacingX = 10;
private int _spacingY = 10;
public Point GetNearestIntersection(int x, int y)
{
// round off to the nearest vertical/horizontal line number
double tempX = Math.Round((double)x / _spacingX);
double tempY = Math.Round((double)y / _spacingY);
// convert back to pixels
int nearestX = (int)tempX * _spacingX;
int nearestY = (int)tempY * _spacingY;
return new Point(nearestX, nearestY);
}
NOTE: the code above is left quite verbose to help you understand, you could easily re-write it to be cleaner
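The answer mentions a possible Objective-C version; in the meantime, here is the same rounding idea sketched in plain C++ (the spacing values are assumptions matching the C# sample):
#include <cmath>

struct Point { int x; int y; };

const int spacingX = 10; // grid spacing used when drawing the lines
const int spacingY = 10;

Point getNearestIntersection(int x, int y)
{
    // Round to the nearest line index, then scale back to pixels.
    int nearestX = (int) std::lround((double) x / spacingX) * spacingX;
    int nearestY = (int) std::lround((double) y / spacingY) * spacingY;
    return { nearestX, nearestY };
}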

How to properly move the camera in the direction it's facing

I'm trying to figure out how to make the camera in directx move based on the direction it's facing.
Right now the way I move the camera is by passing the camera's current position and rotation to a class called PositionClass. PositionClass takes keyboard input from another class called InputClass and then updates the position and rotation values for the camera, which is then passed back to the camera class.
I've written some code that seems to work great for me; using the camera's pitch and yaw I'm able to get it to go in the direction I've pointed the camera.
However, when the camera is looking straight up (pitch=90) or straight down (pitch=-90), it still changes the camera's X and Z position (depending on the yaw).
The expected behavior is while looking straight up or down it will only move along the Y axis, not along the X or Z axis.
Here's the code that calculates the new camera position
void PositionClass::MoveForward(bool keydown)
{
float radiansY, radiansX;
// Update the forward speed movement based on the frame time
// and whether the user is holding the key down or not.
if(keydown)
{
m_forwardSpeed += m_frameTime * m_acceleration;
if(m_forwardSpeed > (m_frameTime * m_maxSpeed))
{
m_forwardSpeed = m_frameTime * m_maxSpeed;
}
}
else
{
m_forwardSpeed -= m_frameTime * m_friction;
if(m_forwardSpeed < 0.0f)
{
m_forwardSpeed = 0.0f;
}
}
// ToRadians() just multiplies degrees by 0.0174532925f
radiansY = ToRadians(m_rotationY); //yaw
radiansX = ToRadians(m_rotationX); //pitch
// Update the position.
m_positionX += sinf(radiansY) * m_forwardSpeed;
m_positionY += -sinf(radiansX) * m_forwardSpeed;
m_positionZ += cosf(radiansY) * m_forwardSpeed;
return;
}
The significant portion is where the position is updated at the end.
So far I've only been able to deduce that I have horrible math skills.
So, can anyone help me with this dilemma? I've created a fiddle to help test out the math.
Edit: The fiddle uses the same math I used in my MoveForward function; if you set pitch to 90 you can see that the Z axis is still being modified.
Thanks to Chaosed0's answer, I was able to figure out the correct formula to calculate movement in a specific direction.
The fixed code below is basically the same as above but now simplified and expanded to make it easier to understand.
First we determine the amount by which the camera will move; in my case this was m_forwardSpeed, but here I will define it as offset.
float offset = 1.0f;
Next you will need to get the camera's X and Y rotation values (in degrees!)
float pitch = camera_rotationX;
float yaw = camera_rotationY;
Then we convert those values into radians
float pitchRadian = pitch * (PI / 180); // X rotation
float yawRadian = yaw * (PI / 180); // Y rotation
Now here is where we determine the new position:
float newPosX = offset * sinf( yawRadian ) * cosf( pitchRadian );
float newPosY = offset * -sinf( pitchRadian );
float newPosZ = offset * cosf( yawRadian ) * cosf( pitchRadian );
Notice that we only multiply the X and Z positions by the cosine of pitchRadian; this cancels out the yaw's contribution when the camera is looking straight up (90) or straight down (-90).
And finally, you need to tell your camera the new position, which I won't cover because it largely depends on how you've implemented your camera. Apparently doing it this way is out of the norm, and possibly inefficient. However, as Chaosed0 said, it's what makes the most sense to me!
To be honest, I'm not entirely sure I understand your code, so let me try to provide a different perspective.
The way I like to think about this problem is in spherical coordinates, basically just polar in 3D. Spherical coordinates are defined by three numbers: a radius and two angles. One of the angles is yaw, and the other should be pitch, assuming you have no roll (I believe there's a way to get phi if you have roll, but I can't think of how currently). In conventional mathematics notation, theta is your yaw and phi is your pitch, with radius being your move speed, as shown below.
Note that phi and theta are defined differently, depending on where you look.
Basically, the problem is to obtain a point m_forwardSpeed away from your camera, with the right pitch and yaw. To do this, we set the "origin" to your camera position, obtain a spherical coordinate, convert it to cartesian, and then add it to your camera position:
float radius = m_forwardSpeed;
float theta = m_rotationY;
float phi = m_rotationX;
//These equations are from the wikipedia page, linked above
float xMove = radius*sinf(phi)*cosf(theta);
float yMove = radius*sinf(phi)*sinf(theta);
float zMove = radius*cosf(phi);
m_positionX += xMove;
m_positionY += yMove;
m_positionZ += zMove;
Of course, you can condense a lot of this code, but I expanded it for clarity.
You can think about this like drawing a sphere around your camera. Each of the points on the sphere is a potential position in the next timestep, depending on the camera's rotation.
This is probably not the most efficient way to do it, but in my opinion it's certainly the easiest way to think about it. It actually looks like this is nearly exactly what you're trying to do in your code, but the operations on the angles are just a little bit off.
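To connect the two answers (a consolidation sketch, not from either post): if phi is measured down from the up axis, then phi = 90° - pitch, so sin(phi) = cos(pitch), which is exactly the cos(pitch) scaling applied to the horizontal components in the fixed code above.
#include <cmath>

// Degrees in, matching the question's conventions.
void moveForward(float yawDeg, float pitchDeg, float speed,
                 float &posX, float &posY, float &posZ)
{
    const float toRad = 3.14159265f / 180.0f;
    const float yaw = yawDeg * toRad;
    const float pitch = pitchDeg * toRad;

    posX += speed * std::sin(yaw) * std::cos(pitch);   // horizontal part scaled by cos(pitch) = sin(phi)
    posY += speed * -std::sin(pitch);                  // vertical part; sign follows the question's pitch convention
    posZ += speed * std::cos(yaw) * std::cos(pitch);
}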

2d rotation around point

I'm trying to add some features to my editor that allow the user to do things like dragging a selected item in a given direction, although I've run into an issue.
This is my code :
//Origin
double objectx = selection->getX();
double objecty = selection->getY();
//Point
double pointerx = input->getMouseX();
double pointery = input->getMouseY();
//Displacement
double displacementx = fabs(pointerx - objectx);
double displacementy = fabs(pointery - objecty);
//Angle
double angle = atan2(displacementy,displacementx);
//Point
double pointx = displacementx * cosf(angle) + displacementy * sinf(angle);
double pointy = displacementy * cosf(angle) - displacementx * sinf(angle);
//Final position
double fx = objectx + pointx;
double fy = objecty + pointy;
//Save alpha
const bool alpha = graphics->getAlpha();
//Draw selection
graphics->setAlpha(false);
graphics->color(selection->getColor());
graphics->renderQd(selection->getBitmap(),
CRect(objectx,
objecty,
selection->getWidth(),
selection->getHeight()));
//Draw pointer around selection
graphics->setAlpha(true);
graphics->color(editor::ssImg[0]->getColor());
graphics->renderQd(editor::ssImg[0]->getBitmap(),
CRect(objectx + pointx,
objecty + pointy,
editor::ssImg[0]->getWidth(),
editor::ssImg[0]->getHeight()));
//Restore alpha
graphics->setAlpha(alpha);
The exact issue is that the selection pointer doesn't just follow the mouse's rotation; it's actually at the mouse's position(!).
The wanted behaviour is a pointer locked at the selection's offset but pointing at the mouse's angle.
Does anyone good at math see anything wrong here?
As I understand it, your wanted behavior presumes the existence of three points: an origin around which you're rotating, a "mouse" to provide the direction relative to the origin, and a "selection" to provide the distance from the origin. (Somewhat confusingly, the result of your code is the "selection pointer". I take it that "selection" means the original position of the selected object, and "selection pointer" means the position it's been dragged to so far?)
Your code, however, only actually refers to two of those points: (objectx, objecty), which I assume is the origin, and (pointerx, pointery), which I assume is the "mouse". Your code never refers to the "selection"; so, naturally, the "selection" has no effect on the result of your code.
There are a few other problems — Oli Charlesworth points out, in a comment above, that you're wrongly dividing your angle by π/180, which means that you apply a very small rotation (which is why it looks like you end up with selection pointer = mouse; in fact, they can be up to a few degrees apart relative to the origin of rotation, but that's not instantly noticeable) — but rather than fix those problems, I'd recommend you change your approach. Instead of generating "selection pointer" by rotating "selection" to match the angle of "mouse", I recommend that you generate it by scaling "mouse" to match the magnitude of "selection". The math for this is more straightforward, IMHO.
If you do want to stick with the approach of generating "selection pointer" by rotating "selection" to match the angle of "mouse", then you have two main things to fix. Your current code rotates "mouse" by the current angle of "mouse". Part of the fix is to rotate "selection" instead; the other part of the fix is to rotate by the difference between the current angles of "mouse" and "selection".
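As a rough sketch of the recommended scaling approach (selx and sely are hypothetical names for the selected item's original offset from the origin, which the question's code never actually reads):
#include <cmath>

// Keep the selection's distance from the origin, but point it in the
// mouse's direction. selx/sely are the selected item's original offset
// from (objectx, objecty); the result goes where pointx/pointy were used.
void pointerFromMouse(double objectx, double objecty,
                      double pointerx, double pointery,
                      double selx, double sely,
                      double &pointx, double &pointy)
{
    double dx = pointerx - objectx;
    double dy = pointery - objecty;
    double mouseDist = std::sqrt(dx * dx + dy * dy);
    double selDist = std::sqrt(selx * selx + sely * sely);

    if (mouseDist > 0.0)
    {
        pointx = dx / mouseDist * selDist;   // mouse direction...
        pointy = dy / mouseDist * selDist;   // ...at the selection's distance
    }
    else
    {
        pointx = selx;                       // mouse exactly on the origin: keep the old offset
        pointy = sely;
    }
}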