Extend line to edge of screen - c++

I have a bounding box at (0, 0, w, h), a point (x, y) somewhere within it, and a directional vector (dx, dy) pointing in some arbitrary direction. What I am trying to do is create a line from that point, in that direction, to the edge of the bounding box.
Looking at the image below, the black dot is the point, the arrow is the directional vector and the red line is the resulting line I want.
What I am doing now is simply extending the line by the vector times some arbitrarily large number that is guaranteed to place it outside the box, and then using a line clipping algorithm to clip it. This works, but it feels like a very hacky solution. Is there a better way to do this?

First, how to find the intersection point with a vertical line.
Let (x0, y0) be the point inside the box and (dx, dy) its direction vector, and say you are trying to find the intersection with the vertical line x = b.
Points on the ray have the form (x0 + t*dx, y0 + t*dy) with t >= 0. The ray meets the vertical line where x0 + t*dx = b, so solve for t (t = (b - x0)/dx) and use the same t to get y1 = y0 + t*dy.
Similarly you can find the intersection point with a horizontal line y = b: t = (b - y0)/dy and x1 = x0 + t*dx.
Compute the intersection points with the four edge lines this way. In most cases two of them will have negative t; discard those. Of the others, pick the one with the lowest t, and that's your answer.
Further optimization:
Based on the signs of dx and dy, the ray can only exit through one of two edges. E.g. if both are positive, it can only hit the top or the right side, and so on. So you only need to calculate t for those two edges and pick the one with the lowest t (see the sketch below).
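For illustration, a minimal C++ sketch of this approach (the function name rayExitPoint and the Point struct are mine, not from the question): it evaluates t only for the two edges selected by the signs of dx and dy and returns the exit point.

#include <algorithm>
#include <limits>

struct Point { double x, y; };

// Exit point of the ray (x0, y0) + t * (dx, dy), t >= 0, from the box
// [0, w] x [0, h]. Assumes (x0, y0) lies inside the box and (dx, dy) is
// not the zero vector.
Point rayExitPoint(double x0, double y0, double dx, double dy,
                   double w, double h) {
    double t = std::numeric_limits<double>::infinity();

    // Candidate vertical edge: x = w when moving right, x = 0 when moving left.
    if (dx > 0.0)      t = std::min(t, (w - x0) / dx);
    else if (dx < 0.0) t = std::min(t, (0.0 - x0) / dx);

    // Candidate horizontal edge: y = h or y = 0 depending on the sign of dy.
    if (dy > 0.0)      t = std::min(t, (h - y0) / dy);
    else if (dy < 0.0) t = std::min(t, (0.0 - y0) / dy);

    return { x0 + t * dx, y0 + t * dy };
}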

Related

Intersection between polyline and given horizontal line

How do I find the number of intersections between a polyline and a given horizontal line? The horizontal line starts at x1 and ends at x2.
I think this involves segment trees, but I don't know how to implement it.

The search for a set of points with a minimum sum of lengths to rectangles. What is the algorithm?

Good day.
I have the task of finding the set of points in 2D space for which the sum of the distances to a collection of rectangles is minimal. For example, for two rectangles the result will be the area shown in the picture: any point in this area has the minimum sum of distances to rectangles A and B.
Which algorithm is suitable for finding the region all of whose points have the minimum sum of distances? The number of rectangles can vary and they are placed arbitrarily; they can even overlap each other. The sides of the rectangles are parallel to the coordinate axes and cannot be rotated. The region must be either a rectangle, a line segment, or a point.
Hint:
The distance map of a rectangle (the function that maps any point (x, y) to its closest distance to the rectangle) is made of four slanted planes (slope 45°), four quarter-cones, and the rectangle itself at ground level, forming a continuous surface.
To obtain the global distance map, it "suffices" to sum the distance maps of the individual rectangles. A pretty complex surface will result. Depending on the geometries, the minimum might be achieved on a single vertex, a whole edge or a whole face.
The construction of the global map seems more difficult than that of a line arrangement, due to the conic patches. A very difficult problem in the general case, though the axis-aligned constraint might ease it.
Addition to Yves's answer.
As Yves described, each rectangle divides the plane into 9 parts, each contributing a different distance function to the sum: the middle part (the rectangle itself) contributes 0, the side parts contribute the coordinate distance to that side, and the corner parts contribute the point distance to that corner. With this approach the plane has to be divided into up to 9^n parts, and the distance sum in each part is obtained by adding the appropriate per-rectangle distance functions. That is feasible if the number of rectangles is not too large.
It is probably not necessary to evaluate every part, since it is easy to compute a lower bound on a part's minimum value and check whether the part needs to be examined at all.
I am not sure, but it seems to me that the global distance map is a convex function. If that is the case, then it can be minimized iteratively with an idea similar to linear programming.
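To make the 9-region description concrete, here is a small C++ sketch (the Rect struct and the function names are mine): clamping the coordinate differences reproduces all three cases in one expression, namely zero inside the rectangle, the edge distance in the side regions, and the corner distance in the corner regions, and the global map is just the sum over all rectangles.

#include <algorithm>
#include <cmath>
#include <vector>

struct Rect { double xmin, ymin, xmax, ymax; };

// Distance from (x, y) to an axis-aligned rectangle: zero inside,
// edge distance in the side regions, corner distance in the corner regions.
double distanceToRect(double x, double y, const Rect& r) {
    double dx = std::max({r.xmin - x, 0.0, x - r.xmax});
    double dy = std::max({r.ymin - y, 0.0, y - r.ymax});
    return std::hypot(dx, dy);
}

// Value of the global distance map at (x, y): the sum over all rectangles.
double sumOfDistances(double x, double y, const std::vector<Rect>& rects) {
    double sum = 0.0;
    for (const Rect& r : rects) sum += distanceToRect(x, y, r);
    return sum;
}

Since the distance to a convex set is a convex function and a sum of convex functions is convex, the global map is indeed convex, which supports the iterative-minimization idea above.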

How to split a general closed polygon by a line segment

I need a good (robust) algorithm for splitting a polygon into two sets (left/right) by a line segment. My polygon representation is simply a list of integer coordinates (ordered clockwise, never self-intersecting) and the line segment is represented by a start and end point. The line always starts and ends outside the polygon, i.e. it intersects the polygon an even number of times.
Here is an example:
The output of the algorithm should be the two sets(travelling clock wise):
Left: HABCH, FGDEF
Right: HCDGH, BAB, FEF
I can identify the points A-H by iterating over the polygon and checking whether a polygon segment crosses the line, taking care to respect the border cases. I can also determine which side each multi-line belongs to. I cannot, though, for the life of me, decide how to string these segments together.
Before you suggest a general purpose clipping library: I am using Boost.Polygon, which is very good at clipping polygons against each other, but I haven't found any library that lets you clip a polygon against a line segment, and it is not possible in general to turn the line segment into a polygon which I could clip with.
EDIT: I had missed FEF and the fact that a polygon can have parts on both sides of the line segment.
Ok, here is a rather simple recipe of how to arrive at the answer:
Start with the set of intersection points ordered by traveling the contour clockwise:
ABCDEFGH
Sort them according to their distance from the start of the line:
HCFEDGBA
We also need to remember for each point if it is a left-to-right or right-to-left intersection.
Start with any point, say G. Follow the contour clockwise and add GH to the current polygon.
Now we need to travel along the line. The direction depends on which side of the line we are on. We are on the right side, so we need to pick the value to the right of H in the sorted set: C. Add HC to the current polygon.
Follow the contour clockwise and add CD to the current polygon.
We are on the right side, so we need to pick the value to the right of D in the sorted set: G. Add DG to the current polygon.
We have now reached the starting point, so save the polygon (GHCDG) and remove the used points from the list.
Start over with another point.
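A minimal C++ sketch of the bookkeeping this recipe relies on (all names here are hypothetical, not from the answer): each crossing stores its position along the contour and along the cutting line, plus its crossing direction, so that both orderings used above can be derived by sorting.

#include <algorithm>
#include <vector>

struct Point2D { double x, y; };

// One crossing of the polygon contour with the cutting line.
struct Crossing {
    Point2D p;            // the intersection point itself
    int     contourIndex; // order encountered when walking the contour clockwise
    double  tAlongCut;    // parameter along the cutting segment (0 at its start)
    bool    leftToRight;  // true if the contour crosses the line left-to-right
};

// Crossings in contour order (A, B, C, ... in the example).
std::vector<Crossing> inContourOrder(std::vector<Crossing> cs) {
    std::sort(cs.begin(), cs.end(),
              [](const Crossing& a, const Crossing& b) {
                  return a.contourIndex < b.contourIndex;
              });
    return cs;
}

// The same crossings sorted along the cutting line (H, C, F, ... in the example).
std::vector<Crossing> alongCutLine(std::vector<Crossing> cs) {
    std::sort(cs.begin(), cs.end(),
              [](const Crossing& a, const Crossing& b) {
                  return a.tAlongCut < b.tAlongCut;
              });
    return cs;
}

A piece is then traced by alternately following the contour to the next crossing (contour order) and jumping to the neighbouring crossing in the along-the-line order, in the direction dictated by which side of the line the piece lies on, exactly as in the steps above.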
For each intersection of the polygon border with the line segment:
    Add a new point to the polygon.
    Remember the new points in a new-point set.
Add the original polygon to the polygon set.
For each pair of points in the new-point set:
    For each polygon in the current polygon set:
        If the line segment between the points is completely inside the polygon:
            Replace the polygon in the polygon set with the two polygons
            generated by dividing the original polygon along the line
            segment between the points.
For each polygon in the polygon set:
    Add it to the Left result set or the Right result set.
(Note this may not always be possible. Consider your example with the segment starting between C and F: you will end up with a polygon (GABCFG) that touches both sides of the dividing segment. Is that a Left or a Right?)
I've solved something similar once and I gave up trying to be clever.
Run round all the vertices, making them into connected line segments, starting a new segment with a new point every time you intersect the cutting line.
Find all segments which share an end point and join them back up into one longer one (see the sketch below).
Connect all the open ends.
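Because the question guarantees integer coordinates, the "join segments that share an end point" step can rely on exact comparisons; a rough C++ sketch (all names are mine):

#include <cstddef>
#include <utility>
#include <vector>

using IPoint = std::pair<int, int>;   // integer vertex, compares exactly
using Chain  = std::vector<IPoint>;   // an open run of vertices

// Repeatedly glue chains whose end point matches another chain's start point.
std::vector<Chain> joinChains(std::vector<Chain> chains) {
    bool merged = true;
    while (merged) {
        merged = false;
        for (std::size_t i = 0; i < chains.size() && !merged; ++i) {
            for (std::size_t j = 0; j < chains.size(); ++j) {
                if (i == j) continue;
                if (chains[i].back() == chains[j].front()) {
                    // Append chain j (minus its duplicated first vertex) to chain i.
                    chains[i].insert(chains[i].end(),
                                     chains[j].begin() + 1, chains[j].end());
                    chains.erase(chains.begin() + j);
                    merged = true;
                    break;
                }
            }
        }
    }
    return chains;
}

The chains that remain open afterwards are the ones whose ends lie on the cutting line; connecting those open ends closes each piece.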

Algorithms for Collision Detection between Arbitrarily sized Convex Polygons

I am working on an asteroids clone. Everything is 2D, and written in C++.
For the asteroids, I am generating random N-sided polygons. I have guaranteed that they are convex. I then rotate them, give them a rotspeed, and have them fly through space. It all works, and is very pretty.
For collision, I'm using an algorithm I thought of myself. This is probably a bad idea, and if push comes to shove, I'll probably scrap the whole thing and find a tutorial on the internet.
I've written and implemented everything, and the collision detection works alright.... most of the time. It will randomly fail when there's obviously a collision on screen, and sometimes indicate collision when nothing is touching. Either I have flubbed my implementation somewhere, or my algorithm is horrible. Due to the size/scope of my implementation (over several source files) I didn't want to bother you with that, and just wanted someone to check that my algorithm is, in fact, sound. At that point I can go on a big bug hunt.
Algorithm:
For each asteroid, I have a function that outputs where each vertex should be when drawing it. For each pair of adjacent vertices, I generate the formula of the line they sit on, in y = mx + b form. I then take one of my ship's vertices and test whether that point is inside the asteroid: I plug the point's x coordinate into the line formula and compare the output to the point's actual y value, which tells me whether the point is above or below the line. I then do the same with the center of the asteroid, to determine which side of the line counts as "inside" the asteroid. I repeat this for each pair of vertices. If I ever find a line for which my point is not on the same side as the center of the asteroid, I know there is no collision and exit detection for that point. Since there are 3 points on my ship, I then have to test the next point. If all 3 points exit early, there are no collisions for any of the points on the ship and we're done. If any point is bounded on all sides by the lines made up by the asteroid's edges, then it is inside the asteroid and the collision flag is set.
The two issues I've discovered with this algorithm are that:
it doesn't work on concave polygons, and
it has problems with an edge case where the slope is undefined.
I have made sure all polygons are convex, and have written code to handle the undefined-slope case (with doubles, dividing a non-zero value by 0 yields infinity and 0/0 yields NaN, so it's pretty easy to test for).
So, should this work?
The standard solution to this problem is using the separating axis theorem (SAT). Given two convex polygons, A and B, the algorithm basically goes like this:
for each normal N of the edges of A and B:
    intervalA = [min, max] of projecting A on N
    intervalB = [min, max] of projecting B on N
    if intervalA doesn't overlap intervalB:
        return did not collide
return collided
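A compact C++ sketch of that pseudocode (the types and helper names are mine, not from any particular library):

#include <limits>
#include <vector>

struct Vec2 { double x, y; };

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Project a polygon onto an axis, returning the [lo, hi] interval.
static void project(const std::vector<Vec2>& poly, Vec2 axis,
                    double& lo, double& hi) {
    lo = std::numeric_limits<double>::infinity();
    hi = -lo;
    for (Vec2 v : poly) {
        double p = dot(v, axis);
        if (p < lo) lo = p;
        if (p > hi) hi = p;
    }
}

// Test the edge normals of polygon a; true if any of them separates a and b.
static bool separatedByAxisOf(const std::vector<Vec2>& a,
                              const std::vector<Vec2>& b) {
    for (std::size_t i = 0; i < a.size(); ++i) {
        Vec2 edge   = { a[(i + 1) % a.size()].x - a[i].x,
                        a[(i + 1) % a.size()].y - a[i].y };
        Vec2 normal = { -edge.y, edge.x };   // perpendicular to the edge
        double aLo, aHi, bLo, bHi;
        project(a, normal, aLo, aHi);
        project(b, normal, bLo, bHi);
        if (aHi < bLo || bHi < aLo)          // intervals do not overlap
            return true;
    }
    return false;
}

// SAT test for two convex polygons whose vertices are listed in order.
bool convexPolygonsCollide(const std::vector<Vec2>& a,
                           const std::vector<Vec2>& b) {
    return !separatedByAxisOf(a, b) && !separatedByAxisOf(b, a);
}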
I did something similar to compute polygon intersections, namely finding if a vertex sits within a given polygon.
Your algorithm is sound, and indeed does not work for concave polys. The line representation you chose is also problematic at slopes approaching infinity. I chose to use a couple of vectors for mine, one for the line direction, and one for a reference point on the line. From these, I can easily derive a parameterized equation of the line, and use that in various ways to find intersections with other shapes.
P = S + t * D
Any point P of the line can be characterized by its coordinate t on the line, given the above relation, where S is the reference point and D the direction vector.
This representation lets you easily define which part of the plane is the positive one and which is the negative one (i.e. above and below the line), thanks to the direction's orientation. Any convex region of the plane can then be defined as an intersection of several lines' negative or positive half-planes. So your "point within polygon" algorithm could be slightly changed to use that representation, with the added constraint that all the directions point clockwise, testing for the point being in the negative half-plane of every line (so you don't need the centre of the polygon any more).
The formula to compute the side of a point wrt a line I used is the following:
(xs - xp) * yd - (ys - yp) * xd
The only numerical weak spot here is when point P is very close to S: the expression goes to zero and its sign becomes unreliable.
That representation can be computed from the edge vertices, but in order to get consistent half-planes, you must keep the vertices of your polygon in consecutive order.
For concave polygons the problem is a bit more complicated: briefly, you have to test that the point is between two consecutive convex edges. This can be achieved by checking the coordinate of the point when projected onto the edge, and ensuring it lies between 0 and length(edge) (assuming the direction is normalized). Note that it boils down to checking whether the point belongs to a triangle within the polygon.
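For clarity, here is the same side test written out in C++ (the names are mine); it expands to exactly the expression given above and involves no division, so there is no undefined-slope case:

struct Vec2 { double x, y; };

// Sign of the 2D cross product of the line direction D with (P - S).
// Positive: P lies to the left of the direction; negative: to the right;
// near zero: P is (numerically) on the line.
double sideOfLine(Vec2 S, Vec2 D, Vec2 P) {
    return D.x * (P.y - S.y) - D.y * (P.x - S.x);
}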

How to calibrate intuitive pointing mechanism

I'm trying to implement an intuitive pointing mechanism, where the user would use his hands to just point to an object on-screen. I have most of it ready, except I'm not sure how to write the final part.
Basically, I have a list of calibration points like the following:
typedef struct {
    Point2D pointOnScreen;   // gives an x/y pixel screen position
    Point3D pointingFinger;  // gives the position of the user's pointing finger, in space
    Point3D usersEyes;       // gives the position of the user's eyes, in space
} CalibrationPoint;

std::vector<CalibrationPoint> calibrationPoints;
Now, the idea is that I could use these calibrationPoints to write a function that would look something like this:
Point2D whereIsTheUserPointing(Point3D pointingFinger, Point3D usersEyes) {
    return the corresponding point on screen; // this would need to be calibrated
                                              // somehow using the calibrationPoints
}
But I have trouble figuring out the math of how to do this. The basic idea is that when you're pointing, you're aligning your finger so that your eyes-finger-object you're pointing at are aligned in a straight line. However, since I don't have the position of the screen in 3D, I thought I could instead get the calibration points and deduce where the user is pointing from that. How would I go about writing the whereIsTheUserPointing() function and calibrating the system?
I'm idealizing, but maybe this will be a start:
I assume that you can obtain universal 3D coordinates for the eyes and the tip of the finger.
Three points in 3D space span a plane. If we could determine three points on your screen, we could locate the screen plane in 3D space. To be safe, let's locate all four corners, so we don't just know the plane, but also its boundaries.
Two straight lines in 3D which meet determine a unique point in 3D.
Thus, in order to find the four corners of the screen, produce four pairs of straight lines, two lines through each corner. This could be done by asking the user to point at the four corners, move, and then point at the four corners again.
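A hedged sketch of that corner-locating step (the names are mine): two measured pointing rays will almost never meet exactly, so a common choice is to take the midpoint of the closest points of the two lines, obtained from a small 2x2 system.

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Midpoint of the closest points of two lines S1 + t*D1 and S2 + u*D2.
// Assumes the lines are not parallel (the denominator would be ~0 then).
Vec3 approximateIntersection(Vec3 S1, Vec3 D1, Vec3 S2, Vec3 D2) {
    Vec3 w = sub(S1, S2);
    double a = dot(D1, D1), b = dot(D1, D2), c = dot(D2, D2);
    double d = dot(D1, w),  e = dot(D2, w);
    double denom = a * c - b * b;
    double t = (b * e - c * d) / denom;   // parameter on line 1
    double u = (a * e - b * d) / denom;   // parameter on line 2
    Vec3 p1 = { S1.x + t * D1.x, S1.y + t * D1.y, S1.z + t * D1.z };
    Vec3 p2 = { S2.x + u * D2.x, S2.y + u * D2.y, S2.z + u * D2.z };
    return { (p1.x + p2.x) / 2, (p1.y + p2.y) / 2, (p1.z + p2.z) / 2 };
}

Calling this for the two recorded eye-finger rays through each corner gives an estimate of that corner's 3D position.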
Let the coordinates of the eyes be (a,b,c) and the coordinates of the end of the finger be (x,y,z). You can easily visualise the joining line in 3D. All you need to do now is to extend the line till it intersects the "plane" of your screen.
Parametric coordinates of the line in your case will be:
(a + T(x-a), b + T(y-b), c + T(z-c))
with:
eye at (a,b,c) and finger at (x,y,z).
With T = 0, you get the coordinate of the eye. With T=1 you get the coordinate of the end of the finger. You can "extend" the line with T>1.
Assuming you have the z-coordinate of the plane of the screen, you could easily get the value of T with the following formula:
T = (Z_VALUE_OF_PLANE-c)/(z-c)
Substitute this value of T to get the other two coordinates, X and Y.
The final coordinates on the 2D plane will be:
X = a + ((Z_VALUE_OF_PLANE-c)/(z-c))*(x-a)
Y = b + ((Z_VALUE_OF_PLANE-c)/(z-c))*(y-b)
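Putting the answer together, a hedged C++ sketch of whereIsTheUserPointing(), under two assumptions of mine: the screen lies in a known plane z = screenZ, and a simple affine map from screen-plane coordinates to pixels has been fitted from the calibrationPoints.

struct Point2D { double x, y; };      // stand-ins for whatever Point2D / Point3D
struct Point3D { double x, y, z; };   // the question's code already uses

// Intersect the eye->finger ray with the plane z = screenZ, then map the
// resulting (X, Y) to pixels. The scale/offset pairs are a stand-in for
// whatever mapping the calibration points provide. Assumes the finger and
// the eyes are not at the same depth (finger.z != eyes.z).
Point2D whereIsTheUserPointing(Point3D finger, Point3D eyes,
                               double screenZ,
                               double sx, double ox,   // x scale/offset from calibration
                               double sy, double oy) { // y scale/offset from calibration
    double T = (screenZ - eyes.z) / (finger.z - eyes.z);
    double X = eyes.x + T * (finger.x - eyes.x);
    double Y = eyes.y + T * (finger.y - eyes.y);
    return { sx * X + ox, sy * Y + oy };
}

The scale/offset parameters could be estimated, for example by least squares, by computing (X, Y) for every calibration sample and regressing against its pointOnScreen.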