Find a point inside a rotated rectangle - c++

Ok so, this should be super simple, but I'm not a smart man. Technically I want to know whether a point resides inside a rectangle, but the rectangle can be in different states. In my current context, when I want to draw a rectangle rotated by, let's say, 45° clockwise, what I do is rotate the entire x,y axis centered at the top-left corner of the rectangle and then draw the rectangle as if nothing had happened. The same goes if I want to draw the rectangle at a random coordinate. Given that it is the coordinate system that gets tossed and rotated, the rectangle always thinks it's being drawn at (0,0) with 0°; therefore, the best way to find whether a given point is inside the rectangle would be to find the projection of the point based on the translation + rotation of the rectangle. But I have no idea how to do that.
This is what I currently do in order to find out if a point is inside a rectangle (not taking into consideration rotation):
bool Image::isPointInsideRectangle(int x, int y, const ofRectangle & rectangle){
    return x - xOffset >= rectangle.getX() && x - xOffset <= rectangle.getX() + rectangle.getWidth() &&
           y - yOffset >= rectangle.getY() && y - yOffset <= rectangle.getY() + rectangle.getHeight();
}
I already have angleInDegrees stored; if I could use it to project the (x,y) point I receive, I should be able to find out whether the point is inside the rectangle.
Cheers!
Axel

The easiest way is to un-rotate x,y in the reverse direction relative to the origin and rotation of the rectangle.
For example, if angleInDegrees is 45 degrees, you would rotate the point to test by -45 degrees (or 315 degrees if your rotation routine only allows positive rotations). This will plot x,y in the same coordinate system as the unrotated rectangle.
Then, you can use the function you already provided to test whether the point is within the rectangle.
Note that prior to rotating x,y, you will probably need to adjust the x,y relative to the point of rotation - the upper-left corner of the rectangle - since the rotation is relative to that point rather than the overall coordinate origin (0,0). You can compute the difference between x,y and the upper-left corner of your rectangle (which won't change during rotation), rotate the adjusted point by -angleToRotate, then add the origin point difference back into the unrotated point to get absolute coordinates in your coordinate system.

Edited:
#include <cmath>

bool Image::isPointInsideRectangle(int x, int y, const ofRectangle & rectangle){
    // Un-rotate the point about the rotation origin (xOffset, yOffset):
    // rotating by -angleInDegrees puts it back in the rectangle's unrotated frame.
    const double rad = -angleInDegrees * M_PI / 180.0;
    const double px = x - xOffset, py = y - yOffset;
    const double rx = px * std::cos(rad) - py * std::sin(rad);
    const double ry = px * std::sin(rad) + py * std::cos(rad);
    return rx >= rectangle.getX() && rx <= rectangle.getX() + rectangle.getWidth() &&
           ry >= rectangle.getY() && ry <= rectangle.getY() + rectangle.getHeight();
}

As you have already said, you can translate the coordinates of your point into the space of the rectangle. This is a common task in many software products that work with geometry. Each object has its own coordinate space and works as if it were at position (0, 0) without rotation. If your rectangle is at position v and rotated by an angle b (degrees or radians, as your functions expect), then you can translate your point P into the space of the rectangle with the following formula:
| cos(-b)  -sin(-b) |   | P_x - v_x |
|                   | ⋅ |           |
| sin(-b)   cos(-b) |   | P_y - v_y |
Many of the most important transformations can be represented as matrices, at least if you are using homogeneous coordinates, and it is very common to do so. Depending on the complexity and the goals of your program, you could consider using a math library like glm and keeping the transformations of your objects in the form of matrices. Then you could write something like inverse(rectangle.transformation()) * point to get the point translated into the space of the rectangle.
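If you go the matrix route, here is a minimal sketch of that idea, assuming GLM and a hypothetical transformation() accessor that returns the rectangle's model matrix (translation times rotation), with the rectangle spanning (0,0) to (width,height) in its own space:
#include <glm/glm.hpp>

bool isPointInsideRectangle(const glm::vec2 & p, const glm::mat4 & transform,
                            float width, float height) {
    // Map the world-space point into the rectangle's local space.
    glm::vec4 local = glm::inverse(transform) * glm::vec4(p.x, p.y, 0.0f, 1.0f);
    // In local space the rectangle is axis-aligned at the origin.
    return local.x >= 0.0f && local.x <= width &&
           local.y >= 0.0f && local.y <= height;
}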

Related

Ellipse rotated not centered

I am trying to draw a rotated ellipse not centered at the origin (in c++).
So far my code "works":
for (double i = 0; i <= 360; i = i + 1) {
    theta = i * pi / 180;
    // Point on the axis-aligned ellipse, centred on the centroid.
    x = (polygonList[compt]->a_coeff / 2) * sin(theta) + polygonList[compt]->centroid->datapointx;
    y = (polygonList[compt]->b_coeff / 2) * cos(theta) + polygonList[compt]->centroid->datapointy;
    // Rotate that point about the centroid by angle1.
    xTmp = (x - polygonList[compt]->centroid->datapointx) * cos(angle1) - (y - polygonList[compt]->centroid->datapointy) * sin(angle1) + polygonList[compt]->centroid->datapointx;
    yTmp = (x - polygonList[compt]->centroid->datapointx) * sin(angle1) + (y - polygonList[compt]->centroid->datapointy) * cos(angle1) + polygonList[compt]->centroid->datapointy;
}
polygonList is a list of "blocs", each of which will be replaced by an ellipse of the same area.
My issue is that the angles are not quite exact: if I were to fit a protractor to the shape of my ellipse, the protractor would obviously get squeezed, and so would the angles (is that clear?).
Here is an example: I am trying to set a point on the top ellipse (E1) which would lie on a line drawn between the centroid of E1 and any point on the second ellipse (E2). In this example, the point on E2 lies at an angle of ~220-230 degrees. I am able to capture this angle, and the angle seems fine.
The problem is that if I try to project this point onto E1 using this angle of ~225 degrees, I end up on the second red circle on top. It looks like my angle is now ~265 degrees, but in fact, if I shape the protractor to fit my ellipse, I get the right angle (~225, cf. img 2).
It is a bit hard to see the angle on that re-shaped protractor, but it does show ~225 degrees.
My conclusion is that the ellipse is drawn as if I had drawn a circle and then compressed it, which changes the spacing between the angles.
Could someone tell me how I could fix that?
PS: to draw those ellipses I just use a for loop which plots a dot at every angle (from 0 to 360). We can clearly see in the first picture that the distance between the dots differs depending on whether we are at 0 or at 90 degrees.
Your parametrisation is exactly that: a circle is just the special case of an ellipse whose axes are equal. It sounds like you need to use the rational representation of an ellipse instead of the standard one: https://en.m.wikipedia.org/wiki/Ellipse
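If you want to keep the standard parametrisation instead, you can convert the geometric (protractor) angle into the matching parameter before plotting. A minimal sketch, assuming the usual form (a*cos(t), b*sin(t)) with semi-axes a and b (note the code above swaps sin and cos, so adapt accordingly):
#include <cmath>

// Returns the parameter t such that (a*cos(t), b*sin(t)) really lies at
// polar angle phi from the ellipse centre: tan(phi) = (b/a) * tan(t).
double parametricAngle(double a, double b, double phi) {
    return std::atan2(a * std::sin(phi), b * std::cos(phi));
}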
So, I asked the question above so that I could find a possible overlap between 2 ellipses by checking the distance between any point on E2 and its projection on E1: if the distance between the centroid of E1 and the projected dot on E1 is larger than the distance from the centroid of E1 to a dot on E2, I'll assume an overlap. I reckon this solution has never been tried (or I haven't searched enough) and should work fine. But before it could work I needed to get those angles right.
I have found a way to avoid using angles and projected dots, by using the foci:
the sum of the distances from focus A and focus B to any point on the ellipse is constant (let's call it DE1 for E1).
I then check the sum of the distances from my foci to any point on E2. If that sum becomes less than DE1, I'll assume a connection.
So far it seems to work fine :)
I'll put that here for anyone in need.
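For reference, a minimal sketch of that focal-distance test; assumed inputs are the two foci f1 and f2 of E1 and the constant sum DE1 (which equals E1's major-axis length):
#include <cmath>

struct Point { double x, y; };

double dist(const Point & p, const Point & q) {
    return std::hypot(p.x - q.x, p.y - q.y);
}

// A point p is inside E1 exactly when the sum of its distances
// to the two foci is at most DE1.
bool insideE1(const Point & p, const Point & f1, const Point & f2, double DE1) {
    return dist(p, f1) + dist(p, f2) <= DE1;
}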
Flo

Direction to rotation angles in 3D?

Suppose I have this chicken model that I want to constantly look towards the viewer (camera position), or, more simply, towards the origin (0,0,0).
How do I calculate the angles for each axis so that I can rotate the object with them?
Edit:
Sorry if my question was too general. I'm still struggling with this though.
Let's say that the 3D model position is (x,y,z) in model space, and I want the model to "look" towards the origin.
My first thoughts were to begin to rotate around the x axis (rotate vertically):
Consider the yellow circle as the y plane.
So I tried the following code, which doesn't rotate the model at all.
glm::vec3 camPos = camera.GetPosition();
float value = camPos.y / glm::sqrt(glm::pow(camPos.x, 2.0f) + glm::pow(camPos.y, 2.0f) + glm::pow(camPos.z, 2.0f));
float angle = glm::asin(value);
cow.SetModelMatrix(glm::translate(camPos - glm::vec3(0, 0, 1.5)) *        // then translate so the cow will appear a little bit in front of the camera
                   glm::rotate(glm::radians(angle), glm::vec3(-1, 0, 0)) * // then rotate vertically by the angle
                   glm::scale(glm::vec3(0.1, 0.1, 0.1))                    // first scale, because the cow (I mean chicken) is too big
);
The camera starts at position (0, 0, 5), looking towards the negative z axis.
What am I doing wrong?
If the chicken is at the origin c=(0,0,0), the camera is at r=(x,y,z), and the ground is at y=0, then what you want is a sequence of rotations that gets the local x axis of the chicken pointed towards the camera.
First orient your x axis in the ground plane with a rotation about the vertical y axis by an angle φ = -ATAN(z/x), and then apply a rotation about the z axis by an angle ψ = ATAN(y/√(x^2+z^2)).
This creates a 3×3 rotation matrix E = ROT_Y(φ)*ROT_Z(ψ)
    | x/d   -x*y/(d*√(x^2+z^2))   -z/√(x^2+z^2) |
E = | y/d    √(x^2+z^2)/d           0           |
    | z/d   -y*z/(d*√(x^2+z^2))    x/√(x^2+z^2) |
where d=√(x^2+y^2+z^2). You see the local x axis (the first column of E) pointing towards (x,y,z). Also the local z axis has no component on the vertical, so it always lies on the ground plane.
But this depends on the implementation; for example, if you need to keep the chicken's y axis vertical (as opposed to keeping z in the ground plane) you will need a different set of rotations and angles. So to answer fully, you need to provide more information.
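For what it's worth, here is a minimal GLM sketch of the two rotations above under the same assumptions (local x is the forward axis, ground at y=0); atan2 is used instead of ATAN so that all quadrants are handled:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 faceCamera(const glm::vec3 & r) {  // r = camera position
    float phi = -std::atan2(r.z, r.x);                              // about y
    float psi =  std::atan2(r.y, std::sqrt(r.x * r.x + r.z * r.z)); // about z
    return glm::rotate(glm::mat4(1.0f), phi, glm::vec3(0, 1, 0)) *
           glm::rotate(glm::mat4(1.0f), psi, glm::vec3(0, 0, 1));
}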

How to compute the center of a polygon in 2D and 3D space

Consider a simple convex polygon in 2D Cartesian space, given as a list of vertex coordinates sorted in counter-clockwise orientation, like this: [[x0, y0], ..., [xn, yn]]. How could you compute the center of the polygon (the point inside the polygon that is equidistant from all vertices)?
Also consider a second case where the polygon is placed in 3D Cartesian space and its normal vector is not parallel to any of the Cartesian axes. How could you compute the center, without rotating the polygon?
I can read C/C++, Fortran, MATLAB and Python, however any pseudo-code is also well appreciated.
EDIT
I now realise that my question was not well-posed. I am sorry for that. It appears that what I was looking for is the centroid of the polygon (i.e. the point on which a cardboard cut-out would balance while assuming uniform density and a uniform gravity field).
Your definition of center doesn't make sense in general.
To see this, just draw three non-aligned points on a plane and compute the one and only circle that passes through all three points. Clearly your center of the triangle must be the center of this circle.
Now draw a fourth point that doesn't lie on the circle and form the four-sided polygon. What is the center? There is no point in the plane that is equidistant from all vertices.
Note also that even in the case of triangles, using the point equidistant from the vertices can give you points outside and far away from the polygon, and it is also numerically unstable (given any ε>0 and M>0 you can always build a triangle in which a specific movement of a vertex by a distance of less than ε moves the center by a distance greater than M).
Commonly used "centers" that are simple to compute are the average of all vertices, the average of the boundary, the center of mass or even just the center of the axis-aligned bounding box. All of them can however fall outside the polygon if the polygon is not convex, but in your case they may work.
The simplest reasonable one (because it doesn't depend on the coordinate system) is the barycenter of the vertices (code in Python):
xc = sum(x for (x, y) in points) / len(points)
yc = sum(y for (x, y) in points) / len(points)
One bad thing about it is that just splitting one side of the polygon in two gives you a different center (in other words, it depends on the vertices and not on the set of points bounded by the polygon). The simplest one that depends on the polygon itself is IMO the barycenter of the boundary:
sx = sy = sL = 0
for i in range(len(points)):   # counts from 0 to len(points)-1
    x0, y0 = points[i - 1]     # in Python points[-1] is the last element of points
    x1, y1 = points[i]
    L = ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
    sx += (x0 + x1)/2 * L
    sy += (y0 + y1)/2 * L
    sL += L
xc = sx / sL
yc = sy / sL
For both of them the extension to 3d is trivial... just add z using the same formulas.
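Since the edit to the question says that the centroid of a uniform cardboard cut-out is what was actually wanted, here is a minimal sketch (in C++, like the other threads on this page) of the usual shoelace-based area centroid, i.e. the center of mass mentioned above; the vertices must be ordered, but either orientation works because the signs cancel:
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

Pt areaCentroid(const std::vector<Pt> & pts) {
    double a = 0.0, cx = 0.0, cy = 0.0;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        const Pt & p = pts[i];
        const Pt & q = pts[(i + 1) % pts.size()];   // next vertex, wrapping
        const double cross = p.x * q.y - q.x * p.y; // signed area term
        a  += cross;
        cx += (p.x + q.x) * cross;
        cy += (p.y + q.y) * cross;
    }
    a *= 0.5;                                       // signed polygon area
    return { cx / (6.0 * a), cy / (6.0 * a) };
}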
In the case of a general (not necessarily convex, not necessarily simply connected) polygon, a "center" that I found useful, but that is not trivial to compute, is the (an) inner point that is at a maximum distance from the boundary (in other words, a "most inner" point).
In this case I resorted to using a discrete (bitmap) representation and a Gaussian distance transform.
First of all, for a polygon the centroid does not always imply equidistant lengths from the centroid to the vertices; in most cases this is probably NOT true. That being said, you can find the centroid simply by taking the mean of your x coordinates and the mean of your y coordinates. In MATLAB: centroidx = mean(xcoords) and centroidy = mean(ycoords) are the coordinates of the centroid. See this if you really need more.

3D coordinate of 2D point given camera and view plane

I wish to generate rays from the camera through the viewing plane. In order to do this, I need my camera position ("eye"), the up, right, and towards vectors (where towards is the vector from the camera in the direction of the object that the camera is looking at) and P, the point on the viewing plane. Once I have these, the ray that's generated is:
ray = camera_eye + t*(P-camera_eye);
where t is the distance along the ray (assume t = 1 for now).
My question is, how do I obtain the 3D coordinates of point P given that it is located at position (i,j) on the viewing plane? Assume that the upper left and lower right corners of the viewing plane are given.
NOTE: The viewing plane is not actually a plane in the sense that it doesn't extend infinitely in all directions. Rather, one may think of this plane as a width×height image. In the x direction, the range is 0-->width and in the y direction the range is 0-->height. I wish to find the 3D coordinate of the (i,j)th element, 0 <= i < width, 0 <= j < height.
For the general solution of the intersection of a line and a plane, see http://local.wasp.uwa.edu.au/~pbourke/geometry/planeline/
Your particular graphics lib (OpenGL/DirectX etc.) may have a standard way to do this.
Edit: are you trying to find the 3D intersection of a screen point (e.g. a mouse cursor) with a 3D object in your scene?
To work out P, you need the distance from the camera to the near clipping plane (the screen), the size of the window on the near clipping plane (or the view angle, you can work out the window size from the view angle) and the size of the rendered window.
Scale the screen position to the range -1 < x < +1 and -1 < y < +1 where +1 is the top/right and -1 is the bottom/left
Scale normalised x,y by the view window size
Scale by the right and up vectors of the camera and sum the results
Add the look at vector scaled by the clipping plane distance
In effect, you get:
p = at * near_clip_dist + x * right + y * up
where x and y are:
x = (screen_x - screen_centre_x) / (width / 2) * view_width
y = (screen_y - screen_centre_y) / (height / 2) * view_height
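Putting those steps together, a minimal sketch (all names hypothetical; camera_eye is added as the base point so that P is an absolute position, matching the ray formula in the question):
#include <glm/glm.hpp>

glm::vec3 pointOnViewPlane(float screen_x, float screen_y,
                           float width, float height,           // rendered window
                           float view_width, float view_height, // window on the near plane
                           float near_clip_dist,
                           const glm::vec3 & camera_eye, const glm::vec3 & at,
                           const glm::vec3 & right, const glm::vec3 & up) {
    // Normalise the screen position to -1..+1, then scale by the view window size.
    float x = (screen_x - width  / 2.0f) / (width  / 2.0f) * view_width;
    float y = (screen_y - height / 2.0f) / (height / 2.0f) * view_height;
    // Scale the right and up vectors, and add the look-at vector
    // scaled by the clipping plane distance.
    return camera_eye + at * near_clip_dist + right * x + up * y;
}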
When I plugged the suggested formulas directly into my program, I didn't obtain correct results (maybe some debugging was needed). My initial problem seemed to be a misunderstanding of the (x,y,z) coordinates of the interpolating corner points. I was treating the x,y,z coordinates separately, which I should not have done (and this may be specific to the application, since the camera can be oriented in any direction). Instead, the solution turned out to be a simple interpolation of the corner points of the viewing plane:
interpolate the bottom corner points in the i direction to get P1
interpolate the top corner points in the i direction to get P2
interpolate P1 and P2 in the j direction to get the world coordinates of the final point
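A minimal GLM sketch of those three steps, assuming the four corner points of the viewing plane are known in world space and u = i/(width-1), v = j/(height-1) are normalised image coordinates:
#include <glm/glm.hpp>

glm::vec3 viewPlanePoint(const glm::vec3 & bottomLeft, const glm::vec3 & bottomRight,
                         const glm::vec3 & topLeft,    const glm::vec3 & topRight,
                         float u, float v) {
    glm::vec3 p1 = glm::mix(bottomLeft, bottomRight, u); // bottom corners, i direction
    glm::vec3 p2 = glm::mix(topLeft, topRight, u);       // top corners, i direction
    return glm::mix(p1, p2, v);                          // P1 and P2, j direction
}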

c++ opengl converting model coordinates to world coordinates for collision detection

(This is all in ortho mode, origin is in the top left corner, x is positive to the right, y is positive down the y axis)
I have a rectangle in world space, which can have a rotation m_rotation (in degrees).
I can work with the rectangle fine, it rotates, scales, everything you could want it to do.
The part that I am getting really confused on is calculating the rectangle's world coordinates from its local coordinates.
I've been trying to use the formula:
x' = x*cos(t) - y*sin(t)
y' = x*sin(t) + y*cos(t)
where (x, y) are the original points,
(x', y') are the rotated coordinates,
and t is the angle measured in radians
from the x-axis. The rotation is
counter-clockwise as written.
-credits duffymo
I tried implementing the formula like this:
//GLfloat Ax = getLocalVertices()[BOTTOM_LEFT].x * cosf(DEG_TO_RAD( m_orientation )) - getLocalVertices()[BOTTOM_LEFT].y * sinf(DEG_TO_RAD( m_orientation ));
//GLfloat Ay = getLocalVertices()[BOTTOM_LEFT].x * sinf(DEG_TO_RAD( m_orientation )) + getLocalVertices()[BOTTOM_LEFT].y * cosf(DEG_TO_RAD( m_orientation ));
//Vector3D BL = Vector3D(Ax,Ay,0);
I create a vector to the translated point and store it in the rectangle's world_vertice member variable. That's fine. However, in my main draw loop, I draw a line from (0,0,0) to the vector BL, and it seems as if the line is going in a circle from the point on the rectangle (the rectangle's bottom-left corner) around the origin of the world coordinates.
Basically, as m_orientation gets bigger it draws a huge circle around the (0,0,0) world coordinate system origin. Edit: when m_orientation = 360, it gets set back to 0.
I feel like I am doing this part wrong:
and t is the angle measured in radians
from the x-axis.
Possibly I am not supposed to use m_orientation (the rectangle's rotation angle) in this formula?
Thanks!
Edit: the reason I am doing this is collision detection. I need to know where the coordinates of the rectangles (soon to be rigid bodies) lie in the world coordinate plane.
What you are doing is a rotation [a special linear transformation] of a vector by angle Q in 2D. It keeps the vector's length and changes its direction around the origin.
[Linear transformation: additive, L(m + n) = L(m) + L(n) where m, n are vectors; homogeneous, L(k·m) = k·L(m) where m is a vector and k is a scalar.] So:
You divide your vector into two pieces, such that m[1, 0] + n[0, 1] = your vector.
Then, as you can see in the image, the rotation is applied to these two pieces, after which your vector takes the form:
m[cosQ, sinQ] + n[-sinQ, cosQ] = [m·cosQ - n·sinQ, m·sinQ + n·cosQ]
You can also look at Wiki Rotation.
If you are trying to obtain the eye coordinates corresponding to your object coordinates, you should multiply the object coordinates by the model-view matrix in OpenGL.
For M the model-view matrix and [x y z w]^T your object coordinates:
M [x y z w]^T = eye coordinates of [x y z w]^T
This seems to be overcomplicating things somewhat: typically you would store an object's world position and orientation separately from its own set of local coordinates. Rotating the object is done in model space, and therefore the position is unchanged. The world position of each coordinate is the same whether you do a rotation or not - add the world position to the local position to translate the local coordinates to world space.
Any rotation occurs around a specific origin, and the typical sin/cos formula presumes (0,0) is your origin. If the coordinate system in use doesn't currently have (0,0) as the origin, you must translate it to one that does, perform the rotation, then transform back. Usually model space is defined so that (0,0) is the origin for the model, making this step trivial.
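A minimal sketch of that translate-rotate-translate pattern (plain C++; t is a counter-clockwise angle in radians, as in duffymo's formula quoted above):
#include <cmath>

struct Vec2 { float x, y; };

Vec2 rotateAbout(const Vec2 & p, const Vec2 & o, float t) {
    const float dx = p.x - o.x, dy = p.y - o.y;           // translate so o is the origin
    return { o.x + dx * std::cos(t) - dy * std::sin(t),   // rotate, then
             o.y + dx * std::sin(t) + dy * std::cos(t) }; // translate back
}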