C++ Plane Sphere Collision Detection

I'm attempting to implement Sphere-Plane collision detection in C++. I have a Vector3, Plane and Sphere class.
#include "Vector3.h"
#ifndef PLANE_H
#define PLANE_H
class Plane
{
public:
    Plane(Vector3, float);
    Vector3 getNormal() const;

protected:
    float d;
    Vector3 normal;
};
#endif
I know the equation for a plane is Ax + By + Cz + D = 0, and that the sphere-plane intersection test can be written as N.S + d < r, where N is the normal vector of the plane, S is the center of the sphere, r is the radius of the sphere and d is the plane's distance from the origin. How do I calculate the value of d from my Plane and Sphere?
bool Sphere::intersects(const Plane& other) const
{
    // return other.getNormal() * this->currentPosition + other.getDistance() < this->radius;
}

I needed the same computation in a game I made. This is the minimum distance from a point to a plane:
distance = (q - plane.p[0])*plane.normal;
Except for distance, all variables are 3D vectors (I use a simple class I made with operator overloading).
distance: minimum distance from a point to the plane (scalar).
q: the point (3D vector), in your case is the center of the sphere.
plane.p[0]: a point (3D vector) belonging to the plane. Note that any point belonging to the plane will work.
plane.normal: normal to the plane.
The * is a dot product between vectors. It can be implemented in 3D as a*b = a.x*b.x + a.y*b.y + a.z*b.z and yields a scalar.
Explanation
The dot product is defined:
a*b = |a| * |b| * cos(angle)
or, in our case:
a = q - plane.p[0]
a*plane.normal = |a| * |plane.normal| * cos(angle)
As plane.normal is unitary (|plane.normal| == 1):
a*plane.normal = |a| * cos(angle)
a is the vector from a point in the plane to the point q. angle is the angle between a and the normal to the plane. Then |a| * cos(angle) is the projection of a onto the normal, which is the perpendicular distance from the point to the plane.
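Wrapped into a small helper, a sketch of the same computation (operator* is assumed to be the dot product of your Vector3 class, as in the snippet above; the function name is just illustrative):

float signedDistance(const Vector3& q, const Vector3& pointOnPlane, const Vector3& n)
{
    // Signed distance from q to the plane through pointOnPlane with unit normal n:
    // positive on the side the normal points towards, negative on the other side.
    return (q - pointOnPlane) * n; // assumes n is already normalized
}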

There is a rather simple formula for the point-plane distance when the plane is given by the equation
Ax + By + Cz + D = 0 (eq. 10 here):
Distance = (A*x0 + B*y0 + C*z0 + D) / Sqrt(A*A + B*B + C*C)
where (x0, y0, z0) are the point's coordinates. If your plane normal vector (A, B, C) is normalized (unit length), the denominator may be omitted.
(The sign of the distance is usually not important for intersection purposes.)
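Putting the two answers together for the original question, a hedged sketch of the sphere test might look like the following. It assumes the Plane stores a unit normal and a d such that normal.X + d == 0 for every point X on the plane, that getDistance() returns that d, and that operator* is your Vector3 dot product (as in your commented-out line); adjust the signs if your Plane uses a different convention.

bool Sphere::intersects(const Plane& other) const
{
    // Signed distance from the sphere centre to the plane.
    float dist = other.getNormal() * this->currentPosition + other.getDistance();
    // The sphere touches or crosses the plane when that distance, ignoring its sign,
    // does not exceed the radius. std::fabs needs <cmath>.
    return std::fabs(dist) <= this->radius;
}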


Draw arbitrary plane from plane equation, OpenGL

I have a plane defined by the standard plane equation a*x + b*y + c*z + d = 0, which I would like to be able to draw using OpenGL. How can I derive the four points needed to draw it as a quadrilateral in 3D space?
My plane type is defined as:
struct Plane {
    float x, y, z; // plane normal
    float d;
};
void DrawPlane(const Plane & p)
{
???
}
EDIT:
So, rethinking the question, what I actually wanted was to draw a discrete representation of a plane in 3D space, not an infinite plane.
Based on the answer provided by @a.lasram, I have produced this implementation, which does just that:
void DrawPlane(const Vector3 & center, const Vector3 & planeNormal, float planeScale, float normalVecScale, const fColorRGBA & planeColor, const fColorRGBA & normalVecColor)
{
    Vector3 tangent, bitangent;
    OrthogonalBasis(planeNormal, tangent, bitangent);

    const Vector3 v1(center - (tangent * planeScale) - (bitangent * planeScale));
    const Vector3 v2(center + (tangent * planeScale) - (bitangent * planeScale));
    const Vector3 v3(center + (tangent * planeScale) + (bitangent * planeScale));
    const Vector3 v4(center - (tangent * planeScale) + (bitangent * planeScale));

    // Draw wireframe plane quadrilateral:
    DrawLine(v1, v2, planeColor);
    DrawLine(v2, v3, planeColor);
    DrawLine(v3, v4, planeColor);
    DrawLine(v4, v1, planeColor);

    // And a line depicting the plane normal:
    const Vector3 pvn(
        (center[0] + planeNormal[0] * normalVecScale),
        (center[1] + planeNormal[1] * normalVecScale),
        (center[2] + planeNormal[2] * normalVecScale)
    );
    DrawLine(center, pvn, normalVecColor);
}
Where OrthogonalBasis() computes the tangent and bi-tangent from the plane normal.
To see the plane as if it's infinite you can find 4 quad vertices so that the clipped quad and the clipped infinite plane form the same polygon. Example:
Sample 2 random points P1 and P2 on the plane such that P1 != P2.
Deduce a tangent t and bi-tangent b as
t = normalize(P2-P1); // get a normalized tangent
b = cross(t, n); // the bi-tangent is the cross product of the tangent and the normal
Compute the bounding sphere of the view frustum. The sphere would have a diameter D (if this step seems difficult, just set D to a large enough value such as the corresponding sphere encompasses the frustum).
Get the 4 quad vertices v1 , v2 , v3 and v4 (CCW or CW depending on the choice of P1 and P2):
v1 = P1 - t*D - b*D;
v2 = P1 + t*D - b*D;
v3 = P1 + t*D + b*D;
v4 = P1 - t*D + b*D;
One possibility (possibly not the cleanest) is to get the orthogonal vectors aligned to the plane and then choose points from there.
P1 = < x, y, z >
t1 = random non-zero, non-co-linear vector with P1.
P2 = norm(P1 cross t1)
P3 = norm(P1 cross P2)
Now all points in the desired plane are defined as a starting point plus a linear combination of P2 and P3. This way you can get as many points as desired for your geometry.
Note: the starting point is just your plane normal < x, y, z > scaled by -d: for the plane equation a*x + b*y + c*z + d = 0 with a unit-length normal, the point -d * < x, y, z > lies on the plane (it is the point of the plane closest to the origin).
Also of interest, with clever selection of t1, you can also get P2 aligned to some view. Say you are looking at the x, y plane from some z point. You might want to choose t1 = < 0, 1, 0 > (as long as it isn't co-linear to P1). This yields P2 with 0 for the y component, and P3 with 0 for the x component.
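As a small sketch of this recipe (assuming the plane is a*x + b*y + c*z + d = 0 with a unit-length normal, and a Vector3 class with dot, cross and normalize helpers like the ones used above; names and the quad size s are illustrative only):

Vector3 n(a, b, c);                      // unit plane normal
Vector3 start = n * -d;                  // point of the plane closest to the origin
Vector3 t1(0.0f, 1.0f, 0.0f);            // arbitrary vector, must not be collinear with n
if (std::fabs(dot(n, t1)) > 0.99f)       // std::fabs requires <cmath>
    t1 = Vector3(1.0f, 0.0f, 0.0f);      // fall back when n is (nearly) parallel to t1
Vector3 P2 = normalize(cross(n, t1));    // first in-plane axis
Vector3 P3 = normalize(cross(n, P2));    // second in-plane axis

// Every point of the plane is start + u*P2 + v*P3; e.g. a quad with half-size s:
Vector3 q1 = start - P2 * s - P3 * s;
Vector3 q2 = start + P2 * s - P3 * s;
Vector3 q3 = start + P2 * s + P3 * s;
Vector3 q4 = start - P2 * s + P3 * s;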

Need rotation matrix for opengl 3D transformation

The problem is I have two points in 3D space where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them that is the length of the distance between both points, so that both its center ends touch the two points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder, so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, that way I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if (point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x) / 2.0, (point.y + parent.y) / 2.0, (point.z + parent.z) / 2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given down below, but it did not seem to work at all
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;

    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);

    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point with its parent being the point that spawned it (the one before it). I want the branches to fit into the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I got most of it, but I want to map the branches to it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there are infinitely many rotation matrices satisfying your constraints. But any will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. So you have to transform the local-space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate basis. We start with an arbitrary vector that's not parallel to z_t. Say, one of (1,0,0), (0,1,0) or (0,0,1): take the scalar product · (also called inner or dot product) of each with z_t and use the vector for which the absolute value is the smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector, yielding a local y transformation y_t = normalize(u - (z_t · u) * z_t).
Finally create the x transformation by taking the cross product x_t = y_t × z_t (this ordering keeps the resulting basis right-handed).
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from" written down as if seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t side by side as a matrix. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form using a 0,0,0,1 bottommost row and rightmost column.
You can then load that into OpenGL: if using the fixed-function pipeline, use glMultMatrix to apply the rotation; if using shaders, multiply it onto the matrix you eventually pass to glUniform.
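A compact sketch of assembling that matrix in C++ (column-major, which is what the fixed-function OpenGL calls expect; x_t, y_t and z_t are the basis vectors from above and pos is an assumed translation vector):

// Columns are the local X, Y, Z axes and the translation.
float m[16] = {
    x_t.x, x_t.y, x_t.z, 0.0f,   // column 0: local X axis
    y_t.x, y_t.y, y_t.z, 0.0f,   // column 1: local Y axis
    z_t.x, z_t.y, z_t.z, 0.0f,   // column 2: local Z axis
    pos.x, pos.y, pos.z, 1.0f    // column 3: translation
};
// Fixed-function pipeline: glMultMatrixf(m);
// Shader pipeline: multiply m onto your model-view matrix and upload it with glUniformMatrix4fv.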
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'd like to call your two points in world coordinates P1 and P2 and we want to locate C1 on P1 and C2 to P2.
Start with translating the cylinder by P1, which successfully locates C1 to P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates: two angles; one around the pole axis (in your case the world's Y-axis) which we typically call yaw, the other one is a pitch angle (in your case the X axis in model space). These two angles can be computed by converting P2-P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.

OpenCV Computing Camera Position && Rotation

for a project I need to compute the real world position and orientation of a camera
with respect to a known object.
I have a set of photos, each displays a chessboard from different points of view.
Using CalibrateCamera and solvePnP I am able to reproject points in 2D, to get an AR-thing.
So my situation is as such:
Intrinsic parameters are known
Distortion coefficients are known
Translation vector and rotation vector are known per photo.
I simply cannot figure out how to compute the position of the camera. My guess was:
invert translation vector. (=t')
transform rotation vector to degrees (it seems to be in radians) and invert
use Rodrigues on the rotation vector
compute RotationMatrix * t'
But the results are somehow totally off...
Basically I want to to compute a ray for each pixel in world coordinates.
If more information on my problem is needed, I'd be glad to answer quickly.
I don't get it... somehow the rays are still off. This is my code, btw:
Mat image1CamPos = tvecs[0].clone(); //From calibrateCamera
Mat rot = rvecs[0].clone(); //From calibrateCamera
Rodrigues(rot, rot);
rot = rot.t();
//Position of Camera
Mat pos = rot * image1CamPos;
//Ray-Normal (( (double)mk[i][k].x) are known image-points)
float x = (( (double)mk[i][0].x) / fx) - (cx / fx);
float y = (( (double)mk[i][0].y) / fy) - (cy / fy);
float z = 1;
float mag = sqrt(x*x + y*y + z*z);
x /= mag;
y /= mag;
z /= mag;
Mat unit(3, 1, CV_64F);
unit.at<double>(0, 0) = x;
unit.at<double>(1, 0) = y;
unit.at<double>(2, 0) = z;
//Rotation of Ray
Mat rot = stof1 * unit;
But when plotting this, the rays are off :/
The translation t (3x1 vector) and rotation R (3x3 matrix) of an object with respect to the camera equals the coordinate transformation from object into camera space, which is given by:
v' = R * v + t
The inverse of the rotation matrix is simply its transpose:
R^-1 = R^T
Knowing this, you can easily resolve the transformation (first eq.) to v:
v = R^T * v' - R^T * t
This is the transformation from camera into object space, i.e., the position of the camera with respect to the object (rotation = R^T and translation = -R^T * t).
You can simply get a 4x4 homogeneous transformation matrix from this:
T = ( R^T   -R^T * t )
    (  0         1   )
If you now have any point in camera coordinates, you can transform it into object coordinates:
p' = T * (x, y, z, 1)^T
So, if you'd like to project a ray from a pixel with coordinates (a,b) (you will probably need to define the center of the image, i.e. the principal point as reported by CalibrateCamera, as (0,0)) -- let that pixel be P = (a,b)^T. Its 3D coordinates in camera space are then P_3D = (a,b,0)^T. Let's project a ray 100 pixels in the positive z-direction, i.e. to the point Q_3D = (a,b,100)^T. All you need to do is transform both 3D coordinates into the object coordinate system using the transformation matrix T, and you should be able to draw a line between both points in object space. However, make sure that you don't confuse units: CalibrateCamera will report pixel values while your object coordinate system might be defined in, e.g., cm or mm.
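A short sketch of that camera-pose computation with the OpenCV API (rvec and tvec as returned by calibrateCamera or solvePnP for one view):

cv::Mat R;
cv::Rodrigues(rvec, R);          // 3x3 rotation matrix, object -> camera
cv::Mat R_inv = R.t();           // its inverse (the transpose), camera -> object
cv::Mat camPos = -R_inv * tvec;  // camera position expressed in object coordinates

// Any point p_cam given in camera coordinates maps to object space as:
// p_obj = R_inv * p_cam + camPos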

3D Line Segment and Plane Intersection

I'm trying to implement a line segment and plane intersection test that will return true or false depending on whether or not it intersects the plane. It also will return the contact point on the plane where the line intersects; if the line does not intersect, the function should still return the intersection point it would have had if the line segment had been a ray. I used the information and code from Christer Ericson's Real-Time Collision Detection, but I don't think I'm implementing it correctly.
The plane I'm using is derived from the normal and a vertex of a triangle. Finding the location of the intersection on the plane is what I want, regardless of whether or not it is located on the triangle I used to derive the plane.
The parameters of the function are as follows:
contact = the contact point on the plane; this is what I want calculated
ray = B - A, simply the line from A to B
rayOrigin = A, the origin of the line segment
normal = normal of the plane (normal of a triangle)
coord = a point on the plane (vertex of a triangle)
Here's the code I'm using:
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin, Vector normal, Vector coord) {
    // calculate plane
    float d = Dot(normal, coord);

    if (Dot(normal, ray)) {
        return false; // avoid divide by zero
    }

    // Compute the t value for the directed line ray intersecting the plane
    float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // scale the ray by t
    Vector newRay = ray * t;

    // calc contact point
    contact = rayOrigin + newRay;

    if (t >= 0.0f && t <= 1.0f) {
        return true; // line intersects plane
    }
    return false; // line does not
}
In my tests, it never returns true... any ideas?
I am answering this because it came up first on Google when I searched for a C++ example of ray-plane intersection :)
The code always returns false because you enter the if here:
if (Dot(normal, ray)) {
return false; // avoid divide by zero
}
And a dot product is only zero if the vectors are perpendicular, which is the case you want to avoid (no intersection), and non-zero numbers evaluate as true in C++.
Thus the solution is to negate ( ! ) or compare with zero: Dot(...) == 0.
In all other cases there will be an intersection.
On to the intersection computation:
All points X of a plane follow the equation
Dot(N, X) = d
Where N is the normal and d can be found by putting a known point of the plane in the equation.
float d = Dot(normal, coord);
On to the ray: all points s of a line can be expressed as a point p plus a scalar multiple x of a direction vector D:
s = p + x*D
So if we search for which x s is in the plane, we have
Dot(N, s) = d
Dot(N, p + x*D) = d
The dot product a.b is transpose(a)*b. Let transpose(N) be Nt.
Nt*(p + x*D) = d
Nt*p + Nt*D*x = d (x scalar)
x = (d - Nt*p) / (Nt*D)
x = (d - Dot(N, p)) / Dot(N, D)
Which gives us :
float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
We can now get the intersection point by putting x in the line equation
s = p + x*D
Vector intersection = rayOrigin + x*ray;
The above code updated :
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin,
                           Vector normal, Vector coord) {
    // get d value
    float d = Dot(normal, coord);

    if (Dot(normal, ray) == 0) {
        return false; // No intersection, the line is parallel to the plane
    }

    // Compute the X value for the directed line ray intersecting the plane
    float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // output contact point
    contact = rayOrigin + ray * x; // x was computed with the un-normalized ray above, so scale that same ray
    return true;
}
Aside 1:
What does the d value mean?
For two vectors a and b, the dot product actually returns the length of the orthogonal projection of one vector onto the other, times the length of that other vector.
But if a is normalized (length = 1), Dot(a, b) is then the length of the projection of b onto a. In the case of our plane, d gives us the signed distance of the plane from the origin along the normal direction (a is the normal). We can then check whether a point is on this plane by comparing its projection onto the normal (the dot product) with d.
Aside 2:
How to check if a ray intersects a triangle? (Used for raytracing.)
In order to test whether a ray hits a triangle given by 3 vertices, you first have to do what is shown here: get the intersection with the plane formed by the triangle.
The next step is to check whether this point lies in the triangle. This can be achieved using barycentric coordinates, which express a point in a plane as a combination of three points in it. See Barycentric Coordinates and converting from Cartesian coordinates.
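For reference, a hedged sketch of that barycentric inside-the-triangle test, reusing the Vector and Dot names from the question (A, B and C are the triangle vertices, P is the intersection point found above, and the Vector class is assumed to support subtraction):

bool pointInTriangle(const Vector& P, const Vector& A, const Vector& B, const Vector& C) {
    // Express P - A in the basis (B - A, C - A) using barycentric coordinates.
    Vector v0 = B - A, v1 = C - A, v2 = P - A;
    float d00 = Dot(v0, v0), d01 = Dot(v0, v1), d11 = Dot(v1, v1);
    float d20 = Dot(v2, v0), d21 = Dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    float v = (d11 * d20 - d01 * d21) / denom;
    float w = (d00 * d21 - d01 * d20) / denom;
    float u = 1.0f - v - w;
    // P lies inside (or on the border of) the triangle iff all three weights are non-negative.
    return u >= 0.0f && v >= 0.0f && w >= 0.0f;
}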
I could be wrong about this, but there are a few spots in the code that seem very suspicious. To begin, consider this line:
// calculate plane
float d = Dot(normal, coord);
Here, your value d corresponds to the dot product between the plane normal (a vector) and a point in space (a point on the plane). This seems wrong. In particular, if you have any plane passing through the origin and use the origin as the coordinate point, you will end up computing
d = Dot(normal, (0, 0, 0)) = 0
And immediately returning false. I'm not sure what you intended to do here, but I'm pretty sure that this isn't what you meant.
Another spot in the code that seems suspicious is this line:
// Compute the t value for the directed line ray intersecting the plane
float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
Note that you're computing the dot product between the plane's normal vector (a vector) and the ray's origin point (a point in space). This seems weird because it means that depending on where the ray originates in space, the scaling factor you use for the ray changes. I would suggest looking at this code one more time to see if this is really what you meant.
Hope this helps!
This all looks fine to me. I've independently checked the algebra and it holds up.
As an example test case:
A = (0,0,1)
B = (0,0,-1)
coord = (0,0,0)
normal = (0,0,1)
This gives:
d = Dot( (0,0,1), (0,0,0)) = 0
Dot( (0,0,1), (0,0,-2)) = -2 // so trap for the line being in the plane passes.
t = (0 - Dot( (0,0,1), (0,0,1) )) / Dot( (0,0,1), (0,0,-2)) = (0 - 1) / -2 = 1/2
contact = (0,0,1) + 1/2 (0,0,-2) = (0,0,0) // as expected.
So, given the emendation following @templatetypedef's answer, the only area where I can see a problem is in the implementation of one of the other operations, be it Dot() or the Vector operators.
This version worked for me in an OpenGL C# application.
bool GetLinePlaneIntersection(out vec3 contact, vec3 ray_origin, vec3 ray_end, vec3 normal, vec3 coord)
{
    contact = new vec3();
    vec3 ray = ray_end - ray_origin;

    float d = glm.dot(normal, coord);
    if (glm.dot(normal, ray) == 0)
    {
        return false;
    }

    float t = (d - glm.dot(normal, ray_origin)) / glm.dot(normal, ray);
    contact = ray_origin + ray * t;
    return true;
}

compute slope of a 3D plane

I have a set of (X,Y,Z) points representing different planar features. I need to calculate the slope of each plane using normal vectors.
I think the slope is given by the angle between the normal vector (NV) of each plane and the NV of an imaginary horizontal plane. Assume the plane equation I use is a*x + b*y + c = z. Then I guess the normal vector of my plane is (a, b, -1). For my plane equation, what should be the equation of the imaginary horizontal plane? I think the equation of a horizontal plane is z = constant. Hence, its normal vector is (0, 0, -1). Is this correct?
Then the angle between my plane and the horizontal plane is:
cos^{-1}[(a*0 + b*0 + (-1)*1) / (√{a1^2 + b1^2 + c1^2} * √{0^2 + 0^2 + 1^2})]
Is that correct? Please comment and give me the correct equation.
Yes, that's mostly correct, but you've made some small mistakes substituting into the expression for the angle. The angle is cos^{-1}[(a * 0 + b * 0 + (-1) * (-1)) / (√{a^2 + b^2 + (-1)^2} * √{0^2 + 0^2 + (-1)^2})] = cos^{-1}(1 / √{a^2 + b^2 + 1}).
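As a tiny C++ sketch of that final result (the plane is fitted as z = a*x + b*y + c, so its normal (a, b, -1) is not unit length, which is why the denominator is kept):

#include <cmath>

// Slope angle (in degrees) between the plane z = a*x + b*y + c and the horizontal plane.
double planeSlopeDegrees(double a, double b)
{
    double cosAngle = 1.0 / std::sqrt(a * a + b * b + 1.0);
    return std::acos(cosAngle) * 180.0 / 3.14159265358979323846;
}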