I'm trying to implement a line segment and plane intersection test that returns true or false depending on whether the segment intersects the plane. It should also return the contact point on the plane where the line intersects; if the segment does not intersect, the function should still return the intersection point the line would have had if it had been a ray. I used the information and code from Christer Ericson's Real-Time Collision Detection, but I don't think I'm implementing it correctly.
The plane I'm using is derived from the normal and a vertex of a triangle. Finding the location of the intersection on the plane is what I want, regardless of whether or not it is located on the triangle I used to derive the plane.
The parameters of the function are as follows:
contact = the contact point on the plane, this is what i want calculated
ray = B - A, simply the line from A to B
rayOrigin = A, the origin of the line segment
normal = normal of the plane (normal of a triangle)
coord = a point on the plane (vertex of a triangle)
Here's the code I'm using:
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin, Vector normal, Vector coord) {
    // calculate plane
    float d = Dot(normal, coord);

    if (Dot(normal, ray)) {
        return false; // avoid divide by zero
    }

    // Compute the t value for the directed line ray intersecting the plane
    float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // scale the ray by t
    Vector newRay = ray * t;

    // calc contact point
    contact = rayOrigin + newRay;

    if (t >= 0.0f && t <= 1.0f) {
        return true; // line intersects plane
    }
    return false; // line does not
}
In my tests, it never returns true... any ideas?
I am answering this because it came up first on Google when I searched for a C++ example of ray-plane intersection :)
The code always returns false because you enter the if here:
if (Dot(normal, ray)) {
    return false; // avoid divide by zero
}
A dot product is only zero if the vectors are perpendicular, which is exactly the case you want to avoid (no intersection), and non-zero numbers are treated as true in C.
Thus the solution is to negate the condition ( ! ) or test Dot(...) == 0.
In all other cases there will be an intersection.
On to the intersection computation:
All points X of a plane follow the equation
Dot(N, X) = d
Where N is the normal and d can be found by putting a known point of the plane in the equation.
float d = Dot(normal, coord);
On to the ray: all points s of a line can be expressed with a point p and a vector giving the direction D:
s = p + x*D
So if we search for the x at which s is in the plane, we have
Dot(N, s) = d
Dot(N, p + x*D) = d
The dot product a·b is transpose(a)*b. Let transpose(N) be Nt.
Nt*(p + x*D) = d
Nt*p + Nt*D*x = d (x scalar)
x = (d - Nt*p) / (Nt*D)
x = (d - Dot(N, p)) / Dot(N, D)
Which gives us:
float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
We can now get the intersection point by putting x in the line equation
s = p + x*D
Vector intersection = rayOrigin + x*ray;
The above code, updated:
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin,
                           Vector normal, Vector coord) {
    // get d value
    float d = Dot(normal, coord);

    if (Dot(normal, ray) == 0) {
        return false; // No intersection, the line is parallel to the plane
    }

    // Compute the x value for the directed line ray intersecting the plane
    float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // output contact point
    contact = rayOrigin + ray * x; // x is the parameter along the (unnormalized) ray
    return true;
}
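Since the original question wanted segment semantics (return true only when the segment itself crosses the plane, but still report the contact point of the underlying line), a small wrapper along the following lines would do it. This is only a sketch, assuming the same Vector type, operators and Dot() used above.
// Sketch only: segment version of the test above.
// Returns true only when the segment AB crosses the plane, but always writes the
// contact point of the infinite line through A and B (when the line is not parallel).
bool segmentPlaneIntersection(Vector& contact, Vector A, Vector B,
                              Vector normal, Vector coord) {
    Vector ray = B - A;
    float denom = Dot(normal, ray);
    if (denom == 0) {
        return false; // parallel to the plane, no unique contact point
    }
    float d = Dot(normal, coord);
    float t = (d - Dot(normal, A)) / denom;
    contact = A + ray * t;           // valid even if t falls outside [0, 1]
    return t >= 0.0f && t <= 1.0f;   // true only if the segment itself hits the plane
}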
Aside 1:
What does the d value mean?
For two vectors a and b, the dot product returns the length of the orthogonal projection of one vector onto the other, times the length of that other vector.
But if a is normalized (length = 1), Dot(a, b) is simply the length of the projection of b onto a. In the case of our plane, d gives us the signed distance of every point of the plane from the origin along the normal direction (a is the normal). We can then tell whether a point is on this plane by comparing the length of its projection onto the normal (a dot product) against d.
Aside 2:
How to check if a ray intersects a triangle? (Used for ray tracing)
In order to test whether a ray hits a triangle given by 3 vertices, you first have to do what is shown here: get the intersection with the plane formed by the triangle.
The next step is to check whether this point lies inside the triangle. This can be achieved using barycentric coordinates, which express a point in a plane as a combination of three points in it. See Barycentric Coordinates and converting from Cartesian coordinates.
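As a rough illustration of that second step (a sketch, not code from this answer), the area-ratio form of the barycentric test could look like this, assuming a hypothetical Vec3 type with Dot() and Cross() helpers:
// Sketch only: after the plane intersection has produced a point p on the
// triangle's plane, test whether p lies inside the triangle (a, b, c).
// Vec3, Dot() and Cross() are assumed helpers, not part of the code above.
bool pointInTriangle(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 n = Cross(b - a, c - a);                          // triangle normal, |n| = 2 * area
    float invDenom = 1.0f / Dot(n, n);
    float alpha = Dot(n, Cross(c - b, p - b)) * invDenom;  // weight of vertex a
    float beta  = Dot(n, Cross(a - c, p - c)) * invDenom;  // weight of vertex b
    float gamma = 1.0f - alpha - beta;                     // weight of vertex c
    return alpha >= 0.0f && beta >= 0.0f && gamma >= 0.0f;
}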
I could be wrong about this, but there are a few spots in the code that seem very suspicious. To begin, consider this line:
// calculate plane
float d = Dot(normal, coord);
Here, your value d corresponds to the dot product between the plane normal (a vector) and a point in space (a point on the plane). This seems wrong. In particular, if you have any plane passing through the origin and use the origin as the coordinate point, you will end up computing
d = Dot(normal, (0, 0, 0)) = 0
And immediately returning false. I'm not sure what you intended to do here, but I'm pretty sure that this isn't what you meant.
Another spot in the code that seems suspicious is this line:
// Compute the t value for the directed line ray intersecting the plane
float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
Note that you're computing the dot product between the plane's normal vector (a vector) and the ray's origin point (a point in space). This seems weird because it means that depending on where the ray originates in space, the scaling factor you use for the ray changes. I would suggest looking at this code one more time to see if this is really what you meant.
Hope this helps!
This all looks fine to me. I've independently checked the algebra and it holds up.
As an example test case:
A = (0,0,1)
B = (0,0,-1)
coord = (0,0,0)
normal = (0,0,1)
This gives:
d = Dot( (0,0,1), (0,0,0)) = 0
Dot( (0,0,1), (0,0,-2) ) = -2 // so the trap for the line being parallel to the plane passes.
t = (0 - Dot( (0,0,1), (0,0,1) )) / Dot( (0,0,1), (0,0,-2) ) = (0 - 1) / -2 = 1/2
contact = (0,0,1) + 1/2 (0,0,-2) = (0,0,0) // as expected.
So, given the emendation following @templatetypedef's answer, the only place I can see a problem is in the implementation of one of the other operations, be it Dot() or the Vector operators.
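For reference, the worked example above translates into a small check against the corrected function (a sketch only; it assumes the Vector type has a three-component constructor and the usual operators):
// Sketch of the test case above, using the corrected linePlaneIntersection().
Vector A(0, 0, 1), B(0, 0, -1);
Vector coord(0, 0, 0), normal(0, 0, 1);
Vector contact;
bool hit = linePlaneIntersection(contact, B - A, A, normal, coord);
// Expected: hit == true and contact == (0, 0, 0), matching the hand computation.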
This version worked for me in an OpenGL C# application.
bool GetLinePlaneIntersection(out vec3 contact, vec3 ray_origin, vec3 ray_end, vec3 normal, vec3 coord)
{
    contact = new vec3();
    vec3 ray = ray_end - ray_origin;
    float d = glm.dot(normal, coord);

    if (glm.dot(normal, ray) == 0)
    {
        return false;
    }

    float t = (d - glm.dot(normal, ray_origin)) / glm.dot(normal, ray);
    contact = ray_origin + ray * t;
    return true;
}
I'm trying to write a method bool intersect(const Ray& ray, Intersection& intersection) that returns true when the intersection is inside the triangle.
What I've done so far is check whether there are points on the plane that is created by two edge vectors of the triangle.
The problem now is to check whether the point is inside the triangle. I use barycentric coordinates:
Vec3 AB = b_-a_;
Vec3 AC = c_-a_;
double areaABC = vec_normal_triangle.dot(AB.cross(AC));
Vec3 PB = b_-intersection.pos;
Vec3 PC = c_-intersection.pos;
double alpha = vec_normal_triangle.dot(PB.cross(PC));
Vec3 PA = a_-position.pos;
double beta = vec_normal_triangle.dot(PC.cross(PA));
double gamma = 1-alpha-beta;
if((beta+gamma) < 1 && beta > 0 && gamma > 0) {
    return true;
}
Actually it's not even a triangle, just three more or less arbitrary points.
Can someone explain, or does someone know, how I compute the barycentric coordinates for 3 given vectors?
Assuming vec_normal_triangle is the vector computed as AB.cross(AC) normalized (in other words, the triangle's normal), you should divide alpha and beta by areaABC to get the barycentric coordinates of the intersection point.
double alpha = vec_normal_triangle.dot(PB.cross(PC)) / areaABC;
and
double beta = vec_normal_triangle.dot(PC.cross(PA)) / areaABC;
This normalizes alpha and beta so that your computation of gamma and comparison against 1 make sense.
I'd also like to make a suggestion. To avoid recomputation and make the code a bit cleaner you could replace your test with the following.
if(alpha > 0 && beta > 0 && gamma > 0) {
return true;
}
Aside from that, I see that you first use intersection.pos and then position.pos. Is this intentional? My guess is that you need to use intersection.pos both times.
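Putting those corrections together, the test might read roughly as follows (just a sketch, reusing the question's own names and types):
// Sketch of the corrected test, using the same variable names as the question.
Vec3 AB = b_ - a_;
Vec3 AC = c_ - a_;
double areaABC = vec_normal_triangle.dot(AB.cross(AC));

Vec3 PB = b_ - intersection.pos;
Vec3 PC = c_ - intersection.pos;
Vec3 PA = a_ - intersection.pos;   // intersection.pos both times, not position.pos

double alpha = vec_normal_triangle.dot(PB.cross(PC)) / areaABC;
double beta  = vec_normal_triangle.dot(PC.cross(PA)) / areaABC;
double gamma = 1.0 - alpha - beta;

if(alpha > 0 && beta > 0 && gamma > 0) {
    return true;   // the intersection point lies strictly inside the triangle
}
return false;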
I am writing a ray tracing project with C++ and OpenGL and am running into some obstacles with my sphere intersection function: I've checked multiple sources and the math looks right, but for some reason the intersection method returns true for every single ray. Here is the code for the sphere intersection function, as well as some other code for clarification:
bool intersect(Vertex & origin, Vertex & rayDirection, float intersection)
{
    bool insideSphere = false;
    Vertex oc = position - origin;
    float tca = 0.0;
    float thcSquared = 0.0;

    if (oc.length() < radius)
        insideSphere = true;

    tca = oc.dot(rayDirection);

    if (tca < 0 && !insideSphere)
        return false;

    thcSquared = pow(radius, 2) - pow(oc.length(), 2) + pow(tca, 2);
    if (thcSquared < 0)
        return false;

    insideSphere ? intersection = tca + sqrt(thcSquared) : intersection = tca - sqrt(thcSquared);
    return true;
}
Here is some context from the ray tracing function that calls the intersection function. FYI my camera is at (0, 0, 0) and that is what is in my "origin" variable in the ray tracing function:
#define WINDOW_WIDTH 640
#define WINDOW_HEIGHT 480
#define WINDOW_METERS_WIDTH 30
#define WINDOW_METERS_HEIGHT 20
#define FOCAL_LENGTH 25
rayDirection.z = FOCAL_LENGTH * -1;
for (int r = 0; r < WINDOW_HEIGHT; r++)
{
    rayDirection.y = (WINDOW_METERS_HEIGHT / 2 * -1) + (r * ((float)WINDOW_METERS_HEIGHT / (float)WINDOW_HEIGHT));
    for (int c = 0; c < WINDOW_WIDTH; c++)
    {
        intersection = false;
        t = 0.0;
        rayDirection.x = (WINDOW_METERS_WIDTH / 2 * -1) + (c * ((float)WINDOW_METERS_WIDTH / (float)WINDOW_WIDTH));
        rayDirection = rayDirection - origin;

        for (int i = 0; i < NUM_SPHERES; i++)
        {
            if (spheres[i].intersect(CAM_POS, rayDirection, t))
            {
                intersection = true;
            }
        }
Thanks for taking a look and let me know if there is any other code that may help!
It seems you got your math a bit mixed up. The first part of the function, i.e. until the first return false, is OK and will return false if the ray starts outside the sphere and doesn't head toward it. However, I think you placed the camera outside all your spheres in such a way that all spheres are visible, which is why this part never returns false.
thcSquared is really wrong and I don't know what it is supposed to represent.
Let's do the intersection mathematically. We have:
origin : the start of the ray, let's call this A
rayDirection : the direction of the infinite ray, let's call this d.
position : the center of the sphere, called P
radius : self-explanatory, called r
What you want is a point on both the sphere and the line, let's call it M:
M = A + t * d because it is on the line
|M - P| = r because it is on the sphere
The second equation can be changed to |A + t * d - P|² = r², which gives (A - P)² + 2 * t * (A - P).dot(d) + t²d² = r². This is a simple quadratic equation. Once solved, you have 0, 1 or 2 solutions; select the one closest to the ray origin that is non-negative.
Edit: since you are forced to use another approach, I will detail it here:
Compute the distance between the center of the sphere and the line (calling it l). This is done by 'projecting' the center on the line. So:
tca = ( (P - A) dot d ) / |d|, or with your variable names, tca = (OC dot rd) / |rd|. The projection is H = A + tca * d / |d|, and l = |H - P|.
If l > R then return false, there is no intersection.
Let's call M one intersection point. The triangle MHP has a right angle at H, so MH² + HP² = MP²; in other terms thc² + l² = r², which gives us thc = sqrt(r² - l²), the distance from H to the sphere's surface along the line.
With all that, t = tca ± thc; simply take the lowest non-negative of the two.
The paper you linked explains this, but without saying that it assumes the norm of the ray direction to be 1. I don't see a normalization in your code; that may be why your code fails (not verified).
Side note: the name Vertex for a 3d vector is really badly chosen, something like Vector3 or vec3 would be way better.
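Following the side note's naming suggestion, here is a minimal sketch of the geometric approach above. It is illustrative only and assumes a hypothetical Vec3 type with dot(), length() and division by a scalar.
// Minimal sketch of the geometric ray/sphere test described above.
// A = ray origin, dir = ray direction (any length), P = sphere center, r = radius.
bool raySphereIntersect(const Vec3& A, Vec3 dir, const Vec3& P, float r, float& t)
{
    dir = dir / dir.length();            // the derivation assumes a unit direction
    Vec3 oc = P - A;
    float tca = oc.dot(dir);             // distance from A to the projection H along the ray
    float l2  = oc.dot(oc) - tca * tca;  // squared distance from the center to the line
    if (l2 > r * r)
        return false;                    // l > r: the line misses the sphere
    float thc = sqrtf(r * r - l2);       // distance from H to the surface along the line
    float t0 = tca - thc;                // nearer hit
    float t1 = tca + thc;                // farther hit (used when the origin is inside)
    t = (t0 >= 0.0f) ? t0 : t1;          // lowest non-negative of the two
    return t >= 0.0f;                    // false if the sphere is entirely behind the origin
}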
I'm attempting to implement Sphere-Plane collision detection in C++. I have a Vector3, Plane and Sphere class.
#include "Vector3.h"
#ifndef PLANE_H
#define PLANE_H
class Plane
{
public:
Plane(Vector3, float);
Vector3 getNormal() const;
protected:
float d;
Vector3 normal;
};
#endif
I know the equation for a plane is Ax + By + Cz + D = 0, and that the sphere test can be written as N.S + d < r, where N is the normal vector of the plane, S is the center of the sphere, r is the radius of the sphere and d is the distance from the origin. How do I calculate the value of d from my Plane and Sphere?
bool Sphere::intersects(const Plane& other) const
{
    // return other.getNormal() * this->currentPosition + other.getDistance() < this->radius;
}
I needed the same computation in a game I made. This is the minimum distance from a point to a plane:
distance = (q - plane.p[0])*plane.normal;
Except for distance, all variables are 3D vectors (I use a simple class I made with operator overloading).
distance: minimum distance from a point to the plane (scalar).
q: the point (3D vector), in your case is the center of the sphere.
plane.p[0]: a point (3D vector) belonging to the plane. Note that any point belonging to the plane will work.
plane.normal: normal to the plane.
The * is a dot product between vectors. Can be implemented in 3D as a*b = a.x*b.x + a.y*b.y + a.z*b.z and yields a scalar.
Explanation
The dot product is defined:
a*b = |a| * |b| * cos(angle)
or, in our case:
a = q - plane.p[0]
a*plane.normal = |a| * |plane.normal| * cos(angle)
As plane.normal is unitary (|plane.normal| == 1):
a*plane.normal = |a| * cos(angle)
a is the vector from a point in the plane to the point q. angle is the angle between a and the normal to the plane. Then |a| * cos(angle) is the projection onto the normal, which is the perpendicular distance from the point to the plane.
There is a rather simple formula for the point-plane distance with the plane equation
Ax+By+Cz+D=0 (eq. 10 here)
Distance = (A*x0+B*y0+C*z0+D)/Sqrt(A*A+B*B+C*C)
where (x0,y0,z0) are the point coordinates. If your plane normal vector (A,B,C) is normalized (unit length), then the denominator may be omitted.
(The sign of the distance is usually not important for intersection purposes.)
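Tying both answers back to the Sphere::intersects stub in the question, a possible sketch is below. It assumes, as in the commented-out line, that operator* is the dot product, that getDistance() returns d from dot(normal, X) + d = 0, and that the plane normal is unit length.
// Sketch only: plane normal assumed unit length, operator* assumed to be the dot product.
bool Sphere::intersects(const Plane& other) const
{
    float signedDistance = other.getNormal() * this->currentPosition + other.getDistance();
    return fabsf(signedDistance) < this->radius; // the sign of the distance does not matter here
}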
I have a plane defined by the standard plane equation a*x + b*y + c*z + d = 0, which I would like to be able to draw using OpenGL. How can I derive the four points needed to draw it as a quadrilateral in 3D space?
My plane type is defined as:
struct Plane {
float x,y,z; // plane normal
float d;
};
void DrawPlane(const Plane & p)
{
???
}
EDIT:
So, rethinking the question, what I actually wanted was to draw a discrete representation of a plane in 3D space, not an infinite plane.
Based on the answer provided by @a.lasram, I have produced this implementation, which does just that:
void DrawPlane(const Vector3 & center, const Vector3 & planeNormal, float planeScale, float normalVecScale, const fColorRGBA & planeColor, const fColorRGBA & normalVecColor)
{
    Vector3 tangent, bitangent;
    OrthogonalBasis(planeNormal, tangent, bitangent);

    const Vector3 v1(center - (tangent * planeScale) - (bitangent * planeScale));
    const Vector3 v2(center + (tangent * planeScale) - (bitangent * planeScale));
    const Vector3 v3(center + (tangent * planeScale) + (bitangent * planeScale));
    const Vector3 v4(center - (tangent * planeScale) + (bitangent * planeScale));

    // Draw wireframe plane quadrilateral:
    DrawLine(v1, v2, planeColor);
    DrawLine(v2, v3, planeColor);
    DrawLine(v3, v4, planeColor);
    DrawLine(v4, v1, planeColor);

    // And a line depicting the plane normal:
    const Vector3 pvn(
        (center[0] + planeNormal[0] * normalVecScale),
        (center[1] + planeNormal[1] * normalVecScale),
        (center[2] + planeNormal[2] * normalVecScale)
    );
    DrawLine(center, pvn, normalVecColor);
}
Where OrthogonalBasis() computes the tangent and bi-tangent from the plane normal.
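OrthogonalBasis() is my own helper, not library code. One possible sketch of it, picking a world axis that isn't nearly parallel to the normal and then using cross products, is shown below (Cross() and Normalize() are assumed free functions on Vector3):
// Hypothetical helper, sketched under the assumption that Vector3 provides
// Cross(), Normalize(), a three-float constructor and operator[] access.
void OrthogonalBasis(const Vector3 & normal, Vector3 & tangent, Vector3 & bitangent)
{
    // Pick a world axis that is not nearly parallel to the normal,
    // so the cross product below cannot degenerate.
    const Vector3 axis = (fabsf(normal[0]) < 0.9f) ? Vector3(1.0f, 0.0f, 0.0f)
                                                   : Vector3(0.0f, 1.0f, 0.0f);
    tangent   = Normalize(Cross(normal, axis));
    bitangent = Normalize(Cross(normal, tangent));
}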
To see the plane as if it's infinite you can find 4 quad vertices so that the clipped quad and the clipped infinite plane form the same polygon. Example:
Sample 2 random points P1 and P2 on the plane such that P1 != P2.
Deduce a tangent t and bi-tangent b as
t = normalize(P2-P1); // get a normalized tangent
b = cross(t, n); // the bi-tangent is the cross product of the tangent and the normal
Compute the bounding sphere of the view frustum. The sphere would have a diameter D (if this step seems difficult, just set D to a value large enough that the corresponding sphere encompasses the frustum).
Get the 4 quad vertices v1 , v2 , v3 and v4 (CCW or CW depending on the choice of P1 and P2):
v1 = P1 - t*D - b*D;
v2 = P1 + t*D - b*D;
v3 = P1 + t*D + b*D;
v4 = P1 - t*D + b*D;
One possibility (possibly not the cleanest) is to get the orthogonal vectors aligned to the plane and then choose points from there.
P1 = < x, y, z >
t1 = random non-zero, non-co-linear vector with P1.
P2 = norm(P1 cross t1)
P3 = norm(P1 cross P2)
Now all points in the desired plane are defined as a starting point plus a linear combination of P2 and P3. This way you can get as many points as desired for your geometry.
Note: the starting point is just your plane normal < x, y, z > multiplied by -d (with a unit-length normal and the convention a*x + b*y + c*z + d = 0, this lands exactly on the plane).
Also of interest, with clever selection of t1, you can also get P2 aligned to some view. Say you are looking at the x, y plane from some z point. You might want to choose t1 = < 0, 1, 0 > (as long as it isn't co-linear to P1). This yields P2 with 0 for the y component, and P3 with 0 for the x component.
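Tying this back to the original DrawPlane(const Plane & p) stub, a rough sketch under the assumptions above (unit-length normal, a*x + b*y + c*z + d = 0, the OrthogonalBasis() helper from the edit, and a hypothetical halfSize extent) could look like:
// Rough sketch only, not a drop-in implementation.
void DrawPlane(const Plane & p, float halfSize, const fColorRGBA & color)
{
    const Vector3 n(p.x, p.y, p.z);
    const Vector3 origin = n * -p.d; // a point on the plane (unit normal assumed)

    Vector3 tangent, bitangent;
    OrthogonalBasis(n, tangent, bitangent);

    const Vector3 v1(origin - (tangent * halfSize) - (bitangent * halfSize));
    const Vector3 v2(origin + (tangent * halfSize) - (bitangent * halfSize));
    const Vector3 v3(origin + (tangent * halfSize) + (bitangent * halfSize));
    const Vector3 v4(origin - (tangent * halfSize) + (bitangent * halfSize));

    DrawLine(v1, v2, color);
    DrawLine(v2, v3, color);
    DrawLine(v3, v4, color);
    DrawLine(v4, v1, color);
}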
The problem is I have two points in 3D space where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them whose length is the distance between the two points, so that both of its end-cap centers touch the two points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, that way I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);

    if(point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x)/2.0, (point.y + parent.y)/2.0, (point.z + parent.z)/2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given down below, but it did not seem to work at all
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;

    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);

    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I got most of it, but I want to map the branches to it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there's an arbitrary number of rotation matrices satisfying your constraints. But any will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. So you have to transform the local space Z axis toward the direction between those two points. I.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate basis. We start with an arbitrary vector that's not parallel to z_t. Say, one of (1,0,0), (0,1,0) or (0,0,1); take the scalar product · (also called inner, or dot product) of each with z_t and use the vector for which the absolute value is the smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector, yielding a local y transformation y_t = normalize(u - (z_t · u) * z_t).
Finally create the x transformation by taking the cross product x_t = z_t × y_t
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from" written down as if seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t side by side as the columns of a matrix. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form using a 0,0,0,1 bottommost row and rightmost column.
You can then load that into OpenGL: if you use the fixed-function pipeline, apply the rotation with glMultMatrix; if you use shaders, multiply it onto the matrix you eventually pass to glUniform.
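As an illustrative sketch only (assuming GLM-style, column-major vec3/vec4/mat4 with normalize(), dot() and cross(); adapt if your math library, like the mat4/vec3 used elsewhere in this question, differs):
// Builds the rotation described above: maps the cylinder's local +Z axis onto
// the direction between the two points. Not drop-in code.
mat4 segmentRotation(vec3 p1, vec3 p2)
{
    vec3 z_t = normalize(p1 - p2);                 // local Z axis mapped onto the segment

    // Helper vector u: the world axis least aligned with z_t.
    vec3 a = vec3(fabsf(z_t.x), fabsf(z_t.y), fabsf(z_t.z));
    vec3 u;
    if (a.x <= a.y && a.x <= a.z)      u = vec3(1, 0, 0);
    else if (a.y <= a.x && a.y <= a.z) u = vec3(0, 1, 0);
    else                               u = vec3(0, 0, 1);

    vec3 y_t = normalize(u - dot(z_t, u) * z_t);   // Gram-Schmidt: strip the z_t component
    vec3 x_t = cross(y_t, z_t);                    // ordered so the basis stays right-handed

    mat4 R(1.0f);                                  // identity, so the last row/column is 0,0,0,1
    R[0] = vec4(x_t, 0.0f);                        // the columns are the transformed axes
    R[1] = vec4(y_t, 0.0f);
    R[2] = vec4(z_t, 0.0f);
    return R;
}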
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'll call your two points in world coordinates P1 and P2; we want to locate C1 at P1 and C2 at P2.
Start with translating the cylinder by P1, which successfully locates C1 to P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates: two angles; one around the pole axis (in your case the world's Y-axis) which we typically call yaw, the other one is a pitch angle (in your case the X axis in model space). These two angles can be computed by converting P2-P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.