I'm reading this neat article here: Frustum Culling
and it says that the distance between the center of a sphere and a frustum side (a plane) is:
C = center of sphere
N = normal of plane
D = distance of plane along normal from origin
Distance = DotProduct(C, N) + D
But I don't understand what variable D refers to. Particularly, I don't understand what the origin of the frustum is. Is it where the camera eye would be?
D is the perpendicular distance you would travel along the plane's normal to reach the origin of whichever space the plane is defined in. Most often that's the world-space origin, but if your planes are described in camera coordinates, it's the camera origin. Ultimately it doesn't matter which, as long as the sphere and the planes are expressed in the same space.
This is the same value d in the plane equation Ax + By + Cz + d = 0, where (A, B, C) is the plane's unit normal vector. You can calculate d by taking a known point (x, y, z) on the plane and solving the equation for d, i.e. d = -(Ax + By + Cz).
Just be mindful to do all of your calculations in the same space, be that world space, camera space, or screen space. I suspect you'll want to do your calculations in world space.
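As a minimal sketch of that calculation (plain Python; the plane and sphere values are made-up examples):

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

# A plane through plane_point with unit normal N satisfies N·X + D = 0,
# so D = -dot(N, plane_point).
normal = (0.0, 1.0, 0.0)        # unit normal of the plane (here, the plane y = 5)
plane_point = (0.0, 5.0, 0.0)   # any known point on the plane
D = -dot(normal, plane_point)

# Signed distance from a sphere center to the plane (both in the same space!).
center = (2.0, 8.0, -1.0)
distance = dot(center, normal) + D
print(distance)   # 3.0: the center is 3 units above the plane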
I know that the default OpenGL perspective projection matrix preserves straight lines, at least in XY: if three points are collinear in eye space, the XY coordinates of the three points in NDC will also lie on a straight line. But what about XYZ in NDC? Will the XYZ coordinates of the projected points still be collinear? (I'm asking because it currently looks to me like they're not, but I might be wrong.)
If not, is there a way to change the projection matrix so that the post-projection points will have this property?
Proof by geometric argument
For any line L, there is a plane S that contains all the points in L and the origin O.
Any line connecting a point P in L and the origin is also contained in S. So the projection will be in the intersection between the plane S and the plane z=1, and that's a straight line.
Using equations
Usually the projection maps (x, y, z) to (x / z, y / z), for a camera at the origin looking down the z axis.
Consider the parametric equation of the straight line to be (a*t + cx, b*t + cy, c*t + cz); the derivative of the curve in 3D is the constant vector (a, b, c). In the 2D space you have
dx/dt = ((c*t+cz)*a - (a*t+cx)*c) / (c*t+cz)^2 = (cz*a - cx*c) / z^2
dy/dt = ((c*t+cz)*b - (b*t+cy)*c) / (c*t+cz)^2 = (cz*b - cy*c) / z^2
Dividing, (dx/dt)/(dy/dt) = (cz*a - cx*c) / (cz*b - cy*c), which does not depend on t.
This draws a curve whose tangent keeps a constant direction, a.k.a. a straight line.
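If you want to check this numerically, here is a quick sketch (Python; the sample points are arbitrary) that projects three collinear points with (x/z, y/z) and tests the 2D results for collinearity:

def project(p):
    x, y, z = p
    return (x / z, y / z)   # pinhole projection onto the plane z = 1

# Three collinear points: c + t*d for t = 0, 1, 2.
c = (0.5, -1.0, 3.0)
d = (1.0, 2.0, 0.5)
pts = [tuple(c[i] + t * d[i] for i in range(3)) for t in (0.0, 1.0, 2.0)]

(ax, ay), (bx, by), (cx, cy) = (project(p) for p in pts)

# 2D cross product of (B-A) and (C-A); zero (up to rounding) means collinear.
cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
print(abs(cross) < 1e-12)   # True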
I'm trying to implement textures for spheres in my ray tracer. I managed to get something working, but I am unsure about its correctness. Below is the code for getting the texture coordinates. For now, the texture is random and is generated at runtime.
virtual void GetTextureCoord(Vect hitPoint, int hres, int vres, int& x, int& y) {
    float theta = acos(hitPoint.getVectY());
    float phi = atan2(hitPoint.getVectX(), hitPoint.getVectZ());
    if (phi < 0.0) {
        phi += TWO_PI;
    }
    float u = phi * INV_TWO_PI;
    float v = 1 - theta * INV_PI;
    y = (int) ((hres - 1) * u);
    x = (int) ((vres - 1) * v);
}
This is how the spheres look now:
I had to normalize the coordinates of the hit point to get the spheres to look like that. Otherwise they would look like:
Was normalising the hit point coordinates the right approach, or is something else broken in my code? Thank you!
Instead of normalising the hit point, I tried translating it to the world origin (as if the sphere center was there) and obtained the following result:
I'm using a 256x256 resolution texture by the way.
It's unclear what you mean by "normalizing" the hit point since there's nothing that normalizes it in the code you posted, but you mentioned that your hit point is in world space.
Also, you didn't say what texture mapping you're trying to implement, but I assume you want your U and V texture coordinates to represent latitude and longitude on the sphere's surface.
Your first problem is that converting Cartesian to spherical coordinates requires that the sphere is centered at the origin in the Cartesian space, which isn't true in world space. If the hit point is in world space, you have to subtract the sphere's world-space center point to get the effective hit point in local coordinates. (You figured this part out already and updated the question with a new image.)
Your second problem is that the way you're calculating theta requires that the sphere have a radius of 1, which isn't true even after you move the sphere's center to the origin. Remember your trigonometry: the argument to acos is the ratio of a triangle's side to its hypotenuse, and is always in the range [-1, +1]. In this case your Y coordinate is the side, and the sphere's radius is the hypotenuse. So you have to divide by the sphere's radius when calling acos. It's also a good idea to clamp the value to the [-1, +1] range in case floating-point rounding error puts it slightly outside.
(In principle you'd also have to divide the X and Z coordinates by the radius, but you're only using those for an inverse tangent, and dividing them both by the radius won't change their quotient and thus won't change phi.)
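Putting both fixes together, something like this (a sketch in Python for brevity; the sphere's center and radius are assumed to be available, since they aren't in the posted code):

import math

def get_texture_coord(hit_point, center, radius, hres, vres):
    # Fix 1: move the hit point into the sphere's local space.
    lx = hit_point[0] - center[0]
    ly = hit_point[1] - center[1]
    lz = hit_point[2] - center[2]

    # Fix 2: divide by the radius, and clamp so rounding error
    # can't push the argument of acos outside [-1, +1].
    theta = math.acos(max(-1.0, min(1.0, ly / radius)))

    # x and z need no division: atan2 only uses their ratio.
    phi = math.atan2(lx, lz)
    if phi < 0.0:
        phi += 2.0 * math.pi

    u = phi / (2.0 * math.pi)
    v = 1.0 - theta / math.pi

    # Same index convention as the posted code.
    return int((vres - 1) * v), int((hres - 1) * u)   # (x, y)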
Right now your sphere intersection and texture-coordinate functions are operating in world space, but you'll probably find it useful later to implement transformation matrices, which let you transform things from one coordinate space to another. Then you can change your sphere functions to operate in a local coordinate space where the center is the origin and the radius is 1, and give each object an associated transformation matrix that maps the local coordinate space to the world coordinate space. This will simplify your ray/sphere intersection code, and let you remove the origin subtraction and radius division from GetTextureCoord (since they're always (0, 0, 0) and 1 respectively).
To intersect a ray with an object, you'd use the object's transformation matrix to transform the ray into the object's local coordinate space, do the intersection (and compute texture coordinates) there, and then transform the result (e.g. hit point and surface normal) back to world space.
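As a small sketch of that last step (Python; the matrix and the names are illustrative, not from the posted code):

def transform_point(m, p):
    # 4x4 row-major matrix times (x, y, z, 1); translation applies to points.
    x, y, z = p
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(3))

def transform_vector(m, v):
    # 4x4 row-major matrix times (x, y, z, 0); translation is ignored.
    x, y, z = v
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z for r in range(3))

# Example: a sphere of radius 2 centered at (5, 0, 0). Its world-to-local
# matrix scales by 1/2 and translates the center to the origin.
world_to_local = [
    [0.5, 0.0, 0.0, -2.5],
    [0.0, 0.5, 0.0,  0.0],
    [0.0, 0.0, 0.5,  0.0],
    [0.0, 0.0, 0.0,  1.0],
]

ray_origin = (0.0, 0.0, 0.0)
ray_dir = (1.0, 0.0, 0.0)

# Intersect the unit sphere at the origin using these instead, then map
# the hit point back with the inverse (local-to-world) matrix.
local_origin = transform_point(world_to_local, ray_origin)   # (-2.5, 0.0, 0.0)
local_dir = transform_vector(world_to_local, ray_dir)        # (0.5, 0.0, 0.0)
print(local_origin, local_dir)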
Consider a simple convex polygon in 2D Cartesian space, given as a list of vertex coordinates sorted in counter-clockwise orientation, like this: [[x0, y0], ..., [xn, yn]]. How could you compute the center of the polygon (the point inside the polygon that is equidistant to all vertices)?
Also consider a second case where the polygon is placed in 3D Cartesian space and its normal vector is not parallel to any of the Cartesian axes. How could you compute the center, without rotating the polygon?
I can read C/C++, Fortran, MATLAB and Python, however any pseudo-code is also well appreciated.
EDIT
I now realise that my question was not well-posed. I am sorry for that. It appears that what I was looking for is the centroid of the polygon (i.e. the point on which a cardboard cut-out would balance while assuming uniform density and a uniform gravity field).
Your definition of center doesn't make sense in general.
To see this, just draw three non-aligned points on a plane and compute the one and only circle that passes through all three. Clearly your center of the triangle must be the center of this circle.
Now draw a fourth point that doesn't lie on the circle and form the four-sided polygon. What is the center? There is no point in the plane that is equidistant from all the vertices.
Note also that even in the case of triangles, using the point equidistant from the vertices can give you points outside and far away from the polygon, and is also numerically unstable (given any ε>0 and M>0, you can always build a triangle in which moving a specific vertex by a distance of less than ε moves the center by a distance greater than M).
Commonly used "centers" that are simple to compute are the average of all vertices, the average of the boundary, the center of mass or even just the center of the axis-aligned bounding box. All of them can however fall outside the polygon if the polygon is not convex, but in your case they may work.
The simplest reasonable one (because it doesn't depend on the coordinate system) is the barycenter of the vertices (code in Python):
xc = sum(x for (x, y) in points) / len(points)
yc = sum(y for (x, y) in points) / len(points)
One bad thing about it is that just splitting one side of the polygon in two gives you a different center (in other words, it depends on the vertices and not on the set of points bounded by the polygon). The simplest one that depends on the polygon itself is IMO the barycenter of the boundary:
sx = sy = sL = 0
for i in range(len(points)):  # counts from 0 to len(points)-1
    x0, y0 = points[i - 1]    # in Python points[-1] is the last element of points
    x1, y1 = points[i]
    L = ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
    sx += (x0 + x1)/2 * L
    sy += (y0 + y1)/2 * L
    sL += L
xc = sx / sL
yc = sy / sL
For both of them the extension to 3d is trivial... just add z using the same formulas.
In the case of a general (not necessarily convex, not necessarily simply connected) polygon a "center" that I found useful but that is not trivial to compute is the (an) inner point that is at a maximum distance from the boundary (in other words a "most inner" point).
In this case I resorted to use a discrete (bitmap) representation and a gaussian distance transform.
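If, per the question's EDIT, the cardboard cut-out balance point (the center of mass mentioned above) is what's wanted, that is also only a few lines via the standard shoelace formula (Python, same conventions as the snippets above):

def polygon_centroid(points):
    sx = sy = sa = 0.0
    for i in range(len(points)):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        cross = x0 * y1 - x1 * y0    # twice the signed area of triangle O, P0, P1
        sx += (x0 + x1) * cross
        sy += (y0 + y1) * cross
        sa += cross
    return sx / (3.0 * sa), sy / (3.0 * sa)

print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))   # (0.5, 0.5) for the unit square

Note that unlike the two averages above, this one is not a per-coordinate mean, so for the 3D case you would first express the vertices in a 2D basis lying in the polygon's plane rather than just adding z.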
First of all, for a polygon the centroid is generally not equidistant from the vertices; in most cases this is NOT true. That being said, you can find the centroid simply by taking the mean of your x coordinates and the mean of your y coordinates. In MATLAB: centroidx = mean(xcoords) and centroidy = mean(ycoords) are the coordinates of the centroid. See this if you really need more.
I have a 3D world where I have several 2D circles lying on the ground, facing the sky.
How can I check if a line will intersect one of those circles from top to bottom?
I tried to search, but all I get is this kind of intersection test:
http://mathworld.wolfram.com/Circle-LineIntersection.html
but it's not what I need; here is an image of what I mean:
http://imageshack.us/m/192/8343/linecircleintersect.png
If you are in a coordinate system where the ground is given by z = c for some constant c, then you can simply calculate the x, y coordinates of the line at z = c. Now for a circle with center x0, y0 and radius R, you would simply check if
(x - x0)^2 + (y - y0)^2 <= R^2.
If this is true, the line intersects the circle.
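As a sketch of that test (Python; the coordinates are made-up examples): parameterize the line, solve for the t where z = c, and apply the 2D check:

def hits_circle(p0, p1, c, x0, y0, R):
    # Line through p0 and p1; solve p0.z + t*(p1.z - p0.z) = c for t.
    dz = p1[2] - p0[2]
    if dz == 0.0:
        return False   # line parallel to the ground plane
    t = (c - p0[2]) / dz
    x = p0[0] + t * (p1[0] - p0[0])
    y = p0[1] + t * (p1[1] - p0[1])
    return (x - x0)**2 + (y - y0)**2 <= R**2

# A line dropping straight down through (1, 1) hits a radius-2 circle at the origin.
print(hits_circle((1, 1, 10), (1, 1, -10), 0.0, 0.0, 0.0, 2.0))   # True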
In a 3D sense you are concerned first not with the circle but with the plane the circle lies on. Then you can find the point of intersection between the ray (line) and that plane, and test it against the disk.
I like to use homogeneous coordinates for points, planes and lines, and I hope you are familiar with the vector dot · and cross × products. Here is the method:
Plane (disk) is defined by a point vector r=[rx,ry,rz] and a normal direction vector n=[nx,ny,nz]. Together they form a plane W=[W1,W2]=[n,-r·n].
Line (ray) is defined by two point vectors r_A=[rAx,rAy,rAz] and r_B=[rBx,rBy,rBz]. Together they form the line L=[L1,L2]=[r_B-r_A, r_A×r_B]
The intersecting point is defined by P=[P1,P2]=[W1×L2 - W2*L1, W1·L1], or expanded out as
P=[ n×(r_A×r_B) + (r·n)*(r_B-r_A), n·(r_B-r_A) ]
The coordinates for the point are found by r_P = P1/P2 where P1 has three elements and P2 is scalar.
Once you have the coordinates you check the distance with the center of the circle by d=sqrt((r_p-r)·(r_p-r)) and checking d<=R where R is the radius of the circle. Note the difference in notation between a scalar multiplication * and a dot product ·
If you know for sure that the circles lie on the ground (r=[0,0,0]) and face up (n=[0,0,1]) then you can make a lot of simplifications to the above general case.
[ref: Plücker Coordinates]
Update:
When using the ground (with +Z up) as the plane where the circles lie, use r=[rx,ry,0] and n=[0,0,1], and the above intersection point simplifies to
r_p = [ rAx*rBz - rAz*rBx, rAy*rBz - rAz*rBy, 0 ] / (rBz - rAz)
of which you can check the distance to the circle center.
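If the homogeneous form is unfamiliar, the same test in plain vector algebra looks like this (a Python sketch; all the values are arbitrary examples):

def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
def dot(a, b): return sum(a[i] * b[i] for i in range(3))

def line_hits_disk(r_a, r_b, r, n, R):
    d = sub(r_b, r_a)                  # line direction
    denom = dot(n, d)
    if denom == 0.0:
        return False                   # line parallel to the plane
    t = dot(n, sub(r, r_a)) / denom    # from the plane equation n·(x - r) = 0
    p = tuple(r_a[i] + t * d[i] for i in range(3))
    return dot(sub(p, r), sub(p, r)) <= R * R

# Radius-1 circle on the ground at (2, 0, 0); a vertical line through (2.5, 0) hits it.
print(line_hits_disk((2.5, 0, 5), (2.5, 0, -5), (2, 0, 0), (0, 0, 1), 1.0))   # True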
I wish to generate rays from the camera through the viewing plane. In order to do this, I need my camera position ("eye"), the up, right, and towards vectors (where towards is the vector from the camera in the direction of the object that the camera is looking at) and P, the point on the viewing plane. Once I have these, the ray that's generated is:
ray = camera_eye + t*(P-camera_eye);
where t is the distance along the ray (assume t = 1 for now).
My question is, how do I obtain the 3D coordinates of point P given that it is located at position (i,j) on the viewing plane? Assume that the upper left and lower right corners of the viewing plane are given.
NOTE: The viewing plane is not actually a plane in the sense that it doesn't extend infinitely in all directions. Rather, one may think of this plane as a width x height image. In the x direction, the range is 0 --> width and in the y direction the range is 0 --> height. I wish to find the 3D coordinate of the (i,j)th element, 0 <= i < width, 0 <= j < height.
For the general solution of the intersection of a line and a plane, see http://local.wasp.uwa.edu.au/~pbourke/geometry/planeline/
Your particular graphics lib (OpenGL/DirectX etc.) may have a standard way to do this.
edit: Are you trying to find the 3D intersection of a screen point (e.g. a mouse cursor) with a 3D object in your scene?
To work out P, you need the distance from the camera to the near clipping plane (the screen), the size of the window on the near clipping plane (which you can work out from the view angle), and the size of the rendered window.
Scale the screen position to the range -1 < x < +1 and -1 < y < +1 where +1 is the top/right and -1 is the bottom/left
Scale normalised x,y by the view window size
Scale by the right and up vectors of the camera and sum the results
Add the look at vector scaled by the clipping plane distance
In effect, you get (as an offset from the camera position):
p = at * near_clip_dist + x * right + y * up
where x and y are:
x = (screen_x - screen_centre_x) / (width / 2) * view_width
y = (screen_y - screen_centre_y) / (height / 2) * view_height
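Tying these steps together (a Python sketch; the camera values are made up, and view_width / view_height are taken to be the half-extents of the window on the near plane):

def pixel_ray_offset(sx, sy, width, height,
                     right, up, at, near_dist, view_width, view_height):
    # Scale the pixel position to -1..+1 (top/right = +1).
    nx = (sx - width / 2.0) / (width / 2.0)
    ny = (height / 2.0 - sy) / (height / 2.0)   # screen y usually grows downward
    x = nx * view_width
    y = ny * view_height
    # p = at * near_clip_dist + x * right + y * up, relative to the eye.
    return tuple(at[i] * near_dist + x * right[i] + y * up[i] for i in range(3))

# Axis-aligned example camera: looking down -z, right = +x, up = +y.
p = pixel_ray_offset(320, 240, 640, 480,
                     (1, 0, 0), (0, 1, 0), (0, 0, -1), 1.0, 0.5, 0.375)
print(p)   # (0.0, 0.0, -1.0): the center pixel looks straight ahead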
When I plugged the suggested formulas directly into my program, I didn't obtain correct results (maybe some debugging was needed). My initial problem was a misunderstanding of the (x,y,z) coordinates of the interpolating corner points: I was treating the x, y, and z coordinates separately, which I should not do (and this may be specific to the application, since the camera can be oriented in any direction). Instead, the solution turned out to be a simple interpolation of the corner points of the viewing plane (sketched in code after the steps below):
interpolate the bottom corner points in the i direction to get P1
interpolate the top corner points in the i direction to get P2
interpolate P1 and P2 in the j direction to get the world coordinates of the final point
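In code, that interpolation might look like this (a Python sketch; the corner names and the i/j normalization are assumptions):

def lerp(a, b, t):
    return tuple(a[k] + t * (b[k] - a[k]) for k in range(3))

def point_on_plane(i, j, width, height,
                   bottom_left, bottom_right, top_left, top_right):
    s = i / (width - 1.0)    # 0 at the left edge, 1 at the right
    t = j / (height - 1.0)   # 0 at the bottom edge, 1 at the top
    p1 = lerp(bottom_left, bottom_right, s)   # bottom corners in the i direction
    p2 = lerp(top_left, top_right, s)         # top corners in the i direction
    return lerp(p1, p2, t)                    # then P1 and P2 in the j direction

# Example: a 2x2 world-unit plane at z = -1; the ray direction is then P - camera_eye.
bl, br = (-1.0, -1.0, -1.0), (1.0, -1.0, -1.0)
tl, tr = (-1.0, 1.0, -1.0), (1.0, 1.0, -1.0)
print(point_on_plane(0, 0, 640, 480, bl, br, tl, tr))   # (-1.0, -1.0, -1.0)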