Analytic method for calculating a mirror angle - C++

I have, in 3D space, a fixed light ray Lr and a mirror M that can rotate about a fixed point Mrot. This point is not on the plane of the mirror; in other words, the mirror plane is tangent to a sphere of fixed radius d centered at Mrot. With that configuration I want to find an equation that takes a point P as a parameter and yields the rotation of the mirror in 3D space such that the reflected ray passes through P.
We can consider the mirror plane to be borderless (an infinite plane) and its rotation to be unlimited. Also, the mirror reflects only on the side facing away from its rotation point.
The picture shows two cases with different input points P1 and P2, with their respective solution angles alpha1 and alpha2. The pictures are in 2D to simplify the drawings; the real case is in 3D.
At the moment I calculate the intersection of the ray with the mirror plane at some initial rotation, then compute the ray reflection and see how far it lands from the point (P) I want to reach. Finally I iterate, adjusting the rotation until it matches.
Obviously this is overkill, but I can't figure out how to code it analytically.
Any thoughts?
Note: I have noticed that if the mirror rotates about a point (Mrot) contained in its plane, and the light ray passes through that point (Mrot), I can easily calculate the mirror angle, but unfortunately that is not my case.

First note that there is only one parameter here, namely the distance t along the ray at which it hits the mirror.
For any test value of t, compute in order:
1. The point at which reflection occurs.
2. The vectors of the incident and reflected rays.
3. The normal vector of the mirror, found by taking the mean of the normalized incident and reflected vectors. Together with 1, you now know the plane of the mirror.
4. The distance d of the mirror to the rotation point.
The problem is now to choose t to make d take the desired value. This boils down to an octic polynomial in t, so there is no analytic formula[1] and the only solution is to iterate.[2]
Here's a code sample:
vec3 r; // Ray start position
vec3 v; // Ray direction
vec3 p; // Target point
vec3 m; // Mirror rotation point

// Signed distance from the rotation point to the mirror plane for a
// given distance t along the ray.
double calc_d_from_t(double t)
{
    vec3 reflection_point = r + t * v;                    // 1. reflection point
    vec3 incident  = normalize(-v);                       // 2. incident ray
    vec3 reflected = normalize(p - reflection_point);     //    reflected ray
    vec3 mirror_normal = normalize(incident + reflected); // 3. mirror normal
    return dot(reflection_point - m, mirror_normal);      // 4. distance to Mrot
}
Now pass calc_d_from_t(t) = d to your favourite root finder, making sure to find the root with t > 0. Any half-decent root finder (e.g. Newton-Raphson) should be much faster than your current method.
[1] I.e. a formula involving arithmetic operations, nth roots and the coefficients.
[2] Unless the octic factorises identically, potentially reducing the problem to a quartic.
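For illustration, here is a minimal root-finding sketch using bisection (the bracket [t_lo, t_hi] and the tolerance are assumptions; Newton-Raphson with a numerical derivative would work the same way):

// Solve calc_d_from_t(t) == d for t > 0 by bisection.
// Assumes calc_d_from_t(t) - d changes sign on [t_lo, t_hi].
double solve_t(double d, double t_lo, double t_hi)
{
    double f_lo = calc_d_from_t(t_lo) - d;
    for (int i = 0; i < 100 && (t_hi - t_lo) > 1e-9; ++i) {
        double t_mid = 0.5 * (t_lo + t_hi);
        double f_mid = calc_d_from_t(t_mid) - d;
        if (f_lo * f_mid <= 0.0) {
            t_hi = t_mid;      // sign change in the lower half
        } else {
            t_lo = t_mid;      // sign change in the upper half
            f_lo = f_mid;
        }
    }
    return 0.5 * (t_lo + t_hi);
}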

I would treat it as two separate planar problems (one in the xy plane and a second in the xz or yz plane). The first thing that comes to mind is this iterative process (a 2D sketch in code follows the notes below):
1. Start: the mirror turns around Mrot at a constant distance, creating a circle (a sphere in 3D). Compute the first intersection of Lr and the sphere, or find the nearest point on the sphere to Lr if no intersection exists. Compute the normal n0 as the half angle between Lr and the (red) line from the intersection to P. This is the mirror's start position.
2. Place the mirror (aqua) at angle n0, compute the reflection of Lr, and compute the half angle da0; this is the step for the next iteration.
3. Add da(i) to the current angle, place the mirror at this new angle position, compute the reflection of Lr, and compute the new half angle da(i+1).
4. Loop bullet 3 until da(i) is small enough or the maximum number of iterations is reached.
[Notes]
This should converge to a solution more quickly than random/linear probing.
The more distant P is from the mirror (or the smaller the radius of rotation), the quicker the convergence.
I am not sure an analytic solution to this problem even exists; it looks like it would lead to a transcendental system...
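In 2D the loop could look something like the sketch below. It assumes the mirror angle a parametrizes the mirror normal as (cos a, sin a), that the mirror plane touches the circle of radius rad around Mrot, and that Lr starts at r with unit direction v; all names are illustrative:

#include <cmath>

struct Vec2 { double x, y; };

static Vec2   add(Vec2 a, Vec2 b)   { return { a.x + b.x, a.y + b.y }; }
static Vec2   sub(Vec2 a, Vec2 b)   { return { a.x - b.x, a.y - b.y }; }
static Vec2   mul(Vec2 a, double s) { return { a.x * s, a.y * s }; }
static double dot(Vec2 a, Vec2 b)   { return a.x * b.x + a.y * b.y; }
static Vec2   norm(Vec2 a)          { double l = std::sqrt(dot(a, a)); return { a.x / l, a.y / l }; }

// Iterate the mirror angle until the reflection of ray (r, v) passes through P.
double solve_mirror_angle(Vec2 r, Vec2 v, Vec2 Mrot, double rad, Vec2 P)
{
    double a = 0.0;                                   // initial angle guess
    for (int i = 0; i < 100; ++i) {
        Vec2 n = { std::cos(a), std::sin(a) };        // mirror normal at angle a
        Vec2 q = add(Mrot, mul(n, rad));              // point on the mirror plane
        double t = dot(sub(q, r), n) / dot(v, n);     // ray/plane intersection
        Vec2 x = add(r, mul(v, t));                   // reflection point
        // normal that reflects the ray exactly toward P (the half angle)
        Vec2 want = norm(add(norm(mul(v, -1.0)), norm(sub(P, x))));
        double a_new = std::atan2(want.y, want.x);
        if (std::fabs(a_new - a) < 1e-10) break;      // converged
        a = a_new;                                    // step to the new angle
    }
    return a;
}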


Avoid twisting artifacts in spline extrusion?

I am trying to attach 2D shape profiles to a spline curve. At certain points in the spline I get weird twisting artifacts in my geometry, as shown in the image. How can I avoid this using the Frenet frame equations?
My current calculations for the normal, binormal and tangent:
forward_tangent_vector = glm::normalize(pointforward - pointmid);
backward_tangent_vector = glm::normalize(pointmid - pointback);
second_order_tangent = glm::normalize(forward_tangent_vector - backward_tangent_vector);
binormal = glm::normalize(glm::cross(forward_tangent_vector,second_order_tangent));
normal = glm::normalize(glm::cross(binormal, forward_tangent_vector));
//translation matrix
T = glm::translate(T, pointmid);
normal_axis = glm::vec3(0, 1, 0);
rotationAxis = glm::cross(normal_axis, forward_tangent_vector);
rotationAngle = glm::acos(glm::dot(normal_axis, forward_tangent_vector));
//rotation matrix
R = glm::rotate(R, glm::degrees(rotationAngle), rotationAxis);
You fell victim to the hairy ball theorem:
A common problem in computer graphics is to generate a non-zero vector in R3 that is orthogonal to a given non-zero one. There is no single continuous function that can do this for all non-zero vector inputs. This is a corollary of the hairy ball theorem. To see this, consider the given vector as the radius of a sphere and note that finding a non-zero vector orthogonal to the given one is equivalent to finding a non-zero vector that is tangent to the surface of that sphere where it touches the radius. However, the hairy ball theorem says there exists no continuous function that can do this for every point on the sphere (i.e. every given vector).
Also see this: http://blog.sigfpe.com/2006/10/oriented-fish-and-hairy-balls.html
The problem lies in these two lines:
normal_axis = glm::vec3(0, 1, 0);
rotationAxis = glm::cross(normal_axis, forward_tangent_vector);
When forward_tangent_vector is collinear with (0,1,0), rotationAxis becomes (0,0,0). That's why you get a jolt in your pipe.
What you need to do instead of hardcoding (0,1,0) is to take the first derivative of the spline (the velocity/tangent vector), take the second derivative of the spline (the acceleration/normal vector), and take their cross product (the binormal). Normalize these three vectors and you get the so-called Frenet frame, a set of three mutually perpendicular vectors along the spline.
Note that your spline has to be C2-continuous; otherwise you will get similar "twists", caused by discontinuities in the second derivative (i.e. the acceleration/normal vector).
Once you have the Frenet frame, it's a matter of a simple change of basis to work in that coordinate system. Don't mess around with glm::rotate; just put the three unit vectors into a matrix as columns (GLM is column-major and works with column vectors) and that will be your transformation matrix.
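As a rough sketch (assuming your spline class can evaluate its first and second derivatives analytically; the function and variable names here are illustrative):

#include <glm/glm.hpp>

// Build a Frenet frame from the spline's analytic derivatives at parameter t.
// d1 = first derivative (velocity), d2 = second derivative (acceleration).
// Undefined where d2 is (near) parallel to d1, e.g. on straight segments.
glm::mat4 frenetFrame(const glm::vec3& position, const glm::vec3& d1, const glm::vec3& d2)
{
    glm::vec3 tangent  = glm::normalize(d1);
    // Strip the tangential component of d2 so the normal is perpendicular.
    glm::vec3 normal   = glm::normalize(d2 - tangent * glm::dot(d2, tangent));
    glm::vec3 binormal = glm::cross(tangent, normal);

    // GLM is column-major: the basis vectors go in as columns.
    glm::mat4 frame(1.0f);
    frame[0] = glm::vec4(normal,   0.0f);
    frame[1] = glm::vec4(binormal, 0.0f);
    frame[2] = glm::vec4(tangent,  0.0f);
    frame[3] = glm::vec4(position, 1.0f);
    return frame;
}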

Spherical Area Light Source for Soft Shadows

I'm attempting to implement soft shadows in my raytracer. To do so, I plan to shoot multiple shadow rays from the intersection point towards the area light source. I'm aiming to use a spherical area light; this means I need to generate random points on the sphere to use as direction vectors for my rays (recall that rays are specified with an origin and a direction).
I've looked around for ways to generate a uniform distribution of random points on a sphere, but they seem a bit more complicated than what I'm looking for. Does anyone know of any methods for generating these points on a sphere? I believe my spherical area light source will simply be defined by its XYZ world coordinates, an RGB color value, and a radius r.
I was referred to this code from Graphics Gems III, page 126 (which is also the same method discussed here and here):
// Uniform random point on the unit sphere (Graphics Gems III, p. 126).
// random_double(a) is assumed to return a uniform value in [0, a).
void random_unit_vector(double v[3]) {
    double theta = random_double(2.0 * PI); // azimuth angle
    double x = random_double(2.0) - 1.0;    // uniform in [-1, 1]
    double s = sqrt(1.0 - x * x);           // radius of the circle at x
    v[0] = x;
    v[1] = s * cos(theta);
    v[2] = s * sin(theta);
}
This is fine and I understand it, but my sphere light source will be at some point in space specified by 3D X-Y-Z coordinates and a radius. I understand that the formula works for unit spheres, but I'm not sure how it accounts for the location of the sphere.
Thanks and I appreciate the help!
You seem to be confusing the formulas that generate a direction (i.e., a point on a sphere) with the fact that you're trying to generate a direction toward a sphere.
The formula you gave samples a random direction uniformly: it finds an (X, Y, Z) triple on the unit sphere, which can be considered a direction.
What you actually want is to still generate a direction (a point on a sphere), but one that favors a particular direction pointing toward a sphere (or that is restricted to a cone: the cone you obtain from the center of your camera and the silhouette of the sphere light source).
Such a thing can be done in two ways:
Either importance sampling toward the center of your spherical light source with a cosine lobe, or
uniform sampling in the cone defined above (a sketch of this is given at the end of this answer).
In the first case, the formulas are given in the "Global Illumination Compendium":
http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf
(item 38, page 21).
In the second case, you could do some rejection sampling, but I'm pretty sure there are closed-form formulas for that.
Finally, there is a last option: you could use your formula, consider the resulting (X, Y, Z) as a point in your scene, translate it to the position of your sphere, and make a vector pointing from your camera toward it. However, this poses serious issues:
You will be generating vectors toward the back of your sphere light.
You won't have any formula for the pdf of the generated set of directions, which you would need for later Monte Carlo integration.
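For what it's worth, here is a sketch of the second option, uniform sampling in the cone subtended by the sphere light as seen from the shading point (this is the standard uniform-cone formula; the Vec3 type and helper functions are illustrative):

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   add(Vec3 a, Vec3 b)     { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3   sub(Vec3 a, Vec3 b)     { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3   scale(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
static double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static double length(Vec3 a)          { return std::sqrt(dot(a, a)); }
static Vec3   normalize(Vec3 a)       { return scale(a, 1.0 / length(a)); }

// Uniformly sample a direction inside the cone subtended by a sphere light
// of radius r centered at c, as seen from shading point p.
// u1, u2 are uniform random numbers in [0, 1).
// The pdf over solid angle is 1 / (2*pi*(1 - cosThetaMax)).
Vec3 sampleConeToSphereLight(Vec3 p, Vec3 c, double r, double u1, double u2)
{
    const double kPi = 3.14159265358979323846;
    Vec3 axis = normalize(sub(c, p));            // cone axis toward light center
    double d = length(sub(c, p));                // distance to light center
    double cosThetaMax = std::sqrt(std::max(0.0, 1.0 - (r / d) * (r / d)));

    double cosTheta = 1.0 - u1 * (1.0 - cosThetaMax); // uniform in [cosThetaMax, 1]
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);
    double phi = 2.0 * kPi * u2;

    // Orthonormal basis (t1, t2, axis) around the cone axis.
    Vec3 t1 = normalize(cross(axis, std::fabs(axis.x) < 0.9 ? Vec3{1, 0, 0}
                                                            : Vec3{0, 1, 0}));
    Vec3 t2 = cross(axis, t1);

    return add(add(scale(t1, sinTheta * std::cos(phi)),
                   scale(t2, sinTheta * std::sin(phi))),
               scale(axis, cosTheta));
}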

C++ raytracer and normalizing vectors

So far my raytracer:
Sends out a ray and returns a new vector if a collision with a sphere was made.
Adds pixel color based on the color of the sphere[id] it collided with.
Repeats for all spheres in the scene description.
For this example, let's say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision position to the position of the light source sphere[0], so I can update the pixel's color based on the light's color/emission.
I have read that I should normalize the two vectors, and first check whether they point in opposite directions; if so, I can skip this calculation because the point is in the light's shadow.
So my question is: given two un-normalized vectors, how can I detect whether their normalized units point in opposite directions? And with a point light like this, how would that work, since each point on the light sphere has a different normal direction? This concept makes much more sense with a directional light.
Also, after I run this check, should I do my shading calculations based on the angle between the two normals, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors; it will be negative if they point in opposite directions, i.e. the projection of one vector onto the other points the opposite way.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0, then the angle between them is acute.
If u.v < 0, then the angle between them is obtuse.
If u.v == 0, they are at exactly a 90 degree angle.
But what I think you really mean is not to normalize the vectors, but to compute the dot product between the normal of the sphere's surface at your collision point xyz and the vector from that same xyz to your light source.
So if the sphere has center (xs, ys, zs), the light source is at (xl, yl, zl), and the collision is at (x, y, z), then
vector 1 is (x-xs, y-ys, z-zs) and
vector 2 is (xl-x, yl-y, zl-z).
If the dot product between these is < 0, then the light ray hit the opposite side of the sphere and can be discarded.
Once you know the light ray hit the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the locations of the light source and the viewpoint. If the eye point and the light source are at the same point, then the value of that dot product can be used in the shading calculation.
If the eye and light are at different positions, the light could hit a point the eye can't see (which will then be in shadow, and thus receive only ambient illumination, if any), so you need to do the same vector calculation replacing the light source coordinates with the eye point coordinates, and once again, if the dot product is < 0, the point is not visible.
Then compute the shading based on the dot product of the vector from the eye to the surface and the vector from the surface to the light.
OK, someone else came along and edited the question while I was writing this; I hope the answer is still clear.
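For reference, a minimal sketch of the shadow-side test described above (the vector type and helper functions are illustrative):

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a)
{
    double l = std::sqrt(dot(a, a));
    return { a.x / l, a.y / l, a.z / l };
}

// Diffuse factor at `hit` on a sphere centered at `sphereCenter`, or 0 if
// the light is on the far side of the surface (the dot product test above).
double diffuseFactor(Vec3 hit, Vec3 sphereCenter, Vec3 lightPos)
{
    Vec3 n       = normalize(sub(hit, sphereCenter)); // vector 1: surface normal
    Vec3 toLight = normalize(sub(lightPos, hit));     // vector 2: toward the light
    double d = dot(n, toLight);
    return d < 0.0 ? 0.0 : d;                         // < 0: far side, discard
}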

How to compute Aspect angle

I have many 3D planes. What I need to know is how to compute the aspect angle.
I hope I can compute the aspect angle by projecting the normal vector of each plane (my plane equation is ax+by-z+c=0, so the normal vector of this plane is (a,b,-1)) onto the XY plane; then I can compute the aspect angle from the Y axis. But I don't know how to get the projected normal vector after projecting it onto the XY plane. Then, can I apply the equation that gives the angle between two vectors to compute the angle of my desired vector from the Y axis?
On the other hand, I found that the aspect angle is defined as the angle between the north direction (here, the Y axis) and any line that runs along the steepest slope of the plane. Is this definition consistent with my proposed approach of taking normal vectors? I mean, does the projected normal vector always lie along the steepest slope of the plane? Also, someone told me that this problem should be considered a 2D problem.
Please comment and send me the relevant formulae for computing the aspect angle. Thank you.
Some quick googling reveals the definition of the aspect angle:
http://www.answers.com/topic/aspect-angle
It's the angle from geographic north on the northern hemisphere, and from geographic south on the southern hemisphere. So basically it's a measure of how much a slope faces the closest pole.
If your world is planar as opposed to spherical, that simplifies things, so yes, it's a 2D problem. I'll make this assumption, which has the following implications:
In a spherical world the north pole is a point on the sphere. In a planar world the "pole" is a plane at infinity. Think of a plane somewhere far away in your world denoting "north". Only the normal of this plane matters in this task; call its unit normal N(nx, ny, nz).
Up is a vector pointing up, U(ux, uy, uz). This is the unit normal vector of the ground plane.
The unit normal vector of the plane, V(a, b, c), can now be projected onto a vector P in the ground plane as usual: P = V - (V dot U) U.
Now it's easy to measure the aspect angle of the plane: it's the angle between the "pole" normal N and the (normalized) projected plane normal P, given by acos(P dot N).
Since north is the positive Y axis for you, N = (0, 1, 0). And I guess up is U = (0, 0, 1), the positive Z axis. This simplifies things even more: to project onto the ground plane we just drop the Z component. The aspect angle is then the angle between (a, b) and (0, 1):
aspectAngle = acos(b / sqrt(a*a + b*b))
Note that planes parallel to the ground plane do not have a well-defined aspect angle, since there is no slope to measure the aspect angle from.
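In code, the last formula might look like this (a sketch; the atan2 variant is an addition of mine for a signed, full-circle aspect, not part of the answer above):

#include <cmath>

// Aspect angle of the plane a*x + b*y - z + c = 0, measured from the +Y
// (north) axis. Undefined for horizontal planes (a == b == 0).
double aspectAngle(double a, double b)
{
    return std::acos(b / std::sqrt(a * a + b * b));   // in [0, pi]
}

// Signed variant: 0 = north, positive toward +X, in (-pi, pi].
double aspectAngleSigned(double a, double b)
{
    return std::atan2(a, b);
}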
What kind of surfaces are you working with? TINs (Triangulated Irregular Networks) or DEMs (Digital Elevation Models)?
If you are using raster imagery to create your surfaces, the algorithm for calculating aspect is basically a moving window that checks a central pixel plus its 8 neighbors.
Compare the central pixel with each neighbor and check the difference in elevation over distance (rise over run). You can parametrize the distance checks (the north, south, east and west neighbors are at distance 1, and the northwest, southwest, southeast and northeast neighbors are at distance sqrt(2)) to make it faster. A sketch of this window scan follows at the end of this answer.
You can ask this question on gis.stackexchange also. Many people will be able to help you there.
Edit:
http://blog.geoprocessamento.net/2010/03/modelos-digitais-de-elevacao-e-hidrologia/
This website, although in Portuguese, will help you visualize the algorithm. After calculating the highest slope between a central cell and its eight neighbors, you assign 0, 2, 4, 8, 16, 32, 64 or 128, depending on the location of the neighbor with the highest slope relative to the center.
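A sketch of the neighbor scan on a raster DEM (illustrative names; the DEM is assumed to be stored row-major with square cells):

#include <cmath>
#include <vector>

// Index (0..7) of the steepest-descent neighbor of cell (r, c) in a
// row-major DEM, or -1 for a flat or pit cell. The returned index can
// then be mapped to the direction codes mentioned above.
int steepestNeighbor(const std::vector<double>& elev, int rows, int cols, int r, int c)
{
    static const int dr[8] = { -1, -1, -1,  0, 0,  1, 1, 1 };
    static const int dc[8] = { -1,  0,  1, -1, 1, -1, 0, 1 };
    const double s2 = std::sqrt(2.0);
    const double dist[8] = { s2, 1, s2, 1, 1, s2, 1, s2 };

    int best = -1;
    double bestSlope = 0.0;
    for (int k = 0; k < 8; ++k) {
        int rr = r + dr[k], cc = c + dc[k];
        if (rr < 0 || rr >= rows || cc < 0 || cc >= cols) continue;
        double rise  = elev[r * cols + c] - elev[rr * cols + cc];
        double slope = rise / dist[k];                // rise over run
        if (slope > bestSlope) { bestSlope = slope; best = k; }
    }
    return best;
}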

Move an object in the direction of a bezier curve?

I have an object that I would like to make follow a Bézier curve, and I am a little lost right now as to how to make it do that based on time rather than the points that make up the curve.
.::Current System::.
Each object in my scene graph is made from position, rotation and scale vectors. These vectors are used to form their corresponding matrices: scale, rotation and translation, which are then multiplied in that order to form the local transform matrix.
A world transform (usually the identity matrix) is then multiplied against the local transform matrix.
class CObject
{
public:
    // Local transform functions
    Matrix4f GetLocalTransform() const;
    void SetPosition(const Vector3f& pos);
    void SetRotation(const Vector3f& rot);
    void SetScale(const Vector3f& scale);

    // Local transform
    Matrix4f m_local;
    Vector3f m_localPosition;
    Vector3f m_localRotation; // rotation in degrees (xrot, yrot, zrot)
    Vector3f m_localScale;
};

Matrix4f CObject::GetLocalTransform() const
{
    Matrix4f out(Matrix4f::IDENTITY);
    Matrix4f scale, rotation, translation;
    scale.SetScale(m_localScale);
    rotation.SetRotationDegrees(m_localRotation);
    translation.SetTranslation(m_localPosition);
    out = scale * rotation * translation;
    return out;
}
The big questions I have are:
1) How do I orient my object to face the tangent of the Bézier curve?
2) How do I move the object along the curve without just setting its position to that of a point on the Bézier curve?
Here's an overview of the function thus far:
void CNodeControllerPieceWise::AnimateNode(CObject* pSpatial, double deltaTime)
{
    // Get the object's latest position.
    Vector3f posDelta = pSpatial->GetWorldTransform().GetTranslation();

    // Get the position on the curve.
    Vector3f pos = curve.GetPosition(m_t);

    // Get the tangent of the curve.
    Vector3f tangent = curve.GetFirstDerivative(m_t);
}
Edit: sorry, it's not very clear. I've been working on this for ages and it's making my brain turn to mush.
I want the object to be attached to the curve and face the direction of the curve.
As for movement, I want the object to follow the curve based on time, so that it moves smoothly along the curve.
You should have the curve in parametric form and use the derivative vector to evaluate the rotation of your object (rotation angle = derivative angle), as #etarion said.
To move the object along the curve with a desired velocity (I think that is what you want), each simulation step you should estimate the distance the point should move in that step.
The simplest estimate is dist = derivative.length()*TIMER_STEP. When you know the distance that should be traversed in the current step, and t0, the current parameter of the curve, you can simply increment t0 by some small epsilon and check whether the traversed distance is still smaller than the estimate. Repeat this (increasing t0) until the traversed distance is >= the estimate. That gives the new current parameter t0 for the next step.
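A sketch of that stepping loop, reusing the curve interface from the question (the CCurve type, epsilon, the speed parameter, and the Length() helper are assumptions):

// Advance the curve parameter t0 so the object travels roughly
// speed * deltaTime along the curve this simulation step.
double advanceParameter(const CCurve& curve, double t0, double speed, double deltaTime)
{
    const double epsilon = 1e-4;                  // parameter increment
    double target = speed * deltaTime;            // distance to cover this step
    double traversed = 0.0;
    Vector3f prev = curve.GetPosition(t0);
    while (traversed < target && t0 < 1.0) {
        t0 += epsilon;
        Vector3f cur = curve.GetPosition(t0);
        traversed += (cur - prev).Length();       // accumulate chord length
        prev = cur;
    }
    return t0;                                    // new parameter for next frame
}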
EDIT:
I didn't notice at first that you are in 3D. In 3D space you can't unambiguously define the orientation of an object on a curve even if you know the initial orientation. Just imagine your curve is a line: the object can still rotate around the line, and that angle is not defined by the curve.
I would do something like this. Let's bind a vector to the object so that at the beginning of the movement (curve parameter t = 0, for example) the object's vector direction coincides with the derivative vector. Then during the movement this vector should continue to coincide with the derivative at each point of the curve. So you will know this object vector and will be able to set up your object according to it. But you will still have one degree of freedom.
For example, you can say that the object does not rotate around this vector.
Knowing the object vector and the angle of rotation around it, you can reconstruct the object's orientation in the 3D world.
PS: such an object vector plus a rotation angle around it is an axis-angle representation, which maps directly to a quaternion, so you can use quaternion math (simply copy the required formula) to calculate the object's rotation matrix!
Here are the formulas: http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToMatrix/index.htm
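A sketch of that conversion, following the axis-angle-to-quaternion and quaternion-to-matrix formulas from the page linked above (plain arrays here for neutrality):

#include <cmath>

// Axis-angle -> quaternion -> 3x3 rotation matrix (row-major here).
// `axis` must be a unit vector; `angle` is in radians.
void axisAngleToMatrix(const double axis[3], double angle, double m[3][3])
{
    // Quaternion from axis-angle.
    double s = std::sin(angle * 0.5);
    double qx = axis[0] * s, qy = axis[1] * s, qz = axis[2] * s;
    double qw = std::cos(angle * 0.5);

    // Quaternion to rotation matrix.
    m[0][0] = 1 - 2*(qy*qy + qz*qz); m[0][1] = 2*(qx*qy - qz*qw);     m[0][2] = 2*(qx*qz + qy*qw);
    m[1][0] = 2*(qx*qy + qz*qw);     m[1][1] = 1 - 2*(qx*qx + qz*qz); m[1][2] = 2*(qy*qz - qx*qw);
    m[2][0] = 2*(qx*qz - qy*qw);     m[2][1] = 2*(qy*qz + qx*qw);     m[2][2] = 1 - 2*(qx*qx + qy*qy);
}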
You need a parametric formulation of your curve.
The big questions I have are 1) How do I orient my object to face the tangent of the Bézier curve?

If you have the curve in parametric form, the tangential direction is the derivative of the position with respect to t.
2) How do I move the object along the curve without just setting its position to that of a point on the Bézier curve?

I'm not sure I get your question: you'd increase t in your parametric form by a small step and update position and direction. You still have a degree of freedom there, by the way: the "up" direction is not determined by the curve, so you'd need to take care of that too.
I'm assuming you have all the points you need to define the Bézier curve.
Then you can compute every point on that curve. Calculate a suitable point taking into account the speed the object should travel at and the frame timing, and you should get consistent movement.
The vector formed by the points from the last and current frames can be used as a rough estimate of the tangent in most cases, e.g. when the curve does not bend too sharply.
Edit:
Also have a look here on how to calculate the length of a Bézier curve. You will need to invert that relationship, so you can calculate a point on your curve (or rather the t) for a given length. Then just move equal distances per unit time and you should be fine.
One approach is to calculate a pyramid of points. The bottom layer is your transformed control points. Now, for a given t and for each pair of adjacent points, create a new point on the line segment between them, weighted by t. This new set of points forms the next layer of the pyramid. Repeat until the current layer has only one point; this point is your position. Note that the second layer from the top has two points and determines the tangent line. A sketch:
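This pyramid construction is De Casteljau's algorithm. A minimal version, assuming Vector3f supports +, - and scalar multiplication:

#include <vector>

// Collapse the pyramid: repeatedly lerp adjacent points by t until two
// points remain; those two determine the tangent, and their lerp is the
// position. `pts` is taken by value because the layers are built in place.
Vector3f deCasteljau(std::vector<Vector3f> pts, double t, Vector3f* tangent)
{
    while (pts.size() > 2) {
        for (size_t i = 0; i + 1 < pts.size(); ++i)
            pts[i] = pts[i] * (1.0 - t) + pts[i + 1] * t;  // lerp one layer
        pts.pop_back();
    }
    if (tangent)
        *tangent = pts[1] - pts[0];             // direction of the tangent line
    return pts[0] * (1.0 - t) + pts[1] * t;     // the point on the curve
}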