Spherical Area Light Source for Soft Shadows - c++

I'm attempting to implement soft shadows in my raytracer. To do so, I plan to shoot multiple shadow rays from the intersection point towards the area light source. I'm aiming to use a spherical area light, which means I need to generate random points on the sphere to build the direction vectors of my rays (recall that rays are specified by an origin and a direction).
I've looked around for ways to generate a uniform distribution of random points on a sphere, but they seem a bit more complicated than what I'm looking for. Does anyone know of any methods for generating these points? I believe my spherical area light source will simply be defined by its XYZ world coordinates, an RGB color value, and a radius r.
I was referred to this code from Graphics Gems III, page 126 (which is also the same method discussed here and here):
void random_unit_vector(double v[3]) {
    // Archimedes-style sampling: a uniform height x in [-1, 1] plus a uniform
    // angle theta gives a uniformly distributed point on the unit sphere.
    double theta = random_double(2.0 * PI);  // random_double(a) presumably returns a value in [0, a)
    double x = random_double(2.0) - 1.0;     // uniform in [-1, 1]
    double s = sqrt(1.0 - x * x);            // radius of the sphere's cross-section at height x
    v[0] = x;
    v[1] = s * cos(theta);
    v[2] = s * sin(theta);
}
This is fine and I understand it, but my sphere light source will be at some point in space specified by 3D X-Y-Z coordinates and a radius. I understand that the formula works for unit spheres, but I'm not sure how it accounts for the location of the sphere.
Thanks and I appreciate the help!

You seem to be confusing the formulas that generate a direction (i.e., a point on a sphere) with the fact that you're trying to generate a direction /toward/ a sphere.
The formula you gave samples a random direction uniformly: it finds an (X, Y, Z) triple on the unit sphere, which can be treated as a direction.
What you actually want is still to generate a direction (a point on a sphere), but one that favors the direction toward your sphere light, or that is restricted to a cone: the cone you obtain from the point you're shading and the silhouette of the sphere light source.
This can be done in two ways:
Either importance sample toward the center of your spherical light source with a cosine lobe,
or sample uniformly in the cone defined above.
In the first case, the formulas are given in the "Global Illumination Compendium":
http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf
(item 38, page 21).
In the second case, you could do some rejection sampling, but I'm pretty sure there is a closed-form formula for that.
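For the second case, here is a minimal C++ sketch of uniform cone sampling. The Vec3 type, its helpers, and uniform01() are placeholders I'm assuming here (they are not from the original post); the cone's half angle comes from cos(theta_max) = sqrt(1 - (radius/distance)^2).

#include <algorithm>
#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) { return a * (1.0 / std::sqrt(dot(a, a))); }
static double uniform01() { return std::rand() / (RAND_MAX + 1.0); } // placeholder RNG

// Uniformly sample a shadow-ray direction inside the cone that the spherical
// light (light_center, light_radius) subtends as seen from the shading point.
Vec3 sample_cone_toward_light(Vec3 shading_point, Vec3 light_center, double light_radius)
{
    Vec3 to_light = light_center - shading_point;
    double dist = std::sqrt(dot(to_light, to_light));
    Vec3 w = to_light * (1.0 / dist);  // cone axis
    double sin2 = (light_radius / dist) * (light_radius / dist);
    double cos_theta_max = std::sqrt(std::max(0.0, 1.0 - sin2));

    // Build an orthonormal basis (u, v, w) around the cone axis.
    Vec3 a = std::fabs(w.x) > 0.9 ? Vec3{0.0, 1.0, 0.0} : Vec3{1.0, 0.0, 0.0};
    Vec3 u = normalize(cross(a, w));
    Vec3 v = cross(w, u);

    // Standard uniform-cone sampling: cos(theta) uniform in [cos_theta_max, 1].
    const double PI = 3.14159265358979323846;
    double cos_theta = 1.0 - uniform01() * (1.0 - cos_theta_max);
    double sin_theta = std::sqrt(1.0 - cos_theta * cos_theta);
    double phi = 2.0 * PI * uniform01();

    return u * (std::cos(phi) * sin_theta) + v * (std::sin(phi) * sin_theta) + w * cos_theta;
}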
Finally, there is a last option: you could use your formula, treat the resulting (X, Y, Z) as a point in your scene, translate it to the position of your sphere, and build a vector pointing from your shading point toward it. However, this poses serious issues:
You will be generating vectors toward the back of your sphere light.
You won't have any formula for the pdf of the generated set of directions, which you would need for later Monte Carlo integration.
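For completeness, here is a minimal sketch of that last option in the same style as the question's code (array-of-double vectors; it calls random_unit_vector() from the question, and the hit/light parameter names are mine): scale the unit vector by the radius, translate by the light's center, then normalize the vector from the shading point to that surface point. The two issues above still apply.

#include <cmath>

// hit, light_center, light_radius and dir_out are placeholder names.
void sample_light_direction(const double hit[3],
                            const double light_center[3], double light_radius,
                            double dir_out[3])
{
    double u[3];
    random_unit_vector(u);                                // point on the unit sphere
    double d[3];
    double len2 = 0.0;
    for (int i = 0; i < 3; ++i) {
        double p = light_center[i] + light_radius * u[i]; // scale + translate
        d[i] = p - hit[i];                                // shading point -> light surface
        len2 += d[i] * d[i];
    }
    double len = sqrt(len2);
    for (int i = 0; i < 3; ++i)
        dir_out[i] = d[i] / len;                          // shadow-ray direction
}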

Related

OpenGL compare ray distance with depth buffer

I'm trying to mix raymarching and usual geometry rendering, but I don't understand how to correctly compare the distance along the ray with the depth value I stored in my buffer.
On the raymarching side, I have rays starting from the eye (red dot). If you only render fragments at fixed ray distance 'T', you will see a curved line (in yellow on my drawing). I understand this because if you start from the ray origins Oa and Ob, and follow the direction Da and Db during T units (Oa + T * Da, and Ob + T * Db), you see that only the ray at the middle of the screen reaches the blue plane.
Now on the geometry side, I stored values directly from gl_FragCoord.z. But I don't understand why we don't see this curved effect there. I edited this picture in GIMP, playing with the 'posterize' feature to make it clear.
We see straight lines, not curved lines.
I am OK with converting this depth value to a distance (taking the linearization into account). But when comparing the two distances I get problems at the sides of my screen.
There is one thing I'm missing about projection and how depth values are stored... I assumed the depth value was the (remapped) distance from the near plane along the direction of rays starting from the eye.
EDIT:
It seems writing this question helped me a bit. I now see why we don't see the curve effect in the depth buffer: because the distance between Near and Far is bigger for the rays at the sides of my screen. Thus even if the depth values are the same (middle or side of the screen), the distances are not.
So it seems my problem comes from the way I convert depth to distance. I used the following:
float z_n = 2.0 * mydepthvalue - 1.0;                                   // depth back to NDC z in [-1, 1]
float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear)); // linearized eye-space depth
However, after the conversion I still don't see the curve effect that I need for comparing with my ray distance. Note that this code doesn't take gl_FragCoord.x and .y into account, which seems weird to me...
You're overcomplicating the matter. There is no "curve effect" because a plane isn't curved. The term "z value" literally describes what it is: the z coordinate in some Euclidean coordinate space. And in such a space, z = C forms a plane parallel to the one spanned by the xy-axes of that space, at distance C.
So if you want the distance to some point, you need to take the x and y coordinates into account as well. In eye space, the camera is typically at the origin, so the distance to the camera boils down to length(vec3(x_e, y_e, z_e)) (which is of course a non-linear operation that creates the "curve" you seem to expect).
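As a sketch of what that looks like in practice (assuming a symmetric gluPerspective-style projection; the function and parameter names below are mine, not from the question), you can rebuild the full eye-space position from the depth value plus the fragment's x/y, and only then take the length:

#include <cmath>

// fragX/fragY: gl_FragCoord.xy, depth: the stored [0,1] depth value.
// fovy is the vertical field of view in radians, aspect = width / height.
float eye_distance(float fragX, float fragY, float depth,
                   float width, float height,
                   float zNear, float zFar, float fovy, float aspect)
{
    // Linearize the depth exactly as in the question.
    float z_n = 2.0f * depth - 1.0f;
    float z_e = 2.0f * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));

    // Undo the projection for x and y as well (this is the missing part).
    float ndc_x = 2.0f * fragX / width - 1.0f;
    float ndc_y = 2.0f * fragY / height - 1.0f;
    float tanHalfFovy = std::tan(0.5f * fovy);
    float x_e = ndc_x * z_e * tanHalfFovy * aspect;
    float y_e = ndc_y * z_e * tanHalfFovy;

    // Euclidean distance from the eye; compare this against the ray's T.
    return std::sqrt(x_e * x_e + y_e * y_e + z_e * z_e);
}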

Analytic method to calculate a mirror angle

I have, in 3D space, a fixed light ray Lr and a mirror M that can rotate about the fixed point Mrot. This point is not in the plane of the mirror; in other words, the mirror plane is tangent to a sphere centered at Mrot with a fixed radius d. With that configuration I want to find an equation that receives a point P as a parameter and returns the rotation of the mirror in 3D space.
We can consider the mirror plane to have no borders (an infinite plane) and its rotation to have no limits. Also, the mirror reflects only on the side opposite its rotation point.
The picture shows two cases with different input points P1 and P2, with their respective solution angles alpha1 and alpha2. The pictures are in 2D to simplify the drawings; the real case is in 3D.
At the moment I am calculating the intersection with the mirror plane at a random rotation, then calculating the ray reflection and seeing how far it is from the point (P) I want to reach. Finally I iterate, changing the rotation under some condition until it matches.
Obviously this is overkill, but I can't figure out how to code it in an analytic way.
Any thoughts?
Note: I have noticed that if the mirror rotates about a point (Mrot) contained in its plane and the light ray reaches that point (Mrot), I can easily calculate the mirror angle, but unfortunately that is not my case.
First note that there is only one parameter here, namely the distance t along the ray at which it hits the mirror.
For any test value of t, compute, in order:
1. The point at which reflection occurs.
2. The vectors of the incident and reflected rays.
3. The normal vector of the mirror, found by taking the mean of the normalized incident and reflected vectors. Together with the point from step 1, you now know the plane of the mirror.
4. The distance d from the mirror plane to the rotation point.
The problem is now to choose t to make d take the desired value. This boils down to an octic polynomial in t, so there is no analytic formula[1] and the only solution is to iterate.[2]
Here's a code sample (GLSL-style pseudocode; vec3 stands for any 3D vector type with dot() and normalize(), e.g. glm::dvec3):
vec3 r; // Ray start position
vec3 v; // Ray direction
vec3 p; // Target point
vec3 m; // Mirror rotation point
double calc_d_from_t(double t)
{
    vec3 reflection_point = r + t * v;                     // step 1
    vec3 incident = normalize(-v);                         // step 2
    vec3 reflected = normalize(p - reflection_point);
    vec3 mirror_normal = normalize(incident + reflected);  // step 3
    return dot(reflection_point - m, mirror_normal);       // step 4: signed distance to Mrot
}
Now pass calc_d_from_t(t) = d to your favourite root finder, making sure to find a root with t > 0. Any half-decent root finder (e.g. Newton-Raphson) should be much faster than your current method.
[1] I.e. a formula involving arithmetic operations, nth roots and the coefficients.
[2] Unless the octic factorises identically, potentially reducing the problem to a quartic.
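If you'd rather not deal with derivatives for Newton-Raphson, a bracketed bisection is a simple (if slower) stand-in. This sketch assumes you have already found a bracket [t_lo, t_hi] where calc_d_from_t(t) - d changes sign, e.g. by stepping along the ray; it reuses calc_d_from_t from the sample above.

#include <cmath>

double solve_t_for_d(double d_target, double t_lo, double t_hi,
                     double tol = 1e-9, int max_iter = 200)
{
    double f_lo = calc_d_from_t(t_lo) - d_target;
    for (int i = 0; i < max_iter; ++i) {
        double t_mid = 0.5 * (t_lo + t_hi);
        double f_mid = calc_d_from_t(t_mid) - d_target;
        if (std::fabs(f_mid) < tol || 0.5 * (t_hi - t_lo) < tol)
            return t_mid;
        if (f_lo * f_mid < 0.0) {
            t_hi = t_mid;               // root is in the lower half
        } else {
            t_lo = t_mid;               // root is in the upper half
            f_lo = f_mid;
        }
    }
    return 0.5 * (t_lo + t_hi);         // best estimate after max_iter
}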
I would do it as 2 separate planar problems (one in the xy plane and a second in the xz or yz plane). The first thing that hits my mind is this iterative process:
1. start
   - the mirror turns around Mrot at a constant distance, creating a circle (a sphere in 3D)
   - so compute the first intersection of Lr and that sphere,
   - or find the nearest point on the sphere to Lr if no intersection is found
   - compute the normal n0 as the half angle between Lr and the red line from the intersection to P
   - this is the mirror start position
2. place the mirror (aqua) at angle n0
   - compute the reflection of Lr
   - and compute the half angle da0; this is the step for the next iteration
3. add da0 to the n0 angle and place the mirror at this new angle position
   - compute the reflection of Lr
   - and compute the half angle da1; this is the step for the next iteration
4. loop bullet 3 until
   - da(i) is small enough, or
   - the max number of iterations is reached
[Notes]
This should converge to a solution more quickly than random/linear probing.
The more distant P is from the mirror (or the smaller the radius of rotation), the quicker the convergence.
I'm not sure an analytic solution to this problem even exists; it looks like it would lead to a transcendental system ...

Ripple effect with GLSL: need clarification

I have been going through this blog for a simple water ripple effect. It indeed gives a nice ripple effect. But what I don't understand is this line of code:
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
I don't understand how the math translates to this line and achieves such a nice ripple effect. I need help decrypting the logic behind this line.
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
To understand this equation, let's break it down into pieces and then join them.
gl_FragCoord.xy/iResolution.xy
gl_FragCoord.xy varies from (0,0) to (xRes, yRes).
We are dividing by the resolution iResolution.xy.
So "gl_FragCoord.xy/iResolution.xy" will range from (0,0) to (1,1).
This is your pixel coordinate position.
So if you just write "vec2 uv = gl_FragCoord.xy/iResolution.xy", you will get a static image.
(cPos/cLength)
cPos ranges from (-1,-1) to (1,1).
Imagine a 2D plane with origin at center and cPos to be a vector pointing from origin to our current pixel.
cLength will give you the distance from center.
"cPos/cLength" is the unit vector.
Our purpose of finding the unit vector is to find the direction in which the pixel has to be nudged.
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(iGlobalTime);
This equation will nudge every pixel along the direction vector (the unit vector). But all the pixels are nudged along their direction vectors in coherence, so the effect looks like the image is expanding and contracting.
To get the wave effect we have to introduce a phase shift. In a wave, every particle is at a different phase. This can be introduced by cos(cLength*12-iGlobalTime).
Here cLength is different for every pixel. So we take this value and treat it as the phase of the pixel.
Multiplying cLength by 12 increases the spatial frequency, packing more ripples across the screen.
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12-iGlobalTime*4.0);
Multiplying iGlobalTime by 4.0 speeds up the waves.
Finally, multiply the cosine term by 0.03 so that pixels move at most within the range (-0.03, 0.03), because moving pixels over the full (-1, 1) range would look weird.
And that is the entire equation.
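To see all the pieces together, here is a CPU-side C++ sketch of the same math. The blog's definitions of cPos and cLength aren't quoted in the question, so I'm assuming the usual ones for this kind of effect: cPos maps the pixel into (-1, 1) around the screen center and cLength is its distance from the center.

#include <cmath>

// Computes the distorted texture coordinate (u, v) for one pixel, mirroring:
// uv = fragCoord/res + (cPos/cLength) * cos(cLength*12 - time*4) * 0.03
void ripple_uv(float fragX, float fragY, float resX, float resY, float time,
               float &u, float &v)
{
    float cPosX = 2.0f * fragX / resX - 1.0f;   // pixel mapped into (-1, 1)
    float cPosY = 2.0f * fragY / resY - 1.0f;
    float cLength = std::sqrt(cPosX * cPosX + cPosY * cPosY); // distance from center
    // (As in the shader, the exact center pixel would divide by zero.)

    // Phase depends on cLength, so rings at equal distance move together;
    // 12 sets the ring density, 4 the speed, 0.03 the displacement strength.
    float wave = std::cos(cLength * 12.0f - time * 4.0f) * 0.03f;

    u = fragX / resX + (cPosX / cLength) * wave; // nudge along the radial unit vector
    v = fragY / resY + (cPosY / cLength) * wave;
}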

C++ raytracer and normalizing vectors

So far my raytracer:
Sends out a ray and returns a new vector if a collision with a sphere was made.
Pixel color is then added based on the color of the sphere[id] it collided with.
Repeats for all spheres in the scene description.
For this example, let's say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision vector position to the vector position of the light source sphere[0] so I can update the pixel's color based off this light's color / emission.
I have read that I should normalize the two points, and first check if they point in opposite directions. If so, don't worry about this calculation because it's in the light's shadow.
So my question is, given two un-normalized vectors, how can I detect if their normalized versions point in opposite directions? And with a point light like this, how would that work, since each point on the light sphere has a different normal direction? This concept makes much more sense with a directional light.
Also, after I run this check, should I do my shading calculations based on the angle between the two normals, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors; it will be negative if they point in opposite directions, i.e. the projection of one vector onto the other points the opposite way.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0 then the angle between them is acute.
If u.v < 0 then the angle between them is obtuse.
If u.v == 0 they are at exactly a 90 degree angle.
But what I think you really mean is not to normalize the vectors, but to compute the dot product between the normal of the sphere's surface at your collision point xyz and the direction from that point to your light source.
So if the sphere has center (xs, ys, zs), the light source is at (xl, yl, zl), and the collision is at (x, y, z), then
vector 1 is (x-xs, y-ys, z-zs) and
vector 2 is (xl-x, yl-y, zl-z).
If the dot product between these is < 0, then the light ray hit the opposite side of the sphere and can be discarded.
Once you know this light ray hit the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the location of the light source and the viewpoint. If the eye point and the light source are the same point, then the value of that dot product can be used in the shading calculation.
If the eye and light are at different positions, the light could hit a point the eye can't see (which will then only get ambient illumination, if any), so you need to do the same vector calculation replacing the light source coordinates with the eye point coordinates; once again, if the dot product is < 0, the point faces away from the eye and is not visible.
Then compute the shading based on the dot product of the vectors from eye to surface and from surface to light.
OK, someone else came along and edited the question while I was writing this, I hope the answer is still clear.
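Putting the facing test and the diffuse term into code, here is a minimal sketch, with my own small Vec3 type and placeholder names since your scene structures aren't shown:

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a) {
    double l = std::sqrt(dot(a, a));
    return {a.x / l, a.y / l, a.z / l};
}

// Returns the diffuse (Lambert) factor at `hit` on a sphere centered at
// `sphere_center`, lit from `light_pos`; 0 means the point faces away from the
// light and can be skipped (or left with ambient only).
double lambert_factor(Vec3 hit, Vec3 sphere_center, Vec3 light_pos)
{
    Vec3 n = normalize(sub(hit, sphere_center));   // surface normal (vector 1)
    Vec3 l = normalize(sub(light_pos, hit));       // direction to light (vector 2)
    double d = dot(n, l);
    return d > 0.0 ? d : 0.0;                      // < 0: light is behind the surface
}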

OpenGL find distance to a point

I have a virtual landscape with the ability to walk around in first-person. I want to be able to walk up any slope if it is 45 degrees or less. As far as I know, this involves translating your current position out x units then finding the distance between the translated point and the ground. If that distance is x units or more, the user can walk there. If not, the user cannot. I have no idea how to find the distance between one point and the nearest point in the negative y direction. I have programmed this in Java3D, but I do not know how to program this in OpenGL.
Barking this problem at OpenGL is barking up the wrong tree: OpenGL's sole purpose is drawing nice pictures to the screen. It's not a math library!
Depending on your demands there are several solutions. This is how I'd tackle the problem: the normals you calculate for proper shading give you the slope at each point. Say your heightmap (= terrain) is in the XY plane and your gravity vector is g = -Z; then the normal force is terrain_normal(x,y) · g. The normal force is what "pushes" your feet against the ground. Without sufficient normal force, there's not enough friction to convert your muscles' force into movement along the ground. If you look at the normal force formula, you can see that the more the angle between g and terrain_normal(x,y) deviates, the smaller the normal force.
So in your program you could simply test whether the normal force exceeds some threshold; done correctly, you'd project the exerted friction force onto the terrain and use that as the acceleration vector.
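A minimal sketch of that threshold test for the 45-degree limit, assuming you already have the normalized terrain normal at the player's position and a +Z up axis as in this answer (the function name is mine):

#include <cmath>

// Walkable if the angle between the surface normal and "up" is <= 45 degrees,
// i.e. dot(normal, up) >= cos(45 deg).
bool walkable(double nx, double ny, double nz)
{
    const double upX = 0.0, upY = 0.0, upZ = 1.0;     // up axis (+Z here)
    const double cos45 = std::sqrt(0.5);              // cos(45 degrees) ~ 0.7071
    double cosSlope = nx * upX + ny * upY + nz * upZ; // dot(normal, up)
    return cosSlope >= cos45;
}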
If you just have a regular triangular heightmap, you can use barycentric coordinates to interpolate Z values at a given (X, Y) position.
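And a sketch of that interpolation for one triangle of the heightmap (struct and field names are mine): the weights are standard 2D barycentric coordinates of the query point within the triangle's XY projection, assuming the point actually lies inside that triangle.

struct P3 { double x, y, z; };

// Interpolate the height (Z) at (px, py) inside triangle (a, b, c).
double height_at(P3 a, P3 b, P3 c, double px, double py)
{
    // Barycentric weights from signed areas.
    double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    double w_a = ((b.y - c.y) * (px - c.x) + (c.x - b.x) * (py - c.y)) / det;
    double w_b = ((c.y - a.y) * (px - c.x) + (a.x - c.x) * (py - c.y)) / det;
    double w_c = 1.0 - w_a - w_b;
    return w_a * a.z + w_b * b.z + w_c * c.z;  // interpolated height
}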