So far my raytracer:
Sends out a ray and returns a new vector if a collision with a sphere was made.
Pixel color is then added based on the color of the sphere[id] it collided with.
This repeats for all spheres in the scene description.
For this example, let's say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision position to the position of the light source sphere[0] so I can update the pixel's color based on this light's color / emission.
I have read that I should normalize the two vectors and first check whether they point in opposite directions. If so, I can skip this calculation because the point is in the light's shadow.
So my question is: given two un-normalized vectors, how can I detect whether their normalized forms point in opposite directions? And with a point light like this, how would that work, since each point on the light sphere has a different normal direction? This concept makes much more sense with a directional light.
Also, after I run this check, should I do my shading calculations based on the angle between the two normals, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors; it will be negative if they point in opposite directions, i.e. the projection of one vector onto the other points the opposite way.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0 then the angle between them is acute.
If u.v < 0 then the angle between them is obtuse.
If u.v == 0 they are exactly perpendicular (a 90 degree angle).
But what I think you really mean is not to normalize the vectors, but to compute the dot product between the surface normal of the sphere at your collision point xyz and the vector from that xyz to your light source.
So if the sphere has center at xs, ys, zs, and the light source is at xl, yl, zl, and the collision is at xyz then
vector 1 is x-xs, y-ys, z-zs and
vector 2 is xl-x, yl-y, zl-z
If the dot product between these is < 0 then the collision point is on the side of the sphere facing away from the light, and the light's contribution can be discarded.
Once you know this light ray hit the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the location of the light source and the viewpoint. If the eye point and the light source are the same point, then the value of that dot product can be used in the shading calculation.
If the eye and light are at different positions the light could hit a point the eye can't see (which will then be in shadow, and thus show only ambient illumination if any), so you need to do the same vector calculation replacing the light source coordinate with the eye point coordinate; this time, if the dot product is > 0 the point is visible to the eye.
Then, compute the shading based on the dot product of the vector from the eye to the surface and the vector from the surface to the light.
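Something like the following would implement that shadow-side check plus a simple diffuse term (just a sketch; the Vec3 type and the names hitPoint, sphereCenter and lightPos are placeholders, not from your code):
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a)     { double l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l, a.z / l}; }

// Returns a simple diffuse factor in [0, 1], or 0 when the hit point
// faces away from the light (i.e. it is on the shadowed side).
double diffuseFactor(Vec3 hitPoint, Vec3 sphereCenter, Vec3 lightPos)
{
    Vec3 normal  = normalize(sub(hitPoint, sphereCenter)); // "vector 1" above
    Vec3 toLight = normalize(sub(lightPos, hitPoint));     // "vector 2" above
    double d = dot(normal, toLight);
    return d > 0.0 ? d : 0.0;  // d < 0: light hits the far side, discard
}
The returned factor can then be multiplied with the light's color / emission before adding it to the pixel.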
OK, someone else came along and edited the question while I was writing this, I hope the answer is still clear.
Related
I'm working on a physics simulation of projectiles, and I'm stuck on the ground collision. I have a ground made of quads,
I have the points stored in an array, so I thought that if I take the quad where the collision happens and calculate the angles of the quad (in the x and z directions) I can then use that to change the velocity of the projectile.
This is where I'm stuck. I thought I should find the lowest and the highest point, then find the vector between them, but this will not give the angles in all directions, which is what I want. I know there must be a way of doing this, but how?
What you want is the normal of the quad.
Here's an answer that shows you how to get a quad's normal
After you have the normal, you need to calculate the collision response force. Its direction is the quad's normal, and its magnitude is the force the projectile exerts along that normal. The exerted force is calculated using the dot product of the projectile's velocity and the reversed quad normal (here's a wiki link for the dot product).
The response vector should be this:
Vector3 responseForce = dot(projectile.vel, -1 * quad.normal) * quad.normal;
projectile.vel += responseForce;
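For reference, here is a self-contained sketch of both steps (the Vec3 type and helper names are made up, and the quad corners a, b, c are assumed to be wound counter-clockwise so the normal points up out of the ground):
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)      { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, double s)  { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b)    { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 a)        { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

// Normal of a quad from three of its corners.
Vec3 quadNormal(Vec3 a, Vec3 b, Vec3 c)
{
    return normalize(cross(sub(b, a), sub(c, a)));
}

// Collision response as described above: the strength is the dot product of
// the velocity with the reversed normal, applied along the normal.
Vec3 responseForce(Vec3 velocity, Vec3 normal)
{
    double strength = dot(velocity, scale(normal, -1.0));
    return scale(normal, strength);
}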
I'm trying to implement PCF for a shadow cube map in OpenGL 2.0. I thought I'd found a solution here (search for Percentage closer filtering (PCF) algorithm to find the start of the section on cube map PCF), but the code relies on samplerCubeShadow, which is unavailable in OpenGL 2.0, so I can't use the texture(samplerCubeShadow(), vec4()) call shown on that page.
The first part of this question is: Is there a way to retrieve the same results from a samplerCube in OpenGL 2.0/GLSL 1.10? By using a textureCube or something else?
The second part relates to an idea I have to solve this problem. The image below illustrates what I'd like to do.
The solid blue line and the dotted red lines are all vectors coming from one face of the samplerCube that stores my depth values. The blue line's intersection with the dark grey square in the center of the black squares represents the sampled point from the cube. I'd like to create a plane (represented by the light grey rectangle) that is perpendicular to the blue vector. Then I'd like to sample the 4 black points from the camera by casting Z vectors from the light position (these are the dotted red lines) to their X,Y values on that plane. Afterwards, I would use those values combined with the original sample's values to calculate the PCF shadow value.
Is this a viable and efficient way of calculating PCF for a pointlight/cubemap? And how would I create the parallel plane and then retrieve the X and Y coordinates I need from it?
I can't say whether it is efficient or good (or even correct), but it is absolutely doable.
Given the vector v from the light to the fragment, choose any vector u that is not parallel to v. The cross product w = cross(u, v) will be perpendicular to v. Take w as one axis and cross(v, w) as the second axis.
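Something like this would build those two axes (just a sketch; the Vec3 type, helper names and the 0.9 threshold for picking u are placeholders):
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b)   { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 a)       { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

// Build two axes spanning the plane perpendicular to v (the light-to-fragment
// vector). u is chosen so it is never parallel to v.
void buildPlaneAxes(Vec3 v, Vec3 &axis1, Vec3 &axis2)
{
    Vec3 u = std::fabs(v.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    axis1 = normalize(cross(u, v));     // first axis, perpendicular to v
    axis2 = normalize(cross(v, axis1)); // second axis, perpendicular to both
}
The four PCF sample positions on that plane are then the intersection point plus small offsets along axis1 and axis2.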
I have a sphere in my program and I intend to draw some rectangles over at a distance x from the centre of this sphere. The figure looks something below:
The rectangles are drawn at (x,y,z) points that I have already have in a vector of 3d points.
Let's say the distance x from the centre is 10. Notice the orientation of these rectangles: they are tangential to an imaginary sphere of radius 10 (i.e. perpendicular to an imaginary line from the centre of the sphere to the centre of each rectangle).
Currently, I do something like the following:
For n points vector<vec3f> pointsInSpace where the rectangles have to be plotted:
for(int i = 0; i < pointsInSpace.size(); ++i){
    // draw rectangle at (x, y, z)
}
This does not give the tangential orientation I am looking for.
It seemed like I could apply roll, pitch and yaw rotations to each of these rectangles, or somehow use quaternions, to make them tangential in the way described above.
However, that looked a bit complex to me, so I wanted to ask whether there is a better method to do this.
Also, the rectangle in future might change to some other shape, so a kind of generic solution would be appreciated.
I think you essentially want the same transformation as would be accomplished with a LookAt() function (you want the rectangle to 'look at' the sphere, along a vector from the rectangle's center, to the sphere's origin).
If your rectangle is formed of the points:
(-1, -1, 0)
(-1, 1, 0)
( 1, -1, 0)
( 1, 1, 0)
Then the rectangle's normal will be pointing along Z. This axis needs to be oriented towards the sphere.
So the normalised vector from your point to the center of the sphere is the Z-axis.
Then you need to define a distinct 'up' vector - (0,1,0) is typical, but you will need to choose a different one in cases where the Z-axis is pointing in the same direction.
The cross of the 'up' and 'z' axes gives the x axis, and then the cross of the 'z' and 'x' axes gives the 'y' axis.
These three axes (x,y,z) directly form a rotation matrix.
This resulting transformation matrix will orient the rectangle appropriately. Either use GL's fixed function pipeline (yuk), in which case you can just use gluLookAt(), or build and use the matrix above in whatever fashion is appropriate in your own code.
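For reference, a sketch of that construction in plain code (the Vec3 type and the function name are made up; the matrix is laid out column-major as OpenGL expects):
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b)   { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 a)       { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

// Build a column-major 4x4 matrix that rotates a rectangle lying in the XY
// plane (normal along +Z) so its normal points at the sphere's center, and
// translates it to rectCenter.
void lookAtSphere(Vec3 rectCenter, Vec3 sphereCenter, float m[16])
{
    Vec3 z = normalize(sub(sphereCenter, rectCenter));            // new Z axis
    Vec3 up = std::fabs(z.y) < 0.99 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 x = normalize(cross(up, z));                             // new X axis
    Vec3 y = cross(z, x);                                         // new Y axis
    float cols[16] = {
        (float)x.x, (float)x.y, (float)x.z, 0,
        (float)y.x, (float)y.y, (float)y.z, 0,
        (float)z.x, (float)z.y, (float)z.z, 0,
        (float)rectCenter.x, (float)rectCenter.y, (float)rectCenter.z, 1
    };
    for (int i = 0; i < 16; ++i) m[i] = cols[i];
}
The resulting matrix can then be passed to glMultMatrixf() before drawing each rectangle, or combined into your own transform code.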
Personally I think JasonD's answer is enough, but here is some more info on the calculation involved.
Mathematically speaking this is a rather simple problem: what you have is two known vectors. You know the position vector and the sphere's normal vector at that position. Since the square can be rotated arbitrarily around the vector from the center of your sphere, you need to define one more vector, the up vector. Without defining an up vector the problem has no unique solution.
Once you define an up vector, the problem becomes simple. Assuming your square is on the XY-plane as JasonD suggests above, your matrix becomes:
up_cross_n_cross_n.x   up_cross_n_cross_n.y   up_cross_n_cross_n.z   0
n.x                    n.y                    n.z                    0
up_cross_n.x           up_cross_n.y           up_cross_n.z           0
p.x                    p.y                    p.z                    1
Where n is the unit normal vector of p minus the center of the sphere (which is just p normalized if the sphere sits at the origin of the coordinate system), up is an arbitrary unit vector, up_cross_n is cross(up, n), up_cross_n_cross_n is cross(up_cross_n, n), and p is the position.
The solution has a bit of a singularity when n points along the up direction. An alternate solution is to first rotate 360 degrees around up, then 180 degrees around the rotated axis crossed with up; that produces the same result with a different approach and no singularity problem.
I'm attempting to implement soft shadows in my raytracer. To do so, I plan to shoot multiple shadow rays from the intersection point towards the area light source. I'm aiming to use a spherical area light--this means I need to generate random points on the sphere for the direction vector of my ray (recall that rays are specified with an origin and a direction).
I've looked around for ways to generate a uniform distribution of random points on a sphere, but they seem a bit more complicated than what I'm looking for. Does anyone know of any methods for generating these points on a sphere? I believe my sphere area light source will simply be defined by its XYZ world coordinates, RGB color value, and r radius.
I was referred to this code from Graphics Gems III, page 126 (which is also the same method discussed here and here):
void random_unit_vector(double v[3]) {
    double theta = random_double(2.0 * PI);  // random angle in [0, 2*pi)
    double x = random_double(2.0) - 1.0;     // random height in [-1, 1]
    double s = sqrt(1.0 - x * x);            // radius of the circle at that height
    v[0] = x;
    v[1] = s * cos(theta);
    v[2] = s * sin(theta);
}
This is fine and I understand this, but my sphere light source will be at some point in space specified by 3D X-Y-Z coordinates and a radius. I understand that the formula works for unit spheres, but I'm not sure how the formula accounts for the location of the sphere.
Thanks and I appreciate the help!
You seem to be confusing the formulas that generate a direction -- i.e., a point on a sphere -- with the fact that you're trying to generate a direction /toward/ a sphere.
The formula you gave samples a random direction uniformly: it finds an X,Y,Z triple on the unit sphere, which can be considered as a direction.
What you actually want is still to generate a direction (a point on a sphere), but one that favors directions pointing toward the sphere light (or that is restricted to a cone: the cone you obtain from the center of your camera and the silhouette of the sphere light source).
Such a thing can be done in two ways:
Either importance sampling toward the center of your spherical light source with a cosine lobe.
Uniform sampling in the cone defined above.
In the first case, the formulas are given in the "Global Illumination Compendium":
http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf
(item 38, page 21).
In the second case, you could do some rejection sampling, but I'm pretty sure there is a closed-form formula for that as well.
Finally, there is a last option: you could use your formula, consider the resulting (X,Y,Z) as a point in your scene, translate it to the position of your sphere, and make a vector pointing from your camera toward it. However, it poses serious issues:
You will be generating vectors toward the back of your sphere light.
You won't have any formula for the pdf of the generated set of directions, which you would need for later Monte Carlo integration.
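For completeness, here is what that last option would look like in code (a sketch only; random_double is a stand-in for the helper assumed by the Graphics Gems snippet above, and the caveats just listed still apply):
#include <cmath>
#include <cstdlib>

// Hypothetical helper matching the snippet above: uniform double in [0, limit).
static double random_double(double limit) {
    return limit * (std::rand() / (RAND_MAX + 1.0));
}

// Naive version of the "last option": pick a uniform point on the light
// sphere (center lc, radius r) and return the un-normalized shadow-ray
// direction from the surface point p toward it. Some samples will land on
// the far side of the light, as noted above.
void sample_light_direction(const double p[3], const double lc[3], double r, double dir[3]) {
    const double PI = 3.14159265358979323846;
    double theta = random_double(2.0 * PI);
    double x = random_double(2.0) - 1.0;
    double s = std::sqrt(1.0 - x * x);
    double v[3] = { x, s * std::cos(theta), s * std::sin(theta) };
    for (int i = 0; i < 3; ++i)
        dir[i] = (lc[i] + r * v[i]) - p[i];  // point on light minus surface point
}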
I am trying to implement a functionality in OpenGL using GLUI such that the "Arcball" control is used to input the position of the light. I'm not sure how to go about this, as the rotation matrix given by the arcball is of 4x4 dimensions, while the light position is a 1-D array of 3 coordinates.
Rotating a light around the scene only makes sense for directional lights (i.e. positions at infinity). So you're not rotating a point, but a direction, just like a normal. Easy enough: let the unrotated light have the direction (0,0,1,0). Then, to rotate this around the scene, you multiply it by the inverse-transpose of the given matrix. But since you know that this matrix contains only a rotation, this is a special case where the inverse-transpose is the same as the original matrix.
So you just multiply your initial light direction (0,0,1,0) with the matrix.
We can simplify this even further. If you look at the multiplication, you see that it essentially just extracts the (weighted) column(s) of the matrix for which the original light direction vector is nonzero. So, if we really start with a light direction of (0,0,1,0), you just take the 3rd column from the arcball rotation matrix.
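In code that could look like this (a sketch; it assumes the arcball hands you a column-major float[16], the usual OpenGL layout):
#include <GL/gl.h>

// Take the 3rd column of the arcball's column-major 4x4 rotation matrix as
// the rotated (0,0,1,0) direction; w = 0 marks it as a directional light.
void setLightFromArcball(const float arcball[16]) {
    GLfloat lightDir[4] = { arcball[8], arcball[9], arcball[10], 0.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightDir);
}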