Ripple Effect with GLSL need clarification - opengl

I have been going through this blog for a simple water ripple effect. It indeed gives a nice ripple effect. But what I don't understand is this line of code:
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
I don't understand how the math translates into this line and achieves such a nice ripple effect. I need help decrypting the logic behind this line.

vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
To understand this equation, let's break it down into pieces and then join them.
gl_FragCoord.xy/iResolution.xy
gl_FragCoord.xy varies from (0,0) to (xRes, yRes).
We are dividing by the resolution iResolution.xy.
So "gl_FragCoord.xy/iResolution.xy" will range from (0,0) to (1,1).
This is your pixel coordinate position.
So if you write "vec2 uv = gl_FragCoord.xy/iResolution.xy", you will just get a static image.
(cPos/cLength)
cPos ranges from (-1,-1) to (1,1).
Imagine a 2D plane with its origin at the center and cPos as a vector pointing from the origin to the current pixel.
cLength gives you the distance of the pixel from the center.
"cPos/cLength" is therefore the unit vector pointing from the center to the pixel.
The purpose of finding this unit vector is to know the direction in which the pixel has to be nudged.
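(The explanation assumes cPos and cLength are already defined; in the blog's shader they are presumably computed just before this line, along the lines of:)
vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / iResolution.xy; // pixel position remapped so the screen center is (0,0)
float cLength = length(cPos);                              // distance of this pixel from the center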
vec2 uv = gl_FragCoord.xy/iResolution.xy + (cPos/cLength)*cos(iGlobalTime);
This equation nudges every pixel along its direction vector (the unit vector). But all the pixels are nudged in coherence (in the same phase), so the effect looks like the image is expanding and contracting.
To get the wave effect we have to introduce a phase shift. In a wave, every particle is in a different phase. This can be introduced with cos(cLength*12.0 - iGlobalTime).
Here cLength is different for every pixel, so we take this value and treat it as the phase of the pixel.
The factor of 12.0 increases the spatial frequency, packing more ripples between the center and the edge of the screen.
vec2 uv = gl_FragCoord.xy/iResolution.xy + (cPos/cLength)*cos(cLength*12.0 - iGlobalTime*4.0);
Multiplying iGlobalTime by 4.0 speeds up the waves.
Finally, multiply the cosine term by 0.03 so the pixels are displaced at most within (-0.03, 0.03); displacing them over the full (-1, 1) range would look weird.
And that is the entire equation.
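Putting the pieces back together, here is a self-contained sketch of the full fragment shader as I understand the blog's version. iResolution and iGlobalTime are the ShaderToy-style uniforms used in the question; the sampler name iChannel0 and the texture2D lookup are my assumptions about how the source image is bound:

uniform vec2 iResolution;    // viewport resolution in pixels
uniform float iGlobalTime;   // playback time in seconds
uniform sampler2D iChannel0; // the image being rippled (name assumed)

void main()
{
    // pixel position remapped so the center of the screen is (0,0), corners near (+-1,+-1)
    vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / iResolution.xy;
    float cLength = length(cPos);

    // nudge each pixel along its direction from the center, with a phase
    // that depends on its distance from the center
    vec2 uv = gl_FragCoord.xy / iResolution.xy
            + (cPos / cLength) * cos(cLength * 12.0 - iGlobalTime * 4.0) * 0.03;

    gl_FragColor = texture2D(iChannel0, uv);
}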

Related

OpenGL compare ray distance with depth buffer

I'm trying to mix raymarching and usual geometry rendering, but I don't understand how to correctly compare the distance from the ray with the depth value I stored in my buffer.
On the raymarching side, I have rays starting from the eye (red dot). If you only render fragments at a fixed ray distance 'T', you will see a curved line (in yellow on my drawing). I understand this because if you start from the ray origins Oa and Ob, and follow the directions Da and Db for T units (Oa + T * Da, and Ob + T * Db), you see that only the ray at the middle of the screen reaches the blue plane.
Now on the geometry side, I stored values directly from gl_FragCoord.z. But I don't understand why we don't see this curved effect there. I edited this picture in GIMP, playing with the 'posterize' feature to make it clear.
We see straight lines, not curved lines.
I am OK with converting this depth value to a distance (taking the linearization into account). But then, when comparing both distances, I've got problems at the sides of my screen.
There is one thing I'm missing about projection and how depth values are stored... I assumed the depth value was the (remapped) distance from the near plane along the direction of the rays starting from the eye.
EDIT:
It seems writing this question helped me a bit. I see now why we don't see the curve effect in the depth buffer: the distance between the near and far planes is bigger for a ray at the side of my screen. Thus even if the depth values are the same (middle or side of the screen), the distances are not.
So it seems my problem comes from the way I convert depth to distance. I used the following:
float z_n = 2.0 * mydepthvalue - 1.0;
float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
However, after the conversion I still don't see the curve effect that I need for comparing with my ray distance. Note that this code doesn't take gl_FragCoord.x and .y into account, and that seems weird to me...
You're overcomplicating the matter. There is no "curve effect" because a plane isn't curved. The term "z value" literally describes what it is: the z coordinate in some Euclidean coordinate space. And in such a space, z = C forms a plane parallel to the plane spanned by the x and y axes of that space, at distance C.
So if you want the distance to some point, you need to take the x and y coordinates into account as well. In eye space, the camera is typically at the origin, so the distance to the camera boils down to length(vec3(x_e, y_e, z_e)) (which is of course a non-linear operation and would create the "curve" you seem to expect).
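If you want that distance in a shader, a common approach is to reconstruct the eye-space position of the fragment from the stored depth and gl_FragCoord.xy, then take its length. A minimal sketch, assuming the inverse projection matrix and the viewport size are available as uniforms (invProjection and viewport are names I'm introducing here):

uniform mat4 invProjection; // inverse of the projection matrix (assumed)
uniform vec2 viewport;      // viewport size in pixels (assumed)

float eyeDistance(float depth) // depth = stored gl_FragCoord.z, in [0,1]
{
    // window space -> NDC: all three coordinates into [-1,1]
    vec3 ndc = vec3(gl_FragCoord.xy / viewport, depth) * 2.0 - 1.0;
    // NDC -> eye space: apply the inverse projection and undo the perspective divide
    vec4 eye = invProjection * vec4(ndc, 1.0);
    eye /= eye.w;
    // the camera sits at the eye-space origin, so this is directly comparable to your ray distance T
    return length(eye.xyz);
}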

Getting depth from Float texture in post process

I'm having a bit of trouble getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating point render target. The code for that shader looks something like this:
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shaders I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have expected a smooth transition of fog from right in front of the camera out to the draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not really small or overly large, and neither are the camera near and far planes; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case. The NDC and window space z values are not a linear representation of the distance to the camera plane, so it is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. Since you only need the z coordinate here, you could simply store the clip space w coordinate instead - for a typical perspective projection, that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range of your projection matrix - but specifying fog distances in eye space units (which are normally identical to world space units) is more intuitive anyway.
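A sketch of that change, reusing the clipPos varying and the fog uniforms from the question (the depth texture is read the same way the question already reads it):

// first pass, fragment shader: store the eye-space distance instead of NDC z;
// for a typical perspective projection, clip-space w is just -z_eye
gl_FragColor.w = clipPos.w;

// fog pass: fogNear and fogFar are now in eye-space (i.e. world-sized) units,
// e.g. something like 1.0 and 100.0 rather than values squeezed up against 1.0
float depth = depthMap.w;
float fogFactor = smoothstep(fogNear, fogFar, depth);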

Calculate vector intersections in GLSL (OpenGL)

I want to add fog to a scene. But instead of adding fog to the fragment color based on its distance to the camera, I want to follow a more realistic approach: I want to calculate the distance the vector from the eye to the fragment "travels" through a layer of fog.
By fog layer I mean that the fog has a lower limit and an upper limit (a z coordinate, where z is up in this case). I want to calculate the vector from the eye to the fragment and get the part of it that is inside the fog. This part is marked red in the graphic.
The calculation is actually quite simple. However, the straightforward approach requires some tests (if/then):
calculate the line from the view vector and the camera position;
get the line's intersection with the lower limit;
get the line's intersection with the upper limit;
do some logic to figure out how to handle the intersections;
calculate deltaZ based on the intersections;
scale the vector (vector *= deltaZ / vector.z);
fogFactor = length(vector);
This should be quite easy. However, the trouble is that I would have to add some logic to figure out how the camera and the fragment are located in relation to the fog. I also have to be sure that the vector actually intersects the limits. (It would cause trouble when the vector's z value is 0.)
The problem is that branching is not the best friend of shaders, at least that is what the internet has told me. ;)
My first question: is there a better way of solving this problem? (I actually want to stay with my model of fog, since this is about problem solving.)
My second question: I think the calculation should be done in the fragment shader and not the vertex shader, since this is not something that can be interpolated. Am I right about this?
Here is a second graphic of the scenario.
Problem solved. :)
Instead of defining the fog with a lower limit and a higher limit, I define it with a center height and a radius. So the lower limit equals the center minus the radius, the higher limit is the center plus the radius.
With this, I came up with this calculation: (sorry for the bad variable names)
// position_worldspace is the fragment position in world space.
// delta1 and delta2 are the z-axis differences from the fragment / the eye
// to the center height, clamped to the fog's half-height (its "radius").
float delta1 = clamp(position_worldspace.z - fog_centerZ,
                     -fog_height, fog_height);
float delta2 = clamp(fog_centerZ - cameraPosition_worldspace.z,
                     -fog_height, fog_height);
float fogFactorZ = delta1 + delta2;

vec3 viewVector = position_worldspace - cameraPosition_worldspace;
float fogFactor = length(viewVector * (fogFactorZ / viewVector.z));
I guess this is not the fastest way of calculating this but it does the trick.
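Wrapped up as a self-contained function (a sketch under the same assumptions: z is up and the fog "radius" is the half-thickness of the layer; the parameter names here are mine), the calculation above is roughly:

float fogDistance(vec3 fragPosWorld, vec3 cameraPosWorld,
                  float fogCenterZ, float fogHalfHeight)
{
    // z extents from the fragment and from the camera to the fog center,
    // clamped to the layer's half-thickness
    float delta1 = clamp(fragPosWorld.z - fogCenterZ, -fogHalfHeight, fogHalfHeight);
    float delta2 = clamp(fogCenterZ - cameraPosWorld.z, -fogHalfHeight, fogHalfHeight);
    float fogFactorZ = delta1 + delta2;

    // scale the view vector so its z extent matches the part inside the fog layer
    vec3 viewVector = fragPosWorld - cameraPosWorld;
    return length(viewVector * (fogFactorZ / viewVector.z));
}

Note that this still divides by viewVector.z, so a perfectly horizontal view ray remains the special case mentioned in the question.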
HOWEVER!
The effect isn't really beautiful, because the upper and lower limits of the fog are razor sharp. I forgot about this since it doesn't look bad when the eye isn't near those borders. But I think there is an easy solution to this problem. :)
Thanks for the help!

Spherical Area Light Source for Soft Shadows

I'm attempting to implement soft shadows in my raytracer. To do so, I plan to shoot multiple shadow rays from the intersection point towards the area light source. I'm aiming to use a spherical area light -- this means I need to generate random points on the sphere for the direction vectors of my rays (recall that rays are specified with an origin and a direction).
I've looked around for ways to generate a uniform distribution of random points on a sphere, but they seem a bit more complicated than what I'm looking for. Does anyone know of any methods for generating these points on a sphere? I believe my spherical area light source will simply be defined by its XYZ world coordinates, an RGB color value, and a radius r.
I was referred to this code from Graphics Gems III, page 126 (which is also the same method discussed here and here):
void random_unit_vector(double v[3]) {
    /* random_double(x) is presumably a helper returning a uniform random double in [0, x). */
    double theta = random_double(2.0 * PI);  /* random azimuth angle */
    double x = random_double(2.0) - 1.0;     /* random height in [-1, 1] */
    double s = sqrt(1.0 - x * x);            /* radius of the sphere's cross-section at that height */
    v[0] = x;
    v[1] = s * cos(theta);
    v[2] = s * sin(theta);
}
This is fine and I understand this, but my sphere light source will be at some point in space specified by 3D X-Y-Z coordinates and a radius. I understand that the formula works for unit spheres, but I'm not sure how the formula accounts for the location of the sphere.
Thanks and I appreciate the help!
You seem to be confusing the formulas that generate a direction -- i.e., a point on a unit sphere -- with the fact that you're trying to generate a direction toward a sphere.
The formula you gave samples a direction uniformly: it finds an (X, Y, Z) triple on the unit sphere, which can be considered a direction.
What you are actually trying to achieve is to still generate a direction (a point on a sphere), but one which favors a particular direction pointing toward the sphere (or which is restricted to a cone: the cone you obtain from your ray origin and the silhouette of the spherical light source).
This can be done in two ways:
Either importance sampling toward the center of your spherical light source with a cosine lobe.
Uniform sampling in the cone defined above.
In the first case, the formulas are given in the "Global Illumination Compendium":
http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf
(item 38, page 21).
In the second case, you could do some rejection sampling, but I'm pretty sure there is a closed-form formula for that.
Finally, there is a last option: you could use your formula, consider the resulting (X, Y, Z) as a point in your scene, translate it to the position of your sphere, and make a vector pointing from your ray origin toward it. However, this will pose serious issues:
You will be generating vectors toward the back of your sphere light.
You won't have any formula for the pdf of the generated set of directions, which you would need for later Monte Carlo integration.
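For completeness, the translation mentioned in that last option is just a scale by the light's radius plus an offset by its center. A minimal sketch in GLSL-style vec3 notation (lightCenter, lightRadius and surfacePoint are placeholders for whatever your raytracer uses; the same math applies to any C++ vector type):

// unitSample is the (x, y, z) triple produced by random_unit_vector() above
vec3 pointOnLight(vec3 lightCenter, float lightRadius, vec3 unitSample)
{
    return lightCenter + lightRadius * unitSample; // a point on the light's surface
}

// direction of a shadow ray from the shaded point toward that sample
vec3 shadowRayDir(vec3 surfacePoint, vec3 lightSample)
{
    return normalize(lightSample - surfacePoint);
}

Keep in mind the caveats above still apply: roughly half of these samples land on the far side of the light as seen from surfacePoint, and you have no pdf for them.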

C++ raytracer and normalizing vectors

So far my raytracer:
Sends out a ray and returns a new vector if a collision with a sphere was made.
Pixel color is then added based on the color of the sphere[id] it collided with.
Repeats for all spheres in the scene description.
For this example, let's say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision vector position to the vector position of the light source sphere[0] so I can update the pixel's color based off this light's color / emission.
I have read that I should normalize the two points, and first check if they point in opposite directions. If so, don't worry about this calculation because it's in the light's shadow.
So my question is: given two un-normalized vectors, how can I detect whether their normalized versions point in opposite directions? And with a point light like this, how would that work, since each point on the light sphere has a different normal direction? This concept makes much more sense with a directional light.
Also, after I run this check, should I do my shading calculations based on the two normals' angles in relation to each other, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors: it will be negative if they point in opposite directions, i.e. the projection of one vector onto the other points the opposite way.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0, then the angle between them is acute.
If u.v < 0, then the angle between them is obtuse.
If u.v == 0, they are at exactly a 90 degree angle.
But what I think you really want is not to normalize the vectors, but to compute the dot product between the normal of the sphere's surface at your collision point xyz and the vector from that xyz to your light source.
So if the sphere has its center at (xs, ys, zs), the light source is at (xl, yl, zl), and the collision is at (x, y, z), then
vector 1 is (x-xs, y-ys, z-zs) and
vector 2 is (xl-x, yl-y, zl-z).
If the dot product between these is < 0, then the light ray hits the far side of the sphere and can be discarded.
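A minimal sketch of that test, in GLSL-style vec3 notation for brevity (sphereCenter, hitPoint and lightPos stand in for your raytracer's own variables; the same math carries over to any C++ vector type):

vec3 normalDir = normalize(hitPoint - sphereCenter); // vector 1: surface direction at the hit
vec3 toLight   = normalize(lightPos - hitPoint);     // vector 2: from the hit toward the light
bool facesLight = dot(normalDir, toLight) > 0.0;     // false means the hit faces away from the light; discard it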
Once you know the light ray hits the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the location of the light source and the viewpoint. If the eye point and the light source are the same point, then the value of that dot product can be used directly in the shading calculation.
If the eye and the light are at different positions, the light could hit a point the eye can't see (which will then be in shadow and thus get only ambient illumination, if any), so you need to do the same vector calculation with the light source coordinate replaced by the eye point coordinate; once again, if the dot product is < 0 the point is not visible from the eye.
Then compute the shading based on the dot product of the vector from the eye to the surface and the vector from the surface to the light.
OK, someone else came along and edited the question while I was writing this; I hope the answer is still clear.