Calculate vector intersections in GLSL (OpenGL)

I want to add fog to a scene. But instead of adding fog to the fragment color based on its distance to the camera, I want to follow a more realistic approach: I want to calculate the distance that the vector from the eye to the fragment travels through a layer of fog.
By fog layer I mean that the fog has a lower limit and an upper limit (z-coordinates, where z is up in this case). I want to calculate the vector from the eye to the fragment and get the part of it that lies inside the fog. This part is marked red in the graphic.
The calculation is actually quite simple. However, with the straightforward approach I would have to do some tests (if/then):
calculate the line from the view vector and the camera position;
get the line's intersection with the lower limit;
get the line's intersection with the upper limit;
do some logic to figure out how to handle the intersections;
calculate deltaZ based on the intersections;
scale the vector (vector *= deltaZ / vector.z);
fogFactor = length(vector);
This should be quite easy. However, what causes trouble is that I would have to add some logic to figure out how the camera and the fragment are located relative to the fog. I also have to make sure that the vector actually intersects the limits (it would cause trouble when the vector's z-value is 0).
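For reference, here is a minimal sketch of that branchy approach, assuming world-space positions and hypothetical fogLower/fogUpper limits (only to illustrate the logic, not a final shader):

float fogDistance(vec3 eye, vec3 frag, float fogLower, float fogUpper)
{
    vec3 v = frag - eye;
    if (abs(v.z) < 1e-5) {
        // (Nearly) horizontal view vector: the segment is either fully inside
        // the layer or fully outside it, so no intersection test is possible.
        return (eye.z >= fogLower && eye.z <= fogUpper) ? length(v) : 0.0;
    }
    // Ray parameters (0 at the eye, 1 at the fragment) where the two limits are crossed.
    float t0 = (fogLower - eye.z) / v.z;
    float t1 = (fogUpper - eye.z) / v.z;
    float tEnter = clamp(min(t0, t1), 0.0, 1.0);
    float tExit  = clamp(max(t0, t1), 0.0, 1.0);
    return length(v) * (tExit - tEnter);  // length of the part inside the fog
}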
The problem is that this kind of branching is not the best friend of shaders, at least that is what the internet has told me. ;)
My first question: Is there a better way of solving this problem? (I actually want to stay with my model of fog, since this is about problem solving.)
The second question: I think the calculation should be done in the fragment shader and not the vertex shader, since this is not something that can be interpolated. Am I right about this?
Here is a second graphic of the scenario.

Problem solved. :)
Instead of defining the fog with a lower limit and a higher limit, I define it with a center height and a radius. So the lower limit equals the center minus the radius, the higher limit is the center plus the radius.
With this, I came up with the following calculation (sorry for the bad variable names):
// position_worldspace is the fragment position in world space.
// delta1 and delta2 are the z-axis differences from the fragment / eye
// to the fog's center height, clamped to the fog's half-height (radius).
float delta1 = clamp(position_worldspace.z - fog_centerZ,
                     -fog_height, fog_height);
float delta2 = clamp(fog_centerZ - cameraPosition_worldspace.z,
                     -fog_height, fog_height);
float fogFactorZ = delta1 + delta2;
vec3 viewVector = position_worldspace - cameraPosition_worldspace;
float fogFactor = length(viewVector * (fogFactorZ / viewVector.z));
I guess this is not the fastest way of calculating this but it does the trick.
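To actually apply it, the distance can be plugged into any of the usual fog equations. A minimal sketch, assuming uniforms fog_color and fog_density and a lit fragment color named color (these names are made up):

float f = 1.0 - exp(-fog_density * fogFactor);              // exponential fog from the traversed distance
color.rgb = mix(color.rgb, fog_color, clamp(f, 0.0, 1.0));  // blend toward the fog color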
HOWEVER!
The effect isn't really beautiful, because the upper and lower limits of the fog are razor sharp. I forgot about this since it doesn't look bad as long as the eye isn't near those borders. But I think there is an easy solution to this problem. :)
Thanks for the help!

Related

Simplest 2D Lighting in GLSL

Hullo, I want to implement a simple 2D lighting technique in GLSL. My projection matrix is set up so that the top left corner of the window is (0, 0) and the bottom right is (window.width, window.height). I have one uniform variable in the fragment shader uniform vec2 lightPosition; which is currently set to the mouse position (again, in the same coordinate system). I have also calculated the distance from the light to the pixel.
I want to light up the pixel according to its distance from the light source. But here's the catch: I don't want to light it up more than its original color. For instance, if the color of the pixel is (1, 0, 0) (red), then no matter how close the light gets, it should not change beyond that; anything more adds annoying specularity. And the farther the light source moves away from the pixel, the darker I want it to get.
I really feel that I'm close to getting what I want, but I just can't get it!
I would really appreciate some help. I feel that this is rather simple to implement (and I feel ashamed for not knowing how).
Why not scale the distance to the <0..1> range by limiting it to some maximum visibility distance vd and then dividing by vd:
d = min( length(fragment_pos-light_pos) , vd ) / vd;
That should get you a <0..1> range for the distance from the fragment to the light. Now you can optionally apply a simple non-linearization if you want (using pow, which does not change the range...):
d = pow(d,0.5);
or
d = pow(d,2.0);
depending on what you think looks better (you can play with the exponent...), and finally compute the color:
col = face_color * ((1.0-d)*0.8 + 0.2);
where 0.8 is your light source strength and 0.2 is the ambient lighting.
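Put together, the whole fragment shader might look roughly like this (a sketch: lightPosition is the uniform from the question, face_color and vd are assumed uniforms; note that gl_FragCoord has its origin in the bottom-left corner, so the y coordinate may need flipping to match the window coordinates described above):

uniform vec2 lightPosition;   // light position in window coordinates
uniform vec4 face_color;      // original, unlit color of the fragment
uniform float vd;             // maximum visibility distance in pixels

void main()
{
    float d = min(length(gl_FragCoord.xy - lightPosition), vd) / vd;  // 0..1
    d = pow(d, 2.0);                                                  // optional non-linear shaping
    gl_FragColor = vec4(face_color.rgb * ((1.0 - d) * 0.8 + 0.2), face_color.a);
}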

Computing bias for spotlight shadowmap

After having implemented shadows for a spotlight, it appears that the bias computation makes the shadow disappear when my spotlight is too far from the objects.
I have been trying to solve this problem for two days and I use Renderdoc to debug my renderer so all data are correct inside the shader.
My Case:
I use a 32-bit depth buffer.
I have two cubes, one behind the other (and bigger, so I can see the shadow of its neighbor), and a light looking toward the cubes; they are aligned along the z-axis.
I used the following formula, found in a tutorial, to calculate the bias:
float bias = max(max_bias * (1.0 - dot(normal, lightDir)), min_bias);
And I perform the following comparison:
return (fragment_depth - shadow_texture_depth - bias > 0.0) ? 0.0 : 1.0;
However, the farther my spotlight is from the objects, the closer the depth value of the nearest cube gets to the depth of the farthest cube (a difference of about 10^-3, and it decreases with the distance from the light).
Everything else is working; the perspective does its job.
But the bias calculation doesn't take the distance from the fragment to the light into account. So if my objects and my light are aligned, normal and lightDir don't change, and therefore the bias doesn't change either: there is no more shadow on my farthest cube because the bias no longer fits.
I have searched many websites and books (all the Game Programming Gems), but I didn't find a useful formula.
Here I show you two cases.
Below are two pairs of screenshots: the color result from the camera's point of view and the shadow map from the light's point of view.
light position (0, 0, 0): everything works
light position (0, 0, 1.5): doesn't work
Does anybody have a formula or an idea to help me?
Did I misunderstand something?
Thanks for reading.
You bias the difference, which is in post-projective space.
Post-projective space is non-linear, as the depth buffer is logarithmic, so you cannot just offset this difference with a bias that is in world units.
If you want to make it work, you have to reconstruct your sampling position with this normal offset.
Or transform your depths into world space using the inverted projection.
Hope this helps!
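A sketch of the second option (bringing both depths back into linear, view-space units before biasing), assuming a standard OpenGL perspective projection, depth values in [0,1], and near/far uniforms matching the light's projection:

float linearizeDepth(float depth, float near, float far)
{
    float ndc = depth * 2.0 - 1.0;                         // window depth back to NDC [-1,1]
    return (2.0 * near * far) / (far + near - ndc * (far - near));
}

// Compare and bias in linear units instead of raw depth-buffer values:
float fragLin   = linearizeDepth(fragment_depth, near, far);
float shadowLin = linearizeDepth(shadow_texture_depth, near, far);
return (fragLin - shadowLin - bias > 0.0) ? 0.0 : 1.0;     // bias is now a world/view-space distance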

How to choose the Light Size in World Space for Shadow Mapping and Percentage Closer Filtering?

Hi computer graphics and math people :-)
Short question: How to let an artist choose a meaningful light size in world space for shadow maps filtered with percentage closer filtering (PCF), and is it possible to use the same technique to support both spot and directional light sources?
Longer question: I have implemented shadow mapping and filter the edges by applying percentage closer filtering (PCF). The filter kernel is a Poisson-disk, in contrast to a regular, rectangular filter kernel. You can think of a Poisson-disk as sample positions more or less randomly distributed inside the unit circle. So the size of the filter region is simply a factor applied to each of the 2D sample positions of the kernel (the Poisson-disk).
I can adjust the radius/factor of the Poisson-disk and change the size of the penumbra at runtime for either a spot light (perspective frustum) or a directional light (orthographic frustum). This works great, but the values of the parameter do not really have any meaning, which is fine for small 3D samples or even games where one can invest some time to tweak the value empirically. What I want is a parameter called "LightSize" that has an actual meaning in world space. In a large scene, for example one with a building that is 100 units long, the LightSize has to be larger than in a scene with a close-up of a book shelf to get equally smooth shadows. On the other hand, a fixed LightSize would result in extremely smooth shadows on the shelf and quite hard shadows outside the building. This question is not about soft shadows, contact hardening etc., so ignore physically accurate blocker-receiver estimations ;-)
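In shader terms the filtering step is roughly the following (a sketch with made-up names: poissonDisk, NUM_SAMPLES, shadowMap, shadowCoord and bias are assumed to exist; filterRadiusUV is the factor this question is about):

float shadow = 0.0;
for (int i = 0; i < NUM_SAMPLES; ++i) {
    vec2 offset = poissonDisk[i] * filterRadiusUV;                // scale the unit-disk sample
    float blocker = texture(shadowMap, shadowCoord.xy + offset).r; // stored depth
    shadow += (shadowCoord.z - bias > blocker) ? 0.0 : 1.0;        // 1 = lit, 0 = shadowed
}
shadow /= float(NUM_SAMPLES);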
Oh and take a look at my awesome MS Paint illustrations:
Idea 1: If I use the LightSize directly as the filter size, a factor of 0.5 would result in a Poisson-disk with a diagonal of 1.0 and a radius of 0.5. Since texture coordinates are in the range [0,1], this leads to a filter size that evaluates the whole texture for each fragment: imagine a fragment in the center of the shadow map; this fragment would fetch neighboring texels distributed over the whole area of the texture. This would of course yield an extremely large penumbra, but let's call this the "maximum". A penumbra factor of 0.05, for example, would result in a diagonal of 0.1, so that each fragment would evaluate about 10% of its neighboring texels (ignore the circle etc., just think in 2D from a side view). This approach works, but when the angle of a spotlight becomes larger or the frustum of a directional light changes its size, the penumbra changes its width, because the LightSize defines the penumbra in texture space (UV space). The penumbra should stay the same independent of the size of the near plane. Imagine a fitted orthographic frustum: when the camera rotates, the fitted frustum changes in size and so does the size of the penumbra, which is wrong.
Idea 2: Divide the LightSize by the size of the near plane in world space. This works great for orthographic projections, because when the frustum becomes larger, the LightSize is divided by a larger value, so the penumbra stays the same in world space. Unfortunately this doesn't work for a perspective frustum, because there the near plane distance determines the size of the near plane, so the penumbra size now depends on the near plane distance, which is annoying and wrong.
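In code, Idea 2 boils down to something like this (names made up; frustumWidthWS would be the orthographic frustum width, or for a spot light 2.0 * nearDistance * tan(0.5 * fov)):

// Convert the artist-facing world-space light size into a UV-space filter radius.
float filterRadiusUV = lightSizeWS / frustumWidthWS;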
It feels like there has to be a way so that the artist can choose a meaningful light size in world space. I know that PCF is only a (quite bad) approximation of a physically plausible light source, but imagine the following:
When the light source is sampled multiple times by using a Poisson-disk in world space, one can create physically accurate shadows by rendering a hard shadow for each sample position. This works for spot lights. The situation is different for directional lights: one can use an "angle at the origin of the directional light" and render multiple hard shadows, one for each slightly rotated frustum. This does not make physical sense in the real world, but directional lights do not exist anyway, so... By the way, sampling the light source like this is often referred to as multi-view soft shadows (MVSS).
Do you have any suggestions? Could it be that spot and directional lights have to be handled differently and PCF does not allow me to use a meaningful real world light size for a perspective frustum?

Ripple Effect with GLSL need clarification

I have been going through this blog post about a simple water ripple effect. It indeed gives a nice ripple effect, but what I don't understand is this line of code:
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
I don't understand how the math translates into this line and achieves such a nice ripple effect. I need help decrypting the logic behind this line.
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
To understand this equation, let's break it down into pieces and then put them back together.
gl_FragCoord.xy/iResolution.xy
gl_FragCoord.xy varies from (0,0) to (xRes, yRes).
We are dividing by the resolution iResolution.xy.
So "gl_FragCoord.xy/iResolution.xy" will range from (0,0) to (1,1).
This is your pixel coordinate position.
So if you just use "vec2 uv = gl_FragCoord.xy/iResolution.xy", it will be a static image.
(cPos/cLength)
cPos ranges from (-1,-1) to (1,1).
Imagine a 2D plane with origin at center and cPos to be a vector pointing from origin to our current pixel.
cLength will give you the distance from center.
"cPos/cLength" is the unit vector.
Our purpose of finding the unit vector is to find the direction in which the pixel has to be nudged.
vec2 uv = gl_FragCoord.xy/iResolution.xy + (cPos/cLength)*cos(iGlobalTime);
This equation nudges every pixel along its direction vector (the unit vector). But all the pixels are nudged in phase (coherently), so the effect looks like the image is expanding and contracting.
To get the wave effect we have to introduce a phase shift; in a wave, every particle is in a different phase. This can be done with cos(cLength*12.0 - iGlobalTime).
Here cLength is different for every pixel, so we take this value and treat it as the phase of the pixel.
Multiplying by 12.0 amplifies the effect.
vec2 uv = gl_FragCoord.xy/iResolution.xy + (cPos/cLength)*cos(cLength*12.0 - iGlobalTime*4.0);
Multiplying iGlobalTime by 4.0 speeds up the waves.
Finally, the cosine term is multiplied by 0.03 to move the pixels at most within the range (-0.03, 0.03), because moving pixels in the (-1, 1) range would look weird.
And that is the entire equation.
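For reference, the pieces assemble into a complete fragment shader roughly like this (a ShaderToy-style sketch; iResolution, iGlobalTime and the input texture iChannel0 are assumed to be supplied by the host, and cPos/cLength are defined as described above):

uniform vec2 iResolution;
uniform float iGlobalTime;
uniform sampler2D iChannel0;

void main()
{
    vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / iResolution.xy;  // (-1,-1) .. (1,1)
    float cLength = length(cPos);                               // distance from the center
    vec2 uv = gl_FragCoord.xy / iResolution.xy
            + (cPos / cLength) * cos(cLength * 12.0 - iGlobalTime * 4.0) * 0.03;
    gl_FragColor = texture2D(iChannel0, uv);
}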

Is it possible to thicken a quadratic Bézier curve using the GPU only?

I draw lots of quadratic Bézier curves in my OpenGL program. Right now, the curves are one-pixel thin and software-generated, because I'm at a rather early stage, and it is enough to see what works.
Simply enough, given 3 control points (P0 to P2), I evaluate the following equation with t varying from 0 to 1 (with steps of 1/8) in software and use GL_LINE_STRIP to link them together:
B(t) = (1 - t)^2 * P0 + 2(1 - t) * t * P1 + t^2 * P2
Where B, obviously enough, results in a 2-dimensional vector.
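In GLSL the same evaluation is a one-liner; shown here only to pin down the notation (p0..p2 are the control points):

vec2 bezier2(vec2 p0, vec2 p1, vec2 p2, float t)
{
    float s = 1.0 - t;
    return s * s * p0 + 2.0 * s * t * p1 + t * t * p2;  // B(t)
}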
This approach worked 'well enough', since even my largest curves don't need much more than 8 steps to look curved. Still, one pixel thin curves are ugly.
I wanted to write a GLSL shader that would accept control points and a uniform thickness variable to, well, make the curves thicker. At first I thought about making a pixel shader only, that would color only pixels within a thickness / 2 distance of the curve, but doing so requires solving a third degree polynomial, and choosing between three solutions inside a shader doesn't look like the best idea ever.
I then tried to look up whether other people had already done this. I stumbled upon a white paper by Loop and Blinn from Microsoft Research where they show an easy way of filling the area under a curve. While it works well for that, I'm having trouble adapting the idea to drawing between two bounding curves.
Finding bounding curves that match a single curve is rather easy with a geometry shader. The problems come with the fragment shader that should fill the whole thing. Their approach uses the interpolated texture coordinates to determine if a fragment falls over or under the curve; but I couldn't figure a way to do it with two curves (I'm pretty new to shaders and not a maths expert, so the fact I didn't figure out how to do it certainly doesn't mean it's impossible).
My next idea was to separate the filled curve into triangles and only use the Bézier fragment shader on the outer parts. But for that I need to split the inner and outer curves at variable spots, and that means again that I have to solve the equation, which isn't really an option.
Are there viable algorithms for stroking quadratic Bézier curves with a shader?
This partly continues my previous answer, but is actually quite different since I got a couple of central things wrong in that answer.
To allow the fragment shader to shade only between the two curves, two sets of "texture" coordinates are supplied as varying variables, to which the Loop-Blinn technique is applied.
varying vec2 texCoord1, texCoord2;
varying float insideOutside;
varying vec4 col;

void main()
{
    // Loop-Blinn implicit: u^2 - v is negative inside the curve, positive outside.
    float f1 = texCoord1[0] * texCoord1[0] - texCoord1[1];
    float f2 = texCoord2[0] * texCoord2[0] - texCoord2[1];
    // Shade only where the fragment is on the correct side of both curves.
    float alpha = (sign(insideOutside * f1) + 1.0) * (sign(-insideOutside * f2) + 1.0) * 0.25;
    gl_FragColor = vec4(col.rgb, col.a * alpha);
}
So far, easy. The hard part is setting up the texture coordinates in the geometry shader. Loop-Blinn specifies them for the three vertices of the control triangle, and they are interpolated appropriately across the triangle. But, here we need to have the same interpolated values available while actually rendering a different triangle.
The solution to this is to find the linear function mapping from (x,y) coordinates to the interpolated/extrapolated values. Then, these values can be set for each vertex while rendering a triangle. Here's the key part of my code for this part.
vec2[3] tex = vec2[3]( vec2(0.0, 0.0), vec2(0.5, 0.0), vec2(1.0, 1.0) );
mat3 uvmat;
uvmat[0] = vec3(pos2[0].x, pos2[1].x, pos2[2].x);
uvmat[1] = vec3(pos2[0].y, pos2[1].y, pos2[2].y);
uvmat[2] = vec3(1.0, 1.0, 1.0);
mat3 uvInv = inverse(transpose(uvmat));
vec3 uCoeffs = vec3(tex[0][0], tex[1][0], tex[2][0]) * uvInv;
vec3 vCoeffs = vec3(tex[0][1], tex[1][1], tex[2][1]) * uvInv;
float uOther[3];
float vOther[3];
for (int i = 0; i < 3; i++) {
    // Evaluate pos2's (u,v) mapping at the vertices of the pos1 triangle.
    uOther[i] = dot(uCoeffs, vec3(pos1[i].xy, 1.0));
    vOther[i] = dot(vCoeffs, vec3(pos1[i].xy, 1.0));
}
insideOutside = 1.0;
for (int i = 0; i < gl_VerticesIn; i++) {
    gl_Position = gl_ModelViewProjectionMatrix * pos1[i];
    texCoord1 = tex[i];
    texCoord2 = vec2(uOther[i], vOther[i]);
    EmitVertex();
}
EndPrimitive();
Here pos1 and pos2 contain the coordinates of the two control triangles. This part renders the triangle defined by pos1, but with texCoord2 set to the values translated from the pos2 triangle. Then the pos2 triangle needs to be rendered similarly, and the gap between the two triangles at each end needs to be filled, with both sets of coordinates translated appropriately.
Calculating the matrix inverse requires either GLSL 1.50 or hand-written code. It would be better to solve the equation for the translation without calculating the inverse. Either way, I don't expect this part to be particularly fast in the geometry shader.
You should be able to use the technique of Loop and Blinn from the paper you mentioned.
Basically you'll need to offset each control point in the normal direction, in both directions, to get the control points for the two curves (inner and outer). Then follow the technique in Section 3.1 of Loop and Blinn: this breaks up sections of the curve to avoid triangle overlaps and then triangulates the main part of the interior (note that this part requires the CPU). Finally, these triangles are filled, and the small curved parts outside of them are rendered on the GPU using Loop and Blinn's technique (at the start and end of Section 3).
An alternative technique that may work for you is described here:
Thick Bezier Curves in OpenGL
EDIT:
Ah, you want to avoid even the CPU triangulation - I should have read more closely.
One issue you have is the interface between the geometry shader and the fragment shader - the geometry shader will need to generate primitives (most likely triangles) that are then individually rasterized and filled via the fragment program.
In your case, with constant thickness, I think quite a simple triangulation will work, using Loop and Blinn for all the "curved bits". When the two control triangles don't intersect, it's easy. When they do, the part outside the intersection is easy. So the only hard part is within the intersection (which should be a triangle).
Within the intersection you want to shade a pixel only if both control triangles lead to it being shaded via Loop and Blinn. So the fragment shader needs to be able to do texture lookups for both triangles. One can be done as standard, and you'll need to add a vec2 varying variable for the second set of texture coordinates, which you'll need to set appropriately for each vertex of the triangle. You'll also need a uniform "sampler2D" variable for the texture, which you can then sample via texture2D. Then you just shade the fragments that satisfy the checks for both control triangles (within the intersection).
I think this works in every case, but it's possible I've missed something.
I don't know exactly how to solve this, but it's very interesting. I think you need every different processing unit in the GPU:
Vertex shader
Throw a normal line of points at your vertex shader and let the vertex shader displace the points onto the Bézier curve.
Geometry shader
Let your geometry shader create an extra point per vertex.
foreach (point p in bezierCurve)
new point(p+(0,thickness,0)) // in tangent with p1-p2
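A minimal sketch of that extrusion as a modern geometry shader (this ignores joins between segments and perspective; thickness is an assumed uniform, here in clip-space units):

#version 150
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float thickness;  // assumed uniform controlling the stroke width

void main()
{
    vec2 p0 = gl_in[0].gl_Position.xy;
    vec2 p1 = gl_in[1].gl_Position.xy;
    vec2 dir = normalize(p1 - p0);
    vec2 offset = vec2(-dir.y, dir.x) * 0.5 * thickness;  // perpendicular to the segment

    gl_Position = vec4(p0 + offset, gl_in[0].gl_Position.zw); EmitVertex();
    gl_Position = vec4(p0 - offset, gl_in[0].gl_Position.zw); EmitVertex();
    gl_Position = vec4(p1 + offset, gl_in[1].gl_Position.zw); EmitVertex();
    gl_Position = vec4(p1 - offset, gl_in[1].gl_Position.zw); EmitVertex();
    EndPrimitive();
}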
Fragment shader
To stroke your Bézier with a special stroke pattern, you can use a texture with an alpha channel. Check the alpha channel's value: if it's zero, discard the pixel. This way you can still make the system think it is a solid line instead of a half-transparent one, and you can apply patterns in your alpha channel.
I hope this helps you on your way. You will have to figure out a lot yourself, but I think the geometry shader will speed your Bézier up.
Still, for the stroking I would stick with my choice of creating a GL_QUAD_STRIP and an alpha-channel texture.