Simplest 2D Lighting in GLSL

Hello, I want to implement a simple 2D lighting technique in GLSL. My projection matrix is set up so that the top-left corner of the window is (0, 0) and the bottom-right is (window.width, window.height). I have one uniform variable in the fragment shader, uniform vec2 lightPosition;, which is currently set to the mouse position (in the same coordinate system). I have also calculated the distance from the light to the pixel.
I want to light up the pixel according to its distance from the light source. But here's the catch: I don't want to light it up more than its original color. For instance, if the color of the pixel is (1, 0, 0) (red), then no matter how close the light gets, it should not change beyond that, which would add annoying specularity. And the farther the light source moves away from the pixel, the darker I want it to get.
I really feel that I'm close to getting what I want, but I just can't get it!
I would really appreciate some help. I feel that this is a rather simple code to implement (and I feel ashamed for not knowing it).

Why not scale the distance into the <0..1> range by clamping it to some maximum visibility distance vd and then dividing by vd, so:
d = min( length(fragment_pos-light_pos) , vd ) / vd;
That should give you a <0..1> range for the distance from the fragment to the light. Now you can optionally apply a simple non-linearization if you want (using pow, which does not change the range...):
d = pow(d,0.5);
or
d = pow(d,2.0);
depending on what you think looks better (you can play with the exponent ...) and finally compute the color:
col = face_color * ((1.0-d)*0.8 + 0.2);
where 0.8 is your light source strength and 0.2 is the ambient lighting.
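Putting it all together, a minimal fragment shader sketch of the whole idea (faceColor, windowHeight and the visibilityDistance constant are placeholders you would supply from your application; they are not from the original question):
#version 330 core
uniform vec2 lightPosition;   // mouse position, top-left origin
uniform vec3 faceColor;       // the fragment's original color
uniform float windowHeight;   // needed to flip gl_FragCoord.y
const float visibilityDistance = 300.0; // vd, in pixels -- tune to taste
out vec4 outColor;
void main(void) {
    // gl_FragCoord has a bottom-left origin, so flip y to match the
    // top-left-origin coordinate system used for lightPosition
    vec2 fragPos = vec2(gl_FragCoord.x, windowHeight - gl_FragCoord.y);
    float d = min(length(fragPos - lightPosition), visibilityDistance) / visibilityDistance; // 0 at the light, 1 at vd or beyond
    d = pow(d, 2.0); // optional non-linear falloff
    vec3 col = faceColor * ((1.0 - d) * 0.8 + 0.2); // 0.8 strength, 0.2 ambient
    outColor = vec4(col, 1.0);
}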

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks boxy, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
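For context, this is roughly how the correction could sit in your original fragment shader (same cube_edge / skybox_sampler names as above; this is the EAC remap, not the cylindrical variant discussed next):
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    vec3 d = cube_edge;
    // project onto the unit cube so the dominant axis has magnitude 1
    d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
    // equi-angular correction: remap each component with atan, which keeps
    // the [-1,1] range but shrinks pixels near the face centers and expands
    // them toward the corners
    d = atan(d) / atan(1.0);
    color = texture(skybox_sampler, d).rgb;
}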
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the size of the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. The environment map should behave as if you are always at the center of the map and the environment is infinitely far away. I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.) It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated vertex directions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
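For reference, a minimal vertex shader sketch of that far-plane trick (the attribute and uniform names are assumptions; the essential part is the xyww swizzle). With the depth forced to the far plane, remember to use glDepthFunc(GL_LEQUAL) for the skybox pass:
#version 400 core
in vec3 inVertex;    // unit-cube vertex position
out vec3 cube_edge;  // direction vector for the samplerCube lookup
uniform mat4 projection;
uniform mat4 view;   // ideally with its translation removed, so the cube stays centered on the eye
void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    // force z == w so that after perspective division the depth is 1.0,
    // i.e. the skybox lands exactly on the far plane
    gl_Position = clipPos.xyww;
}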

Computing bias for spotlight shadowmap

After having implemented shadows for a spotlight, it appears that the bias computation makes the shadow disappear when my spotlight is too far from the objects.
I have been trying to solve this problem for two days. I use RenderDoc to debug my renderer, so I know all the data are correct inside the shader.
My Case:
I use a 32 bits depth buffer
I have two cubes, one behind the other (and bigger, so I can see the shadow of its neighbor), and a light looking toward the cubes; they are aligned along the z-axis.
I used the following formula found on a tutorial to calculate the bias:
float bias = max(max_bias * (1.0 - dot(normal, lightDir)), min_bias);
And I perform the following comparison:
return (fragment_depth - shadow_texture_depth - bias > 0.0) ? 0.0 : 1.0;
However, the farther my spotlight is from the objects, the closer the depth value of the nearest cube gets to the depth of the farthest cube (a difference of about 10^-3, and it decreases with the distance from the light).
Everything is working; the perspective is doing its job.
But the bias calculation doesn't take the distance from the fragment to the light into account. If my objects and my light are aligned, normal and lightDir don't change, so the bias doesn't change either: there is no more shadow on my farthest cube because the bias no longer fits.
I have searched on many websites and books (all game programming gems), but I didn't find useful formula.
Here I show you two cases:
Here you have two pair of screenshot the colour result from the camera point of view and the shadowmap from the light point of view.
light position (0, 0, 0), everything works
light position (0, 0, 1.5), doesn't work
Does anybody have a formula or an idea to help me ?
Did I misunderstand something ?
Thanks for reading.
You bias the difference, which is in post-projective space.
Post-projective space is non-linear: the depth values stored in the depth buffer are not distributed linearly with distance. So you cannot just offset this difference with a bias expressed in "world units".
If you want to make it work, you have to reconstruct your sampling position with a normal offset.
Or transform your depths into world space using the inverted projection.
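For example, one way to follow that second suggestion is to bring both depths back to linear (eye-space) units before comparing, so the bias really is in world units. A sketch, assuming a standard perspective projection for the light and the default [0,1] depth range (lightNear/lightFar and the bias constants are placeholders to tune for your scene):
// convert a [0,1] depth-buffer value back to a linear eye-space distance
float linearizeDepth(float depth, float lightNear, float lightFar) {
    float ndcZ = depth * 2.0 - 1.0; // back to [-1,1]
    return (2.0 * lightNear * lightFar) / (lightFar + lightNear - ndcZ * (lightFar - lightNear));
}
float shadowTest(float fragment_depth, float shadow_texture_depth, vec3 normal, vec3 lightDir, float lightNear, float lightFar) {
    float fragLinear = linearizeDepth(fragment_depth, lightNear, lightFar);
    float occluderLinear = linearizeDepth(shadow_texture_depth, lightNear, lightFar);
    // the bias is now in world units, so it no longer depends on how far
    // the light is from the geometry (tune the constants to your scene scale)
    float bias = max(0.05 * (1.0 - dot(normal, lightDir)), 0.005);
    return (fragLinear - occluderLinear - bias > 0.0) ? 0.0 : 1.0;
}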
Hope it helps!

Getting depth from Float texture in post process

I'm having a bit of trouble getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I store the depth in the alpha component of a floating-point render target. The code for that shader looks something like this:
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I ask because I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When the fogNear is set to 0, I would have thought I get a smooth transition of fog from right in front of the camera to its draw distance. However this is what I see:
When I set the fogNear to 0.995, then I get something more like what Im expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is neither really small nor too large, and the camera near and far planes aren't extreme either; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you store is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case. The NDC and window-space z values are not a linear representation of the distance to the camera plane, so it is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate instead - for a typical perspective projection that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range of your projection matrix - but specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
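A sketch of that second suggestion (names like depthMap, uv, sceneColor and fogColor are placeholders, and the old gl_FragColor/texture2D style of your snippets is kept):
// geometry pass fragment shader: clipPos is the varying copied from gl_Position.
// For a typical perspective projection clipPos.w == -z_eye, i.e. the positive
// distance to the camera plane, so store that instead of clipPos.z / clipPos.w:
gl_FragColor.w = clipPos.w;
// fog / post-process pass: the stored value is now in eye-space units,
// so fogNear and fogFar can be given in the same units as your scene
float depth = texture2D(depthMap, uv).w;
float fogFactor = smoothstep(fogNear, fogFar, depth);
gl_FragColor.rgb = mix(sceneColor, fogColor, fogFactor);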

Calculate vector intersections in GLSL (OpenGL)

I want to add fog to a scene. But instead of adding fog to the fragment color based on its distance to the camera, I want to follow a more realistic approach. I want to calculate the distance, the vector from eye to fragment "travels" through a layer of fog.
With fog layer I mean that the fog has a lower limit (z-coordinate, which is up in this case) and a higher limit. I want to calculate the vector from the eye to the fragment and get the part of it which is inside the fog. This part is marked red in the graphic.
The calculation is actually quite simple. However, with the straightforward approach I would have to do several conditional tests (if/then):
calculate line from vector and camera position;
get line intersection with lower limit;
get line intersection with higher limit;
do some logic stuff to figure out how to handle the intersections;
calculate deltaZ, based on intersections;
scale the vector (vector = deltaZ/vector.z)
fogFactor = length(vector);
This should be quite easy. However, the trouble is that I would have to add some logic to figure out how the camera and the fragment are located in relation to the fog. I also have to make sure that the vector actually intersects the limits. (It would cause trouble when the vector's z value is 0.)
The problem is that branching is not the best friend of shaders, at least that is what the internet has told me. ;)
My first question: Is there a better way of solving this problem? (I actually want to stay with my model of fog, since this is about problem solving.)
The second question: I think the calculation should be done in the fragment shader and not the vertex shader, since this is not something that can be interpolated. Am I right about this?
Here is a second graphic of the scenario.
Problem solved. :)
Instead of defining the fog with a lower limit and a higher limit, I define it with a center height and a radius. So the lower limit equals the center minus the radius, the higher limit is the center plus the radius.
With this, I came up with this calculation: (sorry for the bad variable names)
// position_worldspace is the fragment position in world space
// delta1 and delta2 are the differences along the z-axis from the
// fragment / the eye to the fog center height
float delta1 = clamp(position_worldspace.z - fog_centerZ,
                     -fog_height, fog_height);
float delta2 = clamp(fog_centerZ - cameraPosition_worldspace.z,
                     -fog_height, fog_height);
float fogFactorZ = delta1 + delta2;
vec3 viewVector = position_worldspace - cameraPosition_worldspace;
float fogFactor = length(viewVector * (fogFactorZ / viewVector.z));
I guess this is not the fastest way of calculating this but it does the trick.
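For completeness, the resulting fogFactor (the length of the view path inside the fog layer, in world units) could then be applied like this; fog_density and fog_color are placeholders, not part of the original code:
// convert the path length into an opacity (Beer-Lambert style falloff)
float fogAmount = 1.0 - exp(-fogFactor * fog_density);
color = mix(color, fog_color, fogAmount);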
HOWEVER!
The effect isn't really beautiful because the upper and lower limits of the fog are razor sharp. I forgot about this since it doesn't look bad when the eye isn't near those borders. But I think there is an easy solution to this problem. :)
Thanks for the help!

What is Half vector in modern GLSL?

http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir2
reports that half vector in OpenGL context is 'Eye position - Light position' but then it goes on to say 'luckily OpenGL calculates it for us' [which is now deprecated].
How can it be calculated in practice (a simple example would be greatly appreciated)? [Mainly, it puzzles me what "Eye" is and how it can be derived.]
At the moment I managed to make specular calculations work (with good visual result) with half vector being equal to Light where Light is
vec3 Light = normalize(light_position - vec3(out_Vertex));
Now, I've no idea why that worked.
[If at least I knew what "Eye" is and how it can be derived practically.]
The half-vector is used in specular lighting and represents the normal at micro-imperfections in the surface which would cause incoming light to reflect toward the viewer. When the half-vector is closer to the surface normal, more imperfections align with the actual surface normal. Smoother surfaces will have fewer imperfections pointing away from the surface normal and result in a sharper highlight with a more significant drop off of light as the half-vector moves away from the actual normal than a rougher surface. The amount of drop off is controlled by the specular term, which is the power to which the cosine between the half-vector and normal vector is taken, so smoother surfaces have a higher power.
We call it the half-vector (H) because it is half-way between the vector point to the light (light vector, L) and the vector pointing to the viewer (which is the eye position (0,0,0) minus the vertex position in eye space; view vector, V). Before you calculate H, make sure the vector to the light and the eye are in the same coordinate space (legacy OpenGL used eye-space).
H = normalize( L + V )
You did the correct calculation, but your variables could be named more appropriately.
The term light_position here isn't entirely correct, since the tutorial you cited is the directional-light tutorial, and by definition directional lights don't have a position. The light vector for directional lights is independent of the vertex, so here you have combined a few equations. Keep in mind that the light vector points toward the light, i.e. opposite to the flow of photons from the light.
// i'm keeping your term here... I'm assuming
// you wanted to simulate a light coming from that position
// using a directional light, so the light direction would
// be -light_position, making
// L = -(-light_position) = light_position
vec3 L = light_position;
// For a point light, you would need the direction from
// the point to the light, so instead it would be
// light_position - out_Vertex
vec3 V = -out_Vertex;
// really it is (eye - vertexPosition), so (0,0,0) - out_Vertex
vec3 H = normalize(L + V);
In the fragment shader, the vertex coordinates can be seen as a vector that goes from the camera (or the "eye" of the viewer) to the current fragment, so by reversing the direction of this vector we get the "Eye" vector you are looking for. When calculating the half-vector you also need to be aware of the direction of the light-position vector: on the webpage you linked it points towards the surface, while on http://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model it points away from the surface.
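Putting the above together, a minimal Blinn-Phong sketch with everything in eye space (eyePosition, eyeNormal, lightPosEye, shininess and the color terms are assumed inputs, not names from the question):
// all vectors in eye space; the viewer ("Eye") sits at the origin (0,0,0)
vec3 N = normalize(eyeNormal);                 // surface normal
vec3 L = normalize(lightPosEye - eyePosition); // point light: toward the light
vec3 V = normalize(-eyePosition);              // toward the viewer at (0,0,0)
vec3 H = normalize(L + V);                     // the half-vector
// specular term: higher shininess gives a sharper highlight
float spec = pow(max(dot(N, H), 0.0), shininess);
vec3 specularColor = lightColor * materialSpecular * spec;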