I was wondering how I could detect the edges of my shadow map texture in my fragment shader.
I took a look at the following tutorial on youtube:
https://www.youtube.com/watch?v=9sEHkT7N7RM
But in that guy's case, the solution depends on the camera position. I would prefer not to depend on the camera position.
The problem:
If an object moves out of the area that was sampled by the shadow map rendering pass, the object's shadow gets clipped (normal behavior).
So, if I could tell that the current fragment's position is close to the shadow map border, I could add some sort of fade-out mechanism.
Cheers and thanks in advance!
Alright, I actually cannot believe it, but I solved this one myself and it was quite a simple thing to do:
In my vertex shader I calculate the vertex position from the light's point of view and then calculate how close it is to the texture edge:
// vertex position from the light's point of view; the matrix is assumed to include
// the bias that maps the result into the shadow map's [0,1] texture space
vShadowCoord = uShadowModelViewProjectionMatrix * vec4(vertexPos.xyz, 1.0);
// per-axis distance from the shadow map center (0.5, 0.5)
float borderFactorX = abs(0.5 - vShadowCoord.x);
float borderFactorY = abs(0.5 - vShadowCoord.y);
// 0.0 at the center of the shadow map, 1.0 at its border
vShadowBorderFactor = max(borderFactorX, borderFactorY) * 2.0;
Now, vShadowBorderFactor stores a value between 0 and 1.
I pass vShadowBorderFactor to the fragment shader and then calculate the amount of darkening (from the cast shadow):
darkening = textureProj(uShadowTexture, vec4(vShadowCoord.xy, vShadowCoord.z - bias, 1.0)) + (vShadowBorderFactor * vShadowBorderFactor);
I then apply the darkening to the scene's overall color vector. Voilà!
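For completeness, here is a minimal sketch of the corresponding fragment shader. The sampler, varying and border-factor names follow the snippets above; the GLSL 1.30+ style, the bias constant and the base scene color are assumptions:

// sketch only: GLSL 1.30+ style assumed, scene color is a placeholder
uniform sampler2DShadow uShadowTexture;
in vec4 vShadowCoord;          // from the vertex shader above
in float vShadowBorderFactor;  // 0.0 at the shadow map center, 1.0 at its border
out vec4 fragColor;

void main(void) {
    float bias = 0.005;  // assumed constant depth bias
    float darkening = textureProj(uShadowTexture,
        vec4(vShadowCoord.xy, vShadowCoord.z - bias, 1.0))
        + vShadowBorderFactor * vShadowBorderFactor;
    darkening = clamp(darkening, 0.0, 1.0);  // fade the shadow out near the border
    vec3 sceneColor = vec3(1.0);             // placeholder for the shaded scene color
    fragColor = vec4(sceneColor * darkening, 1.0);
}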
I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u' = 4/pi * atan(u)
v' = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));  // project onto the unit cube
d = atan(d) / atan(1.0);                      // the 4/pi * atan remapping from above
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);                // project onto the unit cylinder around the horizon
d.xy /= max(abs(d.x), abs(d.y));  // then onto the cube's four side faces
d.xy = atan(d.xy) / atan(1.0);    // equal-angle spacing along the horizon
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
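Dropped into the fragment shader from the question, the cylindrical remap would look roughly like this (a sketch only; apart from the asker's names, it assumes z is the "up" axis as in the snippet above):

#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    // assumes z is the "up" axis; swap components if y is up in your setup
    vec3 d = cube_edge;
    d /= length(d.xy);                // project onto the unit cylinder around the horizon
    d.xy /= max(abs(d.x), abs(d.y));  // then onto the cube's four side faces
    d.xy = atan(d.xy) / atan(1.0);    // equal-angle spacing along the horizon
    color = texture(skybox_sampler, d).rgb;
}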
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the geometry the environment is placed on is too small: you are not looking at a distant environment, you are looking at the inside of a small cube that you are sitting in. An environment map should behave as if you are always at the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader: if z equals w, the final depth after the perspective divide is 1.0, which is the depth of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated cube vertices. For a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the cube's vertex coordinate directly to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
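A minimal, self-contained version of that vertex shader could look like the following; the names match the snippet above, and the note about stripping the view translation is a common skybox convention rather than something stated in the answer:

#version 400 core
in vec3 inVertex;          // unit cube vertex
out vec3 cube_edge;
uniform mat4 projection;
uniform mat4 view;         // typically with its translation removed, e.g. mat4(mat3(view)),
                           // so the cube stays centered on the camera

void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    gl_Position = clipPos.xyww;  // z == w, so depth becomes 1.0 after the perspective divide
}

Remember to draw the skybox with a depth function of GL_LEQUAL (and usually last), otherwise the depth value of exactly 1.0 will fail against a depth buffer cleared to 1.0.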
Hullo, I want to implement a simple 2D lighting technique in GLSL. My projection matrix is set up so that the top left corner of the window is (0, 0) and the bottom right is (window.width, window.height). I have one uniform variable in the fragment shader uniform vec2 lightPosition; which is currently set to the mouse position (again, in the same coordinate system). I have also calculated the distance from the light to the pixel.
I want to light up the pixel according to its distance from the light source. But here's the catch: I don't want to light it up more than its original color. For instance, if the color of the pixel is (1, 0, 0) (red), then no matter how close the light gets, it should not get any brighter than that; pushing it further adds annoying specularity. And the farther the light source moves away from the pixel, the darker I want it to get.
I really feel that I'm close to getting what I want, but I just can't get it!
I would really appreciate some help. I feel that this is a rather simple code to implement (and I feel ashamed for not knowing it).
Why not scale the distance to the <0..1> range by clamping it to some maximum visibility distance vd and then dividing by vd, like so:
d = min( length(fragment_pos-light_pos) , vd ) / vd;
That should get you the <0..1> range for the distance from fragment to light. Now you can optionally apply a simple non-linearization if you want (using pow, which does not change the range):
d = pow(d,0.5);
or
d = pow(d,2.0);
depending on what you think looks better (you can play with the exponent ...) and finally compute the color:
col = face_color * ((1.0-d)*0.8 + 0.2);
where 0.8 is your light source strength and 0.2 is the ambient lighting.
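Putting it together, a minimal fragment shader sketch could look like this; lightPosition is the uniform from the question, while vd, fragPos, faceColor and the GLSL 1.30 style are assumptions:

#version 130
uniform vec2 lightPosition;  // light (mouse) position in window coordinates
uniform float vd;            // assumed maximum visibility distance, in pixels
in vec2 fragPos;             // assumed varying: fragment position in the same coordinates
in vec4 faceColor;           // assumed varying: original, unlit color of the fragment
out vec4 outColor;

void main(void) {
    float d = min(length(fragPos - lightPosition), vd) / vd;  // distance mapped to <0..1>
    d = pow(d, 2.0);                                          // optional non-linear falloff
    outColor = vec4(faceColor.rgb * ((1.0 - d) * 0.8 + 0.2), faceColor.a);
}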
I would like to "bypass" the classical light volume approach of deferred lighting.
Usually, when you want to affect pixels within a pointlight volume, you can simply render a sphere mesh.
I would like to try another way to do that: the idea is to render a cube which encompasses the sphere. The cube is circumscribed about the sphere, so the center of each face is a point on the sphere. Then you only have to know, from your point of view, which fragments would be part of the circle (the sphere as it appears on your screen) if you had rendered the sphere instead.
So the main problem is to know which fragment will have to be discarded.
How could I do that:
Into the fragment shader, I have my "camera" world coordinates, my fragment world coordinates, my sphere world center, and my sphere radius.
So I have the straight line whose direction vector is given by the camera and fragment world positions.
And I can build my sphere equation.
Finally I can know if the line intersect the sphere.
Is it correct to say that, from my point of view, if the line intersects the sphere, then this fragment must be considered a highlighted fragment (a fragment that would have been rendered if I had rendered the sphere instead)?
So the check "length(fragment - sphereCenter) <= sphereRadius" doesn't really mean anything here, because the fragment is not on the sphere.
So what?
The standard deferred shading solution for lights is to render a full-screen quad. The purpose of rendering a sphere instead is to avoid doing a bunch of per-fragment calculations for fragments which are outside of the light source's effect. This means that the center of that sphere is the light source, and its radius represents the maximum distance for which the source has an effect.
So the length from the fragment (that is, reconstructed from your g-buffer data, not the fragment produced by the cube) to the sphere's center is very much relevant. That's the length between the fragment and the light source. If that is larger than the sphere radius (AKA: maximum reach of the light), then you can cull the fragment.
Or you can just let your light attenuation calculations do the same job. After all, in order for lights to not look like they are being cropped, that sphere radius must also be used with some form of light attenuation. That is, when a fragment is at that distance, the attenuation of the light must be either 0 or otherwise negligibly small.
As such... it doesn't matter if you're rendering a sphere, cube, or a full-screen quad. You can either cull the fragment or let the light attenuation do its job.
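For example, an attenuation term that reaches exactly zero at the sphere radius could look like this (a sketch only; dist, lightRadius and lightColor are assumed names for values your lighting pass already has):

// dist = length(reconstructedFragPos - lightPos), lightRadius = light volume radius
float att = clamp(1.0 - dist / lightRadius, 0.0, 1.0);
att *= att;                            // smoother falloff, still exactly 0 at the radius
vec3 contribution = lightColor * att;  // negligible outside the volume by construction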
However, if you want to possibly save performance by discarding the fragment before reading any of the g-buffers, you can do this. Assuming you have access to the camera-space position of the sphere/cube's center in the FS:
Convert the position of the cube's fragment into camera-space. You can do this by reverse-transforming gl_FragCoord, but it'd probably be faster to just pass the camera-space position to the fragment shader. It's not like your VS is doing a lot of work or anything.
Because the camera-space position is in camera space, it already represents a direction from the camera into the scene. So now, use this direction to perform part of ray/sphere intersection. Namely, you stop once you compute the discriminant (to avoid an expensive square-root). The discriminant is:
float A = dot(cam_position, cam_position);
float B = -2.0 * dot(cam_position, cam_sphere_center);
float C = dot(cam_sphere_center, cam_sphere_center) - (radius * radius);
float Discriminant = (B * B) - 4.0 * A * C;
If the discriminant is negative, discard the fragment. Otherwise, do your usual stuff.
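In shader terms, the early-out continues the snippet above roughly like this; the g-buffer reads are only reached when the view ray can actually hit the light volume:

if (Discriminant < 0.0)
    discard;  // the view ray through this fragment never enters the light volume
// ... otherwise read the g-buffer and do the usual per-light shading here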
I'm having a bit of trouble with getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating-point render target. The code for that shader looks something like this.
Define the clip position as a varying:
varying vec4 clipPos;
...
In the vertex shader, assign the position:
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader, I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I ask because I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have expected a smooth transition of fog from right in front of the camera out to the draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not really small or too large, and neither are the camera near and far planes; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off writing the window-space z coordinate directly to your depth texture; it is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. That is not the case: the NDC and window-space z values are not a linear measure of the distance to the camera plane, so it is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. Since you only need the z coordinate here, you could simply store the clip-space w coordinate instead, because for a typical projection matrix that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range of your projection matrix; however, specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
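Concretely, the two passes could become something like this (a sketch only; the variable names follow the question, and the clipPos.w trick assumes a standard perspective projection):

// geometry pass: store the eye-space distance instead of NDC z
// (for a standard perspective projection, clipPos.w == -z_eye)
gl_FragColor.w = clipPos.w;

// fog pass: fogNear and fogFar are now plain eye-space distances, e.g. 1.0 and 100.0
float depth = depthMap.w;
float fogFactor = smoothstep(fogNear, fogFar, depth);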
http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir2
reports that the half vector in an OpenGL context is 'Eye position - Light position', but then goes on to say that 'luckily OpenGL calculates it for us' [which is now deprecated].
How can it be calculated in practice? (A simple example would be greatly appreciated.) Mainly, it puzzles me what "Eye" is and how it can be derived.
At the moment I have managed to make the specular calculations work (with a good visual result) with the half vector being equal to Light, where Light is
vec3 Light = normalize(light_position - vec3(out_Vertex));
Now, I've no idea why that worked.
[If at least I knew what "Eye" is and how it can be derived practically.]
The half-vector is used in specular lighting and represents the normal at micro-imperfections in the surface which would cause incoming light to reflect toward the viewer. When the half-vector is closer to the surface normal, more imperfections align with the actual surface normal. Smoother surfaces will have fewer imperfections pointing away from the surface normal and result in a sharper highlight with a more significant drop off of light as the half-vector moves away from the actual normal than a rougher surface. The amount of drop off is controlled by the specular term, which is the power to which the cosine between the half-vector and normal vector is taken, so smoother surfaces have a higher power.
We call it the half-vector (H) because it is half-way between the vector point to the light (light vector, L) and the vector pointing to the viewer (which is the eye position (0,0,0) minus the vertex position in eye space; view vector, V). Before you calculate H, make sure the vector to the light and the eye are in the same coordinate space (legacy OpenGL used eye-space).
H = normalize( L + V )
You did the correct calculation, but your variables could be named more appropriately.
The term light_position here isn't entirely correct, since the tutorial you cited is the directional-light tutorial, and by definition directional lights don't have a position. The light vector for a directional light is independent of the vertex, so you have combined a few equations here. Keep in mind that the light vector points toward the light, i.e. opposite to the flow of photons from the light.
// I'm keeping your term here... I'm assuming
// you wanted to simulate a light coming from that position
// using a directional light, so the light direction would
// be -light_position, making
// L = -(-light_position) = light_position
vec3 L = light_position;
// For a point light, you would need the direction from
// the point to the light, so instead it would be
// light_position - out_Vertex
vec3 V = -out_Vertex;
// really it is (eye - vertexPosition), so (0,0,0) - out_Vertex
vec3 H = normalize(L + V);
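For completeness, the specular term itself would then be computed from H roughly like this (a sketch; N, shininess and the color terms are assumptions, not part of the code above):

// N = normalized eye-space surface normal, shininess = specular exponent (assumed)
float nDotH = max(dot(normalize(N), H), 0.0);
vec3 specular = lightColor * materialSpecular * pow(nDotH, shininess);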
In the fragment shader the vertex coordinates can be seen as a vector going from the camera (the "eye" of the viewer) to the current fragment, so by reversing the direction of this vector we get the "Eye" vector you are looking for. When calculating the half vector you also need to be aware of the direction of the light-position vector: on the webpage you link it points towards the surface, while on http://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model it points away from the surface.