Approximate Indirect Diffuse - opengl

I'm working on implementing indirect diffuse in my renderer and I'm looking at section 8.4.1 of this paper from Valve.
In my renderer I have a 256x256x256 voxel grid with radiance and shadowing injected, and I'm trying to convert/use this snippet from the paper to calculate ambient lighting against my voxels:
float3 AmbientLight( const float3 worldNormal )
{
    float3 nSquared = worldNormal * worldNormal;
    int3 isNegative = ( worldNormal < 0.0 );
    float3 linearColor;
    linearColor = nSquared.x * cAmbientCube[isNegative.x] +
                  nSquared.y * cAmbientCube[isNegative.y + 2] +
                  nSquared.z * cAmbientCube[isNegative.z + 4];
    return linearColor;
}
The problem is that I cannot figure out what cAmbientCube is in the calculation...
What is this variable and where does it come from?

Section 8.4.1 of the paper, where you copied the code from, pretty much explains it. cAmbientCube is an array of six light colors, one for each principal direction: positive x, negative x, positive y, negative y, positive z, negative z (in that order, which is what the isNegative indexing in the code assumes). Whereas the typical ambient term is assumed to be a constant light that illuminates equally from all directions, the Ambient Cube technique described in the paper is a generalization which assumes a different light color coming from each of those six directions. The code you posted evaluates this ad-hoc six-sided "cube map": it picks the three face colors the normal points towards and blends them, weighted by the squared components of the world normal.
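In GLSL, a minimal sketch of what this could look like (the uniform name and how the six colors get filled are just assumptions for illustration; in your case they would presumably come from your voxel grid or from whatever baking/injection step you already have):

```glsl
// Six ambient colors, indexed +X, -X, +Y, -Y, +Z, -Z to match the paper's code.
uniform vec3 cAmbientCube[6];

vec3 ambientLight(vec3 worldNormal)
{
    vec3 nSquared = worldNormal * worldNormal;
    // 0 when the component is positive, 1 when it is negative.
    ivec3 isNegative = ivec3(lessThan(worldNormal, vec3(0.0)));
    return nSquared.x * cAmbientCube[isNegative.x] +
           nSquared.y * cAmbientCube[isNegative.y + 2] +
           nSquared.z * cAmbientCube[isNegative.z + 4];
}
```

Because the squared components of a unit normal sum to 1, the result is a smooth, energy-conserving blend of the three faces the normal points towards.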

Related

Omnidirectional Lighting in OpenGL/GLSL 4.1

I've gotten shadows working properly for my Directional Lights, but I'm a little stumped when it comes to Point Lights. My idea is to use a cube map to render the depth from all six sides surrounding the light. So far, that's all working and good. I have verified this step by rendering each face of my cube to a 2D image, and it appears to be correct.
Now I'm trying to get the shadows to show up in the world. To do so, I am using GLSL's samplerCubeShadow data type. With it, I do:
vec3 lightToFrag = light.position - fragPos;
float lenLightToFrag = length(lightToFrag);
vec3 normLightToFrag = normalize(lightToFrag);
float shadow = texture(depthTexture, vec4(normLightToFrag, lenLightToFrag));
I've tried a number of configurations, and this always renders my scene in black. Any ideas? My fragPos is just the model matrix times the vertex position. Should I be applying the light's model-view matrix to it? Or, similarly, should I be applying the world's model-view matrix to the light? Any feedback is really appreciated!
Assuming you are storing depth values in the cubemap:
AFAIK a cubemap is an AABB in world space, so you need to do the calculations in world space. In your case light.position and fragPos must be in world space, or you need to provide alternative variables/members if you use these names in view space somewhere else, e.g. for per-fragment light calculations.
You also need to convert lightToFrag to a depth value before passing it to texture().
This answer shows how to convert lightToFrag to a depth value: Omnidirectional shadow mapping with depth cubemap
Here is my implementation (I removed #ifdef SHAD_CUBE because others use the same name):
uniform samplerCubeShadow uShadMap;
uniform vec2 uFarNear;

// Convert a world-space light-to-fragment vector to the depth value
// stored in the cubemap, projected along its dominant axis.
float depthValue(const in vec3 v) {
    vec3 absv = abs(v);
    float z = max(absv.x, max(absv.y, absv.z));
    return uFarNear.x + uFarNear.y / z;
}

float shadowCoef() {
    vec3 L;
    float d;
    L = vPosWS - light.position_ws;
    d = depthValue(L);
    return texture(uShadMap, vec4(L, d));
}
This may require a separate uniform model matrix (to compute vPosWS in world space) if you only have the combined ModelViewProjection (MVP).
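If it helps, a minimal vertex-shader sketch of producing vPosWS (the names uModelMatrix, uMVP and aPosition are assumptions for illustration, not taken from the code above):

```glsl
uniform mat4 uModelMatrix;   // model (object-to-world) matrix
uniform mat4 uMVP;           // combined ModelViewProjection

in vec3 aPosition;

out vec3 vPosWS;             // world-space position used by shadowCoef()

void main() {
    vPosWS      = (uModelMatrix * vec4(aPosition, 1.0)).xyz;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```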
Here is how to calculate uFarNear on the client side:
float n, f, nfsub, nf[2];
n = sm->near;
f = sm->far;
nfsub = f - n;
nf[0] = (f + n) / nfsub * 0.5f + 0.5f;
nf[1] =-(f * n) / nfsub;
glUniform2f(gkUniformLoc(prog, "uFarNear"), nf[0], nf[1]);
This is just an optimization; you don't have to use it, you can instead follow the link I mentioned before.
You may need a bias value; the related answer uses a bias, but I'm not sure how to apply it to a cubemap correctly. I'm not sure whether d ± 0.0001 is the correct way or not.
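For illustration only, this is the kind of constant offset that sentence refers to, applied to the shadowCoef() above (the 0.0001 value is just the guess mentioned there, not something verified):

```glsl
float shadowCoefBiased() {
    vec3  L = vPosWS - light.position_ws;
    float d = depthValue(L) - 0.0001;   // small depth bias to reduce shadow acne; tune or remove
    return texture(uShadMap, vec4(L, d));
}
```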
If you want to store world distances in the cubemap instead, then this tutorial seems a good one: https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows

OpenGL GLSL Shadows not working correctly

I'm trying to implement shadow maps in Java/OpenGL with GLSL.
It seems to be almost impossible to find a working example of shadow maps with Java/OpenGL and perspective projection.
What I think is that the matrix calculation isn't working well.
Here is my shadow result (camera view/proj = shadow view/proj):
And here I have mapped the linearized depth buffer onto a rectangle; it's a little bit rotated:
It seems like the depth buffer is flipped, because on every surface I have mapped it onto, it is x- and/or y-flipped. But maybe it's just a UV bug.
So the major question is: can you give me a hint about what may have happened?
Here are some code snippets:
Final Shader: Depth & Shadow calculation (uSamplerShadow is sampler2D)
float shadowValue = 0.0;
vec4 lightVertexPosition2 = vShadowCoord;
lightVertexPosition2 /= lightVertexPosition2.w;
for (float x = -0.001; x <= 0.001; x += 0.0005)
    for (float y = -0.001; y <= 0.001; y += 0.0005)
    {
        if (texture2D(uSamplerShadow, lightVertexPosition2.xy + vec2(x, y)).r >= lightVertexPosition2.z)
            shadowValue += 1.0;
    }
shadowValue /= 16.0;

float f = 100.0;
float n = 0.1;
float z = (2.0 * n) / (f + n - texture2D(uSamplerShadow, vTexCoords).x * (f - n));
outColor = vec4(vec3(z), 1.0);
Final Shader: Shadow coord calculation (no bias matrix implemented yet)
vShadowCoord = uProjectionMatrix * uShadowViewMatrix * uWorldMatrix * vec4(aPosition,1.0);
Depth Shader
fragmentdepth = gl_FragCoord.z;
You can check my texture properties too, but I have already tried all the combinations I found on Google :)
shadowTextureProperties.setMagFilter(EnumTextureFilter.NEAREST);
shadowTextureProperties.setMinFilter(EnumTextureFilter.NEAREST);
shadowTextureProperties.setWrapS(EnumTextureWrap.CLAMP_TO_EDGE);
shadowTextureProperties.setWrapT(EnumTextureWrap.CLAMP_TO_EDGE);
shadowTextureProperties.setInternalColorFormat(EnumTextureColorFormat.DEPTH_COMPONENT16);
shadowTextureProperties.setSrcColorFormat(EnumTextureColorFormat.DEPTH_COMPONENT);
shadowTextureProperties.setValueFormat(EnumValueFormat.FLOAT);
shadowTextureProperties.setPname(new int[]{GL14.GL_TEXTURE_COMPARE_MODE, GL14.GL_TEXTURE_COMPARE_FUNC});
shadowTextureProperties.setParam(new int[]{GL11.GL_NONE, GL11.GL_LEQUAL});
First thing:
shadowTextureProperties.setPname(new int[]{GL14.GL_TEXTURE_COMPARE_MODE, GL14.GL_TEXTURE_COMPARE_FUNC});
GL_TEXTURE_COMPARE_FUNC is not a valid parameter for GL_TEXTURE_COMPARE_MODE. According to the reference, only GL_NONE and GL_COMPARE_R_TO_TEXTURE are allowed.
GL_COMPARE_R_TO_TEXTURE
One has to use a shadow sampler (sampler2DShadow) and the corresponding texture overload:
float texture( sampler2DShadow sampler, vec3 P)
Here, the sampler is sampled at location P.xy and the read value is compared to P.z. The result of this operation is a lighting factor (0.0 when completely shadowed, 1.0 when not shadowed at all).
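For example, the question's PCF loop rewritten for this hardware-compare path might look roughly like this (uSamplerShadow and vShadowCoord are the question's names; this is a sketch, assumes a GLSL version where the texture() overload above is available, and is not tested against your setup):

```glsl
uniform sampler2DShadow uSamplerShadow;  // texture has GL_TEXTURE_COMPARE_MODE = GL_COMPARE_R_TO_TEXTURE

float shadowFactor(vec4 shadowCoord)
{
    vec3 proj = shadowCoord.xyz / shadowCoord.w;   // perspective divide
    float shadowValue = 0.0;
    for (float x = -0.001; x <= 0.001; x += 0.0005)
        for (float y = -0.001; y <= 0.001; y += 0.0005)
            // texture() compares proj.z against the stored depth and returns the lighting factor.
            shadowValue += texture(uSamplerShadow, vec3(proj.xy + vec2(x, y), proj.z));
    return shadowValue / 25.0;                     // 5x5 taps
}
```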
GL_NONE
When you want to do the comparison yourself, you have to set GL_TEXTURE_COMPARE_MODE to GL_NONE and sample with a regular sampler2D, comparing the read depth against your reference value manually (which is what your posted shader code does).

Variance Shadow Map Depth Issue

I have been trying to get variance shadow mapping to work in my webgl application, but I seem to be having an issue that I could use some help with. In short, my shadows seem to vary over a much smaller distance than the examples I have seen out there. I.e. the shadow range is from 0 to 500 units, but the shadow is black 5 units away and almost non-existent 10 units away. The examples I am following are based on these two links:
VSM from Florian Boesch
VSM from Fabian Sanglard
In both of those examples, the authors are using a spot light with perspective projection to map the variance values to a floating-point texture. In my engine, I have so far tried to use the same logic, except I am using a directional light and orthographic projection. I tried both techniques and the result seems to always be the same for me. I'm not sure if it's because I'm using an orthographic matrix for the projection - I suspect it might be. Here is a picture of the problem:
Notice how the box is only a few units away from the circle, but the shadow is much darker even though the shadow camera's range is 0.1 to 500 units.
In the light shadow pass my code looks like this:
// viewMatrix is a uniform of the inverse world matrix of the camera
// vWorldPosition is the varying vec4 of the vertex position x world matrix
vec3 lightPos = (viewMatrix * vWorldPosition).xyz;
float depth = clamp(length(lightPos) / 40.0, 0.0, 1.0);

float moment1 = depth;
float moment2 = depth * depth;

// Adjusting moments (this is sort of a per-pixel bias) using partial derivatives;
// moment2 is depth^2 plus the derivative term.
float dx = dFdx(depth);
float dy = dFdy(depth);
moment2 += 0.25 * (dx * dx + dy * dy);

gl_FragColor = vec4(moment1, moment2, 0.0, 1.0);
Then in my shadow pass:
// lightViewMatrix is the light camera's inverse world matrix
// vertWorldPosition is the attribute position x world matrix
vec3 lightViewPos = (lightViewMatrix * vertWorldPosition).xyz;
float lightDepth2 = clamp(length(lightViewPos) / 40.0, 0.0, 1.0);

float illuminated = vsm(shadowMap[i], shadowCoord.xy, lightDepth2, shadowBias[i]);
shadowColor = shadowColor * illuminated;
Firstly, should I be doing anything differently with orthographic projection? (It's probably not this, but I don't know what else it might be, as it happens with both techniques above :( ) If not, what might I be able to do to get a more even spread of the shadow?
Many thanks

Color fragment based on angle to center of screen GLSL

As an exercise in learning fragment shaders / vector math I am trying to write a post processing shader that colors every point P on the screen based upon the angle (in radians) of the vector PC, between P and the Center of the screen C.
For simplicity's sake I will be doing this in grayscale, but a good illustration of the effect I am going for can be seen here, with the hue changing as the angle changes and the hue forming a cycle:
http://demosthenes.info/assets/images/hsl-color-wheel-trans.png
I've searched around, looking for information on finding the angles between vectors, and from those examples I've gotten to here:
#version 110
uniform sampler2D tex0; // color info

void main()
{
    vec2 ScreenCenter = vec2(0.5, 0.5);
    vec2 texCoord = gl_TexCoord[0].st;
    vec2 deltaTexCoord = (texCoord - ScreenCenter.xy);
    float angle = dot(deltaTexCoord, vec2(0, -1));
    // I've made attempts here to mess with acos as well as angle = pow(angle, somefloat) and
    // have not gotten the desired results
    gl_FragColor = vec4(angle, angle, angle, 1.0);
}
However this code produces linear gradients rather than the effect I want.
The easiest way is to use the built-in GLSL function atan() with two arguments:
float angle = atan(deltaTexCoord.y, deltaTexCoord.x);
This corresponds to the atan2 function that you're probably familiar with from C/C++. Compared to using acos(), the main advantage is that this gives you the full range of angles [-pi, pi], while the angles produced by acos() are only in the range [0, pi], and are therefore incorrect for the bottom half of the circle. With atan(y, x), there is also no need to normalize the input values.
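To turn that into the grayscale cycle from the question, one possible mapping (a sketch, not part of the original answer) is to remap the atan() result from [-pi, pi] to [0, 1] and use it as the gray value:

```glsl
vec2 deltaTexCoord = texCoord - vec2(0.5, 0.5);
float angle = atan(deltaTexCoord.y, deltaTexCoord.x);   // in [-pi, pi]
float t = angle / (2.0 * 3.14159265) + 0.5;             // remapped to [0, 1]
gl_FragColor = vec4(vec3(t), 1.0);
```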
You're almost there. The inner product (also called scalar or dot product) of two vectors is the cosine of the angle between the vectors times the product of their lengths. So to get back to the angle you have to map the dot product through the inverse of the cosine, and normalize the vectors first; (0, -1) is already unit length, so:
float angle = acos( dot(normalize(deltaTexCoord), vec2(0, -1)) );
Note that the angle is reported in radians, which go from 0 to 2*pi for a full circle (though acos() itself only returns values in [0, pi]).

How to implement a ground fog GLSL shader

I'm trying to implement a ground fog shader for my terrain rendering engine.
The technique is described in this article: http://www.iquilezles.org/www/articles/fog/fog.htm
The idea is to consider the ray going from the camera to the fragment and integrate the fog density function along this ray.
Here's my shader code:
#version 330 core

in vec2 UV;
in vec3 posw;

out vec3 color;

uniform sampler2D tex;
uniform vec3 ambientLightColor;
uniform vec3 camPos;

const vec3 FogBaseColor = vec3(1., 1., 1.);

void main()
{
    vec3 light = ambientLightColor;
    vec3 TexBaseColor = texture(tex, UV).rgb;

    //***************************FOG********************************************
    vec3 camFrag = posw - camPos;
    float distance = length(camFrag);
    float a = 0.02;
    float b = 0.01;
    float fogAmount = a * exp(-camPos.z*b) * ( 1.0-exp( -distance*camFrag.z*b ) ) / (b*camFrag.z);
    color = mix( light*TexBaseColor, light*FogBaseColor, fogAmount );
}
The first thing is that I don't understand how to choose a and b, and what their physical role is in the fog density function.
Then, the result is not what I expect…
I have ground fog, but the transition of fogAmount from 0 to 1 is always centered at the camera altitude. I've tried a lot of different a and b values, but when I don't have a transition at camera altitude, I either get fully fogged terrain or no fog at all.
I checked the data I use and everything's correct:
camPos.z is the altitude of my camera
camFrag.z is the vertical component of the vector going from the camera to the fragment
I can't understand what part of the equation causes this.
Any idea about this?
EDIT: Here's the effect I'm looking for:
image1
image2
This is a pretty standard application of atmospheric scattering.
It is usually discussed under the umbrella of volumetric lighting, which involves the transmittance of light through different media (e.g. smoke, air, water). In cutting-edge shader-based graphics this can be achieved in real time using ray-marching, or, if there is only one uniform participating medium (as is the case here - the fog only applies to air), simplified to an integration over some distance.
Ordinarily you would ray-march through the participating medium in order to determine the properties of light transfer, but this application is simplified to assume a medium with well-defined distribution characteristics, and that is where the coefficients you are confused about come from. The density of the fog varies exponentially with distance, and this is what b controls; likewise it also varies with altitude (not shown in the equation directly below).
[equation image - source: iquilezles.org]
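For reference (this derivation is mine, not the answer's): if you write the fog density as a function of altitude h, say $d(h) = d_0\, e^{-b h}$ with $d_0$ a hypothetical density scale, and integrate it along the view ray starting at rayOri over a distance t, you get

$$\int_0^{t} d_0\, e^{-b\,(\mathrm{rayOri}_y + \mathrm{rayDir}_y\, s)}\, ds \;=\; \frac{d_0}{b\,\mathrm{rayDir}_y}\; e^{-b\,\mathrm{rayOri}_y}\left(1 - e^{-b\,\mathrm{rayDir}_y\, t}\right),$$

which has exactly the shape of the fogAmount expression in the applyFog() code quoted below, with the leading constant folded into c.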
What this article introduces to the discussion, however, are poorly named coefficients a and b. These control in-scattering and extinction. The author repeatedly refers to the extinction coefficient as extintion, which really makes no sense to me - hopefully this is just because English was not the author's native language. Extinction can be thought of as how quickly light is absorbed, and it describes the opacity of a medium. If you want a more theoretical basis for all of this, have a look at the following paper.
With this in mind, take another look at the code from your article:
vec3 applyFog( in vec3 rgb,       // original color of the pixel
               in float distance, // camera to point distance
               in vec3 rayOri,    // camera position
               in vec3 rayDir )   // camera to point vector
{
    float fogAmount = c*exp(-rayOri.y*b)*(1.0-exp(-distance*rayDir.y*b))/rayDir.y;
    vec3 fogColor = vec3(0.5,0.6,0.7);
    return mix( rgb, fogColor, fogAmount );
}
You can see that the c in this code is actually the a from the original equation.
More importantly, there is an additional expression here:
[equation image: the additional altitude-dependent factor]
This additional expression controls the density with respect to altitude. Judging by your implementation of the shader, you have not correctly implemented the second expression. camFrag.z is very likely not altitude, but rather depth. Furthermore, I do not understand why you are multiplying it by the b coefficient.
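To make that concrete, here is a minimal sketch of the question's fog term rewritten in the article's form. This is my rewrite, not the answer's: it assumes, as the question states, that z is the up axis (so camPos.z is the camera altitude), and it uses the normalized view direction rather than camFrag.z directly. Whether this alone fixes the transition-at-camera-altitude issue depends on the rest of the setup.

```glsl
vec3  camFrag = posw - camPos;   // camera-to-fragment vector (world space)
float dist    = length(camFrag);
vec3  rayDir  = camFrag / dist;  // normalized view direction

float a = 0.02;                  // density scale (tune)
float b = 0.01;                  // altitude falloff (tune)

// Article-style fog: density a*exp(-b*altitude) integrated along the view ray.
// Note: rayDir.z near zero (horizontal rays) needs a guard in practice.
float fogAmount = (a / b) * exp(-camPos.z * b)
                * (1.0 - exp(-dist * rayDir.z * b)) / rayDir.z;
fogAmount = clamp(fogAmount, 0.0, 1.0);

color = mix(light * TexBaseColor, light * FogBaseColor, fogAmount);
```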
I found a method that gives the result I was looking for.
The method is described in this article by Eric Lengyel: http://www.terathon.com/lengyel/Lengyel-UnifiedFog.pdf
It explains how to create a fog layer with density and altitude parameters. You can fly through it, and it progressively blends all the geometry above the fog.