I am trying to emulate some kind of specular reflection through anisotropy in my WebGL shader, something like what is described in this tutorial here.
I realized that, while it is easy to get convincing results with rounded shapes like the Utah teapot, it is noticeably harder to get nice-looking lighting effects by implementing anisotropic lighting for planar or box-like geometries.
After reading the nVIDIA paper Anisotropic Lighting using HLSL, I started playing with the following shader:
vec3 surfaceToLight(vec3 p) {
    return normalize(p - v_worldPos);
}

vec3 anisoVector() {
    return vec3(0.0, 0.0, 1.0);
}

void main() {
    vec3 texColor = vec3(0.0);
    vec3 N = normalize(v_normal); // interpolated directions need to be re-normalized
    vec3 L = surfaceToLight(lightPos);
    vec3 E = anisoVector();
    vec3 H = normalize(E + L);

    vec2 tCoords = vec2(0.0);
    tCoords.x = 2.0 * dot(H, N) - 1.0; // this value varies with the line of sight
    tCoords.y = 2.0 * dot(L, N) - 1.0; // each vertex has a fixed value

    vec4 tex = texture2D(s_texture, tCoords);
    //vec3 anisoColor = tex.rgb; // show texture color only
    //vec3 anisoColor = tex.aaa; // show anisotropic term only
    vec3 anisoColor = tex.rgb * tex.aaa;

    texColor += material.specular * light.color * anisoColor;
    gl_FragColor = vec4(texColor, u_opacity);
}
Here is what I actually get (the geometries have no texture coordinates and no creased normals):
I am aware that I can't simply use the method described in the above-mentioned paper for everything but, at least, it seems a good starting point for fast simulated anisotropic lighting.
Sadly, I am not able to fully understand the math used to create the texture, so I am asking whether there is any method to either
tweak the texture coordinates in this part of the fragment shader
tCoords.x = 2.0 * dot(H, N) - 1.0;
tCoords.y = 2.0 * dot(L, N) - 1.0;
tweak the shape of the alpha channel inside the texture (below, the RGBA layers and the result)
...to get nice-looking vertical specular reflections for planar geometries, like the cube on the right side.
Does anyone know anything about this?
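For reference, here is a rough sketch of a purely analytic anisotropic specular term (the Heidrich-Seidel / Banks model) that can produce directional streaks without a lookup texture. The function, the choice of T and the exponent are placeholders of mine, not something from the nVIDIA paper:

float anisoSpecular(vec3 N, vec3 L, vec3 V, float shininess) {
    // T is the anisotropy (grain) direction; here: world up projected onto the surface.
    // The highlight stretches perpendicular to T, so swap in a horizontal tangent
    // if the streaks come out rotated by 90 degrees.
    vec3 T = vec3(0.0, 1.0, 0.0) - N * N.y;
    if (length(T) < 1e-4) return 0.0; // degenerate on horizontal faces
    T = normalize(T);
    float LdotT = dot(L, T);
    float VdotT = dot(V, T);
    float sinL = sqrt(max(0.0, 1.0 - LdotT * LdotT)); // sin of angle between L and T
    float sinV = sqrt(max(0.0, 1.0 - VdotT * VdotT)); // sin of angle between V and T
    return pow(max(0.0, sinL * sinV - LdotT * VdotT), shininess);
}

It could replace the texture term with something like texColor += material.specular * light.color * anisoSpecular(N, L, E, 32.0); (reusing E as a stand-in for the view direction, as the shader above already does, with an arbitrary placeholder exponent). I mention it only as a point of comparison with the texture-based approach.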
By the way, for anyone interested, note that the texture must be mirrored:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.MIRRORED_REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.MIRRORED_REPEAT);
...and here is the texture as a PNG (the original one from nVIDIA is in TGA format).
I'm currently working on a tile game in LibGDX and I'm trying to get a "fog of war" effect by obscuring unexplored tiles. What I have so far is a dynamically generated black texture the size of the screen that covers only the unexplored tiles, leaving the rest of the background visible. This is an example of the fog texture rendered on top of a white background:
What I'm now trying to achieve is to dynamically fade the inner borders of this texture to make it look more like a fog that slowly thickens instead of just a bunch of black boxes put together on top of the background.
Googling the problem, I found out I could use shaders to do this, so I tried to learn some GLSL (I'm at the very beginning with shaders) and came up with this shader:
Vertex shader:
// attributes passed from OpenGL
attribute vec3 a_position;
attribute vec2 a_texCoord0;

// variables visible from Java
uniform mat4 u_projTrans;

// variables shared between fragment and vertex shader
varying vec2 v_texCoord0;

void main() {
    v_texCoord0 = a_texCoord0;
    gl_Position = u_projTrans * vec4(a_position, 1.0);
}
Fragment shader:
// variables shared between fragment and vertex shader
varying vec2 v_texCoord0;

// variables visible from Java
uniform sampler2D u_texture;
uniform vec2 u_textureSize;
uniform int u_length;

void main() {
    vec4 texColor = texture2D(u_texture, v_texCoord0);
    vec2 step = 1.0 / u_textureSize; // size of one screen pixel in texture coordinates
    if (texColor.a > 0.0) {
        int maxNearPixels = (u_length * 2 + 1) * (u_length * 2 + 1) - 1;
        for (int i = 0; i <= u_length; i++) {
            for (int j = 0; j <= u_length; j++) {
                if (i != 0 || j != 0) {
                    // each non-fogged neighbour (in all four quadrants) reduces this fragment's fog opacity a little
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2( step.x * float(i),  step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i),  step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2( step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
                }
            }
        }
    }
    gl_FragColor = texColor;
}
This is the result I got setting a length of 20:
So the shader I wrote kind of works, but it has terrible performance because it's O(n^2), where n is the length of the fade in pixels (which can be quite high, like 60 or even 80). It also has some problems: the edges are still a bit too sharp (I'd like a smoother transition), and some corners of the border are less faded than others (I'd like the fade to be uniform everywhere).
I'm a little bit lost at this point: is there anything I can do to make it better and faster? Like I said, I'm new to shaders, so: is this even the right way to use them?
As others mentioned in the comments, instead of blurring in screen space you should filter in tile space, potentially exploiting the GPU's bilinear filtering. Let's go through it with images.
First, define a texture such that each pixel corresponds to a single tile, black or white depending on the fog at that tile. Here is such a texture blown up:
After applying the screen-to-tile coordinate transformation and sampling that texture with GL_NEAREST filtering, we get a blocky result similar to what you have:
float t = texture2D(u_tiles, M*uv).r;
gl_FragColor = vec4(t,t,t,1.0);
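The exact form of M depends on the camera. Here is a minimal sketch of the screen-to-tile mapping hidden behind M*uv, assuming uv is the screen coordinate in [0,1]; the uniform names are mine, not anything prescribed by LibGDX:

uniform vec2 u_camPosTiles;   // bottom-left corner of the visible area, in tile units
uniform vec2 u_viewSizeTiles; // size of the visible area, in tile units
uniform vec2 u_mapSizeTiles;  // total map size in tiles (== size of the fog texture in texels)

vec2 tileUV(vec2 uv) {
    vec2 tile = u_camPosTiles + uv * u_viewSizeTiles; // continuous tile coordinate
    return tile / u_mapSizeTiles;                     // to [0,1]; texel centres line up automatically
}

With this, texture2D(u_tiles, tileUV(uv)) plays the role of texture2D(u_tiles, M*uv) in the snippets here.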
If instead of GL_NEAREST we switch to GL_LINEAR, we get a somewhat better result:
This still looks a little blocky. To improve on that we can apply a smoothstep:
float t = texture2D(u_tiles, M*uv).r;
t = smoothstep(0.0, 1.0, t);
gl_FragColor = vec4(t,t,t,1.0);
Or here is a version with a linear shade-mapping function:
float t = texture2D(u_tiles, M*uv).r;
t = clamp((t-0.5)*1.5 + 0.5, 0.0, 1.0);
gl_FragColor = vec4(t,t,t,1.0);
Note: these images were generated within a gamma-correct pipeline (i.e. sRGB framebuffer enabled). This is one of those few scenarios, however, where ignoring gamma may give better results, so you're welcome to experiment.
As you can tell from the title, I'm trying to create a mirror reflection while using deferred rendering and ambient occlusion. For ambient occlusion I'm specifically using the SSAO algorithm.
To create the mirror I use the basic idea of reflecting all the models to the other side of the mirror and then rendering only the parts visible through the mirror.
Using deferred rendering, I decided to do this during the creation of the gBuffer. In order to achieve correct lighting of the reflected objects, I made sure that the positions and normals of the reflected objects in the gBuffer are the same as those of their 'non-reflected' versions. That way, both the actual models and their images receive the same lighting.
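Something along these lines, as a simplified sketch of the reflection transform (n is the mirror plane's unit normal, p0 a point on the plane; the names are illustrative):

mat4 reflectionMatrix(vec3 n, vec3 p0) {
    float d = dot(n, p0);                           // plane: dot(n, x) == d
    mat3 R = mat3(1.0) - 2.0 * outerProduct(n, n);  // I - 2*n*n^T
    return mat4(vec4(R[0], 0.0),
                vec4(R[1], 0.0),
                vec4(R[2], 0.0),
                vec4(2.0 * d * n, 1.0));            // translation part: 2*d*n
}

(A reflection flips the triangle winding, so the face-culling direction is swapped for the mirrored pass.)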
My problem is now with the SSAO algorithm. It seems that the reflected objects are computed as being highly occluded, and this results in the black areas you can see in the mirror:
I've noticed that these black areas appear only in places that are not in my direct view. Things that I can see without the mirror have no unexpected black spots on them.
Note that the data in the gBuffer are all in view space, so there must be a connection there. Maybe the random samples used during SSAO, or their normals, are not calculated correctly.
So, this is the fragment shader for the ambient occlusion:
void main()
{
    vec3 fragPos = texture(gPosition, TexCoords).xyz;
    vec3 normal = texture(gNormal, TexCoords).rgb;
    vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;

    // Gram-Schmidt: build a TBN basis around the gBuffer normal
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);

    float occlusion = 0.0;
    int kernelSize = 64;
    for (int i = 0; i < kernelSize; ++i)
    {
        // get sample position ('sample' is a reserved word in newer GLSL)
        vec3 samplePos = TBN * samples[i];       // from tangent to view space
        samplePos = fragPos + samplePos * radius;

        vec4 offset = vec4(samplePos, 1.0);
        offset = projection * offset;            // from view to clip space
        offset.xyz /= offset.w;                  // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5;     // to [0,1] texture coordinates

        float sampleDepth = texture(gPosition, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= samplePos.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / float(kernelSize));

    //FragColor = vec4(1,1,1,1);
    occl = vec4(occlusion, occlusion, occlusion, 1.0);
}
Any ideas as to why these black areas appear or suggestions to correct them?
I could just ignore the ambient occlusion in the reflection, but I'm not happy with that.
Maybe, if the ambient occlusion shader used the positions and normals of the reflected objects, there would be no problem. But then I'd run into the trouble of storing more data in the buffer, so I gave up on that idea for now.
I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather well, except that I'm getting rather strange results for wider cone angles.
Right now, for testing purposes, all I am doing is shoot out one single cone along the fragment normal (perpendicular to the surface). I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core

out vec4 FragColor;

in vec3 pos_fs;
in vec3 nrm_fs;

uniform sampler3D tex3D;

vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);

void main()
{
    FragColor = vec4(0, 0, 0, 1);
    FragColor.rgb += indirectDiffuse();
}

vec3 indirectDiffuse() {
    // single cone in the direction of the normal
    vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
    return ret;
}

vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
    float max_dist = 1.0f;
    dir = normalize(dir);
    float current_dist = 0.01f;
    float apperture_angle = 0.01f; // angle in radians
    vec3 color = vec3(0.0f);
    float occlusion = 0.0f;
    float vox_size = 128.0f; // voxel map size

    while (current_dist < max_dist && occlusion < 1) {
        // get cone diameter (tan = opposite / adjacent)
        float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);

        // get the mipmap level which should be sampled according to the cone diameter
        float vlevel = log2(current_coneDiameter * vox_size);

        vec3 pos_worldspace = origin + dir * current_dist;
        vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; // [-1,1] coordinates to [0,1]

        vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); // sample the voxel
        vec3 color_read = voxel.rgb;
        float occlusion_read = voxel.a;

        // front-to-back accumulation
        color = occlusion * color + (1 - occlusion) * occlusion_read * color_read;
        occlusion = occlusion + (1 - occlusion) * occlusion_read;

        float dist_factor = 0.3f; // lower = better results but higher performance hit
        current_dist += current_coneDiameter * dist_factor;
    }
    return color;
}
The tex3D uniform is the voxel 3D texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get following results:
For very small cone angles (apperture_angle=0.01) I get roughly what you might expect: The voxelized scene is essentially 'reflected' perpendicularly on each surface:
Now if I increase the aperture angle to, for example, 30 degrees (apperture_angle = 0.52), I get this really strange 'wavy'-looking result:
I would have expected a result much more similar to the earlier one, just less specular. Instead I mostly get the outline of each object reflected in a specular manner, with some occasional pixels inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look good even once I add the direct light.
I have tried different values for max_dist, current_dist etc., as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does someone know what I'm doing wrong here, and how to get actual remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.
The mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, so any textureLod call that accessed a mipmap level above 1 returned a #000000 color.
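In shader terms the second fix is just a clamp on the LOD (a simplified sketch; the upper bound assumes the 128^3 map, i.e. log2(128) = 7 mip levels above the base):

float max_level = log2(vox_size); // highest mip level that exists for the voxel map
float vlevel = clamp(log2(current_coneDiameter * vox_size), 0.0, max_level);
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel);

For the first fix, the mip chain itself has to exist on the C++ side, e.g. via glGenerateMipmap(GL_TEXTURE_3D) after the voxelization pass, or by downsampling the levels manually as many cone-tracing implementations do.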
I've gotten shadows working properly for my Directional Lights, but I'm a little stumped when it comes to Point Lights. My idea is to use a cube map to render the depth from all six sides surrounding the light. So far, that's all working and good. I have verified this step by rendering each face of my cube to a 2D image, and it appears to be correct.
Now I'm trying to get the shadows to show up in the world. To do so, I am using GLSL's samplerCubeShadow data type. With it, I do:
vec3 lightToFrag = light.position - fragPos;
float lenLightToFrag = length(lightToFrag);
vec3 normLightToFrag = normalize(lightToFrag);
float shadow = texture(depthTexture, vec4(normLightToFrag, lenLightToFrag));
I've tried a number of configurations, and this always renders my scene in black. Any ideas? My fragPos is just the model matrix times the vertex position. Should I be applying the light's model-view matrix to it? Or, similarly, should I be applying the world's model-view matrix to the light? Any feedback is really appreciated!
Assuming you are storing depth values in the cubemap:
AFAIK a cubemap is an AABB in world space, so you need to do the calculations in world space. In your case light.position and fragPos must be in world space, or you should provide alternative variables/members if you use these names in view space somewhere else, e.g. for per-fragment light calculations.
You also need to convert lightToFrag to a depth value before passing it to texture().
This answer shows how to convert lightToFrag to depth value: Omnidirectional shadow mapping with depth cubemap
Here is my implementation (I removed #ifdef SHAD_CUBE because others use the same name):
uniform samplerCubeShadow uShadMap;
uniform vec2 uFarNear;

float depthValue(const in vec3 v) {
    vec3 absv = abs(v);
    float z = max(absv.x, max(absv.y, absv.z));
    return uFarNear.x + uFarNear.y / z;
}

float shadowCoef() {
    vec3 L;
    float d;
    L = vPosWS - light.position_ws;
    d = depthValue(L);
    return texture(uShadMap, vec4(L, d));
}
This may require a uniform model matrix if you only have the ModelViewProjection (MVP) matrix.
Here is how to calculate uFarNear on the client side:
float n, f, nfsub, nf[2];

n = sm->near;
f = sm->far;
nfsub = f - n;
nf[0] = (f + n) / nfsub * 0.5f + 0.5f; // uFarNear.x
nf[1] = -(f * n) / nfsub;              // uFarNear.y
glUniform2f(gkUniformLoc(prog, "uFarNear"), nf[0], nf[1]);
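For context: assuming a standard perspective projection for the shadow pass and the default [0,1] depth range, these two constants simply reproduce the window-space depth stored in the cubemap for a fragment at distance z along the major axis,

depth(z) = (f + n) / (2 * (f - n)) + 0.5 - (f * n) / ((f - n) * z)

which is exactly uFarNear.x + uFarNear.y / z as computed in depthValue() above.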
This is just an optimization; you don't have to use it, you can follow the link I mentioned before instead.
You may need a bias value; the related answer uses a bias but I'm not sure how to apply it to a cubemap correctly. I'm not sure whether d -/+ 0.0001 is the correct way or not.
If you want to store world-space distances in the cubemap instead, then this tutorial seems a good one: https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows
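For completeness, a rough sketch of that distance-based variant (the cubemap is then a plain samplerCube storing length(fragPos - lightPos) / farPlane, and the lookup becomes a manual comparison; uDepthCube, uLightPosWS and uFarPlane are placeholder names):

uniform samplerCube uDepthCube; // stores length(fragToLight) / uFarPlane
uniform vec3 uLightPosWS;
uniform float uFarPlane;

float shadowCoefDistance(vec3 fragPosWS) {
    vec3 L = fragPosWS - uLightPosWS;
    float closest = texture(uDepthCube, L).r * uFarPlane; // back to world units
    float current = length(L);
    float bias = 0.05;                                    // tweak per scene
    return current - bias > closest ? 0.0 : 1.0;          // 0 = shadowed, 1 = lit
}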
I have a fragment shader that does parallax mapping and lighting computations. I've managed to trace the problem to the sampling of textures and/or the lighting computation, so I'll only post code for that to keep it short:
// sample maps
vec3 albedo = texture2D(u_albedo_texture_1, p_texc).rgb;
vec3 ao = texture2D(u_ambientocclusion_texture, v_texcoord).rgb - 0.5; // ambient occlusion
vec3 normalT = texture2D(u_normalmap_texture, p_texc).rgb * 2.0 - 1.0; // tangent space normal
vec4 normalW = vec4(normalize(TBN * normalT).xyz, 1.0); // world space normal
vec3 color = vec3(0.0);
vec3 ambient = textureCube(u_cubemap_texture, -normalW.xyz).rgb; // get ambient lighting from cubemap
color += ambient * ao;
float d = length(light_vector);
float atten = 1.0 / dot(light_attenuation, vec3(1.0, d, d * d)); // compute attenuation of point light
// diffuse term
float ndotl = max(dot(normalW.xyz, light_vector), 0.0);
color += atten * ndotl * light_diffuse.rgb * albedo;
// specular term
float hdotn = max(0.0, dot(half_vector, normalW.xyz));
color += atten * ((shininess + 8.0) / 8.0 * 3.1416) * light_specular.rgb * p_ks * pow(hdotn, shininess);
gl_FragColor = vec4(color.rgb, 1.0);
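A side note, unrelated to the disappearing pixels: if the specular term is meant to use the energy-conserving Blinn-Phong normalization (shininess + 8) / (8 * pi), then the line above actually computes (shininess + 8) * pi / 8, and the intended version would look like this (assuming the normalized form was the goal):

// assuming the (n + 8) / (8 * pi) Blinn-Phong normalization was intended
float specNorm = (shininess + 8.0) / (8.0 * 3.1416);
color += atten * specNorm * light_specular.rgb * p_ks * pow(hdotn, shininess);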
I have this code working in other shaders, but here in the parallax mapping shader it just discards the pixels and I can't understand why. So instead of showing the object it simply discards the pixels and the object doesn't show up at all. No wrong or messy colors, nothing.
I managed to trace the problem to the albedo and ambient sampled colors. They're both sampled correctly. If I print out
gl_FragColor = vec4(ambient.rgb, 1.0);
or
gl_FragColor = vec4(albedo.rgb, 1.0);
they both get displayed correctly but if I try to do something with both colors, the pixels get discarded. None of the following work:
gl_FragColor = vec4(ambient + albedo, 1.0);
gl_FragColor = vec4(ambient * albedo, 1.0);
gl_FragColor = vec4(ambient.r, albedo.g, ambient.b, 1.0);
ambient is sampled from a cube map and albedo from a 2D texture. I have other samplers for ambient occlusion and normal mapping and those work and they also work in conjunction with ambient OR albedo.
So ambient + ao works and albedo + ao works.
So, this is my problem and I can't figure out what causes it. Like I said, I have the same code in another shader and that one works.
Worth mentioning: this runs on a PowerVR emulator for OpenGL ES 2.0, and the GPU is an NVIDIA GT220.
Also feel free to point out other mistakes in my code like inefficient stuff. And if necessary, I'll post the full fragment shader code. Sorry for the long post :P
SOLVED
Wow, I can't believe the cause was so stupid, but I completely missed it. Basically, I forgot to load the cubemap for that particular model, so it was never sent to the shader. Still, how the hell did it sample something in the first place? I mean, it was working separately, so what did the texture unit sample? A cubemap left in the cache from previous processing?
Still, how the hell did it sample something in the first place?
Usually when my shaders don't produce the desired result but look annoyingly 'normal', it is because the shader compilation discarded them (I made an error somewhere) and the driver decided to use the 'fixed pipeline' shader instead...