I'm baking a normal map from a height map in the fragment shader. The height map itself looks great and smooth. However, when I generate the normal map I get a very weird result.
Here are two rendered images that show the problem: one with all lighting calculations, and a second one with just the normal map image applied on top of the mesh.
The way I bake the normal map is by sampling neighbor pixels in the fragment shader.
The mesh is 32x32 and the normal map and height map are 64x64. Here's the fragment shader code that samples the neighbor pixels:
float NORMAL_OFF = (1.0 / 64.0);
vec3 off = vec3(-NORMAL_OFF, 0, NORMAL_OFF);
// s11 = Current
float s11 = texture2D(uSampler, texturePos).x;
// s01 = Left
float s01 = texture2D(uSampler, vec2(texturePos.xy + off.xy)).x;
// s21 = Right
float s21 = texture2D(uSampler, vec2(texturePos.xy + off.zy)).x;
// s10 = Below
float s10 = texture2D(uSampler, vec2(texturePos.xy + off.yx)).x;
// s12 = Above
float s12 = texture2D(uSampler, vec2(texturePos.xy + off.yz)).x;
vec3 va = normalize( vec3(off.z, 0.0, s21 - s11) );
vec3 vb = normalize( vec3(0.0, off.z, s12 - s11) );
vec3 normal = normalize( cross(va, vb) );
texturePos is calculated in the vertex shader as vertexPosition.x / 128 (128 because the distance between vertices is 4 pixels, so 32 * 4 = 128).
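For context, the vertex shader does roughly this (a simplified sketch; the attribute and matrix names are placeholders, not my exact code):
attribute vec3 vertexPosition;
uniform mat4 uProjection;
uniform mat4 uModelView;
varying vec2 texturePos;
void main() {
    // 32 vertices spaced 4 pixels apart span 128 pixels, so dividing by 128 maps into [0, 1]
    texturePos = vertexPosition.xy / 128.0;
    gl_Position = uProjection * uModelView * vec4(vertexPosition, 1.0);
}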
Why is my result so weird?
Your height map has too little depth resolution, resulting in those hard steps. Your height map is probably 8 bit, giving you at most 256 height levels. If the planar resolution of your height field is higher than 256, that depth resolution is insufficient to represent a smooth height field.
Solution: Use a higher sampling resolution for your height map. 16 bits are a popular choice.
Your shader and baking code are fine though, they just don't get input data with enough resolution to work with.
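If you are stuck with 8-bit-per-channel textures (for example on WebGL 1), a common workaround is to pack a 16-bit height into two 8-bit channels (high byte in R, low byte in G) and decode it in the shader. A minimal sketch; the packing scheme and helper name are assumptions, not part of your current code:
float sampleHeight16(sampler2D heightMap, vec2 uv) {
    // R holds the high byte, G the low byte, each normalized to [0, 1]
    vec2 hl = texture2D(heightMap, uv).rg;
    // (high * 256 + low) / 65535, i.e. the 16-bit height normalized to [0, 1]
    return dot(hl, vec2(256.0, 1.0)) / 257.0;
}
// Usage in the sampling code above, e.g. for the center sample:
// float s11 = sampleHeight16(uSampler, texturePos);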
I have a complex 3D scene where the values in my depth buffer range from a few centimeters for close shots up to several kilometers.
For various effects I am using a depth bias (an offset) to circumvent artifacts (SSAO, shadows). Even during depth peeling, comparing the depth of the current peel against the previous one can cause issues.
I have fixed those issues for close-up shots, but once the fragment is far enough away the bias becomes ineffective.
I am wondering how to handle the bias for such scenes. Something like a bias that depends on the world-space depth of the current pixel, or maybe completely disabling the effect beyond a given depth?
Are there good practices regarding these issues, and how can I address them?
It seems I found a way.
I found this link about shadow bias:
https://digitalrune.github.io/DigitalRune-Documentation/html/3f4d959e-9c98-4a97-8d85-7a73c26145d7.htm
Depth bias and normal offset values are specified in shadow map texels. For example, depth bias = 3 means that the pixel is moved the length of 3 shadow map texels closer to the light.
By keeping the bias proportional to the projected shadow map texels, the same settings work at all distances.
I use the difference in world space between the current point and a neighboring pixel with the same depth component. The bias becomes something close to "the average distance between 2 neighboring pixels". The further away the pixel is, the larger the bias will be (from a few millimeters close to the near plane to meters at the far plane).
So for each of my sampling points, I offset its position by a few pixels in its x direction (3 pixels give me good results on various scenes).
I compute the world-space difference between the current point and this new offset point.
I use this difference as a bias for all my depth testing.
Code:
float compute_depth_offset() {
mat4 inv_mvp = inverse(mvp);
vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
vec2 nextPixel = vec2(gl_FragCoord.xy + vec2(depth_transparency_bias, 0.0)) / dim;
vec4 currentNDC;
vec4 nextNDC;
currentNDC.xy = currentPixel * 2.0 - 1.0;
currentNDC.z = (2.0 * gl_FragCoord.z - depth_range.near - depth_range.far) / (depth_range.far - depth_range.near);
currentNDC.w = 1.0;
nextNDC.xy = nextPixel * 2.0 - 1.0;
nextNDC.z = currentNDC.z;
nextNDC.w = currentNDC.w;
vec4 world = (inv_mvp * currentNDC);
world.xyz = world.xyz / world.w;
vec4 nextWorld = (inv_mvp * nextNDC);
nextWorld.xyz = nextWorld.xyz / nextWorld.w;
return length(nextWorld.xyz - world.xyz);
}
More recently, I have used only the world-space derivative of the current pixel's position:
float compute_depth_offset(float zNear, float zFar)
{
mat4 mvp = projection * modelView;
mat4 inv_mvp = inverse(mvp);
vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
vec4 currentNDC;
currentNDC.xy = currentPixel * 2.0 - 1.0;
currentNDC.z = (2.0 * gl_FragCoord.z - 0.0 - 1.0) / (1.0 - 0.0);
currentNDC.w = 1.0;
vec4 world = (inv_mvp * currentNDC);
world.xyz = world.xyz / world.w;
vec3 depth = max(abs(dFdx(world.xyz)), abs(dFdy(world.xyz)));
return depth.x + depth.y + depth.z;
}
I'm trying to apply a PNG color table to an image, but I can't match a pixel from the PNG to the target image.
The color table is a PNG of 64^3 colors.
From what I understand, each pixel in the target image needs to look up a similar value in the color table. This seems to limit the colors to 262144 = 64 x 64 x 64. But I'm not sure that is the effect I'm getting; the result is either a completely black image, which means no value, or very strange looking colors.
This is my code
// The table is 64 x 64 x 64
float size = 64.0;
// This is the original image
// This function returns a pixel value inside a 3d space
// and the `rgb` method will return a vector with the rgb values
vec3 source_image = sample(src_i, samplerCoord(src_i)).rgb;
// Here I take the pixel value of the image for the red channel
// and multiply it by 64.0, then divide by 255.0 for the 8-bit image
float x = floor(floor(source_image.r * size)/255.0);
// The same thing for the green value on the y axis
float y = floor(floor(source_image.g * size)/255.0);
// Match a value from the image in the color table
vec3 color = sample(src_l, vec2(x, y)).rgb;
src_i.r = color.r;
src_i.g = color.g;
// The blue should be on the z axis, or the nth tile, so I think for this
// case it will be easier to convert the color table to one long row
src_i.b = floor(floor(source_image.b * size)/255.0);
// The image is black
Original image
Expected result
If I multiply by 255 instead (which seems right), then I get this result
float x = floor(source_image.r * 255.0);
float y = floor(source_image.g * 255.0);
I would really appreciate it if you could point out what is wrong with the math.
The lookup table is not 64*64*64; it is an 8*8 raster of tiles, each tile 64*64 texels. The color channels read by texture2D are in the range [0, 1], and the texture coordinates are in the range [0, 1], too.
vec2 tiles = vec2(8.0);
vec2 tileSize = vec2(64.0);
vec3 imageColor = texture(src_i, samplerCoord(src_i)).rgb;
The index of the tile is encoded in the blue color channel. There are 64 tiles; the first tile has index 0 and the last tile has index 63. This means that the blue color channel, in the range [0, 1], has to be mapped to the range [0, 63]:
float index = imageColor.b * (tiles.x * tiles.y - 1.0);
From this linear tile index, the 2-dimensional tile index in the range [0, 8) has to be calculated:
vec2 tileIndex;
tileIndex.y = floor(index / tiles.x);
tileIndex.x = floor(index - tileIndex.y * tiles.x);
The texture minifying function (GL_TEXTURE_MIN_FILTER) and texture magnification function (GL_TEXTURE_MAG_FILTER) should be set to GL_LINEAR. This ensures that the colors within each tile are linearly interpolated.
Each tile has 64x64 texels. The relative coordinate of the lower-left texel is (0.5/64.0, 0.5/64.0) and the relative coordinate of the upper-right texel is (63.5/64.0, 63.5/64.0).
The red and green color channels, in the range [0, 1], have to be mapped to the range [0.5/64.0, 63.5/64.0]:
vec2 tileUV = mix(0.5/tileSize, (tileSize-0.5)/tileSize, imageColor.rg);
Finally, the texture coordinate for the color lookup table in the range [0, 1] can be calculated:
vec2 tableUV = tileIndex / tiles + tileUV / tiles;
The final code which decodes the color in the fragment shader may look like this:
vec2 tiles = vec2(8.0, 8.0);
vec2 tileSize = vec2(64.0);
vec4 imageColor = texture(src_i, samplerCoord(src_i));
float index = imageColor.b * (tiles.x * tiles.y - 1.0);
vec2 tileIndex;
tileIndex.y = floor(index / tiles.x);
tileIndex.x = floor(index - tileIndex.y * tiles.x);
vec2 tileUV = mix(0.5/tileSize, (tileSize-0.5)/tileSize, imageColor.rg);
vec2 tableUV = tileIndex / tiles + tileUV / tiles;
vec3 lookUpColor = texture(src_l, tableUV).rgb;
This algorithm can be further improved by interpolating between 2 tiles of the table. Calculate the index of the tile below the blue channel value and the index of the tile above it:
float index = imageColor.b * (tiles.x * tiles.y - 1.0);
float index_min = min(62.0, floor(index));
float index_max = index_min + 1.0;
Finally interpolate between the colors from both tiles by using the mix function:
vec3 lookUpColor_1 = texture(src_l, tableUV_1).rgb;
vec3 lookUpColor_2 = texture(src_l, tableUV_2).rgb;
vec3 lookUpColor = mix(lookUpColor_1, lookUpColor_2, index-index_min);
Final code:
vec2 tiles = vec2(8.0, 8.0);
vec2 tileSize = vec2(64.0);
vec4 imageColor = texture(src_i, samplerCoord(src_i));
float index = imageColor.b * (tiles.x * tiles.y - 1.0);
float index_min = min(62.0, floor(index));
float index_max = index_min + 1.0;
vec2 tileIndex_min;
tileIndex_min.y = floor(index_min / tiles.x);
tileIndex_min.x = floor(index_min - tileIndex_min.y * tiles.x);
vec2 tileIndex_max;
tileIndex_max.y = floor(index_max / tiles.x);
tileIndex_max.x = floor(index_max - tileIndex_max.y * tiles.x);
vec2 tileUV = mix(0.5/tileSize, (tileSize-0.5)/tileSize, imageColor.rg);
vec2 tableUV_1 = tileIndex_min / tiles + tileUV / tiles;
vec2 tableUV_2 = tileIndex_max / tiles + tileUV / tiles;
vec3 lookUpColor_1 = texture(src_l, tableUV_1).rgb;
vec3 lookUpColor_2 = texture(src_l, tableUV_2).rgb;
vec3 lookUpColor = mix(lookUpColor_1, lookUpColor_2, index-index_min);
See the image below, which compares the original image (top left) with the image modified by the color lookup (bottom right):
The calculation for finding the corresponding position in the color table seems to be off. I think you first have to find the offset that lands you inside the correct "red-green plane" (determined by the blue channel of the input, taking into account the stride of 8 due to the 8x8 layout in the map), and then add this offset to the calculation of the x and y values.
However, I would recommend you check out the built-in CIColorCube filter first, because it does exactly what you want to achieve.
I have found this paper dealing with how to compute the perfect bias when dealing with shadow maps.
The idea is to:
get the texel used when sampling the shadow map
project the texel location back to eye space (ray tracing)
get the difference between your fragment.z and the intersection with the fragment's face.
This way you have calculated the error, which serves as the appropriate bias to avoid z-fighting.
Now I am trying to implement it, but I am running into some trouble:
I am using an ortho projection matrix, so I think I don't need to divide by w back and forth.
I am fine up to the point where I compute the ray intersection with the face.
A lot of faces fail the test, and my bias is way too large.
This is my fragment shader code:
float getBias(float depthFromTexture)
{
vec3 n = lightFragNormal.xyz;
//no need to divide by w, we got an ortho projection
//we are in NDC [-1,1] we go to [0,1]
//vec4 smTexCoord = 0.5 * shadowCoord + vec4(0.5, 0.5, 0.5, 0.0);
vec4 smTexCoord = shadowCoord;
//we are in [0,1] we go to texture_space [0,1]->[0,shadowMap.dimension]:[0,1024]
//get the nearest index in the shadow map, the texel corresponding to our fragment we use floor (125.6,237.9) -> (125,237)
vec2 delta = vec2(xPixelOffset, yPixelOffset);
vec2 textureDim = vec2(1.0 / xPixelOffset, 1.0 / yPixelOffset);
vec2 index = floor(smTexCoord.xy * textureDim);
//we get the center of the current texel, we add 0.5 to put us in the middle (125,237) -> (125.5,237.5)
//we go back to [0,1024] -> [0,1], (125.5,237.5) -> (0.12, 0.23)
vec2 nlsGridCenter = delta*(index + vec2(0.5f, 0.5f));
// go back to NDC [0,1] -> [-1,1]
vec2 lsGridCenter = 2.0 * nlsGridCenter - vec2(1.0);
//compute the light-space grid direction: multiply by the inverse projection matrix
vec4 lsGridCenter4 = inverse(lightProjectionMatrix) * vec4(lsGridCenter, -frustrumNear, 0);
vec3 lsGridLineDir = vec3(normalize(lsGridCenter4));
/** Plane ray intersection **/
// Locate the potential occluder for the shading fragment
//compute the distance t we need to continue in the gridDir direction, the point is "t" far
float ls_t_hit = dot(n, lightFragmentCoord.xyz) / dot(n, lsGridLineDir);
if(ls_t_hit <= 0.0){
return 0.0; // I get a lot of negative values; it shouldn't be the case
}
//compute the point p with the face
vec3 ls_hit_p = ls_t_hit * lsGridLineDir;
float intersectionDepth = (lightProjectionMatrix * vec4(ls_hit_p, 1.0)).z / 2.0 + 0.5;
float fragmentDepth = (lightProjectionMatrix * lightFragmentCoord).z / 2.0 + 0.5;
float result = abs(intersectionDepth - fragmentDepth);
return result;
}
I am struggling with this line:
vec4 lsGridCenter4 = inverse(lightProjectionMatrix) * vec4(lsGridCenter, -frustrumNear, 0);
I don't know if I am correct; maybe it should be:
vec4(lsGridCenter, -frustrumNear, 1);
and of course the plane intersection.
From Wikipedia, the ray/plane intersection is:
t = dot(p0 - l0, n) / dot(l, n)
where:
l = my normalized direction vector
p0 = a point belonging to the plane
l0 = the origin of the ray; I think it's the origin, so in eye space it should be (0,0,0), but I might be wrong here
n = normal of the plane, i.e. the normal of my fragment in eye space
in my code:
float ls_t_hit = dot(n, lightFragmentCoord.xyz) / dot(n, lsGridLineDir);
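For reference, this is how I read the Wikipedia formula as code, a small standalone sketch with my own names (it is not part of the shader above):
float rayPlaneT(vec3 rayOrigin, vec3 rayDir, vec3 planePoint, vec3 planeNormal) {
    float denom = dot(planeNormal, rayDir);
    if (abs(denom) < 1e-6) {
        return -1.0; // ray is (nearly) parallel to the plane, no usable hit
    }
    // t such that rayOrigin + t * rayDir lies on the plane
    return dot(planeNormal, planePoint - rayOrigin) / denom;
}
// With rayOrigin = vec3(0.0) this reduces to the line above:
// float ls_t_hit = dot(n, lightFragmentCoord.xyz) / dot(n, lsGridLineDir);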
I'm trying to move the texels of a square texture toward the center of the texture over time.
The following code does the job, except that I want the drawn pixels to vanish when they reach the center of the geometry (a plane); for now they just keep getting smaller as time increases and the texture looks contracted.
uniform sampler2D texture;
uniform float time;
varying vec2 vUv;
void main() {
float t = time;
vec2 center = vec2(0.5, 0.5);
vec2 newPosition = vec2(vUv.x + t * (center.x - vUv.x), vUv.y + t * (center.y - vUv.y));
gl_FragColor = texture2D( texture, vec2(newPosition.x, newPosition.y) );
}
Edit :
Think of this as a black hole in the texture center.
As I understand it, you want the texel to be at vUv when t=0 and at center after a while.
The result is a zoom-in on the center of the texture.
Actually your code does it from t = 0 to t = 1. When t = 1 the texel position is center.
You get the same behavior using the mix function.
vec2 newPosition = mix(vUv, center, t);
Also, when t = 1 all the texels are at center and the image is a single color (the color of the center of the texture).
Your problem is that t keeps growing. When t > 1 the texels continue on their path: after they all meet at the center, they stray from each other again. The effect is that the texture is reversed and you see a zoom-out.
Depending on how you want it to end, there are multiple solutions:
You want to go to the maximal zoom and keep that image: clamp t to the range [0, 1], e.g. t = clamp(t, 0.0, 1.0);.
You want to go to the maximal zoom and then have the image vanish: stop drawing it when t > 1 (or t >= 1 if you do not want the single-color frame).
You want an infinite zoom-in, i.e. texels getting closer and closer to the center but never reaching it.
For this third behavior, you can use a new t, say t2:
t2 = 1 - 1/(t*k+1); // Where k > 0
t2 = 1 - pow(k, -t); // Where k > 1
t2 = f(t);       // Where f(0) = 0, f is increasing and continuous, and its limit at +∞ is 1
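For example, a minimal sketch of that third option, assuming the same uniforms/varyings as in your code (k is an arbitrary speed constant):
uniform sampler2D texture;
uniform float time;
varying vec2 vUv;
void main() {
    const float k = 1.0;                      // zoom speed, tweak to taste
    vec2 center = vec2(0.5, 0.5);
    float t2 = 1.0 - 1.0 / (time * k + 1.0);  // 0 at time = 0, approaches 1 but never reaches it
    vec2 newPosition = mix(vUv, center, t2);  // texels drift toward the center forever
    gl_FragColor = texture2D(texture, newPosition);
}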
Recently, I faced the same issue. I found an alternative solution somewhere on the Internet, which is to use a tunnel shader instead of moving the texture's texels to the center. It has a behaviour similar to what you want.
float time = -u_time * 0.15;
// map the fragment to [-1, 1] around the screen center
vec2 p = 2.0 * gl_FragCoord.xy / u_res.xy - 1.0;
vec2 uv;
// polar coordinates: angle and distance from the center
float a = atan(p.y, p.x);
float r = sqrt(dot(p, p));
// the closer to the center, the further down the "tunnel" we sample
uv.x = time + .1 / r;
uv.y = a / 3.1416;
vec4 bg = texture2D(u_texture, uv);
I hope it will be helpful.
I would like to be able to blend three different textures in one fragment so that they interpolate equally.
I managed to get two textures (textureColor1, textureColor2) to blend across the fragment by using a third texture (textureColor3), which was a black-to-white gradient, as a mask. I would like to do something similar with three textures, but it would be great to be able to interpolate between three textures without having to include another texture as a mask. Any help is greatly appreciated.
vec4 textureColor1 = texture2D(uSampler, vec2(vTextureCoord1.s, vTextureCoord1.t));
vec4 textureColor2 = texture2D(uSampler2, vec2(vTextureCoord2.s, vTextureCoord2.t));
vec4 textureColor3 = texture2D(uSampler3, vec2(vTextureCoord1.s, vTextureCoord1.t));
vec4 finalColor = mix(textureColor2, textureColor1, textureColor3.a);
If you want them all to blend equally, then you can simply do something like:
finalColor.x = (textureColor1.x + textureColor2.x + textureColor3.x)/3.0;
finalColor.y = (textureColor1.y + textureColor2.y + textureColor3.y)/3.0;
finalColor.z = (textureColor1.z + textureColor2.z + textureColor3.z)/3.0;
You could also pass in texture weights as floats. For example, Texture1 might have a weight of 0.5, Texture2 a weight of 0.3 and Texture3 a weight of 0.2. As long as the weights add to 1.0, you can simply multiply them by the texture values. It's just like finding a weighted average.
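For example, with the texture colors sampled in your code and the example weights above (a sketch; any weights that sum to 1.0 work):
vec4 finalColor = 0.5 * textureColor1 + 0.3 * textureColor2 + 0.2 * textureColor3;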
Interpolate 3 textures using weights:
Assume you have a weight from 0 to 1 for each texture type,
the weights are normalized, so they sum to 1,
and you pass the weights into the shader as a vec3:
varying/uniform/... vec3 weights;
void main() {
resultColor.x = (texel0.x * weights.x + texel1.x * weights.y + texel2.x * weights.z);
resultColor.y = (texel0.y * weights.x + texel1.y * weights.y + texel2.y * weights.z);
resultColor.z = (texel0.z * weights.x + texel1.z * weights.y + texel2.z * weights.z);
...
}