Mixing normal maps not working - OpenGL

I am working with data from a game written for DirectX and I'm using OpenGL.
I have managed to sort out 99% of everything regarding the translations of the textures, and they look correct on the terrain.
Here's the problem I'm having.
I know mixing normals together directly doesn't work very well. I have read that there is a workable method, but I was never able to get satisfactory results.
Here are the basics of what I'm doing.
There are 4 textures per map section. Three of them have matching normal maps. The fourth is the color base and is never translated; it's mixed in after the translations. I keep track of this texture and send a value to the shader that is used as a mask to avoid translating the color texture or using its non-existent normal map. I have confirmed that the masking is working by rendering the different textures, normal maps, and the color map separately.
The textures are NOT in the same order from section to section. That is to say, even when two sections use the same textures, they can be out of order: the first texture might be rocks and the second grass in one section, while its adjoining neighbor has those two reversed. I thought about sorting them, but not all sections contain the same textures.
So what I do is mask the textures and normal maps, do the math on each normal map individually, and sum the results. I didn't think the order of the maps would matter, but apparently I'm mistaken? Maybe the transformed textures need their normals transformed as well? I am transforming the normal maps' UVs but not the normals themselves.
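One approach that is often described as workable is to blend the tangent-space normals themselves before lighting, sometimes called partial-derivative blending: convert each normal to a slope (xy over z), blend the slopes with the layer weights, and rebuild a single normal. A minimal sketch, with w standing in for the blend weights (the MixLevel channels below); this is illustrative, not your exact uniforms:
// Partial-derivative blend of unpacked tangent-space normals.
// Assumes each normal's z is positive (maps point mostly "up").
vec3 blendNormals(vec3 n1, vec3 n2, vec3 n3, vec3 n4, vec4 w)
{
    vec2 slope = (n1.xy / n1.z) * w.r
               + (n2.xy / n2.z) * w.g
               + (n3.xy / n3.z) * w.b
               + (n4.xy / n4.z) * w.a;
    return normalize(vec3(slope, 1.0));
}
Lighting this single blended normal is not the same operation as summing per-layer lighting results, so it is worth comparing the two outputs.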
Here's my code for masking and calculating NdotL:
MixLevel = texture2D(mixtexture, mix_coords.xy).rgba;
//Whichever layer is the blend color can't be translated.
t1 = mix(texture2D(layer_1, color_uv), t1, mask_2.r);
t2 = mix(texture2D(layer_2, color_uv), t2, mask_2.g);
t3 = mix(texture2D(layer_3, color_uv), t3, mask_2.b);
t4 = mix(texture2D(layer_4, color_uv), t4, mask_2.a);
//Now we mix our textures
vec4 base;
base = t4 * MixLevel.a;
base += t3 * MixLevel.b;
base += t2 * MixLevel.g;
base += t1 * MixLevel.r;
//Get our normal maps and unpack them from [0,1] to [-1,1].
n1.rgb = normalize(2.0 * n1.rgb - 1.0);
n2.rgb = normalize(2.0 * n2.rgb - 1.0);
n3.rgb = normalize(2.0 * n3.rgb - 1.0);
n4.rgb = normalize(2.0 * n4.rgb - 1.0);
//-------------------------------------------------------------
//There is no good way to add normals together directly without destroying them.
//We have to do the math on each one, THEN add the results together.
vec3 N = normalize(n);
vec3 L = normalize(lightDirection);
PN4 = normalize(TBN * n4.rgb) * mask_2.a;
PN3 = normalize(TBN * n3.rgb) * mask_2.b;
PN2 = normalize(TBN * n2.rgb) * mask_2.g;
PN1 = normalize(TBN * n1.rgb) * mask_2.r;
float NdotL = 0.0;
NdotL += max(dot(PN4, L), 0.0) * MixLevel.a;
NdotL += max(dot(PN3, L), 0.0) * MixLevel.b;
NdotL += max(dot(PN2, L), 0.0) * MixLevel.g;
NdotL += max(dot(PN1, L), 0.0) * MixLevel.r;
Update:
As requested, I'm attaching some more code.
This is the section of the fragment shader that does the texture transform:
//calculate texcoords for the mix texture.
float scale = 256.0/272.0;
vec2 mc;
mc = color_uv;
mc *= scale;
mc += 0.030303030; // = 8/264
mix_coords = mc.xy;
// layer 4 ---------------------------------------------
vec2 tv4;
tv4 = vec2(dot(-layer3U, Vertex), dot(layer3V, Vertex));
t4 = texture2D(layer_4, -tv4 + .5);
n4 = texture2D(n_layer_4, -tv4 + .5);
// layer 3 ---------------------------------------------
vec2 tv3;
tv3 = vec2(dot(-layer2U, Vertex), dot(layer2V, Vertex));
t3 = texture2D(layer_3, -tv3 + .5);
n3 = texture2D(n_layer_3, -tv3 + .5);
// layer 2 ---------------------------------------------
vec2 tv2;
tv2 = vec2(dot(-layer1U, Vertex), dot(layer1V, Vertex));
t2 = texture2D(layer_2, -tv2 + .5);
n2 = texture2D(n_layer_2, -tv2 + .5);
// layer 1 ---------------------------------------------
vec2 tv1;
tv1 = vec2(dot(-layer0U, Vertex), dot(layer0V, Vertex));
t1 = texture2D(layer_1, -tv1 + .5);
n1 = texture2D(n_layer_1, -tv1 + .5);
//------------------------------------------------------------------
Vertex is gl_Vertex in model space. layer0U and layer0V are vec4s and come from the game data.
This is how mask_2 is created in the vertex shader:
// Create the mask. Used to cancel the transform of the color/paint texture.
mask_2 = vec4(1.0, 1.0, 1.0, 1.0);
switch (main_texture){
case 1: mask_2.r = 0.0; break;
case 2: mask_2.g = 0.0; break;
case 3: mask_2.b = 0.0; break;
case 4: mask_2.a = 0.0; break;
}
Here is an image showing the problem with the lighting:
UPDATE: I think I know what's wrong... When the normal maps are transformed (actually it's the UVs that are), the sampled normal is left pointing in the wrong direction.
I need help transforming the normal that's read at the transformed UV back to its original orientation. I can't remove the UV translation; that would simply read the wrong normal for that texture's pixel location. It's all wrong.
What I have to work with are the two vec4s that do the UV transform, and the vertex. If somehow I could build a 3x3 matrix using these, I might be able to multiply the normal by it and get it facing the right direction? I'm not good at advanced math and need help. (For all I know that method won't even work.)
Here is an image with the specular turned up; you can see how the normals' orientation is causing lighting problems.
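One direction that might be worth trying (a sketch, untested against your data): take the in-plane part of the two UV-transform rows as the layer's texture axes, normalize away the scale so only rotation remains, and rotate each sampled normal's xy components back through the transpose. The .xz swizzle assumes the terrain lies in the model-space XZ plane; adjust the swizzles and signs to your coordinate system.
// Hypothetical helper: undo the in-plane rotation a layer's UV
// transform applies, so its tangent-space normal lines up with the
// untransformed layers. U and V are the transform rows (layer0U etc.).
vec3 alignLayerNormal(vec3 n, vec4 U, vec4 V)
{
    vec2 du = normalize(U.xz);   // direction of increasing u (uv.x = dot(U, Vertex) + 0.5)
    vec2 dv = normalize(-V.xz);  // direction of increasing v (uv.y = -dot(V, Vertex) + 0.5)
    mat2 R = mat2(du, dv);       // columns are the layer's u/v axes
    n.xy = transpose(R) * n.xy;  // rotate the tangent-plane components back
    return normalize(n);
}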

Related

Godot shader: The borders of my texture are stretching to fit into a plane mesh

I got a simple plane mesh with a shader attached to it.
shader_type spatial;
uniform sampler2D texture_0: repeat_disable;
uniform sampler2D texture_1: repeat_disable;
uniform sampler2D texture_2: repeat_disable;
uniform sampler2D texture_3: repeat_disable;
void fragment(){
    ALBEDO = texture(texture_0, UV * 2.0).rgb;
}
In that code I just multiplied the UV by 2 to divide the plane into 4 pieces.
Then I added the repeat_disable hint to the textures to prevent them from repeating when resized.
My problem now is that the textures are stretching at their borders to fill the empty space vertically and horizontally.
I need to assign the 4 textures to the plane mesh in a row; they should not overlap each other.
Can't really tell how to solve this one now.
If anyone knows something, I'd be pleased ;c
Ok, you need variables that you can use to discriminate which texture you will use. To be more specific, four variables (one per texture), each of which will be 1 where its texture goes and 0 elsewhere.
We will get there. I'm taking you step by step, so this approach can be adapted to other situations and you understand what is going on.
Let us start by… All white!
void fragment()
{
ALBEDO = vec3(1.0);
}
OK, not super useful. Let us split in two, horizontally. An easy way to do that is with the step function:
void fragment()
{
ALBEDO = vec3(step(0.5, UV.x));
}
That will be black on the left (low x) and white on the right (high x).
By the way, if you are not sure about the orientation, output the UV:
void fragment()
{
ALBEDO = vec3(UV, 0.0);
}
Alright, if we wanted to flip a variable t, we can do 1.0 - t. So this is white on the left (low x) and black on the right (high x):
void fragment()
{
ALBEDO = vec3(1.0 - step(0.5, UV.x));
}
By the way, flipping the parameters of step achieves the same result:
void fragment()
{
ALBEDO = vec3(step(UV.x, 0.5));
}
And if we wanted to do it vertically, we can work with y:
void fragment()
{
ALBEDO = vec3(step(UV.y, 0.5));
}
Now, to get a quadrant, we can intersect/and these. I mean, multiply them. For example:
void fragment()
{
ALBEDO = vec3(step(UV.y, 0.5) * step(UV.x, 0.5));
}
So, your quadrants look like this:
float q0 = step(UV.y, 0.5) * step(0.5, UV.x);
float q1 = step(UV.y, 0.5) * step(UV.x, 0.5);
float q2 = step(0.5, UV.y) * step(UV.x, 0.5);
float q3 = step(0.5, UV.y) * step(0.5, UV.x);
This might not be the order you want.
Now you can either leave the texture repeat, or we need to compute the appropriate UV. I'll start with the version that needs repeat on.
We can intersect the textures with the values we computed, so they only come out where we want them. I mean, we can use these values to mask the textures with an and. I mean, we multiply. Where a variable is 0 (black) you will not get anything from the texture, and where it is 1 (white) you get the texture.
That is something like this:
vec3 t0 = q0 * texture(texture_0, UV * 2.0).rgb;
vec3 t1 = q1 * texture(texture_1, UV * 2.0).rgb;
vec3 t2 = q2 * texture(texture_2, UV * 2.0).rgb;
vec3 t3 = q3 * texture(texture_3, UV * 2.0).rgb;
And we add them:
ALBEDO = t0 + t1 + t2 + t3;
On the other hand, if the textures don't repeat, we need to adjust the UVs. Why? Well, because the valid range is from 0.0 to 1.0, but UV * 2.0 goes from 0.0 to 2.0...
You can output that to get an idea:
void fragment()
{
ALBEDO = vec3(UV * 2.0, 0.0);
}
I'll write that like this, if you don't mind:
void fragment()
{
ALBEDO = vec3(vec2(UV.x, UV.y) * 2.0, 0.0);
}
Which is the same. But since I'll be working on each axis separately, it helps me.
With the UV adjusted, it looks like this:
vec3 t0 = q0 * texture(texture_0, vec2(UV.x - 0.5, UV.y) * 2.0).rgb;
vec3 t1 = q1 * texture(texture_1, vec2(UV.x, UV.y) * 2.0).rgb;
vec3 t2 = q2 * texture(texture_2, vec2(UV.x, UV.y - 0.5) * 2.0).rgb;
vec3 t3 = q3 * texture(texture_3, vec2(UV.x - 0.5, UV.y - 0.5) * 2.0).rgb;
This might not be the order you want.
And again, add them:
ALBEDO = t0 + t1 + t2 + t3;
You can output the adjusted UVs there to have a better idea of what is going on.
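Putting it all together (with the quadrant order used above, which might not be the order you want), the whole shader looks like this:
shader_type spatial;
uniform sampler2D texture_0: repeat_disable;
uniform sampler2D texture_1: repeat_disable;
uniform sampler2D texture_2: repeat_disable;
uniform sampler2D texture_3: repeat_disable;
void fragment()
{
    // One mask per quadrant: 1.0 inside that quadrant, 0.0 elsewhere.
    float q0 = step(UV.y, 0.5) * step(0.5, UV.x);
    float q1 = step(UV.y, 0.5) * step(UV.x, 0.5);
    float q2 = step(0.5, UV.y) * step(UV.x, 0.5);
    float q3 = step(0.5, UV.y) * step(0.5, UV.x);
    // Remap each quadrant back to the 0..1 UV range before sampling.
    vec3 t0 = q0 * texture(texture_0, vec2(UV.x - 0.5, UV.y) * 2.0).rgb;
    vec3 t1 = q1 * texture(texture_1, vec2(UV.x, UV.y) * 2.0).rgb;
    vec3 t2 = q2 * texture(texture_2, vec2(UV.x, UV.y - 0.5) * 2.0).rgb;
    vec3 t3 = q3 * texture(texture_3, vec2(UV.x - 0.5, UV.y - 0.5) * 2.0).rgb;
    // The masks never overlap, so the sum is effectively a union.
    ALBEDO = t0 + t1 + t2 + t3;
}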
Please notice that what we are doing is technically a weighted sum of the textures, except it is done in such a way that only one of them appears at any location (only one has a factor of 1 and the others have a factor of 0). The same approach can be used to make other patterns or to blend textures, by using other computations for the factors (and once you go beyond pure black and white, you can also apply easing functions). You might even pick the factors by reading yet another texture.
By the way, I showed you and/intersection (a * b) and not/complement (1.0 - t). For black-and-white masks, or/union is a + b - a * b. However, if you know there is no overlap, you can drop the last term, so it is just addition. So when we add the textures, it is a union; you can think of it in terms of Venn diagrams.

How to get a smooth result with RSM (Reflective Shadow Mapping)?

I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result:
As you can see the result is not smooth.
Here, in a first pass, I render the position, normal, and flux from the light's position into 3 textures with a resolution of 512 * 512.
In a second pass, I compute the indirect illumination from the first-pass textures according to this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
for(int i = 0; i < 151; i++)
{
vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);
vec3 indirectLightPos = texture(rsmPosition, rsmProjCoords.xy).rgb;
vec3 indirectLightNorm = texture(rsmNormal, rsmProjCoords.xy).rgb;
vec3 indirectLightFlux = texture(rsmFlux, rsmProjCoords.xy).rgb;
vec3 r = worldPos - indirectLightPos;
float distP2 = dot( r, r );
vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);
indirectRSM += emission;
}
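For reference, this loop implements Eq. 1 of the paper: each pixel light p at position x_p with normal n_p and flux Φ_p contributes
E_p(x, n) = Φ_p · max(0, n_p · (x − x_p)) · max(0, n · (x_p − x)) / ‖x − x_p‖⁴
to the surface point x with normal n, and the rsmDiskSampling[i].x * rsmDiskSampling[i].x factor is the weight that compensates for the denser sampling near the disk's center.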
The problem is fixed.
The main problem was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
Other problems were the number of VPLs used and the distance between them.

Correct vertex normals on a heightmapped geodesic sphere

I have generated a geodesic sphere and am using Perlin noise to generate hills etc. I will be looking into using the tessellation shader to subdivide further. However, I'm using normal mapping, and to do this I am generating tangents and bitangents in the following code:
//Calculate the tangents and bitangents
deltaPos1 = v1 - v0;
deltaPos2 = v2 - v0;
deltaUV1 = t1 - t0;
deltaUV2 = t2 - t0;
float r = 1.0f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);
tangent = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r;
bitangent = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x) * r;
Before I was using height mapping, the normals on a sphere were simple:
normal = normalize(point - origin);
But obviously this is very different once you involve a height map. I'm currently crossing the tangent and bitangent in the shader to figure out the normal, but this produces some weird results:
mat3 normalMat = transpose(inverse(mat3(transform)));
//vec3 T = normalize(vec3(transform*tangent));
vec3 T = normalize(vec3(normalMat * tangent.xyz));
vec3 B = normalize(vec3(normalMat * bitangent.xyz));
vec3 N = normalize(cross(T, B));
//old normal line here
//vec3 N = normalize(vec3(normalMat * vec4(normal, 0.0).xyz));
TBN = mat3(T, B, N);
outputVertex.TBN = TBN;
However this produces results looking like this:
What is it I'm doing wrong?
Thanks
Edit-
I have reverted back to not doing any height mapping. This is simply the earth projected onto a geodesic sphere, with a specular and normal map. You can see I'm getting weird lighting across all of the triangles, especially where the angle of the light is steeper (so naturally the tile would be darker). I should note that I'm not indexing the triangles at all at the moment. I've read somewhere that my tangents and bitangents should be averages of those of all the triangles sharing each point, but I don't quite understand what this would achieve or how to do it. Is that something I need to be looking into?
I have also reverted to using the original normals, normalize(point - origin), for this example too, meaning my TBN matrix calculations look like:
mat3 normalMat = transpose(inverse(mat3(transform)));
vec3 T = normalize(vec3(transform * tangent));
vec3 B = normalize(vec3(transform * bitangent));
vec3 N = normalize(vec3(normalMat * vec4(normal, 0.0).xyz));
TBN = mat3(T, B, N);
outputVertex.TBN = TBN;
The cube is just my "player"; I use it to help with lighting etc. and to see where the camera is. Also note that removing the normal mapping completely and just using the input normals fixes the lighting.
Thanks guys.
The (second) problem was indeed fixed by indexing all my points and averaging the tangents and bitangents of the triangles sharing each vertex. This led to the fixing of the first problem, which was indirectly caused by the bad tangents and bitangents.
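For reference, once the averaged tangents are in place, a common way to build the TBN is to re-orthogonalize the tangent against the averaged normal (Gram-Schmidt) and take the bitangent from a cross product, rather than deriving N from cross(T, B). A sketch using the names from the code above; whether B needs flipping depends on your UV winding:
mat3 normalMat = transpose(inverse(mat3(transform)));
vec3 N = normalize(normalMat * normal);
vec3 T = normalize(normalMat * tangent.xyz);
// Gram-Schmidt: strip the component of T that lies along N so the
// basis stays orthonormal after averaging and interpolation.
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T); // use -cross(N, T) if the handedness comes out wrong
TBN = mat3(T, B, N);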

OpenGL 3.3 GLSL Fragment Shader Fog effect not working

I'm trying to add a fog effect to my scene in OpenGL 3.3. I tried following this tutorial. However, I can't seem to get the same effect on my screen. All that seems to happen is that my objects get darker, but there's no gray foggy mist on the screen. What could be the problem?
Here's my result.
When it should look like:
Here's my Fragment Shader with multiple light sources. It works fine without any fog. All GLSL variables are set and working correctly.
for (int i = 0; i < NUM_LIGHTS; i++)
{
float distance = length(lightVector[i]);
vec3 l;
// point light
attenuation = 1.0 / (gLight[i].attenuation.x + gLight[i].attenuation.y * distance + gLight[i].attenuation.z * distance * distance);
l = normalize( vec3(lightVector[i]) );
float cosTheta = clamp( dot( n, l ), 0,1 );
vec3 E = normalize(eyeVector);
vec3 R = reflect( -l, n );
float cosAlpha = clamp( dot( E, R ), 0,1 );
vec3 MaterialDiffuseColor = v_color * materialCoefficients.diffuse;
vec3 MaterialAmbientColor = v_color * materialCoefficients.ambient;
lighting += vec3(
MaterialAmbientColor
+ (
MaterialDiffuseColor * gLight[i].color * cosTheta * attenuation
)
+ (
materialCoefficients.specular * gLight[i].color * pow(cosAlpha, materialCoefficients.shininess)
)
);
}
float fDiffuseIntensity = max(0.0, dot(normalize(normal), -gLight[0].position.xyz));
color = vec4(lighting, 1.0f) * vec4(gLight[0].color*(materialCoefficients.ambient+fDiffuseIntensity), 1.0f);
float fFogCoord = abs(eyeVector.z/1.0f);
color = mix(color, fogParams.vFogColor, getFogFactor(fogParams, fFogCoord));
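For reference, getFogFactor comes from the tutorial; its linear mode is equivalent to something like the sketch below. The fStart/fEnd field names are my assumption about the struct's layout; vFogColor is from the code above.
struct FogParameters
{
    vec4 vFogColor; // fog color, mixed in above
    float fStart;   // distance where fog begins (assumed field)
    float fEnd;     // distance where fog is fully opaque (assumed field)
};
float getFogFactor(FogParameters params, float fogCoord)
{
    // 0.0 before fStart, rising linearly to 1.0 at fEnd.
    return clamp((fogCoord - params.fStart) / (params.fEnd - params.fStart), 0.0, 1.0);
}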
Two things.
First, you should verify your fogParams.vFogColor value is getting set correctly. The simplest way to do this is to short-circuit the shader: set color to fogParams.vFogColor and return immediately. If the scene is black, then you know your fog color isn't being sent to the shader correctly.
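In shader terms, something like this at the top of the fragment shader's main() (names taken from your code):
// Debug short-circuit: if the scene renders black instead of the
// fog color, the uniform is not reaching the shader.
color = fogParams.vFogColor;
return;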
Second, you need to eliminate your skybox. You can simply set glClearColor() to the fog color and not use a skybox at all, since everywhere the skybox would be visible you should be seeing fog instead, right? More advanced usage could modify the skybox shader to blend from fog to the skybox texture depending on the view vector's angle off the horizontal, so that looking up the sky is (somewhat) visible, while looking horizontally shows only fog, with a smooth transition between the two.
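That blend might look roughly like this in a skybox fragment shader (viewDir, skybox, and uFogColor are illustrative names, not from your code):
// Fade from fog at the horizon to the skybox texture at the zenith.
vec3 dir = normalize(viewDir);         // world-space view direction
float up = clamp(dir.y, 0.0, 1.0);     // 0 at horizontal, 1 straight up
vec3 sky = texture(skybox, dir).rgb;   // cube-map lookup
vec3 result = mix(uFogColor.rgb, sky, smoothstep(0.0, 0.4, up));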

Texture repeating and clamping in shader

I have the following fragment and vertex shaders, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
// Tile is 1.0/16.0 part of texture, on x and y
float tileSp = 1.0 / 16.0;
vec4 color = texture2D(sampler, texcoord);
// Get tile x and y by red color stored
float texTX = mod(color.r, tileSp);
float texTY = color.r - texTX;
texTX /= tileSp;
// Testing tile
texTX = 1.0 - tileSp;
texTY = 1.0 - tileSp;
vec2 savedC = color.yz;
// This if else statement can be ignored. I use time to move the texture. Seams show without this as well.
if (color.r > 0.1) {
savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
} else {
savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
}
vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
vec4 res = texture2D(texturePack, texcoordC);
return res;
}
I have some trouble with seams showing (of about 1 pixel, it seems), however. If I leave out texcoordC *= 10.0 no seams are shown (or barely any); if I leave it in, they appear. I clamp the coordinates (I even tried lower than 1.0 and bigger than 0.0) to no avail. I strongly suspect it is a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit floats. I think the cause lies here; I added another shader above describing this.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question which explains a little about the case.
EDIT: As you can see, the natural texture span is divided by 10 and the texture is repeated (10 times). The seams appear at the border of every repeated texture. I also added a screenshot; the seams are the very thin lines (~1 pixel). The picture is a cutout from a screenshot, not scaled. The repeated texture is 16x16, with 256 subpixels total.
EDIT: This is a follow-up to this question, although all necessary info should be included here.
Last picture has no time added.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However I don't think that's the main issue you're seeing here.