Godot shader: The borders of my texture are stretching to fit the plane mesh - glsl

I got a simple plane mesh with a shader attached to it.
shader_type spatial;
uniform sampler2D texture_0: repeat_disable;
uniform sampler2D texture_1: repeat_disable;
uniform sampler2D texture_2: repeat_disable;
uniform sampler2D texture_3: repeat_disable;
void fragment(){
ALBEDO = texture(texture_0, UV * 2.0).rgb;
}
In that code I just multiplied the UV by 2.0 to divide everything into 4 pieces.
Then I added the repeat_disable hint to the textures to keep them from repeating when the UV goes out of range.
My problem now is that the textures stretch at their borders to fill the empty space vertically and horizontally.
I need to assign the 4 textures to the plane mesh side by side; they should not overlap each other.
I can't really tell how to solve this one.
If anyone knows something, I'd be pleased ;c

OK, you need variables that you can use to decide which texture goes where. To be more specific, four variables (one per texture), each of which will be 1 where its texture goes and 0 everywhere else.
We will get there. I'm taking you step by step, so this approach can be adapted to other situations and you understand what is going on.
Let us start by… All white!
void fragment()
{
ALBEDO = vec3(1.0);
}
OK, not super useful. Let us split in two, horizontally. An easy way to do that is with the step function:
void fragment()
{
ALBEDO = vec3(step(0.5, UV.x));
}
That will be black on the left (low x) and white on the right (hi x).
By the way, if you are not sure about the orientation, output the UV:
void fragment()
{
ALBEDO = vec3(UV, 0.0);
}
Alright, if we want to flip a variable t, we can do 1.0 - t. So this is white on the left (low x) and black on the right (hi x):
void fragment()
{
ALBEDO = vec3(1.0 - step(0.5, UV.x));
}
By the way, flipping the parameters of step achieves the same result:
void fragment()
{
ALBEDO = vec3(step(UV.x, 0.5));
}
And if we wanted to do it vertically, we can work with y:
void fragment()
{
ALBEDO = vec3(step(UV.y, 0.5));
}
Now, to get a quadrant, we can intersect/and these. I mean, multiply them. For example:
void fragment()
{
ALBEDO = vec3(step(UV.y, 0.5) * step(UV.x, 0.5));
}
So, your quadrants look like this:
float q0 = step(UV.y, 0.5) * step(0.5, UV.x);
float q1 = step(UV.y, 0.5) * step(UV.x, 0.5);
float q2 = step(0.5, UV.y) * step(UV.x, 0.5);
float q3 = step(0.5, UV.y) * step(0.5, UV.x);
This might not be the order you want.
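If you are not sure which quadrant each mask covers, you can output them as colors, for example:
ALBEDO = vec3(q0, q1, q2);
That shows q0 as red, q1 as green, q2 as blue, and q3 stays black.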
Now you can either leave texture repeat on, or we need to compute the appropriate UVs. I'll start with the version that needs repeat on.
We can intersect the textures with the values we computed, so they only show up where we want them. I mean, we can use these values to mask the textures with an and. I mean, we multiply. Where a variable is 0 (black) you will not get anything from the texture, and where it is 1 (white) you get the texture.
That is something like this:
vec3 t0 = q0 * texture(texture_0, UV * 2.0).rgb;
vec3 t1 = q1 * texture(texture_1, UV * 2.0).rgb;
vec3 t2 = q2 * texture(texture_2, UV * 2.0).rgb;
vec3 t3 = q3 * texture(texture_3, UV * 2.0).rgb;
And we add them:
ALBEDO = t0 + t1 + t2 + t3;
On the other hand, if the textures don't repeat, we need to adjust the UVs. Why? Well, because the valid range is from 0.0 to 1.0, but UV * 2.0 goes from 0.0 to 2.0...
You can output that to get an idea:
void fragment()
{
ALBEDO = vec3(UV * 2.0, 0.0);
}
I'll write that like this, if you don't mind:
void fragment()
{
ALBEDO = vec3(vec2(UV.x, UV.y) * 2.0, 0.0);
}
Which is the same. But since I'll be working on the axes separately, it helps me.
With the UV adjusted, it looks like this:
vec3 t0 = q0 * texture(texture_0, vec2(UV.x - 0.5, UV.y) * 2.0).rgb;
vec3 t1 = q1 * texture(texture_1, vec2(UV.x, UV.y) * 2.0).rgb;
vec3 t2 = q2 * texture(texture_2, vec2(UV.x, UV.y - 0.5) * 2.0).rgb;
vec3 t3 = q3 * texture(texture_3, vec2(UV.x - 0.5, UV.y - 0.5) * 2.0).rgb;
This might not be the order you want.
And again, add them:
ALBEDO = t0 + t1 + t2 + t3;
You can output the adjusted UVs there to have a better idea of what is going on.
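Putting it all together, the whole fragment shader might look something like this (a sketch using the uniforms from your shader; swap the quadrant/texture pairing around as needed):
shader_type spatial;
uniform sampler2D texture_0: repeat_disable;
uniform sampler2D texture_1: repeat_disable;
uniform sampler2D texture_2: repeat_disable;
uniform sampler2D texture_3: repeat_disable;
void fragment()
{
    // one 0/1 mask per quadrant
    float q0 = step(UV.y, 0.5) * step(0.5, UV.x);
    float q1 = step(UV.y, 0.5) * step(UV.x, 0.5);
    float q2 = step(0.5, UV.y) * step(UV.x, 0.5);
    float q3 = step(0.5, UV.y) * step(0.5, UV.x);
    // per-quadrant UVs remapped back into the 0.0 to 1.0 range
    vec3 t0 = q0 * texture(texture_0, vec2(UV.x - 0.5, UV.y) * 2.0).rgb;
    vec3 t1 = q1 * texture(texture_1, vec2(UV.x, UV.y) * 2.0).rgb;
    vec3 t2 = q2 * texture(texture_2, vec2(UV.x, UV.y - 0.5) * 2.0).rgb;
    vec3 t3 = q3 * texture(texture_3, vec2(UV.x - 0.5, UV.y - 0.5) * 2.0).rgb;
    // only one mask is 1 at any point, so adding is safe
    ALBEDO = t0 + t1 + t2 + t3;
}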
Please notice that what we are doing is technically a weighted sum of the textures. Except it is done in such a way that only one of them appears at any location (only one has a factor of 1 and the others have a factor of 0). The same approach can be used to blend other patterns or textures by using other computations for the factors (and once you go beyond plain black and white, you can also apply easing functions). You might even pick the factors by reading yet another texture.
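For instance, here is a minimal sketch of that last idea, assuming an extra sampler2D uniform named blend_map (which is not part of your shader):
uniform sampler2D blend_map: repeat_disable;
void fragment()
{
    // blend factor read from a texture instead of computed with step
    float f = texture(blend_map, UV).r;
    vec3 a = texture(texture_0, UV).rgb;
    vec3 b = texture(texture_1, UV).rgb;
    ALBEDO = mix(a, b, f); // same as a * (1.0 - f) + b * f
}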
By the way, I showed you and/intersection (a * b) and not/complement (1.0 - t). For black and white masks, this is or/union: a + b - a * b. However, if you know there is no overlap, you can ignore the last term, so it is just addition. So when we add the textures, it is a union; you can think of it in terms of Venn diagrams.
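If it helps, for 0/1 masks those operations can be written as tiny helper functions (the names are just for illustration):
float mask_not(float a) { return 1.0 - a; }                // complement
float mask_and(float a, float b) { return a * b; }         // intersection
float mask_or(float a, float b) { return a + b - a * b; }  // union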

Related

Correct vertex normals on a heightmapped geodesic sphere

I have generated a geodesic sphere and am using Perlin noise to generate hills etc. I will be looking into using the tessellation shader to subdivide further. However, I'm using normal mapping, and to do this I am generating tangents and bitangents in the following code:
// Calculate the tangents
deltaPos1 = v1 - v0;
deltaPos2 = v2 - v0;
deltaUV1 = t1 - t0;
deltaUV2 = t2 - t0;
float r = 1.0f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);
tangent = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r;
bitangent = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x) * r;
Before I was using height mapping, the normals on a sphere were simple:
normal = normalize(point-origin);
But obviously this is very different once you involve a height map. I'm currently crossing the tangent and bitangent in the shader to figure out the normal, but this produces some weird results:
mat3 normalMat = transpose(inverse(mat3(transform)));
//vec3 T = normalize(vec3(transform*tangent));
vec3 T = normalize(vec3(normalMat * tangent.xyz));
vec3 B = normalize(vec3(normalMat * bitangent.xyz));
vec3 N = normalize(cross(T, B));
//old normal line here
//vec3 N = normalize(vec3(normalMat * vec4(normal, 0.0).xyz));
TBN = mat3(T, B, N);
outputVertex.TBN = TBN;
However this produces results looking like this:
What is it I'm doing wrong?
Thanks
Edit-
I have reverted to not doing any height mapping. This is simply the earth projected onto a geodesic sphere, with a specular and normal map. You can see I'm getting weird lighting across all of the triangles, especially where the angle of the light is steeper (so naturally the tile would be darker). I should note that I'm not indexing the triangles at all at the moment; I've read somewhere that my tangents and bitangents should be averages over all of the shared points, but I don't quite understand what this would achieve or how to do it. Is that something I need to be looking into?
I have also reverted to using the original normals, normalize(point - origin), for this example, so my TBN matrix calculations look like:
mat3 normalMat = transpose(inverse(mat3(transform)));
vec3 T = normalize(vec3(transform * tangent));
vec3 B = normalize(vec3(transform * bitangent));
vec3 N = normalize(vec3(normalMat * vec4(normal, 0.0).xyz));
TBN = mat3(T, B, N);
outputVertex.TBN = TBN;
The cube is just my "player"; I use it to help with checking the lighting and seeing where the camera is. Also note that removing the normal mapping completely and just using the input normals fixes the lighting.
Thanks guys.
The (second) problem was indeed fixed by indexing all my points and averaging the tangents and bitangents of the shared vertices. This also fixed the first problem, which was indirectly caused by the bad tangents and bitangents.

Drawing a sphere normal map in the fragment shader

I'm trying to draw a simple sphere with normal mapping in the fragment shader with GL_POINTS. At present, I simply draw one point on the screen and apply a fragment shader to "spherify" it.
However, I'm having trouble colouring the sphere correctly (or at least I think I am). It seems that I'm calculating the z correctly but when I apply the 'normal' colours to gl_FragColor it just doesn't look quite right (or is this what one would expect from a normal map?). I'm assuming there is some inconsistency between gl_PointCoord and the fragment coord, but I can't quite figure it out.
Vertex shader
precision mediump float;
attribute vec3 position;
void main() {
gl_PointSize = 500.0;
gl_Position = vec4(position.xyz, 1.0);
}
fragment shader
precision mediump float;
void main() {
float x = gl_PointCoord.x * 2.0 - 1.0;
float y = gl_PointCoord.y * 2.0 - 1.0;
float z = sqrt(1.0 - (pow(x, 2.0) + pow(y, 2.0)));
vec3 position = vec3(x, y, z);
float mag = dot(position.xy, position.xy);
if(mag > 1.0) discard;
vec3 normal = normalize(position);
gl_FragColor = vec4(normal, 1.0);
}
Actual output:
Expected output:
The color channels are clamped to the range [0, 1]. (0, 0, 0) is black and (1, 1, 1) is completely white.
Since the normal vector is normalized, its components are in the range [-1, 1].
To get the expected result you have to map the normal vector from the range [-1, 1] to [0, 1]:
vec3 normal_col = normalize(position) * 0.5 + 0.5;
gl_FragColor = vec4(normal_col, 1.0);
If you use the absolute value instead, then a positive and a negative value of the same magnitude get the same color representation, and the intensity of the color increases with the magnitude of the value:
vec3 normal_col = abs(normalize(position));
gl_FragColor = vec4(normal_col, 1.0);
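Applied to the fragment shader from the question, the fix might look like this (a sketch; the discard is also moved before the sqrt so its argument never goes negative):
precision mediump float;
void main() {
    float x = gl_PointCoord.x * 2.0 - 1.0;
    float y = gl_PointCoord.y * 2.0 - 1.0;
    float mag = x * x + y * y;
    if (mag > 1.0) discard;                       // outside the sphere's silhouette
    float z = sqrt(1.0 - mag);
    vec3 normal = normalize(vec3(x, y, z));
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0); // remap [-1, 1] to [0, 1]
}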
First of all, the normal facing the camera [0,0,-1] should be the RGB values [0.5,0.5,1.0]. You have to rescale things to move those negative values to be between 0 and 1.
Second, the normals of a sphere do not change linearly, but like a sine wave. So you need some trigonometry here. It makes sense to me to start with the perpendicular normal [0,0,-1] and then rotate that normal by an angle, because that angle is what changes linearly.
As a result of playing around with this, I came up with this:
http://glslsandbox.com/e#50268.3
which uses some rotation function from here: https://github.com/yuichiroharai/glsl-y-rotate

Mixing normal maps not working

I am working with data from a game written for DirectX and I'm using OpenGL.
I have managed to sort out 99% of everything regarding translations of the textures, and they look correct on the terrain.
Here's the problem I'm having.
I know mixing normals together directly doesn't work very well. I have read that there is a workable method but I was never able to get satisfactory results.
Here's the basics on what I'm doing.
There are 4 textures per map section. Three of them have matching normal maps. One of them is the color base and is never translated; it's mixed in after the translations. I keep track of this texture and send a value to the shader that is used as a mask to avoid translating the color texture or using its non-existent normal map. I have confirmed that the masking is working by rendering the different textures, normal maps and color map.
The textures are NOT in the same order. That is to say, though the textures might be the same, they are out of order. The first texture might be rocks and the second grass in one section, but its adjoining neighbor might have these two reversed. I thought about sorting them, but not all sections contain the same textures.
So, what I do is mask the textures and normal maps, do the math on the normal maps individually, and sum the results. I didn't think that the order of the maps would matter, but apparently I'm mistaken? Maybe the transformed textures need the normals translated? I am transforming the normal maps but not the normals themselves.
Here's my code for masking and calculating the NdotL:
MixLevel = texture2D(mixtexture, mix_coords.xy).rgba;
//Whichever one is the blend color can't be translated.
t1 = mix(texture2D(layer_1, color_uv), t1, mask_2.r);
t2 = mix(texture2D(layer_2, color_uv), t2, mask_2.g);
t3 = mix(texture2D(layer_3, color_uv), t3, mask_2.b);
t4 = mix(texture2D(layer_4, color_uv), t4, mask_2.a);
//Now we mix our textures
vec4 base;
base = t4 * MixLevel.a ;
base += t3 * MixLevel.b ;
base += t2 * MixLevel.g ;
base += t1 * MixLevel.r ;
//Get our normal maps.
n1.rgb = normalize(2.0 * n1.rgb - 1.0);
n2.rgb = normalize(2.0 * n2.rgb - 1.0);
n3.rgb = normalize(2.0 * n3.rgb - 1.0);
n4.rgb = normalize(2.0 * n4.rgb - 1.0);
//-------------------------------------------------------------
//There is no good way to add normals together without destroying them.
//We have to do the math on each one, THEN add the results together.
vec3 N = normalize(n);
vec3 L = normalize(lightDirection);
PN4 = normalize(TBN * n4.rgb)* mask_2.a;
PN3 = normalize(TBN * n3.rgb)* mask_2.b;
PN2 = normalize(TBN * n2.rgb)* mask_2.g;
PN1 = normalize(TBN * n1.rgb)* mask_2.r;
float NdotL = 0.0;
NdotL = NdotL + (max(dot(PN4, L), 0.0) * MixLevel.a );
NdotL = NdotL + (max(dot(PN3, L), 0.0) * MixLevel.b );
NdotL = NdotL + (max(dot(PN2, L), 0.0) * MixLevel.g );
NdotL = NdotL + (max(dot(PN1, L), 0.0) * MixLevel.r );
Update:
As requested I'm attaching some more code:
This is the section in the fragment shader that does the texture transform:
//Calculate texcoords for the mix texture.
float scale = 256.0/272.0;
vec2 mc;
mc = color_uv;
mc *= scale;
mc += .030303030;// = 8/264
mix_coords = mc.xy;
// layer 4 ---------------------------------------------
vec2 tv4;
tv4 = vec2(dot(-layer3U, Vertex), dot(layer3V, Vertex));
t4 = texture2D(layer_4, -tv4 + .5);
n4 = texture2D(n_layer_4, -tv4 + .5);
// layer 3 ---------------------------------------------
vec2 tv3;
tv3 = vec2(dot(-layer2U, Vertex), dot(layer2V, Vertex));
t3 = texture2D(layer_3, -tv3 + .5);
n3 = texture2D(n_layer_3, -tv3 + .5);
// layer 2 ---------------------------------------------
vec2 tv2;
tv2 = vec2(dot(-layer1U, Vertex), dot(layer1V, Vertex));
t2 = texture2D(layer_2, -tv2 + .5);
n2 = texture2D(n_layer_2, -tv2 + .5);
// layer 1 ---------------------------------------------
vec2 tv1;
tv1 = vec2(dot(-layer0U, Vertex), dot(layer0V, Vertex));
t1 = texture2D(layer_1, -tv1 + .5);
n1 = texture2D(n_layer_1, -tv1 + .5);
//------------------------------------------------------------------
Vertex is gl_Vertex in model space. layer0U and layer0V are vec4s and come from the game data.
This is how the mask_2 is created in the vertex shader.
// Create the mask. Used to cancel transform of color/paint texture;
mask_2 = vec4(1.0 ,1.0 ,1.0 ,1.0);
switch (main_texture){
case 1: mask_2.r = 0.0; break;
case 2: mask_2.g = 0.0; break;
case 3: mask_2.b = 0.0; break;
case 4: mask_2.a = 0.0; break;
}
Here is an image showing the problem with the lighting:
UPDATE: I think I know what's wrong... When the normal maps are transformed (actually it's the UVs), this leaves the normal pointing in the wrong direction.
I need help transforming the actual normal that's read at the transformed UV back to its original orientation. I can't remove the UV translation; that simply reads the wrong normal for that texture's pixel location. It's all wrong.
What I have to work with are the 2 vec4s that do the UV transform and the vertex. If somehow I could build a 3x3 matrix using these, I might be able to multiply the normal by it and get it facing the right direction? I'm not good at advanced math and need help. (For all I know that method won't even work.)
Here is an image with the specular turned up.. you can see how the normals orientation is causing lighting problems.

Parallax mapping - only works in one direction

I'm working on parallax mapping (from this tutorial: http://sunandblackcat.com/tipFullView.php?topicid=28) and I seem to only get good results when I move along one axis (e.g. left-to-right) while looking at a parallaxed quad. The image below illustrates this:
You can see it clearly at the left and right steep edges. If I'm moving to the right, the right steep edge should have less width than the left one (which looks correct in the left image) [camera is at the right side of the cube]. However, if I move along a different axis (instead of west to east I now move top to bottom), you can see that this time the steep edges are incorrect [camera is again on the right side of the cube].
I'm using the most simple form of parallax mapping and even that has the same problems. The fragment shader looks like this:
void main()
{
vec2 texCoords = fs_in.TexCoords;
vec3 viewDir = normalize(viewPos - fs_in.FragPos);
vec3 V = normalize(fs_in.TBN * viewDir);
vec3 L = normalize(fs_in.TBN * lightDir);
float height = texture(texture_height, texCoords).r;
float scale = 0.2;
vec2 texCoordsOffset = scale * V.xy * height;
texCoords += texCoordsOffset;
// calculate diffuse lighting
vec3 N = texture(texture_normal, texCoords).rgb * 2.0 - 1.0;
N = normalize(N); // normal already in tangent-space
vec3 ambient = vec3(0.2f);
float diff = clamp(dot(N, L), 0, 1);
vec3 diffuse = texture(texture_diffuse, texCoords).rgb * diff;
vec3 R = reflect(L, N);
float spec = pow(max(dot(R, V), 0.0), 32);
vec3 specular = vec3(spec);
fragColor = vec4(ambient + diffuse + specular, 1.0);
}
TBN matrix is created as follows in the vertex shader:
vs_out.TBN = transpose(mat3(normalize(tangent), normalize(bitangent), normalize(vs_out.Normal)));
I use the transpose of the TBN to transform all relevant vectors to tangent space. Without offsetting the TexCoords, the lighting looks solid with the normal-mapped texture, so my guess is that it's not the TBN matrix that's causing the issues. What could be causing it to only work in one direction?
edit
Interestingly, if I invert the y coordinate of the TexCoords input variable, parallax mapping seems to work. I have no idea why this works, though, and I need it to work without the inversion.
vec2 texCoords = vec2(fs_in.TexCoords.x, 1.0 - fs_in.TexCoords.y);

Texture repeating and clamping in shader

I have the following fragment and vertex shader, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
// Tile is 1.0/16.0 part of texture, on x and y
float tileSp = 1.0 / 16.0;
vec4 color = texture2D(sampler, texcoord);
// Get tile x and y by red color stored
float texTX = mod(color.r, tileSp);
float texTY = color.r - texTX;
texTX /= tileSp;
// Testing tile
texTX = 1.0 - tileSp;
texTY = 1.0 - tileSp;
vec2 savedC = color.yz;
// This if else statement can be ignored. I use time to move the texture. Seams show without this as well.
if (color.r > 0.1) {
savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
} else {
savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
}
vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
vec4 res = texture2D(texturePack, texcoordC);
return res;
}
However, I have some trouble with seams showing (of 1 pixel, it seems). If I leave out texcoordC *= 10.0 no seams are shown (or barely any); if I leave it in, they appear. I clamp the coordinates (I even tried lower than 1.0 and bigger than 0.0) to no avail. I strongly have the feeling it is a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit floats. I think the cause lies here; I added another shader describing this above.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question which explains a little about the case.
EDIT: As you can see, the natural texture span is divided by 10 and the texture is repeated (10 times). The seams appear at the border of every repeated texture. I also added a screenshot. The seams are the very thin lines (~1 pixel). The picture is a cut-out from a screenshot, not scaled. The repeated texture is 16x16, with 256 subpixels total.
EDIT: This is a followup question of: this question, although all necessary info should be included here.
Last picture has no time added.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However, I don't think that's the main issue you're seeing here.
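As a rough sketch of that half-texel idea for the atlas lookup (atlasUV and texelSize are names made up here; texelSize would be 1.0 divided by the atlas width in pixels, and tileSp is the per-tile span from your textureGet function):
// Keep the tile-local coordinate half a texel inside the tile on every side,
// so bilinear filtering never samples the neighbouring tile.
vec2 atlasUV(vec2 tileOrigin, vec2 inTileUV, float tileSp, float texelSize) {
    return tileOrigin + vec2(0.5 * texelSize) + inTileUV * (tileSp - texelSize);
}
The final lookup would then be something like texture2D(texturePack, atlasUV(vec2(texTX, texTY), savedC, tileSp, texelSize)).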