I would like to be able to blend three different textures in one fragment so that they interpolate equally.
I managed to get two textures (textureColor1, textureColor2) to blend across the fragment by using a third texture (textureColor3), a black-to-white gradient, as a mask. I would like to do something similar with three textures, but it would be great to be able to interpolate three textures without having to include another texture as a mask. Any help is greatly appreciated.
vec4 textureColor1 = texture2D(uSampler, vec2(vTextureCoord1.s, vTextureCoord1.t));
vec4 textureColor2 = texture2D(uSampler2, vec2(vTextureCoord2.s, vTextureCoord2.t));
vec4 textureColor3 = texture2D(uSampler3, vec2(vTextureCoord1.s, vTextureCoord1.t));
vec4 finalColor = mix(textureColor2, textureColor1, textureColor3.a);
If you want them all to blend equally, then you can simply do something like:
finalColor.x = (textureColor1.x + textureColor2.x + textureColor3.x)/3.0;
finalColor.y = (textureColor1.y + textureColor2.y + textureColor3.y)/3.0;
finalColor.z = (textureColor1.z + textureColor2.z + textureColor3.z)/3.0;
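Equivalently, since the arithmetic is componentwise, the same equal-weight average can be written once over all three channels:

// Same average as above, written as a single vector expression
finalColor.rgb = (textureColor1.rgb + textureColor2.rgb + textureColor3.rgb) / 3.0;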
You could also pass in texture weights as floats. For example, Texture1 might have a weight of 0.5, Texture2 a weight of 0.3 and Texture3 a weight of 0.2. As long as the weights add to 1.0, you can simply multiply them by the texture values. It's just like finding a weighted average.
To interpolate 3 textures using weights:
Assume you have a weight from 0 to 1 for each texture type,
that the weights are normalized (so they sum to 1),
and that you pass the weights into the shader as a vec3:
uniform vec3 weights; // could also be a varying; normalized so the components sum to 1.0

void main() {
    // texel0..texel2 are the colors sampled from the three textures
    vec3 resultColor = texel0.rgb * weights.x
                     + texel1.rgb * weights.y
                     + texel2.rgb * weights.z;
    ...
}
I got a simple plane mesh with a shader attached to it.
shader_type spatial;
uniform sampler2D texture_0: repeat_disable;
uniform sampler2D texture_1: repeat_disable;
uniform sampler2D texture_2: repeat_disable;
uniform sampler2D texture_3: repeat_disable;
void fragment(){
    ALBEDO = texture(texture_0, UV * 2.0).rgb;
}
In that code I just multiplied the UV by 2 to divide everything into 4 pieces.
Then I added the repeat_disable hint to the textures to prevent them from repeating when resized.
My problem now is that the textures are stretching at their borders to fill the empty space vertically and horizontally.
I need to assign the 4 textures to the plane mesh in a row; they should not overlap each other.
Can't really tell how to solve this one now.
If anyone knows something, I'd be pleased ;c
Ok, you need variables that you can use to discriminate which texture you will use. To be more specific, four variables (one per texture) which will be 1 where the texture goes, and 0 elsewhere.
We will get there. I'm taking you step by step, so this approach can be adapted to other situations and you have understanding of what is going on.
Let us start by… All white!
void fragment()
{
ALBEDO = vec3(1.0);
}
OK, not super useful. Let us split in two, horizontally. An easy way to do that is with the step function:
void fragment()
{
ALBEDO = vec3(step(0.5, UV.x));
}
That will be black on the left (low x) and white on the right (hi x).
By the way, if you are not sure about the orientation, output the UV:
void fragment()
{
ALBEDO = vec3(UV, 0.0);
}
Alright, if we wanted to flip a variable t, we can do 1.0 - t. So this is white on the left (low x) and black on the right (hi x):
void fragment()
{
ALBEDO = vec3(1.0 - step(0.5, UV.x));
}
By the way, flipping the parameters of step achieves the same result:
void fragment()
{
ALBEDO = vec3(step(UV.x, 0.5));
}
And if we wanted to do it vertically, we can work with y:
void fragment()
{
ALBEDO = vec3(step(UV.y, 0.5));
}
Now, to get a quadrant, we can intersect/and these. I mean, multiply them. For example:
void fragment()
{
ALBEDO = vec3(step(UV.y, 0.5) * step(UV.x, 0.5));
}
So, your quadrants look like this:
float q0 = step(UV.y, 0.5) * step(0.5, UV.x);
float q1 = step(UV.y, 0.5) * step(UV.x, 0.5);
float q2 = step(0.5, UV.y) * step(UV.x, 0.5);
float q3 = step(0.5, UV.y) * step(0.5, UV.x);
This might not be the order you want.
Now you can either let the textures repeat, or we need to compute the appropriate UVs. I'll start with the version that needs repeat on.
We can intersect the textures with the values we computed, so they only come out where we want them. I mean, we can use these values to mask the textures with and. I mean, we multiply. Where a variable is 0 (black) you will not get anything from the texture, and where it is 1 (white) you get the texture.
That is something like this:
vec3 t0 = q0 * texture(texture_0, UV * 2.0).rgb;
vec3 t1 = q1 * texture(texture_1, UV * 2.0).rgb;
vec3 t2 = q2 * texture(texture_2, UV * 2.0).rgb;
vec3 t3 = q3 * texture(texture_3, UV * 2.0).rgb;
And we add them:
ALBEDO = t0 + t1 + t2 + t3;
On the other hand, if the textures don't repeat, we need to adjust the UVs. Why? Well, because the valid range is from 0.0 to 1.0, but UV * 2.0 goes from 0.0 to 2.0...
You can output that to get an idea:
void fragment()
{
ALBEDO = vec3(UV * 2.0, 0.0);
}
I'll write that like this, if you don't mind:
void fragment()
{
ALBEDO = vec3(vec2(UV.x, UV.y) * 2.0, 0.0);
}
Which is the same. But since I'll be working on the axes separately, it helps me.
With the UV adjusted, it looks like this:
vec3 t0 = q0 * texture(texture_0, vec2(UV.x - 0.5, UV.y) * 2.0).rgb;
vec3 t1 = q1 * texture(texture_1, vec2(UV.x, UV.y) * 2.0).rgb;
vec3 t2 = q2 * texture(texture_2, vec2(UV.x, UV.y - 0.5) * 2.0).rgb;
vec3 t3 = q3 * texture(texture_3, vec2(UV.x - 0.5, UV.y - 0.5) * 2.0).rgb;
This might not be the order you want.
And again, add them:
ALBEDO = t0 + t1 + t2 + t3;
You can output the adjusted UVs there to have a better idea of what is going on.
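If it helps, here is everything above assembled into one shader (a sketch of the non-repeating version, using the uniforms from your shader; swap the quadrants if the order is not the one you want):

shader_type spatial;

uniform sampler2D texture_0: repeat_disable;
uniform sampler2D texture_1: repeat_disable;
uniform sampler2D texture_2: repeat_disable;
uniform sampler2D texture_3: repeat_disable;

void fragment()
{
    // One mask per quadrant: 1.0 inside it, 0.0 elsewhere
    float q0 = step(UV.y, 0.5) * step(0.5, UV.x);
    float q1 = step(UV.y, 0.5) * step(UV.x, 0.5);
    float q2 = step(0.5, UV.y) * step(UV.x, 0.5);
    float q3 = step(0.5, UV.y) * step(0.5, UV.x);

    // Remap each quadrant's UV range back to 0..1 and mask the texture
    vec3 t0 = q0 * texture(texture_0, vec2(UV.x - 0.5, UV.y) * 2.0).rgb;
    vec3 t1 = q1 * texture(texture_1, vec2(UV.x, UV.y) * 2.0).rgb;
    vec3 t2 = q2 * texture(texture_2, vec2(UV.x, UV.y - 0.5) * 2.0).rgb;
    vec3 t3 = q3 * texture(texture_3, vec2(UV.x - 0.5, UV.y - 0.5) * 2.0).rgb;

    // Since the masks don't overlap, the sum is a union
    ALBEDO = t0 + t1 + t2 + t3;
}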
Please notice that what we are doing is technically a weighted sum of the textures, except it is done in such a way that only one of them appears at any location (only one has a factor of 1 and the others have a factor of 0). The same approach can be used to make other patterns or texture blends by using other computations for the factors (and once you go beyond pure black and white, you can also apply easing functions). You might even pick the factors by reading yet another texture.
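For instance, here is a sketch of the easing idea (the 0.45 to 0.55 band is an arbitrary width I picked): a soft horizontal blend between two of the textures could look like this:

// smoothstep eases the factor from 0 to 1 across a band around UV.x = 0.5,
// so the two textures cross-fade instead of meeting at a hard edge
float f = smoothstep(0.45, 0.55, UV.x);
ALBEDO = mix(texture(texture_0, UV).rgb, texture(texture_1, UV).rgb, f);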
By the way, I showed you and/intersection (a * b) and not/complement (1.0 - t). For black and white masks, or/union is a + b - a * b. However, if you know there is no overlap, you can ignore the last term, so it is just addition. So when we add the textures it is a union; you can think of it in terms of Venn diagrams.
I'm trying to draw a simple sphere with normal mapping in the fragment shader with GL_POINTS. At present, I simply draw one point on the screen and apply a fragment shader to "spherify" it.
However, I'm having trouble colouring the sphere correctly (or at least I think I am). It seems that I'm calculating the z correctly but when I apply the 'normal' colours to gl_FragColor it just doesn't look quite right (or is this what one would expect from a normal map?). I'm assuming there is some inconsistency between gl_PointCoord and the fragment coord, but I can't quite figure it out.
Vertex shader
precision mediump float;
attribute vec3 position;
void main() {
gl_PointSize = 500.0;
gl_Position = vec4(position.xyz, 1.0);
}
fragment shader
precision mediump float;
void main() {
float x = gl_PointCoord.x * 2.0 - 1.0;
float y = gl_PointCoord.y * 2.0 - 1.0;
float z = sqrt(1.0 - (pow(x, 2.0) + pow(y, 2.0)));
vec3 position = vec3(x, y, z);
float mag = dot(position.xy, position.xy);
if(mag > 1.0) discard;
vec3 normal = normalize(position);
gl_FragColor = vec4(normal, 1.0);
}
Actual output:
Expected output:
The color channels are clamped to the range [0, 1]. (0, 0, 0) is black and (1, 1, 1) is completely white.
Since the normal vector is normalized, its components are in the range [-1, 1].
To get the expected result you have to map the normal vector from the range [-1, 1] to [0, 1]:
vec3 normal_col = normalize(position) * 0.5 + 0.5;
gl_FragColor = vec4(normal_col, 1.0);
If you use the absolute value instead, then a positive and a negative component of the same magnitude get the same color representation. The intensity of the color increases with the magnitude of the value:
vec3 normal_col = abs(normalize(position));
gl_FragColor = vec4(normal_col, 1.0);
First of all, the normal facing the camera, [0, 0, 1], should map to the RGB values [0.5, 0.5, 1.0]. You have to rescale things to move those negative values to be between 0 and 1.
Second, the normals of a sphere do not change linearly, but as a sine wave. So you need some trigonometry here. It makes sense to me to start with the straight-on normal and then rotate it by an angle, because that angle is what changes linearly.
As a result of playing around with this, I came up with the following:
http://glslsandbox.com/e#50268.3
which uses some rotation function from here: https://github.com/yuichiroharai/glsl-y-rotate
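A rough GLSL sketch of that rotation idea (my own illustration, not the code from those links; the axis order and the 90° angle at the rim are assumptions):

precision mediump float;

const float HALF_PI = 1.5707963;

void main() {
    vec2 p = gl_PointCoord * 2.0 - 1.0;  // map the point sprite to [-1, 1]
    if (dot(p, p) > 1.0) discard;        // keep only the disc

    // The angles change linearly with x and y; the normal follows a sine.
    float ax = p.x * HALF_PI;            // rotation around the y axis
    float ay = p.y * HALF_PI;            // rotation around the x axis
    vec3 normal = vec3(sin(ax) * cos(ay), sin(ay), cos(ax) * cos(ay));

    // Remap from [-1, 1] to [0, 1] for display, as in the other answer
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}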
I'm baking a normal map from a height map in the fragment shader. The height map looks great and smooth. However, when I generate the normal map I get a very weird result.
Here are two rendered images that show the problem: one with all lighting calculations, and a second one with the normal map image applied on top of the mesh.
The way I bake the normal map is by sampling neighbor pixels in the fragment shader.
The mesh is 32x32 and the normal map and height map are 64x64. Here's the fragment shader code that samples the neighbor pixels:
float NORMAL_OFF = (1.0 / 64.0);
vec3 off = vec3(-NORMAL_OFF, 0, NORMAL_OFF);
// s11 = Current
float s11 = texture2D(uSampler, texturePos).x;
// s01 = Left
float s01 = texture2D(uSampler, vec2(texturePos.xy + off.xy)).x;
// s21 = Right
float s21 = texture2D(uSampler, vec2(texturePos.xy + off.zy)).x;
// s10 = Below
float s10 = texture2D(uSampler, vec2(texturePos.xy + off.yx)).x;
// s12 = Above
float s12 = texture2D(uSampler, vec2(texturePos.xy + off.yz)).x;
vec3 va = normalize( vec3(off.z, 0.0, s21 - s11) );
vec3 vb = normalize( vec3(0.0, off.z, s12 - s11) );
vec3 normal = normalize( cross(va, vb) );
texturePos is calculated in the vertex shader as vertexPosition.x / 128 (128 because the distance between vertices is 4 pixels, so 32 * 4 = 128).
Why is my result so weird?
Your height map has too little depth resolution, resulting in those hard steps. Probably your height map is 8 bit, giving you a maximum of 256 height levels. If your height map has a planar resolution higher than 256, the depth resolution is insufficient to represent a smooth heightfield.
Solution: use a higher sampling depth for your height map. 16 bits are a popular choice.
Your shader and baking code are fine, though; they just don't get input data with enough resolution to work with.
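If the platform does not offer 16-bit single-channel textures (WebGL 1, for instance, which texture2D and uSampler suggest), one common workaround, sketched here as an assumption about your setup, is to pack the height into two 8-bit channels and reconstruct it when sampling:

// Hypothetical helper: height stored with the high byte in .r and the
// low byte in .g, giving roughly 65536 levels instead of 256.
float sampleHeight(sampler2D heightMap, vec2 uv) {
    vec2 hl = texture2D(heightMap, uv).rg;
    return hl.x + hl.y / 255.0;  // reconstructed height, roughly in [0, 1]
}

Each of the s01…s21 lookups above would then go through this helper instead of reading .x directly.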
I have the following fragment and vertex shader, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
// Tile is 1.0/16.0 part of texture, on x and y
float tileSp = 1.0 / 16.0;
vec4 color = texture2D(sampler, texcoord);
// Get tile x and y by red color stored
float texTX = mod(color.r, tileSp);
float texTY = color.r - texTX;
texTX /= tileSp;
// Testing tile
texTX = 1.0 - tileSp;
texTY = 1.0 - tileSp;
vec2 savedC = color.yz;
// This if else statement can be ignored. I use time to move the texture. Seams show without this as well.
if (color.r > 0.1) {
savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
} else {
savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
}
vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
vec4 res = texture2D(texturePack, texcoordC);
return res;
}
However, I have trouble with seams (about 1 pixel wide, it seems) showing up. If I leave out texcoordC *= 10.0, no seams are shown (or barely any); if I leave it in, they appear. I clamp the coordinates (I even tried lower than 1.0 and bigger than 0.0) to no avail. I strongly have the feeling it is a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit floats. I think the cause lies here; I added another shader describing this above.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question, which explains a little about the case.
EDIT: As you can see, the natural texture span is divided by 10 and the texture is repeated (10 times). The seams appear at the border of every repeated texture. I also added a screenshot; the seams are the very thin lines (~1 pixel). The picture is a cut-out from a screenshot, not scaled. The repeated texture is 16x16, with 256 subpixels total.
EDIT: This is a follow-up to this question, although all necessary info should be included here.
Last picture has no time added.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However I don't think that's the main issue you're seeing here.
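One common mitigation for that bleeding, sketched here under the question's assumptions (a 256x256 atlas of 16x16 tiles, and the texTX/texTY/savedC values from the code above), is to clamp the intra-tile coordinate half a texel away from the tile edges before the lookup:

// Keep the sample point at least half a texel inside the tile so that
// linear filtering never blends in texels from a neighboring tile.
const float atlasSize = 256.0;     // atlas resolution in pixels (assumed)
const float tileSp = 1.0 / 16.0;   // tile size in UV units, as in the question

float halfTexel = 0.5 / atlasSize;
vec2 tileUV = clamp(savedC * tileSp, vec2(halfTexel), vec2(tileSp - halfTexel));
vec4 res = texture2D(texturePack, vec2(texTX, texTY) + tileUV);

A border of duplicated edge pixels around each tile is the more robust variant, especially once mipmapping is involved.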
I'm trying to create a two-pass effect using an FBO in OpenGL.
In the first pass, I write the depth into a color buffer (image 1):
Using the following in its vertex shader:
gl_Position = projection * view * gl_Vertex;
vec4 position = gl_Position/gl_Position.w;
position = position / 2.0 + 0.5;
float temp_depth = position.z;
gl_FrontColor = vec4(temp_depth,temp_depth,temp_depth,1);
In the second pass I try to use the texture from the previous pass and color the scene (image 2):
Here is the code in vertex shader:
vec4 shadow_coord = projection * view * gl_Vertex;
shadow_coord = shadow_coord / shadow_coord.w;
shadow_coord = shadow_coord / 2.0 + 0.5;
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy);
The scene consists of a quad in front of a cone. In both cases the fragment shader is gl_FragColor = gl_Color; and the view and projection matrices are exactly the same, defined at the start. The problem is that there is a deviation in shadow_coord.xy.
As long as the view and projection values are exactly the same, shouldn't I get same result?
What can I do to fix it?
What resolution do you use for the texture you render into? And what kind of filtering? (It seems to be linear; it should be nearest.) Also try to offset the coordinate you read, like this:
// offset by half a texel; texture_resolution is the size of the render texture
vec2 offset = vec2(0.5 / texture_resolution);
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy + offset);
And as the other commenters mentioned, 8 bits are not enough to store depth values; consider using a depth texture or a floating-point format (like GL_R32F from ARB_texture_rg).
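If neither option is available, a classic fallback, sketched here on the assumption that you are stuck with an 8-bit RGBA color buffer, is to spread the depth value across all four channels:

// Pass 1: encode depth (in [0, 1]) into RGBA8, 8 more bits per channel.
vec4 packDepth(float depth) {
    vec4 p = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    return p - p.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
}

// Pass 2: decode after sampling light_depth_texture.
float unpackDepth(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}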