I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased multi-level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how this functionality can be implemented, please let me know.
Vertex Shader
#version 130
out vec2 texCoords;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130
uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;
in vec2 texCoords;
void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);
    if (drawGrid) {
        float aa = length(fwidth(uv));
        vec2 halfChunkSize = chunkSize / 2.0;
        vec2 halfTileSize = tileSize / 2.0;
        vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
        vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
        color = mix(
            color,
            tileBorderColor,
            smoothstep(aa, .0, min(b.x, b.y))
        );
        color = mix(
            color,
            chunkBorderColor,
            smoothstep(aa, .0, min(a.x, a.y))
        );
    }
    gl_FragColor.rgb = color;
    gl_FragColor.a = 1.0;
}
You need to split the multiplication in your vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec4 vertex_coordinate;
...
{
    // this will store the coordinates of the vertex
    // before it is projected (i.e. its "world" coordinates)
    vertex_coordinate = gl_ModelViewMatrix * gl_Vertex;
    // get your projected vertex position as before
    gl_Position = gl_ProjectionMatrix * vertex_coordinate;
    ...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec4 vertex_coordinate;
// has to be updated every time your camera changes its position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
    ...
    // calculate the distance from the fragment in world coordinates to the camera
    float fade_factor = length(camera_world_position - vertex_coordinate.xy);
    // make it 1 near the camera and 0 if it is more than 100 units away
    fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
    // update your final color with this factor
    gl_FragColor.rgb = color * fade_factor;
    ...
}
The second way to do it is to use the projected coordinate's w component, but I personally prefer to calculate the distance in world units. I did not test this code, so it might have some trivial syntax errors, but if you understand the idea you can apply it in any other way.
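Putting this together with the grid code from the question, a sketch of the distance fade applied to the chunk grid could look like the following. This is untested; the 100-unit fade distance is an arbitrary value to tune, and vertex_coordinate is assumed to be the pre-projection position passed from the vertex shader as a vec4:

```glsl
#version 130

uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
// update from C++ whenever the sf::View moves
uniform vec2 camera_world_position = vec2(0.0, 0.0);

in vec2 texCoords;
in vec4 vertex_coordinate; // written by the modified vertex shader

void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);

    float aa = length(fwidth(uv));
    vec2 halfChunkSize = chunkSize / 2.0;
    vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);

    // 1 near the camera, 0 beyond 100 world units
    float dist = length(camera_world_position - vertex_coordinate.xy);
    float fade = clamp(1.0 - dist / 100.0, 0.0, 1.0);

    // scale the grid line opacity by the fade factor instead of
    // darkening the whole fragment
    color = mix(color, chunkBorderColor,
                fade * smoothstep(aa, 0.0, min(a.x, a.y)));

    gl_FragColor = vec4(color, 1.0);
}
```

The tile grid would get its own fade factor with thresholds chosen so that it only starts fading in once the chunk grid is fully visible.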
I am working on a C++ program which displays a terrain mesh using GLSL shaders. I want it to be able to use different materials based on the elevation.
I am trying to accomplish this by having a uniform array of materials in the fragment shader and then using the y coordinate of the world-space position of the current fragment to determine which material from the array to use.
Here are the relevant parts of my fragment shader:
#version 430
struct Material
{
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    int shininess;
    sampler2D diffuseTex;
    bool hasDiffuseTex;
    float maxY; //the upper bound of this material's layer in relation to the height of the mesh (in the range 0-1)
};

in vec2 TexCoords;
in vec3 WorldPos;

const int MAX_MATERIALS = 14;

uniform Material materials[MAX_MATERIALS];
uniform int materialCount; //the actual number of materials in the array
uniform float minY; //the minimum world-space y-coordinate in the mesh
uniform float maxY; //the maximum world-space y-coordinate in the mesh

out vec4 fragColor;

void main()
{
    //calculate the y-position of this fragment in relation to the height of the mesh (in the range 0-1)
    float y = (WorldPos.y - minY) / (maxY - minY);
    //calculate the index into the materials array
    int index = 0;
    for (int i = 0; i < materialCount; ++i)
    {
        index += int(y > materials[i].maxY);
    }
    //calculate the ambient color
    vec3 ambient = ...
    //calculate the diffuse color
    vec3 diffuse = ...
    //sample from the texture
    vec3 texColor = vec3(texture(materials[index].diffuseTex, TexCoords.xy));
    //only multiply diffuse color with texture color if the material has a texture
    diffuse += int(materials[index].hasDiffuseTex) * ((texColor * diffuse) - diffuse);
    //calculate the specular color
    vec3 specular = ...
    fragColor = vec4(ambient + diffuse + specular, 1.0f);
}
It works fine if textures are not used:
But if one of the materials has a texture associated with it, it shows some black artifacts near the borders of the material layer which has the texture:
When I add this line after the diffuse calculation part:
if (index == 0 && int(materials[index].hasDiffuseTex) == 1 && texColor == vec3(0, 0, 0)) diffuse = vec3(1, 0, 0);
it draws the artifacts in red:
which tells me that the index is correct (0) but nothing is sampled from the texture.
Furthermore if I hardcode the index into the shader like this:
vec3 texColor = vec3(texture(materials[0].diffuseTex, TexCoords.xy));
it renders correctly. So I am guessing it has something to do with the indexing but the index appears to be correct and the texture is there so why doesn't it sample color?
I have also found out that if I switch the order of the materials and move their borders around in my program's GUI in a certain fashion, it starts to render correctly from that point on, which I don't understand at all. At first I suspected that I was sending wrong uniform values to the shader initially and that it somehow received the correct ones after I made the changes in the GUI. However, I have since checked all the uniform values I send from the C++ side, and they are correct from the start, and I don't see any other possible cause on the C++ side. So I now think the problem is probably in the shader.
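This is only a guess, not something stated in the original post, but one well-known pitfall fits these symptoms: in GLSL, indexing an array of sampler2D with an index that is not dynamically uniform yields undefined results, and index here varies per fragment exactly near the layer borders where the artifacts appear (a hardcoded constant index is dynamically uniform, which would explain why that case renders correctly). A commonly suggested restructuring is to upload the per-material textures as layers of a single array texture, whose layer coordinate is allowed to vary per fragment. A minimal sketch, with diffuseTextures being a hypothetical GL_TEXTURE_2D_ARRAY:

```glsl
#version 430

// hypothetical: one layer per material, uploaded as a GL_TEXTURE_2D_ARRAY
uniform sampler2DArray diffuseTextures;

in vec2 TexCoords;

vec3 sampleDiffuse(int index)
{
    // unlike indexing an array of sampler2D, the layer coordinate of an
    // array texture may vary per fragment without undefined behavior
    return texture(diffuseTextures, vec3(TexCoords, float(index))).rgb;
}
```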
I'm trying to make an outline shader for 2D sprites. Basically, it takes a sprite and checks for a color; if the fragment has that color, it is considered an outline. It then checks the texels around it, and if none of them are transparent, its alpha is set to 0.
Basically what I need to do is make the shader ignore the borders of the texture, since if the fragment is on a border it will always be considered to be an outline.
I send the custom_FragCoord variable, containing the absolute uv coordinates, from the vertex shader to the fragment shader, and then I say, for instance, "if custom_FragCoord.x > 1., do the outline check", so that everything drawn in the first column is considered an outline.
The problem is when the sprite has a border with nothing drawn on it: then the shader doesn't seem to start drawing at the border of the sprite. For instance, if a sprite has nothing on its left border, it will start drawing at custom_FragCoord.x = 1., not 0., so it will not automatically be considered an outline. Instead the adjacent texels are checked, and the check of the left texel won't find a transparent texel, because it tried to read the left texel from outside the texture's boundary.
If someone could please shed some light on what could be done that would be an immense help.
Here's the code if the link doesn't work:
//////////////////////// Vertex shader ////////////////////////
attribute vec3 in_Position; // (x,y,z)
//attribute vec3 in_Normal; // (x,y,z) unused in this shader.
attribute vec4 in_Colour; // (r,g,b,a)
attribute vec2 in_TextureCoord; // (u,v)
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec2 custom_FragCoord;
void main()
{
    vec4 object_space_pos = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
    //Send absolute fragment coordinate to fragment shader, maybe there's a different coordinate that should be sent instead since checks using this one only work when the sprite's texture touches all borders of the sprite size
    custom_FragCoord = (gm_Matrices[MATRIX_WORLD] * object_space_pos).xy;
}
//////////////////////// Fragment shader ////////////////////////
///Outlines shader
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
uniform vec3 sl_v3_ColorTo; //What color should the outline be
uniform vec2 sl_v2_PixelSize; //Distance to next fragment's x/y, for size of step calculation
uniform vec2 sl_v2_SpriteSize; //Size of current drawn sprite (Not used currently, but could be relevant idk)
varying vec2 custom_FragCoord; //Absolute fragment coordinate
void main()
{
    vec3 v3_colorToTest = vec3(1., 1., 1.); //White outline color, for testing
    vec3 v3_outLineColor = vec3(0.149, 0.149, 0.149); //Color of outline to look for, if fragment is not this color just ignore

    //Check difference between fragment color and acceptable outline color
    vec3 v3_colDiff = vec3(texture2D(gm_BaseTexture, v_vTexcoord).r - v3_outLineColor.r,
                           texture2D(gm_BaseTexture, v_vTexcoord).g - v3_outLineColor.g,
                           texture2D(gm_BaseTexture, v_vTexcoord).b - v3_outLineColor.b);
    //How much does the fragment's color differ from the outline color it seeks
    float f_colDiff = (v3_colDiff.x + v3_colDiff.y + v3_colDiff.z) / 3.;

    //If fragment color varies by more than 0.001 set alpha to 0, otherwise set it to 8
    float alpha = 8. * floor(texture2D(gm_BaseTexture, v_vTexcoord).a + 0.001 - abs(f_colDiff));

    //Bunch of conditionals, just to test, I'll take them off once stuff works
    /*Here lies the problem: If the sprite is, for instance, 32x32, but only the bottom-half of it has stuff to draw, the "custom_FragCoord.y > 1" check will be useless,
    since it will start drawing at custom_FragCoord.y = 15, not custom_FragCoord.y = 0*/
    if (custom_FragCoord.x > 1. && custom_FragCoord.y > 1. && custom_FragCoord.x < sl_v2_SpriteSize.x - 1. && custom_FragCoord.y < sl_v2_SpriteSize.y - 1.)
    {
        //Check all around for transparency, if none is found it is not an outline
        for (float i = 0.; i <= 315.; i += 45.)
        {
            alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(sign(cos(i)) * sl_v2_PixelSize.x, sign(sin(i)) * sl_v2_PixelSize.y)).a);
        }
    }

    //Paint result, with a white color to test out
    vec4 col = vec4(v3_colorToTest, alpha);
    gl_FragColor = col;
}
Figured it out: I had to manually pass the sprite's texture UV borders to the shader, using sprite_get_uvs().
Here's the shader if anyone is interested:
//////////////////////// Vertex shader ////////////////////////
attribute vec3 in_Position; // (x,y,z)
//attribute vec3 in_Normal; // (x,y,z) unused in this shader.
attribute vec4 in_Colour; // (r,g,b,a)
attribute vec2 in_TextureCoord; // (u,v)
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
void main()
{
    vec4 object_space_pos = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
}
//////////////////////// Fragment shader ////////////////////////
//Outlines shader
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
uniform vec3 sl_v3_ColorTo; //What color should the outline be
uniform vec2 sl_v2_PixelSize; //Size of display, for size of step calculation
uniform vec4 sl_v2_TextureUV; //Texture's UV coordinates
void main()
{
    vec3 v3_colorToTest = vec3(1., 1., 1.);
    vec3 v3_outLineColor = vec3(0.149, 0.149, 0.149);

    vec3 v3_colDiff = vec3(texture2D(gm_BaseTexture, v_vTexcoord).r - v3_outLineColor.r,
                           texture2D(gm_BaseTexture, v_vTexcoord).g - v3_outLineColor.g,
                           texture2D(gm_BaseTexture, v_vTexcoord).b - v3_outLineColor.b);
    float f_colDiff = (v3_colDiff.x + v3_colDiff.y + v3_colDiff.z) / 3.;
    float alpha = 8. * floor(texture2D(gm_BaseTexture, v_vTexcoord).a + 0.001 - abs(f_colDiff));

    vec4 v3_borderCheck = vec4(v_vTexcoord.x - sl_v2_TextureUV.x,
                               v_vTexcoord.y - sl_v2_TextureUV.y,
                               sl_v2_TextureUV.z - v_vTexcoord.x,
                               sl_v2_TextureUV.w - v_vTexcoord.y);
    //Checks the borders, if on border is always outline
    alpha += floor(1. - v3_borderCheck.x + sl_v2_PixelSize.x);
    alpha += floor(1. - v3_borderCheck.y + sl_v2_PixelSize.y);
    alpha += floor(1. - v3_borderCheck.z + sl_v2_PixelSize.x);
    alpha += floor(1. - v3_borderCheck.w + sl_v2_PixelSize.y);

    //Check neighbors
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(sl_v2_PixelSize.x, 0.)).a);
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(-sl_v2_PixelSize.x, 0.)).a);
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(0., sl_v2_PixelSize.y)).a);
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(0., -sl_v2_PixelSize.y)).a);

    //Check diagonal neighbors
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(sl_v2_PixelSize.x, sl_v2_PixelSize.y)).a);
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(-sl_v2_PixelSize.x, sl_v2_PixelSize.y)).a);
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(sl_v2_PixelSize.x, -sl_v2_PixelSize.y)).a);
    alpha -= ceil(texture2D(gm_BaseTexture, v_vTexcoord + vec2(-sl_v2_PixelSize.x, -sl_v2_PixelSize.y)).a);

    vec4 col = vec4(v3_colorToTest, alpha); //alpha * sl_f_OutlineAlpha here later, sl_OutlineAlpha being a variable changeable in object (not dependent on object's image_alpha, set it to object_alpha inside object when appropriate)
    gl_FragColor = col;
}
It works in a very specific way so I don't know if it will be useful for anyone else.
I'm sure there are places that could be optimized, if someone has suggestions please tell me.
I am having a problem figuring out how to get my fragment shader to load three images properly.
The objective is: Load three texture images IMAGE1, IMAGE2, and IMAGE3. IMAGE3 should be a black-and-white image. Use IMAGE3 as a filter: for any given pixel P, if the corresponding texture pixel color in IMAGE3 is white, then use IMAGE1 to texture-map pixel P. If the corresponding pixel color in IMAGE3 is black, then use IMAGE2 to texture-map pixel P. If the corresponding pixel color in IMAGE3 is neither black nor white, then use a mixture of IMAGE1 and IMAGE2 to texture-map pixel P.
Now I can get it working with IMAGE1 displaying in the white area of IMAGE3, but I am having a hard time getting IMAGE2 to display in the black area of IMAGE3 without overlapping IMAGE1 in the hexagon. Any help would be appreciated.
#version 330
in vec2 textureCoord;
uniform sampler2D textureMap0;
uniform sampler2D textureMap1;
uniform sampler2D textureMap2;
out vec4 fragColor;
void main() {
    // retrieve color from each texture
    vec4 textureColor1 = texture2D(textureMap0, textureCoord);
    vec4 textureColor2 = texture2D(textureMap1, textureCoord);
    vec4 textureColor3 = texture2D(textureMap2, textureCoord);
    //vec4 finalColor = textureColor1 + textureColor2 + textureColor3;

    // Combine the two texture colors
    // Depending on the texture colors, you may multiply, add,
    // or mix the two colors.
#if __VERSION__ >= 130
    if ((textureColor1.r == 0.0f) && (textureColor1.g == 0.0f)
            && (textureColor1.b == 0.0f)) {
        textureColor1.r = 1.0f;
    }
    //fragColor = mix(fragColor,finalColor,0.5);
    fragColor = textureColor1 * textureColor3; //* textureColor3;
#else
    gl_FragColor = textureColor1 * textureColor2; //* textureColor3;
#endif
}
I think this should do what you describe:
vec4 textureColor4 = vec4(vec3(1.0, 1.0, 1.0) - textureColor3.rgb, 1.0);
//...
fragColor = textureColor3 * textureColor1 + textureColor4 * textureColor2;
Which is essentially a linear interpolation.
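For what it's worth, GLSL's built-in mix() expresses the same per-channel linear interpolation in one line (using the same variable names as above):

```glsl
// mix(x, y, a) = x * (1.0 - a) + y * a, applied per channel:
// black filter pixels select IMAGE2, white ones select IMAGE1
fragColor = mix(textureColor2, textureColor1, textureColor3);
```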
I have a radial blur shader in GLSL, which takes a texture, applies a radial blur to it and renders the result to the screen. This works very well, so far.
The problem is, that this applies the radial blur to the first texture in the scene. But what I actually want to do, is to apply this blur to the whole scene.
What is the best way to achieve this functionality? Can I do this with only shaders, or do I have to render the scene to a texture first (in OpenGL) and then pass this texture to the shader for further processing?
// Vertex shader
varying vec2 uv;
void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    gl_Position = sign(gl_Position);
    uv = (vec2(gl_Position.x, -gl_Position.y) + vec2(1.0)) / vec2(2.0);
}
// Fragment shader
uniform sampler2D tex;
varying vec2 uv;
const float sampleDist = 1.0;
const float sampleStrength = 2.2;
void main(void)
{
    float samples[10];
    samples[0] = -0.08;
    samples[1] = -0.05;
    samples[2] = -0.03;
    samples[3] = -0.02;
    samples[4] = -0.01;
    samples[5] =  0.01;
    samples[6] =  0.02;
    samples[7] =  0.03;
    samples[8] =  0.05;
    samples[9] =  0.08;

    vec2 dir = 0.5 - uv;
    float dist = sqrt(dir.x * dir.x + dir.y * dir.y);
    dir = dir / dist;

    vec4 color = texture2D(tex, uv);
    vec4 sum = color;
    for (int i = 0; i < 10; i++)
        sum += texture2D(tex, uv + dir * samples[i] * sampleDist);
    sum *= 1.0 / 11.0;

    float t = dist * sampleStrength;
    t = clamp(t, 0.0, 1.0);
    gl_FragColor = mix(color, sum, t);
}
This is basically called "post-processing", because you apply an effect (here: radial blur) to the whole scene after it has been rendered.
So yes, you're right: a good way to do post-processing is to:
create a screen-sized texture (e.g. GL_TEXTURE_RECTANGLE, or GL_TEXTURE_2D on hardware that supports NPOT textures),
create a FBO, attach the texture to it
set this FBO to active, render the scene
disable the FBO and draw a full-screen quad with the FBO's texture bound.
As for the "why": the scene is rendered in parallel (the fragment shader is executed independently for many pixels). To compute the radial blur for pixel (x,y), you first need the pre-blur values of the surrounding pixels, and those are not available in the first pass because they are still being rendered at that point.
Therefore, you must apply the radial blur only after the whole scene has been rendered, so that the fragment shader for fragment (x,y) can read any pixel of the scene. This is why you need two rendering passes.
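A minimal sketch of those steps in C with the core OpenGL FBO API (error handling, shader setup, a depth attachment if your scene needs one, and the actual draw calls are elided; width and height are assumed to match the window):

```c
/* assumes an OpenGL context and a loader (e.g. GLEW/glad) are already set up */
GLuint fbo, sceneTex;

void createSceneTarget(int width, int height)
{
    /* 1. screen-sized texture that will receive the scene */
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* 2. FBO with the texture as its color attachment */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sceneTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void renderFrame(void)
{
    /* 3. first pass: render the scene into the FBO */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene ... */

    /* 4. second pass: back to the default framebuffer, draw a full-screen
       quad with sceneTex bound and the radial blur shader enabled */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    /* ... draw a full-screen quad ... */
}
```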