I have a hexagonal map and I'm trying to build a fog of war over it. What I'm doing is building a VBO whose vertices are the centers of the hexes. I then assign an alpha value to each tile center according to its visibility (0.0f, 0.5f, or 1.0f).
Here is the vertex shader:
#ifdef GL_ES
precision highp float;
#endif
attribute vec4 a_position;
attribute float a_alpha;
uniform mat4 u_MVPMatrix;
varying float v_alpha;
void main()
{
gl_Position = u_MVPMatrix * a_position;
v_alpha = a_alpha;
}
and my fragment shader:
#ifdef GL_ES
precision highp float;
#endif
varying float v_alpha;
void main()
{
gl_FragColor = vec4(v_alpha, v_alpha, v_alpha, 1.0);
}
I tried carving a hole in the fog map to try it out, and I'm getting this:
What's wrong with the interpolation? Why is the triangle structure so visible?
As Nico pointed out, this is an optical illusion.
Basically, what you have is a discontinuity between the two triangles. The shared edge has the same color values on both sides, but because the gradient is different on each side, it creates the appearance of a line between them: the change in the gradient across the triangle edge is not smooth.
Mathematically, you have C0 continuity across the line, but not C1 continuity.
And human perception is very attuned to smoothness, so when something isn't smooth we perceive a discontinuity there (this is essentially the Mach band effect).
There's not much you can do about this. You might be able to play around with interpolation schemes, but the best thing you can do is simply texture the hex. The texture's frequently changing colors will help mask the gradient discontinuity.
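For example, a minimal variant of the fragment shader above, assuming the hex mesh also carries texture coordinates (v_texCoord) and a tile texture bound to u_tileTexture, would modulate the texture by the interpolated visibility instead of outputting a flat gray fog:
#ifdef GL_ES
precision highp float;
#endif
varying float v_alpha;
varying vec2 v_texCoord;          // assumed: UVs passed through from the vertex shader
uniform sampler2D u_tileTexture;  // assumed: the tile/terrain texture
void main()
{
    vec3 tile = texture2D(u_tileTexture, v_texCoord).rgb;
    // The busy texture detail helps mask the gradient discontinuity at triangle edges.
    gl_FragColor = vec4(tile * v_alpha, 1.0);
}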
We have code that mostly works for filling polygons on a map, though for now it draws convex hulls and over-fills some areas (proper tessellation will be required).
The shader is given a set of triangle-fan draw operations and renders them with a hardcoded yellow color (this works).
When we instead try to interpolate a color based on the incoming value, everything turns black (this does not work).
Here is the fragment shader. The incoming values are all in the range 0.0 to 1.0, with minVal = 0.0, maxVal = 1.0, and the colors set to (0,0,1) and (1,0,0).
While I would appreciate knowing the bug, I would much rather know how I can debug it. I need to be able to get at the values in the shader and see what is happening. In short, I need some kind of debugging facility for GLSL. I did find NVIDIA Nsight (https://developer.nvidia.com/nsight-graphics), but could not get it working on Linux.
#version 330 core
out vec4 FragColor;
//in vec2 TexCoord;
in float val;
//uniform sampler2D ourTexture;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;
void main()
{
float f = (val - minVal)/ (maxVal-minVal);
//FragColor = vec4(1,1,0,1);//texture(ourTexture, f);
FragColor = vec4(minColor*(1.0-f) + maxColor * f,1.0);
}
It turns out that we were using glUniform4fv to set the vec3 color uniforms with an RGBA value.
There was no compile-time or obvious runtime error; the glUniform* calls return void, so there is no error return value to check directly.
The shader did not generate an error either, but the calls were ignored, so minColor and maxColor were never set and kept their default value of zero.
Thus the interpolation was always black.
vec4(minColor*(1.0-f) + maxColor * f,1.0);
There should have been a visible error for attempting to set an RGBA value on a vec3 uniform. In fact OpenGL does raise GL_INVALID_OPERATION when the glUniform size does not match the declared uniform, but you only see it if you poll glGetError (or enable a debug output callback).
I have since found printf-style helpers on Stack Overflow that would have allowed viewing this kind of information: Convert floating-point numbers to decimal digits in GLSL
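A simpler trick that would also have caught this bug is to temporarily route the suspect value straight into the output color. A sketch against the shader above (which quantity to visualize is up to you):
#version 330 core
out vec4 FragColor;
in float val;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;
void main()
{
    // Debug variant: visualize the suspect quantities directly as colors.
    // Solid black output says minColor never reached the shader;
    // swap in the commented line to check the interpolation factor instead.
    float f = (val - minVal) / (maxVal - minVal);
    FragColor = vec4(minColor, 1.0);
    //FragColor = vec4(f, f, f, 1.0);
}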
I've encoded some data into a 44487x1 luminance texture:
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es
in vec4 position;
void main() {
gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; //data texture
out vec4 fragColor;
void main(){
//data texture dimensions
vec2 dims = vec2(44487., 1.0);
//amount by which to translate the data texture
vec2 offset = vec2(u_time*.5, 0.);
//canvas coords
vec2 uv = gl_FragCoord.xy/u_resolution.xy;
//texture aspect ratio, w/h
float textureAspect = 44487. / 1.;
vec3 col = vec3(0.);
//texture width is 44487*larger than uv, I guess?
vec2 textCoords = vec2((uv.x/textureAspect)+offset.x, uv.y);
//get texture values
vec3 text = texture(u_texture_7, textCoords).rgb;
//output
fragColor = vec4(text, 1.);
}
However, this doesn't seem to work. All I get is a black screen. Is using a wide texture like this a good way to go about getting the array values into the shader? The texture is very small in size, but I'm wondering if the dimensions might still be causing an issue.
As an alternative to providing one large texture, could I provide a smaller texture and update its contents via JS as needed?
After trying several different approaches, the workaround I ended up using was drawing the 44487x1 image into a separate 2D canvas, performing the transformations of the texture in that canvas rather than in the shader, and then sending the canvas to the shader as a texture.
Might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.
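For reference, if the texture had fit under the driver's GL_MAX_TEXTURE_SIZE (44487 texels exceeds the 16384 limit that is common on WebGL2 implementations, which by itself can leave the sampler black), the per-pixel window lookup described in the question could have been sketched roughly like this; the scrub speed of 60 texels per unit of u_time is just an assumption:
#version 300 es
precision highp float;
uniform float u_time;
uniform sampler2D u_texture_7;   // the 44487x1 data texture
out vec4 fragColor;
void main() {
    float dataWidth = 44487.0;
    float offsetTexels = floor(u_time * 60.0);          // assumed scrub speed
    // Map each canvas pixel to exactly one texel, then slide the window by the offset.
    // gl_FragCoord.x is already at pixel centers (0.5, 1.5, ...).
    float x = (gl_FragCoord.x + offsetTexels) / dataWidth;
    vec3 data = texture(u_texture_7, vec2(fract(x), 0.5)).rgb;
    fragColor = vec4(data, 1.0);
}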
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll notice that a face pointing directly at the camera is fully lit, and as you rotate the model the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);
// Compute the vertex's normal in camera space
vec3 normal_cameraspace = normalize(( view_matrix * model_matrix * vec4(in_normal,0)).xyz);
// Vector from the vertex (in camera space) to the camera (which is at the origin)
vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);
// Compute the angle between the two vectors
float cosTheta = clamp( dot( normal_cameraspace, cameraVector ), 0,1 );
// The coefficient will create a nice looking shining effect.
// Also, we shouldn't modify the alpha channel value.
color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and duplicate the vertex position data wherever adjacent faces don't share the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
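For completeness, the geometry shader route (option 2) could look roughly like this; it is only a sketch, and it assumes the vertex shader forwards the camera-space position as v_posCamera and the vertex color as v_color:
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 v_posCamera[];   // camera-space position from the vertex shader (assumed name)
in vec4 v_color[];       // per-vertex color from the vertex shader (assumed name)
out vec4 g_color;
void main()
{
    // One normal for the whole triangle, computed from two of its edges in camera space.
    vec3 faceNormal = normalize(cross(v_posCamera[1] - v_posCamera[0],
                                      v_posCamera[2] - v_posCamera[0]));
    for (int i = 0; i < 3; ++i) {
        vec3 toCamera = normalize(-v_posCamera[i]);   // the camera sits at the origin in camera space
        float cosTheta = clamp(dot(faceNormal, toCamera), 0.0, 1.0);
        g_color = vec4(0.3 * v_color[i].rgb + cosTheta * v_color[i].rgb, v_color[i].a);
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}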
This is simple flat shading: instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
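Pieced together, a minimal complete fragment shader using this approach might look like the following sketch; FragPos is assumed to be the interpolated position passed from the vertex shader, and lightPos1/diffColor1 are assumed uniforms matching the snippets above:
#version 330 core
in vec3 FragPos;          // interpolated position from the vertex shader (world or view space)
out vec4 FragColor;
uniform vec3 lightPos1;   // light position, in the same space as FragPos
uniform vec3 diffColor1;  // diffuse light color
void main()
{
    // Per-face normal from the screen-space derivatives of the interpolated position.
    vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));
    // Simple diffuse term, as in the snippets above.
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);
    FragColor = vec4(diff1 * diffColor1, 1.0);
}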
I'm working on shadows for a 2D overhead game. Right now, the shadows are just sprites with the color (0,0,0,0.1) drawn on a layer above the tiles.
The problem: When many entities or trees get clumped together, the shadows overlap, forming unnatural-looking dark areas.
I've tried drawing the shadows to a framebuffer and using a simple shader to prevent overlapping, but that led to other problems, including layering issues.
Is it possible to enable a blend function for the shadows that prevents this "stacking", or is there a better way to do this with a shader?
If you don't want to deal with sorting issues, I think you could do this with a shader. But every object will have to be either affected by shadow or not. So tall trees could be marked as not shadow receiving, while the ground, grass, and characters would be shadow receiving.
First make a frame buffer with clear color white. Draw all your shadows on it as pure black.
Then make a shadow mapping shader to draw everything in your world. This relies on you not needing all four channels of the sprite's color, because we need one of those channels to mark each sprite as shadow receiving or not. For example, if you aren't using RGB to tint your sprites, we could use the R channel. Or if you aren't fading them in and out, we could use A. I'll assume the latter here:
Vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform mat4 u_projTrans;
void main()
{
v_texCoords = a_texCoord0;
v_color = a_color;
v_color.a = v_color.a * (255.0/254.0); //this is a correction due to color float precision (see SpriteBatch's default shader)
vec4 screenPosition = u_projTrans * a_position; // mat4 * vec4 yields a vec4
v_texCoordsShadowmap = (screenPosition.xy * 0.5) + 0.5;
gl_Position = screenPosition;
}
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform sampler2D u_texture;
uniform sampler2D u_textureShadowmap;
void main()
{
vec4 textureColor = texture2D(u_texture, v_texCoords);
float shadowColor = texture2D(u_textureShadowmap, v_texCoordsShadowmap).r;
shadowColor = mix(shadowColor, 1.0, v_color.a);
textureColor.rgb *= shadowColor * v_color.rgb;
gl_FragColor = textureColor;
}
These are completely untested and probably have bugs. Make sure you assign the frame buffer's color texture to "u_textureShadowmap". And for all your sprites, set their color's alpha based on how much shadow you want them to have cast on them, which will generally always be 0 or 0.1 (based on the brightness you were using before).
Draw your shadows to an FBO with blending disabled.
Draw the background, e.g. grass.
Draw the shadow texture from the FBO.
Draw all other sprites.
I am using GLSL to render a basic cube (made from GL_QUADS surfaces). I would like to pass the gl_Vertex content from the vertex shader into the fragment shader. Everything works if I use gl_FrontColor (vertex shader) and gl_Color (fragment shader) for this, but it doesn't work when I use a plain varying (see code and image below). It appears the varying is not interpolated across the surface for some reason. Any idea what could cause this in OpenGL?
glShadeModel is set to GL_SMOOTH - I can't think of anything else that could cause this effect right now.
Vertex Shader:
#version 120
varying vec4 frontSideValue;
void main() {
frontSideValue = gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // transform the vertex with the fixed-function MVP matrix
}
Fragment Shader:
#version 120
varying vec4 frontSideValue;
void main() {
gl_FragColor = frontSideValue;
}
The result looks exactly like what you would get if the color values were not in the range [0,1]. You are basically using the untransformed vertex position as a color, which may well be outside that range. Your cube seems to be centered around the origin, so you only see the small transition region where the values actually lie in [0,1], which appears as that unsharp band.
With the built-in gl_FrontColor, the value seems to get clamped to [0,1] before the interpolation.
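If the intent is just to visualize the vertex position as a color, keeping the values inside [0,1] before they are interpolated avoids the issue altogether. A sketch, assuming a unit cube centered at the origin (object-space coordinates in [-0.5, 0.5]):
#version 120
varying vec4 frontSideValue;
void main() {
    // Remap object-space coordinates from [-0.5, 0.5] into [0, 1] before interpolation.
    frontSideValue = vec4(gl_Vertex.xyz + 0.5, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}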