I'm trying to render a grid-based terrain with flat shading, and my output looks very wrong:
The terrain data is as follows:
0 0 0 0
0 1 1 0
0 1 1 0
0 0 0 0
So the square in the middle is at height 1, flat (i.e. coplanar triangles) and parallel to the ground; the 4 NSEW squares are half at height 0 and half at height 1 (or more specifically 1/3), flat and tilted sideways; and the 4 corner squares are about as you'd imagine -- don't worry about the corners, I'm not rotating the seam correctly.
My vertex shader is:
#version 460
layout(location = 0) uniform mat4 projection;
layout(location = 4) uniform mat4 world;
layout(location = 0) in vec3 vPosition;
layout(location = 0) in vec3 fNormal;
layout(location = 2) in vec4 vColor;
flat out vec4 fColor;
const vec3 sunDirection = vec3(0, 0, 1);
void main(void)
{
gl_Position = projection * world * vec4(vPosition, 1);
fColor = (1 - max(0, dot(sunDirection, fNormal))) * vColor;
}
The sun direction is hard coded as vertical along the Z axis. The fragment shader just copies the color as given, nothing fancy.
Below you can see some of the generated vertex data. Index 24 is the start of the middle square (4 * 6); before it is the middle-left square, and you can see it's coplanar based on the calculated normals.
So my questions:
Why do I need to subtract the dot product between the sun direction and the normal vector to see anything? Without it I get a completely "light" (i.e. bright green) picture. In no tutorial I've read have I seen the need to do that subtraction.
Why do the middle-left and middle-right cells look half shaded half bright? They're coplanar, they should share a color.
Why is my flat cell in the middle not colored bright green? Again, it's coplanar with a straight up normal vector.
Why are the top 3 cells completely bright? Assume the vertices are calculated correctly, but I can also provide them if needed.
layout(location = 0) in vec3 vPosition;
layout(location = 0) in vec3 fNormal;
That's not going to work. This is location aliasing and both input variables will receive the same data from attribute slot 0.
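A minimal sketch of the fix (assuming the normal really is a separate attribute in your vertex setup -- the location you pick here must match the index you use in your glVertexAttribPointer call):
layout(location = 0) in vec3 vPosition;
layout(location = 1) in vec3 fNormal; // give the normal its own attribute slot, no aliasing with vPosition
layout(location = 2) in vec4 vColor;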
So I was trying to implement basic diffuse lighting using OpenGL. I wrote a simple shader that would take a normal vector and a light vector and calculate the brightness of a pixel using the dot product of said vectors. Here are my outputs:
Light coming from the left ([1, 0, 0] as light vector)
Light coming down ([0, -1, 0] as light vector)
Light coming from behind ([0, 0, 1] as light vector)
As you can see, it works just fine for the first two cases, but it completely breaks for the third. By the way, [0, 0, -1] doesn't work either, and [0, 1, 1] gives the same output as if the light were coming up ([0, 1, 0]). Here are my shaders:
Vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightDirection;
out vec3 normal;
out vec3 lightDir;
void main()
{
normal = normalize(aNormal);
lightDir = normalize(lightDirection);
gl_Position = projection * view * model * vec4(aPos, 1.0f);
}
Fragment shader:
#version 330 core
in vec3 normal;
in vec3 lightDir;
out vec4 FragColor;
void main()
{
float grey = max(dot(-lightDir, normal), 0.0f);
FragColor = vec4(grey, grey, grey, 1.0f);
}
I assume the issue has something to do with the dot product, but I can't find why.
The diffuse light is calculated using the formula max(dot(-lightDir, normal), 0.0f). So if dot(-lightDir, normal) is less than 0, the scene is completely black.
The dot product of 2 unit vectors is the cosine of the angle between the 2 vectors. Hence, if the angle is > 90° and < 270°, the result is less than 0.
This means that when the object is lit from behind, it will appear completely black.
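For example, with unit vectors normal = (0, 0, 1) and lightDir = (0, 0, 1), dot(-lightDir, normal) = -1, which the max() clamps to 0, so that face comes out black; with lightDir = (0, 0, -1) the dot product is +1 and the face is fully lit.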
The light direction is a vector in world space. dot(-lightDir, normal) only makes sense if normal is also a vector in world space.
Transform normal from model space to world space:
normal = inverse(transpose(mat3(model))) * normalize(aNormal);
(Why transforming normals with the transpose of the inverse of the modelview matrix?)
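Applied to the vertex shader above, a minimal sketch of the corrected main() (same uniform and attribute names as in the question) would be:
void main()
{
    // bring the normal into world space with the normal matrix of the model transform
    normal = inverse(transpose(mat3(model))) * normalize(aNormal);
    lightDir = normalize(lightDirection);
    gl_Position = projection * view * model * vec4(aPos, 1.0f);
}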
I have a deferred renderer which appears to work correctly: depth, colour and shading all come out correctly. However the position buffer is fine for orthographic, while the geometry appears 'inverted' (or depth disabled) when using a perspective projection.
I am getting the following buffer outputs for orthographic.
With the final 'shaded' image currently looking correct.
However when I am using a perspective projection I get the following buffers coming out...
And the final image is fine, although I don't incorporate any position buffer information at the moment (N.B. only doing 'headlight' shading at the moment).
While the final image appears correct, the depth buffer appears to be ignored for my position buffer (there is no glDisable(GL_DEPTH_TEST) in the code).
The depth and normal buffers look OK to me; it's only the 'position' buffer which appears to be ignoring the depth. The render pipeline is exactly the same for ortho and perspective, with the only difference being the projection matrix.
I use glm::ortho, and glm::perspective and I calculate my near/far clipping distances on the fly based on the scene AABB. For orthographic my near/far is 1 & 11.4734 respectively, and for perspective it is 11.0875 & 22.5609... The width and height values are the same, fov is 45 for perspective projection.
I do have these calls before drawing any geometry...
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Which I use for compositing different layers as part of the render pipeline.
Am I doing anything wrong here? or am I misunderstanding something?
Here are my shaders...
Vertex shader of gBuffer...
#version 430 core
layout (std140) uniform MatrixPV
{
mat4 P;
mat4 V;
};
layout(location = 0) in vec3 InPoint;
layout(location = 1) in vec3 InNormal;
layout(location = 2) in vec2 InUV;
uniform mat4 M;
out vec4 Position;
out vec3 Normal;
out vec2 UV;
void main()
{
mat4 VM = V * M;
gl_Position = P * VM * vec4(InPoint, 1.0);
Position = P * VM * vec4(InPoint, 1.0);
Normal = mat3(M) * InNormal;
UV = InUV;
}
Fragment shader of gBuffer...
#version 430 core
layout(location = 0) out vec4 gBufferPicker;
layout(location = 1) out vec4 gBufferPosition;
layout(location = 2) out vec4 gBufferNormal;
layout(location = 3) out vec4 gBufferDiffuse;
in vec3 Normal;
in vec4 Position;
vec4 Diffuse();
uniform vec4 PickerColour;
void main()
{
gBufferPosition = Position;
gBufferNormal = vec4(Normal.xyz, 1.0);
gBufferPicker = PickerColour;
gBufferDiffuse = Diffuse();
}
And here is the 'second pass' shader to visualise the position buffer...
#version 430 core
uniform sampler2D debugBufferPosition;
in vec2 UV;
out vec4 frag;
void main()
{
vec3 val = texture(debugBufferPosition, UV).xyz;
frag = vec4(val.xyz, 1.0);
}
I haven't used the position buffer data yet, and I know I can reconstruct the positions without having to store them in another buffer. However, the positions are useful to me for other reasons, and I would like to know why they are coming out as they are for perspective.
What you actually write to the position buffer is the clip space coordinate:
Position = P * VM * vec4(InPoint, 1.0);
The clip space coordinate is a homogeneous coordinate; it is transformed to the normalized device coordinate (which is a Cartesian coordinate) by the perspective divide:
ndc = gl_Position.xyz / gl_Position.w;
With an orthographic projection the w component is 1, but with a perspective projection the w component contains a value that depends on the z component (depth) of the (Cartesian) view space coordinate.
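(For a typical glm::perspective matrix the bottom row is (0, 0, -1, 0), so w_clip = -z_view: the further away the fragment is, the larger the divisor becomes.)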
I recommend storing the normalized device coordinate in the position buffer rather than the clip space coordinate, e.g.:
gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
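Note that normalized device coordinates are in the range [-1, 1], so if you just want to visualize them in your debug pass you may want to remap them; a sketch using the debug shader from the question:
vec3 ndc = texture(debugBufferPosition, UV).xyz;
frag = vec4(ndc * 0.5 + 0.5, 1.0); // remap [-1, 1] to [0, 1] for display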
I am working on a C++ program which displays a terrain mesh using GLSL shaders. I want it to be able to use different materials based on the elevation.
I am trying to accomplish this by having a uniform array of materials in the fragment shader and then using the y coordinate of the world-space position of the current fragment to determine which material from the array to use.
Here are the relevant parts of my fragment shader:
#version 430
struct Material
{
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    int shininess;
    sampler2D diffuseTex;
    bool hasDiffuseTex;
    float maxY; //the upper bound of this material's layer in relation to the height of the mesh (in the range 0-1)
};
in vec2 TexCoords;
in vec3 WorldPos;
const int MAX_MATERIALS = 14;
uniform Material materials[MAX_MATERIALS];
uniform int materialCount; //the actual number of materials in the array
uniform float minY; //the minimum world-space y-coordinate in the mesh
uniform float maxY; //the maximum world-space y-coordinate in the mesh
out vec4 fragColor;
void main()
{
    //calculate the y-position of this fragment in relation to the height of the mesh (in the range 0-1)
    float y = (WorldPos.y - minY) / (maxY - minY);
    //calculate the index into the materials array
    int index = 0;
    for (int i = 0; i < materialCount; ++i)
    {
        index += int(y > materials[i].maxY);
    }
    //calculate the ambient color
    vec3 ambient = ...
    //calculate the diffuse color
    vec3 diffuse = ...
    //sample from the texture
    vec3 texColor = vec3(texture(materials[index].diffuseTex, TexCoords.xy));
    //only multiply diffuse color with texture color if the material has a texture
    diffuse += int(materials[index].hasDiffuseTex) * ((texColor * diffuse) - diffuse);
    //calculate the specular color
    vec3 specular = ...
    fragColor = vec4(ambient + diffuse + specular, 1.0f);
}
It works fine if textures are not used:
But if one of the materials has a texture associated with it, it shows some black artifacts near the borders of the material layer which has the texture:
When I add this line after the diffuse calculation part:
if (index == 0 && int(materials[index].hasDiffuseTex) == 1 && texColor == vec3(0, 0, 0)) diffuse = vec3(1, 0, 0);
it draws the artifacts in red:
which tells me that the index is correct (0) but nothing is sampled from the texture.
Furthermore if I hardcode the index into the shader like this:
vec3 texColor = vec3(texture(materials[0].diffuseTex, TexCoords.xy));
it renders correctly. So I am guessing it has something to do with the indexing, but the index appears to be correct and the texture is there, so why doesn't it sample any color?
I have also found out that if I switch the order of the materials and move their borders around in the GUI of my program in a certain fashion, it starts to render correctly from that point on, which I don't understand at all. I first suspected this might be due to me sending wrong uniform values to the shaders initially, and that it somehow gets the correct ones after I make the changes in the GUI. However, I have tested all the uniform values I am sending to the shader from the C++ side and they all appear to be correct from the start, and I don't see any other possible cause on the C++ side. So I am now thinking the problem is probably in the shader.
I want to texture my terrain without predetermined texture coordinates. I want to determine the coordinates in the vertex or fragment shader using the vertex position coordinates. I currently use the position's 'xz' coordinates (up = (0,1,0)), but if I have, for example, a wall that is at 90 degrees to the ground, the texture looks like this:
How can I transform these position coordinates so that this works well?
Here's my vertex shader:
#version 430
in layout(location=0) vec3 position;
in layout(location=1) vec2 textCoord;
in layout(location=2) vec3 normal;
out vec3 pos;
out vec2 text;
out vec3 norm;
uniform mat4 transformation;
void main()
{
    gl_Position = transformation * vec4(position, 1.0);
    norm = normal;
    pos = position;
    text = position.xz;
}
And here's my fragment shader:
#version 430
in vec3 pos;
in vec2 text;
in vec3 norm;
//uniform sampler2D textures[3];
layout(binding=3) uniform sampler2D texture_1;
layout(binding=4) uniform sampler2D texture_2;
layout(binding=5) uniform sampler2D texture_3;
vec3 lightPosition = vec3(-200, 700, 50);
vec3 lightAmbient = vec3(0,0,0);
vec3 lightDiffuse = vec3(1,1,1);
vec3 lightSpecular = vec3(1,1,1);
out vec4 fragColor;
vec4 theColor;
void main()
{
    vec3 unNormPos = pos;
    vec3 lightVector = normalize(lightPosition) - normalize(pos);
    //lightVector = normalize(lightVector);
    float cosTheta = clamp(dot(normalize(lightVector), normalize(norm)), 0.5, 1.0);
    if(pos.y <= 120){
        fragColor = texture2D(texture_2, text*0.05) * cosTheta;
    }
    if(pos.y > 120 && pos.y < 150){
        fragColor = (texture2D(texture_2, text*0.05) * (1 - (pos.y-120)/29) + texture2D(texture_3, text*0.05) * ((pos.y-120)/29))*cosTheta;
    }
    if(pos.y >= 150)
    {
        fragColor = texture2D(texture_3, text*0.05) * cosTheta;
    }
}
EDIT: (Fons)
text = 0.05 * (position.xz + vec2(0,position.y));
text = 0.05 * (position.xz + vec2(position.y,position.y));
Now the wall works but the terrain doesn't.
The problem is actually a very difficult one, since you cannot devise a formula for the texture coordinates that displays vertical walls correctly, using only the xyz coordinates.
To visualize this, imagine a hill next to a piece of flat land. Since the path going over the hill is longer than the one going over the flat piece of land, the texture should wrap more times on the hill than on the flat piece of land. In the image below, the texture wraps 5 times on the hill and 4 times on the flat piece.
If the texture coordinates are (0,0) on the left, should they be (4,0) or (5,0) on the right? Since both answers are valid, this proves that there is no function that calculates correct texture coordinates based purely on the xyz coordinates. :(
However, your problems might be solved with different methods:
The walls can be corrected by generating them independently from the terrain, and assigning correct texture coordinates to them. It actually makes more sense not to incorporate those in your terrain.
You can add more detail to the sides of steep hills with normal maps, textures of higher resolution, or a combination of different textures. There might be a better solution that I don't know about.
Edit: Triplanar mapping will solve your problem!
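A minimal sketch of what that could look like in your fragment shader (assuming pos and norm are the position and normal passed in from the vertex shader, and using texture_2 as the texture; the 0.05 scale and the sharpness exponent are tunable choices):
    // project the texture along all three axes and blend the samples by the normal
    vec3 blend = pow(abs(normalize(norm)), vec3(4.0)); // sharpness exponent of 4 is an arbitrary choice
    blend /= (blend.x + blend.y + blend.z);            // make the weights sum to 1
    vec4 xProj = texture(texture_2, pos.yz * 0.05);
    vec4 yProj = texture(texture_2, pos.xz * 0.05);
    vec4 zProj = texture(texture_2, pos.xy * 0.05);
    vec4 triplanarColor = xProj * blend.x + yProj * blend.y + zProj * blend.z;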
Try:
text = position.xz + vec2(0, position.y);
Also, I recommend setting the *0.05 scale factor in the vertex shader instead of the fragment shader. The final code would be:
text = 0.05 * (position.xz + vec2(0, position.y));
I need to output 24 indices per fragment in a shader. I have already reached the maximum number of render targets, because I'm using four other render targets for my gbuffer. So I tried to output the data with an SSBO, indexing it with the gl_FragCoord of the pixel. The problem is that it needs to be depth-correct. So I tried to use layout(early_fragment_tests) in; and watched the indices. I can now see strange per-pixel errors in some spots; it looks like the indices from the triangles below are coming through, and it stops when I move the camera closer to those spots.
I double-checked the indexing of the SSBO and it's correct, and the indices should be the same for a whole triangle, but the flickering is per pixel. So I think the depth test only works for the rasterized per-pixel outputs and not for the whole fragment shader code. Could that be the problem, or does somebody know whether the depth test should stop the whole processing of the fragment? If that's not the case, even a separate depth pre-pass couldn't help me.
Here is a fragment shader example:
#version 440 core
layout(early_fragment_tests) in;
layout(location = 0) uniform sampler2D texSampler;
#include "../Header/MaterialData.glslh"
#include "../Header/CameraUBO.glslh"
layout(location = 3) uniform uint screenWidth;//horizontal screen resolution in pixel
layout(location = 0) out vec4 fsout_color;
layout(location = 1) out vec4 fsout_normal;
layout(location = 2) out vec4 fsout_material;
coherent layout(std430, binding = 3) buffer frameCacheIndexBuffer
{
uvec4 globalCachesIndices[];
};
in vec3 gsout_normal;
in vec2 gsout_texCoord;
flat in uvec4 gsout_cacheIndices[6];
flat in uint gsout_instanceIndex;
void main()
{
    uint frameBufferIndex = 6 * (uint(gl_FragCoord.x) + uint(gl_FragCoord.y) * screenWidth);
    for(uint i = 0; i < 6; i++)
    {
        globalCachesIndices[frameBufferIndex + i] = gsout_cacheIndices[i]; //only the closest fragment should output
    }
    fsout_normal = vec4(gsout_normal * 0.5f + 0.5f, 0);
    fsout_color = vec4(texture(texSampler, gsout_texCoord));
    MaterialData thisMaterial = material[materialIndex[gsout_instanceIndex]];
    fsout_material = vec4(thisMaterial.diffuseStrength,
                          thisMaterial.specularStrength,
                          thisMaterial.ambientStrength,
                          thisMaterial.specularExponent);
}