I have a simple OpenGL application like this:
As you can see, I have a plane and Suzanne with a texture, and also some light. But there is also a cube. You can see just the upper half, but the problem is that the cube stays black, no matter what. I tried to apply a texture --> black. I tried to give the cube just a color --> still black.
in vec3 surfaceNormal;
in vec3 toLightVector;
smooth in vec2 textureCords;
out vec4 outputColor;
uniform sampler2D s;
uniform int hasTexture;
uniform vec3 LightColor;
void main()
{
vec3 unitNormal = normalize(surfaceNormal);
vec3 unitLightVector = normalize(toLightVector);
float ndot = dot(unitNormal,unitLightVector);
float brightness = max(ndot,0.0);
vec3 diffuse = brightness*LightColor*1.5;
if(hasTexture == 1){
outputColor = vec4(diffuse,1.0) * texture(s,textureCords);
}else{
outputColor = vec4(diffuse,1.0) * vec4(0.4,0.5,0.7,1.0);
}
}
This is my fragment shader. I pass it a uniform hasTexture to indicate whether the model should be rendered with a texture or without one.
For example :
cube.draw("texture.png");
or
cube.draw();
Since this method works with the two other models, I really don't know what the problem is. Maybe the texture coordinates are wrong?
Based on the given information, my guess is that brightness is 0 because ndot is either 0 or negative. I would check your normals and your toLightVector. Even if your texture coordinates were wrong, setting the cube to a plain color should have worked.
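A quick way to check that (a debug sketch added here, not part of the original shader): output the interpolated vectors directly as colors and look at the cube. If it stays black or a single flat color, the corresponding vertex attribute is missing or zero for that mesh.
// temporary body for main(): map the unit normal from [-1,1] into [0,1]
vec3 unitNormal = normalize(surfaceNormal);
outputColor = vec4(unitNormal * 0.5 + 0.5, 1.0);
// or inspect the light vector the same way:
// outputColor = vec4(normalize(toLightVector) * 0.5 + 0.5, 1.0);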
Related
I am working on a C++ program which displays a terrain mesh using GLSL shaders. I want it to be able to use different materials based on the elevation.
I am trying to accomplish this by having a uniform array of materials in the fragment shader and then using the y coordinate of the world-space position of the current fragment to determine which material from the array to use.
Here are the relevant parts of my fragment shader:
#version 430
struct Material
{
vec3 ambient;
vec3 diffuse;
vec3 specular;
int shininess;
sampler2D diffuseTex;
bool hasDiffuseTex;
float maxY; //the upper bound of this material's layer in relation to the height of the mesh (in the range 0-1)
};
in vec2 TexCoords;
in vec3 WorldPos;
const int MAX_MATERIALS = 14;
uniform Material materials[MAX_MATERIALS];
uniform int materialCount; //the actual number of materials in the array
uniform float minY; //the minimum world-space y-coordinate in the mesh
uniform float maxY; //the maximum world-space y-coordinate in the mesh
out vec4 fragColor;
void main()
{
//calculate the y-position of this fragment in relation to the height of the mesh (in the range 0-1)
float y = (WorldPos.y - minY) / (maxY - minY);
//calculate the index into the materials array
int index = 0;
for (int i = 0; i < materialCount; ++i)
{
index += int(y > materials[i].maxY);
}
//calculate the ambient color
vec3 ambient = ...
//calculate the diffuse color
vec3 diffuse = ...
//sample from the texture
vec3 texColor = vec3(texture(materials[index].diffuseTex, TexCoords.xy));
//only multiply diffuse color with texture color if the material has a texture
diffuse += int(materials[index].hasDiffuseTex) * ((texColor * diffuse) - diffuse);
//calculate the specular color
vec3 specular = ...
fragColor = vec4(ambient + diffuse + specular, 1.0f);
}
It works fine if textures are not used:
But if one of the materials has a texture associated with it, it shows some black artifacts near the borders of the material layer which has the texture:
When I add this line after the diffuse calculation part:
if (index == 0 && int(materials[index].hasDiffuseTex) == 1 && texColor == vec3(0, 0, 0)) diffuse = vec3(1, 0, 0);
it draws the artifacts in red:
which tells me that the index is correct (0) but nothing is sampled from the texture.
Furthermore if I hardcode the index into the shader like this:
vec3 texColor = vec3(texture(materials[0].diffuseTex, TexCoords.xy));
it renders correctly. So I am guessing it has something to do with the indexing, but the index appears to be correct and the texture is there, so why doesn't it sample any color?
I have also found that if I switch the order of the materials and move their borders around in the GUI of my program in a certain way, it starts to render correctly from that point on, which I don't understand at all. I first suspected that I might be sending wrong uniform values to the shaders initially and that it somehow received the correct ones after I made the changes in the GUI. But I have since checked all the uniform values I send to the shader from the C++ side: they all appear to be correct from the start, and I don't see any other possible cause on the C++ side. So I am now thinking the problem is probably in the shader.
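One way to narrow this down (a diagnostic sketch added here, not code from the original post) is to take the computed index out of the sampler access entirely: sample every material with the constant loop counter and select the result arithmetically, the same trick the hasDiffuseTex blend already uses. If the artifacts disappear with this version, the dynamic indexing of materials[index].diffuseTex is the culprit; if they remain, the problem is elsewhere.
// replaces the single texture() call
vec3 texColor = vec3(0.0);
for (int i = 0; i < MAX_MATERIALS; ++i)
{
    // float(i == index) is 1.0 only for the selected material
    texColor += float(i == index) * vec3(texture(materials[i].diffuseTex, TexCoords.xy));
}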
This is the process I go through to render the scene:
Bind the MSAA x4 G-buffer (4 color attachments: Position, Normal, Color, and Unlit Color (skybox only); I also have a depth component/texture).
Draw SkyBox
Draw Geo
Blit all Color and Depth Components to a Single Sample FBO
Apply lighting (I use the depth texture to decide whether a fragment should be lit, by checking whether its depth value is less than 1).
Render Quad
And this is what is happening:
As you can see, I get these white and black artefacts around the edges instead of a smooth edge. (It is worth noting that if I remove the lighting and just render the texture, I don't get this and the edges smooth correctly.)
Here is my shader (it has SSAO implemented, but that does not seem to affect this).
#version 410 core
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D ssaoTex;
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedo;
uniform sampler2D gAlbedoUnlit;
uniform sampler2D gDepth;
uniform mat4 View;
struct Light {
vec3 Pos;
vec3 Color;
float Linear;
float Quadratic;
float Radius;
};
const int MAX_LIGHTS = 32;
uniform Light lights[MAX_LIGHTS];
uniform vec3 viewPos;
uniform bool SSAO;
void main()
{
vec3 color = texture(gAlbedo, Texcoord).rgb;
vec3 colorUnlit = texture(gAlbedoUnlit, Texcoord).rgb;
vec3 pos = texture(gPosition, Texcoord).rgb;
vec3 norm = normalize(texture(gNormal, Texcoord).rgb);
vec3 depth = texture(gDepth, Texcoord).rgb;
float ssaoValue = texture(ssaoTex, Texcoord).r;
// then calculate lighting as usual
vec3 lighting;
if(SSAO)
{
lighting = vec3(0.3 * color.rgb * ssaoValue); // hard-coded ambient component
}
else
{
lighting = vec3(0.3 * color.rgb); // hard-coded ambient component
}
vec3 posWorld = pos.rgb;
vec3 viewDir = normalize(viewPos - posWorld);
for(int i = 0; i < MAX_LIGHTS; ++i)
{
vec4 lightPos = View * vec4(lights[i].Pos,1.0);
vec3 normLight = normalize(lightPos.xyz);
float distance = length(lightPos.xyz - posWorld);
if(distance < lights[i].Radius)
{
// diffuse
vec3 lightDir = normalize(lightPos.xyz - posWorld);
vec3 diffuse = max(dot(norm.rgb, lightDir), 0.0) * color.rgb *
lights[i].Color;
float attenuation = 1.0 / (1.0 + lights[i].Linear * distance + lights[i].Quadratic * distance * distance);
lighting += (diffuse*attenuation);
}
}
if(depth.r >= 1)
{
outColor = vec4(colorUnlit, 1.0);
}
else
{
outColor = vec4(lighting, 1.0);
}
}
So the last if statement checks the depth texture: if a depth value was written (less than 1) the fragment is lit, otherwise only the skybox color is drawn (this is so lighting is not applied to the skybox).
I have spent a few days trying to work this out, changing the way I check whether a fragment should be lit (comparing normals, position and depth) and changing the formats to higher precision (e.g. using RGB16F instead of RGB8), but I can't figure out what is causing it, and doing the lighting per sample (using texelFetch) would be way too intensive.
Any Ideas?
This question is a bit old now but I thought I would say how I solved my issue.
I run a basic Sobel filter in my shader, which I use for screen-space outlines, but in addition I also check whether MSAA is enabled and, if so, compute the lighting per texel around the edge pixels!
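For reference, a sketch of that idea (a reconstruction added here under assumptions, not the poster's exact shader): for pixels flagged as edges by the Sobel pass, fetch each MSAA sample with texelFetch, light it with the same per-light loop as above, and average the results. The gPositionMS/gNormalMS/gAlbedoMS names and NUM_SAMPLES are assumptions; they stand for the G-buffer attachments bound as multisample textures.
uniform sampler2DMS gPositionMS;
uniform sampler2DMS gNormalMS;
uniform sampler2DMS gAlbedoMS;
const int NUM_SAMPLES = 4; // must match the MSAA level of the G-buffer

vec3 shadeEdgePixel(ivec2 coord)
{
    vec3 result = vec3(0.0);
    for (int s = 0; s < NUM_SAMPLES; ++s)
    {
        vec3 pos   = texelFetch(gPositionMS, coord, s).rgb;
        vec3 norm  = normalize(texelFetch(gNormalMS, coord, s).rgb);
        vec3 color = texelFetch(gAlbedoMS, coord, s).rgb;
        vec3 lit   = 0.3 * color; // ambient term, as in the shader above
        for (int i = 0; i < MAX_LIGHTS; ++i)
        {
            vec4 lightPos = View * vec4(lights[i].Pos, 1.0);
            float dist = length(lightPos.xyz - pos);
            if (dist < lights[i].Radius)
            {
                vec3 lightDir = normalize(lightPos.xyz - pos);
                vec3 diffuse = max(dot(norm, lightDir), 0.0) * color * lights[i].Color;
                float attenuation = 1.0 / (1.0 + lights[i].Linear * dist + lights[i].Quadratic * dist * dist);
                lit += diffuse * attenuation;
            }
        }
        result += lit;
    }
    return result / float(NUM_SAMPLES); // average of the individually lit samples
}
Edge pixels found by the Sobel pass would then output shadeEdgePixel(ivec2(gl_FragCoord.xy)) instead of the single-sample path, so only the silhouette pixels pay the extra cost.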
I want to texture my terrain without predetermined texture coordinates. I want to determine the coordinates in the vertex or fragment shader using the vertex position. I currently use the position's xz coordinates (up = (0,1,0)), but if I have, for example, a wall that stands at 90 degrees to the ground, the texture ends up like this:
How can I transform these coordinates so that this works well?
Here's my vertex shader:
#version 430
in layout(location=0) vec3 position;
in layout(location=1) vec2 textCoord;
in layout(location=2) vec3 normal;
out vec3 pos;
out vec2 text;
out vec3 norm;
uniform mat4 transformation;
void main()
{
gl_Position = transformation * vec4(position, 1.0);
norm = normal;
pos = position;
text = position.xz;
}
And here's my fragment shader:
#version 430
in vec3 pos;
in vec2 text;
in vec3 norm;
//uniform sampler2D textures[3];
layout(binding=3) uniform sampler2D texture_1;
layout(binding=4) uniform sampler2D texture_2;
layout(binding=5) uniform sampler2D texture_3;
vec3 lightPosition = vec3(-200, 700, 50);
vec3 lightAmbient = vec3(0,0,0);
vec3 lightDiffuse = vec3(1,1,1);
vec3 lightSpecular = vec3(1,1,1);
out vec4 fragColor;
vec4 theColor;
void main()
{
vec3 unNormPos = pos;
vec3 lightVector = normalize(lightPosition) - normalize(pos);
//lightVector = normalize(lightVector);
float cosTheta = clamp(dot(normalize(lightVector), normalize(norm)), 0.5, 1.0);
if(pos.y <= 120){
fragColor = texture(texture_2, text*0.05) * cosTheta;
}
if(pos.y > 120 && pos.y < 150){
fragColor = (texture(texture_2, text*0.05) * (1 - (pos.y-120)/29) + texture(texture_3, text*0.05) * ((pos.y-120)/29))*cosTheta;
}
if(pos.y >= 150)
{
fragColor = texture(texture_3, text*0.05) * cosTheta;
}
}
EDIT: (Fons)
text = 0.05 * (position.xz + vec2(0,position.y));
text = 0.05 * (position.xz + vec2(position.y,position.y));
Now the wall works, but the terrain does not.
The problem is actually a very difficult one, since you cannot devise a formula for the texture coordinates that displays vertical walls correctly, using only the xyz coordinates.
To visualize this, imagine a hill next to a piece of flat land. Since the path going over the hill is longer than the one going over the flat land, the texture should wrap more times on the hill than on the flat piece of land. In the image below, the texture wraps 5 times on the hill and 4 times on the flat piece.
If the texture coordinates are (0,0) on the left, should they be (4,0) or (5,0) on the right? Since both answers are valid, this proves that there is no function that calculates correct texture coordinates based purely on the xyz coordinates. :(
However, your problems might be solved with different methods:
The walls can be corrected by generating them independently from the terrain, and assigning correct texture coordinates to them. It actually makes more sense not to incorporate those in your terrain.
You can add more detail to the sides of steep hills with normal maps, textures of higher resolution, or a combination of different textures. There might be a better solution that I don't know about.
Edit: Triplanar mapping will solve your problem!
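A minimal triplanar sketch (added here for illustration, with assumed names; not code from the answer): sample the same texture along the three world axes, using the coordinate pair perpendicular to each axis, and blend the three results by the absolute value of the normal. Flat ground is dominated by the y projection (the existing position.xz mapping), while vertical walls are dominated by the x or z projection, so they no longer stretch.
vec3 triplanar(sampler2D tex, vec3 position, vec3 normal, float scale)
{
    vec3 blend = abs(normalize(normal));
    blend /= (blend.x + blend.y + blend.z); // weights sum to 1
    vec3 xProj = texture(tex, position.yz * scale).rgb; // projection along x
    vec3 yProj = texture(tex, position.xz * scale).rgb; // projection along y (flat ground)
    vec3 zProj = texture(tex, position.xy * scale).rgb; // projection along z
    return xProj * blend.x + yProj * blend.y + zProj * blend.z;
}
In the fragment shader above it would replace the plain lookup, e.g. fragColor = vec4(triplanar(texture_2, pos, norm, 0.05), 1.0) * cosTheta; instead of texture(texture_2, text*0.05) * cosTheta.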
Try:
text = position.xz + vec2(0, position.y);
Also, I recommend setting the *0.05 scale factor in the vertex shader instead of the fragment shader. The final code would be:
text = 0.05 * (position.xz + vec2(0, position.y));
I've been trying to implement a simple light / shading system, a simple Phong lighting system without specular lights to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem of the texture mipmaps, but disabling them didn't work. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or does anyone have an idea how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
// Pass uv coordinates
uv = vuv;
// Adjust normals
normal = nm * vnormal;
// Calculation of vertex in camera space
pos = (mv * vec4(vpos, 1.0)).xyz;
// Vector vertex -> light in camera space
light[0] = (mv * vec4(0.0,0.3,0.0,1.0)).xyz - pos;
light[1] = (mv * vec4(-6.0,0.3,0.0,1.0)).xyz - pos;
light[2] = (mv * vec4(0.0,0.3,4.8,1.0)).xyz - pos;
// Pass position after projection transformation
gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
vec3 n = normalize(normal);
// Ambient
color = 0.05 * texture(tex, uv).rgb;
// Diffuse lights
for (int i = 0; i < 3; i++) {
vec3 l = normalize(light[i]);
float cos = clamp(dot(n,l), 0.0, 1.0);
float length = length(light[i]);
color += 0.6 * texture(tex, uv).rgb * cos / pow(length, 2);
}
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, the length = length(light[i]); pow(length,2) expression is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i],light[i]) instead.
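In the loop above, that suggestion amounts to something like this (dot(v, v) gives |v|² directly, with no square root or pow call):
vec3 l = normalize(light[i]);
float cos = clamp(dot(n,l), 0.0, 1.0);
color += 0.6 * texture(tex, uv).rgb * cos / dot(light[i], light[i]);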
So I found information about my problem, which is described as "gradient banding" and also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
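For the dithering route, a minimal sketch (added here for illustration, not the solution actually used): add noise smaller than one 8-bit quantization step to the final color at the end of main(), so the bands are broken up before the framebuffer rounds the values. The noise function below is interleaved gradient noise keyed on the pixel position.
float noise = fract(52.9829189 * fract(dot(gl_FragCoord.xy, vec2(0.06711056, 0.00583715))));
color += (noise - 0.5) / 255.0; // well below one 8-bit step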
I've been trying to study OpenGL for a fun side project and ran into an issue while learning.
Below is a fragment shader:
#version 330 core
in vec3 Normal;
in vec3 Position;
in vec2 TexCoords;
out vec4 color;
uniform vec3 cameraPos;
uniform samplerCube skybox;
struct Material{
sampler2D diffuse0;
sampler2D specular0;
sampler2D emitter0;
sampler2D reflection0;
float shininess;
};
uniform Material material;
void main(){
vec3 I = normalize(Position - cameraPos);
vec3 R = reflect(I, normalize(Normal));
float intensity = 0;
intensity += texture(material.reflection0, TexCoords).x;
vec4 rfl = texture(skybox, R);
//this line doesnt produce anything
color = rfl * intensity;
}
When I use the code above, my model is completely gone from view.
But if I debug it separately, for example by changing the line
color = rfl * intensity;
to
color = rfl;
This actually renders and returns the following picture
And changing that line to
color = vec4(intensity);
It renders and returns the following picture
I've tried changing
color = rfl * some constant
//or
color = vec4(0.5) * intensity
Both rendered my model normally. I'm stumped as to why it doesn't render when I multiply rfl and intensity together. I think it might be because some values cause the multiplication to fail, but I have no idea what they might be.
When you change
color = rfl * intensity;
to
color = rfl;
the GLSL compiler will drop
uniform Material material;
due to optimization, since the material samplers are no longer referenced. The same happens with the skybox sampler when you change the line to:
color = vec4(intensity);
Make sure that your binding of the texture uniforms is correct.
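For example (an assumption added here, not something stated in the post), with #version 420 or the GL_ARB_shading_language_420pack extension the texture unit can be pinned in the shader itself, so a uniform being optimized away cannot silently shift the unit another sampler reads from:
layout(binding = 0) uniform samplerCube skybox; // cubemap expected on texture unit 0
// samplers inside the Material struct still get their units from the application,
// e.g. glUniform1i(glGetUniformLocation(program, "material.reflection0"), 1);
Otherwise, query each sampler's location after linking and set its unit explicitly, since locations can change whenever the set of active uniforms changes.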