How to avoid black lines between triangles on modern GPUs?

I am building a parametric 3D modeler with OBJ export.
I am really puzzled.
I changed my GPU last night and now there are cracks between the triangles; I can see what is behind them. My old card was an NVIDIA GTX 275 and the new one is an NVIDIA GTX 960. I didn't change anything in the code, shaders or otherwise. I use only flat colors (not textures).
On the image below, the black lines that cut the pentagon into triangles shouldn't be there.
It seems to be purely an OpenGL problem: when I export the model and look at it in Blender, the faces are contiguous and there are no duplicate vertices.
The shader code is quite simple:
_VERTEX2 = """
#version 330
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0 ) in vec3 position;
layout(location = 1 ) in vec4 color;
layout(location = 2 ) in vec3 normal;
varying vec4 baseColor;
// uniform mat4 proj;
layout(location = 0) uniform mat4 view;
layout(location = 4) uniform mat4 proj;
varying vec3 fragVertexEc;
void main(void) {
gl_Position = proj * view * vec4(position, 1.0);
fragVertexEc = (view * vec4(position, 1.0)).xyz;
baseColor = color;
}
"""
_FRAGMENT2 = """
#version 330
#extension GL_OES_standard_derivatives : enable
varying vec3 fragVertexEc;
varying vec4 baseColor;
const vec3 lightPosEc = vec3(0,0,10);
const vec3 lightColor = vec3(1.0,1.0,1.0);
void main()
{
vec3 X = dFdx(fragVertexEc);
vec3 Y = dFdy(fragVertexEc);
vec3 normal=normalize(cross(X,Y));
vec3 lightDirection = normalize(lightPosEc - fragVertexEc);
float light = max(0.0, dot(lightDirection, normal));
gl_FragColor = vec4(normal, 1.0);
gl_FragColor = vec4(baseColor.xyz * light, baseColor.w);
}
"""
The rendering code is also quite simple:
def draw(self, view_transform, proj, transform):
    self.shader.use()
    gl_wrap.glBindVertexArray(self.vao)
    try:
        self.vbo.bind()
        view_transform = view_transform * transform
        GL.glUniformMatrix4fv(0, 1, False, (ctypes.c_float*16)(*view_transform.toList()))
        GL.glUniformMatrix4fv(4, 1, False, (ctypes.c_float*16)(*proj.toList()))
        GL.glEnableVertexAttribArray(self.shader.attrib['position'])
        GL.glEnableVertexAttribArray(self.shader.attrib['color'])
        GL.glEnableVertexAttribArray(self.shader.attrib['normal'])
        STRIDE = 40
        GL.glVertexAttribPointer(
            self.shader.attrib['position'], len(Vector._fields), GL.GL_FLOAT,
            False, STRIDE, self.vbo)
        GL.glVertexAttribPointer(
            self.shader.attrib['color'], len(Color._fields), GL.GL_FLOAT,
            False, STRIDE, self.vbo+12)
        GL.glVertexAttribPointer(
            self.shader.attrib['normal'], len(Vector._fields), GL.GL_FLOAT,
            False, STRIDE, self.vbo+28)
        GL.glDrawElements(
            GL.GL_TRIANGLES,
            len(self.glindices),
            self.index_type,
            self.glindices)
    finally:
        self.vbo.unbind()
        gl_wrap.glBindVertexArray(0)
        self.shader.unuse()
A simple quad has the following data sent to OpenGL:
vertices (flat array of pos + rgba + normal):
[5.0, -3.061616997868383e-16, 5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17,
-5.0, 3.061616997868383e-16, -5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17,
-5.0, -3.061616997868383e-16, 5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17,
5.0, 3.061616997868383e-16, -5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17]
indices: [0, 1, 2, 0, 3, 1]

OK, got it!
The strange display is due to glEnable(GL_POLYGON_SMOOTH).
I had completely forgotten about it, as my old card wasn't doing anything visible with this setting.
But the new one has the correct behaviour: it anti-aliases each triangle independently, hence the black lines between them.
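For reference, a minimal sketch of the fix (written as C-style GL calls; the PyOpenGL equivalent in the draw code above would be GL.glDisable(GL.GL_POLYGON_SMOOTH)). If antialiasing is still wanted, a multisampled framebuffer is the usual replacement:

glDisable(GL_POLYGON_SMOOTH);  // stop anti-aliasing each triangle edge individually
// Assumes the window/context was created with a multisample pixel format;
// MSAA smooths the whole scene and does not produce seams on shared edges.
glEnable(GL_MULTISAMPLE);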
Many thanks to @Jerem who helped me cover the other possibilities.

Related

OpenGL shadow mapping - shadow map texture doesn't get sampled at all?

I'm currently working on an OpenGL project and I'm trying to get shadow mapping to work properly. I could get to the point where the shadow map gets rendered into a texture, but it doesn't seem to get applied to the scenery when rendered. Here are the most important bits of my code:
The shadow map vertex shader is basically a simple pass-through shader (it also handles some additional attributes like normals, but that shouldn't distract you); it just transforms the vertices so they are seen from the perspective of the light (it's a directional light, but since we need to assume a position, it's effectively a point far away):
#version 430 core
layout(location = 0) in vec3 v_position;
layout(location = 1) in vec3 v_normal;
layout(location = 2) in vec3 v_texture;
layout(location = 3) in vec4 v_color;
out vec3 f_texture;
out vec3 f_normal;
out vec4 f_color;
uniform mat4 modelMatrix;
uniform mat4 depthViewMatrix;
uniform mat4 depthProjectionMatrix;
// Shadow map vertex shader.
void main() {
mat4 mvp = depthProjectionMatrix * depthViewMatrix * modelMatrix;
gl_Position = mvp * vec4(v_position, 1.0);
// Passing attributes on to the fragment shader
f_texture = v_texture;
f_normal = (transpose(inverse(modelMatrix)) * vec4(v_normal, 1.0)).xyz;
f_color = v_color;
}
The shadow map fragment shader that writes the depth value to a texture:
#version 430 core
layout(location = 0) out float fragmentDepth;
in vec3 f_texture;
in vec3 f_normal;
in vec4 f_color;
uniform vec3 lightDirection;
uniform sampler2DArray texSampler;
// Shadow map fragment shader.
void main() {
fragmentDepth = gl_FragCoord.z;
}
The vertex shader that actually renders the scene, but also calculates the position of the current vertex from the light's point of view (shadowCoord) to compare against the depth texture; it also applies a bias matrix, since the coordinates aren't in the correct [0, 1] interval for sampling:
#version 430 core
layout(location = 0) in vec3 v_position;
layout(location = 1) in vec3 v_normal;
layout(location = 2) in vec3 v_texture;
layout(location = 3) in vec4 v_color;
out vec3 f_texture;
out vec3 f_normal;
out vec4 f_color;
out vec3 f_shadowCoord;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 depthViewMatrix;
uniform mat4 depthProjectionMatrix;
// Simple vertex shader.
void main() {
mat4 mvp = projectionMatrix * viewMatrix * modelMatrix;
gl_Position = mvp * vec4(v_position, 1.0);
// This bias matrix adjusts the projection of a given vertex on a texture to be within 0 and 1 for proper sampling
mat4 depthBias = mat4(0.5, 0.0, 0.0, 0.5,
0.0, 0.5, 0.0, 0.5,
0.0, 0.0, 0.5, 0.5,
0.0, 0.0, 0.0, 1.0);
mat4 depthMVP = depthProjectionMatrix * depthViewMatrix * modelMatrix;
mat4 biasedDMVP = depthBias * depthMVP;
// Passing attributes on to the fragment shader
f_shadowCoord = (biasedDMVP * vec4(v_position, 1.0)).xyz;
f_texture = v_texture;
f_normal = (transpose(inverse(modelMatrix)) * vec4(v_normal, 1.0)).xyz;
f_color = v_color;
}
The fragment shader that applies textures from a texture array and receives the depth texture (uniform sampler2D shadowMap) and checks if a fragment is behind something:
#version 430 core
in vec3 f_texture;
in vec3 f_normal;
in vec4 f_color;
in vec3 f_shadowCoord;
out vec4 color;
uniform vec3 lightDirection;
uniform sampler2D shadowMap;
uniform sampler2DArray tileTextureArray;
// Very basic fragment shader.
void main() {
float visibility = 1.0;
if (texture(shadowMap, f_shadowCoord.xy).z < f_shadowCoord.z) {
visibility = 0.5;
}
color = texture(tileTextureArray, f_texture) * visibility;
}
And finally: the function that renders multiple chunks to generate the shadow map and then renders the scene with the shadow map applied:
// Generating the shadow map
glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
m_shadowShader->bind();
glViewport(0, 0, 1024, 1024);
glDisable(GL_CULL_FACE);
glm::vec3 lightDir = glm::vec3(1.0f, -0.5f, 1.0f);
glm::vec3 sunPosition = FPSCamera::getPosition() - lightDir * 64.0f;
glm::mat4 depthViewMatrix = glm::lookAt(sunPosition, FPSCamera::getPosition(), glm::vec3(0, 1, 0));
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-100.0f, 100.0f, -100.0f, 100.0f, 0.1f, 800.0f);
m_shadowShader->setUniformMatrix("depthViewMatrix", depthViewMatrix);
m_shadowShader->setUniformMatrix("depthProjectionMatrix", depthProjectionMatrix);
for (Chunk *chunk : m_chunks) {
m_shadowShader->setUniformMatrix("modelMatrix", chunk->getModelMatrix());
chunk->drawElements();
}
m_shadowShader->unbind();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Normal draw call
m_chunkShader->bind();
glEnable(GL_CULL_FACE);
glViewport(0, 0, Window::getWidth(), Window::getHeight());
glm::mat4 viewMatrix = FPSCamera::getViewMatrix();
glm::mat4 projectionMatrix = FPSCamera::getProjectionMatrix();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glActiveTexture(GL_TEXTURE1);
m_textures->bind();
m_chunkShader->setUniformMatrix("depthViewMatrix", depthViewMatrix);
m_chunkShader->setUniformMatrix("depthProjectionMatrix", depthProjectionMatrix);
m_chunkShader->setUniformMatrix("viewMatrix", viewMatrix);
m_chunkShader->setUniformMatrix("projectionMatrix", projectionMatrix);
m_chunkShader->setUniformVec3("lightDirection", lightDir);
m_chunkShader->setUniformInteger("shadowMap", 0);
m_chunkShader->setUniformInteger("tileTextureArray", 1);
for (Chunk *chunk : m_chunks) {
m_chunkShader->setUniformMatrix("modelMatrix", chunk->getModelMatrix());
chunk->drawElements();
}
Most of the code should be self-explanatory: I bind an FBO with a texture attached, do a normal render call into that framebuffer so the depth ends up in the texture, and then try to pass that texture into the shader for the normal rendering pass. I've tested whether the texture gets generated properly, and it does: see the generated shadow map here.
However, when rendering the scene, all I see is this.
No shadows applied, visibility is 1.0 everywhere. I also use a debug context which works properly and logs errors when there are any, but it seems to be completely fine, no warnings or errors, so I'm the one doing something terribly wrong here. I'm on OpenGL 4.3 by the way.
Hopefully one of you can help me out on this, I've never got shadow maps to work before, this is the closest I've ever come, lol. Thanks in advance.
Commonly a mat4 OpenGL transformation matrix looks like this:
( X-axis.x, X-axis.y, X-axis.z, 0 )
( Y-axis.x, Y-axis.y, Y-axis.z, 0 )
( Z-axis.x, Z-axis.y, Z-axis.z, 0 )
( trans.x, trans.y, trans.z, 1 )
So your depthBias matrix, which you use to convert from normalized device coordinates (in range [-1, 1]) to texture coordinates (in range [0, 1]), should look like this:
mat4 depthBias = mat4(0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0);
or this:
mat4 depthBias = mat4(
vec4( 0.5, 0.0, 0.0, 0.0 ),
vec4( 0.0, 0.5, 0.0, 0.0 ),
vec4( 0.0, 0.0, 0.5, 0.0 ),
vec4( 0.5, 0.5, 0.5, 1.0 ) );
After you have transformed a vertex position by the model matrix, the view matrix and the projection matrix, the vertex position is in clip space (homogeneous coordinates). You have to convert from clip space to normalized device coordinates (cartesian coordinates in range [-1, 1]). This can be done by dividing by the w component of the homogeneous coordinate:
mat4 depthMVP = depthProjectionMatrix * depthViewMatrix * modelMatrix;
vec4 clipPos = depthMVP * vec4(v_position, 1.0);
vec4 ndcPos = vec4(clipPos.xyz / clipPos.w, 1.0);
f_shadowCoord = (depthBias * ndcPos).xyz;
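With the orthographic light projection used here, clipPos.w is 1.0, so the division is effectively a no-op; in that case the bias can also be folded into the matrix once per draw on the CPU instead of being rebuilt for every vertex. A rough sketch using glm (the depthBiasMVP uniform name is made up for illustration; the other names come from your render loop):

glm::mat4 depthBias(                     // glm takes columns, matching the GLSL constructor
    glm::vec4(0.5f, 0.0f, 0.0f, 0.0f),
    glm::vec4(0.0f, 0.5f, 0.0f, 0.0f),
    glm::vec4(0.0f, 0.0f, 0.5f, 0.0f),
    glm::vec4(0.5f, 0.5f, 0.5f, 1.0f));
glm::mat4 depthBiasMVP = depthBias * depthProjectionMatrix * depthViewMatrix * chunk->getModelMatrix();
m_chunkShader->setUniformMatrix("depthBiasMVP", depthBiasMVP);
// In the vertex shader: f_shadowCoord = (depthBiasMVP * vec4(v_position, 1.0)).xyz;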
A depth texture has one channel only. If you read data from the depth texture, then the data is contained in the x (or r) component of the vector.
Adapt the fragment shader code like this:
if ( texture(shadowMap, f_shadowCoord.xy).x < f_shadowCoord.z)
visibility = 0.5;
The Khronos Group's Image Format specification says:
Image formats do not have to store each component. When the shader
samples such a texture, it will still resolve to a 4-value RGBA
vector. The components not stored by the image format are filled in
automatically. Zeros are used if R, G, or B is missing, while a
missing Alpha always resolves to 1.
see further:
Data Type (GLSL)
GLSL Programming/Vector and Matrix Operations
Transform the modelMatrix
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
OpenGL Shadow map problems
Addition to the solution:
This is an important part of the solution, but there was another step needed to properly render the shadow map. The second mistake was using the wrong component of the texture to compare against f_shadowCoord.z: it should've been
texture(shadowMap, f_shadowCoord.xy).r
instead of
texture(shadowMap, f_shadowCoord.xy).z

Glitchy Facial Morph Target Animation in OpenGL (C++)

I've been trying to implement Morph Target animation in OpenGL with Facial Blendshapes by following this tutorial. The vertex shader for the animation looks something like this:
#version 400 core
in vec3 vNeutral;
in vec3 vSmile_L;
in vec3 nNeutral;
in vec3 nSmile_L;
in vec3 vSmile_R;
in vec3 nSmile_R;
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
uniform vec4 lightPosition;
out vec3 lPos;
out vec3 vPos;
out vec3 vNorm;
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
uniform float smile_w;
void main(){
//float smile_l_w = 0.9;
float neutral_w = 1 - 2 * smile_w;
clamp(neutral_w, 0.0, 1.0);
vec3 vPosition = neutral_w * vNeutral + smile_w * vSmile_L + smile_w * vSmile_R;
vec3 vNormal = neutral_w * nNeutral + smile_w * nSmile_L + smile_w * nSmile_R;
//vec3 vPosition = vNeutral + (vSmile_L - vNeutral) * smile_w;
//vec3 vNormal = nNeutral + (nSmile_L - nNeutral) * smile_w;
normalize(vPosition);
normalize(vNormal);
mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
pos.x, pos.y, pos.z, 1.0);
mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
0.0, size.y, 0.0, 0.0,
0.0, 0.0, size.z, 0.0,
0.0, 0.0, 0.0, 1.0);
mat4 model = translate * scale * quaternion;
vec3 n = normalize(cameraPosition - lookAtPosition);
vec3 u = normalize(cross(upVector, n));
vec3 v = cross(n, u);
mat4 view=mat4(u.x,v.x,n.x,0,
u.y,v.y,n.y,0,
u.z,v.z,n.z,0,
dot(-u,cameraPosition),dot(-v,cameraPosition),dot(-n,cameraPosition),1);
mat4 modelView = view * model;
float p11=((2.0*near)/(right-left));
float p31=((right+left)/(right-left));
float p22=((2.0*near)/(top-bottom));
float p32=((top+bottom)/(top-bottom));
float p33=-((far+near)/(far-near));
float p43=-((2.0*far*near)/(far-near));
mat4 projection = mat4(p11, 0, 0, 0,
0, p22, 0, 0,
p31, p32, p33, -1,
0, 0, p43, 0);
//lighting calculation
vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
vec4 lightInEye = view * lightPosition;
vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));
lPos = lightInEye.xyz;
vPos = vertexInEye.xyz;
vNorm = normalInEye.xyz;
gl_Position = projection * modelView * vec4(vPosition, 1.0);
}
Although the algorithm for morph target animation works, I get missing faces on the final calculated blend shape. The animation looks something like the following GIF.
The blendshapes are exported from a markerless facial animation software known as FaceShift.
But the algorithm also works perfectly on a normal cuboid with its twisted blend shape created in Blender:
Could it be something wrong with the blendshapes I am using for the facial animation? Or am I doing something wrong in the vertex shader?
Update:
So as suggested, I made the changes required to the vertex shader, and made a new animation, and still I am getting the same results.
Here's the updated vertex shader code:
#version 400 core
in vec3 vNeutral;
in vec3 vSmile_L;
in vec3 nNeutral;
in vec3 nSmile_L;
in vec3 vSmile_R;
in vec3 nSmile_R;
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
uniform vec4 lightPosition;
out vec3 lPos;
out vec3 vPos;
out vec3 vNorm;
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
uniform float smile_w;
void main(){
float neutral_w = 1.0 - smile_w;
float neutral_f = clamp(neutral_w, 0.0, 1.0);
vec3 vPosition = neutral_f * vNeutral + smile_w/2 * vSmile_L + smile_w/2 * vSmile_R;
vec3 vNormal = neutral_f * nNeutral + smile_w/2 * nSmile_L + smile_w/2 * nSmile_R;
mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
pos.x, pos.y, pos.z, 1.0);
mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
0.0, size.y, 0.0, 0.0,
0.0, 0.0, size.z, 0.0,
0.0, 0.0, 0.0, 1.0);
mat4 model = translate * scale * quaternion;
vec3 n = normalize(cameraPosition - lookAtPosition);
vec3 u = normalize(cross(upVector, n));
vec3 v = cross(n, u);
mat4 view=mat4(u.x,v.x,n.x,0,
u.y,v.y,n.y,0,
u.z,v.z,n.z,0,
dot(-u,cameraPosition),dot(-v,cameraPosition),dot(-n,cameraPosition),1);
mat4 modelView = view * model;
float p11=((2.0*near)/(right-left));
float p31=((right+left)/(right-left));
float p22=((2.0*near)/(top-bottom));
float p32=((top+bottom)/(top-bottom));
float p33=-((far+near)/(far-near));
float p43=-((2.0*far*near)/(far-near));
mat4 projection = mat4(p11, 0, 0, 0,
0, p22, 0, 0,
p31, p32, p33, -1,
0, 0, p43, 0);
//lighting calculation
vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
vec4 lightInEye = view * lightPosition;
vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));
lPos = lightInEye.xyz;
vPos = vertexInEye.xyz;
vNorm = normalInEye.xyz;
gl_Position = projection * modelView * vec4(vPosition, 1.0);
}
Also, my fragment shader looks something like this (I just added new material settings compared to earlier):
#version 400 core
uniform vec4 lightColor;
uniform vec4 diffuseColor;
in vec3 lPos;
in vec3 vPos;
in vec3 vNorm;
void main(){
//copper like material light settings
vec4 ambient = vec4(0.19125, 0.0735, 0.0225, 1.0);
vec4 diff = vec4(0.7038, 0.27048, 0.0828, 1.0);
vec4 spec = vec4(0.256777, 0.137622, 0.086014, 1.0);
vec3 L = normalize (lPos - vPos);
vec3 N = normalize (vNorm);
vec3 Emissive = normalize(-vPos);
vec3 R = reflect(-L, N);
float dotProd = max(dot(R, Emissive), 0.0);
vec4 specColor = lightColor*spec*pow(dotProd,0.1 * 128);
vec4 diffuse = lightColor * diff * (dot(N, L));
gl_FragColor = ambient + diffuse + specColor;
}
And finally the animation I got from updating the code:
As you can see, I am still getting some missing triangles/faces in the morph target animation. Any more suggestions/comments regarding the issue would be really helpful. Thanks again in advance. :)
Update:
So as suggested, I flipped the normals if dot(vSmile_R, nSmile_R) < 0 and I got the following image result.
Also, instead of getting the normals from the obj files, I tried calculating my own (face and vertex normals) and still I got the same result.
This is not an answer attempt; I just need more formatting than is available in comments.
I cannot tell which data was actually exported from FaceShift and how that was put into the custom ADTs of the app; my crystal ball is currently busy predicting the FIFA World Cup results.
But generally, a linear morph is a very simple thing:
There is one vector "I" of data for the initial mesh and a vector "F" of equal size for the position data of the final mesh; their count and ordering must match for the tessellation to remain intact.
Given j ∈ [0, count), corresponding vectors initial_j = I[j], final_j = F[j] and a morph factor λ ∈ [0, 1], the j-th (zero-based) current vector current_j(λ) is given by
current_j(λ) = initial_j + λ · (final_j - initial_j) = (1 - λ) · initial_j + λ · final_j.
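That blend is exactly what GLSL's mix() computes; the same thing can be written on the CPU with glm for checking the data. A minimal illustrative sketch (the container and function names are made up):

#include <glm/glm.hpp>
#include <vector>

// Linear morph between two matching vertex buffers: initial and final must
// have the same size and ordering, lambda is the morph factor in [0, 1].
std::vector<glm::vec3> morph(const std::vector<glm::vec3>& initial,
                             const std::vector<glm::vec3>& final_,
                             float lambda)
{
    std::vector<glm::vec3> current(initial.size());
    for (std::size_t j = 0; j < initial.size(); ++j)
        current[j] = glm::mix(initial[j], final_[j], lambda);  // (1 - lambda)*I[j] + lambda*F[j]
    return current;
}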
From this perspective, this
vec3 vPosition = neutral_w * vNeutral +
smile_w/2 * vSmile_L + smile_w/2 * vSmile_R;
looks dubious at best.
As I said, my crystal ball is currently defunct, but the naming would imply that, given the OpenGL standard reference frame,
vSmile_L = vSmile_R * (-1,1,1),
this "*" denoting component-wise multiplication, and that in turn would imply cancelling out the morph x-component by above addition.
But apparently, the face does not degenerate into a plane (a line from the projected pov), so the meaning of those attributes is unclear.
That's the reason why I want to look at the effective data, as stated in the comments.
Another thing, not related to the effect in question but to the shading algorithm.
As stated in the answer to Can OpenGL shader compilers optimize expressions on uniforms?, the shader optimizer could well optimize pure uniform expressions like the M/V/P calculations done with
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
/* */
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
but I find it highly optimistic to rely on such assumed optimizations.
If it is not optimized accordingly, doing this means doing it once per vertex per frame; for a human face with an LOD of 1000 vertices at 60 Hz, that is 60,000 times per second on the GPU, instead of once per frame on the CPU.
No modern CPU would give up its soul if these calculations were put on its shoulders, so passing the common trinity of M/V/P matrices as uniforms seems appropriate, instead of constructing those matrices in the shader.
For reusing the code from the shaders, glm provides a very GLSL-ish way to do GL-related maths in C++.
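A rough sketch of that split (all names are illustrative; it assumes a GL loader such as GLEW is already set up and that the rotation is available as a mat4, like the quaternion uniform above):

#include <GL/glew.h>                     // or whichever GL loader the project uses
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // translate, scale, lookAt, frustum
#include <glm/gtc/type_ptr.hpp>          // value_ptr

// Build model/view/projection once per frame on the CPU and upload them; the
// vertex shader then only declares "uniform mat4 model, view, projection;"
// and does gl_Position = projection * view * model * vec4(vPosition, 1.0).
void uploadMatrices(GLuint program,
                    const glm::vec3& pos, const glm::vec3& size, const glm::mat4& rotation,
                    const glm::vec3& cameraPosition, const glm::vec3& lookAtPosition,
                    const glm::vec3& upVector,
                    float left, float right, float bottom, float top,
                    float nearPlane, float farPlane)
{
    glUseProgram(program);  // glUniform* affects the currently bound program

    glm::mat4 model = glm::translate(glm::mat4(1.0f), pos)
                    * glm::scale(glm::mat4(1.0f), size)
                    * rotation;
    glm::mat4 view = glm::lookAt(cameraPosition, lookAtPosition, upVector);
    glm::mat4 projection = glm::frustum(left, right, bottom, top, nearPlane, farPlane);

    glUniformMatrix4fv(glGetUniformLocation(program, "model"), 1, GL_FALSE, glm::value_ptr(model));
    glUniformMatrix4fv(glGetUniformLocation(program, "view"), 1, GL_FALSE, glm::value_ptr(view));
    glUniformMatrix4fv(glGetUniformLocation(program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
}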
I had a very similar problem some time ago. As you eventually noticed, your problem most probably lies in the mesh itself. In my case, it was inconsistent mesh triangulation. Using the Triangulate Modifier in Blender solved the problem for me. Perhaps you should give it a try too.

Lighting doesn't show in OpenGL

I'm trying to do point source directional lighting in OpenGL using my textbook's examples. I'm showing a rectangle centered at the origin, and doing the lighting computations in the shader. The rectangle appears, but it is black even when I try to put colored lights on it. Normals for the rectangle are all (0, 1.0, 0). I'm not doing any non-uniform scaling, so the regular model view matrix should also transform the normals.
I have code that sets the light parameters (as uniforms) and material parameters (also as uniforms) for the shader. There is no per-vertex color information.
void InitMaterial()
{
color material_ambient = color(1.0, 0.0, 1.0);
color material_diffuse = color(1.0, 0.8, 0.0);
color material_specular = color(1.0, 0.8, 0.0);
float material_shininess = 100.0;
// set uniforms for current program
glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialAmbient"), 1, material_ambient);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialDiffuse"), 1, material_diffuse);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialSpecular"), 1, material_specular);
glUniform1f(glGetUniformLocation(Programs[lightingType], "shininess"), material_shininess);
}
For the lights:
void InitLight()
{
// need light direction and light position
point4 light_position = point4(0.0, 0.0, -1.0, 0.0);
color light_ambient = color(0.2, 0.2, 0.2);
color light_diffuse = color(1.0, 1.0, 1.0);
color light_specular = color(1.0, 1.0, 1.0);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightPosition"), 1, light_position);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightAmbient"), 1, light_ambient);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightDiffuse"), 1, light_diffuse);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightSpecular"), 1, light_specular);
}
The fragment shader is a simple pass-through shader that sets the color to the one input from the vertex shader. Here is the vertex shader:
#version 150
in vec4 vPosition;
in vec3 vNormal;
out vec4 color;
uniform vec4 materialAmbient, materialDiffuse, materialSpecular;
uniform vec4 lightAmbient, lightDiffuse, lightSpecular;
uniform float shininess;
uniform mat4 modelView;
uniform vec4 lightPosition;
uniform mat4 projection;
void main()
{
// Transform vertex position into eye coordinates
vec3 pos = (modelView * vPosition).xyz;
vec3 L = normalize(lightPosition.xyz - pos);
vec3 E = normalize(-pos);
vec3 H = normalize(L + E);
// Transform vertex normal into eye coordinates
vec3 N = normalize(modelView * vec4(vNormal, 0.0)).xyz;
// Compute terms in the illumination equation
vec4 ambient = materialAmbient * lightAmbient;
float Kd = max(dot(L, N), 0.0);
vec4 diffuse = Kd * materialDiffuse * lightDiffuse;
float Ks = pow(max(dot(N, H), 0.0), shininess);
vec4 specular = Ks * materialSpecular * lightSpecular;
if(dot(L, N) < 0.0) specular = vec4(0.0, 0.0, 0.0, 1.0);
gl_Position = projection * modelView * vPosition;
color = ambient + diffuse + specular;
color.a = 1.0;
}
OK, it's working now. The solution was to replace glUniform3fv with glUniform4fv, I guess because the GLSL counterpart is a vec4 instead of a vec3. I thought it would be able to recognize this and simply add a 1.0 to the end, but no.
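In other words, the size of the glUniform* call has to match the declared GLSL type: glUniform3fv on a vec4 uniform generates GL_INVALID_OPERATION and leaves the uniform unchanged instead of padding it. A sketch of the corrected upload for one of the materials, using a plain 4-float array in place of the book's color type:

GLfloat material_ambient[4] = { 1.0f, 0.0f, 1.0f, 1.0f };  // same RGB as before, alpha added explicitly
glUniform4fv(glGetUniformLocation(Programs[lightingType], "materialAmbient"), 1, material_ambient);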

Sun shader not working

I'm trying to get a sun shader to work, but I can't get it working.
What I currently get is a quarter of a circle/ellipse in the lower left of my screen, and it is stuck to the screen (if I move the camera, it moves along with it).
All I do is render two triangles to form a screen-covering quad, with screen width and height in uniforms.
Vertex Shader
#version 430 core
void main(void) {
const vec4 vertices[6] = vec4[](
vec4(-1.0, -1.0, 1.0, 1.0),
vec4(-1.0, 1.0, 1.0, 1.0),
vec4(1.0, 1.0, 1.0, 1.0),
vec4(1.0, 1.0, 1.0, 1.0),
vec4(1.0, -1.0, 1.0, 1.0),
vec4(-1.0, -1.0, 1.0, 1.0)
);
gl_Position = vertices[gl_VertexID];
}
Fragment Shader
#version 430 core
layout(location = 7) uniform int screen_width;
layout(location = 8) uniform int screen_height;
layout(location = 1) uniform mat4 view_matrix;
layout(location = 2) uniform mat4 proj_matrix;
out vec4 color;
uniform vec3 light_pos = vec3(-20.0, 7.5, -20.0);
void main(void) {
//calculate light position in screen space and get x, y components
vec2 screen_space_light_pos = (proj_matrix * view_matrix * vec4(light_pos, 1.0)).xy;
//calculate fragment position in screen space
vec2 screen_space_fragment_pos = vec2(gl_FragCoord.x / screen_width, gl_FragCoord.y / screen_height);
//check if it is in the radius of the sun
if (length(screen_space_light_pos - screen_space_fragment_pos) < 0.1) {
color = vec4(1.0, 1.0, 0.0, 1.0);
}
else {
discard;
}
}
What I think it does:
1. Get the position of the sun (light_pos) in screen space.
2. Get the fragment position in screen space.
3. If the distance between them is below a certain value, draw the fragment in yellow; else discard it.
screen_space_light_pos is still in clip space, not yet in normalized device coordinates. You've missed the perspective division:
vec3 before_division = (proj_matrix * view_matrix * vec4(light_pos, 1.0)).xyw;
vec2 screen_space_light_pos = before_division.xy / before_division.z;
With common proj_matrix configurations, screen_space_light_pos will be in [-1,1]x[-1,1]. To match screen_space_fragment_pos range, you probably need to adjust screen_space_light_pos:
screen_space_light_pos = screen_space_light_pos * 0.5 + 0.5;
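Since light_pos and the matrices are constant for the whole frame, another option is to do this projection once on the CPU and hand the shader the finished value as a uniform; a rough sketch with glm (the program handle, the uniform name, and the CPU-side copies of proj_matrix, view_matrix and light_pos are all assumptions):

// Project the sun once, divide by w, remap from NDC [-1, 1] to [0, 1],
// then upload; needs <glm/gtc/type_ptr.hpp> for glm::value_ptr.
glm::vec4 clip = proj_matrix * view_matrix * glm::vec4(light_pos, 1.0f);
glm::vec2 sun_screen = (glm::vec2(clip) / clip.w) * 0.5f + 0.5f;
glUniform2fv(glGetUniformLocation(program, "screen_space_light_pos"), 1, glm::value_ptr(sun_screen));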

glsl light doesn't seem to be working

I'm working on some basic lighting in my application, and am unable to get a simple light to work (so far..).
Here's the vertex shader:
#version 150 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform mat4 pvmMatrix;
uniform mat3 normalMatrix;
in vec3 in_Position;
in vec2 in_Texture;
in vec3 in_Normal;
out vec2 textureCoord;
out vec4 pass_Color;
uniform LightSources {
vec4 ambient = vec4(0.5, 0.5, 0.5, 1.0);
vec4 diffuse = vec4(0.5, 0.5, 0.5, 1.0);
vec4 specular = vec4(0.5, 0.5, 0.5, 1.0);
vec4 position = vec4(1.5, 7, 0.5, 1.0);
vec4 direction = vec4(0.0, -1.0, 0.0, 1.0);
} lightSources;
struct Material {
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
Material mymaterial = Material(
vec4(1.0, 0.8, 0.8, 1.0),
vec4(1.0, 0.8, 0.8, 1.0),
vec4(1.0, 0.8, 0.8, 1.0),
0.995
);
void main() {
gl_Position = pvmMatrix * vec4(in_Position, 1.0);
textureCoord = in_Texture;
vec3 normalDirection = normalize(normalMatrix * in_Normal);
vec3 lightDirection = normalize(vec3(lightSources.direction));
vec3 diffuseReflection = vec3(lightSources.diffuse) * vec3(mymaterial.diffuse) * max(0.0, dot(normalDirection, lightDirection));
/*
float bug = 0.0;
bvec3 result = equal( diffuseReflection, vec3(0.0, 0.0, 0.0) );
if(result[0] && result[1] && result[2]) bug = 1.0;
diffuseReflection.x += bug;
*/
pass_Color = vec4(diffuseReflection, 1.0);
}
And here's the fragment shader:
#version 150 core
uniform sampler2D texture;
in vec4 pass_Color;
in vec2 textureCoord;
void main() {
vec4 out_Color = texture2D(texture, textureCoord);
gl_FragColor = pass_Color;
//gl_FragColor = out_Color;
}
I'm rendering a textured wolf to the screen as a test. If I change the fragment shader to use out_Color, I see the wolf rendered properly. If I use the pass_Color, I see nothing on the screen.
This is what the screen looks like when I use out_Color in the fragment shader:
I know the diffuseReflection vector is full of zeros, which I found out by uncommenting this code in the vertex shader:
...
/*
float bug = 0.0;
bvec3 result = equal( diffuseReflection, vec3(0.0, 0.0, 0.0) );
if(result[0] && result[1] && result[2]) bug = 1.0;
diffuseReflection.x += bug;
*/
...
This will make the x component of the diffuseReflection vector 1.0, which turns the wolf red.
Does anyone see anything obvious I'm doing wrong here?
As suggested in the comments, try debugging incrementally. I see a number of ways your shader could be wrong. Maybe your normalMatrix isn't being passed properly? Maybe your in_Normal isn't mapped to the appropriate input? Maybe when you're casting lightSources.direction to a vec3, the compiler is doing something funky? Maybe your shader isn't even running at all, but you think it is? Maybe you have geometry or tessellation stages and the data isn't being passed through correctly?
No one really has a chance of answering this correctly. As for me, it looks fine, but then again, any of the factors above could be at play, and probably more.
To debug this, you need to break it down. As suggested in the comments, try rendering the normals. Then, you should try rendering the light direction. Then render your n dot l term. Then multiply by your material parameters, then by your texture. Somewhere along the way you'll figure out the problem. As an additional tip, change your clear color to something other than black so that any black-rendered objects stand out.
It's worth noting that the above advice to break things down applies to all debugging, not just shaders. As I see it, you haven't done so here.