Vertex shader (GLSL) strange behaviour; does not draw - OpenGL

I'm trying to implement skeletal animation using my vertex shader. I pass the indices and weights of my vertices as attributes, and upon drawing I pass the animation matrix for every bone as an array to my shader.
Now for some reason, when I add the calculation to my shader, my model disappears. Even if I do not use the result of the calculation anywhere in my shader, my model still fails to draw. No errors are thrown.
I've done a lot of testing and found out that it only happens when I try to access animationMatrices[60]. As far as I've seen, any index below 60 works. This is weird, however, since I only have 53 bones in my model.
To illustrate, the following code doesn't draw anything:
#version 330
uniform vec3 light = vec3(10,2,8);
uniform mat4 modelmatrix;
uniform mat4[64] animationMatrices;
attribute vec3 a_tangent;
attribute vec4 a_boneIndices;
attribute vec4 a_boneWeights;
out vec3 lightVec;
out vec3 eyeVec;
out vec4 test;
void main()
{
    vec4 newVert = (animationMatrices[60] * gl_Vertex) * a_boneWeights.x + gl_Vertex;
    vec4 vertPos = modelmatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * vertPos;
    eyeVec = normalize(-vec3(gl_ModelViewMatrix * vertPos));
    gl_TexCoord[0] = gl_MultiTexCoord0;
    //some more stuff here, but not important
}
If I remove or comment out the newVert line (or change the index to something lower than 60), my model shows up and the shader works fine:
vec4 newVert = (animationMatrices[60] * gl_Vertex) * a_boneWeights.x + gl_Vertex;
As soon as I re-introduce this line, nothing shows up anymore, even though newVert is not used anywhere in the shader.
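For reference, an array of 64 mat4 uniforms by itself occupies 1024 float components, which is exactly the minimum GL_MAX_VERTEX_UNIFORM_COMPONENTS an OpenGL 3.3 implementation is required to provide, so a large indexed array can run into the hardware's uniform budget. A minimal sketch for checking the actual limit (assuming a current GL context):
GLint maxComponents = 0;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxComponents);
// Each mat4 consumes 16 float components, so 64 matrices need 1024.
printf("max vertex uniform components: %d (%d mat4s)\n",
       maxComponents, maxComponents / 16);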

Assuming modelmatrix is not an identity matrix, you are transforming your vertex incorrectly.
First from object-space to world-space:
vec4 vertPos = modelmatrix * gl_Vertex;
And then from (what is supposed to be) object-space to clip-space:
gl_Position = gl_ModelViewProjectionMatrix * vertPos; // vertPos is in world-space!!!
You need to stop doing this, or isolate your View and Projection matrices.
Possible solutions:
1. vertPos = gl_Vertex;
2. gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
3. gl_Position = projectionmatrix * viewmatrix * vertPos;
A few lines further down, you compute something called eyeVec:
eyeVec = normalize(-vec3(gl_ModelViewMatrix*vertPos));
You probably should not be touching vertPos at all; the only way this shader will work properly is if vertPos = gl_Vertex (potential solution #1).
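To make that concrete, here is a minimal sketch of the question's main() with solution #1 applied (only the lines shown above; the skinning term is omitted since it is the subject of the original bug):
void main()
{
    // Solution #1: keep vertPos in object space, so the built-in
    // ModelViewProjection/ModelView matrices receive what they expect.
    vec4 vertPos = gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * vertPos;
    eyeVec = normalize(-vec3(gl_ModelViewMatrix * vertPos));
    gl_TexCoord[0] = gl_MultiTexCoord0;
}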

Related

Render a set of 3d points as rectangles while maintaining aspect ratio

I have a set of 3D points and I want to render each of these points as a rectangle (for simplicity). I want these rectangles to simulate the behaviour of 3D objects, in the sense that they maintain their aspect ratio with respect to the camera. Basically, I want them to do something like this:
Here is what I do: in the vertex shader I basically do nothing and just pass the vertex down the pipeline:
gl_Position = vec4(vtx_position, 1.0);
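For context, the complete pass-through vertex shader would look something like this (a sketch; vtx_position is the attribute name from the snippet, the location is an assumption):
#version 330
layout(location = 0) in vec3 vtx_position;
void main()
{
    // Forward the raw position unchanged; the geometry shader applies
    // the model-view and projection transforms.
    gl_Position = vec4(vtx_position, 1.0);
}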
In the geometry shader I try to generate these rectangles by transforming the input vertices into view space, generating 4 output vertices with the same offsets from the input, and emitting them after multiplying by the projection matrix:
#version 330
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform mat4 MV;
uniform mat4 PROJ;
uniform float size;
void main()
{
    vec4 position = MV * gl_in[0].gl_Position;
    gl_Position = position;
    gl_Position.xy += vec2(-size, -size);
    gl_Position = PROJ * gl_Position;
    EmitVertex();
    gl_Position = position;
    gl_Position.xy += vec2(-size, size);
    gl_Position = PROJ * gl_Position;
    EmitVertex();
    gl_Position = position;
    gl_Position.xy += vec2(size, -size);
    gl_Position = PROJ * gl_Position;
    EmitVertex();
    gl_Position = position;
    gl_Position.xy += vec2(size, size);
    gl_Position = PROJ * gl_Position;
    EmitVertex();
    EndPrimitive();
}
Finally, in the fragment shader I just fill them with color. However, the output looks like this:
While each rectangle is positioned correctly, their sizes are off. What did I do wrong? What should be done to achieve a result like in the first picture?
As it turned out, while I was searching for the problem in the shaders, I was also multiplying the size uniform by the aspect ratio in the C++ code.
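In other words, the shaders were fine; the half-extent just needs to be uploaded unscaled, because the projection matrix already accounts for the aspect ratio. A hedged sketch of the host-side fix (sizeLoc and halfExtent are hypothetical names):
// Wrong (stretched quads): the offset is pre-scaled by the aspect
// ratio, which PROJ then effectively applies a second time.
// glUniform1f(sizeLoc, halfExtent * aspectRatio);

// Fixed: upload the view-space half-extent as-is.
glUniform1f(sizeLoc, halfExtent);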

GLSL shader uniform locations can't be found

I'm working on a vertex skinning shader, and for some reason my program can't find the uniform locations.
Vertex shader code:
#version 330
const int MAX_JOINTS = 30;
const int MAX_WEIGHTS = 3;
in vec3 position;
in vec2 textureCoords;
in vec3 normal;
in ivec3 boneIndices;
in vec3 weights;
out vec4 fragPos;
out vec3 n;
out vec2 texCoords;
out vec4 mcolor;
uniform mat4 modelMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 normalMatrix;
uniform mat4[MAX_JOINTS] boneTransforms;
void main() {
    vec4 totalLocalPos = vec4(0.0);
    vec4 totalNormal = vec4(0.0);
    for(int i = 0; i < MAX_WEIGHTS; i++){
        mat4 boneTransform = boneTransforms[boneIndices[i]];
        vec4 posePosition = boneTransform * vec4(position, 1);
        totalLocalPos += posePosition * weights[i];
        vec4 worldNormal = boneTransform * vec4(normal, 1);
        totalNormal += worldNormal * weights[i];
    }
    texCoords = textureCoords;
    fragPos = modelMatrix * vec4(position,1);
    n = totalNormal.xyz;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * totalLocalPos;
}
The boneTransforms uniform doesn't seem to be set correctly; if I query the active uniforms with
GLint uniforms;
glGetProgramiv(shaderProgramID, GL_ACTIVE_UNIFORMS, &uniforms);
for (int i = 0; i < uniforms; i++){
    int name_len = -1, num = -1;
    GLenum type = GL_ZERO;
    char name[100];
    glGetActiveUniform(shaderProgramID, GLuint(i), sizeof(name) - 1,
                       &name_len, &num, &type, name);
    name[name_len] = 0;
    printf("active uniform: %s\n", name); // list what the linker kept
}
I always get zero. However, if I just put gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position,1) I get the expected result (correct rendering without any vertex skinning), so the other transforms appear to be working despite the query telling me they don't exist?
EDIT: this is sometimes the case, other times I get the model at position (0,0,0) but otherwise rendered correctly with this.
I have read about the compiler stripping unused/inactive uniforms, but if I use boneTransforms to calculate totalLocalPos and use that for gl_Position, the uniform should be active.
I try to set the uniform with
vector<glm::mat4> boneTransforms = model.getBoneTransforms();
int location = glGetUniformLocation(shaderProgramID, "boneTransforms");
glUniformMatrix4fv(location, boneTransforms.size(), false, (GLfloat*)&boneTransforms);
location is always -1.
Is there something wrong with how I try to set this particular uniform, or is the mistake in the shader code?
EDIT2: I just noticed that the behaviour of my shader changes when I add or remove objects (which use a different shader) from the scene. I don't know what to make of that.
EDIT3: If I remove all other meshes from my scene the shader crashes with an access violation. As long as one other object is being rendered there are currently no crashes.
Another EDIT: Apparently accessing the weights variable crashes my shader.
I was reading this quick tutorial about vertex skinning shaders found here: khronos. It uses a slightly older version of GLSL, but it does mention the multiplication of the MVP matrix (model-view-projection matrix), or in your case the PVM matrix (projection-view-model matrix), with the vec4 total position that gets stored back into gl_Position. They point out that its w component may not always have a value of 1, so to be safe they recommend the following instead; I'll use your code as the example to fix this possible problem.
Change this:
gl_Position = projectionMatrix * viewMatrix * modelMatrix * totalLocalPos;
To this:
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(totalLocalPos.xyz, 1.0);
See if this helps you out. I don't know whether this is the cause of your problem, but from what you have shown your shader appears to be okay other than that.
I also saw that the way you push the mat4 array into the shader is a bit off. glUniformMatrix4fv wants the uniform location you got from glGetUniformLocation (not the shader program ID), then the number of matrices you are uploading, then a transpose flag, where I put GL_FALSE rather than false (though that shouldn't make a difference), and finally a pointer to the first float of the first matrix. Casting &boneTransforms, the address of the std::vector object itself, to GLfloat* does not point at the matrix data; the last argument should look more like this:
&boneTransforms[0][0][0] // or glm::value_ptr(boneTransforms[0])
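Putting that together, a minimal sketch of the corrected upload (assuming GLM; model and shaderProgramID are the question's own names):
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp> // glm::value_ptr

std::vector<glm::mat4> boneTransforms = model.getBoneTransforms();
GLint location = glGetUniformLocation(shaderProgramID, "boneTransforms");
glUniformMatrix4fv(location,
                   (GLsizei)boneTransforms.size(),     // one per matrix
                   GL_FALSE,                           // GLM is already column-major
                   glm::value_ptr(boneTransforms[0])); // first float of the data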

Issues when simulating directional light in OpenGL

I'm working on an OpenGL application using the Qt5 GUI framework. However, I'm not an expert in OpenGL and I'm facing a couple of issues when trying to simulate directional light. I'm using 'almost' the same algorithm I used in a WebGL application, which works just fine.
The application renders multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown below in the code.
QOpenGLWidget paintGL() body.
void OpenGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    m_camera = camera.toMatrix();
    m_world.setToIdentity();
    m_program->bind();
    m_program->setUniformValue(m_projMatrixLoc, m_proj);
    m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
    QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
    m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
    QVector3D lightDirection = QVector3D(1,1,1);
    lightDirection.normalize();
    QVector3D directionalColor = QVector3D(1,1,1);
    QVector3D ambientLight = QVector3D(0.2,0.2,0.2);
    m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
    m_program->setUniformValue(m_directionalColorLoc, directionalColor);
    m_program->setUniformValue(m_ambientColorLoc, ambientLight);
    geometries->drawGeometry(m_program);
    m_program->release();
}
Vertex Shader
#version 330
layout(location = 0) in vec4 vertex;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
    gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
    vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    gl_Position = gl_in[0].gl_Position;
    transformedNormal = normalMatrix * normalize(cross(A,B));
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    transformedNormal = normalMatrix * normalize(cross(A,B));
    EmitVertex();
    gl_Position = gl_in[2].gl_Position;
    transformedNormal = normalMatrix * normalize(cross(A,B));
    EmitVertex();
    EndPrimitive();
}
Fragment Shader
#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;
void main()
{
    highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
    vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
    highp vec3 color = vec3(1, 1, 0.0);
    fColor = vec4(color*vLightWeighting, 1.0);
}
The 1st issue is that the lighting on the faces changes whenever the camera angle changes (the camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.
The 2nd issue (the one causing me headaches) is that whenever the camera is moved, the edges of the cells show blocky, jagged lines that flicker as the camera moves around. This effect gets really nasty when many cells are clustered together.
The model used in the snapshot is just a sample slab of 10 cells to better illustrate the faulty effects. The actual models (gridblock) contain up to 200K cells stacked together.
EDIT: 2nd issue solution.
I was using znear/zfar values of 0.01f and 50000.0f respectively; when I changed the znear to 1.0f, the effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane value that's too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
EDIT2: I tried debug-drawing the normals as suggested in the comments, and quickly realized that I probably shouldn't calculate them based on gl_Position (i.e. after the MVP matrix multiplication in the VS); instead I should use the original vertex locations, so I modified the shaders as follows:
Vertex Shader (UPDATED)
#version 330
layout(location = 0) in vec4 vertex;
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
    vert = vertex.xyz;
    gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader (UPDATED)
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert [];
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
    vec3 A = vert[2].xyz - vert[0].xyz;
    vec3 B = vert[1].xyz - vert[0].xyz;
    gl_Position = gl_in[0].gl_Position;
    transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
    EmitVertex();
    gl_Position = gl_in[2].gl_Position;
    transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
    EmitVertex();
    EndPrimitive();
}
But even after this modification, the normals of the surface still change with the camera angle, as shown below in the screenshot. I don't know if the normal calculation is wrong, the normal matrix calculation is wrong, or maybe both...
EDIT3: 1st issue solution: changing the normal calculation in the GS from
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
to transformedNormal = normalize(cross(A,B)); seems to solve the problem. Omitting the normalMatrix from the calculation fixed the issue, and the normals don't change with the viewing angle.
If I missed any important/relevant information, please notify me in a comment.
Depth buffer precision
The depth buffer is usually stored as a 16- or 24-bit buffer. It is a hardware implementation of a float normalized to a specific range, so there are very few bits of mantissa/exponent in comparison to a standard float.
If I oversimplify things and assume integer values instead of float, then for a 16-bit buffer you get 2^16 values. With znear=0.1 and zfar=50000.0 you have only 65536 values over the full range. And since depth values are nonlinear, you get high accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump in bigger and bigger steps, causing accuracy problems wherever any two polygons are close together.
I empirically arrived at this rule for setting the planes in my views:
(zfar-znear)/desired_accuracy_step > 0.3*(2^n)
where n is the depth buffer bit-width and desired_accuracy_step is the resolution I need along the Z axis. Sometimes I have seen it exchanged with the znear value.
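To make the nonlinearity concrete, here is a small self-contained sketch (my own illustration using the standard perspective depth mapping, not code from this thread) that estimates the smallest resolvable eye-space depth step at a given distance:
#include <cmath>
#include <cstdio>

// Smallest resolvable eye-space step at distance z for a perspective
// projection with planes n and f and a 'bits'-wide depth buffer, from
// the derivative of window depth d(z) = 0.5*((f+n) - 2*f*n/z)/(f-n) + 0.5.
double depthStep(double n, double f, double z, int bits)
{
    double values = std::pow(2.0, bits);          // distinct depth values
    double slope  = (f * n) / ((f - n) * z * z);  // |d'(z)|
    return 1.0 / (values * slope);                // eye-space units per value
}

int main()
{
    // The question's original planes vs. the fixed ones, 24-bit depth:
    std::printf("znear=0.01: step at z=100 is ~%g\n", depthStep(0.01, 50000.0, 100.0, 24));
    std::printf("znear=1.0 : step at z=100 is ~%g\n", depthStep(1.0,  50000.0, 100.0, 24));
}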

Fragment Diffuse value changing with camera location/rotation

I am attempting to get some simple diffuse lighting to work in GLSL. I have a cube that is passed in as an array of points, and I'm calculating the face normals inside my geometry shader (because I intend to deform the mesh at run-time, so I'll need the new face normals).
My problem is that the diffuse value changes as I move the camera around the world, so the shading on a face of my cube changes as the camera moves. I have not been able to figure out what I am missing that is causing this. My shaders are as follows:
Vertex:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main(){
    gl_Position = MVP * vec4(vertexPosition_modelspace,1);
}
Geometry:
#version 330
precision highp float;
layout (triangles) in;
layout (triangle_strip) out;
layout (max_vertices = 3) out;
out vec3 normal;
uniform mat4 MVP;
uniform mat4 MV;
void main(void)
{
    for (int i = 0; i < gl_in.length(); i++) {
        gl_Position = gl_in[i].gl_Position;
        vec3 U = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
        vec3 V = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
        normal.x = (U.y * V.z) - (U.z * V.y);
        normal.y = (U.z * V.x) - (U.x * V.z);
        normal.z = (U.x * V.y) - (U.y * V.x);
        normal = normalize(transpose(inverse(MV)) * vec4(normal,1)).xyz;
        EmitVertex();
    }
    EndPrimitive();
}
Fragment:
#version 330 core
in vec3 normal;
out vec4 out_color;
const vec3 lightDir = vec3(-1,-1,1);
uniform mat4 MV;
void main()
{
    vec3 nlightDir = normalize(vec4(lightDir,1)).xyz;
    float diffuse = clamp(dot(nlightDir,normal),0,1);
    out_color = vec4(diffuse*vec3(0,1,0),1.0);
}
Thanks
There are a lot of wrong things in your code. Most of your problems come from completely forgetting what space various vectors are in. You cannot meaningfully do computations between vectors that are in different spaces.
normal = normalize(transpose(inverse(MV)) * vec4(normal,1)).xyz;
By using 1 as the fourth component of the normal, you completely break this computation. It causes the normal to be translated, which is not appropriate.
Furthermore, your normal value is computed based on gl_Position. And gl_Position is in clip space, not model space. So all you get is the clip-space normal, which is not what you need, want, or can even use.
If you want to compute the camera-space normal, then compute it from camera-space positions. Or compute the model-space normal from model-space positions and use the model/view matrix to transform it to camera-space.
Also, do the inverse/transpose on the CPU and pass it to the shader. Oh, and take all of the normal computations out of the loop; you only need to do it once per triangle (store it in a local variable and copy it to the output for each vertex). And stop doing the cross-product manually; use the built-in GLSL cross function.
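A sketch of the geometry shader with those changes applied (the modelPos input is an assumption: the vertex shader would need an extra output carrying model-space positions, and normalMatrix would be the inverse-transpose of MV computed on the CPU):
#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;
in vec3 modelPos[];        // assumed model-space positions from the VS
out vec3 normal;
uniform mat3 normalMatrix; // inverse-transpose of MV, done on the CPU
void main(void)
{
    // One face normal per triangle, using the built-in cross().
    vec3 U = modelPos[1] - modelPos[0];
    vec3 V = modelPos[2] - modelPos[0];
    vec3 faceNormal = normalize(normalMatrix * cross(U, V));
    for (int i = 0; i < gl_in.length(); i++) {
        gl_Position = gl_in[i].gl_Position;
        normal = faceNormal; // copy the per-triangle value to each vertex
        EmitVertex();
    }
    EndPrimitive();
}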
vec3 nlightDir = normalize(vec4(lightDir,1)).xyz;
This makes no more sense than using 1 as the fourth component in your transform before. Just normalize lightDir directly.
Equally importantly, if you're doing lighting in camera space, then the light direction needs to change with the camera in order for it to remain in the same apparent direction in the world. So you're going to have to take your world-space light position and transform it to camera space (typically on the CPU, passed in as a uniform).
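A hedged host-side sketch of that last point, using GLM (viewMatrix and lightDirLoc are hypothetical names):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Rotate the world-space light direction into camera space each frame
// (mat3 suffices: directions ignore translation), then upload it.
glm::vec3 lightDirWorld = glm::normalize(glm::vec3(-1.0f, -1.0f, 1.0f));
glm::vec3 lightDirView  = glm::normalize(glm::mat3(viewMatrix) * lightDirWorld);
glUniform3fv(lightDirLoc, 1, glm::value_ptr(lightDirView));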

Phong-specular lighting in glsl (lwjgl)

I'm currently trying to get specular lighting working on a sphere using GLSL and the Phong model.
This is what my fragment shader looks like:
#version 120
uniform vec4 color;
uniform vec3 sunPosition;
uniform mat4 normalMatrix;
uniform mat4 modelViewMatrix;
uniform float shininess;
// uniform vec4 lightSpecular;
// uniform vec4 materialSpecular;
varying vec3 viewSpaceNormal;
varying vec3 viewSpacePosition;
vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular) {
    vec3 r = -l+2*(n*l)*n;
    return specularLight * materialSpecular * pow(max(0,dot(r, v)), shininess);
}
void main(){
    vec3 normal = normalize(viewSpaceNormal);
    vec3 viewSpacePosition = (modelViewMatrix * vec4(gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z, 1.0)).xyz;
    vec4 specular = calculateSpecular(sunPosition, normal, viewSpacePosition, vec4(0.3,0.3,0.3,0.3), vec4(0.3,0.3,0.3,0.3));
    gl_FragColor = color+specular;
}
The sunPosition is not moving and is set to the value (2.0f, 3.0f, -1.0f).
The problem is that the image looks nothing like it's supposed to if the specular calculation were correct.
This is how it looks like:
http://i.imgur.com/Na2C6.png
The reason I don't have any ambient/emissive/diffuse lighting in this code is that I want to get the specular part working first.
Thankful for any help!
Edit:
@Darcy Rayner
That indeed helped a lot, though it seems something is still not right...
The current code looks like this:
Vertex Shader:
viewSpacePosition = (modelViewMatrix*gl_Vertex).xyz;
viewSpaceSunPosition = (modelViewMatrix*vec4(sunPosition,1)).xyz;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
viewSpaceNormal = (normalMatrix * vec4(gl_Position.xyz, 0.0)).xyz;
Fragment Shader:
vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular) {
    vec3 r = -l+2*(n*l)*n;
    return specularLight * materialSpecular * pow(max(0,dot(r, v)), shininess);
}
void main(){
    vec3 normal = normalize(viewSpaceNormal);
    vec3 viewSpacePosition = normalize(viewSpacePosition);
    vec3 viewSpaceSunPosition = normalize(viewSpaceSunPosition);
    vec4 specular = calculateSpecular(viewSpaceSunPosition, normal, viewSpacePosition, vec4(0.7,0.7,0.7,1.0), vec4(0.6,0.6,0.6,1.0));
    gl_FragColor = color+specular;
}
And the sphere looks like this:
-->Picture-link<--
with the sun position: sunPosition = new Vector(12.0f, 15.0f, -1.0f);
Try not using gl_FragCoord, as it is stored in screen coordinates (and I don't think transforming it by the modelViewMatrix will get it back to view coordinates). The easiest thing to do is set viewSpacePosition in your vertex shader as:
// Change gl_Vertex to whatever attribute you are using.
viewSpacePosition = (modelViewMatrix * gl_Vertex).xyz;
This should get you viewSpacePosition in view coordinates, (ie. before projection is applied). You can then go ahead and normalise viewSpacePosition in the fragment shader. Not sure if you are storing the sun vector in world coordinates, but you will probably want to transform it into view space then normalise it as well. Give it a go and see what happens, these things tend to be very error prone.