GLSL vertex shader breaks when trying to access gl_Position

In trying to create a weak depth effect for a wireframe model, where distance from the camera plane changes the colour, I first tried
gl_Position = camMatrix * vec4(aPos, 1.0);
color = vec4(0.0f,pow(2.0f, -gl_Position.z),0.0f,1.0f);
However, this rendered the wireframe entirely black, with its edges touching the edges of the window, and my camera controls had no effect.
If I instead recalculate the depth when computing the colour vector, i.e.
color = vec4(0.0f,pow(2.0f, -(camMatrix * vec4(aPos, 1.0)).z),0.0f,1.0f);
everything works fine. Is accessing gl_Position invalid in some way? This is with GLSL version 330 core.
The entire shader file is:
#version 330 core
layout (location = 0) in vec3 aPos;
out vec4 color;
uniform mat4 camMatrix;
void main()
{
gl_Position = camMatrix * vec4(aPos, 1.0);
color = vec4(0.0f,pow(2.0f, -(camMatrix * vec4(aPos, 1.0)).z),0.0f,1.0f);
}
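As far as I can tell, reading gl_Position back after writing it is legal in GLSL 3.30 (vertex shader outputs can be read once they have been written), so the sketch below is just a way to avoid both the read-back and the duplicated matrix multiply, not a confirmed explanation of the behaviour: compute the transformed position once into a local variable and use it for both outputs.
#version 330 core
layout (location = 0) in vec3 aPos;
out vec4 color;
uniform mat4 camMatrix;
void main()
{
    // compute the clip-space position once and reuse it for both outputs
    vec4 pos = camMatrix * vec4(aPos, 1.0);
    gl_Position = pos;
    color = vec4(0.0, pow(2.0, -pos.z), 0.0, 1.0);
}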

Related

Drawing smooth circle

I'm using OpenTK (C#), but OpenGL suggestions are welcome too.
I have a point list generated by iterating 1 degree per point around the center point, which means there are 361 points including the center point. The point list could be built differently with other approaches; that's fine. I can draw the circle with the simple vertex and fragment shaders below. How can I change the fragment and/or vertex shaders to get a smooth circle?
Vertex shader:
#version 330
in vec3 vPosition;
in vec4 vColor;
out vec4 color;
out vec4 fPosition;
uniform mat4 modelview;
void main()
{
fPosition = modelview * vec4(vPosition, 1.0);
gl_Position = fPosition;
color = vColor;
}
Fragment shader:
#version 330
in vec4 color;
in vec4 fPosition;
out vec4 outputColor;
void main()
{
outputColor = color;
}
C# code:
GL.DrawArrays(PrimitiveType.TriangleFan, 0, points.Length);
Hello, what do you actually see? Post a screenshot. Anyway, for smooth edges we have what's called anti-aliasing.
Use this line for your glControl to enable it:
glControl = new GLControl(new OpenTK.Graphics.GraphicsMode(32, 24, 0, 8));
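Since the question also asks how the shaders themselves could be changed, here is a minimal sketch of a shader-based alternative (not taken from the post): it assumes the fan is a unit circle centered on the model-space origin, that the untransformed position is passed through a new, hypothetical localPos varying, and that alpha blending is enabled on the GL side.
// Vertex shader: add one varying with the untransformed position, e.g.
//   out vec2 localPos;           // new
//   localPos = vPosition.xy;     // set alongside the existing outputs

// Fragment shader: fade the rim over roughly one pixel using fwidth().
#version 330
in vec4 color;
in vec2 localPos;
out vec4 outputColor;
void main()
{
    float dist  = length(localPos);                   // distance from the circle's center
    float aa    = fwidth(dist);                       // ~1 pixel expressed in 'dist' units
    float alpha = 1.0 - smoothstep(1.0 - aa, 1.0, dist);
    outputColor = vec4(color.rgb, color.a * alpha);   // requires blending to be enabled
}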

Scale 2D Texture to model scaling to prevent stretching

I have an OpenGL 3.3 program which has different objects in it, for example a simple cube. The cube's dimensions are 1x1x1 (vertices from -0.5, -0.5, -0.5 to 0.5, 0.5, 0.5) and it is textured with one 2D texture on each side. The texture is repeatable (seamless).
With my actual code the model scaling looks like this (ignore the actual texture):
After scaling, it looks like this:
In this case the texture should stay the same size in the z-direction but repeat along the z-axis.
Is there a good way to scale the texture properly with the model's scaling to prevent it from stretching? Or do I have to create a 3D texture?
The problem I found is that in my shader I only get the (scaled) point of the cube, for example -0.5, -1.5, -0.5, but the texture coordinates are only 2D (0.0, 0.0), and I don't know which axis of the texture I have to scale since I don't know which side is currently being rendered.
For the sake of completeness, here is the vertex shader code:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
out vec2 TexCoord;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
FragPos = vec3(model * vec4(aPos, 1.0));
Normal = mat3(transpose(inverse(model))) * aNormal;
TexCoord = aTexCoord;
gl_Position = projection * view * model * vec4(aPos, 1.0);
//gl_Position = projection * view * model * vec4(aPos, 1.0f);
//TexCoord = aTexCoord;
}
The fragment shader looks like this:
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
// texture samplers
uniform sampler2D texture_diffuse1;
uniform vec4 color;
void main()
{
FragColor = color + texture(texture_diffuse1, TexCoord);
}
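One common way around not knowing which face is being rendered is to pick the texture plane from the surface normal and use world-space coordinates as UVs, so the repeat count follows the model's scale automatically. This is only a sketch of that idea, not the poster's code: it reuses the FragPos and Normal outputs already produced by the vertex shader above and assumes one world unit should map to one texture repeat.
#version 330 core
out vec4 FragColor;
in vec3 FragPos;    // world-space position from the vertex shader above
in vec3 Normal;     // world-space normal from the vertex shader above
uniform sampler2D texture_diffuse1;
uniform vec4 color;
void main()
{
    // choose the projection plane from the dominant axis of the normal
    vec3 n = abs(normalize(Normal));
    vec2 uv;
    if (n.x >= n.y && n.x >= n.z)      uv = FragPos.zy;   // +/-X faces
    else if (n.y >= n.z)               uv = FragPos.xz;   // +/-Y faces
    else                               uv = FragPos.xy;   // +/-Z faces
    FragColor = color + texture(texture_diffuse1, uv);    // repeats with the scale
}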

OpenGL 3D terrain lighting artefacts

I'm doing per-pixel lighting (Phong shading) on my terrain. I'm using a heightmap to generate the terrain height and then calculating the normal for each vertex. The normals are interpolated in the fragment shader and also normalized.
I am getting some weird dark lines near the edges of triangles where there shouldn't be any.
http://imgur.com/L2kj4ca
I checked whether the normals were correct by using a geometry shader to draw them on the terrain, and they seem to be correct.
http://imgur.com/FrJpdXI
There is no point using a normal map for the terrain; it would give pretty much the same normals. The problem lies in the way the normals are interpolated across a triangle.
I am out of ideas on how to solve this. I couldn't find any working solution online.
Terrain Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 textureCoords;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
out float visibility;
uniform mat4 transformationMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPosition;
const float density = 0.0035;
const float gradient = 5.0;
void main()
{
vec4 worldPosition = transformationMatrix * vec4(position, 1.0f);
vec4 positionRelativeToCam = viewMatrix * worldPosition;
gl_Position = projectionMatrix * positionRelativeToCam;
pass_textureCoords = textureCoords;
surfaceNormal = (transformationMatrix * vec4(normal, 0.0f)).xyz;
toLightVector = lightPosition - worldPosition.xyz;
float distance = length(positionRelativeToCam.xyz);
visibility = exp(-pow((distance * density), gradient));
visibility = clamp(visibility, 0.0, 1.0);
}
Terrain Fragment Shader:
#version 330 core
in vec2 pass_textureCoords;
in vec3 surfaceNormal;
in vec3 toLightVector;
in float visibility;
out vec4 colour;
uniform vec3 lightColour;
uniform vec3 fogColour;
uniform sampler2DArray blendMap;
uniform sampler2DArray diffuseMap;
void main()
{
vec4 blendMapColour = texture(blendMap, vec3(pass_textureCoords, 0));
float backTextureAmount = 1 - (blendMapColour.r + blendMapColour.g + blendMapColour.b);
vec2 tiledCoords = pass_textureCoords * 255.0;
vec4 backgroundTextureColour = texture(diffuseMap, vec3(tiledCoords, 0)) * backTextureAmount;
vec4 rTextureColour = texture(diffuseMap, vec3(tiledCoords, 1)) * blendMapColour.r;
vec4 gTextureColour = texture(diffuseMap, vec3(tiledCoords, 2)) * blendMapColour.g;
vec4 bTextureColour = texture(diffuseMap, vec3(tiledCoords, 3)) * blendMapColour.b;
vec4 diffuseColour = backgroundTextureColour + rTextureColour + gTextureColour + bTextureColour;
vec3 unitSurfaceNormal = normalize(surfaceNormal);
vec3 unitToLightVector = normalize(toLightVector);
float brightness = dot(unitSurfaceNormal, unitToLightVector);
float ambient = 0.2;
brightness = max(brightness, ambient);
vec3 diffuse = brightness * lightColour;
colour = vec4(diffuse, 1.0) * diffuseColour;
colour = mix(vec4(fogColour, 1.0), colour, visibility);
}
This can be one of two issues:
1. Incorrect normals:
There are different types of shading: flat shading, Gouraud shading and Phong shading (not the same thing as the Phong specular model).
You usually want Phong shading. To do that, OpenGL makes your life easier and interpolates the normals between the vertices of each triangle for you, so at each pixel you have the correct normal for that point. But you still need to feed it proper normal values, which are the average of the normals of every triangle attached to that vertex. So in the function that creates the vertices, normals and UVs, you need to compute the normal at each vertex by averaging the normals of every triangle attached to it.
2. Subdivision problem:
The other possible issue is that your terrain is not subdivided enough, or your heightmap resolution is too low, resulting in this kind of glitch because of the height difference between two vertices in one triangle (i.e. between two pixels in your heightmap).
If you can provide some of your code and shaders, maybe even the heightmap, we can pin down exactly what is happening in your case.
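For the first point, a different route to the same smooth per-vertex normals is to derive them from the heightmap itself in the vertex shader instead of averaging triangle normals on the CPU. This is only a sketch, not the poster's setup: every uniform name here is hypothetical, Y is assumed to be up, and the sampling must match how the mesh heights were generated.
// Vertex-shader sketch: smooth normal from central differences on the heightmap.
uniform sampler2D heightMap;     // the texture the terrain heights came from
uniform vec2  texelSize;         // 1.0 / heightmap resolution
uniform float heightScale;       // same height scale used when building the mesh
uniform float gridSpacing;       // world-space distance between neighboring vertices

vec3 heightmapNormal(vec2 uv)
{
    float hL = textureLod(heightMap, uv - vec2(texelSize.x, 0.0), 0.0).r * heightScale;
    float hR = textureLod(heightMap, uv + vec2(texelSize.x, 0.0), 0.0).r * heightScale;
    float hD = textureLod(heightMap, uv - vec2(0.0, texelSize.y), 0.0).r * heightScale;
    float hU = textureLod(heightMap, uv + vec2(0.0, texelSize.y), 0.0).r * heightScale;
    // central differences: slope along x and z over a 2 * gridSpacing baseline
    return normalize(vec3(hL - hR, 2.0 * gridSpacing, hD - hU));
}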
This is old, but I suspect you're not transforming your normal using the transposed inverse of the upper 3x3 part of your modelview matrix (see this). I'm not sure what is in "transformationMatrix", but if you're using it to transform both the vertex and the normal, something is probably fishy...
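For reference, a sketch of that suggestion applied to the terrain vertex shader above. My reading of that shader is that toLightVector is built in world space, so the matrix to inverse-transpose would be transformationMatrix rather than the full modelview; this only changes anything if that matrix contains non-uniform scaling.
// Replaces the surfaceNormal line in the terrain vertex shader: the
// inverse-transpose keeps the normal perpendicular under non-uniform scaling.
surfaceNormal = mat3(transpose(inverse(transformationMatrix))) * normal;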

Issues when simulating directional light in OpenGL

I'm working on an OpenGL application using the Qt 5 GUI framework. However, I'm not an expert in OpenGL, and I'm facing a couple of issues when trying to simulate directional light. I'm using 'almost' the same algorithm I used in a WebGL application, which works just fine.
The application is used to render multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown in the code below.
QOpenGLWidget paintGL() body:
void OpenGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
m_camera = camera.toMatrix();
m_world.setToIdentity();
m_program->bind();
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
QVector3D lightDirection = QVector3D(1,1,1);
lightDirection.normalize();
QVector3D directionalColor = QVector3D(1,1,1);
QVector3D ambientLight = QVector3D(0.2,0.2,0.2);
m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
m_program->setUniformValue(m_directionalColorLoc, directionalColor);
m_program->setUniformValue(m_ambientColorLoc, ambientLight);
geometries->drawGeometry(m_program);
m_program->release();
}
Vertex Shader
#version 330
layout(location = 0) in vec4 vertex;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;
void main()
{
highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
highp vec3 color = vec3(1, 1, 0.0);
fColor = vec4(color*vLightWeighting, 1.0);
}
The 1st issue is that the lighting on the faces seems to change whenever the camera angle changes (the camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.
The 2nd issue (the one causing me headaches) is that whenever the camera moves, the edges of the cells show blocky and jagged lines that flicker as the camera moves around. This effect gets really nasty when there are many cells clustered together.
The model used in the snapshot is just a sample slab of 10 cells to better illustrate the faulty effects. The actual models (gridblocks) contain up to 200K cells stacked together.
EDIT: 2nd issue solution.
I was using znear/zfar values of 0.01f and 50000.0f respectively; when I changed znear to 1.0f, this effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane value that is too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
EDIT2: I tried debug-drawing the normals as suggested in the comments. I quickly realized that I probably shouldn't calculate them based on gl_Position (i.e. after the MVP matrix multiplication in the VS); instead I should use the original vertex locations, so I modified the shaders as follows:
Vertex Shader (UPDATED)
#version 330
layout(location = 0) in vec4 vertex;
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
vert = vertex.xyz;
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader (UPDATED)
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert [];
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = vert[2].xyz - vert[0].xyz;
vec3 B = vert[1].xyz - vert[0].xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
EndPrimitive();
}
But even after this modification the normals of the surface still change with the camera angle, as shown below in the screenshot. I don't know if the normal calculation is wrong, or the normal matrix calculation is wrong, or maybe both...
EDIT3: 1st Issue Solution: changing the normal calculation in the GS from
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
to transformedNormal = normalize(cross(A,B)); seems to solve the problem. Omitting the normalMatrix from the calculation fixed the issue, and the normals don't change with the viewing angle.
If I missed any important/relevant information, please notify me in a comment.
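A possible reason the EDIT3 change works (my reading of the code above, not something stated in the post): normalMatrix comes from (m_camera * m_world).normalMatrix(), so it takes the face normal into view space, while lightDirection is uploaded as a fixed world-space vector, and dotting a view-space normal against a world-space direction is exactly what makes the shading follow the camera. Because m_world is set to identity, the raw cross product is already a world-space normal, so dropping normalMatrix leaves both vectors in the same space. A sketch of the general case, for a non-identity model matrix:
// Geometry-shader sketch: keep the normal in world space, like the light.
// modelNormalMatrix is a hypothetical extra uniform, set by the application to
// the inverse-transpose of the upper 3x3 of the world/model matrix only
// (for an identity m_world it is just the identity matrix).
uniform mat3 modelNormalMatrix;

// inside main(), in place of the existing normal lines:
transformedNormal = normalize(modelNormalMatrix * cross(A, B));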
Depth buffer precision
The depth buffer is usually stored as a 16- or 24-bit buffer. It is a HW implementation of a float normalized to a specific range, so there are very few bits for mantissa/exponent in comparison to a standard float.
If I oversimplify things and assume integer values instead of float, then for a 16-bit buffer you get 2^16 values. If you have znear=0.1 and zfar=50000.0, you have only 65536 values over the full range. Now, as the depth values are nonlinear, you get higher accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump in bigger and bigger steps, causing accuracy problems wherever two polygons are close together.
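Plugging the numbers from that simplification in (purely illustrative, since the real distribution is nonlinear):
(zfar - znear) / 2^16 = (50000.0 - 0.1) / 65536 ≈ 0.76
so even a perfectly linear 16-bit buffer averages roughly three quarters of a world unit per depth step, and the nonlinear mapping concentrates most of those values near znear, leaving far coarser steps near zfar.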
I empirically got this for setting the planes in my views:
(zfar-znear)/desired_accuracy_step > 0.3*(2^n)
where n is the depth-buffer bit width and desired_accuracy_step is the resolution I want along the Z axis. I have sometimes seen it written with the znear value in its place.

Diffuse normal inverted when 3D model too large (cylinder) using ASSIMP and Phong shading

Currently I'm setting up some lighting in a 3D scene I created in Blender and loaded via Assimp with the following options set:
aiProcess_GenSmoothNormals | aiProcess_Triangulate | aiProcess_CalcTangentSpace | aiProcess_FlipUVs
Currently I'm stuck on a really weird glitch in my program. I'm implementing Phong shading in the fragment shader for lighting, with the following properties:
Each of the models has textures set up.
Each of the models has normal vectors loaded from the model (with some pre-calculations on them, probably because of the aiProcess_GenSmoothNormals flag).
The specular light is working on all objects.
Diffuse colors work as they should.
However, the pipe objects are different: the diffuse color is always on the opposite side of the pipe from the side that should be lit, while the specular light is on the correct side. This makes things weird, since the specular light works as it should while the diffuse component is always on the wrong side. I noticed this effect when scaling my cylinder objects beyond a certain point in Blender (smaller-scaled cylinders still work as they should), so scaling cylinder objects beyond a certain threshold probably has something to do with it.
My scene, where the pipe-like objects have working specular components but the diffuse colors are on the opposite side of the light source.
Normals as seen in Blender
My first guess was that it had something to do with normal scaling, but I already use a normal matrix for that purpose in the vertex shader, and the other objects in my scene work just fine.
Vertex Shader:
#version 330
layout (location = 0) in vec3 vertex;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 color;
layout(location = 4) in vec2 texCoord;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform vec3 lightPos;
out vec3 Position;
out vec3 Normal;
out vec3 LightPos;
out vec2 TexCoord;
void main()
{
gl_Position = projection * view * model * vec4(vertex, 1.0);
// Position
Position = vec3(view * model * vec4(vertex, 1.0));
// Normal
mat3 normalMat = transpose(inverse(mat3(view * model)));
Normal = normalMat * normal;
// Lighting
LightPos = vec3(view * vec4(lightPos, 1.0));
// Texture
TexCoord = texCoord;
}
Fragment Shader:
#version 330
in vec3 Position;
in vec3 Normal;
in vec3 LightPos;
in vec2 TexCoord;
uniform sampler2D texture0;
out vec4 outColor;
void main()
{
// defaults
vec4 ambient = vec4(0.2);
vec4 diffuse = vec4(0.4);
vec4 specular = vec4(0.5);
vec4 texColor = texture(texture0, TexCoord);
// Phong shading
vec3 LightDir = normalize(LightPos - Position);
vec3 Norm = normalize(Normal);
vec3 ViewDir = normalize(-Position);
vec3 ReflectDir = reflect(-LightDir,Norm);
float specularContribution = pow(max(dot(ViewDir, ReflectDir), 0.0), 32);
// Calculate diffuse component
vec4 I = diffuse * max(dot(Norm, LightDir), 0.0);
diffuse = clamp(I, 0.0, 1.0);
// Calculate specular component
specular = specular * specularContribution;
outColor = texColor * (diffuse + ambient + specular);
}
Edit
I added a geometry shader that displays all the vertex normals in a second drawing pass for debugging purposes. However, when displaying the normals they move slightly with camera movement, which they should not do. I am guessing this is probably the cause of the aforementioned issue.
I made a small video illustrating the normal movement: YouTube video that displays the normal movement issues.
The video shows the pink normals changing direction as the camera moves. This should not be the case and I don't know why. Is it an incorrect normal matrix, or does Assimp maybe load the normals incorrectly?
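One low-effort cross-check (a sketch, not a diagnosis): instead of a separate normal-drawing pass, temporarily output the interpolated normal as a color from the fragment shader above. Note that Normal is in view space here, because normalMat is built from view * model, so the colors are expected to shift as the camera turns; rebuilding normalMat from the model matrix alone gives world-space normals, which should stay fixed on a static mesh.
// Debug fragment-shader body: encode the normal as a color (xyz -> rgb).
void main()
{
    outColor = vec4(normalize(Normal) * 0.5 + 0.5, 1.0);
}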