C++, OpenGL - geometry shader

I'm stuck with geometry shaders in OpenGL / C++ programming. I want to create a simple cube by drawing one wall six times, rotated each time. Here is my vertex shader (everything has #version 330 core in its preamble):
uniform mat4 MVP;
uniform mat4 ROT;
layout(location=0) in vec3 vertPos;
void main(){
    vec4 pos = MVP * ROT * vec4(vertPos, 1.0);
    pos.x /= 1.5;
    pos.y /= 1.5;
    gl_Position = pos;
}
Now the geometry shader:
layout (triangles) in;
layout (triangle_strip, max_vertices = 6) out;
out vec4 pos;
void main(void)
{
    pos = vec4(1, 1, 1, 1);
    for (int i = 0; i < 3; i++)
    {
        vec4 offset = vec4(i / 2.0, 0, 0, 0);
        gl_Position = gl_in[i].gl_Position + offset;
        EmitVertex();
    }
    EndPrimitive();
}
And the fragment shader:
uniform mat4 MVP;
in vec4 pos;
out vec3 color;
void main(){
    vec3 light = (MVP * vec4(0, 0, 0, 1)).xyz;
    vec3 dd = pos.xyz - light;
    float cosTheta = length(dd) * length(dd);
    color = vec3(1, 0, 0);
}
Well, there is some junk in there; I also wanted to add shading to my cube, but I ran into a problem with sending coordinates. The main problem: as it stands I get my scaled square (via the MVP matrix), and I can even rotate it with a basic interface (the ROT matrix), but when I uncomment the "+offset" line I get a mess. What should I do to get a clean six-fold repetition?

It looks like the error is here, in your geometry shader.
gl_Position = gl_in[i].gl_Position+offset;
This adds an offset... but it adds the offset in clip space, which is probably not what you want. Add the offset in your vertex shader, or do the perspective projection in your geometry shader.
/* Passthrough vertex shader */
layout(location=0) in vec3 vertPos;
void main(){
    gl_Position = vec4(vertPos, 1.0);
}
/* Geometry shader */
...
    gl_Position = MVP * (ROT * (gl_in[i].gl_Position + offset));
    EmitVertex();
...
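Filled out, the geometry shader could look like the following sketch. It assumes MVP and ROT are moved to the geometry stage (set them on the program as before) and omits the pos output for brevity; note that only three vertices are emitted per input triangle, so max_vertices = 3 suffices here:
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;
// The matrices now live in the geometry stage.
uniform mat4 MVP;
uniform mat4 ROT;
void main(void)
{
    for (int i = 0; i < 3; i++)
    {
        // Offset in model space, before rotation and projection.
        vec4 offset = vec4(i / 2.0, 0.0, 0.0, 0.0);
        gl_Position = MVP * (ROT * (gl_in[i].gl_Position + offset));
        EmitVertex();
    }
    EndPrimitive();
}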
Also, I noticed something unusual in your vertex shader.
pos.x/=1.5;
pos.y/=1.5;
This is unusual because it is a linear transformation that directly follows a matrix multiplication. It would probably be more straightforward to multiply your MVP matrix by the following matrix:
1/1.5    0    0   0
  0    1/1.5  0   0
  0      0    1   0
  0      0    0   1
This would achieve the same result with less shader code.
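For example, with GLM on the C++ side (a sketch; the function and variable names are illustrative):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Fold the /1.5 shrink into the matrix once on the CPU. Left-multiplying
// applies the scale after the projection, matching pos.x /= 1.5 in clip space.
glm::mat4 shrinkAfterProjection(const glm::mat4& mvp) {
    glm::mat4 shrink = glm::scale(glm::mat4(1.0f),
                                  glm::vec3(1.0f / 1.5f, 1.0f / 1.5f, 1.0f));
    return shrink * mvp;
}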

Related

Why is output from the vertex shader corrupted when an unused output is not passed? (OpenGL/GLSL)

I'm receiving three attributes in the vertex shader and passing them to the fragment shader. If I omit one particular attribute that is not used in the fragment shader at all, the fragment shader produces invalid output.
I reduced the code to the following simple examples:
A. (correct)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
// it doesn't matter if flat is specified or not for the joint0 (apparently)
// out uvec4 o_Joint0;
flat out vec4 o_Joint0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
    mat4 ModelMatrix;
};
void main()
{
    o_Joint0 = a_Joint0;
    o_Weight0 = a_Weight0;
    vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
    gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in uvec4 o_Joint0;
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
    f_FinalColor = vec4(0, 0, 0, 1);
    f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0 + (o_Weight0.z + 1.0) / 4.0;
}
The VS sends the attributes o_Joint0 and o_Weight0 down to the FS, and the fragment shader produces the correct output.
B. (incorrect)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
    mat4 ModelMatrix;
};
void main()
{
    o_Weight0 = a_Weight0;
    vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
    gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
    f_FinalColor = vec4(0, 0, 0, 1);
    f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0 + (o_Weight0.z + 1.0) / 4.0;
}
The VS sends the attribute o_Weight0 down to the FS; as you can see, the only thing omitted in both shaders was o_Joint0, yet the fragment shader produces incorrect output.
First, try completely omitting the a_Joint0 variable from the vertex shader (do not feed it to the vertex shader at all, not even as a buffer).
If this does not work, try reverting your code to before you omitted the variable and see if it works again, and then try to find out how the variable is actually affecting the fragment shader.
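For the first suggestion, "not loading it at all" means never enabling or pointing the attribute array for a_Joint0. A hypothetical C++ setup sketch (the Vertex struct and buffer layout are assumptions; a GL loader header is presumed included):
#include <cstddef>

struct Vertex {
    float    position[3];
    unsigned joint[4];
    float    weight[4];
};

// Only a_Position and a_Weight0 are fed to the program; there is no
// glEnableVertexAttribArray / glVertexAttribIPointer call for a_Joint0.
void setupAttribs(GLuint program) {
    GLint posLoc    = glGetAttribLocation(program, "a_Position");
    GLint weightLoc = glGetAttribLocation(program, "a_Weight0");
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, position));
    glEnableVertexAttribArray(weightLoc);
    glVertexAttribPointer(weightLoc, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, weight));
}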

Drawing smooth circle

I'm using OpenTK (C#), but OpenGL suggestions are welcome too.
I have a point list generated by iterating one degree per point around a center point, which means there are 361 points including the center point. The point list could be built differently with different approaches; that's OK. I can draw the circle with the simple vertex and fragment shaders below. How can I change the fragment and/or vertex shaders to get a smooth circle?
Vertex shader:
#version 330
in vec3 vPosition;
in vec4 vColor;
out vec4 color;
out vec4 fPosition;
uniform mat4 modelview;
void main()
{
    fPosition = modelview * vec4(vPosition, 1.0);
    gl_Position = fPosition;
    color = vColor;
}
Fragment shader:
#version 330
in vec4 color;
in vec4 fPosition;
out vec4 outputColor;
void main()
{
    outputColor = color;
}
C# code:
GL.DrawArrays(PrimitiveType.TriangleFan, 0, points.Length);
What do you actually see? Post a screenshot. Anyway, for smooth edges we have what's called anti-aliasing.
Use this line when creating your GLControl to enable it (the last argument, 8, is the number of samples):
glControl = new GLControl(new OpenTK.Graphics.GraphicsMode(32, 24, 0, 8));
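Alternatively, the smoothing can be done in the fragment shader itself: draw the circle as a single quad and fade the edge with smoothstep. A GLSL sketch, assuming the vertex shader passes quad-local coordinates fCoord in [-1, 1] and that blending is enabled:
#version 330
in vec2 fCoord;   // assumed quad-local coordinates in [-1, 1]
in vec4 color;
out vec4 outputColor;
void main()
{
    float dist  = length(fCoord);   // distance from the circle's center
    float edge  = fwidth(dist);     // roughly one pixel, in dist units
    float alpha = 1.0 - smoothstep(1.0 - edge, 1.0, dist);
    outputColor = vec4(color.rgb, color.a * alpha);
}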

OpenGL: Terrain deformation using a heightmap in the vertex shader

I have been trying to add a heightmap to my terrain shader, but the terrain remains flat. The texture is properly loaded in the vertex shader, and I try to use the greyscale values of the texture, sampled at the mesh's uvs, to adjust the vertex height:
//DIFFUSE VERTEX SHADER
#version 330
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec3 vertex;
in vec3 normal;
in vec2 uv;
uniform sampler2D heightmap;
out vec2 texCoord;
void main( void ){
    vec3 _vertex = vertex;
    _vertex.y = texture(heightmap, uv).r * 2.f;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(_vertex, 1.f);
    texCoord = uv;
}
Fragment: (the splatmap works so ignore that)
uniform sampler2D splatmap;
uniform sampler2D diffuse1;
uniform sampler2D diffuse2;
uniform sampler2D diffuse3;
uniform sampler2D diffuse4;
in vec2 texCoord;
out vec4 fragment_color;
void main( void ) {
    // Loading the splatmap and the diffuse textures
    vec4 splatTexture = texture2D(splatmap, texCoord);
    vec4 diffuseTexture1 = texture2D(diffuse1, texCoord);
    vec4 diffuseTexture2 = texture2D(diffuse2, texCoord);
    vec4 diffuseTexture3 = texture2D(diffuse3, texCoord);
    vec4 diffuseTexture4 = texture2D(diffuse4, texCoord);
    // Interpolate between the different textures using the splatmap's rgb values (works)
    diffuseTexture1 *= splatTexture.r;
    diffuseTexture2 = mix(diffuseTexture1, diffuseTexture2, splatTexture.g);
    diffuseTexture3 = mix(diffuseTexture2, diffuseTexture3, splatTexture.b);
    vec4 outcolor = mix(diffuseTexture3, diffuseTexture4, splatTexture.a);
    fragment_color = outcolor;
}
Some additional info:
All textures are loaded like this in my terrain material and passed to the shader (works properly):
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, heightMap->getId());
glUniform1i (_shader->getUniformLocation("heightMap"),0);
...
The plane mesh uvs are mapped like this:
(0,1) (1,1)
(0,0) (1,0)
I guess I am doing something horribly wrong, but I can't figure out what. Any help is appreciated!
Does your writing this:
The plane mesh uvs are mapped like this:
(0,1) (1,1)
(0,0) (1,0)
… mean that your mesh consists of just 4 vertices? If so, then that's your problem right there: the vertex shader cannot magically create "new" vertices, so your heightmap texture is sampled at only 4 points (and nothing in between).
And because you sample the texture at integer values, and your texture coordinates are all at 0 or 1, you're effectively sampling the very same texture coordinate, so you're going to see the same displacement for all four vertices.
Solution: Tessellate your base mesh so that there are actually vertices available to displace. A tessellation shader is perfectly fine for that.
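If a tessellation shader is not an option, the subdivision can also be done once on the CPU. A minimal C++ sketch (the Vertex layout and the grid resolution are assumptions):
#include <vector>

struct Vertex { float x, y, z, u, v; };

// Build an (n+1) x (n+1) grid of vertices spanning the unit plane so the
// heightmap actually has vertices to displace; index them as 2*n*n triangles.
std::vector<Vertex> makeGrid(int n) {
    std::vector<Vertex> verts;
    for (int j = 0; j <= n; ++j) {
        for (int i = 0; i <= n; ++i) {
            float u = float(i) / n;
            float v = float(j) / n;
            verts.push_back({u, 0.0f, v, u, v}); // y gets filled in by the heightmap
        }
    }
    return verts;
}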
EDIT:
BTW, you can simplify your vertex shader a bit: for the attributes, make it
in vec2 vertex;
which requires just 2/3 of the space of vec3, since you're not using the z component anyway.
float y = texture(heightmap, uv).r * 2.f;
gl_Position = projectionMatrix
            * viewMatrix
            * modelMatrix
            * vec4(vertex.x, y, vertex.y, 1.f);

OpenGL - uniform presence causing shader to be bypassed

UPDATE: It turns out this was due to a bug on the C side of things, causing some of the matrices to become malformed. The shaders are all fine. So if adding uniforms causes weird things to happen, my advice is to use a debugger to check the values of ALL uniforms and make sure they are all being set correctly.
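If a GPU debugger is not at hand, a small helper can dump the uniform values for inspection. A hypothetical C++ sketch (mat4-only, prints just the first column of each matrix, or of the first array element; a GL loader header is presumed included):
#include <cstdio>

void dumpMat4Uniforms(GLuint program) {
    GLint count = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
    for (GLint i = 0; i < count; ++i) {
        GLchar name[256]; GLsizei len; GLint size; GLenum type;
        glGetActiveUniform(program, i, sizeof(name), &len, &size, &type, name);
        if (type != GL_FLOAT_MAT4) continue;   // only dump mat4 uniforms
        GLfloat m[16];
        glGetUniformfv(program, glGetUniformLocation(program, name), m);
        printf("%s: %f %f %f %f ...\n", name, m[0], m[1], m[2], m[3]);
    }
}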
So I am trying to render depth to a cube map to use as a shadow map, but when I add and use a uniform in the fragment shader, everything becomes white as if the shader isn't being used. No warnings or errors are generated when compiling/linking the shader.
The shader program I am using to render the depth map (setting the depth simply to the fragment's z position as a test) is as follows:
//vertex shader
#version 430
in layout(location=0) vec4 vertexPositionModel;
uniform mat4 modelToWorldMatrix;
void main() {
    gl_Position = modelToWorldMatrix * vertexPositionModel;
}
//geometry shader
#version 430
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;
out vec4 fragPositionWorld;
uniform mat4 projectionMatrices[6];
void main() {
    for (int face = 0; face < 6; face++) {
        gl_Layer = face;
        for (int i = 0; i < 3; i++) {
            fragPositionWorld = gl_in[i].gl_Position;
            gl_Position = projectionMatrices[face] * fragPositionWorld;
            EmitVertex();
        }
        EndPrimitive();
    }
}
//Fragment shader
#version 430
in vec4 fragPositionWorld;
void main() {
    gl_FragDepth = abs(fragPositionWorld.z);
}
The main shader samples from the cubemap and simply renders the depth as greyscale colour:
vec3 lightDirection = fragPositionWorld - pointLight.position;
float closestDepth = texture(shadowMap, lightDirection).r;
finalColour = vec4(vec3(closestDepth), 1.0);
The scene is a small cube in a larger cubic room, and it renders as expected: dark near z = 0 and the cube projected back onto the wall (the depth map is being rendered from the centre of the room):
Good:
I can move the small cube around and the projection projects correctly onto all the sides of the cubemap. All good so far.
The problem arises when I add a uniform to the fragment shader, i.e.:
#version 430
in vec4 fragPositionWorld;
uniform vec3 lightPos;
void main() {
    gl_FragDepth = min(lightPos.y, 0.5);
}
Everything renders as white, the same as if the shader had failed to compile:
Bad:
gDEBugger reports that the uniform is set correctly (0, 4, 0), but regardless of what lightPos is, gl_FragDepth should be set to a value less than 0.5 and appear as a shade of grey (which is what happens if I set gl_FragDepth = 0.5 directly), so I can only conclude that the fragment shader is not being used for some reason and the default one is being used instead. Unfortunately I have no idea why.
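In line with the UPDATE above, it is worth double-checking how projectionMatrices[6] is uploaded from the C side. A hedged C++ sketch of a single-call upload, assuming GLM; the names are illustrative and the program must be bound:
#include <array>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Upload all six face matrices in one call; querying "projectionMatrices"
// returns the location of element 0, and the count of 6 fills the rest.
void uploadFaceMatrices(GLuint program,
                        const std::array<glm::mat4, 6>& faceMatrices) {
    GLint loc = glGetUniformLocation(program, "projectionMatrices");
    glUniformMatrix4fv(loc, 6, GL_FALSE, glm::value_ptr(faceMatrices[0]));
}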

OpenGL Simple Shading, Artifacts

I've been trying to implement a simple light/shading system, a simple Phong lighting system without the specular term to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that this might be a problem with the texture mipmaps, but disabling them didn't help. My next best guess is a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or does anyone have an idea how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
// Pass uv coordinates
uv = vuv;
// Adjust normals
normal = nm * vnormal;
// Calculation of vertex in camera space
pos = (mv * vec4(vpos, 1.0)).xyz;
// Vector vertex -> light in camera space
light[0] = (mv * vec4(0.0,0.3,0.0,1.0)).xyz - pos;
light[1] = (mv * vec4(-6.0,0.3,0.0,1.0)).xyz - pos;
light[2] = (mv * vec4(0.0,0.3,4.8,1.0)).xyz - pos;
// Pass position after projection transformation
gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
vec3 n = normalize(normal);
// Ambient
color = 0.05 * texture(tex, uv).rgb;
// Diffuse lights
for (int i = 0; i < 3; i++) {
l = normalize(light[i]);
cos = clamp(dot(n,l), 0.0, 1.0);
length = length(light[i]);
color += 0.6 * texture(tex, uv).rgb * cos / pow(length, 2);
}
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, the len = length(light[i]); pow(len, 2.0) expression is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i], light[i]) instead.
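Concretely, the loop body could become (a sketch, using the names from the shader above):
for (int i = 0; i < 3; i++) {
    vec3 l = normalize(light[i]);
    float cosTheta = clamp(dot(n, l), 0.0, 1.0);
    float dist2 = dot(light[i], light[i]); // squared length, no length()/pow()
    color += 0.6 * texture(tex, uv).rgb * cosTheta / dist2;
}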
So I found information about my problem, described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
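For reference, a minimal dithering sketch in GLSL: the hash is a widely used screen-space noise function, the surrounding texture-only shader is an assumption, and the noise amplitude is half an 8-bit quantization step:
#version 330 core
layout(location = 0) out vec3 color;
in vec2 uv;
uniform sampler2D tex;

// Common screen-space hash, used here as cheap per-pixel noise.
float rand(vec2 co) {
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    color = texture(tex, uv).rgb;
    // Add +/- half an 8-bit step of noise before quantization
    // to break up gradient banding, at the cost of faint grain.
    color += (rand(gl_FragCoord.xy) - 0.5) / 255.0;
}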