I've been learning OpenGL (actually OpenTK) for a month or so, and I've started playing with geometry shaders. I originally wrote a much more complicated shader, which didn't work, so I've stripped everything down to a simple pass-through shader, which still doesn't work.
Vertex Shader:
#version 420 core
layout (location = 0) in vec3 position;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(position, 1);
}
Geometry Shader:
#version 420 core
layout (triangles) in;
layout (triangle_strip) out;
void main()
{
int i;
for(i = 0; i < gl_in.length(); i++)
{
gl_Position = gl_in[i].gl_Position;
EmitVertex();
}
EndPrimitive();
}
Fragment Shader:
#version 420 core
layout (binding = 0) uniform sampler2D diffuse;
layout(location = 0) out vec4 color;
void main()
{
color = vec4(1, 1, 1, 1);
}
It should draw a white square in the bottom right of the window, and without the geometry shader it does, which is what I would expect. With the geometry shader it renders nothing. The info logs show everything compiles and links fine.
What am I missing?
There must be a max_vertices declaration for the output layout. The number must be a compile-time constant, and it defines the maximum number of vertices that a single invocation of the GS will write. It may be no larger than the implementation-defined limit GL_MAX_GEOMETRY_OUTPUT_VERTICES.
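For the pass-through shader in the question, the fix is just adding max_vertices to the output layout; something like this should work (3 is enough here, since each input triangle is emitted unchanged):
#version 420 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out; // max_vertices is required
void main()
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}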
I'm receiving three attributes in the vertex shader and passing them to the fragment shader. If I omit one particular attribute, which is not used in the fragment shader at all, the fragment shader produces invalid output.
I reduced the code to the following simple examples:
A. (correct)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
// it doesn't matter if flat is specified or not for the joint0 (apparently)
// out uvec4 o_Joint0;
flat out vec4 o_Joint0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
mat4 ModelMatrix;
};
void main()
{
o_Joint0=a_Joint0;
o_Weight0=a_Weight0;
vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in uvec4 o_Joint0;
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
f_FinalColor=vec4(0,0,0,1);
f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0+(o_Weight0.z + 1.0) / 4.0;
}
The VS sends the attributes o_Joint0 and o_Weight0 down to the FS, and the fragment shader produces the correct output.
B. (incorrect)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
mat4 ModelMatrix;
};
void main()
{
o_Weight0=a_Weight0;
vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
f_FinalColor=vec4(0,0,0,1);
f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0+(o_Weight0.z + 1.0) / 4.0;
}
The VS sends only the attribute o_Weight0 down to the FS; as you can see, the only thing omitted in both shaders was o_Joint0, yet the fragment shader produces incorrect output.
First, try completely omitting the a_Joint0 variable from the vertex shader (do not feed it to the vertex shader at all, not even as a buffer).
If that does not work, revert your code to before you omitted the variable, confirm it works again, and then try to find out how the variable is actually affecting the fragment shader.
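For the first suggestion, a minimal sketch of shader B with the unused attribute removed completely (assuming the matching glEnableVertexAttribArray/glVertexAttribIPointer calls for a_Joint0 are also dropped on the application side):
//Vertex Shader GLSL - a_Joint0 removed entirely
#version 140
in vec3 a_Position;
in vec4 a_Weight0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
    mat4 ModelMatrix;
};
void main()
{
    o_Weight0 = a_Weight0;
    gl_Position = ModelMatrix * vec4(a_Position, 1.0);
}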
I have attached the shaders below, but I can't find any info on how to use glDrawElements with a geometry shader attached to the shader program.
The program outputs a quad on the screen without the geometry shader; now I'm trying to do the same but with a geometry shader attached.
//In my .cpp file
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
// Vertex shader
#version 440
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_color;
uniform mat4 world_matrix;
uniform mat4 view_matrix;
uniform mat4 projection_matrix;
out vec3 color;
void main() {
color = vertex_color;
gl_Position = projection_matrix*view_matrix* world_matrix *
vec4(vertex_position, 1.0);
}
// Geometry shader
#version 440 core
layout (triangle_strip) in;
layout (triangle_strip, max_vertices = 6) out;
layout(location = 1) in vec3 vertex_color;
out vec3 color;
void main()
{
for(int i = 0; i < gl_in.length(); i++)
{
// copy attributes
gl_Position = gl_in[i].gl_Position;
color=vertex_color;
// done with the vertex
EmitVertex();
}
EndPrimitive();
}
//Fragment shader
#version 440
in vec3 color;
out vec4 fragment_color;
void main () {
fragment_color = vec4 (color, 1.0);
}
See the handy Khronos Group OpenGL wiki page on Shader stage inputs and outputs:
Global variables declared with the in qualifier are shader stage input variables. These variables are given values by the previous stage (possibly via interpolation of values output from multiple shader executions).
Global variables declared with the out qualifier are shader stage output variables. These values are passed to the next stage of the pipeline (possibly via interpolation of values output from multiple shader executions).
Geometry Shader inputs are aggregated into arrays, one per vertex in the primitive. The length of the array depends on the input primitive type used by the GS. Each array index represents a single vertex in the input primitive.
You have a vertex shader, a geometry shader and a fragment shader. In this case the vertex shader is the first shader stage, followed by the geometry shader, and the last shader stage is the fragment shader.
So the input variables of the geometry shader have to match the output variables of the vertex shader, and the input variables of the fragment shader have to match the output variables of the geometry shader.
Further note that the possible input primitive specifiers are points, lines, lines_adjacency, triangles and triangles_adjacency.
See also Geometry Shader - Primitive in/out specification.
This means your code has to look something like this (note that the glDrawElements call itself does not change; the geometry shader simply processes the triangles after primitive assembly):
Vertex shader:
#version 440
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_color;
uniform mat4 world_matrix;
uniform mat4 view_matrix;
uniform mat4 projection_matrix;
out vec3 vert_stage_color;
void main()
{
vert_stage_color = vertex_color;
gl_Position = projection_matrix*view_matrix* world_matrix * vec4(vertex_position, 1.0);
}
Geometry shader:
#version 440 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 6) out;
in vec3 vert_stage_color[];
out vec3 geo_stage_color;
void main()
{
for(int i = 0; i < gl_in.length(); i++)
{
// copy attributes
gl_Position = gl_in[i].gl_Position;
geo_stage_color = vert_stage_color[i];
// done with the vertex
EmitVertex();
}
EndPrimitive();
}
Fragment shader:
#version 440
in vec3 geo_stage_color;
out vec4 fragment_color;
void main ()
{
fragment_color = vec4(geo_stage_color, 1.0);
}
So I have three shaders in my program.
Vertex:
#version 330 core
in vec2 Inpoint;
in vec2 texCoords;
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
void main()
{
TexCoords = texCoords;
gl_Position = projection * model * vec4(Inpoint, 0.0, 1.0);
}
Geometry:
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 4) out;
void main()
{
int i;
for (i = 0; i < gl_in.length(); i++)
{
gl_Position = gl_in[i].gl_Position;
EmitVertex();
}
EndPrimitive();
}
And finally the fragment shader:
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D image;
uniform vec3 spriteColor;
void main()
{
color = vec4(spriteColor, 1.0) * texture(image, TexCoords);
}
Now without the geometry shader, everything displays just fine. But as soon as I include the geometry shader, everything goes ... bad.
It acts like it's not getting the coords for the textures.
So, the question is, does the geometry shader need to pass the data through itself to the fragment shader? I mean the geometry shader is basically doing nothing so it shouldn't. Unless there is some giant mistake I am missing.
I tried to add a pass-through, but it complains that everything needs to be an array, and even when I did make it an array it didn't quite work right.
Quoting the GLSL 3.30 specification, section 4.3.1 Inputs:
Fragment shader inputs get per-fragment values, typically interpolated
from a previous stage's outputs
Having a geometry shader makes it the previous stage. So yes, your FS takes its inputs from your GS and only from it.
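In your case that means the geometry shader has to forward the texture coordinates explicitly. A minimal pass-through sketch (TexCoordsGS is just an illustrative name; the fragment shader's in vec2 TexCoords; would then be renamed to in vec2 TexCoordsGS;):
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec2 TexCoords[];   // one entry per input vertex, matching the VS output name
out vec2 TexCoordsGS;  // must match the renamed FS input
void main()
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        TexCoordsGS = TexCoords[i];   // forward the per-vertex texture coordinate
        EmitVertex();
    }
    EndPrimitive();
}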
I'm trying to translate some old OpenGL code to modern OpenGL. This code is reading data from a texture and displaying it. The fragment shader is currently created using ARB_fragment_program commands:
static const char *gl_shader_code =
"!!ARBfp1.0\n"
"TEX result.color, fragment.texcoord, texture[0], RECT; \n"
"END";
GLuint program_id;
glGenProgramsARB(1, &program_id);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program_id);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, (GLsizei) strlen(gl_shader_code ), (GLubyte *) gl_shader_code );
I'd simply like to translate this into GLSL code. I think the fragment shader should look something like this:
#version 430 core
uniform sampler2DRect s;
void main(void)
{
gl_FragColor = texture2DRect(s, ivec2(gl_FragCoord.xy), 0);
}
But I'm not sure of a couple of details:
Is this the right usage of texture2DRect?
Is this the right usage of gl_FragCoord?
The texture is being fed with a pixel buffer object using GL_PIXEL_UNPACK_BUFFER target.
I think you can just use the standard sampler2D instead of sampler2DRect (if you do not have a real need for it) since, quoting the wiki, "From a modern perspective, they (rectangle textures) seem essentially useless.".
You can then change your texture2DRect(...) to texture(...) or texelFetch(...) (to mimic your rectangle fetching).
Since you seem to be using OpenGL 4, you do not need to (should not ?) use gl_FragColor but instead declare an out variable and write to it.
Your fragment shader should look something like this in the end:
#version 430 core
uniform sampler2D s;
out vec4 out_color;
void main(void)
{
out_color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
}
@Zouch, thank you very much for your response. I took it and worked on this for a bit. My final code was very similar to what you suggested. For the record, the final vertex and fragment shaders I implemented were as follows:
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
UV = vertexUV;
}
Fragment Shader:
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D myTextureSampler;
void main()
{
color = texture(myTextureSampler, UV).rgb;
}
That seemed to work.
I am using indexed rendering and a geometry shader. If I pass gl_VertexID to the geometry shader, it works fine as long as I do not emit any vertices; if I emit one or more vertices, gl_VertexID (passed as any name) is zero. Why?
Using the shaders below, the geometry shader will put the correct indices into my feedback buffer, if and only if I comment out both EmitVertex calls. What am I missing?
(I can work around it, but it is bugging the hell out of me!)
VERTEX SHADER
#version 440
in vec4 position;
out VSOUT{
vec4 gl_Position;
int index;
} vsout;
uniform mat4 gl_ModelViewMatrix;
void main(){
gl_Position = gl_ModelViewMatrix * position;
vsout.index = gl_VertexID;
vsout.gl_Position = gl_Position;
}
GEOMETRY SHADER
#version 440
#extension GL_ARB_shader_storage_buffer_object : enable
layout (lines) in;
layout (line_strip) out;
in VSOUT{
vec4 gl_Position;
int index;
} vdata[];
layout (std430, binding=0) buffer FeedbackBuffer{
vec2 fb[];
};
void main(){
int i = vdata[0].index;
int j = vdata[1].index;
fb[gl_PrimitiveIDIn][0] = vdata[0].index;
fb[gl_PrimitiveIDIn][1] = vdata[1].index;
gl_Position = gl_in[0].gl_Position;
EmitVertex();
gl_Position = gl_in[1].gl_Position;
EmitVertex();
}
FRAGMENT SHADER
#version 430
out vec4 outputColor;
void main(){
outputColor = vec4(.5,.5,.5,.5);
}
So this looks like an nVidia implementation thing. If I run these shaders on a GeForce GTX580, behaviour is as described above. Using an AMD FirePro V5900, it behaves as I'd expect, with the correct values in the feedback buffer whether or not I emit vertices.