OpenGL - Adding Tessellation Control Shader yields black screen

When I add my tessellation control shader to my rendering program, the viewport turns black. Without the TCS, the vertex and fragment shaders work fine. I also checked for compile errors, but none occur.
Vertex shader:
#version 410 core
layout (location = 0) in vec4 offset;
layout (location = 1) in vec4 color;
out VS_OUT {
vec4 color;
} vs_out;
void main(void) {
const vec4 vertices[3] = vec4[3]
(
vec4( 0.25, -0.25, 0.5, 1.0),
vec4(-0.25, -0.25, 0.5, 1.0),
vec4( 0.25, 0.25, 0.5, 1.0)
);
// Add "offset" to our hard-coded vertex position
gl_Position = vertices[gl_VertexID] + offset;
// Output the color from input attrib
vs_out.color = color;
}
Tessellation control shader:
#version 410 core
layout (vertices = 3) out;
void main(void) {
if (gl_InvocationID == 0) {
gl_TessLevelInner[0] = 5.0;
gl_TessLevelOuter[0] = 5.0;
gl_TessLevelOuter[1] = 5.0;
gl_TessLevelOuter[2] = 5.0;
}
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
Tessellation evaluation shader:
#version 410 core
layout (triangles, equal_spacing, cw) in;
void main(void) {
gl_Position = (gl_TessCoord.x * gl_in[0].gl_Position +
gl_TessCoord.y * gl_in[1].gl_Position +
gl_TessCoord.z * gl_in[2].gl_Position);
}
Fragment shader:
#version 410 core
in VS_OUT {
vec4 color;
} fs_in;
out vec4 color;
void main(void) {
color = fs_in.color;
}
I forgot to check for shader linking errors. And this is what I get:
WARNING: Output of vertex shader '<out VS_OUT.color>' not read by tessellation control shader
ERROR: Input of fragment shader '<in VS_OUT.color>' not written by tessellation evaluation shader
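For reference, the link status and log can be retrieved with the standard program queries; a minimal C sketch (assuming a current GL context, with `program` being the linked program object):
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLchar log[1024];
    GLsizei len = 0;
    glGetProgramInfoLog(program, sizeof(log), &len, log);
    fprintf(stderr, "link log: %.*s\n", (int)len, log);  /* needs <stdio.h> */
}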
How can I fix this?

Without the code of the other shaders it's hard to help you.
Make sure your tessellation evaluation shader is correct too. A default one should look like this:
#version 410 core
layout(triangles, equal_spacing, ccw) in;
layout(packed) uniform MatrixBlock
{
mat4 projmat;
mat4 viewmat;
} matTransform;
void main ()
{
vec4 pos = gl_TessCoord.x * gl_in[0].gl_Position
+ gl_TessCoord.y * gl_in[1].gl_Position
+ gl_TessCoord.z * gl_in[2].gl_Position;
gl_Position = matTransform.projmat * matTransform.viewmat * pos;
}
The important part is the interpolation using the barycentric coordinates over the patch triangle. Also, if the transformations are done in your vertex shader instead of the tessellation evaluation shader, you may get strange results.
Edit:
Now that you have added tessellation stages, you can't pass varying data directly from the vertex shader to the fragment shader. There are new triangles inside the original patch triangle, so you have to provide the color for their vertices too. When tessellation stages are used, the vertex shader and the tessellation control shader usually just forward their per-vertex inputs to the tessellation evaluation shader.
So your tessellation control shader should look like this:
#version 410 core
layout (vertices = 3) out;
in VS_OUT { vec4 color; } tcs_in[]; /* new */
out TCS_OUT { vec4 color; } tcs_out[]; /* new */
void main(void) {
if (gl_InvocationID == 0) {
gl_TessLevelInner[0] = 5.0;
gl_TessLevelOuter[0] = 5.0;
gl_TessLevelOuter[1] = 5.0;
gl_TessLevelOuter[2] = 5.0;
}
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
tcs_out[gl_InvocationID].color = tcs_in[gl_InvocationID].color; /* forward the data */
}
And your tessellation evaluation shader must also interpolate the color:
#version 410 core
layout (triangles, equal_spacing, cw) in;
in TCS_OUT { vec4 color; } tes_in[]; /* new */
out TES_OUT { vec4 color; } tes_out; /* new */
void main(void) {
tes_out.color = (gl_TessCoord.x * tes_in[0].color + /* Interpolation */
gl_TessCoord.y * tes_in[1].color +
gl_TessCoord.z * tes_in[2].color );
gl_Position = (gl_TessCoord.x * gl_in[0].gl_Position +
gl_TessCoord.y * gl_in[1].gl_Position +
gl_TessCoord.z * gl_in[2].gl_Position);
}
And of course, in your fragment shader you now read from a TES_OUT block instead of VS_OUT.
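For example, a matching fragment shader would look roughly like this (only the block name changes compared to the original):
#version 410 core
in TES_OUT { vec4 color; } fs_in;
out vec4 color;
void main(void) {
    color = fs_in.color;
}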

I know this question is two years old now, but this might help people in the future who run into the same issue and find this question.
After many hours of trying I figured out the problem. It seems as though the gl_in[].gl_Position inputs of the Tessellation Control Shader are not written to by the vertex shader. I suspect this must be a driver bug (maybe in the NVidia drivers?) because I cannot think of any reason this shouldn't work.
Solution:
Instead of relying on the gl_in[].gl_Position inputs of the Tessellation Control Shader just pass them yourself in a custom output/input.
This can be done by (roughly) adding the following lines to the respective shaders:
// vertex shader
// ...
out vec4 vVertexOut;
void main() {
// ...
vVertexOut = uMVPMatrix * inVertex; // output your transformed vertex
}
// tessellation control shader
// ...
in vec4 vVertexOut[];
out vec4 tVertexOut[];
void main() {
// ...
tVertexOut[gl_InvocationID] = vVertexOut[gl_InvocationID];
}
// tessellation evaluation shader
// ...
in vec4 tVertexOut[];
void main() {
// ...
gl_Position = (tVertexOut[0] * gl_TessCoord[0]) + (tVertexOut[1] * gl_TessCoord[1]) + (tVertexOut[2] * gl_TessCoord[2]);
}

Related

OpenGL line width geometry shader

I am trying to implement a geometry shader for line thickness using OpenGL 4.3.
I followed the accepted answer and other solutions given on Stack Overflow, but the result is wrong, as the screenshot shows. Is there a proper way to get the line's normal in screen space? It looks correct in the first frame, but the moment I move the mouse the camera changes and the offset direction is no longer correct. The shader's camera matrix is updated every frame in the render loop.
GLSL Geometry shader to replace glLineWidth
Vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 projection_view_model;
void main()
{
gl_Position = projection_view_model * vec4(aPos, 1.0);
}
Fragment shader
#version 330 core
//resources:
//https://stackoverflow.com/questions/6017176/gllinestipple-deprecated-in-opengl-3-1
out vec4 FragColor;
uniform vec4 uniform_fragment_color;
void main()
{
FragColor = uniform_fragment_color;
}
Geometry shader
#version 330 core
layout (lines) in;
layout(triangle_strip, max_vertices = 4) out;
uniform float u_thickness ;
uniform vec2 u_viewportSize ;
in gl_PerVertex
{
vec4 gl_Position;
//float gl_PointSize;
//float gl_ClipDistance[];
} gl_in[];
void main() {
//https://stackoverflow.com/questions/54686818/glsl-geometry-shader-to-replace-gllinewidth
vec4 p1 = gl_in[0].gl_Position;
vec4 p2 = gl_in[1].gl_Position;
vec2 dir = normalize((p2.xy - p1.xy) * u_viewportSize);
vec2 offset = vec2(-dir.y, dir.x) * u_thickness*100 / u_viewportSize;
gl_Position = p1 + vec4(offset.xy * p1.w, 0.0, 0.0);
EmitVertex();
gl_Position = p1 - vec4(offset.xy * p1.w, 0.0, 0.0);
EmitVertex();
gl_Position = p2 + vec4(offset.xy * p2.w, 0.0, 0.0);
EmitVertex();
gl_Position = p2 - vec4(offset.xy * p2.w, 0.0, 0.0);
EmitVertex();
EndPrimitive();
}
To get the direction of the line in normalized device space, the x and y components of the clip-space coordinates must be divided by the w component (perspective divide). So replace
vec2 dir = normalize((p2.xy - p1.xy) * u_viewportSize);
with
vec2 dir = normalize((p2.xy / p2.w - p1.xy / p1.w) * u_viewportSize);
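In context, the beginning of main() in the geometry shader above would then read roughly like this:
vec4 p1 = gl_in[0].gl_Position;
vec4 p2 = gl_in[1].gl_Position;
// perspective divide: compute the direction from the NDC positions
vec2 dir = normalize((p2.xy / p2.w - p1.xy / p1.w) * u_viewportSize);
vec2 offset = vec2(-dir.y, dir.x) * u_thickness*100 / u_viewportSize;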

Why is the output from the vertex shader corrupted when an unused output is not passed? (OpenGL/GLSL)

I'm receiving three attributes in the vertex shader and passing them to the fragment shader. If I omit one particular channel that is not used in the fragment shader at all, the fragment shader produces invalid output.
I reduced the code to the following simple examples:
A. (correct)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
// it doesn't matter if flat is specified or not for the joint0 (apparently)
// out uvec4 o_Joint0;
flat out vec4 o_Joint0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
mat4 ModelMatrix;
};
void main()
{
o_Joint0=a_Joint0;
o_Weight0=a_Weight0;
vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in uvec4 o_Joint0;
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
f_FinalColor=vec4(0,0,0,1);
f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0+(o_Weight0.z + 1.0) / 4.0;
}
The VS sends the attributes o_Joint0 and o_Weight0 down to the FS, and the fragment shader produces the correct output.
B. (incorrect)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
mat4 ModelMatrix;
};
void main()
{
o_Weight0=a_Weight0;
vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
f_FinalColor=vec4(0,0,0,1);
f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0+(o_Weight0.z + 1.0) / 4.0;
}
The VS sends only the attribute o_Weight0 down to the FS; as you can see, the only thing omitted in both shaders is o_Joint0, yet the fragment shader now produces incorrect output.
First, try completely omitting the a_Joint0 attribute from the vertex shader (do not load it into the vertex shader at all, not even as a buffer).
If this does not work, try reverting your code to before you omitted the variable and see if it works again, and then try to find out how it is actually affecting the fragment shader.
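One way to investigate is to list the active vertex attributes and their locations after linking each variant and compare them (a minimal C sketch using standard GL queries; `program` is assumed to be the linked program):
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i) {
    GLchar name[64];
    GLint size = 0;
    GLenum type = 0;
    glGetActiveAttrib(program, (GLuint)i, sizeof(name), NULL, &size, &type, name);
    printf("attribute %s: location %d, type 0x%04X\n",
           name, glGetAttribLocation(program, name), (unsigned)type);
}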

GLSL shaders going black/transparent

I am trying to add some shaders to my GLUT scene objects.
At the moment I am just trying to implement "hello world" shaders, but when I use the default vertex shader, my objects disappear.
shaders:
#define GLSL(version, shader) "#version " #version " core\n" #shader
const char* vert = GLSL
(
330,
layout (std140) uniform Matrices {
mat4 pvm;
} ;
in vec4 position;
out vec4 color;
void main()
{
color = position;
gl_Position = pvm * position ;
}
);
const char* frag = GLSL
(
330,
in vec4 color;
out vec4 outputF;
void main()
{
outputF = vec4(1.0, 0.5, 0.25, 1.0);
}
);
Compilation shows no error:
Compiling shader : vertex shader
VERTEX STATUS:1
Compiling shader : fragment shader
FRAGMENT STATUS:1
Linking program
PROGRAM STATUS:1
PROGRAM ID : 3
(Screenshots: before calling glUseProgram, after calling glUseProgram, and after calling glUseProgram without attaching the vertex shader.)
CODE for rendering:
int opengl_draw_path_gl(rendered_path_t *p) {
unsigned int num_vertices,j;
unsigned int face_size;
unsigned long i,num_elems;
vect_t *a,*b;
num_elems=p->num_prisms;
num_vertices=p->prism_faces;
face_size=num_vertices*2;
a=p->data+2; // skip the center point of the first face
b=a+face_size;
glColor4fv(p->color);
// draw the start cap
_opengl_draw_path_terminator(num_vertices,p->data,a);
// draw all the prisms
glBegin(GL_TRIANGLE_STRIP);
for(i=0;i<num_elems;i++) {
for(j=0;j<num_vertices;j++) {
glNormal3fv((GLfloat *)(a+j*2));
glVertex3fv((GLfloat *)(a+j*2+1));
glNormal3fv((GLfloat *)(b+j*2));
glVertex3fv((GLfloat *)(b+j*2+1));
}
glNormal3fv((GLfloat *)(a));
glVertex3fv((GLfloat *)(a+1));
glNormal3fv((GLfloat *)(b));
glVertex3fv((GLfloat *)(b+1));
a+=face_size;
b+=face_size;
}
glEnd();
// draw the end cap
_opengl_draw_path_terminator(num_vertices,b,a);
return 0;
}
First of all, I recommend reading a tutorial about vertex array objects.
But since you are drawing with glBegin and glEnd, which is deprecated, you have to use compatibility-mode shaders. You have to use the deprecated built-in attributes gl_Vertex and gl_Normal, which correspond to the OpenGL commands glVertex3fv and glNormal3fv.
Adapt your code somewhat like this:
#define GLSL(version, shader) "#version " #version "\n" #shader
Vertex shader:
const char* vert = GLSL
(
110,
varying vec4 position;
varying vec3 normal;
void main()
{
position = gl_ModelViewMatrix * gl_Vertex;
normal = normalize( gl_NormalMatrix * gl_Normal.xyz );
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
);
Fragment shader:
const char* frag = GLSL
(
110,
varying vec4 position;
varying vec3 normal;
void main()
{
gl_FragColor = vec4(1.0, 0.5, 0.25, 1.0);
}
);

Conditional output from geometry-shader GLSL

I am trying to figure out how to switch outputs in the geometry shader, specifically these two outputs:
layout(points, max_vertices = 1) out; // OUTPUT 1
layout(triangle_strip, max_vertices = 4) out; // OUTPUT 2
I am rendering star clusters and have a dataset of 100,000+ stars, drawn as sprites generated in the geometry shader. However, the sprite "halos" should only appear when we move closer to the target star.
Geometry Shader
#version 440
const vec2 corners[4] = {
vec2(0.0, 1.0),
vec2(0.0, 0.0),
vec2(1.0, 1.0),
vec2(1.0, 0.0)
};
layout(points) in;
layout(points, max_vertices = 1) out;
//layout(triangle_strip, max_vertices = 4) out;
uniform vec4 campos;
float spriteSize = 0.000005; // set here for now.
out vec2 texCoord;
void main(){
float distToPoint = 1; //(1.f/((length(gl_in[0].gl_Position - campos)) ));
if(distToPoint == 1){ // dumb condition just to illustrate my point
// EMIT QUAD
for(int i=0; i<4; ++i){
vec4 pos = gl_in[0].gl_Position;
pos.xy += spriteSize *(corners[i] - vec2(0.5));
gl_Position = pos;
texCoord = corners[i];
EmitVertex();
}
EndPrimitive();
}else{
// EMIT POINT
gl_Position = gl_in[0].gl_Position;
EmitVertex();
EndPrimitive();
}
}
Fragment Shader
void main(void){
... // First I do a bunch of depth computation, irrelevant to this question
gl_FragDepth = depth;
// Here I want to switch between these two outputs!
//diffuse = texture2D(texture1, texCoord); // OUTPUT 1
diffuse = vec4(Color, 1.0); // OUTPUT 2
}
I have recently come across streams for geometry shaders, but I can't find a decent example of how this can be done in my case.
Please let me know if more code needs to be posted.
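One possible sketch of doing this without switching the output primitive type: always emit the quad from the geometry shader and pass a flat per-primitive flag down to the fragment shader, which then selects between the two outputs (the useHalo flag and the haloDistance uniform are placeholders, not from the original code):
#version 440
const vec2 corners[4] = {
    vec2(0.0, 1.0),
    vec2(0.0, 0.0),
    vec2(1.0, 1.0),
    vec2(1.0, 0.0)
};
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform vec4 campos;
uniform float haloDistance;   // placeholder threshold
float spriteSize = 0.000005;
out vec2 texCoord;
flat out int useHalo;         // 1 = textured halo sprite, 0 = plain star color
void main(){
    // distance measure as in the original (adjust to your coordinate space)
    float distToPoint = length(gl_in[0].gl_Position - campos);
    int halo = (distToPoint < haloDistance) ? 1 : 0;
    for(int i=0; i<4; ++i){
        vec4 pos = gl_in[0].gl_Position;
        pos.xy += spriteSize * (corners[i] - vec2(0.5));
        gl_Position = pos;
        texCoord = corners[i];
        useHalo = halo;       // outputs must be written before each EmitVertex()
        EmitVertex();
    }
    EndPrimitive();
}
The fragment shader then branches on the flag instead of on a commented-out line:
flat in int useHalo;
// ...
diffuse = (useHalo == 1) ? texture(texture1, texCoord) : vec4(Color, 1.0);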

GLSL geometry shader requires glProgramParameteriEXT regardless of layout

I created a basic quad drawing shader using a single point and a geometry shader.
I've read many posts and articles suggesting that I would not need to use glProgramParameteriEXT and could use the layout keyword so long as I was using a shader #version 150 or higher. Some suggested #version 400 or #version 420. My computer will not support #version 420 or higher.
If I use only layout and #version 150 or higher, nothing draws. If I remove layout (or even keep it; it does not seem to care because it will compile) and use glProgramParameteriEXT, it renders.
In code, this does nothing:
layout (points) in;
layout (triangle_strip, max_vertices=4) out;
This is the only code that works:
glProgramParameteriEXT( id, GL_GEOMETRY_INPUT_TYPE_EXT, GL_POINTS );
glProgramParameteriEXT( id, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP );
glProgramParameteriEXT( id, GL_GEOMETRY_VERTICES_OUT_EXT, 4 );
The alternative is to write a parser that reads these parameters from the shader source and sets them via glProgramParameteriEXT.
Source for quad rendering via geometry shader:
#version 330
#ifdef VERTEX_SHADER
in vec4 aTexture0;
in vec4 aColor;
in mat4 aMatrix;
out vec4 gvTex0;
out vec4 gvColor;
out mat4 gvMatrix;
void main()
{
// Texture color
gvTex0 = aTexture0;
// Vertex color
gvColor = aColor;
// Matrix
gvMatrix = aMatrix;
}
#endif
#ifdef GEOMETRY_SHADER
layout (points) in;
layout (triangle_strip, max_vertices=4) out;
in vec4 gvTex0[1];
in vec4 gvColor[1];
in mat4 gvMatrix[1];
out vec2 vTex0;
out vec4 vColor;
void main()
{
vColor = gvColor[0];
// Top right.
//
gl_Position = gvMatrix[0] * vec4(1, 1, 0, 1);
vTex0 = vec2(gvTex0[0].z, gvTex0[0].y);
EmitVertex();
// Top left.
//
gl_Position = gvMatrix[0] * vec4(-1, 1, 0, 1);
vTex0 = vec2(gvTex0[0].x, gvTex0[0].y);
EmitVertex();
// Bottom right.
//
gl_Position = gvMatrix[0] * vec4(1, -1, 0, 1);
vTex0 = vec2(gvTex0[0].z, gvTex0[0].w);
EmitVertex();
// Bottom left.
//
gl_Position = gvMatrix[0] * vec4(-1, -1, 0, 1);
vTex0 = vec2(gvTex0[0].x, gvTex0[0].w);
EmitVertex();
EndPrimitive();
}
#endif
#ifdef FRAGMENT_SHADER
uniform sampler2D tex0;
in vec2 vTex0;
in vec4 vColor;
out vec4 vFragColor;
void main()
{
vFragColor = clamp(texture2D(tex0, vTex0) * vColor, 0.0, 1.0);
}
#endif
I am looking for suggestions as to why something like this might happen.
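One way to narrow this down is to ask the linked program which geometry parameters it actually ended up with (standard glGetProgramiv queries available since GL 3.2; `id` is the program handle used above, and the query is only meaningful after a successful link):
GLint inType = 0, outType = 0, maxVerts = 0;
glGetProgramiv(id, GL_GEOMETRY_INPUT_TYPE, &inType);
glGetProgramiv(id, GL_GEOMETRY_OUTPUT_TYPE, &outType);
glGetProgramiv(id, GL_GEOMETRY_VERTICES_OUT, &maxVerts);
printf("geometry input 0x%04X, output 0x%04X, max vertices %d\n",
       (unsigned)inType, (unsigned)outType, maxVerts);
// with the layout qualifiers taking effect, this should report
// GL_POINTS (0x0000), GL_TRIANGLE_STRIP (0x0005) and 4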