OpenGL - Layered Rendering Cube Only Renders the First Face [duplicate] - c++

I'm trying to draw to a cubemap in a single pass using a geometry shader in OpenGL.
Basically, I need to do this to copy the contents of one cubemap into another, and the two may not have the same resolution and pixel layout.
I'm trying to achieve this by feeding a single point to the vertex shader and then, in the geometry shader, selecting each layer (face of the cubemap) and emitting a quad with texture coordinates.
So far I've tried this method emitting only two of the cubemap faces (positive and negative X) to see whether it works, but it doesn't.
Using NSight I can see that there is something wrong.
This is the source cubemap:
And this is the result cubemap:
The only face that gets drawn to is positive X, and even that is not correct.
This is my geometry shader:
#version 330 core

layout(points) in;
layout(triangle_strip, max_vertices = 8) out;

in vec3 pos[];
out vec3 frag_textureCoord;

void main()
{
    const vec4 positions[4] = vec4[4]( vec4(-1.0, -1.0, 0.0, 0.0),
                                       vec4( 1.0, -1.0, 0.0, 0.0),
                                       vec4(-1.0,  1.0, 0.0, 0.0),
                                       vec4( 1.0,  1.0, 0.0, 0.0) );

    // Positive X
    gl_Layer = 0;
    gl_Position = positions[0];
    frag_textureCoord = vec3(1.0, -1.0, -1.0);
    EmitVertex();

    gl_Position = positions[1];
    frag_textureCoord = vec3(1.0, -1.0, 1.0);
    EmitVertex();

    gl_Position = positions[2];
    frag_textureCoord = vec3(1.0, 1.0, -1.0);
    EmitVertex();

    gl_Position = positions[3];
    frag_textureCoord = vec3(1.0, 1.0, 1.0);
    EmitVertex();

    EndPrimitive();

    // Negative X
    gl_Layer = 1;
    gl_Position = positions[0];
    frag_textureCoord = vec3(-1.0, -1.0, 1.0);
    EmitVertex();

    gl_Position = positions[1];
    frag_textureCoord = vec3(-1.0, -1.0, -1.0);
    EmitVertex();

    gl_Position = positions[2];
    frag_textureCoord = vec3(-1.0, 1.0, 1.0);
    EmitVertex();

    gl_Position = positions[3];
    frag_textureCoord = vec3(-1.0, 1.0, -1.0);
    EmitVertex();

    EndPrimitive();
}
And this is my fragment shader:
#version 150 core

uniform samplerCube AtmosphereMap;

in vec3 frag_textureCoord;
out vec4 FragColor;

void main()
{
    FragColor = texture(AtmosphereMap, frag_textureCoord) * 1.0f;
}
UPDATE
Further debugging with NSight shows that for the positive X face every fragment gets a frag_textureCoord value of approximately vec3(1.0, 0.0, 0.0) (the values are not exactly those, but close). The negative X face never reaches the fragment shader stage at all.
UPDATE
Changing the definition of my vertex positions from vec4(x, y, z, 0.0) to vec4(x, y, z, 1.0) makes my shader render the positive X face correctly, but the negative one is still wrong: when debugging the fragment shader I can see that the right color is selected and applied, but the output then becomes black.

gl_Layer = 0;
This is a Geometry Shader output. Calling EmitVertex will cause the value of all output variables to become undefined. Therefore, you must always set each output for each vertex to which that output applies.
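In practice this means gl_Layer (and every other output) has to be written before each EmitVertex call, and, per the second update, the quad's positions need w = 1.0. A minimal sketch of a corrected geometry shader, keeping the question's texture coordinates; the emitQuadVertex helper is illustrative, not part of the original code:
#version 330 core

layout(points) in;
layout(triangle_strip, max_vertices = 8) out;

out vec3 frag_textureCoord;

const vec4 positions[4] = vec4[4]( vec4(-1.0, -1.0, 0.0, 1.0),   // w = 1.0, not 0.0
                                   vec4( 1.0, -1.0, 0.0, 1.0),
                                   vec4(-1.0,  1.0, 0.0, 1.0),
                                   vec4( 1.0,  1.0, 0.0, 1.0) );

// Illustrative helper: writes *all* outputs, gl_Layer included, before
// every EmitVertex, since EmitVertex leaves them undefined afterwards.
void emitQuadVertex(int layer, int corner, vec3 texCoord)
{
    gl_Layer          = layer;
    gl_Position       = positions[corner];
    frag_textureCoord = texCoord;
    EmitVertex();
}

void main()
{
    // Positive X
    emitQuadVertex(0, 0, vec3( 1.0, -1.0, -1.0));
    emitQuadVertex(0, 1, vec3( 1.0, -1.0,  1.0));
    emitQuadVertex(0, 2, vec3( 1.0,  1.0, -1.0));
    emitQuadVertex(0, 3, vec3( 1.0,  1.0,  1.0));
    EndPrimitive();

    // Negative X
    emitQuadVertex(1, 0, vec3(-1.0, -1.0,  1.0));
    emitQuadVertex(1, 1, vec3(-1.0, -1.0, -1.0));
    emitQuadVertex(1, 2, vec3(-1.0,  1.0,  1.0));
    emitQuadVertex(1, 3, vec3(-1.0,  1.0, -1.0));
    EndPrimitive();
}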

Related

Draw a cubemap in a single pass OpenGL

OpenGL layered rendering interferes with layer 0

I am using gl_Layer = gl_InvocationID; in a geometry shader to render into a framebuffer with a 3D texture attached.
This mostly works fine, except that every invocation of the shader also renders into layer 0, as well as the layer that I specify.
How can I avoid this? Is there something vital I'm missing with setting up the framebuffer? Perhaps with glFramebufferTexture?
Geometry Shader
#version 400

layout(invocations = 32) in;
layout(points) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 raster_color;

float blue;

void main()
{
    gl_Layer = gl_InvocationID;
    blue = float(gl_InvocationID) / 31.0;

    gl_Position = vec4( -1.0, -1.0, 0.0, 1.0 );
    raster_color = vec3( 0.0, 0.0, blue );
    EmitVertex();

    gl_Position = vec4( 1.0, -1.0, 0.0, 1.0 );
    raster_color = vec3( 1.0, 0.0, blue );
    EmitVertex();

    gl_Position = vec4( 0.0, 1.0, 0.0, 1.0 );
    raster_color = vec3( 1.0, 1.0, blue );
    EmitVertex();

    EndPrimitive();
}
Fragment Shader
#version 400

in vec3 raster_color;
out vec4 fragment_color;

void main()
{
    fragment_color = vec4( raster_color, 1.0 );
}
EmitVertex invalidates all per-vertex outputs after it returns. The most obvious per-vertex outputs in this shader are:
raster_color
gl_Position
But you may not have realized that gl_Layer is also per-vertex, or which vertex it needs to be set for.
gl_Layer will be undefined for every vertex after the first in this shader. Some drivers will leave it untouched and simply work; others will do anything they want with it, and you cannot make any assumptions about gl_Layer after EmitVertex (...). You are playing with fire, because it may not be the first vertex that defines a primitive's layer (more on this later).
To fix this, re-write your geometry shader this way:
#version 400

layout(invocations = 32) in;
layout(points) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 raster_color;

float blue;

void main()
{
    blue = float(gl_InvocationID) / 31.0;

    gl_Position = vec4( -1.0, -1.0, 0.0, 1.0 );
    raster_color = vec3( 0.0, 0.0, blue );
    gl_Layer = gl_InvocationID; // Handle case where First Vertex is Layer Provoking
    EmitVertex();

    gl_Position = vec4( 1.0, -1.0, 0.0, 1.0 );
    raster_color = vec3( 1.0, 0.0, blue );
    gl_Layer = gl_InvocationID; // Handle case where Layer Provoking vertex is Undefined
    EmitVertex();

    gl_Position = vec4( 0.0, 1.0, 0.0, 1.0 );
    raster_color = vec3( 1.0, 1.0, blue );
    gl_Layer = gl_InvocationID; // Handle case where Last Vertex is Layer Provoking
    EmitVertex();

    EndPrimitive();
}
I would like to take this opportunity to point out that only one vertex in a primitive needs to have gl_Layer set; this vertex is called the Layer Provoking Vertex. Your shader assumes that the first vertex is the layer provoking vertex, but this is implementation-specific. When in doubt, the best solution is to cover all bases (set gl_Layer for all vertices).
You need to check GL_LAYER_PROVOKING_VERTEX at run-time to figure out which vertex defines your layer. If you do not want to do that, you can write your shader the way described above. Provoking vertex conventions are usually first or last, but the way Geometry Shaders work leaves open the possibility that any arbitrary vertex could define the layer (GL_UNDEFINED_VERTEX, and this is the case you should assume).
Turned out it was not a problem with gl_Layer. It was simply an error in my glTexParameter call that was causing my resulting 3D texture to repeat rather than clamp to edges.

Write positions to texture OpenGL/GLSL

I want to write the model-space vertex positions of a 3D mesh to a texture in OpenGL. Currently, in order to write to a texture, I render a fullscreen quad to it in a separate pass (based on the tutorial seen here).
The problem is that, from what I understand, I cannot pass more than one vertex list to the shader, as the vertex shader can only be bound to one vertex list at a time, and that slot is currently occupied by the screen-space quad.
Vertex Shader code:
layout(location = 0) in vec4 in_position;

out vec4 vs_position;

void main() {
    vs_position = in_position;
    gl_Position = vec4(in_position.xy, 0.0, 1.0);
}
Fragment Shader code:
in vec4 position; // coordinate in the screenspace quad
out vec4 outColor;

void main() {
    vec2 uv = vec2(0.5, 0.5) * position.xy + vec2(0.5, 0.5);
    outColor = ?? // Here I need my vertex position
}
Possible solution (?):
My idea was to introduce another shader pass before this one to output the positions as r, g, b, so that the position of the current texel can be retrieved from the texture (the only input format large enough to store many vertices).
Although not 100% accurate, this image might give you an idea of what I want to do:
Model space coordinate map
Is there a way to encode the positions to the texture without using a fullscreen quad on the GPU?
Please let me know if you want to see more code.
Instead of generating the quad CPU-side, I would attach a geometry shader and create the quad there; that should free up the slot for your model geometry to be passed in.
Geometry shader:
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

out vec2 texcoord;

void main()
{
    gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 1.0 );
    EmitVertex();

    gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 1.0 );
    EmitVertex();

    gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 0.0 );
    EmitVertex();

    gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 0.0 );
    EmitVertex();

    EndPrimitive();
}

Sun shader not working

I'm trying to get a sun shader to work, but I can't.
What I currently get is a quarter of a circle/ellipse in the lower left of my screen that is stuck to the screen (if I move the camera, it moves too).
All I do is render two triangles to form a screen-covering quad, with the screen width and height as uniforms.
Vertex Shader
#version 430 core

void main(void) {
    const vec4 vertices[6] = vec4[](
        vec4(-1.0, -1.0, 1.0, 1.0),
        vec4(-1.0,  1.0, 1.0, 1.0),
        vec4( 1.0,  1.0, 1.0, 1.0),
        vec4( 1.0,  1.0, 1.0, 1.0),
        vec4( 1.0, -1.0, 1.0, 1.0),
        vec4(-1.0, -1.0, 1.0, 1.0)
    );
    gl_Position = vertices[gl_VertexID];
}
Fragment Shader
#version 430 core

layout(location = 7) uniform int screen_width;
layout(location = 8) uniform int screen_height;
layout(location = 1) uniform mat4 view_matrix;
layout(location = 2) uniform mat4 proj_matrix;

out vec4 color;

uniform vec3 light_pos = vec3(-20.0, 7.5, -20.0);

void main(void) {
    // calculate light position in screen space and get x, y components
    vec2 screen_space_light_pos = (proj_matrix * view_matrix * vec4(light_pos, 1.0)).xy;

    // calculate fragment position in screen space
    vec2 screen_space_fragment_pos = vec2(gl_FragCoord.x / screen_width, gl_FragCoord.y / screen_height);

    // check if it is in the radius of the sun
    if (length(screen_space_light_pos - screen_space_fragment_pos) < 0.1) {
        color = vec4(1.0, 1.0, 0.0, 1.0);
    }
    else {
        discard;
    }
}
What I think it does:
Get the position of the sun (light_pos) in screen space.
Get the fragment position in screen space.
If the distance between them is below a certain value, draw the fragment in yellow;
else, discard it.
screen_space_light_pos is still in clip space, not in normalized device coordinates; you've missed the perspective division:
vec3 before_division = (proj_matrix * view_matrix * vec4(light_pos, 1.0)).xyw;
vec2 screen_space_light_pos = before_division.xy / before_division.z;
With common proj_matrix configurations, screen_space_light_pos will be in [-1,1]x[-1,1]. To match screen_space_fragment_pos range, you probably need to adjust screen_space_light_pos:
screen_space_light_pos = screen_space_light_pos * 0.5 + 0.5;
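Putting both fixes into the question's shader (with the same uniform and output declarations), main would look something like the sketch below; dividing clip_pos.xy by clip_pos.w directly is equivalent to the .xyw/.z swizzle above:
void main(void) {
    // Clip-space position of the sun
    vec4 clip_pos = proj_matrix * view_matrix * vec4(light_pos, 1.0);

    // Perspective division: clip space -> normalized device coordinates in [-1,1]
    vec2 ndc_light_pos = clip_pos.xy / clip_pos.w;

    // Remap from [-1,1] to [0,1] to match screen_space_fragment_pos
    vec2 screen_space_light_pos = ndc_light_pos * 0.5 + 0.5;

    vec2 screen_space_fragment_pos = vec2(gl_FragCoord.x / screen_width,
                                          gl_FragCoord.y / screen_height);

    if (length(screen_space_light_pos - screen_space_fragment_pos) < 0.1) {
        color = vec4(1.0, 1.0, 0.0, 1.0);   // inside the sun's disc
    } else {
        discard;
    }
}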

How to optimize a color gradient shader?

I have created this simple fragment shader to achieve a vertical color gradient effect, but I find it taxing for my mobile device in full screen.
Is there any way to optimize this?
Here is the link to the code:
http://glsl.heroku.com/e#13541.0
You could do something like this instead.
vec2 position = (gl_FragCoord.xy / resolution.xy);
vec4 top = vec4(1.0, 0.0, 1.0, 1.0);
vec4 bottom = vec4(1.0, 1.0, 0.0, 1.0);
gl_FragColor = vec4(mix(bottom, top, position.y));
Example
You can change the colors yourself; I just used random ones.
You can go even further and eliminate calculating the x component, but that's probably overkill:
vec4 top = vec4(1.0, 0.0, 1.0, 1.0);
vec4 bottom = vec4(1.0, 1.0, 0.0, 1.0);
gl_FragColor = vec4(mix(bottom, top, (gl_FragCoord.y / resolution.y)));
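For reference, a complete shader in the glsl.heroku.com style (which, as in the snippets above, supplies the resolution uniform) might look like this sketch:
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 resolution;   // supplied by the sandbox environment

void main(void) {
    vec4 top    = vec4(1.0, 0.0, 1.0, 1.0);
    vec4 bottom = vec4(1.0, 1.0, 0.0, 1.0);
    // One mix per fragment: blend bottom -> top by the normalized y coordinate
    gl_FragColor = mix(bottom, top, gl_FragCoord.y / resolution.y);
}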