Ok so I have a framebuffer with a bunch of attachments: Color, Bloom, Velocity and Depth.
I start by clearing the framebuffer to the values of my choosing with the following code.
// Clear Color Buffer
float colorDefaultValue[4] = { 0.5, 0.5, 0.5, 1.0 };
glClearBufferfv(GL_COLOR, 0, colorDefaultValue);
// Clear Bloom Buffer
float bloomDefaultValue[4] = { 0.0, 0.0, 1.0, 1.0 };
glClearBufferfv(GL_COLOR, 1, bloomDefaultValue);
// Clear Depth Buffer
float depth[1] = { 1.0 };
glClearBufferfv(GL_DEPTH, 0, depth);
Then I proceed to render the scene using my main shader.
As you can see in the code below, I've specified the outputs.
// Layouts
layout (location = 0) out vec4 o_vFragColor;
layout (location = 1) out vec4 o_vBloomColor;
And then I output values to them in the fragment shader.
o_vFragColor = vec4(color, alpha);
float brightness = dot(o_vFragColor.rgb, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luma weights
if(brightness > 1.0)
o_vBloomColor = vec4(o_vFragColor.rgb, 1.0);
Now, my question is: If a fragment is not bright enough, why does it output black to the bloom attachment? I haven't specified for it to output anything yet it adjusts it anyway.
For example, if I clear the bloom buffer to green
// Clear Bloom Buffer
float bloomDefaultValue[4] = { 0.0, 1.0, 0.0, 1.0 };
glClearBufferfv(GL_COLOR, 1, bloomDefaultValue);
and I don't output any value to the bloom attachment in the fragment shader, I get black in the bloom buffer. You can see it in the following image.
(image: BloomBuffer)
The green parts of the image are where no geometry was drawn. The black parts contained geometry that was not bright enough, and therefore should not have written anything to the bloom buffer, yet they clearly did output something: black, in this case.
The purple parts are fragments that are beyond the brightness threshold and working as intended.
What's with the black?
You cannot write to a color buffer conditionally from within a fragment shader.
An output fragment always has values for each active color buffer, even if you didn't write them. This is true even if you don't declare a variable for that output location. If no value gets written to a particular fragment output, then the value used will be undefined. But it will be something.
You can do conditional writing with color masking (glColorMaski), but that is per-draw-call state, not something that can be changed from within the fragment shader. Alternatively, you can employ blending on that color buffer, using an alpha of 0 to mean "don't write" and an alpha of 1 to replace what's there.
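A minimal sketch of the blending approach, assuming the bloom attachment is at index 1 and that you enable blending for that buffer on the application side with glEnablei(GL_BLEND, 1) and glBlendFunci(1, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) (requires GL 4.0 for the indexed blend func). The fragment shader then always writes the output, and the alpha channel selects between "replace" and "keep":

```glsl
// Always write o_vBloomColor; with per-buffer blending enabled on
// attachment 1, alpha 1.0 replaces the buffer contents and alpha 0.0
// keeps whatever is already there (e.g. the clear color).
if (brightness > 1.0)
    o_vBloomColor = vec4(o_vFragColor.rgb, 1.0); // replace
else
    o_vBloomColor = vec4(0.0, 0.0, 0.0, 0.0);    // keep existing value
```

Note that with this blend function the destination alpha is still touched, so it's a sketch for the RGB-keep behaviour rather than a byte-exact "no write".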
I have access to a depth camera's output. I want to visualise this in OpenGL using a compute shader.
The depth feed is given as a frame, and I know the width and height ahead of time. How do I sample the texture and retrieve the depth value in the shader? Is this possible? I've read through the OpenGL types here and can't find anything on unsigned shorts, so I am starting to worry. Are there any workarounds?
My current compute shader
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
uniform float width;
uniform float height;
uniform sampler2D depth_feed;
void main() {
// normalized sample coordinates from the global invocation ID
vec2 sample_coords = ivec2(gl_GlobalInvocationID.xy) / vec2(width, height);
float visibility = texture(depth_feed, sample_coords).r;
vec4 pixel = vec4(1.0, 1.0, 0.0, visibility);
// output to a specific pixel in the image
imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), pixel);
}
The depth texture definition is as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, nullptr);
Currently my code produces a plain yellow screen.
If you use perspective projection, then the depth value is not linear. See LearnOpenGL - Depth testing.
If all the depth values are near 0.0, and you use the following expression:
vec4 pixel = vec4(vec3(visibility), 1.0);
then all the pixels appear almost black. Actually the pixels are not completely black, but the difference is barely noticeable.
This happens when the far plane is "too" far away. To verify this, you can compute a power of 1.0 - visibility to make the different depth values recognizable. For instance:
float exponent = 5.0;
vec4 pixel = vec4(vec3(pow(1.0-visibility, exponent)), 1.0);
If you want a more sophisticated solution, you can linearize the depth values as explained in the answer to How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
Please note that for a satisfactory visualization you should use the entire range of the depth buffer ([0.0, 1.0]). The geometry must be between the near and far planes, but try to move the near and far planes as close to the geometry as possible.
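As a sketch of such a linearization for a standard perspective projection (assuming you add near and far uniforms matching the camera that produced the depth values):

```glsl
uniform float near; // assumed: camera near plane distance
uniform float far;  // assumed: camera far plane distance

// Convert a [0, 1] depth-buffer value back to a linear eye-space distance.
float linearizeDepth(float depth)
{
    float z = depth * 2.0 - 1.0; // back to NDC [-1, 1]
    return (2.0 * near * far) / (far + near - z * (far - near));
}
```

For display you would then normalize the result back into [0, 1], e.g. `float v = linearizeDepth(visibility) / far;`.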
So I have a Compute Shader that is supposed to take a texture and copy it over to another texture with slight modifications. I have confirmed that the textures are bound and that data can be written using RenderDoc, which is a debugging tool for graphics. The issue I have is that, inside the shader, the built-in variable gl_GlobalInvocationID does not seem to work properly.
Here is my call of the compute shader: (The texture height is 480)
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
And then we have my compute shader here:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x=640 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main() {
ivec2 txlPos; //A variable keeping track of where on the texture current texel is from
vec4 result; //A variable to store color
txlPos = ivec2(gl_GlobalInvocationID.xy);
//txlPos = ivec2( (gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy );
result = imageLoad(texture_source0, txlPos); //Get color value
barrier();
result = vec4(txlPos, 0.0, 1.0);
imageStore(texture_target0, txlPos, result); //Save color in target texture
}
When I run this, the target texture becomes entirely yellow, save for a 1 px thick green line along the left border and a 1 px thick red line along the bottom border. My expectation is to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work groups wrong? I've tried splitting gl_GlobalInvocationID up into its components but haven't managed to get any wiser fiddling with them.
An rgba8 texture stores normalized values between 0 and 1. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum value of 1, which makes the texture yellow.
If you want to create a gradient in both directions, then you have to make sure that the values stored start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
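Alternatively, a sketch that avoids hard-coding the 640x480 dimensions by querying the bound image's size (imageSize on an image2D requires GLSL 4.30+, which the shader's #version 440 already satisfies):

```glsl
// Normalize the invocation ID by the target image's actual dimensions
// instead of hard-coded constants.
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(imageSize(texture_target0)), 0.0, 1.0);
```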
Is it possible in OpenGL to sample from, and then write to the same texture in one draw call using GLSL?
I think it actually IS possible, though be aware that sampling from a texture while it is attached to the current framebuffer creates a feedback loop, which the OpenGL specification leaves undefined; it may work on a given driver, but it is not guaranteed to be portable. I did it like this for GLSL alpha blending:
vec4 prevColor = texture(dfFrameTexture, pointCoord);
vec4 result = vec4(color.a) * color + vec4(1.0 - color.a) * prevColor;
gl_FragColor = result;
dfFrameTexture is the texture that is being written to via gl_FragColor.
I tested this and it worked: drawing a white quad onto dfFrameTexture after it's been cleared to black results in a grey quad when I set the color to (0.5, 0.5, 0.5, 0.5).
In my .cpp code I create a list of quads; a few of them have a flag. In the pixel shader I check whether this flag is set or not. If the flag is not set, the quad gets coloured in red, for example. If the flag is set, I want to decide the colour of every single pixel, so if I need to colour half of the flagged quad in red and the other half in blue I can simply do something like:
if coordinate in quad < something color = red
else colour = blue;
This way I can get half of the quad coloured in blue and the other half in red, or I can decide where to put the red colour and where to put the blue one.
Imagine I've got a quad 50x50 pixels
[frag]
if(quad.flag == 1)
{
if(Pixel_coordinate.x < 25) gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
else gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
else
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
In this case I would expect that a quad with the flag set will get two colors per face.
I hope I have been more specific now.
thanks.
Just to add something I can't use any texture.
Ok, I do this now:
Every quad has 4 texture coordinates: (0,0), (0,1), (1,1), (1,0).
I enable the texture coordinates using:
glTexCoordPointer(2, GL_SHORT, sizeof(Vertex), BUFFER_OFFSET(sizeof(float) * 7));
[vert]
varying vec2 texCoord;
void main()
{
texCoord = gl_MultiTexCoord0.xy;
}
[frag]
varying vec2 texCoord;
void main()
{
float x1 = texCoord.s;
float x2 = texCoord.t;
gl_FragColor = vec4(x1, x2, 0.0, 1.0);
}
I always get the yellow colour, so x1 = 1 and x2 = 1 almost everywhere, and some quads are yellow/green.
I would expect the texture coordinates to vary in the fragment shader, so I should get a gradient. Am I wrong?
If you want to know the coordinate within the quad, you need to calculate it yourself. In order to do that, you'll need to create a new interpolant (call it something like vec2 quadCoord) and set it appropriately for each vertex, which means you'll likely also need to add it as an attribute and pass it through your vertex shader. E.g.:
// in the vertex shader
attribute vec2 quadCoordIn;
varying vec2 quadCoord;
void main() {
quadCoord = quadCoordIn;
:
You'll need to feed in this attribute in your drawing code when drawing your quads. For each quad, the vertexes will likely have quadCoordIn values of (0,0), (0,1), (1,1) and (1,0) -- you could use some other coordinate system if you prefer, but this is the easiest.
Then, in your fragment program, you can access quadCoord.xy to determine where in the quad you are.
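For example, a fragment shader along these lines (a sketch using the quadCoord varying described above) would produce the half-red, half-blue split asked about:

```glsl
varying vec2 quadCoord; // interpolated from the vertex shader

void main() {
    // Left half of the quad red, right half blue, regardless of where
    // the quad sits on screen.
    gl_FragColor = (quadCoord.x < 0.5) ? vec4(1.0, 0.0, 0.0, 1.0)
                                       : vec4(0.0, 0.0, 1.0, 1.0);
}
```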
In addition to Chris Dodd's answer, you can also access the screen-space coordinate (in pixels, though actually pixel centers, and thus half-integer values like x.5) of the currently processed fragment through the special fragment shader variable gl_FragCoord:
gl_FragColor = (gl_FragCoord.x<25.0) ? vec4(1.0, 0.0, 0.0, 1.0) : vec4(0.0, 1.0, 0.0, 1.0);
But this gives you the position of the fragment in screen-space, and thus relative to the lower left corner of your viewport. If you actually need to know the position inside the individual quad (which makes more sense if you want to colour each quad half-by-half, since the "half-cut" would otherwise vary with the quad's position), then Chris Dodd's answer is the correct approach.