I'm using the following shader to render a skydome to simulate a night sky. My issue is the clearly visible transitions between colours.
What causes these harsh gradient transitions?
Fragment shader:
#version 330
in vec3 worldPosition;
layout(location = 0) out vec4 outputColor;
void main()
{
    float height = 0.007 * (abs(worldPosition.y) - 200);
    vec4 apexColor = vec4(0, 0, 0, 1);
    vec4 centerColor = vec4(0.159, 0.132, 0.1, 1);
    outputColor = mix(centerColor, apexColor, height);
}
Fbo pixel format:
GL.TexImage2D(
    TextureTarget.Texture2D,
    0,
    PixelInternalFormat.Rgb32f,
    WindowWidth,
    WindowHeight,
    0,
    PixelFormat.Rgb,
    PixelType.Float,
    IntPtr.Zero);
As Ripi2 explained, 24-bit colour cannot perfectly represent a smooth gradient: the discontinuities between representable colours become jarringly visible across a gradient of a single hue.
To hide the color banding I implemented a simple form of ordered dithering, with an 8x8 texture generated using this Bayer matrix algorithm.
vec4 dither = vec4(texture(MyTexture0, gl_FragCoord.xy / 8.0).r / 32.0 - (1.0 / 128.0));
outputColor += dither;
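In case the link goes stale: here is a minimal sketch of the same ordered dithering without the lookup texture, using the standard 8x8 Bayer matrix as a constant array (my sketch, not the original implementation):
// Standard 8x8 Bayer matrix, values 0..63, indexed by screen position.
const int bayer[64] = int[64](
     0, 32,  8, 40,  2, 34, 10, 42,
    48, 16, 56, 24, 50, 18, 58, 26,
    12, 44,  4, 36, 14, 46,  6, 38,
    60, 28, 52, 20, 62, 30, 54, 22,
     3, 35, 11, 43,  1, 33,  9, 41,
    51, 19, 59, 27, 49, 17, 57, 25,
    15, 47,  7, 39, 13, 45,  5, 37,
    63, 31, 55, 23, 61, 29, 53, 21);

ivec2 p = ivec2(gl_FragCoord.xy) & 7;                  // wrap to the 8x8 tile
float threshold = float(bayer[p.y * 8 + p.x]) / 64.0;  // normalise to [0, 1)
outputColor.rgb += (threshold - 0.5) / 255.0;          // sub-LSB offset
Either variant perturbs each pixel by at most about half an 8-bit quantisation step, which is enough to break up the bands without visibly adding noise.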
Normally monitors have 8 bits per channel of resolution. For example, the red intensity varies from 0 to 255.
If your window's horizontal size is 768 pixels and you want a full gradient on the red channel, then each color step takes 768/256 = 3 pixels. Depending on your eyesight you may see bands.
How do you get a smooth gradient across those 3 pixels? Use sub-pixel rendering.
Basically you "expand" the color step among the neighbouring pixels: add small amounts of other channels to the neighbours, and reduce the central pixel's amount a bit.
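Similar in spirit to the ordered-dither snippet above, but with white noise instead of a fixed pattern, a minimal sketch of spreading each quantisation step across pixels (the hash function is my assumption, not part of the original answer):
// Assumed helper: a common one-liner screen-space hash.
float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

// In main(): add noise smaller than one 8-bit step so the 3-pixel-wide
// bands blend into each other instead of showing hard edges.
outputColor.rgb += (hash(gl_FragCoord.xy) - 0.5) / 255.0;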
I have access to a depth camera's output. I want to visualise this in OpenGL using a compute shader.
The depth feed is given as a frame, and I know the width and height ahead of time. How do I sample the texture and retrieve the depth value in the shader? Is this possible? I've read through the OpenGL types here and can't find anything on unsigned shorts, so I am starting to worry. Are there any workarounds?
My current compute shader
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
uniform float width;
uniform float height;
uniform sampler2D depth_feed;
void main() {
    // get index in global work group i.e x,y position
    vec2 sample_coords = ivec2(gl_GlobalInvocationID.xy) / vec2(width, height);
    float visibility = texture(depth_feed, sample_coords).r;
    vec4 pixel = vec4(1.0, 1.0, 0.0, visibility);

    // output to a specific pixel in the image
    imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), pixel);
}
The depth texture definition is as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, nullptr);
Currently my code produces a plain yellow screen.
If you use perspective projection, then the depth value is not linear. See LearnOpenGL - Depth testing.
If all the depth values are near 0.0, and you use the following expression:
vec4 pixel = vec4(vec3(visibility), 1.0);
then all the pixels appear almost black. Actually the pixels are not completely black, but the difference is barely noticeable.
This happens when the far plane is "too" far away. To verify that, you can compute a power of 1.0 - visibility to make the different depth values recognizable. For instance:
float exponent = 5.0;
vec4 pixel = vec4(vec3(pow(1.0-visibility, exponent)), 1.0);
If you want a more sophisticated solution, you can linearize the depth values as explained in the answer to How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
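For reference, a minimal sketch of that linearization, assuming near and far are uniforms holding the camera's near and far plane distances:
// Convert a [0, 1] depth-buffer value back to a linear eye-space distance,
// valid for a standard perspective projection.
float linearizeDepth(float depth, float near, float far)
{
    float zNdc = depth * 2.0 - 1.0; // back to normalized device coordinates [-1, 1]
    return (2.0 * near * far) / (far + near - zNdc * (far - near));
}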
Please note that for a satisfactory visualization you should use the entire range of the depth buffer ([0.0, 1.0]). The geometry must be between the near and far planes, but try to move the near and far planes as close to the geometry as possible.
I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate the sphere I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
    float s = (float)i / (float) stacks;
    float theta = s * 2 * glm::pi<float>();

    for (int j = 0; j <= slices; ++j)
    {
        float sl = (float)j / (float) slices;
        float phi = sl * (glm::pi<float>());

        const float x = cos(theta) * sin(phi);
        const float y = sin(theta) * sin(phi);
        const float z = cos(phi);

        sphere_vertices.push_back(radius * glm::vec3(x, y, z));
        sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
    }
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i + slices);

    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + 1);
}
I can't figure out a way to make it right, whatever texture coordinates I use.
Hmm... if I use another image, then the mapping is different (and worse!)
vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aTexCoord;
out vec4 vertexColor;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    gl_Position = projection * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
    vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
    TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
fragment shader:
#version 330 core
out vec4 FragColor;
in vec4 vertexColor;
in vec2 TexCoord;
uniform sampler2D sphere_texture;
void main()
{
    FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in the fragment shader (for debugging purposes), I get a nice sphere.
I am using this as texture:
That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (an alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R,G,B,A) = (0, 0, 0, 0), so if you're ignoring the A channel then (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square. There is a significant margin of 53 pixels of black from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this: half the width is 1000 pixels, of which 53 pixels are not used, so the ball's radius is 1000 - 53 = 947 pixels, or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of the distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there's an intuitive approach (which may not quite get you there) and a couple of approaches that will work. Here are the things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. This makes more efficient use of pixels, for one, and it ensures that there are useful pixels being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%. I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour that is at the circumference of the tennis ball. This ensures that any texels that are sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is almost the problem you have right now anyway. You can do this in two ways. A) Image editing software. Remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth.)
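For illustration, a minimal sketch of option B in the question's fragment shader. It assumes the question's (x+1)/2, (y+1)/2 texture mapping, the 94.7% radius measured above, and a guessed dark-yellow border colour:
void main()
{
    // Distance of this texel from the centre of the texture in UV space.
    vec2 centered = TexCoord - vec2(0.5);

    if (length(centered) > 0.5 * 0.947)
        FragColor = vec4(0.55, 0.55, 0.1, 1.0); // assumed circumference colour
    else
        FragColor = texture(sphere_texture, TexCoord);
}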
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture mapping coordinate problem altogether. At the equator, you're not really sure which pixels to sample: the black (transparent) pixels or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map. You'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection that you can naively use to map the globe of the Earth to the typical rectangular map of the world you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom... we're onto something! That black bar is exactly what appears around the equator of your ball with your current texture mapping coordinate system. But with this coordinate system you can easily see that if we just remapped the ball to fill the square, we'd eliminate the transparent pixels entirely.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
Good luck! Feel free to ask for clarifications.
I cannot test it, since the code is incomplete, but from a rough look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be evaluated from x and y, which are:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta and phi, or equivalently from the stacks/slices fractions. This could work better (untested):
sphere_texcoords.push_back(glm::vec2(s,sl));
with s and sl already defined as:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are treating the first and the last "slices" of the sphere like all the others... Shouldn't they be handled differently? This seems quite odd to me, but I don't know whether your implementation is just a simpler one that still works fine.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html
I have a framebuffer called "FBScene" that renders to a texture TexScene.
I have a framebuffer called "FBBloom" that renders to a texture TexBloom.
I have a framebuffer called "FBBloomTemp" that renders to a texture TexBloomTemp.
First I render all my blooming / glowing objects to FBBloom and thus into TexBloom. Then I play ping-pong with FBBloom and FBBloomTemp, alternately blurring horizontally and vertically to get a nice bloom texture.
Then I pass the final "TexBloom" texture and the TexScene to a screen shader that draws a screen filling quad with both textures:
gl_FragColor = texture(TexBloom, uv) + texture(TexScene, uv);
The problem is:
While blurring the images, the bloom effect bleeds into the opposite edges of the screen if the glowing object is too close to the screen border.
This is my blur shader:
vec4 color = vec4(0.0);
vec2 off1 = vec2(1.3333333333333333) * direction;
vec2 off1DivideByResolution = off1 / resolution;
vec2 uvPlusOff1 = uv + off1DivideByResolution;
vec2 uvMinusOff1 = uv - off1DivideByResolution;
color += texture(image, uv) * 0.29411764705882354;
color += texture(image, uvPlusOff1) * 0.35294117647058826;
color += texture(image, uvMinusOff1) * 0.35294117647058826;
gl_FragColor = color;
I think I need to prevent uvPlusOff1 and uvMinusOff1 from being outside of the -1 and +1 uv range. But I don't know how to do that.
I tried to clamp the uv values in the code above with:
float px = clamp(uvPlusOff1.x, -1, 1);
float py = clamp(uvPlusOff1.y, -1, 1);
float mx = clamp(uvMinusOff1.x, -1, 1);
float my = clamp(uvMinusOff1.y, -1, 1);
uvPlusOff1 = vec2(px, py);
uvMinusOff1 = vec2(mx, my);
But it did not work as expected. Any help is highly appreciated.
Bleeding to the other side of the screen usually happens when the wrap-mode is set to GL_REPEAT. Set it to GL_CLAMP_TO_EDGE and it shouldn't happen anymore.
Edit - To explain a little bit more why this happens in your case: A texture coordinate of [1,1] means the bottom-right corner of the bottom-right texel. When linear filtering is enabled, this location will read four pixels around that corner. In case of repeating textures, three of them are on other sides of the screen. If you want to prevent the problem manually, you have to clamp to the range [0 + 1/texture_size, 1 - 1/texture_size].
I'm also not sure why you even clamp to [-1, 1], because texture coordinates usually range from [0, 1]. Negative values will be outside of the texture and are handled by the wrap mode.
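If you do want the manual clamp as well, here is a sketch using the question's own variable names, assuming resolution holds the texture size in pixels:
// Keep the sample coordinates at least one texel away from the borders,
// so linear filtering never blends in wrapped texels.
vec2 texelSize = 1.0 / resolution;
uvPlusOff1 = clamp(uvPlusOff1, texelSize, 1.0 - texelSize);
uvMinusOff1 = clamp(uvMinusOff1, texelSize, 1.0 - texelSize);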
So I have a compute shader that is supposed to take a texture and copy it over to another texture with slight modifications. I have confirmed that the textures are bound and that data can be written, using RenderDoc, a graphics debugging tool. The issue I have is that inside the shader the variable gl_GlobalInvocationID, which is provided by OpenGL, does not seem to work properly.
Here is my call to the compute shader (the texture height is 480):
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
And then we have my compute shader here:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x=640 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main() {
    ivec2 txlPos; //A variable keeping track of where on the texture current texel is from
    vec4 result;  //A variable to store color

    txlPos = ivec2(gl_GlobalInvocationID.xy);
    //txlPos = ivec2( (gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy );

    result = imageLoad(texture_source0, txlPos); //Get color value
    barrier();
    result = vec4(txlPos, 0.0, 1.0);

    imageStore(texture_target0, txlPos, result); //Save color in target texture
}
When I run this, the target texture becomes entirely yellow, save for a 1-pixel-thick green line along the left border and a 1-pixel-thick red line along the bottom border. My expectation was to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work groups wrong? I've tried splitting gl_GlobalInvocationID up into its components but haven't managed to get any wiser fiddling with them.
An rgba8 texture stores normalized fixed-point values, which are clamped to the range [0, 1] when written. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum of 1, which makes the texture yellow.
If you want to create a gradient in both directions, then you have to make sure that the stored values start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
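A possible refinement (my sketch, not part of the original answer): derive the divisor from the dispatch itself, so the shader does not hard-code 640x480. This assumes the dispatch exactly covers the target image:
// Total invocations per axis = work-group count * local size; with
// glDispatchCompute(1, 480, 1) and local_size_x = 640 this is (640, 480).
vec2 totalSize = vec2(gl_NumWorkGroups.xy * gl_WorkGroupSize.xy);
result = vec4(vec2(gl_GlobalInvocationID.xy) / totalSize, 0.0, 1.0);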
I've written the following shader to perform a bright pass of my scene so I can extract luminance for later blurring as part of a "glow" effect.
// "Bright" pixel shader.
#version 420
uniform sampler2D Map_Diffuse;
uniform float uniform_Threshold;
in vec2 attrib_Fragment_Texture;
out vec4 Out_Colour;
void main(void)
{
    vec3 luminances = vec3(0.2126, 0.7152, 0.0722);

    vec4 texel = texture2D(Map_Diffuse, attrib_Fragment_Texture);
    float luminance = dot(luminances, texel.rgb);
    luminance = max(0.0, luminance - uniform_Threshold);
    texel.rgb *= sign(luminance);
    texel.a = 1.0;

    Out_Colour = texel;
}
The bright areas are successfully extracted, however there are sometimes unstable features in the scene, resulting in pixels that flicker on and off. When this is blurred the effect is more pronounced, with bits of glow flickering too. The artifacts occur in, for example, the third image in the screenshot I've posted, where the object is in shadow and so there's far less luminance in the scene. They're mostly present in the transition from facing away from the light to facing towards it (during rotation of the object), where an edge is just catching the light.
My question is whether there's a way to detect and mitigate this in the shader. Note that the bright pass is part of a general down-sample, from screen resolution to 512x512.
You could also read the surrounding pixels and base your math on those.
Kind of like what is done here.
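A minimal sketch of that idea applied to the bright pass above. It assumes a texelSize uniform set to 1.0 / textureSize(Map_Diffuse, 0) on the CPU side; averaging the luminance over a 3x3 neighbourhood means a single unstable pixel can no longer toggle the threshold on its own:
// Average luminance over the 3x3 neighbourhood around this fragment.
float luminance = 0.0;
for (int y = -1; y <= 1; ++y)
{
    for (int x = -1; x <= 1; ++x)
    {
        vec2 offset = vec2(float(x), float(y)) * texelSize;
        luminance += dot(luminances, texture(Map_Diffuse, attrib_Fragment_Texture + offset).rgb);
    }
}
luminance /= 9.0;

// Optionally soften the cutoff too: a smoothstep ramp instead of the hard
// sign() makes pixels fade in and out rather than pop. The 0.1 width is a guess.
texel.rgb *= smoothstep(uniform_Threshold, uniform_Threshold + 0.1, luminance);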