OpenGL GLSL bloom effect bleeds on edges

I have a framebuffer called "FBScene" that renders to a texture TexScene.
I have a framebuffer called "FBBloom" that renders to a texture TexBloom.
I have a framebuffer called "FBBloomTemp" that renders to a texture TexBloomTemp.
First I render all my blooming / glowing objects to FBBloom and thus into TexBloom. Then I play ping-pong with FBBloom and FBBloomTemp, alternately blurring horizontally and vertically to get a nice bloom texture.
Then I pass the final TexBloom texture and TexScene to a screen shader that draws a screen-filling quad with both textures:
gl_FragColor = texture(TexBloom, uv) + texture(TexScene, uv);
The problem is:
While blurring the images, the bloom effect bleeds into the opposite edges of the screen if the glowing object is too close to the screen border.
This is my blur shader:
vec4 color = vec4(0.0);
vec2 off1 = vec2(1.3333333333333333) * direction;
vec2 off1DivideByResolution = off1 / resolution;
vec2 uvPlusOff1 = uv + off1DivideByResolution;
vec2 uvMinusOff1 = uv - off1DivideByResolution;
color += texture(image, uv) * 0.29411764705882354;
color += texture(image, uvPlusOff1) * 0.35294117647058826;
color += texture(image, uvMinusOff1) * 0.35294117647058826;
gl_FragColor = color;
I think I need to prevent uvPlusOff1 and uvMinusOff1 from being outside of the -1 and +1 uv range. But I don't know how to do that.
I tried to clamp the uv values at the gap in the code above with:
float px = clamp(uvPlusOff1.x, -1, 1);
float py = clamp(uvPlusOff1.y, -1, 1);
float mx = clamp(uvMinusOff1.x, -1, 1);
float my = clamp(uvMinusOff1.y, -1, 1);
uvPlusOff1 = vec2(px, py);
uvMinusOff1 = vec2(mx, my);
But it did not work as expected. Any help is highly appreciated.

Bleeding to the other side of the screen usually happens when the wrap-mode is set to GL_REPEAT. Set it to GL_CLAMP_TO_EDGE and it shouldn't happen anymore.
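For example, set this on TexBloom and TexBloomTemp while each texture is bound:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);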
Edit - To explain a little bit more why this happens in your case: a texture coordinate of [1,1] means the far corner of the corner texel. When linear filtering is enabled, a lookup at that location reads the four texels around that corner. In the case of a repeating texture, three of them come from the opposite sides of the texture. If you want to prevent the problem manually, you have to clamp the coordinates to the range [0 + 1/texture_size, 1 - 1/texture_size].
I'm also not sure why you even clamp to [-1, 1], because texture coordinates usually range from [0, 1]. Negative values will be outside of the texture and are handled by the wrap mode.
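If you do want to clamp manually in the blur shader, a rough sketch (untested, reusing the names from your code and using textureSize to get the texel size):
vec2 texelSize = 1.0 / vec2(textureSize(image, 0)); // one texel in uv units
vec2 uvMin = texelSize;                             // 0 + 1/texture_size
vec2 uvMax = vec2(1.0) - texelSize;                 // 1 - 1/texture_size
vec2 uvPlusOff1  = clamp(uv + off1DivideByResolution, uvMin, uvMax);
vec2 uvMinusOff1 = clamp(uv - off1DivideByResolution, uvMin, uvMax);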

Related

Texturing a ball on a sphere has a dark band

I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate it I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
    float s = (float)i / (float) stacks;
    float theta = s * 2 * glm::pi<float>();
    for (int j = 0; j <= slices; ++j)
    {
        float sl = (float)j / (float) slices;
        float phi = sl * (glm::pi<float>());
        const float x = cos(theta) * sin(phi);
        const float y = sin(theta) * sin(phi);
        const float z = cos(phi);
        sphere_vertices.push_back(radius * glm::vec3(x, y, z));
        sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
    }
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i + slices);
    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + 1);
}
I can't figure out a way to make it right, whatever texture coordinates I use.
Hmm.. If I use another image, then the mapping is different (and worse!)
vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aTexCoord;
out vec4 vertexColor;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
fragment shader:
#version 330 core
out vec4 FragColor;
in vec4 vertexColor;
in vec2 TexCoord;
uniform sampler2D sphere_texture;
void main()
{
FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting conditions.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in fragment shader (for debugging purposes) , I am receiving a nice sphere.
I am using this as texture:
That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (Alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R,G,B,A) = (0, 0, 0, 0), so if you're ignoring the A channel then (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square. There is a significant margin of 53 pixels of black from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this. Half the width is 1000 pixels, of which 53 pixels are not used. The ball's radius is 1000-53, which is 947 pixels. Or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of the distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there's the intuitive approach (which may not quite work) and then there are approaches that will. Here are a few things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. This will just make more efficient use of pixels, for one, but it will ensure that there are useful pixels being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%. I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
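Applied to the texture-coordinate generation from the question, that would look roughly like this (untested; 0.947 is the measured fraction from above):
// Scale about the texture centre so the silhouette edge samples the edge of
// the ball (94.7% of the half-width) instead of the black margin.
float u = 0.5f + ((x + 1.0f) / 2.0f - 0.5f) * 0.947f;
float v = 0.5f + ((y + 1.0f) / 2.0f - 0.5f) * 0.947f;
sphere_texcoords.push_back(glm::vec2(u, v));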
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour at the circumference of the tennis ball. This ensures that any texels sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is more or less the problem you have right now anyway. You can do this in two ways. A) Use image editing software: remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth).
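For option B, the question's fragment shader could do something like this (untested sketch; 0.947 is the measured fraction from above, and the border colour is just a placeholder dark yellow):
// Replace samples that fall outside the ball's disc with a constant border colour.
float r = length(TexCoord - vec2(0.5)) * 2.0;  // 0 at the centre, 1 at the texture edge
vec4 borderColor = vec4(0.55, 0.5, 0.12, 1.0); // placeholder dark yellow
FragColor = (r > 0.947) ? borderColor : texture(sphere_texture, TexCoord);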
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture-mapping problem altogether. At the equator, you're not really sure which pixels to sample: the black (transparent) pixels or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map. You'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection that is naively used to map the globe of the Earth to the typical rectangular map of the world you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom... we're onto something! That black bar is actually exactly what appears around the equator of your ball with your current texture-mapping coordinate system. But with this coordinate system, you can easily see that if we just remapped the ball to fill the square, we'd completely eliminate the transparent pixels.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
Good luck! Feel free to ask for clarifications.
I cannot test it, since the code is incomplete, but from a rough look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be evaluated from x and y, which are:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta-phi, or stacks-slices. This could work better - untested:
sphere_texcoords.push_back(glm::vec2(s,sl));
with s and sl already defined as:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are treating the first and the last "slices" of the sphere the same as the rest... Shouldn't they be handled differently? This seems quite odd to me - but I don't know whether your implementation is just a simpler one that works fine.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html
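Putting that suggestion back into the question's loop (untested; it assumes an equirectangular image, as discussed in the other answer), the generation becomes:
for (int i = 0; i <= stacks; ++i)
{
    float s = (float)i / (float)stacks;      // around the sphere, [0, 1]
    float theta = s * 2 * glm::pi<float>();
    for (int j = 0; j <= slices; ++j)
    {
        float sl = (float)j / (float)slices; // pole to pole, [0, 1]
        float phi = sl * glm::pi<float>();
        const float x = cos(theta) * sin(phi);
        const float y = sin(theta) * sin(phi);
        const float z = cos(phi);
        sphere_vertices.push_back(radius * glm::vec3(x, y, z));
        sphere_texcoords.push_back(glm::vec2(s, sl)); // angles instead of x/y
    }
}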

Smoother gradient transitions with OpenGL?

I'm using the following shader to render a skydome to simulate a night sky. My issue is the clearly visible transitions between colours.
What causes these harsh gradient transitions?
Fragment shader:
#version 330
in vec3 worldPosition;
layout(location = 0) out vec4 outputColor;
void main()
{
float height = 0.007*(abs(worldPosition.y)-200);
vec4 apexColor = vec4(0,0,0,1);
vec4 centerColor = vec4(0.159, 0.132, 0.1, 1);
outputColor = mix(centerColor, apexColor, height);
}
Fbo pixel format:
GL.TexImage2D(
TextureTarget.Texture2D,
0,
PixelInternalFormat.Rgb32f,
WindowWidth,
WindowHeight,
0,
PixelFormat.Rgb,
PixelType.Float,
IntPtr.Zero )
As Ripi2 explained, 24-bit color is unable to perfectly represent a gradient, and the discontinuities between representable colours become jarringly visible on gradients of a single colour.
To hide the color banding I implemented a simple form of ordered dithering, with an 8x8 texture generated using this Bayer matrix algorithm.
vec4 dither = vec4(texture2D(MyTexture0, gl_FragCoord.xy / 8.0).r / 32.0 - (1.0 / 128.0));
colourOut += dither;
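If you would rather not bind an extra texture, roughly the same thing can be done with the Bayer thresholds hard-coded in the fragment shader. A sketch (assuming an 8-bit render target, so one step of noise is 1/255):
// Standard 8x8 Bayer matrix, thresholds 0..63.
const float bayer[64] = float[64](
     0.0, 32.0,  8.0, 40.0,  2.0, 34.0, 10.0, 42.0,
    48.0, 16.0, 56.0, 24.0, 50.0, 18.0, 58.0, 26.0,
    12.0, 44.0,  4.0, 36.0, 14.0, 46.0,  6.0, 38.0,
    60.0, 28.0, 52.0, 20.0, 62.0, 30.0, 54.0, 22.0,
     3.0, 35.0, 11.0, 43.0,  1.0, 33.0,  9.0, 41.0,
    51.0, 19.0, 59.0, 27.0, 49.0, 17.0, 57.0, 25.0,
    15.0, 47.0,  7.0, 39.0, 13.0, 45.0,  5.0, 37.0,
    63.0, 31.0, 55.0, 23.0, 61.0, 29.0, 53.0, 21.0);

vec4 orderedDither(vec4 colour)
{
    ivec2 p = ivec2(gl_FragCoord.xy) % 8;                        // position inside the 8x8 tile
    float threshold = (bayer[p.y * 8 + p.x] + 0.5) / 64.0 - 0.5; // roughly [-0.5, 0.5)
    return colour + vec4(vec3(threshold / 255.0), 0.0);          // one 8-bit step of noise
}
In the skydome shader above this would be used as outputColor = orderedDither(mix(centerColor, apexColor, height));.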
Normally monitors have 8 bits per channel of resolution. For example, the red intensity varies from 0 to 255.
If your window horizontal size is 768 pixels and you want a full gradient on red channel, then each color step takes 768/256 = 3 pixels. Depending on your eye health you may see bands.
How to do smooth gradient on those 3 pixels? Use sub-pixel rendering.
Basically you "expand" the color step among the neighbouring pixels: add small amounts of the other channels to the neighbours, and reduce the amount of the central pixel a bit.

How does the coordinate system work for 3D textures in OpenGL?

I am attempting to write to and read from a 3D texture, but it seems my mapping is wrong. I have used RenderDoc to check the textures and they look OK.
A random layer of this volumetric texture looks like:
So just some blue to denote absence and some green values to denote presence.
The coordinates I calculate when I write to each layer are calculated in the vertex shader as:
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z -= level;
pos.z *= 1.f/voxel_size;
gl_Position = pos;
Since the texture itself looks ok it seems these coordinates are good to achieve my goal.
It's important to note that right now voxel_size is 1 and the scale of the texture is supposed to be 1 to 1 with the scene dimensions. In essence, each pixel in the texture represents a 1x1x1 voxel in the scene.
Next I attempt to fetch the texture values as follows:
vec3 pos = vertexPos;
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z *= 1.f/(4*16);
outColor = texture(voxel_map, pos);
Where vertexPos is the global vertex position in the scene. The z coordinate may be completely wrong, however (I am not sure if I am supposed to normalize the depth component or not), but that is not the only issue. If you look at the final result:
There is a horizontal scale problem. Since each texel represents a voxel, the color of a cube should always be a fixed color. But as you can see, I am getting multiple colors for a single cube on the top faces. So my horizontal scale is wrong.
What am I doing wrong when fetching the texels from the texture?

How to get accurate fragment screen position, like gl_FragCoord, in vertex shader?

I did some calculations using the projected gl_Position and screen parameters, but the position seems distorted in polygons close to the camera. But when I use...
vec2 fragmentScreenCoordinates = vec2(gl_FragCoord.x / _ScreenParams.x, gl_FragCoord.y / _ScreenParams.y);
...I got pretty accurate xy results.
Pretty output gl_FragCoord.xy coordinates:
Calculating from projected vertices results in interpolated values all over the faces, which I cannot use for sampling screen-aligned textures.
Ugly interpolated output from gl_Position:
Is there a way to produce this gl_FragCoord-like value in vertex shader? I really want to calculate texture coordinates in vertex shader for independent texture reads, manual depth tests, etc.
Or is there any Unity built in values I can use here?
In the vertex shader, you set
gl_Position = ...
This must be in clip space, before the perspective divide to normalized device coordinates. This is because OpenGL does a bunch of work in that 4D homogeneous space, and the w component is necessary for interpolation.
Since you just want the value at each vertex and nothing interpolated, you can do the divide right away (or even leave it out if you're using an orthographic projection)...
vec3 ndc = gl_Position.xyz / gl_Position.w; //perspective divide/normalize
vec2 viewportCoord = ndc.xy * 0.5 + 0.5; //ndc is -1 to 1 in GL. scale for 0 to 1
vec2 viewportPixelCoord = viewportCoord * viewportSize;
Here, viewportCoord is the equivalent of your fragmentScreenCoordinates, assuming the viewport covers the window.
Note: as #derhass points out, this will fail if geometry intersects the w = 0 plane. I.e. a visible triangle's vertex is behind the camera.
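Putting it together, a minimal vertex shader sketch (the uniform and output names here are placeholders, not Unity built-ins):
#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 mvp;           // model-view-projection matrix (placeholder name)
uniform vec2 viewportSize;  // viewport size in pixels (placeholder name)

out vec2 vViewportCoord;    // 0..1 across the viewport
out vec2 vViewportPixel;    // pixel coordinates, gl_FragCoord-like

void main()
{
    gl_Position = mvp * vec4(aPos, 1.0);
    vec3 ndc = gl_Position.xyz / gl_Position.w;  // perspective divide
    vViewportCoord = ndc.xy * 0.5 + 0.5;         // NDC [-1, 1] -> [0, 1]
    vViewportPixel = vViewportCoord * viewportSize;
}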
[EDIT]
The comments discuss using the coordinates for 1 to 1 pixel nearest neighbor lookup. As #AndonM.Coleman says, changing the coordinates will work, but it's easier and faster to use nearest neighbor filtering. You can also use texelFetch, which bypasses filtering altogether:
Snap the coordinates:
vec2 sampleCoord = (floor(viewportPixelCoord) + 0.5) / textureSize(mySampler);
vec4 colour = texture(mySampler, sampleCoord);
Nearest neighbor filtering (not sure what this is in unity3d):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //magnification is the important one here
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Use texelFetch:
vec4 colour = texelFetch(mySampler, ivec2(viewportPixelCoord), 0);

Reconstructed position from depth - How to handle precision issues?

In my deferred renderer, I've managed to successfully reconstruct my fragment position from the depth buffer.... mostly. By comparing my results to the position stored in an extra buffer, I've noticed that I'm getting a lot of popping far away from the screen. Here's a screenshot of what I'm seeing:
The green and yellow parts at the top are just the skybox, where the position buffer contains (0, 0, 0) but the reconstruction algorithm interprets it as a normal fragment with depth = 0.0 (or 1.0?).
The scene is rendered using fragColor = vec4(0.5 + (reconstPos - bufferPos.xyz), 1.0);, so anywhere that the resulting fragment is exactly (0.5, 0.5, 0.5) is where the reconstruction and the buffer have the exact same value. Imprecision towards the back of the depth buffer is to be expected, but that magenta and blue seems a bit strange.
This is how I reconstruct the position from the depth buffer:
vec3 reconstructPositionWithMat(vec2 texCoord)
{
    float depth = texture2D(depthBuffer, texCoord).x;
    depth = (depth * 2.0) - 1.0;        // window-space depth [0, 1] -> NDC z [-1, 1]
    vec2 ndc = (texCoord * 2.0) - 1.0;  // texture coords [0, 1] -> NDC xy [-1, 1]
    vec4 pos = vec4(ndc, depth, 1.0);
    pos = matInvProj * pos;             // unproject (still homogeneous)
    return vec3(pos.xyz / pos.w);       // perspective divide back to view space
}
Where texCoord = gl_FragCoord.xy / textureSize(colorBuffer, 0);, and matInvProj is the inverse of the projection matrix used to render the gbuffer.
Right now my position buffer is GL_RGBA32F (since it's only for testing accuracy, I don't care as much about bandwidth and memory waste), and my depth buffer is GL_DEPTH24_STENCIL8 (I got similar results from GL_DEPTH_COMPONENT32, and yes, I do need the stencil buffer).
My znear is 0.01f and zfar is 1000.0f. I'm rendering a single quad as my ground, which is 2000.0f x 2000.0f (I wanted it to be big enough that it would clip against the far plane).
Is this level of imprecision considered acceptable? What are some ways that people have gotten around this problem? Is there something wrong with how I reconstruct the view/eye-space position?