HDR Bloom effect rendering pipeline using OpenGL/GLSL

I have implemented HDR bloom rendering using OpenGL and GLSL... at least I think so! I'm not really sure about the result.
I followed a tutorial from intel website:
https://software.intel.com/en-us/articles/compute-shader-hdr-and-bloom
As for the Gaussian blur effect, I scrupulously followed all the performance advice on the following website:
https://software.intel.com/en-us/blogs/2014/07/15/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms
According to the first website:
"The bright pass output is then downscaled by half 4 times. Each of the downscaled bright pass outputs are blurred with a separable Gaussian filter and then added to the next higher resolution bright pass output. The final output is a ¼ size bloom which is up sampled and added to the HDR output before tone mapping."
Here's the bloom pipeline (the pictures below were taken from the NVIDIA Nsight debugger).
The window resolution in my test is 1024x720 (for the needs of this algorithm, this resolution will be downscaled by half 4 times).
Step 1:
Lighting pass (blending of material pass + shadow mask pass + skybox pass):
Step 2:
Extracting highlight information into a bright pass (to be precise, 4 mipmap textures are generated ("The bright pass output is then downscaled by half 4 times" -> 1/2, 1/4, 1/8 and finally 1/16)):
Step 3:
"Each of the downscaled bright pass outputs are blurred with a separable Gaussian filter and then added to the next higher resolution bright pass output."
I want to make clear that bilinear filtering is enabled (GL_LINEAR), and the pixelation in the pictures above is the result of resizing the textures in the Nsight debugger window (1024x720).
a) Resolution 1/16x1/16 (64x45)
"1/16x1/16 blurred output"
b) Resolution 1/8x1/8 (128x90)
"1/8x1/8 downscaled bright pass, combined with 1/16x1/16 blurred output"
"1/8x1/8 blurred output"
c) Resolution 1/4x1/4 (256x180)
"1/4x1/4 downscaled bright pass, combined with 1/8x1/8 blurred output"
"1/4x1/4 blurred output"
d) Resolution 1/2x1/2 (512x360)
"1/2x1/2 downscaled bright pass, combined with 1/4x1/4 blurred output"
"1/2x1/2 blurred output"
To target the desired mipmap level I resize the FBO (but maybe it would be smarter to use separate FBOs, each sized once at initialization, rather than resizing the same one several times. What do you think of this idea?).
Step 4:
Tone mapping render pass:
Up to this point I would like some external advice on my work. Is it correct or not? I'm not really sure about the result, especially about step 3 (the downscaling and blurring part).
I think the blurring effect is not very pronounced! Yet I use a 35x35 convolution kernel (which should be sufficient, I think :)).
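For what it's worth, here is an illustrative Python sketch (not my actual shader code; the sigma values are hypothetical) of why a 35-tap kernel may still look barely blurred: the visual width of a Gaussian blur is governed by sigma, not by the tap count.

```python
import math

def gaussian_weights(radius, sigma):
    """Normalized 1D Gaussian weights for a separable (2*radius+1)-tap blur."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(w)
    return [x / total for x in w]

# 35 taps (radius 17) with a small sigma: most taps get near-zero weight,
# so the blur looks weak despite the big kernel.
narrow = gaussian_weights(17, 2.0)

# Same tap count with sigma matched to the radius: the weight is spread out
# and the blur is visibly wider.
wide = gaussian_weights(17, 17 / 3.0)

print(sum(narrow[16:19]), sum(wide[16:19]))  # central 3 taps carry far more weight in the narrow case
```

If the weights are generated with a sigma much smaller than the kernel radius, most of the 35 taps contribute almost nothing.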
But I'm really intrigued by an article in a PDF. Here's its presentation of the bloom pipeline (pretty much the same as the one I applied).
Link:
https://transporter-game.googlecode.com/files/RealtimeHDRImageBasedLighting.pdf
As you can see in the picture, the blur bleeding effect is much stronger than mine! Do you think the author uses several convolution kernels (at higher resolutions)?
The first thing I don't understand is how the Gaussian blur algorithm makes colors other than white (grayscale values) appear in the third picture. I looked very closely (at high zoom) at the bright picture (the second one) and all the pixels seem to be white or close to white (grayscale). One thing is sure: there are no blue or orange pixels in the bright texture. So how can we explain such a transition from picture 2 to picture 3? It's very strange to me.
The second thing I don't understand is the large difference in the blur bleeding effect between pictures 3, 4, 5 and 6! In my implementation I use a 35x35 convolution kernel and the final result is close to the third picture here.
How can you explain such a difference?
PS: Note that I use GL_HALF_FLOAT and the GL_RGBA16F internal format to initialize the bloom render pass texture (all the other render passes are initialized with GL_RGBA and the GL_FLOAT data type).
Is something wrong with my program?
Thank you very much for your help!

The blurred small-res textures don't seem blurred enough. I think there is a problem somewhere with the width of the filter (not the number of samples, but the distance between samples) or the framebuffer size.
Let's say you have a 150x150 original FBO and a 15x15 downscaled version for bloom, and that you use a 15x15 blur filter.
The blurred high-res version would affect a 7px stroke around the bright parts.
But while blurring the low-res image, the width of the kernel would affect practically the entire image area: at low res, a 7px stroke spans the whole image. So all pixels in the blurred low-res version would contribute something to the final composed image. The high-res blurred image would contribute its blur within a 7px stroke around the bright parts, while the low-res blurred image would make quite a significant difference over the entire image area.
Your low-res images just don't seem well blurred, because their contribution still remains within a 35/2 px stroke around the bright parts, which is wrong.
I hope I managed to explain what is wrong. What to change exactly, probably the viewport size while blurring the low-res images, but I simply can't be 100% sure.
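To put rough numbers on this (hypothetical figures, assuming a 35-tap kernel and a chain that starts 1024px wide and is halved 4 times):

```python
# Footprint of a fixed-radius blur kernel, as a fraction of image width,
# at each level of a bloom downscale chain.
radius_px = 17  # half-width of a 35-tap kernel

for width in [1024, 512, 256, 128, 64]:
    span = 2 * radius_px + 1  # pixels touched by the kernel
    print(f"{width:4}px wide image: kernel spans {span / width:.0%} of it")
```

The same pixel radius that is a thin stroke at full resolution should cover more than half of the smallest level; if it doesn't in your output, the sample spacing or viewport is probably wrong.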

Related

How to blend samples from multisample software rendering

please forgive any incorrect terminology, I'll do my best to explain.
I'd like to know how a rendering technology (GPU/CPU etc.) blends/merges the samples generated by multisample rendering (presumably over multiple passes).
To be clear - I'm not asking for DirectX / OpenGL examples, I'm asking how it actually works.
Background: I've written a 2D polygon drawing function, in C/C++, based on the common model of dividing each horizontal scanline into multiple 'samples' (in my case 4) and then using this to estimate coverage. I clamp these points to 4 vertical positions as well, giving me a 4x4 grid of 'samples' per pixel.
I currently generate a per-pixel bitmask of which 'samples' are covered, plus an 'alpha' of how covered the pixel is, from 0 to 256. This works perfectly with a single polygon, and all the edges are nicely antialiased. The issue arises when drawing something like a pie chart: the first piece is drawn perfectly, but the second piece, which shares edges with it, will draw over those edge pixels.
For example: Multisample Grid Picture. In this picture my renderer will draw the orange section, and the bottom middle pixel will be 50% covered by the orange polygon, so it will be 50% orange and 50% background colour (say black, for instance). The green polygon will then be drawn and will also cover the bottom middle pixel by 50%, so it will blend 50% green with the existing 50% orange / 50% black, giving us 50% green, 25% orange and 25% black. But realistically the black background colour should never come into it, because the pixel is fully covered, just not by any one polygon.
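To put my problem in numbers, here is a small Python sketch (hypothetical colours) contrasting what my renderer does now with what I believe a per-sample resolve would do:

```python
def blend(src, dst, alpha):
    """Traditional alpha blend: src over dst with the given coverage alpha."""
    return tuple(alpha * s + (1 - alpha) * d for s, d in zip(src, dst))

black, orange, green = (0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (0.0, 1.0, 0.0)

# Coverage-as-alpha, one polygon at a time (what my renderer currently does):
px = blend(orange, black, 0.5)  # orange covers 2 of the 4 samples
px = blend(green, px, 0.5)      # green covers the other 2
print(px)  # -> (0.25, 0.625, 0.0): 25% of the black background leaked through

# Per-sample resolve: keep one colour per covered sample, average at the end:
samples = [orange, orange, green, green]
resolved = tuple(sum(c[i] for c in samples) / len(samples) for i in range(3))
print(resolved)  # -> (0.5, 0.75, 0.0): 50% orange + 50% green, no background
```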
This page describes the process and says "In situations like this OpenGL will use coverage percentages to blend the samples from the foreground polygon with the colors already in the framebuffer. For example, for the pixel in the bottom center of the image above, OpenGL will perform two samples in the green polygon, average those samples together, and then blend the resulting color 50% with the color already in the framebuffer for that pixel." but doesn't describe how that process actually works: https://www2.lawrence.edu/fast/GREGGJ/CMSC420/chapter16/Chapter_16.html
I haven't posted source code because it's quite a large project and I'm not doing anything particularly different from most simple polygon renderers, except splitting the main loop out into callback functions.
I can't switch to a render buffer 4x the width and 4x the height, as it's used for more than just polygon drawing. I'm happy to accept that all 'joined' polygons must be known at call time, such as requiring the user to pass in all the pie chart polygons at once rather than one at a time, as that seems a fair requirement.
Any guidance would be appreciated.

Wrong blending in OpenGL on small alpha value

I draw a lot of white triangles from a texture. But when they are drawn over a yellow circle, the points which contain a small alpha value (but not equal to 0) are blended wrongly, and I get some darker pixels on screen (see the screenshot, which was zoomed in). What could be the problem?
On a blue background everything is OK.
As @tklausi pointed out in the comments, this problem is related to texture interpolation in combination with traditional alpha blending. At the transition from values with high alpha to "background" with alpha = 0, you will get interpolation results where alpha is > 0 and RGB is mixed with your "background" color.
@tklausi's solution was to change the RGB values of the background to white. But this will result in the same issue as before: if your actual image has dark colors, you will see bright artifacts around it instead.
The correct solution would be to repeat the RGB color of the actual border pixels, so that the interpolation will always result in the same color, just with a lower alpha value.
However, there is a much better solution: premultiplied alpha.
Instead of storing (R,G,B,a) per texel, you store (aR,aG,aB,a). When blending, you don't use a*source + (1-a)*background, but just source + (1-a)*background. The difference is that you now have a "neutral element" (0,0,0,0), and interpolation towards it poses no issue. It works nicely with filtering, and is also good for mipmapping and other techniques.
In general, I would recommend always using premultiplied alpha instead of the "traditional" kind. The premultiplication can be applied directly to the image file, or you can do it at texture upload; it incurs no runtime cost at all.
More information about premultiplied alpha can be found in this MSDN blog article or over here at NVIDIA.
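To make the difference concrete, here is a small Python sketch (illustrative numbers only) of one sample taken halfway between an opaque white texel and a fully transparent one, composited over yellow as in the question:

```python
def lerp(a, b, t):
    """Linear interpolation between two equal-length tuples."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

yellow = (1.0, 1.0, 0.0)

# Bilinear sample halfway between an opaque white texel and a fully
# transparent texel (stored RGB black). The stored numbers happen to be the
# same for straight and premultiplied storage in this particular case.
sample = lerp((1.0, 1.0, 1.0, 1.0), (0.0, 0.0, 0.0, 0.0), 0.5)
a = sample[3]  # 0.5

# Straight alpha, a*src + (1-a)*dst: the interpolated grey RGB bleeds in,
# darkening the result. This is the dark fringe.
straight = tuple(a * s + (1 - a) * d for s, d in zip(sample[:3], yellow))
print(straight)  # -> (0.75, 0.75, 0.25): darker than it should be

# Premultiplied alpha, src + (1-a)*dst: the transparent texel is a true
# neutral element, so we get exactly "50% white over yellow".
premult = tuple(s + (1 - a) * d for s, d in zip(sample[:3], yellow))
print(premult)  # -> (1.0, 1.0, 0.5): the correct result
```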

Subpixel rasterization on opaque backgrounds

I'm working on a subpixel rasterizer. The output is to be rendered on an opaque bitmap. I've come as far as correctly rendering text white-on-black (because I can basically disregard the contents of the bitmap).
The problem is the blending. Each rendered pixel affects its neighbours' intensity levels as well, because of the lowpass filtering technique (I'm using the 5-tap FIR: 1/9, 2/9, 3/9 etc.), and additionally the alpha level of the pixel to be rendered. This result then has to be alpha-blended onto the destination image, which is where the problem occurs...
The results of the pixels' interactions have to be added together to achieve correct luminance, and then alpha-blended onto the destination. But if I rasterize one pixel at a time, I 'lose' the information from the previous pixels, so further additions may overflow.
How is this supposed to be done? The only solution I can imagine working is to render to a separate image with an alpha channel per colour, then apply some complex blending algorithm, and lastly alpha-blend that onto the destination... somehow.
However, I couldn't find any resources on how to actually do it, besides the basic concepts of LCD subpixel rendering and nice close-up images of monitor pixels. If anyone can help me along the way, I would be very grateful.
Tonight I awoke and could not fall asleep again.
I couldn't let all that brain energy go to waste, and stumbled over exactly the same problem.
I came up with two different solutions, both unvalidated.
You have to use a 3-channel alpha mask, one channel per subpixel, and blend each color channel with its own alpha.
Alternatively, you can use the color channels themselves as alpha masks if you only render gray/BW text (1 - color_value if you draw dark text on a light background), again applying each channel individually. The color value itself should be considered 1 in this case.
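As a sketch of the first solution, in Python (the coverage values are made up):

```python
def subpixel_blend(text_rgb, coverage_rgb, dst_rgb):
    """Blend one pixel of subpixel-rendered text: each colour channel is
    blended with its own coverage value (the 3-channel alpha mask)."""
    return tuple(a * t + (1 - a) * d
                 for t, a, d in zip(text_rgb, coverage_rgb, dst_rgb))

# Black text over a white background; the LCD filter produced a different
# coverage for R, G and B at this pixel (hypothetical values):
out = subpixel_blend((0.0, 0.0, 0.0), (0.9, 0.5, 0.1), (1.0, 1.0, 1.0))
print(out)  # each channel is darkened by its own coverage
```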
Hope this helps a little, I filled ~2h of insomnia with it.
~ Jan

Is it possible to get this "chroma-shift" effect with OpenGL shaders

I'd like to be able to produce this effect, to be specific, the color-crawl / color-shift.
Is this possible with OpenGL shaders, or do I need to use another technique?
I'm new to OpenGL and I'd like to try this as a getting-started exercise; however, if there's a better way of doing this, ultimately I just want to produce this effect.
FYI I'm using Cinder as my OpenGL framework.
I know this isn't much information, but I'm having trouble even finding out what this effect is really called, so I can't google it.
I can't help you with the name of the effect, but I have an idea for producing it. My understanding is that each color component is shifted by some amount: a simple translation to the right or left of the individual color components of the original black-and-white image.
Steps to get the image you want
Get the source black and white image in a texture. If it's the result of other rendering, copy it to a texture.
Render a full screen quad (or the size you want) with texture coordinates from (0,0) to (1,1) and with the texture attached.
Apply a fragment shader that samples the input texture 3 times, each with a different shift in texture coordinates, e.g. offsets of -2, 0 and +2 texels. You can experiment and try more samples if you want, at different offsets.
Combine those 3 samples by keeping only 1 color component of each.
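The steps above can be simulated on the CPU. Here is a rough Python sketch where a 1D grayscale scanline stands in for the texture and the ±2 texel offsets are arbitrary:

```python
# 1D grayscale "scanline" standing in for the black-and-white source texture.
src = [0.0] * 3 + [1.0] * 6 + [0.0] * 3

def tex(x):
    """Clamped lookup, like GL_CLAMP_TO_EDGE."""
    return src[max(0, min(len(src) - 1, x))]

shift = 2  # texel offset per channel: red from the left, blue from the right
out = [(tex(x - shift), tex(x), tex(x + shift)) for x in range(len(src))]

# Inside the white region all three channels agree (still white); near the
# edges the shifted channels disagree, giving coloured fringes on each side.
print(out[5])  # -> (1.0, 1.0, 1.0): white in the middle
print(out[2])  # -> (0.0, 0.0, 1.0): blue fringe on the left edge
print(out[9])  # -> (1.0, 0.0, 0.0): red fringe on the right edge
```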
Alternate if performance doesn't matter or shaders are not available
Don't use a fragment shader; instead use OpenGL blending with the ADD function. Render that same full-screen quad 3 times with the texture attached, and use the texture matrix to offset the lookups each time. Set the color mask differently for each pass and you get the same result: pass 1 => red, pass 2 => green, pass 3 => blue.
The effect you're looking for is called chromatic aberration; you can look it up on Wikipedia. You were given a solution already, but I think it's my duty, being a physicist, to give you a deeper understanding of what is going on and how the effect can be generalized.
Remember that every camera has some aperture, and light is usually described as waves. The interaction of waves with an aperture is called diffraction, and mathematically it comes down to a convolution of the wave function with the Fourier transform of the aperture function. Diffraction depends on the wavelength, so it creates a spatial shift that depends on the color. The other contributing effect is dispersion, i.e. the dependence of refraction on the wavelength. Dispersion, too, can be described by a convolution.
Now convolutions can be chained, yielding a total convolution kernel. In the case of a Gaussian blur filter, the convolution kernel is a Gaussian distribution identical in all channels. But you can have a different convolution kernel for each target channel. What @bernie suggested are actually box convolution kernels, shifted by a few pixels in each channel.
Here is a nice tutorial about convolution filtering with GLSL. You may use for loops instead of unrolling them:
http://www.ozone3d.net/tutorials/image_filtering_p2.php
I suggest you use some Gaussian-shaped kernels, with the blurring for red and blue stronger than for green, and of course with slightly shifted center points.
GeexLab have a demo of chromatic aberration, with source, in their Shader Library here.

Antialiasing algorithm?

I have textures that I'm creating and would like to antialias. I have access to each pixel's color; given this, how could I antialias the entire texture?
Thanks
I'm sorry, but true anti-aliasing does not consist of taking the average color of the neighbours, as suggested above. That will undoubtedly soften the edges, but it's not anti-aliasing, it's blurring. True anti-aliasing cannot properly be done on a bitmap, since it has to be calculated at drawing time, to tell which pixels and/or edges must be "softened" and which must not.
For instance: imagine you draw a horizontal line which must be exactly 1 pixel thick (say "high") and placed exactly on an integer screen row coordinate. Obviously you'll want it unsoftened, and a proper anti-aliasing algorithm will draw your line as a perfect row of solid pixels surrounded by perfect background-coloured pixels, with no tone blending at all. But if you take that same line once it's been drawn (i.e. as a bitmap) and apply the averaging method, you'll get blurring above and below the line, resulting in a 3-pixel-thick horizontal line, which was not the goal. Of course, the right result could be achieved through coding, but from a very different and much more complex approach.
The basic method for anti-aliasing is: for a pixel P, sample the pixels around it. P's new color is the average of its original color with those of the samples.
You might sample more or less pixels, change the size of the area around the pixel to sample, or randomly choose which pixels to sample.
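A minimal Python sketch of this neighbourhood-averaging approach (a 3x3 box average; as noted elsewhere in this thread, this is really blurring rather than true anti-aliasing):

```python
def box_blur_3x3(img):
    """Average each pixel with its 8 neighbours (edges clamp)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += img[min(h - 1, max(0, y + dy))][min(w - 1, max(0, x + dx))]
            out[y][x] = acc / 9.0
    return out

# A hard vertical edge between black and white...
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
softened = box_blur_3x3(img)
# ...gains intermediate grey values. Every edge gets softened this way,
# wanted or not, which is exactly the objection raised in the next answer.
print(softened[1])
```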
As others have said, though: anti-aliasing isn't really something that's done to an image that's already a bitmap of pixels. It's a technique that's implemented in the 2D/3D rendering engine.
"Anti-aliasing" can refer to a broad range of different techniques. None of those would typically be something that you would do in advance to a texture.
Maybe you mean mip-mapping? http://en.wikipedia.org/wiki/Mipmap
It's not very meaningful to ask about antialiasing in terms this vague. It all depends on the nature of the textures and how they will be used.
Generally though, if you don't have control over the display environment and your textures might be scaled, rotated, or composited with other elements you don't have any control over, you should use transparency to indicate where your image ends and where it only partially fills a pixel.
It's also generally good to avoid sharp edges and features that are small relative to the amount of scaling that might be done.