Please forgive any incorrect terminology; I'll do my best to explain.
I'd like to know how a rendering technology (GPU/CPU, etc.) blends/merges the samples generated from multisample rendering (presumably over multiple passes).
To be clear: I'm not asking for DirectX/OpenGL examples, I'm asking how it actually works.
Background: I've written a 2D polygon drawing function, in C/C++, which is based on the common model of dividing each horizontal scanline into multiple 'samples' (in my case 4) and then using this to estimate coverage. I clamp these points to 4 vertical positions as well, giving me a 4x4 grid of 'samples' per pixel.
I currently generate a bitmask per pixel of which 'samples' are covered, and also an 'alpha' of how covered this pixel is, from 0 to 256. This works perfectly with a single polygon, and all the edges are nicely antialiased. The issue arises when drawing something like a pie chart: the first piece is drawn perfectly, but the second piece, which shares edges with it, will draw over those edge pixels.
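Roughly, the per-pixel bookkeeping looks like this (a simplified sketch, not my actual code):

    #include <cstdint>

    // Derive the 0..256 coverage 'alpha' from a 16-bit sample mask,
    // one bit per cell of the 4x4 grid.
    int coverage_alpha(uint16_t sampleMask)
    {
        int covered = 0;
        for (int i = 0; i < 16; ++i)    // count the set bits
            covered += (sampleMask >> i) & 1;
        return covered * 256 / 16;      // 16 samples -> alpha in 0..256
    }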
For example, take the multisample grid picture: my renderer will draw the orange section, and the bottom middle pixel will be 50% covered by this orange polygon, so it will be 50% orange and 50% background colour (say black, for instance). The green polygon will then be drawn and will also cover the bottom middle pixel by 50%, so it will blend 50% green with the existing 50% orange / 50% black mix, giving us 50% green, 25% orange and 25% black. But realistically the black background colour should never come into it, as the pixel is fully covered, just not by any one polygon.
This page describes the process and says: "In situations like this OpenGL will use coverage percentages to blend the samples from the foreground polygon with the colors already in the framebuffer. For example, for the pixel in the bottom center of the image above, OpenGL will perform two samples in the green polygon, average those samples together, and then blend the resulting color 50% with the color already in the framebuffer for that pixel." But it doesn't describe how that process actually works: https://www2.lawrence.edu/fast/GREGGJ/CMSC420/chapter16/Chapter_16.html
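To make the arithmetic concrete, here is a minimal sketch of that coverage-weighted blend applied twice, reproducing the pie-chart example above (the colour values are purely illustrative):

    struct RGB { float r, g, b; };

    // One coverage blend step: dst = src*coverage + dst*(1 - coverage)
    RGB blend(RGB src, RGB dst, float coverage)
    {
        return { src.r * coverage + dst.r * (1 - coverage),
                 src.g * coverage + dst.g * (1 - coverage),
                 src.b * coverage + dst.b * (1 - coverage) };
    }

    int main()
    {
        RGB px     = {0.0f, 0.0f, 0.0f};   // black background
        RGB orange = {1.0f, 0.5f, 0.0f};
        RGB green  = {0.0f, 1.0f, 0.0f};
        px = blend(orange, px, 0.5f);      // 50% orange, 50% black
        px = blend(green,  px, 0.5f);      // 50% green, 25% orange, 25% black:
        // the black has leaked in even though the two polygons together
        // cover the pixel completely.
    }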
I haven't posted my real source code because it's quite a large project, and I'm not doing anything particularly different from most simple polygon renderers except splitting the main loop out into callback functions.
I can't switch up to a render buffer of size 4xwidth by 4xheight, as it's used for more than just polygon drawing. I'm happy to accept that all 'joined' polygons must be known at function run time, such as requiring the user to pass in all the pie chart polygons at once rather than one at a time, as that seems a fair requirement.
Any guidance would be appreciated.
Related
I'm making a voxel game, and I have designed the water as cubes with 0.5 alpha. It works great if all the water is at the same height, like in the image below:
But, if the water is not at the same height, alpha overlapping happens:
How can I prevent this overlapping from occurring? (For example, by only drawing the nearest water body for each pixel and discarding the rest.) Do I need to use framebuffers and draw the scene in multiple passes, would an alternate blend function be enough, or is there another, less GPU-expensive approach?
I found an answer that doesn't require drawing the scene in multiple passes. I hope it helps somebody:
We are going to draw the nearest water body for each pixel, discarding the rest, and so avoid the overlapping.
First, you draw the solid blocks normally.
Then, you draw the water after disabling writes to the color buffer with glColorMask(false,false,false,false). The Z-buffer will be updated as desired, but no water will be drawn yet.
Finally, you enable writing to the color buffer (glColorMask(true,true,true,true)) and set the depth function to LEQUAL (glDepthFunc(GL_LEQUAL)). Only the nearest water pixels will pass the depth test (setting it to LEQUAL instead of EQUAL deals with some rare but possible floating-point approximation errors). Enabling blending and drawing the water again will produce the effect we wanted:
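Put together, the three passes look roughly like this (a sketch: drawSolids() and drawWater() stand in for your own draw code, a standard alpha blend function is assumed, and the usual context/state setup is omitted):

    void renderFrame()
    {
        drawSolids();                        // 1) solid blocks as usual

        // 2) depth-only water pass: updates the Z-buffer, draws nothing
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        drawWater();

        // 3) color pass: only the nearest water fragments pass LEQUAL
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthFunc(GL_LEQUAL);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawWater();
    }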
I have integrated bloom HDR rendering using OpenGL and GLSL... at least I think so! I'm not really sure about the result.
I followed a tutorial from the Intel website:
https://software.intel.com/en-us/articles/compute-shader-hdr-and-bloom
And for the Gaussian blur effect, I scrupulously followed all the performance advice on the following website:
https://software.intel.com/en-us/blogs/2014/07/15/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms
According to the first website:
"The bright pass output is then downscaled by half 4 times. Each of the downscaled bright pass outputs are blurred with a separable Gaussian filter and then added to the next higher resolution bright pass output. The final output is a ¼ size bloom which is up sampled and added to the HDR output before tone mapping."
Here's the bloom pipeline (the pictures were taken from the NVIDIA Nsight debugger).
The resolution of the window in my test is 1024x720 (for the needs of this algorithm, this resolution will be downscaled 4 times).
Step 1:
Lighting pass (blending of material pass + shadow mask pass + skybox pass):
Step 2:
Extracting the highlight information into a bright pass (to be precise, 4 mipmap textures are generated: "The bright pass output is then downscaled by half 4 times" -> 1/2, 1/4, 1/8 and finally 1/16):
Step 3:
"Each of the downscaled bright pass outputs are blurred with a separable Gaussian filter and then added to the next higher resolution bright pass output."
I want to point out that bilinear filtering is enabled (GL_LINEAR), and the pixelation in the pictures above is the result of resizing the texture in the Nsight debugger window (1024x720).
a) Resolution 1/16x1/16 (64x45)
"1/16x1/16 blurred output"
b) Resolution 1/8x1/8 (128x90)
"1/8x1/8 downscaled bright pass, combined with 1/16x1/16 blurred output"
"1/8x1/8 blurred output"
c) Resolution 1/4x1/4 (256x180)
"1/4x1/4 downscaled bright pass, combined with 1/8x1/8 blurred output"
" 1/4x1/4 blurred output"
d) Resolution 1/2x1/2 (512x360)
"1/2x1/2 downscaled bright pass, combined with 1/4x1/4 blurred output"
"1/2x1/2 blurred output"
To target the desired mipmap level I resize the FBO (but maybe it would be smarter to use separate FBOs, each sized once at initialization, rather than resizing the same one several times. What do you think of this idea?).
Step 4:
Tone mapping render pass:
Up to this point I would like an external opinion on my work. Is it correct or not? I'm not really sure about the result, especially about step 3 (the downscaling and blurring part).
I think the blurring effect is not very pronounced! However, I use a 35x35 convolution kernel (which should be sufficient, I think :)).
But I'm really intrigued by an article in a PDF. Here's its presentation of the bloom pipeline (the presentation is pretty much the same as the one I applied).
Link:
https://www.google.fr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CCMQFjAA&url=https%3A%2F%2Ftransporter-game.googlecode.com%2Ffiles%2FRealtimeHDRImageBasedLighting.pdf&ei=buBhVcLmA8jaUYiSgLgK&usg=AFQjCNFfbP9L7iEiGT6gQNW6dB2JFVcTmA&bvm=bv.93990622,d.d24
As you can see in the picture, the blur bleeding effect is much stronger than mine! Do you think the author uses several convolution kernels (at higher resolutions)?
The first thing I don't understand is how the Gaussian blur algorithm makes colors other than white (grey-scale values) appear in the third picture. I looked very closely (at high zoom) at the bright picture (the second one), and all the pixels seem to be white or close to white (grayscale). One thing is sure: there are no blue or orange pixels in the bright texture. So how can we explain such a transition from picture 2 to picture 3? It's very strange to me.
The second thing I don't understand is the large difference in blur bleeding between pictures 3, 4, 5 and 6! In my version I use a 35x35 convolution kernel, and the final result is close to the third picture here.
How can you explain such a difference?
PS: Note that I use the GL_HALF_FLOAT data type and the GL_RGBA16F internal pixel format to initialize the bloom render pass textures (all the other render passes are initialized with GL_RGBA and the GL_FLOAT data type).
Is something wrong with my program?
Thank you very much for your help!
The blurred small-res textures don't seem blurred enough. I think there is a problem somewhere regarding the width of the filter (not the number of samples, but the distance between samples) or the framebuffer size.
Let's say you have a 150x150 original FBO and a 15x15 downscaled version for bloom, and that you use a 15x15 blur filter.
The blurred high-res version would affect a 7px stroke around the bright parts.
But while blurring the low-res image, the width of the kernel would practically cover the entire image area: at low res, a 7px stroke spans the whole image. So all pixels in the blurred low-res version would contribute something to the final composed image. The high-res blurred image contributes its blur within the 7px stroke around the bright parts, while the low-res blurred image makes a significant difference over the entire image area.
Your low-res images just don't seem well blurred, because their contribution still remains within a 35/2 px stroke around the bright parts, which is wrong.
I hope I managed to explain what is wrong. As for what to change exactly: probably the viewport size while blurring the low-res images, but I simply can't be 100% sure.
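A quick back-of-the-envelope check with the numbers from the example above (just arithmetic, not your actual setup):

    int main()
    {
        // A blur radius measured in pixels of the blurred target gets
        // magnified when the low-res result is stretched to full size:
        int kernelRadius    = 15 / 2;                  // 7 px in the target
        int fullRes = 150, lowRes = 15;
        int upscale         = fullRes / lowRes;        // 10x on compositing
        int effectiveRadius = kernelRadius * upscale;  // 70 px at full res
        // At 15x15, a 7 px radius spans nearly the whole image, so a
        // correctly blurred low-res level contributes almost everywhere.
        (void)effectiveRadius;
        return 0;
    }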
I am working in the VR field, where good calibration of a projected screen is very important; because of difficult-to-adjust ceiling mounts and other hardware specifics, I am looking for a fullscreen shader method to "correct" the shape of the screen.
Most 2D or 3D engines allow you to apply a full-screen effect or deformation by redrawing the rendering result on a quad that you can deform or render in a custom way.
The first idea was to use a vertex shader to offset the corners of this screen quad, so the image is deformed as a quadrilateral (like the hardware keystone on a projector), but it won't be enough for the requirements (this approach is described on math.stackexchange with a live fiddle demo).
In my target case:
The image deformation must be non-linear most of the time, so 9 or 16 control points are needed for a finer adjustment.
The borders of the image are not straight (barrel or pincushion distortion), so even with few control points the image must be distorted in a curved way in between. Otherwise the deformation would produce visible linear seams at each control point's limits.
Ideally, knowing the corrected position of each control point of a 3x3 or 4x4 grid, the way would be to define a continuous transform for the texture coordinates of the image being drawn on the full-screen quad:
u,v => corrected_u, corrected_v
You can find an illustration here.
I've seen some FFD algorithms that work in 2D or 3D and would allow deforming an image or mesh "softly", as if it were made of rubber, but the implementation seems heavy for a real-time shader.
I also thought of a weight-based deformation like we have in skeletal/soft-body animation, but it seems hard to weight the control points properly.
Do you know a method, algorithm or general approach that could help me solve the problem?
I saw some mesh-based deformations like the ones the new Oculus Rift DK2 requires for its own corrections, but most 2D/3D engines use a single quad made of only 4 vertices by default.
If you need non-linear deformation, Bézier surfaces are pretty handy and easy to implement.
You can either pre-build them on the CPU, or use hardware tessellation (example provided here).
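For the CPU route, evaluating a bicubic Bézier patch from a 4x4 control grid takes only a few lines (a sketch; Vec2 and the control-point layout are assumptions):

    #include <cmath>

    struct Vec2 { float x, y; };

    // Evaluate a bicubic Bezier patch at (u,v), both in 0..1.
    Vec2 bezierPatch(const Vec2 p[4][4], float u, float v)
    {
        auto basis = [](int i, float t) {   // cubic Bernstein polynomials
            const float c[4] = {1, 3, 3, 1};
            return c[i] * std::pow(t, (float)i) * std::pow(1 - t, (float)(3 - i));
        };
        Vec2 r = {0, 0};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) {
                float w = basis(i, u) * basis(j, v);
                r.x += w * p[i][j].x;
                r.y += w * p[i][j].y;
            }
        return r;
    }

You would feed the (u,v) of the full-screen quad through this and use the result as the corrected texture coordinate.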
Continuing my research, I found a way.
I created a 1D RGB texture corresponding to a "ramp" of cosine values. These will be the 3 influence coefficients of the offset parameters on a 0..1 axis, with the 3 coefficients peaking at 0, 0.5 and 1:
Red starts at 1 at x=0 and goes down to 0 at x=0.5
Green starts at 0 at x=0, goes to 1 at x=0.5 and goes back to 0 at x=1
Blue starts at 0 at x=0.5 and goes up to 1 at x=1
With these, from 9 float2 uniforms, I can interpolate my parameters very smoothly over the image (with 3 lookups horizontally, and a final one vertically).
Then, once interpolated, I offset the texture coordinates with these values and it works :-D
This is more or less a weighted interpolation of the coordinates, using texture lookups for speed.
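For reference, the same weights can be computed directly instead of being baked into the texture (a sketch; the exact cosine shape is my assumption matching the ramps described above):

    #include <cmath>

    const float PI = 3.14159265f;

    // Three cosine ramp coefficients peaking at x = 0, 0.5 and 1;
    // they always sum to 1 over 0..1.
    void rampWeights(float x, float w[3])
    {
        w[0] = x <= 0.5f ? 0.5f * (1 + std::cos(2 * PI * x)) : 0.0f;
        w[1] = 0.5f * (1 - std::cos(2 * PI * x));
        w[2] = x >= 0.5f ? 0.5f * (1 + std::cos(2 * PI * (x - 1))) : 0.0f;
    }

    // Interpolate a 3x3 grid of float2 offsets at (u,v): three horizontal
    // weightings, then one vertical, as described above.
    void interpOffset(const float grid[3][3][2], float u, float v, float out[2])
    {
        float wu[3], wv[3];
        rampWeights(u, wu);
        rampWeights(v, wv);
        out[0] = out[1] = 0;
        for (int j = 0; j < 3; ++j)
            for (int i = 0; i < 3; ++i) {
                out[0] += wv[j] * wu[i] * grid[j][i][0];
                out[1] += wv[j] * wu[i] * grid[j][i][1];
            }
    }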
I have researched this, and the methods used to create a bloom effect are usually based on combining a sharp and a blurred image to give the glow effect. But I want to know how I can make GL_LINES (or any line) glow. Since in my game I am randomly generating simple 2D terrain, I wish to make the terrain line segments glow.
Use a fragment shader to calculate the distance from a fragment to the edge and color the fragment with the appropriate color value. You can use a simple control curve to control the radius and intensity of the glow (like in Photoshop). It can also be tuned to act like a wireframe visualization. The idea is that you don't really rasterize the lines with a draw call; you just shade each pixel based on its distance from the corresponding edge.
The difference from using a blur pass is that, first, you get better performance, and second, you get per-pixel control over the glow: you can have a non-uniform glow, which you cannot get with a blur, because a blur is not aware of the actual line geometry; it just blindly works on pixels, whereas with edge-distance detection you use the actual geometry data as input without flattening it down to pixels. You can also have things like gradient glows, where the glow color is different and changes with the radius.
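A sketch of the core per-pixel computation (written as plain C++ here for clarity; in a real renderer it lives in the fragment shader with the segment endpoints passed in, and the falloff curve below is an arbitrary choice):

    #include <algorithm>
    #include <cmath>

    struct Vec2 { float x, y; };

    // Distance from point p to the segment ab.
    float distToSegment(Vec2 p, Vec2 a, Vec2 b)
    {
        float abx = b.x - a.x, aby = b.y - a.y;
        float len2 = abx * abx + aby * aby;
        float t = len2 > 0 ? ((p.x - a.x) * abx + (p.y - a.y) * aby) / len2
                           : 0.0f;
        t = std::clamp(t, 0.0f, 1.0f);
        float dx = p.x - (a.x + t * abx), dy = p.y - (a.y + t * aby);
        return std::sqrt(dx * dx + dy * dy);
    }

    // Control curve: full intensity on the line, falling to 0 at 'radius'.
    float glowIntensity(float dist, float radius)
    {
        float t = std::clamp(1.0f - dist / radius, 0.0f, 1.0f);
        return t * t;   // quadratic falloff; tune to taste
    }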
I have textures that I'm creating and would like to antialias them. I have access to each pixel's color; given this, how could I antialias the entire texture?
Thanks
I'm sorry, but true anti-aliasing does not consist of taking the average color of the neighbours, as suggested above. That will undoubtedly soften the edges, but it's not anti-aliasing, it's blurring. True anti-aliasing simply cannot be done properly on a bitmap, since it has to be calculated at drawing time to tell which pixels and/or edges must be "softened" and which must not. For instance: imagine you draw a horizontal line which must be exactly 1 pixel thick (say "high") and must be placed exactly on an integer screen row coordinate. Obviously, you'll want it unsoftened, and a proper anti-aliasing algorithm will do exactly that, drawing your line as a perfect row of solid pixels surrounded by perfect background-coloured pixels, with no tone blending at all. But if you take this same line once it's been drawn (i.e. as a bitmap) and apply the averaging method, you'll get blurring above and below the line, resulting in a 3-pixel-thick horizontal line, which is not the goal. Of course, this could all be achieved with the right code, but from a very different and much more complex approach.
The basic method for anti-aliasing is: for a pixel P, sample the pixels around it. P's new color is the average of its original color and the colors of the samples.
You might sample more or fewer pixels, change the size of the area around the pixel to sample, or randomly choose which pixels to sample.
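A minimal sketch of that averaging on an 8-bit grayscale bitmap (3x3 box, skipping samples that fall outside the image):

    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> boxAverage(const std::vector<uint8_t>& src, int w, int h)
    {
        std::vector<uint8_t> dst(src.size());
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int sum = 0, n = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int sx = x + dx, sy = y + dy;
                        if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
                        sum += src[sy * w + sx];
                        ++n;
                    }
                dst[y * w + x] = (uint8_t)(sum / n);
            }
        return dst;
    }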
As others have said, though: anti-aliasing isn't really something that's done to an image that's already a bitmap of pixels. It's a technique that's implemented in the 2D/3D rendering engine.
"Anti-aliasing" can refer to a broad range of different techniques. None of those would typically be something that you would do in advance to a texture.
Maybe you mean mip-mapping? http://en.wikipedia.org/wiki/Mipmap
It's not very meaningful to ask about antialiasing in terms this vague. It all depends on the nature of the textures and how they will be used.
Generally though, if you don't have control over the display environment and your textures might be scaled, rotated, or composited with other elements you don't have any control over, you should use transparency to indicate where your image ends and where it only partially fills a pixel.
It's also generally good to avoid sharp edges and features that are small relative to the amount of scaling that might be done.