OpenGL blending function to eliminate primitive overlap but maintain overall opacity - C++

I have some geometry with a single primitive set that's a tri-strip. Some of the triangles in the primitive overlap, so when I add a material to the geometry with an alpha value I see the overlap (as expected). I want to get rid of this effect without changing the geometry, though -- I tried playing around with different blending modes (glBlendFunc()), but I couldn't get this to work. I got some interesting results, but nothing that would eliminate the opacity effects within the primitives of the tri-strip while preserving opacity for the entire object. I'm using OpenSceneGraph, which provides a way to set glBlendFunc() on the geometry in question.
So from the image, assume that the pink roads, purple roads and yellow roads constitute three separate objects, each created using a single tri-strip (there are multiple strips, but for argument's sake, pretend there are only three differently colored tri-strips here). I basically don't want to see the self-intersections within the same color.
Also, my question is pretty much the same as this one: OpenGL, primitives with opacity without visible overlap, but I should note that when I tried the blending mode in the accepted answer for that question, the strips weren't rendered in the scene at all.
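For context, this is roughly how the blend function gets attached to the geometry through OpenSceneGraph (a minimal sketch; the geometry variable and the chosen factors are placeholders, not my actual scene code):

#include <osg/BlendFunc>
#include <osg/Geometry>
#include <osg/StateSet>

// 'geometry' is assumed to be the osg::Geometry holding the tri-strip.
osg::StateSet* stateSet = geometry->getOrCreateStateSet();
osg::BlendFunc* blendFunc = new osg::BlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
stateSet->setAttributeAndModes(blendFunc, osg::StateAttribute::ON);
stateSet->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);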

I've had the same issue in a previous project. Here's how I solved it:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA)
and draw the rectangles. The idea behind this is that you draw a rectangle with the desired transparency, which is taken from the framebuffer, but in the process you mask the area you've drawn to, so that your subsequent rectangles are masked there.
Source: Stack Overflow: Overlapping rectangles
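As a rough illustration of that quoted approach applied to a tri-strip (a sketch only; clearing the destination alpha to 1 - opacity, the bgR/bgG/bgB background values and the drawTriStrip() call are my assumptions, not part of the quoted answer):

// Assumption: the framebuffer alpha channel acts as a coverage mask.
// Clearing it to 1 - opacity leaves exactly 'opacity' available for the strip.
glClearColor(bgR, bgG, bgB, 1.0f - opacity);
glClear(GL_COLOR_BUFFER_BIT);

glEnable(GL_BLEND);
// Source is weighted by the alpha still available, destination by what is
// already covered, so later overlapping triangles contribute less there.
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
drawTriStrip();   // hypothetical call that issues the tri-strip

glDisable(GL_BLEND);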

One way to do this is to render each set of paths to a texture and then draw the texture onto the window with alpha. You can do this for each color of path.
This outlines the general idea.
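A minimal sketch of that idea in raw OpenGL, assuming an FBO with a color texture attachment and a drawFullScreenQuad() helper (both placeholders, not code from the answer): render one color's strips fully opaque into the texture, then composite the texture over the window with the desired alpha.

// 1. Render the pink strips, fully opaque, into an offscreen texture.
glBindFramebuffer(GL_FRAMEBUFFER, pinkFbo);       // FBO with a color texture attached
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawPinkStrips();                                 // no blending: overlaps collapse to one layer

// 2. Composite that texture over the window with the desired opacity.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad(pinkTexture, opacity);         // shader multiplies the texture's alpha by 'opacity'
glDisable(GL_BLEND);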

Related

Is it possible to add fragments outside of a 3D model's area?

In a 3D scene in Godot, I am attempting to create a pixel-perfect outline for a Spatial shader (applied after a pixelation effect to ensure the same resolution). To achieve this, I would like to modify pixels directly adjacent to the target mesh.
That said, I have a hunch that I simply cannot modify pixels outside of a mesh's area in screen space, and that I would have to use a separate donor mesh to achieve this effect. The issue with this is that I'm even more unsure of how to access an external mesh (I am fine applying the same pixel-perfect effect to all meshes on-screen, but it would have to be pixel-perfect).
A secondary solution that I may have to settle for: do an inwards bleed for the outline, sacrificing the outermost pixels for the outline, which would be an acceptable compromise.
You can render just the elements you want to a Viewport using cull_mask as I just described in another answer here.
Now you can take the texture from that Viewport using a ViewportTexture (make sure it is local to the scene and that you are using it in a Node placed after the Viewport in the scene tree) and process it with a shader.
I suggest you make the background of the Viewport transparent, so you can use the alpha channel to check if a pixel is rendered or not. The outline pixels will be those which are not rendered but are adjacent to a pixel that was rendered.
This is the idea behind convolution edge detection. See Kernel (image processing).
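The rule itself is engine-independent; here is a small CPU-side sketch of it in C++ (not Godot shader code; the image layout is assumed), just to make the adjacency test concrete:

#include <vector>

// alpha[y][x] holds the rendered alpha of the Viewport texture.
// A pixel is an outline pixel if it was NOT rendered but has a rendered neighbour.
bool isOutline(const std::vector<std::vector<float>>& alpha, int x, int y)
{
    if (alpha[y][x] > 0.0f)
        return false;                       // already covered by the mesh
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };
    for (int i = 0; i < 4; ++i)
    {
        int nx = x + dx[i], ny = y + dy[i];
        if (ny >= 0 && ny < (int)alpha.size() &&
            nx >= 0 && nx < (int)alpha[ny].size() &&
            alpha[ny][nx] > 0.0f)
            return true;                    // rendered pixel next door: outline here
    }
    return false;
}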

How to avoid distance ordering in large scale billboard rendering with transparency

Setting the scene:
I am rendering a height map (a vast non-transparent surface) with a large number of billboards on it (typically grass, flowers and so on).
The billboards thus have a mostly transparent color map applied, with only a few pixels colored to produce the grass or leaf shapes and such. Note that the edges of those shapes use a bit of transparency gradient to make them look smoother, but I have also tried with basic, binary color/transparent textures.
Pseudo rendering code goes like so:
map->render();                                      // opaque height map, depth writes on
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard alpha blending
wildGrass->render();                                // all billboards in one call
glDisable(GL_BLEND);
Where the wildGrass render instruction renders multiple billboards at various locations in a single OGL call.
The issue I am experiencing has to do with transparency and the fact that billboards apparently hide each other, even in their transparent areas. However, the height map's solid background is correctly displayed in those transparent parts.
Here's the glitch:
Left is with an explicit fragment shader discard on fully transparent pixels
Right is without the discard, clearly showing the billboard's flat quad
Based on my understanding of OGL blending and some reading, it seems that the solution is to have a controlled order of rendering, starting from the most distant objects to the closest, so that the color buffer is filled properly in the end.
I am desperately hoping that there is another way... The ordering here would typically vary depending on the point of view, which means it would have to be recomputed in real time for each frame. Plus, these particular billboards are by nature produced in a -very large- number... Performance alert!
Any suggestions or is my approach of blending wrong?
Did not work for me:
#httpdigest's suggestion to disable depth buffer writing:
It worked essentially for billboards with the same texture (and possibly a specific type of texture, like wild grass for instance), because the depth inconsistencies are not visually noticeable - however, introducing another texture, say a flower with drastically different colours, will immediately highlight those mistakes.
Solution:
#Rabbid76's suggestion to use textures without semi-transparency, combined with multi-sampling & anti-aliasing: I believe this is the way to go for the best visual effect at a reasonably low performance cost.
Alternative solution:
I found an intermediate solution which is probably the cheapest in performance, at the expense of quality. I still use textures with gradient transparent edges, but instead of discarding only fully transparent pixels, I introduced a degree of tolerance: for example, any pixel with alpha < 0.6 is discarded - the value was found experimentally to strike the right balance.
With this approach:
I still perform depth tests, so output is correct
Texture quality is degraded / textures look less smooth - but reasonably so
The glitches on semi-transparent pixels still appear - but are barely noticeable
See capture below
So to conclude:
My solution is a cheap and simple approximation that gives a less smooth visual result
The best result can be obtained by rendering all the billboards to a multi-sampled texture, resolving it with anti-aliasing, and finally outputting the result on a full-screen quad. There are probably two ways to do this (the second is sketched after this list):
Either render the map first and use the resulting depth buffer when rendering the billboards
Or render both the map and the billboards into the multi-sampled texture
Note that the above approaches are both meant to avoid having to distance-sort a large number of billboards - but that remains a valid option, and I have read about storing billboard locations in a quad tree for quick access.
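A rough sketch of that second option (rendering both the map and the billboards into a multi-sampled buffer, then resolving it) in plain OpenGL; here the resolve is done with a blit rather than a full-screen quad for brevity, and width/height and the render calls are placeholders:

// Create a multi-sampled FBO (color + depth), e.g. 4x MSAA.
GLuint msFbo, msColor, msDepth;
glGenFramebuffers(1, &msFbo);
glGenRenderbuffers(1, &msColor);
glGenRenderbuffers(1, &msDepth);
glBindRenderbuffer(GL_RENDERBUFFER, msColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, msDepth);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msColor);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msDepth);

// Render the map and the billboards into the multi-sampled buffer.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
map->render();
wildGrass->render();   // alpha-tested billboards; MSAA smooths the hard edges

// Resolve (blit) the multi-sampled buffer into the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);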

OpenGL Light Blending

I am writing a Lights/Shadows system for my game using Java alongside the LWJGL. For each one of the Light-Emitting Entities I generate such a texture:
I should warn you that these Textures are full of (0, 0, 1) or (1, 0, 0) pixels, and the gradient effect is achieved with the alpha channel. I interpret the Alpha channel as a gradient factor.
Afterwards, I wish to blend every light/shadow texture together on a single texture, each at its respective correct position. For that, I use a framebuffer. I tried to achieve the desired effect using the following blend equation/function combination:
glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_MAX);
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE);
I chose GL_ONE/GL_ONE for the Alpha Channel Blend Function arbitrarily, for GL_MAX will only do max(Sa, Da), as stated here, which means that the scaling factors are not used. The result of this combination is the following:
This image was obtained with Apple's OpenGL Driver Profiler, so I did not render it using my application (which could mess with the final result). The next step would be to render this texture over the actual game using multiply-blending, in order to darken the image, but the lights/shadows texture is obviously wrong, because we can see the edges of individual light/shadow textures over each other.
How should I proceed to achieve the desired result?
Edit:
I forgot to explain my choices for the scaling factors:
I think that it would be right to simply add the colors of each light (weighting each of them by its respective alpha value) and choose the alpha of the final fragment to be the largest alpha of the overlapping lights.
Imagine that one of your texture rectangles was extended outside its current border with some arbitrary pattern, like pure green. Imagine further that we were somehow allowed to use two different blending functions, one inside the border, and one outside. You would get the same image you have here (none of the green showing) if outside the border you used the blend function
glBlendFuncSeparateEXT(GL_ZERO, GL_ONE, GL_ONE, GL_ONE)
We would then want whatever blending function we use inside to give us a continuous blending result. The blending function
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE)
would not give us such a result. The problem is not so much the first parameter, which would simply mean ignoring the source near the border (small alpha on the border, if I read your image description correctly), so it must be the second parameter. We want the destination only when the source alpha is small. Change GL_DST_ALPHA to GL_ONE_MINUS_SRC_ALPHA. This would be more standard, but maybe I'm not understanding your objectives?
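In other words, the suggested combination would look like this (same equation as before, only the destination color factor changed):

glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_MAX);
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);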

OpenGL Alpha blending and order-independent transparency

I am trying to implement order-independent transparency (OIT) using the following naive technique (a code sketch follows the steps):
Sort opaque and transparent objects.
Render opaque with depth write on.
Disable depth write, enable alpha blending, and render the transparent objects.
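A minimal sketch of those three steps in plain OpenGL (sortBackToFront() and the render calls are placeholders for my own scene code):

// 1. Opaque objects first, with depth writes enabled.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
renderOpaqueObjects();

// 2./3. Transparent objects: depth test still on, depth writes off, blending on.
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
sortBackToFront(transparentObjects);   // per-object sort relative to the camera
renderTransparentObjects();

glDisable(GL_BLEND);
glDepthMask(GL_TRUE);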
It works OK if I have only fully opaque and fully transparent objects. But what about the case where some objects have transparent alpha textures (all my meshes are planar) and need alpha blending on, while others are transparent? What is the routine in this case?
Currently, if I render the transparent objects first with depth write on and alpha blending on too, and then render the object with the alpha-channel texture (depth write off, blend on), what happens is that the part of the last rendered plane that intersects the first plane gets culled. Here is a picture to depict what I am after:
Both planes have some amount of transparency and still maintain depth sort.
I know I can use more sophisticated approaches for OIT like this. But is it possible to do it without getting into fragment linked lists etc?
A rasterization-based renderer, by its very nature, cannot handle transparency well. Rasterizers work by rendering one object, then rendering another. They have no idea what has been rendered before or what will be rendered afterwards. Their job is to take a 3D shape and turn it into a field of colors.
If there was a simple technique for doing order-independent transparency, then you'd have heard about it by now because everyone would be using it. There isn't. Every general OIT technique is complicated and has some performance downsides associated with it.
Is it possible to find a way for the very specific case of two planes intersecting with nothing else on the screen? Yes. Could you generalize that method for arbitrary scenes with arbitrary transparency? No.

Outline / Silhouette rendering with OpenGL

I know there are several techniques to achieve this, but none of them seems sufficient.
Using a Sobel / Laplace filter doesn't find all the correct edges (and finds unwanted edges), is slow, and doesn't give me control over the outline width.
What I have settled on for now is rendering the back faces of my objects first with a solid color, scaled up a little bigger than the actual objects. The result does look good, but I really want my outlines to have a constant width.
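For reference, the two-pass setup I'm describing looks roughly like this (a sketch; the shader helpers, draw calls and scale factor are placeholders):

// Pass 1: draw only the back faces, slightly enlarged, in the outline color.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                 // cull front faces so only back faces remain
useFlatColorShader(outlineColor);     // hypothetical helper
drawModel(scale * 1.05f);             // hypothetical: model scaled up a little

// Pass 2: draw the object normally on top.
glCullFace(GL_BACK);
useNormalShader();                    // hypothetical helper
drawModel(scale);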
I already tried rendering the back faces of my objects with thick wireframe lines. That gives me a constant outline width, but line width is deprecated, produces rendering artifacts, and leaves gaps if the outline abruptly changes direction (like on a cube, for example). I have not yet tried a third rendering pass that draws a point the size of the wireframe lines for each vertex, because of the other problems with this technique.
Any ideas?
Edit: I even looked at finding the edges myself using a geometry shader, as described in http://prideout.net/blog/?p=54, but it suffers from the same gaps as the wireframe technique.
Edit: I was able to get rid of the rendering artifacts with the wireframe technique by disabling GL_DEPTH_TEST while drawing the outlines. Unfortunately, I also lost the outlines on overlapping objects...
My goal is to get the same effect they use on characters in the Dragons Lair 3 game. Does anyone know how they did it?
In case you're after real edge detection, I've found that you can get pretty good results with a convolution using a 5x5 LoG (Laplacian of Gaussian) kernel, applied to the depth buffer and blended over the rendered object (possibly with decent FSAA). You need some tuning in the fragment shader in order to clamp the blended outline, but the results are good. (And it's a matter of what you really want, btw.)
Note that:
1) Laplace filtering and LoG filtering are different things and produce different results.
2) If you apply the convolution to the depth buffer instead of the rendered image, you end up with totally different results. Furthermore, if control over the outline width is desired, a dilate filter followed by a selective-erode pass can be applied. This way you end up with a render that closely matches a hand-drawn sketch made with a marker, and you have fine control over the tip size, but at the cost of two extra passes.
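To make the kernel concrete, here is a CPU-side sketch in C++ using a commonly cited 5x5 LoG approximation applied to a depth image (the exact kernel values and the linear depth layout are assumptions; in practice this would live in a fragment shader):

#include <algorithm>
#include <vector>

// A widely used integer approximation of the 5x5 Laplacian-of-Gaussian kernel.
static const int kLoG[5][5] = {
    {  0,  0, -1,  0,  0 },
    {  0, -1, -2, -1,  0 },
    { -1, -2, 16, -2, -1 },
    {  0, -1, -2, -1,  0 },
    {  0,  0, -1,  0,  0 },
};

// Convolve the depth buffer at (x, y); a large absolute response marks an edge.
float logResponse(const std::vector<float>& depth, int width, int height, int x, int y)
{
    float sum = 0.0f;
    for (int ky = -2; ky <= 2; ++ky)
        for (int kx = -2; kx <= 2; ++kx)
        {
            int sx = std::min(std::max(x + kx, 0), width - 1);   // clamp at the borders
            int sy = std::min(std::max(y + ky, 0), height - 1);
            sum += kLoG[ky + 2][kx + 2] * depth[sy * width + sx];
        }
    return sum;
}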