OpenGL alpha blending and order-independent transparency

I am trying to implement order-independent transparency (OIT) using the following naive technique:
Sort opaque and transparent objects.
Render opaque with depth write on.
Disable depth write, enable alpha blending, and render the transparent objects.
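In OpenGL terms, these three steps are roughly the following (drawOpaque() and drawTransparent() are placeholders for my scene rendering, with the transparent objects assumed pre-sorted back to front):
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);                 // depth write on for opaque geometry
glDisable(GL_BLEND);
drawOpaque();

glDepthMask(GL_FALSE);                // keep depth test, stop writing depth
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTransparent();                    // back to front
glDepthMask(GL_TRUE);                 // restore for the next frame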
It works OK if I have only fully opaque and fully transparent objects. But what about the case where some objects have alpha textures (all my meshes are planar) and need alpha blending on, while others are simply transparent? What is the routine in this case?
Currently, if I render the transparent plane first with depth write and alpha blending on, then render the object with the alpha-channel texture (depth write off, blend on), what happens is that the part of the last rendered plane that intersects the first plane gets culled. Here is a picture to depict what I am after:
Both planes have some amount of transparency and still maintain depth sort.
I know I can use more sophisticated approaches for OIT, like this one. But is it possible to do it without getting into fragment linked lists, etc.?

A rasterization-based renderer, by its very nature, cannot handle transparency well. Rasterizers work by rendering one object, then another; they have no idea of what has been rendered before or what will be rendered afterwards. Their job is to take a 3D shape and turn it into a field of colors.
If there was a simple technique for doing order-independent transparency, then you'd have heard about it by now because everyone would be using it. There isn't. Every general OIT technique is complicated and has some performance downsides associated with it.
Is it possible to find a way for the very specific case of two planes intersecting with nothing else on the screen? Yes. Could you generalize that method for arbitrary scenes with arbitrary transparency? No.

How to avoid distance ordering in large scale billboard rendering with transparency

Setting the scene:
I am rendering a height map (a vast non-transparent surface) with a large number of billboards on it (typically grass, flowers and so on).
The billboards thus have a mostly transparent color map applied, with only a few pixels colored to produce the grass or leaf shapes and such. Note that the edges of those shapes use a bit of transparency gradient to make them look smoother, but I have also tried with basic, binary color/transparent textures.
Pseudo rendering code goes like so:
map->render();
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
wildGrass->render();
glDisable(GL_BLEND);
Where the wildGrass render instruction renders multiple billboards at various locations in a single OGL call.
The issue I am experiencing has to do with transparency and the fact that billboards apparently hide each other, even in their transparent areas. However, the height-map's solid background is correctly displayed in those transparent parts.
Here's the glitch:
Left is with an explicit fragment shader discard on fully transparent pixels
Right is without the discard, clearly showing the billboard's flat quad
Based on my understanding of OGL blending and some reading, it seems that the solution is to have a controlled order of rendering, starting from the most distant objects to the closest, so that the color buffer is filled properly in the end.
I am desperately hoping that there is another way... The ordering here would typically vary depending on the point of view, which means it has to be applied in real time for each frame. Plus, these particular billboards are by nature produced in a very large number... Performance alert!
Any suggestions, or is my approach to blending wrong?
Did not work for me:
@httpdigest's suggestion to disable depth buffer writing:
It worked essentially for billboards sharing the same texture (and possibly a specific type of texture, like wild grass for instance), because the depth inconsistencies are not visually noticeable. However, introducing another texture, say a flower with drastically different colours, immediately highlights those mistakes.
Solution:
@Rabbid76's suggestion to use not-semi-transparent textures with multi-sampling & anti-aliasing: I believe this is the way to go for the best visual effect with reasonably low cost in performance.
Alternative solution:
I found an intermediate solution which is probably the cheapest in performance, at the expense of quality. I still use textures with gradient transparent edges, but instead of discarding only fully transparent pixels, I introduced a degree of tolerance: for example, any pixel with alpha < 0.6 is discarded. The value was found experimentally to strike the right balance.
With this approach:
I still perform depth tests, so the output is correct
Texture quality is degraded / looks less smooth, but reasonably so
The glitches on semi-transparent pixels still appear, but are barely noticeable
See capture below
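For reference, here is a minimal sketch of the alpha cutoff in the fragment shader, written as a GLSL source string (the names are illustrative; 0.6 is the experimentally found tolerance from above):
const char* billboardFS = R"(
    #version 330 core
    uniform sampler2D colorMap;
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        vec4 texel = texture(colorMap, uv);
        if (texel.a < 0.6)    // tolerance found experimentally
            discard;          // skip near-transparent fragments entirely
        fragColor = texel;
    }
)";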
So to conclude:
My solution is a cheap and simple approximation giving a less smooth visual result
The best result can be obtained by rendering all the billboards to a multi-sampled texture, resolving it with anti-aliasing, and finally outputting the result on a full-screen quad. There are probably two ways to do this:
Either render the map first and use the resulting depth buffer when rendering the billboards
Or render both the map and the billboards into the multi-sampled texture
Note that the above approaches are both meant to avoid having to distance-sort a large number of billboards - but that remains a valid option, and I have read about storing billboard locations in a quadtree for quick access.
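For the multi-sampled variant, the render-target setup might look roughly like this (4 samples, and the width/height variables, are placeholder assumptions):
// Create a 4x multisampled FBO to render the map and/or billboards into.
GLuint msFBO, msColor, msDepth;
glGenFramebuffers(1, &msFBO);
glBindFramebuffer(GL_FRAMEBUFFER, msFBO);

glGenTextures(1, &msColor);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msColor);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, msColor, 0);

glGenRenderbuffers(1, &msDepth);
glBindRenderbuffer(GL_RENDERBUFFER, msDepth);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msDepth);

// ... render the map and the billboards here ...

// Resolve the multisampled image (the anti-aliasing happens in this blit).
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);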

Can I carry out MSAA for deferred rendering by just rendering the geometry twice?

I have a question about 3D rendering.
Deferred rendering is very powerful, but notorious for not playing nicely with MSAA.
I clearly see why, but I suddenly came up with an idea to solve that.
It's simple: just do the deferred rendering completely, and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not anti-aliased.
Here comes the further processing: next, draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
It's silly, but I think it might work: if we draw the geometry again with the deferred-rendered result as the output color, it means we re-render the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU will be able to perform MSAA with aliased color but super-sampled depth geometry. (It's similar to picking up only the 'center' of a fragment and evaluating that in the ordinary MSAA process.)
I'm not sure whether this description makes sense or not. I tested it using OpenGL, but it made no difference compared to plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).

Model with transparency

I have a model with transparent quads for a beard. I cannot tell which triangles belong to the beard because their color comes from the texture passed to the fragment shader. I have tried to pre-sort the triangles back to front during export of the model, but this did not help. Then I implemented MSAA and alpha-to-coverage, but this did not help either. My last attempt was to draw the model with the depth mask off, skipping any transparent data, so the color buffer would have non-clear color values to blend with; then I would draw the model a second time with depth testing on, drawing the alpha pieces.
Nothing I have tried so far has worked. What other techniques can I try to get the beard of the model to properly draw? I am looking for a way to handle this that doesn't use a bunch of extensions. I'd prefer techniques that can be handled with plain old OpenGL 4.
Here is an image of what I am dealing with.
This is what I got after I applied the selected answer.
What you're trying to do there is a still largely unsolved problem: order-independent transparency. MSAA is something entirely different, as is alpha-to-coverage.
So far the best working solution is to separate the model into an opaque and a hairy part. Draw the opaque parts of your scene first, then draw everything (semi-)translucent, ordered far to near in a second pass.
From the way your image looks, it seems the beard is rendered first, which is quite the opposite of what you actually want.
Simple way:
Enable depth write (depth mask), disable alpha-blending, draw model without the beard.
Disable depth write, enable alpha-blending, draw the beard. Make sure face culling is enabled.
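In code, a minimal sketch of this simple way (drawModel() and drawBeard() are hypothetical):
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);               // keep beard back faces from blending twice
glDepthMask(GL_TRUE);                 // depth write on
glDisable(GL_BLEND);
drawModel();                          // everything except the beard

glDepthMask(GL_FALSE);                // depth write off, depth test still on
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawBeard();                          // the (semi-)translucent part
glDepthMask(GL_TRUE);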
Hard way:
Because order-independent transparency in renderers that use a z-buffer is an unsolved problem (as datenwolf said), you could try depth peeling. I believe the paper is available in the OpenGL SDK. Most likely it'll be slower than the "simple way", and there'll be a limit on the maximum number of overlapping transparent polygons. Also check the Wikipedia article on order-independent transparency.

Alpha blending with multiple textures leaves colored border

Following problem: I have two textures and I want to combine them into a new texture. One texture is used as the background; the other will be overlaid. The overlay texture is initialized with glClearColor(1.0, 1.0, 1.0, 0.0). Objects are drawn onto the texture, and these objects have alpha values.
Now blending between the two textures leaves a white border around the objects. The border comes from the fact that the background color in the second texture is white, isn't it?
How can I use alpha blending where I do not have to think about the background color of the overlaying texture?
I solved the problem myself, but thanks a lot to all of you guys!
The problem was the following: to combine both textures I used glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which does not work due to the fact that OpenGL uses pre-multiplied alpha values. Blending with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) works, as the source term now becomes:
1 * (src_alpha * src_color)
since the multiplication by alpha is already baked into the pre-multiplied source color.
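For reference, a sketch of that pre-multiplied setup, with a hypothetical fragment shader that bakes the alpha into the color in case the source is not already pre-multiplied:
// Source factor is GL_ONE because the multiplication by alpha has already
// happened before blending (pre-multiplied alpha).
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

// Hypothetical fragment shader producing pre-multiplied output:
const char* premultipliedFS = R"(
    #version 330 core
    uniform sampler2D tex;
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        vec4 c = texture(tex, uv);
        fragColor = vec4(c.rgb * c.a, c.a);  // bake alpha into RGB
    }
)";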
How can I use alpha blending where I do not have to think about the background color of the overlaying texture?
You can't; your blend function incorporates the background color into it, because it may not actually be the "background". You render multiple objects to the texture, so the "background" color may in fact be a previously rendered object.
Your best bet is to minimize the impact. There's no particular need for the background color to be white. Just make it black. This won't make the artifacts go away; it will hopefully just make it less noticeable.
The simple fact is that blending in graphics cards simply isn't designed to be able to do the kinds of compositing you're doing. It works best when what you're blending with is opaque. Even if there are layers of transparency between the opaque surface and what you're rendering, it still works.
But if the background is actually transparent, with no fully opaque color, the math simply stops working. You will get artifacts; the question is how noticeable they will be.
If you have access to more advanced hardware, you could use some shader-based programmatic blending techniques. But these will have a performance impact.
I think you probably get better results with a black background, as Nicol Bolas pointed out. But you should double-check your blending functions because, as you point out, it SHOULD not matter:
1.0 * 0.0 + 0.734 * 1.0 = 0.734 (that is, dst_color * (1 - src_alpha) + src_color * src_alpha with src_alpha = 1.0: the white background drops out entirely)
What I don't really get is why your base texture is fully transparent. Is that intended? Unless you blend the textures and then use them somewhere else, initializing to alpha = 1.0 is a better idea.
Make sure you disable depth writing before you draw the transparent texture (so one transparent texture can't "block" another, preventing part of it from being drawn). To do so, just call glDepthMask(GL_FALSE). Once you are done drawing transparent objects, call glDepthMask(GL_TRUE) to set depth writing back to normal.

OpenGL 2D game question

I want to make a game with Worms-like destructible terrain in 2D, using OpenGL.
What is the best approach for this?
Draw pixel per pixel? (Uh, not good?)
Have the world as a texture and manipulate it (is that possible?)
Thanks in advance
Thinking about the way Worms terrain looked, I came up with this idea. But I'm not sure how you would implement it in OpenGL. It's more of a layered 2D drawing approach. I'm posting the idea anyway. I've emulated the approach using Paint.NET.
First, you have a background sky layer.
And you have a terrain layer.
The terrain layer is masked so the top portion isn't drawn. Draw the terrain layer on top of the sky layer to form the scene.
Now for the main idea. Any time there is an explosion or other terrain-deforming event, you draw a circle or other shape on the terrain layer, using the terrain layer itself as a drawing mask (so only the part of the circle that overlaps existing terrain is drawn), to wipe out part of the terrain. Use a transparent/mask-color brush for the fill and some color similar to the terrain for the thick pen.
You can repeat this process to add more deformations. You could keep this layer in memory and add deformations as they occur or you could even render them in memory each frame if there aren't too many deformations to render.
I guess you'd better use texture-filled polygons with the correct mapping (a linear one that doesn't stretch the texture to use all the texels, but leaves the cropped areas out), and then reshape them as they get destroyed.
I'm assuming your problem will be to implement the collision between characters/weapons/terrain.
As long as you aren't doing this on OpenGL ES, you might be able to get away with using the stencil buffer to do per-pixel collision detection and have your terrain be a single modifiable texture.
This page will give an idea:
http://kometbomb.net/2007/07/11/hardware-accelerated-2d-collision-detection-in-opengl/
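The usual shape of that trick, sketched here under the assumption that the link describes a stencil-plus-occlusion-query overlap test (the draw calls are hypothetical):
// Mark terrain pixels in the stencil, then count how many fragments of the
// character sprite land on marked pixels.
GLuint query;
glGenQueries(1, &query);

glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // keep both passes invisible

glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawTerrainSprite();                  // writes 1s where terrain is solid

glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glBeginQuery(GL_SAMPLES_PASSED, query);
drawCharacterSprite();                // only overlapping fragments pass
glEndQuery(GL_SAMPLES_PASSED);

GLuint overlap = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &overlap);
bool collided = overlap > 0;
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);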
The way I imagine it is this:
a plane with the texture applied
a path (a vector of points/segments) used for ground collisions.
When something explodes, you do a Boolean operation (rectangle minus circle) on the texture (revealing the background) and on the 'walkable' path.
What I'm trying to say is: you do a geometric Boolean operation and use the result to update the texture (with an alpha mask or something) and to update the data structure you use to keep track of the walkable area (whichever that might be).
Split things up instead of relying only on GL draw methods.
I think I would start by drawing the foreground into the stencil buffer so the stencil buffer is set to 1 bits anywhere there's foreground, and 0 elsewhere (where you want your sky to show).
Then to draw a frame, you draw your sky, enable the stencil buffer, and draw the foreground. For the initial frame (before any explosion has destroyed part of the foreground) the stencil buffer won't really be doing anything.
When you do have an explosion, however, you draw it to the stencil buffer (clearing the stencil buffer for that circle). Then you re-draw your data as before: draw the sky, enable the stencil buffer, and draw the foreground.
This lets you get the effect you want (the foreground disappears where desired) without having to modify the foreground texture at all. If you prefer not to use the stencil buffer, the alternative that seems obvious to me would be to enable blending, and just manipulate the alpha channel of your foreground texture -- set the alpha to 0 (transparent) where it's been affected by an explosion. IMO, the stencil buffer is a bit cleaner approach, but manipulating the alpha channel is pretty simple as well.
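A rough sketch of that stencil flow (the draw* calls are hypothetical placeholders):
// Initialization: write 1 into the stencil wherever there is foreground.
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
drawForegroundMask();                 // fills the stencil with 1s

// On explosion: punch a hole of 0s into the stencil for the blast circle.
glStencilFunc(GL_ALWAYS, 0, 0xFF);
drawExplosionCircle();                // REPLACE writes 0s where terrain died

// Each frame: sky everywhere, foreground only where the stencil is still 1.
glDisable(GL_STENCIL_TEST);
drawSky();
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glStencilFunc(GL_EQUAL, 1, 0xFF);
drawForeground();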
I think, but this is just a quick idea, that a good way might be to draw a Very Large Number of Lines.
I'm thinking that you represent the landscape as a bunch of line segments; for each column of the screen you have 0..n vertical lines that make up the ground:
12 789
0123 6789
0123456789
0123456789
In the above awesomeness, the column of "0"s makes up a single line, and so on. I didn't try to illustrate the case where a single pixel column has more than one line, since it's a bit hard in this coarse format.
I'm not sure this will be efficient, but it at least makes some sense since lines are an OpenGL primitive.
You can color and texture the lines by enabling texture-mapping and specifying the desired texture coordinates for each line segment.
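A tiny immediate-mode sketch of the idea (heights[], holding a ground height per screen column, and the screen dimensions are hypothetical):
glEnable(GL_TEXTURE_2D);
glBegin(GL_LINES);
for (int x = 0; x < screenWidth; ++x) {
    float u = x / (float)screenWidth;
    glTexCoord2f(u, 0.0f);
    glVertex2i(x, 0);                 // bottom of this pixel column
    glTexCoord2f(u, heights[x] / (float)screenHeight);
    glVertex2i(x, heights[x]);        // ground height at this column
}
glEnd();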
Typically the way I have seen it done is to have each entity be a textured quad, then update the texture for animation. For destructible terrain it might be best to break the terrain into tiles; then you only have to update the ones that have changed. Don't use glDrawPixels; it is probably the slowest approach possible (outside of reloading textures from disk every frame, though that would be close).