OpenGL: Rendering two transparent planes intersecting each other: impossible or not?

I ran into this problem and it seems impossible to render correctly.
How can one solve this problem? I want OpenGL to render the scene like it looks on the right side of the image below:

You need to render your planes with the depth test disabled and an order-independent blending formula.
If you have some opaque geometry behind them, draw that first, then set the depth buffer to read-only instead of disabling the depth test, and render the transparent planes.
There are also advanced techniques dealing with that common problem, like depth peeling.
EDIT
You can put the depth buffer into read-only mode using glDepthMask(GL_FALSE).
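For example, here is a minimal sketch of the state changes involved (drawOpaqueGeometry and drawTransparentPlanes are hypothetical placeholders for your own draw calls; additive blending is used because it is commutative and therefore order-independent):

    // Pass 1: opaque geometry, normal depth testing and writing.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    drawOpaqueGeometry();          // hypothetical draw call

    // Pass 2: transparent planes. Depth buffer is read-only, so the planes
    // are still occluded by opaque geometry but do not occlude each other.
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);   // additive: commutative, hence order-independent
    drawTransparentPlanes();       // hypothetical draw call

    // Restore state.
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);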
Here is a good article explaining why you can't achieve perfect transparency: Transparency Sorting. Also take a look at the Order Independent Transparency with Dual Depth Peeling article, which covers two methods (one quite straightforward and single-pass) for exact (or approximate) order-independent transparency.
I forgot to mention Alpha to Coverage.

One non-trivial solution is to split the planes into parts, sort them, and then render them back to front (a rough sketch of the sorting step follows below). However, perfect sorting is hard to achieve.
Like in the article posted in the other answer:
Transparency Sorting: Depth Sorting
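As a rough illustration of the sorting step only (this sketch assumes the triangles are already in view space with the camera looking down the negative z axis; all names are placeholders):

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 v[3]; };

    // View-space depth of the triangle's centroid.
    static float centroidDepth(const Triangle& t)
    {
        return (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f;
    }

    // Sort back to front: most negative z (farthest) first.
    void sortBackToFront(std::vector<Triangle>& tris)
    {
        std::sort(tris.begin(), tris.end(),
                  [](const Triangle& a, const Triangle& b) {
                      return centroidDepth(a) < centroidDepth(b);
                  });
    }

Centroid sorting like this is only an approximation: intersecting triangles still have to be split along their intersection before any ordering is meaningful.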

Related

Model with transparency

I have a model with transparent quads for a beard. I cannot tell which triangles belong to the beard because their color comes from the texture passed to the fragment shader. I have tried presorting the triangles back to front during export of the model, but this does not help. Then I implemented MSAA and Alpha to Coverage, but this did not help either. My last attempt was to draw the model with the depth mask off, skipping any transparent data, so the color buffer would have non-clear color values to blend with, and then draw the model a second time with depth testing on, drawing the alpha pieces.
Nothing I have tried so far has worked. What other techniques can I try to get the beard of the model to properly draw? I am looking for a way to handle this that doesn't use a bunch of extensions. I'd prefer techniques that can be handled with plain old OpenGL 4.
Here is an image of what I am dealing with.
This is what I got after I applied the selected answer.
What you're trying to do there is still a largely unsolved problem: order-independent transparency. MSAA is something entirely different, as is alpha to coverage.
So far the best working solution is to separate the model into an opaque and a hairy part. Draw the opaque parts of your scene first, then draw everything (semi-)translucent, ordered far to near in a second pass.
From the way your image looks, it seems the beard is rendered first, which is the opposite of what you actually want.
Simple way:
Enable depth write (depth mask), disable alpha-blending, draw model without the beard.
Disable depth write, enable alpha-blending, draw the beard. Make sure face culling is enabled.
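A minimal sketch of those two passes (drawModelWithoutBeard and drawBeard are hypothetical placeholders for the relevant draw calls):

    // Pass 1: opaque parts, depth writes on, no blending.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    drawModelWithoutBeard();       // hypothetical

    // Pass 2: the beard, blended on top. Depth writes off, face culling on.
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_CULL_FACE);
    drawBeard();                   // hypothetical

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);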
Hard way:
Because order-independent transparency in renderers that use a z-buffer is an unsolved problem (as datenwolf said), you could try depth peeling. I believe the paper is available within the OpenGL SDK. Most likely it'll be slower than the "simple way", and there'll be a limit on the maximum number of overlapping transparent polygons. Also check the Wikipedia article on order-independent transparency.

Efficient OIT techniques for planar geometry

This question is a continuation of this one. I need to implement order-independent transparency (OIT) for planar objects, as it is done in Adobe After Effects.
Some of the methods used today are depth peeling, linked lists, ray casting, etc. These are quite complex solutions, which are essential for self-overlapping and volumetric geometry. Now, in my case all the objects are planes. Isn't there a simpler way of doing it in such a scenario? I am looking at After Effects; it uses OpenGL 2.X, so I think they use some more "trivial" technique. Can the stencil buffer be of use here?
If someone is going to suggest a layered approach: this is not an option, as two or more transparent planes may intersect one another, so both depth and occluded geometry must be preserved.

Perfect filled triangle rendering algorithm?

Where can I get an algorithm to render filled triangles? Edit3: I can't use OpenGL for rendering it; I need a per-pixel algorithm for this.
My goal is to render a regular polygon from triangles, so whatever triangle-filling algorithm I use, the edges of adjacent triangles must neither overlap nor leave gaps between them, because that would cause rendering errors if I use, for example, XOR to render the pixels.
Therefore, the render quality should match OpenGL rendering: I should be able to define, for example, a circle with N vertices, and it should render correctly as a circle at any size; so it shouldn't use only integer coordinates the way some triangle-filling algorithms do.
I would need the ability to control the triangle filling myself: I could add my own logic on how each individual pixel is rendered. So I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles without waiting more than a second, perhaps. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me=math noob).
I added OpenGL tag since this is somehow related to it.
Edit2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ but it seems to be slightly broken. Here is a circle of 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the other triangle's color, which should not happen! (Otherwise transparency, XOR and similar effects will produce bad rendering.)
It seems like the errors are more visible on smaller circles. This is not acceptable if I want to use an XOR effect on the pixels.
What can I do to fix these, so it will fill it perfectly without overlapped pixels or gaps?
Edit4: I noticed that rendering very small circles isn't very good. I realised this was because the coordinates were being converted to integers. How can I treat the coordinates as floats and make it render the circle precisely and perfectly, just like OpenGL does? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve, without using OpenGL. NOTE: I don't just want to render perfect circles, but any polygon shape.
There's always the half-space method.
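For reference, here is a small sketch of the half-space approach with a top-left fill rule, which is what guarantees that triangles sharing an edge neither overlap nor leave gaps (the Framebuffer type and the coordinate conventions are assumptions made up for the example, not part of any particular library):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Vec2 { float x, y; };

    struct Framebuffer {                       // hypothetical render target
        int width, height;
        std::vector<uint32_t> pixels;
        void set(int x, int y, uint32_t c) { pixels[y * width + x] = c; }
    };

    // Twice the signed area of triangle (a, b, p); positive when p lies on the
    // interior side of edge a->b for the winding enforced below.
    static float edgeFn(Vec2 a, Vec2 b, Vec2 p)
    {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Fill rule: with y growing downward and a positive-area winding, a pixel
    // centre exactly on an edge belongs to the triangle only if the edge is a
    // "top" edge (horizontal, pointing in +x) or a "left" edge (pointing in -y).
    static bool isTopLeft(Vec2 a, Vec2 b)
    {
        return (a.y == b.y && b.x > a.x) || (b.y < a.y);
    }

    void fillTriangle(Framebuffer& fb, Vec2 v0, Vec2 v1, Vec2 v2, uint32_t color)
    {
        if (edgeFn(v0, v1, v2) < 0.0f) std::swap(v1, v2);   // enforce winding

        int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
        int maxX = std::min(fb.width  - 1, (int)std::ceil (std::max({v0.x, v1.x, v2.x})));
        int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
        int maxY = std::min(fb.height - 1, (int)std::ceil (std::max({v0.y, v1.y, v2.y})));

        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                Vec2 p = { x + 0.5f, y + 0.5f };             // sample at pixel centre
                float w0 = edgeFn(v1, v2, p);
                float w1 = edgeFn(v2, v0, p);
                float w2 = edgeFn(v0, v1, p);
                bool in0 = w0 > 0.0f || (w0 == 0.0f && isTopLeft(v1, v2));
                bool in1 = w1 > 0.0f || (w1 == 0.0f && isTopLeft(v2, v0));
                bool in2 = w2 > 0.0f || (w2 == 0.0f && isTopLeft(v0, v1));
                if (in0 && in1 && in2) fb.set(x, y, color);
            }
        }
    }

Because the fill rule assigns each boundary pixel to exactly one of the triangles sharing an edge, an XOR or transparency pass over adjacent triangles will neither double-draw pixels nor leave gaps, and the vertex coordinates stay floating point throughout.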
OpenGL uses the GPU to perform this job. This is accelerated in hardware and is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details - http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after; there should be a pixel on every line.
Your problem looks a lot like the classic problem of triangles sharing the very same edge. What is done for triangles sharing an edge is that one triangle is allowed to conquer the space while the other has to leave it blank.
When working with a graphics card, one usually gets this behavior by applying a drawing order from left to right while also enabling a z-buffer test, or by testing whether the pixel has already been drawn. So if a pixel with the very same z-value is already set, changing the pixel is not allowed.
In your example with the circles, the edges of neighboring circle segments are not exact. You have to check whether the edges are calculated differently, and why.
Whenever you draw two different shapes and you see something like that, you can either fix your model (so they share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing edges into a sub-buffer that has a higher resolution and down-sampling it. Since this does not affect the whole area, it is more cost-effective in terms of space and time compared to down-sampling the whole scene.

Outline / Silhouette rendering with OpenGL

I know there are several techniques to achieve this, but none of them seems sufficient.
Using a Sobel / Laplace filter doesn't find all the correct edges (and finds unwanted edges), is slow, and doesn't give me control over the outline width.
What I have settled on for now is rendering the back side of my objects first with a solid color and a little bigger than the actual objects. The result does look good, but I really want my outlines to have a constant width.
I already tried rendering the backside of my objects with thick wireframe lines. Gives me a constant outline width, but line width is deprecated, produces rendering artifacts and leaves gaps, if the outline abruptly changes direction (like on a cube for example). I have not yet tried using a third rendering pass drawing a point the size of the wireframe lines for each vertex, because of the other problems with this technique.
Any ideas?
edit I even looked at finding the edges myself using a geometry shader, as described in http://prideout.net/blog/?p=54, but it suffers from the same gaps as the wireframe technique.
edit I was able to get rid of the rendering artifacts with the wireframe technique by disabling GL_DEPTH_TEST while drawing the outlines. Unfortunately I also lost the outlines on overlapping objects...
My goal is to get the same effect they use on characters in the Dragons Lair 3 game. Does anyone know how they did it?
In case you're after real edge detection, I've found that you can get pretty good results by convolving a 5x5 LoG (Laplacian of Gaussian) kernel with the depth buffer and blending the result over the rendered object (possibly with decent FSAA). You need some tuning in the fragment shader in order to clamp the blended outline, but the results are good. (And it's a matter of what you really want, by the way.)
Note that:
1) Laplace filtering and LoG filtering are different things and produce different results.
2) If you apply the convolution to the depth buffer instead of the rendered image, you end up with totally different results. Furthermore, if control over the outline width is desired, a dilate filter followed by a selective-erode pass can be applied. This way you end up with a render that closely matches a hand-drawn sketch made with a marker, and you get fine control over the tip size, at the cost of two extra passes.
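As a CPU-side illustration of that idea (in practice this would run in a fragment shader over a depth texture; the kernel below is a common integer approximation of the LoG, and the row-major buffer layout is an assumption for the example):

    #include <cmath>
    #include <vector>

    // A common 5x5 Laplacian-of-Gaussian approximation (entries sum to zero).
    static const float kLoG[5][5] = {
        {  0,  0, -1,  0,  0 },
        {  0, -1, -2, -1,  0 },
        { -1, -2, 16, -2, -1 },
        {  0, -1, -2, -1,  0 },
        {  0,  0, -1,  0,  0 },
    };

    // Convolve a row-major depth buffer with the LoG kernel; the response is
    // large where depth changes abruptly, i.e. at silhouettes.
    std::vector<float> logEdges(const std::vector<float>& depth, int w, int h)
    {
        std::vector<float> out(depth.size(), 0.0f);
        for (int y = 2; y < h - 2; ++y) {
            for (int x = 2; x < w - 2; ++x) {
                float sum = 0.0f;
                for (int ky = -2; ky <= 2; ++ky)
                    for (int kx = -2; kx <= 2; ++kx)
                        sum += kLoG[ky + 2][kx + 2] * depth[(y + ky) * w + (x + kx)];
                out[y * w + x] = std::fabs(sum);
            }
        }
        return out;
    }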

OpenGL, applying texture from image to isosurface

I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably say immediately that I'm explicitly not talking about volumetric textures. Instead, you stack all your 2D textures into a 3D texture. You then encode each texture coordinate as the 2D position it would normally be, with the texture it references as the third coordinate. It works best if your textures are generally of the type where, logically, to transition from one type of pattern to another you have to go through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
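A minimal sketch of setting up such a texture stack (width, height, layerCount and layerData are hypothetical; the linear filter on the third coordinate is what gives the cheap blend between adjacent layers):

    // Stack `layerCount` same-sized RGBA8 layers into one 3D texture.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
                 width, height, layerCount, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, layerData);   // hypothetical pixel data
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  // blends between layers
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    // The third texture coordinate for layer i is (i + 0.5f) / layerCount.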
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
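That sequence might look roughly like this (drawIsosurface, bindTextureForPass and textureCount stand in for your own geometry submission and per-pass texture setup; the additive blend is just one possible accumulation formula):

    // Pass 1: lay down depth only; reject all colour writes.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawIsosurface();                 // hypothetical

    // Passes 2..n: one per texture, additively blended, depth must match exactly.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);            // depth writes off
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    for (int i = 0; i < textureCount; ++i) {
        bindTextureForPass(i);        // hypothetical
        drawIsosurface();
    }

    // Restore state.
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);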
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
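For the spherical case, here is a sketch of that projection under a common simplification: instead of intersecting a ray from the vertex along its normal with the bounding sphere, the (normalised) direction itself is converted to longitude/latitude texture coordinates (Vec3 is just an assumed float triple):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Spherical mapping: treat the normalised direction (e.g. the vertex normal,
    // or the direction from the object's centre) as a point on a bounding sphere
    // and convert it to (u, v) via longitude and latitude.
    void sphericalUV(Vec3 dir, float& u, float& v)
    {
        float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        if (len > 0.0f) { dir.x /= len; dir.y /= len; dir.z /= len; }
        u = 0.5f + std::atan2(dir.z, dir.x) / (2.0f * 3.14159265f);   // longitude
        v = 0.5f - std::asin(dir.y) / 3.14159265f;                    // latitude
    }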
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry which is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, considerably more complex for higher genus objects where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV Map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (ie a triangle strip can't cross over the seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)