Artifacts (seams) seen in translucent polygon with holes in OpenGL

I have a polygon with a hole, and when it is rendered in a translucent color in OpenGL it shows artifacts along the seams of the triangles produced by the GLUtesselator. This is a bit strange, because the same polygon shows no such artifacts when drawn in an opaque color.
Artifacts seen as a dotted line extending from the inner circle to the outer boundary of the polygon:
More artifacts seen in the interior of the polygon:
It looks like bleeding from alpha blending of the color along the shared edges of adjacent triangles, but I have no idea how to mitigate the problem.
Has anyone seen this problem before? Or can someone point out what the problem might be and a possible solution?

OpenGL guarantees that drawing two triangles that share an edge produces each covered fragment exactly once, so you can render artifact-free translucent polygons.
However, this guarantee holds only if both vertices of that edge are bit-identical between the two triangles. It also won't hold if you anti-alias your polygons with GL_POLYGON_SMOOTH.
It's hard to tell what the case is here without seeing the relevant code, but I would definitely check the coordinates of the vertices of the shared edges to see if they are exactly the same.
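As a rough way to verify that, you could scan the tessellator output for near-duplicate vertices. Below is a minimal sketch, assuming the GLUtesselator callbacks have already been captured into a triangle list; the Vertex and Triangle types, the checkSharedEdges name, and the epsilon are hypothetical placeholders, not part of the original code.
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical containers holding the captured GLUtesselator output.
struct Vertex { double x, y, z; };
struct Triangle { Vertex v[3]; };

// Report pairs of vertices that are almost, but not exactly, equal;
// such near-misses break the shared-edge coverage guarantee.
void checkSharedEdges(const std::vector<Triangle>& tris)
{
    const double eps = 1e-6; // "meant to be the same vertex" threshold (assumption)
    for (size_t a = 0; a < tris.size(); ++a)
        for (size_t b = a + 1; b < tris.size(); ++b)
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j) {
                    const Vertex& p = tris[a].v[i];
                    const Vertex& q = tris[b].v[j];
                    double d = std::fabs(p.x - q.x) + std::fabs(p.y - q.y) + std::fabs(p.z - q.z);
                    if (d > 0.0 && d < eps)
                        std::printf("near-duplicate vertex: (%g, %g, %g) vs (%g, %g, %g)\n",
                                    p.x, p.y, p.z, q.x, q.y, q.z);
                }
}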

Related

Why do I have this strange seam between polygons?

I have two simple cube-shaped primitives that are pushed together. Where the polygons connect, there is a razor-thin seam where the polygon edges match up (see pic, red arrow).
Each face owns its own vertices; they are not shared via indices. I have confirmed with debugging that the vertices at each end of the seam that should be occupying the same position ARE occupying the same position/normal/UV. The winding of the faces that join together is the same. I have even adjusted the code to MANUALLY COPY the positions, normals, and UVs of the vertices in question, just in case there was some floating point error too small to be displayed.
Can anyone explain what's going on here? Is there a way to fix it without literally joining those vertices into a single vertex and indexing it?
I've included a wireframe pic in the screenshot as well. I can tell from the wireframe that the two lines, though overlapping, are a bit off. But with all coordinates at the same value, what is it??

Polygon to triangle conversion affects visibility of the mesh

I am using OpenGL ES to visualize a mesh that has polygons with more than 3 vertices. I wanted to convert these polygons to triangles using the following loop. For each polygon it creates polygonVertexSize-2 triangles by filling an OpenGL index array that refers to the same vertices in different combinations.
for(int j = 0; j < polygonVertexSize - 2; j++) //number of triangles
{
    //GetPolygonVertex returns the index of a polygon vertex
    //fan triangulation: triangle (0, j+1, j+2)
    indices[indp + 0] = Polygon->GetPolygonVertex(0);
    indices[indp + 1] = Polygon->GetPolygonVertex(1 + j);
    indices[indp + 2] = Polygon->GetPolygonVertex(2 + j);
    indp += 3;
}
The problem with this conversion is that unless I call glDisable(GL_CULL_FACE), some parts of the meshes are not visible, which probably means my triangulation makes the surface normals (winding) wrong. Another thing to note is that I average a vertex normal from the normals of that vertex in the different triangles that share it.
How can I solve this problem? Is it a bad idea to disable culling to solve it?
Here are the results with culling and without:
The problem is with back-face culling.
Parts of the mesh are invisible because those triangles are facing away from the camera. glDisable(GL_CULL_FACE) is the simplest way to solve this, but it can cost some performance (back-facing triangles are no longer discarded before rasterization). It shouldn't affect how your scene looks, though.
If you want to do it "right", you have to change the winding of the invisible triangles. Just swap two of the indices:
//only for the invisible (wrongly wound) triangles
indices[indp + 0] = Polygon->GetPolygonVertex(0);
indices[indp + 1] = Polygon->GetPolygonVertex(2 + j);
indices[indp + 2] = Polygon->GetPolygonVertex(1 + j);
Your triangulation is correct only if your polygon is planar and convex. You can check whether a polygon is convex with the gift wrapping algorithm, or simply walk through the vertices and compute the cross products of consecutive edges; if the sign of the cross product changes, the polygon is not convex (a sketch of this test follows below).
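A minimal sketch of that sign-of-cross-product test, assuming the polygon is stored as 2D points in a plane; the Point2 type and isConvex name are hypothetical, not from the code above.
#include <cstddef>
#include <vector>

struct Point2 { double x, y; }; // hypothetical 2D vertex type

// A polygon is convex if the z-component of the cross product of
// consecutive edge vectors never changes sign while walking the boundary.
bool isConvex(const std::vector<Point2>& poly)
{
    const std::size_t n = poly.size();
    if (n < 4) return true; // a triangle is always convex
    double sign = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const Point2& a = poly[i];
        const Point2& b = poly[(i + 1) % n];
        const Point2& c = poly[(i + 2) % n];
        double cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross != 0.0) {
            if (sign != 0.0 && (cross > 0.0) != (sign > 0.0))
                return false; // sign flipped: concave corner
            sign = cross;
        }
    }
    return true;
}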

OpenGL Random White Dots

I am trying to render a geometry on a white background. The problem is that random white dots appear inside the geometry. As I resize my window, the white points switch places, appearing and disappearing randomly inside the geometry (while I am resizing the window).
I have conducted extensive tests and found that the dots only appear on the edges between two triangles. It seems like both triangles fail to render those pixels (as if the pixels aren't covered by either triangle), so the white background shows through. I should note that only a few pixels along those borders are white, not all of them. And it's not some kind of texture filtering issue, since the problem happens even if I render the polygon with a solid color (set directly inside the shader).
Really, it seems like some kind of coverage problem where the OpenGL implementation fails to hit some pixels on the boundary between two adjacent triangles.
I am running this example on a 27'' iMac with an NVIDIA GeForce GTX 675MX. I'm going to test the same application on my MacBook with an Intel integrated graphics card.
Can someone shed some light on this topic?
Thanks @Damon. I solved the issue, and it wasn't that the vertices weren't exactly the same. The real problem was that (by design) some vertices had to sit on the edge shared by two other triangles (a T-junction), and this was causing problems for OpenGL. The solution was to move those vertices slightly down (inside the triangles) and adjust the texture coordinates accordingly.
Many thanks!

OpenGL blending function to eliminate primitive overlap but maintain overall opacity

I have some geometry with a single primitive set that is a tri-strip. Some of the triangles in the primitive overlap, so when I add a material with an alpha value to the geometry, I see the overlap (as expected). I want to get rid of this effect without changing the geometry, though. I tried playing around with different blending modes (glBlendFunc()) and got some interesting results, but nothing that would eliminate opacity effects within the primitives of the tri-strip while preserving the opacity of the entire object. I'm using OpenSceneGraph, but it provides a way to call glBlendFunc() for the geometry in question.
So from the image, assume that the pink roads, purple roads, and yellow roads constitute three separate objects, each created using a single tri-strip (there are multiple strips, but for argument's sake, pretend there are only three differently colored tri-strips here). I basically don't want to see the self-intersections within the same color.
Also, my question is pretty much the same as this one: OpenGL, primitives with opacity without visible overlap. But I should note that when I tried the blending mode in the accepted answer to that question, the strips weren't rendered in the scene at all.
I've had the same issue in a previous project. Here's how I solved it:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA)
and draw the rectangles. The idea behind this is that you draw a rectangle with the desired transparency, which is taken from the framebuffer, but in the process you mask the area you've drawn to, so that your subsequent rectangles will be masked there.
Source: Stack Overflow: Overlapping rectangles
One way to do this is to render each set of paths to a texture and then draw the texture onto the window with alpha. You can do this for each color of path.
This outlines the general idea.
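A rough sketch of that render-to-texture idea in plain OpenGL, assuming an active context with framebuffer object support; width, height, drawRoadStrips() and drawFullScreenQuad() are placeholders for the application's own values and drawing code, not names from the question.
// Render one color group of strips opaquely into an offscreen texture,
// then composite that texture over the scene once, with a single alpha.
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // transparent background
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_BLEND);                  // overlaps collapse into one opaque layer
drawRoadStrips();                     // placeholder: draw one color group of strips

glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Composite the whole layer with the desired opacity.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);    // layer-wide alpha (0.5 as an example)
drawFullScreenQuad();                 // placeholder: textured quad covering the view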

OpenGL lighting question?

Greetings all,
As seen in the image , I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I am wondering how I can make them look good (so that one can see the depth, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this.
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
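For example, classic fixed-function fog gives a distance-based fade with just a few state calls. A minimal sketch, where the fog color and start/end distances are placeholder values to be matched to your scene:
// Depth-cue the contour lines with linear fog.
GLfloat fogColor[4] = { 1.0f, 1.0f, 1.0f, 1.0f }; // placeholder: match the clear color
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_START, 10.0f);   // placeholder: distance where fading begins
glFogf(GL_FOG_END, 100.0f);    // placeholder: distance where lines fade out completely
glHint(GL_FOG_HINT, GL_NICEST);
// ...then draw the GL_LINE_STRIP contours as before.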
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors, and you have no surfaces. However, @roe pointed out that normal vectors are actually per vertex in OpenGL, and as such any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
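A minimal sketch of the first option (2D normal with Z = 0), assuming each contour is a polyline of points lying in an XY plane and that lighting and material state are already set up; the Point3 type and drawLitContour name are hypothetical.
#include <cmath>
#include <cstddef>
#include <vector>
// #include <GL/gl.h>  (or <OpenGL/gl.h> on macOS)

struct Point3 { float x, y, z; }; // hypothetical contour vertex

// Draw one contour as a lit line strip, using the rotated 2D edge direction
// (with Z left at 0) as the per-vertex normal.
void drawLitContour(const std::vector<Point3>& pts)
{
    glBegin(GL_LINE_STRIP);
    for (std::size_t i = 0; i < pts.size(); ++i) {
        std::size_t j = (i + 1 < pts.size()) ? i + 1 : i; // next vertex (clamped)
        std::size_t k = (i > 0) ? i - 1 : i;              // previous vertex (clamped)
        float dx = pts[j].x - pts[k].x;
        float dy = pts[j].y - pts[k].y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len > 0.0f) { dx /= len; dy /= len; }
        glNormal3f(dy, -dx, 0.0f); // 2D normal of the local tangent, Z = 0
        glVertex3f(pts[i].x, pts[i].y, pts[i].z);
    }
    glEnd();
}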
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (are these topographic contours?) or based on distance from the viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them: can you adjust the density of the contours? E.g. one contour line per 5 ft of height difference instead of per 1 ft, or whatever the units are, depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh by setting up four corner points at zero height at (or beyond) the bounds, then dropping the contour points into the mesh and getting the mesh to triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses Delaunay triangulation, which is a bit of a beast, but libraries for doing it do exist. The best ones I know of are those based on the Guibas and Stolfi papers, since that approach is pretty optimal.
To generate the normals you do a simple cross product, make sure the facing is correct, and manually renormalize the result before feeding it into glNormal.
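In code, that per-face normal is just the normalized cross product of two triangle edges; a small sketch, where the Vec3 type and faceNormal name are hypothetical.
#include <cmath>

struct Vec3 { float x, y, z; }; // hypothetical vector type

// Face normal of triangle (a, b, c): cross product of two edges, renormalized.
// Flip it if the winding puts it on the wrong side, then pass it to glNormal3f.
Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n  = { e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; } // renormalize
    return n;
}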
In the old days you used to make a display list out of the result, but the newer way is to use a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraws, but that's a bit of a black art: good for games, not so good for CAD.
(thx for bonus last time)