OpenGL Random White Dots

I am trying to render a geometry on a white background. The problem is that random white dots appear inside the geometry. As I resize my window, the white points switch places, appearing and disappearing randomly inside the geometry (while I am resizing the window).
I have conducted extensive tests and have found that the dots appear only along the edges between two triangles. It seems like both triangles fail to render those pixels (as if the pixels aren't contained by either triangle), so the white background shows through. I should note that only a few pixels along those borders are white (not all of them). And it's not some kind of texture-filtering issue, since the problem happens even if I render the polygon with a solid color that I set directly inside the shader.
Really, it seems like some kind of hit-test problem, where the OpenGL implementation fails to assign certain pixels on the boundary between two adjacent triangles to either triangle.
I am running this example on a 27'' iMac with an NVIDIA GeForce GTX 675MX. I'm going to test the same application on my MacBook with an integrated Intel graphics card.
Can someone shed some light on this topic?

Thanks @Damon. I solved the issue, and it wasn't that the vertices weren't exactly the same. The real problem is that (by design) some vertices needed to sit on the edge between two triangles, creating T-junctions, and this was causing problems with OpenGL. The solution was to move each such vertex slightly inward (inside the triangles) and adjust the texture coordinates accordingly.
Many thanks!
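The fix described above can be sketched as a tiny nudge along the edge's inward normal, with the texture coordinate shifted by the same amount scaled into texture space. This is a minimal illustration, not the asker's actual code; all names and the `texel_per_unit` scale are assumptions.

```c
#include <assert.h>

typedef struct { float x, y; } Vec2;

/* Move vertex `v` off a neighbouring triangle's edge by `eps` along
 * the edge's unit inward normal `n`, and shift its texture
 * coordinate `t` by the same offset converted into texture space.
 * (Hypothetical helper, sketching the T-junction fix above.) */
static void nudge_off_edge(Vec2 *v, Vec2 *t, Vec2 n,
                           float eps, float texel_per_unit)
{
    v->x += n.x * eps;
    v->y += n.y * eps;
    t->x += n.x * eps * texel_per_unit;
    t->y += n.y * eps * texel_per_unit;
}
```

The nudge only needs to be large enough that the vertex no longer lies exactly on the shared edge; a fraction of a pixel in world units is typically plenty.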

Related

artifacts (seams) seen in translucent polygon with holes in openGL

I have a polygon with a hole. When it is rendered in a translucent color in OpenGL, it shows artifacts along the seams of the tessellated triangles (produced by the GLUtessellator). This is a bit strange, because the same polygon has no such artifacts when drawn in an opaque color.
Artifacts seen as a dotted line extending from the inner circle to the outer boundary of the polygon:
More artifacts seen in the interior of the polygon:
It looks like color bleeding from alpha blending along the edges of adjacent triangles, but I have no idea how to mitigate the problem.
Has anyone seen this problem before? Or can someone point out what the cause might be and a possible solution?
OpenGL guarantees that drawing two triangles that share an edge produces each covered fragment exactly once, so you can render artifact-free translucent polygons.
However, this guarantee holds only if both vertices of that edge are identical between the two triangles. It also won't hold if you anti-alias your polygons with GL_POLYGON_SMOOTH.
It's hard to tell what the cause is here without seeing the relevant code, but I would definitely check the coordinates of the vertices along the shared edges to see if they are identical.
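A quick way to perform that check is to compare the shared-edge endpoints bitwise, which catches coordinates that were recomputed and differ only in the last bit. This is an illustrative sketch with made-up names, not code from the question; it also allows for the edge being wound in the opposite order in the second triangle.

```c
#include <assert.h>
#include <string.h>

typedef struct { float x, y, z; } Vertex;

/* Return 1 if the edge (a0,a1) of one triangle is bitwise identical
 * to the edge (b0,b1) of its neighbour, in either endpoint order.
 * memcmp compares the raw float bits, so even a one-ulp difference
 * (which breaks OpenGL's shared-edge guarantee) is detected. */
static int edges_match(const Vertex *a0, const Vertex *a1,
                       const Vertex *b0, const Vertex *b1)
{
    int same    = memcmp(a0, b0, sizeof *a0) == 0 &&
                  memcmp(a1, b1, sizeof *a1) == 0;
    int swapped = memcmp(a0, b1, sizeof *a0) == 0 &&
                  memcmp(a1, b0, sizeof *a1) == 0;
    return same || swapped;
}
```

Running a check like this over every pair of adjacent tessellated triangles will quickly confirm or rule out mismatched shared vertices as the cause.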

OpenGL blending function to eliminate primitive overlap but maintain overall opacity

I have some geometry with a single primitive set that's a tri-strip. Some of the triangles in the primitive overlap, so when I add a material with an alpha value to the geometry, I see the overlap (as expected). I want to get rid of this effect without changing the geometry, though. I tried playing around with different blending modes (glBlendFunc()), but I couldn't get this to work; I got some interesting results, but nothing that would eliminate opacity effects within the primitives of the tri-strip while preserving opacity for the entire object. I'm using OpenSceneGraph, but it provides a method to call glBlendFunc() for the geometry in question.
So from the image, assume that the pink roads, purple roads, and yellow roads constitute three separate objects, each created using a single tri-strip (there are multiple strips, but for argument's sake, pretend there were only three differently colored tri-strips here). I basically don't want to see the self-intersections within the same color.
Also, my question is pretty much the same as this one: OpenGL, primitives with opacity without visible overlap. But I should note that when I tried the blending mode from the accepted answer to that question, the strips weren't rendered in the scene at all.
I've had the same issue in a previous project. Here's how I solved it:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA)
and draw the rectangles. The idea behind this is that you draw a rectangle with the desired transparency, which is taken from the framebuffer, but in the process you mask the area you've drawn to, so that your subsequent rectangles will be masked there.
Source: Stack Overflow: Overlapping rectangles
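To see why this blend function suppresses self-overlap, it helps to evaluate the blend equation (out = src * (1 - dst.alpha) + dst * dst.alpha) in plain C. This is a software model of the fixed-function blend stage, not actual GL code: once the first draw has written its alpha to the framebuffer, a second overlapping draw of the same color is weighted so that the pixel comes out unchanged.

```c
#include <assert.h>

typedef struct { float r, g, b, a; } Pixel;

/* Model of one blend with glBlendFunc(GL_ONE_MINUS_DST_ALPHA,
 * GL_DST_ALPHA): the source is scaled by (1 - dst.a) and the
 * existing framebuffer color by dst.a, for every channel. */
static Pixel blend(Pixel src, Pixel dst)
{
    Pixel out;
    float f = 1.0f - dst.a;             /* GL_ONE_MINUS_DST_ALPHA */
    out.r = src.r * f + dst.r * dst.a;  /* GL_DST_ALPHA */
    out.g = src.g * f + dst.g * dst.a;
    out.b = src.b * f + dst.b * dst.a;
    out.a = src.a * f + dst.a * dst.a;
    return out;
}
```

With the framebuffer cleared to alpha 0, the first covering triangle writes its color at full strength and stores its alpha; any later triangle of the same strip blending the same color over that pixel reproduces the same result, so the overlap is invisible. Note this requires a framebuffer with an alpha channel and an alpha-zero clear color.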
One way to do this is to render each set of paths to a texture and then draw the texture onto the window with alpha. You can do this for each color of path.
This outlines the general idea.

OpenGL rendering and rasterization of slivers

I am testing some rendering with OpenGL and I noticed that I have issues with long, thin polygons that form a plane. When two of these long polygons sit directly next to each other, touching along the long side, some of the pixels at the shared edge are invisible, and these invisible pixels move around when I move the camera.
What I found is that the pixels at the edge of these "sliver" polygons are invisible because rasterization decides they are not inside either polygon at that specific view angle.
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
If you found my description of the problem a bit weird, see http://www.ugrad.cs.ubc.ca/~cs314/Vjan2008/slides/week5.day3-4x4.pdf, page 27 and following. That's what I mean.
EDIT: OK, I think I have to make clear what my problem is, because I have a feeling that I can't address it with anti-aliasing techniques:
aaa|b|cc
aaa|b|cc
aaa|b|cc
^ ^
1 2
- the polygons a, b and c form a plane
- some pixels at edge 1 and 2 are invisible at certain camera angles
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
In general, you don't. If OpenGL decides that a part of a triangle is too thin to be rendered at a given resolution, then it's too thin to be rendered. The general form of this issue is called "aliasing".
The solution is to use an anti-aliasing technique, for example multisampling: when you create the context, select a number of samples to use.
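With GLFW, for instance, requesting a multisampled default framebuffer is a one-line hint set before window creation. This is a configuration sketch under the assumption that GLFW is the windowing library (the question doesn't say which one is used); other APIs have equivalent settings.

```c
/* Sketch: request a 4x multisampled default framebuffer with GLFW.
 * Not runnable headless -- requires a display and a GLFW install. */
#include <GLFW/glfw3.h>

GLFWwindow *create_msaa_window(void)
{
    glfwWindowHint(GLFW_SAMPLES, 4);   /* ask for 4 samples per pixel */
    GLFWwindow *win = glfwCreateWindow(800, 600, "msaa", NULL, NULL);
    /* In a compatibility context you may also need
     * glEnable(GL_MULTISAMPLE); core profiles multisample the
     * default framebuffer automatically when samples were granted. */
    return win;
}
```

Note that multisampling softens the missing edge pixels rather than guaranteeing they are covered; for slivers caused by T-junctions, fixing the shared vertices (as in the first question above) is the more robust cure.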

2D OpenGL scene slows down with lots of overlapping shapes

I'm drawing 2D shapes with OpenGL. They don't use that many polygons. I notice that I can have lots and lots of shapes as long as they don't overlap, but if I get a shape behind a shape behind a shape, etc., it really starts lagging. I feel like I might be doing something wrong. Is this normal, and is there a way to fix it? (I can't skip rendering hidden shapes, because I do blending for alpha.) I also have CW backface culling enabled.
Thanks
Are your two cases (overlapping and non-overlapping) using the exact same set of shapes? Because if the overlapping case involves a total area of all your shapes that is larger than the first case, then it would be expected to be slower. If it's the same set of shapes that slows down if some of them overlap, then that would be very unusual and shouldn't happen on any standard hardware OpenGL implementation (what platform are you using?). Backface culling won't be causing any problem.
Whenever a shape is drawn, the GPU has to do some work for each pixel that it covers on the screen. If you draw the same shape 100 times in the same place, then that's 100-times the pixel work. Depth buffering can reduce some of the extra cost for opaque objects if you draw objects in depth-sorted order, but that trick can't work for things that use transparency.
When using transparency, it's the sum of the area of each rendered shape that matters. Not the amount of the screen that's covered after everything is rendered.
You need to order your shapes front-to-back if they are opaque. Then the depth test can quickly and easily reject each pixel.
Then, you need to order them back-to-front if they are transparent. Rendering transparency out-of-order is very slow.
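The two sort orders above are just opposite comparisons on depth. A minimal sketch (the `Shape` struct and field names are hypothetical, standing in for whatever per-shape data the renderer keeps):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { float depth; int opaque; } Shape;

/* Opaque pass: nearest (smallest depth) first, so the depth test
 * rejects hidden fragments of everything drawn afterwards. */
static int front_to_back(const void *a, const void *b)
{
    float da = ((const Shape *)a)->depth;
    float db = ((const Shape *)b)->depth;
    return (da > db) - (da < db);
}

/* Transparent pass: farthest first, so blending composites the
 * layers in the correct order. */
static int back_to_front(const void *a, const void *b)
{
    return -front_to_back(a, b);
}
```

In practice you'd partition shapes into an opaque list sorted with `front_to_back` and a transparent list sorted with `back_to_front`, and draw the opaque list first.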
Edit: Hmm, I (somehow) missed the fact that this is 2D, despite the fact that the OP mentioned it repeatedly.

OpenGL texturing via vertex alphas, how to avoid following diagonal lines?

http://img136.imageshack.us/img136/3508/texturefailz.png
This is my current program. I know it's terribly ugly; I found two random textures online ('lava' and 'paper') which don't even seem to tile. That's not the problem at the moment.
I'm trying to figure out the first steps of an RPG. This is a top-down screenshot of a 10x10 heightmap (currently set to all 0s, so it's just a plane), and I texture it by making one pass per texture per quad, and each vertex has alpha values for each texture so that they blend with OpenGL.
The problem is that the textures trend along diagonals. Even though I'm drawing with GL_QUADS, this is presumably because the quads are split into pairs of triangles, and the alpha values at the corners then have more weight along the hypotenuses. But I wasn't expecting that to matter at all: by drawing quads, I was hoping that even though they were split into triangles at some low level, the vertex alphas would make each texture radiate in a circular gradient outward from its vertices.
How can I fix this to make it look better? Do I need to scrap this and try a whole different approach? IS there a different approach for something like this? I'd love to hear alternatives as well.
Feel free to ask questions and I'll be here refreshing until I get a valid answer, so I'll comment as fast as I can.
Thanks!!
EDIT:
Here is the kind of thing I'd like to achieve. No, I'm obviously not one of the billions of noobs out there "trying to make an MMORPG"; I'm using it as an example because it's very much like what I want:
http://img300.imageshack.us/img300/5725/runescapehowdotheytile.png
How do you think this is done? Part of it must be vertex alphas like I'm using, because of the smooth gradients... But maybe they have a list of different triangle configurations within a tile, and each tile stores which configuration it uses? For example, configuration 1 is a triangle in the top-left and one in the bottom-right, 2 is the top-right and bottom-left, 3 is a quad on the top and a quad on the bottom, etc. Can you think of any other way I'm missing? Or if you've got it all figured out, please share how they do it!
The diagonal artefacts are caused by having all of your quads split into triangles along the same diagonal. You define points [0,1,2,3] for your quad. Each quad is split into triangles [0,1,2] and [1,2,3]. Try drawing with GL_TRIANGLES and alternating your choice of diagonal. There are probably more efficient ways of doing this using GL_TRIANGLE_STRIP or GL_QUAD_STRIP.
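Alternating the diagonal per cell can be done with a checkerboard parity test when generating the index buffer. A minimal sketch (my own corner numbering, not from the answer): for a quad with corners

    0---1
    |   |
    2---3

each cell emits two triangles, flipping which diagonal they share based on the cell's grid position.

```c
#include <assert.h>

/* Write 6 triangle indices for grid cell (cx, cy), whose four
 * corner vertex indices are c[0..3] in the layout shown above.
 * Odd-parity cells split along the 1-2 diagonal, even-parity cells
 * along the 0-3 diagonal, so no two neighbouring cells place all
 * their diagonals the same way. */
static void quad_indices(int cx, int cy, const int c[4], int out[6])
{
    if ((cx + cy) & 1) {   /* diagonal from corner 1 to corner 2 */
        out[0] = c[0]; out[1] = c[1]; out[2] = c[2];
        out[3] = c[1]; out[4] = c[3]; out[5] = c[2];
    } else {               /* diagonal from corner 0 to corner 3 */
        out[0] = c[0]; out[1] = c[1]; out[2] = c[3];
        out[3] = c[0]; out[4] = c[3]; out[5] = c[2];
    }
}
```

Feeding the resulting indices to glDrawElements with GL_TRIANGLES replaces the uniform diagonal of the quad decomposition and breaks up the diagonal banding.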
I think you are doing it right, but you should increase the resolution of your heightmap a lot to get finer tessellation!
For example, look at this heightmap renderer:
mdterrain
It shows the same artifacts at low resolution but gets better as you increase the iterations.
I've never done this myself, but I've read several guides (which I can't find right now), and it seems pretty straightforward; it can even be optimized using shaders.
Create a master texture to control the mixing of four sub-textures. Use the r, g, b, a components of the master texture as the percentage mix of each sub-texture (lava, paper, etc.). You can easily paint a master texture using Paint.NET, Photoshop, or GIMP by painting into each color channel. You can compute the resulting texture beforehand using all five textures, OR you can calculate the result on the fly with a fragment shader. I don't have a good example of either, but I think you can figure it out given how far you've come.
The end result will be "pixel-perfect" blending (depending on the textures' resolution and filtering) and will avoid the vertex-blending issues.
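The per-pixel mixing this answer describes reduces to a weighted sum of the four sub-texture samples, driven by the master texel's channels. A plain-C model of what the fragment shader (or the offline precomputation) would do for one channel of one pixel; names are illustrative, and the weights are assumed to sum to 1:

```c
#include <assert.h>

typedef struct { float r, g, b, a; } RGBA;

/* Blend one channel of four sub-texture samples (t0..t3, e.g. lava,
 * paper, ...) by the master texture's r, g, b, a weights at the
 * same texel. Assumes mask.r + mask.g + mask.b + mask.a == 1. */
static float mix4(RGBA mask, float t0, float t1, float t2, float t3)
{
    return mask.r * t0 + mask.g * t1 + mask.b * t2 + mask.a * t3;
}
```

Because the weights come from a texture sampled per pixel rather than from vertex alphas interpolated across triangles, the blend no longer follows the triangle hypotenuses, which is exactly what avoids the diagonal artifacts in the question.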