OpenGL ES: draw wireframe over GL_TRIANGLES correctly

I need to draw a wireframe around a cube. I have everything in place, but I have a problem with the alpha testing: whatever I do, the GL_LINES either overlap the GL_TRIANGLES when they shouldn't (they are behind them), or the GL_TRIANGLES overlap the GL_LINES (when the lines should be visible).
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
SquareMap.get().shader.getShader().begin();
SquareMap.get().shader.getShader().setUniformMatrix(u, camera.combined);
LineRenderer3D.get().render(SquareMap.get().shader, worldrenderer.getCamera());
TriangleRenderer3D.get().render(SquareMap.get().shader, worldrenderer.getCamera());
SquareMap.get().shader.getShader().end();
Also the wireframe is a little bigger than the cube.
TriangleRenderer3D.get().render and LineRenderer3D.get().render just load the vertices and call glDrawArrays.
With the depth mask enabled, the cube's GL_TRIANGLES overlap the lines.
Do I need to enable something that I'm missing here?

It is worth mentioning that line primitives have different pixel coverage rules than triangles. A line must cross through a diamond-shaped pattern in the center of a pixel to be visible, whereas a triangle needs to cover the top-left corner. This documentation is for Direct3D, but it does an infinitely better job of describing these rules (which are the same in GL) than any OpenGL documentation I have come across.
As for fixing this problem, the most common approach is to apply a small offset to all vertex positions so that their centers line up better with pixel centers. This is typically done by translating X and Y by 0.375 units.
Another Microsoft document explains this as well.
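As a rough sketch of how that offset is usually applied in classic fixed-function GL (with a shader pipeline, as in the question's libGDX code, the same offset would instead be baked into the projection matrix or the vertex data); drawWireframe() is a hypothetical placeholder:
/* Minimal sketch, assuming a 1:1 orthographic projection
   (one unit == one pixel), of the classic 0.375 sub-pixel offset. */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(0.375f, 0.375f, 0.0f); /* nudge X and Y by 0.375 units */
drawWireframe();                    /* hypothetical line-drawing call */
glPopMatrix();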
While some of the issues described in the first paragraph may be related to primitive coverage, the one described in the last paragraph is not.
The issue described in the final paragraphs can be addressed this way:
//
// Write wires wherever the line's depth is less than or equal to the triangles.
//
glDepthFunc (GL_LEQUAL);
TriangleRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
LineRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
By rendering the triangles first and then drawing the lines only where they are either in front of the triangles or at the same depth (the default GL_LESS depth test discards that equal-depth scenario), you should get the behavior you want. Leave depth writes enabled.
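Putting the whole pass together, a minimal sketch in plain C-style OpenGL might look like the following. The draw calls are placeholders for your renderers, and the glPolygonOffset lines are an optional, commonly used addition that is not part of the answer above; they push the filled triangles slightly back in depth to reduce z-fighting with coincident lines:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);      /* let lines at equal depth win */
glDepthMask(GL_TRUE);        /* leave depth writes enabled */

/* Optional: push the fill back a little so coincident wire
   fragments reliably pass the depth test. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
drawTriangles();             /* placeholder for TriangleRenderer3D */
glDisable(GL_POLYGON_OFFSET_FILL);

drawLines();                 /* placeholder for LineRenderer3D */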

Related

How does the rasterizer create fragments?

Does the rasterizer (in OpenGL) create one fragment for each pixel a triangle is mapped to? So if we have 4 triangles, each triangle covers the whole screen (each at a different z value), and my resolution is 1080*720, are 1080*720*4 fragments created?
I'm confused about these concepts because I haven't seen them explained clearly anywhere. And will the fragment shader then shade all of these fragments, or are they discarded based on the depth function settings before shading?
I'm assuming there is no multisampling.
That's pretty much the crux of it. The only complication in this case may be thrown up by depth testing, which may discard fragments if the Z test fails (with early-Z hardware this can even happen before the fragment shader runs). So assuming each triangle is rendered in front of the preceding triangle, then yes, you'll have 1080 * 720 * 4 = 3,110,400 fragments.
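If you want to verify how many fragments actually survive the depth test on your hardware, an occlusion query counts the samples that pass. A minimal sketch, assuming an OpenGL 1.5+ context; drawScene() is a hypothetical call that draws the four full-screen triangles:
GLuint query;
GLuint samplesPassed = 0;
glGenQueries(1, &query);
glBeginQuery(GL_SAMPLES_PASSED, query);
drawScene();   /* hypothetical: draw the four full-screen triangles */
glEndQuery(GL_SAMPLES_PASSED);
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
/* With depth testing off, or with back-to-front ordering, this should
   report roughly 1080 * 720 * 4 = 3,110,400 samples. */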

OpenGL: drawing line using degenerate triangle

In my engine, I want to avoid having separate line and triangle types. I want to draw lines using a triangle where 2 verts are identical. But in OpenGL this triangle won't be displayed, because it has zero area and therefore can't cover a pixel.
Internally, at the driver level, an OpenGL line is drawn using a degenerate triangle, and a different rasterization rule is used where it draws at least one pixel per scanline.
D3D had an option where you could set the rasterizer to always draw the first pixel per scanline, effectively accomplishing what I want in D3D.
But how can I do this with OpenGL? I don't see any command that would allow you to change the rasterization rules.
Well, I did this exact thing: of the three vertices necessary, I used the first two as the start point of the line, and then used glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) at the start of the line-rendering object and glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) at the end.
Combine that with the appropriate enabling and disabling of face culling, and you've got yourself a perfectly good line renderer that still uses the triangle setup.
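A rough sketch of that setup (the function name is illustrative, not from the answer):
/* Draw a line as a triangle with two coincident vertices, letting
   GL_LINE polygon mode rasterise the edges instead of the fill. */
void drawLineAsTriangle(float x0, float y0, float x1, float y1) {
    glDisable(GL_CULL_FACE);                   /* degenerate winding is unreliable */
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glBegin(GL_TRIANGLES);
    glVertex2f(x0, y0);  /* start point, duplicated */
    glVertex2f(x0, y0);
    glVertex2f(x1, y1);  /* end point */
    glEnd();
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glEnable(GL_CULL_FACE);
}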

Transparency in OpenGL (using FLTK)

I'm drawing some 3D structures in a Fl_Gl_Window in FLTK's implementation of OpenGL. These images are drawn and rotated, so the code looks something like:
glTranslatef(-xshift, -yshift, -zshift);
glRotatef(angle, ax, ay, az); /* glRotatef takes an angle plus an axis vector */
glTranslatef(xshift, yshift, zshift);
glColor4f((120.0/256.0), (120.0/256.0), (120.0/256.0), 0.2);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
for (int side = 0; side < num_sides; side++) {
    glBegin(GL_TRIANGLES);
    // draw shape
    glEnd();
}
glDisable(GL_BLEND);
and it almost works, apart from the fact that at different angles the transparency doesn't work properly. For example, if I draw a cube, from one side it will look transparent all the way through, without the two sides being discernible, but from the other side one face will appear darker, as it is supposed to. It's as if it calculates the transparency too 'early', as in before the rotation. Am I doing something wrong? Should I move the rotation below the transparency effects (i.e. before them in execution), or does the order of the triangles matter?
The order of the triangles matters. To get the desired effect for transparency you need to render the triangles in back-to-front order, because hardware blending works by reading the color already in the framebuffer and blending it with the fragment currently being shaded. That's why you are getting different results when you rotate your cube: you are not changing the order of the triangles in the cube. You may also want to look into order-independent transparency techniques.
Depending on how many triangles you have, sorting them every frame can get really expensive. One approximation technique is to presort the triangles along the x, y, and z axes and then choose the sorted order that most closely matches your viewing direction; this only works to a certain extent. One popular order-independent transparency technique is depth peeling. Here's a tutorial with some code for implementing it: http://mmmovania.blogspot.com/2010/11/order-independent-transparency.html?m=1. You might also want to read the original paper to get a better understanding of the technique: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.9286&rep=rep1&type=pdf.
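A minimal sketch of the per-frame sort in plain C, assuming triangles stored with a precomputed view-space depth (the struct and field names here are illustrative):
#include <stdlib.h>

typedef struct {
    float depth;        /* distance of the triangle's centroid from the eye */
    /* ... vertex data ... */
} Triangle;

/* Descending comparison: the farthest triangle sorts first,
   so the draw order is back to front. */
static int compareDepth(const void *a, const void *b) {
    float da = ((const Triangle *)a)->depth;
    float db = ((const Triangle *)b)->depth;
    return (da < db) - (da > db);
}

void sortBackToFront(Triangle *tris, size_t count) {
    qsort(tris, count, sizeof(Triangle), compareDepth);
}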

How to render perfect wireframed rectangle in 2D mode with OpenGL?

Edit: just so you know: I have not solved this problem perfectly yet. Currently I am using a 0.5px offset; it seems to work, but as others have said, it is not the "proper" solution. So I am looking for the real deal; the diamond exit rule solution didn't work at all.
I believe it is perhaps a bug in the graphics card, but if so, then any professional programmer should have their bullet-proof solution for this, right?
Edit: I have now bought a new NVIDIA card (I had an ATI card before), and I still experience this problem. I also see the same bug in many, many games. So I guess it is impossible to fix in a clean way?
Here is an image of the bug:
How do you overcome this problem? Preferably with a non-shader solution, if possible. I tried to set an offset for the first line when I drew 4 individual lines myself instead of using wireframe mode, but that didn't work out very well: if the rectangle size changed, it sometimes looked like a perfect rectangle, but sometimes even worse than before my fix.
This is how I render the quad:
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_QUADS);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glEnd();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
Yes, I know I can use vertex arrays or VBOs, but that isn't the point here.
I also tried GL_LINE_LOOP, but it didn't fix the bug.
Edit: One solution that works so far is from "Opengl pixel perfect 2D drawing" by Lie Ryan:
Note that OpenGL coordinate space has no notion of integers; everything is a float, and the "centre" of an OpenGL pixel is really at 0.5,0.5 instead of its top-left corner. Therefore, if you want a 1px-wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This will be especially apparent if you turn on anti-aliasing: if you have anti-aliasing and you try to draw from 50,0 to 50,100, you may see a blurry 2px-wide line, because the line fell in between two pixels.
Although you've discovered that shifting your points by 0.5 makes the problem go away it's not for the reason that you think.
The answer does indeed lie in the diamond exit rule which is also at the heart of the correctly accepted answer to Opengl pixel perfect 2D drawing.
The diagram below shows four fragments/pixels with a diamond inscribed within each. The four coloured spots represent possible starting points for your quad/line loop i.e. the window co-ordinates of the first vertex.
You didn't say which way you were drawing the quad but it doesn't matter. I'll assume, for argument's sake, that you are drawing it clockwise. The issue is whether the top left of the four fragments shown will be produced by rasterising either your first or last line (it cannot be both).
If you start on the yellow vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the green vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. However the last line will pass through it vertically and exit it as it ascends back to the green vertex. The fragment will therefore be produced as a result of the last line's rasterisation.
If you start on the blue vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the red vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. The last line will also not pass through the diamond and therefore not exit it as it ascends back to the red vertex. The fragment will therefore not be produced as a result of either line's rasterisation.
Note that any vertex that is inside the diamond will automatically cause the fragment to be produced as the first line must exit the diamond (provided your quad is actually big enough to leave the diamond of course).
This is not a bug; this is exactly following the specification. The last pixel of a line is not drawn, to prevent overdraw with following line segments, which would cause problems with blending. Solution: send the last vertex twice.
Code Update
// don't use glPolygonMode, it doesn't
// do what you think it does
glBegin(GL_LINE_STRIP);
glVertex2fv(a); /* a..d are 2-element float arrays; glVertex2f would need two scalars */
glVertex2fv(b);
glVertex2fv(c);
glVertex2fv(d);
glVertex2fv(a);
glVertex2fv(a); // resend the last vertex a second time, to close the loop
glEnd();
BTW: you should learn how to use vertex arrays. Immediate mode (glBegin, glEnd, glVertex calls) has been removed from the OpenGL 3.x core profile onward.
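For reference, a sketch of the same closed loop drawn from a client-side vertex array (ax..dy stand in for the actual corner coordinates, as a..d did above):
GLfloat pts[] = { ax, ay,  bx, by,  cx, cy,  dx, dy,
                  ax, ay,  ax, ay };   /* last vertex sent twice */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, pts);
glDrawArrays(GL_LINE_STRIP, 0, 6);
glDisableClientState(GL_VERTEX_ARRAY);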
@Troubadour described the problem perfectly. It's not a driver bug; GL is acting exactly as specified. It's designed for sub-pixel-accurate representation of world-space objects in device space. That's what it's doing. The solutions are: 1) anti-alias, so that device space affords more fidelity, and 2) arrange for a world coordinate system where all transformed vertices fall in the middle of a device pixel. The latter is the "general" solution you are looking for.
In all cases you can achieve 2) by moving the input points around. Just shift each point enough to take its transform to the middle of a device pixel.
For some (unaltered) point sets, you can do it by slightly modifying the view transformation. The 1/2-pixel shift is an example. It works, e.g., if world space is an integer-scaled transform of device space followed by a translation by integers, where world coordinates are also integers. Under many other conditions, though, +1/2 won't work.
Edit: As I said, a uniform shift (1/2 or any other amount) can be built into the view transform. There is no reason to fiddle with vertex coordinates; just prepend a translation, e.g. glTranslatef(0.5f, 0.5f, 0.0f);
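As a sketch, a pixel-exact 2D setup that bakes the half-pixel shift into the view transform might look like this, assuming a window of width w and height h with one unit per pixel:
/* One unit == one pixel, origin at the top-left. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, w, h, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* Prepend the uniform shift once, instead of editing every vertex. */
glTranslatef(0.5f, 0.5f, 0.0f);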
Try changing 0.5 to the odd magic number that is used everywhere: 0.375.
It was used by OpenGL, X11, etc., because of the diamond rule mentioned above and because of how graphics cards draw, to avoid unnecessary overdraw of pixels. There are lots of references on this; just search for the keywords "opengl 0.375 diamond rule" if you need more information. It is about how outlines and fills are treated algorithmically in OpenGL. It is also needed for pixel-perfect rendering of textures, for example in 2D sprites.
To add something: implementing the diamond rule in code is simply a one-liner; change 0.5 to 0.375, like this, and it should render properly:
glTranslatef(0.375, 0.375, 0.0);

OpenGL blending function to eliminate primitive overlap but maintain overall opacity

I have some geometry with a single primitive set that's a tri-strip. Some of the triangles in the primitive overlap, so when I add a material with an alpha value to the geometry, I can see the overlap (as expected). I want to get rid of this effect without changing the geometry, though. I tried playing around with different blending modes (glBlendFunc()), but I couldn't get this to work: I got some interesting results, but nothing that would eliminate opacity effects within the primitives of the tri-strip while preserving opacity for the entire object. I'm using OpenSceneGraph, but it provides a method to call glBlendFunc() for the geometry in question.
So, from the image, assume that pink roads, purple roads, and yellow roads constitute three separate objects, each created using a single tri-strip (there are multiple strips, but for argument's sake, pretend there were only three differently colored tri-strips here). I basically don't want to see the self-intersections within the same color.
Also, my question is pretty much the same as this one: OpenGL, primitives with opacity without visible overlap. But I should note that when I tried the blending mode in the accepted answer to that question, the strips weren't rendered in the scene at all.
I've had the same issue in a previous project. Here's how I solved it:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA)
and draw the rectangles. The idea behind this is that you draw a rectangle with the desired transparency, which is taken from the framebuffer, but in the process mask the area you've drawn to, so that your subsequent rectangles will be masked there.
Source: Stack Overflow: Overlapping rectangles
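A sketch of that approach applied to one of the road strips (hedged: drawTriStrip() is a placeholder, and the framebuffer must actually have an alpha channel for destination alpha to work):
/* Destination alpha starts at 0. Each fragment is weighted by
   1 - dstAlpha, so pixels already covered by this object are
   masked against further blending from the same strip. */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
drawTriStrip();   /* placeholder: one coloured road object */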
One way to do this is to render each set of paths to a texture and then draw the texture onto the window with alpha. You can do this for each color of path.
This outlines the general idea.
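A skeleton of that render-to-texture idea, using GL 3.0-style framebuffer object calls (sizes and the two draw calls are placeholders, not from the answer):
/* 1. Render one colour's paths opaquely into an offscreen texture. */
int width = 1024, height = 768;    /* placeholder size */
GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawPathsOpaque();                 /* placeholder: no blending here */

/* 2. Composite the texture onto the window with the desired alpha,
   so overlaps within the strip no longer show. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuadWithTexture(tex, 0.2f);  /* placeholder */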