What are acceptable glPolygonMode first-argument values? - opengl

The documentation for glPolygonMode only specifies the enum GL_FRONT_AND_BACK as an acceptable first parameter (face). Are there other acceptable enums, such as only the front, or only the back?
glPolygonMode(GLenum face, GLenum mode);
I know that mode only supports GL_POINT, GL_LINE, and GL_FILL, but it seems extremely strange that GL_FRONT_AND_BACK is the only value the documentation specifies for face, even though you are still required to pass face as an argument.

glPolygonMode accepted additional values for face in legacy OpenGL contexts. If you look at the Khronos man page for it in OpenGL 2.1, it says:
face Specifies the polygons that mode applies to. Must be GL_FRONT for
front-facing polygons, GL_BACK for back-facing polygons, or
GL_FRONT_AND_BACK for front- and back-facing polygons.
By contrast, the OpenGL 4 man page says:
face Specifies the polygons that mode applies to. Must be
GL_FRONT_AND_BACK for front- and back-facing polygons.
In the OpenGL 3.3 spec, in the section 'E2. Deprecated and Removed Features', it lists:
Separate polygon draw mode - PolygonMode face values of FRONT and
BACK; polygons are always drawn in the same mode, no matter which face
is being rasterized.
Most likely, the face parameter was retained so that the same calls compile against different OpenGL context targets; in modern OpenGL it can only ever take one value, so it is now effectively redundant.
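For illustration, here is a minimal sketch of typical glPolygonMode usage in a core-profile context; the wireframeEnabled flag is a hypothetical application toggle, not something from the question:
/* Minimal sketch: toggling wireframe rendering in a core-profile context.
   'wireframeEnabled' is a hypothetical application flag. */
if (wireframeEnabled)
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);  /* rasterize edges only    */
else
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);  /* default filled polygons */
/* In a legacy (compatibility) context, GL_FRONT or GL_BACK alone were also
   accepted, e.g. glPolygonMode(GL_BACK, GL_LINE). */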


Is it possible to depth test against a depth texture I am also sampling, in the same draw call?

Context:
I am using a deferred rendering setup where, in the first stage, I have two FBOs: one is the GBuffer, which stores the normals, albedo, and material information for all visible fragments. This FBO has a 32-bit depth texture, and it gets drawn into in a geometry pass, before any lighting is calculated.
The second FBO is color-only, and starts off black, but accumulates lighting over several passes, from lighting shaders that sample from the GBuffer and write to the color-only buffer using additive blending.
The problem is, I would really like to utilize early depth testing so that my lighting is calculated ONLY for fragments that contain actual geometry (not just sky). The best way I can think of to do this is to use depth testing to fail any pixels that have a depth of one in the case of sunlight, or that lie behind the sphere of influence for point lights. However, I don't think I can bind this depth texture to my color FBO, since I also sample from it inside the lighting shader to calculate each fragment's position in world space.
So my question is: Is there a way to use the same depth texture for both the early depth test, and for sampling inside the shader? Or if not, is there some other (reasonably performant) way of rejecting pixels that don't have geometry in them? I will not be writing to this depth texture at all in my lighting pass.
I only have to target modern graphics hardware on PCs (so I can use any common extensions, or OpenGL 4.6 features).
There are rules in OpenGL about reading from data in a shader that's also being updated due to a framebuffer operation. Those rules used to be quite strict. Indeed, pre-GL 4.4, the rules were so strict that what you're trying to do was actually undefined behavior. That is, if an image from a texture was attached to the rendering FBO, and you took a sample from that texture in a way such that it was at all possible to be reading from the attached image, you got undefined behavior. Never mind if your write mask meant that no writing happened; it was UB.
Fortunately, it's well-defined now. You only get UB if you're doing an actual write, not merely because you have an image attached to the FBO. And by "now," I mean basically any hardware made in the last 10 years. While ARB_texture_barrier and GL 4.5 are fairly recent, their predecessor NV_texture_barrier is actually quite old. And despite being an NVIDIA extension by name, it was so widely implemented that it is even available on macOS implementations.
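In concrete terms, a minimal sketch of the setup this permits is below; the names lightFbo and gbufferDepthTex are assumptions for the example, and the exact binding points depend on your engine:
/* Hedged sketch: attach the G-buffer depth texture to the lighting FBO,
   keep depth testing enabled but disable depth writes, and sample the same
   texture in the lighting shader. Under GL 4.5 / ARB_texture_barrier rules
   this is defined behavior, because no write to the texture ever occurs. */
glBindFramebuffer(GL_FRAMEBUFFER, lightFbo);          /* color-only FBO    */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, gbufferDepthTex, 0);

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);    /* test against existing depth, never write it   */
glDepthFunc(GL_LESS);     /* e.g. sky fragments at depth 1.0 fail the test */

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gbufferDepthTex);  /* also sampled in shader  */
/* ... issue the full-screen or light-volume lighting passes here ... */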

Opengl ES draw wireframe over GL_TRIANGLES correctly

I need to draw a wireframe around a cube. I have everything made, but I have some problems with the alpha testing: whatever I do, the GL_LINES either keep overlapping the GL_TRIANGLES when they shouldn't (they are behind them), or the GL_TRIANGLES keep overlapping the GL_LINES (when the lines should be visible).
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
SquareMap.get().shader.getShader().begin();
SquareMap.get().shader.getShader().setUniformMatrix(u,camera.combined);
LineRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
TriangleRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
SquareMap.get().shader.getShader().end();
Also, the wireframe is a little bigger than the cube.
TriangleRenderer3D.get().render and LineRenderer3D.get().render just load the vertices and call glDrawArrays.
With the depth mask enabled, the cube's GL_TRIANGLES overlap the lines.
Do I need to enable something that I'm missing here?
It is worth mentioning that line primitives have different pixel coverage rules than triangles. A line must cross through a diamond-shaped pattern in the center of a pixel to be visible, whereas a triangle needs to cover the top-left corner. This documentation is for Direct3D, but it does an infinitely better job describing these rules (which are the same in GL) than any OpenGL documentation I have come across.
As for fixing this problem, the most common approach is to apply a small offset to all vertex positions in order to better align their centers; this is typically done by translating X and Y by 0.375 units.
Another Microsoft document explains this as well.
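As a hedged sketch of what that classic fix looks like in a fixed-function context (viewportWidth and viewportHeight are assumed variables; in a shader-based pipeline the same sub-pixel offset would instead be folded into the projection matrix):
/* Classic half-pixel alignment for pixel-exact 2D line drawing.
   After an orthographic projection that maps one unit to one pixel,
   shift everything by 0.375 so line centers land inside the diamond. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, viewportWidth, 0.0, viewportHeight, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375f, 0.375f, 0.0f);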
While some of the issues described in the question's first paragraph may be related to primitive coverage, the issue in its final paragraph is not. That issue can be addressed this way:
//
// Write wires wherever the line's depth is less than or equal to the triangles.
//
glDepthFunc (GL_LEQUAL);
TriangleRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
LineRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
By rendering the triangles first, and then drawing the lines only where they are in front of, or at the same depth as, the triangles (the default depth test of GL_LESS discards the equal-depth case), you should get the behavior you want. Leave depth writes enabled.

What is the goal of glTexEnv?

Reading about point sprites for particle-system rendering, I found this site, which talks about point sprites and uses the call glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
I tried to find info on this, but everything points back to the description given in the OpenGL documentation. Can someone give a more concrete example/explanation of what this call means?
Your title is a little broad; you are actually interested in one particular parameter (which is not explained in the manual page). However, if you read the formal specification for OpenGL 2.0, you will see that the parameter is explained there.
OpenGL Version 2.0 (October 22, 2004)  -  3.3. Points  -  p. 100
All fragments produced in rasterizing a point sprite are assigned the same associated data, which are those of the vertex corresponding to the point. However, for each texture coordinate set where GL_COORD_REPLACE is GL_TRUE, these texture coordinates are replaced with point sprite texture coordinates.
Effectively what this means is that when disabled (default), the fragments produced during rasterization are assigned a single texture coordinate set. Those coordinates would be the ones associated with the single vertex that created the point sprite.
That behavior is not particularly useful, though, because if every part of the point sprite has the same texture coordinates, then texture mapping is worthless. So, as an alternative, GL can compute the texture coordinates itself, and it does so by assigning (0,0) to the bottom-left corner of the sprite and (1,1) to the top-right corner. This behavior is also customizable, and if you are interested in reading more about which coordinates are assigned to which corner, the linked part of the specification explains that in detail.
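For context, here is a minimal sketch of how this is typically used in a legacy (GL 2.0, fixed-function) particle renderer; particleTex is an assumed texture handle:
/* Hedged sketch: one textured sprite per point vertex (legacy GL 2.0).
   GL_COORD_REPLACE makes the GL generate texture coordinates that sweep
   from (0,0) to (1,1) across each rasterized point. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, particleTex);   /* 'particleTex' is assumed */

glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
glPointSize(32.0f);                          /* sprite size in pixels    */

glBegin(GL_POINTS);
glVertex3f(0.0f, 0.0f, 0.0f);                /* one sprite per vertex    */
glEnd();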
These additional point parameters are illustrated here:
[Figure: illustration of the point sprite coordinate parameters]

glDrawElements draw polygon

I have read that the first parameter of glDrawElements is mode:
http://www.opengl.org/sdk/docs/man3/xhtml/glDrawElements.xml
Symbolic constants GL_POINTS, GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, GL_LINE_STRIP_ADJACENCY, GL_LINES_ADJACENCY, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_TRIANGLES, GL_TRIANGLE_STRIP_ADJACENCY and GL_TRIANGLES_ADJACENCY are accepted.
I do not see GL_POLYGON there. Does that mean I cannot use GL_POLYGON? And what if I have 10 indices? Do I need to transform them into several triangles with 3 indices each? If so, how do I do it?
The GL3 and GL4 level man pages on www.opengl.org only document the Core Profile of OpenGL. GL_POLYGON is deprecated, and was not part of the Core Profile when the spec was split into Core and Compatibility profiles in OpenGL 3.2.
You can still use GL_POLYGON if you create a context that supports the Compatibility Profile. But if you are just starting out, I would suggest that you stick to Core Profile features. If you do need documentation for the deprecated features, you'll have to go back to the GL2 man pages.
To draw a polygon, GL_TRIANGLE_FAN is the easiest replacement. You can use the same set of vertices for a triangle fan as you would use for GL_POLYGON, and it will produce the same result.
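As a hedged sketch of what that looks like for the 10-index case mentioned in the question (the indices array is illustrative; in a strict core profile these indices would live in a bound GL_ELEMENT_ARRAY_BUFFER rather than a client-side array):
/* A convex 10-vertex polygon drawn as a triangle fan: the vertex order
   that GL_POLYGON would use works unchanged. */
GLuint indices[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
glDrawElements(GL_TRIANGLE_FAN, 10, GL_UNSIGNED_INT, indices);
/* If plain GL_TRIANGLES is required instead, fan the indices out by hand:
   (0,1,2), (0,2,3), (0,3,4), ..., (0,8,9). */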
You are linking to the GL3 manual pages, by the way.
Since GL_POLYGON was deprecated in 3.0 and removed in 3.1, you are not going to find it listed there. In fact, you will find some tokens there that are only supported in GL 3.2 (adjacency primitives, which were introduced alongside Geometry Shaders); fortunately, that is actually documented in the manual page itself, unlike the fact that GL_POLYGON was deprecated.
For compatibility profiles (which you are using), you should view the GL2 manual pages instead; they can be found here.

Simple OpenGL Clarification

Does OpenGL 3+ only use "GL_TRIANGLES"?
That's what I read, but in the documentation for OpenGL 3.3, http://www.opengl.org/sdk/docs/man3/, "glDrawArrays()" takes the following parameters:
GL_POINTS,
GL_LINE_STRIP,
GL_LINE_LOOP,
GL_LINES,
GL_LINE_STRIP_ADJACENCY,
GL_LINES_ADJACENCY,
GL_TRIANGLE_STRIP,
GL_TRIANGLE_FAN,
GL_TRIANGLES,
GL_TRIANGLE_STRIP_ADJACENCY,
GL_TRIANGLES_ADJACENCY
Does OpenGL 3+ only use "GL_TRIANGLES"?
You mean "instead of also offering GL_QUADS and GL_POLYGON"?
Yes indeed. Quads and Polygons have been removed altogether. Most polygons needed to be tesselated into triangles anyway, since OpenGL can deal with convex polygons only (convex also imples planar!). Similar holds for quads.
Lines and points remain to be supported of course.
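To illustrate the migration, here is a minimal sketch of drawing a single quad in a core profile (it assumes a VAO and VBO are already bound, with a 2-component position attribute set up):
/* A single quad in core profile: a 4-vertex GL_TRIANGLE_STRIP covers the
   same area. Note the vertex order: a GL_QUADS order of BL, BR, TR, TL
   becomes the strip order BL, BR, TL, TR (i.e. 0, 1, 3, 2). */
GLfloat quadStrip[] = {
    -1.0f, -1.0f,   /* bottom-left  */
     1.0f, -1.0f,   /* bottom-right */
    -1.0f,  1.0f,   /* top-left     */
     1.0f,  1.0f,   /* top-right    */
};
glBufferData(GL_ARRAY_BUFFER, sizeof(quadStrip), quadStrip, GL_STATIC_DRAW);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);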
Does OpenGL 3+ only use "GL_TRIANGLES"? That's what I read
Where? Please provide a link.
There is a difference between "GL_TRIANGLES" and "triangles".
GL_TRIANGLES is a specific primitive type. It has a specific interpretation. Its base primitive type is "triangles" (as in, it generates triangles), but there's more to it than that.
"triangles" are exactly that: assemblages of 3 vertices that represent a planar area. GL_TRIANGLES, GL_TRIANGLE_STRIP, and GL_TRIANGLE_FAN produce triangles.
OpenGL 3.1+ core does not allow the use of the specific primitive types GL_QUADS, GL_QUAD_STRIP (i.e., all "quad" types), and GL_POLYGON. Everything else is fair game.
According to section 2.6.1 of the specification, commands like glDrawArrays() accept the primitives you posted. So, no, OpenGL 3.3 doesn't accept just GL_TRIANGLES.
What you read was probably meant to explain that OpenGL doesn't support primitives like GL_QUADS and GL_POLYGON anymore.
Quad and polygon primitives have been removed as of version 3.1, according to appendix E.2.2 of the specification; prior versions still support them, although they are deprecated from version 3.0.
You can find the specification here.