What is the goal of glTexEnv?

Reading about point sprites for particle-system rendering, the site I was following uses the call glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
I tried to find information on this, but everything points back to the description in the OpenGL documentation. Can someone give a more approachable example or explanation of what this call means?

Your title is a little broad; you are actually interested in one particular parameter (which is not explained on the manual page). However, if you read the formal specification for OpenGL 2.0, you will see that the parameter is explained there.
OpenGL Version 2.0 (October 22, 2004)  -  3.3. Points  -  p. 100
All fragments produced in rasterizing a point sprite are assigned the same associated data, which are those of the vertex corresponding to the point. However, for each texture coordinate set where GL_COORD_REPLACE is GL_TRUE, these texture coordinates are replaced with point sprite texture coordinates.
Effectively this means that when GL_COORD_REPLACE is GL_FALSE (the default), the fragments produced during rasterization are all assigned a single texture coordinate set: the coordinates associated with the single vertex that created the point sprite.
That behavior is not particularly useful, though, because if every part of the point sprite has the same texture coordinates, then texture mapping is worthless. So as an alternative, GL can compute the texture coordinates itself, and it does so by assigning (0,0) to the bottom-left corner of the sprite and (1,1) to the top-right corner. This behavior is also customizable; if you are interested in which corner gets which coordinates, the linked part of the specification explains that in detail.
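As a rough sketch of how this parameter is typically used in fixed-function OpenGL 2.0 (particleTexture and particleCount are placeholder names; texture creation and vertex arrays are assumed to exist elsewhere):

/* Enable point sprites and let GL generate per-fragment texture coordinates. */
glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);

/* Optional: choose which corner gets (0,0); the default is GL_UPPER_LEFT. */
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);

glPointSize(32.0f);                             /* size of each sprite in pixels     */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, particleTexture);  /* placeholder: the particle texture */
glDrawArrays(GL_POINTS, 0, particleCount);      /* one textured quad per point vertex */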
These additional point parameters (such as the texture coordinate origin) are illustrated in the accompanying figure.

Related

How does the rasterizer create fragments?

Does the rasterizer (in OpenGL) create one fragment for each pixel the triangle is mapped to? So if we have 4 triangles and each triangle covers the whole screen (each triangle has a different z value) and my resolution is 1080*720, are there then 1080*720*4 fragments created?
I got confused about these concepts because I haven't seen them explained clearly anywhere. And will the fragment shader then shade all of these fragments, or are they discarded based on the depth function settings before shading?
I'm assuming there is no multisampling.
That's pretty much the crux of it. The only complication in this case may be thrown up by depth testing, which may discard fragments if the Z test fails. So assuming each triangle is rendered in front of the preceding triangle (back to front), then yes, you'll have 1080*720*4 fragments.
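A sketch of the state that decides what happens to those fragments (drawFourFullScreenTriangles() is a placeholder for the four draw calls):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* Back to front: every fragment passes GL_LESS, so all 1080*720*4 (about
   3.1 million) fragments are shaded. Front to back: the first triangle fills
   the depth buffer and the other three fail the test; the rasterizer still
   generates their fragments, but most GPUs skip shading them via an early
   depth test. */
drawFourFullScreenTriangles();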

GLSL shader: occlusion order and culling

I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3d coordinates of points). The drawing itself is done as I want except the occlusion does not work correctly, i.e., under certain viewpoints, the curve that is supposed to be in the very front appears to be still occluded, and reverse: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assumed it was occlusion culling (update: it actually indicates a problem with the depth buffer; I had the terminology confused!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass further the color info to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update: solution
So the initial problem, as I learned, was not about finding a culling algorithm, but that I was not handling the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles occlusion correctly by itself, so I did not need to re-invent the wheel.
I searched through my GLSL program and found that I was essentially setting the z-values to zero in my geometry shader when converting the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z / vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
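A minimal sketch of that emit loop, using the variable names quoted above (emittedCount is a placeholder; the rest of the strip-building code is assumed unchanged):

// Geometry shader excerpt (sketch). zValues[i] was computed as vertex.z / vertex.w.
for (int i = 0; i < emittedCount; ++i) {
    // previously: gl_Position = vec4(screenCoords[i], 0.0, 1.0);  // flat depth, broken occlusion
    gl_Position = vec4(screenCoords[i], zValues[i], 1.0);          // keep the real depth
    EmitVertex();
}
EndPrimitive();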
Regarding the depth buffer settings, I didn't have to specify them explicitly, since the library I use sets them up correctly by default.
If you don't use the depth buffer, then the most recently rendered object will always be on top.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
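Put together, a typical ordering looks something like this (drawScene() stands in for your own draw calls):

/* once, at initialization */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);                                 /* keep depth writes enabled        */

/* every frame, before drawing */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   /* clear color and depth together   */
drawScene();                                          /* placeholder for curve/quad draws */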
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).

OpenGL ES: draw wireframe over GL_TRIANGLES correctly

I need to draw a wireframe around a cube. I have everything built, but I have a problem with the alpha testing: whatever I do, either the GL_LINES overlap the GL_TRIANGLES when they shouldn't (they are behind them), or the GL_TRIANGLES overlap the GL_LINES (when the lines should be visible).
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
SquareMap.get().shader.getShader().begin();
SquareMap.get().shader.getShader().setUniformMatrix(u,camera.combined);
LineRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
TriangleRenderer3D.get().render(SquareMap.get().shader,worldrenderer.getCamera());
SquareMap.get().shader.getShader().end();
Also the wireframe is a little bigger than the cube.
The TriangleRenderer3D.get().render and LineRenderer3D.get().render methods just load the vertices and call glDrawArrays.
With the depth mask enabled, the cube's GL_TRIANGLES overlap the lines.
Do I need to enable something that I am missing here?
It is worth mentioning that line primitives have different pixel coverage rules than triangles. A line must pass through a diamond-shaped region in the center of a pixel to be visible, whereas a triangle must cover the pixel's center (with edge ties broken by the top-left rule). The linked documentation is for Direct3D, but it does a far better job of describing these rules (which are the same in GL) than any OpenGL documentation I have come across.
As for fixing this problem, a small offset applied to all vertex positions in order to better align their centers is the most common approach. This is typically done by translating X and Y by 0.375 units.
Another Microsoft document explains this as well.
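On desktop fixed-function GL, that classic nudge looks roughly like this (a sketch only; with ES 2.0 and shaders, as in your setup, the equivalent is to fold the same sub-pixel offset into your projection or camera matrix):

/* Sketch: the classic sub-pixel nudge when drawing in window coordinates. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375f, 0.375f, 0.0f);   /* align line centers with the pixel diamond */
/* ... draw the wireframe ... */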
While some of the issues described in your first paragraph may be related to primitive coverage, the one in your last paragraph is not.
The issue described in the final paragraph can be addressed this way:
//
// Write wires wherever the line's depth is less than or equal to the triangles'.
//
Gdx.gl.glDepthFunc(GL20.GL_LEQUAL);
TriangleRenderer3D.get().render(SquareMap.get().shader, worldrenderer.getCamera());
LineRenderer3D.get().render(SquareMap.get().shader, worldrenderer.getCamera());
By rendering the triangles first, and then drawing the lines only where they are in front of or at the same depth as the triangles (the default GL_LESS test would discard the equal-depth case, which is why GL_LEQUAL is used), you should get the behavior you want. Leave depth writes enabled.
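For completeness, this is the depth state that should be in place each frame before those two draw calls (same libGDX bindings as in the question):

Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glDepthMask(true);                                             // leave depth writes enabled
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);  // clear depth every frame too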

How can I apply a depth test to vertices (not fragments)?

TL;DR: I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see whether vertices are 'in view' or not, but the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm looking for alternative ways to filter vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles in some graphics card of the future, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (ie re-using the depth buffer from the DepthOnly pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately a one line comment, and I'm unsure of how to duplicate it in WebGL anyway (a naive gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different: re-rendering the discs in a second pass as cones (the center of a disc becomes the apex of a cone, think "close the umbrella"), effectively computing a Voronoi diagram on the surface of the object (a la the red book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color of the first disc to reach it as the radii grow from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D: any disc center closer than the corresponding texture2D() lookup would be allowed on to the second pass; otherwise I would hack around "discarding" the vertex (its alpha would be set to 0, or some flag set that would cause the fragments associated with the disc/cone to be discarded, etc.).
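For reference, this is roughly what that pack/unpack pair usually looks like in WebGL 1 GLSL (a sketch; uDepthTex and projectedUV are placeholder names, and the exact constants vary between the versions floating around):

// Depth-only pass, fragment shader: encode a depth value in [0,1) into RGBA8.
vec4 packDepth(float depth) {
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

// Second pass, vertex shader: decode and compare against this vertex's own depth.
float unpackDepth(vec4 enc) {
    return dot(enc, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}
// Requires vertex texture fetch support (MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0):
// float sceneDepth = unpackDepth(texture2DLod(uDepthTex, projectedUV, 0.0));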
This actually kind of worked, but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating-point error in the depth -> RGBA -> depth round trip. More importantly, though, the depth texture is written at fragment texel coordinates, but I'm looking it up at vertex positions, which almost certainly don't land exactly on the relevant texel centers; so I essentially get depth plus or minus noise, and the noise is the issue. Adding or subtracting .000001 or so isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
How else can I determine which discs' centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on the objects associated with those points? Or is there a better way to go about this in general?

glViewport, window sizes and clipping

I am trying to understand the relationship between the screen and the logic OpenGL uses to decide whether a primitive should be rendered, i.e. whether it is onscreen or not.
For example, suppose you set the viewport larger than the screen (no reason to do this, but for example's sake). OpenGL doesn't "know" the screen size, so it will "draw" points off the screen so long as the orthographic projection places them within the viewport, correct?
But also, if I define a vertex position to be outside the viewport as determined by the projection, does OpenGL include it in rendering?
glViewport(0,0,100,100);
ApplyOrtho(50,50); // custom ES 2.0 utility to apply 2D orthographic projection
Now a vertex of position (75,75) would not get rendered by OpenGL, right?
the logic OpenGL uses to decide if a primitive should be rendered
There is only one piece of logic that OpenGL uses to decide if a primitive should be rendered. If the primitive hasn't been clipped/culled, then the only thing that will stop it from being rasterized is if the user has disabled all primitive rasterization with glEnable(GL_RASTERIZER_DISCARD). Otherwise, the OpenGL specification defines that all primitives that were not culled as part of clipping will be rasterized.
Now, whether they will produce any visible effect is a different question. And since primitives that are off-screen can't produce visible effects (unless you're using image load/store), a conforming OpenGL implementation is free to cull such triangles if it wants. But more likely, it will rasterize them and simply check to see if the fragment falls outside of the window. If it does, it will just discard those fragments.
In general, this should not be something you should be concerned about. Just set a reasonable viewport and you'll be fine.
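To make the concrete case above explicit (assuming ApplyOrtho(50,50) builds an orthographic projection spanning -50..50 on both axes, as the question implies):

glViewport(0, 0, 100, 100);
ApplyOrtho(50, 50);

/* A vertex at x = 75 maps to 75 / 50 = 1.5 in normalized device coordinates,
   which is outside the [-1, +1] clip volume. A point there is clipped away
   and never rasterized; a triangle using that vertex is clipped against the
   volume and only its portion inside [-1, +1] produces fragments. Whatever
   survives clipping is then mapped by glViewport onto the 100x100 window
   region, regardless of whether that region fits on the physical screen. */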