GLSL: get relative normalized coordinates of the fragment

In a fragment shader, without a texture associated with it, how do I get normalized coordinates that start, for instance, at (0, 0) at the top-left corner of my geometry and end at (1, 1) at the bottom-right corner? This should be independent of the geometry I'm using.
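One way to get there, assuming the geometry's screen-space bounding box is known on the CPU side (uBoundsMin and uBoundsMax are hypothetical uniforms holding it in pixels), is to derive the coordinate from gl_FragCoord. A minimal sketch:

    #version 300 es
    precision mediump float;

    // Hypothetical uniforms: the geometry's screen-space bounding box in pixels.
    uniform vec2 uBoundsMin; // bottom-left corner in window coordinates
    uniform vec2 uBoundsMax; // top-right corner in window coordinates

    out vec4 fragColor;

    void main() {
        // gl_FragCoord is in window space, with the origin at the bottom-left.
        vec2 uv = (gl_FragCoord.xy - uBoundsMin) / (uBoundsMax - uBoundsMin);
        uv.y = 1.0 - uv.y; // flip so (0, 0) is the top-left corner
        fragColor = vec4(uv, 0.0, 1.0); // visualize the normalized coordinates
    }

If per-vertex texture coordinates are available, simply passing them through as a varying avoids the bounding-box uniform entirely.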

Related

Discard fragments of vertices drawn with gl_PointSize of 100, depending on distance to center

In a strict GLES 3.0 environment I draw vertices as GL_POINTS and set their gl_PointSize to 100, which renders nice 100 x 100 px points. But they are flat-shaded squares; instead I want to render them as (perfect) circles in my shader.
For GL_TRIANGLE_STRIP I did this by interpolating the position between the vertices, calculating each fragment's distance from the flat-shaded quad's center, and discarding the fragment when that distance exceeded the desired radius.
This works fine for GL_TRIANGLE_STRIP, but not for GL_POINTS, because there is only one vertex and nothing to interpolate between. What I would need instead is the fragment's own position, so I could discard it based on its distance from the point's center.
Any idea how I could do this with GL_POINTS?
Switching to GL_TRIANGLE_STRIP or other primitives is not possible. Geometry shaders are also not available.
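A minimal sketch of one way to do this, using the built-in gl_PointCoord (available in GLSL ES 3.0 fragment shaders), which runs from (0, 0) to (1, 1) across the rasterized point square:

    #version 300 es
    precision mediump float;

    out vec4 fragColor;

    void main() {
        // gl_PointCoord spans the point square from (0, 0) to (1, 1),
        // so the center is (0.5, 0.5) and the inscribed circle has radius 0.5.
        vec2 fromCenter = gl_PointCoord - vec2(0.5);
        if (dot(fromCenter, fromCenter) > 0.25) { // 0.25 = 0.5 * 0.5
            discard; // outside the circle
        }
        fragColor = vec4(1.0); // flat white disc; replace with real shading
    }

Comparing the squared distance against the squared radius avoids a sqrt per fragment.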

What OpenGL functions modify vertex positions prior to the vertex shader?

Let's say there's a color render texture that is 1000 px wide, 500 px tall. And I draw a quad with vertices at the four corners (-1, -1, 0), (1, -1, 0), (-1, 1, 0), (1, 1, 0) to it without any transformation in the vertex shader.
Will this always cover the entire texture's surface by default, assuming no other GL functions were called prior to this sole draw command?
What OpenGL functions (that modify vertex positions) could cause this quad to no longer fill the screen?
(I'm trying to understand how vertices can be altered prior to the vertex shader, so I can use or avoid the right functions and ensure that NDC (-1, -1) to (1, 1) always represents the entire surface.)
edit: If the positions are not altered, then I'm also wondering how their mapping to a render buffer might be modified prior to the vertex shader. For instance, will (-1, -1, 0) reliably refer to a fragment at the bottom-left of the render buffer, (0, 0, 0) to the middle, and (1, 1, 0) to the top-right?
Nothing happens to vertex data "prior to the vertex shader". Nothing can happen to it, because OpenGL doesn't know what the vertex attributes mean. It doesn't know what attribute 2 refers to; it doesn't know what is a position, normal, texture coordinate or anything. As far as OpenGL is concerned, it's all just data. What gives that data meaning is your vertex shader. And only in the way defined by your vertex shader.
Data from buffer objects are read in accord with the format specified by your VAO, and are given to the vertex shader invocations which process those vertices.
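To make that concrete, here is a minimal pass-through vertex shader sketch (the attribute name is illustrative): the buffer data only becomes a position because the shader writes it to gl_Position.

    #version 330 core

    // "aPosition" is just generic attribute 0 as far as OpenGL is concerned.
    layout(location = 0) in vec3 aPosition;

    void main() {
        // The data acts as a clip-space position only because of this assignment.
        gl_Position = vec4(aPosition, 1.0);
    }

The mapping from NDC to pixels happens after the vertex shader, in the viewport transform (glViewport); with the viewport set to the full render target, (-1, -1) lands at the bottom-left and (1, 1) at the top-right.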

OpenGL: Mapping texture on a sphere using spherical coordinates

I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
// Longitude from atan -> [0, 1], latitude from acos of the normalized z -> [0, 1].
// Note: M_PI is not predefined in GLSL; define it, e.g. #define M_PI 3.14159265358979
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x) / (2.0 * M_PI) + 0.5,
                                 acos(modelPositionVarying.z / length(modelPositionVarying.xyz)) / M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself had texture coordinates, I get the result shown in the image below: the vertices are drawn as points, and a texture coordinate (u) lower than 0.5 is shown in red while all others are shown in blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam between two texture-coordinate regions, where you want the texture coordinate to wrap seamlessly from 1.0 back to 0.0, requires the mesh to handle it explicitly: the mesh must duplicate every vertex along the seam. One of the duplicates will have a 0.0 texture coordinate and will be connected to the vertices coming from the right (in your example). The other will have a 1.0 texture coordinate and will be connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
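A minimal sketch of that fragment-shader variant, assuming the mesh provides normals (vNormal and uEarthTexture are placeholder names):

    #version 330 core

    in vec3 vNormal;                 // interpolated normal (equals position on a unit sphere)
    uniform sampler2D uEarthTexture;
    out vec4 fragColor;

    const float PI = 3.14159265358979;

    void main() {
        vec3 n = normalize(vNormal); // re-normalize after interpolation
        // Longitude and latitude, remapped into the [0, 1] texture range.
        vec2 uv = vec2(atan(n.y, n.x) / (2.0 * PI) + 0.5,
                       acos(n.z) / PI);
        fragColor = texture(uEarthTexture, uv);
    }

Note that even per-fragment, atan still jumps at the seam, which can make mipmapped sampling pick too coarse a level along a one-pixel column there; the mesh-duplication fix avoids this entirely.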

Accessing barycentric coordinates inside fragment shader

In the fragment shader, values are naturally interpolated. For example, if I have three vertices, each with a color, red for the first vertex, green for the second, and blue for the third, and I render a triangle with them, the expected result is the familiar color-gradient triangle.
Obviously, OpenGL calculates the interpolation coefficients (a, b, c) for each point inside the triangle. Is there any way to explicitly access these values or would I need to calculate the fragment coordinates of the three vertices and find the barycentric coordinates of the point myself?
I know this is perfectly feasible, but I thought OpenGL could have provided something.
I'm not aware of any built-in for getting the barycentric coordinates. But you shouldn't need any calculations in the fragment shader.
You can pass the barycentric coordinates of the triangle vertices as attributes into the vertex shader. The attribute values for the 3 vertices are simply (1, 0, 0), (0, 1, 0), and (0, 0, 1). Then pass the attribute value through to the fragment shader (using a varying variable in legacy OpenGL; an out in the vertex shader and an in in the fragment shader in core OpenGL). The value of the variable received by the fragment shader is then the barycentric coordinates of that fragment.
This is very similar to the way you would commonly pass texture coordinates into the vertex shader and then pass them through to the fragment shader, which receives the interpolated values.
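A minimal sketch of this in core-profile GLSL (attribute and varying names are illustrative):

    // vertex shader
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aBary; // (1,0,0), (0,1,0) or (0,0,1) per corner
    out vec3 vBary;
    void main() {
        vBary = aBary;
        gl_Position = vec4(aPosition, 1.0);
    }

    // fragment shader
    #version 330 core
    in vec3 vBary; // interpolated: the fragment's barycentric coordinates
    out vec4 fragColor;
    void main() {
        fragColor = vec4(vBary, 1.0); // red/green/blue corners blend smoothly
    }

For an indexed mesh, every triangle needs its three corners assigned distinct values, which may force duplicating shared vertices.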
Alternatively, the NV_fragment_shader_barycentric extension (where supported) exposes the barycentric coordinates directly in the fragment shader as the built-in gl_BaryCoordNV.

3D Texture sampling perpendicular to primitive

I'm implementing a slice-based volume renderer - i.e. my volumetric data is in a 3D texture, and I have a stack of proxy geometry that is rendered to sample the data.
I would like to know whether there is a way to specify the size of a fragment in texels perpendicular to the plane of a primitive.
For example my geometry is axis aligned like this:
a stack of 200 planes (quads) with the bottom left at (-1, -1, z) and the top right at (1, 1, z)
where z is from -1 to 1 with a step size of 0.01
the texture coordinate is (gl_Position.xyz + 1) / 2
If I understand texture sampling correctly, the selection of MIN_FILTER or MAG_FILTER in the xy/st direction should happen automatically depending on the size of a fragment in texels since they are on the same primitive.
How can I set the 'size' of a fragment in texels in the z/p direction? Working with the above example, I would like to interpolate between samples using MAG_FILTER if I have more slices than texture sample points in the Z direction, and using MIN_FILTER if I have fewer slices.
The MIN/MAG filter selection is driven by the fragment's footprint in the texture, which for your slices only varies in x and y, not z; the trilinear filtering modes additionally interpolate across different mipmap levels. Perhaps what you really intend to do can be achieved with a 3D texture using texImage3D()?
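If explicit control over the perpendicular footprint is wanted, one option to experiment with is textureGrad (GLSL 1.30+), which lets the shader supply the derivatives used for the level-of-detail and MIN/MAG decision. A hedged sketch, where uSliceSpacing is a hypothetical uniform holding the inter-slice distance in texture space:

    #version 330 core

    in vec3 vTexCoord;           // (position.xyz + 1) / 2, from the vertex shader
    uniform sampler3D uVolume;
    uniform float uSliceSpacing; // hypothetical: slice step in texture space
    out vec4 fragColor;

    void main() {
        // Screen-space derivatives only capture the in-plane footprint;
        // fold the slice spacing into z so the LOD computation also
        // accounts for the step perpendicular to the slice.
        vec3 dx = dFdx(vTexCoord);
        vec3 dy = dFdy(vTexCoord);
        dx.z = uSliceSpacing;
        fragColor = textureGrad(uVolume, vTexCoord, dx, dy);
    }

Whether this produces the intended MIN/MAG switch depends on the implementation's LOD formula, so it is worth verifying against a reference rendering.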