Accessing barycentric coordinates inside fragment shader - opengl

In the fragment shader, values are naturally interpolated. For example, say I have three vertices, each with a color: red for the first vertex, green for the second, and blue for the third. If I render a triangle with them, the expected result is the familiar smoothly interpolated color triangle.
Obviously, OpenGL calculates the interpolation coefficients (a, b, c) for each point inside the triangle. Is there any way to explicitly access these values or would I need to calculate the fragment coordinates of the three vertices and find the barycentric coordinates of the point myself?
I know this is perfectly feasible, but I thought OpenGL could have provided something.

I'm not aware of any built-in for getting the barycentric coordinates, but you shouldn't need any calculations in the fragment shader.
You can pass the barycentric coordinates of the triangle vertices as attributes into the vertex shader. The attribute values for the 3 vertices are simply (1, 0, 0), (0, 1, 0), and (0, 0, 1). Then pass the attribute through to the fragment shader (using a varying variable in legacy OpenGL, or an out variable in the vertex shader matched by an in variable in the fragment shader in core OpenGL). The value of that variable received by the fragment shader is then the barycentric coordinate of the fragment.
This is very similar to the way you would commonly pass texture coordinates into the vertex shader and then pass them through to the fragment shader, which receives the interpolated values.
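A minimal sketch of that setup in core-profile GLSL (the attribute locations and variable names here are just illustrative):
// vertex shader
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 barycentric;   // (1,0,0), (0,1,0) or (0,0,1) per vertex
uniform mat4 mvp;
out vec3 vBary;
void main() {
    vBary = barycentric;
    gl_Position = mvp * vec4(position, 1.0);
}

// fragment shader
#version 330 core
in vec3 vBary;   // interpolated: the barycentric coordinates of this fragment
out vec4 fragColor;
void main() {
    fragColor = vec4(vBary, 1.0);   // e.g. visualize them; the components always sum to 1
}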

The NV_fragment_shader_barycentric extension exposes the barycentric coordinates directly to the fragment shader as the built-in gl_BaryCoordNV (plus a non-perspective variant, gl_BaryCoordNoPerspNV), but it is vendor-specific.
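If that extension is available, the fragment shader can read the coordinates directly, roughly like this:
#version 450
#extension GL_NV_fragment_shader_barycentric : require
out vec4 fragColor;
void main() {
    // gl_BaryCoordNV holds the perspective-corrected barycentric weights of the fragment
    fragColor = vec4(gl_BaryCoordNV, 1.0);
}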

Related

OpenGL: Mapping texture on a sphere using spherical coordinates

I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/sqrt(length(modelPositionVarying.xyz)))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which is what I would also do if the model itself had texture coordinates, I get the result shown in the image below. The vertices are shown as points, and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The variable is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam between two texture-coordinate regions, where you want the texture coordinate to wrap seamlessly from 1.0 back to 0.0, requires the mesh to handle it explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates gets a 0.0 texture coordinate and is connected to the vertices coming from the right (in your example). The other gets a 1.0 texture coordinate and is connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
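A rough sketch of that per-fragment approach, assuming a unit sphere so the interpolated object-space normal points outward; the variable and sampler names are just placeholders:
// fragment shader (sketch)
#version 330 core
in vec3 normalVarying;            // interpolated object-space normal (assumed input)
uniform sampler2D earthTexture;   // assumed sampler name
out vec4 fragColor;
const float PI = 3.14159265358979;
void main() {
    vec3 n = normalize(normalVarying);   // renormalize after interpolation
    float u = atan(n.y, n.x) / (2.0 * PI) + 0.5;
    float v = acos(n.z) / PI;
    fragColor = texture(earthTexture, vec2(u, v));
}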

How do I obtain the vertices of the current polygon inside a fragment shader?

I've got a shader to procedurally generate geometric shapes inside a quad. Essentially, you render a quad with this fragment shader active, and it calculates which fragments are on the border of the shape and discards everything else.
The problem is the dimensions of the quad. At the moment, I have to pass in the vertex data twice, once to the VBO and a second time as uniform variables to the shader, so it knows how big of a shape it's supposed to be creating.
Is there any way to do this only once: some way to get the coordinates of the top-left and bottom-right vertices of the current quad from inside the fragment shader, so that I could give the vertex data to OpenGL once and have the shader calculate the largest shape that will fit inside the quad?
I think you probably want to use a geometry shader. Each vertex would consist of the position of a corner of the quad (a vector of 2-4 values) and the size of the quad (which could be a single value or up to 9, depending on how general you need the quad to be).
The geometry shader would generate the additional vertices for the quad and pass the size through to the fragment shader.
Depending on what exactly you're doing you may also be able to use point sprites and use the implicit coordinates that they have (gl_PointCoord). However, point sprites have a maximum size (which can be queried via GL_POINT_SIZE_RANGE and GL_POINT_SIZE_GRANULARITY).
You could also pull the vertices yourself: create a uniform buffer or a texture buffer with the vertex data and access that buffer in the fragment shader. In the vertex shader, to know which vertex to output, you can use the built-in variable gl_VertexID.
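A hedged sketch of that idea for a single quad drawn from 4 vertices, with the corner positions stored in a buffer texture (the buffer format, the corner ordering, and all names are assumptions):
// vertex shader (sketch): no vertex attributes at all, positions come from the buffer
#version 330 core
uniform samplerBuffer quadCorners;   // assumed: one vec2 corner per texel
uniform mat4 mvp;
void main() {
    vec2 p = texelFetch(quadCorners, gl_VertexID).xy;
    gl_Position = mvp * vec4(p, 0.0, 1.0);
}

// fragment shader (sketch): read the same buffer to recover the quad's extent
#version 330 core
uniform samplerBuffer quadCorners;
out vec4 fragColor;
void main() {
    vec2 topLeft     = texelFetch(quadCorners, 0).xy;
    vec2 bottomRight = texelFetch(quadCorners, 2).xy;   // index depends on your corner order
    // ... use topLeft/bottomRight to size the procedural shape, discard outside it ...
    fragColor = vec4(1.0);
}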
I'd pass the top left and bottom right vertices of the quad as two extra input attributes for each vertex. The quads themselves get rendered as triangles.
In the vertex shader, declare two output attributes as flat (so they don't get interpolated) and copy the input attributes to these outputs.
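Something along these lines (a sketch; the attribute layout and names are assumptions):
// vertex shader (sketch): per-vertex attributes carry the quad's extent
#version 330 core
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 quadTopLeft;      // same value on all 4 vertices of the quad
layout(location = 2) in vec2 quadBottomRight;  // same value on all 4 vertices of the quad
uniform mat4 mvp;
flat out vec2 vTopLeft;
flat out vec2 vBottomRight;
void main() {
    vTopLeft = quadTopLeft;
    vBottomRight = quadBottomRight;
    gl_Position = mvp * vec4(position, 0.0, 1.0);
}

// fragment shader (sketch)
#version 330 core
flat in vec2 vTopLeft;
flat in vec2 vBottomRight;
out vec4 fragColor;
void main() {
    vec2 quadSize = vBottomRight - vTopLeft;   // available per fragment, not interpolated
    // ... size the procedural shape from quadSize ...
    fragColor = vec4(1.0);
}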

Vertex shader vs Fragment Shader [duplicate]

This question already has answers here:
What are Vertex and Pixel shaders?
I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same thing as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader, on the other hand, takes care of how the pixels between the vertices look. Their values are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices as red. If you want specific effects like a gradient between the vertices, you have to do that in the fragment shader.
Put another way:
The vertex shader is part of the early steps in the graphics pipeline, somewhere between model-coordinate transformation and polygon clipping, I think. At that point, nothing is really drawn yet.
The fragment/pixel shader, however, is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
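As a concrete illustration of that division of labour, here is a minimal (assumed) GLSL pair: the vertex shader runs once per corner point and transforms it, the fragment shader runs once per covered pixel and receives the interpolated colour.
// vertex shader: runs once per vertex, transforms the position, forwards the colour
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
uniform mat4 mvp;
out vec3 vColor;
void main() {
    vColor = color;
    gl_Position = mvp * vec4(position, 1.0);
}

// fragment shader: runs once per covered pixel, receives the interpolated colour
#version 330 core
in vec3 vColor;
out vec4 fragColor;
void main() {
    fragColor = vec4(vColor, 1.0);   // the gradient comes from interpolation between vertices
}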
The vertex shader runs on every vertex, while the fragment shader runs on every fragment (roughly, every covered pixel). The fragment shader is applied after the vertex shader.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of programmable 3-D rendering, as opposed to fixed-pipeline rendering. In any such pipeline, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to implement that in the vertex shader; any geometric change to the vertices is done there.
The fragment shader takes the output from the vertex shader and determines the colour, depth value, etc. of a pixel. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, such as lighting calculations, can be performed in the vertex shader as well as in the fragment shader, but the fragment shader generally gives a better (per-pixel) result.
In rendering images via 3-D hardware you typically have a mesh (points, polygons, lines) defined by vertices. To manipulate vertices individually, typically for motion in a model or waves in an ocean, you can use vertex shaders. These vertices can have a static colour or a colour assigned by textures; to manipulate the resulting colours you use fragment shaders, which run at the end of the pipeline when the image goes to the screen.

Can someone explain how this code transforms something from per vertex lighting to per pixel?

In a tutorial there was a diffuse value calculation of the type
float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);
...in the vertex shader.
That was supposed to produce per-vertex lighting, with the fragment shader later doing..
gl_FragColor = gl_Color * diffuse_value;
Then, when he moved the first line to the fragment shader - appropriately, by outputting vertex_normal and vertex_light_position to the fragment shader - it was supposed to turn the method into "per-pixel shading".
How is that so? The first method appears to be doing the diffuse_value calculation for every pixel anyway!
diffuse_value in the first case is computed in the vertex shader. So it's only done per vertex.
After the vertex shader outputs its values, the rasterizer takes those values (3 per triangle for each vector) and interpolates them (in a perspective-correct manner) to provide different values for each pixel. As it happens, interpolating vectors like that (the normal and the light-direction vectors) is not quite correct, because interpolation does not preserve their unit length. Many implementations will in fact renormalize the vectors as the first thing in the fragment shader.
But it's even worse to interpolate the dot product of the 2 vectors (which is what per-vertex lighting effectively does). Say, for example, that your normal is N = +Z for all your vertices, and L = norm(Z-X) on one vertex and L = norm(Z+X) on another.
N.L = 1/sqrt(2) for both vertices.
Interpolating that dot product will give you flat lighting, whereas interpolating N and L separately and renormalizing will give you the result you'd expect: lighting that peaks exactly in the middle of the polygon (because the interpolation of norm(Z-X) and norm(Z+X) gives exactly Z once normalized).
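A sketch of the per-pixel version in the same legacy GLSL style as the question, assuming vertex_normal and vertex_light_position are passed down as varyings from the vertex shader:
// fragment shader (sketch): per-pixel diffuse
varying vec3 vertex_normal;          // interpolated, no longer unit length
varying vec3 vertex_light_position;  // interpolated, no longer unit length
void main() {
    vec3 N = normalize(vertex_normal);
    vec3 L = normalize(vertex_light_position);
    float diffuse_value = max(dot(N, L), 0.0);   // now evaluated with per-fragment N and L
    gl_FragColor = gl_Color * diffuse_value;
}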
Well ... Code in a vertex shader is only evaluated per-vertex, with the input values of that vertex.
But when moved to a fragment shader, it is evaluated per-fragment, i.e. per pixel, with input values appropriately interpolated between vertices.
At least that is my understanding, I'm quite rusty with shader programming though.
If diffuse_value is computed in the vertex shader, that means it is computed per vertex. It is then linearly interpolated over the pixels of the triangle and fed into the pixel shader. (If you don't have per-pixel normals, that's all you can do.) Then, in the pixel shader, the polygon colour (interpolated too) is modulated by that diffuse_value.

Fragment Shader Eye-Space unscaled depth coordinate

I'm trying to use the unscaled (true distance from the front clipping plane) distance to objects in my scene in a GLSL fragment shader. The gl_FragCoord.z value is smaller than I expect. In my vertex shader, I just use ftransform() to set gl_Position. I'm seeing values between 2 and 3 when I expect them to be between 15 and 20.
How can I get the real eye-space depth?
Thanks!
Pass whatever you want down as a varying from the vertex shader.
The Z value available in the fragment shader has gone through normalization based on your z-near/z-far (from the projection matrix) and on glDepthRange, so it is not directly what you're after. Technically, you could try to reconstruct it by inverting the various operations OpenGL applies to Z after the vertex shader, but that is probably more trouble (starting with the fact that the projection is non-linear in Z) than just passing down exactly what you want.
As a side note, the Z you would compute with gl_ModelViewMatrix * gl_Vertex is measured from the point of view (the eye), not from the near clipping plane.
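A sketch of that, in the same legacy GLSL style as the question (ftransform, gl_ModelViewMatrix); the scale factor in the fragment shader is just for visualization:
// vertex shader (sketch)
varying float eyeDepth;
void main() {
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    eyeDepth = -eyePos.z;   // eye space looks down -Z, so negate to get a positive distance
    gl_Position = ftransform();
}

// fragment shader (sketch)
varying float eyeDepth;
void main() {
    // eyeDepth is the distance from the eye along the view axis;
    // subtract the near-plane distance if you want distance from the near plane instead
    gl_FragColor = vec4(vec3(eyeDepth / 20.0), 1.0);   // assumed scale just to visualize
}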