OpenGL - tex coord in fragment shader outside of specified range

I'm trying to draw a rectangle with a texture in OpenGL. I simply want to render an entire .jpg image, so I specify the texture coordinates as [0, 0] to [1, 1] in the vertex buffer. I expect all the interpolated texture coordinates in the fragment shader to fall between [0, 0] and [1, 1]. However, depending on where the rectangle is drawn, I sometimes get a texture coordinate that is less than 0 (I know this is the case because I output red from the fragment shader whenever the tex coord is less than 0).
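For reference, the debug check looks something like this (a minimal sketch; vTexCoord and tex are assumed names for the interpolated coordinate and the sampler):

#version 120
varying vec2 vTexCoord;   // interpolated [0,1] texture coordinate (assumed name)
uniform sampler2D tex;    // the .jpg texture (assumed name)

void main()
{
    // Flag any fragment whose interpolated coordinate falls below 0.
    if (vTexCoord.s < 0.0 || vTexCoord.t < 0.0)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // out of range: solid red
    else
        gl_FragColor = texture2D(tex, vTexCoord);
}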
How come I get an interpolated value outside of the specified range? I currently visualize vertices/fragments like the following image (https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing):
If I imagine a rectangle instead, then as long as the pixel's sample point is inside the rectangle, the interpolated texture coord must be at least 0, since the very left edge of the rectangle represents 0, right? So how do I end up with a value less than 0?
Edit: after some basic testing, it looks like the fragment shader is invoked whenever a shape simply intersects a pixel, not only when the pixel's sample point is inside the shape. I tested this by placing the left edge of the rectangle slightly before and slightly after the middle of a pixel: when it is slightly before the middle of the pixel, I don't get a negative value, but when it is slightly after the middle, I do. This contradicts what the website I linked to says - perhaps it's driver-dependent?
Edit: the previous test was done with multisampling on. If I turn multisampling off, then even if the shape starts past the middle of the pixel, I don't get a negative value...

Turns out I just needed to keep reading the article I linked:
This is where multisampling becomes interesting. We determined that 2 subsamples were covered by the triangle so the next step is to determine a color for this specific pixel. Our initial guess would be that we run the fragment shader for each covered subsample and later average the colors of each subsample per pixel. In this case we'd run the fragment shader twice on the interpolated vertex data at each subsample and store the resulting color in those sample points. This is (fortunately) not how it works, because this basically means we need to run a lot more fragment shaders than without multisampling, drastically reducing performance.
How MSAA really works is that the fragment shader is only run once per pixel (for each primitive) regardless of how many subsamples the triangle covers. The fragment shader is run with the vertex data interpolated to the center of the pixel and the resulting color is then stored inside each of the covered subsamples. Once the color buffer's subsamples are filled with all the colors of the primitives we've rendered, all these colors are then averaged per pixel resulting in a single color per pixel. Because only two of the 4 samples were covered in the previous image, the color of the pixel was averaged with the triangle's color and the color stored at the other 2 sample points (in this case: the clear color) resulting in a light blue-ish color.
So I was getting a negative value because the fragment shader was run for a pixel that had at least one of its sub-sample points covered by the shape, while the shape itself started slightly after the pixel's mid-point; and since "the fragment shader is run with the vertex data interpolated to the center of the pixel", the interpolated coordinate came out negative.
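A practical aside: if those out-of-range values actually cause artifacts (for example when sampling an atlas), GLSL's centroid qualifier asks the GPU to interpolate at a covered sample instead of the pixel center, so the value stays inside the primitive. Something like this, with vTexCoord as an assumed varying name:

// vertex shader
centroid out vec2 vTexCoord;   // 'centroid' must match between stages

// fragment shader
centroid in vec2 vTexCoord;    // interpolated at a covered sample, never outside the primitive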

Related

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black, while keeping everything else the same, to create a grid effect - the same effect you get from wireframe mode, except without the diagonal line, and with the transparent parts keeping their normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in a pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
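A minimal GLSL sketch of that idea (the names, the #version 330 choice and the 0.1 threshold are placeholders; the vertex shader is assumed to pass positions through unchanged):

// geometry shader: tag each vertex of the triangle with a barycentric corner
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 vBary;

void main()
{
    const vec3 corners[3] = vec3[3](vec3(1.0, 0.0, 0.0),
                                    vec3(0.0, 1.0, 0.0),
                                    vec3(0.0, 0.0, 1.0));
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        vBary = corners[i];
        EmitVertex();
    }
    EndPrimitive();
}

// fragment shader: darken fragments that lie near any edge of the face
#version 330 core
in vec3 vBary;
out vec4 fragColor;

uniform vec4 fillColor;   // whatever your normal shading would produce (assumed)
uniform vec4 wireColor;   // e.g. black

void main()
{
    float edgeDistance = min(min(vBary.x, vBary.y), vBary.z);
    fragColor = (edgeDistance < 0.1) ? wireColor : fillColor;
}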
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that uses the same vertex buffer but, instead of groups of 3 for triangles, uses pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height offset to each point in the vertex shader so the grid appears above the terrain)
Render the Line List
This should produce the effect you are looking for: the terrain drawn and shaded, with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid, so it can draw the grid black instead of using the height values.
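In GLSL terms (the answer above uses D3D vocabulary) that switch can simply be a uniform; a rough sketch, with drawingGrid and vHeight as assumed names:

#version 330 core
in float vHeight;          // terrain height passed down from the vertex shader (assumed)
out vec4 fragColor;

uniform bool drawingGrid;  // set to true just before rendering the Line List

void main()
{
    if (drawingGrid)
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);   // grid pass: plain black lines
    else
        fragColor = vec4(vec3(vHeight), 1.0);   // normal pass: colour by height
}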

How does gl_PointSize work

Coming from a DX background, I am trying to understand exactly what gl_PointSize and gl_PointCoord are and how they work. I've searched online and the man pages, but haven't found a really good explanation of them. Say I have a 300x300 output buffer, and I define a vertex shader with 90,000 points, corresponding to every location in the 300x300 buffer (with increments of 1 in each dimension). Now, if I set gl_PointSize to 2 in the vertex shader, does it invoke the fragment shader 90,000 times or 360,000 times? If it's 360,000 times, I can understand what gl_PointCoord represents. But if it's only 90,000 times, does that mean each fragment output then represents 4 pixels? And in that case, what does gl_PointCoord represent? Wouldn't it always be (0.5, 0.5) and not really useful?
Thanks
Section 14.4.1 "Basic Point Rasterization" of the OpenGL 4.5 core profile specification states:
Point rasterization produces a fragment for each framebuffer pixel
whose center lies inside a square centered at the point's (x_w, y_w),
with side length equal to the current point size.
So in the case of a point size > 1, several fragments might be generated per point. The fragment shader is invoked at least once per fragment. If we keep multisampling out of the picture, it is invoked exactly once per fragment, so 360,000 times is the correct answer.
Note that this also ignores the early depth test, which might or might not play a role in the scene you're describing, as it can discard fragments before the FS is invoked.
From the same section of the spec:
All fragments produced in rasterizing a point sprite are assigned the
same associated data, which are those of the vertex corresponding to
the point. However, the fragment shader built-in gl_PointCoord
contains point sprite texture coordinates. The s point sprite
texture coordinate varies from zero to one across the point
horizontally left-to-right. If POINT_SPRITE_COORD_ORIGIN is
LOWER_LEFT, the t coordinate varies from zero to one vertically
bottom-to-top. Otherwise if the point sprite texture coordinate origin
is UPPER_LEFT, the t coordinate varies from zero to one vertically
top-to-bottom. [...]
So the point coordinate represents exactly what you would expect it to. Note that this definition means the point coordinate does not necessarily end up as 0.5 even if the point size is 1 and no multisampling is used. In that case, the fragment shader is invoked for pixel centers, but the point might not lie exactly on a pixel center, so you can see where exactly inside the 1x1-pixel point you are sampling.
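As a concrete use of gl_PointCoord, here is a small fragment shader that rounds each point sprite into a disc; pointColor is an assumed uniform, and in a core profile you also need glEnable(GL_PROGRAM_POINT_SIZE) plus a gl_PointSize write in the vertex shader:

#version 330 core
out vec4 fragColor;

uniform vec4 pointColor;

void main()
{
    // gl_PointCoord runs from 0 to 1 across the point sprite in each axis.
    vec2 fromCenter = gl_PointCoord - vec2(0.5);
    if (dot(fromCenter, fromCenter) > 0.25)   // outside a circle of radius 0.5
        discard;
    fragColor = pointColor;
}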

OpenGL shader effect

I need an efficient OpenGL pipeline to achieve a specific look for line segment shapes.
This is a look I am aiming for:
(https://www.shadertoy.com/view/XdX3WN)
This is one of the primitives (spiral) I already have inside my program:
Inside gl_FragColor for this picture I am outputting the distance from the fragment to the camera. The pipeline for this is the usual VBO -> VAO -> vertex shader -> fragment shader path.
The Shadertoy shader calculates the distance to the 3 points in every fragment of the screen and outputs a color based on that. But in my example I would need this in reverse: calculate the color of the surrounding fragments for every fragment of the spiral (in this case). Is it necessary to render the scene into a texture using an FBO, or is there a shortcut?
In the end I used:
CatmullRom spline interpolation to get point data from control points
Build VBO from above points
Vertex shader: pass the point position data through
Geometry shader: emit a sprite-sized quad for every point (see the sketch below)
Fragment shader: use an exp function to get a smooth gradient color from the center of the sprite quad
Result is something like this:
with:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive blend
It renders to an FBO with GL_RGBA16 for more smoothness.
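A rough sketch of the geometry and fragment stages (spriteSize, glowColor and the falloff constant are assumptions, and the geometry shader expands points that are already in clip space):

// geometry shader: turn each spline point into a small screen-aligned quad
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float spriteSize;   // half-size of the sprite quad (assumed)
out vec2 vOffset;           // position inside the quad, -1..1

void main()
{
    vec4 center = gl_in[0].gl_Position;
    for (int y = 0; y < 2; ++y)
    for (int x = 0; x < 2; ++x)
    {
        vOffset = vec2(x, y) * 2.0 - 1.0;
        // scale by w so the quad keeps a constant on-screen size
        gl_Position = center + vec4(vOffset * spriteSize * center.w, 0.0, 0.0);
        EmitVertex();
    }
    EndPrimitive();
}

// fragment shader: exponential falloff from the quad center, additively blended
#version 330 core
in vec2 vOffset;
out vec4 fragColor;

uniform vec3 glowColor;     // assumed

void main()
{
    float d2 = dot(vOffset, vOffset);       // 0 at the center, ~1 at the quad edge
    float intensity = exp(-6.0 * d2);       // smooth gradient; 6.0 is a tunable guess
    fragColor = vec4(glowColor * intensity, intensity);
}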
For a small, limited number of lines
use a single quad covering the area (or the whole screen) as geometry and send the line endpoint coordinates and colors to the shader as 1D texture(s) or uniforms. Then you can do the computation inside the fragment shader, per pixel and for all lines at once. A higher line count will slow things down considerably.
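A sketch of that single-quad approach with the line data in plain uniform arrays (all names, the 64-line cap and the falloff constant are assumptions; vScreenPos is whatever space your line endpoints live in):

#version 330 core
in vec2 vScreenPos;          // interpolated across the full-screen quad (assumed)
out vec4 fragColor;

uniform int  lineCount;
uniform vec2 lineA[64];      // segment start points
uniform vec2 lineB[64];      // segment end points
uniform vec3 lineColor[64];

float distToSegment(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

void main()
{
    vec3 color = vec3(0.0);
    for (int i = 0; i < lineCount; ++i)
    {
        float d = distToSegment(vScreenPos, lineA[i], lineB[i]);
        color += lineColor[i] * exp(-40.0 * d);   // glow fades with perpendicular distance
    }
    fragColor = vec4(color, 1.0);
}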
For a higher number of lines
you need to convert your geometry from lines to rectangles covering the affected surroundings of each line:
Use transparency to merge the lines correctly and compute the color from the perpendicular distance to the line. Add the dots from the distance to the endpoints (this can be done with a texture instead of the shader).
Your image suggests that the light affects the whole screen, so in that case you need to draw a quad covering the whole screen per line instead of a rectangle-sized coverage.

Why fragments do not necessarily correspond to pixels one to one?

Here is a good explanation of what a fragment is:
https://gamedev.stackexchange.com/questions/8977/what-is-a-fragment/8981#8981
But here (and not only here) I have read that "I want to stress the fact that one pixel is not necessarily one fragment, multiple fragment can be combined to make one pixel....". But I don't clearly understand what fragments are and why they do not necessarily correspond to pixels one to one.
EDIT: When multiple fragments form one pixel, is that only the case when they overlap after projection, or is it because the pixel is bigger than the fragment, so you need to put multiple fragments with the same color next to each other to form a pixel?
A fragment has a location that can be queried via the built-in gl_FragCoord variable, whose x and y components directly correspond to pixels on your screen. So you could say that a fragment indeed corresponds to a pixel.
However, a fragment outputs a color and stores that color in the color buffer at its coordinates. This does not mean that this color is the actual pixel color shown to the viewer.
Because the fragment shader is run per object, it can happen that other objects drawn after your first object also output a fragment at the same screen coordinate. When taking depth testing, stencil testing and blending into account, the resulting color value in the color buffer might get overwritten and/or merged with new colors.
Think of it like this:
Object 1 gets drawn and outputs the color purple at screen coordinate (200, 300);
Object 2 gets drawn and outputs the color red at the same coordinate, overwriting it;
Object 3 (blue) is drawn at the same coordinate with 50% transparency, and its color is blended in;
The final color output is then a 50/50 combination of red and blue.
The final resulting pixel could then be a color from a single fragment shader run, a color that is overwritten by many other fragment shader runs, or a combination of colors via blending.
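Assuming the standard alpha blend function, glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), the merge in the example above works out as:

// what the blend stage computes per covered sample:
//   result.rgb = src.rgb * src.a + dst.rgb * (1.0 - src.a)
// object 3 (blue, 50% alpha) over object 2's red:
//   result.rgb = vec3(0.0, 0.0, 1.0) * 0.5 + vec3(1.0, 0.0, 0.0) * 0.5
//              = vec3(0.5, 0.0, 0.5)     // the red/blue mix described above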
A fragment is not equal to a pixel when multi sample anti-aliasing (MSAA) or any of the other modes that change the ratio of rendered pixels to screen pixels is activated.
In the case of 4x MSAA, each screen pixel will be represented by 4 (2x2) fragments in the display buffer. The fragment shader for a particular polygon will only be run once for the screen pixel no matter how many of the fragments are covered by the polygon. Since a polygon may not cover all the fragments within a pixel it will only store color into the fragments it covers. This is repeated for every polygon that may cover one or more of the fragments. Then at the final display all 4 fragments are blended to produce the final screen pixel.

Repeating part of texture over another texture

So I'm trying to repeat a part of one texture over another in GLSL, as a first step in a grander scheme.
I have an image, 2048x2048, with 3 textures in the top-left area, each 512x512. For testing purposes I'm trying to just repeatedly draw the first one.
//get coord of smaller texture
coord = vec2(int(gl_TexCoord[0].s)%512,int(gl_TexCoord[0].t)%512);
//grab color from it and return it
fragment = texture2D(textures, coord);
gl_FragColor = fragment;
It seems that it only grabs the same pixel; I get a single color back from the texture and everything ends up grey. Anyone know what's off?
Unless that's a rectangle texture (which it isn't, since you're using texture2D), your texture coordinates are normalized. That means that the range [0, 1] maps to the entire range of the texture: 0.5 always means halfway, whether for a 256-sized texture or an 8192 one.
Therefore, you need to stop passing non-normalized texture coordinates (texel values). Pass normalized texture coordinates and adjust those.
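A sketch of the corrected shader in the question's GLSL style; the 4x4 tiling factor is an assumption, and 512/2048 = 0.25 is the normalized size of one sub-texture (which tile that range actually lands on depends on how the image rows are uploaded):

#version 120
uniform sampler2D textures;   // the 2048x2048 atlas

void main()
{
    // repeat the pattern: fract() wraps the scaled coordinate back into [0, 1)
    vec2 tiled = fract(gl_TexCoord[0].st * 4.0);
    // remap into the first 512x512 tile of the atlas (a quarter of each axis)
    vec2 coord = tiled * 0.25;
    gl_FragColor = texture2D(textures, coord);
}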