Get surface size in Fragment Shader (GLSL) - opengl

I'm writing a shader for my mesh, which has triangles of very different sizes in 3 dimensions. If I "blur" or feather in my fragment stage, I therefore get the same amount of blur regardless of the size of the triangle being rasterized. Some triangles are huge, some are tiny.
I want a reference value so the blur stays proportional to the triangle.
Is there a way to get the size of the triangle, so that I have a scale alongside my UV coordinates?
Please see this image to show what I mean: here. I want to get the result of #3.
I'm working in OpenGL, GLSL.
Thanks for your help!

I solved it with a Geometry shader.
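A minimal sketch of that kind of geometry-shader fix (not the asker's actual code; variable names are made up, and it assumes the vertex shader forwards a world-space position plus the UVs). The geometry shader measures each triangle and hands the size to every fragment:

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vWorldPos[];   // world-space positions from the vertex shader (assumed)
in vec2 vUV[];         // UVs from the vertex shader
out vec2 gUV;
out float gTriScale;   // per-triangle size reference, constant over the face

void main() {
    // use the longest edge in world units as the size reference (the area works too)
    float scale = max(length(vWorldPos[1] - vWorldPos[0]),
                  max(length(vWorldPos[2] - vWorldPos[1]),
                      length(vWorldPos[0] - vWorldPos[2])));
    for (int i = 0; i < 3; ++i) {
        gUV         = vUV[i];
        gTriScale   = scale;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}

In the fragment shader, dividing the feather width by gTriScale keeps the blur proportional for both huge and tiny triangles.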

Related

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black, while keeping everything else the same, to create a grid effect. It's the same effect you get from wireframe mode, except without the diagonal line, and the transparent parts should show the normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
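A minimal sketch of the idea (GLSL 3.30, illustrative only; it just forwards positions, a real shader would also pass through normals, UVs, and so on):

// geometry shader (sketch)
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 gBary;   // barycentric coordinate of this corner

void main() {
    vec3 bary[3] = vec3[3](vec3(1,0,0), vec3(0,1,0), vec3(0,0,1));
    for (int i = 0; i < 3; ++i) {
        gBary       = bary[i];
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}

// fragment shader (sketch)
#version 330 core
in vec3 gBary;
out vec4 fragColor;

void main() {
    vec4 shaded = vec4(1.0);                        // your normal lighting result goes here
    vec4 wire   = vec4(0.0, 0.0, 0.0, 1.0);         // wireframe colour
    float edgeDist = min(gBary.x, min(gBary.y, gBary.z));
    fragColor = (edgeDist < 0.1) ? wire : shaded;   // 0.1 is the threshold from the answer
}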
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that utilizes the vertex buffer but instead of making groups of 3 for triangles, use pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height offset to each point in the vertex shader so the grid appears above the terrain)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
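As a rough illustration of the index-buffer step (C++; it assumes the vertex buffer holds (W+1) x (H+1) shared grid points in row-major order - with the asker's six-vertices-per-square layout the indexing would need adjusting):

#include <vector>

// Build a GL_LINES index buffer for the grid edges (right and downward from each point).
// Assumes (W+1) x (H+1) shared grid points stored row-major.
std::vector<unsigned int> BuildGridLineIndices(unsigned int W, unsigned int H)
{
    const unsigned int cols = W + 1, rows = H + 1;   // grid points, not squares
    std::vector<unsigned int> lines;
    for (unsigned int y = 0; y < rows; ++y) {
        for (unsigned int x = 0; x < cols; ++x) {
            const unsigned int i = y * cols + x;
            if (x + 1 < cols) { lines.push_back(i); lines.push_back(i + 1);    } // edge to the right
            if (y + 1 < rows) { lines.push_back(i); lines.push_back(i + cols); } // edge downward
        }
    }
    return lines;   // draw with glDrawElements(GL_LINES, lines.size(), GL_UNSIGNED_INT, 0)
}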

OpenGL: blur only one part of the texture; can using vertex shader speed up?

Let's say there is one texture: 6000x6000
I only need to blur one part, let's say the center rectangle 100x100
If I use the vertex shader to map the area of interest onto this center rectangle, by passing in the coordinates of the 4 corner points and their corresponding texture coordinates in the big texture, I think the fragment shader will only process the pixels in the center rectangle.
In my understanding, a regular GPU cannot really handle 6000x6000 pixels concurrently; it will divide the work into several batches.
Now with 100x100, all the pixels can be processed simultaneously, so it would be faster.
Is my understanding correct?
You can do a "render to texture", so you can use your "vertex shader" to select the area you want to blur... and then your fragment shader will apply the blur only in that area.
Your understanding seems to be correct: the GPU will only spend effort processing the fragments INSIDE the area determined by your vertex shader, so if you set your vertices to cover a subset of your target (just like the screen, your target may be a texture, via framebuffers), then the GPU will process only the desired area.
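As a rough sketch of what that means in practice (all the numbers and names here are assumptions):

// Place a quad over just the 100x100 region so the blur fragment shader
// only ever runs for those fragments (all values here are assumptions).
const float texSize = 6000.0f, region = 100.0f;
const float cx = 3000.0f, cy = 3000.0f;          // centre of the area to blur, in texels
auto toNDC = [&](float t) { return t / texSize * 2.0f - 1.0f; };
const float x0 = toNDC(cx - region * 0.5f), x1 = toNDC(cx + region * 0.5f);
const float y0 = toNDC(cy - region * 0.5f), y1 = toNDC(cy + region * 0.5f);
// Build a two-triangle quad spanning (x0,y0)-(x1,y1), give it UVs covering the
// same 100x100 sub-rectangle of the big texture, and draw it into the FBO with
// the blur shader bound. No fragments are generated outside the quad, so the
// rest of the 6000x6000 texture costs nothing in this pass.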

OpenGL shader effect

I need an efficient OpenGL pipeline to achieve a specific look for line segment shapes.
This is a look I am aiming for:
(https://www.shadertoy.com/view/XdX3WN)
This is one of the primitives (spiral) I already have inside my program:
Inside gl_FragColor for this picture I am outputting distance from fragment to camera. The pipeline for this is the usual VBO->VAO->Vertex shader->Fragment shader path.
The Shadertoy shader calculates the distance to the 3 points in every fragment of the screen and outputs the color accordingly. But in my example I would need this in reverse: calculate the color of the surrounding fragments for every fragment of the spiral (in this case). Is it necessary to render the scene into a texture using an FBO, or is there a shortcut?
In the end I used:
CatmullRom spline interpolation to get point data from control points
Build VBO from above points
Vertex shader: pass point position data
Geometry shader: emit sprite size quads for every point
Fragment shader: use exp function to get a smooth gradient color from the center of the sprite quad
Result is something like this:
with:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive blend
It renders to FBO with GL_RGBA16 for more smoothness.
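A condensed sketch of the geometry and fragment stages described above (not the actual code; uniform names and the falloff constant are illustrative):

// geometry shader (sketch): emit one screen-aligned quad per input point
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float spriteSize;   // half-size of the sprite in clip space (assumed)
out vec2 gOffset;           // -1..1 position inside the sprite

void main() {
    vec4 c = gl_in[0].gl_Position;
    vec2 corners[4] = vec2[4](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
    for (int i = 0; i < 4; ++i) {
        gOffset = corners[i];
        // multiply the offset by c.w for a constant on-screen size under perspective
        gl_Position = c + vec4(corners[i] * spriteSize, 0.0, 0.0);
        EmitVertex();
    }
    EndPrimitive();
}

// fragment shader (sketch): exponential falloff from the sprite centre, additively blended
#version 330 core
in vec2 gOffset;
out vec4 fragColor;
uniform vec3 lineColor;

void main() {
    float falloff = exp(-4.0 * dot(gOffset, gOffset));   // smooth gradient from the centre
    fragColor = vec4(lineColor * falloff, falloff);
}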
For a small, limited number of lines
use a single quad covering the area or the screen as geometry and send the lines' point coordinates and colors to the shader as 1D texture(s) or uniforms. Then you can do the computation inside the fragment shader, per pixel and for all lines at once. A higher line count will slow things down considerably.
For a higher number of lines
you need to convert your geometry from lines to rectangles covering the affected surroundings of each line:
use transparency to merge the lines correctly and compute the color from the perpendicular distance to the line. Add the end-point dots based on the distance to the endpoints (this can be done with a texture instead of the shader).
Your image suggests that the light affects the whole screen, so in that case you need to draw a quad covering the whole screen for each line instead of a smaller rectangle.
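For the rectangle-per-line variant, the per-fragment math could look roughly like this (names and the falloff constant are illustrative):

// fragment shader for one line's coverage quad (names and constants are illustrative)
#version 330 core
uniform vec2 A, B;             // the segment's endpoints, in the same space as fragPos
uniform vec3 glowColor;
in vec2 fragPos;               // interpolated position of this fragment
out vec4 fragColor;

void main() {
    vec2  ab = B - A;
    float t  = clamp(dot(fragPos - A, ab) / dot(ab, ab), 0.0, 1.0);
    float d  = distance(fragPos, A + t * ab);   // distance to the segment; also gives the end dots
    float glow = exp(-20.0 * d);                // arbitrary falloff constant
    fragColor = vec4(glowColor * glow, glow);   // merged with the other lines via blending
}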

Approach for writing a GLSL fragment shader with a solid color per triangle/face

I have vertex and triangle data which contains a color for each triangle (face), not for each vertex. i.e. A single vertex is shared by multiple faces, each face potentially a different color.
How should I approach this problem in GLSL to obtain a solid color assignment for each face being rendered? Calculating and assigning a "vertex color" buffer by averaging the colors of a vertex's neighboring polys is easy enough, but this of course produces a blurry result where the colors are interpolated in the fragment shader.
What I really need shouldn't be interpolated color values at all; once this is working as intended, I'll have about 40k triangles shaded with roughly 15 possible solid colors.
While you maybe could do this in high end GLSL, the right way to do solid shading is to make unique vertices for every triangle. This is a trivial loop. For every vertex, count how many triangles share it. That's how often you have to replicate it. Make sure your loop to do this is O(n). Then just set each vertex color or normal to that of the triangle. Again one straight loop. Do not bother to optimize for shared colors, it is not worth it.
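A rough C++ sketch of that loop (the container names and the vertex layout are assumptions):

#include <vector>
#include <cstddef>

struct Color  { float r, g, b; };              // assumed per-face colour layout
struct Vertex { float px, py, pz, r, g, b; };  // assumed vertex layout

// Replicate shared vertices so every triangle owns its three corners,
// then stamp the face colour onto them. One pass over the indices: O(n).
std::vector<Vertex> Deindex(const std::vector<Vertex>& verts,
                            const std::vector<unsigned int>& indices,
                            const std::vector<Color>& faceColors)
{
    std::vector<Vertex> flat;
    flat.reserve(indices.size());                    // one output vertex per index
    for (std::size_t i = 0; i < indices.size(); i += 3) {
        const Color& c = faceColors[i / 3];          // colour of this face
        for (std::size_t k = 0; k < 3; ++k) {
            Vertex v = verts[indices[i + k]];        // copy the shared vertex
            v.r = c.r; v.g = c.g; v.b = c.b;         // give it the face colour
            flat.push_back(v);
        }
    }
    return flat;   // draw with glDrawArrays(GL_TRIANGLES, ...), no index buffer needed
}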
Edit much later, because this is a popular answer:
To do flat per-face shading you can interpolate the vertex position in world or view space. Then in the fragment shader compute ddx (dFdx in GLSL) and ddy (dFdy) of this variable. Take the cross product of those two vectors and normalize it - you get a flat normal! No mesh changes or per-vertex data are needed at all.
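A minimal GLSL sketch of that derivative trick (it assumes the vertex shader passes the view-space position; the light direction is made up):

// fragment shader; vViewPos is the interpolated view-space position from the vertex shader (assumed)
#version 330 core
in vec3 vViewPos;
out vec4 fragColor;

void main() {
    // dFdx/dFdy of the interpolated position lie in the triangle's plane,
    // so their cross product is the face normal (flip it if your winding differs)
    vec3 flatNormal = normalize(cross(dFdx(vViewPos), dFdy(vViewPos)));
    float ndl = max(dot(flatNormal, normalize(vec3(0.3, 0.8, 0.5))), 0.0);   // made-up light direction
    fragColor = vec4(vec3(ndl), 1.0);
}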
OpenGL does not have "per-face" attributes. See:
How can I specify per-face colors when using indexed vertex arrays in OpenGL 3.x?
Here are a few possible options I see:
Ditch the index arrays and use separate vertices for each face like starmole suggested
Create an index array for each color used. Use materials instead of vertex colors and change the material after drawing the triangles from the index array for each color.
If the geometry allows it, you can make sure the last vertex specified by the index array has the correct vertex color for the face, and then use GL_FLAT shading, or declare the color varying with the flat interpolation qualifier so the fragment shader only sees that last (provoking) vertex's color.
In addition to the other answers, you could maybe employ the gl_PrimitiveID variable, which is an input to the fragment shader (I don't know since which version) and is incremented implicitly for each triangle. You could then use this to look up the color (either from a 40k-entry buffer texture of colors, or color indices into a 15-color map, or just some direct computation from the primitive ID). But don't ask me about the performance of this approach.
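A minimal sketch of that idea, assuming the face colors live in a buffer texture with one texel per triangle, in draw-call order (gl_PrimitiveID counts primitives within the current draw call):

// fragment shader (sketch)
#version 330 core
uniform samplerBuffer faceColors;   // one RGBA texel per triangle, in draw order (assumed layout)
out vec4 fragColor;

void main() {
    fragColor = texelFetch(faceColors, gl_PrimitiveID);
}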

C++ shader optimization question

Could someone explain to me the basics of pixel and vertex shader interaction?
The obvious part is that vertex shaders receive basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex->pixel transition happen? I know that all types of pipelines include the rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on the given texture coordinates.
And as far as I understand, those are also interpolated (not quite sure about this; I've heard something about complex UV derivative math, but I assume we can say that they are interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? The pixel shader obviously does its work "per pixel", but the unobvious vertex->pixel transition raises some questions.
Can I assume that if I evaluate a matrix-vector product once in my pixel shader, it will only be evaluated once when the image is rasterized? Or would it be better to evaluate everything possible in my vertex shader and then pass it on to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it actually didn't matter, because the interaction should be pretty much the same everywhere. I'm developing visualization applications and games for desktops, using HLSL / GLSL / NVIDIA Cg for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world space coordinates (or whichever other coordinate system it might be in) into screenspace coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.
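To illustrate the practical consequence (a sketch, not anyone's actual shaders): hoisting a matrix * vector product into the vertex shader means it runs once per vertex, and only the interpolated result reaches the per-pixel stage.

// vertex shader (illustrative): runs once per vertex
#version 330 core
uniform mat4 modelView, projection;
in vec4 position;
in vec3 normal;
out vec3 viewNormal;              // the rasterizer interpolates this for every pixel

void main() {
    viewNormal  = mat3(modelView) * normal;           // evaluated per vertex, not per pixel
    gl_Position = projection * modelView * position;
}

// fragment shader (illustrative): runs once per covered pixel
#version 330 core
in vec3 viewNormal;               // already transformed, just interpolated
out vec4 fragColor;

void main() {
    fragColor = vec4(normalize(viewNormal) * 0.5 + 0.5, 1.0);
}

Only values that interpolate linearly across the triangle can be moved like this; anything non-linear (normalization, specular terms, and so on) still has to be done per fragment, which is why the normalize stays in the fragment shader here.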