How do I detect a triangle edge and access the two vertices that form it? - glsl

I've seen other questions about drawing fragments only on the triangle edges using barycentric coordinates, but I need more than that, and I wonder whether there is another approach.
This is basically a shadow map render, and I want to write some additional results to the FBO color attachment (namely the plane equation formed by the light origin and the two edge vertices).
I can easily do this with a geometry shader that converts triangles to lines, but it's not pixel-exact with the triangle edge, and it also causes depth fighting that I can't accept.
I was hoping for a fragment shader trick that lets me render the triangles normally and still access the edge vertex coordinates there.
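One possible direction, building on the barycentric trick discussed in the answers below: have a geometry shader forward the triangle unchanged while attaching barycentric coordinates and all three vertex positions as flat varyings, so the fragment shader can decide which edge it is closest to and build the plane from that edge and the light origin. The following is only a rough GLSL sketch under those assumptions; vWorldPos, lightPos and the other names are illustrative, not from the question.

// geometry shader: pass the triangle through, adding barycentrics and corners
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 vWorldPos[];                 // world-space position from the vertex shader
out vec3 gBary;                      // barycentric coordinate of this vertex
flat out vec3 gV0, gV1, gV2;         // the triangle's corners, constant per triangle
void main() {
    vec3 bary[3] = vec3[3](vec3(1,0,0), vec3(0,1,0), vec3(0,0,1));
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gBary = bary[i];
        gV0 = vWorldPos[0];
        gV1 = vWorldPos[1];
        gV2 = vWorldPos[2];
        EmitVertex();
    }
    EndPrimitive();
}

// fragment shader: the smallest barycentric component marks the nearest edge;
// the other two corners are the vertices forming that edge
#version 150
in vec3 gBary;
flat in vec3 gV0, gV1, gV2;
uniform vec3 lightPos;               // light origin (illustrative)
out vec4 fragColor;
void main() {
    vec3 a, b;
    if (gBary.x <= gBary.y && gBary.x <= gBary.z)      { a = gV1; b = gV2; }
    else if (gBary.y <= gBary.x && gBary.y <= gBary.z) { a = gV0; b = gV2; }
    else                                               { a = gV0; b = gV1; }
    vec3 n = normalize(cross(a - lightPos, b - lightPos));
    fragColor = vec4(n, -dot(n, lightPos));             // plane: dot(n, p) + d = 0
}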

Related

In OpenGL, is it possible to draw the edges of the unoccluded triangles only?

I have a mesh to render with OpenGL. What I want is to render its edges, but only the ones of the un-occluded faces. However, I realize that this is not possible with only:
glEnable(GL_DEPTH_TEST); // Enable depth test
glDepthFunc(GL_LEQUAL); // Accept fragment if it is closer to the camera than the previous one
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
since there is no depth information between the edges, the edges of the occluded triangles are still rendered.
A workaround is to draw the triangles with GL_FILL first in the background color (in my case, white), and then draw the edges separately. But doing so results in artifacts similar to the z-fighting phenomenon, i.e., some edges seem thinner than others or even vanish, as shown below
On the left is what I have, and on the right is what I desire (viewed in MeshLab). Since depth test of triangles seems to be unavoidable in this case, I guess I am also asking:
How can I draw edges over triangles without the z-fighting artifacts?
Note that face culling is not useful, as it only eliminates faces facing backward, but cannot deal with occlusion.
Set a polygon offset for the first pass with glPolygonOffset:
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1, 1);
Disable the polygon offset for the 2nd pass:
glDisable(GL_POLYGON_OFFSET_FILL);
Polygon fill offset shifts the depth of a fragment by at least a minimum resolvable amount. This results in the depth of the fragments in the first pass being slightly greater than the depth of the same fragments in the second pass, which is how you get rid of the depth fighting.

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between the vertices black, while keeping everything else the same, to create a grid effect - the same effect you get from wireframe mode, except without the diagonal line, and with the transparent parts keeping their normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in a pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
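As a rough GLSL sketch of that fragment-shader test (assuming the geometry shader writes the barycentric coordinate into a varying here called bary - an illustrative name), one common variant scales the threshold by fwidth() so the line stays roughly constant width in screen space instead of using a fixed 0.1:

#version 150
in vec3 bary;                        // barycentric coordinate from the geometry shader
uniform vec3 wireColor;
uniform vec3 fillColor;
out vec4 fragColor;
void main() {
    // distance to the nearest edge, expressed in approximate screen pixels
    vec3 d = bary / fwidth(bary);
    float edgeDist = min(min(d.x, d.y), d.z);
    float t = smoothstep(0.0, 1.5, edgeDist);    // ~1.5 px wide, antialiased edge
    fragColor = vec4(mix(wireColor, fillColor, t), 1.0);
}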
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post-processing step. Look for large ddx()/ddy() (or dFdx()/dFdy(), depending on your API) derivatives of depth or normals in your fragment shader. That also lets you make some interesting effects.
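A minimal sketch of that post-processing idea, assuming the previous pass wrote depth into a texture that is sampled over a fullscreen quad (depthTex and uv are illustrative names, and the threshold is scene-dependent):

#version 150
uniform sampler2D depthTex;          // depth from the main pass
in vec2 uv;
out vec4 fragColor;
void main() {
    float d = texture(depthTex, uv).r;
    // large screen-space derivatives of depth indicate a geometric edge
    float edge = abs(dFdx(d)) + abs(dFdy(d));
    float line = step(0.001, edge);              // tune threshold per scene
    fragColor = vec4(vec3(1.0 - line), 1.0);     // black edges on white
}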
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that reuses that vertex buffer, but instead of grouping indices in threes for triangles, group them in pairs for line segments. This will be a line list and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height value to each point in the vertex shader so the grid appears above the terrain)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
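In GLSL terms (the same idea as the constant-buffer switch described above), that switch could simply be a uniform. The sketch below assumes the vertex shader passes a normalized terrain height; drawingGrid, height and the colours are illustrative:

#version 150
uniform bool drawingGrid;            // set to true before drawing the line list
in float height;                     // terrain height in [0,1] from the vertex shader
out vec4 fragColor;
void main() {
    if (drawingGrid) {
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);                                 // grid: plain black
    } else {
        fragColor = vec4(mix(vec3(0.1, 0.4, 0.1), vec3(1.0), height), 1.0);   // shade by height
    }
}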

OpenGL: coloring a world map?

Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color. Like this: image
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary the data used for coloring to produce different renderings.
Redrawing polygons for each rendering is the most straightforward solution, but it seems to be a waste, since the geometries do not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
For each vertex of a polygon, map a certain color. That means that when you send the data to the shaders, the vertex array object supplies two attributes per vertex: a position vector, which is needed in the vertex shader, and a color vector, which will be used as the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL. If you send its vertices to the vertex shader and set a color in the fragment shader, every time a vertex enters the shader pipeline it is positioned accordingly and drawn on screen with the given color from the fragment shader.
The technique, which I have only explained roughly (sorry, I am not the best at explanations), is the one used in the classic colored-triangle example in which the colors interpolate: red mapped to one corner, green to another, and blue to the last. If you map the same color to every corner, you get a uniformly colored triangle. That is the basic principle. And you draw the minimum number of triangles, and you only need one pair of shaders.
Note: a polygon is made up of N triangles, and you need to map the same color to every vertex of each triangle drawn in that polygon.
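A minimal GLSL sketch of that per-vertex-color setup (mvp, position and color are illustrative attribute and uniform names). Since the geometry never changes, re-coloring the map only requires updating the color attribute buffer:

// vertex shader: forward a per-vertex colour alongside the position
#version 150
uniform mat4 mvp;
in vec3 position;
in vec3 color;                       // same value for every vertex of a polygon
out vec3 vColor;
void main() {
    gl_Position = mvp * vec4(position, 1.0);
    vColor = color;
}

// fragment shader: output the colour as-is
#version 150
in vec3 vColor;
out vec4 fragColor;
void main() {
    fragColor = vec4(vColor, 1.0);
}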
I think a bigger issue will be that OpenGL doesn't support arbitrary polygons or vector drawing in general, but there are libraries for this. You'll have to use an existing solution for vector drawing, or failing that, you'll have to convert your GIS data (usually a list of points per polygon) to triangles. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then add logic to draw only what is visible inside your viewport, perhaps even going as far as generating geometry only for the visible area.
See also this question and its answers.
Rendering Vector Graphics in OpenGL?

Quad texture stretching on OpenGL

So when drawing a rectangle on OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0, 1), you'll get the standard rectangle.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect. Just like the following:
I saw from the page below that this can be fixed, but the solution given works for trapezoids only. Also, I have to do this over many rectangles.
And so, the question is: what is the proper, and most efficient, way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles and if you visualize this as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t), then r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 components or not.
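As a rough GLSL sketch of the projective-interpolation idea (assuming the application computes a per-vertex q on the CPU, e.g. from the ratio in which the quad's diagonals intersect, and uploads the coordinate pre-multiplied as (u*q, v*q, 0, q); texCoordQ and the other names are illustrative):

// vertex shader: pass the pre-multiplied 4D texture coordinate through unchanged
#version 150
uniform mat4 mvp;
in vec3 position;
in vec4 texCoordQ;                   // (u*q, v*q, 0, q), q computed per vertex on the CPU
out vec4 vTexCoordQ;
void main() {
    gl_Position = mvp * vec4(position, 1.0);
    vTexCoordQ = texCoordQ;
}

// fragment shader: textureProj divides by q per fragment, which turns the
// two-triangle interpolation into one that is smooth across the whole quad
#version 150
uniform sampler2D tex;
in vec4 vTexCoordQ;
out vec4 fragColor;
void main() {
    fragColor = textureProj(tex, vTexCoordQ);
}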

calculating normals for quad mesh

I have a struct QUAD that stores 4 pointers to 4 VECTOR3D (which contains 3 floats) so that I can draw the quad mesh.
From what I understand, whenever I draw a mesh I also need normals to properly light/shade it, and that is relatively easy when the mesh lies on a plane, using one normal per face.
When I have a 2 by 2 grid of quads lying on the XZ plane and I raise its centre (0,0,0) to a certain point, say (0, 4, 0), it starts to form a real 3D shape, and then I need to calculate the normals again. I'm having a hard time understanding how, and from what, the normals are to be calculated. As expected, the 3D shape is shaded as if it were still a flat mesh, so it does not represent the real shape. One explanation says I need to calculate normals per vertex instead of per face.
Does that mean I need to calculate normals for all corners of the mesh? And once I have the normals, what do I do with them? I was still using the old glBegin/glEnd methods, but now I feel like I need to use the glDrawArrays method. I'm deeply confused and I'm pretty sure I don't make much sense, but I'd much appreciate your help.
If you need a flat-looking surface, then your normals are simply the normals of each quad's plane. If you need a "soft-looking" surface, you need to blend the normals of the faces that share each vertex (read this and watch this cool, simple video) - that adds a sort of gradient.
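A minimal sketch of the vector math involved, written in GLSL syntax for consistency with the rest of this page, although the same computation is usually done once on the CPU while filling the vertex buffer (faceNormal and vertexNormal are illustrative names, and the per-vertex version assumes a vertex shared by four quads):

// flat shading: one normal per face, from two edges of the quad
vec3 faceNormal(vec3 a, vec3 b, vec3 c) {
    return normalize(cross(b - a, c - a));
}

// smooth shading: one normal per vertex, blended from the normals of the
// faces that share the vertex (here four quads meet at the vertex)
vec3 vertexNormal(vec3 n0, vec3 n1, vec3 n2, vec3 n3) {
    return normalize(n0 + n1 + n2 + n3);
}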