Is there a way to split a GL line once it is in the shader? - glsl

I have a view where I can display the Earth and satellites either in a 3D view or a 2D view (Mercator). I can send the XYZ data to the shader, and the shader does the appropriate display for 3D or 2D. In 2D, it does an XYZ to lat/lon/alt conversion in the shader. The problem is as follows: in the 3D view, everything is great:
The 2D view looks good, except for lines that in 3D cross the +/-180° longitude boundary. Such a line is of course drawn across the entire Earth. What would be better is if I could break the line into two segments (the proposed yellow lines below).
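For concreteness, a rough sketch of the kind of per-vertex conversion described above, assuming a spherical Earth; the attribute and uniform names here are made up:
// Hypothetical WebGL 1 (GLSL ES 1.00) vertex shader; names are illustrative only.
attribute vec3 a_position;         // vertex position in Earth-centered XYZ
uniform mat4 u_mvp;
uniform float u_earthRadius;
uniform bool u_use2D;              // false: 3D globe, true: 2D Mercator-style map
void main() {
    if (u_use2D) {
        float r   = length(a_position);
        float lat = asin(a_position.z / r);           // [-pi/2, pi/2]
        float lon = atan(a_position.y, a_position.x); // [-pi, pi], wraps at +/-180
        float alt = r - u_earthRadius;
        // A segment whose endpoints straddle lon = +/-pi gets stretched across the whole map here.
        gl_Position = u_mvp * vec4(lon, lat, alt, 1.0);
    } else {
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }
}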
I can't really fix this in the vertex shader, because I only have a single vertex to work with. All options I have come up with are ugly:
Move the LLA conversion to the CPU instead of the GPU, and split lines there. (Wastes CPU, and doubles the data.)
Duplicate all vertices to create "phantom segments", and turn segments on/off by changing transparency. (Up to 4x the data to the vertex shader, and lots of extra transparent lines for the fragment shader.)
Final limitation: this is WebGL, so I can't have a geometry shader.
Does anyone have some shader magic or cleverness so that I can keep this on the GPU?

Related

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black, while keeping everything else the same, to create a grid effect: the same effect you get from wireframe mode, except without the diagonal line, and with the parts that wireframe leaves transparent drawn in the normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
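A minimal fragment shader sketch of that threshold test; the varying names and the 0.1 cutoff are illustrative:
#version 330 core
in vec3 vBary;       // barycentric coords from the geometry shader: A=(1,0,0), B=(0,1,0), C=(0,0,1)
in vec3 vLitColor;   // stand-in for whatever your normal shading produces
out vec4 fragColor;
void main() {
    // Smallest barycentric component = proximity to the nearest edge of the face.
    float edgeFactor = min(min(vBary.x, vBary.y), vBary.z);
    if (edgeFactor < 0.1) {
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);   // near an edge: wireframe color
    } else {
        fragColor = vec4(vLitColor, 1.0);       // otherwise: light it as usual
    }
}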
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
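A rough sketch of that post-processing idea, assuming some per-face value (a flat colour or ID) was rendered to a texture in an earlier pass; the texture names and threshold are made up:
#version 330 core
uniform sampler2D uSceneColor;   // normally shaded scene
uniform sampler2D uFaceValue;    // per-face value from an earlier pass (assumed)
in vec2 vUV;
out vec4 fragColor;
void main() {
    float v = texture(uFaceValue, vUV).r;
    // Screen-space derivatives spike where v jumps to a neighbouring face.
    float edge = step(0.05, abs(dFdx(v)) + abs(dFdy(v)));   // illustrative threshold
    fragColor = vec4(mix(texture(uSceneColor, vUV).rgb, vec3(0.0), edge), 1.0);
}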
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that utilizes the vertex buffer but instead of making groups of 3 for triangles, use pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable Depth Culling (or add a small height value to each point in the vertex shader so the grid appears above the terrain)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
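In GLSL terms, the constant-buffer switch mentioned above boils down to a uniform flag; a minimal sketch with made-up names:
#version 330 core
uniform bool uGridPass;   // false: terrain pass, true: line-list grid pass
in float vHeight;         // terrain height from the vertex shader (assumed varying)
out vec4 fragColor;
vec3 heightToColor(float h) {
    // placeholder colour ramp; replace with your own height shading
    return mix(vec3(0.1, 0.4, 0.1), vec3(0.8, 0.8, 0.7), clamp(h, 0.0, 1.0));
}
void main() {
    fragColor = uGridPass ? vec4(0.0, 0.0, 0.0, 1.0)          // grid lines drawn black
                          : vec4(heightToColor(vHeight), 1.0); // normal terrain shading
}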

OpenGL shader effect

I need an efficient OpenGL pipeline to achieve a specific look for the line segment shapes.
This is a look I am aiming for:
(https://www.shadertoy.com/view/XdX3WN)
This is one of the primitives (spiral) I already have inside my program:
Inside gl_FragColor for this picture I am outputting distance from fragment to camera. The pipeline for this is the usual VBO->VAO->Vertex shader->Fragment shader path.
The Shadertoy shader calculates the distance to the 3 points in every fragment of the screen and outputs the color according to that. But in my example I would need this in reverse: calculate the color of the surrounding fragments for every fragment of the spiral (in this case). Is it necessary to render the scene into a texture using an FBO, or is there a shortcut?
In the end I used:
CatmullRom spline interpolation to get point data from control points
Build VBO from above points
Vertex shader: pass point position data
Geometry shader: emit sprite-sized quads for every point
Fragment shader: use exp function to get a smooth gradient color from the center of the sprite quad
Result is something like this:
with:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive blend
It renders to an FBO with GL_RGBA16 for more smoothness.
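The fragment shader step above might look roughly like this; the varying, uniform names and falloff constant are made up:
#version 330 core
in vec2 vQuadUV;          // in [-1, 1] across the sprite quad, (0,0) at the centre
uniform vec3 uLineColor;
uniform float uFalloff;   // e.g. 4.0; higher values give a tighter glow
out vec4 fragColor;
void main() {
    float d2 = dot(vQuadUV, vQuadUV);        // squared distance from the quad centre
    float intensity = exp(-uFalloff * d2);   // smooth exponential gradient
    fragColor = vec4(uLineColor * intensity, intensity);   // additive-blend friendly
}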
For small limited number of lines
use a single quad covering the area (or the whole screen) as geometry, and send the line points' coordinates and colors to the shader as 1D texture(s) or uniforms. Then you can do the computation inside the fragment shader per pixel, for all lines at once. A higher line count will slow things down considerably.
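A sketch of that per-pixel approach, with the line endpoints passed as uniforms; the array size, names and glow falloff are illustrative:
#version 330 core
#define MAX_LINES 8
uniform int  uLineCount;
uniform vec2 uLineA[MAX_LINES];     // segment start points in pixels (assumed)
uniform vec2 uLineB[MAX_LINES];     // segment end points in pixels
uniform vec3 uLineColor[MAX_LINES];
out vec4 fragColor;
float distToSegment(vec2 p, vec2 a, vec2 b) {
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}
void main() {
    vec3 color = vec3(0.0);
    for (int i = 0; i < MAX_LINES; ++i) {
        if (i >= uLineCount) break;
        float d = distToSegment(gl_FragCoord.xy, uLineA[i], uLineB[i]);
        color += uLineColor[i] * exp(-0.1 * d);   // each line adds its glow
    }
    fragColor = vec4(color, 1.0);
}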
For higher number of lines
you need to convert your geometry from lines to rectangles covering the affected surroundings of each line:
use transparency to merge the lines correctly, and compute the color from the perpendicular distance to the line. Add the endpoint dots based on the distance to the endpoints (this can be done with a texture instead of the shader).
Your image suggests that the light affects the whole screen, so in that case you need to render a quad covering the whole screen per line, instead of a rectangle covering just its surroundings.

Screen-space distance along line strip in GLSL

When rendering a line strip, how do I get the distance of a fragment to the start point of the whole strip along the line in pixels?
When rendering a single line segment between two points in 3D, the distance between those two points in screen space is simply the Euclidean distance between their 2D projections. If I render this segment, I can interpolate (layout qualifier noperspective in GLSL) the screen-space distance from the start point along the line for each fragment.
When rendering a line strip, however, this does not work, because in the geometry shader, I only have information about the start and end point of the current segment, not all previous segments. So what I can calculate with the method above is just the distance of each fragment to the start point of the line segment, not to the start point of the line strip. But this is what I want to achieve.
What do I need that for: stylized line rendering. E.g., coloring a polyline according to its screen coverage (length in pixels), adding a distance mark every 50 pixels, alternating multiple textures along the line strip, ...
What I currently do is:
project every point of the line beforehand on the CPU
calculate the lengths of all projected line segments in pixels
store the lengths in a buffer as vertex attribute (vertex 0 has distance 0, vertex 1 has the length of the segment 0->1, vertex 2 has the length 0->1 + 1->2, ...)
in the geometry shader, create the line segments and use the distances calculated on the CPU
interpolate the values without perspective correction for each fragment
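A minimal sketch of steps 3 to 5 of that list; the attribute and varying names are made up:
#version 330 core
layout(lines) in;
layout(line_strip, max_vertices = 2) out;
// Accumulated screen-space distance computed on the CPU, passed through the vertex shader.
in float vDistAlongStrip[];               // pixels from the start of the strip
noperspective out float gDistAlongStrip;  // interpolated without perspective correction
void main() {
    for (int i = 0; i < 2; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gDistAlongStrip = vDistAlongStrip[i];
        EmitVertex();
    }
    EndPrimitive();
}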
This works, but there has to be a better way to do this. It's not feasible to project a few hundred or thousand line points on the CPU each frame. Is there a smart way to calculate this in the geometry shader? What I have is the world-space position of the start and end point of the current line segment; I have the world-space distance of both points to the start point of the line strip along the line (again vertex attribute 0, 0->1, 0->1 + 1->2, ...); and I can provide any other uniform data about the line strip (total length in world-space units, number of segments, ...).
Edit: I do not want to compute the Euclidean distance to the start point of the line strip, but the distance along the whole line, i.e. the sum of the lengths of all projected line segments up to the current fragment.
I see two ways:
You can use the vertex shader. Add the start coordinates of the line strip as an additional attribute of each vertex of the line. The vertex shader can then compute the distance to the start point and pass it, interpolated, to the fragment shader. There you have to rescale it to obtain pixel values.
Or, you can tell the vertex shader about the start coordinates of each line strip using uniforms. There you would need to transform them the same way the vertex coordinates are transformed and pass them on to the fragment shader, where you would convert them to pixel coordinates and calculate the distance to the actual fragment.
I thought I was just missing something here and my task could be solved by simple perspective calculation magic. Thanks derhass for pointing out the pointlessness of my quest.
As always, formulating the problem was already half the solution. When you know what to look for ("continuous parameterization of a line"), you can stumble upon the paper
Forrester Cole and Adam Finkelstein. Two Fast Methods for High-Quality Line Visibility. IEEE Transactions on Visualization and Computer Graphics 16(5), February 2010.
which deals with this very problem. The solution is similar to what derhass already proposed. Cole and Finkelstein use a segment atlas which they calculate per frame on the GPU. It includes a list of all projected line points (among other attributes such as visibility) to keep track of the position along the line at each fragment. This segment atlas is computed in a framebuffer, as the paper (draft) dates back to 2009. Implementing the same method in a compute shader and storing the results in a buffer seems like the way to go.

Difference between tessellation shaders and Geometry shaders

I'm trying to develop a high level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the Geometry shader exists. Both the Tessellation and Geometry shaders seem to do the same thing to me. Can someone explain to me what does the Geometry shader do different from the tessellation shader that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information, so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
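For example, a minimal point-to-quad geometry shader of that kind; the size handling and names are illustrative:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform vec2 uHalfSize;   // half-size of the sprite, in NDC units for brevity
out vec2 gUV;             // [-1, 1] across the quad, for the fragment shader
void main() {
    vec4 c = gl_in[0].gl_Position;
    vec2 offsets[4] = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                              vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i) {
        gUV = offsets[i];
        // Scale by c.w so the offset survives the perspective divide unchanged.
        gl_Position = c + vec4(offsets[i] * uHalfSize * c.w, 0.0, 0.0);
        EmitVertex();
    }
    EndPrimitive();
}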
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example you can render cube shadow maps for point lights in a single pass by duplicating and projecting geometry 6 times to each of the cube's faces.
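A hedged sketch of that single-pass cube-map routing; the matrix uniform and varying names are made up:
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;   // 3 vertices x 6 cube faces
uniform mat4 uFaceViewProj[6];   // one view-projection matrix per cube face (assumed)
in vec4 vWorldPos[];             // world-space positions from the vertex shader
void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face;   // route the primitive to this cube face / array layer
        for (int i = 0; i < 3; ++i) {
            gl_Position = uFaceViewProj[face] * vWorldPos[i];
            EmitVertex();
        }
        EndPrimitive();
    }
}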
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/

Drawing a mix of quads and triangles using the geometry shader and lines_adjacency

My current rendering implementation is as follows:
Store all vertex information as quads rather than triangles
For triangles, simply repeat the last vertex (i.e. v0 v1 v2 v2)
Pass vertex information as lines_adjacency to geometry shader
Check if quad or triangle, output as triangle_strip
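A sketch of the geometry shader for that scheme, assuming the repeated index produces a bit-identical position:
#version 330 core
layout(lines_adjacency) in;                   // 4 input vertices per primitive
layout(triangle_strip, max_vertices = 4) out;
void emitVert(int i) {
    gl_Position = gl_in[i].gl_Position;
    EmitVertex();
}
void main() {
    // v0 v1 v2 v2 marks a triangle; v0 v1 v2 v3 (perimeter order) is a quad.
    bool isTriangle = all(equal(gl_in[2].gl_Position, gl_in[3].gl_Position));
    emitVert(0);
    emitVert(1);
    if (isTriangle) {
        emitVert(2);              // triangle: v0 v1 v2
    } else {
        emitVert(3);              // quad as a strip: v0 v1 v3 v2
        emitVert(2);
    }
    EndPrimitive();
}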
The reason I went this route was because I was implementing a wireframe shader, and I wanted to draw the quads without a diagonal line through them. But, I've since discarded the feature.
I'm now wondering if I should go back to simply drawing GL_TRIANGLES, and leave the geometry shader out of the equation. But that got me thinking... what's actually more efficient from a performance point of view?
On average, my scenes are composed of quads and triangles in equal amounts.
Drawing with all triangles would mean: 6 vertices per quad, 3 per triangle.
Drawing with lines_adjacency would mean: 4 vertices per quad, 4 per triangle.
(This is with indexed drawing, so the vertex buffer is the same size for both of them)
So the vertex ratio is 9:8 (triangles : lines_adjacency).
Would I be correct in assuming that with indexed drawing, each vertex is only getting processed once by the vertex shader (as opposed to once per index)? In which case drawing triangles is going to be more efficient (since there isn't an extra geometry-shader step to perform), with the only negative being the slight amount of extra memory the indices take up.
Then again, if the vertices do get processed once per index, I could see the edge being with the lines_adjacency method, considering the geometry conversion is very simple, whilst the vertex shader might be running more intensive lighting calculations.
So that pretty much sums up my question: how do vertices get treated with indexed drawing, and what sort of performance impact could be expected if including a simple geometry shader?
Geometry shaders never improve efficiency in this sort of situation; they only complicate the primitive assembly process. When you use geometry shaders, the post-T&L cache no longer works the way it was originally designed.
While it is true that the geometry shader will reuse any shared (indexed) vertices transformed in the vertex shader stage when it needs to fetch vertex data, the geometry shader still computes and emits a unique set of vertices per-output-primitive.
Furthermore, because geometry shaders are allowed to emit a variable number of data points, they are unlike other shader stages. It is much more difficult to parallelize geometry shaders than vertex or fragment shaders. There are just too many negative things about geometry shaders for me to suggest using them unless you actually need them.