Screen-space distance along line strip in GLSL

When rendering a line strip, how do I get the distance of a fragment to the start point of the whole strip along the line in pixels?
When rendering a single line segment between two points in 3D, the distance between those two points in screen space is simply the Euclidean distance between their 2D projections. If I render this segment, I can interpolate (layout qualifier noperspective in GLSL) the screen-space distance from the start point along the line for each fragment.
When rendering a line strip, however, this does not work, because in the geometry shader I only have information about the start and end point of the current segment, not about all previous segments. So what I can calculate with the method above is just the distance of each fragment to the start point of the current line segment, not to the start point of the line strip, which is what I actually want.
What do I need that for: stylized line rendering. E.g., coloring a polyline according to its screen coverage (length in pixels), adding a distance mark every 50 pixels, alternating multiple textures along the line strip, ...
What I currently do is:
project every point of the line beforehand on the CPU
calculate the lengths of all projected line segments in pixels
store the lengths in a buffer as vertex attribute (vertex 0 has distance 0, vertex 1 has the length of the segment 0->1, vertex 2 has the length 0->1 + 1->2, ...)
in the geometry shader, create the line segments and use the distances calculated on the CPU
interpolate the values without perspective correction for each fragment
This works, but there has to be a better way to do this. It's not feasible to project a few hundred or thousand line points on the CPU each frame. Is there a smart way to calculate this in the geometry shader? What I have is the world-space position of the start and end point of the current line segment, the world-space distance of both points to the start point of the line strip along the line (again as a vertex attribute: 0, 0->1, 0->1 + 1->2, ...), and I can provide any other uniform data about the line strip (total length in world-space units, number of segments, ...).
Edit: I do not want to compute the Euclidean distance to the start point of the line strip, but the distance along the whole line, i.e. the sum of the lengths of all projected line segments up to the current fragment.
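For reference, the interpolation part of my current approach looks roughly like the following minimal sketch (omitting the geometry shader stage for brevity; the attribute name pixelDistance and uniform name mvp are just illustrative):
// vertex shader (sketch)
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in float pixelDistance;   // cumulative screen-space length up to this vertex, precomputed on the CPU
uniform mat4 mvp;
noperspective out float distAlongStrip;        // interpolated without perspective correction
void main() {
    distAlongStrip = pixelDistance;
    gl_Position = mvp * vec4(position, 1.0);
}
// fragment shader (sketch)
#version 330 core
noperspective in float distAlongStrip;
out vec4 fragColor;
void main() {
    // example use: a dark distance mark every 50 pixels along the strip
    fragColor = (mod(distAlongStrip, 50.0) < 2.0) ? vec4(vec3(0.0), 1.0) : vec4(1.0);
}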

I see two ways:
You can use the vertex shader. Add the start coordinates of the line strip as an additional attribute of each vertex of the line. The vertex shader can then compute the distance to the start point and pass it, interpolated, to the fragment shader, where you rescale it to pixel values.
Or you can tell the vertex shader about the start coordinates of each line strip via a uniform. You would transform them the same way the vertex coordinates are transformed and pass them on to the fragment shader, where you convert them to pixel coordinates and compute the distance to the current fragment.
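A minimal sketch of the first suggestion might look like this (the attribute stripStart and uniform viewportSize are placeholder names; as discussed below, this yields the straight-line screen-space distance to the start point, not the distance along the strip):
// vertex shader (sketch)
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 stripStart;   // start point of the whole strip, duplicated into every vertex
uniform mat4 mvp;
uniform vec2 viewportSize;                 // viewport size in pixels
noperspective out vec2 screenStart;        // strip start in window (pixel) coordinates
void main() {
    vec4 clipPos   = mvp * vec4(position, 1.0);
    vec4 clipStart = mvp * vec4(stripStart, 1.0);
    screenStart = (clipStart.xy / clipStart.w * 0.5 + 0.5) * viewportSize;
    gl_Position = clipPos;
}
// fragment shader (sketch)
#version 330 core
noperspective in vec2 screenStart;
out vec4 fragColor;
void main() {
    float d = distance(gl_FragCoord.xy, screenStart);  // straight-line pixel distance to the strip start
    fragColor = vec4(vec3(fract(d / 50.0)), 1.0);
}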

I thought I was just missing something here and my task could be solved by simple perspective calculation magic. Thanks derhass for pointing out the pointlessness of my quest.
As always, formulating the problem was already half the solution. When you know what to look for ("continuous parameterization of a line"), you can stumble upon the paper
Forrester Cole and Adam Finkelstein. Two Fast Methods for High-Quality Line Visibility. IEEE Transactions on Visualization and Computer Graphics 16(5), February 2010.
which deals with this very problem. The solution is similar to what derhass already proposed. Cole and Finkelstein use a segment atlas which they calculate per frame on the GPU. It includes a list of all projected line points (among other attributes such as visibility) to keep track of the position along the line at each fragment. This segment atlas is computed in a framebuffer, as the paper (draft) dates back to 2009. Implementing the same method in a compute shader and storing the results in a buffer seems like the way to go.

Related

Is there a way to split a gl line once it is in the shader?

I have a view where I can display the Earth and satellites either in a 3D view or in a 2D view (Mercator). I can send the XYZ data to the shader, and the shader will do the appropriate display for 3D or 2D. In 2D, it does an XYZ to lat/lon/alt conversion in the shader. The problem is as follows: In the 3D view, everything is great:
The 2D view looks good, except for lines that in 3D cross the +/-180 longitude boundary. Such a line is of course drawn across the entire Earth. What would be better is if I could break the line into 2 segments (the proposed yellow lines below).
I can't really fix this in the vertex shader, because I only have a single vertex to work with. All options I have come up with are ugly:
Move LLA conversion to CPU instead of GPU, and split lines there. (Waste of CPU, and doubling of data)
Duplicate vertices to create "phantom segments", and turn segments on/off by changing transparency (up to 4x the data sent to the vertex shader, and lots of extra transparent lines for the fragment shader).
Final limitation: This is webgl, so I can't have a geometry shader.
Does anyone have some shader magic or cleverness so that I can keep this on the GPU?

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black, while keeping everything else the same, to create a grid effect. The same effect you get from wireframe mode, except without the diagonal line, and the parts that would be transparent should keep their normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
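A minimal fragment shader sketch of that test, assuming the geometry shader writes the barycentric triple into a varying named vBary (the uniform names are placeholders for your own shading):
// fragment shader (sketch)
#version 330 core
in vec3 vBary;                 // barycentric coordinates emitted by the geometry shader
uniform vec3 wireColor;        // colour of the wireframe lines
uniform vec3 fillColor;        // stand-in for your normal surface shading
out vec4 fragColor;
void main() {
    float minBary = min(vBary.x, min(vBary.y, vBary.z));
    // close to an edge of the triangle -> draw the wire colour, otherwise shade normally
    fragColor = vec4(minBary < 0.1 ? wireColor : fillColor, 1.0);
}
In practice you would probably scale the threshold by fwidth(minBary) to get lines of constant screen-space width, but the simple threshold matches the description above.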
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that utilizes the vertex buffer but instead of making groups of 3 for triangles, use pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height offset to each point in the vertex shader so the grid appears above the terrain)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
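In GLSL terms (where the constant buffer becomes a uniform), that switch might look roughly like this sketch; the names drawingGrid and height are just placeholders:
// fragment shader (sketch)
#version 330 core
in float height;               // height value passed down from the vertex shader
uniform bool drawingGrid;      // set to true only for the line-list pass
out vec4 fragColor;
void main() {
    if (drawingGrid) {
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);   // grid lines in black
    } else {
        fragColor = vec4(vec3(height), 1.0);    // normal height-based terrain colour
    }
}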

Draw particle trajectories of undefined length with OpenGL

I have to draw a physical simulation that displays the trajectories of particles moving around. 3D position data are read from a database in real time while drawing. Once a VBO is set up for each object, the draw call is the standard glDrawArrays(GL_LINE_STRIP, 0, size). The problem is that the VBOs storing the trail points are updated every frame since new points are added. This seems extremely inefficient to me! Furthermore, what if I want to draw the trajectories with a gradient color from the particle's current position to the older points? I would have to update the color of all vertices in the VBO at every draw call! What is the standard way to handle this kind of thing?
To summarize:
I want to draw lines of undefined, potentially infinite, length (the length increases with time).
I want the color of the points along a trajectory to shade based on their relative position along it (for example white at the beginning (the particle's current position), black at the end (the particle's first position), grey in the middle).
I have read many tutorials but haven't found anything about drawing ever-updating and indefinitely growing lines... I will appreciate any suggestion! Thanks!
Use multiple VBOs so that you have a fixed number of vertices per VBO. That way you only have to modify the last VBO in the sequence when you add new points, instead of completely updating one giant VBO.
Add a sequence number vertex attribute or use gl_VertexID and pass in the total point count as a uniform. Then you can divide a given vertex's sequence number by the total count and use that fraction to mix between your gradient colors.
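A rough GLSL sketch of that second point, assuming the total point count arrives in a hypothetical uniform totalPoints:
// vertex shader (sketch)
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
uniform int totalPoints;       // total number of trajectory points drawn so far
out vec3 color;
void main() {
    // 0.0 at the oldest point, 1.0 at the newest; with several VBOs per trajectory
    // you would add a per-draw base offset to gl_VertexID
    float t = float(gl_VertexID) / float(max(totalPoints - 1, 1));
    color = mix(vec3(0.0), vec3(1.0), t);   // black (start of trajectory) to white (current position)
    gl_Position = mvp * vec4(position, 1.0);
}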

Why does OpenGL allow/use fractional values as the location of vertices?

As far as I understand, the location of a point/pixel cannot be a fraction, at least on a raster graphics system where the hardware uses pixels to display images.
Then, why and how does OpenGL use fractional values for plotting pixels?
For example, how is it possible: glVertex2f(0.15f, 0.51f); ?
This command does not plot any pixels. It merely defines the location of a point in 3D space (with glVertex2f the z coordinate defaults to 0, so the point still lives in 3D space, while for a pixel on the screen you would only need 2 coordinates). This is the starting point of the OpenGL pipeline. This point then goes through a lot of transformations before it ends up on the screen.
Also, the coordinates are unitless. For example, if you say that your viewport spans 0.0f to 1.0f, then these coordinates make a lot of sense. Basically you have to think of these points in terms of mathematics, not pixels.
I would suggest some reading on how the OpenGL transformation pipeline works.
The vectors you pass into OpenGL are not viewport positions but arbitrary numbers in some vector space. Only after a chain of transformations are these numbers mapped to viewport pixel positions. With the old fixed-function pipeline this could be anything representable by a vector-matrix multiplication.
These days, where everything is programmable (shaders), the mapping can very well be any kind of function you can think of. For example, the values you pass into glVertex (an immediate-mode call, but its values are available to shaders as gl_Vertex in OpenGL 2.1) may be interpreted as polar coordinates in the vertex shader:
The following is a perfectly valid OpenGL 2.1 vertex shader that interprets the vertex position as polar coordinates. Note that because triangles and lines have straight edges while polar coordinates are curvilinear, this gives good visual results only for points or highly tessellated primitives.
#version 110
void main() {
    gl_Position = gl_ModelViewProjectionMatrix
                * vec4(gl_Vertex.y * vec2(sin(gl_Vertex.x), cos(gl_Vertex.x)), 0.0, 1.0);
}
As you can see, the values passed to glVertex are actually arbitrary, unitless components of vectors in some vector space. Only by applying some transformation to viewport space do these vectors gain meaning. Hence it makes no sense to impose a certain value range on the values that go into a vertex attribute.
Vertex and pixel are very different things.
It's quite possible to have all your vertices within one pixel (although in this case you probably need help with LODing).
You might want to start here...
http://www.glprogramming.com/blue/ch01.html
Specifically...
Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with a vertex, and each vertex and its associated data are processed independently, in order, and in the same way.
And...
Rasterization produces a series of frame buffer addresses and associated values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed into the last stage, per-fragment operations, which performs the final operations on the data before it's stored as pixels in the frame buffer.
For your example, before glVertex2f(0.15f, 0.51f) ends up on the screen, many transforms have to be done. To crudely simplify a complex process: after moving your vertex to view space (applying the camera position and direction), the magic here is (1) the projection matrix and (2) the viewport setting.
Internally, OpenGL "screen coordinates" (normalized device coordinates) lie in the cube (-1, -1, -1) to (1, 1, 1):
http://www.matrix44.net/cms/wp-content/uploads/2011/03/ogl_coord_object_space_cube.png
The projection matrix "squeezes" the frustum into this cube (which you do in the vertex shader), assuming you have a perspective transform; if the projection is orthogonal, the viewing volume is just a box, limited by the near and far values (and, in both cases, scaling factors):
http://www.songho.ca/opengl/files/gl_projectionmatrix01.png
EDIT: Maybe better example here:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#The_Projection_matrix
(EDIT: the Z coordinate is used as the depth value.) When fragments are finally mapped to pixels on the texture/framebuffer/screen, they are scaled by the viewport settings:
https://www3.ntu.edu.sg/home/ehchua/programming/opengl/images/GL_2DViewportAspectRatio.png
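As a rough sketch of that last step, assuming the viewport was set with glViewport(0, 0, width, height), the mapping from normalized device coordinates to window pixel coordinates behaves like this helper (not part of OpenGL itself, just an illustration):
// maps NDC x/y in [-1,1] to window pixel coordinates in [0,width] x [0,height]
vec2 ndcToWindow(vec2 ndc, vec2 viewportSize) {
    return (ndc * 0.5 + 0.5) * viewportSize;
}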
Hope this helps!

Texture mapping over many triangles in a circle

After some help, I want to map a texture onto a circle, as you can see below.
I want to do it in such a way that the centre of the circle sits on the shared point of the triangles.
The triangles can change in size and number and will span varying angles, e.g. 45, 68 or 250 degrees, so only the part of the texture covered by the triangles can be seen.
It's basically a one-to-one mapping: shift the image to the left and you see only the part where there are triangles.
I'm not sure what this is called or what to google for; can anyone make some suggestions or point me to relevant information?
I was thinking I would have to generate the texture coordinates on the fly to select the relevant part, but it feels like I should be able to do a one-to-one mapping, which would be simpler than calculating triangles on the texture to map to the OpenGL triangles.
Generating texture coordinates for this isn't difficult. Each point of the polygon corresponds to a certain angle, so the i-th point's angle will be i*2*pi/N, where N is the order of the regular polygon (its number of sides). Then you can use the following to evaluate each point's texture coordinates:
texX = (cos(i*2*pi/N)+1)/2
texY = (sin(i*2*pi/N)+1)/2
Well, and the center point has (0.5, 0.5).
It may be even simpler to generate the coordinates in the shader, if you have a shader specifically for this:
I assume you get the vertex position as pos. It depends on how you store the polygon vertices, but let the centre be (0,0) and the other points range from (-1,-1) to (1,1). Then pos can simply be used as the texture coordinates with an offset:
vec2 texCoords = (pos + vec2(1,1))*0.5;
and pos itself should then go through the usual matrix-vector multiplication.
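Putting it together, a minimal vertex shader along those lines might look like this sketch (assuming pos is laid out as described above; the uniform name mvp is just a placeholder):
// vertex shader (sketch)
#version 330 core
layout(location = 0) in vec2 pos;   // centre at (0,0), rim points within [-1,1]
uniform mat4 mvp;
out vec2 texCoords;
void main() {
    texCoords = (pos + vec2(1.0, 1.0)) * 0.5;   // map [-1,1] to the [0,1] texture range
    gl_Position = mvp * vec4(pos, 0.0, 1.0);
}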