Drawing particle trajectories of undefined length with OpenGL

I have to draw a physical simulation that displays the trajectories of particles moving around. 3D position data are read from a database in real time while drawing. Once a VBO is set up for each object, the drawing call is the standard glDrawArrays(GL_LINE_STRIP, 0, size). The problem is that the VBOs storing the trail points have to be updated every frame, since new points keep being added. This seems extremely inefficient to me! Furthermore, what if I want to draw the trajectories with a color gradient running from the particle's current position back to the older points? I would have to update the colors of all the vertices in the VBO at every draw call! What is the standard way to handle this kind of thing?
To summarize:
I want to draw lines of undefined - potentially infinite - length (the length increases with time).
I want the color of the points in a trajectory to shade according to their relative position along it (for example: white at the head (the particle's current position), black at the tail (its first recorded position), grey in the middle).
I have read many tutorials, but I haven't found anything about drawing ever-updating, indefinitely growing lines... I would appreciate any suggestion! Thanks!

Use multiple VBOs so that you have a fixed number of vertices per buffer. That way you only have to modify the last VBO in the sequence when you add new points, instead of completely updating one giant VBO.
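A minimal sketch of that chunked approach, assuming tightly packed vec3 positions and a GLEW-style loader (the names CHUNK_VERTS, TrailChunk, and appendPoint are illustrative, not part of the answer):

    // Append trail points to a list of fixed-size VBOs; only the
    // newest chunk is ever written to, older chunks stay untouched.
    #include <vector>
    #include <GL/glew.h>

    const GLsizei CHUNK_VERTS = 1024;   // vertices per chunk (tune to taste)

    struct TrailChunk {
        GLuint  vbo  = 0;
        GLsizei used = 0;               // vertices written so far
    };

    std::vector<TrailChunk> chunks;

    void appendPoint(const float point[3]) {
        if (chunks.empty() || chunks.back().used == CHUNK_VERTS) {
            TrailChunk c;
            glGenBuffers(1, &c.vbo);
            glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
            // Allocate the whole chunk up front; it is filled incrementally.
            glBufferData(GL_ARRAY_BUFFER, CHUNK_VERTS * 3 * sizeof(float),
                         nullptr, GL_DYNAMIC_DRAW);
            chunks.push_back(c);
        }
        TrailChunk& c = chunks.back();
        glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
        glBufferSubData(GL_ARRAY_BUFFER, c.used * 3 * sizeof(float),
                        3 * sizeof(float), point);
        ++c.used;
    }

One detail to watch with GL_LINE_STRIP: duplicate the last vertex of a full chunk as the first vertex of the next chunk, otherwise the strip visibly breaks at chunk boundaries.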
Add a sequence number vertex attribute or use gl_VertexID and pass in the total point count as a uniform. Then you can divide a given vertex's sequence number by the total count and use that fraction to mix between your gradient colors.
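A sketch of that gradient in a vertex shader (GLSL 330, shown as a source string; uBaseIndex and uTotalCount are assumed uniforms). Note that gl_VertexID counts from each draw call's first vertex, so with chunked VBOs you would set uBaseIndex to the sequence number of the chunk's first point:

    // Fades trail color from black at the oldest point to white at the newest.
    const char* trailVS = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;
    uniform mat4  uMvp;
    uniform float uBaseIndex;    // sequence number of this chunk's first vertex
    uniform float uTotalCount;   // total points in the whole trajectory
    out vec4 vColor;
    void main() {
        gl_Position = uMvp * vec4(aPos, 1.0);
        float t = (uBaseIndex + float(gl_VertexID)) / max(uTotalCount - 1.0, 1.0);
        vColor = vec4(mix(vec3(0.0), vec3(1.0), t), 1.0);   // tail -> head
    }
    )";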

Related

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black, while keeping everything else the same, to create a grid effect: the same effect you get from wireframe mode, except without the diagonal line, and with the normal colour showing where wireframe mode is transparent.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in a pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
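A sketch of that setup (GLSL 330 shown as source strings; illustrative, not the NVIDIA paper's exact code):

    // Geometry shader: tag each triangle corner with one barycentric axis.
    const char* wireGS = R"(
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;
    out vec3 gBary;
    void main() {
        const vec3 bary[3] = vec3[3](vec3(1.0, 0.0, 0.0),
                                     vec3(0.0, 1.0, 0.0),
                                     vec3(0.0, 0.0, 1.0));
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            gBary = bary[i];
            EmitVertex();
        }
        EndPrimitive();
    }
    )";

    // Fragment shader: near any edge, the smallest bary component is small.
    const char* wireFS = R"(
    #version 330 core
    in vec3 gBary;
    out vec4 fragColor;
    void main() {
        vec3 shaded = vec3(0.2, 0.6, 0.3);   // stand-in for your terrain shading
        float edge = min(gBary.x, min(gBary.y, gBary.z));
        fragColor = (edge < 0.02) ? vec4(vec3(0.0), 1.0) : vec4(shaded, 1.0);
    }
    )";

A fixed threshold like 0.02 gives lines whose thickness varies with triangle size; dividing the bary components by fwidth() first gives a roughly constant screen-space width.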
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
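A rough sketch of that post-processing idea (GLSL 330 as a source string; uScene and uThreshold are assumed names, and the length-of-fwidth metric is only one possible choice):

    // Flags pixels where the rendered image changes sharply between
    // neighbouring fragments; fwidth(c) = abs(dFdx(c)) + abs(dFdy(c)).
    const char* edgeFS = R"(
    #version 330 core
    in vec2 vUv;
    uniform sampler2D uScene;      // the previously rendered frame
    uniform float uThreshold;
    out vec4 fragColor;
    void main() {
        vec3 c = texture(uScene, vUv).rgb;
        float change = length(fwidth(c));
        fragColor = (change > uThreshold) ? vec4(vec3(0.0), 1.0) : vec4(c, 1.0);
    }
    )";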
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that utilizes the vertex buffer but instead of making groups of 3 for triangles, use pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height offset to each point in the vertex shader so the grid appears just above the terrain)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
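A sketch of building that line list, assuming the terrain vertices form a W x H grid stored row-major so that vertex (x, z) has index z * W + x (adapt the indexing to however your plane was generated):

    #include <vector>
    #include <cstdint>

    // Emits the horizontal and vertical edges of every grid square --
    // two indices per line segment, and no diagonals.
    std::vector<uint32_t> buildGridLineIndices(int W, int H) {
        std::vector<uint32_t> idx;
        for (int z = 0; z < H; ++z)
            for (int x = 0; x < W; ++x) {
                uint32_t v = uint32_t(z * W + x);
                if (x + 1 < W) { idx.push_back(v); idx.push_back(v + 1); }  // horizontal edge
                if (z + 1 < H) { idx.push_back(v); idx.push_back(v + W); }  // vertical edge
            }
        return idx;
    }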

What's the proper way to draw a large model in OpenGL?

I'm trying to draw a large grid block using OpenGL (for example: 114x112x21 cells).
As far as I know, each cell should be drawn as 6 faces (12 triangles), each face containing 4 vertices, and each vertex having position, normal, and color vectors (each of these is 3*sizeof(GLfloat)).
These values will be passed to VRAM in VBO(s). I did some calculations for the example mentioned and found that it would cost ~200MB to store this data. I'm not sure if this is right, but if it is, that seems like way too much VRAM for a single model.
I'm sure there are more efficient ways to do this, and if anyone can point me in the right direction I would be very thankful.
EDIT: I may have been unclear about the nature of the cells. They do NOT have uniform dimensions that could be scaled/translated to produce other cells, or even other faces of the same cell. Almost every cell has different dimensions on each face (these are predefined).
Let me also note that the colors are per cell and are based on an algorithmic scale over different values (depending on which one the user wants to visualize). So when the user chooses a value (one per cell) to be visualized, colors are calculated on that scale and used to color the cells.
As @BDL suggested in his answer, I'll probably use a geometry shader to calculate per-face normals.
There are several things that can be done:
First of all, each vertex position (except the ones on the sides) is shared between 8 cells.
If you need per face normals, in which case a position would require several normals, calculate them in a geometry shader instead of storing them in the VBO.
If each cell has a constant color, store it in a 3d-texture and sample the texture in the fragment shader.
For more hints you would have to provide more details on the cells and on what you want to achieve.
There are a few tricks you could do.
To start with, you could use instancing per cube. You then have per-vertex positions and normals for a single cell, plus a single position and color per cell.
You can actually eliminate the cell positions by deriving it from the instance id, by reversing the formula id = z * width * height + y * width + x.
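In a vertex shader that reversal might look like this (GLSL 330 as a source string; uGridW and uGridH are assumed uniforms). This sketch places unit cells; with the per-cell dimensions mentioned in the edit you would instead fetch each cell's extents from a buffer or texture indexed the same way:

    // Recover the cell coordinate from gl_InstanceID by reversing
    // id = z * width * height + y * width + x.
    const char* cellVS = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;   // corner of the shared unit cube
    uniform int  uGridW;
    uniform int  uGridH;
    uniform mat4 uMvp;
    void main() {
        int id = gl_InstanceID;
        int x  =  id % uGridW;
        int y  = (id / uGridW) % uGridH;
        int z  =  id / (uGridW * uGridH);
        gl_Position = uMvp * vec4(aPos + vec3(x, y, z), 1.0);
    }
    )";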
Furthermore, using a float per component is probably overkill for your colors, you may want to use a smaller format such as GL_RGBA8.
Applying that to your example (268128 cells) we get a buffer size of approximately 1 MiB (of which the 4 bytes color per cell is the most significant, the others are only for a single cell).
Note that this assumes that you want a single color for your entire cell. If you want a color per vertex, or per vertex per face, you can do so by using a 1D texture and indexing by instance and vertex id.
The biggest part of your data is going to be color though, unless there is a constant pattern. If you still want floats per component and per-face-per-vertex colors, it is going to take ~73 MiB on color alone.
You can use instanced rendering. It renders the same vertex data with the same shader multiple times in just one draw call. Here is a link to the Wikipedia article on the technique: https://en.wikipedia.org/wiki/Geometry_instancing
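As a minimal sketch, the per-instance setup and draw might look like this (cellColorVbo and numCells are hypothetical; 36 indices cover the 12 triangles of a cube):

    // One RGBA8 color per cell; divisor 1 advances the attribute once
    // per instance rather than once per vertex.
    glBindBuffer(GL_ARRAY_BUFFER, cellColorVbo);
    glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, nullptr);
    glEnableVertexAttribArray(1);
    glVertexAttribDivisor(1, 1);

    // Draw every cell's cube in a single call.
    glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr, numCells);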

How can I apply a depth test to vertices (not fragments)?

TL;DR I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not and the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles in some graphics card of the future, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (i.e. re-using the depth buffer from the depth-only pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately described in just a one-line comment, and I'm unsure how to duplicate it in WebGL anyway (a naive gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different, by re-rendering the discs in a second pass as cones (the center of the disc becomes the apex of the cone; think "close the umbrella") and effectively computing a Voronoi diagram on the surface of the object (à la the Red Book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color value of the first disc to reach it when growing the radii from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA Framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center that was closer than the relative texture2D() lookup would be allowed on to the second pass; otherwise I would hack "discarding" the vertex (its alpha would be set to 0 or some flag set that would cause discard of fragments associated with the disc/cone or etc).
This actually kind of worked but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating point error between depth->rgba->depth. More importantly, though, the depth texture is being set by fragment texel coords, but I'm looking up vertices, which almost certainly don't line up exactly on top of relevant texel coordinates; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
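For reference, a commonly used RGBA8 depth packing looks like this (GLSL ES 1.00, shown here as a source-string constant; in WebGL the same source would live in a JavaScript string, and this is the standard trick rather than necessarily my exact code). Its round trip quantizes the depth, which is consistent with the noise described above:

    // Encode a depth in [0, 1) into four 8-bit channels and back.
    const char* depthPackGLSL = R"(
    vec4 packDepth(float depth) {
        vec4 enc = fract(vec4(1.0, 255.0, 65025.0, 16581375.0) * depth);
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }
    float unpackDepth(vec4 rgba) {
        return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }
    )";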
How else can I determine which disc's centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on objects associated with those points? Or: is there a better way to go about this in general?

OpenGL - Access next 3 vertices in buffer from the vertex shader

I'm placing a bunch of square tiles around a world using two buffers fed from vector arrays, one for color and the other for position. The triangles' vertex colors aren't smooth, since they don't interpolate between the two tris of a square. To combat this I wanted to set each fragment's color individually, blending the colors of the vertices manually. I cannot substitute this process with premade textures either.
The issue I've come across is passing the next three vertices' positions and buffer locations into the vertex shader. Is there an easy way to do this?
Thanks and have a great day!
Add another set of attributes and set up their glVertexAttribPointer to point into the vertex position buffer as well, but with an offset. Keep in mind to add a bit of dummy padding at the end, so that when reaching the end of the array you don't access out of bounds. The …_ADJACENCY drawing modes are also useful in this situation (if available).
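A sketch of that aliased-attribute setup, assuming tightly packed vec3 positions in a single VBO (positionVbo and the attribute locations are illustrative):

    // Locations 1 and 2 read the same buffer as location 0, shifted by
    // one and two vertices. Pad the buffer with two dummy vertices at
    // the end so the last real vertex never reads out of bounds.
    const GLsizei stride = 3 * sizeof(float);
    glBindBuffer(GL_ARRAY_BUFFER, positionVbo);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);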

OpenGL - drawArrays or drawElements?

I'm making a small 2D game demo, and from what I've read, it's better to use drawElements() to draw an indexed triangle list than to use drawArrays() to draw an unindexed one.
But as far as I know, it doesn't seem possible to draw multiple elements that are not connected with a single drawElements() call.
So for my 2D game demo, where I'm only ever going to draw squares made of two triangles, what would be the best approach so I don't end up with one draw call per object?
Yes, it's better to use indices in many cases since you don't have to store or transfer duplicate vertices and you don't have to process duplicate vertices (vertex shader only needs to be run once per vertex). In the case of quads, you reduce 6 vertices to 4, plus a small amount of index data. Two thirds is quite a good improvement really, especially if your vertex data is more than just position.
In summary, glDrawElements results in
Less data (mostly), which means more GPU memory for other things
Faster updating if the data changes
Faster transfer to the GPU
Faster vertex processing (no duplicates)
Indexing can affect cache performance if the indices reference vertices that aren't near each other in memory. Modellers commonly produce meshes which are optimized with this in mind.
For multiple elements, if you're referring to GL_TRIANGLE_STRIP you could use glPrimitiveRestartIndex to draw multiple strips of triangles with the one glDrawElements call. In your case it's easy enough to use GL_TRIANGLES and reference 4 vertices with 6 indices for each quad. Your vertex array then needs to store all the vertices for all your quads. If they're moving you still need to send that data to the GPU every frame. You could position all the moving quads at the front of the array and only update the active ones. You could also store static vertex data in a separate array.
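A sketch of generating those indices (the 0,1,2 / 2,1,3 pattern assumes each quad's four vertices are ordered top-left, top-right, bottom-left, bottom-right; adjust the winding to your layout):

    #include <vector>
    #include <cstdint>

    // Four vertices and six indices per quad; quad q's vertices are
    // assumed to sit at positions 4q .. 4q+3 in the vertex array.
    std::vector<uint32_t> buildQuadIndices(int quadCount) {
        std::vector<uint32_t> idx;
        idx.reserve(size_t(quadCount) * 6);
        for (int q = 0; q < quadCount; ++q) {
            uint32_t base = uint32_t(q) * 4;
            const uint32_t pattern[6] = { 0, 1, 2, 2, 1, 3 };
            for (uint32_t p : pattern) idx.push_back(base + p);
        }
        return idx;
    }

Pass the result to one glDrawElements(GL_TRIANGLES, idx.size(), GL_UNSIGNED_INT, ...) call covering all quads.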
The typical approach to drawing a 3D model is to provide a list of fixed vertices for the geometry and move the whole thing with the model matrix (as part of the model-view). The confusing part here is that the mesh data is so small that, as you say, the overhead of the draw calls may become quite prominent. I think you'll have to draw a LOT of quads before you get to the stage where it'll be a problem. However, if you do, instancing or some similar idea such as particle systems is where you should look.
Perhaps only go down the following track if the draw calls or data transfer becomes a problem as there's a lot involved. A good way of implementing particle systems entirely on the GPU is to store instance attributes such as position/colour in a texture. Each frame you use an FBO/render-to-texture to "ping-pong" this data between another texture and update the attributes in a fragment shader. To draw the particles, you can set up a static VBO which stores quads with the attribute-data texture coordinates for use in the vertex shader where the particle position can be read and applied. I'm sure there's a bunch of good tutorials/implementations to follow out there (please comment if you know of a good one).
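As a very rough sketch of that per-frame ping-pong (every name here is hypothetical: fboA/fboB, texA/texB, updateShader, drawFullscreenQuad):

    #include <utility>
    #include <GL/glew.h>

    // Created elsewhere: two FBOs, each with one of the two attribute
    // textures attached as its color buffer.
    GLuint fboA, fboB, texA, texB, updateShader;
    void drawFullscreenQuad();   // renders a quad covering the viewport

    void updateParticles() {
        glBindFramebuffer(GL_FRAMEBUFFER, fboB);   // write the new attributes
        glUseProgram(updateShader);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texA);        // read last frame's attributes
        drawFullscreenQuad();                      // fragment shader integrates positions
        std::swap(fboA, fboB);                     // swap roles for the next frame
        std::swap(texA, texB);
    }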