Alternative to glMultiDrawArrays when using uniform stride? - c++

Background: I am developing an application that essentially draws a huge 3D graph of nodes and edges. The nodes are drawn as instanced cubes, while the edges are drawn with GL_LINES and expanded by a geometry shader into 3D volumetric "lines" made of triangle strips. Currently, I perform this expansion every time I redraw the edges. However, since my graph is entirely static (nodes cannot move, and thus neither can the edges), I figure I only need to expand the GL_LINES definitions into triangle strips once, capture the expanded vertices into a buffer (using Transform Feedback), and from that point on draw the edges from those captured vertices with glMultiDrawArrays using a primitive type of GL_TRIANGLE_STRIP.
Question: All of the volumetric lines I am drawing contain 10 vertices. However, glMultiDrawArrays requires arrays of starting indices and counts that describe the start point and length (in vertices) of each primitive. As none of my primitives vary in size, I would be building a seemingly unnecessary list of starting indices and counts. Is there any functionality OpenGL provides that would allow me to simply specify a stride (in elements) at which primitive restart occurs?

There is no such function, but for your case you don't need one. Transform feedback cannot output triangle strips; it outputs only basic primitives: individual points, lines, or triangles. That's why glBeginTransformFeedback only takes GL_POINTS, GL_LINES, or GL_TRIANGLES.
All you have to do is render all of your edges at once, collect the results into a single buffer via feedback, and later render the entire thing with a single call to glDrawArrays.

Related

Since OpenGL can perform built-in backface culling, it must calculate vertex normals; can these be accessed instead of sending them as attributes?

I'd like to understand this; it's not quite clear why you have to upload normals but at the same time respect the winding order, if the GPU were calculating normals anyway.
All vertices you give OpenGL within a rendering command are in a specific order. For array rendering, this means the order of the vertices in the array, as specified by the drawing command. For indexed rendering, it's the order of the indices in the range of index values you're rendering from. Instanced rendering defines that the vertices for each instance happen after the previous instance's vertices. And so forth.
The primitive assembly system takes this sequence of vertices and breaks it up into triangles, depending on which kind of primitive you rendered. This means that each vertex output by the primitive assembly system has an order relative to the others in that triangle: one vertex came first, then another, then the third.
Since triangles only have 3 vertices, there are two ways for the rasterizer to look at this order. The vertices can either wrap clockwise around the triangle or counter-clockwise, as seen in screen space. This is the triangle's winding order: the apparent order of the vertices, as seen from screen space.
It is the winding order that is used to determine how face culling works, not the normal. The GPU never calculates vertex normals, or normals of any kind.

Can glDrawElements be used independently from polygon type?

Is the only solution grouping vertices into separate glDrawElements(GL_TRIANGLES, ...) and glDrawElements(GL_QUADS, ...) draw calls, or is there a way of sending data describing the number of polygon sides into the geometry shader and sorting out the polygon type inside the geometry shader?
https://i.stack.imgur.com/4Ee4e.jpg
What you see as my output console is output of my mesh structure. I have:
vector<float> vert_data;
unsigned int faces_no;
vector<unsigned int> indices_on_face;
vector<unsigned int> indices;
The first is basically exactly what is sent to the OpenGL buffer: coordinates, normals, colors, etc. The second says how many faces are described in this data. The third gives the number of vertices in each polygon, in order. The fourth is the indices (glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...)). So basically I am looking for a way to send the third one into the geometry shader.
I know it is possible to sort faces by type using a flag while importing from Assimp, but that would lose the face order. And it still wouldn't let me draw everything with a single draw call, so I would have to create a bunch of draw functions, one for every type of polygon :(
Maybe something like this would be possible: first, replace every indices_on_face[i] with a running sum of all previous ones. Set the number of the first vertex drawn in the geometry shader during the draw call. Inside the geometry shader, compare the current vertex number with indices_on_face[i]; that would tell it when to generate a polygon out of the vertices. Does gl_VertexID hold a number based on the count of vertices passed, independent of the indices?
How can I formulate a draw call that will fit it?
No, there is no way to do what you want. The primitive type fixes the vertex count of each base primitive, and that cannot change within a rendering command.
Also, a geometry shader would almost certainly slow down your rendering rather than speed it up, particularly compared to the standard solution: just send triangles. Break up any non-triangular polygons into triangles, so you can just send them all in a single draw call. I mean, the hardware is going to do that for you regardless, so you may as well do it yourself.
Most mesh exporters will have an option to do this for you. And in those cases where they don't, Open Asset Importer can do it for you at load time by passing the aiProcess_Triangulate flag to your loading function.

Dynamically create complementary triangles of a regular grid in OpenGL

I have created a regular grid which originates from a 2D image, i.e. each pixel has a vertex. There are two triangles per four pixels, so that I have a triangle in the top right and one in the bottom left. I use vertex and index buffers for that.
Now I dynamically remove triangles / faces at the border of two different kinds of vertices (according to my application) because else there would be distortions. I wrote a geometry shader which takes a triangle and outputs the triangle or nothing (see first picture). The shader recognizes if a triangle is "faulty" (has orange edges) and omits it.
Now this works fine, but I may lose some details because of my vertex geometry. I can add complementary triangles to the mesh (see second picture, new triangles with dashed orange line).
How do I accomplish this in OpenGL?
My first idea is to create one quad instead of two triangles, check for the four possible triangle cases, and create those triangles dynamically in the geometry shader. But this might be slow; GL_QUADS is deprecated and the alternatives might be slow too. What do you have in mind?
Here's my idea:
Put the whole grid in a buffer/texture.
Build four triangles for each four pixels. They cross each other, yes.
In the geometry shader you can tell if a triangle is "faulty" because it connects two wrong regions. Or, sampling from the texture, because the crossing triangle is valid, so this new one can be discarded.
EDIT: Another approach
Use the texture. Draw instanced with GL_POINTS. With some ordering and the help of gl_InstanceID, the shader knows where the point is.
For this point, test the four possible triangles. If you instance top to bottom and left to right, only the point to the right and the two below are needed for the four triangles, so you avoid repeating tests.
Emit only those you choose.

OpenGL: Multi-texturing an array of "linked" quads

I recently completed my system for loading an array of quads into VBOs. This system allows quads to share vertices in order to save a substantial amount of memory. For example, an array of 100x100 quads would use 100x100x4=40000 vertices normally (4 vertices per quad), but with this system, it would only use 101x101=10201 vertices. That is a huge amount of space saving when you get into even larger scales.
My problem is that in order to texture each quad individually, each vertex needs a "UV" coordinate pair (or "ST" coordinates) mapping it to one part of the texture. This leads to the problem: how do I texture each quad independently of the others? Even if two identically textured quads are next to each other, I cannot use the same texture coordinate for both quads. This is illustrated below:
*Each quad being 16x16 pixels in dimension and the texture coordinates having a range of 0 to 1.
To make things even more complicated, some quads in the array might not even be there (because that part of the terrain is just an empty block). So as you might have guessed, this is for a rendering engine for those 2D tile games everyone is trying to make.
Is there a way to texture quads using the vertex saving technique or will I just have to trash this method and just use the way less efficient way?
You can't.
Vertices in OpenGL are a collection of data. They may contain positions, but they also contain texture coordinates and other things. Every vertex, that is, every combination of position, texture coordinate, etc., must be unique. So if you need to pair the same position with different texture coordinates, then you have different vertices.

Pairwise vertex attributes in OpenGL

I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space, and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph. Everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to get this information into my shader. The only solution I can think of is drawing each edge individually with glDrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render them as GL_LINES, duplicating the positions as needed as well as providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.