Does the Geometry Shader affect the buffers of vertices and indices (the VBO and EBO data in GPU memory) initially specified on the CPU side?
For example, suppose I have a vertex buffer containing three vertices, each with some attributes attached to it. Suppose these three vertices are given to the geometry shader as input, and the geometry shader outputs a larger set of vertices based on the three it started with, thus generating a new polygon made up of more than 3 vertices. Does this process alter the contents of the element array and vertex buffers?
The only thing I know is that of course they don't get altered, because otherwise the next rendering call would generate even more vertices and that would be a mess. So where does OpenGL store the newly generated vertices?
Suppose then these three vertices are given to the geometry shader as input
That doesn't happen. The GS is fed by the vertex shader's outputs. GSs never have any direct contact with the initial vertex data.
So where does OpenGL store the new generated vertices?
Wherever the implementation needs to. The rasterizer hardware will generally have a small buffer for primitive data to be rasterized. That's where the GS outputs will go.
But that's an implementation detail which is not exposed to OpenGL.
Related
I send a VertexBuffer+IndexBuffer of GL_TRIANGLES via glDrawElements() to the GPU.
In the vertex shader I wanted to snap some vertices to the same coordinates to simplify a large mesh on the fly.
As a result I expected a major performance boost, because a lot of triangles collapse to the same point and would become degenerate.
But I don't get any FPS gain.
For testing I set my vertex shader to output just gl_Position = vec4(0.0) to degenerate ALL triangles, but still no difference...
Is there any flag to "activate" the degeneration, or what am I missing?
A query for GL_PRIMITIVES_GENERATED also always reports the number of all mesh faces.
What you're missing is how the optimization you're trying to use actually works.
The particular optimization you're talking about is the post-T&L cache (post-transform cache). That is, if the same vertex is going to be processed twice, you only process it once and use the result twice.
What you don't understand is how "the same vertex" is actually determined. It isn't determined by anything your vertex shader could compute. Why? Well, the whole point of caching is to avoid running the vertex shader. If the vertex shader was used to determine if the value was already cached... you've saved nothing since you had to recompute it to determine that.
"The same vertex" is actually determined by matching the vertex index and vertex instance. Each vertex in the vertex array has a unique index associated with it. If you use the same index twice (only possible with indexed rendering of course), then the vertex shader would receive the same input data. And therefore, it would produce the same output data. So you can use the cached output data.
Instance ids also play into this, since when doing instanced rendering, the same vertex index does not necessarily mean the same inputs to the VS. But even then, if you get the same vertex index and the same instance id, then you would get the same VS inputs, and therefore the same VS outputs. So within an instance, the same vertex index represents the same value.
Both the instance count and the vertex indices are part of the rendering process. They don't come from anything the vertex shader can compute. The vertex shader could generate the same positions, normals, or anything else, but the actual post-transform cache is based on the vertex index and instance.
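As a small illustration (a sketch; it assumes a VAO and vertex buffer are already created and bound), the repeated indices below are what let the cache kick in: the outputs computed for indices 1 and 2 in the first triangle can simply be reused for the second one.

```c
/* Two triangles sharing an edge: indices 1 and 2 occur twice. */
GLuint indices[] = {
    0, 1, 2,   /* first triangle  */
    2, 1, 3    /* second triangle reuses vertices 1 and 2 */
};
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

/* Ideally the vertex shader runs only 4 times for these 6 index entries.
   Whatever the shader writes (e.g. snapped positions) never feeds back into
   this matching; only the index and instance id decide whether a cached
   result is reused. */
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
```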
So if you want to "snap some vertices to the same coordinates to simplify a large mesh", you have to do that before your rendering command. If you want to do it "on the fly" in a shader, then you're going to need some kind of compute shader or geometry shader/transform feedback process that will compute the new mesh. Then you need to render this new mesh.
You can discard a primitive in a geometry shader. But you still had to do T&L on it. Plus, using a GS at all slows things down, so I highly doubt you'll gain much performance by doing this.
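For completeness, here is a minimal, position-only GLSL sketch of such a geometry shader; it simply refuses to emit triangles whose screen-space area is (nearly) zero. The threshold and the absence of any other pass-through varyings are illustrative, and as noted above, every triangle still pays for full vertex processing plus the GS overhead.

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    // Project to NDC and compute twice the triangle's screen-space area.
    vec2 a = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
    vec2 b = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;
    vec2 c = gl_in[2].gl_Position.xy / gl_in[2].gl_Position.w;
    float doubleArea = abs((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));

    // Degenerate triangle: emit nothing, so it never reaches the rasterizer.
    if (doubleArea < 1e-6)
        return;

    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```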
I'm rendering a line that is composed of triangles in OpenGL.
Right now I have it working where:
Vertex buffer: {v0, v1, v2, v3}
Index buffer (triangle strip): {0, 1, 2, 3}
The raw data above is what gets passed into the vertex shader; the vertex shader then produces its output after applying an offset to v1 and v3 (using a vertex attribute).
My goal is to use one vertex per point on the line and generate the offset some other way. I was looking at gl_VertexID, but I want something more like an element ID. Here's my desired setup:
Vertex buffer: {v0, v2}
Index buffer (triangle strip): {0, 0, 1, 1}
and use an imaginary gl_ElementID % 2 to offset every other vertex.
I'm trying to avoid using geometry shaders or additional vertex attributes. Is there any way of doing this? I'm open to completely different ideas.
I can think of one way to avoid the geometry shader and still work with a compact representation: instanced rendering. Just draw many instances of one quad (as a triangle strip), and define the two positions as per-instance attributes via glVertexAttribDivisor().
Note that you don't need a "template quad" with 4 vertices at all. You just need, conceptually, two attributes: one for your start point and one for your end point. (If you work in 2D, you can fuse those into one vec4, of course.) In each vertex shader invocation you have access to both points and can construct the final vertex position from them and the value of gl_VertexID (which will only be in the range 0 to 3). That way, you get exactly the vertex array layout of two points per line segment you are aiming for, and still only need a single draw call and a vertex shader.
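A minimal vertex-shader sketch of that idea; the attribute name, the packing of both 2D endpoints into one vec4, and the halfWidth uniform are all illustrative. On the application side the attribute would get glVertexAttribDivisor(0, 1), and the draw call would be glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, segmentCount) with no index buffer.

```glsl
#version 330 core
// One instance per line segment; this attribute advances once per instance
// because of glVertexAttribDivisor(0, 1) on the application side.
layout(location = 0) in vec4 segment;   // .xy = start point, .zw = end point
uniform float halfWidth;                // half of the desired line thickness

void main()
{
    vec2 p0 = segment.xy;
    vec2 p1 = segment.zw;

    // gl_VertexID runs 0..3 within each instance of the 4-vertex strip.
    vec2  p    = (gl_VertexID < 2) ? p0 : p1;            // which endpoint this corner uses
    float side = (gl_VertexID % 2 == 0) ? 1.0 : -1.0;    // which side of the line to offset to

    vec2 dir    = normalize(p1 - p0);
    vec2 normal = vec2(-dir.y, dir.x);

    gl_Position = vec4(p + side * halfWidth * normal, 0.0, 1.0);
}
```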
No, that is not possible, because each vertex is only processed once. So if you're referencing a vertex 10 times with an index buffer, the corresponding vertex shader is still only executed one time.
This is implemented in hardware with the Post Transform Cache.
In the absolute best case, you never have to process the same vertex more than once.

The test for whether a vertex is the same as a previous one is somewhat indirect. It would be impractical to test all of the user-defined attributes for inequality. So instead, a different means is used.

Two vertices are considered equal (within a single rendering command) if the vertex's index and instance count are the same (gl_VertexID and gl_InstanceID in the shader). Since vertices for non-indexed rendering are always increasing, it is not possible to use the post-transform cache with non-indexed rendering.

If the vertex is in the post-transform cache, then that vertex data is not necessarily even read from the input vertex arrays again. The process skips the read and vertex shader execution steps, and simply adds another copy of that vertex's post-transform data to the output stream.
To solve your problem I would use a geometry shader with a line (or line strip) as input and a triangle strip as output. With this setup you could get rid of the index buffer entirely, since the draw call then only submits lines.
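Roughly like the following position-only GLSL sketch (the halfWidth uniform and the clip-space offset math are illustrative; a production version would at least correct for perspective and aspect ratio):

```glsl
#version 330 core
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float halfWidth;   // half of the line thickness, here in clip-space units

void main()
{
    vec4 p0 = gl_in[0].gl_Position;
    vec4 p1 = gl_in[1].gl_Position;

    // Direction of the segment and a 2D normal to offset the quad corners along.
    vec2 dir    = normalize(p1.xy - p0.xy);
    vec2 offset = vec2(-dir.y, dir.x) * halfWidth;

    gl_Position = p0 + vec4(offset, 0.0, 0.0); EmitVertex();
    gl_Position = p0 - vec4(offset, 0.0, 0.0); EmitVertex();
    gl_Position = p1 + vec4(offset, 0.0, 0.0); EmitVertex();
    gl_Position = p1 - vec4(offset, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
```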
I'm starting to learn modern OpenGL, and as the title says, I just wanted to be sure of the purpose of VAOs in the rendering pipeline.
When rendering, we use a VBO to store data, and then we use OpenGL functions like glVertexAttribPointer to tell the GPU that we are going to use this data "in that way", e.g. the first 3 floats of each vertex we pass through the VBO are in fact a position, and the next 3 floats are a color, etc. Then I read that we also need a VAO that stores the description of the vertices, but what's the goal there?
Thanks in advance.
Vertex array objects store a set of buffer names (usually vertex and index buffers) to get vertex data from, as well as how the vertices are laid out in the vertex buffers.
Their main purpose is so that when you want to render a different model from a different set of buffers, instead of binding each buffer and then setting the vertex attribute formats each time, you just bind a different VAO, and all the buffers and attributes are set up for you.
Not only is this more convenient for the programmer, it reduces the number of OpenGL calls required and thus CPU usage, which can clear up a CPU bottleneck.
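A typical sketch of that workflow (the vertices and indices arrays, indexCount, and the interleaved position/color layout from the question are assumed):

```c
/* One-time setup per model: the VAO records the buffer bindings and the attribute layout. */
GLuint vao, vbo, ebo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ebo);

glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);   /* the element buffer binding is stored in the VAO */
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

/* "first 3 floats are the position, next 3 floats are the color": 6 floats per vertex */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glBindVertexArray(0);

/* Per draw: a single bind restores all of the state recorded above. */
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
```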
I'd like to generate flat-shading triangle normals in the vertex shader. To do this, I need access to the current vertex's attributes and those of the next two vertices within the current vertex shader invocation. Obviously this could be done with a geometry shader, but those don't exist in GL ES, for example.
So is there any way to make GLSL access three consecutive vertex positions, but only advance by one position in every vertex invocation? Otherwise I would have to assign the data of three vertices to every vertex... a vast overhead.
Accessing multiple vertices isn't possible in the vertex shader. If you want flat shading, you need to replicate your vertices so that you have a separate copy of the vertex for each surface normal you will need (one for each face that shares that vertex position). This is probably the solution you'd need for OpenGL ES.
Alternatively, if you are targeting OpenGL 4.x you can use the concept of the provoking vertex, which lets you pick one vertex of each face whose output values are used, un-interpolated, for the entire face. The attributes affected are only those declared with the flat qualifier in the shader source.
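A sketch of the shader side (just the relevant declarations; on the application side, glProvokingVertex(GL_FIRST_VERTEX_CONVENTION) or GL_LAST_VERTEX_CONVENTION selects which vertex of each triangle supplies the value):

```glsl
#version 400 core
// Vertex shader: 'flat' disables interpolation, so every fragment of a
// triangle receives the value written by that triangle's provoking vertex.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
flat out vec3 faceNormal;

void main()
{
    faceNormal  = normal;
    gl_Position = vec4(position, 1.0);
}
```

The fragment shader declares the matching flat in vec3 faceNormal; and uses it directly, with no per-fragment interpolation.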
Taking the standard OpenGL 4.0+ functions and specifications into consideration, I've seen that geometry and shapes can be created in one of two ways:
making use of VAOs & VBOs.
using shader programs.
Which one is the standard way of creating shapes? Are they consistent with each other, or are they two different ways of creating geometry and shapes?
Geometry is loaded into the GPU with VAO & VBO.
Geometry shaders produce new geometry based on the uploaded geometry. Use them to make special effects like particles or shadows (shadow volumes) in a more efficient way.
Tessellation shaders serve to subdivide geometry for effects such as displacement mapping.
I strongly (like, really strongly) recommend reading this: http://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
VAOs and VBOs are about what geometry to draw (they specify the per-vertex data). Shader programs are about how to draw it (which program gets applied to each provided vertex, each fragment, and so on).
Let's lay out the full facts.
Shaders need input. Without input that changes, every shader invocation will produce exactly the same values. That's how shaders work. When you issue a draw call, a number of shader invocations are launched. The only variables that will change from invocation to invocation within this draw call are in variables. So unless you use some sort of input, every shader will produce the same outputs.
However, that doesn't mean you absolutely need a VAO that actually contains things. It is perfectly legal (though there are some drivers that don't support it) to render with a VAO that doesn't have any attributes enabled (though you have to use array rendering, not indexed rendering). In which case, all user-defined inputs to the vertex shader (if any) will be filled in with context state, which will be constant.
The vertex shader does have some other, built-in per-vertex inputs generated by the system. Namely gl_VertexID. This is the index used by OpenGL to uniquely identify this particular vertex. It will be different for every vertex.
So you could, for example, fetch geometry data yourself based on this index through uniform buffers, buffer textures, or some other mechanism. Or you can procedurally generate vertex data based on the index. Or something else. You could pass that data along to tessellation shaders for them to tessellate the generated data. Or to geometry shaders to do whatever it is you want with those. However you want to turn that index into real data is up to you.
Here's an example from my tutorial series that generates vertex data from nothing more than an index.
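As a sketch of the general idea (not the tutorial's actual code): a vertex shader can build a screen-covering triangle purely from gl_VertexID, drawn with glDrawArrays(GL_TRIANGLES, 0, 3) on a VAO with no attributes enabled.

```glsl
#version 330 core
// No 'in' attributes at all; the only per-invocation input is gl_VertexID.

void main()
{
    // Maps vertex ids 0, 1, 2 to (-1,-1), (3,-1), (-1,3): a triangle covering the whole screen.
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2) * 2.0 - 1.0;
    gl_Position = vec4(pos, 0.0, 1.0);
}
```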
I've seen that geometry and shapes can be created in one of two ways:
Not either. In modern OpenGL-4 you need both data and programs.
VBOs (referenced through VAOs) contain the raw geometry data. Shaders are the programs (usually executed on the GPU) that turn the raw data into pixels on the screen.
Vertex shaders can be used to displace vertices, or to generate them from a built-in formula and the vertex index, which is available as a built-in attribute in later OpenGL versions.
The difference between vertex and geometry shaders is that the vertex shader is a 1:1 mapping, while a geometry shader can create additional vertices; this can be used for automatic level-of-detail generation, e.g. for NURBS or Perlin-noise-based terrains.