Using the geometry shader for instancing - OpenGL

So I want to draw lots of quads (or even cubes), and stumbled across this lovely thing called the geometry shader.
I kinda get how it works now, and I could probably manipulate it into drawing a cube for every vertex in the vertex buffer, but I'm not sure if it's the right way to do it. The geometry shader happens between the vertex shader and the fragment shader, so it works on the vertices in screen space. But I need them in world space to do transformations.
So, is it OK to have my vertex shader simply pipe the inputs to the geometry shader, and have the geometry shader multiply by the modelviewproj matrix after creating the primitives? It should be no problem with the unified shader architecture, but I still feel queasy about making the vertex shader redundant.
Are there alternatives? Or is this really the 'right' way to do it?

It is perfectly OK.
Aside from that, consider using instanced rendering (glDrawArraysInstanced, glDrawElementsInstanced) with a vertex attribute divisor (glVertexAttribDivisor). This way you can accomplish the same task without a geometry shader at all.
For example, you can have a regular cube geometry bound. Then you add a separate vertex attribute carrying the cube position you want for each instance. Bind it with a divisor of 1, which makes it advance once per instance drawn, and then draw the cube using glDraw*Instanced, specifying the number of instances.
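A minimal sketch of that setup might look like the code below, assuming an OpenGL context and loader are already in place; the names (vao, cubeVbo, offsetVbo, instanceCount) are placeholders, and the cube VBO is assumed to hold 36 plain triangle vertices:

    glBindVertexArray(vao);

    // Per-vertex cube positions at attribute 0
    glBindBuffer(GL_ARRAY_BUFFER, cubeVbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    // Per-instance world-space offset at attribute 1
    glBindBuffer(GL_ARRAY_BUFFER, offsetVbo);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glVertexAttribDivisor(1, 1);   // advance once per instance, not per vertex

    // One draw call: 36 vertices per cube, instanceCount cubes
    glDrawArraysInstanced(GL_TRIANGLES, 0, 36, instanceCount);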
You can also sample input data from textures, using gl_VertexID or gl_InstanceID for coordinates.

Related

OpenGL Lighting Shader

I can't understand the concept of smaller shaders in OpenGL. How does it work? For example: do I need to create one shader for positioning an object in space and then another shader for lighting, or what? Could someone explain this to me? Thanks in advance.
This is a very complex topic, especially since your question isn't very specific. First, there are various shader stages (vertex shader, pixel shader, and so on). A shader program consists of different shader stages, at least a vertex and a pixel shader (except for compute shader programs, which each consist of a single compute shader). The vertex shader calculates the position of the points on screen, so this is where objects are moved. The pixel shader calculates the color of each pixel that is covered by the geometry your vertex shader produced. Now, in terms of lighting, there are different ways of doing it:
Forward Shading
This is the straightforward way, where you simply calculate the lighting in the pixel shader of the same shader program that moves the objects. This is the oldest way of calculating lighting, and the easiest one. However, its abilities are very limited.
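As a rough sketch of that idea (the attribute and uniform names such as uModel, uViewProj, uLightDir, uBaseColor are assumptions, not anything the answer prescribes), the same program transforms the vertices and lights each fragment directly:

    const char* forwardVS = R"(
        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aNormal;
        uniform mat4 uModel, uViewProj;
        out vec3 vNormal;
        void main() {
            vNormal = mat3(uModel) * aNormal;   // world-space normal (assumes uniform scale)
            gl_Position = uViewProj * uModel * vec4(aPos, 1.0);
        })";

    const char* forwardFS = R"(
        #version 330 core
        in vec3 vNormal;
        uniform vec3 uLightDir;                 // normalized, world space
        uniform vec3 uBaseColor;
        out vec4 fragColor;
        void main() {
            float diffuse = max(dot(normalize(vNormal), -uLightDir), 0.0);
            fragColor = vec4(uBaseColor * diffuse, 1.0);   // lit color written directly
        })";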
Deferred Shading
For a long time, this has been the go-to variant in games. Here, one shader program (vertex + pixel shader) renders the geometry into one or more textures (so it moves the objects, but instead of saving the lit color it stores things like the base color and surface normals), and a second shader program then renders a screen quad for each light you want to render; its pixel shader reads the information previously written to those textures by the first program and uses it to render the lit objects into another texture (which becomes the final image). In contrast to forward shading, this allows (in theory) any number of lights in the scene, and makes shadow maps easier to use.
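To illustrate the first pass only, here is a heavily simplified sketch (the varying names, output names, and attachment layout are all assumptions): the fragment shader writes base color and normal into two G-buffer color attachments instead of a lit color.

    const char* gbufferFS = R"(
        #version 330 core
        in vec3 vNormal;                           // from the geometry-pass vertex shader
        in vec3 vBaseColor;
        layout(location = 0) out vec4 outAlbedo;   // e.g. GL_COLOR_ATTACHMENT0
        layout(location = 1) out vec4 outNormal;   // e.g. GL_COLOR_ATTACHMENT1
        void main() {
            outAlbedo = vec4(vBaseColor, 1.0);
            outNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);   // packed to [0,1]
        })";

A second program would then draw a screen-sized quad per light, sample those textures, and accumulate the lit result.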
Tiled/Clustered Shading
This is a rather new and very complex way of calculating lighting that can be built on top of deferred or forward shading. It basically uses compute shaders to build an acceleration structure on the GPU, which is then used to draw huge numbers of lights very fast. This allows rendering thousands of lights in a scene in real time, but using shadow maps for these lights is very hard, and the algorithm is far more complex than the previous ones.
Writing smaller shaders means separating some of your shader functionality into other files. If you are writing a big shader that contains lighting algorithms, anti-aliasing algorithms, and other shader computations, you can split them into smaller shader files (light.glsl, fxaa.glsl, and so on) and combine them with your main shader file (the one that contains the void main() function), since in OpenGL a vertex array can only be rendered with one shader program (a composition of vertex shader, fragment shader, geometry shader, etc.) at a time.
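One simple way to combine such files, sketched below under the assumption of a small readFile helper and an OpenGL context that is already set up, is to pass several source strings to glShaderSource, which concatenates them in order before compilation:

    #include <fstream>
    #include <sstream>
    #include <string>

    // Hypothetical helper: read a whole text file into a string.
    static std::string readFile(const char* path) {
        std::ifstream in(path);
        std::stringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }

    // Builds one fragment shader from several smaller files (error checking
    // omitted; the #version directive must come from the first string).
    static GLuint buildFragmentShader() {
        std::string lightSrc = readFile("light.glsl");   // shared lighting functions
        std::string fxaaSrc  = readFile("fxaa.glsl");    // shared FXAA functions
        std::string mainSrc  = readFile("main.frag");    // contains void main()

        const char* sources[] = { lightSrc.c_str(), fxaaSrc.c_str(), mainSrc.c_str() };
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 3, sources, nullptr);         // concatenated in order
        glCompileShader(fs);
        return fs;
    }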
How you split up your shaders also depends on your rendering algorithm (forward rendering, deferred rendering, or forward+ rendering).
It's also worth noting that writing a lot of shaders will increase shader compilation time, and writing a big shader with a lot of uniforms will slow things down as well...

A few questions about shaders

I am using opengl shaders.
Does the number of uniforms affect shader performance? If I pass 5 uniforms or 50, will it matter?
Does each shader have its own area that it works on, or can each shader draw at any point of my application?
I often create a vertex shader just to pass attributes through to the fragment shader. What is the benefit of the vertex shader, and why not just pass the attributes to the fragment shader directly?
I would guess it doesn't (and if it does, only a very minor one). But I don't have any evidence for that, so I might be wrong. This is almost certainly driver-specific.
A shader does not draw anything. A shader just processes data. In the pipeline, the rasterizer produces the fragments that are covered by your shape. And these are the fragments that you can potentially draw to. The fragment shader calculates the color (and possibly depth) and the rest of the pipeline decides what to do with the result (either updating the frame buffer, blending, or discarding it altogether). Each draw call can potentially produce a framebuffer update everywhere, not just at some specific locations.
This is perfectly fine if the application requires it. The main difference is that vertex shaders process vertices and fragment shaders process fragments. Usually, there are many more fragments than vertices, so the fragment shader is called more often than the vertex shader. Therefore, you should do as much work in the vertex shader as possible. Of course, there are things that you just cannot calculate in a vertex shader.
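As a sketch of what "do the work in the vertex shader" can mean (the names uMVP and uLightDir are assumptions), diffuse lighting can be computed once per vertex and merely interpolated for the fragments:

    const char* gouraudVS = R"(
        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aNormal;
        uniform mat4 uMVP;
        uniform vec3 uLightDir;           // normalized, same space as aNormal
        out float vDiffuse;
        void main() {
            vDiffuse = max(dot(normalize(aNormal), -uLightDir), 0.0);
            gl_Position = uMVP * vec4(aPos, 1.0);
        })";

    const char* gouraudFS = R"(
        #version 330 core
        in float vDiffuse;                // already interpolated by the rasterizer
        out vec4 fragColor;
        void main() { fragColor = vec4(vec3(vDiffuse), 1.0); })";

The per-fragment version would move the dot product into the fragment shader and run it for every covered pixel instead.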

GLSL Geometry Shaders and projection matrices

So from playing around with it so far, I gather that GLSL geometry shaders work after the input vertices are transformed by the projection/modelview matrices. In other words, the geometry shader processes things in clip coordinates.
What if I were to use the geometry shader to transform GL_POINTS into, say, cubes made out of GL_TRIANGLES? When calculating things in clip coordinates, the resulting shape always seems to face you and ignore rotations/scaling, etc.
Also, it seems that GL_TRIANGLES is not supported as one of the possible geometry output types. But I tried anyway, and it seems to work. I suppose this is video card dependent? Is it possible to make cubes if GL_TRIANGLES is not supported? Make zero-width triangle strips in the gaps between them, maybe?
You are using shaders: geometry shaders work on whatever the vertex shader passed them. If you want that to be clip-space values, then the geometry shader works on clip-space values. If your vertex shader passes them eye-space values, then the geometry shader must work on eye-space values.
What matters is what the final pre-rasterization shader stage outputs to gl_Position. That is what needs to be in homogeneous clip-space. A vertex shader that has a geometry shader behind it doesn't even need to write to gl_Position.
Also, it seems that GL_TRIANGLES is not supported as one of the possible geometry output types.
You must be using ARB_geometry_shader4, not the actual core geometry shader functionality. You probably should avoid that extension if you are able. Any hardware that has geometry shaders can run OpenGL 3.2.
In any case, the core feature doesn't support triangles as output. It supports points, line strips, and triangle strips.
Is it possible to make cubes if GL_TRIANGLES is not supported?
That's what EndPrimitive() is for. You call it when you are finished with a primitive; there's nothing that stops you from emitting a second primitive. Or third.
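As a minimal sketch (the interface names vEyePos, uProj, uSize are assumptions), a geometry shader can take untransformed points from the vertex shader, expand each one, apply the projection itself, and finish each primitive with EndPrimitive():

    const char* expandGS = R"(
        #version 330 core
        layout(points) in;
        layout(triangle_strip, max_vertices = 4) out;
        in vec3 vEyePos[];              // eye-space position passed by the vertex shader
        uniform mat4 uProj;
        uniform float uSize;
        void main() {
            vec2 corners[4] = vec2[](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
            for (int i = 0; i < 4; ++i) {
                vec3 p = vEyePos[0] + vec3(corners[i] * uSize, 0.0);
                gl_Position = uProj * vec4(p, 1.0);
                EmitVertex();
            }
            EndPrimitive();             // close this strip; further strips may follow
        })";

A cube would simply emit several such strips, calling EndPrimitive() after each one.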
Also, you should be advised that this will probably be slow. Geometry shaders are not known for fast rendering performance.

Vertex shader vs Fragment Shader [duplicate]

I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader, on the other hand, takes care of how the pixels between the vertices look. They are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices as red. If you want specific effects like a gradient between the vertices, you have to handle that in the fragment shader.
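A tiny sketch of that gradient case (the attribute and uniform names are assumptions): the vertex shader just forwards a per-vertex color, and the rasterizer blends it across the triangle before the fragment shader runs.

    const char* gradientVS = R"(
        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aColor;   // e.g. red at one corner, blue at another
        uniform mat4 uMVP;
        out vec3 vColor;
        void main() {
            vColor = aColor;
            gl_Position = uMVP * vec4(aPos, 1.0);
        })";

    const char* gradientFS = R"(
        #version 330 core
        in vec3 vColor;                        // already interpolated between the vertices
        out vec4 fragColor;
        void main() { fragColor = vec4(vColor, 1.0); })";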
Put another way:
The vertex shader is part of the early steps in the graphics pipeline, somewhere between the model coordinate transformation and polygon clipping, I think. At that point, nothing is really done yet.
However, the fragment/pixel shader is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
The vertex shader runs on every vertex, while the fragment shader runs on every fragment (roughly, every covered pixel). The fragment shader is applied after the vertex shader.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of 3D rendering that does not use the fixed-function pipeline. In any 3D rendering, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to implement that in the vertex shader; i.e., any geometric change to the vertices is done in vertex shaders.
The fragment shader takes the output from the vertex shader and associates colors, the depth value of a pixel, etc. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, for example lighting calculations, can be performed in the vertex shader as well as the fragment shader, but the fragment shader generally provides a better result than the vertex shader.
When rendering images via 3D hardware, you typically have a mesh (points, polygons, lines) defined by vertices. To manipulate vertices individually, typically for motion in a model or waves in an ocean, you can use vertex shaders. These vertices can have a static colour or a colour assigned by textures; to manipulate those colours you use fragment shaders. At the end of the pipeline, when the view goes to the screen, you can also use fragment shaders.

Can I use a vertex shader to display a models normals?

I'm currently using a VBO for the texture coordinates, normals and the vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal -OR- stuff another VBO with vert and vert+normal to draw all the normals… -OR- is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the V+N used when drawing the normals?
No, it is not possible to draw additional lines from a vertex shader.
A vertex shader is not about creating geometry; it is about doing per-vertex computation. With vertex shaders, when you say glDrawArrays(GL_TRIANGLES, 0, 3), that is what specifies exactly what you will draw, i.e. 1 triangle. Once processing reaches the vertex shader, you can only alter the properties of the vertices of that triangle, not modify in any way, shape or form the topology and/or count of the geometry.
What you're looking for is what OpenGL 3.2 defines as a geometry shader, which allows a shader to output geometry of arbitrary count/topology. Note, however, that this is only supported from OpenGL 3.2 on, which not many cards/drivers support right now (it's been out for only a few months).
However, I must point out that showing normals (in most engines that support some kind of debugging) is usually done with the traditional line rendering, with an additional vertex buffer that gets filled in with the proper positions (P, P+C*N) for each mesh position, where C is a constant that represents the length you want to use to show the normals. It is not that complex to write...
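A sketch of filling that extra buffer (positions, normals, and normalLinesVbo are assumed to exist, with positions and normals tightly packed xyz float arrays of vertexCount entries):

    #include <cstddef>
    #include <vector>

    // Builds a GL_LINES buffer with one segment per vertex: P to P + C*N.
    static void uploadNormalLines(const float* positions, const float* normals,
                                  size_t vertexCount, GLuint normalLinesVbo) {
        std::vector<float> lineVerts;
        lineVerts.reserve(vertexCount * 6);
        const float C = 0.1f;                    // visible length of the normal lines
        for (size_t i = 0; i < vertexCount; ++i) {
            float px = positions[3*i], py = positions[3*i+1], pz = positions[3*i+2];
            float nx = normals[3*i],   ny = normals[3*i+1],   nz = normals[3*i+2];
            lineVerts.insert(lineVerts.end(),
                             { px, py, pz, px + C*nx, py + C*ny, pz + C*nz });
        }
        glBindBuffer(GL_ARRAY_BUFFER, normalLinesVbo);
        glBufferData(GL_ARRAY_BUFFER, lineVerts.size() * sizeof(float),
                     lineVerts.data(), GL_STATIC_DRAW);
        // Then draw with glDrawArrays(GL_LINES, 0, vertexCount * 2) and a plain line shader.
    }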
You could approximate this by drawing the geometry twice. Once draw it as you normally would. The second time, draw the geometry as GL_POINTS, and attach a vertex shader which offsets each vertex position by the vertex normal.
This would result in your model having a set of points floating over the surface. Each point would show the direction of the normal from the vertex it corresponds to.
This isn't perfect, but might be sufficient, depending on what it is you're hoping to use it for.
UPDATE: AHA! And if you pass a constant scaling factor to the vertex shader, and have your application interpolate that factor between 0 and 1 as time goes by, the points rendered by the vertex shader will animate over time, starting at the vertices they apply to and then floating off in the direction of their normals.
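A sketch of that vertex shader (uMVP and uScale are assumed names; uScale is the factor the application animates from 0 to 1):

    const char* normalPointVS = R"(
        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aNormal;
        uniform mat4 uMVP;
        uniform float uScale;            // animated by the application over time
        void main() {
            gl_Position = uMVP * vec4(aPos + aNormal * uScale, 1.0);
        })";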
It's probably possible to get more or less the right effect with a cleverly written vertex shader, but it'd be a lot of work. Since this is for debugging purposes anyway, it seems better to just draw a few lines; the performance hit will not be severe.