I'm currently using a VBO for the texture coordinates, normals and the vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal -OR- stuff another VBO with vert and vert+normal to draw all the normals… -OR- is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the V+N used when drawing the normals?
No, it is not possible to draw additional lines from a vertex shader.
A vertex shader is not about creating geometry, it is about doing per-vertex computation. With vertex shaders, the call glDrawArrays(GL_TRIANGLES, 0, 3) is what specifies exactly what you will draw, i.e. one triangle. Once processing reaches the vertex shader, you can only alter the properties of the vertices of that triangle, not modify in any way, shape or form the topology and/or count of the geometry.
What you're looking for is what OpenGL 3.2 defines as a geometry shader, which allows a shader to output arbitrary geometry count/topology. Note however that geometry shaders are only core as of OpenGL 3.2, which not many cards/drivers support right now (it's only been out for a few months).
However, I must point out that showing normals (in most engines that support some kind of debugging) is usually done with the traditional line rendering, with an additional vertex buffer that gets filled in with the proper positions (P, P+C*N) for each mesh position, where C is a constant that represents the length you want to use to show the normals. It is not that complex to write...
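For example, a minimal sketch of filling such an extra buffer (the function and variable names are made up; positions and normals are assumed to be the same tightly packed xyz float arrays used for the model's VBOs, and a GL 1.5+ header or extension loader is assumed to be set up):

// Hypothetical sketch: build a GL_LINES buffer with one line per vertex,
// running from the vertex position P to P + C * N.
#include <cstddef>
#include <vector>
#include <GL/gl.h>

GLuint buildNormalLinesVBO(const std::vector<float>& positions,
                           const std::vector<float>& normals,
                           float C)                        // desired line length
{
    std::vector<float> lines;
    lines.reserve(positions.size() * 2);
    for (std::size_t i = 0; i < positions.size(); i += 3) {
        // line start: P
        lines.push_back(positions[i + 0]);
        lines.push_back(positions[i + 1]);
        lines.push_back(positions[i + 2]);
        // line end: P + C * N
        lines.push_back(positions[i + 0] + C * normals[i + 0]);
        lines.push_back(positions[i + 1] + C * normals[i + 1]);
        lines.push_back(positions[i + 2] + C * normals[i + 2]);
    }

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, lines.size() * sizeof(float),
                 lines.data(), GL_STATIC_DRAW);
    return vbo;
}

At draw time, bind this buffer, set up the position pointer as usual, and call glDrawArrays(GL_LINES, 0, 2 * vertexCount).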
You could approximate this by drawing the geometry twice. Once draw it as you normally would. The second time, draw the geometry as GL_POINTS, and attach a vertex shader which offsets each vertex position by the vertex normal.
This would result in your model having a set of points floating over the surface. Each point would show the direction of the normal from the vertex it corresponds to.
This isn't perfect, but might be sufficient, depending on what it is you're hoping to use it for.
UPDATE: AHA! And if you pass in a constant scaling factor to the vertex shader, and have your application interpolate that factor between 0 and 1 as time goes by, the points rendered by the vertex shader will animate over time, starting at the vertex they apply to and then floating off in the direction of that vertex's normal.
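A minimal sketch of such a vertex shader (GLSL 1.20, using the fixed-function built-ins so it works with the existing vertex and normal arrays; the uniform name is made up and holds the factor the application animates between 0 and 1):

// Hypothetical GLSL vertex shader, stored as a C++ string:
const char* kNormalPointsVS = R"(
    #version 120
    uniform float uNormalScale;   // animated by the application between 0 and 1
    void main()
    {
        vec3 displaced = gl_Vertex.xyz + gl_Normal * uNormalScale;
        gl_Position = gl_ModelViewProjectionMatrix * vec4(displaced, 1.0);
    }
)";

Bind this program only for the debug GL_POINTS pass and update uNormalScale each frame with glUniform1f.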
It's probably possible to get more or less the right effect with a cleverly written vertex shader, but it'd be a lot of work. Since this is for debugging purposes anyway, it seems better to just draw a few lines; the performance hit will not be severe.
Let there be a vertex which is part of a triangle, and of a quad.
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
The triangle is drawn before the quad. When should I call glNormal and with what vector?
Should I call glNormal 2 times, each time with the same vector (the average normal vector)?
Should I call glNormal the last time the vertex is drawn, with the average normal vector?
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
Ideally, the normal vector should be orthogonal to the surface you are rendering at any point. However, the GL only supports rendering surfaces as polygonal models (at least directly). So there are two principal possibilities:
The polygonal representation does exactly represent the object you want to visualize. A simple example would be a cube.
The polygonal representation is just a (piecewise linear) approximation of the surface you want to visualize. Think of smooth surfaces.
In case 1, you need one normal per triangle (as the normal is unchanging across a flat surface defined by a triangle). However, this means that neighboring triangles which share an edge or corner will need different normals there. From GL's point of view, each of the triangles uses different vertices, even if those vertices share the same position in space. A vertex is the set of all attributes, not just the position. For the cube, that means that you will need not just 8 different vertices, but 24, so you have 3 at each corner.
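To make that concrete, here is a small immediate-mode sketch (the unit-cube coordinates are made up): the corner at (1, 1, 1) is issued once per face, each time with that face's flat normal, so GL treats each occurrence as a different vertex:

glBegin(GL_QUADS);
    // +Z face: all four corners use the +Z normal
    glNormal3d(0.0, 0.0, 1.0);
    glVertex3d( 1.0,  1.0, 1.0);   // the corner in question, +Z copy
    glVertex3d(-1.0,  1.0, 1.0);
    glVertex3d(-1.0, -1.0, 1.0);
    glVertex3d( 1.0, -1.0, 1.0);

    // +X face: the same position again, but with the +X normal,
    // which makes it a different vertex as far as GL is concerned
    glNormal3d(1.0, 0.0, 0.0);
    glVertex3d(1.0,  1.0,  1.0);   // the corner in question, +X copy
    glVertex3d(1.0, -1.0,  1.0);
    glVertex3d(1.0, -1.0, -1.0);
    glVertex3d(1.0,  1.0, -1.0);
glEnd();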
In case 2, you want to cover up the polygonal structure of the model as well as possible. One aspect of this is using smooth shading techniques. Averaging the normals of the adjacent triangles at each vertex is one heuristic for doing so. In this case, neighboring primitives actually can share vertices, as both the normal and the position of a corner point are the same for every triangle connected to it.
This heuristic has some drawbacks, especially if your surface contains both smooth parts and "sharp edges" you want to preserve. There are improved heuristics which try to detect sharp edges and split vertices there, allowing different normals for the connected triangles so that such edges are not smoothed over. But all such heuristics may fail in some cases - ideally, the normals are provided when the model is created in the first place.
The triangle is drawn before the quad. When should I call glNormal and with what vector?
OpenGL is a state machine, meaning that things you set keep their value until you change them again - and setting normals is no exception. The second thing to note is that normals are a vertex attribute. So every vertex always has some value for every attribute (but depending on the rest of your GL state, not all of these attributes are used when rendering).
Since you use the fixed-function GL, normals are built-in vertex attributes - so every vertex you issue, in whatever way, has some value as its normal attribute. In immediate-mode rendering with glBegin()/glEnd(), it will be the one you set with the most recent glNormal() call (or the initial default value if you never called glNormal()).
So to answer your question:
You have to set that normal before you issue the glVertex() call for that particular vertex for the first time, and you have to re-issue that glNormal() call the second time you draw with "this" vertex (which technically is a different vertex anyway) if you changed the normal in between while specifying other vertices.
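For example (a sketch with made-up coordinates): the corner at the origin is shared by the triangle and the quad, and nAvg stands for its averaged normal:

const GLdouble nAvg[3] = { 0.0, 0.0, 1.0 };   // hypothetical averaged normal

glBegin(GL_TRIANGLES);
    glNormal3d(nAvg[0], nAvg[1], nAvg[2]);    // set before issuing the shared corner
    glVertex3d(0.0, 0.0, 0.0);                // the shared corner
    glVertex3d(1.0, 0.0, 0.0);                // these reuse the current normal unless
    glVertex3d(0.0, 1.0, 0.0);                // glNormal is called again in between
glEnd();

glBegin(GL_QUADS);
    glNormal3d(nAvg[0], nAvg[1], nAvg[2]);    // re-issue it if it was changed meanwhile
    glVertex3d( 0.0,  0.0, 0.0);              // the shared corner again
    glVertex3d(-1.0,  0.0, 0.0);
    glVertex3d(-1.0, -1.0, 0.0);
    glVertex3d( 0.0, -1.0, 0.0);
glEnd();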
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
No. The normal of a plane is a vector pointing 'out of' the plane at a 90 degree angle. In OpenGL, this is used in shading calculations, and to support various effects, OpenGL lets you specify whatever normal you want instead of calculating it from the primitive. For flat lighting, the normal should be set to the mathematical definition of the normal for each primitive, while for smooth lighting, the normal should be set to the average normal of all primitives that share the vertex.
glNormal sets a value in OpenGL that is read whenever you call glVertex, and is persistent until you call glNormal again. So this code
glBegin(GL_QUADS);
glNormal3d(0,0,1);
glVertex3d(1,0,0);
glVertex3d(1,1,0);
glVertex3d(0,1,0);
glVertex3d(0,0,0);
glEnd();
specifies 4 vertices, each with a normal of (0,0,1).
My current rendering implementation is as follows:
Store all vertex information as quads rather than triangles
For triangles, simply repeat the last vertex (i.e. v0 v1 v2 v2)
Pass vertex information as lines_adjacency to geometry shader
Check if quad or triangle, output as triangle_strip
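For reference, one way such a geometry shader could look (just a sketch; it assumes only positions are passed through and detects the triangle case by comparing the duplicated position):

// Hypothetical geometry shader source, stored as a C++ string:
const char* kQuadOrTriangleGS = R"(
    #version 150
    layout(lines_adjacency) in;                    // 4 input vertices per primitive
    layout(triangle_strip, max_vertices = 4) out;

    void main()
    {
        // A triangle was encoded as v0 v1 v2 v2, so its last position repeats.
        bool isTriangle = (gl_in[2].gl_Position == gl_in[3].gl_Position);

        // Strip order v0, v1, v3, v2 covers the quad v0-v1-v2-v3.
        gl_Position = gl_in[0].gl_Position; EmitVertex();
        gl_Position = gl_in[1].gl_Position; EmitVertex();
        gl_Position = gl_in[3].gl_Position; EmitVertex();
        if (!isTriangle) {
            gl_Position = gl_in[2].gl_Position; EmitVertex();
        }
        EndPrimitive();
    }
)";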
The reason I went this route was that I was implementing a wireframe shader, and I wanted to draw the quads without a diagonal line through them. But I've since discarded the feature.
I'm now wondering if I should go back to simply drawing GL_TRIANGLES, and leave the geometry shader out of the equation. But that got me thinking... what's actually more efficient from a performance point of view?
On average, my scenes are composed of quads and triangles in equal amounts.
Drawing with all triangles would mean: 6 vertices per quad, 3 per triangle.
Drawing with lines_adjacency would mean: 4 vertices per quad, 4 per triangle.
(This is with indexed drawing, so the vertex buffer is the same size for both of them)
So the vertex ratio is 9:8 (triangles : lines_adjacency).
Would I be correct in assuming that with indexed drawing, each vertex is only getting processed once by the vertex shader (as opposed to once per index)? In which case drawing triangles is going to be more efficient (since there isn't an extra geometry-shader step to perform), with the only negative being the slight amount of extra memory the indices take up.
Then again, if the vertices do get processed once per index, I could see the edge being with the lines_adjacency method, considering the geometry conversion is very simple, whilst the vertex shader might be running more intensive lighting calculations.
So that pretty much sums up my question: how do vertices get treated with indexed drawing, and what sort of performance impact could be expected if including a simple geometry shader?
Geometry shaders never improve efficiency in this sort of situation, they only complicate the primitive assembly process. When you use geometry shaders, the post-T&L cache no longer works the way it was originally designed.
While it is true that the geometry shader will reuse any shared (indexed) vertices transformed in the vertex shader stage when it needs to fetch vertex data, the geometry shader still computes and emits a unique set of vertices per-output-primitive.
Furthermore, because geometry shaders are allowed to emit a variable number of data points they are unlike other shader stages. It is much more difficult to parallelize geometry shaders than it is vertex or fragment. There are just too many negative things about geometry shaders for me to suggest using them unless you actually need them.
So I want to draw lots of quads (or even cubes), and stumbled across this lovely thing called the geometry shader.
I kinda get how it works now, and I could probably manipulate it into drawing a cube for every vertex in the vertex buffer, but I'm not sure if it's the right way to do it. The geometry shader happens between the vertex shader and the fragment shader, so it works on the vertices in screen space. But I need them in world space to do transformations.
So, is it OK to have my vertex shader simply pipe the inputs to the geometry shader, and have the geometry shader multiply by the modelviewproj matrix after creating the primitives? It should be no problem with the unified shader architecture, but I still feel queasy when making the vertex shader redundant.
Are there alternatives? Or is this really the 'right' way to do it?
It is perfectly OK.
Aside from that, consider using instanced rendering (glDrawArraysInstanced, glDrawElementsInstanced) with a vertex attribute divisor (glVertexAttribDivisor). This way you can accomplish the same task without a geometry shader at all.
For example, you can have a regular cube geometry bound. Then you have a special vertex attribute carrying the cube position you want for each instance. You bind it with a divisor of 1, which makes it advance once per instance drawn. Then draw the cube using glDraw*Instanced, specifying the number of instances.
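A rough sketch of that setup (the attribute locations, buffer names and instance count are made up; the vertex shader would add the per-instance position to the cube vertex it receives):

// Hypothetical setup: attribute 0 = cube vertex position (per vertex),
// attribute 1 = cube position (per instance).
glBindBuffer(GL_ARRAY_BUFFER, cubeVbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

glBindBuffer(GL_ARRAY_BUFFER, instancePositionsVbo);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glVertexAttribDivisor(1, 1);           // advance this attribute once per instance

// 36 vertices per cube (12 triangles), repeated cubeCount times.
glDrawArraysInstanced(GL_TRIANGLES, 0, 36, cubeCount);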
You can also sample input data from textures, using gl_VertexID or gl_InstanceID for coordinates.
I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader on the other hand takes care of how the pixels between the vertices look. They are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices red. If you want specific effects like a gradient between the vertices, you have to do that in the fragment shader.
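A minimal GLSL sketch of that idea (the varying name is made up): the vertex shader forwards a per-vertex color, and the fragment shader receives it already interpolated across the polygon, which is exactly what produces the gradient:

const char* kGradientVS = R"(
    #version 120
    varying vec3 vColor;
    void main()
    {
        vColor      = gl_Color.rgb;                              // per-vertex color
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";

const char* kGradientFS = R"(
    #version 120
    varying vec3 vColor;            // interpolated between the vertices
    void main()
    {
        gl_FragColor = vec4(vColor, 1.0);
    }
)";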
Put another way:
The vertex shader is part of the early steps in the graphics pipeline, somewhere between model coordinate transformation and polygon clipping, I think. At that point, nothing is really done yet.
However, the fragment/pixel shader is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
The vertex shader is run on every vertex, while the fragment shader is run on every pixel (fragment). The fragment shader is applied after the vertex shader.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of 3D implementations that do not use fixed-pipeline rendering. In any 3D rendering, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to implement that in the vertex shader. That is, any change to the geometry of the vertices is done in the vertex shader.
The fragment shader takes the output from the vertex shader and associates colors, the depth value of a pixel, etc. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, such as lighting calculations, can be performed in the vertex shader as well as the fragment shader, but the fragment shader generally provides a better (per-pixel) result than the vertex shader.
In rendering images via 3D hardware you typically have a mesh (points, polygons, lines), and these are defined by vertices. To manipulate vertices individually, typically for motion in a model or waves in an ocean, you use vertex shaders. These vertices can have a static colour or a colour assigned by textures; to manipulate vertex colours you use fragment shaders. At the end of the pipeline, when the view goes to the screen, you also use fragment shaders.
Could someone explain to me the basics of pixel and vertex shader interaction?
The obvious part is that vertex shaders receive the basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex->pixel transition happen? I know that obviously all types of pipelines include the rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on the given texture coordinates.
And as far as I understand, those are also interpolated (not quite sure about this point; I've heard something about complex UV derivative math, but I assume we can say that they are being interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? I mean, the pixel shader obviously does some actions "per pixel", but because of the non-obvious vertex->pixel transition this raises some questions.
Can I assume that if I evaluate a matrix-vector product once in my pixel shader, it would be evaluated just once when the image is rasterized? Or would it be better to evaluate everything that's possible in my vertex shader and then pass it on to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it actually doesn't matter, because the interaction should be pretty much the same everywhere. I'm developing visualization applications and games for desktops, using HLSL / GLSL / Nvidia Cg for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world space coordinates (or whichever other coordinate system it might be in) into screenspace coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.