I have a list of vertices and their arrangement into triangles as well as the per-triangle normalized normal vectors.
Ideally, I'd like to do as little work as possible in somehow converting the (triangle,normal) pairs into (vertex,vertex_normal) pairs that I can stick into my VAO. Is there a way for OpenGL to deal with the face normals directly? Or do I have to keep track of each face a given vertex is involved in (which more or less happens already when I calculate the index buffers) and then manually calculate the averaged normal at the vertex?
Also, is there a way to skip per-vertex normal calculation altogether and just find a way to inform the fragment shader of the face-normal directly?
Edit: I'm using something that should be portable to ES devices so the fixed-function stuff is unusable
I can't necessarily speak as to the latest full-fat OpenGL specifications but certainly in ES you're going to have to do the work yourself.
Although the normal was set modally under the old fixed pipeline, like just about everything else, it was still effectively attached to each vertex. If you opted for the flat shading model, GL would use the colour computed at a single provoking vertex across the entire face rather than interpolating it. There's no way to recreate that behaviour under ES.
Attributes are per vertex and uniforms are, at best, per batch. In ES there's no way to specify per-triangle properties, and there's no stage of the rendering pipeline where you have an overview of the geometry and could distribute them to each vertex individually. Each vertex is processed separately, varyings are interpolated, and then each fragment is processed separately.
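Concretely, a minimal GLSL ES sketch of that per-vertex approach (my own illustration, not code from the answer), assuming you duplicate vertices so each one carries its triangle's normal in an a_normal attribute, might look like this:

    // vertex shader (GLSL ES 1.00): pass the per-vertex normal on to the rasterizer
    attribute vec3 a_position;
    attribute vec3 a_normal;   // assumed attribute: the face normal stored per vertex
    uniform mat4 u_mvp;        // assumed uniform: combined model-view-projection matrix
    varying vec3 v_normal;

    void main() {
        v_normal = a_normal;
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }

    // fragment shader: since all three vertices of a triangle carry the same normal,
    // the interpolated value is constant across the face
    precision mediump float;
    varying vec3 v_normal;

    void main() {
        float diffuse = max(dot(normalize(v_normal), vec3(0.0, 0.0, 1.0)), 0.0);
        gl_FragColor = vec4(vec3(diffuse), 1.0);
    }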
Using OpenGL, is it possible to apply a fragment shader to a specified region around a single vertex (i.e. a GL point), rather than creating an array of quads and mapping a texture coordinate of a "duck" to each vertex?
I guess it would be more efficient as it would only require one vertex to be sent to the GPU per duck displayed, rather than the four vertices currently needed.
I have achieved a similar effect by using a geometry shader to build a quad around each vertex. Still, I am wondering if it is possible to achieve the same result without using a geometry shader.
I played around with gl_PointSize, but it is a limited feature, and I don't think it is the proper way to do it.
To summarize, I would like to know if OpenGL would allow filling a region around a single vertex using the fragment shader rather than explicitly creating a quad. Is that possible?
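For reference, the geometry-shader trick described above (expanding each incoming point into a quad) might be sketched like this; the u_halfSize uniform and the texture coordinate output are my own assumptions, not code from the question:

    #version 150 core
    // geometry shader (sketch): expand every incoming point into a quad
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform vec2 u_halfSize;   // assumed uniform: half width/height of the quad
    out vec2 g_uv;             // texture coordinate for the "duck" texture

    void main() {
        // offsets added in clip space; with a perspective projection you would
        // scale them by center.w to keep a constant on-screen size
        vec4 center = gl_in[0].gl_Position;
        g_uv = vec2(0.0, 0.0); gl_Position = center + vec4(-u_halfSize.x, -u_halfSize.y, 0.0, 0.0); EmitVertex();
        g_uv = vec2(1.0, 0.0); gl_Position = center + vec4( u_halfSize.x, -u_halfSize.y, 0.0, 0.0); EmitVertex();
        g_uv = vec2(0.0, 1.0); gl_Position = center + vec4(-u_halfSize.x,  u_halfSize.y, 0.0, 0.0); EmitVertex();
        g_uv = vec2(1.0, 1.0); gl_Position = center + vec4( u_halfSize.x,  u_halfSize.y, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }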
What is the final stage at which it is still possible to return the indices that were not clipped, culled, or occluded, and that are going to be rendered?
To answer the question asked, there isn't one. All vertex processing rendering stages happen before triangle clipping. As does transform feedback. And fragment shaders don't get vertex indices; they only get the per-vertex values from the vertex processing stage, after interpolation.
In theory, you could do something like this. Your VS outputs an integer index for the vertex, taken from gl_VertexID. You would need a GS that takes the three indices and packages them together into a flat uvec3. Each output vertex would be given the same values. And then, the fragment shader could get the uvec3 and write each of those indices out to a buffer via SSBO and an atomic counter.
Of course, you'll get the same index multiple times (assuming that triangles share indices). But you can do it.
It just doesn't serve much purpose. Rendering part of a mesh is a lot more trouble than it's worth. For performance, it's better to render either all of it or none of it, based on its visibility. And detecting that is best done via occlusion tests on a different, less complex shape.
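For completeness, a rough sketch of the idea described above (the names, bindings and layout are assumptions of mine, not a tested implementation):

    #version 430 core
    // ---- geometry shader: package the triangle's three vertex indices ----
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in int vsIndex[];            // written by the vertex shader as: vsIndex = gl_VertexID;
    flat out uvec3 triIndices;   // same value for every emitted vertex

    void main() {
        uvec3 idx = uvec3(vsIndex[0], vsIndex[1], vsIndex[2]);
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            triIndices  = idx;
            EmitVertex();
        }
        EndPrimitive();
    }

    // ---- fragment shader: append the surviving indices to a buffer ----
    #version 430 core
    layout(binding = 0) uniform atomic_uint writeCount;
    layout(std430, binding = 0) buffer VisibleIndices { uint indices[]; };

    flat in uvec3 triIndices;
    out vec4 fragColor;

    void main() {
        uint base = atomicCounterIncrement(writeCount) * 3u;
        indices[base + 0u] = triIndices.x;
        indices[base + 1u] = triIndices.y;
        indices[base + 2u] = triIndices.z;
        fragColor = vec4(1.0);
    }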
I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to go into at all how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, primitive assembly, tessellation and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are actually nothing more than programs that you drop-in to define the input and output of a particular stage, you should think of input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization and these are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
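For instance, a minimal sketch of that circle setup (assuming a pre-built unit-circle mesh; the uniform names are my own) could look like this:

    #version 330 core
    // ---- vertex shader: place and scale a unit-circle mesh with uniforms ----
    layout(location = 0) in vec2 aPosition;   // vertices of a unit circle (e.g. a triangle fan)

    uniform vec2  uCenter;   // where to draw this circle
    uniform float uRadius;   // how big to draw it

    void main() {
        gl_Position = vec4(aPosition * uRadius + uCenter, 0.0, 1.0);
    }

    // ---- fragment shader: colour the whole circle with a uniform ----
    #version 330 core
    uniform vec4 uColor;
    out vec4 fragColor;

    void main() {
        fragColor = uColor;
    }

Between draw calls you would update uCenter, uRadius and uColor (e.g. with glUniform2f, glUniform1f and glUniform4f) and then issue the same draw call again for each circle.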
You don't have shaders that draw circles (OK, you could with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and they are more specific than "drawing a circle".
Generally speaking, every time you make a draw call you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a Vertex Shader and a Fragment Shader. The resulting pipeline will be something like:
Vertex Shader: the code that is going to be executed for each of the vertices you send to OpenGL. It will be executed once for each index you send in the element array, and it will use as input data the corresponding vertex attributes, such as the vertex position, its normal, its uv coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access the uniform variables you set up for your draw call, which are global variables that are not going to change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position the rasterizer is going to use to create fragments. You can also have the vertex shader output other things (such as the uv coordinates, and the normals after you have dealt with their geometry), hand them to the rasterizer and use them later (see the sketch after this list).
Rasterization
Fragment Shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate diffuse and specular terms) and the uv coordinates (to fetch the right colors from the textures). The textures, and probably also the parameters of the lights you are evaluating, are going to be uniforms.
Depth Test, Stencil Test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test ).
Blending.
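A minimal sketch of the vertex/fragment pair described in the list above (the uniform and attribute names are placeholders of mine):

    #version 330 core
    // ---- vertex shader: transform the vertex and pass data to the rasterizer ----
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;
    layout(location = 2) in vec2 aUv;

    uniform mat4 uPvm;        // assumed: projection * view * model matrix
    uniform mat3 uNormalMat;  // assumed: matrix for transforming normals

    out vec3 vNormal;
    out vec2 vUv;

    void main() {
        vNormal = uNormalMat * aNormal;
        vUv = aUv;
        gl_Position = uPvm * vec4(aPosition, 1.0);
    }

    // ---- fragment shader: sample the texture and apply a simple diffuse term ----
    #version 330 core
    in vec3 vNormal;
    in vec2 vUv;

    uniform sampler2D uDiffuseTex;
    uniform vec3 uLightDir;   // assumed: normalized direction towards the light

    out vec4 fragColor;

    void main() {
        float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);
        vec3 base = texture(uDiffuseTex, vUv).rgb;
        fragColor = vec4(base * diffuse, 1.0);
    }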
I suggest you look at this nice program for developing simple shaders: http://sourceforge.net/projects/quickshader/ . It has very good examples, including some more advanced things you won't find in every tutorial.
Taking the standard OpenGL 4.0+ functions and specifications into consideration, I've seen that geometries and shapes can be created in one of two ways:
making use of VAOs and VBOs.
using shader programs.
Which one is the standard way of creating shapes? Are they consistent with each other, or are they two different ways of creating geometry and shapes?
Geometry is loaded into the GPU with VAOs and VBOs.
Geometry shaders produce new geometry based on the geometry you uploaded. Use them to make special effects like particles or shadows (shadow volumes) in a more efficient way.
Tessellation shaders serve to subdivide geometry for effects like displacement mapping.
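As a rough illustration of that tessellation point (the height map, matrix and input names are assumptions of mine), a tessellation evaluation shader for displacement mapping might look like:

    #version 400 core
    // tessellation evaluation shader (sketch): displace the tessellated surface
    layout(triangles, equal_spacing, ccw) in;

    uniform sampler2D uHeightMap;  // assumed displacement texture
    uniform mat4 uPvm;             // assumed projection * view * model matrix
    uniform float uScale;          // assumed displacement strength

    in vec3 tcPosition[];          // per-control-point data from the control shader
    in vec3 tcNormal[];
    in vec2 tcUv[];

    void main() {
        // blend the three control points with the barycentric tessellation coordinates
        vec3 p  = gl_TessCoord.x * tcPosition[0] + gl_TessCoord.y * tcPosition[1] + gl_TessCoord.z * tcPosition[2];
        vec3 n  = normalize(gl_TessCoord.x * tcNormal[0] + gl_TessCoord.y * tcNormal[1] + gl_TessCoord.z * tcNormal[2]);
        vec2 uv = gl_TessCoord.x * tcUv[0] + gl_TessCoord.y * tcUv[1] + gl_TessCoord.z * tcUv[2];

        // push the new vertex outward along the surface normal by the sampled height
        p += n * texture(uHeightMap, uv).r * uScale;
        gl_Position = uPvm * vec4(p, 1.0);
    }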
I strongly (like, really strongly) recommend reading this: http://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
VAOs and VBOs are about what geometry to draw (they specify the per-vertex data). Shader programs are about how to draw it (which program gets applied to each provided vertex, each fragment and so on).
Let's lay out the full facts.
Shaders need input. Without input that changes, every shader invocation will produce exactly the same values. That's how shaders work. When you issue a draw call, a number of shader invocations are launched. The only variables that will change from invocation to invocation within this draw call are in variables. So unless you use some sort of input, every shader will produce the same outputs.
However, that doesn't mean you absolutely need a VAO that actually contains things. It is perfectly legal (though there are some drivers that don't support it) to render with a VAO that doesn't have any attributes enabled (though you have to use array rendering, not indexed rendering). In which case, all user-defined inputs to the vertex shader (if any) will be filled in with context state, which will be constant.
The vertex shader does have some other, built-in per-vertex inputs generated by the system. Namely gl_VertexID. This is the index used by OpenGL to uniquely identify this particular vertex. It will be different for every vertex.
So you could, for example, fetch geometry data yourself based on this index through uniform buffers, buffer textures, or some other mechanism. Or you can procedurally generate vertex data based on the index. Or something else. You could pass that data along to tessellation shaders for them to tessellate the generated data. Or to geometry shaders to do whatever it is you want with those. However you want to turn that index into real data is up to you.
Here's an example from my tutorial series that generates vertex data from nothing more than an index.
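As an illustration of that idea (this is my own sketch, not the tutorial's code), a vertex shader that builds a triangle from nothing but gl_VertexID could look like:

    #version 330 core
    // vertex shader (sketch): no vertex attributes at all; the draw call is
    // glDrawArrays(GL_TRIANGLES, 0, 3) with an attribute-less VAO bound
    const vec2 positions[3] = vec2[3](
        vec2(-0.5, -0.5),
        vec2( 0.5, -0.5),
        vec2( 0.0,  0.5)
    );

    void main() {
        gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
    }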
I've seen that geometries and shapes can be created in one of two ways:
Not either. In modern OpenGL-4 you need both data and programs.
VBOs and VAOs do contain the raw geometry data. Shaders are the programs (usually executed on the GPU) that turn the raw data into pixels on the screen.
Vertex shaders can be used to displace vertices, or to generate them from a built-in formula and the vertex index, which is available as a built-in attribute in later OpenGL versions.
The difference between vertex and geometry shaders is that the vertex shader is a 1:1 mapping, while a geometry shader can create more vertices; this can be utilized for automatic level-of-detail generation, e.g. for NURBS or Perlin-noise-based terrains.
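As a small illustration of the displacement idea mentioned above (the wave parameters and uniform names are arbitrary choices of mine):

    #version 330 core
    // vertex shader (sketch): displace a flat grid with a sine wave
    layout(location = 0) in vec3 aPosition;

    uniform mat4 uPvm;     // assumed projection * view * model matrix
    uniform float uTime;   // assumed time in seconds

    void main() {
        vec3 p = aPosition;
        p.y += 0.1 * sin(4.0 * p.x + uTime) * cos(4.0 * p.z + uTime);
        gl_Position = uPvm * vec4(p, 1.0);
    }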
Is it possible to achieve flat shading in OpenGL when using glDrawElements to draw objects, and if so how? The ideal way would be to calculate a normal for each triangle only once, if possible.
The solution must only use the programmable pipeline (core profile).
There are indeed ways around this without duplicating vertices, with some limitations for each one (at least those I can think of with my limited OpenGL experience).
I can see two solutions that would give you a constant value for the normal over each triangle :
declare the input as flat in your shader and pick which vertex gives its value via glProvokingVertex; fast but you'll get the normal for one vertex as the normal for the whole triangle, which might not look right
use a geometry shader taking triangles and outputting triangles to calculate a single normal per face. This is the most flexible way, allowing you to control the resulting effect, but it might be slow (and requires geometry-shader-capable hardware, obviously); a short sketch follows
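A minimal sketch of that geometry-shader option (assuming the vertex shader forwards positions as vPosition in whatever space you want the normal in):

    #version 150 core
    // geometry shader (sketch): one normal per triangle, passed on as a flat output
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in vec3 vPosition[];        // positions from the vertex shader
    flat out vec3 gFaceNormal;  // constant over the whole triangle

    void main() {
        vec3 n = normalize(cross(vPosition[1] - vPosition[0], vPosition[2] - vPosition[0]));
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            gFaceNormal = n;
            EmitVertex();
        }
        EndPrimitive();
    }

The fragment shader then declares flat in vec3 gFaceNormal; which is the same mechanism the first option uses, just with a normal you computed yourself instead of the provoking vertex's value.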
Sadly, the only way to do that is to duplicate all your vertices, since attributes are per-vertex and not per-triangle.
When you think about it, this is what we did in immediate mode...