Is it possible to configure OpenGL so that vertex data is interpolated differently than OpenGL normally does? For example, I would like to try out logarithmic interpolation.
No. OpenGL's rasterizer only does linear (perspective-correct) interpolation across a primitive; it will not distort primitives for you.
But for other variables, such as the color, you can code your own interpolation in the fragment shader (FS).
In the VS you read the vertices, as usual. Let's suppose the goal is to pass the three vertices/colors/whatever of every triangle to the FS. In that case you need to read all three values for the triangle in every VS invocation.
That's pretty simple if you add more input variables to the VS. You can use extra buffers (repeating the data) or interleave them in the same buffer; your choice.
Do the usual matrix operations with them and set gl_Position from the vertex's own position as usual.
Set up the needed output/input variables between the VS and FS.
You will probably want to use the flat qualifier on them to avoid the default interpolation.
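As a concrete illustration, here is a minimal sketch of that idea. It assumes non-indexed GL_TRIANGLES (so gl_VertexID % 3 identifies the corner) and that each vertex carries the colors of all three corners of its triangle as duplicated attributes; every name (aPos, aColor0, uMVP and so on) is made up for the example.

// vertex shader (sketch)
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aColor0;   // color of corner 0, duplicated per vertex
layout(location = 2) in vec3 aColor1;   // color of corner 1
layout(location = 3) in vec3 aColor2;   // color of corner 2
uniform mat4 uMVP;
flat out vec3 c0;
flat out vec3 c1;
flat out vec3 c2;
out vec3 bary;                          // default interpolation yields barycentric weights
void main()
{
    gl_Position = uMVP * vec4(aPos, 1.0);
    c0 = aColor0; c1 = aColor1; c2 = aColor2;
    bary = vec3(0.0);
    bary[gl_VertexID % 3] = 1.0;        // (1,0,0), (0,1,0) or (0,0,1)
}

// fragment shader (sketch)
#version 330 core
flat in vec3 c0;
flat in vec3 c1;
flat in vec3 c2;
in vec3 bary;
out vec4 fragColor;
void main()
{
    vec3 w = log(1.0 + bary);           // warp the weights however you like
    w /= (w.x + w.y + w.z);             // renormalize so the weights sum to 1
    fragColor = vec4(w.x * c0 + w.y * c1 + w.z * c2, 1.0);
}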
EDIT:
My question was unclear at first, I'll try to rephrase it:
How do I use different shaders to do different rendering operations on the same mesh polygons? For example, I want to add lighting using one shader and add fog using another shader. I need to use the color interpolated from the first shader in the calculation of the second shader, but I don't know how to do that if I can't (or rather am not supposed to) pass the color buffer around between shaders.
Also (and that was where my question started), I need the same world-view-projection calculations for both shaders, so am I supposed to calculate them in every shader separately? Am I supposed to use one big shader for all my rendering operations?
Original question:
Say I have two different shader programs. The first one calculates the vertex positions in the vertex shader and does some operations in the fragment shader.
Let's say I want to use the fragment shader to do different calculations, but I still want to use the same vertex positions calculated by the first vertex shader. Do I have to calculate the vertex positions again or is there a way to share state between different shader programs?
You have several options:
multi pass
this one usually renders the geometry into depth and "color" buffers first, and then later passes use those as input textures while rendering a single rectangle covering the whole screen/view. Deferred shading is an example of this, but there are many other multi-pass effects that are not related to deferred shading. Here is an example of multi pass:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js?
In the first pass the planets, stars and other objects are rendered; in the second the atmosphere is added.
You can combine the passes either by blending or by direct rendering. Direct rendering requires that you render each pass to a texture and composite everything in the last one. Blending instead modifies the output color in each pass.
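A rough sketch of the render-to-texture variant (FBO/texture/shader creation is omitted; sceneFBO, drawScene, drawFullScreenQuad and friends are placeholder names):

// pass 1: render the scene into a texture through an FBO
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(sceneProgram);
drawScene();                                   // the normal geometry

// pass 2: draw a single full-screen rectangle that samples the result
glBindFramebuffer(GL_FRAMEBUFFER, 0);          // back to the default framebuffer
glUseProgram(postProgram);                     // e.g. the atmosphere shader
glBindTexture(GL_TEXTURE_2D, sceneColorTex);   // pass 1 output as input texture
drawFullScreenQuad();                          // one rectangle covering the view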
single pass
what you describe sounds more like you should encode the different shaders as functions within a single fragment shader... Yes, you can combine several shaders into a single one if they are compatible, and combine their results into the final output color.
A big shader is a performance hit, but I think it would still be faster than multiple passes doing the same work.
Take a look at this example:
Normal mapping gone horribly wrong
this one computes environmental reflection, lighting and geometry color, and combines them into a single output color.
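For the asker's lighting + fog case, a minimal single-pass sketch could keep each effect as its own function inside one fragment shader. All names (uLightDir, uFogColor, vViewDepth, ...) and the exact lighting/fog formulas here are illustrative:

#version 330 core
in vec3 vNormal;
in vec3 vColor;
in float vViewDepth;          // distance from the camera, written by the VS
out vec4 fragColor;

uniform vec3  uLightDir;      // direction the light travels, normalized, same space as vNormal
uniform vec3  uFogColor;
uniform float uFogDensity;

vec3 applyLighting(vec3 baseColor, vec3 n)
{
    float diff = max(dot(normalize(n), -uLightDir), 0.0);
    return baseColor * (0.2 + 0.8 * diff);        // ambient + diffuse
}

vec3 applyFog(vec3 litColor, float depth)
{
    float f = exp(-uFogDensity * depth);          // exponential fog factor
    return mix(uFogColor, litColor, clamp(f, 0.0, 1.0));
}

void main()
{
    vec3 c = applyLighting(vColor, vNormal);      // the "first shader"
    c = applyFog(c, vViewDepth);                  // the "second shader", reusing c
    fragColor = vec4(c, 1.0);
}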
Exotic shaders
There are also exotic shaders that work around the pipeline limitations, like this one:
Reflection and refraction impossible without recursive ray tracing?
These are used for things that are commonly believed to be impossible to implement in the GL/GLSL pipeline. Anyway, if the limitations are too binding, you can still use a compute shader...
I built a 2D graphical engine, and I created a batching system for it, so if I have 1000 sprites with the same texture, I can draw them with one single call to OpenGL.
This is achieved by putting the vertices of all the sprites that share a texture into a single VBO vertex array.
Instead of "print these vertices, print these vertices, print these vertices", I do "put all the vertices together, print" - just to be very clear.
Easy enough, but now I'm trying to achieve the same thing in 3D, and I'm having a big problem.
The problem is that I'm using a Model View Projection matrix to place and render my models, which is the common approach to render a model in 3D space.
For each model on screen, I need to pass the MVP matrix to the shader, so that I can use it to transform each vertex to the correct position.
If I did the transformation outside the shader, it would be executed by the CPU, which is not a good idea, for obvious reasons.
But the problem lies there. I need to pass the matrix to the shader, but for each model the matrix is different.
So I cannot do the same thing I did with 2D sprites, because changing a shader uniform forces a separate draw call every time.
I hope I've been clear; maybe you have a good idea I didn't have, or you already had the same problem. I know for a fact that there is a solution somewhere, because in engines like Unity you can use the same shader for multiple models and still get away with one draw call.
There exists a feature exactly like what you're looking for, and it's called instancing. With instancing, you store n matrices (or whatever else you need) in a Uniform Buffer and call glDrawElementsInstanced to draw n copies. In the shader, you get an extra input gl_InstanceID, with which you index into the Uniform Buffer to fetch the matrix you need for that particular instance.
You can read more about instancing here: https://www.opengl.org/wiki/Vertex_Rendering#Instancing
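A minimal vertex-shader sketch of that setup; the block layout and the array size of 256 are assumptions, so pick whatever fits your GL_MAX_UNIFORM_BLOCK_SIZE:

#version 330 core
layout(location = 0) in vec3 aPos;

layout(std140) uniform PerInstance
{
    mat4 uMVP[256];                      // one matrix per instance
};

void main()
{
    // gl_InstanceID selects this copy's matrix; on the CPU side the whole
    // batch is issued with a single call such as
    // glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0, instanceCount);
    gl_Position = uMVP[gl_InstanceID] * vec4(aPos, 1.0);
}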
The answer depends on whether the vertex data for each item is identical or not. If it is, you can use instancing as in #orost's answer, using glDrawElementsInstanced, and gl_InstanceID within the vertex shader, and that method should be preferred.
However, if each 3D model requires different vertex data (which is frequently the case), you can still render them using a single draw call. To do this, you would add another stream into your vertex data with glVertexAttribPointer (and glEnableVertexAttribArray). This extra stream would contain the index of the matrix within the uniform buffer that vertex should use when rendering - so each mesh within the VBO would have an identical index in the extra stream. The uniform buffer contains the same data as in the instancing setup.
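A sketch of the vertex-shader side of that approach; the attribute locations, the choice of an integer attribute (fed with glVertexAttribIPointer rather than glVertexAttribPointer) and the array size are assumptions for illustration:

#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in int  aMatrixIndex;   // same value for every vertex of a mesh

layout(std140) uniform PerObject
{
    mat4 uMVP[256];                          // one matrix per batched object
};

void main()
{
    gl_Position = uMVP[aMatrixIndex] * vec4(aPos, 1.0);
}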
Note this method may require some extra CPU processing if you need to redo the batching - for example, when an object within a batch should no longer be rendered. If that happens frequently, you should measure whether batching the items is actually beneficial.
Besides instancing and adding another vertex attribute as some object ID, I'd like to also mention another strategy (which requires modern OpenGL, though):
The extension ARB_multi_draw_indirect (in core since GL 4.3) adds indirect drawing commands. These commands do source their parameters (number of vertices, starting index and so on) directly from another buffer object. With these functions, many different objects can be drawn with a single draw call.
However, as you still want some per-object state like transformation matrices, that feature alone is not enough. But in combination with ARB_shader_draw_parameters (in core since GL 4.6), you get the gl_DrawID parameter, which is incremented by one for each single object in one multi draw indirect call. That way, you can index into some UBO, or TBO, or SSBO (or whatever) where you store per-object data.
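A rough sketch of the CPU side (the command struct follows the layout defined for indirect element draws; buffer setup and the per-object data upload are omitted, and objectCount is a placeholder):

typedef struct {
    GLuint count;          // number of indices for this sub-draw
    GLuint instanceCount;  // usually 1
    GLuint firstIndex;     // offset into the shared index buffer
    GLint  baseVertex;     // offset into the shared vertex buffer
    GLuint baseInstance;   // can double as a per-object ID
} DrawElementsIndirectCommand;

// ... fill one command per object and upload the array into a buffer
// bound to GL_DRAW_INDIRECT_BUFFER ...

glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                            (const void*)0,    // offset into the indirect buffer
                            objectCount, 0);   // stride 0 = tightly packed commands

// In the vertex shader, gl_DrawID (from ARB_shader_draw_parameters) then
// indexes the UBO/SSBO that holds the per-object matrices.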
I have a list of vertices and their arrangement into triangles as well as the per-triangle normalized normal vectors.
Ideally, I'd like to do as little work as possible in somehow converting the (triangle,normal) pairs into (vertex,vertex_normal) pairs that I can stick into my VAO. Is there a way for OpenGL to deal with the face normals directly? Or do I have to keep track of each face a given vertex is involved in (which more or less happens already when I calculate the index buffers) and then manually calculate the averaged normal at the vertex?
Also, is there a way to skip per-vertex normal calculation altogether and just find a way to inform the fragment shader of the face-normal directly?
Edit: I'm using something that should be portable to ES devices so the fixed-function stuff is unusable
I can't necessarily speak as to the latest full-fat OpenGL specifications but certainly in ES you're going to have to do the work yourself.
Although the normal was modal state under the old fixed pipeline, like just about everything else, it was still attached to each vertex. If you opted for the flat shading model, GL would use the colour of one designated (provoking) vertex across the entire face rather than interpolating it. There's no way to recreate that behaviour under ES.
Attributes are per vertex and uniforms are - at best - per batch. In ES there's no way to specify per-triangle properties, and there's no stage of the rendering pipeline with an overview of the geometry where you could distribute them to each vertex individually. Each vertex is processed separately, varyings are interpolated, and then each fragment is processed separately.
Is it possible to achieve flat shading in OpenGL when using glDrawElements to draw objects, and if so how? The ideal way would be to calculate a normal for each triangle only once, if possible.
The solution must only use the programmable pipeline (core profile).
There are indeed ways around this without duplicating vertices, with some limitations for each one (at least those I can think of with my limited OpenGL experience).
I can see two solutions that would give you a constant value for the normal over each triangle :
declare the input as flat in your shader and pick which vertex gives its value via glProvokingVertex; fast but you'll get the normal for one vertex as the normal for the whole triangle, which might not look right
use a geometry shader taking triangles and outputting triangles to calculate a single normal per face (see the sketch below). This is the most flexible way, allowing you to control the resulting effect, but it might be slow (and requires geometry-shader-capable hardware, obviously)
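A minimal geometry-shader sketch of that second option; the input varying vWorldPos is assumed to come from your vertex shader:

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in  vec3 vWorldPos[];     // world-space position from the vertex shader (assumed name)
flat out vec3 gNormal;    // same value for every fragment of the triangle

void main()
{
    // one face normal from the triangle's edges
    vec3 n = normalize(cross(vWorldPos[1] - vWorldPos[0],
                             vWorldPos[2] - vWorldPos[0]));
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gNormal = n;
        EmitVertex();
    }
    EndPrimitive();
}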
Sadly, the only way to do that is to duplicate all your vertices, since attributes are per-vertex and not per-triangle
When you think about it, this is what we did in immediate mode...
I'm trying to do bilinear color interpolation on a quad. I succeeded with the help of my previous question here, but it has bad performance because it requires me to repeat glBegin()/glEnd() and call glUniform() four times before each glBegin().
The question is: is it anyhow possible to apply bilinear color interpolation on a quad like this:
glBegin(GL_QUADS);
glColor4f(...); glVertexAttrib2f(uv, 0, 0); glTexCoord2f(...); glVertex3f(...);
glColor4f(...); glVertexAttrib2f(uv, 1, 0); glTexCoord2f(...); glVertex3f(...);
glColor4f(...); glVertexAttrib2f(uv, 1, 1); glTexCoord2f(...); glVertex3f(...);
glColor4f(...); glVertexAttrib2f(uv, 0, 1); glTexCoord2f(...); glVertex3f(...);
... // here can be any amount of quads without repeating glBegin()/glEnd()
glEnd();
To do this, I think I should somehow access the neighbouring vertex colors, but how? Or are there any other solutions for this?
I need this to work this way so I can easily switch between different interpolation shaders.
Any other solution that works with one glBegin() command is good too, but sending all corner colors per vertex isn't acceptable, unless that's the only solution here?
Edit: The example code uses immediate mode for clarity only. Even with vertex arrays/buffers the problem would be the same: I would have to split the rendering calls into four-vertex chunks, which causes the whole speed drop here!
Long story short: You cannot do this with a vertex shader.
The interpolator (or rasterizer) is one of the components of the graphics pipeline that is not programmable. Given how the graphics pipe works, neither a vertex shader nor a fragment shader is allowed access to anything but its own vertex (or fragment, respectively), for reasons of speed, simplicity, and parallelism.
The workaround is to use a texture lookup, which has already been noted in previous answers.
In newer versions of OpenGL (core since 3.2) there is the concept of a geometry shader. Geometry shaders are more complicated to implement than the relatively simple vertex and fragment shaders, but geometry shaders are given topological information. That is, they execute on a primitive (triangle, line, quad, etc.) rather than a single point. With that information, they could create additional geometry in order to realise your alternate color interpolation method.
However, that's far more complicated than necessary. I'd stick with a 4-texel texture map and implement your logic in the fragment shader's texture lookup.
Under the hood, OpenGL (and all the hardware that it drives) will do everything as triangles, so if you choose to blend colors via vertex interpolation, it will be triangular interpolation because the hardware doesn't work any other way.
If you want "quad" interpolation, you should put your colors into a texture, because in hardware a texture is always "quad" shaped.
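A sketch of that texture-based approach, assuming a 2x2 GL_LINEAR-filtered texture whose four texels hold the corner colors and a uv varying like the one already in the question (the sampler and varying names are made up):

// fragment shader (old-style GLSL sketch)
varying vec2 vUV;                 // the (0..1, 0..1) quad coordinate
uniform sampler2D uCornerColors;  // 2x2 texture, one texel per corner, GL_LINEAR filtering
void main()
{
    // remap so the quad corners hit the centers of the four texels,
    // then the texture unit performs the bilinear blend for free
    vec2 t = vUV * 0.5 + vec2(0.25);
    gl_FragColor = texture2D(uCornerColors, t);
}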
If you really think it's the number of draw calls that causes your performance drop, you can try instancing (using glDrawArraysInstanced + glVertexAttribDivisor); glDrawArraysInstanced has been core since GL 3.1 and glVertexAttribDivisor since GL 3.3.
An alternative might be point sprites, depending on your usage model (mostly the maximum size of your quads, and whether they always face the view). Those have been available since GL 2.0 core.
Linear interpolation with colours specified per vertex can be set up efficiently using glColorPointer. Similarly you should use glTexCoordPointer/glVertexAttribPointer/glVertexPointer to replace all those individual per-vertex calls with a single call referencing the data in an array. Then render all your quads with a single (or at most a handful of) glDrawArrays or glDrawElements call. You'll see a huge improvement from this even without VBOs (which just change where the arrays are stored).
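A sketch of that replacement, using legacy client-side arrays to match the question's immediate-mode code; the arrays and quadCount are placeholders:

/* one array per attribute, one draw call for all quads */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, vertices);   /* 4 * quadCount positions  */
glColorPointer(4, GL_FLOAT, 0, colors);      /* one RGBA per vertex      */
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

glDrawArrays(GL_QUADS, 0, 4 * quadCount);    /* all quads in one call    */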
You mention you want to change shaders (between ShaderA and ShaderB say) on a quad by quad basis. You should either:
Arrange things so you can batch all of the ShaderA quads together and all the ShaderB quads together and render all of each together with a single call. Changing shader is generally quite expensive so you want to minimise the number of changes.
or
Implement all the different shader logic you want in a single "unified" shader, selected by another vertex attribute which chooses between the different code paths (a sketch follows below). Whether this is anywhere near as efficient as the batching approach (which is preferable) depends on whether each "tile" of SIMD shader invocations tends to run a mixture of paths or just one.
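A sketch of that unified-shader option: the selector attribute is passed through from the vertex shader as a flat varying, and the two example paths plus all names are purely illustrative.

#version 330 core
flat in int  vShaderSelect;   // 0 = "ShaderA" path, 1 = "ShaderB" path, set per vertex
in vec4 vColor;
in vec2 vUV;
uniform sampler2D uTex;
out vec4 fragColor;
void main()
{
    if (vShaderSelect == 0)
        fragColor = vColor;                        // plain vertex-colored path
    else
        fragColor = texture(uTex, vUV) * vColor;   // textured path
}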