I have an OpenGL shader program which renders a cube. To colour the cube, I pass the normal of each vertex to the vertex shader, and calculate its greyscale shade with respect to a point light source.
However, I now want to also render a red triangle, whose colour is always red and does not depend on lighting. But if I just pass the normal to the vertex shader as before, the triangle's colour will be affected by the light.
What is the best solution for this? Should I calculate the vertex colour before the shaders, and pass that to the vertex shader? Or is that bad practice?
There are two main options:
Use one shader program that is flexible enough to handle both cases.
For a shader that applies basic lighting, it is common to pass in values (typically as uniforms) that determine the weight of the ambient and diffuse terms in the lighting equation. With a shader like this, if you want a solid color for part of your objects, you simply crank the ambient term all the way up by setting the uniform accordingly (a sketch of such a shader follows the discussion below).
Use different shader programs.
Each one has benefits, and you have to figure out which works best for you.
The main downside of approach 1 is that your shader might do more work than needed. In this example, it will still evaluate the diffuse term of the lighting equation for the solid objects, even though it does not contribute to the final result. If you draw a lot of solid geometry, that could hurt performance.
The main downside of approach 2 is that you have to switch shaders. If you frequently switch between solid and lighted rendering, that can hurt performance. One way to work around this is that you first draw all lighted objects, then all solid objects, so that you have to switch shaders only once per frame. Depending on your software architecture, that may be easy to do, or it could require significant restructuring.
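As a rough illustration of approach 1, the fragment stage of such a flexible shader might look something like this. This is only a minimal sketch; the uniform and varying names are made up.

#version 330 core

in vec3 vNormal;               // interpolated normal from the vertex shader
in vec3 vLightDir;             // direction towards the light, computed per vertex

uniform vec3  uColor;          // base color of the object
uniform float uAmbientWeight;  // 1.0 = fully ambient (solid color), 0.0 = fully lit
uniform float uDiffuseWeight;

out vec4 fragColor;

void main()
{
    float diffuse = max(dot(normalize(vNormal), normalize(vLightDir)), 0.0);
    float shade   = uAmbientWeight + uDiffuseWeight * diffuse;
    fragColor = vec4(uColor * shade, 1.0);
}

For the red triangle you would set uColor to red, uAmbientWeight to 1.0 and uDiffuseWeight to 0.0; for the cube you would use a small ambient weight and a non-zero diffuse weight.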
Create another shader program that outputs a fixed color. Since there are two types of rendering you want to do, it is better to separate them accordingly.
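For example, the fragment shader of the fixed-color program can be as small as this (a sketch; uColor would be set with glUniform3f before the draw call):

#version 330 core

uniform vec3 uColor;   // e.g. (1.0, 0.0, 0.0) for the red triangle
out vec4 fragColor;

void main()
{
    fragColor = vec4(uColor, 1.0);
}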
I'm working on a simple 2D game engine. I'd like to support a variety of light types, like the URP in Unity. Lighting would be calculated according to the normal and specular maps attached to the sprite.
Because of the different geometry of the light types, it would be ideal to defer the lighting calculations and use a dedicated shader for them. However, this would limit the user in the following ways:
No transparent textures, as they would override the normal and specular data, so the sprites behind would appear unlit.
No stylized lighting (like a toon shader) for individual sprites.
To resolve these issues I tried implementing something like Godot's approach. In a Godot shader, one can write a light function that is called per pixel for every light in range. Now I have two shaders per sprite: one that outputs normal and specular information to an intermediate framebuffer, and a light shader that is run on the geometry of the light and outputs to the screen.
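For reference, the first of those two passes looks roughly like this (a minimal sketch, assuming a framebuffer with two color attachments; the names are placeholders):

#version 330 core

in vec2 vUV;

uniform sampler2D uNormalMap;
uniform sampler2D uSpecularMap;

// written to an intermediate framebuffer with two color attachments
layout(location = 0) out vec4 outNormal;
layout(location = 1) out vec4 outSpecular;

void main()
{
    outNormal   = texture(uNormalMap, vUV);
    outSpecular = texture(uSpecularMap, vUV);
}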
The problem is that this method decreases performance significantly, because I change buffers and shaders twice for every sprite, and the number of draw calls per frame also doubles.
Is there a way to decrease this method's overhead?
Or is there any other solution to this problem that I missed?
I am learning to make a graphics engine with OpenGL. I wanted to know: should repetitive operations be moved from the fragment shader to the vertex shader, since from what I understand the vertex shader is only run once per vertex?
For instance, when normalizing a vector for the light direction: since the light is the same across the entire mesh, should that be done in the vertex shader instead of calculating it for every pixel? Is there a particular reason to keep it in the fragment shader?
If the calculation is exactly the same: yes, it should usually be more efficient to do it in the vertex shader than the fragment shader. Some situations where it might not be more efficient:
when drawing geometry that results in fewer shaded pixels than transformable vertices -- either due to dense geometry or extreme discards/occlusion. If this is the case, usually you would want to address it by switching to lower level-of-detail geometry or smarter geometry culling.
when doing the calculation in the vertex shader requires you to send more data to the fragment shader in order to use the calculation's results. Sending more data can be slower because it requires more memory manipulation and because the rasterizer needs to interpolate more "varying" values across each polygon.
For light calculations, specifically, be mindful that moving calculations from the fragment shader to the vertex shader can affect the quality of your rendering. Particularly, normalized direction vectors at each vertex can become shorter after "varying" interpolation, which can slightly darken triangle interiors if used directly without renormalization. And, of course, moving the entire lighting calculation to the vertex shader has even more drastic effects.
But how visible these effects are depends on the frequency of textures, the resolution of geometry, the size on screen, how far away the lights are, etc. -- in some cases, the quality/performance tradeoff may make sense.
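As an illustration of the renormalization point above, a minimal sketch assuming a single directional light (the names here are made up):

#version 330 core
// (Vertex shader side, not shown: it writes something like
//     vNormal   = uNormalMatrix * aNormal;
//     vLightDir = normalize(uLightDirection);
// once per vertex.)
in vec3 vNormal;
in vec3 vLightDir;

uniform vec3 uColor;
out vec4 fragColor;

void main()
{
    // Interpolation can shorten the interpolated vectors, so renormalize
    // before the dot product to avoid slightly darkened triangle interiors.
    float diffuse = max(dot(normalize(vNormal), normalize(vLightDir)), 0.0);
    fragColor = vec4(uColor * diffuse, 1.0);
}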
So, I have a simple project right now. Basically it's just a bunch of cuboids that are all axis-aligned, so it has really simple geometry.
Anyway, I am considering adding a better shader to it. Currently I am using the "flat shader" that is a stock shader in GLShaderManager. It is coloring everything with a flat color. However, I would love it if I could build a shader like the following.
Basically I want a shader that has an array of point lights at various positions with varying intensities.
Probably defined like this.
struct Light {
float x;
float y;
float z;
float intensity;
};
Light Lighting[20];
And basically, based on the level geometry and lights, I would love to simulate basic lighting and shadows; also it would be cool to have a circle under the player (like the player is actually there).
How hard would this be to make? How would I pass it my level geometry and light array? (Note: even though each cuboid is its own QUADS batch, it will be easy to make any kind of variable that stores the data.)
I am using GLEW, GLTools, GLShaderManager, GLBatch, Visual Studio 2010, and probably GLSL.
If you could let me know how complicated a shader like this would be, that would be great. Also, if it is easy to find a shader that works like this online, please link it.
Also, what is the difference between the two types of shaders (vertex and fragment)?
I would say it's relatively simple, but the thing about modern GL is that the initial learning curve is quite steep. At first it seems like you have to roll up your sleeves and learn how to do everything (essentially true), but later it starts to feel like things are easier than ever before, with much more predictable behavior, since you're in the driver's seat.
One of the first things you want to learn is how to pass data from the CPU to the GPU. For values which don't vary on a per-vertex or per-fragment basis, such as your light positions and intensities, you want uniforms. Check out examples using the glUniform* functions to see how this is done. This will let you experiment, passing values from the CPU side to the GPU side and seeing how they affect the shader, which will accelerate your learning.
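For example, the light array from the question could be exposed to the shader as uniforms roughly like this (a sketch; the exact layout is up to you):

struct Light {
    vec3  position;
    float intensity;
};

// Set from the CPU with glGetUniformLocation and glUniform3f / glUniform1f,
// e.g. glGetUniformLocation(program, "uLights[0].position").
uniform Light uLights[20];
uniform int   uLightCount;   // how many entries of the array are actually in use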
After that, it's worth learning how direct lighting is computed given a ray bouncing off a surface with phong shading, separating ambient, diffuse, and specular terms.
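Continuing the sketch above, a fragment shader could then loop over the lights and accumulate a simple ambient + diffuse result along these lines (the attenuation and names are assumptions, not a recipe you have to follow):

in vec3 vWorldPos;   // interpolated from the vertex shader
in vec3 vNormal;

uniform vec3 uBaseColor;

out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);
    float light = 0.1;                       // small ambient term
    for (int i = 0; i < uLightCount; ++i)
    {
        vec3  toLight = uLights[i].position - vWorldPos;
        float dist    = length(toLight);
        float diffuse = max(dot(n, toLight / dist), 0.0);
        light += uLights[i].intensity * diffuse / (1.0 + dist * dist);
    }
    fragColor = vec4(uBaseColor * light, 1.0);
}

Specular and shadows would come on top of this, but this is the core of a multi-light loop.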
Later you might even want to store this light data into an environment map. That'll give you the ability to use as many lights as you want without affecting the speed of the shader.
About vertex vs. fragment shaders: vertex shaders compute things on a vertex-by-vertex basis, including data for the fragment shader to then use. The fragment shader is kind of like a pixel shader (in HLSL it is actually called a 'pixel shader'). It deals with shading what's in between those vertices and operates on a pixel-by-pixel basis (though with some potential overdraw). Often for lighting, the real heart of the logic will be in the fragment shader, while the vertex shader serves as an intermediary step that computes all the relevant values for the fragment shader to interpolate and use. The vertex shader is part of the 3D geometry pipeline, while the fragment shader operates on the 2D rasterized output.
It shouldn't take too long or be too hard to get the hang of this, but you want to approach it slowly, in baby steps. There's a lot of setup work involved in establishing a lighting/shading pipeline with the precise characteristics you want, so for the final version it pays to plan ahead. It's a good idea to set up a separate scrap project and start experimenting to figure out how things work.
I'm trying to develop a high level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the Geometry shader exists. Both the Tessellation and Geometry shaders seem to do the same thing to me. Can someone explain to me what does the Geometry shader do different from the tessellation shader that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example you can render cube shadow maps for point lights in a single pass by duplicating and projecting geometry 6 times to each of the cube's faces.
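For example, the layered-rendering case might look like this geometry shader sketch (assuming six precomputed view-projection matrices, one per cube face; writing gl_Layer routes each copy of the triangle to the corresponding face):

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;

uniform mat4 uShadowMatrices[6];   // one view-projection matrix per cube face

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face;           // select the cube-map face to render into
        for (int i = 0; i < 3; ++i)
        {
            gl_Position = uShadowMatrices[face] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}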
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/
I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to go into at all how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, primitive assembly, tessellation and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are actually nothing more than programs that you drop in to define the input and output of a particular stage, you should think of the input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization, and these are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
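A minimal sketch of that circle setup (the uniform names here are made up; each circle is drawn by updating the two uniforms and issuing the draw call again):

// Vertex shader
#version 330 core
layout(location = 0) in vec2 aPosition;   // vertices of a unit circle (e.g. a triangle fan)
uniform mat4 uTransform;                  // per-circle scaling and translation
void main()
{
    gl_Position = uTransform * vec4(aPosition, 0.0, 1.0);
}

// Fragment shader
#version 330 core
uniform vec4 uColor;                      // per-circle color
out vec4 fragColor;
void main()
{
    fragColor = uColor;
}

Between draw calls you only change the uniforms (with glUniformMatrix4fv and glUniform4f), not the shader program itself.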
You don't have shaders that draw circles (OK, you could with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and they are more specific than "drawing a circle".
Generally speaking, every time you make a draw call you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a vertex shader and a fragment shader. The resulting pipeline will be something like the following (a minimal vertex/fragment pair is sketched after the list):
Vertex Shader: the code that is going to be executed for each of the vertices you send to OpenGL. It will be executed for each index you send in the element array, and it will use as input data the corresponding vertex attributes, such as the vertex position, its normal, its uv coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access uniform variables you set up for your draw call, which are global variables that do not change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position which the rasterizer is going to use to create fragments. You can also have the vertex shader output other things (such as the uv coordinates, and the normals after you have transformed them), pass them to the rasterizer and use them later.
Rasterization
Fragment Shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and light calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate diffuse and specular terms) and the uv coordinates (to fetch the right colors from the textures). The textures, and probably also the parameters of the lights you are evaluating, are going to be uniforms.
Depth test, stencil test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test ).
Blending.
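Putting the two programmable stages together, a bare-bones vertex/fragment pair in the spirit of the list above might look like this (a sketch; the attribute and uniform names are only examples):

// Vertex shader: runs once per vertex, writes gl_Position and values to be interpolated
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aUV;
uniform mat4 uPVM;          // projection * view * model
out vec3 vNormal;
out vec2 vUV;
void main()
{
    vNormal = aNormal;      // (a real shader would transform this into the right space)
    vUV = aUV;
    gl_Position = uPVM * vec4(aPosition, 1.0);
}

// Fragment shader: runs once per fragment, samples the texture and applies a simple diffuse term
#version 330 core
in vec3 vNormal;
in vec2 vUV;
uniform sampler2D uTexture;
uniform vec3 uLightDir;     // direction towards the light
out vec4 fragColor;
void main()
{
    float diffuse = max(dot(normalize(vNormal), normalize(uLightDir)), 0.0);
    fragColor = texture(uTexture, vUV) * diffuse;
}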
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.