Flexible lighting method in 2D (using OpenGL)

I'm working on a simple 2D game engine. I'd like to support a variety of light types, like the URP in Unity. Lighting would be calculated according to the normal and specular maps attached to the sprite.
Because of the different geometry of the light types, it would be ideal to defer the lighting calculations and use a dedicated shader for them. However, this would limit the user in the following ways:
No transparent textures, as they would override the normal and specular data, so the sprites behind would appear unlit.
No stylized lighting (like a toon shader) for individual sprites.
To resolve these issues I tried implementing something like Godot's approach. In a Godot shader, one can write a light function that is called per pixel for every light in range. Now I have two shaders per sprite: one that outputs normal and specular information to an intermediate framebuffer, and a light shader that is run on the geometry of the light and outputs to the screen.
The problem is that this method decreases performance significantly, because I change buffers and shaders twice for every sprite, and the number of draw calls per frame has also doubled.
Is there a way to decrease this method's overhead?
Or is there any other solution to this problem that I missed?
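For reference, this is roughly the kind of per-light fragment shader I mean; it is only a sketch, and all the uniform names are placeholders rather than my actual code:

    #version 330 core
    // Runs on the light's geometry; reads the intermediate buffers written by the sprite pass.
    uniform sampler2D uNormalBuffer;    // placeholder name for the normal attachment
    uniform sampler2D uSpecularBuffer;  // placeholder name for the specular attachment
    uniform vec2  uScreenSize;
    uniform vec2  uLightPos;            // light position in screen space
    uniform vec3  uLightColor;
    uniform float uLightRadius;

    out vec4 fragColor;

    void main() {
        vec2 uv      = gl_FragCoord.xy / uScreenSize;
        vec3 normal  = normalize(texture(uNormalBuffer, uv).xyz * 2.0 - 1.0);
        float spec   = texture(uSpecularBuffer, uv).r;

        vec2 toLight = uLightPos - gl_FragCoord.xy;
        float atten  = max(0.0, 1.0 - length(toLight) / uLightRadius);
        vec3 l       = normalize(vec3(toLight, 1.0));

        float diff   = max(dot(normal, l), 0.0);
        float shine  = spec * pow(max(dot(normal, normalize(l + vec3(0.0, 0.0, 1.0))), 0.0), 16.0);

        // Each light is drawn with additive blending, so contributions accumulate on screen.
        fragColor = vec4(uLightColor * (diff + shine) * atten, 1.0);
    }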

Related

OpenGL vertex colours with and without illumination

I have an OpenGL shader program which renders a cube. To colour the cube, I pass the normal of each vertex to the vertex shader, and calculate its greyscale shade with respect to a point light source.
However, I now want to also render a red triangle, whose colour is always red and does not depend on lighting. But if I just pass the normal to the vertex shader as before, the triangle's colour will be affected by the light.
What is the best solution for this? Should I calculate the vertex colour before the shaders, and pass that to the vertex shader? Or is that bad practice?
There are two main options:
Use one shader program that is flexible enough to handle both cases.
For a shader that applies basic lighting, it is common to pass in values (typically as uniforms) that determine the weight of the ambient and diffuse terms in the lighting equation. With a shader like this, if you want a solid color for part of your objects, you simply crank the ambient term all the way up by setting the uniform accordingly (a sketch of this is shown at the end of this answer).
Use different shader programs.
Each one has benefits, and you have to figure out which works best for you.
The main downside of approach 1 is that your shader might do more work than needed. In this example, it will still evaluate the diffuse term of the lighting equation for the solid objects, even though it does not contribute to the final result. If you draw a lot of solid geometry, that could hurt performance.
The main downside of approach 2 is that you have to switch shaders. If you frequently switch between solid and lit rendering, that can hurt performance. One way to work around this is to first draw all lit objects, then all solid objects, so that you have to switch shaders only once per frame. Depending on your software architecture, that may be easy to do, or it could require significant restructuring.
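As an illustration of option 1, a fragment shader with ambient/diffuse weights might look like this minimal sketch (the uniform names are made up, and the light is treated as directional for brevity):

    #version 330 core
    in vec3 vNormal;            // interpolated normal from the vertex shader
    uniform vec3  uColor;       // base color of the object
    uniform vec3  uLightDir;    // normalized direction towards the light
    uniform float uAmbient;     // weight of the ambient term
    uniform float uDiffuse;     // weight of the diffuse term
    out vec4 fragColor;

    void main() {
        float ndotl = max(dot(normalize(vNormal), uLightDir), 0.0);
        // uAmbient = 1.0, uDiffuse = 0.0  ->  flat, unlit color (the red triangle)
        // uAmbient = 0.1, uDiffuse = 0.9  ->  the lit, greyscale-shaded geometry
        float intensity = clamp(uAmbient + uDiffuse * ndotl, 0.0, 1.0);
        fragColor = vec4(uColor * intensity, 1.0);
    }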
Create another shader program that uses a fixed color as the output. Since there are two types of rendering you want to do, it is better to separate them accordingly.

Multiple light sources & implementation details around GLSL normal mapping shader

The first issue is how to get from a single light source to using multiple light sources, without using more than one fragment shader.
My instinct is that each run through of the shader calculations needs light source coordinates, and maybe some color information, and we can just run through the calculations in a loop for n light sources.
How do I pass the multiple lights into the shader program? Do I use an array of uniforms? My guess would be to pass in an array of uniforms with the coordinates of each light source, specify how many light sources there are, and set a maximum value.
Can I call getter or setter methods on a shader program, instead of just manipulating the globals?
I'm using this tutorial and the libGDX implementation to learn how to do this:
https://gist.github.com/mattdesl/4653464
There are many methods for handling multiple light sources. I'll point out the three most commonly used.
1) Specify each light source in an array of uniform structures. The light calculations are done in a shader loop over all active lights, accumulating the result into a single vertex color or fragment color, depending on whether shading is done per vertex or per fragment. (This is how fixed-function OpenGL calculated multiple lights.) A sketch of this approach follows the list below.
2) Multipass rendering with a single light source enabled per pass. In the simplest form, the passes can be composited with additive blending (srcFactor=ONE, dstFactor=ONE). Don't forget to change the depth func after the first pass from GL_LESS to GL_EQUAL, or simply use GL_LEQUAL for all passes.
3) Many environment-lighting algorithms mimic multiple light sources by assuming they are at infinite distance from your scene. The simplest renderer stores the light intensities in an environment texture (preferably a cubemap); the shader's job is then to sample this texture several times in directions around the surface normal, with some random angular offsets.
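A sketch of option 1, with the lights passed as an array of uniform structs plus an active count (the struct layout and names here are illustrative, not taken from the linked tutorial):

    #version 330 core
    #define MAX_LIGHTS 8

    struct Light {
        vec3  position;   // in the same space as vWorldPos
        vec3  color;
        float radius;
    };

    uniform Light uLights[MAX_LIGHTS];
    uniform int   uLightCount;      // how many array entries are actually in use

    in vec3 vNormal;
    in vec3 vWorldPos;
    out vec4 fragColor;

    void main() {
        vec3 n = normalize(vNormal);
        vec3 result = vec3(0.0);
        for (int i = 0; i < uLightCount; ++i) {
            vec3  toLight = uLights[i].position - vWorldPos;
            float atten   = max(0.0, 1.0 - length(toLight) / uLights[i].radius);
            float ndotl   = max(dot(n, normalize(toLight)), 0.0);
            result += uLights[i].color * ndotl * atten;   // accumulate each light's contribution
        }
        fragColor = vec4(result, 1.0);
    }

On the application side each member is set individually, e.g. via the uniform location of "uLights[0].position", "uLights[0].color" and so on, while uLightCount tells the loop how many of the MAX_LIGHTS slots are valid.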

Difference between tessellation shaders and Geometry shaders

I'm trying to develop a high-level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the geometry shader exists. Both the tessellation and geometry shaders seem to do the same thing to me. Can someone explain what the geometry shader does differently from the tessellation shader that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information, so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
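As a rough illustration of what "variable subdivision" means, a minimal tessellation pair for quad patches might look like this (GL 4 level GLSL; the names and the way the level is chosen are placeholders):

    // --- tessellation control shader: choose how finely to subdivide each patch ---
    #version 400 core
    layout(vertices = 4) out;

    uniform float uTessLevel;   // e.g. chosen per frame or per patch by the application

    void main() {
        gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
        if (gl_InvocationID == 0) {
            gl_TessLevelOuter[0] = uTessLevel;
            gl_TessLevelOuter[1] = uTessLevel;
            gl_TessLevelOuter[2] = uTessLevel;
            gl_TessLevelOuter[3] = uTessLevel;
            gl_TessLevelInner[0] = uTessLevel;
            gl_TessLevelInner[1] = uTessLevel;
        }
    }

    // --- tessellation evaluation shader: position the vertices the tessellator generated ---
    #version 400 core
    layout(quads, equal_spacing, ccw) in;

    void main() {
        // Bilinearly interpolate the four patch corners at this tessellated coordinate.
        vec4 a = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x);
        vec4 b = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x);
        gl_Position = mix(a, b, gl_TessCoord.y);
    }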
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input, but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too; that works great with transform feedback to save the resulting mesh.
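For instance, a geometry shader that expands each input point into a small screen-facing particle quad could look roughly like this (a sketch, not a drop-in implementation):

    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform float uSize;        // half-size of the particle quad in clip space (ignores aspect ratio)
    out vec2 gTexCoord;

    void main() {
        vec4 center = gl_in[0].gl_Position;   // the input point, already in clip space
        vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                 vec2(-1.0,  1.0), vec2(1.0,  1.0));
        // Emit the four corners of a quad as a single triangle strip.
        for (int i = 0; i < 4; ++i) {
            gl_Position = center + vec4(corners[i] * uSize, 0.0, 0.0);
            gTexCoord   = corners[i] * 0.5 + 0.5;
            EmitVertex();
        }
        EndPrimitive();
    }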
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example you can render cube shadow maps for point lights in a single pass by duplicating and projecting geometry 6 times to each of the cube's faces.
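A sketch of that single-pass cube shadow map idea: the geometry shader duplicates each triangle six times and writes gl_Layer to pick the cubemap face (it assumes the cubemap is attached as a layered framebuffer and that the application supplies one view-projection matrix per face):

    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 18) out;   // 6 faces * 3 vertices

    uniform mat4 uFaceViewProj[6];   // one view-projection matrix per cubemap face

    void main() {
        for (int face = 0; face < 6; ++face) {
            for (int i = 0; i < 3; ++i) {
                gl_Layer    = face;   // route this copy of the triangle to that cube face
                gl_Position = uFaceViewProj[face] * gl_in[i].gl_Position;
                EmitVertex();
            }
            EndPrimitive();
        }
    }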
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/

Understanding the shader workflow in OpenGL?

I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to go into at all how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, primitive assembly, tessellation and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are actually nothing more than programs that you drop-in to define the input and output of a particular stage, you should think of input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization and these are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
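A minimal sketch of that circle setup (all identifiers are made up): the vertex shader places and scales a shared unit-circle mesh per draw call, and the fragment shader reads a color uniform:

    // --- vertex shader: place and scale a shared unit-circle mesh ---
    #version 330 core
    layout(location = 0) in vec2 aPos;   // unit-circle vertices, reused by every draw call

    uniform vec2  uCenter;   // where to put this circle, in clip space
    uniform float uRadius;   // how big this circle is

    void main() {
        gl_Position = vec4(aPos * uRadius + uCenter, 0.0, 1.0);
    }

    // --- fragment shader: fill with a per-circle color ---
    #version 330 core
    uniform vec4 uColor;     // set before each draw call
    out vec4 fragColor;

    void main() {
        fragColor = uColor;
    }

Between draw calls you only update the uniforms (e.g. glUniform2f, glUniform1f, glUniform4f) and issue the same draw with the same unit-circle vertex buffer; the one program handles every circle.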
You don't have shaders that draw circles (OK, you might with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and are more specific than "drawing a circle".
Generally speaking, every time you make a draw call you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a Vertex Shader and a Fragment Shader. The resulting pipeline will be something like this:
Vertex Shader: the code that is going to be executed for each of the vertices you send to OpenGL. It will be executed for each index you sent in the element array, and it will use as input data the corresponding vertex attributes, such as the vertex position, its normal, its uv coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access the uniform variables you set up for your draw call, which are global variables that are not going to change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position the rasterizer is going to use to create fragments. You can also have the vertex shader output other things (such as the uv coordinates, and the normals after you have dealt with their geometry), hand them to the rasterizer and use them later.
Rasterization
Fragment Shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate the diffuse and specular terms) and the uv coordinates (to fetch the right colors from the textures). The textures are going to be uniforms, and probably also the parameters of the lights you are evaluating. (A minimal sketch of such a vertex/fragment pair is shown right after this list.)
Depth Test, Stencil Test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test).
Blending.
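As referenced above, a minimal vertex/fragment pair of that shape might look like this sketch (attribute and uniform names are made up):

    // --- vertex shader: runs once per vertex ---
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;
    layout(location = 2) in vec2 aUV;

    uniform mat4 uPVM;         // projection * view * model, the same for the whole draw call
    uniform mat3 uNormalMat;   // transforms normals into the space used for lighting

    out vec3 vNormal;
    out vec2 vUV;

    void main() {
        vNormal = uNormalMat * aNormal;   // handed to the rasterizer and interpolated
        vUV     = aUV;
        gl_Position = uPVM * vec4(aPosition, 1.0);
    }

    // --- fragment shader: runs once per fragment produced by the rasterizer ---
    #version 330 core
    in vec3 vNormal;
    in vec2 vUV;

    uniform sampler2D uDiffuseTex;
    uniform vec3 uLightDir;    // normalized direction towards a single directional light

    out vec4 fragColor;

    void main() {
        vec3  albedo = texture(uDiffuseTex, vUV).rgb;
        float diff   = max(dot(normalize(vNormal), uLightDir), 0.0);
        fragColor    = vec4(albedo * (0.1 + 0.9 * diff), 1.0);   // small ambient term plus diffuse
    }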
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.

OpenGL: Using shaders to create vertex lighting by using pre-calculated colormap?

First of all, I have very little knowledge of what shaders can do, and I am very interested in doing vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position in the world, and also to interpolate the color by using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could either manually map every drawn vertex to the corresponding color from the colormap...
...or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself, but a shader could perhaps do this for me?
The question is: is this possible, and if it is, what do I need to know to make it possible?
Edit: Note that I also need to update the lightmap efficiently, regardless of the size of the lightmap, so the update should affect only the specific part of the lightmap I want to update.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way though: instead of just affecting the vertices, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
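A rough sketch of that lookup done in a vertex shader, assuming the six face textures are bound as a single cubemap and that vertex-shader texture fetches are available (all names are made up):

    #version 330 core
    layout(location = 0) in vec3 aPosition;

    uniform mat4 uModel;               // model-to-world transform
    uniform mat4 uViewProj;            // projection * view
    uniform vec3 uWorldCenter;         // center of the virtual lighting cube
    uniform samplerCube uLightCube;    // the six pre-rendered face textures as one cubemap

    out vec3 vLightColor;

    void main() {
        vec3 worldPos = (uModel * vec4(aPosition, 1.0)).xyz;
        // The ray from the cube's center through this vertex picks the lighting texel.
        vec3 dir = normalize(worldPos - uWorldCenter);
        vLightColor = textureLod(uLightCube, dir, 0.0).rgb;   // explicit LOD: no derivatives in a vertex shader
        gl_Position = uViewProj * vec4(worldPos, 1.0);
    }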
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
Another way to deal with performance issues when using a lot of light sources is to use deferred rendering, as it will only do lighting calculations on the geometry that is actually visible.
That is possible, but it will not be efficient on current hardware.
You want to render the light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volumes have to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader)
Branch in geometry shader directly from a single draw call
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will the result be fast enough for complex scenes.
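For example, the instanced variant (option 2 above) could route each instance to a different slice roughly like this, assuming the 3D texture is attached as a layered framebuffer (a sketch, not a tested implementation):

    // --- vertex shader: one instance per slice of the 3D texture ---
    #version 330 core
    layout(location = 0) in vec2 aPos;    // cross-section geometry of the light volume
    flat out int vLayer;

    void main() {
        vLayer = gl_InstanceID;            // which slice this instance targets
        gl_Position = vec4(aPos, 0.0, 1.0);
    }

    // --- geometry shader: routes the primitive to that slice ---
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;
    flat in int vLayer[];

    void main() {
        for (int i = 0; i < 3; ++i) {
            gl_Layer    = vLayer[0];       // select the 3D-texture layer
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }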