Is this a good way to render multiple lights in OpenGL?

I am currently programming a graphics renderer in OpenGL by following several online tutorials. I've ended up with an engine which has a rendering pipeline which basically consists of rendering an object using a simple Phong Shader. My Phong Shader has a basic vertex shader which modifies the vertex based on a transformation and a fragment shader which looks something like this:
// PhongFragment.glsl
uniform DirectionalLight dirLight;
...
vec3 calculateDirLight() { /* Calculates Directional Light using the uniform */ }
...
void main() {
gl_FragColor = vec4(calculateDirLight(), 1.0);
}
The actual drawing of my object looks something like this:
// Render a Mesh
bindPhongShader();
setPhongShaderUniform(transform);
setPhongShaderUniform(directionalLight1);
mesh->draw(); // glDrawElements using the Phong Shader
This technique works well, but it has the obvious downside that I can only have one directional light, unless I use uniform arrays. I could do that, but I wanted to see what other solutions were available (mostly because I don't want to declare a large array of lights in the shader and leave most of it empty), and I stumbled on this one, which seems really inefficient, but I'm not sure. It basically involves redrawing the mesh once for every light, like so:
// New Render
bindBasicShader(); // just transforms vertices, and sets the frag color to white.
setBasicShaderUniform(transform); // Set transformation uniform
mesh->draw();
// Enable Blending so that all light contributions are added up...
bindDirectionalShader();
setDirectionalShaderUniform(transform); // Set transformation uniform
setDirectionalShaderUniform(directionalLight1);
mesh->draw(); // Draw the mesh using the directionalLight1
setDirectionalShaderUniform(directionalLight2);
mesh->draw(); // Draw the mesh using the directionalLight2
setDirectionalShaderUniform(directionalLight3);
mesh->draw(); // Draw the mesh using the directionalLight3
This seems terribly inefficient to me, though. Aren't I redrawing all the mesh geometry over and over again? I have implemented this and it does give me the result I was looking for, multiple directional lights, but the frame rate has dropped considerably. Is this a stupid way of rendering multiple lights, or is it on par with using shader uniform arrays?

For forward rendering engines where lighting is handled in the same shader as the main geometry processing, the only really efficient way of doing this is to generate lots of shaders which can cope with the various combinations of light source, light count, and material under illumination.
In your case you would have one shader for 1 light, one for 2 lights, one for 3 lights, etc. It's a combinatorial nightmare in terms of number of shaders, but you really don't want to send all of your meshes multiple times (especially if you are writing games for mobile devices - geometry is very bandwidth heavy and sucks power out of the battery).
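One common way to generate those per-light-count variants from a single source file is to splice a `#define` in just after the `#version` line before compiling. A minimal sketch (names like `NUM_DIR_LIGHTS` and `vNormal` are illustrative, not from the question):

```glsl
#version 330 core
// The application injects e.g. "#define NUM_DIR_LIGHTS 2" here,
// producing one compiled shader per light count actually used.

struct DirectionalLight {
    vec3 direction; // pointing from the light toward the scene
    vec3 color;
};

uniform DirectionalLight dirLights[NUM_DIR_LIGHTS];
uniform vec3 materialDiffuse;

in vec3 vNormal;
out vec4 fragColor;

void main() {
    vec3 n = normalize(vNormal);
    vec3 result = vec3(0.0);
    // The loop bound is a compile-time constant, so the driver can
    // unroll it; no dead array slots are uploaded or iterated.
    for (int i = 0; i < NUM_DIR_LIGHTS; ++i) {
        float ndotl = max(dot(n, -dirLights[i].direction), 0.0);
        result += ndotl * dirLights[i].color * materialDiffuse;
    }
    fragColor = vec4(result, 1.0);
}
```

Since you only compile the variants your scene actually needs, the combinatorial explosion is bounded by the light counts you really use.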
The other common approach is a deferred lighting scheme. These schemes store albedo, normals, material properties, etc. into a "geometry buffer" (e.g. a set of multiple-render-target FBO attachments), and then apply lighting after the fact as a set of post-processing operations. The complex geometry is sent once, with the resulting data stored in the MRT + depth render targets as a set of texture data. The lighting is then applied as a set of basic geometry (typically spheres or 2D quads), using the depth texture as a means to clip and cull light sources, and the other MRT attachments to compute the lighting intensity and color. It's a bit of a long topic for an SO post, but there are lots of good presentations around on the web from GDC and SIGGRAPH.
Basic idea outlined here:
https://en.wikipedia.org/wiki/Deferred_shading
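To make the geometry pass concrete, the fragment shader that fills the G-buffer can be as small as this (a sketch; the attachment layout and variable names are illustrative):

```glsl
#version 330 core
// Writes to two color attachments of an FBO; depth is written as usual.
layout(location = 0) out vec4 gAlbedo;
layout(location = 1) out vec4 gNormal;

in vec3 vNormal;
in vec2 vUv;
uniform sampler2D albedoTex;

void main() {
    gAlbedo = texture(albedoTex, vUv);
    // Pack the normal from [-1,1] into [0,1] for storage in the texture.
    gNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```

Note that no lighting happens here; every light is applied later by reading these textures back, so the mesh is only drawn once regardless of light count.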

Related

Flexible lighting method in 2D (using OpenGL)

I'm working on a simple 2D game engine. I'd like to support a variety of light types, like the URP in Unity. Lighting would be calculated according to the normal and specular maps attached to the sprite.
Because of the different geometry of the light types, it would be ideal to defer the lighting calculations and use a dedicated shader for them. However, this would limit the user in the following ways:
No transparent textures, as they would override the normal and specular data, so the sprites behind would appear unlit.
No stylized lighting (like a toon shader) for individual sprites.
To resolve these issues I tried implementing something like Godot's approach. In a Godot shader, one can write a light function that is called per pixel for every light in range. Now I have two shaders per sprite: one that outputs normal and specular information to an intermediate framebuffer, and a light shader that is run on the geometry of the light and outputs to the screen.
The problem is that this method decreases performance significantly, because I change buffers and shaders twice for every sprite, and the number of draw calls per frame has also doubled.
Is there a way to decrease this method's overhead?
Or is there another solution to this problem that I missed?

OpenGL Lighting Shader

I can't understand the concept of smaller shaders in OpenGL. How does it work? For example: do I need to create one shader for positioning an object in space and then another shader for lighting, or what? Could someone explain this to me? Thanks in advance.
This is a very complex topic, especially since your question isn't very specific. First, there are various shader stages (vertex shader, pixel shader, and so on). A shader program consists of different shader stages, at least a vertex and a pixel shader (except for compute shader programs, which each consist of a single compute shader). The vertex shader calculates the position of the points on screen, so this is where objects are moved. The pixel shader calculates the color of each pixel that is covered by the geometry your vertex shader produced. Now, in terms of lighting, there are different ways of doing it:
Forward Shading
This is the straightforward way, where you simply calculate the lighting in the pixel shader of the same shader program that moves the objects. This is the oldest and easiest way of calculating lighting. However, its abilities are very limited.
Deferred Shading
For ages, this has been the go-to variant in games. Here, you have one shader program (vertex + pixel shader) that renders the geometry into one or more textures (so it moves the objects, but instead of saving the lit color it stores things like the base color and surface normals), and then another shader program that renders a quad on screen for each light you want to render. The pixel shader of this second program reads the information previously written to the textures by the first program and uses it to render the lit objects into another texture (which then becomes the final image). In contrast to forward shading, this allows (in theory) any number of lights in the scene, and it makes shadow maps easier to use.
Tiled/Clustered Shading
This is a rather new and very complex way of calculating lighting that can be built on top of deferred or forward shading. It basically uses compute shaders to build an acceleration structure on the GPU, which is then used to draw huge numbers of lights very fast. This allows thousands of lights to be rendered in a scene in real time, but using shadow maps for all of these lights is very hard, and the algorithm is far more complex than the previous ones.
Writing smaller shaders means separating some of your shader functionality into other files. If you are writing a big shader which contains lighting algorithms, antialiasing algorithms, and other shader computations, you can split them into smaller shader files (light.glsl, fxaa.glsl, and so on) and link these files into your main shader file (the one which contains the void main() function), since in OpenGL a vertex array can only have one shader program (a composition of vertex shader, fragment shader, geometry shader, etc.) bound during the rendering pipeline.
How you split up your shaders also depends on your rendering algorithm (forward rendering, deferred rendering, or forward+ rendering).
It's important to note that compiling a lot of shaders will increase total shader compilation time, and a big shader with a lot of uniforms will also slow things down.
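Splitting a shader across files works because GLSL allows several compilation units per stage: each file is compiled as its own shader object, and all of them are attached to one program before linking. A minimal sketch (file names and the `applyLight` function are made up for illustration):

```glsl
// --- light.glsl --- compiled as a fragment shader object, no main():
vec3 applyLight(vec3 normal, vec3 albedo) {
    vec3 lightDir = normalize(vec3(0.5, 1.0, 0.3)); // illustrative light
    return albedo * max(dot(normal, lightDir), 0.0);
}

// --- main.frag --- declares a prototype; the definition is resolved
// at glLinkProgram time from light.glsl:
vec3 applyLight(vec3 normal, vec3 albedo);

in vec3 vNormal;
out vec4 fragColor;

void main() {
    fragColor = vec4(applyLight(normalize(vNormal), vec3(1.0)), 1.0);
}
```

On the C/C++ side, both files are compiled with glCompileShader as GL_FRAGMENT_SHADER objects and attached to the same program with glAttachShader before glLinkProgram; unresolved function calls are stitched together at link time.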

Multiple light sources & implementation details around GLSL normal mapping shader

The first issue, is how to get from a single light source, to using multiple light sources, without using more than one fragment shader.
My instinct is that each run through of the shader calculations needs light source coordinates, and maybe some color information, and we can just run through the calculations in a loop for n light sources.
How do I pass the multiple lights into the shader program? Do I use an array of uniforms? My guess would be to pass in an array of uniforms with the coordinates of each light source, then specify how many light sources there are, with some maximum value.
Can I call getter or setter methods for a shader program? Instead of just manipulating the globals?
I'm using this tutorial and the libGDX implementation to learn how to do this:
https://gist.github.com/mattdesl/4653464
There are many methods to have multiple light sources. I'll point 3 most commonly used.
1) Specify each light source in an array of uniform structures. Light calculations are done in a shader loop over all active lights, accumulating the result into a single vertex color or fragment color, depending on whether shading is done per vertex or per fragment. (This is how fixed-function OpenGL calculated multiple lights.)
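A sketch of approach 1 in a fragment shader (struct layout and names are illustrative):

```glsl
struct Light {
    vec3 position;
    vec3 color;
};

const int MAX_LIGHTS = 8;
uniform Light lights[MAX_LIGHTS];
uniform int numLights; // number of array entries actually in use

vec3 shade(vec3 fragPos, vec3 normal, vec3 albedo) {
    vec3 result = vec3(0.0);
    // Accumulate the diffuse contribution of every active light.
    for (int i = 0; i < numLights; ++i) {
        vec3 toLight = normalize(lights[i].position - fragPos);
        result += albedo * lights[i].color * max(dot(normal, toLight), 0.0);
    }
    return result;
}
```

From the application you set each member individually, e.g. query `glGetUniformLocation(program, "lights[0].position")` and upload with `glUniform3fv`; only `numLights` entries need valid data.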
2) Multipass rendering with a single light source enabled per pass. In the simplest form, passes can be composited with additive blending (srcFactor=ONE, dstFactor=ONE). Don't forget to change the depth func after the first pass from GL_LESS to GL_EQUAL, or simply use GL_LEQUAL for all passes.
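The GL state for approach 2 might look like this, in the same pseudocode style as the question (`drawSceneWithLight` is hypothetical; the gl* calls are real):

```cpp
glDepthFunc(GL_LEQUAL);        // repeated passes must still pass the depth test
glDisable(GL_BLEND);
drawSceneWithLight(lights[0]); // first pass writes depth and the base color

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // srcFactor=ONE, dstFactor=ONE: pure addition
glDepthMask(GL_FALSE);         // depth is already written; skip redundant writes
for (int i = 1; i < lightCount; ++i)
    drawSceneWithLight(lights[i]); // each pass adds one light's contribution
glDepthMask(GL_TRUE);
```

Disabling depth writes on the additive passes is optional but avoids needless bandwidth, since the depth buffer already holds the final values after pass one.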
3) Many environment lighting algorithms mimic multiple light sources by assuming they are at infinite distance from your scene. The simplest renderer stores light intensities in an environment texture (preferably a cubemap); the shader's job is then to sample this texture several times in directions around the surface normal, with some random angular offsets.

Questions Deferred Shading

I just have some questions about deferred shading. I have gotten to the point where I have the Color, Position, Normal and textures from the multiple render targets. My questions pertain to what I do next. To make sure that I have read the correct data from the textures, I have put a plane on the screen and rendered the textures onto it. What I don't understand is how to manipulate those textures so that the final output is shaded with lighting. Do I need to render a plane or a quad that takes up the screen and apply all the calculations to it? If I do that, I am confused about how I would get multiple lights to work this way, since the "plane" would be a renderable object, so for each light I would need to re-render the plane. Am I thinking of this incorrectly?
You need to render some geometry to represent the area covered by the light(s). The lighting term for each pixel of the light is accumulated into a destination render target. This gives you your lit result.
There are various ways to do this. To get up and running, a simple / easy (and hellishly slow) method is to render a full-screen quad for each light.
Basically:
Setup: Render all objects into the g-buffer, storing the various object properties (albedo, specular, normals, depth, whatever you need)
Lighting: For each light:
Render some geometry to represent the area the light is going to cover on screen
Sample the g-buffer for the data you need to calculate the lighting contribution (you can use the vpos register to find the uv)
Accumulate the lighting term into a destination render target (the backbuffer will do nicely for simple cases)
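The per-light full-screen pass in the steps above could be sketched like this (uniform and attachment names are made up; a directional light is used for simplicity):

```glsl
#version 330 core
// Samples the g-buffer and outputs one light's contribution; additive
// blending (ONE, ONE) accumulates it into the destination target.
uniform sampler2D gAlbedo;
uniform sampler2D gNormal;   // normals stored packed into [0,1]
uniform vec3 lightDir;       // toward the light, in gNormal's space
uniform vec3 lightColor;

in vec2 vUv;
out vec4 fragColor;

void main() {
    vec3 albedo = texture(gAlbedo, vUv).rgb;
    vec3 n = normalize(texture(gNormal, vUv).rgb * 2.0 - 1.0);
    fragColor = vec4(albedo * lightColor * max(dot(n, lightDir), 0.0), 1.0);
}
```

Each light is then just one more draw of the quad with different uniforms; the scene geometry itself is never re-rendered.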
Once you've got this working, there's loads of different ways to speed it up (scissor rect, meshes that tightly bound the light, stencil tests to avoid shading 'floating' regions, multiple lights drawn at once and higher level techniques such as tiling).
There are a lot of different slants on deferred shading these days, but the original technique is covered thoroughly here: http://http.download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_Deferred_Shading.pdf

Difference between tessellation shaders and Geometry shaders

I'm trying to develop a high level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the Geometry shader exists. Both the Tessellation and Geometry shaders seem to do the same thing to me. Can someone explain to me what does the Geometry shader do different from the tessellation shader that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information, so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example you can render cube shadow maps for point lights in a single pass by duplicating and projecting geometry 6 times to each of the cube's faces.
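The cube shadow map example mentioned above can be sketched as a geometry shader that duplicates each triangle six times and routes each copy to a cube face via gl_Layer (the `faceViewProj` uniform is an assumption, one view-projection matrix per face):

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out; // 6 faces * 3 vertices

uniform mat4 faceViewProj[6]; // per-cube-face view-projection matrices

void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face; // route this primitive to cube face 'face'
        for (int v = 0; v < 3; ++v) {
            gl_Position = faceViewProj[face] * gl_in[v].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

With a cube map depth texture attached as a layered framebuffer attachment, this renders all six shadow map faces in a single pass.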
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/