Implementing ambient lighting in a deferred renderer? - opengl

I recently added deferred shading to my engine and came across a technique called "light volumes". It is great because it reduces the lighting computation to a minimum (only the fragments inside a light volume are shaded), but I cannot figure out how to render the rest of the scene with ambient lighting!
I get the following scene without ambient lighting: (the light volume has been highlighted in gray)
Of course, I could always render a fullscreen quad, but I would lose the benefit of this technique.
Any suggestions?
Edit: I finally got it to work thanks to Nicol :) Here is a new picture:

You do the ambient lighting in a separate pass, just like you do with lights in general in deferred rendering. That is the general idea: each light happens in its own pass, and you accumulate the results into the framebuffer with additive blending.
The ambient light is simply considered another light source.
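For concreteness, here is a minimal sketch of such an accumulation loop. The FBO, shader and helper names (lightAccumFbo, ambientProgram, drawFullscreenQuad, drawLightVolume, and so on) are assumptions made for the sketch, not part of the question's engine:

// Assumed: the G-buffer has already been filled in a prior geometry pass.
glBindFramebuffer(GL_FRAMEBUFFER, lightAccumFbo); // hypothetical accumulation target
glClear(GL_COLOR_BUFFER_BIT);

glDisable(GL_DEPTH_TEST);     // lighting passes only read the G-buffer depth
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);  // additive: final color = sum of all light passes

// Pass 1: ambient "light" - one fullscreen quad whose shader outputs
// ambientColor * albedo sampled from the G-buffer.
glUseProgram(ambientProgram);             // hypothetical shader
drawFullscreenQuad();                     // hypothetical helper

// Passes 2..N: one pass per light volume, touching only covered fragments.
glUseProgram(pointLightProgram);          // hypothetical shader
for (const PointLight& light : pointLights) {
    setPointLightUniforms(pointLightProgram, light); // hypothetical helper
    drawLightVolume(light);                          // sphere/cone mesh
}

glDisable(GL_BLEND);

The single ambient quad costs one full-screen pass, but all the per-light work still runs only on the fragments each volume covers.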

Related

Flexible lighting method in 2D (using OpenGL)

I'm working on a simple 2D game engine. I'd like to support a variety of light types, like the URP in Unity. Lighting would be calculated according to the normal and specular maps attached to the sprite.
Because of the different geometry of the light types, it would be ideal to defer the lighting calculations and use a dedicated shader for them. However, this would limit the user in the following ways:
No transparent textures, as they would override the normal and specular data, so the sprites behind would appear unlit.
No stylized lighting (like a toon shader) for individual sprites.
To resolve these issues I tried implementing something like Godot's approach. In a Godot shader, one can write a light function that is called per pixel for every light in range. Now I have two shaders per sprite: one that outputs normal and specular information to an intermediate framebuffer, and a light shader that is called on the geometry of the light and outputs to the screen.
The problem is that this method decreases performance significantly, because I change buffers and shaders twice for every sprite, and the number of draw calls per frame has also doubled.
Is there a way to decrease this method's overhead?
Or is there another solution to this problem that I missed?

Is this a good way to render multiple lights in OpenGL?

I am currently programming a graphics renderer in OpenGL by following several online tutorials. I've ended up with an engine whose rendering pipeline basically consists of rendering each object using a simple Phong shader. My Phong shader has a basic vertex shader, which transforms each vertex, and a fragment shader which looks something like this:
// PhongFragment.glsl
uniform DirectionalLight dirLight;
...
vec3 calculateDirLight() { /* Calculates Directional Light using the uniform */ }
...
void main() {
    gl_FragColor = vec4(calculateDirLight(), 1.0);
}
The actual drawing of my object looks something like this:
// Render a Mesh
bindPhongShader();
setPhongShaderUniform(transform);
setPhongShaderUniform(directionalLight1);
mesh->draw(); // glDrawElements using the Phong Shader
This technique works well, but has the obvious downside that I can only have one directional light unless I use uniform arrays. I could do that, but I wanted to see what other solutions were available (mostly since I don't want to declare an array for some large number of lights in the shader and have most of them be empty), and I stumbled on this one, which seems really inefficient, but I am not sure. It basically involves redrawing the mesh every single time with a new light, like so:
// New Render
bindBasicShader(); // just transforms vertices, and sets the frag color to white.
setBasicShaderUniform(transform); // Set transformation uniform
mesh->draw();
// Enable additive blending so that all light contributions are added up,
// e.g. glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ONE); glDepthFunc(GL_EQUAL);
bindDirectionalShader();
setDirectionalShaderUniform(transform); // Set transformation uniform
setDirectionalShaderUniform(directionalLight1);
mesh->draw(); // Draw the mesh using the directionalLight1
setDirectionalShaderUniform(directionalLight2);
mesh->draw(); // Draw the mesh using the directionalLight2
setDirectionalShaderUniform(directionalLight3);
mesh->draw(); // Draw the mesh using the directionalLight3
This seems terribly inefficient to me, though. Aren't I redrawing all the mesh geometry over and over again? I have implemented this and it does give me the result I was looking for, multiple directional lights, but the frame rate has dropped considerably. Is this a stupid way of rendering multiple lights, or is it on par with using shader uniform arrays?
For forward rendering engines where lighting is handled in the same shader as the main geometry processing, the only really efficient way of doing this is to generate lots of shaders which can cope with the various combinations of light source, light count, and material under illumination.
In your case you would have one shader for 1 light, one for 2 lights, one for 3 lights, etc. It's a combinatorial nightmare in terms of number of shaders, but you really don't want to send all of your meshes multiple times (especially if you are writing games for mobile devices - geometry is very bandwidth heavy and sucks power out of the battery).
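One common way to keep that manageable is to generate the variants from a single source by injecting the light count as a preprocessor define and compiling one program per count. A rough sketch (the helper names here are illustrative, not from the answer):

#include <string>

// Sketch: build one program per light count from one fragment source that
// loops over NUM_LIGHTS; with a small compile-time count the loop unrolls cheaply.
GLuint buildLightingProgram(const std::string& fragSource, int numLights)
{
    std::string full = "#version 330 core\n"
                       "#define NUM_LIGHTS " + std::to_string(numLights) + "\n"
                       + fragSource;

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    const char* src = full.c_str();
    glShaderSource(fs, 1, &src, nullptr);
    glCompileShader(fs);
    // ... compile the vertex shader, link, and check the info logs as usual ...
    return linkWithVertexShader(fs); // hypothetical helper
}

At draw time you pick the prebuilt program whose NUM_LIGHTS matches the number of lights affecting the mesh, so the geometry is still only submitted once.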
The other common approach is a deferred lighting scheme. These schemes store albedo, normals, material properties, etc. into a "Geometry Buffer" (e.g. a set of multiple-render-target FBO attachments), and then apply lighting after the fact as a set of post-processing operations. The complex geometry is sent once, with the resulting data stored in the MRT+depth render targets as a set of texture data. The lighting is then applied as a set of basic geometry (typically spheres or 2D quads), using the depth texture as a means to clip and cull light sources, and the other MRT attachments to compute the lighting intensity and color. It's a bit of a long topic for an SO post - but there are lots of good presentations around on the web from GDC and SIGGRAPH.
Basic idea outlined here:
https://en.wikipedia.org/wiki/Deferred_shading
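As a rough illustration of the "Geometry Buffer" setup described above (the formats and the two-attachment layout are just one plausible choice, not something the answer prescribes):

// Sketch: a G-buffer FBO with albedo, normal and depth attachments.
const int width = 1280, height = 720;   // example resolution
GLuint gBufferFbo, gAlbedo, gNormal, gDepth;
glGenFramebuffers(1, &gBufferFbo);
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);

glGenTextures(1, &gAlbedo);
glBindTexture(GL_TEXTURE_2D, gAlbedo);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // same for the other targets
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gAlbedo, 0);

glGenTextures(1, &gNormal);
glBindTexture(GL_TEXTURE_2D, gNormal);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);

glGenTextures(1, &gDepth);
glBindTexture(GL_TEXTURE_2D, gDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, gDepth, 0);

const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, drawBuffers);
// Check for GL_FRAMEBUFFER_COMPLETE here. The geometry pass writes into these
// targets once; the lighting passes then sample them as textures.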

Questions about Deferred Shading

I just have some questions about deferred shading. I have gotten to the point where I have the Color, Position, Normal and textures from the Multiple Render Targets. My questions pertain to what I do next. To make sure that I have gotten the correct data from the textures I have put a plane on the screen and rendered the textures onto that plane. What I don't understand is how to manipulate those textures so that the final output is shaded with lighting. Do I need to render a plane or a quad that takes up the screen and apply all the calculations onto that plane? If I do that, I am kind of confused how I would be able to get multiple lights to work this way, since the "plane" would be a renderable object, so for each light I would need to re-render the plane. Am I thinking of this incorrectly?
You need to render some geometry to represent the area covered by the light(s). The lighting term for each pixel of the light is accumulated into a destination render target. This gives you your lit result.
There are various ways to do this. To get up and running, a simple / easy (and hellishly slow) method is to render a full-screen quad for each light.
Basically:
Setup: Render all objects into the g-buffer, storing the various object properties (albedo, specular, normals, depth, whatever you need)
Lighting: For each light:
Render some geometry to represent the area the light is going to cover on screen
Sample the g-buffer for the data you need to calculate the lighting contribution (you can use the vpos register to find the uv)
Accumulate the lighting term into a destination render target (the backbuffer will do nicely for simple cases)
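A minimal GLSL fragment shader for one such light pass might look like the sketch below. The G-buffer texture names, the point-light uniforms, and the choice of reading world-space position from a position texture (rather than reconstructing it from depth) are all assumptions made for illustration:

#version 330 core
// Sketch of a single point-light pass that shades the pixels its volume covers.
uniform sampler2D gAlbedo;    // assumed G-buffer layout
uniform sampler2D gNormal;
uniform sampler2D gPosition;  // world-space position written in the geometry pass
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform float lightRadius;
uniform vec2 screenSize;

out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / screenSize;     // screen-space UV (the "vpos" idea above)
    vec3 albedo   = texture(gAlbedo, uv).rgb;
    vec3 normal   = normalize(texture(gNormal, uv).xyz);
    vec3 worldPos = texture(gPosition, uv).xyz;

    vec3 toLight = lightPos - worldPos;
    float dist   = length(toLight);
    vec3 L       = toLight / dist;

    float atten = clamp(1.0 - dist / lightRadius, 0.0, 1.0); // simple falloff
    float ndotl = max(dot(normal, L), 0.0);

    // Additive blending accumulates this term into the destination render target.
    fragColor = vec4(albedo * lightColor * ndotl * atten, 1.0);
}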
Once you've got this working, there's loads of different ways to speed it up (scissor rect, meshes that tightly bound the light, stencil tests to avoid shading 'floating' regions, multiple lights drawn at once and higher level techniques such as tiling).
There's a lot of different slants on Deferred Shading these days, but the original technique is covered thoroughly here : http://http.download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_Deferred_Shading.pdf

Handling multiple lights and GLSL shader programs

I have a point-light shadow map program, but am a bit lost as to how to handle a multiple-light scenario. How do I set up multiple lights, and does each light have its own 'depth texture'? If so, how are they combined for the final render of the scene (obviously, not all lights will be active all of the time)?
When thinking about this it does seem more logical to have separate (small) depth cube-maps, as opposed to one which is used for the entire scene (especially for a close-quarters level as opposed to an open landscape), but implementing such a system just leaves me staring at the screen.
Thanks.
There are a number of ways you can accomplish what you want. The method depends on your exact needs. If you're using OpenGL 2.1 or earlier, you can set up multiple lights by enabling multiple OpenGL lights:
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, <light 0 settings>);
glEnable(GL_LIGHT1);
glLightfv(GL_LIGHT1, <light 1 settings>);
...etc. with up to GL_MAX_LIGHTS lights
If you want more than GL_MAX_LIGHTS lights, or you're using OpenGL 3 or later, you'll need to pass the light data into your shader program as uniforms, for example.
When I've worked with shadows in the past, I've used a depth texture per light.
To combine them, you generally test each fragment against each light. If the fragment is visible to the light (i.e. not in shadow), then you add in the light's contribution. If the fragment is in shade (not visible to the light), then you don't add in that light's contribution.
If you need hundreds of lights or something like that, you can also look into Deferred Shading. (See also the Wikipedia entry on Deferred Shading.)
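A hedged GLSL sketch of that per-fragment loop, with one depth cube map per point light. The structure, the array size, and the exact shadow comparison (fragment-to-light distance against the stored depth, with a small bias) are assumptions made for illustration:

#version 400 core
// Sketch: accumulate contributions from several point lights, skipping any
// light whose shadow cube map says it cannot see this fragment.
// (Indexing a sampler array with a loop counter needs a dynamically uniform
// index, hence the 4.x version.)
#define MAX_LIGHTS 4

struct PointLight {
    vec3 position;
    vec3 color;
    float farPlane;   // far plane used when rendering this light's depth cube map
};

uniform PointLight lights[MAX_LIGHTS];
uniform samplerCube shadowMaps[MAX_LIGHTS];  // one depth cube map per light
uniform int numLights;

in vec3 vWorldPos;
in vec3 vNormal;
in vec3 vAlbedo;   // assumed to arrive from the vertex stage, for brevity

out vec4 fragColor;

void main() {
    vec3 N = normalize(vNormal);
    vec3 result = vec3(0.0);

    for (int i = 0; i < numLights; ++i) {
        vec3 toLight = lights[i].position - vWorldPos;
        float dist = length(toLight);

        // Shadow test: compare this fragment's distance to the light against
        // the closest depth stored in the cube map (looked up by direction).
        float closest = texture(shadowMaps[i], -toLight).r * lights[i].farPlane;
        if (dist - 0.05 > closest)   // small bias against shadow acne
            continue;                // in shadow: skip this light's contribution

        float ndotl = max(dot(N, normalize(toLight)), 0.0);
        result += vAlbedo * lights[i].color * ndotl / (1.0 + dist * dist);
    }

    fragColor = vec4(result, 1.0);
}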

Basic OpenGL lighting question

I think this is an extremely stupid and newbie question, but then I am a newbie in graphics and OpenGL. Having drawn a sphere and put a light source nearby, and also having specified ambient light, I started experimenting with light and material values and came to a surprising conclusion: the colors we specify with glColor* do not matter at all when lighting is enabled. Instead, the equivalent is the material's ambient component. Is this conclusion correct? Thanks
If the lighting is enabled, then instead of the vertex color, the material color (well, colors - there are several of them for different types of response to light) is used. Material colors are specified by glMaterial* functions.
If you want to reuse your code, you can use glEnable(GL_COLOR_MATERIAL) and glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE) to have your old glColor* calls mapped to the material color automatically.
(And please switch to shaders as fast as possible - the shader approach is both easier and more powerful)
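A minimal sketch of that compatibility-profile setup, assuming the sphere is drawn the same way as before:

/* Let glColor* drive the ambient and diffuse material colors so existing
   per-vertex color code keeps working once lighting is enabled. */
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);

glColor3f(1.0f, 0.3f, 0.3f);   /* now sets the material's ambient + diffuse color */
/* ... draw the sphere as before ... */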
I suppose you don't use a fragment shader yet. From glprogramming.com:
vertex color =
the material emission at that vertex +
the global ambient light scaled by the material's ambient property at that vertex +
the ambient, diffuse, and specular contributions from all the light sources, properly attenuated
So yes, vertex color is not used.
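Restated as a GLSL-style sketch of that fixed-function sum (all names here are illustrative, none of them belong to any real GL API, and the attenuation is simplified):

// Rough restatement of the fixed-function per-vertex lighting sum quoted above.
struct FFLight { vec3 position; vec3 ambient; vec3 diffuse; vec3 specular; };

uniform vec3 globalAmbient;
uniform vec3 matEmission, matAmbient, matDiffuse, matSpecular;
uniform float matShininess;
uniform FFLight lights[8];
uniform int numLights;

vec3 fixedFunctionVertexColor(vec3 N, vec3 V, vec3 P) {
    vec3 color = matEmission                      // material emission at the vertex
               + globalAmbient * matAmbient;      // global ambient scaled by material ambient
    for (int i = 0; i < numLights; ++i) {
        vec3 toLight = lights[i].position - P;
        vec3 L = normalize(toLight);
        float att = 1.0 / (1.0 + dot(toLight, toLight));   // simplified attenuation
        vec3 ambient  = lights[i].ambient  * matAmbient;
        vec3 diffuse  = lights[i].diffuse  * matDiffuse  * max(dot(N, L), 0.0);
        vec3 specular = lights[i].specular * matSpecular *
                        pow(max(dot(N, normalize(L + V)), 0.0), matShininess);
        color += att * (ambient + diffuse + specular);
    }
    return color;   // note: the glColor* value does not appear anywhere in this sum
}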
Edit: You can also look up the GL lighting equation in the GL specification (you have one nearby, don't you? ^^)