I have a point-light shadow map program, but am a bit lost in how to control a multiple-light scenario. How do I set up multiple lights, and does each light have its own 'depth texture'? If so, how are they combined for the final render of the scene (obviously, not all lights will be active all of the time)?
When thinking about this, it does seem more logical to have separate (small) depth cube maps, as opposed to one which is used for the entire scene (especially for a close-quarters level as opposed to an open landscape), but implementing such a system just leaves me staring at the screen.
Thanks.
There are a number of ways you can accomplish what you want. The method depends on your exact needs. If you're using OpenGL 2.1 or earlier, you can set up multiple lights by enabling multiple OpenGL lights:
glEnable(GL_LIGHTING);
// light0Position, light0Diffuse, etc. are GLfloat[4] arrays filled in by the application
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
glLightfv(GL_LIGHT0, GL_DIFFUSE,  light0Diffuse);
glEnable(GL_LIGHT1);
glLightfv(GL_LIGHT1, GL_POSITION, light1Position);
glLightfv(GL_LIGHT1, GL_DIFFUSE,  light1Diffuse);
// ...etc., up to GL_MAX_LIGHTS lights
If you want more than GL_MAX_LIGHTS lights, or you're using OpenGL 3 or later, you'll need to pass the light data into your shader program yourself, for example as uniforms.
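For example, a minimal fragment-shader sketch of that idea might look like this (the struct layout, array size, and names are illustrative, not a fixed API):
#version 330 core
#define MAX_LIGHTS 8                 // illustrative upper bound
struct PointLight {
    vec3 position;
    vec3 color;
};
uniform PointLight uLights[MAX_LIGHTS];
uniform int uLightCount;             // how many entries are actually in use
in vec3 vNormal;
in vec3 vWorldPos;
out vec4 fragColor;
void main() {
    vec3 result = vec3(0.0);
    for (int i = 0; i < uLightCount; ++i) {
        vec3 L = normalize(uLights[i].position - vWorldPos);
        float diff = max(dot(normalize(vNormal), L), 0.0);
        result += diff * uLights[i].color;   // accumulate each light's contribution
    }
    fragColor = vec4(result, 1.0);
}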
When I've worked with shadows in the past, I've used a depth texture per light.
To combine them, you generally test each fragment against each light. If the fragment is visible to the light (i.e. not in shadow), then you add in that light's contribution. If the fragment is in shadow (not visible to the light), then you don't add in that light's contribution.
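As a hedged sketch, assuming each light's cube map stores the linear occluder distance (one common point-light setup) and that all names below are illustrative:
#version 400 core
// Indexing an array of samplers with a loop counter requires a dynamically uniform index
// (GLSL 4.00+); on older versions, unroll the loop or bind one shadow map per pass.
#define MAX_LIGHTS 4
uniform samplerCube uShadowMaps[MAX_LIGHTS];  // one depth cube map per light
uniform vec3  uLightPos[MAX_LIGHTS];
uniform vec3  uLightColor[MAX_LIGHTS];
uniform int   uLightCount;
uniform float uFarPlane;                      // far plane used when rendering the cube maps
in vec3 vWorldPos;
in vec3 vNormal;
out vec4 fragColor;
void main() {
    vec3 result = vec3(0.0);
    for (int i = 0; i < uLightCount; ++i) {
        vec3 toFrag   = vWorldPos - uLightPos[i];
        float closest = texture(uShadowMaps[i], toFrag).r * uFarPlane; // stored occluder distance
        float current = length(toFrag);
        if (current - 0.05 <= closest) {      // small bias to avoid shadow acne
            vec3 L = normalize(-toFrag);
            result += max(dot(normalize(vNormal), L), 0.0) * uLightColor[i];
        }
    }
    fragColor = vec4(result, 1.0);
}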
If you need hundreds of lights or something like that, you can also look into Deferred Shading. (See also the Wikipedia entry on Deferred Shading.)
I'm working on a simple 2D game engine. I'd like to support a variety of light types, like the URP in Unity. Lighting would be calculated according to the normal and specular maps attached to the sprite.
Because of the different geometry of the light types, it would be ideal to defer the lighting calculations and use a dedicated shader for them. However, it would limit the user in the following ways:
No transparent textures, as they would override the normal and specular data, so the sprites behind would appear unlit.
No stylized lighting (like a toon shader) for individual sprites.
To resolve these issues I tried implementing something like in Godot. In a Godot shader, one can write a light function, that is called per pixel for every light in range. Now I have two shaders per sprite. One that outputs normal and specular information to an intermediate framebuffer, and a light shader that is called on the geometry of the light, and outputs to the screen.
The problem is that this method decreases performance significantly, because I change buffers and shaders twice for every sprite, and the draw calls per frame have also doubled.
Is there a way to decrease this method's overhead?
Or is there another solution to this problem that I missed?
I am currently programming a graphics renderer in OpenGL by following several online tutorials. I've ended up with an engine which has a rendering pipeline which basically consists of rendering an object using a simple Phong Shader. My Phong Shader has a basic vertex shader which modifies the vertex based on a transformation and a fragment shader which looks something like this:
// PhongFragment.glsl
uniform DirectionalLight dirLight;
...
vec3 calculateDirLight() { /* Calculates Directional Light using the uniform */ }
...
void main() {
    gl_FragColor = vec4(calculateDirLight(), 1.0);
}
The actual drawing of my object looks something like this:
// Render a Mesh
bindPhongShader();
setPhongShaderUniform(transform);
setPhongShaderUniform(directionalLight1);
mesh->draw(); // glDrawElements using the Phong Shader
This technique works well, but has the obvious downside that I can only have one directional light, unless I use uniform arrays. I could do that, but instead I wanted to see what other solutions were available (mostly because I don't want to declare an array of some large number of lights in the shader and have most of the entries be empty), and I stumbled on this one, which seems really inefficient, but I am not sure. It basically involves redrawing the mesh every single time with a new light, like so:
// New Render
bindBasicShader(); // just transforms vertices, and sets the frag color to white.
setBasicShaderUniform(transform); // Set transformation uniform
mesh->draw();
// Enable additive blending so that all light contributions are added up
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthFunc(GL_EQUAL); // only accept fragments already laid down by the base pass
bindDirectionalShader();
setDirectionalShaderUniform(transform); // Set transformation uniform
setDirectionalShaderUniform(directionalLight1);
mesh->draw(); // Draw the mesh using the directionalLight1
setDirectionalShaderUniform(directionalLight2);
mesh->draw(); // Draw the mesh using the directionalLight2
setDirectionalShaderUniform(directionalLight3);
mesh->draw(); // Draw the mesh using the directionalLight3
This seems terribly inefficient to me, though. Aren't I redrawing all the mesh geometry over and over again? I have implemented this and it does give me the result I was looking for, multiple directional lights, but the frame rate has dropped considerably. Is this a stupid way of rendering multiple lights, or is it on par with using shader uniform arrays?
For forward rendering engines where lighting is handled in the same shader as the main geometry processing, the only really efficient way of doing this is to generate lots of shaders which can cope with the various combinations of light source, light count, and material under illumination.
In your case you would have one shader for 1 light, one for 2 lights, one for 3 lights, etc. It's a combinatorial nightmare in terms of number of shaders, but you really don't want to send all of your meshes multiple times (especially if you are writing games for mobile devices - geometry is very bandwidth heavy and sucks power out of the battery).
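One way people often avoid writing each variant by hand is to generate them from a single source, injecting the light count at compile time. A hedged sketch (the LIGHT_COUNT define would be prepended by your engine before compilation; struct and uniform names are illustrative):
#version 330 core
// The application prepends e.g. "#define LIGHT_COUNT 3" before compiling this source,
// producing one specialized shader per light count.
#ifndef LIGHT_COUNT
#define LIGHT_COUNT 1
#endif
struct DirectionalLight {
    vec3 direction;
    vec3 color;
};
uniform DirectionalLight uLights[LIGHT_COUNT];
in vec3 vNormal;
out vec4 fragColor;
void main() {
    vec3 result = vec3(0.0);
    for (int i = 0; i < LIGHT_COUNT; ++i) {   // constant bound, easy for the compiler to unroll
        float diff = max(dot(normalize(vNormal), -normalize(uLights[i].direction)), 0.0);
        result += diff * uLights[i].color;
    }
    fragColor = vec4(result, 1.0);
}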
The other common approach is a deferred lighting scheme. These schemes store albedo, normals, material properties, etc. into a "Geometry Buffer" (e.g. a set of multiple-render-target FBO attachments), and then apply lighting after the fact as a set of post-processing operations. The complex geometry is sent once, with the resulting data stored in the MRT+depth render targets as a set of texture data. The lighting is then applied as a set of basic geometry (typically spheres or 2D quads), using the depth texture as a means to clip and cull light sources, and the other MRT attachments to compute the lighting intensity and color. It's a bit of a long topic for a SO post - but there are lots of good presentations around on the web from GDC and SIGGRAPH.
Basic idea outlined here:
https://en.wikipedia.org/wiki/Deferred_shading
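As a very rough sketch of the geometry pass under that scheme, the fragment shader writes surface data to multiple render targets instead of computing lighting (the attachment layout and names are only an example):
#version 330 core
// G-buffer pass: no lighting here, just store what the light pass will need.
uniform sampler2D uAlbedoTex;
in vec3 vWorldPos;
in vec3 vNormal;
in vec2 vUV;
layout(location = 0) out vec4 gAlbedo;    // color attachment 0
layout(location = 1) out vec4 gNormal;    // color attachment 1
layout(location = 2) out vec4 gPosition;  // color attachment 2 (or reconstruct from depth)
void main() {
    gAlbedo   = texture(uAlbedoTex, vUV);
    gNormal   = vec4(normalize(vNormal), 0.0);
    gPosition = vec4(vWorldPos, 1.0);
}
The light pass then draws cheap proxy geometry (a full-screen quad or a bounding sphere per light), samples these attachments, and additively blends each light's contribution.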
I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself is done as I want, except the occlusion does not work correctly, i.e., under certain viewpoints the curve that is supposed to be in the very front appears to still be occluded, and the reverse: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assume it is occlusion culling (update: it actually indicates a problem with the depth buffer; terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass the color info on to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding a culling algorithm, but that I was not handling the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles the occlusions correctly by itself, so I do not need to re-invent the wheel.
I searched through my GLSL program and found that I was basically setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z/vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
Regarding the depth buffer settings, I didn't have to explicitly specify them, since the library I use sets them up by default exactly as I need.
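For readers hitting the same thing, here is a stripped-down geometry-shader sketch of the general idea, not the author's exact code: if you emit clip-space positions unchanged (or carry z/w through when you pre-project yourself), the depth test keeps working. The names and the pass-through structure are purely illustrative; the real shader tessellates the Bezier segment into a triangle strip.
#version 330 core
layout(lines_adjacency) in;                 // 4 control points per primitive
layout(triangle_strip, max_vertices = 4) out;
in vec4 vColor[];
out vec4 gColor;
void main() {
    for (int i = 0; i < 4; ++i) {
        // gl_Position coming in is already ModelViewProjection * Vertex (clip space).
        // Emitting it unchanged keeps z and w, so the depth buffer gets real values
        // instead of the zeros that caused the occlusion artifacts.
        gl_Position = gl_in[i].gl_Position;
        gColor = vColor[i];
        EmitVertex();
    }
    EndPrimitive();
}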
If you don't use the depth buffer, then the most recently rendered object will be on top always.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).
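A minimal vertex-shader sketch of that idea (the attribute and uniform names are illustrative); as long as the combined matrix, including the model translation in z, is applied before writing gl_Position, the depth test has meaningful values to compare:
#version 330 core
layout(location = 0) in vec3 aPosition;
uniform mat4 uModelViewProjection;   // the model matrix carries the per-object depth offset
void main() {
    // Writing the fully transformed position keeps a meaningful z for the depth test.
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}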
The first issue is how to get from a single light source to using multiple light sources, without using more than one fragment shader.
My instinct is that each run through of the shader calculations needs light source coordinates, and maybe some color information, and we can just run through the calculations in a loop for n light sources.
How do I pass the multiple lights into the shader program? Do I use an array of uniforms? My guess would be to pass in an array of uniforms with the coordinates of each light source, then specify how many light sources there are, and set a maximum value.
Can I call getter or setter methods on a shader program, instead of just manipulating the globals?
I'm using this tutorial and the libGDX implementation to learn how to do this:
https://gist.github.com/mattdesl/4653464
There are many methods for having multiple light sources. I'll point out the 3 most commonly used.
1) Specify each light source in an array of uniform structures. Light calculations are done in a shader loop over all active lights, accumulating the result into a single vertex color or fragment color, depending on whether shading is done per vertex or per fragment. (This is how fixed-function OpenGL calculated multiple lights.)
2) Multipass rendering with a single light source enabled per pass; in its simplest form the passes can be composited by additive blending (srcFactor=ONE, dstFactor=ONE). Don't forget to change the depth func after the first pass from GL_LESS to GL_EQUAL, or simply use GL_LEQUAL for all passes.
3) Many environment lighting algorithms mimic multiple light sources by assuming they are at infinite distance from your scene. The simplest renderer would store light intensities into an environment texture (preferably a cube map); the shader's job is then to sample this texture several times in directions around the surface normal with some random angular offsets.
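A hedged sketch of that third approach (the sample count and angular offsets here are purely illustrative; a real implementation would prefilter the environment map or use better-distributed samples):
#version 330 core
uniform samplerCube uEnvMap;      // environment cube map storing light intensities
uniform vec3 uAlbedo;
in vec3 vNormal;
out vec4 fragColor;
// A few fixed offset directions; a real implementation would use more, randomized, samples.
const vec3 kOffsets[4] = vec3[](
    vec3( 0.2,  0.1,  0.0),
    vec3(-0.2,  0.1,  0.1),
    vec3( 0.1, -0.2,  0.2),
    vec3(-0.1, -0.1, -0.2)
);
void main() {
    vec3 n = normalize(vNormal);
    vec3 irradiance = texture(uEnvMap, n).rgb;
    for (int i = 0; i < 4; ++i) {
        irradiance += texture(uEnvMap, normalize(n + kOffsets[i])).rgb;
    }
    irradiance /= 5.0;                         // average of all samples
    fragColor = vec4(uAlbedo * irradiance, 1.0);
}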
I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to go into at all how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, primitive assembly, tessellation and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are really nothing more than programs that you drop in to define the input and output of a particular stage, you should think of the input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization, and these are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
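A hedged sketch of that circle idea, with one program reused for every circle and only the uniforms changing between draw calls (all names here are illustrative):
// circle.vert
#version 330 core
layout(location = 0) in vec2 aPosition;     // unit-circle geometry, reused for every circle
uniform mat4 uTransform;                    // per-circle scale + translation, set before each draw
void main() {
    gl_Position = uTransform * vec4(aPosition, 0.0, 1.0);
}

// circle.frag
#version 330 core
uniform vec4 uColor;                        // per-circle color, also set before each draw
out vec4 fragColor;
void main() {
    fragColor = uColor;
}
Between draw calls the application just updates uTransform and uColor with glUniform* and issues the draw again; the same program and the same vertex buffer serve every circle.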
You don't have shaders that draw circles (OK, you can with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and they are more specific than "drawing a circle".
Generally speaking, every time you make a draw call you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a vertex shader and a fragment shader. The resulting pipeline will be something like:
Vertex Shader: the code that is going to be executed for each of the vertices you send to OpenGL. It is executed for each index you sent in the element array, and it uses as input data the corresponding vertex attributes, such as the vertex position, its normal, its UV coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access the uniform variables you set up for your draw call, which are global variables that are not going to change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position the rasterizer is going to use to create fragments. You can also have the vertex shader output other things (such as the UV coordinates, and the normals after you have dealt with their geometry), hand them to the rasterizer, and use them later.
Rasterization
Fragment Shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate the diffuse and specular terms) and the UV coordinates (to fetch the right colors from the textures). The textures are going to be uniforms, and probably also the parameters of the lights you are evaluating. (See the sketch after this list.)
Depth Test, Stencil Test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test).
Blending.
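To make that flow concrete, here is a hedged minimal vertex/fragment pair (attribute locations, uniform names, and the toy directional-light model are illustrative, not a recommendation):
// minimal.vert
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aUV;
uniform mat4 uPVM;                 // projection * view * model, set per draw call
out vec3 vNormal;                  // handed to the rasterizer, interpolated per fragment
out vec2 vUV;
void main() {
    vNormal = aNormal;             // a real shader would transform this into lighting space
    vUV = aUV;
    gl_Position = uPVM * vec4(aPosition, 1.0);
}

// minimal.frag
#version 330 core
uniform sampler2D uDiffuseTex;
uniform vec3 uLightDir;            // simple directional light, just for illustration
in vec3 vNormal;                   // interpolated output of the vertex shader
in vec2 vUV;
out vec4 fragColor;
void main() {
    float diff = max(dot(normalize(vNormal), -normalize(uLightDir)), 0.0);
    fragColor = vec4(texture(uDiffuseTex, vUV).rgb * diff, 1.0);
}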
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.