Currently, I am working on precomputing static lighting for a small graphics engine.
Here is my current workflow:
Objects and Lights have a mobility tag (either static or dynamic)
Prior to rendering, lightmaps are computed for static lights and static objects.
Dynamic lighting calculations are done using a deferred shading model.
Static objects apply their lightmaps during the geometry pass.
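For concreteness, the geometry pass for a static object currently boils down to something like this (simplified; the exact G-buffer layout and names are only illustrative, with the baked lighting written to its own target):

    // Simplified geometry-pass fragment shader for a static object.
    // The baked (static) lighting is sampled from the lightmap here; writing it
    // to a separate baked-light target is just one possible layout.
    #version 330 core
    in vec2 vUV;          // material UVs
    in vec2 vLightmapUV;  // second UV set used for the bake
    in vec3 vNormal;

    uniform sampler2D uAlbedoMap;
    uniform sampler2D uLightmap;

    layout(location = 0) out vec4 gAlbedo;
    layout(location = 1) out vec4 gNormal;
    layout(location = 2) out vec4 gBakedLight;   // precomputed static lighting

    void main()
    {
        gAlbedo     = texture(uAlbedoMap, vUV);
        gNormal     = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
        gBakedLight = texture(uLightmap, vLightmapUV);   // lightmap applied in the geometry pass
    }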
The problem is as follows:
Ideally, static lights will also light dynamic objects and dynamic lights will light static objects (all in the deferred shader). The issue is that static lighting contributions on static objects will be "double counted" in the deferred shader, i.e. static lights will light objects that have already been lit by those lights. What are some methods of accounting for this?
Here are some of the ideas that I have had; however, none of them is ideal:
Make static lights not light dynamic objects (or the opposite: make dynamic lights not light static objects).
This would probably be my last choice of a solution.
Pass the mobility tag for each light to the deferred shader as well as make the mobility tag part of the gbuffer.
I don't really want to add another render target just for the mobility tag (I am already using 4 for PBR!); a sketch of packing the flag into spare bits of an existing target is at the end of this question.
Get rid of precomputed lightmaps for static geometry altogether?
I don't really want to do this either.
Are there any other possible solutions to this issue?
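For reference, here is roughly what option 2 could look like without adding a new render target, by packing a one-bit flag into a spare channel of an existing target (the channel choice and names below are only illustrative). The geometry pass would write the flag into that channel, and the deferred light pass would read it:

    // Deferred lighting pass, one light per draw, accumulated additively.
    // The geometry pass has stored "this pixel is lightmapped" in the alpha
    // of the normal target (illustrative layout).
    #version 330 core
    in vec2 vUV;
    out vec4 oColor;

    uniform sampler2D uGBufferNormalFlag;   // rgb = normal, a = lightmapped flag
    uniform sampler2D uGBufferAlbedo;
    uniform vec3 uLightDir;                 // a directional light, for brevity
    uniform vec3 uLightColor;
    uniform bool uLightIsStatic;            // mobility of the light being accumulated

    void main()
    {
        vec4 nf = texture(uGBufferNormalFlag, vUV);
        bool lightmapped = nf.a > 0.5;

        // Static light on an already-lightmapped surface: skip it, it is baked in.
        if (uLightIsStatic && lightmapped) { oColor = vec4(0.0); return; }

        vec3 n = normalize(nf.rgb * 2.0 - 1.0);
        vec3 albedo = texture(uGBufferAlbedo, vUV).rgb;
        float ndl = max(dot(n, -uLightDir), 0.0);
        oColor = vec4(albedo * uLightColor * ndl, 1.0);
    }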
Related
I'm working on a simple 2D game engine. I'd like to support a variety of light types, like the URP in Unity. Lighting would be calculated according to the normal and specular maps attached to the sprite.
Because of the different geometry of the light types, it would be ideal to defer the lighting calculations and use a dedicated shader for them. However, this would limit the user in the following ways:
No transparent textures, as they would override the normal and specular data, so the sprites behind would appear unlit.
No stylized lighting (like a toon shader) for individual sprites.
To resolve these issues I tried implementing something like Godot's approach. In a Godot shader, one can write a light function that is called per pixel for every light in range. So now I have two shaders per sprite: one that outputs normal and specular information to an intermediate framebuffer, and a light shader that is drawn on the geometry of the light and outputs to the screen.
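For reference, the per-light pass boils down to something like the following (a simplified sketch: specular is omitted and the uniform names are only illustrative):

    #version 330 core
    // Per-light pass, drawn on the light's geometry and blended onto the screen
    // (diffuse only; names are illustrative).
    out vec4 oColor;

    uniform sampler2D uNormalBuffer;    // intermediate framebuffer with sprite normals
    uniform vec2  uScreenSize;
    uniform vec2  uLightPos;            // light position in pixels
    uniform vec3  uLightColor;
    uniform float uLightRadius;

    void main()
    {
        vec2 screenUV = gl_FragCoord.xy / uScreenSize;
        vec3 n = normalize(texture(uNormalBuffer, screenUV).rgb * 2.0 - 1.0);

        vec2 toLight = uLightPos - gl_FragCoord.xy;
        float atten  = clamp(1.0 - length(toLight) / uLightRadius, 0.0, 1.0);
        vec3 l = normalize(vec3(toLight, 60.0));   // small fake z so flat sprites still shade

        oColor = vec4(uLightColor * max(dot(n, l), 0.0) * atten, 1.0);
    }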
The problem is that this method decreases performance significantly, because I switch buffers and shaders twice for every sprite, and the number of draw calls per frame has also doubled.
Is there a way to reduce this method's overhead?
Or is there another solution to this problem that I missed?
I am learning OpenGL and trying to get a grasp of best practices. I am working on a simple demonstration project in C++ which aims to be a bit more generic and better structured (not everything just put into main()) than most of the tutorials I have seen on the web. I want to use the modern OpenGL way, which means VAOs and shaders. My biggest concern is the relation between VAOs and shader programs. Maybe I am missing something here.
I am now thinking about the best design. Consider the following scenario:
there is a scene which contains multiple objects
each object has its individual size, position and rotation (i.e. transform matrix)
each object has a certain basic shape (e.g. box, ball), there can be multiple objects of the same shape
there can be multiple shader programs (e.g. one with plain interpolated RGBA colors, another with textures)
This leads me to the basic three components of my design:
ShaderProgram class - each instance contains a vertex shader and fragment shader (initialized from given strings)
Object class - has transform matrix and reference to a shape instance
Shape base class - and derived classes, e.g. BoxShape, SphereShape; each derived class knows how to generate its mesh, turn it into a buffer and map it to vertex attributes, in other words it initializes its own VAO; it also knows which glDraw... function(s) to use to render itself
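A bare-bones sketch of what I have in mind for those three classes (declarations only, most details omitted; this is just to pin down the responsibilities, not a finished implementation):

    // Sketch only: responsibilities of the three classes, details omitted.
    #include <glad/glad.h>   // or any other GL loader
    #include <glm/glm.hpp>
    #include <string>

    class ShaderProgram {
    public:
        ShaderProgram(const std::string& vsSource, const std::string& fsSource);
        void use() const { glUseProgram(m_id); }
        GLuint id() const { return m_id; }
    private:
        GLuint m_id = 0;     // linked program object
    };

    class Shape {
    public:
        virtual ~Shape() = default;
        virtual void draw() const = 0;   // binds its VAO and issues glDraw...()
    protected:
        GLuint m_vao = 0;                // set up by the derived class
    };

    class BoxShape : public Shape {
    public:
        BoxShape();                      // generates mesh, fills buffers, sets up the VAO
        void draw() const override;      // glBindVertexArray(m_vao); glDrawElements(...);
    };

    class Object {
    public:
        glm::mat4 transform { 1.0f };
        const Shape* shape = nullptr;    // shared between objects of the same shape
    };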
When a scene is being rendered, I will call glUseProgram(rgbaShaderProgram). Then I will go through all objects which can be rendered using this program and render them. Then I will switch to glUseProgram(textureShaderProgram) and go through all textured objects.
When rendering an individual object:
1) I will call glUniformMatrix4fv() to set the individual transformation matrix (of course including projection matrix etc.)
2) then I will call the shape to which the object is associated to render
3) when the shape is rendered, it will bind its VAO, call its specific glDraw...() function and then unbind the VAO
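Put together (continuing the class sketch above), a frame would look roughly like this; the uniform name uMVP and the pre-grouping of objects by program are just placeholders:

    // Render all objects that use one particular program.
    #include <vector>

    void renderWithProgram(const ShaderProgram& program,
                           const std::vector<Object>& objects,
                           const glm::mat4& viewProjection)
    {
        program.use();                                                   // one glUseProgram per group
        GLint mvpLocation = glGetUniformLocation(program.id(), "uMVP");  // assumed uniform name

        for (const Object& object : objects) {
            glm::mat4 mvp = viewProjection * object.transform;
            glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &mvp[0][0]);    // step 1
            object.shape->draw();                                        // steps 2 and 3
        }
    }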
In my design I wanted to decouple Shape from ShaderProgram, as in theory they should be interchangeable. But some dependency still seems to be there. When generating vertices in a specific ...Shape class and filling buffers for them, I already need to know, for example, that I need to generate texture coordinates rather than RGBA components for each vertex. And when setting vertex attribute pointers with glVertexAttribPointer, I already must know that the shader program will use, for example, floats rather than integers (otherwise I would have to call glVertexAttribIPointer). I also need to know which attribute will be at which location in the shader program. In other words, I am mixing the responsibility for pure shape geometry with prior knowledge about how it will be rendered. And as a consequence, I cannot render a shape with a shader program which is not compatible with it.
So finally my question: how can I improve my design to achieve the goal (render the scene) and at the same time keep the versatility (interchangeability of shaders and shapes), force correct usage (not allow mixing shapes with incompatible shaders), have the best possible performance (avoid unnecessary program or context switching) and maintain good design principles (one class - one responsibility)?
What I would do is prepare ShaderProgram templates, which I adapt to the attributes of the VAO.
After all, shader programs are text, initially. What you might do is write your main functions for the vertex and fragment programs, and attach a "header" to them depending on the bound VAO. It can be useful to use standardized names for the variables you use inside the shaders, such as InPos, InNor, InTan, InTex.
That way, you can scan for attributes that are missing in the VAO but used inside the shader, and simply append them to the header as constants with some default value.
I do this via a ShaderManager, with a RequestShader(template, VAO) kind of function which adapts the template to the VAO and caches the compiled shader for later use.
If another VAO has the same attributes and requires the same template, the precompiled cached version is returned to avoid repeating the adaptation process.
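As a rough sketch of that idea (the standardized names and default values are of course up to you; compilation and caching are left out):

    // Sketch: build a GLSL "header" for a shader template from the attributes a VAO provides.
    // Attributes the template uses but the VAO lacks are declared as constants with defaults.
    #include <string>
    #include <unordered_map>
    #include <set>

    struct VaoDescription {
        std::set<std::string> attributes;   // e.g. { "InPos", "InNor", "InTex" }
    };

    std::string buildVertexHeader(const VaoDescription& vao,
                                  const std::set<std::string>& usedByTemplate)
    {
        static const std::unordered_map<std::string, std::string> asInput = {
            { "InPos", "in vec3 InPos;\n" },
            { "InNor", "in vec3 InNor;\n" },
            { "InTan", "in vec3 InTan;\n" },
            { "InTex", "in vec2 InTex;\n" },
        };
        static const std::unordered_map<std::string, std::string> asConstant = {
            { "InPos", "const vec3 InPos = vec3(0.0);\n" },
            { "InNor", "const vec3 InNor = vec3(0.0, 0.0, 1.0);\n" },
            { "InTan", "const vec3 InTan = vec3(1.0, 0.0, 0.0);\n" },
            { "InTex", "const vec2 InTex = vec2(0.0);\n" },
        };

        std::string header = "#version 330 core\n";
        for (const std::string& name : usedByTemplate) {
            if (vao.attributes.count(name))
                header += asInput.at(name);        // the VAO really provides it
            else
                header += asConstant.at(name);     // missing: substitute a default constant
        }
        return header;   // prepend this to the template's main() body, then compile and cache
    }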
In Minecraft, for example, you can place torches anywhere; each one affects the light level in the world, and there is no limit to the number of torches / light sources you can put down. I am 99% sure that the lighting for the torches is computed on the CPU and stored per block, so that when rendering, the light value at a given block just needs to be passed into the shader; but light sources cannot move for this reason. If you had a game where you could place light sources that could move around (an arrow on fire, a minecart with a light on it, a glowing ball of energy) and the lighting wasn't as simple (color was included), what are the most efficient ways to calculate the lighting effects?
From my research I have found deferred rendering, deferred lighting, dynamically creating shaders with different numbers of lights and looping over them (the light count can't come from a uniform because the loop gets unrolled), and static light maps (these would probably only be used for the stationary lights). Are there any other ways to do lighting calculations, such as doing what Minecraft does but allowing moving lights? Or is it possible to take an unlimited number of lights and mathematically combine them into an approximation that only involves a few lights (this is an idea I came up with, but I can't figure out how it could be done)?
If it helps, I am a programmer with decent experience in OpenGL (legacy and modern), so you can give me code snippets; however, I have not done much with lighting, so brief explanations would be appreciated. I am also willing to do research if you can point me in the right direction!
Your title is a bit misleading: an infinite light implies a directional light at infinite distance, like the Sun. I would say an unlimited number of lights instead. Here are some approaches for this that I know of:
(back) ray-tracers
They can handle any number of light sources natively: a light is just another object in the engine. If a ray hits a light source, it simply takes the light intensity and stops the recursion. Unfortunately, current graphics hardware is not well suited to this kind of rendering. There are GPU-accelerated engines for this, but the specialized graphics hardware is still in development and has not hit the market yet. Memory requirements are not much different from standard BR rendering, and you can still use BR meshes, but mathematical (analytical) surfaces are natively supported and are better suited for this.
Standard BR rendering
BR means boundary representation. Such engines (like OpenGL's fixed-function pipeline) can handle only a limited number of lights, because each primitive/fragment needs the complete list of lights and the computation is done for every light on a per-primitive or per-fragment basis. If you have many lights this gets slow.
For a GLSL example with a fixed number of light sources, see the fragment shader.
Also, current GPUs have limited memory for uniforms (registers) in which the lights and other rendering parameters are stored. There are possible workarounds, such as storing the light parameters in a texture and iterating over all of them per primitive/fragment inside the GLSL shader, but the number of lights affects performance of course, so you are limited by your target frame-rate and computational power. The additional memory requirement for this is just the texture with the light parameters, which is not much (a few vectors per light).
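To illustrate the fixed-light-count variant mentioned above, the per-fragment loop typically looks something like this (a Lambert-only sketch; the struct layout and names are only illustrative):

    #version 330 core
    // Forward shading with a fixed maximum number of lights (diffuse only, as a sketch).
    #define MAX_LIGHTS 8

    struct Light {
        vec3 position;
        vec3 color;
    };

    uniform Light uLights[MAX_LIGHTS];
    uniform int   uLightCount;      // how many entries are actually in use
    uniform vec3  uAlbedo;

    in vec3 vWorldPos;
    in vec3 vNormal;
    out vec4 oColor;

    void main()
    {
        vec3 n = normalize(vNormal);
        vec3 total = vec3(0.0);
        for (int i = 0; i < MAX_LIGHTS; ++i) {   // fixed bound keeps the compiler happy
            if (i >= uLightCount) break;
            vec3 l = normalize(uLights[i].position - vWorldPos);
            total += uAlbedo * uLights[i].color * max(dot(n, l), 0.0);
        }
        oColor = vec4(total, 1.0);
    }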
light maps
They can be computed even for moving objects. Complex light maps can be computed slowly (not every frame); this leads to small lighting artifacts, but you need to know what to look for to spot them. Light maps and shadow maps are very similar and are often computed at once. There are simple light maps and more complex radiation-map models out there;
see Shading mask algorithm for radiation calculations.
These are either:
projected 2D maps (hard to implement/use and often less precise)
3D Voxel maps (Memory demanding but easier to compute/use)
Some approaches use a pre-rendered Z-buffer as the geometry source and then add the lights via radiosity or some other technique. These can handle any number of lights; since these maps can be computationally demanding, they are often computed in the background and updated only once in a while.
Fast-moving light sources are usually updated more often, or excluded from the maps and rendered as transparent geometry to give the impression of light. The computational power needed depends on the computation method; the basic ones are done like this:
set a camera at the largest visible surfaces
render the scene and treat the result as a light/shadow map
store it as a 2D or 3D texture or a voxel map
and then continue with normal rendering from the camera view
So you need to render the scene more than once per frame/map update and also need additional buffers to store the rendered result, which for high resolutions or voxel maps can be a big chunk of memory.
multi pass light layer
There are cases where light is added after the scene has been rendered; for example, I used it for
Atmospheric scattering in GLSL
This is where all the multi-pass rendering techniques come in. You need additional buffers to store the intermediate results, and the multi-pass rendering is usually done on the same view/scene, so pre-rendered geometry is reused, which speeds this up significantly (either as a locked VAO or as the already rendered Z-buffer, color and index buffers from the first pass). After this, the next passes are handled as a single quad or a few quads (like in the Atmospheric scattering link), so the computational power needed is not much bigger than for basic BR rendering.
forward rendering vs. deferred-rendering
Googling forward rendering vs. deferred-rendering, this is the first relevant hit I found. It is not a very good one (a bit too vague for my taste), but for starters it is enough.
Forward rendering techniques are usually standard single-pass BR renderers.
Deferred rendering is a standard multi-pass renderer. In the first pass, all the geometry of the scene is rendered into the Z-buffer, color buffer and some auxiliary buffers, just to know which fragment of the result belongs to which object, material, etc. Then in the following passes the effects, lights, shadows, etc. are added, but the geometry is not rendered again; instead just a single overlay quad (or a few) is rendered per pass, so the later passes are usually pretty fast.
The link suggests that deferred rendering is better suited to high light counts, but that strongly depends on which of the previous techniques is used. Usually the multi-pass light layer is used (which is one of the standard deferred rendering techniques), so in that case it is true, and the memory and computational demands are the same; see the previous section.
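On the host side, that pass structure amounts to something like the following sketch (the bind/draw helpers are placeholders standing in for your engine's own functions):

    // Sketch of the deferred pass order: geometry once, then one cheap quad per light.
    #include <glad/glad.h>
    #include <vector>

    struct Light { /* position, color, radius, ... */ };

    // Placeholders for your engine's own functions:
    void bindGBufferForWriting();
    void drawAllGeometry();
    void bindDefaultFramebuffer();
    void useLightShader(const Light& light);
    void drawFullScreenQuad();
    const std::vector<Light>& visibleLights();

    void renderFrameDeferred()
    {
        // Pass 1: render all scene geometry into the G-buffer (depth, color, normals, ...).
        bindGBufferForWriting();
        drawAllGeometry();                 // the only pass that touches the meshes

        // Passes 2..N: no geometry, just full-screen quads reading the G-buffer.
        bindDefaultFramebuffer();
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);       // additively accumulate each light's contribution
        for (const Light& light : visibleLights()) {
            useLightShader(light);         // binds the lighting program and its uniforms
            drawFullScreenQuad();          // a single quad, so each extra light stays cheap
        }
        glDisable(GL_BLEND);
    }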
The first issue is how to get from a single light source to multiple light sources without using more than one fragment shader.
My instinct is that each run through the shader calculations needs the light source's coordinates, and maybe some color information, and we can just run through the calculations in a loop for n light sources.
How do I pass the multiple lights into the shader program? Do I use an array of uniforms? My guess would be to pass in an array of uniforms with the coordinates of each light source, then specify how many light sources there are, and set a maximum value.
Can I call getter or setter methods on a shader program, instead of just manipulating the globals?
I'm using this tutorial and the libGDX implementation to learn how to do this:
https://gist.github.com/mattdesl/4653464
There are many methods for handling multiple light sources. I'll point out the 3 most commonly used.
1) Specify each light source in an array of uniform structures. The light calculation is done in a shader loop over all active lights, accumulating the result into a single vertex color or fragment color, depending on whether shading is done per vertex or per fragment. (This is how fixed-function OpenGL calculated multiple lights; a minimal sketch of the upload is below.)
2) Multipass rendering with a single light source enabled per pass; in the simplest form the passes can be composited with additive blending (srcFactor=ONE, dstFactor=ONE). Don't forget to change the depth func after the first pass from GL_LESS to GL_EQUAL, or simply use GL_LEQUAL for all passes.
3) Many environment lighting algorithms mimic multiple light sources by assuming they are at infinite distance from your scene. The simplest renderer would store light intensities in an environment texture (preferably a cubemap); the shader's job would then be to sample this texture several times in directions around the surface normal with some random angular offsets.
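For option 1), the GLSL side is a uniform struct array, e.g. struct Light { vec3 position; vec3 color; }; uniform Light uLights[MAX_LIGHTS]; uniform int uLightCount;, and the application fills it one element at a time. A minimal sketch of the upload with raw OpenGL calls (in libGDX you would go through ShaderProgram's uniform setters instead; all names here are illustrative):

    // Sketch: uploading an array of light structs to the shader above.
    #include <glad/glad.h>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct LightData { float position[3]; float color[3]; };

    void uploadLights(GLuint program, const std::vector<LightData>& lights)
    {
        glUseProgram(program);
        glUniform1i(glGetUniformLocation(program, "uLightCount"), (GLint)lights.size());

        for (std::size_t i = 0; i < lights.size(); ++i) {
            char name[64];
            // Each struct member of each array element has its own uniform location.
            std::snprintf(name, sizeof(name), "uLights[%zu].position", i);
            glUniform3fv(glGetUniformLocation(program, name), 1, lights[i].position);
            std::snprintf(name, sizeof(name), "uLights[%zu].color", i);
            glUniform3fv(glGetUniformLocation(program, name), 1, lights[i].color);
        }
    }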
I'm using GLSL for raytracing because this is all happening in the browser via WebGL. I can get my object information through to the fragment shader via floating-point textures. While looping through the texture to find my object information, I tried to use a for loop with a variable in the loop condition to say when it was complete. It didn't compile; it wanted a constant expression. I could do that, but it is a dynamic scene, so I don't know how many objects there are going to be.
What's the correct way to find all the objects in the scene?
You could just generate your shader to include all of the objects in your scene, with the appropriate intersection tests all called; then, when you need to update the scene, include all the scene objects in the shader source again and recompile.
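In other words, the object count (and any per-object constants) become compile-time constants that the application rewrites in the shader source whenever the scene changes, followed by a recompile. A sketch of the relevant generated piece, assuming the objects are spheres packed one RGBA texel each into the floating-point texture:

    // NUM_OBJECTS is baked into the generated source before compiling, so the loop
    // bound is a compile-time constant (which is what GLSL ES insists on).
    #define NUM_OBJECTS 17            // rewritten by the application on every scene change
    precision highp float;

    uniform sampler2D uObjectData;    // one RGBA texel per sphere: xyz = center, w = radius

    float intersectSphere(vec3 ro, vec3 rd, vec3 center, float radius)
    {
        vec3 oc = ro - center;
        float b = dot(oc, rd);
        float c = dot(oc, oc) - radius * radius;
        float h = b * b - c;
        if (h < 0.0) return -1.0;     // ray misses the sphere
        return -b - sqrt(h);          // nearest hit distance (rd must be normalized)
    }

    float nearestHit(vec3 ro, vec3 rd, out int hitIndex)
    {
        float best = 1e30;
        hitIndex = -1;
        for (int i = 0; i < NUM_OBJECTS; ++i) {   // constant bound: compiles fine
            vec4 s = texture2D(uObjectData, vec2((float(i) + 0.5) / float(NUM_OBJECTS), 0.5));
            float t = intersectSphere(ro, rd, s.xyz, s.w);
            if (t > 0.0 && t < best) { best = t; hitIndex = i; }
        }
        return best;
    }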