How to implement light occlusion in a deferred shading system? - c++

I am implementing a deferred shading system which uses a compute shader (in DirectX 11) to cull lights in tiles, so I can get thousands of lights at a stable framerate. The problem comes when I have to determine whether a light is blocked by scene geometry: my point lights pass through walls and bridges. I have a shadow map from the main light's (the sun's) point of view, but generating a shadow map for each point light in the scene would require generating a thousand cubemaps, and that's not possible. So how is this problem usually dealt with? Games like Dead Space 3 and Battlefield 3 have a lot of lights in the scene, yet they don't bleed through solid objects.

One straightforward solution would be to use Screen-Space Ambient Occlusion approaches, which estimate occlusion by sampling the neighbourhood of each pixel. One approach I know of is SSDO, which directly targets the creation of shadows in screen space. You will probably end up with lots of artifacts in complex scenes. The advantage is that SSDO also adds some global illumination effects.
I think most games/engines overcome such problems with preprocessing steps.
Static lighting: If your light source does not move around (lights in buildings...), compute lightmaps or store the light in extra vertex attributes.
Tweak the lights: Just adjust the falloff distance, intensity, or location until there is no noticeable bleeding.
Some ideas of my own: Depending on your representation of the light (sphere/disc?), you could compute a pruned shape for each light. Pixels behind a wall would not lie inside the new light volume and would not be lit. If you can't shape your light volume arbitrarily, you could add one or two planes per light that represent nearby walls. These planes can be left undefined for most lights and only pushed to the GPU for lights near a wall. A pixel can then be checked during the lighting pass for the respective light to see which side of the plane it lies on; see the sketch below.
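A minimal GLSL sketch of that plane test, assuming the deferred lighting shader already has the pixel's world-space position; the per-light plane layout (normal.xyz, d) and the field names are assumptions, not part of any particular engine:

// Hypothetical per-light data: plane stored as (normal.xyz, d) so that
// dot(normal, p) + d > 0 means "same side as the light".
struct Light { vec3 position; float radius; vec3 color; vec4 plane; int hasPlane; };

bool LightReachesPixel(Light light, vec3 worldPos)
{
    if (light.hasPlane != 0)
    {
        // Pixel is behind the wall plane: skip this light entirely.
        if (dot(light.plane.xyz, worldPos) + light.plane.w < 0.0)
            return false;
    }
    return true;
}

The same test could also be applied per tile in the light-culling compute shader, using the tile's corner points, so fully blocked lights never reach the shading loop.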

Related

Voxel Cone Tracing in Deferred pipeline?

I am working on a project where I have to implement voxel cone tracing for indirect light in C++/OpenGL. I already have a deferred renderer setup but most of the VCT examples I could find usually draw the scene once for voxelization and once with cone tracing shaders. Is it possible to run cone tracing shaders over a fullscreen quad and sample vertex data from the GBuffer or is that generally a stupid idea? Do I lose accuracy because I only have per pixel vertex data?
Is it possible to run cone tracing shaders over a fullscreen quad and sample vertex data from the GBuffer or is that generally a stupid idea?
Yes, however that's not voxel cone tracing anymore. That's Screen-Space Global Illumination (SSGI) instead. You can think of the voxelized scene in VCT as a 3D GBuffer, which makes all the difference between "screen space" and "full scene".
Do I lose accuracy because I only have per pixel vertex data?
Absolutely. All screen space approximations suffer from the same set of artifacts. They do not account for surfaces that aren't directly visible on the screen (either out of frame or occluded by visible geometry). Most noticeably, when the camera moves and objects enter or exit the frame, the reflections on visible surfaces would also change unrealistically.
A question is perhaps: what is your motivation for trying to do both?
When you do voxel cone tracing you are trying to solve the exact same problem you would be solving with deferred rendering, and now you have the overhead of both techniques. If you are already willing to deal with the overhead of voxel cone tracing, then it's better to fully commit to that technique.
The reason is simple: if you are doing voxel cone tracing then you have a 3D texture of some sort (it could be a sparse voxel octree, an actual 3D texture, or some other structure). That is essentially a 3D GBuffer.
If your idea is simply to eliminate the need for such a structure and use the existing planar GBuffer instead, then you are introducing artifacts that do not appear with traditional SSRT techniques.
In essence trying both at the same time is likely to give you the worst of both worlds rather than the best.

Shadow Rendering optimization

OK, this is basically an engine design question. I have come to the point where it's time to optimise the rendering process, and I have started with shadows. Currently my engine only renders shadow maps. For approximately 5 projected shadow maps and 5 cube map shadows I get around 235 FPS on my GT 640 (all shadows are 512x512 resolution). If I change the resolution to 2048x2048 I get 55 FPS, which is actually quite good. But I am still considering redesigning shadow map rendering entirely, and that is why I have opened this topic, just to discuss. There are two approaches that I am thinking about, and I would like you to share ideas or experiences.
The first approach is based on what I call the straightforward shadow mapping approach. Each light has its own framebuffer object with a 16-bit floating-point RG texture (for variance shadows). One idea that pops into my mind is to have two FBOs of some sort. If the light is completely static I would render only static objects and save the depth buffer (a static-objects depth buffer). This would only need to be rendered once. Then I render a second framebuffer with only dynamic objects and merge it with the static depth buffer (of course, using the static depth buffer while rendering the dynamics to do an early-z test and discard fragments). Whenever an object inside the light's frustum moves, or another object enters the frustum, the step is repeated: the static depth buffer is copied, the dynamics are rendered and merged with it. This way I could avoid rendering static objects over and over each frame. But my concern here is memory overhead: with a scene of 50+ lights (some of them point lights with cubemap shadows), colossal amounts of memory would probably be used.
The second approach is something I found on the internet a little while ago; it was called "importance shadow mapping", I think. The engine has, for example, 10 projected shadow maps and 5 cubemap shadows prepared. When doing the frustum culling and building the list of objects and lights to be rendered, 10 projected lights and 5 point lights get assigned to the waiting shadow maps. This way the engine is limited to a few shadows only and memory is saved. But in this case I can't implement the separated static/dynamic shadow system. Or maybe I could combine both methods and, per light, only save the static depth buffer and combine it with the engine's shadow maps?
With the second approach, another thing that pops into my mind is to detect which shadows fall into the camera's field of view. For example: a cube is visible to the light but not to the camera. Obviously the cube's shadow might still be visible in the final image; it depends on how the light and camera are positioned and rotated. Sometimes the cube's shadow affects the final image, other times not. My idea is to do a simple test on the CPU to determine whether a shadow is visible or not. Using simple edge detection I can create a polygon from the light's position past the cube's AABB edges and out into the distance (clipped by the light's range). I could then test whether this polygon intersects the camera's frustum to determine whether the shadow is visible to the camera, and if not, discard the object from rendering into the shadow map.
These are some quick ideas and I would like your thoughts on them: what could be done, and what is worth trying?

OpenGL/OpenTK Fill Interior Space

I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent due to culling and because of how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D, but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing half in water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full-screen polygon in front of the player (right on the near clipping plane, although you might want to push this forward a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid and move as the player does, not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even assuming a 2D texture is "extruded" through the volume will work, or better yet, weight 3 textures (one for each axis) based on the angle of the intersection/surface you're drawing on; this is usually called triplanar mapping, sketched below.
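A minimal GLSL sketch of that triplanar weighting, assuming three tiling 2D textures (texX, texY, texZ are placeholder names) and a world-space position and normal available in the fragment shader:

// Triplanar sampling: blend three planar projections by the surface normal.
vec3 TriplanarColor(sampler2D texX, sampler2D texY, sampler2D texZ,
                    vec3 worldPos, vec3 worldNormal)
{
    vec3 w = abs(worldNormal);
    w /= (w.x + w.y + w.z);                     // normalize blend weights
    vec3 cx = texture(texX, worldPos.yz).rgb;   // projection along X
    vec3 cy = texture(texY, worldPos.xz).rgb;   // projection along Y
    vec3 cz = texture(texZ, worldPos.xy).rgb;   // projection along Z
    return cx * w.x + cy * w.y + cz * w.z;
}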
Detecting an intersection between the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid, non-self-intersecting geometry. A simple idea might be to draw back faces only, after drawing everything with front faces. If you can see back faces, that part of the near plane must be inside something. For those pixels you could calculate the near clipping plane position in world space and apply the 3D texture; a rough sketch follows. Though I suspect there are faster ways than drawing everything twice.
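As a rough sketch of that back-face pass (assuming a standard OpenGL setup where the near plane sits at z = -1 in NDC; invViewProj, viewportSize and VolumeColor are placeholder names), the fragment shader could unproject the pixel onto the near plane and colour it with the 3D function:

// Fragment shader for the back-face-only pass.
// A back face surviving the depth test means the near plane cuts this object.
uniform mat4 invViewProj;    // inverse of projection * view
uniform vec2 viewportSize;
out vec4 fragColor;

vec3 VolumeColor(vec3 p);    // the 3D colour function (e.g. the triplanar sketch above)

void main()
{
    // NDC position of this pixel on the near plane (z = -1 in OpenGL NDC).
    vec2 ndc = (gl_FragCoord.xy / viewportSize) * 2.0 - 1.0;
    vec4 world = invViewProj * vec4(ndc, -1.0, 1.0);
    vec3 nearPos = world.xyz / world.w;
    // Colour with the 3D function instead of the usual surface shading.
    fragColor = vec4(VolumeColor(nearPos), 1.0);
}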
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that is all one colour ("homogeneous") then it removes light the further the light has to travel through it. Think of many alpha-transparent surfaces: take the limit and you have an exponential. The light remaining is close to 1/exp(dist), i.e. exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending, using a shader that writes the distance to a floating-point texture. Then switch to subtractive blending and render all the front faces (or the water surface). You're left with a texture containing the distances/depths for the above equation.
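For illustration, a possible full-screen composite using that thickness texture; sceneTex, thicknessTex and the uniform names are assumptions, and the maths is just the Transmittance formula above:

uniform sampler2D sceneTex;      // scene rendered normally
uniform sampler2D thicknessTex;  // water thickness from the two blending passes
uniform vec3 waterColor;         // absorption coefficient per channel
uniform float waterDensity;
in vec2 uv;
out vec4 fragColor;

void main()
{
    float depth = texture(thicknessTex, uv).r;
    vec3 transmittance = exp(-waterColor * waterDensity * depth); // Beer's law
    fragColor = vec4(texture(sceneTex, uv).rgb * transmittance, 1.0);
}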
Volume Rendering
Combining the two ideas, the material is both a transparent solid and the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function at the same time. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. Over each step of the march you assume the colour remains constant and apply absorption, keeping track of how much light you have left.
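A minimal sketch of such a brute-force march, assuming the volume lives in a 3D texture with colour in rgb and density in alpha (volumeTex, stepSize and maxSteps are placeholder names, and rayStart/rayDir are expected in the texture's [0,1] space):

uniform sampler3D volumeTex;   // rgb = colour, a = density
uniform float stepSize;        // e.g. 0.01
uniform int   maxSteps;        // e.g. 256

vec3 MarchVolume(vec3 rayStart, vec3 rayDir, vec3 background)
{
    vec3 color = vec3(0.0);
    float transmittance = 1.0;                 // light remaining along the ray
    vec3 p = rayStart;
    for (int i = 0; i < maxSteps; ++i)
    {
        vec4 s = texture(volumeTex, p);
        // Absorb and emit over this step, assuming constant properties.
        float absorb = exp(-s.a * stepSize);
        color += transmittance * (1.0 - absorb) * s.rgb;
        transmittance *= absorb;
        p += rayDir * stepSize;
        if (transmittance < 0.01) break;       // early out, nearly opaque
    }
    return color + transmittance * background;
}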
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but it generally needs simplification and approximation to be useful.

Adding sunlight to a scene

I am rendering some interactive scene in 3D and I am wondering: How do I add sunlight to it? I'll try to explain the best how I have it setup now.
What you see right now is that the directional light (the sun) is denoted by the yellow dot, which I want to replace with a realistic sunlight.
The current order of drawing is:
For all objects:
Do a light depth pass for the shadow.
Then for all objects:
Do a draw pass for the object itself, using the light depth texture.
Where would I add realistic sunlight? I have a few ideas about it:
After the current drawing order, save the output into a texture, and use a shader that takes the texture and adds sunlight on top of it.
After the current drawing order, use a shader that adds the sunlight simply to what has been drawn so far, such that it will be drawn after everything is on the screen.
Or maybe draw the sunlight before the rest of the scene gets drawn?
How would you deal with rendering a nice sunlight that represents a real life sun?
To realistically simulate sunlight, you probably need to implement some form of global illumination. A lot of the lighting we see on objects comes not directly from the light source, but from light bounced off of other objects. Global illumination simulates the bounced light.
[Global Illumination] take[s] into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination).
Another technique that may not be physically accurate but gives "nice"-looking results is ambient occlusion:
ambient occlusion is used to represent how exposed each point in a scene is to ambient lighting. So the enclosed inside of a tube is typically more occluded (and hence darker) than the exposed outer surfaces; and deeper inside the tube, the more occluded (and darker) it becomes.

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine; I don't run into trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and camera position, precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scale of the circle. This method would have 3 advantages:
No face-culling issue
No camera-position-inside-light-sphere issue
Much more efficient (vertex count severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position can easily be calculated as always:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should be dependent on the distance (camera to light) and somehow the perspective view.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to. This data is used by the light-volume render step to apply lighting correctly; see the sketch below. If it were just a circle, you would have no way of knowing whether fragments such as A and C should be illuminated or not, even if the circle were projected to the correct depth.
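To illustrate why the depth data matters, here is a minimal sketch of the per-pixel test a light-volume pass performs; the GBuffer layout (a world-space position target) and the uniform names are assumptions:

uniform sampler2D gPosition;   // world-space position stored in the GBuffer
uniform vec3  lightPos;
uniform float lightRadius;
uniform vec3  lightColor;
in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 worldPos = texture(gPosition, uv).xyz;
    float dist = length(lightPos - worldPos);
    if (dist > lightRadius) discard;            // pixel lies outside the light volume
    float atten = clamp(1.0 - dist / lightRadius, 0.0, 1.0);
    fragColor = vec4(lightColor * atten, 1.0);  // additive blending assumed
}

A flat circle can only decide this per-pixel test against the reconstructed position; the circle's own depth tells you nothing about whether the shaded surface is inside the volume.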
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in this case none of the fragments will be generated and the light will "disappear".
The lights described in the article will have a sharp falloff - understandably so, since a sphere or circle will have a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations will also introduce overhead, and it is not clear which one will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the length of the vector from the projected center to it. It must be a point on the border (silhouette) in screen space, obviously; a sketch follows.
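A sketch of that calculation, assuming a standard world-to-view matrix and perspective projection; offsetting the center along the camera's right axis gives a point approximately on the sphere's silhouette:

float ScreenRadius(vec3 lightPos, float radius,
                   mat4 viewMatrix, mat4 projectionMatrix)
{
    // Camera right axis in world space = first row of the view rotation.
    vec3 camRight = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
    vec4 c = projectionMatrix * viewMatrix * vec4(lightPos, 1.0);
    vec4 e = projectionMatrix * viewMatrix * vec4(lightPos + camRight * radius, 1.0);
    vec2 cNdc = c.xy / c.w;            // perspective division
    vec2 eNdc = e.xy / e.w;
    return length(eNdc - cNdc);        // radius in NDC units ([-1,1] range)
}

Multiply by half the viewport size per axis if you need the radius in pixels rather than NDC units.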