I have implemented PSSM (Parallel-Split Shadow Maps) for my RPG. It uses only the "sun" (one directional light high above).
So my question is: is there a special technique for adding, say, a maximum of four omni-directional lights to the pixel shader?
It would work somewhat along these lines:
At the shadowmap application (or maybe at its creation):
if in light: do as usual
else: check if any light is close enough to light this pixel (and if so, don't shadow it).
Maybe this could even be done during shadow-map generation (so filtering would be applied to those omni lights too).
Any hints or tips warmly welcomed!
The answer to this question is so obvious that the question itself suggests that you've gone too far into the hacks of 3D graphics and need to remember what all of this is actually supposed to be doing.
A shadowmap is a way to tell whether a particular location on a surface is in shadow relative to a particular light in the scene. The shadowmap answers the question, "Is there something solid between the point on the surface and the light source?"
If the answer to this question is "yes", then that light source does not contribute to the lighting computations for that point. If the answer is "no", then it does.
The color of a point on a surface is based on the incoming light from all light sources and the surface characteristics of that point on the surface (diffuse color, specular shininess, normal, etc). All of the computations from each light add into one another to produce the final color value that the point represents.
You generally also have various hacks. The ambient term is often used to represent lots of indirect illumination. Light maps can take the place of light from other sources that you're not computing dynamically in the shader. And so on. But in the end, they are all just lights.
Your lighting equation takes the light parameters (color, direction/position, or just the ambient intensity for the ambient light) and the surface characteristics (as stated above), and produces the quantity of light reflected from the surface. Since the reflectance from one light is not affected by the reflectance from other lights, you can compute this independently for each light.
All of these values are added together to produce the final value.
The fact that you can short-circuit the computation of one of these via a shadowmap test is irrelevant to the overall scheme. Just add the various lighting terms to one another, and that's your answer. There is no "special technique" to doing this; you just do it.
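In GLSL terms, it is nothing more exotic than this rough sketch (the uniform names, the single shadow lookup standing in for your PSSM cascade selection, and the attenuation curve are all assumptions on my part, not anything from your engine):

#version 330 core

in vec3 vWorldPos;
in vec3 vNormal;
in vec4 vSunShadowCoord;             // fragment position in the sun's shadow-map space

uniform sampler2DShadow uSunShadowMap;
uniform vec3 uSunDirection;          // normalized, pointing from the surface toward the sun
uniform vec3 uSunColor;

const int MAX_POINT_LIGHTS = 4;
uniform int   uPointLightCount;
uniform vec3  uPointLightPos[MAX_POINT_LIGHTS];
uniform vec3  uPointLightColor[MAX_POINT_LIGHTS];
uniform float uPointLightRadius[MAX_POINT_LIGHTS];

uniform vec3 uDiffuseColor;          // material parameter
uniform vec3 uAmbient;

out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);
    vec3 total = uAmbient * uDiffuseColor;

    // Sun: its term, and only its term, is gated by the shadow-map test.
    float sunVisible = textureProj(uSunShadowMap, vSunShadowCoord);
    total += sunVisible * max(dot(n, uSunDirection), 0.0) * uSunColor * uDiffuseColor;

    // Point lights: no shadow test, their terms are simply added on top.
    for (int i = 0; i < uPointLightCount; ++i)
    {
        vec3  toLight = uPointLightPos[i] - vWorldPos;
        float dist    = length(toLight);
        float ndotl   = max(dot(n, toLight / dist), 0.0);
        float atten   = clamp(1.0 - dist / uPointLightRadius[i], 0.0, 1.0);
        total += ndotl * atten * uPointLightColor[i] * uDiffuseColor;
    }

    fragColor = vec4(total, 1.0);
}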
I am rendering an interactive 3D scene and I am wondering: how do I add sunlight to it? I'll try to explain how I have it set up now as best I can.
The directional light (the sun) is currently denoted by the yellow dot in the scene, which I want to replace with realistic sunlight.
The current order of drawing is:
For all objects:
Do a light depth pass for the shadow.
Then for all objects:
Do a draw pass for the object itself, using the light depth texture.
Where would I add realistic sunlight? I have a few ideas about it:
After the current drawing order, save the output into a texture, and use a shader that takes the texture and adds sunlight on top of it.
After the current drawing order, use a shader that adds the sunlight simply to what has been drawn so far, such that it will be drawn after everything is on the screen.
Or maybe draw the sunlight before the rest of the scene gets drawn?
How would you deal with rendering a nice sunlight that represents a real life sun?
To realistically simulate sunlight, you probably need to implement some form of global illumination. A lot of the lighting we see on objects comes not directly from the light source, but from light bounced off of other objects. Global illumination simulates the bounced light.
[Global Illumination] take[s] into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination).
Another technique that may not be physically accurate but gives "nice"-looking results is ambient occlusion:
ambient occlusion is used to represent how exposed each point in a scene is to ambient lighting. So the enclosed inside of a tube is typically more occluded (and hence darker) than the exposed outer surfaces; and deeper inside the tube, the more occluded (and darker) it becomes.
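As a rough illustration of how that occlusion factor is usually applied (a minimal GLSL sketch; the uniform names and the assumption of a baked or screen-space AO texture are mine):

// Minimal sketch: an occlusion factor, here sampled from an assumed AO map
// (baked or produced by an SSAO pass), scales only the ambient/indirect term.
uniform sampler2D uAOMap;        // 1.0 = fully exposed, 0.0 = fully occluded
uniform vec3 uAmbientLight;

vec3 applyAmbientOcclusion(vec3 directLighting, vec3 albedo, vec2 uv)
{
    float occlusion = texture(uAOMap, uv).r;
    vec3 ambient = uAmbientLight * albedo * occlusion;  // darkens tube interiors, crevices, etc.
    return directLighting + ambient;                     // the direct term is left untouched
}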
While experimenting with lighting in OpenGL (using the LWJGL) I found that a positional light illuminates parts of a model which actually should be in the shadow. Here is an example:
Am I doing something wrong or is this just the way OpenGL's positional lights work?
Shadow mapping is not a built-in feature of OpenGL. In the normal case, lighting only considers the angle of the surface relative to the light source (and, for the specular term, the camera). Determining whether or not there is something between the light source and a surface requires greater sophistication and additional computation.
You are doing it right, and the result is as expected.
By introducing a directional light you do not cast any shadows; you are just darkening the pixels whose normals face away from the light source.
The tail simply doesn't know the rabbit exists. To darken the tail you need to implement shadow mapping (basically, you need to know whether the tail's geometry is visible from the light source's point of view, or whether it is occluded by the rabbit).
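To make the difference concrete, here is a rough GLSL sketch (the uniform and parameter names are assumed, and the constant bias is just a placeholder):

uniform sampler2D uShadowMap;    // depth rendered from the light's point of view
uniform vec3 uLightPos;
uniform vec3 uLightColor;
uniform vec3 uDiffuseColor;

vec3 shadeWithShadow(vec3 worldPos, vec3 normal, vec4 shadowCoord)
{
    // Plain positional lighting: only the angle between the normal and the
    // light direction matters, so the tail cannot know the rabbit is in the way.
    float lambert = max(dot(normalize(normal), normalize(uLightPos - worldPos)), 0.0);

    // Shadow mapping adds the missing occlusion test: compare this fragment's
    // depth as seen from the light against the depth stored in the shadow map.
    vec3 proj = shadowCoord.xyz / shadowCoord.w;             // assumes the bias matrix already maps to [0,1]
    float storedDepth = texture(uShadowMap, proj.xy).r;
    float visibility  = (proj.z - 0.005 > storedDepth) ? 0.0 : 1.0;  // 0.005 = depth bias

    return visibility * lambert * uLightColor * uDiffuseColor;
}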
I'm implementing a deferred lighting mechanism in my OpenGL graphics engine, following this tutorial. It works fine; I have no trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only the pixels that might be affected by a light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, explained precisely here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scale of the circle. This method would have three advantages:
No face-culling issue
No camera-position-inside-the-light-sphere issue
Much more efficient (the number of vertices is severely reduced, and no stencil test)
Are there any disadvantages to using this technique?
My second question deals with implementing the mentioned method. The circle's center position can easily be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now, how do I calculate the scaling of the resulting circle?
It should depend on the distance from the camera to the light and, somehow, on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to; this data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle were projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of the method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane; in that case none of the fragments will be generated, and the light will "disappear".
The lights described in the article will have a sharp falloff, understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a full-screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the length of the vector from the projected center to it. Obviously, it must be a point on the border (silhouette) in screen space.
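Something along these lines, as a rough GLSL-style sketch (the matrix names are assumed, and offsetting along the camera's right axis only approximates the true silhouette point under perspective projection):

uniform mat4 uViewProjection;
uniform mat4 uView;

vec2 projectToNdc(vec3 worldPos)
{
    vec4 clip = uViewProjection * vec4(worldPos, 1.0);
    return clip.xy / clip.w;                 // perspective division
}

float lightScreenRadius(vec3 lightCenter, float lightRadius)
{
    // The first row of the view matrix is the camera's right axis in world
    // space; offsetting along it gives a point near the sphere's silhouette.
    vec3 cameraRight = vec3(uView[0][0], uView[1][0], uView[2][0]);
    vec2 center = projectToNdc(lightCenter);
    vec2 edge   = projectToNdc(lightCenter + cameraRight * lightRadius);
    return length(edge - center);            // radius in NDC units
}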
I am implementing a deferred shading system which uses a compute shader (in DirectX 11) to cull lights in tiles, so I can handle thousands of lights at a stable framerate. The problem comes when I have to determine whether a light is blocked by scene geometry: my point lights pass through walls and bridges. I have a shadow map from the main light's (the sun's) point of view, but generating a shadow map for each point light in the scene would require a thousand cubemaps, and that's not possible. So how is this problem usually dealt with? Games like Dead Space 3 and Battlefield 3 have a lot of lights in the scene, yet they don't bleed through solid objects.
One straightforward solution would be to use screen-space ambient occlusion approaches, where you estimate the occlusion by sampling the neighbourhood. One approach I know of is SSDO, which directly targets the creation of shadows in screen space. You will probably end up with lots of artifacts in complex scenes, but the advantage is that SSDO also adds some global illumination effects.
I think most games/engines try to overcome such problems with preprocessing steps.
Static lighting: if your light sources do not move around (lights in buildings, ...), compute lightmaps or some extra vertex attributes containing the light.
Tweak the lights: just adjust the falloff distance, intensity, or location until there is no noticeable bleeding.
Some ideas of my own: depending on your representation of the light (sphere/disc?), you could compute a pruned shape for the lights. Pixels behind a wall would not lie inside the new light volume and would therefore not be lit. If you can't shape your light volume arbitrarily, you could probably add one or two planes per light defining the walls. These planes can be left undefined for most lights and only pushed to the GPU for a light near a wall. Then, during the lighting pass for the respective light, each pixel can be checked to see which side it lies on (see the sketch below).
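A rough GLSL sketch of that last idea (the struct layout and names are assumptions, not part of any particular engine):

struct PointLight
{
    vec3  position;
    float radius;
    vec3  color;
    int   planeCount;        // 0 for most lights
    vec4  planes[2];         // plane equation: dot(normal, p) + w = 0, normal pointing toward the light
};

// Returns true if the pixel lies on the far side of one of the light's
// "wall" planes and should receive no contribution from this light.
bool blockedByWall(PointLight light, vec3 worldPos)
{
    for (int i = 0; i < light.planeCount; ++i)
    {
        float side = dot(light.planes[i].xyz, worldPos) + light.planes[i].w;
        if (side < 0.0)
            return true;     // pixel is behind the wall relative to the light
    }
    return false;
}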
In Computer graphics, what's the difference between material and texture?
In OpenGL, a material is a set of coefficients that define how the lighting model interacts with the surface. In particular, ambient, diffuse, and specular coefficients for each color component (R, G, B) are defined and applied to a surface, and effectively multiplied by the amount of light of each kind/color that strikes the surface. A final emissivity coefficient is then added to each color component, which allows objects to appear luminous without actually interacting with other objects.
A texture, on the other hand, is a set of 1-, 2-, 3-, or 4-dimensional bitmap (image) data that is applied and interpolated onto a surface according to texture coordinates at the vertices. Texture data alters the color of the surface whether or not lighting is enabled (and depending on the texture mode, e.g. decal, modulate, etc.). Textures are frequently used to provide sub-polygon-level detail to a surface, e.g. applying a repeating brick-and-mortar texture to a quad to simulate a brick wall, rather than modeling the geometry of each individual brick.
In the classical (fixed-pipeline) OpenGL model, textures and materials are somewhat orthogonal. In the new programmable-shader world, the line has blurred quite a bit. Textures are frequently used to influence lighting in other ways. For example, bump maps are textures that are used to perturb surface normals to affect lighting, rather than modifying pixel color directly as a regular "image" texture would.
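As a small illustration of that last point, a GLSL sketch (the sampler name and the assumption that a TBN matrix is available are mine):

// A texture that feeds the lighting equation instead of supplying color:
// the normal map stores a tangent-space normal encoded into [0,1].
uniform sampler2D uNormalMap;

vec3 perturbedNormal(mat3 tbn, vec2 uv)
{
    vec3 n = texture(uNormalMap, uv).xyz * 2.0 - 1.0;  // decode back to [-1,1]
    return normalize(tbn * n);                          // then used in N.L, specular, etc.
}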
The question suggests a common misunderstanding of various computer graphics concepts. It is one born of pre-shader thinking and coding.
A texture is nothing more than a container for a series of one or more images, where an image is an array of some dimensionality (1D, 2D, etc) of values, where each value can be a vector of between 1 and 4 numbers. Textures also have some special techniques for accessing values from them that allow for interpolation and the minimizing of aliasing artifacts from sampling.
A texture can contain colors, but textures do not have to contain colors. Textures can be used to vary parameters across an object's surface, but that is not all textures can be used for.
Textures have no direct association with "materials"; you can use them for a wide variety of things (or nothing at all).
A material is a concept in lighting. Given a particular light and a point on the surface, the intensity (ie: color) of light reflected from that surface at that point is governed by a lighting equation. This equation is a function of many parameters. But those parameters are grouped into two categories.
One category of light equation parameters are the light parameters. These describe the characteristics of the light source. Point lights vs. directional lights vs. spot lights. The light intensity (again: color) is another parameter. Since the intensity itself may vary depending on the direction of the surface point relative to the light (think flashlights or spotlights), the intensity may be accessed from a texture. That's how many games project flashlights over a dark room.
The other category of light equation parameters describes the characteristics of the surface at that point. These are called material parameters. The material parameters, or material for short, describe important properties of the surface at the point in question. The normal at that point is an important one. There is also the diffuse reflectance (color), specular reflectance, specular shininess (exponent for Phong/Blinn-Phong) and various other parameters, depending on how comprehensive your lighting equation is.
Where do these values come from? Light parameters tend to be fixed in the world. Lights don't move per-object (though if you're doing lighting in object space, then each object would have its own light position). The light intensity may vary. But that's mostly it; anything else happens between frames, not within a single frame's rendering. So most light parameters are shader uniforms.
Material parameters can come from a variety of sources. Using a shader uniform effectively means that all points on the surface have that same value. So you could have the diffuse color come from a uniform, which would give the surface a uniform color (modified by lighting, of course). You can also vary material parameters per-vertex, by passing them as vertex attributes or computing them from other attributes.
Or you can provide a parameter by mapping a texture to a surface. This mapping involves associating texture coordinates with vertex positions, so that the texture is directly attached to the surface. The texture is sampled at that location during rendering, and that value is used to perform the lighting.
The most common textures you're familiar with, "color textures", are simply varying the diffuse reflectance of the surface. The texture provides the diffuse color at each point along the surface.
This is not the only function of textures. You could just as easily vary the specular shininess over the surface. Or the light intensity. Or anything else.
Textures are tools. Materials are just a group of light equation parameters.
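To put that separation into code, a minimal GLSL sketch (the struct layout, uniform names, and the Blinn-Phong model chosen here are assumptions for illustration):

struct Light
{
    vec3 position;
    vec3 intensity;        // the light's "color"
};

struct Material
{
    vec3  diffuse;         // diffuse reflectance at this point
    vec3  specular;        // specular reflectance
    float shininess;       // Blinn-Phong exponent
};

uniform sampler2D uDiffuseMap; // a "color texture": per-texel diffuse reflectance
uniform vec3  uSpecular;
uniform float uShininess;

vec3 shade(Light light, Material mat, vec3 pos, vec3 n, vec3 viewDir)
{
    vec3 l = normalize(light.position - pos);
    vec3 h = normalize(l + viewDir);
    vec3 diffuse  = mat.diffuse  * max(dot(n, l), 0.0);
    vec3 specular = mat.specular * pow(max(dot(n, h), 0.0), mat.shininess);
    return (diffuse + specular) * light.intensity;
}

The lighting function neither knows nor cares that the diffuse term happened to be fetched from a texture, e.g. Material mat = Material(texture(uDiffuseMap, vUv).rgb, uSpecular, uShininess);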
What I think of when I hear those terms:
A texture is an image that is mapped onto a 3D object.
A material simulates a physical material. Take "glass" for example: you couldn't produce a glass effect with a plain texture map, because it has parameters like how it reflects and refracts light at different angles. A material could also be just a simple texture map, so sometimes the terms mean the same thing.
Although the terms can be used interchangeably, it's common to refer to a bitmap as a texture.
While a fully defined texture, with lighting properties, bump mapping etc, would more usually be referred to as a material.
But I should stress that the terminology varies with the tools being used; each community tends to follow the terminology of its own tools.