3D shading/lighting is lost with ambient-shaded sides - OpenGL

How do you guys handle shading in a 3D game? I have a directional light source that shades one side of a tree made of cubes. The remaining three sides all get ambient shading only, so the 3D effect is lost when looking at two ambient-shaded sides. Am I missing something? Should I be shading the side furthest from the light source even darker? I tried looking at Fallout 3 and it kinda looks like this is what they do. Minecraft, however, appears to shade a grass mound with two opposite sides light and the remaining two opposite sides dark, kinda giving the effect that there are two directional lights for the two light-shaded sides and ambient light for the dark-shaded sides.

It sounds like your light source is currently axis-aligned (e.g. has direction x, y, 0 or 0, y, z). This will fully light the side of your tree facing the light and not light the others at all. One thing you could do to improve things is move the light slightly (by adding a bit to the 0 for x or z). This will mean that two faces are lit, by different amounts (assuming that the 0 is increased to somewhere between 0 and the x/z value). Then you've only got two unlit faces. A second light will make them less similar if necessary.
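A minimal fixed-function sketch of that idea; the direction values here are just example numbers, not anything from the question:

// An axis-aligned direction like (1, 1, 0) lights only one vertical face;
// nudging the z component to 0.3 lets a second face catch some light too.
GLfloat lightDir[] = { 1.0f, 1.0f, 0.3f, 0.0f };   // w = 0 -> directional light (vector points toward the light)
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, lightDir);       // set after the view transform, before drawing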

With an object made out of cubes, you're guaranteed that one side of the object is going to be in the dark. With ambient light, you'll illuminate it a bit, but the edges will still be unshaded. There are a few options you can use:
Use a texture for the cubes to help show the shape of the cubes
Take multiple passes, bouncing light off specular surfaces (expensive!)
Make a second light source (though this will probably look very unnatural)
It sounds like what you're doing is supposed to be very simple, so I'd say your current implementation seems satisfactory.

As a start, try calling glLightModeli to set GL_LIGHT_MODEL_TWO_SIDE to GL_TRUE.
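For reference, with the fixed-function pipeline that call looks like this:

glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);   // back faces get lit too, using flipped normals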

Related

Computing normals for squares

I've got a model that I've loaded from a JSON file (stored as one tile at a time, with lots of bools for height, slope, smooth, etc.). I've then computed face normals for all of its faces and copied them to each of their vertices. What I now want to do (I have been trying for days) is to smooth the vertex normals, in the simplest way possible. What I'm trying to do is set each vertex normal to a normalized sum of its surrounding face normals. Now, my problem is this:
The two circled vertices should end up with perfectly mirrored normals. However, the one on the left has 2 light faces and 4 dark faces. The one on the right has 1 light face and 6 dark faces. As such, they both end up with completely different normals.
What I can't work out is how to do this properly. What faces should I be summing up? Or perhaps there is a completely different method I should be using? All of my attempts so far have come up with junk and / or consisted of hundreds of (almost certainly pointless) special cases.
Thanks for any advice, James
Edit: Actually, I just had a thought about what to try next. Would adding only a percentage of each triangle's normal based on its angle work (if that makes sense)? I mean, for the left, clockwise: x1/8, x1/8, x1/4, x1/8, x1/8, x1/4?
And then not normalize it?
That solution worked wonderfully. Final result:
Based on the image, it looks like you might want to take the average of all unique normals of all adjacent faces. This avoids double-counting faces with the same normal.
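A small sketch of that suggestion; the Vec3 type and the function name are just placeholders for illustration:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average only the unique normals of the faces around a vertex, so a vertex
// bordered by several coplanar faces is not dragged by double-counted normals.
Vec3 smoothVertexNormal(const std::vector<Vec3>& adjacentFaceNormals)
{
    std::vector<Vec3> unique;
    for (const Vec3& n : adjacentFaceNormals) {
        bool seen = false;
        for (const Vec3& u : unique) {
            // treat nearly identical normals as duplicates
            if (std::fabs(n.x - u.x) < 1e-4f &&
                std::fabs(n.y - u.y) < 1e-4f &&
                std::fabs(n.z - u.z) < 1e-4f) { seen = true; break; }
        }
        if (!seen) unique.push_back(n);
    }

    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (const Vec3& u : unique) { sum.x += u.x; sum.y += u.y; sum.z += u.z; }

    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    if (len > 0.0f) { sum.x /= len; sum.y /= len; sum.z /= len; }
    return sum;   // the smoothed, normalized vertex normal
}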

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine; I don't get into trouble with that.
When it comes to the point lights, it says to render spheres around the lights so that only the pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and camera position, precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have three advantages:
No face-culling issue
No camera-position-inside-light-sphere issue
Much more efficient (number of vertices severely reduced, plus no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position can be calculated easily, as always:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should depend on the distance from the camera to the light and somehow on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to. This data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle were projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in this case none of the fragments will be generated, and the light will "disappear".
The lights described in the article will have a sharp falloff - understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations will also introduce overhead, and it is not clear which will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates, and calculate the vector length from the center. It must be a point on the border in screen space, obviously.
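A rough sketch of that, reusing modelViewProjectionMatrix and pos from the question's snippet; cameraRight (the camera's world-space right vector) and lightRadius are assumptions, the other names are illustrative, and the silhouette point is only approximated:

vec4 centerClip = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 center     = centerClip.xy / centerClip.w;

// project a point roughly on the sphere's silhouette
vec4 edgeClip = modelViewProjectionMatrix * vec4(pos + cameraRight * lightRadius, 1.0);
vec2 edge     = edgeClip.xy / edgeClip.w;

float screenRadius = length(edge - center);   // scaling of the circle in normalized device coordinates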

GLSL effect on low-poly surface

I've got a vertex/fragment shader with a point light and attenuation. I need to apply this shader to a cube face, and I need to see a change in the gradation of colours. If I use a high-poly mesh,
everything works quite well and the effect is nice; my goal is to have a gradient on this low-poly mesh.
I tried outputting the normal with gl_FragColor = vec4(n, 1.0) (where n is the normal), but I get a solid colour per surface.
Could this be the reason why I don't see a gradation?
cheers
What you are observing is correct behaviour. A cube's faces are perfectly flat, so the normals at each vertex of a face are the same.
Note, however, that in Phong lighting calculations you should also use the fragment's position, which is interpolated between the 3 (or 4, when using quads) vertices of the given (sub)face. It can be used to calculate the light and eye vectors at the given fragment's position.
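A minimal GLSL sketch of that point; the varying/uniform names are illustrative, not taken from the question's shader. The interpolated position is what produces a gradient even though the normal is constant across the face:

varying vec3 fragPos;     // interpolated per fragment by the rasterizer
varying vec3 fragNormal;  // identical across a flat face
uniform vec3 lightPos;
uniform vec3 eyePos;

void main()
{
    vec3 N = normalize(fragNormal);
    vec3 L = normalize(lightPos - fragPos);   // varies per fragment -> gradient
    vec3 V = normalize(eyePos - fragPos);
    vec3 R = reflect(-L, N);

    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(R, V), 0.0), 256.0);                // high shininess so the highlight is visible (example value)
    float dist = length(lightPos - fragPos);
    float att  = 1.0 / (1.0 + 0.1 * dist + 0.01 * dist * dist);  // example attenuation terms

    gl_FragColor = vec4(vec3(0.1) + att * (diff * vec3(0.8) + spec * vec3(1.0)), 1.0);
}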
I've experienced similar problems lately, and I figured out that your cube really needs to shine if you want to see something non-flat; and I mean that literally. Set the shininess to a reasonably high value (250-500). You should see a focused, moving highlight on the face that reflects directly towards you. If not, your lighting shader is probably wrong.

Can you mix omnidirectional lights with a shadowmap?

I have implemented PSSM (Parallel-Split Shadow Map) for my RPG. It uses only the "sun" (one directional light high above).
So my question is: is there a special technique to add, say, a maximum of 4 omnidirectional lights to the pixel shader?
It would work somewhat along these lines:
At the shadowmap application (or maybe at its creation):
if in light: do as usual
else: check if any light is close enough to light this pixel (and if so, don't shadow it).
Maybe this can even be done in the shadowmap generation (so filtering will be applied to those omni lights too).
Any hints or tips warmly welcomed!
The answer to this question is so obvious that the question itself suggests that you've gone too far into the hacks of 3D graphics and need to remember what all of this is actually supposed to be doing.
A shadowmap is a way to tell whether a particular location on a surface is in shadow relative to a particular light in the scene. The shadowmap answers the question, "Is there something solid between the point on the surface and the light source?"
If the answer to this question is "yes", then that light source does not contribute to the lighting computations for that point. If the answer is "no", then it does.
The color of a point on a surface is based on the incoming light from all light sources and the surface characteristics of that point on the surface (diffuse color, specular shininess, normal, etc). All of the computations from each light add into one another to produce the final color value that the point represents.
You generally also have various hacks. The ambient term is often used to represent lots of indirect illumination. Light maps can take the place of light from other sources that you're not computing dynamically in the shader. And so on. But in the end, they are all just lights.
Your lighting equation takes the light parameters (color, direction/position, or just the ambient intensity for the ambient light) and the surface characteristics (as stated above), and produces the quantity of light reflected from the surface. Since the reflectance from one light is not affected by the reflectance from other lights, you can compute this independently for each light.
All of these values are added together to produce the final value.
The fact that you can short-circuit the computation of one of these via a shadowmap test is irrelevant to the overall scheme. Just add the various lighting terms to one another, and that's your answer. There is no "special technique" to doing this; you just do it.
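A short pixel-shader sketch of what that amounts to; the helper functions and names here (ambientTerm, sampleShadowMap, pointLightTerm, etc.) are hypothetical placeholders, not anything from PSSM or the question:

vec3 color = ambientTerm(surface);

// the shadow map only gates the directional "sun" term
float sunVisibility = sampleShadowMap(shadowCoord);      // 0 = in shadow, 1 = fully lit
color += sunVisibility * directionalLightTerm(sunDir, surface);

// the omni lights are simply added on top, unaffected by the sun's shadow map
for (int i = 0; i < NUM_POINT_LIGHTS; ++i)                // e.g. up to 4
    color += pointLightTerm(pointLights[i], surface);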

OpenGL Spotlight shining through from rear-face

I have a spotlight source in OpenGL, pointing towards a texture-mapped sphere.
I rotate the light source with the sphere, such that if I rotate the sphere to the 'non-light' side, that side should be dark.
The odd part is, the spotlight seems to be shining through my sphere (it's solid, with no gaps between triangles). The light seems to be 'leaking' through to the other side.
Any thoughts on why this is happening?
Screenshots:
Front view, low light to emphasize the problem
Back view, notice the round area that is 'shining through'
It's really hard to tell from the images, but:
Check if GL_LIGHT_MODEL_TWO_SIDE is being set (two sided lighting), but more importantly have a look at the normals of the sphere you are rendering.
Edit: Also - change the background colour to something lighter. Oh, and make sure you aren't rendering with alpha blending turned on (maybe it's a polygon sorting issue).
OK, I'm a noob - I was specifying my normals, but not calling glEnableClientState(GL_NORMAL_ARRAY). Hence all normals were facing one direction (I think that's the default, no?)
Anyway - a lesson learned - always go back over the basics.
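For anyone hitting the same thing, a minimal sketch of the client-state setup described above (the vertices/normals arrays and vertexCount are placeholders):

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);          // the missing call - without it every vertex uses the current normal
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);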