OpenGL Spotlight shining through from rear face

I have a Spotlight source in OpenGL, pointing towards a texture mapped sphere.
I rotate the light source together with the sphere, so that if I rotate the sphere around to the 'non-light' side, that side should be dark.
The odd part is that the spotlight seems to be shining through my sphere (it's solid, with no gaps between the triangles). The light seems to be 'leaking' through to the other side.
Any thoughts on why this is happening?
Screenshots:
Front view, low light to emphasize the problem
Back view, notice the round area that is 'shining through'

It's really hard to tell from the images, but:
Check whether GL_LIGHT_MODEL_TWO_SIDE is being set (two-sided lighting), but more importantly have a look at the normals of the sphere you are rendering.
Edit: Also, change the background colour to something lighter. Oh, and make sure you aren't rendering with alpha blending turned on (maybe it's a polygon sorting issue).

OK, I'm a noob - I was specifying my normals, but not calling glEnableClientState(GL_NORMAL_ARRAY). Hence all normals were facing one direction (I think that's the default, no?)
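For anyone hitting the same thing, the relevant bit of a legacy vertex-array setup looks roughly like this (the array names and vertex count are placeholders):
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);       // without this, the normal data below is never used
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);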
Anyway - a lesson learned - always go back over the basics.

Related

OpenGL/OpenTK Fill Interior Space

I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent because of culling and the way the geometry is drawn. I want to simulate the darkness/colour/texture within it instead.
I know some games do this by overlaying a texture/colour directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
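As a tiny illustration (old-style GLSL, all names made up): a fragment shader that colours a surface from its 3D position rather than a 2D UV might look like:
varying vec3 worldPos;                      // interpolated world-space position

vec3 volumeColour(vec3 p)                   // "a colour defined everywhere in space"
{
    // simple procedural banding through the volume, purely illustrative
    return mix(vec3(0.35, 0.25, 0.15), vec3(0.6, 0.5, 0.4), fract(p.y * 4.0));
}

void main()
{
    gl_FragColor = vec4(volumeColour(worldPos), 1.0);
}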
For the head-in-a-wall effect you could draw a full-screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not look like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even a 2D texture "extruded" through the volume will work, or better yet, weight three textures (one for each axis) based on the orientation of the surface you're drawing on.
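A sketch of that three-texture weighting (often called triplanar mapping) in old-style GLSL, with made-up sampler and varying names:
uniform sampler2D texX, texY, texZ;         // one texture per axis
varying vec3 worldPos;
varying vec3 worldNormal;

void main()
{
    vec3 w = abs(normalize(worldNormal));
    w /= (w.x + w.y + w.z);                 // weights sum to one
    vec4 cx = texture2D(texX, worldPos.yz); // projected along X
    vec4 cy = texture2D(texY, worldPos.xz); // projected along Y
    vec4 cz = texture2D(texZ, worldPos.xy); // projected along Z
    gl_FragColor = cx * w.x + cy * w.y + cz * w.z;
}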
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
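One possible shape for that two-pass idea (fixed-function-style GL; drawScene() is a placeholder for however you normally submit geometry):
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glCullFace(GL_BACK);
drawScene();            // pass 1: normal render, front faces fill the depth buffer
glCullFace(GL_FRONT);   // pass 2: back faces only
drawScene();            // a back-face fragment that survives the depth test marks a
                        // pixel where the near plane sits inside closed geometry;
                        // flag it (e.g. via stencil) or shade it with the 3D colour
glCullFace(GL_BACK);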
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes light the further the light has to travel through it. Think of many alpha-transparent surfaces; take the limit and you have an exponential. The light remaining is close to exp(-dist), i.e. 1/exp(dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
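A sketch of just the blending state for that trick (drawWaterVolume(), the distance-writing shader, and the floating-point render target are all assumed to exist):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDisable(GL_DEPTH_TEST);                    // every face should contribute
glBlendEquation(GL_FUNC_ADD);                // accumulate back-face distances
glCullFace(GL_FRONT);
drawWaterVolume();                           // back faces / seabed
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);   // result = dest - source
glCullFace(GL_BACK);
drawWaterVolume();                           // front faces / water surface
// the target now holds (back distance - front distance), i.e. the thickness per pixel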
Volume Rendering
Combining the two ideas, the material is both a transparent solid and the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), at the same time applying the absorption function. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. At each step you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
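A minimal ray-march in that spirit (old-style GLSL; the step count, density scale, and the assumption that the ray is expressed in the 3D texture's [0,1] coordinate space are all mine):
uniform sampler3D volumeTex;
const int   STEPS = 64;
const float DENSITY = 4.0;

vec4 marchRay(vec3 rayStart, vec3 rayDir, float rayLength)
{
    float stepLen = rayLength / float(STEPS);
    vec3  colour = vec3(0.0);
    float remaining = 1.0;                              // light still reaching the eye
    for (int i = 0; i < STEPS; ++i)
    {
        vec3 p = rayStart + rayDir * (float(i) + 0.5) * stepLen;
        vec4 voxel = texture3D(volumeTex, p);           // rgb = colour, a = density
        float absorb = exp(-voxel.a * DENSITY * stepLen); // Beer's law over one step
        colour    += remaining * voxel.rgb * (1.0 - absorb);
        remaining *= absorb;
    }
    return vec4(colour, 1.0 - remaining);               // alpha = how opaque the volume was
}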
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and making the interior geometry (under the water or inside the wall) darker, either with reduced lighting or with darker colours. If you care about the depth effect, look at OpenGL fog.
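For reference, classic fixed-function fog is only a few calls (the mode and values below are just an example):
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP2);                        // exponential falloff, similar to absorption
GLfloat fogColour[] = { 0.1f, 0.3f, 0.35f, 1.0f };   // murky blue-green
glFogfv(GL_FOG_COLOR, fogColour);
glFogf(GL_FOG_DENSITY, 0.15f);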
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximation to be useful.

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine; I don't run into trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and camera position, precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have three advantages:
No face-culling issue
No camera-position-inside-the-light-sphere issue
Much more efficient (number of vertices severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position could easily be calculated as always:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should be dependent on the distance (camera to light) and somehow the perspective view.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to; this data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle were projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in that case none of the fragments will be generated, and the light will "disappear".
Lights described in the article will have a sharp falloff - understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would certainly just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the vector length from the center. It must be a point on the silhouette (border) in screen space, obviously.
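Roughly, building on the question's own snippet (cameraRight and lightRadius are assumed to be available; cameraRight is a world-space vector perpendicular to the view direction):
vec4 centerClip = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerNdc  = centerClip.xy / centerClip.w;
// a world-space point roughly on the silhouette: offset by the radius, perpendicular to the view
vec4 edgeClip   = modelViewProjectionMatrix * vec4(pos + cameraRight * lightRadius, 1.0);
vec2 edgeNdc    = edgeClip.xy / edgeClip.w;
float circleScale = length(edgeNdc - centerNdc);   // screen-space scaling for the circle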

ray tracing shadow bug

http://pastebin.com/vkTJt0sT
I am trying to render an image similar to the left one and am having problems with shadows + reflections.
Right now, only the shadow code is enabled, to show the problem.
As you can see, the red ball should be shadowed near the green one, but the pixels all get messed up for some weird reason. When I disable the shadow part of the code, it renders the red ball normally, without shadows.
I think the root of this problem is also affecting the reflections. Hope you guys can give me some tips; I'm losing it.
Given that your image shows the speckled self-shadowing artifacts sometimes called "shadow acne", this is a classic case of the shadow ray hitting the very object it was cast from. When hit-testing a shadow ray, you need to exclude the surface that generated the ray. Just pass the source object into your shadow function and ignore it.
This method only works for convex shapes. If you have shapes that do self-shadow (a torus, for example), you need to be more general. The usual approach is to define an epsilon (floating-point error tolerance) and ignore any intersection points that are nearer than that.
The other approach is to detect which side of the surface you hit. A sphere should not shadow itself, because the shadow ray is cast in the same general direction as the surface normal (i.e. the dot product of the outgoing ray and the surface normal is positive), so such a hit should not be counted as a shadow.
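A small C-style sketch of the epsilon approach; the Vec3/Ray/Hit types and intersectScene() are placeholders for whatever the pastebin code already defines:
#define SHADOW_EPSILON 1e-4f

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 origin, dir; } Ray;
typedef struct { float t; } Hit;

int intersectScene(const Ray *ray, Hit *hit);   /* assumed: returns 1 and fills the nearest hit */

int pointInShadow(Vec3 p, Vec3 n, Vec3 dirToLight, float distToLight)
{
    Ray shadowRay;
    /* push the origin off the surface so the ray cannot re-hit it at t ~ 0 */
    shadowRay.origin.x = p.x + n.x * SHADOW_EPSILON;
    shadowRay.origin.y = p.y + n.y * SHADOW_EPSILON;
    shadowRay.origin.z = p.z + n.z * SHADOW_EPSILON;
    shadowRay.dir = dirToLight;

    Hit hit;
    /* only hits strictly between the surface and the light count as blockers */
    return intersectScene(&shadowRay, &hit)
        && hit.t > SHADOW_EPSILON
        && hit.t < distToLight;
}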
Solved:
There was an algorithmic problem which isn't easy to explain.
Another method is: basically I check whether the cosine of the angle is bigger than 0.0001, and if it is bigger than this I don't shadow it.

Cylinder Normals

I'm developing an OpenGL application which right now only draws a big tube consisting of several small cylinders (kind of like a slinky). I'm getting an annoying effect when I turn the lighting and normals on, as from certain angles I get these annoying black dots on the cylinders' borders:
I believe this is a byproduct of the fact that the cylinders are very thin. Basically, I set the normal to (0, 0, +/-1) for the top/base, and the side normals are (cos(toRadian(beta)), sin(toRadian(beta)), 0).
Is it possible to remove this effect without making the cylinders 'fatter'? Or is there something wrong with the way I define the normals?
Thanks
This appears to be the rendering of the sides of the cylinders. The yellow in the image corresponds to the tops of the cylinders. The sides of the cylinders are at 90 degrees to the tops, so they are not lit (I'm guessing the light is in the same direction as the camera) and appear black. With the cylinders being so thin, the sides will not fill many pixels, so they don't show up much.
How to fix it? I've got a couple of ideas:
1) Just draw the tops and bottoms, not the sides - this will definitely fix the problem when viewed from this angle, but will lead to further problems if your camera moves.
2) Disable lighting, then all faces will be drawn the same colour (assuming the sides are the same colour as the top).
3) Use multi-sampling - this means most of the edges won't appear (as each pixel is more likely to be top than side due to the angle); see the sketch below.
4) Add more lights around the scene, perpendicular to the current light.
1 & 2 are your best bet depending on what you're trying to achieve.

3d shading/lighting is lost with ambient shaded sides

How do you guys handle shading in a 3D game? I have a directional light source that shades one side of a tree made of cubes. The remaining three sides all get ambient shading only, so the 3D effect is lost when looking at two ambient-shaded sides. Am I missing something? Should I be shading the side furthest from the light source even darker? I tried looking at Fallout 3, and it kind of looks like this is what they do. Minecraft, however, appears to shade a grass mound with two opposite sides light and the remaining two opposite sides dark, giving the effect that there are two directional lights for the two light-shaded sides and ambient light for the dark-shaded sides.
It sounds like your light source is currently axis-aligned (e.g. has direction (x, y, 0) or (0, y, z)). This will fully light the side of your tree facing the light, and not light the others at all. One thing you could do to improve things is to move the light slightly off-axis (by adding a bit to the 0 for x or z). This means two faces are lit, by different amounts (assuming the 0 is increased to somewhere between 0 and the x/z value). Then you've only got two unlit faces. A second light will make them less similar if necessary.
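In fixed-function GL that just means giving the light a non-axis-aligned direction; a directional light is set through GL_POSITION with w = 0 (the values here are only an example):
GLfloat lightDir[] = { 0.3f, 1.0f, 0.2f, 0.0f };   // w = 0: directional; xyz is the direction toward the light
glLightfv(GL_LIGHT0, GL_POSITION, lightDir);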
With an object made out of cubes, you're guaranteed that one side of the object is going to be in the dark. With ambient light, you'll illuminate it a bit, but the edges will still be unshaded. There are a few options you can use:
Use a texture for the cubes to help show the shape of the cubes
Take multiple passes, bouncing light off specular surfaces (expensive!)
Make a second light source (though this will probably look very unnatural)
It sounds like what you're doing is supposed to be very simple, so I'd say your current implementation seems satisfactory.
As a start, try calling glLightModeli to set GL_LIGHT_MODEL_TWO_SIDE to GL_TRUE.
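That is a one-line state change:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);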