OpenGL Lighting for uniform illumination [closed]

Have a look at the following two images:
The two images show the same model at different angles. It is made of multiple cylinders stacked on top of each other. As you can see, there is something odd with the lighting: one side of every cylinder is dark in the first image, and when the model is rotated, the opposite end of the cylinders goes dark instead. The explanation is clear enough: the normals aligned with the light direction light up one side only. I want both sides to be equally well lit without compromising the 3D look and feel of the cylinders. How should I set up the lighting?
I am using Smooth Shading.

It's hard to say what the problem is without more information.
What I think is happening is that the cylinders were created with smooth shading normals. This is visually pleasing but it can create problems like this one when the poly count is low.
(source: k-3D.org)
In this image, the first cylinder from the left has flat shading and the middle one has smooth shading. As you can see in this example, the smooth shaded one also has problems with too little light on one side. The reason is that, with smooth shading, the normals on the edge of the cylinder are an average of the normals from the side and the normals from the top, and that can cause lighting problems. See this diagram:
The yellow arrow is the light direction, the red is the smooth normal and the greens are the flat normals. See how the angle between the smooth normal and the light is around 90°, so it will get no light.
The solution is to keep the side normals smooth, but detach the top and bottom faces from the side. This way, the circular edge won't get smoothed but the side will. The result is the third cylinder in the image above.
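If you are generating the cylinder geometry in code rather than in a modeling package, here is a minimal C++ sketch of that idea (the struct and function names are illustrative, not from any particular library): the ring vertices are stored twice, once with smooth radial normals for the side and once with flat axial normals for the caps.

```cpp
#include <cmath>
#include <vector>

struct Vertex { float px, py, pz; float nx, ny, nz; };

// Duplicate each ring vertex: the side copy gets a smooth radial normal,
// the cap copy gets the cap's flat axial normal, so the edge stays sharp.
std::vector<Vertex> buildCylinder(float radius, float height, int segments)
{
    std::vector<Vertex> v;
    const float PI = 3.14159265358979f;
    for (int i = 0; i <= segments; ++i) {
        float a = 2.0f * PI * i / segments;
        float x = std::cos(a), z = std::sin(a);

        // Side ring: normal points radially outward (smooth shading).
        v.push_back({radius * x, 0.0f,   radius * z, x, 0.0f, z});
        v.push_back({radius * x, height, radius * z, x, 0.0f, z});

        // Cap ring: same positions, flat up/down normals (no smoothing).
        v.push_back({radius * x, 0.0f,   radius * z, 0.0f, -1.0f, 0.0f});
        v.push_back({radius * x, height, radius * z, 0.0f,  1.0f, 0.0f});
    }
    return v;  // index these into side quads and cap triangle fans
}
```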
If you cannot achieve that with your software, an easy solution is to add a bevel around the edges like this:
The bevel can be as small as you want and it will achieve the effect you want.
Hope it helps.

Related

How to polygonize or create triangles in surface nets isosurface creation algorithm

I am implementing a naive surface nets algorithm and have a question about creating the triangles. I don't quite understand how triangulation works in surface nets.
I have the voxels where the surface intersects, and a vertex for each surface node (for now it's just the center of the cube). Now I am ready to create triangles between the 6 possible neighbors of each surface-net cube. I currently create 12 possible triangles for each node, but I am looking for ways to reduce the number of triangles since there are duplicates.
In the figure below, I am considering building triangles only for a single quadrant: cubes A, B, C and D, which are the left, center, back and bottom cubes. If all 4 surface nodes have intersections, I currently create faces 1, 2 and 3, and also the remaining faces 4, 5 and 6 of the box. Something doesn't seem right. I am wondering whether I am headed in the right direction, or whether there is another way to create triangles in surface nets.
There is another way of doing this, and a bit of code is shown here:
Basic Dual Contouring Theory
(under the line #construct faces)
You can also negate the "dirs" variable (so it becomes (-1,0,0), (0,-1,0), (0,0,-1)) and construct the faces in the same loop as your sampling; this improves performance a little.
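For illustration, here is a rough C++ sketch of that face-construction step; `density()`, `cellVertex()` and `quad()` are assumed helpers standing in for your own sampling, vertex-lookup and triangle-emission code, and `N` is the grid resolution:

```cpp
// For every grid edge along +x, +y or +z whose endpoints straddle the
// surface (density sign change), emit exactly one quad joining the dual
// vertices of the four cells that share the edge.
const int dirs[3][3] = { {1,0,0}, {0,1,0}, {0,0,1} };

for (int x = 1; x < N; ++x)
for (int y = 1; y < N; ++y)
for (int z = 1; z < N; ++z)
    for (int d = 0; d < 3; ++d) {
        int nx = x + dirs[d][0], ny = y + dirs[d][1], nz = z + dirs[d][2];
        if (nx >= N || ny >= N || nz >= N) continue;

        bool solidA = density(x, y, z)    < 0.0f;
        bool solidB = density(nx, ny, nz) < 0.0f;
        if (solidA == solidB) continue;  // edge does not cross the surface

        // The four cells around this edge are offset along the two axes
        // perpendicular to d; their vertices form one quad (two triangles).
        int u = (d + 1) % 3, v = (d + 2) % 3;
        int du[3] = {0,0,0}, dv[3] = {0,0,0};
        du[u] = 1; dv[v] = 1;

        quad(cellVertex(x - du[0] - dv[0], y - du[1] - dv[1], z - du[2] - dv[2]),
             cellVertex(x - du[0],         y - du[1],         z - du[2]),
             cellVertex(x,                 y,                 z),
             cellVertex(x - dv[0],         y - dv[1],         z - dv[2]),
             solidB);  // flip the winding when the far endpoint is solid
    }
```

Emitting each crossing edge exactly once this way is what removes the duplicate triangles you are seeing.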
I hope this helped

OpenGL - How to show the occluded region of a sprite as a silhouette [closed]

I'm making a 2D game with OpenGL, using textured quads to display 2D sprites. I'd like to create an effect whereby any character sprite that's partially hidden by a terrain sprite will have the occluded region visible as a solid-colored silhouette, as demonstrated by the pastoral scene in this image.
I'm really not sure how to achieve this. I'm guessing the solution will involve some trick with the fragment shader, but as for specifics I'm stumped. Can anyone point me in the right direction?
Here's what I've done in the past:
Draw the world/terrain (everything you want the silhouette to show through)
Disable the depth test
Disable writes to the depth buffer
Draw the sprites in silhouette mode (a different shader or texture)
Enable the depth test
Enable writes to the depth buffer
Draw the sprites in normal mode
Draw anything else that should go on top (like the HUD)
Explanation:
When you draw the sprites the first time (in silhouette mode), they draw over everything but don't affect the depth buffer, so the second pass won't z-fight with them. On the second pass, the parts of each sprite behind the terrain fail the depth test, but the silhouette has already been drawn there.
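In classic OpenGL the pass order above looks roughly like this; `drawWorld()`, `drawSprites()`, `drawHUD()` and the shader switches are placeholders for your own code:

```cpp
drawWorld();               // terrain writes depth as usual

glDisable(GL_DEPTH_TEST);  // silhouette pass: draws over everything...
glDepthMask(GL_FALSE);     // ...but leaves the depth buffer untouched
useSilhouetteShader();     // e.g. output a flat color
drawSprites();

glEnable(GL_DEPTH_TEST);   // normal pass: depth-tested against the terrain
glDepthMask(GL_TRUE);
useNormalShader();
drawSprites();             // visible parts overwrite the silhouette

drawHUD();                 // anything that should go on top
```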
You can do things like this using stenciling or depth buffering.
When rendering the wall, make sure it writes a different value to the stencil buffer than the background. Then render the cow twice: once where the stencil test passes (away from the wall) and once where it fails (behind the wall), using a different shader each time.
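One way the stencil version might be set up (the draw calls and shader switches are illustrative placeholders):

```cpp
glEnable(GL_STENCIL_TEST);

// Pass 1: the wall tags its pixels with stencil value 1.
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawWall();

// Pass 2: silhouette where the wall covers the cow (stencil == 1);
// depth testing is off because the cow is behind the wall here.
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDisable(GL_DEPTH_TEST);
useSilhouetteShader();
drawCow();

// Pass 3: normal rendering everywhere else (stencil != 1).
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glEnable(GL_DEPTH_TEST);
useNormalShader();
drawCow();
```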

3d shading/lighting is lost with ambient shaded sides

How do you handle shading in a 3D game? I have a directional light source that shades one side of a tree made of cubes. The remaining three sides all get ambient shading only, so the 3D effect is lost when looking at two ambient-shaded sides. Am I missing something? Should I shade the side furthest from the light source even darker? I looked at Fallout 3 and it seems that this is what they do. Minecraft, however, appears to shade a grass mound with two opposite sides light and the remaining two opposite sides dark, giving the effect of two directional lights for the two light sides and ambient light for the dark sides.
It sounds like your light source is currently axis-aligned (e.g. its direction is (x, y, 0) or (0, y, z)). This fully lights the side of your tree facing the light and doesn't light the others at all. One thing you could do to improve things is tilt the light slightly by adding a small amount to the zero x or z component. Two faces are then lit by different amounts (assuming the zero is raised to somewhere between 0 and the x/z value), leaving only two unlit faces. A second light can break up the remaining similarity if necessary.
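If you're using fixed-function OpenGL lighting, nudging the light off-axis can be as simple as this (the exact numbers are placeholders to tune):

```cpp
// w = 0 makes this a directional light; the vector points toward the light.
// Was (0, 1, 0) - straight overhead; the small x and z tilt it off-axis.
GLfloat lightDir[4] = { 0.3f, 1.0f, 0.2f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightDir);
```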
With an object made out of cubes, you're guaranteed that one side of the object is going to be in the dark. With ambient light, you'll illuminate it a bit, but the edges will still be unshaded. There are a few options you can use:
Use a texture for the cubes to help show the shape of the cubes
Take multiple passes, bouncing light off specular surfaces (expensive!)
Make a second light source (though this will probably look very unnatural)
It sounds like what you're doing is supposed to be very simple, so I'd say your current implementation seems satisfactory.
As a start, try calling glLightModeli to set GL_LIGHT_MODEL_TWO_SIDE to GL_TRUE.
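In fixed-function OpenGL that is a one-liner:

```cpp
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
```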

OpenGL lighting question?

Greetings all,
As seen in the image, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I am wondering how I can make them look good (so the depth is visible, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
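For the fog option, a minimal fixed-function sketch (the start/end distances and fog color are placeholders to tune for your scene):

```cpp
GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };  // match your clear color
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);   // fade linearly between start and end
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_START, 10.0f);
glFogf(GL_FOG_END, 100.0f);       // lines at this distance vanish into fog
```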
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors - and you have no surfaces. However @roe pointed out that normal vectors are actually per vertex in OpenGL, and as such, any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (is it topographic contours?) or based on distance from viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them: can you adjust the density of the contours? E.g. one contour line per 5 ft of height difference instead of per 1 ft, or whatever the units are, depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh: set up four corner points at zero height at the bounds (or beyond), then drop the contours into the mesh and have it triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals, smoothing them across adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses Delaunay triangulation, which is a bit of a beast, but libraries exist for it. The best I know of are the ones based on the Guibas and Stolfi papers, since they are pretty optimal.
To generate the normals, you do a simple cross product of two edge vectors, ensure the facing is correct, and manually renormalize them before feeding them into glNormal.
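As a sketch of that normal step (a tiny self-contained C++ helper, not from any particular library):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Face normal of triangle (a, b, c): cross product of two edges, manually
// renormalized; negate the result if it points into the surface.
Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n = { u.y * v.z - u.z * v.y,
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;  // ready to feed to glNormal3f(n.x, n.y, n.z)
}
```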
In the old days you would then bake the result into a display list, but the newer way is to make a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art - good for games, not so good for CAD.
(thx for bonus last time)

Can someone describe the algorithm used by Ken Silverman's Voxlap engine?

From what I gathered, he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares - which caught me off guard, since I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code, but it was difficult for me to understand what was going on. I would like to implement something similar and would like to know the algorithm for doing so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each (x, y) stack of voxels (if z means 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, maintaining a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines that are tilted in screen space.
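A rough C++ sketch of that storage scheme (the field names are guesses for illustration, not Voxlap's actual layout):

```cpp
#include <vector>

// One run of solid voxels inside a vertical (x, y) column.
struct VoxelRun {
    unsigned char zTop;      // first solid voxel of the run
    unsigned char zBottom;   // last solid voxel of the run
    unsigned char rgb[3];    // color of the exposed surface voxels
};

// Each column stores only its runs, sorted top to bottom; empty space
// between runs costs nothing, which is what makes the RLE scheme compact.
struct Column {
    std::vector<VoxelRun> runs;
};

// The world is a flat 2D grid of columns: columns[y * WIDTH + x].
std::vector<Column> columns;
```

The renderer then walks these runs once per screen column, keeping the list of still-visible vertical spans and clipping it as each cube is drawn, as described above.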
I didn't look at the algorithm itself, but I can tell the following based off the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works. It doesn't draw 2D squares; it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look less blocky.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells, checking which are occupied and which are not, and then performing the lighting calculation with that normal. Alternatively, each voxel can store a normal along with its color or other material properties.
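A hedged sketch of that neighbor-based estimate: a central difference of occupancy along each axis, normalized (`solid()` is an assumed helper returning 1 for occupied cells and 0 otherwise):

```cpp
#include <cmath>

// Normal points from solid toward empty space: if the -x neighbor is solid
// and the +x neighbor is empty, the x component comes out positive.
void estimateNormal(int x, int y, int z, float n[3])
{
    n[0] = float(solid(x - 1, y, z) - solid(x + 1, y, z));
    n[1] = float(solid(x, y - 1, z) - solid(x, y + 1, z));
    n[2] = float(solid(x, y, z - 1) - solid(x, y, z + 1));
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
}
```

Sampling a slightly larger neighborhood (e.g. averaging over a 3x3x3 block) gives smoother normals at the cost of a few more lookups.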