Headlight, or understanding the way light works - opengl-es-lighting

I'm developing a 3D maze project with OpenGL. Despite my poor programming knowledge I managed to build it and add textures and mouse/keyboard controls (mostly borrowed from the net, though). Now I'm stuck with lighting.
My idea (similar to the famous game SCP-087) was to make a headlight with a short range of light. I've read a lot about lighting in OpenGL and about normals, but the light still behaves oddly and somewhat randomly.
I have functions that move (translate) the world around the camera. I understand that if I set up my light0 at the beginning of the Render() function, before the translations, the light should stay fixed relative to the camera while the world moves around both - that sounds simple. But it doesn't seem to work: most walls are lit when I'm standing sideways to them (and completely dark when I turn my other side to them).
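For reference, this is roughly what I mean (a simplified sketch; the camera variables and DrawWalls() are just placeholders for my own code):

void Render(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Light position set while the modelview matrix is still identity,
       so the light stays attached to the camera ("headlight").
       w = 1.0 makes it a positional light. */
    GLfloat lightPos[] = { 0.0f, 0.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

    /* Only then the camera/world transformations... */
    glRotatef(cameraPitch, 1.0f, 0.0f, 0.0f);
    glRotatef(cameraYaw, 0.0f, 1.0f, 0.0f);
    glTranslatef(-cameraX, -cameraY, -cameraZ);

    /* ...and the maze itself. */
    DrawWalls();
}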
My other problem is normals. I'm pretty sure my normals aren't specified correctly, but I have no idea what they should look like. I'm using triangle strips for the walls (theoretically for better efficiency, but practically it's just my whim), and I couldn't find anywhere how normals should be specified for flat walls made of triangle strips (how do I attach per-face normals when only two new vertices define each face?). Making one wall out of simple quads and attaching four perpendicular, unit-length vectors per face didn't improve anything either.
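For reference, a single wall in my code looks roughly like this (coordinates simplified); I'm not sure whether one normal per wall like this is even the right idea:

/* One flat wall as a triangle strip; the whole wall faces the same way,
   so one unit-length normal set before the strip applies to every vertex. */
glBegin(GL_TRIANGLE_STRIP);
    glNormal3f(0.0f, 0.0f, 1.0f);   /* pointing out of the wall, towards the corridor */
    glVertex3f(0.0f, 0.0f, 0.0f);   /* bottom-left  */
    glVertex3f(1.0f, 0.0f, 0.0f);   /* bottom-right */
    glVertex3f(0.0f, 2.0f, 0.0f);   /* top-left     */
    glVertex3f(1.0f, 2.0f, 0.0f);   /* top-right    */
glEnd();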
I also can't get the OpenGL state machine parameters right, especially attenuation - no change in the code seems to affect the final lighting.
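For completeness, these are the attenuation calls I've been playing with (the values are just what I last tried; as far as I understand they only matter for positional lights, i.e. w = 1 in GL_POSITION):

glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  1.0f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    0.5f);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.25f);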
I currently have no idea where to look or what to read about lighting in OpenGL. I'd like to keep it super simple, with no additional libraries (except glu.h). Can you give me a hand with this problem, or point me to a useful source of knowledge (simple examples needed...)?

Related

openGL simple 2d light

I am making a simple pixel-art top-down game, and I want to add some simple lights to it, but I don't know the best way to do that. This image is an example of the kind of light I want to achieve.
http://imgur.com/a/PpYiR
When I googled this, I only found solutions for this kind of light:
https://www.youtube.com/watch?v=mVlYsGOkkyM
But I need to increase the brightness of the part of a texture that is near the light source. How can I do this if I am drawing my textures with GL_QUADS, without UVs?
OK, my response may not totally answer your question, but it should lead you down the right path.
It appears you are using immediate mode; this is now deprecated, and switching to VBOs (vertex buffer objects) will make your life easier.
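For example, a minimal sketch of uploading a quad into a VBO and drawing from it (all names here are placeholders, not from your code):

/* Upload once: interleaved x,y,u,v per vertex. */
GLfloat quad[] = {
    0.0f, 0.0f,  0.0f, 0.0f,
    1.0f, 0.0f,  1.0f, 0.0f,
    1.0f, 1.0f,  1.0f, 1.0f,
    0.0f, 1.0f,  0.0f, 1.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

/* Each frame: point the fixed-function arrays at the buffer and draw. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)0);
glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));
glDrawArrays(GL_QUADS, 0, 4);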
The lighting in the picture appears to be hand drawn. You cannot create exactly that style of lighting, even with the best algorithm.
You really have two options to solve your problem, and both of them will require texture coordinates and shaders.
You could go with lightmaps, which use a pre-generated texture multiplied over the texture of a quad. This is extremely fast, but requires some sort of tool to generate the lightmaps, which might be a bit over your head at the moment.
Instead, learn shader-based lighting. Many tutorials exist for 3D lighting, but the principles remain the same for 2D.
Some Googling will get you the resources you need to implement shaders.
A basic distance based lighting algorithm will look like this:
gl_FragColor = textureColor * (1.0 / distance(lightPosition, worldPosition));
It scales the colour of the texel by the inverse of its distance from the light position - the further the texel is from the light, the darker it gets. There are tutorials that go into more depth on this.
If you want to make the lighting look "retro" like in the first image, you can downsample the colours in a post-processing step.

OpenGL/OpenTK Fill Interior Space

I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically you would see only darkness. In OpenGL, however, when you do this the world appears hollow and transparent, because of culling and how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD - thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D, but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing half in water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full-screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not look like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even assuming a 2D texture is "extruded" through the volume will work, or better yet, weight 3 textures (one for each axis) based on the angle of the intersection/surface you're drawing on.
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
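As a very rough sketch of that back-face idea (assuming everything is closed geometry; the draw/shader functions are placeholders):

/* Pass 1: the scene as usual, back faces culled. */
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
drawScene();

/* Pass 2: back faces only, drawn with the "solid interior" colouring.
   Wherever a back face survives the depth test, the near plane must be
   inside that object, so those pixels get the interior colour. */
glCullFace(GL_FRONT);
useInteriorShader();   /* e.g. 3D/procedural colour from world position */
drawScene();
glCullFace(GL_BACK);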
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous"), then it removes light the further the light has to travel through it. Think of many alpha-transparent surfaces, take the limit and you have an exponential. The light remaining is close to 1/exp(dist), i.e. exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
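The blending state for that looks roughly like this (a sketch; it assumes a floating-point render target, a shader that writes the fragment's distance from the eye, and drawWaterVolume() is a placeholder):

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_FALSE);               /* don't write depth in these passes */

/* Pass 1: back faces (or the sea bed), added into the target. */
glBlendEquation(GL_FUNC_ADD);
glCullFace(GL_FRONT);
drawWaterVolume();

/* Pass 2: front faces (the water surface), subtracted from it.
   What's left is back distance minus front distance, i.e. the thickness. */
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glCullFace(GL_BACK);
drawWaterVolume();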
Volume Rendering
Combining the two ideas: the material is a transparent solid, but the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function at the same time. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards in even steps. At each step of the march you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
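A CPU-style sketch of that march for one ray (the volume functions below are trivial stand-ins for a 3D texture lookup or a procedural function):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Stand-ins for "the volume": replace with a 3D texture or procedural function. */
static Vec3  volumeColour(Vec3 p)  { (void)p; Vec3 c = { 0.5f, 0.5f, 0.5f }; return c; }
static float volumeDensity(Vec3 p) { (void)p; return 0.3f; }

/* Brute-force Euler march along one ray, applying absorption per step. */
Vec3 marchRay(Vec3 pos, Vec3 dir, float stepSize, int maxSteps)
{
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    float transmittance = 1.0f;                  /* light still reaching the eye */
    for (int i = 0; i < maxSteps; ++i) {
        Vec3  c = volumeColour(pos);
        float d = volumeDensity(pos);
        float stepTrans = expf(-d * stepSize);   /* Beer's law over one step */
        /* Accumulate this step's colour, weighted by what still gets through. */
        result.x += c.x * transmittance * (1.0f - stepTrans);
        result.y += c.y * transmittance * (1.0f - stepTrans);
        result.z += c.z * transmittance * (1.0f - stepTrans);
        transmittance *= stepTrans;
        pos.x += dir.x * stepSize;
        pos.y += dir.y * stepSize;
        pos.z += dir.z * stepSize;
    }
    return result;
}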
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
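For the cheap depth-darkening effect, classic fixed-function fog is roughly this (values are arbitrary; a dark blue-green suits murky water):

GLfloat fogColor[] = { 0.0f, 0.1f, 0.15f, 1.0f };
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP2);        /* exponential falloff with distance */
glFogf(GL_FOG_DENSITY, 0.15f);
glFogfv(GL_FOG_COLOR, fogColor);
glHint(GL_FOG_HINT, GL_NICEST);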
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but it generally needs simplification and approximations to be useful.

OpenGL, How to create a "bumpy Polygon"?

I am unsure of how to describe what I'm after, so I drew a picture to help:
My question: is it possible within OpenGL to create the illusion of those pixel-looking bumps on a single polygon, without having to resort to using many polygons? And if it is, what's the method?
I think what you're looking for is actually parallax mapping (or parallax occlusion mapping).
Demos:
http://www.youtube.com/watch?v=01owTezYC-w
http://www.youtube.com/watch?v=gcAsJdo7dME&NR=1
http://www.youtube.com/watch?v=njKdLvmBl88
Parallax mapping basically works by using a height map to offset the texture UV coordinates being sampled.
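The core of it is just a view-dependent offset like this (a sketch only; in a real renderer this lives in the fragment shader, and the names and scale factor are made up):

/* Basic parallax offset: shift the UV along the view direction (in tangent
   space) in proportion to the sampled height. */
void parallaxOffsetUV(float uv[2], float height, const float viewTS[3], float scale)
{
    uv[0] += viewTS[0] * height * scale;
    uv[1] += viewTS[1] * height * scale;
    /* The colour/normal maps are then sampled at the offset uv. */
}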
The main disadvantage of parallax is that anything that appears to be 'outside' the polygon will be clipped (think of looking at an image on a 3D TV), so it's best for things indented into a surface rather than sticking out of it (although you can reduce this by making the polygon larger than the visible texture area). It's also fairly complex and would need to be combined with other shader techniques for a good effect.
Bump mapping works by using a texture for normals; this makes the lighting/shading appear 3D, but it does not change the geometry based on the viewer's position, only the shading. Bump mapping would also be fairly useless for the OP's sample image, since the surface is all at the same angle, just at different heights; bump mapping relies on changes in the surface's angle. You would have to slope the edges, like this.
Displacement mapping/tessellation uses a texture to generate more polygons, rather than keeping everything as a single polygon.
There's a video comparing all 3 here
EDIT: There is also relief mapping, which is similar to parallax. See demo. There's a comparison video too (it's a bit low quality, but relief looks like it gives better depth).
I think what you're after is bump mapping. The link goes to a simple tutorial.
You may also be thinking of displacement mapping.
Of the techniques mentioned in other people's answers:
Bump mapping is the easiest to achieve, but doesn't do any occlusion.
Parallax mapping is probably the most complex to achieve, and doesn't work well in all cases.
Displacement mapping requires high-end hardware and drivers, and creates additional geometry.
Actually modeling the polygons is always an option.
It really depends on how close you expect the viewer to be and how prominent the bumps are. If you're flying down the Death Star trench, you'll need to model the bumps or use displacement mapping. If you're a few hundred meters up, bump mapping should suffice.
If you have DX11-class hardware, then you could tessellate the polygon and then apply displacement mapping. See http://developer.nvidia.com/node/24. But then it gets a little complicated to get it running and to develop something on top of it.

MipMapping problems in OpenGL

I'm loading 3D objects (OBJ, 3DS or COLLADA files) into my OpenGL application. The 3D environment is quite large (a few hundred metres along every axis).
My problem is that smaller 3D objects (i.e. on the order of 1-2 m or less) don't appear to be depth-tested properly. Depending on the camera zoom, I can sometimes see the back face of the object (I have been using a simple cube for testing), or other faces become visible/invisible/torn. Please see the attached images for a better explanation.
I am led to believe the problem is due to mipmapping being enabled. I would like to either disable mipmapping (can someone suggest a simple, fast way to do this?) or increase the mipmap resolution for those objects. Or am I barking up the wrong tree completely?
Thanks
Chris
That's the result of insufficient z-buffer precision, which is an issue in games that have huge worlds but (relatively) small objects. The immediate solution would be to try using a 24-bit z-buffer instead of a 16-bit one. Another way to tackle this would be to render the game world in two steps: first the big, distant objects, then clear the z-buffer and draw the closer objects.
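The two-step approach, in sketch form (setProjection() and the draw calls are placeholders for your own code):

/* Pass 1: distant scenery with a far-off near/far range. */
setProjection(100.0f /* near */, 50000.0f /* far */);
drawDistantObjects();

/* Pass 2: clear only the depth buffer, then draw close objects with a tight range. */
glClear(GL_DEPTH_BUFFER_BIT);
setProjection(0.1f /* near */, 150.0f /* far */);
drawNearObjects();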
This specific problem is called z-fighting by the way, here's a great resource on this issue: http://www.codermind.com/articles/Depth-buffer-tutorial.html
The take-away is the last paragraph of the article above:
the true issue is that you can't draw both objects that are very far and objects that are very near with the same depth buffer equations. If you want to draw very far objects then you need to sacrifice your near view by pushing it further. To avoid clipping artifacts you can make your collision envelope large enough so that your clip plane will never intercept an existing object within your frustum. Or you can make object gradually disappear with transparency as they come near your clip plane.

If you want to keep near objects and at the same time draw mountains (or planets) in the far distance, then you can cut your rendering in parts. First drawing your far objects, then clearing the depth buffer and rendering the near objects with a different z buffer.
Like Julio, I believe that this is a depth precision issue, not something related to mip-mapping. However, I suggest you start by adjusting your near and far clipping planes before changing anything else (you are probably already using a 24-bit depth buffer anyway, as that is the default on most drivers/cards). In particular, the near plane should be as far away as possible for your scene. Look for calls to glFrustum or gluPerspective.
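In other words, in a call like this, push zNear out as far as your scene allows (the exact values depend on your scene; these are just an illustration):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* near = 0.5 gives vastly better depth precision than, say, near = 0.01 */
gluPerspective(60.0, (double)width / (double)height, 0.5, 1000.0);
glMatrixMode(GL_MODELVIEW);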

Occlusion with octrees

I just started learning OpenGL and am writing a first-person shooter, but I'm getting horrible framerates when I draw 5000 cubes. So now I'm attempting to perform occlusion and culling using an octree. What I'm confused about is where to cast the rays from. Do I only cast them from the frustum's near plane? It seems like I would miss the part of the frustum that expands. Any help is appreciated.
If 5000 cubes already gives bad framerates, you should consider changing the way you render your cubes.
It's very unclear to us what you are drawing the cubes for. If they are static (i.e. they don't move), then it's best to pack them all into a single vertex buffer. If the cubes are supposed to move, then you should go for instancing. If you're going for a landscape made of cubes like Minecraft, then you should create vertex buffers but only put in the faces of cubes that are actually visible.
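For the static case, the idea in sketch form (appendCubeVertices() and cubePositions are placeholders for however you store your cubes):

#include <stdlib.h>

enum { FLOATS_PER_VERTEX = 3, VERTS_PER_CUBE = 36 };

/* Bake every static cube, already translated to its world position,
   into one big array and upload it once. */
GLsizei cubeCount = 5000;
GLfloat *verts = malloc((size_t)cubeCount * VERTS_PER_CUBE * FLOATS_PER_VERTEX * sizeof(GLfloat));
for (GLsizei i = 0; i < cubeCount; ++i)
    appendCubeVertices(verts + i * VERTS_PER_CUBE * FLOATS_PER_VERTEX, cubePositions[i]);

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             (GLsizeiptr)(cubeCount * VERTS_PER_CUBE * FLOATS_PER_VERTEX * sizeof(GLfloat)),
             verts, GL_STATIC_DRAW);
free(verts);

/* Then the whole world is a single draw call. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, cubeCount * VERTS_PER_CUBE);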
I'd like to help more, but I'm unsure what you're doing.