OpenGL: Lighting Inside of Cube

I am creating a scene where I use a box to represent a room, with different models inside that box. When I enable lighting, my models light up fine, but the room itself (the inside of the box) does not light up, or rather it is darker than it should be. Is it because I am trying to light up the inside of a cube? I am sure the normals are correct. Please let me know what you think!
I suppose the normals aren't correct after all, but how do I go about finding the correct normals for the inside of the cube? Currently, I am only passing the center point of each face to glNormal3f.

If you pass in the center points, your normals will be facing the wrong way.
For example, if your cube is 2 units in size and centered on the origin, the center point of the face on the positive X axis is (1, 0, 0), which also happens to be the correct normal for the outward-facing side of that face.
However, the face pointing inwards needs a normal that is the inverse of that, i.e. (-1, 0, 0).
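A minimal immediate-mode sketch of an inward-facing wall, assuming a 2-unit cube centred on the origin is used as the room (only the +X wall is shown; the other five walls follow the same pattern):

glBegin(GL_QUADS);
    // Wall at x = +1, seen from inside the room, so the normal points
    // back towards the origin rather than out along +X.
    glNormal3f(-1.0f, 0.0f, 0.0f);
    glVertex3f(1.0f, -1.0f, -1.0f);
    glVertex3f(1.0f, -1.0f,  1.0f);
    glVertex3f(1.0f,  1.0f,  1.0f);
    glVertex3f(1.0f,  1.0f, -1.0f);
glEnd();

With this winding the quad is also counter-clockwise when viewed from inside, which matters if back-face culling is enabled.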

Related

2d shadow mapping

I have been wondering how to implement this with OpenGL:
I have a map with a flat floor and walls. Everything here is 2D; there is no 3D geometry, only 2D polygons that compose the map.
Using the vertices of the polygons I cast shadows to define the viewable area.
The shadows define the field of view, but since the cells with walls obstruct the view, they are also darkened. I can draw the walls on top of the shadows, but doing so would also show walls outside the field of view.
I have been suggested to approach this problem with shadow mapping. I should render the 2D scene into 4 different 1D textures that hold the depth of the distance to the first colliding surface.
The problem is that I have no idea how to render the projection of the 2d scene into the 1D texture. If I use, for example:
gluLookAt(x, y, 0.0, 0.0, x, y+1, 0.0, 0.0, 1.0);
To render the top view, the result is still 2D. Also, nothing would be rendered, since all the vertices lie in the same plane, so all surfaces are orthogonal to the camera.
Do you have any tips or ideas on how to do these 2D-to-1D projections? I have been googling for scenarios like this one, but all of them are in 3D environments.
Shadow mapping assumes either a directional light, or a spotlight, and you have a point light. But since you only need shadow on the floor, you could model it as a spotlight that hovers e.g. 2 m above the floor and points downwards. All the walls would have to be at least 2 m high. In the first shadow mapping pass, you could render the floor and all the walls into the shadow buffer.
However, I would not go with shadow mapping, but use volumetric shadows instead. If you go from 3D to 2D, a 3D volume becomes a 2D polygon.
Assuming that all the walls are on a regular grid, we can compute view rays from the player's position P to all the corners of walls. For each corner, store the adjacent walls, and ignore all the walls that face away from the player. Then cast rays from P to each corner, convert the rays to polar coordinates, and sort them by their angle, say counter-clockwise. Now go through this sorted list in a sweeping motion, and build the shadow polygon.
The shadow polygon consists either of corner points in this list, or of intersections between a) a line that is parallel to a wall and b) a line that goes through P and a corner. The only thing that makes this a bit complicated is that you have to find the wall that receives the shadow. Since the input is so small, I would probably start with brute force (check the corner against each wall), and see how slow it is. Note that only player-facing walls can cast shadows. Also note that the point closest to the player doesn't need to be visible.
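As a starting point, a minimal sketch of the corner-sorting step described above (P and the corner list are hypothetical inputs; walls facing away from the player should already have been filtered out, and the actual shadow-polygon construction is omitted):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Sort wall corners counter-clockwise by the polar angle of the ray P -> corner,
// ready for the sweep that builds the shadow polygon.
std::vector<Vec2> sortCornersByAngle(Vec2 P, std::vector<Vec2> corners)
{
    std::sort(corners.begin(), corners.end(),
              [P](const Vec2& a, const Vec2& b) {
                  return std::atan2(a.y - P.y, a.x - P.x) <
                         std::atan2(b.y - P.y, b.x - P.x);
              });
    return corners;
}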
It's probably going to look really cool with a moving character.

Rotating a cube so that the front facing face remains square

In OpenGL I have to rotate a cube (and translate it) so that it looks like in these two images.
Without any transformations only the front facing red face is visible. I just don't understand how I can rotate it (so that the top and right sides are visible like in the images) and keep the red face perfectly square.
I've thought about translating it to the bottom left, but that only moves the red square around, it doesn't make the other faces visible.
I'm using glFrustum(-20, 20, -20, 20, -1, -10);
If you are using a perspective projection (which you are) and the front face of your cube is parallel to the x-y plane, then you will only see the other two faces if the cube is entirely in one quadrant of the eye space; that is, if there were horizontal and vertical lines cutting the window in half, the cube would have to lie entirely within one of the four resulting rectangles.
Other options for making the other two faces visible are:
use an isometric projection
rotate the cube to bring the other faces into view.
To aid in visualising this, try playing Minecraft (say) and moving around in different ways to see how different sides of different blocks come into view.
That is not a rotation.
The second picture looks like an orthographic projection (glOrtho), but that may be a coincidence.
In either case, you can only get an image like that if the cube is translated away from the origin toward the bottom left, as you suggest.
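A rough fixed-function sketch of that setup (the numeric values are only illustrative, and note that glFrustum expects positive near and far distances, unlike the values quoted in the question):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-20.0, 20.0, -20.0, 20.0, 1.0, 100.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-15.0f, -15.0f, -40.0f);   // push the cube toward the bottom-left and away
drawCube();                             // hypothetical draw call for the cube

Because the cube now sits entirely in the lower-left quadrant of eye space, the perspective projection reveals its top and right faces, while the front face stays parallel to the view plane and therefore remains square.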

OpenGL rubiks cube - face rotation with mouse

I am working on my first real OpenGL project. It is a 3x3x3 Rubik's Cube. Here is a link to a simple screenshot of what I have so far (my rubiks cube).
Rotating the cube is done by dragging the mouse while holding the right mouse button. This works using the arcball example from the NeHe Tutorials (NeHe Arcball).
I have the class singleCubes which represents one cube via 6 actual quads, stored in a display list that can be used in its draw method.
Class ComplexCube has an array of 3x3x3 singleCubes and is used as the interface when interacting with the complete Rubik's cube.
Now I want to rotate each specific face according to the mouse dragging with the left mouse button down. I use picking to get the id of the corresponding side of the single cube the user clicked on. This works too. So I click on a side of one cube on a face, and depending on the direction of the dragging I set a rotation and offset factor for the cubes that get affected. (I also want to implement that you actually see the face rotate instead of just changing the color.)
Now my problem is that when I rotate the Rubik's cube in any direction with right mouse dragging, it becomes upside down, for example. So when I click on a side and want to rotate the face to the right, it goes in the wrong direction, because I can't keep track of whether the cube is upside down or whatever. Due to the use of the arcball rotation I don't have an x- or y-rotation angle which I could use to determine this.
Question 1: How can I keep track of, or later on get, the information whether the cube is upside down, tilted etc., in order to translate the mouse-dragging information (when rotating one face) when using the arcball example linked above?
// In render function
glPushMatrix();
{
    glMultMatrixf(Transform.M);   // Rotation applied by arcball object
    complCube.draw();             // Draw all the cubes using display lists
}
glPopMatrix();
Setup: C++ with Microsoft Visual Studio 2008, GLEW, freeglut
You could use gluUnProject to convert mouse coordinates to 3d space and get a vector (difference between two points). This vector could then be used to apply a "force" to the selected face. Since gluUnProject uses the projection matrix, it would automatically deal with the orientation of the camera.
Basically, once you get your "force" vector, you project it onto the three axes (so onto (1,0,0), (0,1,0), (0,0,1)). Then choose the one with the largest magnitude. Then you have to convert this direction into a rotation axis as in the diagram below (sorry for the bad paint skills):
So what we have is the "force" vector in black and the selected rubiks face in grey. To get the rotation axis, just take the cross product of the "force" vector with the normal of the selected face. This gives the red arrow. From that, you should be able to rotate your cubes in the right direction.
Edit to answer the question in more detail
So continuing from my explanation, I will give an example of how this will help you. Let's first assume your screen is 800x800 pixels and your Rubik's cube is always centred. Now let's also assume, as per your drawings in the comments, that we are in the case on the left.
We drag the mouse and get two positions which using gluUnProject are transformed into world coordinates (the numbers were chosen to show my point, not by any calculation):
p1 : (600, 600) -> (1, -0.5, 0)
p2 : (630, 605) -> (1.3, -0.505, 0)
Now we get the difference vector: p2 - p1 = v = (0.3, -0.05, 0). The reason I was saying to "project onto the three axes" is so that you extract your major movement (which in this case is 0.3 along the x axis), since the Rubik's cube can't rotate along diagonals. To do the "projection" you just have to take the x, y, z axes individually and create vectors from them, so you wind up with:
v1 = (0.3, 0, 0)
v2 = (0, -0.05, 0)
v3 = (0, 0, 0)
Now take the magnitudes and keep only the largest vector, so we are left with v1 = (0.3, 0, 0). This is your movement vector in world space. Now you take the cross product of that vector with the normal vector of the selected face (which in this case would be (0, 0, 1)). This gives you a vector which points down, (0, -1, 0) after normalization (in this step you will probably also have to extract the largest component only, e.g. (0.02, 1.2, 0.8) -> (0, 1, 0), otherwise you would get bizarre rotations if your camera was not pointing directly along the main axes). You can now use that vector as the rotation axis and use 0.3 as your rotation amount (if it rotates in the opposite direction to that expected, just put a minus sign in front).
Now how does this help if your cube is upside down? Suppose we click on the screen in the same way. We now get:
p1 : (600, 600) -> (-1, 0.5, 0)
p2 : (630, 605) -> (-1.3, 0.505, 0)
See the difference in the world coordinates? They are inverted! So when you take the difference vector you get p2 - p1 = v = (-0.3, 0.05, 0). Extracting the largest component gives (-0.3, 0, 0). Doing the cross product once again gives you the rotation axis, but now the rotation is in the opposite direction, which is what you want.
Another reason for the cross product with the normal of the face is that if you were to select the faces on the top (in our drawings), then it would either give a rotation axis along the x or z axes (to the left, or into the screen) which is what you want for the top faces.
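A rough sketch of those steps (mx1/my1 and mx2/my2 are the two mouse positions and faceNormal comes from your picking code; all of these names are hypothetical):

#include <cmath>
#include <GL/glu.h>

void dragToRotationAxis(int mx1, int my1, int mx2, int my2,
                        const GLdouble faceNormal[3], GLdouble axis[3])
{
    GLdouble model[16], proj[16];
    GLint    view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // Unproject both mouse positions onto the near plane (window y is flipped)
    GLdouble x1, y1, z1, x2, y2, z2;
    gluUnProject(mx1, view[3] - my1, 0.0, model, proj, view, &x1, &y1, &z1);
    gluUnProject(mx2, view[3] - my2, 0.0, model, proj, view, &x2, &y2, &z2);

    // Difference vector, then keep only the dominant component
    // ("project onto the three axes" and discard the smaller ones)
    GLdouble v[3] = { x2 - x1, y2 - y1, z2 - z1 };
    int major = (std::fabs(v[1]) > std::fabs(v[0])) ? 1 : 0;
    if (std::fabs(v[2]) > std::fabs(v[major])) major = 2;
    GLdouble f[3] = { 0.0, 0.0, 0.0 };
    f[major] = v[major];

    // Rotation axis = "force" vector x face normal
    axis[0] = f[1] * faceNormal[2] - f[2] * faceNormal[1];
    axis[1] = f[2] * faceNormal[0] - f[0] * faceNormal[2];
    axis[2] = f[0] * faceNormal[1] - f[1] * faceNormal[0];
}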
Like most of us, you will encounter the famous problem called Gimbal Lock.
see: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=208925
This problem is extremely well documented so there is not much point for me to go into details here. I am sure you will find a ton of information about it.

How to draw smooth lines without using GLSL, FSAA nor GL_LINE_SMOOTH?

So I need a method to draw smooth lines without using:
Full Screen Antialiasing (slow)
Shaders (not supported on all cards)
GL_LINE_SMOOTH (causes a crash on some cards)
The only way I could think of doing this was using a textured rectangle that always faces the camera, but the problems are:
1. How do I always face the rectangle toward the camera (efficiently)?
2. How do I keep its size the same no matter how far away the camera is?
Any other ideas?
Billboarding is a simple concept, but can be difficult to implement. A billboard is a flat object, usually a quad (square), which faces the camera. This direction usually changes constantly during runtime as the object and camera move, and the object needs to be rotated each frame to point in that direction. There are two types of billboarding: point and axis. Point sprites, or point billboards, are quads that are centered at a point, and the billboard rotates about that central point to face the user. Axis billboards come in two types: axis-aligned and arbitrary. The axis-aligned (AA) billboards always have one local axis that is aligned with a global axis, and they are rotated about that axis to face the user. The arbitrary-axis billboards are rotated about any axis to face the user.
http://nehe.gamedev.net/data/articles/article.asp?article=19
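A minimal point-billboard sketch along those lines: the camera's right and up vectors are read back from the current modelview matrix, so the quad always faces the viewer (center, size and the bound texture are assumed to be set up elsewhere):

void drawBillboard(const float center[3], float size)
{
    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    // The first two rows of the modelview rotation are the camera's
    // right and up axes expressed in world space.
    float right[3] = { m[0], m[4], m[8] };
    float up[3]    = { m[1], m[5], m[9] };

    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(center[0] - (right[0] + up[0]) * size,
                                       center[1] - (right[1] + up[1]) * size,
                                       center[2] - (right[2] + up[2]) * size);
        glTexCoord2f(1, 0); glVertex3f(center[0] + (right[0] - up[0]) * size,
                                       center[1] + (right[1] - up[1]) * size,
                                       center[2] + (right[2] - up[2]) * size);
        glTexCoord2f(1, 1); glVertex3f(center[0] + (right[0] + up[0]) * size,
                                       center[1] + (right[1] + up[1]) * size,
                                       center[2] + (right[2] + up[2]) * size);
        glTexCoord2f(0, 1); glVertex3f(center[0] - (right[0] - up[0]) * size,
                                       center[1] - (right[1] - up[1]) * size,
                                       center[2] - (right[2] - up[2]) * size);
    glEnd();
}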
You can use point sprites; they are always the same size and always face the camera.
http://www.opengl.org/registry/specs/ARB/point_sprite.txt
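For completeness, a quick sketch of the point-sprite route (core in OpenGL 2.0, or via the ARB_point_sprite extension linked above); the sprite texture is assumed to be bound already, and the size is a constant number of pixels regardless of distance:

glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);  // generate texture coords per sprite
glEnable(GL_TEXTURE_2D);
glPointSize(8.0f);                                      // constant on-screen size in pixels

glBegin(GL_POINTS);
    glVertex3f(0.0f, 0.0f, 0.0f);   // each point is rasterized as a camera-facing square
glEnd();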

What is back face culling?

What exactly is back face culling in OpenGL? Can you give me a specific example with e.g. one triangle?
If you look carefully you can see examples of this in a lot of video games. Any time the camera accidentally moves through an object - typically a moving object like a character - notice how the world continues to render correctly. That's because the back sides of the triangles that form the skin of the character are not rendered; they are effectively transparent. If this were not the case then every time the camera accidentally moved inside an object either the screen would go black (because the interior of the object is not lit) or you'd get a bizarre perspective on what the skin of the object looks like from the inside.
Back face culling is where the triangles pointing away from the camera/viewpoint are not considered for drawing.
Wikipedia defines this as:
It is a step in the graphics pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, and the polygon projected on the screen has a counter-clockwise winding, then it has been rotated to face away from the camera and will not be drawn.
Other systems use the face normal and do the dot product with the view direction.
It is a relatively quick way of deciding whether to draw a triangle or not. Consider a cube. At any one time 3 of the sides of the cube are going to be facing away from the user and hence not visible. Even if these were drawn they would be obscured by the three "forward" facing sides. By performing back face culling you are reducing the number of triangles drawn from 12 to 6 (2 per side).
Back face culling works best with closed "solid" objects such as cubes, spheres, walls.
Some systems don't have this as they consider faces to be double sided and hence visible from either direction.
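As a concrete one-triangle example of the above, a minimal fixed-function sketch (the vertex values are just illustrative):

glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);      // discard back-facing triangles
glFrontFace(GL_CCW);      // counter-clockwise winding = front face (the default)

glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);   // counter-clockwise as seen from the +Z side
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();

Viewed from the +Z side the triangle is drawn; viewed from behind (the -Z side) its projected winding is clockwise, so it is culled.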
It's only an optimization technique.
When you look at a closed object, say a cube, you only see about half the faces: the faces that are towards you (or, at least, the faces that are not towards you are always occluded by another face that points towards you).
If you skip drawing all these backwards-facing faces, it will have two consequences:
- the rendering time will be roughly halved (on average)
- the final render won't change (since another, front-facing face will be drawn on top of a "culled" one)
So you basically get a 2x performance gain for free.
In order to know whether a triangle is front- or back-facing, you take the edge vectors v1-v0 and v2-v0 and compute their cross product. This gives you the face normal. If this vector points towards you (dot(normal, viewVector) < 0), draw the triangle.
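A small sketch of that normal/dot-product test (OpenGL's own back-face culling instead works on the projected, screen-space winding, as described below):

struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// viewVector points from the camera towards the triangle
bool isFrontFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 viewVector) {
    Vec3 normal = cross({ v1.x - v0.x, v1.y - v0.y, v1.z - v0.z },
                        { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z });
    return dot(normal, viewVector) < 0.0f;   // normal towards the camera => front-facing
}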
Triangles have their vertices specified in a specific winding order; in OpenGL the default is that front faces are wound counter-clockwise.
When the graphics engine looks at a triangle from a specific direction and the vertices appear in clockwise order, it knows it is looking at the back side of the triangle through the object. As the front side of the object covers that triangle, it doesn't have to be drawn.