Calculating normals for lighting in OpenGL

I have two quads for which I need to find the normals. The co-ordinates are as follows:
for quad 1:
(-2,1.25,-1)
(-2,2.2,0)
(1,2.2,0)
(2,1.25,-1)
I have got the normal as (0,.73,-.69)
for quad 2:
(-2,2.2,0)
(2,2.2,0)
(2,1.25,1)
(-2,1.25,1)
normal:(0,.73,.69)
I have already calculated the normals. Can someone please confirm whether these normals are correct?
Also, I read about normals pointing inwards and outwards. Would someone explain that concept to me?

Your normals basically look correct. For the first quad, I get:
(0.0, 0.725, -0.689)
For the second one:
(0.0, -0.725, -0.689)
As you can see, I got the opposite sign for the second normal. Which leads directly to the second part of your question.
The term "outwards" does not really make sense for a isolated quad. It is mostly applied to closed shapes, where it should make intuitive sense. Picture a sphere, with a normal vector drawn starting at a point on the sphere. The normal pointing "outwards" means that it points away from the center of the cube, which means that it points to the outside. "inwards" is then of course the opposite, where the normal points towards the center of the sphere, or to the inside of the shape.
There's another way of looking at it, since normals are mostly used for lighting calculations. The normals need to point to the side of the surface that you want to see lighted. Most often, you look at shapes from the outside, so you want the outside lighted. Which means that you mostly want the normals pointing outwards. If you have open surfaces that need to be lighted when viewed from either side, there are slightly more complex lighting calculations that can handle that, which are typically found under "two-sided lighting".
There's a related concept that is also important here, which is the "winding order". It defines whether the vertices are arranged clockwise or counter-clockwise when viewing them from a certain direction. OpenGL uses the winding order to decide if a triangle faces the viewer. Again, you care about having the desired winding order when looking at the surface from the outside, or more generally from the side you want to see when you display the surface. OpenGL uses counter-clockwise winding by default, so you want counter-clockwise winding when looking at a surface from the side you want to be visible, which for closed shapes is mostly from the outside. You can often get away with the winding order being "wrong" if you don't eliminate back-facing triangles, which is done with glEnable(GL_CULL_FACE). But in any case, you can save yourself from running into problems later if you always use a consistent winding order for your primitives.
This leads us back to the normal calculation. Since only the sign ended up different, none of our calculations are technically wrong. I assumed that the quads use counter-clockwise winding, which means that I see the "outside" of the quad from the direction where the vertices appear in counter-clockwise order. Since I also want the normals pointing towards the outside, I calculated the normals that way. In other words, with the normal I calculated, if you move away from the quad in the direction of the normal and then look back at the quad, the vertices would be in counter-clockwise order.
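If it helps to see the arithmetic spelled out, here is a minimal sketch in plain C (the helper name is my own): it takes three vertices of the quad in winding order, builds the two edge vectors and returns the normalised cross product. Passing the first three vertices of each quad above, in the order given, reproduces the (0, 0.725, -0.689) and (0, -0.725, -0.689) values from this answer.

#include <math.h>

/* Face normal from three vertices given in winding order
   (counter-clockwise as seen from the front). */
void face_normal(const float a[3], const float b[3], const float c[3], float out[3])
{
    float e1[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };  /* edge a -> b */
    float e2[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };  /* edge a -> c */

    /* cross product e1 x e2 */
    out[0] = e1[1] * e2[2] - e1[2] * e2[1];
    out[1] = e1[2] * e2[0] - e1[0] * e2[2];
    out[2] = e1[0] * e2[1] - e1[1] * e2[0];

    /* normalise to unit length */
    float len = sqrtf(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
    if (len > 0.0f) {
        out[0] /= len;
        out[1] /= len;
        out[2] /= len;
    }
}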

Related

Normal vector for Quad

I've drawn a simple quad with glBegin and glEnd. With a for-loop I create copies of the quad and rotate them around my y-axis in 3D space.
Now the problem is that I only see the quads in the front. Those in the back are not displayed. I assume that the problem lies within the normal vector, whose direction is towards me. Is there a possibility to define two normal vectors for one quad?
Sounds like you need to disable backface culling:
glDisable(GL_CULL_FACE);
Those in the back are not displayed. I assume that the problem lies within the normal vector,
The problem is not the normal vector, but what OpenGL considers the front side and the back side. Which is which is determined by the winding of the vertices on the screen. If the vertices appear on screen in counter-clockwise order, then by default OpenGL assumes it is a front face. If back face culling is enabled, back faces will not be drawn. You can disable culling, but then you'll get odd lighting results.
The best way is to draw the back side explicitly with its own set of quads, with windings and normals adjusted.
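As a rough sketch of both options in fixed-function OpenGL (treat the coordinates as placeholders; the state calls themselves are standard):

/* Option 1: don't cull, and ask the fixed-function pipeline
   to light both sides of each polygon. */
glDisable(GL_CULL_FACE);
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

/* Option 2: keep culling on and draw an explicit back side,
   with reversed winding and a flipped normal. */
glEnable(GL_CULL_FACE);
glBegin(GL_QUADS);
    /* front side, counter-clockwise as seen from +z */
    glNormal3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f,  1.0f, 0.0f);
    /* back side: same quad, reversed order, flipped normal */
    glNormal3f(0.0f, 0.0f, -1.0f);
    glVertex3f(-1.0f,  1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
glEnd();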

How to create an even sphere with triangles in OpenGL?

Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something that does something similar to gluSphere. Yet, I need to color the different triangles in specific colors, so it seems I can't use gluSphere.
Also: I do understand that gluSphere draws edges along lines of equal longitude and latitude, which entails the triangles being small at the poles compared to their size at the equator. Now, if such a formula would generate the triangles such that their difference in size is minimized, that would be great.
I also need to calculate the normals and the UV map.
Fortunately there is an amazing trick for calculating the normals on a sphere. If you think about it, the normal at a point on a sphere is nothing more than the direction from the centre of the sphere to that point! Furthermore, if you think it through, that means the normal literally equals the point, i.e. it's the same vector - just don't forget to normalise the length for the normal.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly ... equal to the vertices?" At first glance you'd think, that's impossible, no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, a "rectangle-style" map projection. In that case u and v are basically just the longitude and latitude of any point, normalised to [0, 1].
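To make that concrete, here is a small sketch in plain C (the function name and the exact "rectangle-style" UV convention are my own choices): for a point on a unit sphere centred at the origin, the normal is the point itself and the UVs come straight from longitude and latitude.

#include <math.h>

void sphere_normal_uv(const float p[3], float normal[3], float *u, float *v)
{
    /* on a unit sphere centred at the origin, the normal is just the point */
    float len = sqrtf(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
    normal[0] = p[0] / len;
    normal[1] = p[1] / len;
    normal[2] = p[2] / len;

    /* longitude -> u, latitude -> v, both normalised to [0, 1] */
    *u = 0.5f + atan2f(normal[2], normal[0]) / (2.0f * 3.14159265f);
    *v = 0.5f - asinf(normal[1]) / 3.14159265f;
}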
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron, then apply multiple uniform subdivisions of the triangles, normalizing the resulting vertices' distance to the origin.
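A sketch of one such subdivision step in plain C (types and names are mine; the caller is assumed to hold a flat list of triangles made of unit-length vertices): every triangle is split into four using the edge midpoints, which are then pushed back out onto the unit sphere.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Push a point back onto the unit sphere. */
static Vec3 on_sphere(Vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Midpoint of an edge, re-projected onto the sphere. */
static Vec3 midpoint(Vec3 a, Vec3 b)
{
    Vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return on_sphere(m);
}

/* One subdivision step: triangle (a, b, c) becomes four smaller
   triangles with the same winding; the caller collects them into
   the next level's triangle list. */
static void subdivide(Vec3 a, Vec3 b, Vec3 c, Vec3 out[4][3])
{
    Vec3 ab = midpoint(a, b);
    Vec3 bc = midpoint(b, c);
    Vec3 ca = midpoint(c, a);

    Vec3 tris[4][3] = {
        { a,  ab, ca },
        { b,  bc, ab },
        { c,  ca, bc },
        { ab, bc, ca },
    };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 3; ++j)
            out[i][j] = tris[i][j];
}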

Can vertices coincide in convex polygons?

I am new to OpenGL and I am reading the Red Book. Now, as an exercise I want to manually draw a sphere. For that I am dividing the sphere into slices and stacks, which gives me mostly rectangles, but near the poles of the sphere I get triangles (I hope it is clear what I am doing). Now I know that if you draw a polygon with GL_POLYGON and it happens to intersect itself, the behavior is undefined. My question is this: given three points v1, v2, v3 which are not on one line, is it undefined behavior to do this:
glBegin(GL_POLYGON);
glVertex3fv(v1);
glVertex3fv(v1);
glVertex3fv(v2);
glVertex3fv(v3);
glEnd();
This may be combining two unrelated questions into one, but I am also wondering this: if I choose to divide the rectangles in my sphere routine into triangles, does it matter how I do it, that is, by which diagonal I divide the rectangle into two triangles? I am guessing that for drawing a single-colored sphere it won't matter, but I don't know about textures, shaders, lighting, etc.
When I was doing OpenGL stuff, I quickly stuck to using just triangles. They are special in that a triangle is not ambiguous in any way.
Your example, though, I would imagine will work, though probably with artefacts.
How you split a rectangle shouldn't matter, just as long as you pay attention to which way your triangles are wound, i.e. in which order you define the points, as this is what dictates their front and back.
But definitely stick to triangles. Imagine these four points of a square:
(0,0,0) (1,0,0) (1,0,1) (0,0,1)
Fairly easy to see it is a flat square, but what if I change them to
(0,1,0) (1,0,0) (1,1,1) (0,0,1)
What do you have now? It could be drawn like a valley or like a hill. If I defined this shape with triangles, you would know exactly what I am describing:
(0,1,0) (1,0,0) (1,1,1)
(1,1,1) (0,0,1) (0,1,0)
A hill-like shape.
OK... so I sidetracked a little here... My point is, I don't know what your code would do in practice, but I don't think you should use it anyway. And how you split up rectangles shouldn't matter, as long as your triangles are described the right way around.
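To illustrate the winding point, here is the flat square from above split along each of its two diagonals (plain C; for the planar square both splits look identical, while for the non-planar version they give the "valley" and the "hill"). In both cases the triangles keep the quad's original vertex order, so their front faces agree.

/* Square (0,0,0) (1,0,0) (1,0,1) (0,0,1), split along the diagonal
   from the 1st to the 3rd vertex. */
static const float split_a[2][3][3] = {
    { { 0, 0, 0 }, { 1, 0, 0 }, { 1, 0, 1 } },
    { { 0, 0, 0 }, { 1, 0, 1 }, { 0, 0, 1 } },
};

/* The same square split along the other diagonal (2nd to 4th vertex). */
static const float split_b[2][3][3] = {
    { { 1, 0, 0 }, { 1, 0, 1 }, { 0, 0, 1 } },
    { { 1, 0, 0 }, { 0, 0, 1 }, { 0, 0, 0 } },
};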
This is no problem whatsoever. OpenGL always has to be able to deal with the possibility of rasterising geometry where multiple vertices fall in the same location since even different input points may end up as the same output point depending on your modelview and projection matrices (or your geometry and/or vertex shader if you're on the programmable pipeline). It is designed to deal in the mathematically correct way under a wide variety of edge cases.
OpenGL's primary test for whether to paint a pixel with geometry is whether its centre falls within the mathematical bounds of the primitive being drawn*. So OpenGL can render polygons that paint non-continuous sets of pixels (which generally happens when they become almost vanishingly thin) or that paint no pixels whatsoever (which tends to be when they end up really small, but they may technically be of arbitrarily large size as long as they squeeze between pixel centres).
The exact tests used at the hardware level may vary from vendor to vendor and are guaranteed to be correct only for geometry that is convex on screen — which is why most people say stick to triangles, since they're unavoidably convex.
(*) a separate screen-oriented test being applied to pixels exactly on a boundary to ensure they're attributed to only exactly one polygon where polygons meet along a common edge

What is back face culling?

What exactly is back face culling in OpenGL? Can you give me a specific example with e.g. one triangle?
If you look carefully you can see examples of this in a lot of video games. Any time the camera accidentally moves through an object - typically a moving object like a character - notice how the world continues to render correctly. That's because the back sides of the triangles that form the skin of the character are not rendered; they are effectively transparent. If this were not the case then every time the camera accidentally moved inside an object either the screen would go black (because the interior of the object is not lit) or you'd get a bizarre perspective on what the skin of the object looks like from the inside.
Back face culling is where the triangles pointing away from the camera/viewpoint are not considered for drawing.
Wikipedia defines this as:
It is a step in the graphics pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, but the polygon projected on the screen has a counter-clockwise winding, then it has been rotated to face away from the camera and will not be drawn.
Other systems use the face normal and do the dot product with the view direction.
It is a relatively quick way of deciding whether to draw a triangle or not. Consider a cube. At any one time 3 of the sides of the cube are going to be facing away from the user and hence not visible. Even if these were drawn they would be obscured by the three "forward" facing sides. By performing back face culling you are reducing the number of triangles drawn from 12 to 6 (2 per side).
Back face culling works best with closed "solid" objects such as cubes, spheres, walls.
Some systems don't have this as they consider faces to be double sided and hence visible from either direction.
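In OpenGL itself this boils down to a couple of state calls; as a sketch (GL_BACK and GL_CCW happen to be the defaults anyway):

glEnable(GL_CULL_FACE);   /* culling is off by default */
glCullFace(GL_BACK);      /* discard back-facing polygons */
glFrontFace(GL_CCW);      /* counter-clockwise winding counts as the front face */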
It's only an optimization technique.
When you look at a closed object, say a cube, you only see about half the faces: the faces that are towards you (or, at least, the faces that are not towards you are always occluded by another face that points towards you).
If you skip drawing all these backward-facing faces, it will have two consequences:
- the rendering time will be roughly halved (on average)
- the final render won't change (since another, front-facing face will be drawn on top of a "culled" one)
So you basically get a 2x performance boost for free.
In order to know whether a triangle is front- or back-facing, you take the edges v0-v1 and v0-v2 and compute their cross product. This gives you the face normal. If this vector points towards you (dot(normal, viewVector) < 0), draw.
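A sketch of that test in plain C (the helper name and the sign convention are assumptions; here viewVector is taken to point from the camera towards the triangle):

/* Returns 1 if the triangle (v0, v1, v2) faces the viewer. */
int is_front_facing(const float v0[3], const float v1[3], const float v2[3],
                    const float viewVector[3])
{
    float e1[3] = { v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2] };
    float e2[3] = { v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2] };

    /* face normal = e1 x e2 */
    float n[3] = {
        e1[1] * e2[2] - e1[2] * e2[1],
        e1[2] * e2[0] - e1[0] * e2[2],
        e1[0] * e2[1] - e1[1] * e2[0],
    };

    /* facing the viewer when the normal points against the view direction */
    float d = n[0] * viewVector[0] + n[1] * viewVector[1] + n[2] * viewVector[2];
    return d < 0.0f;
}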
Triangles have their vertices specified in a specific order; by default in OpenGL, a triangle whose vertices appear counter-clockwise on screen is front-facing.
When the graphics engine looks at a triangle from a specific direction and the vertices appear in clockwise order, it knows that it's looking at the back side of the triangle through the object. As the front side of the object is covering that triangle, it doesn't have to be drawn.

OpenGL depth buffer and culling

What's the difference between using back face culling and a depth buffer in OpenGL?
Backface culling is when OpenGL determines which faces are facing away from the viewer and are therefore unseen. Think of a cube. No matter how you rotate the cube, 3 faces will always be invisible. Figure out which faces these are, remove them from the list of polygons to be drawn, and you have just halved your drawing list.
Depth buffering is fairly simple. For every pixel of every polygon drawn, compare its z value to the value in the z buffer. If it is less than the value in the z buffer, draw the pixel and store its z as the new z buffer value. If not, discard the pixel. Depth buffering gives very good results but can be fairly slow, as each and every pixel requires a value lookup.
In reality the two methods have nothing in common and they are often both used. Given a cube, you can first cut out half the polygons using culling, then draw the rest using z buffering.
Culling cuts down on the number of polygons rendered, but it doesn't resolve which surface is visible where polygons overlap; that's what Z buffering is for.
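As a sketch of how the two are typically combined in OpenGL (the state calls are standard; don't forget to clear the depth buffer every frame):

/* once, at setup */
glEnable(GL_CULL_FACE);    /* skip back-facing polygons entirely */
glEnable(GL_DEPTH_TEST);   /* resolve per-pixel visibility for the rest */
glDepthFunc(GL_LESS);      /* keep the fragment closest to the viewer */

/* every frame, before drawing */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);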
A given triangle has two sides, the front face and the back face. The side you are looking at is determined by the order the points appear in the vertex list (also called the winding). Triangle strips typically have alternating winding so that you can reuse the preceding two points, but the facing of a given triangle in the strip doesn't alternate. Back face culling is the optimization step wherein triangles in the scene which are oriented away from the view are removed from the list of triangles to draw.
A depth buffer (z-buffer) is used to hang onto the closest thing (the depth is relative to the view) that has already been rendered. If the thing that comes up next in the draw list is behind something that I've drawn already (i.e., it has a depth that places it farther away), I can skip drawing it, as it is obstructed. If the new thing to draw is closer, I draw it and I update the depth buffer with the new, closer value.