OpenGL Calculating Normals (Quads) - c++

My issue is regarding OpenGL, and Normals, I understand the math behind them, and I am having some success.
The function I've attached below accepts an interleaved vertex array and calculates a normal for every 4 vertices. These represent QUADs whose faces point the same way, so by my understanding all 4 vertices should share the same normal, so long as they face the same way.
The problem I am having is that my QUADS are rendering with a diagonal gradient, much like this: Light Effect - Except that the shadow is in the middle, with the light in the corners.
I draw my QUADS in a consistent fashion. TopLeft, TopRight, BottomRight, BottomLeft, and the vertices I use to calculate my normals are TopRight - TopLeft, and BottomRight - TopLeft.
Hopefully someone can spot the blunder I've made, but I have been at this for hours to no avail.
For the record I render a Cube, and a Teapot next to my objects to check my lighting is functioning, so I'm fairly sure there is no issue regarding Light position.
void CalculateNormals(point8 toCalc[], int toCalcLength)
{
    GLfloat N[3], U[3], V[3]; // N will be our final calculated normal, U and V will be the subjects of cross-product
    float length;
    for (int i = 0; i < toCalcLength; i += 4) // Starting with every first-corner QUAD vertex
    {
        U[0] = toCalc[i+1][5] - toCalc[i][5]; U[1] = toCalc[i+1][6] - toCalc[i][6]; U[2] = toCalc[i+1][7] - toCalc[i][7]; // Calculate Ux Uy Uz
        V[0] = toCalc[i+3][5] - toCalc[i][5]; V[1] = toCalc[i+3][6] - toCalc[i][6]; V[2] = toCalc[i+3][7] - toCalc[i][7]; // Calculate Vx Vy Vz
        N[0] = (U[1]*V[2]) - (U[2]*V[1]);
        N[1] = (U[2]*V[0]) - (U[0]*V[2]);
        N[2] = (U[0]*V[1]) - (U[1]*V[0]);
        // Calculate length for normalising
        length = (float)sqrt((pow(N[0],2)) + (pow(N[1],2)) + (pow(N[2],2)));
        for (int a = 0; a < 3; a++)
        {
            N[a] /= length;
        }
        for (int j = 0; i < 4; i++)
        {
            // Apply normals to QUAD vertices (3,4,5 index position of normals in interleaved array)
            toCalc[i+j][3] = N[0]; toCalc[i+j][4] = N[1]; toCalc[i+j][5] = N[2];
        }
    }
}

It seems like you are taking the vertex position values for use in calculations from indices 5, 6, and 7, and then writing out the normals at indices 3, 4, and 5. Note how index 5 is used on both. I suppose one of them is not correct.

It looks like your for-loops are biting you.
for (int i = 0; i < toCalcLength; i += 4) // Starting with every first-corner QUAD vertex
{
    ...
    for (int j = 0; i < 4; i++)
    {            // ^        ^
        // Should you be using 'j' instead of 'i' here?
        // j will never increment
        // This loop won't run at all after the first pass through the outer loop
        ...
    }
}

You use indexes 3, 4, and 5 for storing the normal:
toCalc[i+j][3] = N[0]; toCalc[i+j][4] = N[1]; toCalc[i+j][5] = N[2];
AND you use indexes 5, 6, and 7 to read the point coordinates:
U[0] = toCalc[i+1][5] - toCalc[i][5]; U[1] = toCalc[i+1][6] - toCalc[i][6]; U[2] = toCalc[i+1][7] - toCalc[i][7];
Those indexes overlap (normal.x shares the same index as position.z), which shouldn't be happening.
Recommendations:
Put everything into structures.
Either:
Use a math library,
OR put the vector arithmetic into separate, appropriately named subroutines.
Use named variables instead of indexes.
By doing so you'll reduce the number of bugs in your code. a.position.x is easier to read than quad[0][5], and it is easier to fix a typo in a vector operation when the code hasn't been copy-pasted.
You can use unions to access vector components by both index and name:
struct Vector3 {
    union {
        struct {
            float x, y, z;
        };
        float v[3];
    };
};
For calculating the normal of quad ABCD
A--B
| |
C--D
Use formula:
normal = normalize((B.position - A.position) X (C.position - A.position)).
OR
normal = normalize((D.position - A.position) X (C.position - B.position)).
Where "X" means "cross-product".
Either way will work fine.
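As a minimal sketch of that recommendation (the names Vec3 and quadNormal are illustrative, not from the original code), the whole quad-normal computation reduces to a few small, named helpers:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(Vec3 a) {
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return { a.x / len, a.y / len, a.z / len };
}

// Normal of quad A--B / C--D, taken at corner A:
// normalize((B - A) x (C - A))
Vec3 quadNormal(Vec3 A, Vec3 B, Vec3 C) {
    return normalize(cross(sub(B, A), sub(C, A)));
}
```

The same normal would then be written to all four vertices of the quad, with the position and normal fields of the interleaved array kept at distinct, named offsets.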

Related

How do I draw a rectangular box using GL_TRIANGLE_STRIP?

I'm new to OpenGL programming and need some help wrapping my head around this issue. I found this answer detailing how to create a cube mesh using a GL_TRIANGLE_STRIP. However, I want to create a rectangular box where one axis isn't just elongated but rather repeated, so that the geometry ends up something like this:
I can't figure out how I'm supposed to construct such a shape where the geometry gets generated correctly, with correct normals and closed ends, with the winding order to keep in mind and what-not.
How should I be thinking and defining the vertices?
The formula for the triangle-strip cube does most of the work; all you have to do is extend the long faces by adding more quads in between, which can be done with for loops.
There are two places where the strip changes direction, both on the bottom face, so we just need a little manual work there. I wrote this simple function to create vertices based on a length; it builds a length x 1 x 1 box.
void generateRect(int length, std::vector<glm::vec3>& vertices) {
    std::vector<glm::vec3> vertexArray;
    //Generate necessary points
    float x = length / 2.0f;
    for (int i = 0; i <= length; i++) {
        vertexArray.push_back(glm::vec3(x, -0.5f, 0.5f));
        vertexArray.push_back(glm::vec3(x, -0.5f, -0.5f));
        vertexArray.push_back(glm::vec3(x, 0.5f, 0.5f));
        vertexArray.push_back(glm::vec3(x, 0.5f, -0.5f));
        x -= 1.0f;
    }
    //+Y face
    for (int i = 0; i <= length; i++) {
        int index = i * 4 + 3;
        vertices.push_back(vertexArray.at(index));
        vertices.push_back(vertexArray.at(index - 1));
    }
    //Change direction (half of -X face)
    vertices.push_back(vertexArray.at(length * 4));
    //+Z face
    for (int i = length - 1; i >= 0; i--) {
        int index = i * 4;
        vertices.push_back(vertexArray.at(index + 2));
        vertices.push_back(vertexArray.at(index));
    }
    //-Z face (+X face created as well)
    for (int i = 0; i <= length; i++) {
        int index = i * 4 + 3;
        vertices.push_back(vertexArray.at(index));
        vertices.push_back(vertexArray.at(index - 2));
    }
    //Change direction (other half of -X face)
    vertices.push_back(vertexArray.at(length * 4));
    //-Y face
    for (int i = length - 1; i >= 0; i--) {
        int index = i * 4;
        vertices.push_back(vertexArray.at(index + 1));
        vertices.push_back(vertexArray.at(index));
    }
}
From this we get our box, and for texturing I just used a cubemap, as I've been doing skyboxes. OpenGL knows the winding order is reversed on every other triangle of a strip, so no fancy math is needed; you just have to make sure it's right for the first triangle, which in this case is counter-clockwise.
For normal generation it's a little harder, since strip vertices are shared between faces: a shared vertex must use a single normal even where adjacent faces would want different ones. I don't think there is a workaround, but I haven't done much with triangle strips, so there may be one, perhaps something involving a geometry shader.
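To sanity-check the construction above, here is the same vertex bookkeeping with a stand-in Vec3 (a std::array) replacing glm::vec3, so it runs without OpenGL or GLM; generateRectVerts is a hypothetical name. Counting the pushes gives 8*length + 6 strip vertices, and for length 1 that reproduces the well-known 14-vertex triangle-strip cube:

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<float, 3>; // stand-in for glm::vec3

// Same construction as generateRect above, reduced to vertex bookkeeping.
std::vector<Vec3> generateRectVerts(int length) {
    std::vector<Vec3> grid, strip;
    float x = length / 2.0f;
    for (int i = 0; i <= length; i++) {          // 4 corners per x-slice
        grid.push_back({x, -0.5f,  0.5f});
        grid.push_back({x, -0.5f, -0.5f});
        grid.push_back({x,  0.5f,  0.5f});
        grid.push_back({x,  0.5f, -0.5f});
        x -= 1.0f;
    }
    for (int i = 0; i <= length; i++) {          // +Y face
        strip.push_back(grid[i * 4 + 3]);
        strip.push_back(grid[i * 4 + 2]);
    }
    strip.push_back(grid[length * 4]);           // turn (half of -X face)
    for (int i = length - 1; i >= 0; i--) {      // +Z face
        strip.push_back(grid[i * 4 + 2]);
        strip.push_back(grid[i * 4]);
    }
    for (int i = 0; i <= length; i++) {          // -Z face (+X face as well)
        strip.push_back(grid[i * 4 + 3]);
        strip.push_back(grid[i * 4 + 1]);
    }
    strip.push_back(grid[length * 4]);           // turn (other half of -X face)
    for (int i = length - 1; i >= 0; i--) {      // -Y face
        strip.push_back(grid[i * 4 + 1]);
        strip.push_back(grid[i * 4]);
    }
    return strip;
}
```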

Calculating Vertex normals weird results

I know this has been asked quite a few times, but my problem is not about how to do it. I know how this works (or at least I think so ^^), but something seems to be wrong with my implementation and I can't get to the bottom of it.
I have a procedurally generated Terrain mesh and I'm trying to calculate the normals for each vertex by averaging the normals of all the triangles this vertex is connected to. When setting the normal xyz to the rgb vertex colors it seems as if it's randomly either black (0, 0, 0) or blue (0, 0, 1).
void CalculateVertexNormal(int index){ // index of the vertex in the mesh's vertex array
    std::vector<int> indices; // indices of triangles the vertex is a part of
    Vector normals = Vector(0.0f, 0.0f, 0.0f, 0.0f); // sum of all the face normals
    for(int i = 0; i < triangles.size(); i += 3){ // iterate over the triangle array in order
        if(triangles[i] == index) // to find the triangle indices
            indices.push_back(triangles[i]);
        else if(triangles[i + 1] == index)
            indices.push_back(triangles[i]);
        else if(triangles[i + 2] == index)
            indices.push_back(triangles[i]);
    }
    for(int i = 0; i < indices.size(); i++){ // iterate over the indices to calculate the normal for each tri
        int vertex = indices[i];
        Vector v1 = vertices[vertex + 1].GetLocation() - vertices[vertex].GetLocation(); // p1->p2
        Vector v2 = vertices[vertex + 2].GetLocation() - vertices[vertex].GetLocation(); // p1->p3
        normals += v1.Cross(v2); // cross product of two edges gives the face normal
    }
    vertices[index].SetNormals(normals.Normalize()); // normalize the sum of face normals and set on the vertex
}
Maybe somebody could have a look and tell me what I'm doing wrong.
Thank you.
Edit:
Thanks to molbdnilo's comment I finally understood what was wrong. It was a problem with indexing the arrays and my two loops were kind of confusing as well, maybe I should get some rest ;)
I eventually came up with this, reduced to one loop:
for(int i = 0; i < triangles.size(); i += 3){
    if(triangles[i] == index || triangles[i + 1] == index || triangles[i + 2] == index){
        Vector v1 = vertices[triangles[i + 1]].GetLocation() - vertices[index].GetLocation();
        Vector v2 = vertices[triangles[i + 2]].GetLocation() - vertices[index].GetLocation();
        faceNormals += v1.Cross(v2);
    }
}
vertices[index].SetNormals(faceNormals.Normalize());
vertices[index].SetNormals(faceNormals.Normalize());
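For reference, the same averaging idea can be sketched as a self-contained function (V3 and averagedNormal here are illustrative stand-ins for the Vector and vertex classes above). Note that leaving each face normal un-normalized before summing weights the average by triangle area, which is usually desirable:

```cpp
#include <cmath>
#include <vector>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Sum the (area-weighted) face normals of every triangle that references
// the given vertex index, then normalize the sum.
V3 averagedNormal(int index,
                  const std::vector<V3>& vertices,
                  const std::vector<int>& triangles) {
    V3 sum{0, 0, 0};
    for (std::size_t i = 0; i + 2 < triangles.size(); i += 3) {
        if (triangles[i] != index && triangles[i + 1] != index &&
            triangles[i + 2] != index)
            continue;
        V3 e1 = sub(vertices[triangles[i + 1]], vertices[triangles[i]]);
        V3 e2 = sub(vertices[triangles[i + 2]], vertices[triangles[i]]);
        V3 n = cross(e1, e2);           // un-normalized: weighted by area
        sum = {sum.x + n.x, sum.y + n.y, sum.z + n.z};
    }
    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return {sum.x / len, sum.y / len, sum.z / len};
}
```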

C++ How to scale a shape and create an if function to not print if too big after scale?

Given a shape's original centroid and vertices (i.e. if it's a triangle, I know all three vertex coords), how could I create a scaling function that takes a scaling factor as a parameter, as below? My current code is broken: the results are huge shapes, far bigger than what I'm scaling by (I only want a scale factor of 2).
void Shape::scale(double factor)
{
    int x, y, xx, xy;
    int disx, disy;
    for (itr = vertices.begin(); itr != vertices.end(); ++itr) {
        //translate obj to origin (0,0)
        x = itr->getX() - centroid.getX();
        y = itr->getY() - centroid.getY();
        //finds distance between centroid and vertex
        disx = x + itr->getX();
        disy = y + itr->getY();
        xx = disx * factor;
        xy = disy * factor;
        //translate obj back
        xx = xx + centroid.getX();
        xy = xy + centroid.getY();
        //set new coord
        itr->setX(xx);
        itr->setY(xy);
    }
}
I know how to iterate through the vertices; my main point of confusion is how to apply the factor mathematically to scale the shape's size.
This is how I declare and initialise a vertex:
// could I possibly do (scale*x, scale*y)? or would that be problematic..
vertices.push_back(Vertex(x, y));
Also, the grid is e.g. 100x100. If a scaled shape would be too big to fit into that grid, I want to exit the scale function so the shape won't be enlarged at all. How can this be done effectively? So far I have a loop, but it only runs over the vertices, so it would only stop the individual vertices that fall outside the grid, instead of cancelling the entire shape, which would be ideal.
If my question is too broad, please ask and I shall edit it further.
First you need to find the center of mass of your set of points; that is the arithmetic mean of their coordinates. Then, for each point, consider the line between the center of mass and that point. The only thing left is to place the point on that line, but at a distance of factor * current_distance, where current_distance is the distance from the mass center to the given point before rescaling.
void Shape::scale(double factor)
{
    Vertex mass_center = Vertex(0., 0.);
    for(int i = 0; i < vertices.size(); i++)
    {
        mass_center.x += vertices[i].x;
        mass_center.y += vertices[i].y;
    }
    mass_center.x /= vertices.size();
    mass_center.y /= vertices.size();
    for(int i = 0; i < vertices.size(); i++)
    {
        //this is a vector that leads from mass center to current vertex
        Vertex vec = Vertex(vertices[i].x - mass_center.x, vertices[i].y - mass_center.y);
        vertices[i].x = mass_center.x + factor * vec.x;
        vertices[i].y = mass_center.y + factor * vec.y;
    }
}
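The question also asked for the scale to be cancelled entirely when the result would not fit a 100x100 grid. One way to sketch that (Vertex2 and scaleIfFits are hypothetical names, standing in for the asker's Vertex class): scale into a temporary vector, and only commit if every scaled vertex stays inside the grid.

```cpp
#include <vector>

struct Vertex2 { double x, y; }; // illustrative stand-in for the asker's Vertex

// Scale about the centroid, but only commit the result if every scaled
// vertex still fits inside [0, grid) on both axes; returns whether it did.
bool scaleIfFits(std::vector<Vertex2>& verts, double factor, double grid) {
    Vertex2 c{0.0, 0.0};
    for (const Vertex2& v : verts) { c.x += v.x; c.y += v.y; }
    c.x /= verts.size();
    c.y /= verts.size();

    std::vector<Vertex2> scaled;
    scaled.reserve(verts.size());
    for (const Vertex2& v : verts) {
        Vertex2 s{c.x + factor * (v.x - c.x), c.y + factor * (v.y - c.y)};
        if (s.x < 0.0 || s.x >= grid || s.y < 0.0 || s.y >= grid)
            return false;                 // too big: leave the shape untouched
        scaled.push_back(s);
    }
    verts = scaled;                       // all vertices fit: commit
    return true;
}
```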
If you already know the centroid of a shape and the vertexes are stored relative to that point, then scaling in rectangular coordinates is just multiplying the x and y components of each vertex by the appropriate scaling factor (with a negative value flipping the shape around the axis).
void Shape::scale(double x_factor, double y_factor){
    for(auto i = 0; i < vertices.size(); ++i){
        vertices[i].x *= x_factor;
        vertices[i].y *= y_factor;
    }
}
You could then just overload this function with one that takes a single parameter and calls this function with the same value for x and y.
void Shape::scale(double factor){
    scale(factor, factor);
}
If your vertex values are not stored relative to the centroid, you will first have to subtract the centroid, scale, and then add it back, as in the other answer.

Optimizing a Ray Tracer

I'm tasked with optimizing the following ray tracer:
void Scene::RayTrace()
{
    for (int v = 0; v < fb->h; v++) // all vertical pixels in framebuffer
    {
        calculateFPS(); // calculates the current fps and prints it
        for (int u = 0; u < fb->w; u++) // all horizontal pixels in framebuffer
        {
            fb->Set(u, v, 0xFFAAAAAA); // background color
            fb->SetZ(u, v, FLT_MAX); // sets the Z values to all be maximum at beginning
            V3 ray = (ppc->c + ppc->a*((float)u + .5f) + ppc->b*((float)v + .5f)).UnitVector(); // gets the camera ray
            for (int tmi = 0; tmi < tmeshesN; tmi++) // iterates over all triangle meshes
            {
                if (!tmeshes[tmi]->enabled) // doesn't render a tmesh if it's not set to be enabled
                    continue;
                for (int tri = 0; tri < tmeshes[tmi]->trisN; tri++) // iterates over all triangles in the mesh
                {
                    V3 Vs[3]; // triangle vertices
                    Vs[0] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 0]];
                    Vs[1] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 1]];
                    Vs[2] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 bgt = ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); // I don't entirely understand what this does
                    if (bgt[2] < 0.0f || bgt[0] < 0.0f || bgt[1] < 0.0f || bgt[0] + bgt[1] > 1.0f)
                        continue;
                    if (fb->zb[(fb->h - 1 - v)*fb->w + u] < bgt[2])
                        continue;
                    fb->SetZ(u, v, bgt[2]);
                    float alpha = 1.0f - bgt[0] - bgt[1];
                    float beta = bgt[0];
                    float gamma = bgt[1];
                    V3 Cs[3]; // triangle vertex colors
                    Cs[0] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 0]];
                    Cs[1] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 1]];
                    Cs[2] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 color = Cs[0] * alpha + Cs[1] * beta + Cs[2] * gamma;
                    fb->Set(u, v, color.GetColor()); // sets this pixel accordingly
                }
            }
        }
        fb->redraw();
        Fl::check();
    }
}
Two things:
I don't entirely understand what ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); does. Can anyone explain this, in terms of ray-tracing, to me? Here is the function inside my "Planar Pinhole Camera" class (this function was given to me):
V3 V3::IntersectRayWithTriangleWithThisOrigin(V3 r, V3 Vs[3])
{
    M33 m; // 3x3 matrix class
    m.SetColumn(0, Vs[1] - Vs[0]);
    m.SetColumn(1, Vs[2] - Vs[0]);
    m.SetColumn(2, r * -1.0f);
    V3 ret; // Vector3 class
    V3 &C = *this;
    ret = m.Inverse() * (C - Vs[0]);
    return ret;
}
The basic steps of this are apparent, I just don't see what it's actually doing.
How would I go about optimizing this ray-tracer from here? I've found something online about "kd trees," but I'm unsure how complex they are. Does anyone have some good resources on simple solutions for optimizing this? I've had some difficulty deciphering what's out there.
Thanks!
Probably the largest optimisation by far would be to use some sort of bounding volume hierarchy (BVH). Right now the code intersects every ray with every triangle of every object. With a BVH we instead ask: "given this ray, which triangles could it intersect?" For each ray you then generally only test a handful of bounding volumes and triangles, rather than every single triangle in the scene.
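The building block of any BVH is a cheap ray vs. axis-aligned-box test: if the ray misses a node's box, every triangle beneath that node is skipped at once. A standard "slab test" can be sketched as follows (Box and rayHitsBox are hypothetical names; invDir holds the precomputed reciprocals of the ray direction, assumed non-zero here):

```cpp
#include <algorithm>
#include <limits>

struct Box { float min[3], max[3]; };

// Slab test: the ray hits the box iff the per-axis entry/exit intervals
// along the ray all overlap. invDir[a] must be 1 / direction[a].
bool rayHitsBox(const float orig[3], const float invDir[3], const Box& b) {
    float tmin = 0.0f;
    float tmax = std::numeric_limits<float>::max();
    for (int a = 0; a < 3; a++) {
        float t0 = (b.min[a] - orig[a]) * invDir[a];
        float t1 = (b.max[a] - orig[a]) * invDir[a];
        if (invDir[a] < 0.0f) std::swap(t0, t1); // ray runs backwards on this axis
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;           // intervals no longer overlap
    }
    return true;
}
```

A BVH node would store such a box plus either child nodes or a small list of triangle indices; traversal descends only into boxes this test accepts.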
IntersectRayWithTriangleWithThisOrigin
From the look of it, this builds an inverse transform matrix from the triangle edges (the triangle's basis vectors are X and Y). I don't quite get the Z axis, though: I would expect the ray direction there, not the position of the pixel (the ray origin). But I may be misinterpreting something.
Anyway, the inverse matrix computation is the biggest problem: you are computing it for each triangle, per pixel, and that is a lot. It would be faster to compute the inverse transform matrix of each triangle once, before ray tracing, where X and Y are the basis vectors and Z is perpendicular to both of them, always facing the same direction relative to the camera. Then you just transform your ray into that space and check the intersection limits; that is just a matrix*vector multiply and a few ifs, instead of an inverse matrix computation per test.
Another way would be to solve the ray vs. plane intersection algebraically, which should lead to a much simpler equation than a matrix inversion. After that it is just a matter of bounds checking against the basis vectors.
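As a concrete alternative along those lines, here is a sketch of the standard Moller-Trumbore test with the triangle edges precomputed once per triangle instead of inverting a 3x3 matrix per pixel. PrecompTri, precompute, and intersect are illustrative names, not part of the original V3/M33 classes:

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Per-triangle data computed once before tracing.
struct PrecompTri { V3 v0, e1, e2; };

PrecompTri precompute(V3 a, V3 b, V3 c) {
    return {a, sub(b, a), sub(c, a)};
}

// Moller-Trumbore test against the precomputed edges; returns t >= 0 on
// hit (filling barycentrics b1, b2), or -1 on miss.
float intersect(const PrecompTri& tri, V3 orig, V3 dir, float& b1, float& b2) {
    V3 p = cross(dir, tri.e2);
    float det = dot(tri.e1, p);
    if (std::fabs(det) < 1e-8f) return -1.0f;  // ray parallel to triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, tri.v0);
    b1 = dot(s, p) * inv;
    if (b1 < 0.0f || b1 > 1.0f) return -1.0f;
    V3 q = cross(s, tri.e1);
    b2 = dot(dir, q) * inv;
    if (b2 < 0.0f || b1 + b2 > 1.0f) return -1.0f;
    float t = dot(tri.e2, q) * inv;
    return t >= 0.0f ? t : -1.0f;
}
```

In the inner loop this replaces the per-pixel m.Inverse() call with a handful of dot and cross products, and the early-out comparisons reject most triangles before t is even computed.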

My shadow volumes don't move with my light

I'm currently trying to implement shadow volumes in my opengl world. Right now I'm just focusing on getting the volumes calculated correctly.
Right now I have a teapot rendered, and I can get it to generate some shadow volumes, but they always point directly to the left of the teapot. No matter where I move my light (and I can tell I'm actually moving the light, because the teapot's diffuse lighting changes), the shadow volumes always go straight left.
The method I'm using to create the volumes is:
1. Find silhouette edges by looking at every triangle in the object. If the triangle isn't lit (tested with the dot product), skip it. If it is lit, check all of its edges: if an edge is already in the list of silhouette edges, remove it; otherwise add it.
2. Once I have all the silhouette edges, I go through each edge creating a quad with one vertex at each vertex of the edge, and the other two just extended away from the light.
Here is my code that does it all:
void getSilhoueteEdges(Model model, vector<Edge> &edges, Vector3f lightPos) {
    //for every triangle
    //  if triangle is not facing the light then skip
    //  for every edge
    //    if edge is already in the list
    //      remove
    //    else
    //      add
    vector<Face> faces = model.faces;
    //for every triangle
    for ( unsigned int i = 0; i < faces.size(); i++ ) {
        Face currentFace = faces.at(i);
        //if triangle is not facing the light
        //for this I'll just use the normal of any vertex, it should be the same for all of them
        Vector3f v1 = model.vertices[currentFace.vertices[0] - 1];
        Vector3f n1 = model.normals[currentFace.normals[0] - 1];
        Vector3f dirToLight = lightPos - v1;
        dirToLight.normalize();
        float dot = n1.dot(dirToLight);
        if ( dot <= 0.0f )
            continue; //then skip
        //let's get the edges
        //v1,v2; v2,v3; v3,v1
        Vector3f v2 = model.vertices[currentFace.vertices[1] - 1];
        Vector3f v3 = model.vertices[currentFace.vertices[2] - 1];
        Edge e[3];
        e[0] = Edge(v1, v2);
        e[1] = Edge(v2, v3);
        e[2] = Edge(v3, v1);
        //for every edge
        //triangles only have 3 edges so loop 3 times
        for ( int j = 0; j < 3; j++ ) {
            if ( edges.size() == 0 ) {
                edges.push_back(e[j]);
                continue;
            }
            bool wasRemoved = false;
            //if edge is in the list
            for ( unsigned int k = 0; k < edges.size(); k++ ) {
                Edge tempEdge = edges.at(k);
                if ( tempEdge == e[j] ) {
                    edges.erase(edges.begin() + k);
                    wasRemoved = true;
                    break;
                }
            }
            if ( ! wasRemoved )
                edges.push_back(e[j]);
        }
    }
}

void extendEdges(vector<Edge> edges, Vector3f lightPos, GLBatch &batch) {
    float extrudeSize = 100.0f;
    batch.Begin(GL_QUADS, edges.size() * 4);
    for ( unsigned int i = 0; i < edges.size(); i++ ) {
        Edge edge = edges.at(i);
        batch.Vertex3f(edge.v1.x, edge.v1.y, edge.v1.z);
        batch.Vertex3f(edge.v2.x, edge.v2.y, edge.v2.z);
        Vector3f temp = edge.v2 + (( edge.v2 - lightPos ) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
        temp = edge.v1 + ((edge.v1 - lightPos) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
    }
    batch.End();
}

void createShadowVolumesLM(Vector3f lightPos, Model model) {
    getSilhoueteEdges(model, silhoueteEdges, lightPos);
    extendEdges(silhoueteEdges, lightPos, boxShadow);
}
My light is defined as follows, and the main shadow volume generation method is called with:
Vector3f vLightPos = Vector3f(-5.0f,0.0f,2.0f);
createShadowVolumesLM(vLightPos, boxModel);
All of my code seems self-documenting in the places I don't have comments, but if there are any confusing parts, let me know.
I have a feeling it's just a simple mistake I overlooked. Here is what it looks like with and without the shadow volumes being rendered.
It would seem you aren't transforming the shadow volumes. Either set the model-view matrix on them so they are transformed the same as the rest of the geometry, or transform all the vertices by hand into view space and do the silhouetting and extrusion there.
Obviously the first method uses less CPU time and would be, IMO, preferable.
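A sketch of the second option, transforming vertices by hand: apply the model matrix to each vertex on the CPU before silhouette extraction, so the extrusion happens in the same space as the light. The Mat4/V3f types and the transformPoint helper here are illustrative, not part of the asker's math library:

```cpp
#include <array>

using Mat4 = std::array<float, 16>; // column-major, as OpenGL expects
struct V3f { float x, y, z; };

// Apply a 4x4 transform to a point (w = 1), column-major layout:
// translation lives in elements 12, 13, 14.
V3f transformPoint(const Mat4& m, V3f p) {
    return {
        m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
        m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
        m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14],
    };
}
```

Running every model vertex through this before getSilhoueteEdges would put the geometry and the light position in the same space, so the extruded quads follow the light as it moves.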