Building a UV Sphere in C++

I'm trying to make a UV sphere in C++ using Qt Creator, without using the OpenGL drawing commands. The plan is to add the vertices to lObject and then add the normals and triangles. The sphere should have a radius of 1. The first problem is that it doesn't render a sphere when drawn, so maybe I'm not adding the right vertices, or maybe I'm not adding the triangles correctly. Any help on what I'm doing wrong would be great.
Here's what I've tried:
NodeObject* ObjectFactory::buildSphere(int slices, int stacks)
{
    // Allocate a new node object
    NodeObject* lObject = new NodeObject();
    for(int i=0; i<stacks; i++)
    {
        double lnum1 = 360.0/stacks;
        double lTheta = ((double)i)*(lnum1*(M_PI/180.0));
        double lNextTheta = ((double)(i+1))*lnum1*(M_PI/180.0);
        for(int j=0; j<slices; j++)
        {
            double lnum2 = 180.0/slices;
            double lPhi = ((double)i)*(lnum2*(M_PI/180.0));
            double lNextPhi = ((double)(i+1))*lnum1*(M_PI/180.0);
            lObject->addVertex(0.0, 1.0, 0.0); //Top
            lObject->addVertex(sin(lTheta)*cos(lPhi), sin(lTheta)*sin(lPhi), cos(lTheta));
            lObject->addVertex(sin(lNextTheta)*cos(lNextPhi), sin(lNextTheta)*sin(lNextPhi), cos(lNextTheta));
            lObject->addVertex(sin(lTheta)*cos(lPhi), -(sin(lTheta)*sin(lPhi)), cos(lTheta));
            lObject->addVertex(sin(lNextTheta)*cos(lNextPhi), -(sin(lNextTheta)*sin(lNextPhi)), cos(lNextTheta));
            lObject->addVertex(0.0, -1.0, 0.0); //Bottom
            lObject->addNormal(0.0,1.0,0.0);
            lObject->addNormal(0.0,-1.0,0.0);
            lObject->addNormal(sin(lTheta)*cos(lPhi),sin(lTheta)*sin(lPhi), cos(lTheta));
            for(int k=0; k<pSlices*6; k++)
            {
                if(i==0) { lObject->addTriangle(0,1,2,0,0,0); }
                else if(i+1 == stacks) { lObject->addTriangle(2,0,1,0,0,0); }
                else
                {
                    lObject->addTriangle(k, k+1, k+2,k,k+1,k+2);
                }
            }
        }
    }
    return lObject;
}

In your third for loop, what is the value of pSlices? Also, why are you adding the top and bottom vertices for each stack?
For better practice, generate the vertices first, then do everything else.
You can use a simple data structure to hold the data, and generate one layer of vertices per iteration of the inner loop (note that a stack-sized array like the one below is a variable-length array, a compiler extension; in standard C++ a QVector or std::vector would do):
QVector3D sphereVertices[stacks][slices];
Fill the first layer with (0.0, 1.0, 0.0);
fill the last layer with (0.0, -1.0, 0.0).
Then iterate over the vertices to calculate the normals and create the triangles in CCW order.
for ( int i = 0; i < stacks - 1; i++ ){ //-1 because we use the next stack to create the face
    for ( int j = 0; j < slices - 1; j++ ){ //we also use the next slice
        //Add these vertex indices
        //First triangle indices
        //i*slices + j, (i+1)*slices + j, (i+1)*slices + j + 1
        //Second triangle indices
        //i*slices + j, (i+1)*slices + j + 1, i*slices + j + 1
        //Furthermore you can calculate the triangle normal from these vertices:
        //https://www.opengl.org/wiki/Calculating_a_Surface_Normal
    }
}
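For reference, here is a minimal sketch of the vertex-generation step described above, using a plain Vec3 struct in place of QVector3D so it stays self-contained; vertex (i, j) lands at index i*slices + j, matching the triangle indices in the comments:
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Unit-sphere vertex grid: 'stacks' rings from the north pole (theta = 0)
// down to the south pole (theta = pi), 'slices' points around each ring.
// Assumes stacks >= 2.
std::vector<Vec3> buildSphereVertices(int stacks, int slices)
{
    std::vector<Vec3> verts;
    verts.reserve(static_cast<std::size_t>(stacks) * slices);
    for (int i = 0; i < stacks; ++i) {
        double theta = M_PI * i / (stacks - 1); // first ring collapses to (0, 1, 0), last to (0, -1, 0)
        for (int j = 0; j < slices; ++j) {
            double phi = 2.0 * M_PI * j / slices;
            verts.push_back({ std::sin(theta) * std::cos(phi),
                              std::cos(theta), // y is up, as in the question
                              std::sin(theta) * std::sin(phi) });
        }
    }
    return verts;
}
A convenient property of the unit sphere is that each vertex position, read as a vector from the origin, is its own normal, so the same values can be reused for the normal array.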

Related

How do I draw a rectangular box using GL_TRIANGLE_STRIP?

I'm new to OpenGL programming and need some help wrapping my head around this issue. I found this answer detailing how to create a cube mesh using a GL_TRIANGLE_STRIP. However, I want to create a rectangular box where one axis isn't just elongated but rather repeated, so that the geometry ends up something like this:
I can't figure out how to construct such a shape so that the geometry is generated correctly, with correct normals and closed ends, while keeping the winding order in mind.
How should I be thinking about and defining the vertices?
The formula for the triangle-strip cube does most of the work; all you have to do is extend the long faces by adding more segments in between, which can be done with for loops.
The strip changes direction twice, both times on the bottom face, so we just need a little manual work there. I wrote up this simple function to create vertices based on a length; it builds a length-by-1-by-1 rectangular box.
void generateRect(int length, std::vector<glm::vec3>& vertices) {
    std::vector<glm::vec3> vertexArray;
    //Generate necessary points
    float x = length / 2.0f;
    for (int i = 0; i <= length; i++) {
        vertexArray.push_back(glm::vec3(x, -0.5f, 0.5f));
        vertexArray.push_back(glm::vec3(x, -0.5f, -0.5f));
        vertexArray.push_back(glm::vec3(x, 0.5f, 0.5f));
        vertexArray.push_back(glm::vec3(x, 0.5f, -0.5f));
        x -= 1.0f;
    }
    //+Y face
    for (int i = 0; i <= length; i++) {
        int index = i * 4 + 3;
        vertices.push_back(vertexArray.at(index));
        vertices.push_back(vertexArray.at(index - 1));
    }
    //Change direction (half of -X face)
    vertices.push_back(vertexArray.at(length * 4));
    //+Z face
    for (int i = length - 1; i >= 0; i--) {
        int index = i * 4;
        vertices.push_back(vertexArray.at(index + 2));
        vertices.push_back(vertexArray.at(index));
    }
    //-Z face (+X face created as well)
    for (int i = 0; i <= length; i++) {
        int index = i * 4 + 3;
        vertices.push_back(vertexArray.at(index));
        vertices.push_back(vertexArray.at(index - 2));
    }
    //Change direction (other half of -X face)
    vertices.push_back(vertexArray.at(length * 4));
    //-Y face
    for (int i = length - 1; i >= 0; i--) {
        int index = i * 4;
        vertices.push_back(vertexArray.at(index + 1));
        vertices.push_back(vertexArray.at(index));
    }
}
From this we get our box; for texturing I just used a cubemap, since I've been doing skyboxes. OpenGL knows the winding order is reversed on every other triangle of a strip, so there's no need for any fancy math; you just have to make sure it's right for the first triangle, which in this case is counter-clockwise.
Normal generation is a little harder, since the vertices must share a normal even when they're being used for different faces. I don't think there is a workaround, but I haven't done much with triangle strips, so there may be one, perhaps involving a geometry shader.
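As a usage sketch (the buffer setup here is my own assumption, not part of the answer): fill a vector, upload it once, and the whole box renders with a single strip draw call:
std::vector<glm::vec3> vertices;
generateRect(3, vertices); // a 3 x 1 x 1 box

// Upload once; assumes a VAO is already bound (or a compatibility context)
// and that attribute 0 is the position in the shader.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3),
             vertices.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), nullptr);

// The entire box is one triangle strip.
glDrawArrays(GL_TRIANGLE_STRIP, 0, static_cast<GLsizei>(vertices.size()));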

Calculating vertex normals: weird results

I know this has been asked quite a few times, but my problem is not about how to do it. I know how this works (or at least I think so ^^), but something seems to be wrong with my implementation and I can't figure out what.
I have a procedurally generated terrain mesh, and I'm trying to calculate the normal for each vertex by averaging the normals of all the triangles the vertex is connected to. When I set the normal's xyz as the vertex's rgb color, each vertex seems to be randomly either black (0, 0, 0) or blue (0, 0, 1).
void CalculateVertexNormal(int index){ //index of the vertex in the mesh's vertex array
    std::vector<int> indices; //indices of triangles the vertex is a part of
    Vector normals = Vector(0.0f, 0.0f, 0.0f, 0.0f); //sum of all the face normals
    for(int i = 0; i < triangles.size(); i += 3){ //iterate over the triangle array in order
        if(triangles[i] == index) //to find the triangle indices
            indices.push_back(triangles[i]);
        else if(triangles[i + 1] == index)
            indices.push_back(triangles[i]);
        else if(triangles[i + 2] == index)
            indices.push_back(triangles[i]);
    }
    for(int i = 0; i < indices.size(); i++){ //iterate over the indices to calculate the normal for each tri
        int vertex = indices[i];
        Vector v1 = vertices[vertex + 1].GetLocation() - vertices[vertex].GetLocation(); //p1->p2
        Vector v2 = vertices[vertex + 2].GetLocation() - vertices[vertex].GetLocation(); //p1->p3
        normals += v1.Cross(v2); //cross product with two edges to receive face normal
    }
    vertices[index].SetNormals(normals.Normalize()); //normalize the sum of face normals and set to vertex
}
Maybe somebody could have a look and tell me what I'm doing wrong.
Thank you.
Edit:
Thanks to molbdnilo's comment I finally understood what was wrong: it was a problem with how I indexed the arrays, and my two loops were rather confusing as well; maybe I should get some rest ;)
I eventually came up with this, reduced to one loop:
for(int i = 0; i < triangles.size(); i += 3){
    if(triangles[i] == index || triangles[i + 1] == index || triangles[i + 2] == index){
        Vector v1 = vertices[triangles[i + 1]].GetLocation() - vertices[index].GetLocation();
        Vector v2 = vertices[triangles[i + 2]].GetLocation() - vertices[index].GetLocation();
        faceNormals += v1.Cross(v2);
    }
}
vertices[index].SetNormals(faceNormals.Normalize());
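Put together, a corrected version of the whole function might look like the following sketch; note it anchors both edge vectors at the triangle's own first vertex, so the cross product is well-defined no matter which corner of the triangle the target vertex happens to be:
void CalculateVertexNormal(int index){ //index of the vertex in the mesh's vertex array
    Vector faceNormals = Vector(0.0f, 0.0f, 0.0f, 0.0f); //running sum of the face normals
    for(std::size_t i = 0; i + 2 < triangles.size(); i += 3){
        //only triangles containing this vertex contribute
        if(triangles[i] == index || triangles[i + 1] == index || triangles[i + 2] == index){
            //two edges of the triangle, both anchored at its first vertex
            Vector v1 = vertices[triangles[i + 1]].GetLocation() - vertices[triangles[i]].GetLocation();
            Vector v2 = vertices[triangles[i + 2]].GetLocation() - vertices[triangles[i]].GetLocation();
            faceNormals += v1.Cross(v2); //the unnormalized cross product weights larger triangles more
        }
    }
    vertices[index].SetNormals(faceNormals.Normalize());
}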

Calculating Normals for a Surface in OpenGL

I am trying to add shading/lighting to my terrain generator, but for some reason my output still looks blocky even after I calculate surface normals.
set<pair<int,int> >::const_iterator it;
for ( it = mRandomPoints.begin(); it != mRandomPoints.end(); ++it )
{
    for ( int i = 0; i < GetXSize(); ++i )
    {
        for ( int j = 0; j < GetZSize(); ++j )
        {
            float pd = sqrt(pow((*it).first - i,2) + pow((*it).second - j,2))*2 / mCircleSize;
            if(fabs(pd) <= 1.0)
            {
                mMap[i][j][2] += mCircleHeight/2 + cos(pd*3.14)*mCircleHeight/2;
            }
        }
    }
}
/*
The three points being considered to compute normals are
(i,j)
(i+1,j)
(i, j+1)
*/
for ( int i = 0; i < GetXSize() - 1; ++i )
{
    for ( int j = 0; j < GetZSize() - 1; ++j )
    {
        float b[] = {mMap[i+1][j][0]-mMap[i][j][0], mMap[i+1][j][1]-mMap[i][j][1], mMap[i+1][j][2]-mMap[i][j][2] };
        float c[] = {mMap[i][j+1][0]-mMap[i][j][0], mMap[i][j+1][1]-mMap[i][j][1], mMap[i][j+1][2]-mMap[i][j][2] };
        float a[] = {b[1]*c[2] - b[2]*c[1], b[2]*c[0]-b[0]*c[2], b[0]*c[1]-b[1]*c[0]};
        float Vnorm = sqrt(pow(a[0],2) + pow(a[1],2) + pow(a[2],2));
        mNormalMap[i][j][0] = a[0]/Vnorm;
        mNormalMap[i][j][1] = a[1]/Vnorm;
        mNormalMap[i][j][2] = a[2]/Vnorm;
    }
}
Then, when drawing, I use the following:
float*** normal = map->GetNormalMap();
for (int i = 0; i < map->GetXSize() - 1; ++i)
{
    glBegin(GL_TRIANGLE_STRIP);
    for (int j = 0; j < map->GetZSize() - 1; ++j)
    {
        glNormal3fv(normal[i][j]);
        float color = 1 - (terrain[i][j][2]/height);
        glColor3f(color, color, color);
        glVertex3f(terrain[i][j][0], terrain[i][j][2], terrain[i][j][1]);
        glVertex3f(terrain[i+1][j][0], terrain[i+1][j][2], terrain[i+1][j][1]);
        glVertex3f(terrain[i][j+1][0], terrain[i][j+1][2], terrain[i][j+1][1]);
        glVertex3f(terrain[i+1][j+1][0], terrain[i+1][j+1][2], terrain[i+1][j+1][1]);
    }
    glEnd();
}
EDIT: Initialization Code
glFrontFace(GL_CCW);
glCullFace(GL_FRONT); // glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glMatrixMode(GL_PROJECTION);
Am I calculating the normals properly?
In addition to what Bovinedragon suggested, namely glShadeModel(GL_SMOOTH);, you should probably use per-vertex normals. This means that each glVertex3f would be preceded by its own glNormal3fv call defining the average normal of all adjacent faces. To obtain it, simply add up the neighbouring face normals and normalize the result.
Reference this question: Techniques to smooth face edges in OpenGL
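A sketch of that accumulation for the grid in this question; mNormalMap holds the per-face normals computed earlier, and mVertexNormalMap is a hypothetical output array of the same shape:
for ( int i = 0; i < GetXSize(); ++i )
{
    for ( int j = 0; j < GetZSize(); ++j )
    {
        //sum the normals of the (up to four) faces touching vertex (i, j)
        float sum[] = { 0.0f, 0.0f, 0.0f };
        for ( int di = -1; di <= 0; ++di )
        {
            for ( int dj = -1; dj <= 0; ++dj )
            {
                int fi = i + di, fj = j + dj;
                if ( fi < 0 || fj < 0 || fi >= GetXSize() - 1 || fj >= GetZSize() - 1 )
                    continue; //no face beyond the border
                sum[0] += mNormalMap[fi][fj][0];
                sum[1] += mNormalMap[fi][fj][1];
                sum[2] += mNormalMap[fi][fj][2];
            }
        }
        float len = sqrt(pow(sum[0],2) + pow(sum[1],2) + pow(sum[2],2));
        mVertexNormalMap[i][j][0] = sum[0]/len;
        mVertexNormalMap[i][j][1] = sum[1]/len;
        mVertexNormalMap[i][j][2] = sum[2]/len;
    }
}
In the draw loop, each glVertex3f then gets its own glNormal3fv with the matching entry from this array.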
Have you set glShadeModel to GL_SMOOTH?
See: http://www.khronos.org/opengles/documentation/opengles1_0/html/glShadeModel.html
This setting also affects vertex colors in addition to lighting. You say it was blocky even before lighting, which makes me think this is the issue.

OpenGL Calculating Normals (Quads)

My issue is regarding OpenGL and normals; I understand the math behind them, and I am having some success.
The function I've attached below accepts an interleaved vertex array and calculates the normals for every 4 vertices. These represent quads that face the same direction, so as I understand it, those 4 vertices should share the same normal.
The problem I am having is that my quads render with a diagonal gradient, much like this: Light Effect, except that the shadow is in the middle, with the light in the corners.
I draw my quads in a consistent fashion: TopLeft, TopRight, BottomRight, BottomLeft, and the vectors I use to calculate my normals are TopRight - TopLeft and BottomRight - TopLeft.
Hopefully someone can see a blunder I've made, but I have been at this for hours to no avail.
For the record, I render a cube and a teapot next to my objects to check that my lighting is functioning, so I'm fairly sure there is no issue regarding the light position.
void CalculateNormals(point8 toCalc[], int toCalcLength)
{
    GLfloat N[3], U[3], V[3]; //N will be our final calculated normal, U and V will be the subjects of cross-product
    float length;
    for (int i = 0; i < toCalcLength; i+=4) //Starting with every first corner QUAD vertex
    {
        U[0] = toCalc[i+1][5] - toCalc[i][5]; U[1] = toCalc[i+1][6] - toCalc[i][6]; U[2] = toCalc[i+1][7] - toCalc[i][7]; //Calculate Ux Uy Uz
        V[0] = toCalc[i+3][5] - toCalc[i][5]; V[1] = toCalc[i+3][6] - toCalc[i][6]; V[2] = toCalc[i+3][7] - toCalc[i][7]; //Calculate Vx Vy Vz
        N[0] = (U[1]*V[2]) - (U[2]*V[1]);
        N[1] = (U[2]*V[0]) - (U[0]*V[2]);
        N[2] = (U[0]*V[1]) - (U[1]*V[0]);
        //Calculate length for normalising
        length = (float)sqrt((pow(N[0],2)) + (pow(N[1],2)) + (pow(N[2],2)));
        for (int a = 0; a < 3; a++)
        {
            N[a] /= length;
        }
        for (int j = 0; i < 4; i++)
        {
            //Apply normals to QUAD vertices (3,4,5 index position of normals in interleaved array)
            toCalc[i+j][3] = N[0]; toCalc[i+j][4] = N[1]; toCalc[i+j][5] = N[2];
        }
    }
}
It seems like you are taking the vertex position values from indices 5, 6, and 7 for the calculations, and then writing the normals out at indices 3, 4, and 5. Note how index 5 is used in both; presumably one of them is not correct.
It looks like your for-loops are biting you.
for (int i = 0; i < toCalcLength; i+=4) //Starting with every first corner QUAD vertex
{
    ...
    for (int j = 0; i < 4; i++)
    {   //          ^      ^
        // Should you be using 'j' instead of 'i' here?
        // j will never increment
        // This loop won't be called at all after the first time through the outer loop
    ...
    }
}
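Presumably the intended inner loop was the following (the overlapping-index issue described in the next answer still applies):
for (int j = 0; j < 4; j++)
{
    //Apply the same normal to all four vertices of the quad
    toCalc[i+j][3] = N[0]; toCalc[i+j][4] = N[1]; toCalc[i+j][5] = N[2];
}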
You use indexes 3, 4, and 5 for storing the normal:
toCalc[i+j][3] = N[0]; toCalc[i+j][4] = N[1]; toCalc[i+j][5] = N[2];
AND you use indexes 5, 6 and 7 to get the point coordinates:
U[0] = toCalc[i+1][5] - toCalc[i][5]; U[1] = toCalc[i+1][6] - toCalc[i][6]; U[2] = toCalc[i+1][7] - toCalc[i][7];
Those indexes overlap (normal.x shares the same index as position.z), which shouldn't happen.
Recommendations:
Put everything into structures.
Either use a math library, or put the vector arithmetic into separate, appropriately named subroutines.
Use named variables instead of indexes.
By doing so you'll reduce the number of bugs in your code. a.position.x is easier to read than quad[0][5], and it is easier to fix a typo in a vector operation when the code hasn't been copy-pasted.
You can use unions to access vector components by both index and name:
struct Vector3{
    union{
        struct{
            float x, y, z;
        };
        float v[3];
    };
};
For calculating the normal in quad ABCD
A--B
| |
C--D
use the formula:
normal = normalize((B.position - A.position) X (C.position - A.position)).
OR
normal = normalize((D.position - A.position) X (C.position - B.position)).
Where "X" means "cross-product".
Either way will work fine.
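As a minimal sketch of that formula using the struct above (the helper functions are my own, written out rather than taken from a math library so the example stays self-contained):
#include <cmath>

Vector3 makeVector3(float x, float y, float z){
    Vector3 r;
    r.x = x; r.y = y; r.z = z;
    return r;
}

Vector3 sub(const Vector3 &a, const Vector3 &b){
    return makeVector3(a.x - b.x, a.y - b.y, a.z - b.z);
}

Vector3 cross(const Vector3 &a, const Vector3 &b){
    return makeVector3(a.y*b.z - a.z*b.y,
                       a.z*b.x - a.x*b.z,
                       a.x*b.y - a.y*b.x);
}

Vector3 normalize(const Vector3 &a){
    float len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return makeVector3(a.x/len, a.y/len, a.z/len);
}

//normal = normalize((B - A) X (C - A)) for quad ABCD
Vector3 quadNormal(const Vector3 &A, const Vector3 &B, const Vector3 &C){
    return normalize(cross(sub(B, A), sub(C, A)));
}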

My shadow volumes don't move with my light

I'm currently trying to implement shadow volumes in my OpenGL world. Right now I'm just focusing on getting the volumes calculated correctly.
Right now I have a teapot that's rendered, and I can get it to generate some shadow volumes, but they always point directly to the left of the teapot. No matter where I move my light (and I can tell that I'm actually moving the light, because the teapot's diffuse lighting changes), the shadow volumes always go straight left.
The method I'm using to create the volumes is:
1. Find the silhouette edges by looking at every triangle in the object. If the triangle isn't lit (tested with the dot product), skip it. If it is lit, check all of its edges: if an edge is already in the list of silhouette edges, remove it; otherwise add it.
2. Once I have all the silhouette edges, I go through each edge creating a quad, with one vertex at each endpoint of the edge and the other two extended away from the light.
Here is my code that does it all:
void getSilhoueteEdges(Model model, vector<Edge> &edges, Vector3f lightPos) {
    //for every triangle
    //    if triangle is not facing the light then skip
    //    for every edge
    //        if edge is already in the list
    //            remove
    //        else
    //            add
    vector<Face> faces = model.faces;
    //for every triangle
    for ( unsigned int i = 0; i < faces.size(); i++ ) {
        Face currentFace = faces.at(i);
        //if triangle is not facing the light
        //for this i'll just use the normal of any vertex, it should be the same for all of them
        Vector3f v1 = model.vertices[currentFace.vertices[0] - 1];
        Vector3f n1 = model.normals[currentFace.normals[0] - 1];
        Vector3f dirToLight = lightPos - v1;
        dirToLight.normalize();
        float dot = n1.dot(dirToLight);
        if ( dot <= 0.0f )
            continue; //then skip
        //lets get the edges
        //v1,v2; v2,v3; v3,v1
        Vector3f v2 = model.vertices[currentFace.vertices[1] - 1];
        Vector3f v3 = model.vertices[currentFace.vertices[2] - 1];
        Edge e[3];
        e[0] = Edge(v1, v2);
        e[1] = Edge(v2, v3);
        e[2] = Edge(v3, v1);
        //for every edge
        //triangles only have 3 edges so loop 3 times
        for ( int j = 0; j < 3; j++ ) {
            if ( edges.size() == 0 ) {
                edges.push_back(e[j]);
                continue;
            }
            bool wasRemoved = false;
            //if edge is in the list
            for ( unsigned int k = 0; k < edges.size(); k++ ) {
                Edge tempEdge = edges.at(k);
                if ( tempEdge == e[j] ) {
                    edges.erase(edges.begin() + k);
                    wasRemoved = true;
                    break;
                }
            }
            if ( ! wasRemoved )
                edges.push_back(e[j]);
        }
    }
}

void extendEdges(vector<Edge> edges, Vector3f lightPos, GLBatch &batch) {
    float extrudeSize = 100.0f;
    batch.Begin(GL_QUADS, edges.size() * 4);
    for ( unsigned int i = 0; i < edges.size(); i++ ) {
        Edge edge = edges.at(i);
        batch.Vertex3f(edge.v1.x, edge.v1.y, edge.v1.z);
        batch.Vertex3f(edge.v2.x, edge.v2.y, edge.v2.z);
        Vector3f temp = edge.v2 + (( edge.v2 - lightPos ) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
        temp = edge.v1 + ((edge.v1 - lightPos) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
    }
    batch.End();
}

void createShadowVolumesLM(Vector3f lightPos, Model model) {
    getSilhoueteEdges(model, silhoueteEdges, lightPos);
    extendEdges(silhoueteEdges, lightPos, boxShadow);
}
My light is defined as follows, and the main shadow volume generation method is called like this:
Vector3f vLightPos = Vector3f(-5.0f,0.0f,2.0f);
createShadowVolumesLM(vLightPos, boxModel);
My code is mostly self-documenting in the places where I don't have comments, but if any parts are confusing, let me know.
I have a feeling it's just a simple mistake I overlooked. Here is what it looks like with and without the shadow volumes being rendered.
It would seem you aren't transforming the shadow volumes. Either set the model-view matrix on them so they get transformed the same as the rest of the geometry, or transform all the vertices by hand into view space and then do the silhouetting and extrusion in view space.
Obviously the first method uses less CPU time and would be, IMO, preferable.
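For the first method, a sketch using the fixed-function matrix stack; the translation values are placeholders for whatever transform the teapot actually uses, and boxShadow.Draw() assumes the GLBatch class from the question exposes a Draw() call:
//Render the shadow volume under the same model-view transform as the
//occluder, so the volume follows the model when the model or camera moves.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(teapotX, teapotY, teapotZ); //hypothetical model transform
boxShadow.Draw();
glPopMatrix();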