GL_TRIANGLES instead of GL_QUADS - c++

I want to modify the code for generating a 3D sphere so that it uses triangles for drawing instead of quads. The problem is, as usual, that I get an error -- "vector iterator not incrementable". What's wrong with it?
SolidSphere(float radius, unsigned int rings, unsigned int sectors)
{
float const R = 1.0f / (float)(rings-1);
float const S = 1.0f / (float)(sectors-1);
int r, s;
vertices.resize(rings * sectors * 3);
normals.resize(rings * sectors * 3);
texcoords.resize(rings * sectors * 2);
std::vector<GLfloat>::iterator v = vertices.begin();
std::vector<GLfloat>::iterator n = normals.begin();
std::vector<GLfloat>::iterator t = texcoords.begin();
for(r = 0; r < rings; r++) for(s = 0; s < sectors; s++) {
float const x = sinf(M_PI * r * R) * cosf(2 * M_PI * s * S);
float const y = sinf(-M_PI_2 + M_PI * r * R );
float const z = sinf(2.0f * M_PI * s * S) * sinf( M_PI * r * R );
*t++ = s*S;
*t++ = r*R;
*v++ = x * radius;
*v++ = y * radius;
*v++ = z * radius;
*n++ = x;
*n++ = y;
*n++ = z;
}
indices.resize(rings * sectors * 4);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings-1; r++) for(s = 0; s < sectors-1; s++) {
/* triangles -- not working!
*i++ = r * sectors + s;
*i++ = (r + 1) * sectors + (s + 1);
*i++ = r * sectors + (s + 1);
*i++ = r * sectors + s;
*i++ = (r + 1) * sectors + s;
*i++ = (r + 1) * sectors + (s + 1); */
/* quads */
*i++ = r * sectors + s;
*i++ = r * sectors + (s+1);
*i++ = (r+1) * sectors + (s+1);
*i++ = (r+1) * sectors + s;
}
}

Looks like your triangle-generator version of the loop does (rings-1) times (sectors-1) iterations, each one incrementing i six times, but you have resized the vector i iterates through to just rings * sectors * 4, which was only enough for the quad-generator version of the loop.
Assuming the triangle version was otherwise OK, this adjustment should fix it:
indices.resize(rings * sectors * 6);
This kind of oversight generally arises when you code without drinking enough coffee, or with too much of it. (The answer originally included a graph, mapping the rings and sectors numbers to x and y, showing the hyperbola boundary at which your code actually starts failing, since for small values the quad-sized allocation happens to be more space than the triangle iterations need.)
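For completeness, here is a minimal sketch of that triangle version with an index array sized to exactly what the loop writes; the rings * sectors * 6 size above works too, it just leaves unused zero entries at the end. This only reuses the names already declared in the constructor from the question:
indices.resize((rings - 1) * (sectors - 1) * 6);
std::vector<GLushort>::iterator i = indices.begin();
for (r = 0; r < rings - 1; r++) for (s = 0; s < sectors - 1; s++) {
    // first triangle of the quad
    *i++ = r * sectors + s;
    *i++ = (r + 1) * sectors + (s + 1);
    *i++ = r * sectors + (s + 1);
    // second triangle of the quad
    *i++ = r * sectors + s;
    *i++ = (r + 1) * sectors + s;
    *i++ = (r + 1) * sectors + (s + 1);
}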

Related

OpenGL sphere without GLUT and culling issue

I am modifying the excellent code provided in this answer, Creating a 3D sphere in Opengl using Visual C++, to suit my needs, which are to draw it using GL_TRIANGLES with face culling like this:
GLCALL(glEnable(GL_CULL_FACE));
GLCALL(glCullFace(GL_BACK));
GLCALL(glFrontFace(GL_CCW));
I've managed to make the indices work for triangles, but I can't figure out how to solve the face-culling problem; as it is, it culls the wrong face with the index setup I have below:
bool CreateSphereData(const float radius, const uint32_t rings, const uint32_t sectors, std::vector<float>& vertexData, std::vector<float>& normalData, std::vector<float>& texcoordData, std::vector<uint32_t>& indiceData)
{
if (radius <= 0 || rings <= 0 || sectors <= 0)
return false;
const float R = 1.0f / (float)(rings - 1);
const float S = 1.0f / (float)(sectors - 1);
const float pi = boost::math::constants::pi<float>();
const float pi_2 = pi / 2;
vertexData.resize(rings * sectors * 3);
normalData.resize(rings * sectors * 3);
texcoordData.resize(rings * sectors * 2);
auto v = vertexData.begin();
auto n = normalData.begin();
auto t = texcoordData.begin();
for (uint32_t r = 0; r < rings; r++)
{
for (uint32_t s = 0; s < sectors; s++)
{
const float y = sin(-pi_2 + pi * r * R);
const float x = cos(2 * pi * s * S) * sin(pi * r * R);
const float z = sin(2 * pi * s * S) * sin(pi * r * R);
*t++ = s*S;
*t++ = r*R;
*v++ = x * radius;
*v++ = y * radius;
*v++ = z * radius;
*n++ = x;
*n++ = y;
*n++ = z;
}
}
indiceData.resize(rings * sectors * 6);
auto i = indiceData.begin();
for (uint32_t r = 0; r < rings; r++)
{
for (uint32_t s = 0; s < sectors; s++)
{
*i++ = (r + 1) * sectors + s;
*i++ = r * sectors + s;
*i++ = r * sectors + (s + 1);
*i++ = (r + 1) * sectors + (s + 1);
*i++ = (r + 1) * sectors + s;
*i++ = r * sectors + (s + 1);
}
}
return true;
}
I am sure it is simple but I can't get it right. Any pointers?
EDIT/ANSWER:
Simply using the following indices draws it correctly with the setup mentioned above:
indiceData.resize(rings * sectors * 6);
auto i = indiceData.begin();
for (uint32_t r = 0; r < rings-1; r++)
{
for (uint32_t s = 0; s < sectors-1; s++)
{
*i++ = r * sectors + (s + 1);
*i++ = r * sectors + s;
*i++ = (r + 1) * sectors + s;
*i++ = r * sectors + (s + 1);
*i++ = (r + 1) * sectors + s;
*i++ = (r + 1) * sectors + (s + 1);
}
}
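For reference, a sketch of how these buffers might then be drawn under the culling state from the question; it assumes the index data has been uploaded to the bound GL_ELEMENT_ARRAY_BUFFER and that GLCALL is the asker's error-checking macro. Note that with the rings * sectors * 6 resize, the unwritten tail of indiceData is all zeros, which only adds harmless degenerate triangles; resizing to (rings - 1) * (sectors - 1) * 6 avoids them.
GLCALL(glEnable(GL_CULL_FACE));
GLCALL(glCullFace(GL_BACK));
GLCALL(glFrontFace(GL_CCW));
// indiceData holds uint32_t values, so the index type is GL_UNSIGNED_INT;
// nullptr means offset 0 into the bound element array buffer
GLCALL(glDrawElements(GL_TRIANGLES,
                      static_cast<GLsizei>(indiceData.size()),
                      GL_UNSIGNED_INT,
                      nullptr));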

opengl indexed drawing issue

I'm trying to render a sphere with OpenGL (3.0). The algorithm below computes the vertices and the corresponding indices. All in all it works quite well; however, there seems to be a glitch in the matrix, because I get an ugly cone inside the sphere.
vertices.resize(rings * segments * 3);
colors.resize(rings * segments * 3);
indices.resize(6 * rings * segments);
auto v = vertices.begin();
auto c = colors.begin();
auto i = indices.begin();
auto dTheta = M_PI / (f32)rings;
auto dPhi = 2 * M_PI / (f32)segments;
for ( u32 ring = 0; ring < rings; ++ring ) {
auto r0 = radius * sinf(ring * dTheta);
auto y0 = radius * cosf(ring * dTheta);
for ( u32 segment = 0; segment < segments; ++segment ) {
auto x0 = r0 * sinf(segment * dPhi);
auto z0 = r0 * cosf(segment * dPhi);
*v++ = x0; *c++ = color.r;
*v++ = y0; *c++ = color.g;
*v++ = z0; *c++ = color.b;
if (ring < rings) {
*i++ = ( (ring ) * segments ) + segment;
*i++ = ( (ring+1) * segments ) + segment;
*i++ = ( (ring+1) * segments ) + segment + 1;
*i++ = ( (ring+1) * segments ) + segment + 1;
*i++ = ( (ring ) * segments ) + segment + 1;
*i++ = ( (ring ) * segments ) + segment;
}
}
}
Any idea what I'm missing?
*i++ = ( (ring+1) * segments ) + segment + 1;
*i++ = ( (ring+1) * segments ) + segment + 1;
*i++ = ( (ring ) * segments ) + segment + 1;
What index do you compute if segment is equal to segments - 1? Yes, you get an index from the next segment.
And if there is no next segment? Then you get an index outside of your list of points.
Your segment increment needs to wrap around to 0:
*i++ = ( (ring+1) * segments ) + (segment + 1) % segments;
*i++ = ( (ring+1) * segments ) + (segment + 1) % segments;
*i++ = ( (ring ) * segments ) + (segment + 1) % segments;
Also, consider what happens if ring is equal to rings - 1. Same problem, but you need a different solution. Your if statement is wrong. It should be:
if((ring + 1) < rings)
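Putting both fixes together, the index-generating part of the loop might look like this (a sketch using the u32 alias from the question; note that with the corrected guard only (rings - 1) * segments quads are emitted, so either resize indices to 6 * (rings - 1) * segments or draw only the indices actually written):
if ((ring + 1) < rings) {
    u32 next = (segment + 1) % segments;   // wrap the last segment back to 0
    *i++ = ( ring      * segments) + segment;
    *i++ = ((ring + 1) * segments) + segment;
    *i++ = ((ring + 1) * segments) + next;
    *i++ = ((ring + 1) * segments) + next;
    *i++ = ( ring      * segments) + next;
    *i++ = ( ring      * segments) + segment;
}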

Trouble with OpenGL Vertex Array Sphere C

Here's a picture of the problem:
And here is the wireframe:
As you can see from the pictures above, I have some sort of weird graphical issue and am not sure how to fix it. I think something is wrong with the code, although no one else seems to have had trouble with it.
Code:
unsigned int rings = 12, sectors = 24;
float const R = 1./(float)(rings-1);
float const S = 1./(float)(sectors-1);
int r, s;
vertices.resize(rings * sectors * 3);
normals.resize(rings * sectors * 3);
texcoords.resize(rings * sectors * 2);
std::vector<GLfloat>::iterator v = vertices.begin();
std::vector<GLfloat>::iterator n = normals.begin();
std::vector<GLfloat>::iterator t = texcoords.begin();
for(r = 0; r < rings; r++) for(s = 0; s < sectors; s++) {
float const y = sin( -M_PI_2 + M_PI * r * R );
float const x = cos(2*M_PI * s * S) * sin( M_PI * r * R );
float const z = sin(2*M_PI * s * S) * sin( M_PI * r * R );
*t++ = s*S;
*t++ = r*R;
*v++ = x * getR();
*v++ = y * getR();
*v++ = z * getR();
*n++ = x;
*n++ = y;
*n++ = z;
}
indices.resize(rings * sectors * 4);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings; r++) for(s = 0; s < sectors; s++) {
*i++ = r * sectors + s;
*i++ = r * sectors + (s+1);
*i++ = (r+1) * sectors + (s+1);
*i++ = (r+1) * sectors + s;
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices.data());
glNormalPointer(GL_FLOAT, 0, normals.data());
glTexCoordPointer(2, GL_FLOAT, 0, texcoords.data());
glDrawElements(GL_QUADS, indices.size(), GL_UNSIGNED_SHORT, indices.data());
Code taken from (Creating a 3D sphere in Opengl using Visual C++)
indices will end up with indexes outside of vertices.
The last four values in indices will be:
*i++ = 11 * 24 + 23 = 287;
*i++ = 11 * 24 + (23 + 1) = 288;
*i++ = (11 + 1) * 24 + (23 + 1) = 312;
*i++ = (11 + 1) * 24 + 23 = 311;
but vertices only contains 288 vertices. I assume the reason it works for other people is that glDrawElements might wrap the indices in some implementations.
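A sketch of an index loop that stays within the rings * sectors vertices actually generated, using the same names as the question; both loops stop one step early and the index array is sized to match, so the indices.size() passed to glDrawElements stays consistent:
indices.resize((rings - 1) * (sectors - 1) * 4);
std::vector<GLushort>::iterator i = indices.begin();
for (r = 0; r < rings - 1; r++) for (s = 0; s < sectors - 1; s++) {
    *i++ = r * sectors + s;
    *i++ = r * sectors + (s + 1);
    *i++ = (r + 1) * sectors + (s + 1);
    *i++ = (r + 1) * sectors + s;
}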

Implementing opengl sphere example code

Trying to implement sphere example code I found here on Stack Overflow, I have been running into problems. Note that I am not using the draw code; I am using the SolidSphere function from that example to create my vertex data.
This is my code, with some minor changes to the example I referred to:
std::vector<GLfloat> vertices;
std::vector<GLfloat> normals;
std::vector<GLfloat> texcoords;
std::vector<GLushort> indices;
Sphere(shared_ptr<GL::Shader> _shader, shared_ptr<Material> _material, float radius, unsigned int rings, unsigned int sectors)
{
this->attachShader(_shader);
this->attachMaterial(_material);
/** Implementation from:
https://stackoverflow.com/questions/5988686/how-do-i-create-a-3d-sphere-in-opengl-using-visual-c?lq=1
**/
float const R = 1./(float)(rings-1);
float const S = 1./(float)(sectors-1);
int r, s;
vertices.resize(rings * sectors * 3);
//normals.resize(rings * sectors * 3);
texcoords.resize(rings * sectors * 2);
std::vector<GLfloat>::iterator v = vertices.begin();
//std::vector<GLfloat>::iterator n = sphere_normals.begin();
std::vector<GLfloat>::iterator t = texcoords.begin();
for(r = 0; r < rings; r++) {
for(s = 0; s < sectors; s++) {
float const y = sin( -M_PI/2 + M_PI * r * R );
float const x = cos(2*M_PI * s * S) * sin( M_PI * r * R );
float const z = sin(2*M_PI * s * S) * sin( M_PI * r * R );
*t++ = s*S;
*t++ = r*R;
*v++ = x * radius;
*v++ = y * radius;
*v++ = z * radius;
//*n++ = x;
//*n++ = y;
//*n++ = z;
}
}
indices.resize(rings * sectors * 4);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings; r++) {
for(s = 0; s < sectors; s++) {
*i++ = r * sectors + s;
*i++ = r * sectors + (s+1);
*i++ = (r+1) * sectors + (s+1);
*i++ = (r+1) * sectors + s;
}
}
/************* THIS IS THE ADDED PART ********/
for (unsigned int i = 0; i < vertices.size(); i += 3) {
vertexCoords.push_back(glm::vec4(vertices[i], vertices[i+1], vertices[i+2], 1));
}
for (unsigned int i = 0; i < texcoords.size(); i += 2) {
textureCoords.push_back(glm::vec2(texcoords[i], texcoords[i+1]));
}
indexElements = indices;
cout << "Vertex vector size: " << vertexCoords.size() << endl;
cout << "Texture vector size: " << textureCoords.size() << endl;
cout << "Element index vector size: " << indexElements.size() << endl;
}
The only big changes here are using M_PI / 2 instead of M_PI_2 and the two loops that push the values into the format my code uses, which is:
vector <glm::vec4> vertexCoords;
vector <glm::vec2> textureCoords;
vector <GLushort> indexElements;
Here is a picture of what I am rendering:
So I am seeing this extra line at the top of the sphere.
Is there anything obvious I am doing wrong in this code that could cause this?
In this loop:
for(r = 0; r < rings; r++) {
for(s = 0; s < sectors; s++) {
*i++ = r * sectors + s;
*i++ = r * sectors + (s+1);
*i++ = (r+1) * sectors + (s+1);
*i++ = (r+1) * sectors + s;
}
}
you are accessing r+1 and s+1 which when r = rings - 1 and s = sectors - 1 (i.e. the last iteration of your loops) will be one higher than the data you have available.
This will be pointing to an uninitialised area of memory. This could contain anything, but in this case it appears to contain zeros which would explain the line going off to the origin.
As you are drawing the "next" quad you simply need to stop these loops one iteration earlier:
for(r = 0; r < rings-1; r++) {
for(s = 0; s < sectors-1; s++) {
You have only rings*sectors vertices, but your indices go up to (rings-1+1)*sectors + (sectors-1+1) = rings*sectors + sectors, which is exactly sectors vertices too many.
The problem is that the output indices go out of bounds. Fix this part of the code like so:
indices.resize(rings * sectors * 4);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings-1; r++) {
for(s = 0; s < sectors-1; s++) {
*i++ = r * sectors + s;
*i++ = r * sectors + (s+1);
*i++ = (r+1) * sectors + (s+1);
*i++ = (r+1) * sectors + s;
}
}
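A quick, hypothetical sanity check (not part of the original answer) can catch this class of out-of-range index before the data ever reaches the GPU; it assumes <cassert> is included:
// every index must refer to one of the rings * sectors generated vertices
for (GLushort idx : indices)
    assert(idx < rings * sectors);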

Gradient algorithm produces little white dots

I'm working on an algorithm to generate point to point linear gradients. I have a rough proof of concept implementation done:
GLuint OGLENGINEFUNCTIONS::CreateGradient( std::vector<ARGBCOLORF> &input,POINTFLOAT start, POINTFLOAT end, int width, int height,bool radial )
{
std::vector<POINT> pol;
std::vector<GLubyte> pdata(width * height * 4);
std::vector<POINTFLOAT> linearpts;
std::vector<float> lookup;
float distance = GetDistance(start,end);
RoundNumber(distance);
POINTFLOAT temp;
float incr = 1 / (distance + 1);
for(int l = 0; l < 100; l ++)
{
POINTFLOAT outA;
POINTFLOAT OutB;
float dirlen;
float perplen;
POINTFLOAT dir;
POINTFLOAT ndir;
POINTFLOAT perp;
POINTFLOAT nperp;
POINTFLOAT perpoffset;
POINTFLOAT diroffset;
dir.x = end.x - start.x;
dir.y = end.y - start.y;
dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y));
ndir.x = static_cast<float>(dir.x * 1.0 / dirlen);
ndir.y = static_cast<float>(dir.y * 1.0 / dirlen);
perp.x = dir.y;
perp.y = -dir.x;
perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y));
nperp.x = static_cast<float>(perp.x * 1.0 / perplen);
nperp.y = static_cast<float>(perp.y * 1.0 / perplen);
perpoffset.x = static_cast<float>(nperp.x * l * 0.5);
perpoffset.y = static_cast<float>(nperp.y * l * 0.5);
diroffset.x = static_cast<float>(ndir.x * 0 * 0.5);
diroffset.y = static_cast<float>(ndir.y * 0 * 0.5);
outA.x = end.x + perpoffset.x + diroffset.x;
outA.y = end.y + perpoffset.y + diroffset.y;
OutB.x = start.x + perpoffset.x - diroffset.x;
OutB.y = start.y + perpoffset.y - diroffset.y;
for (float i = 0; i < 1; i += incr)
{
temp = GetLinearBezier(i,outA,OutB);
RoundNumber(temp.x);
RoundNumber(temp.y);
linearpts.push_back(temp);
lookup.push_back(i);
}
for (unsigned int j = 0; j < linearpts.size(); j++) {
if(linearpts[j].x < width && linearpts[j].x >= 0 &&
linearpts[j].y < height && linearpts[j].y >=0)
{
pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 0] = (GLubyte) j;
pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 1] = (GLubyte) j;
pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 2] = (GLubyte) j;
pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 3] = (GLubyte) 255;
}
}
lookup.clear();
linearpts.clear();
}
return CreateTexture(pdata,width,height);
}
It works as I would expect most of the time, but at certain angles it produces little white dots. I can't figure out what causes this.
This is what it looks like at most angles (good) http://img9.imageshack.us/img9/5922/goodgradient.png
But once in a while it looks like this (bad): http://img155.imageshack.us/img155/760/badgradient.png
What could be causing the white dots?
Is there perhaps a better way to generate my gradients, if there is no fix for this?
Thanks
I think you have a bug indexing into the pdata byte vector. Your x domain is [0, width), but when you multiply out the indices you're doing x * 4 * width. It should probably be x * 4 + y * 4 * width or x * 4 * height + y * 4, depending on whether your data is arranged row-major or column-major.
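For illustration, a minimal sketch of the row-major variant suggested above, reusing the names from the question; whether row-major or column-major is correct ultimately depends on how CreateTexture interprets the buffer:
int px = static_cast<int>(linearpts[j].x);   // pixel column in [0, width)
int py = static_cast<int>(linearpts[j].y);   // pixel row in [0, height)
std::size_t base = (static_cast<std::size_t>(py) * width + px) * 4;   // row-major RGBA offset
pdata[base + 0] = (GLubyte) j;
pdata[base + 1] = (GLubyte) j;
pdata[base + 2] = (GLubyte) j;
pdata[base + 3] = (GLubyte) 255;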