I'm creating a cuboid in OpenGL using vertex arrays, but my texture seems to have gone a bit wrong.
Here's my code:
float halfW = getW() / 2;
float halfH = getH() / 2;
GLubyte myIndices[] = {
    1, 0, 2, // front
    2, 0, 3,
    4, 5, 7, // back
    7, 5, 6,
    0, 4, 3, // left
    3, 4, 7,
    5, 1, 6, // right
    6, 1, 2,
    7, 6, 3, // up
    3, 6, 2,
    1, 0, 5, // down
    5, 0, 4
};
float myVertices[] = {
    -halfW, -halfH, -halfW, // Vertex #0
     halfW, -halfH, -halfW, // Vertex #1
     halfW,  halfH, -halfW, // Vertex #2
    -halfW,  halfH, -halfW, // Vertex #3
    -halfW, -halfH,  halfW, // Vertex #4
     halfW, -halfH,  halfW, // Vertex #5
     halfW,  halfH,  halfW, // Vertex #6
    -halfW,  halfH,  halfW  // Vertex #7
};
float myTexCoords[] = {
    1.0, 1.0, // 0
    0.0, 1.0, // 1
    0.0, 0.0, // 2
    1.0, 0.0, // 3
    0.0, 0.0, // 4
    1.0, 0.0, // 5
    1.0, 1.0, // 6
    0.0, 1.0  // 7
};
Here's the problem:
The front and back are rendering fine but top, bottom, left and right are skewed.
Where am I going wrong?
Your texture coordinates and vertex indices are off. Since Vertex #2 (at halfW, halfH, -halfW) has texture coordinate (0, 0), Vertex #6 (at halfW, halfH, halfW) should not have texture coordinate (1, 1). That puts vertices with texture coordinates (0, 0) and (1, 1) along the same edge, which leads to trouble.
The problem with a cube is that one 2D texture coordinate per vertex is not enough for mapping a texture onto it like this (however you arrange them, you will end up with weird sides).
The solution is either to add extra vertices so that no side shares vertices with another side, or to look into cube maps, where you specify the texture coordinates in 3D (just like the vertices). I would suggest cube maps, since they avoid adding redundant vertices.
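If you go the extra-vertices route, a minimal sketch for the front face alone could look like this (the frontVerts, frontTexCoords and frontIndices names are illustrative, not from your code); each of the other five faces gets its own four vertices in the same way:
// Front face only: four vertices of its own, shared with no other face.
float frontVerts[] = {
    -halfW, -halfH, -halfW, // bottom-left  (was vertex #0)
     halfW, -halfH, -halfW, // bottom-right (was vertex #1)
     halfW,  halfH, -halfW, // top-right    (was vertex #2)
    -halfW,  halfH, -halfW  // top-left     (was vertex #3)
};
float frontTexCoords[] = {
    0.0, 0.0, // bottom-left of the image
    1.0, 0.0, // bottom-right
    1.0, 1.0, // top-right
    0.0, 1.0  // top-left
};
GLubyte frontIndices[] = { 0, 1, 2, 2, 3, 0 }; // two triangles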
Related
I'm trying to understand what I'm doing wrong when displaying two different cubes with a grid through the x and z axes. I'm using gluLookAt() to view both cubes at the same angle. I'm very confused about why the first viewport does not show the grid but the second one does. Here's my code and an example picture of why I'm confused.
def draw(c1, c2):
    glClearColor(0.7, 0.7, 0.7, 0)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    glBegin(GL_LINES)
    for edge in grid_edges:
        for vertex in edge:
            glColor3fv((0.0, 0.0, 0.0))
            glVertex3fv(grid_vertices[vertex])
    glEnd()

    glViewport(0, 0, WIDTH // 2, HEIGHT)
    glLoadIdentity()
    gluPerspective(90, (display[0] / display[1]) / 2, 0.1, 50.0)
    gluLookAt(c1.center_pos[0], c1.center_pos[1], c1.center_pos[2] + 8,
              c1.center_pos[0], c1.center_pos[1], c1.center_pos[2], 0, 1, 0)

    glPushMatrix()
    glTranslatef(c1.center_pos[0], c1.center_pos[1], c1.center_pos[2])
    glRotatef(c1.rotation[0], c1.rotation[1], c1.rotation[2], c1.rotation[3])
    glTranslatef(-c1.center_pos[0], -c1.center_pos[1], -c1.center_pos[2])
    glBegin(GL_LINES)
    for edge in c1.edges:
        for vertex in edge:
            glColor3fv((0, 0, 0))
            glVertex3fv(c1.vertices[vertex])
    glEnd()
    glPopMatrix()

    glViewport(WIDTH // 2, 0, WIDTH // 2, HEIGHT)
    glLoadIdentity()
    gluPerspective(90, (display[0] / display[1]) / 2, 0.1, 50.0)
    gluLookAt(c2.center_pos[0], c2.center_pos[1], c2.center_pos[2] + 8,
              c2.center_pos[0], c2.center_pos[1], c2.center_pos[2], 0, 1, 0)

    glPushMatrix()
    glTranslatef(c2.center_pos[0], c2.center_pos[1], c2.center_pos[2])
    glRotatef(c2.rotation[0], c2.rotation[1], c2.rotation[2], c2.rotation[3])
    glTranslatef(-c2.center_pos[0], -c2.center_pos[1], -c2.center_pos[2])
    glBegin(GL_LINES)
    for edge in c2.edges:
        for vertex in edge:
            glColor3fv((0, 0, 0))
            glVertex3fv(c2.vertices[vertex])
    glEnd()
    glPopMatrix()
OpenGL is a state machine. Once a state is set, it persists even beyond frames: if you change the viewport or set a matrix, that viewport and matrix are still in place at the beginning of the next frame. These states are not "reset" between frames. In your draw, the grid is rendered first, so from the second frame on it uses the right-half viewport and c2's view matrix left over from the previous frame, which is why it only shows up in the second viewport. You need to set the viewport and load the identity matrix at the beginning of draw:
def draw(c1, c2):
    glClearColor(0.7, 0.7, 0.7, 0)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    glViewport(0, 0, WIDTH, HEIGHT)
    glLoadIdentity()

    glBegin(GL_LINES)
    for edge in grid_edges:
        for vertex in edge:
            glColor3fv((0.0, 0.0, 0.0))
            glVertex3fv(grid_vertices[vertex])
    glEnd()

    # [...]
I have written the following function to draw a cube:
void drawCube() {
    Point vertices[8] = { Point(-1.0, -1.0, -1.0), Point(-1.0, -1.0, 1.0),
                          Point(1.0, -1.0, 1.0),   Point(1.0, -1.0, -1.0),
                          Point(-1.0, 1.0, -1.0),  Point(-1.0, 1.0, 1.0),
                          Point(1.0, 1.0, 1.0),    Point(1.0, 1.0, -1.0) };
    int faces[6][4] = {{0, 1, 2, 3}, {0, 3, 7, 4}, {0, 1, 5, 4},
                       {1, 2, 6, 5}, {3, 2, 6, 7}, {4, 5, 6, 7}};

    glBegin(GL_QUADS);
    for (unsigned int face = 0; face < 6; face++) {
        Vector v = vertices[faces[face][1]] - vertices[faces[face][0]];
        Vector w = vertices[faces[face][2]] - vertices[faces[face][0]];
        Vector normal = v.cross(w).normalised();
        glNormal3f(normal.dx, normal.dy, normal.dz);

        for (unsigned int vertex = 0; vertex < 4; vertex++) {
            switch (vertex) {
                case 0: glTexCoord2f(0, 0); break;
                case 1: glTexCoord2f(1, 0); break;
                case 2: glTexCoord2f(1, 1); break;
                case 3: glTexCoord2f(0, 1); break;
            }
            glVertex3f(vertices[faces[face][vertex]].x,
                       vertices[faces[face][vertex]].y,
                       vertices[faces[face][vertex]].z);
        }
    }
    glEnd();
}
When the cube is rendered with a light shining onto it, it appears that as I rotate the cube, the correct shading transitions only happen for around half the faces. The rest just remain a very dark shade, as if I had removed the normal calculations.
I then decided to remove a couple of faces so I could see inside the cube. The faces that are not reflecting the light correctly on the outside are doing so correctly on the inside. How can I ensure that the normal to each face points out from that face, rather than in towards the centre of the cube?
To reverse the direction of the normal, swap the order of the operands in the cross product (the cross product is anticommutative, so w × v = -(v × w)):
Vector normal = w.cross(v).normalised();
Maybe there is a more efficient way, but an approach that is, imho, quite easy to understand is the following...
Calculate the vector that points from the center of the side to the center of the cube. Call it
m = center cube - center side
Then calculate the scalar product of that vector with your normal:
x = <m, n>
The scalar product is positive if the two vectors point in the same direction with respect to the side (the angle between them is less than 90 degrees). It is negative if they point in opposite directions (the angle is bigger than 90 degrees). Then correct your normal via
if ( x > 0 ) n = -n;
to make sure it always points outwards.
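In code, using the Point and Vector types from the question above, the check could look like the sketch below. Vector::dot() and the component-wise negation are assumed helpers not shown in the original; also, because the normal is perpendicular to the face, any corner of the face yields the same scalar product as the face's center, so a corner is used here:
// Inside drawCube()'s per-face loop, after computing `normal` and before glNormal3f:
Vector m = Point(0.0, 0.0, 0.0) - vertices[faces[face][0]]; // m = center cube - (a point on the) side
double x = m.dot(normal);                                   // x = <m, n>; dot() is an assumed helper
if (x > 0)
    normal = Vector(-normal.dx, -normal.dy, -normal.dz);    // n = -n, so it points outwards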
I'm trying to map a texture onto a cube which is basically a triangle strip with 8 vertices and 14 indices:
static const GLfloat vertices[24] = // 8 corners x 3 components
{
    -1.f, -1.f, -1.f,
    -1.f, -1.f,  1.f,
    -1.f,  1.f, -1.f,
    -1.f,  1.f,  1.f,
     1.f, -1.f, -1.f,
     1.f, -1.f,  1.f,
     1.f,  1.f, -1.f,
     1.f,  1.f,  1.f
};
static const GLubyte indices[14] =
{
    2, 0, 6, 4, 5, 0, 1, 2, 3, 6, 7, 5, 3, 1
};
As you can see, it starts drawing the back with the 4 indices 2, 0, 6, 4, then the bottom with the 3 indices 5, 0, 1, and then continues with single triangles: 1, 2, 3 is a triangle on the left, 3, 6, 7 is a triangle on the top, and so on...
I'm a bit lost on how to map a texture onto this cube. This is my texture (you get the idea):
I managed to get the back textured and can somehow add something to the front, but the other 4 faces are totally messed up, and I'm a bit confused about how the shader deals with the triangles with regard to the texture coordinates.
The best I could achieve is this:
You can clearly see the triangles on the sides. And these are my texture coordinates:
static const GLfloat texCoords[] = {
    0.5, 0.5,
    1.0, 0.5,
    0.5, 1.0,
    1.0, 1.0,
    0.5, 0.5,
    0.5, 1.0,
    1.0, 0.5,
    1.0, 1.0,
    // ... ?
};
But whenever I try to add more coordinates, it creates something totally different, and I can't really explain why. Any idea how to improve this?
The mental obstacle you're running into is assuming that your cube has only 8 vertices. Yes, there are only 8 corner positions. But each face adjacent to a corner shows a different part of the image and hence has a different texture coordinate at that corner.
Vertices are tuples of
position
texture coordinate
…
any other attribute you can come up with
As soon as one of those attributes changes, you're dealing with an entirely different vertex. For you this means you're dealing with 8 corner positions, but 3 different vertices at each corner, because three faces with different texture coordinates meet at that corner. So you actually need 24 vertices, making up 6 different faces which share no vertices at all.
To make things easier for you as a beginner, don't put vertex positions and texture coordinates into different arrays. Instead write it like this:
struct vertex_pos3_tex2 {
    float x, y, z;
    float s, t;
} cube_vertices[24] =
{
    /* 24 vertices of position and texture coordinate */
};
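With the interleaved layout, one stride and two offsets then describe both attributes. Here is a minimal sketch, assuming the data lives in a bound VBO and the shader uses generic attribute locations 0 (position) and 1 (texture coordinate); neither location is named in the original:
#include <stddef.h> /* offsetof */

GLsizei stride = sizeof(struct vertex_pos3_tex2);

/* position: 3 floats starting at member x */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                      (const GLvoid *)offsetof(struct vertex_pos3_tex2, x));
glEnableVertexAttribArray(0);

/* texture coordinate: 2 floats starting at member s */
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                      (const GLvoid *)offsetof(struct vertex_pos3_tex2, s));
glEnableVertexAttribArray(1);

/* 6 faces x 2 triangles x 3 indices = 36 indices into the 24 vertices */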
I need to draw a rectangle in OpenGL ES 2.0, but assign its color in the fragment shader. I will draw two triangles to represent the rectangle. This is similar to texture mapping, but without the texture coordinates. The idea is to take a specific pixel and calculate its position. On the other side there is an array containing 16 elements of 0s and 1s. The pixel's position is computed and compared to the array element at the corresponding location (possible by taking the remainder of division by 16, which yields 0 - 15). If the element at that index is 1, the pixel is colored using a specific color from a fragment shader uniform; otherwise it is left uncolored.
The following diagram illustrates the problem:
The array is passed as a uniform to the fragment shader, and it seems that it is not passed correctly. This is the code where the uniform is set:
void GeometryEngine::drawLineGeometry(QGLShaderProgram *program)
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId);

    // Bind attribute position in vertex shader to the program
    int vertexLocation = program->attributeLocation("position");
    program->enableAttributeArray(vertexLocation);
    glVertexAttribPointer(vertexLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);

    const int array[16] = {1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0};
    int arrayLocation = program->attributeLocation("array");
    program->setUniformValueArray(arrayLocation, array, 16);

    // Draw triangles (6 points)
    glDrawArrays(GL_TRIANGLES, 0, 6);
}
Fragment shader:
uniform int array[16];

void main()
{
    gl_FragColor = vec4(array[0], 0.0, 0.0, 1.0);
    /*
    int index = int(mod(gl_FragCoord.x, 16.0));
    if (array[index] == 1)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    else
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
    */
}
The commented lines should produce a specific color for the fragments, but I cannot test that part because the array is not passed correctly. The rectangle is black instead of red (array[0] is 1, not 0).
I am trying to draw one line with ar[], which contains the point coords, and to use the color described in clr[]. Can anyone tell me what is wrong with my ver function? When I run it, only a white screen comes up.
void ver(void)
{
    glClear(GL_DEPTH_BUFFER_BIT);
    glPushMatrix();

    GLfloat ar[] = { 0.25, 0.25,
                     0.5,  0.25 };
    GLfloat clr[] = { 1.0, 0.0, 0.0 };

    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, ar);
    glColorPointer(3, GL_FLOAT, 0, clr);
    glDrawElements(GL_LINES, 2, GL_FLOAT, ar);
    glDrawElements(GL_LINES, 3, GL_FLOAT, clr);

    glPopMatrix();
    glutSwapBuffers();
}
Your calls to glDrawElements() are wrong. The last argument must be an array of indices, not the vertex or color data, the index type must be an unsigned integer type (GL_FLOAT is not valid there), and you only need to call it once. So you need something like this:
GLuint indices[] = { 0, 1 };
glDrawElements(GL_LINES, 2, GL_UNSIGNED_INT, indices);
Also, I think you need to expand your color array to have one color per vertex, so it should look more like:
GLfloat clr[] = { 1.0, 0.0, 0.0,
                  1.0, 0.0, 0.0 };
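Putting both fixes together, the whole function could look like the sketch below. The added GL_COLOR_BUFFER_BIT clear is my assumption (an uncleared color buffer is another common cause of a blank window); everything else follows the question's setup:
void ver(void)
{
    GLfloat ar[] = { 0.25, 0.25,   /* vertex 0 */
                     0.5,  0.25 }; /* vertex 1 */
    GLfloat clr[] = { 1.0, 0.0, 0.0,   /* color for vertex 0 */
                      1.0, 0.0, 0.0 }; /* color for vertex 1 */
    GLuint indices[] = { 0, 1 }; /* one line = two indices */

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); /* color clear assumed, not in the original */

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, ar);
    glColorPointer(3, GL_FLOAT, 0, clr);

    glDrawElements(GL_LINES, 2, GL_UNSIGNED_INT, indices); /* one call, unsigned index type */

    glutSwapBuffers();
}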