I have written the following function to draw a cube:
void drawCube() {
    Point vertices[8] = { Point(-1.0, -1.0, -1.0), Point(-1.0, -1.0, 1.0), Point(1.0, -1.0, 1.0), Point(1.0, -1.0, -1.0),
                          Point(-1.0, 1.0, -1.0), Point(-1.0, 1.0, 1.0), Point(1.0, 1.0, 1.0), Point(1.0, 1.0, -1.0) };
    int faces[6][4] = {{0, 1, 2, 3}, {0, 3, 7, 4}, {0, 1, 5, 4}, {1, 2, 6, 5}, {3, 2, 6, 7}, {4, 5, 6, 7}};
    glBegin(GL_QUADS);
    for (unsigned int face = 0; face < 6; face++) {
        Vector v = vertices[faces[face][1]] - vertices[faces[face][0]];
        Vector w = vertices[faces[face][2]] - vertices[faces[face][0]];
        Vector normal = v.cross(w).normalised();
        glNormal3f(normal.dx, normal.dy, normal.dz);
        for (unsigned int vertex = 0; vertex < 4; vertex++) {
            switch (vertex) {
                case 0: glTexCoord2f(0, 0); break;
                case 1: glTexCoord2f(1, 0); break;
                case 2: glTexCoord2f(1, 1); break;
                case 3: glTexCoord2f(0, 1); break;
            }
            glVertex3f(vertices[faces[face][vertex]].x, vertices[faces[face][vertex]].y, vertices[faces[face][vertex]].z);
        }
    }
    glEnd();
}
When the cube is rendered with a light shining onto it, it appears that as I rotate the cube, the correct shading transitions only happen for around half the faces. The rest just remain a very dark shade, as if I had removed the normal calculations.
I then decided to remove a couple of faces to see inside the cube. The faces that are not reflecting the light correctly on the outside, are doing so correctly on the inside. How can I ensure that the normal to each face is pointing out from that face, rather than in towards the centre of the cube?
To reverse the direction of the normal, swap the order of the operands in the cross product:
Vector normal = w.cross(v).normalised();
Maybe there is a more efficient way, but an IMHO quite easy-to-understand way is the following:
Calculate the vector that points from the center of the side to the center of the cube. Call it
m = center cube - center side
Then calculate the scalar product of that vector with your normal:
x = < m , n >
The scalar product is positive if the two vectors point towards the same side of the face (the angle between them is less than 90 degrees). It is negative if they point towards opposite sides (the angle is greater than 90 degrees). Then correct your normal via
if ( x > 0 ) n = -n;
to make sure it always points outwards.
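The whole correction can be sketched out in plain C++. This is a minimal, self-contained version of the idea above; the `Vec3` helpers and `orientOutwards` name are illustrative, not from the original code:

```cpp
#include <cassert>

// Minimal 3D vector helpers (names are illustrative, not from the original code).
struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 neg(Vec3 a)           { return {-a.x, -a.y, -a.z}; }

// Flip a face normal if it points towards the cube's centre.
// faceCenter: average of the face's four corners; cubeCenter: average of all corners.
Vec3 orientOutwards(Vec3 n, Vec3 faceCenter, Vec3 cubeCenter) {
    Vec3 m = sub(cubeCenter, faceCenter); // m = center cube - center side
    if (dot(m, n) > 0.0)                  // normal points inwards -> flip it
        n = neg(n);
    return n;
}
```

For the unit cube above, the cube centre is the origin, and a face centre is just the average of its four corner positions.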
I have several planes (2, for example). I need to rotate each of them around a shared center and then translate each one to different coordinates. For example, I have two planes:
Mesh CreateMeshPlane(Vector3D bottomleft, size_t numvertices_x,
                     size_t numvertices_y, size_t max_x,
                     size_t max_y)
{
    // Just generates the VBO for the plane.
}
planeZY = CreateMeshPlane({0.0, 0.0, 0.0}, 100, 100, 2, 2);
planeZY1 = CreateMeshPlane({0.0, 0.0, 0.0}, 100, 100, 2, 2);
I need to rotate them around some origin, then move them to some point. What do I do?
planeZY.setupMesh();
planeZY.position = QVector3D(0.0, 0.0, 2.0);
planeZY.origin = QVector3D(1, 1, 1);
planeZY.rotation = QVector3D(0.0f, -90.0f, 0.0f);
planeZY.updateModelMatrix();
planeZY1.setupMesh();
planeZY1.origin = QVector3D(1, 1, 1);
planeZY1.position = QVector3D(2.0, 0.0, 0.0);
planeZY1.rotation = QVector3D(0.0f, -90.0f, 0.0f);
planeZY1.updateModelMatrix();
void updateModelMatrix()
{
    this->modelMatrix.setToIdentity();
    this->modelMatrix.translate(this->origin);
    this->modelMatrix.rotate(this->rotation.z(), QVector3D(0.f, 0.f, 1.f));
    this->modelMatrix.rotate(this->rotation.y(), QVector3D(0.f, 1.f, 0.f));
    this->modelMatrix.rotate(this->rotation.x(), QVector3D(1.f, 0.f, 0.f));
    this->modelMatrix.translate(this->position - this->origin);
}
There is a problem: the local axes change direction after the rotation, so my plane moves in the wrong direction. How do I rotate objects in OpenGL and then move them along the global axes?
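A common remedy (my suggestion, not from the post above) is to apply the world-space translation outside the rotation, i.e. build the matrix as translate(position) · translate(origin) · rotate · translate(-origin), so that position moves the object along the global axes while the rotation still happens about origin. A minimal self-contained 2D sketch (plain C++ rather than Qt, with made-up helper names) of why the order matters:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

struct P2 { double x, y; };

// Rotate a point about the coordinate origin by `deg` degrees (counter-clockwise).
P2 rotate(P2 p, double deg) {
    double r = deg * PI / 180.0;
    return {p.x * std::cos(r) - p.y * std::sin(r),
            p.x * std::sin(r) + p.y * std::cos(r)};
}

// Global-axis move: translate AFTER rotating -> the offset is applied in world space.
P2 rotateThenTranslate(P2 p, double deg, P2 t) {
    P2 q = rotate(p, deg);
    return {q.x + t.x, q.y + t.y};
}

// Local-axis move: translate BEFORE rotating -> the offset itself gets rotated.
P2 translateThenRotate(P2 p, double deg, P2 t) {
    return rotate({p.x + t.x, p.y + t.y}, deg);
}
```

The two compositions land the same point in different places, which is exactly the symptom of the plane "moving in the wrong direction".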
I am using C++, OpenGL and GLUT. I am trying to make 5 houses that are rotated properly like this:
However, whenever I try to use glRotatef, I don't seem to be able to get the proper coordinates, or something is off somewhere in my code. Furthermore, I set the background color to white, but it's still all black; how come? For now I have the houses set to white to work around this. Here is my code:
#include <GL/glut.h>

typedef int vert2D[2];

void initialize()
{
    glClearColor(1.0, 1.0, 1.0, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(10.0, 215.0, 0.0, 250.0);
    glMatrixMode(GL_MODELVIEW);
}

void drawHouse(vert2D* sq, vert2D* tri)
{
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_LINE_LOOP);
    glVertex2iv(sq[0]);
    glVertex2iv(sq[1]);
    glVertex2iv(sq[2]);
    glVertex2iv(sq[3]);
    glEnd();
    glBegin(GL_LINE_LOOP);
    glVertex2iv(tri[0]);
    glVertex2iv(tri[1]);
    glVertex2iv(tri[2]);
    glEnd();
}

void render()
{
    vert2D sqPts[4] = { {115, 150}, {115, 125}, {100,125}, {100,150} };
    vert2D triPts[3] = { {120, 150}, {95,150}, {108,160} };
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glMatrixMode(GL_MODELVIEW);
    drawHouse(sqPts, triPts);
    glPushMatrix();
    glTranslatef(1.0, 0.0, 0.0);
    glRotatef(-10.0, 0.0, 0.0, 1.0);
    drawHouse(sqPts, triPts);
    glTranslatef(1.0, 0.0, 0.0);
    glRotatef(-10.0, 0.0, 0.0, -1.0);
    drawHouse(sqPts, triPts);
    glPopMatrix();
    glPushMatrix();
    glTranslatef(-1.0, 0.0, 0.0);
    glRotatef(10.0, 0.0, 0.0, 1.0);
    drawHouse(sqPts, triPts);
    glTranslatef(-1.0, 0.0, 0.0);
    glRotatef(10.0, 0.0, 0.0, 1.0);
    drawHouse(sqPts, triPts);
    glPopMatrix();
    glFlush();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(640, 480);
    glutCreateWindow("TestMeOut");
    initialize();
    glutDisplayFunc(render);
    glutMainLoop();
}
Let's answer the simpler question of why your background is still black, first:
You simply never call glClear(GL_COLOR_BUFFER_BIT) to clear the color buffer. glClearColor only tells OpenGL "hey, the next time glClear is called with (at least) the GL_COLOR_BUFFER_BIT, I want the color buffer to be cleared to white." but you never actually clear the buffer.
Now, onto how we can draw the houses with their correct locations and orientations:
You should first start by defining your house's vertices in a sensible local coordinate system/frame that is suitable for transforming them in further steps. Currently, with how you define your house's vertices, it is hard to do any transformations on those (mainly because linear transformations like rotation are always relative to the coordinate system's origin).
So, let's change that. Let's define the origin (0, 0) for your house to be the center of the bottom/base line of the house. And let's also define that your house's quad has a side length of 10 "units":
vert2D sqPts[4] = {
{-5, 0}, // <- bottom left
{ 5, 0}, // <- bottom right
{ 5,10}, // <- top right
{-5,10} // <- top left
};
Now, for the roof of the house, we assume the same coordinate system (with (0, 0) being the center of the house's base/bottom line), so we start at Y=10:
vert2D triPts[3] = {
{-6, 10}, // <- left
{ 6, 10}, // <- right
{ 0, 12} // <- top
};
Next, we need to define where (0, 0) should be in our "world", so to speak. One definition could be: (0, 0) should be the center of the bottom of the viewport/screen and the viewport should have a length of 100 "units". Right now, we don't care about a correct aspect ratio when the viewport's width does not equal the viewport's height. This can be added later.
Starting from the clip space coordinate system, we can transform this clip space into our own "world space" by using these transformations:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef(0.0, -1.0, 0.0); // <- move the origin down to the bottom of the viewport
glScalef(1.0 / 50.0, 1.0 / 50.0, 1.0); // <- "scale down" the clip space to cover more space in the viewport
Now, the above part is essentially what gluOrtho2D() does as well, but highlighting the actual coordinate system transformation steps is useful here.
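As a sanity check, the translate-plus-scale pair above should behave like gluOrtho2D(-50, 50, 0, 100). A small sketch with a hypothetical `worldToClip` helper (my own, not part of the answer) maps a few world points and confirms where they land in clip space:

```cpp
#include <cassert>

struct P2 { double x, y; };

// World -> clip space, mimicking glTranslatef(0, -1, 0) followed by
// glScalef(1/50, 1/50, 1). The matrix stack applies the scale to the
// point FIRST, then the translate.
P2 worldToClip(P2 w) {
    P2 s = {w.x / 50.0, w.y / 50.0}; // glScalef part
    return {s.x, s.y - 1.0};         // glTranslatef part
}
```

World (0, 0) lands at clip (0, -1), the bottom centre of the viewport, and world (0, 100) at clip (0, 1), the top centre, exactly as intended.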
Now that we defined our house's local coordinate system and our "world" coordinate system, we can rotate and translate the world coordinate system such that the houses appear at their correct locations and orientations in our world.
In order to draw 5 houses, we just use a for-loop:
glMatrixMode(GL_MODELVIEW);
for (int i = -2; i <= 2; i++) { // <- 5 steps
    glPushMatrix();
    glRotatef(i * 20.0, 0.0, 0.0, 1.0);
    glTranslatef(0.0, 50.0, 0.0);
    drawHouse(sqPts, triPts);
    glPopMatrix();
}
So, starting from our world coordinate system, we transform it by rotating the appropriate amount around its origin (0, 0) for the house with index i to have the correct rotation, and then translate the coordinate system by 50 units along its (now rotated) Y axis.
These two transformations will now result in a house being drawn at the desired location. So, repeat that 5 times in total with differing rotation angles, and you're done.
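To sanity-check the loop, you can evaluate by hand where each house's local origin ends up: glTranslatef moves (0, 0) to (0, 50), and glRotatef then spins that point by i·20° about the world origin, landing it at (−50·sin(i·20°), 50·cos(i·20°)). A small numeric sketch (the helper name is made up):

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

struct P2 { double x, y; };

// Where the house's local origin (0, 0) lands after
// glRotatef(i * 20, 0, 0, 1) followed by glTranslatef(0, 50, 0).
P2 houseBase(int i) {
    double a = i * 20.0 * PI / 180.0; // glRotatef angle in radians (CCW)
    // Rotate the translated point (0, 50) counter-clockwise by a:
    return {-50.0 * std::sin(a), 50.0 * std::cos(a)};
}
```

House i = 0 sits straight up at (0, 50); the neighbours fan out left and right along the arc, which matches the desired picture.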
I draw many lines to form a grid. I want to see the grid rotated on its X-axis, but I never get the intended result. I tried glRotatef and gluLookAt, which do not work the way I want. Please see the pictures below.
this is the grid
this is how I want to see it
Please find the code below, no matter how I set the gluLookAt, the grid result won't be in the perspective I want.
#include <GL/glut.h>

void display() {
    ...
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_LINES);
    for (int i = 0; i < 720; i += 3)
    {
        glColor3f(0, 1, 1);
        glVertex3f(linePoints[i], linePoints[i + 1], linePoints[i + 2]);
    }
    glEnd();
    glFlush();
}

void init() {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glColor3f(1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 1, 40);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, -2, 1.25, 0, 0, 0, 0, 1, 0);
}
Let's assume that you have a grid in the xy plane of the world:
glColor3f(0, 1, 1);
glBegin(GL_LINES);
for (int i = 0; i <= 10; i ++)
{
    // horizontal
    glVertex3f(-50.0f + i*10.0f, -50.0f, 0.0f);
    glVertex3f(-50.0f + i*10.0f,  50.0f, 0.0f);
    // vertical
    glVertex3f(-50.0f, -50.0f + i*10.0f, 0.0f);
    glVertex3f( 50.0f, -50.0f + i*10.0f, 0.0f);
}
glEnd();
Ensure that the distance to the far plane of the projection is large enough (see gluPerspective). All geometry that is not between the near and far planes of the viewing frustum is clipped.
Furthermore, ensure that the aspect ratio (4.0 / 3.0) matches the aspect ratio of the viewport rectangle (window).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 4.0 / 3.0, 1, 200);
For the use of gluLookAt, the up vector of the view has to be perpendicular to the grid. If the grid is arranged parallel to the xy plane, then the up vector is the z axis (0, 0, 1).
The target (center) is the center of the grid (0, 0, 0).
The point of view (eye position) ought to be above and in front of the grid, for instance (0, -55, 50). Note that this eye position assumes a grid with a bottom left of (-50, -50, 0) and a top right of (50, 50, 0).
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0, -55.0, 50.0, 0, 0, 0, 0, 0, 1);
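Internally, gluLookAt builds an orthonormal camera basis from its three arguments: the forward vector f = normalize(center − eye), the side vector s = normalize(f × up), and a recomputed up u = s × f. A sketch of that construction for the values above (vector helpers are my own), which also shows why up must not be parallel to the viewing direction, since the cross product would then collapse to zero:

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

V3 sub(V3 a, V3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V3 cross(V3 a, V3 b)   { return {a.y * b.z - a.z * b.y,
                                 a.z * b.x - a.x * b.z,
                                 a.x * b.y - a.y * b.x}; }
double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V3 normalize(V3 a)     { double l = std::sqrt(dot(a, a));
                         return {a.x / l, a.y / l, a.z / l}; }

// The basis gluLookAt derives from eye, center and up.
struct Basis { V3 f, s, u; };

Basis lookAtBasis(V3 eye, V3 center, V3 up) {
    V3 f = normalize(sub(center, eye)); // viewing direction
    V3 s = normalize(cross(f, up));     // side vector; zero-length if f is parallel to up
    V3 u = cross(s, f);                 // recomputed, exactly perpendicular up
    return {f, s, u};
}
```

For eye (0, -55, 50), center (0, 0, 0) and up (0, 0, 1), the side vector comes out as the world x axis, so the camera looks down at the grid without any roll.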
I'm trying to map a texture onto a cube, which is basically a triangle strip with 8 vertices and 14 indices:
static const GLfloat vertices[24] =
{
-1.f,-1.f,-1.f,
-1.f,-1.f, 1.f,
-1.f, 1.f,-1.f,
-1.f, 1.f, 1.f,
1.f,-1.f,-1.f,
1.f,-1.f, 1.f,
1.f, 1.f,-1.f,
1.f, 1.f, 1.f
};
static const GLubyte indices[14] =
{
2, 0, 6, 4, 5, 0, 1, 2, 3, 6, 7, 5, 3, 1
};
As you can see, it starts drawing the back with the 4 indices 2, 0, 6, 4, then the bottom with the 3 indices 5, 0, 1, and then continues with single triangles: 1, 2, 3 is a triangle on the left, 3, 6, 7 is a triangle on the top, and so on...
I'm a bit lost how to map a texture on this cube. This is my texture (you get the idea):
I manage to get the back textured and can somehow add something to the front, but the other 4 faces are totally messed up, and I'm a bit confused about how the shader deals with the triangles with regard to the texture coordinates.
The best I could achieve is this:
You can clearly see the triangles on the sides. And these are my texture coordinates:
static const GLfloat texCoords[] = {
0.5, 0.5,
1.0, 0.5,
0.5, 1.0,
1.0, 1.0,
0.5, 0.5,
0.5, 1.0,
1.0, 0.5,
1.0, 1.0,
// ... ?
};
But whenever I try to add more coordinates, it creates something totally different, and I can't really explain why. Any idea how to improve this?
The mental obstacle you're running into is assuming that your cube has only 8 vertices. Yes, there are only 8 corner positions. But each face adjacent to a corner shows a different part of the image and hence needs a different texture coordinate at that corner.
Vertices are tuples of
position
texture coordinate
…
any other attribute you can come up with
As soon as one of those attributes changes, you're dealing with an entirely different vertex. For you this means: there are 8 corner positions, but 3 different vertices at each corner, because three faces with different texture coordinates meet there. So you actually need 24 vertices, making up 6 faces which share no vertices at all.
To make things easier for you as a beginner, don't put vertex positions and texture coordinates into different arrays. Instead write it like this:
struct vertex_pos3_tex2 {
    float x, y, z;
    float s, t;
} cube_vertices[24] =
{
    /* 24 vertices of position and texture coordinate */
};
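A sketch of how such a 24-vertex array might be filled, assuming per-face texture coordinates running (0,0) to (1,1) and a corner numbering matching the vertex array from the question; the face index table here is one possible choice, not taken from the original:

```cpp
#include <cassert>
#include <vector>

struct vertex_pos3_tex2 {
    float x, y, z;
    float s, t;
};

// Build 24 vertices (6 faces x 4 corners). Corners are NOT shared between
// faces, so each face can carry its own texture coordinates.
std::vector<vertex_pos3_tex2> buildCubeVertices() {
    static const float corners[8][3] = {
        {-1,-1,-1}, {-1,-1, 1}, {-1, 1,-1}, {-1, 1, 1},
        { 1,-1,-1}, { 1,-1, 1}, { 1, 1,-1}, { 1, 1, 1},
    };
    // Four corner indices per face (one possible counter-clockwise ordering).
    static const int faces[6][4] = {
        {1, 5, 7, 3}, // front  (z = +1)
        {4, 0, 2, 6}, // back   (z = -1)
        {0, 1, 3, 2}, // left   (x = -1)
        {5, 4, 6, 7}, // right  (x = +1)
        {3, 7, 6, 2}, // top    (y = +1)
        {0, 4, 5, 1}, // bottom (y = -1)
    };
    static const float uv[4][2] = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };

    std::vector<vertex_pos3_tex2> out;
    for (int f = 0; f < 6; ++f)
        for (int c = 0; c < 4; ++c) {
            const float* p = corners[faces[f][c]];
            out.push_back({p[0], p[1], p[2], uv[c][0], uv[c][1]});
        }
    return out;
}
```

Each face then gets the full (0,0)-(1,1) texture rectangle, and no vertex is shared between faces.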
I'm creating a cuboid in OpenGL using vertex arrays, but my texture seems to have gone a bit wrong.
Here's my code:
float halfW = getW() / 2;
float halfH = getH() / 2;
GLubyte myIndices[]={
1, 0, 2, //front
2, 0, 3,
4, 5, 7, //back
7, 5, 6,
0, 4, 3, //left
3, 4, 7,
5, 1, 6, //right
6, 1, 2,
7, 6, 3, //up
3, 6, 2,
1, 0, 5, //down
5, 0, 4
};
float myVertices[] = {
-halfW, -halfH, -halfW, // Vertex #0
halfW, -halfH, -halfW, // Vertex #1
halfW, halfH, -halfW, // Vertex #2
-halfW, halfH, -halfW, // Vertex #3
-halfW, -halfH, halfW, // Vertex #4
halfW, -halfH, halfW, // Vertex #5
halfW, halfH, halfW, // Vertex #6
-halfW, halfH, halfW // Vertex #7
};
float myTexCoords[]= {
1.0, 1.0, //0
0.0, 1.0, //1
0.0, 0.0, //2
1.0, 0.0, //3
0.0, 0.0, //4
1.0, 0.0, //5
1.0, 1.0, //6
0.0, 1.0 //7
};
Here's the problem:
The front and back are rendering fine, but the top, bottom, left and right faces are skewed.
Where am I going wrong?
Your texture coordinates and vertex indices are off. Since Vertex #2 (with the coordinates halfW, halfH, -halfW) has texture coordinate (0, 0), Vertex #6 (with the coordinates halfW, halfH, halfW) should not have the texture coordinate (1, 1). This puts vertices with the texture coordinates (0, 0) and (1, 1) along the same edge, and that leads to trouble.
The problem with a cube is that one 2D texture coordinate per vertex is not enough for mapping a texture onto a cube like this (however you arrange them, you will end up with weird sides).
The solution is to either add extra triangles so that no side shares vertices with another side, or to look into cube maps, where you specify the texture coordinates in 3D (just like the vertices). I would suggest using cube maps, since that avoids adding redundant vertices.
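For reference, a cube-map sampler selects the face from the lookup direction's largest-magnitude component, which is why the 8 corner positions themselves can serve directly as 3D texture coordinates for a cube centred at the origin. A small sketch of that selection rule (the helper and its return strings are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <string>

// Which cube-map face a direction vector selects: the axis with the
// largest absolute component, signed. This mirrors the major-axis rule
// OpenGL uses for cube-map sampling; ties here resolve in x-y-z order.
std::string cubeMapFace(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0 ? "+X" : "-X";
    if (ay >= az)             return y >= 0 ? "+Y" : "-Y";
    return z >= 0 ? "+Z" : "-Z";
}
```

So a vertex at (halfW, halfH, halfW), normalized or not, looks up into whichever face its dominant axis picks, and the hardware interpolates the direction across each triangle.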