Following is the part of the code that I am using to draw a rectangle.
I can see the rectangle on the display, but I am confused about the quadrants and coordinates on the display plane.
int position_loc = glGetAttribLocation(ProgramObject, "vertex");
int color_loc = glGetAttribLocation(ProgramObject, "color_a");
GLfloat Vertices[4][4] = {
-0.8f, 0.6f, 0.0f, 1.0f,
-0.1f, 0.6f, 0.0f, 1.0f,
-0.8f, 0.8f, 0.0f, 1.0f,
-0.1f, 0.8f, 0.0f, 1.0f
};
GLfloat red[4] = {1, 0, 1, 1};
glUniform4fv(glGetUniformLocation(ProgramObject, "color"), 1, red);
PrintGlError();
glEnableVertexAttribArray(position_loc);
PrintGlError();
printf("\nAfter Enable Vertex Attrib Array");
glBindBuffer(GL_ARRAY_BUFFER, VBO);
PrintGlError();
glVertexAttribPointer(position_loc, 4, GL_FLOAT, GL_FALSE, 0, 0);
PrintGlError();
glBufferData(GL_ARRAY_BUFFER, sizeof Vertices, Vertices, GL_DYNAMIC_DRAW);
PrintGlError();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
PrintGlError();
So keeping in mind the above vertices
GLfloat Vertices[4][4] = {
x, y, p, q,
x1, y1, p1, q1,
x2, y2, p2, q2,
x3, y3, p3, q3,
};
what are p,q .. p1,q1 .. ? On what basis are these last two components determined?
And how do they affect x,y or x1,y1 .. and so on?
OpenGL works with a 3-dimensional coordinate system extended by a homogeneous coordinate. Usually the values are denoted [x,y,z,w], with w being the homogeneous part. Before any projection, [x,y,z] describe the position of the point in 3D space. w will usually be 1 for positions and 0 for directions.
During rendering, OpenGL handles transformations (vertex shader) resulting in a new point [x', y', z', w']. The w component is needed here because it allows us to describe all transformations, especially translations and (perspective) projections as 4x4 matrices. Have a look at 1 and 2 for details about transformations.
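For illustration, a minimal glm sketch of this (the numbers are made-up example values):

#include <glm/glm.hpp>

glm::mat4 T = glm::mat4(1.0f);
T[3] = glm::vec4(2.0f, 3.0f, 4.0f, 1.0f);        // 4th column = translation by (2,3,4)

glm::vec4 point     = T * glm::vec4(1, 1, 1, 1); // -> (3, 4, 5, 1): positions move
glm::vec4 direction = T * glm::vec4(1, 1, 1, 0); // -> (1, 1, 1, 0): directions don't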
Afterwards, clipping happens and the resulting vector gets divided by its w component, giving the so-called normalized device coordinates [x'/w', y'/w', z'/w', 1]. These NDC coordinates are what is actually used to draw to the screen. The first and second components (x'/w' and y'/w') are multiplied by the viewport size to get the final pixel coordinates. The third component (z'/w', aka depth) is used during depth-testing to determine which points are in front. The last coordinate has no purpose here anymore.
In your case, without using any transformations or projections, you are drawing directly in NDC space, thus z can be used to order triangles in depth and w always has to be 1.
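To make those steps concrete, here is a rough sketch of the math in plain C++ (not actual OpenGL calls; a viewport starting at (0,0) and the default [0,1] depth range are assumed):

struct Vec4 { float x, y, z, w; };

// clip coordinates [x', y', z', w'] -> window coordinates
void toWindowCoords(Vec4 clip, float viewportW, float viewportH,
                    float &pixelX, float &pixelY, float &depth)
{
    // perspective divide -> normalized device coordinates in [-1, 1]
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;
    float ndcZ = clip.z / clip.w;
    // viewport transform -> pixel coordinates; depth remapped to [0, 1]
    pixelX = (ndcX * 0.5f + 0.5f) * viewportW;
    pixelY = (ndcY * 0.5f + 0.5f) * viewportH;
    depth  = ndcZ * 0.5f + 0.5f;
}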
Related
I'm trying to visualize normals of triangles.
I have created a triangle to use as the visual representation of the normal but I'm having trouble aligning it to the normal.
I have tried using glm::lookAt but the triangle ends up in some weird position and rotation after that. I am able to move the triangle in the right place with glm::translate though.
Here is my code to create the triangle which is used for the visualization:
// xyz rgb
float vertex_data[] =
{
0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f,
0.25f, 0.0f, 0.025f, 0.0f, 1.0f, 1.0f,
0.25f, 0.0f, -0.025f, 0.0f, 1.0f, 1.0f,
};
unsigned int index_data[] = {0, 1, 2};
glGenVertexArrays(1, &nrmGizmoVAO);
glGenBuffers(1, &nrmGizmoVBO);
glGenBuffers(1, &nrmGizmoEBO);
glBindVertexArray(nrmGizmoVAO);
glBindBuffer(GL_ARRAY_BUFFER, nrmGizmoVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_data), vertex_data, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, nrmGizmoEBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(index_data), index_data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
and here is the code to draw the visualizations:
for(unsigned int i = 0; i < worldTriangles->size(); i++)
{
Triangle *tri = &worldTriangles->at(i);
glm::vec3 wp = tri->worldPosition;
glm::vec3 nrm = tri->normal;
nrmGizmoMatrix = glm::mat4(1.0f);
//nrmGizmoMatrix = glm::translate(nrmGizmoMatrix, wp);
nrmGizmoMatrix = glm::lookAt(wp, wp + nrm, glm::vec3(0.0f, 1.0f, 0.0f));
gizmoShader.setMatrix(projectionMatrix, viewMatrix, nrmGizmoMatrix);
glBindVertexArray(nrmGizmoVAO);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
}
When using only glm::translate, the triangles appear in right positions but all point in the same direction. How can I rotate them so that they point in the direction of the normal vector?
Your code doesn't work because lookAt is intended to be used as the view matrix, thus it returns the transform from world space to local (camera) space. In your case you want the reverse -- from local (triangle) space to world space. Taking the inverse of lookAt should solve that.
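In code, a minimal sketch of that fix inside your draw loop (same up vector as in your code; note that lookAt degenerates when the normal is parallel to the up vector):

nrmGizmoMatrix = glm::inverse(
    glm::lookAt(wp, wp + nrm, glm::vec3(0.0f, 1.0f, 0.0f)));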
However, I'd take a step back and look at (haha) the bigger picture. What I notice about your approach:
It's very inefficient -- you issue a separate call with a different model matrix for every single normal.
You don't even need the entire model matrix. A triangle is a 2-d shape, so all you need is two basis vectors.
I'd instead generate all the vertices for the normals in a single array, and then use glDrawArrays to draw that. For the actual calculation, observe that we have one degree of freedom when it comes to aligning the triangle along the normal. Your lookAt code resolves that DoF rather arbitrarily. A better way to resolve it is to require that the triangle faces the camera, thus maximizing the visible area. The calculation is straightforward:
// inputs: vertex output array, normal position, normal direction, camera position
// (assumes #include <glm/glm.hpp> and using namespace glm, so that
// vec3, normalize and cross resolve to the glm versions)
void emit_normal(std::vector<vec3> &v, const vec3 &p, const vec3 &n, const vec3 &c) {
    static const float length = 0.25f, width = 0.025f;
    vec3 t = normalize(cross(n, c - p)); // tangent: perpendicular to the normal and the view direction
    v.push_back(p);                      // base of the normal
    v.push_back(p + length*n + width*t); // tip corners, offset along the tangent
    v.push_back(p + length*n - width*t);
}
// ... in your code, generate normals through:
std::vector<vec3> normals;
for(unsigned int i = 0; i < worldTriangles->size(); i++) {
    Triangle *tri = &worldTriangles->at(i);
    emit_normal(normals, tri->worldPosition, tri->normal, camera_position);
}
// ... create VAO for normals ...
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)normals.size());
Note, however, that this would make the normal mesh camera-dependent -- which is desirable when rendering normals with triangles. Most CAD software draws normals with lines instead, which is much simpler and avoids many problems:
void emit_normal(std::vector<vec3> &v, const vec3 &p, const vec3 &n) {
    static const float length = 0.25f;
    v.push_back(p);
    v.push_back(p + length*n);
}
// ... in your code, generate normals through:
std::vector<vec3> normals;
for(unsigned int i = 0; i < worldTriangles->size(); i++) {
    Triangle *tri = &worldTriangles->at(i);
    emit_normal(normals, tri->worldPosition, tri->normal);
}
// ... create VAO for normals ...
glDrawArrays(GL_LINES, 0, (GLsizei)normals.size());
I followed a guide to draw a Lorenz system in 2D.
I now want to extend my project and switch from 2D to 3D. As far as I know I have to replace the gluOrtho2D call with either gluPerspective or glFrustum. Unfortunately, whatever I try is useless.
This is my initialization code:
// set the background color
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// set the foreground (pen) color
//glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glColor4f(1.0f, 1.0f, 1.0f, 0.02f);
// enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// enable point smoothing
glEnable(GL_POINT_SMOOTH);
glPointSize(1.0f);
// set up the viewport
glViewport(0, 0, 400, 400);
// set up the projection matrix (the camera)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
//gluOrtho2D(-2.0f, 2.0f, -2.0f, 2.0f);
gluPerspective(45.0f, 1.0f, 0.1f, 100.0f); //Sets the frustum to perspective mode
// set up the modelview matrix (the objects)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
while to draw I do this:
glClear(GL_COLOR_BUFFER_BIT);
// draw some points
glBegin(GL_POINTS);
// go through the equations many times, drawing a point for each iteration
for (int i = 0; i < iterations; i++) {
// compute a new point using the strange attractor equations
float xnew=z*sin(a*x)+cos(b*y);
float ynew=x*sin(c*y)+cos(d*z);
float znew=y*sin(e*z)+cos(f*x);
// save the new point
x = xnew;
y = ynew;
z = znew;
// draw the new point
glVertex3f(x, y, z);
}
glEnd();
// swap the buffers
glutSwapBuffers();
The problem is that I don't see anything in my window. It's all black. What am I doing wrong?
The name "gluOrtho2D" is a bit misleading. In fact gluOrtho2D is probably the most useless function ever. The definition of gluOrtho2D is
void gluOrtho2D(
GLdouble left,
GLdouble right,
GLdouble bottom,
GLdouble top )
{
glOrtho(left, right, bottom, top, -1, 1);
}
i.e. the only thing it does is call glOrtho with default values for near and far. Wow, how complicated and ingenious </sarcasm>.
Anyway, even if it's called ...2D, there's nothing 2-dimensional about it. The projection volume still has a depth range of [-1 ; 1] which is perfectly 3-dimensional.
Most likely the points generated lie outside the projection volume, which has a Z value range of [0.1 ; 100] in your case, but your points are confined to the range [-1 ; 1] in each axis (and IIRC the Z range of the strange attractor is entirely positive). So you have to apply some translation to see something. I suggest you choose
near = 1
far = 10
and apply a translation of Z: -5.5 to move things into the center of the viewing volume.
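A minimal sketch of those values plugged into your initialization code (the translation belongs on the modelview matrix, before the points are drawn):

// projection: near = 1, far = 10
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, 1.0f, 1.0f, 10.0f);

// modelview: shift everything into the middle of the viewing volume
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.5f);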
I'm writing a small 2D game-engine (educative purpose) in C++ and OpenGL 3.3, while writing the code I noted that almost all sprites (if not all) use the same vertexBuffer values:
const float vertexBuffer[] =
{
-1.0f, -1.0f, 0.0f, 1.0f,
-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f
};
That is 2 triangles (if using VBO indexing) in model space that form a square, the indexBuffer goes like:
const unsigned short indexBuffer[] = { 0, 1, 2, 2, 0, 3 };
Why am I using the same model-space values for all my sprites? Well, I use a different MVP matrix for each of them:
P (projection): The orthogonal camera transform, usually with the same width and height of the glContext.
V (view): A lookAt transformation; the camera just sits on the z axis, looking perpendicularly at the xy plane. This is also used to move the camera (follow the player, etc.).
M (model): this matrix is created using transformations belonging to each sprite:
glm::mat4 model = <translate> * <rotate> * <scale>
Where:
<translate> is the position of the sprite in screen-space
<rotate> the rotation of the sprite
<scale> The size of the sprite in pixels divided by 2. Why? Each corner of the model-space square corresponds to a vertex, and the square has its center at the origin, so if our sprite is 250x250 pixels, we scale by 125px to each side in each axis, thus transforming our model-space square into a screen-space square (a concrete sketch follows below).
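For instance, a concrete sketch with glm of how such a model matrix could be built (position, angle and the 250x250 size are made-up example values; needs <glm/gtc/matrix_transform.hpp>):

glm::vec2 position(400.0f, 300.0f); // sprite center in screen space
float angle = 0.5f;                 // rotation in radians

glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(position, 0.0f))
                * glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 0.0f, 1.0f))
                * glm::scale(glm::mat4(1.0f), glm::vec3(125.0f, 125.0f, 1.0f)); // 250px / 2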
So, if I have 5 sprites I'll call glDrawElements 5 times, with differents MVPs and Textures each time, but same vertexBuffer, indexBuffer and uvCoordinates.
Do you think this is an error-prone approach to keep using in the future? Or should I instead apply the <translate> and <scale> transformations directly to the vertices when creating them, and leave the model matrix with only the rotation?
I am trying to understand how to specify texture coordinates for a GL_QUAD_STRIP.
I have managed to get things working for one quad:
float vertices[] = { 0.0f, 0.0f, 1.0f, +1.0f, 0.0f, 0.0f, // bottom line
0.0f, 1.0f, 1.0f, +1.0f, 1.0f, 0.0f}; // top line
unsigned int indices[] = {2, 0, // x = 0
3, 1}; // x = +1
float textureCoordinates[] = { 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f};
...
glBindBuffer(GL_ARRAY_BUFFER, 0); // unbinds any buffer object previously bound
glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibufferid);
glDrawElements(GL_QUAD_STRIP, 4, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
And here is how the result looks (white rectangle with image, rest is drawn on to help explain):
However I do not understand the logic behind the choice of textureCoordinates[] :-(.
The first texture coordinate is (1,0); I would assume that this corresponds to the lower right corner?
Also I would assume that when OpenGL reads the first index, 2, it uses this to look up the vertex (0,1,1): the upper left corner. Next it reads the first texture coordinate, (1,0).
But as mentioned above I would assume this to be the lower right corner of the texture!?
However, the texture is shown unrotated, so this cannot be the case!?
Just like the vertices, the texture coordinates are also selected based on the indices used by glDrawElements(). So the first texture coordinate is not (1,0), but (1,1) because the first index is 2. Vertices and coordinates would be according to the following table, where i = index, v = vertex and t = texture coordinate. (I'll only take the x and y coordinates into consideration for the vertices, as the z coordinate doesn't really matter in this case.)
i   v       t
2   (0,1)   (1,1)
0   (0,0)   (1,0)
3   (1,1)   (0,1)
1   (1,0)   (0,0)
If we draw this on a piece of paper, we can see that the coordinates make more sense once the indices are taken into account. (I recommend that you do this! I had to, to understand what was going on.) Notice in the table how the y coordinates match perfectly between the vertex and texture coordinate for a given index, but the x coordinates don't: when the vertex has x = 0, the texture coordinate has x = 1 and vice versa. I assume this would make the image appear mirrored around the y axis instead of rotated in any way. What does the original image look like? Is it mirrored compared to what we see in the image you posted, so that the building is on the left? If so, the texture coordinates would be the explanation. In that case, texture coordinates 2 and 3 should switch places.
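For reference, a sketch of an array where each texture coordinate matches its vertex's (x, y) one-to-one, following the table above (this removes the x mirroring entirely, which also swaps the first pair):

float textureCoordinates[] = { 0.0f, 0.0f,   // for vertex 0 at (0,0)
                               1.0f, 0.0f,   // for vertex 1 at (1,0)
                               0.0f, 1.0f,   // for vertex 2 at (0,1)
                               1.0f, 1.0f};  // for vertex 3 at (1,1)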
In case you are curious, you could take a look at the OpenGL 2.1 specification on page 18, Figure 2.5(a), to see why the vertex indices were selected as they were. It would create a quad with vertices specified in a counterclockwise direction when projected on the screen. This is good because the initial value for glFrontFace() is GL_CCW, which means we see the front face of the polygons in the rendered image and the polygons would not have been culled if culling was enabled (see glCullFace()). (Culling is not enabled by default though, so it may or may not have mattered in your case.)
I hope this helped. Do comment if something is unclear!
I'm trying to rotate a 2D image using OpenGL ES. After loading it I can move it around the screen, but when I try to rotate the image around its center it behaves oddly, as the rotation center is the lower-left screen corner, not the center of the image itself.
Googling around I've read that I could push the current matrix, change whatever I need (translate the coords, rotate the image, etc.) and then pop the matrix, coming back to the previous matrix state... I did that, but it still doesn't work the way I want (although at least now the origin of the rotation no longer seems to be the lower-left corner...).
Any thoughts? Anyone could spot where my problem is?
Any help would be much appreciated! Thanks!!
void drawImage(Image *img)
{
GLfloat fX = (GLfloat)img->x;
GLfloat fY = (GLfloat)(flipY(img->m_height+img->y));
GLfloat coordinates[] = { 0, img->m_textureHeight, img->m_textureWidth, img->m_textureHeight, 0, 0, img->m_textureWidth, 0 };
GLfloat vertices[] =
{
fX, fY, 0.0,
img->m_width+fX, fY, 0.0,
fX, img->m_height+fY, 0.0,
img->m_width+fX, img->m_height+fY, 0.0
};
//Push and change the matrix, translate coords, rotate and scale image and then pop the matrix
glPushMatrix(); //push the current (modelview) matrix
glTranslatef((int)fX, (int)fY, 0.0); //translate the modelview matrix
// rotate
if (img->rotation != 0.0f )
glRotatef( -img->rotation, 0.0f, 0.0f, 1.0f );
// scale
if (img->scaleX != 1.0f || img->scaleY != 1.0f)
glScalef( img->scaleX, img->scaleY, 1.0f );
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glColor4f(1.0, 0.0, 0.0, 1.0);
glBindTexture(GL_TEXTURE_2D, img->m_name);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glPopMatrix();
}
Most importantly, you need to understand how this operation works.
Before doing a rotation you have to translate yourself to the rotation origin, and only then apply the rotation.
Check out this article which explains it well.
The simple breakdown is:
Move the object to the origin.
Rotate.
Move the object back.
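Applied to your drawImage(), a minimal sketch could look like this (cx and cy, the image center, are computed here and are not in your original code; note your original code also translated by (fX, fY) even though the vertices already include that offset):

GLfloat cx = fX + img->m_width / 2.0f;
GLfloat cy = fY + img->m_height / 2.0f;

glPushMatrix();
glTranslatef(cx, cy, 0.0f);                   // 1. move the pivot to the image center
glRotatef(-img->rotation, 0.0f, 0.0f, 1.0f);  // 2. rotate around it
glTranslatef(-cx, -cy, 0.0f);                 // 3. move back
// ... draw the quad with the original vertices ...
glPopMatrix();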