i have a "normal" opengl scene, and want to overlay this scene with a simple quad.
gl.glMatrixMode(GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glMatrixMode(GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glTranslatef(-0.8f, 0.0f, 0.0f);
// draw background of the color cube
gl.glBegin(GL_QUADS);
gl.glColor3f(1.0f, 0.5f, 0.5f);
gl.glVertex3f(0, 0, 1.0f);
gl.glVertex3f(0.5f, 0, 1.0f);
gl.glVertex3f(0.5f, 0.5f, 1.0f);
gl.glVertex3f(0, 0.5f, 1.0f);
gl.glEnd();
gl.glMatrixMode(GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL_MODELVIEW);
gl.glPopMatrix();
As you can see in the screenshot, the overlaid quad is distorted; the colored lines are the axes.
Now the question: how can I set an aspect ratio (like in gluPerspective) for my matrix so that the quad renders as a proper square?
You need to account for the size and shape of the viewport. After projection, coordinates end up in normalized device coordinates: the bottom-left corner of the viewport is (-1, -1) and the top-right corner is (1, 1), regardless of the viewport's actual shape. Your projection matrix therefore needs to compensate for the shape of the viewport to make your quad square.
The ratio of the width to the height of the viewport is known as the "aspect ratio".
See this page; the important sections for your problem are "The Field of View" and "Image Aspect Ratio". There's too much content to post a usable excerpt here, but you should have no trouble finding resources on projection-matrix aspect ratios and how to compute a correct projection matrix.
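As a concrete illustration, here is a minimal sketch in plain C-style GL (with JOGL you would prefix the calls with gl.) that bakes the viewport aspect ratio into the temporary projection pushed for the overlay; viewportWidth and viewportHeight are assumed to hold your current viewport size:
// sketch: overlay projection that compensates for a non-square viewport
float aspect = (float)viewportWidth / (float)viewportHeight;
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// one unit in y now covers the same number of pixels as one unit in x
glOrtho(-aspect, aspect, -1.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// ... draw the overlay quad here (e.g. at z = 0); a 0.5 x 0.5 quad now appears square ...
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();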
Related
My OpenGL application draws a circle as an oval instead of a circle. My code is:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, 800, 0.0f, 400, 0.0f, 1.0f);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glColor3f(255.0, 255.0, 255.0);
drawRect(racket_left_x, racket_left_y, racket_width, racket_height);
drawRect(racket_right_x, racket_right_y, racket_width, racket_height);
glPopMatrix();
// drawBall();
//drawBall2();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
drawBall();
glPopMatrix();
glPopMatrix();
glutSwapBuffers();
How can I fix this?
I've tried changing the glMatrixMode calls, but that doesn't seem to work. Thanks.
The projection matrix transforms all vertex data from eye coordinates to clip coordinates.
These clip coordinates are then transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.
The normalized device coordinates are in the range (-1, -1, -1) to (1, 1, 1).
With an orthographic projection, the eye-space coordinates are mapped linearly to NDC.
An orthographic projection can be set up with glOrtho. If you want to set up a projection that lets you draw in window-pixel units, you have to do it like this:
int wndWidth = 800;
int wndHeight = 400;
glOrtho( 0.0, (float)wndWidth, 0.0, (float)wndHeight, -1.0, 1.0 );
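For example, with glOrtho(0, 800, 0, 400, -1, 1), an eye-space point at (400, 200, 0) maps to (0, 0, 0) in NDC, i.e. the center of the viewport.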
If the viewport is not square, this has to be considered when mapping the coordinates:
float aspect = (float)width / (float)height;
glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
You do set up a proper window-sized projection matrix before you draw the rectangles (drawRect):
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, 800, 0.0f, 400, 0.0f, 1.0f);
.....
drawRect( ..... );
But you "clear" the projection matrix and do not care about the aspect of the view before you draw the circle (drawBall).
Change your code somehow like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float aspect = 800.0f/400.0f;
glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
drawBall();
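The question doesn't show drawBall; purely as a hypothetical illustration, a circle drawn in the aspect-corrected coordinate system stays round because one unit spans the same number of pixels in x and y:
#include <math.h>   // cosf, sinf

// hypothetical circle in the coordinate system set up by
// glOrtho(-aspect, aspect, -1, 1, -1, 1)
void drawBall(void)
{
    const int   segments = 64;
    const float radius   = 0.25f;

    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(0.0f, 0.0f);                       // center
    for (int i = 0; i <= segments; ++i) {
        float a = 2.0f * 3.14159265f * (float)i / (float)segments;
        glVertex2f(radius * cosf(a), radius * sinf(a));
    }
    glEnd();
}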
By the way, glPushMatrix pushes an element onto the matrix stack and glPopMatrix pops an element from it. In OpenGL there is one matrix stack for each matrix mode (see glMatrixMode). The matrix modes are GL_MODELVIEW, GL_PROJECTION, and GL_TEXTURE, and all matrix operations are applied to the stack you have selected with glMatrixMode.
This means that both glPopMatrix calls at the end of your code snippet should not be there.
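For reference, a balanced save/restore of both stacks looks roughly like this sketch (aspect and drawBall as in the snippets above):
glMatrixMode(GL_PROJECTION);
glPushMatrix();                 // save the projection matrix
glLoadIdentity();
glOrtho(-aspect, aspect, -1.0, 1.0, -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                 // save the modelview matrix
glLoadIdentity();

drawBall();                     // draw with the temporary matrices

glMatrixMode(GL_MODELVIEW);
glPopMatrix();                  // one pop per push, on the same stack
glMatrixMode(GL_PROJECTION);
glPopMatrix();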
Is it possible to draw something in OpenGL on the drawing scene with giving window pixel coordinates?
For example, I'd like to draw a single point in a 400x400 window (e.g. in the middle of that window). Is there any quick way to set everything up so I could just type:
glVertex3f(200.0, 200.0, 1.0);?
You need to set up an orthographic projection matrix for that first.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, WindowWidth, WindowHeight, 0.0, -1.0, 1.0); // y runs top-down, like window coordinates
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
You can then render in window coordinates.
glPointSize(5.0f);
glBegin(GL_POINTS);
glVertex3f(100.0f, 100.0f, 0.0f); // z = 0 lies inside the depth range set above
glEnd();
This should render a point with a diameter of 5 pixels at window coordinates (100, 100).
Do note that this old way of rendering is deprecated and you should use VBOs and the like, but it is still good for testing.
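Putting the pieces together for the 400x400 window from the question, a minimal GLUT-style display function might look like the following sketch; the window size constants and the double-buffered GLUT setup are assumptions:
const int WindowWidth  = 400;
const int WindowHeight = 400;

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // y grows downward, matching typical window/mouse coordinates
    glOrtho(0.0, WindowWidth, WindowHeight, 0.0, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glPointSize(5.0f);
    glBegin(GL_POINTS);
    glVertex2f(200.0f, 200.0f);   // middle of the window
    glEnd();

    glutSwapBuffers();
}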
I've got a small scene with a loaded mesh, a ground plane and a skybox. I am generating a cube, and using the vertex positions as the cubemap texture co-ordinates.
Horizontal rotation (about the y-axis) works perfectly, and the world movement is aligned with the skybox. Vertical rotation (about the camera's x-axis) doesn't seem to match the movement of the other objects, except that, strangely, when the camera is looking at the center of a cube face everything seems aligned. In other words, the movement is non-linear, and I'll try my best to illustrate the effect with some images:
First, the horizontal movement which as far as I can tell is correct:
Facing forward:
Facing left at almost 45 degrees:
Facing left at 90 degrees:
And now the vertical movement which seems to have some discrepancy in movement:
Facing forward again:
Notice the position of the ground plane in relation to the skybox in this image. I rotated slightly left to make it more apparent that the Sun is being obscured when it shouldn't be.
Facing slightly down:
Finally, a view straight up to show the view is correctly centered on the (skybox) cube face.
Facing straight up:
Here's my drawing code, with the ground plane and mesh drawing omitted for brevity. (Note that the cube in the center is a loaded mesh, and isn't generated by the same function for the skybox).
void MeshWidget::draw() {
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glRotatef(-rot_[MOVE_CAMERA][1], 0.0f, 1.0f, 0.0f);
glRotatef(-rot_[MOVE_CAMERA][0], 1.0f, 0.0f, 0.0f);
glRotatef(-rot_[MOVE_CAMERA][2], 0.0f, 0.0f, 1.0f);
glDisable(GL_DEPTH_TEST);
glUseProgramObjectARB(shader_prog_ids_[2]);
glBindBuffer(GL_ARRAY_BUFFER, SkyBoxVBOID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vec3), BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, SkyIndexVBOID);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
glUseProgramObjectARB(shader_prog_ids_[0]);
glEnable(GL_DEPTH_TEST);
glPopMatrix();
glTranslatef(0.0f, 0.0f, -4.0f + zoom_factor_);
glRotatef(rot_[MOVE_CAMERA][0], 1.0f, 0.0f, 0.0f);
glRotatef(rot_[MOVE_CAMERA][1], 0.0f, 1.0f, 0.0f);
glRotatef(rot_[MOVE_CAMERA][2], 0.0f, 0.0f, 1.0f);
glPushMatrix();
// Transform light to be relative to world, not camera.
glRotatef(rot_[MOVE_LIGHT][1], 0.0f, 1.0f, 0.0f);
glRotatef(rot_[MOVE_LIGHT][0], 1.0f, 0.0f, 0.0f);
glRotatef(rot_[MOVE_LIGHT][2], 0.0f, 0.0f, 1.0f);
float lightpos[] = {10.0f, 0.0f, 0.0f, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION, lightpos);
glPopMatrix();
if (show_ground_) {
// Draw ground...
}
glPushMatrix();
// Transform and draw mesh...
glPopMatrix();
}
And finally, here's the GLSL code for the skybox, which generates the texture co-ordinates:
Vertex shader:
void main()
{
vec4 vVertex = vec4(gl_ModelViewMatrix * gl_Vertex);
gl_TexCoord[0].xyz = normalize(vVertex).xyz;
gl_TexCoord[0].w = 1.0;
gl_TexCoord[0].x = -gl_TexCoord[0].x;
gl_Position = gl_Vertex;
}
Fragment shader:
uniform samplerCube cubeMap;
void main()
{
gl_FragColor = textureCube(cubeMap, gl_TexCoord[0].xyz);
}
I'd also like to know if using quaternions for all camera and object rotations would help.
If you need any more information (or images), please ask!
I think you should be generating your skybox texture lookup from a world-space vector (gl_Vertex?), not a view-space vector (vVertex).
I'm assuming your skybox coordinates are already defined in world space, as I don't see a model-matrix transform before drawing it (only the camera rotations). In that case you should be sampling the skybox texture based on the world-space position of a vertex; it doesn't need to be transformed by the camera. You're already transforming the vertex by the camera, so you shouldn't need to transform the lookup vector as well.
Try replacing normalize(vVertex) with normalize(gl_Vertex) and see if that improves things.
Also, I might get rid of the x = -x thing; I suspect that was put in to compensate for the texture rotating in the wrong direction originally?
I'd also like to know if using quaternions for all camera and object rotations would help.
Help how? It doesn't offer any new functionality over using matrices. I've heard arguments both ways as to whether matrices or quaternions have better performance, but I see no need to use them here.
I have the same question as in the title :/
I do something like:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0, -zoom);
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
//Here model render :/
And in my app the camera rotates, not the model :/
Presumably the reason you believe it's your camera moving and not your model is that all the objects in the scene are moving together?
After rendering your model and before rendering other objects, are you resetting the MODELVIEW matrix? In other words, are you doing another glLoadIdentity() or glPopMatrix() after you render the model you're talking about and before rendering other objects?
Because if not, whatever transformations you applied to that model will also apply to other objects rendered, and it will be as if you rotated the whole world (or the camera).
I think there may be another problem with your code though:
glTranslatef(0.0f, 0, -zoom);
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
//Here model render :/
Are you trying to rotate your model around the point (0, 0, -zoom)?
Normally, in order to rotate around a certain point (x,y,z), you do:
Translate (x,y,z) to the origin (0,0,0) by translating by the vector (-x,-y,-z)
Perform rotations
Translate the origin back to (x,y,z) by translating by the vector (x,y,z)
Draw
If you are trying to rotate around the point (0, 0, -zoom), you are missing step 3.
So try adding this before you render your model:
glTranslatef(0.0f, 0, zoom); // translate back from origin
On the other hand if you are trying to rotate the model around the origin (0,0,0), and also move it along the z-axis, then you will want your translation to come after your rotation, as #nonVirtual said.
And don't forget to reset the MODELVIEW matrix before you draw other objects. So the whole sequence would be something like:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// if you want to rotate around (0, 0, -zoom):
glTranslatef(0.0f, 0, -zoom);
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
glTranslatef(0.0f, 0, zoom); // translate back from origin

// or, if you want to rotate around (0, 0, 0):
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
glTranslatef(0.0f, 0, -zoom);
//Here model render :/
glLoadIdentity();
// translate/rotate for other objects if necessary
// draw other objects
You use GL_MODELVIEW when defining the transformations for your objects (and for the camera/view), and GL_PROJECTION for the projection itself.
I believe it has to do with the order in which you are applying your transformations. Try switching the order of the calls to glRotatef and glTranslatef. As it is now you are moving the model outward from the origin and then rotating it about the origin, which gives the appearance of the object orbiting around the camera. If you instead rotate it while it is still at the origin and then move it, it should give you the desired result. Hope that helps.
How can I retrieve the current positions of the vertices after they have been transformed? I have the following code....
How can I get the positions of the modelVertices after the transform has been applied?
I am really after the screen coordinates, so I can tell whether a vertex has been clicked with the mouse.
glMatrixMode(GL_MODELVIEW);
// save model transform matrix
GLfloat currentModelMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelMatrix);
// clear the model transform matrix
glLoadIdentity();
// rotate the x and y axis of the model transform matrix
glRotatef(rotateX, 1.0f, 0.0f, 0.0f);
glRotatef(rotateY, 0.0f, 1.0f, 0.0f);
// reapply the previous transforms
glMultMatrixf(currentModelMatrix);
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// since each vertex is defined in 3D instead of 2D, parameter one has been
// upped from 2 to 3 to respect this change.
glVertexPointer(3, GL_FLOAT, 0, modelVertices);
glEnableClientState(GL_VERTEX_ARRAY);
GLU has the functions gluProject() and gluUnProject():
http://www.opengl.org/sdk/docs/man/xhtml/gluProject.xml
If your platform doesn't have GLU, you can always grab the code from Mesa and see how it is implemented.
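A minimal sketch of that approach, reading the current matrices back and projecting a single object-space vertex (the helper name and its parameters are placeholders):
#include <GL/glu.h>

// Project one object-space vertex to window coordinates using the
// matrices that are currently set. Returns GL_TRUE on success.
GLint projectVertex(GLdouble objX, GLdouble objY, GLdouble objZ,
                    GLdouble *winX, GLdouble *winY, GLdouble *winZ)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    return gluProject(objX, objY, objZ, model, proj, viewport,
                      winX, winY, winZ);
}
Note that gluProject returns window coordinates with the origin at the bottom-left, so to compare against mouse coordinates you usually flip the y value (viewport[3] - mouseY).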
You could use the feedback buffer.
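For completeness, a rough sketch of the feedback-buffer route (legacy GL; the buffer size and the drawModel call are placeholders):
GLfloat feedback[1024];                     // big enough for this sketch
glFeedbackBuffer(1024, GL_3D, feedback);    // GL_3D: x, y, z per vertex in window space
glRenderMode(GL_FEEDBACK);

drawModel();                                // hypothetical: re-issue the geometry

GLint count = glRenderMode(GL_RENDER);      // number of floats written
// feedback[] now holds tokens (GL_POINT_TOKEN, GL_POLYGON_TOKEN, ...)
// followed by window-space vertex coordinates; parse the first 'count' values.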