I am writing an application that draws a 2D triangle and rotates it around its z-axis depending on its position. Its middle (t1.tx, t1.ty) is constantly being changed when the triangle is dragged with the mouse. The problem is that when I drag the triangle to another location, instead of staying where it is and rotating in place, it rotates along a circular path around its center point.
What am I doing wrong? I want it to rotate in place.
void drawTriangle() {
    glBegin(GL_POLYGON);
    glColor3f((float)200/255, (float)200/255, (float)200/255);
    glVertex2f(t1.tx, t1.ty + .2);      // top point of triangle
    glVertex2f(t1.tx - .2, t1.ty - .2); // left point
    glVertex2f(t1.tx + .2, t1.ty - .2); // right point
    glEnd();
}
void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glPushMatrix();
    glTranslatef(t1.tx, t1.ty, 0); // move matrix to triangle's current center point
    glRotatef(theta, 0, 0, 1.0);   // rotate on z-axis
    drawTriangle();
    glPopMatrix();
    glutPostRedisplay();
    glutSwapBuffers();
}
It has been a WHILE since I did this stuff, but based on what you are saying, my first guess would be to swap the order of the glTranslatef and glRotatef. You want to rotate in model space then transform to world space.
I fixed it. The problem was that I was translating the matrix and then drawing a triangle whose vertices were already offset by its position, so the translation was applied twice. What I did instead was draw the triangle around the origin of the matrix, so it now moves and rotates at its current position as the matrix is translated. Thank you for the help.
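For reference, a minimal sketch of that fix: the triangle is defined around the origin, and display() stays exactly as above, so glTranslatef/glRotatef alone position and spin it.

// Triangle defined around the model origin; its on-screen position
// now comes only from the glTranslatef call in display().
void drawTriangle() {
    glBegin(GL_POLYGON);
    glColor3f(200.0f / 255, 200.0f / 255, 200.0f / 255);
    glVertex2f(0.0f, 0.2f);    // top point
    glVertex2f(-0.2f, -0.2f);  // left point
    glVertex2f(0.2f, -0.2f);   // right point
    glEnd();
}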
I need to draw a triangle in 2D at the center of the screen and rotate it about that center using only passive mouse motion. Any reference code?
void myDisplay(void)
{
    glClear(GL_COLOR_BUFFER_BIT); // clear the screen
    glBegin(GL_TRIANGLES);
    glVertex2i(100, 50); // draw three points
    glVertex2i(100, 130);
    glVertex2i(150, 130);
    glEnd();
    glFlush();
}
Assume we have a triangle with one vertex lying at the center of the screen (NDC = [0,0]) that is symmetric about the x-axis. The problem of making the triangle point at the mouse is then the problem of finding the angle between the vector from the center of the screen to the mouse position and the x-axis ([1,0]). A solution can look like the following:
Vec2 mousePosition = queryMousePositionAsDoneInYourAPI();
Vec2 dir = normalize(mousePosition - Vec2(width / 2, height / 2));
float angleRad = acos(dot(dir, Vec2(1, 0))); // angle to the x-axis, in radians
float angleDeg = angleRad * 180.0f / M_PI;   // glRotatef expects degrees

glLoadIdentity();
glRotatef(angleDeg, 0, 0, 1); // rotate about the z-axis
drawTriangle();               // draws the triangle around the origin
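Note that acos only returns angles in [0, π], so this tracks the mouse over half a revolution only. If full 360° tracking is needed, a signed angle from atan2 works better (a small variation on the sketch above, using the same hypothetical Vec2 helpers):

Vec2 d = mousePosition - Vec2(width / 2, height / 2);
float angleDeg = atan2(d.y, d.x) * 180.0f / M_PI; // signed angle in degrees, -180..180
glRotatef(angleDeg, 0, 0, 1);                     // note: negate d.y first if your window's y-axis points down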
I want to rotate two spheres continuously, each with a different rotation.
My code currently doesn't seem to get either to rotate.
Here is my code:
void renderScene(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glTranslated(0.0, 1.2, -6);
    glRotatef(angle, 0, 1.2, -6);
    glutSolidSphere(1, 50, 50);
    glPopMatrix();
    glPushMatrix();
    glTranslatef(0.0, -1.5, -6);
    glRotatef(angle, 0, 1.5, -6);
    glutSolidSphere(0.4, 50, 50);
    glPopMatrix();
    angle=+0.1;
    glutSwapBuffers();
}
Is there something I haven't added?
I have tried putting the rotate call everywhere, and it only seems to work outside of the glPushMatrix/glPopMatrix pair, which is not what I want.
angle=+0.1; // assign the value +0.1 to angle
Did you mean:
angle += 0.1; // increment angle by 0.1
glutSolidSphere draws a sphere around the origin (0,0,0). glRotatef also rotates around an axis that passes through the origin. Now, as you probably know, rotating a sphere around its own center does not change the appearance of the sphere at all.
What you should do is first rotate and then translate. Like this:
glPushMatrix();
glRotatef(angle,0,1.2,-6);
glTranslated(0.0,1.2,-6);
glutSolidSphere(1,50,50);
glPopMatrix();
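Putting the two fixes together (rotate before translate, and actually incrementing the angle), a rough sketch of renderScene could look like the following. Note that I've swapped the odd (0, 1.2, -6) rotation axis for a plain y-axis, and the glutPostRedisplay call assumes the animation isn't already driven by an idle callback:

void renderScene(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // first sphere: rotate first, then translate, so it orbits the origin visibly
    glPushMatrix();
    glRotatef(angle, 0, 1, 0);
    glTranslated(0.0, 1.2, -6);
    glutSolidSphere(1, 50, 50);
    glPopMatrix();

    // second sphere with its own transform
    glPushMatrix();
    glRotatef(angle, 0, 1, 0);
    glTranslatef(0.0, -1.5, -6);
    glutSolidSphere(0.4, 50, 50);
    glPopMatrix();

    angle += 0.1f;       // increment, not assign
    glutSwapBuffers();
    glutPostRedisplay(); // request the next frame
}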
I am wondering if gluLookAt together with glFrustum is distorting the rendered picture.
This is how a scene is rendered:
And here's the code that rendered it.
InitCamera is called once and should, as I understand it now, set up a matrix as if I were looking from a position 2 units above and 3 units in front of the origin, towards the origin. glFrustum is also used in order to create a perspective projection.
void InitCamera() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 2, 3,
              0, 0, 0,
              0, 1, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1, 1,
              -1, 1,
               1, 1000.0);
    glMatrixMode(GL_MODELVIEW);
}
Then TheScene is what actually draws the picture:
void TheScene() {
glClear(
GL_COLOR_BUFFER_BIT |
GL_DEPTH_BUFFER_BIT
);
glMatrixMode(GL_MODELVIEW);
// Draw red circle around origin and radius 2 units:
glColor3d(1,0,0);
glBegin(GL_LINE_LOOP);
for (double i = 0; i<=2 * M_PI; i+=M_PI / 20.0) {
glVertex3d(std::sin(i) * 2.0, 0, std::cos(i) * 2.0);
}
glEnd();
// draw green sphere at origin:
glColor3d(0,1,0);
glutSolidSphere(0.2,128, 128);
// draw pink sphere a bit away
glPushMatrix();
glColor3d(1,0,1);
glTranslated(8, 3, -10);
glutSolidSphere(0.8, 128, 128);
glPopMatrix();
SwapBuffers(hDC_opengl);
}
The green ball should be drawn at the origin, at the center of the red circle around it. But looking at it just feels weird and gives me the impression that the green ball is not in the center at all.
Also, the pink ball should, imho, be drawn as a perfect circle, not as an ellipse.
So, am I wrong, and the picture is drawn correctly, or am I setting up something wrong?
Your expectations are simply wrong
The perspective projection of a 3D circle (if the circle is fully visible) is an ellipse; however, the projection of the center of the circle is NOT, in general, the center of that ellipse.
The outline of the perspective projection of a sphere is in general a conic section, i.e. it can be a circle, an ellipse, a parabola or a hyperbola depending on the position of the viewpoint, projection plane and sphere in 3D. The reason is that the outline of the sphere can be imagined as a cone starting from the viewpoint and touching the sphere, intersected with the projection plane.
Of course, if you're looking at a circle with a perfectly perpendicular camera, the center of the circle will project onto the center of the projected circle. In the same manner, if your camera is pointing exactly at a sphere, the sphere's outline will be a circle, but those are special cases, not the general case.
These differences between the centers are more evident with strong perspective (wide-angle) cameras. With a parallel projection this apparent distortion is absent (i.e. the projection of the center of a circle is exactly the center of the projection of the circle).
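To make the first point concrete, here is a small worked example of my own (not from the original answer). Project from an eye at the origin onto the plane z = -1, and take a circle of radius 1 lying in the plane y = -1, centered at (0, -1, -3). Its nearest and farthest points along the view direction are (0, -1, -2) and (0, -1, -4):

near point   (0, -1, -2):  y' = -1 / 2 = -0.5
far point    (0, -1, -4):  y' = -1 / 4 = -0.25
circle center (0, -1, -3): y' = -1 / 3 ≈ -0.333

The projected ellipse is symmetric about x' = 0, so its center sits midway between the two edge projections, at y' = (-0.5 + -0.25) / 2 = -0.375, which is not where the projected circle center (≈ -0.333) lands.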
To see the green sphere in the centre of the screen with a perfect circle around it you need to change the camera location like so:
gluLookAt (
0, 3, 0,
0, 0, 0,
0, 0, 1
);
Not sure what's causing the distortion of the purple sphere though.
The perspective is correct; it just looks distorted because of how things fall together here.
Try this for gluLookAt, and play around a bit more:
gluLookAt (
0, 2 , 10,
0, 0 , 0,
0, 1 , 0
);
The way I tried it out was with a setup that allows me to adjust the position and view direction with the mouse, so you get real time motion. Your scene looks fine when I move around. If you want I can get you the complete code so you can do that too, but it's a bit more than I want to shove into an answer here.
I am having trouble with a camera class I am trying to use in my program. When I change the camera_target of the gluLookAt call, my whole terrain is rotating instead of just the camera rotating like it should.
Here is some code from my render method:
camera->Place();
ofSetColor(255, 255, 255, 255);
//draw axis lines
//x-axis
glBegin(GL_LINES);
glColor3f(1.0f,0.0f,0.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(100.0f, 0.0f,0.0f);
glEnd();
//y-axis
glBegin(GL_LINES);
glColor3f(0.0f,1.0f,0.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(0.0f, 100.0f,0.0f);
glEnd();
//z-axis
glBegin(GL_LINES);
glColor3f(0.0f,0.0f,1.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(0.0f, 0.0f,100.0f);
glEnd();
glColor3f(1,1,1);
terrain->Draw();
And the rotate and place methods from my camera class:
void Camera::RotateCamera(float h, float v) {
    hRadians += h;
    vRadians += v;
    cam_target.y = cam_position.y + (float)(radius * sin(vRadians));
    cam_target.x = cam_position.x + (float)(radius * cos(vRadians) * cos(hRadians));
    cam_target.z = cam_position.z + (float)(radius * cos(vRadians) * sin(hRadians));
    cam_up.x = cam_position.x - cam_target.x;
    cam_up.y = ABS(cam_position.y + (float)(radius * sin(vRadians + PI/2)));
    cam_up.z = cam_position.z - cam_target.z;
}

void Camera::Place() {
    // position, camera target, up vector
    gluLookAt(cam_position.x, cam_position.y, cam_position.z,
              cam_target.x, cam_target.y, cam_target.z,
              cam_up.x, cam_up.y, cam_up.z);
}
The problem is that the whole terrain is moving around the camera, whereas the camera should just be rotating.
Thanks for any help!
EDIT - Found some great tutorials and, taking the answers here into account, I made a better camera class. Thanks guys.
From the POV of the terrain, yes, the camera is rotating. But, since your view is from the POV of the camera, when you rotate the camera, it appears that the terrain is rotating. This is the behavior that gluLookAt() is intended to produce. If there is something else that you expected, you will need to rotate only the geometry that you want rotated, and not try to rotate using gluLookAt().
Update 1: Based on the discussion below, try this:
void Camera::RotateCamera(float h, float v)
{
    hRadians += h;
    vRadians += v;
    cam_norm.x = cos(vRadians) * sin(hRadians);
    cam_norm.y = -sin(vRadians);
    cam_norm.z = cos(vRadians) * cos(hRadians);
    cam_up.x = sin(vRadians) * sin(hRadians);
    cam_up.y = cos(vRadians);
    cam_up.z = sin(vRadians) * cos(hRadians);
}

void Camera::Place()
{
    // position, camera normal (view direction), up vector
    gluLookAt(cam_pos.x, cam_pos.y, cam_pos.z,
              cam_pos.x + cam_norm.x, cam_pos.y + cam_norm.y, cam_pos.z + cam_norm.z,
              cam_up.x, cam_up.y, cam_up.z);
}
Separating the camera normal (the direction the camera is looking) from the position allows you to independently change the position, pan and tilt... that is, you can change the position without having to recompute the normal and up vectors.
Disclaimer: This is untested, just what I could do on the back of a sheet of paper. It assumes a right handed coordinate system and that pan rotation is applied before tilt.
Update 2: Derived using linear algebra rather than geometry... again, untested, but I have more confidence in this.
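One nice consequence of keeping the view direction separate: moving the camera is just adding a scaled cam_norm to the position, with no need to touch the angles. A rough sketch, using the same cam_pos/cam_norm members as above and a hypothetical MoveForward method:

// Hypothetical helper: walk the camera along its current view direction.
void Camera::MoveForward(float distance)
{
    cam_pos.x += cam_norm.x * distance;
    cam_pos.y += cam_norm.y * distance;
    cam_pos.z += cam_norm.z * distance;
    // cam_norm and cam_up stay valid; only the position changes
}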
Well, rotating the camera or orbiting the terrain around the camera looks essentially the same.
If you want to orbit the camera around a fixed terrain point you have to modify the camera position, not the target.
Should it not be?:
cam_up.x = cam_target.x - cam_position.x;
cam_up.y = ABS(cam_position.y+(float)(radius*sin(vRadians+PI/2))) ;
cam_up.z = cam_target.z - cam_position.z;
Perhaps you should normalize cam_up as well.
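For example, a small helper of my own (Vec3 here is just a stand-in for whatever vector type the camera class actually uses):

#include <cmath>

struct Vec3 { float x, y, z; }; // stand-in vector type

void normalize(Vec3 &v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len > 0.0f) {
        v.x /= len;
        v.y /= len;
        v.z /= len;
    }
}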
HTH
I've just started playing with OpenGL to render a number of structures, each comprising a number of polygons.
Basically I want to perform the equivalent of setting a camera at (0,0,z) in world (structure) coordinates and rotating it about the x, y and z-axes of the world (in that order!) to render a view of each structure (as I understand it, it is common practice to use the inverse camera matrix for this). Thus, as I understand it, I need translate (to the world origin, i.e. (0,0,-z)) * rotateZ * rotateY * rotateX * translate (re-define world origin, see below).
So I think I need something like:
// Called when the window is resized
void handleResize(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(9.148, (double)w / (double)h, 800.0, 1500.0);
}
float _Zangle = 10.0f;
float _cameraAngle = 90.0f;

// Draws the 3D scene
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // Switch to the drawing perspective
    glLoadIdentity();           // Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -z);               // Move forward Z (mm) units
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f);      // Rotate "camera" about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); // Rotate the "camera" by camera_angle about the y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f);         // Rotate "camera" by 90 degrees about the x-axis
    glTranslatef(-11.0f, 189.0f, 51.0f);        // Re-define origin of world coordinates to be (11,-189,-51) - applied to all polygon vertices
    glPushMatrix(); // Save the transformations performed thus far
    glBegin(GL_POLYGON);
    glVertex3f(4.91892, -225.978, -50.0009);
    glVertex3f(5.73534, -225.978, -50.0009);
    glVertex3f(6.55174, -225.978, -50.0009);
    glVertex3f(7.36816, -225.978, -50.0009);
    ....... // etc
    glEnd();
    glPopMatrix();
However, when I compile and run this, _Zangle and _cameraAngle seem to be reversed, i.e. _Zangle seems to rotate about the y-axis (vertical axis of the viewport) and _cameraAngle about the z-axis (into the plane of the viewport). What am I doing wrong?
Thanks for taking the time to read this
The short answer is: Use gluLookAt(). This utility function creates the proper viewing matrix.
The longer answer is that each OpenGL transformation call takes the current matrix and multiplies it by a matrix built to accomplish the transformation. By calling a series of OpenGL transformation function you build one transformation matrix that will apply the combination of transformations. Effectively, the matrix will be M = M1 * M2 * M3 . . . Mathematically, the transformations are applied from right to left in the above equation.
Your code doesn't move the camera. It stays at the origin, and looks down the negative z-axis. Your transformations move everything in model space to (11,-189,-51), rotates everything 90 degrees about the x-axis, rotates everything 90 degrees about the y-axis, rotates everything 10 degrees about the z-axis, then translates everything -z along the z-axis.
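As a small illustration of that right-to-left ordering (my own example; tx, ty, tz, angle and drawObject are placeholders):

glLoadIdentity();
glTranslatef(tx, ty, tz);   // issued first, applied to the vertices last
glRotatef(angle, 0, 0, 1);  // issued last, applied to the vertices first
drawObject();               // so the object is rotated about its own origin, then moved to (tx, ty, tz)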
EDIT: More information
I'm a little confused about what you want to accomplish, but I think you want to have elements at the origin, and have the camera look at those elements. The eye coordinates would be where you want the camera, and the center coordinates would be where you want the objects to be. I'd use a little trigonometry to calculate the position of the camera, and point it at the origin.
In this type of situation I usually keep track of the camera position using longitude, latitude, and elevation centered on the origin. Calculating x, y, z for the eye coordinates is simply x = elv * cos(lat) * sin(lon), y = elv * sin(lat), z = elv * cos(lat) * cos(lon).
My gluLookAt call would be gluLookAt(x, y, z, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
You could rotate the up on the camera by changing the last three coordinates for gluLookAt.
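A compact sketch of that idea (my own code, assuming lon, lat and elv are given in radians and the GLU headers are available):

#include <math.h>
#include <GL/glu.h>

// Orbit-style camera: position derived from longitude/latitude/elevation,
// always looking at the origin with +y as up.
void placeCamera(double lon, double lat, double elv)
{
    double x = elv * cos(lat) * sin(lon);
    double y = elv * sin(lat);
    double z = elv * cos(lat) * cos(lon);
    gluLookAt(x, y, z,        // eye position
              0.0, 0.0, 0.0,  // look at the origin
              0.0, 1.0, 0.0); // up vector (degenerates as lat approaches +/-90 degrees)
}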
The z axis is coming from the center of the monitor into you. So, rotating around the z-axis should make the camera spin in place (like a 2D rotation on just the xy plane). I can't tell, but is that what's happening here?
It's possible that you are encountering Gimbal Lock. Try removing one of the rotations and see if things work the way they should.
While it's true that you can't actually move the camera in OpenGL, you can simulate camera motion by moving everything else. This is why you hear about the inverse camera matrix. Instead of moving the camera by (0, 0, 10), we can move everything in the world by (0, 0, -10). If you expand those out into matrices, you will find that they are inverses of each other.
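In fixed-function terms (my own illustration, with drawWorld as a placeholder for the scene), a camera sitting at (0, 0, 10) and looking down the negative z-axis is expressed by translating the whole scene the opposite way:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -10.0f); // inverse of the camera's (0, 0, 10) position
drawWorld();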
I also noticed that, given the code presented, you don't need the glPushMatrix()/glPopMatrix() calls. Perhaps there is code that you haven't shown that requires them.
Finally, can you provide an idea of what it is you are trying to render? Debugging rotations can be hard without some context.
Short answer :Good tip
Longer answer: Yes, the order of matrix multiplication is clear... that's what I meant by the inverse camera matrix: moving all the world coordinates of the structures into camera coordinates (hence the use of "camera" in my comments ;-)) instead of actually translating and rotating the camera into world coordinates.
So if I read between the lines correctly you suggest something like:
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // Switch to the drawing perspective
    glLoadIdentity();           // Reset the drawing perspective
    gluLookAt(0.0, 0.0, z,         // eye at (0,0,z)
              11.0, -189.0, -51.0, // look at the re-defined world origin (11,-189,-51)
              0.0, 1.0, 0.0);      // up (0,1,0)
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f);      // Rotate "camera" (actually structures) about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); // Rotate the "camera" (actually structures!) by camera_angle about the y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f);         // Rotate "camera" (actually structures) by 90 degrees about the x-axis
    glPushMatrix();
Or am I still missing something?
I think you are mixing the axes of your world with the axes of the camera.
glRotatef only uses the camera's axes, and they are not the same as your world axes once the camera has been rotated.