I'm learning OpenGL (GLUT). Using GL_LINES I drew a cube, but it looks like a square, so I tried to use gluLookAt. I've been searching and experimenting, but I can't understand how it works. Help, please.
As described in the documentation:
C Specification
void gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ,
GLdouble centerX, GLdouble centerY, GLdouble centerZ,
GLdouble upX, GLdouble upY, GLdouble upZ);
Parameters
eyeX, eyeY, eyeZ
Specifies the position of the eye point.
centerX, centerY, centerZ
Specifies the position of the reference point.
upX, upY, upZ
Specifies the direction of the up vector.
Description
gluLookAt creates a viewing matrix derived from an eye point, a reference point indicating the center of the scene, and an UP vector.
The matrix maps the reference point to the negative z axis and the eye point to the origin. When a typical projection matrix is used, the center of the scene therefore maps to the center of the viewport. Similarly, the direction described by the UP vector projected onto the viewing plane is mapped to the positive y axis so that it points upward in the viewport. The UP vector must not be parallel to the line of sight from the eye point to the reference point.
Borrowing the following image (source)
The eye would be P, the center would be fc, and up would be up. The "near plane" and "far plane" define the extents of the viewing frustum.
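To make the description concrete, here is a plain-C++ sketch of the viewing matrix gluLookAt builds, following the derivation in the reference page (f, s, and u are the forward, side, and corrected-up basis vectors; the Vec3 type and helper names are made up for this sketch):

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 norm(Vec3 v)
{
    double l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Row-major 4x4 viewing matrix equivalent to gluLookAt(eye, center, up).
std::array<double, 16> look_at(Vec3 eye, Vec3 center, Vec3 up)
{
    Vec3 f = norm(sub(center, eye));   // line of sight, mapped to -z
    Vec3 s = norm(cross(f, up));       // side vector, mapped to +x
    Vec3 u = cross(s, f);              // corrected up, mapped to +y
    return {  s.x,  s.y,  s.z, -dot(s, eye),
              u.x,  u.y,  u.z, -dot(u, eye),
             -f.x, -f.y, -f.z,  dot(f, eye),
              0.0,  0.0,  0.0,  1.0 };
}
```

For eye = (0,0,5), center = (0,0,0), up = (0,1,0) this yields the identity rotation with a translation of (0,0,-5): the eye maps to the origin and the reference point lands on the negative z axis, exactly as the description above says.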
Related
I want to be able to view a planet sphere centered at (0,0,0) with a 10-unit radius, rotating 360 degrees around it and up/down by pressing keyboard buttons. What parameters do I put inside the gluLookAt() function?
I know the center XYZ should be (0,0,0), but what should the eye and up vectors be?
void gluLookAt( GLdouble eyeX,
GLdouble eyeY,
GLdouble eyeZ,
GLdouble centerX,
GLdouble centerY,
GLdouble centerZ,
GLdouble upX,
GLdouble upY,
GLdouble upZ);
Don't use lookAt for this! The amount of trigonometry involved in calculating the eye vector is equivalent to building the view matrix from scratch.
Instead, maintain your camera pitch and yaw, and apply those by a series of translations and rotations:
glTranslatef(0, 0, -radius);
glRotatef(-pitch, 1, 0, 0);
glRotatef(-yaw, 0, 0, 1); // assumes Z is up
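For comparison, here is a hedged sketch of the trigonometry you would otherwise need to compute gluLookAt's eye parameter for an orbit of the given radius (pitch and yaw in radians; the function name is made up, and the exact signs depend on your rotation conventions):

```cpp
#include <cassert>
#include <cmath>

// Eye position on a sphere of the given radius around the origin,
// assuming Z is up. You would pass the result to
// gluLookAt(ex, ey, ez,  0, 0, 0,  0, 0, 1) -- more work than the
// translate/rotate sequence above, for the same kind of orbit.
void orbit_eye(double radius, double pitch, double yaw,
               double& ex, double& ey, double& ez)
{
    ex = radius * std::cos(pitch) * std::cos(yaw);
    ey = radius * std::cos(pitch) * std::sin(yaw);
    ez = radius * std::sin(pitch);
}
```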
gluLookAt is defined as follows:
void gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ,
GLdouble centerX, GLdouble centerY, GLdouble centerZ,
GLdouble upX, GLdouble upY, GLdouble upZ
);
I have two different sets of camera parameters for gluLookAt, and I am confused about how to implement a smooth transition between the views given by these two sets of parameters.
I hope somebody can give me a hint or a code example.
I would consider using Spherical Linear Interpolation (slerp) on the rotations produced by gluLookAt (...). The GLM math library (C++) provides everything you need for this, including an implementation of LookAt.
Very roughly, this is what a GLM-based implementation might look like:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Create quaternions from the rotation matrices produced by glm::lookAt
glm::quat quat_start (glm::lookAt (eye_start, center_start, up_start));
glm::quat quat_end (glm::lookAt (eye_end, center_end, up_end));
// Interpolate half way from original view to the new.
float interp_factor = 0.5; // 0.0 == original, 1.0 == new
// First interpolate the rotation
glm::quat quat_interp = glm::slerp (quat_start, quat_end, interp_factor);
// Then interpolate the translation
glm::vec3 pos_interp = glm::mix (eye_start, eye_end, interp_factor);
glm::mat4 view_matrix = glm::mat4_cast (quat_interp); // Setup rotation
view_matrix [3] = glm::vec4 (pos_interp, 1.0); // Introduce translation
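If you would rather not pull in GLM, the slerp step itself is small enough to write by hand. A minimal sketch of the principle (the Quat type and this slerp are hand-rolled for illustration, not GLM's API):

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };

// Spherical linear interpolation between two unit quaternions:
// t = 0 returns a, t = 1 returns b (up to sign), constant angular speed.
Quat slerp(Quat a, Quat b, double t)
{
    double d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (d < 0.0) { d = -d; b = {-b.w, -b.x, -b.y, -b.z}; } // take the shortest arc
    if (d > 0.9995)                                        // nearly parallel: plain lerp
        return { a.w + t*(b.w - a.w), a.x + t*(b.x - a.x),
                 a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
    double theta = std::acos(d);
    double sa = std::sin((1.0 - t) * theta) / std::sin(theta);
    double sb = std::sin(t * theta)         / std::sin(theta);
    return { sa*a.w + sb*b.w, sa*a.x + sb*b.x,
             sa*a.y + sb*b.y, sa*a.z + sb*b.z };
}
```

Halfway (t = 0.5) between the identity and a 90-degree rotation about z, this produces the 45-degree rotation, which is exactly the behavior you want for a smooth camera transition.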
I am confused about the u-v-n camera coordinate system when deducing the model-view matrix specified by the function:
void gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ, GLdouble centerX, GLdouble centerY, GLdouble centerZ, GLdouble upX, GLdouble upY, GLdouble upZ);
If we call this function, then we get a u-v-n coordinate system in which n = center - eye.
It looks like this:
And this u-v-n is a left-handed coordinate system.
The parts that confuse me:
Can this u-v-n coordinate system be the so-called camera coordinate system that many books describe?
I often read in tutorials and textbooks that the camera coordinate system is right-handed, so why do we construct a left-handed u-v-n coordinate system to deduce the model-view matrix specified by gluLookAt? Just for convenience?
Update
After reading your answer and investigating, I understand it this way:
1) The u-v-n coordinate system is left-handed, and the camera coordinate system is right-handed; this is the fact.
2) When calculating the matrix, OpenGL flips the n axis to -n instead, thus keeping it a right-handed system; see the gluLookAt API and the gluLookAt code implementation for details.
To transform from camera coordinates to u-v-n, define a point T for the target, a point C for the camera, and a vector up for the up direction. Then follow the steps below:
where the norm() function normalizes a vector to magnitude() = 1.
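The steps themselves were in an image that has not survived here; they are typically the following, sketched in plain C++ with norm() and cross() written out, and using the n = C - T convention from the question (the Vec3 type and helper names are made up for this sketch):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 norm(Vec3 v)
{
    double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x / l, v.y / l, v.z / l};
}

// Build the u-v-n basis from camera point C, target point T and the up vector.
void uvn_basis(Vec3 C, Vec3 T, Vec3 up, Vec3& u, Vec3& v, Vec3& n)
{
    n = norm(sub(C, T));      // n points from the target back toward the camera
    u = norm(cross(up, n));   // u is the camera's right axis
    v = cross(n, u);          // v is the corrected up axis (already unit length)
}
```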
I am trying to have a ray traced from a mouse click on the screen into 3D space. From what I understand, the default position and orientation look like this:
from this code:
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
GLdouble winX, winY;
GLdouble near_x, near_y, near_z;
GLdouble far_x, far_y, far_z;
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
glGetDoublev( GL_PROJECTION_MATRIX, projection );
glGetIntegerv( GL_VIEWPORT, viewport );
winX = (GLdouble)mouse_x;
winY = (GLdouble)viewport[3] - (GLdouble)mouse_y;
gluUnProject(winX, winY, 0.0, modelview, projection, viewport, &near_x, &near_y, &near_z);
gluUnProject(winX, winY, 1.0, modelview, projection, viewport, &far_x, &far_y, &far_z);
My question is how do you change the position and the angle of the near and far planes so that all mouse clicks will appear to be from the camera's position and angle in 3D space?
Specifically, I am looking for a way to position both the projection planes' center at 65° below the Z axis along the line (0,0,1) -> (0,0,-tan(65°)) [-tan(65°) ~= -2.145] with both the near and far planes perpendicular to the view line through those points.
My question is how do you change the position and the angle of the near and far planes so that all mouse clicks will appear to be from the camera's position and angle in 3D space?
Feeding the projection and modelview matrices into gluUnProject takes care of that. Specifically you pass the projection and modelview matrix used for rendering into gluUnProject so that the unprojected coordinates match those of the local coordinate system described by those matrices.
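Once both unprojections have run, the actual picking ray is just the line through the two results. A sketch of that last step, with the function name made up and hypothetical numbers standing in for the gluUnProject output:

```cpp
#include <cassert>
#include <cmath>

// Turn the near-plane and far-plane points returned by the two
// gluUnProject calls into an origin + unit-direction ray.
void pick_ray(double nx, double ny, double nz,     // unprojected at winZ = 0.0
              double fx, double fy, double fz,     // unprojected at winZ = 1.0
              double& ox, double& oy, double& oz,
              double& dx, double& dy, double& dz)
{
    ox = nx; oy = ny; oz = nz;                     // ray starts on the near plane
    dx = fx - nx; dy = fy - ny; dz = fz - nz;      // toward the far plane
    double len = std::sqrt(dx*dx + dy*dy + dz*dz);
    dx /= len; dy /= len; dz /= len;               // normalize the direction
}
```

Because the matrices fed to gluUnProject already encode the camera's position and angle, this ray automatically originates from the camera, however it is oriented.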
Hi, I am currently trying to make a first-person game. What I was able to do was make the camera move using gluLookAt() and rotate it using glRotatef(). What I am trying to do is rotate the camera and then move forward in the direction I have rotated, but the axes stay the same, and although I have rotated, the camera moves sideways, not forward. Can someone help me? This is my code:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(cameraPhi,1,0,0);
glRotatef(cameraTheta,0,1,0);
gluLookAt(move_camera.x,move_camera.y,move_camera.z,move_camera.x,move_camera.y,move_camera.z-10,0,1,0);
drawSkybox2d(treeTexture);
This requires a bit of vector math...
Given these functions, the operation is pretty simple though:
vec rotx(vec v, double a)
{
return vec(v.x, v.y*cos(a) - v.z*sin(a), v.y*sin(a) + v.z*cos(a));
}
vec roty(vec v, double a)
{
return vec(v.x*cos(a) + v.z*sin(a), v.y, -v.x*sin(a) + v.z*cos(a));
}
vec rotz(vec v, double a)
{
return vec(v.x*cos(a) - v.y*sin(a), v.x*sin(a) + v.y*cos(a), v.z);
}
Assuming you have an orientation defined by the angles {CameraPhi, CameraTheta, 0.0}, if you want to move the camera along a vector v with respect to the camera's axes, you add this to the camera's position p:
p += v.x*roty(rotx(vec(1.0, 0.0, 0.0), CameraPhi), CameraTheta) +
v.y*roty(rotx(vec(0.0, 1.0, 0.0), CameraPhi), CameraTheta) +
v.z*roty(rotx(vec(0.0, 0.0, 1.0), CameraPhi), CameraTheta);
And that should do it.
Keep Coding :)
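For the original question (moving forward after rotating), forward corresponds to v = (0, 0, -1), since OpenGL looks down the -z axis. A self-contained sketch using the same rotations as above (the vec type here is hand-rolled for illustration):

```cpp
#include <cassert>
#include <cmath>

struct vec {
    double x, y, z;
    vec(double x_, double y_, double z_) : x(x_), y(y_), z(z_) {}
};

vec rotx(vec v, double a)
{
    return vec(v.x, v.y*std::cos(a) - v.z*std::sin(a), v.y*std::sin(a) + v.z*std::cos(a));
}

vec roty(vec v, double a)
{
    return vec(v.x*std::cos(a) + v.z*std::sin(a), v.y, -v.x*std::sin(a) + v.z*std::cos(a));
}

// Unit vector pointing where the camera faces: rotate (0, 0, -1)
// by the camera's pitch (phi) and yaw (theta).
vec camera_forward(double phi, double theta)
{
    return roty(rotx(vec(0.0, 0.0, -1.0), phi), theta);
}
```

Each frame, adding speed * camera_forward(CameraPhi, CameraTheta) to p moves the camera where it is actually looking, rather than along the fixed world axes.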
gluLookAt is defined as follows:
void gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ,
GLdouble centerX, GLdouble centerY, GLdouble centerZ,
GLdouble upX, GLdouble upY, GLdouble upZ
);
The camera is located at the eye position, looking in the direction from the eye to the center.
Together, eye and center define the axis (direction) of the camera, and the third vector, up, defines the rotation about this axis.
You don't need the separate phi and theta rotations; just pass in the correct up vector to get the desired rotation. (0,1,0) means the camera is upright, (0,-1,0) means the camera is upside-down, and other vectors give intermediate orientations.
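For instance, a hedged sketch of rolling the camera by an angle instead of hard-coding (0,1,0) (the function name is made up; this assumes the camera looks along -z, so the roll happens in the x-y plane):

```cpp
#include <cassert>
#include <cmath>

// Up vector for a camera rolled by `angle` radians about its view axis;
// the result is what you would pass as upX, upY, upZ to gluLookAt.
void roll_up_vector(double angle, double& upX, double& upY, double& upZ)
{
    upX = std::sin(angle);   // angle = 0  -> (0,  1, 0), upright
    upY = std::cos(angle);   // angle = pi -> (0, -1, 0), upside-down
    upZ = 0.0;               // anything in between tilts the horizon
}
```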