How is the u-v-n camera coordinate system explained in OpenGL?

I am confused about the u-v-n camera coordinate system when deducing the model-view matrix specified by the function:
void gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ, GLdouble centerX, GLdouble centerY, GLdouble centerZ, GLdouble upX, GLdouble upY, GLdouble upZ);
If we call this function, we get a u-v-n coordinate system in which n = center - eye.
And this u-v-n is a left-handed coordinate system.
The parts that confuse me:
Can this u-v-n coordinate system be the so-called camera coordinate system that many books describe?
Also, tutorials and textbooks often say the camera coordinate system is right-handed, so why do we construct a left-handed u-v-n system to deduce the model-view matrix specified by gluLookAt? Is it just for convenience?
Update
After reading the answers and investigating further, I understand it this way:
1) The u-v-n coordinate system is left-handed and the camera coordinate system is right-handed; this is a fact.
2) When calculating the matrix, OpenGL flips the n axis to -n, which makes it a right-handed system again; see the gluLookAt API documentation and the gluLookAt implementation for details.

To transform from camera coordinates to u-v-n, define a point T for the target, a point C for the camera, and a vector up for the up direction. Then follow the steps below:
n = norm(T - C)
u = norm(n × up)
v = u × n
where the norm() function normalizes a vector to magnitude() = 1
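The steps above can be sketched as plain vector math in C++ (no OpenGL context required); the `Vec3` type and the helper names are mine, not part of GLU:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Component-wise subtraction: a - b
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

// Cross product a × b
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Normalize a vector to magnitude 1
static Vec3 norm(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Build the u-v-n basis from camera position C, target T and up vector.
// As in the question, n = T - C (the view direction), which makes
// (u, v, n) left-handed; gluLookAt itself uses -n internally.
void makeUVN(const Vec3& C, const Vec3& T, const Vec3& up,
             Vec3& u, Vec3& v, Vec3& n) {
    n = norm(sub(T, C));     // view direction
    u = norm(cross(n, up));  // "right" axis
    v = cross(u, n);         // recomputed "up", already unit length
}
```

For a camera at the origin looking down -z with up = +y, this produces u = +x, v = +y, n = -z, matching the left-handed system discussed in the question.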

Related

Can't understand gluLookAt arguments

I'm learning OpenGL (GLUT) now. Using GL_LINES I drew a cube, but it looks like a square, so I tried to use gluLookAt. I've been searching and experimenting, but I can't understand how it works! Help please.
As described in the documentation
C Specification
void gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ,
GLdouble centerX, GLdouble centerY, GLdouble centerZ,
GLdouble upX, GLdouble upY, GLdouble upZ);
Parameters
eyeX, eyeY, eyeZ
Specifies the position of the eye point.
centerX, centerY, centerZ
Specifies the position of the reference point.
upX, upY, upZ
Specifies the direction of the up vector.
Description
gluLookAt creates a viewing matrix derived from an eye point, a reference point indicating the center of the scene, and an UP vector.
The matrix maps the reference point to the negative z axis and the eye point to the origin. When a typical projection matrix is used, the center of the scene therefore maps to the center of the viewport. Similarly, the direction described by the UP vector projected onto the viewing plane is mapped to the positive y axis so that it points upward in the viewport. The UP vector must not be parallel to the line of sight from the eye point to the reference point.
Borrowing the following image (source)
The eye would be P, the center would be fc, and up would be up. The "near plane" and "far plane" define the extents of the viewing frustum.
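The mapping the documentation describes (reference point to the negative z axis, eye point to the origin) can be made concrete as a hand-rolled equivalent of the matrix gluLookAt multiplies onto the stack. This is a sketch with my own helper names, not the GLU source:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 norm(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Build the column-major 4x4 viewing matrix gluLookAt produces:
// the forward axis f maps to -z, the side axis s to +x, the up axis
// u to +y, and the eye point maps to the origin.
void lookAt(const Vec3& eye, const Vec3& center, const Vec3& up, double m[16]) {
    Vec3 f = norm(sub(center, eye));  // forward (line of sight)
    Vec3 s = norm(cross(f, up));      // side ("right")
    Vec3 u = cross(s, f);             // recomputed up
    m[0] = s.x;  m[4] = s.y;  m[8]  = s.z;  m[12] = -dot(s, eye);
    m[1] = u.x;  m[5] = u.y;  m[9]  = u.z;  m[13] = -dot(u, eye);
    m[2] = -f.x; m[6] = -f.y; m[10] = -f.z; m[14] = dot(f, eye);
    m[3] = 0.0;  m[7] = 0.0;  m[11] = 0.0;  m[15] = 1.0;
}
```

Note the third row holds -f: this is the flip from the left-handed u-v-n basis to the right-handed eye space discussed in the first question. An eye at the origin looking down -z with up = +y yields the identity matrix.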

How to get the position of an orbiting sphere

I'm trying to get the position of a sphere that is rotating around an idle object in my OpenGL application. This is how I perform the orbiting:
glTranslatef(positions[i].getPosX(),   // center of rotation (yellow ball)
             positions[i].getPosY(),
             positions[i].getPosZ());
glRotatef(rotation_angle, 0.0, 1.0, 0.0);  // angle of rotation
glTranslatef(distance[i].getPosX(),    // distance from the center of rotation
             distance[i].getPosY(),
             distance[i].getPosZ());
The variable rotation_angle loops from 0 to 360 endlessly. In the distance vector I'm only changing the z-distance of the object; for example, if the idle object is at (0,0,0), the distance vector could be (0,0,200).
OpenGL just draws stuff. It doesn't maintain a "scene", so you'll have to do all the math yourself. This is as simple as multiplying the vector (0,0,0,1) with the current modelview-projection matrix and performing viewport remapping. This has been conveniently packed up in the GLU (not OpenGL) function gluProject.
Since you're using the (old and busted) fixed-function pipeline, the procedure looks roughly like this:
GLdouble x = 0, y = 0, z = 0;  // the sphere's position in model space
GLdouble win_x, win_y, win_z;
GLdouble mv[16], prj[16];
GLint vp[4];
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
glGetDoublev(GL_PROJECTION_MATRIX, prj);
glGetIntegerv(GL_VIEWPORT, vp);
gluProject(x, y, z, mv, prj, vp, &win_x, &win_y, &win_z);
Note that due to OpenGL's stateful nature the value of the modelview and projection matrix and the viewport at the moment of drawing the sphere matters. Retrieving those values at any other moment may produce very different data and result in an outcome inconsistent with the drawing.
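gluProject itself is just this matrix pipeline. A self-contained sketch of the same math (my own helper, not the GLU source) makes the steps explicit and assumes column-major matrices as OpenGL stores them:

```cpp
#include <cmath>

// Multiply a column-major 4x4 matrix by a column vector (x, y, z, w).
static void mulMat4Vec4(const double m[16], const double v[4], double out[4]) {
    for (int i = 0; i < 4; ++i)
        out[i] = m[i] * v[0] + m[i + 4] * v[1] + m[i + 8] * v[2] + m[i + 12] * v[3];
}

// gluProject-style math: object coordinates -> window coordinates.
// Returns false if the point is on or behind the eye plane (w <= 0).
bool project(double x, double y, double z,
             const double modelview[16], const double projection[16],
             const int viewport[4],
             double& winX, double& winY, double& winZ) {
    double obj[4] = { x, y, z, 1.0 }, eye[4], clip[4];
    mulMat4Vec4(modelview, obj, eye);    // object -> eye space
    mulMat4Vec4(projection, eye, clip);  // eye -> clip space
    if (clip[3] <= 0.0) return false;
    // Perspective divide to normalized device coordinates in [-1, 1]
    double nx = clip[0] / clip[3], ny = clip[1] / clip[3], nz = clip[2] / clip[3];
    // Viewport remap to window coordinates
    winX = viewport[0] + viewport[2] * (nx + 1.0) / 2.0;
    winY = viewport[1] + viewport[3] * (ny + 1.0) / 2.0;
    winZ = (nz + 1.0) / 2.0;  // depth in [0, 1]
    return true;
}
```

With identity modelview and projection matrices and a viewport of (0, 0, 800, 600), the origin projects to the window center (400, 300) at depth 0.5.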

How to implement a smooth transition between two different camera view in opengl

gluLookAt is defined as follows
void gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ,
GLdouble centerX, GLdouble centerY, GLdouble centerZ,
GLdouble upX, GLdouble upY, GLdouble upZ
);
I have two different sets of camera parameters corresponding to gluLookAt, and I am confused about how to implement a smooth transition between the views of these two cameras.
I hope somebody can give me a hint or a code example.
I would consider using Spherical Linear Interpolation (slerp) on the rotations produced by gluLookAt (...). The GLM math library (C++) provides everything you need for this, including an implementation of LookAt.
Very roughly, this is what a GLM-based implementation might look like:
// Create quaternions from the rotation matrices produced by glm::lookAt
glm::quat quat_start (glm::lookAt (eye_start, center_start, up_start));
glm::quat quat_end (glm::lookAt (eye_end, center_end, up_end));
// Interpolate half way from original view to the new.
float interp_factor = 0.5; // 0.0 == original, 1.0 == new
// First interpolate the rotation
glm::quat quat_interp = glm::slerp (quat_start, quat_end, interp_factor);
// Then interpolate the translation
glm::vec3 pos_interp = glm::mix (eye_start, eye_end, interp_factor);
glm::mat4 view_matrix = glm::mat4_cast (quat_interp); // Setup rotation
view_matrix [3] = glm::vec4 (pos_interp, 1.0); // Introduce translation
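If you don't want to pull in GLM, slerp itself is only a few lines. This sketch uses a minimal hand-rolled quaternion type (the names are mine), following the same shortest-arc convention GLM uses:

```cpp
#include <cmath>

struct Quat { double w, x, y, z; };

// Spherical linear interpolation between unit quaternions a and b, t in [0, 1].
Quat slerp(Quat a, Quat b, double t) {
    double d = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    // Take the shorter arc: negate one quaternion if the dot product is negative.
    if (d < 0.0) { b = { -b.w, -b.x, -b.y, -b.z }; d = -d; }
    if (d > 0.9995) {
        // Nearly parallel: fall back to normalized linear interpolation.
        Quat r = { a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
                   a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
        double len = std::sqrt(r.w * r.w + r.x * r.x + r.y * r.y + r.z * r.z);
        return { r.w / len, r.x / len, r.y / len, r.z / len };
    }
    double theta = std::acos(d);  // angle between the two quaternions
    double sa = std::sin((1.0 - t) * theta) / std::sin(theta);
    double sb = std::sin(t * theta) / std::sin(theta);
    return { sa * a.w + sb * b.w, sa * a.x + sb * b.x,
             sa * a.y + sb * b.y, sa * a.z + sb * b.z };
}
```

For example, interpolating halfway between the identity and a 90° rotation about y yields a 45° rotation about y, which is exactly the constant-angular-velocity behavior that makes slerp preferable to interpolating matrices component-wise.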

gluProject and 2D display

I would like to display a 2D image at a 2D point calculated from a 3D point using gluProject().
So I have my 3D point, I use gluProject to get its 2D coordinates, then I display my image at this point.
It works well, but I have a problem with the Z coordinate, which makes my image appear twice on screen: where it should really appear, and at "the opposite" position.
Let's take an example: the camera is at (0,0,0) and I look at (0,0,-1), so in the direction of negative Z.
I use the 3D point (0,0,-1) for my object; gluProject gives me the center of my window as the 2D point, which is correct.
So when I look in the direction of (0,0,-1), my 2D image appears; when I rotate, it moves correctly until the point (0,0,-1) is no longer visible, which makes the 2D image go off screen and not be displayed.
But when I look at (0,0,1), it also appears. Consequently, I get the same displayed result whether I use the 3D point (0,0,-1) or (0,0,1). I assume this has something to do with the Z coordinate that gluProject returns, but I don't know what.
Here is my code (my zNear = 0.1 and zFar = 1000):
GLint viewport[4];
GLdouble modelview[16];
GLdouble viewVector[3];
GLdouble projection[16];
GLdouble winX, winY, winZ;  // 2D point
GLdouble posX, posY, posZ;  // 3D point
posX = 0.0;
posY = 0.0;
posZ = -1.0;  // the display is the same if posZ=1, which should not be the case
// get the matrices
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
viewVector[0] = modelview[8];
viewVector[1] = modelview[9];
viewVector[2] = modelview[10];
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
int res = gluProject(posX, posY, posZ, modelview, projection, viewport, &winX, &winY, &winZ);
if (viewVector[0]*posX + viewVector[1]*posY + viewVector[2]*posZ < 0) {
    displayMyImageAt(winX, windowHeight - winY);
}
So, what do I need to do to get the correct display of my 2D image, that is, to take Z into account?
gluProject works correctly; your projection matrix projects points onto the screen plane. You should check whether the point is behind the camera. You can do this by calculating the dot product of your view vector and the vector to the point: if it is less than 0, the point is behind.
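A robust version of this test transforms the point into eye space and checks the sign of its z coordinate (in eye space the camera looks down -z, so visible points have negative z). This sketch is my own formulation, assuming a column-major modelview matrix, rather than the exact code from the question, which assumes a camera at the origin:

```cpp
// Transform a world-space point by the (column-major) modelview matrix and
// report whether it ends up in front of the camera. In eye space the camera
// looks down the negative z axis, so "in front" means eye-space z < 0.
bool inFrontOfCamera(const double mv[16], double px, double py, double pz) {
    // Eye-space z is the third row of the matrix times (px, py, pz, 1).
    double eyeZ = mv[2] * px + mv[6] * py + mv[10] * pz + mv[14];
    return eyeZ < 0.0;
}
```

With an identity modelview matrix, (0,0,-1) is in front and (0,0,1) is behind, which resolves exactly the double-appearance symptom described above.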
What is your displayMyImageAt function? Is it similar to the display function given in my code?
I am also trying to get the 2D coordinates of a point selected in 3D.
Here is my display function, pretty much all of it:
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glMultMatrixd(_matrix);

    glColor3f(0.5, 0.5, 0.5);
    glPushMatrix();  // draw terrain
    glColor3f(0.7, 0.7, 0.7);
    glBegin(GL_QUADS);
    glVertex3f(-3, -0.85, 3);
    glVertex3f(3, -0.85, 3);
    glVertex3f(3, -0.85, -3);
    glVertex3f(-3, -0.85, -3);
    glEnd();
    glPopMatrix();

    glPushMatrix();
    myDefMesh.glDraw(meshModel);
    glPopMatrix();

    glutSwapBuffers();
}

How to reorientate the near and far planes from gluUnProject to account for camera position and angle?

I am trying to trace a ray from a mouse click on the screen into 3D space. From what I understand, this code gives the default position and orientation:
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
GLfloat winX, winY;
GLdouble near_x, near_y, near_z;
GLdouble far_x, far_y, far_z;
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
glGetDoublev( GL_PROJECTION_MATRIX, projection );
glGetIntegerv( GL_VIEWPORT, viewport );
winX = (float)mouse_x;
winY = (float)viewport[3] - (float)mouse_y;
gluUnProject(winX, winY, 0.0, modelview, projection, viewport, &near_x, &near_y, &near_z);
gluUnProject(winX, winY, 1.0, modelview, projection, viewport, &far_x, &far_y, &far_z);
My question is how do you change the position and the angle of the near and far planes so that all mouse clicks will appear to be from the camera's position and angle in 3D space?
Specifically, I am looking for a way to position both the projection planes' center at 65° below the Z axis along the line (0,0,1) -> (0,0,-tan(65°)) [-tan(65°) ~= -2.145] with both the near and far planes perpendicular to the view line through those points.
My question is how do you change the position and the angle of the near and far planes so that all mouse clicks will appear to be from the camera's position and angle in 3D space?
Feeding the projection and modelview matrices into gluUnProject takes care of that. Specifically, you pass the projection and modelview matrices used for rendering into gluUnProject, so that the unprojected coordinates match the local coordinate system described by those matrices.
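Once you have the two unprojected points, the pick ray follows directly: its origin is the near-plane point and its direction is the normalized vector from the near point to the far point. A sketch (the `Ray` type and helper name are mine):

```cpp
#include <cmath>

struct Ray { double ox, oy, oz;   // origin (near-plane point)
             double dx, dy, dz; };  // unit direction

// Build a pick ray from the near- and far-plane points returned by the two
// gluUnProject calls (winZ = 0.0 for near, winZ = 1.0 for far).
Ray makePickRay(double near_x, double near_y, double near_z,
                double far_x, double far_y, double far_z) {
    double dx = far_x - near_x, dy = far_y - near_y, dz = far_z - near_z;
    double len = std::sqrt(dx * dx + dy * dy + dz * dz);
    return { near_x, near_y, near_z, dx / len, dy / len, dz / len };
}
```

Because both points were unprojected with the same modelview and projection matrices, this ray already lies in that local coordinate system and passes through the camera position, whatever its angle, so no separate repositioning of the near and far planes is needed.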