gluLookAt specification - OpenGL

I have some problems understanding the specification for gluLookAt.
For example the z-axis is defined as:
F = ( centerX - eyeX, centerY - eyeY, centerZ - eyeZ )
with center being the point the camera looks at and eye being the position the camera is at.
f = F / |F|
and the View-Matrix M is defined as:
( x[0] x[1] x[2] 0 )
( y[0] y[1] y[2] 0 )
(-f[0] -f[1] -f[2] 0 )
( 0 0 0 1 )
with x and y being the camera's x- and y-axes and f the z-axis.
If my camera is positioned at (0, 0, 5) and looks at the center, then F points along the negative z-axis: by the first equation, F = (0,0,0) - (0,0,5) = (0,0,-5), which normalizes to f = (0,0,-1).
So far everything makes sense to me, but then f is multiplied by -1 in the matrix M above.
That way the f-vector looks along the positive z-axis, away from the center.
I found that the projection matrix produced by gluPerspective also multiplies the camera's z-axis by -1, which flips the z-axis once more so that it points toward the world's negative z-axis.
So what is the point of multiplying it by -1?

Because gluLookAt builds a view matrix for a right-handed system. In view space the Z coordinate increases toward the viewer, out of the screen and behind the camera, so every object the camera can see has a negative Z in view space.
EDIT
You should review your maths. The matrix you showed lacks the translation to the camera position.
Following this notation, let's:
Obtain f normalized, up normalized, s = f × up normalized, and u = sn × f (where sn is the normalized s). Notice that s must be normalized because f and up may not be perpendicular, in which case their cross product does not have length 1. This is not mentioned in the link above.
Form the matrix M and multiply it by the translation to the camera position: L = M · T.
The resulting lookAt matrix is:
 s.x   s.y   s.z  -dot(s, eye)
 u.x   u.y   u.z  -dot(u, eye)
-f.x  -f.y  -f.z   dot(f, eye)
  0     0     0         1
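As a sketch of those steps in GLM code (my own illustration, assuming eye, center and up are glm::vec3 and the usual column-vector convention; glm::lookAt does the same thing internally):

#include <glm/glm.hpp>

glm::vec3 f = glm::normalize(center - eye);
glm::vec3 s = glm::normalize(glm::cross(f, glm::normalize(up)));
glm::vec3 u = glm::cross(s, f);
glm::mat4 L(1.0f);
// GLM is column-major, so L[col][row]; these lines fill the rows shown above.
L[0][0] = s.x;  L[1][0] = s.y;  L[2][0] = s.z;  L[3][0] = -glm::dot(s, eye);
L[0][1] = u.x;  L[1][1] = u.y;  L[2][1] = u.z;  L[3][1] = -glm::dot(u, eye);
L[0][2] = -f.x; L[1][2] = -f.y; L[2][2] = -f.z; L[3][2] =  glm::dot(f, eye);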
With your data: camera = (0,0,5), target = (0,0,0), and up = (0,1,0), the matrix is:
1  0  0   0
0  1  0   0
0  0  1  -5
0  0  0   1
Let's apply this transformation to the point A = (0,0,4). We get A' = (0,0,-1).
Again, for B = (0,0,20), B' = (0,0,15).
A' has a negative Z, so the camera sees it. B' has a positive Z, so the camera cannot see it.
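You can verify both results with GLM, whose glm::lookAt builds exactly this matrix (a minimal sketch, assuming GLM is available):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 5),   // eye
                             glm::vec3(0, 0, 0),   // center
                             glm::vec3(0, 1, 0));  // up
glm::vec4 A = view * glm::vec4(0, 0, 4, 1);   // -> (0, 0, -1, 1): visible
glm::vec4 B = view * glm::vec4(0, 0, 20, 1);  // -> (0, 0, 15, 1): behind the camera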

I know this isn't a direct answer to the question but it might help someone who is looking for an equivalent function without using GLU, for example, if they are porting old OpenGL2 code to modern OpenGL.
Here is an equivalent function to gluLookAt(...):
void gluLookAt(float eyeX, float eyeY, float eyeZ,
               float centreX, float centreY, float centreZ,
               float upX, float upY, float upZ) {
    glm::vec3 forward = glm::normalize(
        glm::vec3(centreX - eyeX, centreY - eyeY, centreZ - eyeZ));
    glm::vec3 right = glm::normalize(
        glm::cross(forward, glm::vec3(upX, upY, upZ)));
    // Recompute up so the basis is orthonormal even when the supplied
    // up vector is not exactly perpendicular to the view direction.
    glm::vec3 up = glm::cross(right, forward);
    // glMultMatrixf expects column-major storage. The rotation part of the
    // view matrix has right, up and -forward as its rows, so each basis
    // vector is spread across mat[i], mat[i + 4] and mat[i + 8].
    GLfloat mat[16];
    mat[0] = right.x;    mat[4] = right.y;    mat[8]  = right.z;    mat[12] = 0.0f;
    mat[1] = up.x;       mat[5] = up.y;       mat[9]  = up.z;       mat[13] = 0.0f;
    mat[2] = -forward.x; mat[6] = -forward.y; mat[10] = -forward.z; mat[14] = 0.0f;
    mat[3] = 0.0f;       mat[7] = 0.0f;       mat[11] = 0.0f;       mat[15] = 1.0f;
    glMultMatrixf(mat);
    glTranslatef(-eyeX, -eyeY, -eyeZ);
}
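Since the snippet above already depends on GLM, note that in a fully shader-based pipeline you can skip the fixed-function matrix stack entirely and build the same matrix with glm::lookAt, then upload it yourself. A sketch (viewLoc is a placeholder for your shader's uniform location):

#include <glm/gtc/matrix_transform.hpp> // glm::lookAt
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

glm::mat4 view = glm::lookAt(glm::vec3(eyeX, eyeY, eyeZ),
                             glm::vec3(centreX, centreY, centreZ),
                             glm::vec3(upX, upY, upZ));
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));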

Related

Determine angle between camera position and world point in 3D engine

I want to determine the horizontal and vertical angle from a camera's position to a world point, with respect to the camera's forward axis.
My linear algebra is a bit rusty, but given the camera's forward, up, and right vector, for example:
camForward = [0 0 1];
camUp = [0 1 0];
camRight = [1 0 0];
And the camera position and world point, for example:
camPosition = [1 2 3];
worldPoint = [5 6 4];
The sought-after angles should be determinable by first taking the difference of the positions:
delta = worldPoint-camPosition;
Then projecting it on the camera axes using the dot products:
deltaHorizontal = dot(delta,camRight);
deltaVertical = dot(delta,camUp);
deltaDepth = dot(delta,camForward);
And finally computing angles as:
angleHorizontal = atan(deltaHorizontal/deltaDepth);
angleVertical = atan(deltaVertical/deltaDepth);
In the example case this yields ~76° for both angles, which seems reasonable; varying the positions and axes also seems to give reasonable results.
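To double-check the arithmetic, here is the same computation as a small GLM sketch (using std::atan2 rather than atan, which is my substitution to avoid the division):

#include <cmath>
#include <glm/glm.hpp>

glm::vec3 camForward(0, 0, 1), camUp(0, 1, 0), camRight(1, 0, 0);
glm::vec3 delta = glm::vec3(5, 6, 4) - glm::vec3(1, 2, 3); // (4, 4, 1)
// atan2 keeps the correct quadrant even when the depth component is <= 0.
float angleHorizontal = std::atan2(glm::dot(delta, camRight),
                                   glm::dot(delta, camForward)); // ~1.33 rad, ~76 deg
float angleVertical   = std::atan2(glm::dot(delta, camUp),
                                   glm::dot(delta, camForward)); // ~1.33 rad, ~76 deg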
Thus, if I am not getting the angles I expect, it must be because I am using an incorrect position and/or incorrect camera axes. It is worth noting that the 3D engine uses OpenGL and GLM.
I am fairly certain that the positions are correct, as moving around in the scene and inspecting positions relative to known reference points gives consistent and correct results, which leads me to believe that I am using the wrong camera axes. To get the angles I am using (the equivalent of):
glm::vec3 worldPoint = glm::unProject( glm::vec3(windowX, windowY, windowZ), viewMatrix, projectionMatrix, glm::vec4(0,0,windowWidth,windowHeight));
glm::vec3 delta = glm::vec3(worldPoint.x, worldPoint.y, worldPoint.z);
float horizontalDistance = glm::dot(delta, cameraData->right);
float verticalDistance = glm::dot(delta, cameraData->up);
float depthDistance = glm::dot(delta, cameraData->forward);
float horizontalAngle = glm::atan(horizontalDistance/depthDistance);
float verticalAngle = glm::atan(verticalDistance/depthDistance);
Each frame, forward, up, and right are read from the view matrix, viewMatrix, which in turn is produced by converting a quaternion, Q, that holds the camera rotation controlled by the mouse:
void updateView(CameraData * cameraData, MouseData * mouseData, MouseParameters * mouseParameters){
    float deltaX = mouseData->currentX - mouseData->lastX;
    float deltaY = mouseData->currentY - mouseData->lastY;
    mouseData->lastX = mouseData->currentX;
    mouseData->lastY = mouseData->currentY;
    float pitch = mouseParameters->sensitivityY * deltaY;
    float yaw = mouseParameters->sensitivityX * deltaX;
    glm::quat pitch_Q = glm::quat(glm::vec3(pitch, 0.0f, 0.0f));
    glm::quat yaw_Q = glm::quat(glm::vec3(0.0f, yaw, 0.0f));
    cameraData->Q = pitch_Q * cameraData->Q * yaw_Q;
    cameraData->Q = glm::normalize(cameraData->Q);
    glm::mat4 rotation = glm::toMat4(cameraData->Q);
    glm::mat4 translation = glm::mat4(1.0f);
    translation = glm::translate(translation, -(cameraData->position));
    cameraData->viewMatrix = rotation * translation;
    cameraData->forward = (cameraData->viewMatrix)[2];
    cameraData->up = (cameraData->viewMatrix)[1];
    cameraData->right = (cameraData->viewMatrix)[0];
}
However, something goes wrong, and the correct angles are seemingly only produced while looking along, or perpendicular to, the world z-axis ([0 0 1]). Where am I mistaken?

Determining texture co-ordinates across a geodesic sphere

I've generated a geodesic sphere for OpenGL rendering, following a question on here, and I'm trying to put a texture on it. I came up with the following code by reversing an algorithm for a point on a sphere:
//The complete sphere equation is as follows:
//  x = r * sin(s) * sin(t)
//  y = r * cos(t)
//  z = r * cos(s) * sin(t)
float radius = 1.0f;
//T (height/latitude) angle
float angleT = acos(point.y / radius);
//S (longitude) angle
float angleS = (asin(point.x / (radius * sin(angleT)))) + (1.0f * M_PI);
float angleS2 = (acos(point.z / (radius * sin(angleT)))) + (1.0f * M_PI);
//Angle can be 0-PI (0-180 degs), divide by this to get 0-1
angleT = angleT / M_PI;
//Angle can be 0-2PI (0-360 degs)
angleS = angleS / (M_PI * 2);
angleS2 = angleS2 / (M_PI * 2);
//Flip the y co-ord
float yTex = 1 - angleT;
float xTex = 0.0f;
//I have found that angleS2 is valid 0.5-1.0, and angleS is valid 0.3-0.5
if (angleS < 0.5f)
{
    xTex = angleS;
}
else
{
    xTex = angleS2;
}
return glm::vec2(xTex, yTex);
As you can see, I've found that both versions of calculating the S angle have limited valid ranges.
float angleS = ( asin(point.x / (radius * sin(angleT)))) + (1.0f* M_PI);
float angleS2 =( acos(point.z / (radius * sin(angleT)))) + (1.0f * M_PI);
S1 gives valid answers for x texture co-ords between 0.3 and 0.5, and S2 gives valid answers between 0.5 and 1.0 (conversion to co-ords omitted above but present in the first code example). Why is neither formula giving me valid answers below 0.3?
Thanks
Will
(Images omitted: the correct side; the weird border between working and not, probably caused by OpenGL's interpolation; the reversed section; the image being used; and, from an edit, the seam.)
The equations you use to calculate the longitude angle are not correct, given what you are trying to accomplish. For the longitude angle you need the full 0-360 degree range, which cannot be obtained through the asin or acos functions, because asin only returns results between -90 and 90 degrees and acos only between 0 and 180 degrees. You can, however, use the atan2 function, which returns values from the correct interval. The code I've been working with for the past 2 years is the following:
float longitude = atan2f(point.x, point.z) + (float)M_PI;
This equation will map the horizontal center of the texture to the direction of the positive Z axis. If you want the horizontal center of the texture to be in the direction of the positive X axis, add M_PI / 2.0.
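Putting the two angles together, the whole texture-coordinate computation could look like this (a sketch under the same conventions as above, assuming a unit sphere; the function name sphereUV is mine):

#include <cmath>
#include <glm/glm.hpp>

glm::vec2 sphereUV(const glm::vec3& point) {
    // Longitude: atan2 covers the full 0..2*PI range, mapped to u in 0..1.
    float longitude = std::atan2(point.x, point.z) + (float)M_PI;
    float u = longitude / (2.0f * (float)M_PI);
    // Latitude: acos gives 0..PI, mapped to v in 0..1 and flipped for the texture.
    float latitude = std::acos(point.y); // radius assumed to be 1
    float v = 1.0f - latitude / (float)M_PI;
    return glm::vec2(u, v);
}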

Picking Ray is inaccurate

I'm trying to implement a picking ray via instructions from this website.
Right now I basically only want to be able to click on the ground to order my little figure to walk towards this point.
Since my ground plane is flat, non-rotated, and non-translated, I'd have to find the x and z coordinates of my picking ray when y hits 0.
So far so good, this is what I've come up with:
//some constants
float HEIGHT = 768.f;
float LENGTH = 1024.f;
float fovy = 45.f;
float nearClip = 0.1f;
//mouse position on screen
float x = MouseX;
float y = HEIGHT - MouseY;
//GetView() returns the viewing direction, not the lookAt point.
glm::vec3 view = cam->GetView();
glm::normalize(view);
glm::vec3 h = glm::cross(view, glm::vec3(0,1,0) ); //cameraUp
glm::normalize(h);
glm::vec3 v = glm::cross(h, view);
glm::normalize(v);
// convert fovy to radians
float rad = fovy * 3.14 / 180.f;
float vLength = tan(rad/2) * nearClip; //nearClippingPlaneDistance
float hLength = vLength * (LENGTH/HEIGHT);
v *= vLength;
h *= hLength;
// translate mouse coordinates so that the origin lies in the center
// of the view port
x -= LENGTH / 2.f;
y -= HEIGHT / 2.f;
// scale mouse coordinates so that half the view port width and height
// becomes 1
x /= (LENGTH/2.f);
y /= (HEIGHT/2.f);
glm::vec3 cameraPos = cam->GetPosition();
// linear combination to compute intersection of picking ray with
// view port plane
glm::vec3 pos = cameraPos + (view*nearClip) + (h*x) + (v*y);
// compute direction of picking ray by subtracting intersection point
// with camera position
glm::vec3 dir = pos - cameraPos;
//Get intersection between ray and the ground plane
pos -= (dir * (pos.y/dir.y));
At this point I'd expect pos to be the point where my picking ray hits my ground plane.
When I try it, however, I get something like this (screenshot omitted; the mouse cursor wasn't recorded).
It's hard to see since the ground has no texture, but the camera is tilted, like in most RTS games.
My pitiful attempt to model a remotely human-looking being in Blender marks the point where the intersection happened according to my calculation.
So it seems that the transformation from view to dir went wrong somewhere, and my ray ended up pointing in the wrong direction.
The gap between the calculated position and the actual position increases the farther I move my mouse away from the center of the screen.
I've found out that:
HEIGHT and LENGTH aren't accurate. Since Windows cuts away a few pixels for borders, it'd be more accurate to use 1006×728 as the window resolution. I guess that could account for small discrepancies.
If I increase fovy from 45 to about 78 I get a fairly accurate ray. So maybe there's something wrong with what I use as fovy. I'm explicitly calling glm::perspective(45.f, 1.38f, 0.1f, 500.f) (fovy, aspect ratio, fNear, fFar respectively).
So here's where I am lost. What do I have to do in order to get an accurate ray?
PS: I know that there are functions and libraries that have this implemented, but I try to stay away from these things for learning purposes.
Here's working code that does cursor to 3D conversion using depth buffer info:
glGetIntegerv(GL_VIEWPORT, @fViewport);
glGetDoublev(GL_PROJECTION_MATRIX, @fProjection);
glGetDoublev(GL_MODELVIEW_MATRIX, @fModelview);
//fViewport already contains viewport offsets
PosX := X;
PosY := ScreenY - Y; //In OpenGL Y axis is inverted and starts from bottom
glReadPixels(PosX, PosY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, @vz);
gluUnProject(PosX, PosY, vz, fModelview, fProjection, fViewport, @wx, @wy, @wz);
XYZ.X := wx;
XYZ.Y := wy;
XYZ.Z := wz;
If you only test the ray/plane intersection, this is the second part without the depth buffer:
gluUnProject(PosX, PosY, 0, fModelview, fProjection, fViewport, @x1, @y1, @z1); //Near
gluUnProject(PosX, PosY, 1, fModelview, fProjection, fViewport, @x2, @y2, @z2); //Far
//No intersection
Result := False;
XYZ.X := 0;
XYZ.Y := 0;
XYZ.Z := aZ;
if z2 < z1 then
  SwapFloat(z1, z2);
if (z1 <> z2) and InRange(aZ, z1, z2) then
begin
  D := 1 - (aZ - z1) / (z2 - z1);
  XYZ.X := Lerp(x1, x2, D);
  XYZ.Y := Lerp(y1, y2, D);
  Result := True;
end;
I find it rather different from what you are doing, but maybe that will make more sense.
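For comparison, roughly the same near/far unproject-and-intersect test with the GLM functions already in use in the question might look like this (a sketch; viewMatrix and projectionMatrix stand for whatever your camera provides, and LENGTH, HEIGHT, MouseX and MouseY are the values from above):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Unproject the cursor on the near (winZ = 0) and far (winZ = 1) planes to get
// a world-space ray, then intersect it with the ground plane y = 0.
glm::vec4 viewport(0.0f, 0.0f, LENGTH, HEIGHT);
glm::vec3 nearPt = glm::unProject(glm::vec3(MouseX, HEIGHT - MouseY, 0.0f),
                                  viewMatrix, projectionMatrix, viewport);
glm::vec3 farPt  = glm::unProject(glm::vec3(MouseX, HEIGHT - MouseY, 1.0f),
                                  viewMatrix, projectionMatrix, viewport);
glm::vec3 dir = farPt - nearPt;
if (std::fabs(dir.y) > 1e-6f) {           // ray not parallel to the ground plane
    float t = -nearPt.y / dir.y;          // solve nearPt.y + t * dir.y = 0
    if (t >= 0.0f) {
        glm::vec3 hit = nearPt + t * dir; // where the picking ray meets y = 0
    }
}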

Calculating AABB from Box (center, halfSize, rotation)

I want to calculate AABB (axis aligned bounding box) from my Box class.
The box class:
Box {
    Point3D center;   //x,y,z
    Point3D halfSize; //x,y,z
    Point3D rotation; //x,y,z rotation
};
The AABB class (Box, but without rotation):
BoundingBox {
    Point3D center;   //x,y,z
    Point3D halfSize; //x,y,z
};
Of course, when rotation = (0,0,0), BoundingBox = Box. But how do I calculate the minimum BoundingBox that contains everything from Box when rotation = (rx,ry,rz)?
If somebody asks: the rotation is in radians, and I use it in the DirectX matrix rotations:
XMMATRIX rotX = XMMatrixRotationX( rotation.getX() );
XMMATRIX rotY = XMMatrixRotationY( rotation.getY() );
XMMATRIX rotZ = XMMatrixRotationZ( rotation.getZ() );
XMMATRIX scale = XMMatrixScaling( 1.0f, 1.0f, 1.0f );
XMMATRIX translate = XMMatrixTranslation( center.getX(), center.getY(), center.getZ() );
XMMATRIX worldM = scale * rotX * rotY * rotZ * translate;
You can use matrix rotations in Cartesian coordinates. A rotation of an angle A around the x axis is defined by the matrix:
        1     0        0
Rx(A) = 0   cos(A)  -sin(A)
        0   sin(A)   cos(A)
If you do the same for an angle B around y and C around z you have:
         cos(B)  0  sin(B)
Ry(B) =    0     1    0
        -sin(B)  0  cos(B)
and
        cos(C)  -sin(C)  0
Rz(C) = sin(C)   cos(C)  0
          0        0     1
With this you can calculate (even analytically) the final rotation matrix. Let's say that you rotate (in that order) around z, then around y, then around x (note that the axes x, y, z are fixed in space; they do not rotate with each rotation). The final matrix is the product:
R = Rx(A) Ry(B) Rz(C)
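If you happen to use GLM on the C++ side, I believe this exact product is also available through the (experimental) euler_angles extension, which composes in this X * Y * Z order:

#include <glm/gtx/euler_angles.hpp>

glm::mat4 R = glm::eulerAngleXYZ(A, B, C); // Rx(A) * Ry(B) * Rz(C)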
Now you can construct vectors with the positions of the eight corners and apply the full rotation matrix to these vectors. This will give the positions of the eight corners in the rotated version. Then, along each axis, take the largest distance between opposing corners, and you have the new bounding-box dimensions.
Well, you should apply the rotation to the vertices of the original bounding box (for the purposes of the calculation), then iterate over all of them to find the min and max x, y, and z of all the vertices. Those define your axis-aligned bounding box; a sketch follows below. That's it in its most basic form; you should try to figure out the details. I hope that's a good start. :)
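A minimal sketch of that corner-based approach (my own illustration, using glm::vec3 in place of Point3D and taking the combined rotation matrix R as an input):

#include <cfloat>
#include <glm/glm.hpp>

struct BoundingBox { glm::vec3 center, halfSize; };

// R is the combined rotation, e.g. Rx(A) * Ry(B) * Rz(C) from the other answer.
BoundingBox aabbFromBox(glm::vec3 center, glm::vec3 halfSize, const glm::mat3& R) {
    glm::vec3 mn(FLT_MAX), mx(-FLT_MAX);
    for (int i = 0; i < 8; ++i) {
        // Enumerate the eight corners via the three sign bits of i.
        glm::vec3 corner((i & 1) ? halfSize.x : -halfSize.x,
                         (i & 2) ? halfSize.y : -halfSize.y,
                         (i & 4) ? halfSize.z : -halfSize.z);
        glm::vec3 p = R * corner;   // rotate the corner about the box center
        mn = glm::min(mn, p);
        mx = glm::max(mx, p);
    }
    return { center + 0.5f * (mn + mx), 0.5f * (mx - mn) };
}

Because the corners come in opposite pairs, this can be collapsed further: the new half-size is simply the componentwise absolute value of R multiplied by the original halfSize.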

The order of transformed-matrix multiplication in OpenGL

Here I have a point P(x,y,z,1). I rotate P by a known angle around a known vector to the point P1(x1,y1,z1,1). Then, according to P1's coordinates, I can translate P1 to the point P2(0,0,z1,1). Now I want to get a single matrix that transforms P to P2 directly. So, my code is below:
GLfloat P[4] ={5,-0.6,3.8,1};
GLfloat m[16]; //This is the rotation matrix to calculate P1 from P
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glRotatef(theta, v1,v2,v3);//theta and (v1,v2,v3) is constant
glGetFloatv(GL_MODELVIEW_MATRIX, m);
glPopMatrix();
//calculate P1 from P and matrix m
GLfloat P1[4];
P1[0] = P[0]*m[0]+P[1]*m[4]+P[2]*m[8]+m[12];
P1[1] = P[0]*m[1]+P[1]*m[5]+P[2]*m[9]+m[13];
P1[2] = P[0]*m[2]+P[1]*m[6]+P[2]*m[10]+m[14];
P1[3] = P[0]*m[3]+P[1]*m[7]+P[2]*m[11]+m[15];
//after calculation P1 = {0.15,-3.51,-5.24,1}
GLfloat m1[16]; //multiplying P by m1 should give P2 directly
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glRotatef(theta, v1,v2,v3);//theta and (v1,v2,v3) is constant as above
glTranslatef(-P1[0], -P1[1], 0);// after rotation we get P1, then translation to P2
glGetFloatv(GL_MODELVIEW_MATRIX, m1);
glPopMatrix();
//calculate P2 from P and matrix m1
GLfloat P2[4];
P2[0] = P[0]*m1[0]+P[1]*m1[4]+P[2]*m1[8]+m1[12];
P2[1] = P[0]*m1[1]+P[1]*m1[5]+P[2]*m1[9]+m1[13];
P2[2] = P[0]*m1[2]+P[1]*m1[6]+P[2]*m1[10]+m1[14];
P2[3] = P[0]*m1[3]+P[1]*m1[7]+P[2]*m1[11]+m1[15];
//after this calculation, I expect P2 to be (0,0,-5.24), that is, (0,0,P1[2])
//however, the real result is not what I expect! Where did I go wrong???
Actually, I analyzed this problem and found that the order of matrix multiplication is weird.
After I do glRotatef(theta, v1,v2,v3), I get the matrix m. That's OK.
m is
m[0]  m[1]  m[2]   0
m[4]  m[5]  m[6]   0
m[8]  m[9]  m[10]  0
 0     0     0     1
And if I do glTranslatef(-P1[0], -P1[1], 0) alone, I get the translation matrix m'.
m' is
   1       0     0   0
   0       1     0   0
   0       0     1   0
-P1[0]  -P1[1]   0   1
So I think that after doing glRotatef(theta, v1,v2,v3) and then glTranslatef(-P1[0], -P1[1], 0),
m1 should be m*m', that is
 m[0]   m[1]   m[2]   0
 m[4]   m[5]   m[6]   0
 m[8]   m[9]   m[10]  0
-P1[0] -P1[1]    0    1
However, in the actual program m1 = m'*m, so P2 is not my expected result!
I know that doing the translation first and then the rotation gives the result I want, but why can't I do the rotation first?
Rotation and translation do not commute. (Matrix multiplication in general is not commutative, but in some special cases, such as translations with translations, 2D rotations with 2D rotations, scalings with scalings, and one rotation mixed with uniform scalings, the actions/matrices do commute.)
If you rotate first, then the translation happens along the directions of the rotated coordinate axes.
As an example, translating by (1, 0) and then rotating by 90 degrees puts the origin at (1, 0) (assuming you start from the identity matrix), whereas rotating by 90 degrees first and then translating puts the origin at (0, 1).
On the mathematical side, since this is a coordinate transformation, any new transformation is left-multiplied onto the current coordinate-transformation matrix. E.g., if T is the matrix of a new transformation action and C is the current transformation matrix, then after the transformation the current transformation matrix becomes TC.
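A quick numerical check of the example above with GLM (column-vector convention, so the matrix nearest the point applies first):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(1, 0, 0));
glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0, 0, 1));
glm::vec4 origin(0, 0, 0, 1);
// glTranslatef then glRotatef corresponds to T * R: origin -> (1, 0, 0).
glm::vec4 a = T * R * origin;
// glRotatef then glTranslatef corresponds to R * T: origin -> (0, 1, 0).
glm::vec4 b = R * T * origin;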