Whenever I use my camera's focal length in the OpenCV POSIT function I get NaN for all values, but when I use a higher value I do not get this error (the numbers are not correct then, though). My model is a pyramid: three points in one plane and one in the center above them. Does anyone know how to handle this, or have some good links?
int depth = -18;
//create the model points
std::vector<CvPoint3D32f> modelPoints;
modelPoints.push_back(cvPoint3D32f( 0.0f, 0.0f, 0.0f)); //(the first must be 0,0,0) middle center diode
modelPoints.push_back(cvPoint3D32f(-18.0f, -30.0f, depth)); //bottom left diode
modelPoints.push_back(cvPoint3D32f( 18.0f, -30.0f, depth)); //bottom right diode
modelPoints.push_back(cvPoint3D32f( 0.0f, 30.0f, depth)); //top center diode
system("cls");
//create the image points
std::vector<CvPoint2D32f> srcImagePoints;
cout << "Source Image Points:" << endl;
for (size_t i = 0; i < circles.size(); i++)
{
    cout << "x: " << cvRound(circles[i][0]) << " y: " << cvRound(circles[i][1]) << endl;
    // Pixel coordinates of the points on the image:
    // 228, 278
    // 291, 346
    // 371, 346
    // 228, 206
    srcImagePoints.push_back(cvPoint2D32f(cvRound(circles[i][0]), cvRound(circles[i][1])));
}
cout << endl;
//create the POSIT object with the model points
CvPOSITObject* positObject = cvCreatePOSITObject( &modelPoints[0], (int)modelPoints.size() );
//calculate the orientation
float* rotation_matrix = new float[9];
float* translation_vector = new float[3];
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1.0e-4f);
double FOCAL_LENGTH = 16; //16.0; //760.0;
cvPOSIT( positObject, &srcImagePoints[0], FOCAL_LENGTH, criteria, rotation_matrix, translation_vector );
//calculate rotation angles
double beta = atan2((double)(-rotation_matrix[2]), (double)(sqrt(pow(rotation_matrix[0], 2) + pow(rotation_matrix[3], 2))));
double alpha = atan2((rotation_matrix[3]/cos(beta)),(rotation_matrix[0]/cos(beta)));
double gamma = atan2((rotation_matrix[7]/cos(beta)),(rotation_matrix[8]/cos(beta)));
I'm trying to load a triangle mesh from an .off file and show it centered at the origin and scaled to fit in the unit cube, but for some reason I'm off by a large factor. The way I'm doing this is by finding the extrema of the mesh and using them to offset the surface by that amount.
float avgX = (maxX + minX) / 2;
float avgY = (maxY + minY) / 2;
float avgZ = (maxZ + minZ) / 2;
Vector3f center(avgX, avgY, avgZ);
Vector3f offset = Vector3f(0, 0, 0) - center;
Translation3f translation(offset);
cout << "offset is: " << endl << offset << endl;
double d_theta = (M_PI / 180);
AngleAxisf rotation(d_theta, Vector3f(0, 0, 1));
float scaleX = (float) 1 / (abs(maxX - minX));
float scaleY = (float) 1 / (abs(maxY - minY));
float scaleZ = (float) 1 / (abs(maxZ - minZ));
AlignedScaling3f scale = AlignedScaling3f(scaleX, scaleY, scaleZ);
I then put it into a vector of surfaces with
Vector3f translatedCenter = translation * rotation * scale * center;
VertexBufferObject VBO;
VBO.init();
VBO.update(Vertices);
program.bindVertexAttribArray("position", VBO);
VertexBufferObject VBO_N;
VBO_N.init();
VBO_N.update(FlatNormals);
program.bindVertexAttribArray("normals", VBO_N);
cout << "updated normals" << endl;
VertexBufferObject VBO_C;
VBO_C.init();
VBO_C.update(C);
program.bindVertexAttribArray("color",VBO_C);
cout << "updated color " << endl;
Surface* s = new Surface(VBO, Vertices, translation, rotation, scale, percentScale, translatedCenter, SmoothNormals, FlatNormals, C);
And I pass it to the Vertex Shader as "model"
Affine3f model = s->getTranslation() * s->getRotation() * s->getScale();
glUniformMatrix4fv(program.uniform("model"), 1, GL_FALSE, model.data());
This is all being done using the Eigen library (https://eigen.tuxfamily.org/dox/group__TutorialGeometry.html#TutorialGeoTransform)
No matter what I try I'm off by a little bit. What am I doing wrong?
Swap translation and rotation:
Affine3f model = s->getRotation() * s->getTranslation() * s->getScale();
Note, the translation moves the center of the object to the center of the view. After that, the rotation matrix rotates around this center.
If you don't have any projection matrix, then the view space is the normalized device space, where each coordinate is in the range [-1, 1]. This means the length of each side is 2 = 1 - (-1). You have to respect this when you calculate the scale:
float scaleX = (float) 2 / (abs(maxX - minX));
float scaleY = (float) 2 / (abs(maxY - minY));
float scaleZ = (float) 2 / (abs(maxZ - minZ));
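A quick numeric check of that factor (a sketch, not the poster's code): centering first and then scaling by 2 / extent maps the bounding box onto [-1, 1] on each axis.

float minX = 2.0f, maxX = 6.0f;          // hypothetical extrema
float centerX = (maxX + minX) / 2;       // 4
float scaleX = 2 / (maxX - minX);        // 0.5
float left  = (minX - centerX) * scaleX; // -1
float right = (maxX - centerX) * scaleX; // +1

With the factor 1, the same corners would land on ±0.5, so the mesh would fill only half of the view, which matches being "off by a large factor".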
I am trying to get the 2d world coordinates on a 2D plane (Z = 0) where I clicked with the mouse in a 3D scene. I figured out that ray-casting would probably be the best method.
This is code that I scavenged from the Internet:
glm::vec3 Drawer::MouseToWorldCoords(glm::vec2 coords)
{
    // getting camera position
    glm::mat3 rotMat(view);
    glm::vec3 d(view[3]);
    glm::vec3 retVec = -d * rotMat;
    //std::cout << " x " << retVec.x << " y " << retVec.y << " z " << retVec.z << std::endl;
    // getting mouse coords
    float x = 2.0 * coords.x / WINDOW_WIDTH - 1;
    float y = -2.0 * coords.y / WINDOW_HEIGHT + 1;
    float z = -1.0f;
    // raycasting
    glm::vec4 ray(x, y, z, 1.0f);
    glm::vec4 ray_eye = inverse(proj) * ray;
    ray_eye = glm::vec4(ray_eye.x, ray_eye.y, 1.0, 0.0);
    glm::vec3 ray_world = glm::vec3((glm::inverse(view) * ray_eye));
    ray_world = glm::normalize(ray_world);
    // intersecting plane with ray
    glm::vec3 ba = retVec - ray_world;
    float nDotA = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), ray_world);
    float nDotBA = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), ba);
    glm::vec3 intersect = (ray_world + (((0.0f - nDotA) / nDotBA) * ba));
    return glm::vec3(-intersect.x * 10.0f, -intersect.y * 10.0f, 0.0f);
}
This snippet of code does not work the way it should, though, as you can see in the image (screenshot omitted): the program simply spawns cubes at the locations returned by the function. To produce this result I clicked only on the edges of the screen (except for the two in the middle, of course).
Edit 3: My problems were in completely different functions than I expected. I'll let the code stay, maybe it helps someone :) (and don't forget to debug!).
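For reference, a common eye-space formulation of this kind of ray cast (a sketch using glm, not the poster's fixed code) sets z = -1 and w = 0 when turning the clip-space point into an eye-space direction, then intersects the world-space ray with the plane Z = 0 directly. It assumes <glm/glm.hpp> and the WINDOW_WIDTH/WINDOW_HEIGHT constants from the question:

glm::vec3 MouseRayOnPlaneZ0(glm::vec2 coords, const glm::mat4& view, const glm::mat4& proj)
{
    // normalized device coordinates of the click
    float x =  2.0f * coords.x / WINDOW_WIDTH  - 1.0f;
    float y = -2.0f * coords.y / WINDOW_HEIGHT + 1.0f;
    glm::vec4 ray_clip(x, y, -1.0f, 1.0f);
    // clip space -> eye space; keep only the direction (w = 0)
    glm::vec4 ray_eye = glm::inverse(proj) * ray_clip;
    ray_eye = glm::vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
    // eye space -> world space
    glm::vec3 dir = glm::normalize(glm::vec3(glm::inverse(view) * ray_eye));
    glm::vec3 origin = glm::vec3(glm::inverse(view)[3]); // camera position in world space
    // plane Z = 0: origin.z + t * dir.z = 0 (assumes the ray is not parallel to the plane)
    float t = -origin.z / dir.z;
    return origin + t * dir;
}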
I'm trying to find the point where a line intersects a triangle.
Current state: I get random intersections even if the mouse is not over the floor, and the result depends on the camera view (lookAt matrix).
Steps
Unproject mouse coordinates
Check line / triangle intersection
Unproject mouse coordinates
I checked the source of glm::unproject and gluUnproject and created this function.
pixel::CVector3 pixel::CVector::unproject(
    CVector2 inPosition,
    pixel::CShape window,
    pixel::matrix4 projectionMatrix,
    pixel::matrix4 modelViewMatrix,
    float depth
)
{
    // transformation of normalized coordinates
    CVector4 inVector;
    inVector.x = (2.0f * inPosition.x) / window.width - 1.0f;
    inVector.y = (2.0f * inPosition.y) / window.height - 1.0f;
    inVector.z = 2.0f * depth - 1.0f;
    inVector.w = 1.0f;
    // multiply inverted matrix with vector
    CVector4 rayWorld = pixel::CVector::multMat4Vec4(pixel::CMatrix::invertMatrix(projectionMatrix * modelViewMatrix), inVector);
    CVector3 result;
    result.x = rayWorld.x / rayWorld.w;
    result.y = rayWorld.y / rayWorld.w;
    result.z = rayWorld.z / rayWorld.w;
    return result;
}
Checking intersection
pixel::CVector3 pixel::Ray::intersection(
    Ray ray,
    pixel::CVector3 v0,
    pixel::CVector3 v1,
    pixel::CVector3 v2
)
{
    // compute the two triangle edges and the vector perpendicular
    // to the ray direction and the second edge
    CVector3 a, b, n;
    a = v1 - v0;
    b = v2 - v0;
    n = ray.direction.cross(b);
    // find determinant
    float det = a.dot(n);
    if (det < 0.000001f)
    {
        std::cout << "Ray intersecting with backface triangles\n";
        return pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
    }
    det = 1.0f / det;
    // calculate distance from vertex0 to ray origin
    CVector3 s = ray.origin - v0;
    float u = det * s.dot(n);
    if (u < -0.000001f || u > 1.f + 0.000001f)
    {
        std::cout << "U: Intersection outside of the triangle!\n";
        return pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
    }
    CVector3 r = s.cross(a);
    float v = det * ray.direction.dot(r);
    if (v < -0.000001f || u + v > 1.f + 0.000001f)
    {
        std::cout << "V/U: Intersection outside of triangle!\n";
        return pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
    }
    // distance from the ray origin to the triangle
    det = det * b.dot(r);
    std::cout << "T: " << det << "\n";
    CVector3 endPosition;
    endPosition.x = ray.origin.x + (ray.direction.x * det);
    endPosition.y = ray.origin.y + (ray.direction.y * det);
    endPosition.z = ray.origin.z + (ray.direction.z * det);
    return endPosition;
}
Usage
if (event.button.button == SDL_BUTTON_RIGHT)
{
    camera->setCameraActive();
    float mx = event.motion.x;
    float my = window->info.height - event.motion.y;
    // ray casting
    pixel::Ray ray;
    std::cout << "\n\n";
    // near
    pixel::CVector3 rayNear = pixel::CVector::unproject(
        pixel::CVector::vector2(mx, my),
        pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
        camera->camInfo.currentProjection,
        camera->camInfo.currentView,
        1.0f
    );
    // far
    pixel::CVector3 rayFar = pixel::CVector::unproject(
        pixel::CVector::vector2(mx, my),
        pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
        camera->camInfo.currentProjection,
        camera->camInfo.currentView,
        0.0f
    );
    // normalized direction results in the same behavior
    ray.origin = cameraPosition;
    ray.direction = pixel::CVector::normalize(rayFar - rayNear);
    std::cout << "Raycast\n";
    std::cout << "Mouse Position: " << mx << " - " << my << "\n";
    std::cout << "Camera Position: " << ray.origin.x << " - " << ray.origin.y << " - " << ray.origin.z << "\n";
    std::cout << "Ray direction: " << ray.direction.x << " - " << ray.direction.y << " - " << ray.direction.z << "\n";
    pixel::CVector3 vertOne = pixel::CVector::vector3(0.0f, 0.0f, -300.0f);
    pixel::CVector3 vertTwo = pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
    pixel::CVector3 vertThree = pixel::CVector::vector3(300.0f, 0.0f, 0.0f);
    pixel::CVector3 vertFour = pixel::CVector::vector3(300.0f, 0.0f, -300.0f);
    pixel::CVector3 rayHit = pixel::Ray::intersection(ray, vertOne, vertTwo, vertThree);
    pixel::CVector3 rayHit2 = pixel::Ray::intersection(ray, vertThree, vertFour, vertOne);
    std::cout << "Ray hit: " << rayHit.x << " - " << rayHit.y << " - " << rayHit.z << "\n";
    std::cout << "Ray hit: " << rayHit2.x << " - " << rayHit2.y << " - " << rayHit2.z << "\n";
    std::cout << "--------------------\n";
    towerHouse->modelMatrix = pixel::CMatrix::translateMatrix(rayHit);
}
Output
As I've never used glm::unproject or gluUnproject, I don't know what the normal output should look like, but I'm getting results like:
Ray direction: 0.109035 -0.0380502 0.0114562
That doesn't look right to me, but checking my code against the sources mentioned above, I don't see the mistake(s).
Ray intersection works in some special cases (certain camera rotations), and even then I get intersections when I don't click on the floor. The same goes for the intersection output, which varies from backface hits to "outside of the triangle".
All of these errors suggest that the main source of the problem is the unprojection.
Any hints in the right direction?
This is nowhere close to an answer to this question, but this is too complicated to explain in comments or chat.
First of all:
// near
pixel::CVector3 rayNear = pixel::CVector::raycast(
    pixel::CVector::vector2(mx, my),
    pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
    camera->camInfo.currentProjection,
    camera->camInfo.currentView,
    1.0f // WRONG
);
// far
pixel::CVector3 rayFar = pixel::CVector::raycast(
    pixel::CVector::vector2(mx, my),
    pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
    camera->camInfo.currentProjection,
    camera->camInfo.currentView,
    0.0f // WRONG
);
Near is 0.0 in window-space, and far is 1.0 (depends on the depth range, but if you changed the depth range you should already know this).
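In other words, the two depth arguments should be swapped (a sketch of the corrected calls, all other arguments unchanged):

pixel::CVector3 rayNear = pixel::CVector::raycast(..., 0.0f); // near plane
pixel::CVector3 rayFar  = pixel::CVector::raycast(..., 1.0f); // far plane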
In your ray cast function, you have:
CVector3 result;
result.x = rayWorld.x / rayWorld.w;
result.y = rayWorld.y / rayWorld.w;
result.z = rayWorld.z / rayWorld.w;
There is a chance that w == 0.0, and the result is not yet a ray at this time... it is a position in object-space (not world). Generally you are always going to be working with well-behaved matrices, but if you ever look at a formal implementation of UnProject (...) you will notice that they handle the case where w == 0.0 with a special return value or by setting a status flag.
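A sketch of what such a guard could look like with the question's types (the tolerance is an arbitrary choice, and std::fabs needs <cmath>):

if (std::fabs(rayWorld.w) < 1e-6f)
{
    // degenerate unprojection: the point is at infinity; formal UnProject
    // implementations signal failure here instead of dividing
    return pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
}
CVector3 result;
result.x = rayWorld.x / rayWorld.w;
result.y = rayWorld.y / rayWorld.w;
result.z = rayWorld.z / rayWorld.w;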
pixel::CVector3 vertOne = pixel::CVector::vector3(0.0f, 0.0f, -300.0f);
pixel::CVector3 vertTwo = pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
pixel::CVector3 vertThree = pixel::CVector::vector3(300.0f, 0.0f, 0.0f);
pixel::CVector3 vertFour = pixel::CVector::vector3(300.0f, 0.0f, -300.0f);
What coordinate space are these verts in? Presumably object-space, which means that if you cast a ray from your camera's eye point (defined in world-space) through a point on your far plane, and then test for intersection against a triangle in object-space, more often than not you will miss. This is because the origin, scale and rotation of each of these spaces may differ. You need to transform those points into world-space (your original code had a floor->modelMatrix that would work well for this purpose) before you try this test.
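A sketch of that transformation using the helpers already present in the question (assuming floor->modelMatrix holds the floor's object-to-world transform):

CVector4 objVert;
objVert.x = 0.0f; objVert.y = 0.0f; objVert.z = -300.0f;
objVert.w = 1.0f; // positions must be homogenised with w = 1
CVector4 worldVert = pixel::CVector::multMat4Vec4(floor->modelMatrix, objVert);
pixel::CVector3 vertOne = pixel::CVector::vector3(worldVert.x, worldVert.y, worldVert.z);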
I tracked down the problem and fixed the bugs: I had incorrect matrix*matrix and matrix*vector multiplication operators.
So I've got a little program that draws a few spheres, and I then attempt to right-click on them. Upon right-clicking, it draws a line between the near and far planes under the mouse. However, it gives strange results, such as the line being off by quite a long way. The direction of the line is right, but the line might be about 10 off to the left along X, or 5 to the right along Y (those are random examples).
Here's my code:
Positioning the camera
gluLookAt(mouse.getScrollY(), 0.0f, 0.0f,
0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f);
glRotated(mouse.getAngleV(), 0.0f, -1.0f, 0.0f);
glRotated(mouse.getAngleH(), 0.0f, 0.0f, -1.0f);
mouse.getScrollY() is simply a value based on how far the camera is scrolled back from the origin.
Obtaining the coordinates
void Mouse::GetGLPos(int x, int y)
{
    // init vars:
    GLint viewport[4];
    GLdouble modelview[16];
    GLdouble projection[16];
    GLfloat winX, winY;
    GLdouble posX, posY, posZ;
    GLdouble FposX, FposY, FposZ;
    // get GL specs
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);   // get modelview matrix
    glGetDoublev(GL_PROJECTION_MATRIX, projection); // get projection matrix
    glGetIntegerv(GL_VIEWPORT, viewport);           // get viewport values
    // calculate the GL mouse position
    winX = (float)x;
    winY = (float)viewport[3] - (float)y;
    std::cout << "X " << winX << " Y " << winY << endl;
    gluUnProject(winX, winY, 0.0, modelview, projection, viewport, &posX, &posY, &posZ);
    gluUnProject(winX, winY, 1.0, modelview, projection, viewport, &FposX, &FposY, &FposZ);
    std::cout << "Near positions:" << posX << " | " << posY << " | " << posZ << endl;
    std::cout << " Far positions:" << FposX << " | " << FposY << " | " << FposZ << endl << endl;
    for (int i = 0; i <= 15; i++)
    {
        cout << modelview[i] << endl;
    }
    if (counter == 5)
    {
        counter = 0;
    }
    LinestoreNear[counter + 1] = Vec3(posX, posY, posZ);
    LinestoreFar[counter + 1] = Vec3(FposX, FposY, FposZ);
    counter++;
    mouseOgl[0] = posX;
    mouseOgl[1] = posY;
    mouseOgl[2] = posZ;
}
The x and y values passed in are simply the mouse's x and y coordinates on the screen ((328, 657), for example).
And finally, just in case, here is the code that draws the line:
glDisable(GL_LIGHTING);
for (int i = 0; i <= 4 - 1; i++)
{
    glColor3f(1.0f, 1.0f, 1.0f);
    glBegin(GL_LINES);
    glVertex3f(mouse.LinestoreNear[i].a, mouse.LinestoreNear[i].b, mouse.LinestoreNear[i].c);
    glVertex3f(mouse.LinestoreFar[i].a, mouse.LinestoreFar[i].b, mouse.LinestoreFar[i].c);
    glEnd();
}
glEnable(GL_LIGHTING);
Here's a very brief overview of the process you need to use.
To get correct world coordinates, you need a 4x4 model matrix for each object in your scene. The model matrix contains all the transformations that convert model coordinates to/from world coordinates.
I.e. every time you call glTranslate, you have to transform the model matrix as follows:
T = [ 1 0 0 x ]
[ 0 1 0 y ]
[ 0 0 1 z ]
[ 0 0 0 1 ]
M' = M * T
You can then use the transformed model matrix to obtain world-space coordinates. Homogenise the coords first, however (or check that they already are), i.e. just set w = 1 if they are not already homogeneous coordinates.
If you do not convert the coordinates between object space and world space then you will get the behaviour you are experiencing, and it is easy to do it at the wrong stage as well, so plan out the required steps and you should be fine:)
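A minimal sketch of tracking that update in code, assuming OpenGL's column-major layout (the helper name is hypothetical, not part of GL or GLU):

// equivalent to M' = M * T for the T shown above;
// columns of M are stored at offsets 0, 4, 8 and 12
void translateModelMatrix(float M[16], float x, float y, float z)
{
    for (int i = 0; i < 4; ++i)
        M[12 + i] += M[i] * x + M[4 + i] * y + M[8 + i] * z;
}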
Hope this helps.
Finally got time to verify your results. It works exactly as it should.
With LookAt you specify the camera position and rotation matrices (indirectly, via the 'up' and 'view point' vectors, but it still results in these transformations; GLU calculates the matrices for you). Right after that, you specify two additional rotation matrices, which means you are rotating the world around the zero point (which has already been shifted by LookAt). To you, this kind of rotation will look like the camera rotating around a certain point in space while maintaining a constant distance. The lines keep their place, with some distortion introduced by the perspective transformation (if you have one), and are clipped by the far clipping plane (this will look like the lines becoming shorter and longer during the rotation).
From my understanding,
gluLookAt(
eye_x, eye_y, eye_z,
center_x, center_y, center_z,
up_x, up_y, up_z
);
is equivalent to:
glRotatef(B, 0.0, 0.0, 1.0);
glRotatef(A, wx, wy, wz);
glTranslatef(-eye_x, -eye_y, -eye_z);
But when I print out the ModelView matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet:
#include <stdlib.h>
#include <stdio.h>
#include <GL/glut.h>
#include <iomanip>
#include <iostream>
#include <string>
using namespace std;
static const int Rx = 0;
static const int Ry = 1;
static const int Rz = 2;
static const int Ux = 4;
static const int Uy = 5;
static const int Uz = 6;
static const int Ax = 8;
static const int Ay = 9;
static const int Az = 10;
static const int Tx = 12;
static const int Ty = 13;
static const int Tz = 14;
void init() {
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_DEPTH_TEST);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 };
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
}

void displayModelviewMatrix(float MV[16]) {
    int SPACING = 12;
    cout << left;
    cout << "\tMODELVIEW MATRIX\n";
    cout << "--------------------------------------------------" << endl;
    cout << setw(SPACING) << "R" << setw(SPACING) << "U" << setw(SPACING) << "A" << setw(SPACING) << "T" << endl;
    cout << "--------------------------------------------------" << endl;
    cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux] << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl;
    cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy] << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl;
    cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz] << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl;
    cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7] << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl;
    cout << "--------------------------------------------------" << endl;
    cout << endl;
}

void reshape(int w, int h) {
    float ratio = static_cast<float>(w) / h;
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, ratio, 1.0, 425.0);
}

void draw() {
    float m[16];
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    gluLookAt(
        300.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f
    );
    glColor3f(1.0, 0.0, 0.0);
    glutSolidCube(100.0);
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    displayModelviewMatrix(m);
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(400, 400);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("Demo");
    glutReshapeFunc(reshape);
    glutDisplayFunc(draw);
    init();
    glutMainLoop();
    return 0;
}
No matter what value I use for the eye vector:
300, 0, 0 or
0, 300, 0 or
0, 0, 300
the translation vector is the same, which doesn't make any sense, because the code is applied in reverse order, so glTranslatef should act first, then the two rotations. Plus, the rotation matrix is completely independent of the translation column (in the ModelView matrix), so what could cause this weird behavior?
Here is the output with the eye vector is (0.0f, 300.0f, 0.0f)
MODELVIEW MATRIX
--------------------------------------------------
R U A T
--------------------------------------------------
0 0 0 0
0 0 0 0
0 1 0 -300
0 0 0 1
--------------------------------------------------
I would expect the T column to be (0, -300, 0)! So could anyone help me explain this?
The implementation of gluLookAt from http://www.mesa3d.org:
void GLAPIENTRY
gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez, GLdouble centerx,
          GLdouble centery, GLdouble centerz, GLdouble upx, GLdouble upy,
          GLdouble upz)
{
    float forward[3], side[3], up[3];
    GLfloat m[4][4];
    forward[0] = centerx - eyex;
    forward[1] = centery - eyey;
    forward[2] = centerz - eyez;
    up[0] = upx;
    up[1] = upy;
    up[2] = upz;
    normalize(forward);
    /* Side = forward x up */
    cross(forward, up, side);
    normalize(side);
    /* Recompute up as: up = side x forward */
    cross(side, forward, up);
    __gluMakeIdentityf(&m[0][0]);
    m[0][0] = side[0];
    m[1][0] = side[1];
    m[2][0] = side[2];
    m[0][1] = up[0];
    m[1][1] = up[1];
    m[2][1] = up[2];
    m[0][2] = -forward[0];
    m[1][2] = -forward[1];
    m[2][2] = -forward[2];
    glMultMatrixf(&m[0][0]);
    glTranslated(-eyex, -eyey, -eyez);
}
If we let a rotation and translation matrix like your modelview matrix
Rxx Rxy Rxz Tx
Ryx Ryy Ryz Ty
Rzx Rzy Rzz Tz
0 0 0 1
act on an arbitrary vector
x
y
z
1
we get
Rxx x + Rxy y + Rxz z + Tx
Ryx x + Ryy y + Ryz z + Ty
Rzx x + Rzy y + Rzz z + Tz
1
(I'm writing things so vectors get multiplied by matrices on the left).
This shows that the translation components of the matrix give the translation to apply after doing the rotation. That's why they aren't the same as your (-eye_x, -eye_y, -eye_z) vector, because as you point out that translation is being done before the rotation.
The reason that the translation is always along the -z direction is that in the view frame the -z direction points towards the centre. Since you always have the centre 300 units from the eye, all of your eye positions put the centre at (0, 0, -300) in the view frame. Therefore, because the centre starts at the origin before we do any translating, the translation that gives it the correct coordinates must be (0, 0, -300).
Also, you might have noticed this, but the modelview matrix you show is pathological because you have the up vector pointing along the view direction (from eye to centre). That explains why it has two full rows of zeros.
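A quick numeric check of that degeneracy (a sketch following the Mesa code above), with eye = (0, 300, 0), centre = (0, 0, 0) and up = (0, 1, 0):

float forward[3] = { 0.0f, -1.0f, 0.0f }; // normalize(centre - eye)
float up[3]      = { 0.0f,  1.0f, 0.0f };
// side = forward x up
//      = (fy*uz - fz*uy, fz*ux - fx*uz, fx*uy - fy*ux)
//      = (0, 0, 0)  -> cannot be normalized, so the side row and the
//                      recomputed up row of the matrix collapse to zeros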
" I'm very confused about how the rotations are performed using the forward, up and side vectors in this code..."
I think you should learn something about the "UVN camera". There is some theory about translating coordinates between two coordinate systems. In the above example, the two systems are world coordinates and camera coordinates.
And the result is:
N - The vector from the target to the camera, also known as the 'look at' vector in some 3D literature. This vector corresponds to the -Z axis.
V - When standing upright, this is the vector from your head to the sky. If you are writing a flight simulator and the plane is reversed, that vector may very well point to the ground. This vector corresponds to the Y axis.
U - This vector points from the camera to its "right" side. It corresponds to the X axis.
@Andon M. Coleman - how is the above diagram row-major?
Whether a matrix is row-major or column-major is about the memory representation of 2D structures in 1D memory, and it has nothing to do with the above diagram of a 4x4 transformation matrix.
If the vectors U, V, N were written as columns, as you seem to suggest, then you would have a camera-space to world-space transformation.
However, the input to the matrix is a world-space position and the output is a camera-space position, so the matrix is a transformation from world space to camera space.
The reason U, V, N appear transposed is that this matrix is the inverse of the one you suggest, using the property of orthogonal matrices that the inverse equals the transpose. That is, we write the U, V, N vectors into the rows to get a world-space to camera-space transformation, and into the columns to get a camera-space to world-space transformation.
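To make the two layouts explicit (a plain-text sketch of the rotation part only):

// world -> camera (U, V, N in rows)    camera -> world (its transpose)
// [ Ux Uy Uz ]                         [ Ux Vx Nx ]
// [ Vx Vy Vz ]                         [ Uy Vy Ny ]
// [ Nx Ny Nz ]                         [ Uz Vz Nz ]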
Also, the choice of multiplying with the world position on the right is because the diagram uses column vectors; we would left-multiply if we were using row vectors. It has NOTHING to do with how we store the matrix in memory, and everything to do with how we multiply two matrices together, that is, whether we convert our vector to a 1x4 or a 4x1 matrix before we multiply it with the 4x4 matrix.
In short, the above diagram is fine; it's just a transformation from one space to another. Please don't confuse the matter with talk about memory layout, which is a programming detail.