The one thing that has always hindered me from doing 3D programming is failing to understand how the math works. I can follow math fine in programming flow, using methods and functions; then it's all clear and logical to me. But in mathematical notation, I just can't make heads or tails of it.
I have been reading websites and watching videos from institutes trying to explain this, but they all use mathematical notation and I simply get lost in it; my mind won't translate it into something understandable. I might have a defect there.
Also, just using someone's code isn't my interest; I want to understand the mechanics behind it, the logic. I'd be happy to use someone else's code, but I really want to understand how it works.
The question
Can you explain to me in simple terms, without mathematical notation, just programming notation/functions/pseudocode, how to implement a matrix transform along all 3 axes?
Ideally what I want is the material/understanding to write a method/object where I can define the angles of the 3 axes, similar to glRotate, to rotate the collection of quads/triangles I have. (I am trying to program a 3D rotation of a cube shape without having access to the OpenGL functions to do it for me, because this is done in one draw call every time something changes in the display list.)
What have I done?
I attempted to make a 90-degree transform function to get the hang of the math, but failed utterly at making a proper matrix, which in theory should have been the simplest to do. You can see my failed attempt in all its glory on http://jsfiddle.net/bLfg0tj8/5/
Vec3 = function(x,y,z) {
this.x = x;
this.y = y;
this.z = z;
}
Matrix = function Matrix() {
this.matrixPoints = new Array();
this.rotationPoint = new Vec3(0,0,0);
this.rotationAngle = 90;
}
Matrix.prototype.addVector = function(vector) {
this.matrixPoints.push(vector);
}
Matrix.prototype.setRotationPoint = function(vector) {
this.rotationPoint = vector;
}
Matrix.prototype.setRotationAngle = function(angle) {
this.rotationAngle = angle;
}
Matrix.prototype.populate = function() {
translateToOrigin = [[1,0,0-this.rotationPoint.x],
[0,1,0-this.rotationPoint.y],
[0,0,0-this.rotationPoint.z]];
rotationMatrix = [[0,-1,0],
[0,1,0],
[0,0,1]];
translateEnd = [[1,0,this.rotationPoint.x],
[0,1,this.rotationPoint.y],
[0,0,this.rotationPoint.z]];
currentColumn = 0;
currentRow = 0;
this.combomatrix = this.mergeMatrices(this.mergeMatrices(translateEnd,rotationMatrix),
translateToOrigin);
}
Matrix.prototype.transform = function() {
newmatrix = new Array();
for(c = 0;c<this.matrixPoints.length;c++) {
newmatrix.push(this.applyToVertex(this.matrixPoints[c]));
}
return newmatrix;
}
Matrix.prototype.applyToVertex = function(vertex) {
ret = new Vec3(vertex.x,vertex.y,vertex.z);
ret.x = ret.x + this.combomatrix[0][0] * vertex.x +
this.combomatrix[0][1] * vertex.y +
this.combomatrix[0][2] * vertex.z;
ret.y = ret.y + this.combomatrix[1][0] * vertex.x +
this.combomatrix[1][1] * vertex.y +
this.combomatrix[1][2] * vertex.z;
ret.z = ret.z + this.combomatrix[2][0] * vertex.x +
this.combomatrix[2][1] * vertex.y +
this.combomatrix[2][2] * vertex.z;
return ret;
}
Matrix.prototype.mergeMatrices = function(lastStep, oneInFront) {
step1 = [[0,0,0],[0,0,0],[0,0,0]];
step1[0][0] = lastStep[0][0] * oneInFront[0][0] +
lastStep[0][1] * oneInFront[1][0] +
lastStep[0][2] * oneInFront[2][0];
step1[0][1] = lastStep[0][0] * oneInFront[0][1] +
lastStep[0][1] * oneInFront[1][1] +
lastStep[0][2] * oneInFront[2][1];
step1[0][2] = lastStep[0][0] * oneInFront[0][2] +
lastStep[0][1] * oneInFront[1][2] +
lastStep[0][2] * oneInFront[2][2];
//============================================================
step1[1][0] = lastStep[1][0] * oneInFront[0][0] +
lastStep[1][1] * oneInFront[1][0] +
lastStep[1][2] * oneInFront[2][0];
step1[1][1] = lastStep[1][0] * oneInFront[0][1] +
lastStep[1][1] * oneInFront[1][1] +
lastStep[1][2] * oneInFront[2][1];
step1[1][2] = lastStep[1][0] * oneInFront[0][2] +
lastStep[1][1] * oneInFront[1][2] +
lastStep[1][2] * oneInFront[2][2];
//============================================================
step1[2][0] = lastStep[2][0] * oneInFront[0][0] +
lastStep[2][1] * oneInFront[1][0] +
lastStep[2][2] * oneInFront[2][0];
step1[2][1] = lastStep[2][0] * oneInFront[0][1] +
lastStep[2][1] * oneInFront[1][1] +
lastStep[2][2] * oneInFront[2][1];
step1[2][2] = lastStep[2][0] * oneInFront[0][2] +
lastStep[2][1] * oneInFront[1][2] +
lastStep[2][2] * oneInFront[2][2];
return step1;
}
Matrix.prototype.getCurrentMatrix = function() {
return this.matrixPoints;
}
myvectors = [new Vec3(50,50,0), new Vec3(20,80,0), new Vec3(80, 80, 0)];
function drawVectors(vectors,color) {
for(c=0;c<vectors.length;c++) {
document.getElementById("whoa").innerHTML += '<div style="color:'+color+';position:absolute;left:'+vectors[c].x+'px; top:'+vectors[c].y+'px;z-index:'+vectors[c].z+';">('+c+').</div>';
}
}
matrix = new Matrix();
for(c=0;c<myvectors.length;c++) {
matrix.addVector(myvectors[c]);
}
matrix.setRotationPoint(new Vec3(50,70,0));
matrix.populate();
somematrix = matrix.transform();
drawVectors(matrix.getCurrentMatrix(),"lime"); // draw current matrix that was hand coded
drawVectors([matrix.rotationPoint],'white'); // draw rotation point
drawVectors(somematrix,"red"); // transformed matrix... somehow two points merge
<div id="whoa" style="position:relative;top:50px;left:150px;background-color:green;color:red;width:400px;height:300px;">
</div>
The green text is the original triangle, the white point the center point, and the red points the failed transformation (failed, I think, because it isn't aligned around the center point). The tutorial I followed taught me how to combine matrices into a combined matrix, but I guess I screwed up somewhere.
As I said, it's really, really hard for me to understand mathematical notation and speak. And it doesn't help that most teachers skip parts of the explanation. It took me 2 hours alone to understand that when multiplying matrices you need to add each step together instead of just keeping on multiplying. Yay for explanations.
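The rule I eventually worked out is that every cell of the result is the sum of products of one row of the first matrix with one column of the second. As a generic C-style sketch (just to illustrate the rule; mat3_mul is a made-up helper name, not the code from my fiddle):

// c[row][col] = sum over k of a[row][k] * b[k][col]
void mat3_mul(double c[3][3], const double a[3][3], const double b[3][3])
{
    double tmp[3][3];                      // temp so c may alias a or b
    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 3; col++)
        {
            double sum = 0.0;              // accumulate: add each step together
            for (int k = 0; k < 3; k++)
                sum += a[row][k] * b[k][col];
            tmp[row][col] = sum;
        }
    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 3; col++)
            c[row][col] = tmp[row][col];
}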
A practical example of what I work with/want to work with
For example I have a cube, loaded from a wavefront obj file located in the world at
x = 50
y = 100
z = 200
The cube is drawn using quads and some UV mapping. No problems here. It renders beautifully, with all the textures showing correctly.
These are the location coordinates for each "face" of the cube, each of which is drawn using a quad.
// Front face
-1.0, -1.0, 1.0,
1.0, -1.0, 1.0,
1.0, 1.0, 1.0,
-1.0, 1.0, 1.0,
// Back face
-1.0, -1.0, -1.0,
-1.0, 1.0, -1.0,
1.0, 1.0, -1.0,
1.0, -1.0, -1.0,
// Top face
-1.0, 1.0, -1.0,
-1.0, 1.0, 1.0,
1.0, 1.0, 1.0,
1.0, 1.0, -1.0,
// Bottom face
-1.0, -1.0, -1.0,
1.0, -1.0, -1.0,
1.0, -1.0, 1.0,
-1.0, -1.0, 1.0,
// Right face
1.0, -1.0, -1.0,
1.0, 1.0, -1.0,
1.0, 1.0, 1.0,
1.0, -1.0, 1.0,
// Left face
-1.0, -1.0, -1.0,
-1.0, -1.0, 1.0,
-1.0, 1.0, 1.0,
-1.0, 1.0, -1.0
So this all works great. But what if I want this cube rotated 90 degrees around the x axis and 45 degrees around the z axis? I cannot use glRotate, because at the moment I pass the data to the tessellator object I cannot do any matrix transforms to it via the OpenGL functions; it's just taking in the data, not actually rendering it per se.
The way the data is stored is as follows:
WaveFrontObject()
|
|-> Groups(String groupname)
|
|-> Faces()
|
|-> Vertex(float x, float y, float z)[]
|-> Float UVmap[] corresponding to each vertex
|-> drawFace() // Draws the face as a quad or triangle
So each of the above coordinates I gave is stored as a face of the wavefront object in the group "cube".
When the cube is added to the tessellator, it is translated to the right coordinates in the world and it renders normally.
It always renders the same, however. If I wanted it to render at an angle, I would currently have to make a separate wavefront object to be able to do that. In my opinion, that is madness when it can be solved with some math.
Needed in the answer
A step-by-step explanation of how to build a translation matrix and an attempt to explain the math to me.
An explanation of how to apply the translation matrix to the quads/triangles in the faces while they stay oriented around the center of their location
x = 50.5
y = 100.5
z = 200.5
Some example/pseudocode to go along with the explanation.
The programming language used to explain isn't really relevant as long as it's in the C family.
Please try to stay away from mathematical notation/speak. I don't know what alpha, beta, theta are; I do know what the x axis, y axis and z axis are. I do know what angles are, but I do not know the names mathematicians have for them.
If you wish to use math names, please explain to me what they are in the 3D world/code and how they are formed/calculated.
I simply want to make a method/object along the lines of
Matrix.transformVertices(vertices[], 90deg x, 45 deg y, 0 deg z);
So the question really is: Understanding 4x4 homogenous transform matrices.
Well, without the math behind it, the only thing left is the geometric representation/meaning, which is far better for human abstraction/understanding anyway.
So what is the 4x4 matrix?
It is a representation of some Cartesian coordinate system, and it is composed of:
3 basis vectors (one for each axis) red,green,blue
So if the red, green and blue vectors are perpendicular to each other, then the coordinate system is orthogonal. If they are also unit vectors, then it is orthonormal (like, for example, the unit matrix).
origin point gray
projection and homogeneous side (the unmarked bottom rest of the matrix)
This part is there only to enable rotation and translation at once; therefore the points used must be homogeneous, which means in the form (x,y,z,w=1) for points and (x,y,z,w=0) for direction vectors. If it were just (x,y,z), then the matrix would be 3x3 and that is not enough for translation. I will not use any projections because they are hard to explain geometrically.
This layout is from OpenGL notation; there are also transposed representations out there (where the vectors are rows, not columns).
now how to transform any point to/from this coordinate system:
g=M*l;
l=Inverse(M)*g;
where:
M is the transform matrix
l is the point in M's local coordinate system (LCS)
g is the point in the global coordinate system (GCS)
for the transposed version (DirectX) it is:
l=M*g;
g=Inverse(M)*l;
That is because the transpose of an orthogonal rotation matrix is also its inverse.
for more info see transform matrix anatomy and 3D graphic pipeline
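In code the to/from conversions could look like this (a sketch only, reusing the matrix_mul_vector and matrix_inv helpers defined in the [edit1] C++ example below, which treat a double[16] as a row-major 4x4 matrix and a double[3] as a point with w=1):

void lcs_to_gcs(double *g,double *M,double *l)   // g = M * l
{
    matrix_mul_vector(g,M,l);
}
void gcs_to_lcs(double *l,double *M,double *g)   // l = Inverse(M) * g
{
    double iM[16];
    matrix_inv(iM,M);                            // compute Inverse(M)
    matrix_mul_vector(l,iM,g);
}

If you convert many points, compute Inverse(M) once and reuse it instead of inverting inside a loop.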
how to visualize it
Yes, you can draw the matrix numbers, but they do not make sense at first glance, especially if the numbers are changing, so draw the axis vectors as in the image above, where each axis is a line from origin to origin + line_size*axis_vector.
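For example (a sketch only; draw_line is a placeholder for whatever line routine your gfx API provides, and the matrix layout is the same row-major one as the rep[] matrix in the [edit1] example below, basis vectors in the first three columns and origin in the fourth):

void draw_line(double x0,double y0,double z0,double x1,double y1,double z1); // placeholder for your gfx API

void draw_matrix_axes(double *m,double line_size) // visualize a 4x4 transform matrix m[16]
{
    double o[3]={ m[3], m[7], m[11] };             // origin (gray)
    double axis[3][3]={ { m[0], m[4], m[ 8] },     // X axis (red)
                        { m[1], m[5], m[ 9] },     // Y axis (green)
                        { m[2], m[6], m[10] } };   // Z axis (blue)
    for (int i=0;i<3;i++)
        draw_line(o[0],o[1],o[2],
                  o[0]+line_size*axis[i][0],
                  o[1]+line_size*axis[i][1],
                  o[2]+line_size*axis[i][2]);
}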
how to construct it
Just compute the axis vectors and the origin and put them inside the matrix. To ensure orthogonality, exploit the cross product (but be careful with the order of operands to get the right direction). Here is an example of getting 3 basis vectors from a direction.
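As a sketch of that idea (it reuses vector_one and the cross-product overload of vector_mul from [Edit2] at the end of this answer; the helper up vector (0,1,0) is my own arbitrary choice and must be swapped for another axis if dir is nearly parallel to it):

void basis_from_direction(double *X,double *Y,double *Z,double *dir)
{
    double up[3]={ 0.0,1.0,0.0 };   // arbitrary helper vector (assumption)
    vector_one(Z,dir);              // Z = unit direction
    vector_mul(X,up,Z);             // X = up x Z -> perpendicular to Z
    vector_one(X,X);                // make X unit length
    vector_mul(Y,Z,X);              // Y = Z x X -> perpendicular to both, already unit length
}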
effects
rotation is done by rotating the axes, so you can compute each axis by the parametric circle equation ...
scaling is done by multiplying the axes by a scale factor
skewing is just using non-perpendicular axes
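As a tiny sketch of what the above looks like in numbers (my own example, using the same row-major layout and sign convention as the rz matrix in the timer callback of [edit1] below): a rotation around the Z axis by angle a combined with a uniform scale s. The X and Y basis vectors are just points on a circle of radius s; set s=1.0 for a pure rotation, and making the axes non-perpendicular instead would give you skew.

#include <math.h>
void rotate_z_scale(double *m,double a,double s) // rotation around Z by a [rad] + uniform scale s
{
    double c=s*cos(a),si=s*sin(a);  // scaled parametric circle
    m[ 0]= c;  m[ 1]= si; m[ 2]=0.0; m[ 3]=0.0;
    m[ 4]=-si; m[ 5]= c;  m[ 6]=0.0; m[ 7]=0.0;
    m[ 8]=0.0; m[ 9]=0.0; m[10]=s;   m[11]=0.0;
    m[12]=0.0; m[13]=0.0; m[14]=0.0; m[15]=1.0;
}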
rotation
In most cases incremental rotation is used. There are two types:
local rotation M'=M*rotation_matrix — it rotates around the local coordinate axes, the way you would control a plane, car or player ... Most engines/games do not use this and fake it with Euler angles instead, which is a cheap solution (it has many quirks and problems), because most people who use OpenGL do not even know this is possible and rather stack a list of glRotate/glTranslate calls...
global rotation M'=Inverse(Inverse(M)*rotation_matrix) — it rotates around the global coordinate system axes.
where rotation_matrix is any standard rotation transform matrix.
If you have a different matrix layout (transposed), then local and global rotations are computed the other way around ...
You can also compute your rotation_matrix from 3 angles like:
rotation_matrix=rotation_around_x(ax)*rotation_around_y(ay)*rotation_around_z(az);
See the Wiki rotation matrices; the 3D Rx,Ry,Rz matrices under "Basic rotations" are what you need. As you can see, they are really just the unit circle parametric equation. The order of multiplication changes how the angles converge to the target position. This is called Euler angles, and I do not use it (I integrate step changes instead, which has no restrictions if done properly, not to mention it is simpler).
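To connect this to the Matrix.transformVertices(vertices[], 90deg x, 45deg y, 0deg z) method the question asks for, here is a sketch (my own code, not from any library): it builds Rx,Ry,Rz in the same row-major layout and sign convention as the rx/rz matrices in the timer callback of [edit1] below, combines them with matrix_mul, and applies the result to a flat array of x,y,z triplets with matrix_mul_vector. Angles are in radians (multiply degrees by deg=M_PI/180.0), and the rotation is around (0,0,0), so shift the points so their rotation center is at the origin first and shift back afterwards.

#include <math.h>
// matrix_mul and matrix_mul_vector are the helpers from the [edit1] example below
void matrix_mul (double *c,double *a,double *b);
void matrix_mul_vector(double *c,double *a,double *b);

void rotation_around_x(double *m,double a) // Rx
{
    double c=cos(a),s=sin(a);
    double q[16]={ 1, 0, 0, 0,
                   0, c, s, 0,
                   0,-s, c, 0,
                   0, 0, 0, 1 };
    for (int i=0;i<16;i++) m[i]=q[i];
}
void rotation_around_y(double *m,double a) // Ry
{
    double c=cos(a),s=sin(a);
    double q[16]={ c, 0,-s, 0,
                   0, 1, 0, 0,
                   s, 0, c, 0,
                   0, 0, 0, 1 };
    for (int i=0;i<16;i++) m[i]=q[i];
}
void rotation_around_z(double *m,double a) // Rz
{
    double c=cos(a),s=sin(a);
    double q[16]={ c, s, 0, 0,
                  -s, c, 0, 0,
                   0, 0, 1, 0,
                   0, 0, 0, 1 };
    for (int i=0;i<16;i++) m[i]=q[i];
}
// rotate pnts points (x,y,z triplets in pnt[]) by ax,ay,az [rad] around (0,0,0)
void transform_vertices(double *pnt,int pnts,double ax,double ay,double az)
{
    double rx[16],ry[16],rz[16],rot[16];
    rotation_around_x(rx,ax);
    rotation_around_y(ry,ay);
    rotation_around_z(rz,az);
    matrix_mul(rot,rx,ry);      // rotation_matrix = Rx*Ry*Rz
    matrix_mul(rot,rot,rz);     // (a different order gives a different result!)
    for (int i=0;i<pnts;i++)
        matrix_mul_vector(&pnt[i*3],rot,&pnt[i*3]);
}

For the cube in the question that would be something like transform_vertices(cube_points,8, 90.0*deg, 0.0, 45.0*deg), after shifting the points so their center is at (0,0,0).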
Anyway, if you need to, you can convert a transform matrix into Euler angles relatively easily; see:
Is there a way to calculate 3D rotation on X and Y axis from a 4x4 matrix
glRotate
If you want glRotate, which is rotation around an arbitrary axis rather than by 3 angles, then there is a workaround:
create transform matrix N for that axis
then transform your matrix M to it
rotate N by angle
then transform M back from N to global coordinates
Or you can use Rodrigues_rotation_formula instead
To transform a matrix to/from another matrix in this case, just transform the axes as points and leave the origin as is, but the origin of N must be (0,0,0)!!! or the vectors transformed must have w=0 instead.
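Here is a sketch of the Rodrigues route (my own code, not from any library): a 4x4 rotation matrix around an arbitrary unit axis, in the same row-major layout and sign convention as the rx/rz matrices in [edit1] below (for axis (1,0,0) or (0,0,1) it reduces to exactly those matrices). Normalize the axis first, e.g. with vector_one from [Edit2].

#include <math.h>
void rotation_around_axis(double *m,double *axis,double a) // rotation by a [rad] around unit axis
{
    double c=cos(a),s=sin(a),t=1.0-c;
    double x=axis[0],y=axis[1],z=axis[2];
    m[ 0]=c+x*x*t;   m[ 1]=x*y*t+z*s; m[ 2]=x*z*t-y*s; m[ 3]=0.0;
    m[ 4]=y*x*t-z*s; m[ 5]=c+y*y*t;   m[ 6]=y*z*t+x*s; m[ 7]=0.0;
    m[ 8]=z*x*t+y*s; m[ 9]=z*y*t-x*s; m[10]=c+z*z*t;   m[11]=0.0;
    m[12]=0.0;       m[13]=0.0;       m[14]=0.0;       m[15]=1.0;
}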
usage
Transformations are cumulative that means:
p'=M1*M2*M3*M4*p; is the same as M=M1*M2*M3*M4; p'=M*p
So if you have many points to transform, then you precompute all the transformations into a single matrix and use just that; there is no need to multiply the points by all the subsequent matrices. OK, now the concept:
you should have 3 coordinate systems:
camera C
world (usually unit matrix)
object O (each object have its own matrix)
so if you have a cube with 8 vertices p0,...,p7 then you have to perform a transformation on each point from object local coordinates to camera local coordinates. Some gfx APIs do some of this for you, so you apply only what you have to; you really need:
p(i)'=inverse(C)*unit*M*p(i);
the transforms are cumulative and unit matrix does not change anything so:
Q=inverse(C)*M; p(i)'=Q*p(i);
so before drawing, compute Q for the drawn object, then take each point p(i) of the object, compute the transformed p(i)', and draw/use the transformed one ... The p(i)' is in the local camera coordinate system (x,y of the screen), but there is no perspective there, so before drawing you can also add any of the projection matrices and divide by the z coordinate at the end ... The projection is also cumulative, so it can also be inside Q
[edit1] C++ example
//$$---- Form CPP ----
//---------------------------------------------------------------------------
// apart from math.h include you can ignore this machine generated VCL related code
#include <vcl.h>
#pragma hdrstop
#include "win_main.h"
#include <math.h>
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TMain *Main; // pointer to main window ...
//---------------------------------------------------------------------------
// Here is the important stuff some math first
//---------------------------------------------------------------------------
const double deg=M_PI/180.0;
double divide(double x,double y);
void matrix_mul (double *c,double *a,double *b); // c[16] = a[16] * b[16]
void matrix_mul_vector(double *c,double *a,double *b); // c[ 4] = a[16] * b[ 4]
void matrix_subdet (double *c,double *a); // c[16] = all subdets of a[16]
double matrix_subdet ( double *a,int r,int s);// = subdet(r,s) of a[16]
double matrix_det ( double *a); // = det of a[16]
double matrix_det ( double *a,double *b); // = det of a[16] and subdets b[16]
void matrix_inv (double *c,double *a); // c[16] = a[16] ^ -1
//---------------------------------------------------------------------------
double divide(double x,double y)
{
if (!y) return 0.0;
return x/y;
}
void matrix_mul (double *c,double *a,double *b)
{
double q[16];
q[ 0]=(a[ 0]*b[ 0])+(a[ 1]*b[ 4])+(a[ 2]*b[ 8])+(a[ 3]*b[12]);
q[ 1]=(a[ 0]*b[ 1])+(a[ 1]*b[ 5])+(a[ 2]*b[ 9])+(a[ 3]*b[13]);
q[ 2]=(a[ 0]*b[ 2])+(a[ 1]*b[ 6])+(a[ 2]*b[10])+(a[ 3]*b[14]);
q[ 3]=(a[ 0]*b[ 3])+(a[ 1]*b[ 7])+(a[ 2]*b[11])+(a[ 3]*b[15]);
q[ 4]=(a[ 4]*b[ 0])+(a[ 5]*b[ 4])+(a[ 6]*b[ 8])+(a[ 7]*b[12]);
q[ 5]=(a[ 4]*b[ 1])+(a[ 5]*b[ 5])+(a[ 6]*b[ 9])+(a[ 7]*b[13]);
q[ 6]=(a[ 4]*b[ 2])+(a[ 5]*b[ 6])+(a[ 6]*b[10])+(a[ 7]*b[14]);
q[ 7]=(a[ 4]*b[ 3])+(a[ 5]*b[ 7])+(a[ 6]*b[11])+(a[ 7]*b[15]);
q[ 8]=(a[ 8]*b[ 0])+(a[ 9]*b[ 4])+(a[10]*b[ 8])+(a[11]*b[12]);
q[ 9]=(a[ 8]*b[ 1])+(a[ 9]*b[ 5])+(a[10]*b[ 9])+(a[11]*b[13]);
q[10]=(a[ 8]*b[ 2])+(a[ 9]*b[ 6])+(a[10]*b[10])+(a[11]*b[14]);
q[11]=(a[ 8]*b[ 3])+(a[ 9]*b[ 7])+(a[10]*b[11])+(a[11]*b[15]);
q[12]=(a[12]*b[ 0])+(a[13]*b[ 4])+(a[14]*b[ 8])+(a[15]*b[12]);
q[13]=(a[12]*b[ 1])+(a[13]*b[ 5])+(a[14]*b[ 9])+(a[15]*b[13]);
q[14]=(a[12]*b[ 2])+(a[13]*b[ 6])+(a[14]*b[10])+(a[15]*b[14]);
q[15]=(a[12]*b[ 3])+(a[13]*b[ 7])+(a[14]*b[11])+(a[15]*b[15]);
for(int i=0;i<16;i++) c[i]=q[i];
}
void matrix_mul_vector(double *c,double *a,double *b)
{
double q[3];
q[0]=(a[ 0]*b[0])+(a[ 1]*b[1])+(a[ 2]*b[2])+(a[ 3]);
q[1]=(a[ 4]*b[0])+(a[ 5]*b[1])+(a[ 6]*b[2])+(a[ 7]);
q[2]=(a[ 8]*b[0])+(a[ 9]*b[1])+(a[10]*b[2])+(a[11]);
for(int i=0;i<3;i++) c[i]=q[i];
}
void matrix_subdet (double *c,double *a)
{
double q[16];
int i,j;
for (i=0;i<4;i++)
for (j=0;j<4;j++)
q[j+(i<<2)]=matrix_subdet(a,i,j);
for (i=0;i<16;i++) c[i]=q[i];
}
double matrix_subdet ( double *a,int r,int s)
{
double c,q[9];
int i,j,k;
k=0; // q = sub matrix
for (j=0;j<4;j++)
if (j!=s)
for (i=0;i<4;i++)
if (i!=r)
{
q[k]=a[i+(j<<2)];
k++;
}
c=0;
c+=q[0]*q[4]*q[8];
c+=q[1]*q[5]*q[6];
c+=q[2]*q[3]*q[7];
c-=q[0]*q[5]*q[7];
c-=q[1]*q[3]*q[8];
c-=q[2]*q[4]*q[6];
if (int((r+s)&1)) c=-c; // add signum
return c;
}
double matrix_det ( double *a)
{
double c=0;
c+=a[ 0]*matrix_subdet(a,0,0);
c+=a[ 4]*matrix_subdet(a,0,1);
c+=a[ 8]*matrix_subdet(a,0,2);
c+=a[12]*matrix_subdet(a,0,3);
return c;
}
double matrix_det ( double *a,double *b)
{
double c=0;
c+=a[ 0]*b[ 0];
c+=a[ 4]*b[ 1];
c+=a[ 8]*b[ 2];
c+=a[12]*b[ 3];
return c;
}
void matrix_inv (double *c,double *a)
{
double d[16],D;
matrix_subdet(d,a);
D=matrix_det(a,d);
if (D) D=1.0/D;
for (int i=0;i<16;i++) c[i]=d[i]*D;
}
//---------------------------------------------------------------------------
// now the object representation
//---------------------------------------------------------------------------
const int pnts=8;
double pnt[pnts*3]= // Vertexes for 100x100x100 cube centered at (0,0,0)
{
-100.0,-100.0,-100.0,
-100.0,+100.0,-100.0,
+100.0,+100.0,-100.0,
+100.0,-100.0,-100.0,
-100.0,-100.0,+100.0,
-100.0,+100.0,+100.0,
+100.0,+100.0,+100.0,
+100.0,-100.0,+100.0,
};
const int facs=6;
int fac[facs*4]= // faces (index of point used) no winding rule
{
0,1,2,3,
4,5,6,7,
0,1,5,4,
1,2,6,5,
2,3,7,6,
3,0,4,7,
};
double rep[16]= // 4x4 transform matrix of object (unit from start) at (0,0,+100)
{
1.0,0.0,0.0, 0.0,
0.0,1.0,0.0, 0.0,
0.0,0.0,1.0,100.0,
0.0,0.0,0.0,1.0,
};
double eye[16]= // 4x4 transform matrix of camera at (0,0,-150)
{
1.0,0.0,0.0, 0.0,
0.0,1.0,0.0, 0.0,
0.0,0.0,1.0,-150.0,
0.0,0.0,0.0,1.0,
};
//---------------------------------------------------------------------------
// this is how to draw it
//---------------------------------------------------------------------------
void obj(double *pnt,int pnts,int *fac,int facs,double *rep,double *ieye)
{
// variables for drawing
int i;
double p0[3],p1[3],p2[3],p3[3],m[16],d;
// gfx api variables (change to your stuff) Main is the main form of this application
TCanvas *scr=Main->bmp->Canvas;
double xs2=Main->ClientWidth/2,ys2=Main->ClientHeight/2;
double v=xs2*tan(30.0*deg); // 60 degree viewing angle perspective projection
matrix_mul(m,ieye,rep); // cumulate all needed transforms
for (i=0;i<facs*4;) // go through all faces
{
// convert all points of face
matrix_mul_vector(p0,m,&pnt[fac[i]*3]); i++;
matrix_mul_vector(p1,m,&pnt[fac[i]*3]); i++;
matrix_mul_vector(p2,m,&pnt[fac[i]*3]); i++;
matrix_mul_vector(p3,m,&pnt[fac[i]*3]); i++;
// here goes perspective divide by z coordinate if needed
d=divide(v,p0[2]); p0[0]*=d; p0[1]*=d;
d=divide(v,p1[2]); p1[0]*=d; p1[1]*=d;
d=divide(v,p2[2]); p2[0]*=d; p2[1]*=d;
d=divide(v,p3[2]); p3[0]*=d; p3[1]*=d;
// here is the viewport transform (just translate (0,0) to the middle of the screen in this case)
p0[0]+=xs2; p0[1]+=ys2;
p1[0]+=xs2; p1[1]+=ys2;
p2[0]+=xs2; p2[1]+=ys2;
p3[0]+=xs2; p3[1]+=ys2;
// draw quad
// I use VCL GDI TCanvas you use what you have ...
// and wireframe only to keep this simple (no Z buffer,winding culling,...)
scr->Pen->Color=clAqua; // perimeter wireframe
scr->MoveTo(p0[0],p0[1]);
scr->LineTo(p1[0],p1[1]);
scr->LineTo(p2[0],p2[1]);
scr->LineTo(p3[0],p3[1]);
scr->LineTo(p0[0],p0[1]);
// scr->Pen->Color=clBlue; // face cross to visually check if I correctly generate the fac[]
// scr->MoveTo(p0[0],p0[1]);
// scr->LineTo(p2[0],p2[1]);
// scr->MoveTo(p1[0],p1[1]);
// scr->LineTo(p3[0],p3[1]);
}
}
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
void TMain::draw()
{
if (!_redraw) return;
bmp->Canvas->Brush->Color=clBlack;
bmp->Canvas->FillRect(TRect(0,0,xs,ys));
// compute the inverse of the camera; needs to be computed just once for all objects
double ieye[16];
matrix_inv(ieye,eye);
// draw all objects
obj(pnt,pnts,fac,facs,rep,ieye);
Main->Canvas->Draw(0,0,bmp);
_redraw=false;
}
//---------------------------------------------------------------------------
__fastcall TMain::TMain(TComponent* Owner) : TForm(Owner)
{
// window constructor you can ignore this ... (just create a backbuffer bitmap here)
bmp=new Graphics::TBitmap;
bmp->HandleType=bmDIB;
bmp->PixelFormat=pf32bit;
pyx=NULL;
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormDestroy(TObject *Sender)
{
// window destructor: release memory ... also ignore this
if (pyx) delete pyx;
delete bmp;
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormResize(TObject *Sender)
{
// on resize event ... just resize/redraw backbuffer also can ignore this
xs=ClientWidth; xs2=xs>>1;
ys=ClientHeight; ys2=ys>>1;
bmp->Width=xs;
bmp->Height=ys;
if (pyx) delete pyx;
pyx=new int*[ys];
for (int y=0;y<ys;y++) pyx[y]=(int*) bmp->ScanLine[y];
_redraw=true;
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormPaint(TObject *Sender)
{
// repaint event can ignore
_redraw=true;
}
//---------------------------------------------------------------------------
void __fastcall TMain::tim_redrawTimer(TObject *Sender)
{
// timer event to animate the cube ...
_redraw=true;
// rotate the object to see it in motion
double ang,c,s;
ang=5.0*deg; c=cos(ang); s=sin(ang); // rotate around z by 5 degrees per timer step
double rz[16]= { c, s, 0, 0,
-s, c, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1 };
ang=1.0*deg; c=cos(ang); s=sin(ang); // rotate around x by 1 degree per timer step
double rx[16]= { 1, 0, 0, 0,
0, c, s, 0,
0,-s, c, 0,
0, 0, 0, 1 };
matrix_mul(rep,rep,rz);
matrix_mul(rep,rep,rx);
draw();
}
//---------------------------------------------------------------------------
here is how it looks:
And a GIF animation with back-face culling:
[notes]
If you have more questions, comment below ...
[Edit2] basic 3D vector operations often needed
If you do not know how to compute vector operations like cross/dot products or absolute value see:
// cross product: W = U x V
W.x=(U.y*V.z)-(U.z*V.y)
W.y=(U.z*V.x)-(U.x*V.z)
W.z=(U.x*V.y)-(U.y*V.x)
// dot product: a = (U.V)
a=U.x*V.x+U.y*V.y+U.z*V.z
// abs of vector a = |U|
a=sqrt((U.x*U.x)+(U.y*U.y)+(U.z*U.z))
here is my C++ vector math:
static double vector_tmp[3];
double divide(double x,double y) { if ((y>=-1e-30)&&(y<=+1e-30)) return 0.0; return x/y; }
double* vector_ld(double x,double y,double z) { double *p=vector_tmp; p[0]=x; p[1]=y; p[2]=z; return p;}
double* vector_ld(double *p,double x,double y,double z) { p[0]=x; p[1]=y; p[2]=z; return p;}
void vector_copy(double *c,double *a) { for(int i=0;i<3;i++) c[i]=a[i]; }
void vector_abs(double *c,double *a) { for(int i=0;i<3;i++) c[i]=fabs(a[i]); }
void vector_one(double *c,double *a)
{
double l=divide(1.0,sqrt((a[0]*a[0])+(a[1]*a[1])+(a[2]*a[2])));
c[0]=a[0]*l;
c[1]=a[1]*l;
c[2]=a[2]*l;
}
void vector_len(double *c,double *a,double l)
{
l=divide(l,sqrt((a[0]*a[0])+(a[1]*a[1])+(a[2]*a[2])));
c[0]=a[0]*l;
c[1]=a[1]*l;
c[2]=a[2]*l;
}
void vector_neg(double *c,double *a) { for(int i=0;i<3;i++) c[i]=-a[i]; }
void vector_add(double *c,double *a,double *b) { for(int i=0;i<3;i++) c[i]=a[i]+b[i]; }
void vector_sub(double *c,double *a,double *b) { for(int i=0;i<3;i++) c[i]=a[i]-b[i]; }
void vector_mul(double *c,double *a,double *b) // cross
{
double q[3];
q[0]=(a[1]*b[2])-(a[2]*b[1]);
q[1]=(a[2]*b[0])-(a[0]*b[2]);
q[2]=(a[0]*b[1])-(a[1]*b[0]);
for(int i=0;i<3;i++) c[i]=q[i];
}
void vector_mul(double *c,double *a,double b) { for(int i=0;i<3;i++) c[i]=a[i]*b; }
void vector_mul(double *c,double a,double *b) { for(int i=0;i<3;i++) c[i]=a*b[i]; }
double vector_mul( double *a,double *b) { double c=0; for(int i=0;i<3;i++) c+=a[i]*b[i]; return c; } // dot
double vector_len(double *a) { return sqrt((a[0]*a[0])+(a[1]*a[1])+(a[2]*a[2])); }
double vector_len2(double *a) { return (a[0]*a[0])+(a[1]*a[1])+(a[2]*a[2]); }
[Edit3] local rotations for camera and object control via keyboard
As this has been asked a lot lately, here are some example answers of mine with demos:
stationary camera view control (partial pseudo inverse matrix)
camera and player control (inverse matrix)
How to preserve accuracy with cumulative transforms over time (full pseudo inverse matrix)
rotundus style simple OpenGL/C++/VCL player control example
I'm using FLTK to render OpenGL graphs. Currently I'm debugging a global array which is sorted by a heapsort function. My goal is to see a graphical swap of elements after each swap the heapsort function makes, but I don't want to catch an event from the FLTK event handler every time I need to redraw after a swap and then wait at a breakpoint. (The heapsort function and the OpenGL render part are running in two different threads, in case that doesn't go without saying.)
So the first try I had was to use:
Fl::add_timeout(1.0, MyRedrawCallback, (void *)&myWindow);
Fl::run();
void MyRedrawCallback(void *myWindow)
{
MyWindow *pMyWindow;
pMyWindow = (MyWindow *) myWindow;
pMyWindow->redraw();
Fl::repeat_timeout(1.0, MyRedrawCallback, (void *)&pMyWindow);
}
But every time the callback is called the second time, I get an "Access violation reading" error.
My guess is that Fl::run starts a different thread, so maybe the first time the callback still runs in the same thread and the address of redraw is still usable, but after that I'm in a different thread and the function at that address is not what I'm expecting?!
But I already took a different route because I wasn't sure I could even use the timeout this way.
So I was looking for a way to get an event that is equivalent to "a set amount of time has passed" or "nothing has happened for...", but there isn't such a handler, am I right?
Finally, is there a way to let FLTK execute commands even outside the event loop? Or is there another way to solve my problem?
Please take a look at the following example, taken from here: http://seriss.com/people/erco/fltk/#OpenGlInterp
#include <FL/Fl.H>
#include <FL/Fl_Gl_Window.H>
#include <FL/gl.h>
#include <math.h>
//
// Demonstrate interpolating shapes
// erco 06/10/05
//
class Playback : public Fl_Gl_Window {
int frame;
// Linear interpolation between two values based on 'frac' (0.0=a, 1.0=b)
float Linterp(float frac, float a, float b) {
return( a + ( frac * (b - a) ));
}
// Sinusoidal easein/easeout interpolation between two values based on 'frac' (0.0=a, 1.0=b)
float SinInterp(float frac, float a, float b) {
float pi = 3.14159;
frac = (sin(pi/2 + frac*pi ) + 1.0 ) / 2.0; // 0 ~ 1 -> 0 ~ 1
return(Linterp(frac,a,b));
}
// DRAW SIMPLE SHAPE INTERPOLATION
// Interpolation is based on the current frame number
//
void DrawShape(int frame) {
// Calculate a fraction that represents the frame# being shown
float frac = ( frame % 48 ) / 48.0 * 2;
if ( frac > 1.0 ) frac = 2.0-frac; // saw tooth wave: "/\/\/\"
static float a_xy[9][2] = {
{ -.5, -1. }, { 0.0, -.5 }, { -.5, -1. }, { 0.0, -.5 },
{ 0.0, 0.0 },
{ 0.0, -.5 }, { +.5, -1. }, { 0.0, -.5 }, { +.5, -1. },
};
static float b_xy[9][2] = {
{ -.25, -1. }, { -.50, -.75 }, { -.75, -1.0 }, { -.50, -.75 },
{ 0.0, 0.0 },
{ +.50, -.75 }, { +.75, -1.0 }, { +.50, -.75 }, { +.25, -1.0 }
};
// Linterp a and b to form new shape c
float c_xy[9][2];
for ( int i=0; i<9; i++ )
for ( int xy=0; xy<2; xy++ )
c_xy[i][xy] = SinInterp(frac, a_xy[i][xy], b_xy[i][xy]);
// Draw shape
glColor3f(1.0, 1.0, 1.0);
glBegin(GL_LINE_STRIP);
for ( int i=0; i<9; i++ )
glVertex2f(c_xy[i][0], c_xy[i][1]);
glEnd();
}
// DRAW THE WIDGET
// Each time we're called, assume
//
void draw() {
if (!valid()) {
valid(1);
glLoadIdentity();
glViewport(0,0,w(),h());
}
glClear(GL_COLOR_BUFFER_BIT);
// Draw shape 4x, rotated at 90 degree positions
glPushMatrix();
DrawShape(frame); glRotatef(90.0, 0, 0, 1);
DrawShape(frame); glRotatef(90.0, 0, 0, 1);
DrawShape(frame); glRotatef(90.0, 0, 0, 1);
DrawShape(frame);
glPopMatrix();
// Advance frame counter
++frame;
}
// 24 FPS TIMER CALLBACK
// Called 24x per second to redraw the widget
//
static void Timer_CB(void *userdata) {
Playback *pb = (Playback*)userdata;
pb->redraw();
Fl::repeat_timeout(1.0/24.0, Timer_CB, userdata);
}
public:
// Constructor
Playback(int X,int Y,int W,int H,const char*L=0) : Fl_Gl_Window(X,Y,W,H,L) {
frame = 0;
Fl::add_timeout(1.0/24.0, Timer_CB, (void*)this); // 24fps timer
end();
}
};
int main() {
Fl_Window win(500, 500);
Playback playback(10, 10, win.w()-20, win.h()-20);
win.resizable(&playback);
win.show();
return(Fl::run());
}
This example more or less does exactly what you want. Greg Ercolano has more FLTK examples on his web site. I recommend taking a look at http://seriss.com/people/erco/fltk/ .
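The question also asked whether FLTK can execute commands outside of the event loop. The timer above side-steps that, but for completeness: FLTK has a lock()/awake() mechanism intended for waking the GUI thread from a worker thread. A rough sketch only (the window pointer and the on_swap hook are placeholders for your own code, and Fl::lock() must be called once in the main thread before Fl::run() to enable it):

#include <FL/Fl.H>
#include <FL/Fl_Window.H>

static Fl_Window *g_win = 0;       // point this at your MyWindow instance

static void redraw_cb(void *)      // runs later in the FLTK (main) thread
{
    if (g_win) g_win->redraw();
}

void on_swap()                     // call this from the heapsort thread
{                                  // after each swap of elements
    Fl::awake(redraw_cb, 0);       // schedule redraw_cb in the GUI thread
}

// in main(), before the event loop:
//   Fl::lock();                   // enable FLTK's multithreading support
//   ... create the window, store it in g_win ...
//   return Fl::run();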
I'm a CS student, and for our final we were told to construct the reflections on multiple spheres via ray tracing. That's almost literally all we got for directions, except a picture of how it should look when finished. So I need spheres, with their reflections (using ray tracing) mapped on them, with the proper shading from a light.
Well, I have all of it working, except having multiple spheres, and the fact that it doesn't look like the picture he gave us as a rubric.
The multiple-spheres part I'm not too sure how to do, but I'd say I need to store them in a 2D array and modify a few sections of code.
What I had in mind was modifying sphere_intersect and find_reflect to include which sphere is being analyzed. Next, modify find_reflect so that when the new vector u is calculated, its starting point (P0) is also updated. Then, if the ray hits a sphere, it will have to count how many times the ray has been reflected. At some point terminate (after 10 times maybe) and then I'll just draw the pixel. For an added touch I'd like to add solid colors to the spheres, which would call for finding the normal of a sphere, I believe.
Anyways I'm going to attach a picture of his, a picture of mine, and the source code. Hopefully someone can help me out on this one.
Thanks in advance!
Professor's spheres
My spheres
#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>
#include <math.h>
#include <string>
#define screen_width 750
#define screen_height 750
#define true 1
#define false 0
#define perpendicular 0
int gridXsize = 20;
int gridZsize = 20;
float plane[] = {0.0, 1.0, 0.0, -50.0,};
float sphere[] = {250.0, 270.0, -100.0, 100.0};
float eye[] = {0.0, 400.0, 550.0};
float light[] = {250.0, 550.0, -200.0};
float dot(float *u, float *v)
{
return u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
}
void norm(float *u)
{
float norm = sqrt(abs(dot(u,u)));
for (int i =0; i <3; i++)
{
u[i] = u[i]/norm;
}
}
float plane_intersect(float *u, float *pO)
{
float normt[3] = {plane[0], plane[1], plane[2]};
float s;
if (dot(u,normt) == 0)
{
s = -10;
}
else
{
s = (plane[3]-(dot(pO,normt)))/(dot(u,normt));
}
return s;
}
float sphere_intersect(float *u, float *pO)
{
float deltaP[3] = {sphere[0]-pO[0],sphere[1]-pO[1],sphere[2]-pO[2]};
float deltLen = sqrt(abs(dot(deltaP,deltaP)));
float t=0;
float answer;
float det;
if ((det =(abs(dot(u,deltaP)*dot(u,deltaP))- (deltLen*deltLen)+sphere[3]*sphere[3])) < 0)
{
answer = -10;
}
else
{
t =-1*dot(u,deltaP)- sqrt(det) ;
if (t>0)
{
answer = t;
}
else
{
answer = -10;
}
}
return answer;
}
void find_reflect(float *u, float s, float *pO)
{
float n[3] = {pO[0]+s *u[0]-sphere[0],pO[1]+s *u[1]-sphere[1],pO[2]+s *u[2]- sphere[2]};
float l[3] = {s *u[0],s *u[1],s *u[2]};
u[0] =(2*dot(l,n)*n[0])-l[0];
u[1] = (2*dot(l,n)*n[1])-l[1];
u[2] = (2*dot(l,n)*n[2])-l[2];
}
float find_shade(float *u,float s, float *pO)
{
float answer;
float lightVec[3] = {light[0]-(pO[0]+s *u[0]), light[1]-(pO[1]+s *u[1]), light[2]-(pO[2]+s *u[2])};
float n[3] = {pO[0]+s *u[0]-sphere[0],pO[1]+s *u[1]-sphere[1],pO[2]+s *u[2]-sphere[2]};
answer = -1*dot(lightVec,n)/(sqrt(abs(dot(lightVec,lightVec)))*sqrt(abs(dot(n,n))));
return answer;
}
void init()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,screen_width,0,screen_height);
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
for (int i=0; i < screen_width; i++)
{
for (int j=0; j < screen_height; j++)
{
float ray[3] = {1*(eye[0]-i),-1*(eye[1]-j),1*eye[2]};
float point[3] = {i,j,0};
norm(ray);
int plotted = false;
while (!plotted)
{
float s_plane = plane_intersect(ray, point);
float s_sphere = sphere_intersect(ray, point);
if (s_plane <= 0 && s_sphere <=0)
{
glColor3f(0,0,0);
glBegin(GL_POINTS);
glVertex3f(i,j,0);
glEnd();
plotted = true;
}
else if (s_sphere >= 0 && (s_plane <=0 || s_sphere <= s_plane))
{
find_reflect(ray, s_sphere, point);
}
else if (s_plane >=0 && (s_sphere <=0 ||s_plane <= s_sphere))
{
float shade = find_shade(ray, s_plane, point);
float xx = s_plane*ray[0] + eye[0];
float z = s_plane*ray[2] + eye[2];
if (abs((int)xx/gridXsize)%2 == abs((int)z/gridZsize)%2)
{
glColor3f(shade,0,0);
}
else
{
glColor3f(shade,shade,shade);
}
glBegin(GL_POINTS);
glVertex3f(i,j,0);
glEnd();
plotted = true;
}
}
}
}
glFlush();
}
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutCreateWindow("Ray Trace with Sphere.");
glutInitWindowSize(screen_width,screen_height);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutDisplayFunc(display);
init();
glutMainLoop();
return 0;
}
The professor did not tell you too much, because such a topic is covered thousands of times over the web; just check out "Whitted Raytracing" ;) It's homework, and 5 minutes of googling around would solve the issue... Some clues to help without doing your homework for you:
Do it step by step, don't try to reproduce the picture in one step
Get one sphere working: if the ray hits the plane, output a green pixel; if it hits the sphere, a red pixel; if nothing, black. That's enough to get the intersection computations right. It looks, from your picture, like you don't have the intersections right, for a start.
Same as the previous step, but with several spheres. It works the same as with one sphere: check the intersection for all objects and keep the closest intersection from the point of view (see the sketch after this list).
Same as the previous step, but also compute the amount of light received at each intersection found, to get shades of red for the spheres and shades of green for the plane. (hint: dot product ^^)
Texture for the plane
Reflection for the spheres. Protip: a mirror doesn't reflect 100% of the light, just a fraction of it.
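For the "several spheres" step mentioned above, the usual pattern is a loop that keeps the closest positive hit and remembers which sphere produced it. A rough sketch in the style of your code; sphere_intersect_one is assumed to be your existing sphere_intersect() generalized to take the sphere {cx,cy,cz,r} as a parameter:

// assumed: sphere_intersect() generalized to take the sphere as a parameter;
// returns a negative value when the ray misses
float sphere_intersect_one(float *u, float *pO, float *sphere);

// find the closest sphere hit along ray u starting at pO
// spheres[i] = {cx, cy, cz, r}; returns the distance, negative on miss
float closest_sphere(float *u, float *pO, float spheres[][4], int count, int *hit_index)
{
    float best = -10;              // same "no hit" marker your code already uses
    *hit_index = -1;
    for (int i = 0; i < count; i++)
    {
        float s = sphere_intersect_one(u, pO, spheres[i]);
        if (s > 0 && (best < 0 || s < best))
        {
            best = s;              // keep the closest positive intersection
            *hit_index = i;        // remember which sphere was hit
        }
    }
    return best;
}

find_reflect and find_shade then need the center of spheres[hit_index] instead of the global sphere[], and the plane distance is compared against this best value exactly as your single-sphere version already does.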