Reflecting a scene by a planar mirror in OpenGL

I'm trying to create a scene with a flat planar mirror using the stencil buffer. I'm stuck at the point where I need to reflect my eyepoint for the second render pass, as depicted here https://www.opengl.org/archives/resources/code/samples/advanced/advanced97/notes/node90.html. What transformation do I need to apply to the view matrix to achieve the desired effect? To simplify things I tried to put the mirror in the XY plane and scale the view matrix by (1, 1, -1), but it doesn't work as I expected. Could you point me to how to reflect this view matrix so that I can render the mirrored scene, or is there another way around this?

This code is simple and probably not completely correct, but it works for me.
It uses GLM. If you don't want to use GLM, you can find information about these methods online.
bool scene::CalcMirrorMatrix (
    const glm::vec3 & eyePos,
    const glm::vec3 & mirrorPos,
    const glm::vec3 & mirrorNormal,
    float mirrorSize,
    glm::mat4 & outMirrorProjMatrix,
    glm::mat4 & outMirrorViewMatrix )
{
    //   Eye    N    R
    //     \    /\   /\
    //      \   |   /
    //       \  |  /
    //        \/|/
    //   ----------------- Mirror
    //          /
    //         / -R
    //        /
    //       \/
    //     MirrorEye
    glm::vec3 eyeV (mirrorPos - eyePos);            // eye to mirror center
    float dist = glm::length (eyeV);                // dist from eye to mirror
    eyeV = glm::normalize (eyeV);
    glm::vec3 N = glm::normalize (mirrorNormal);
    if (glm::dot (N, eyeV) <= 0.0)
        return false;                               // mirror looking backwards
    glm::vec3 R = glm::reflect (eyeV, N);           // reflected light vector, normalized.
    // Position of the 'eye' inside the mirror.
    glm::vec3 mirrorEyePos = mirrorPos - (R * dist);
    // Mirror camera FOV (perfect square mirror)
    //
    //              _-|
    //            _-  | mirrorSize/2
    //          _-    |
    //   mirrorEye -----------
    //        dist
    //
    float fovAngleRad = 2.0f * atanf ((mirrorSize / 2.0f) / dist);
    // Final matrices
    outMirrorProjMatrix = glm::perspective (fovAngleRad, 1.0f, dist, cCamFarP);
    outMirrorViewMatrix = glm::lookAt (mirrorEyePos, mirrorPos, glm::vec3 (0.0f, 1.0f, 0.0f));
    return true;
}
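For reference, the transformation the question actually asks about can also be built directly: reflect the existing view matrix across the mirror plane and render the second pass with it. Below is a hedged sketch, not part of the answer above; viewMatrix stands for whatever view matrix you already use, and the plane is given by mirrorPos and a unit-length mirrorNormal as in the code above.
// Sketch: reflection across the plane through point P with unit normal N.
// Householder reflection R = I - 2*N*N^T plus a translation of 2*dot(N,P)*N.
glm::mat4 makeReflectionMatrix (const glm::vec3 & P, const glm::vec3 & N)
{
    float d = glm::dot (N, P);
    glm::mat4 R (1.0f);                       // column-major, indexed R[col][row]
    R[0][0] = 1.0f - 2.0f * N.x * N.x;
    R[1][0] =       -2.0f * N.x * N.y;
    R[2][0] =       -2.0f * N.x * N.z;
    R[0][1] =       -2.0f * N.y * N.x;
    R[1][1] = 1.0f - 2.0f * N.y * N.y;
    R[2][1] =       -2.0f * N.y * N.z;
    R[0][2] =       -2.0f * N.z * N.x;
    R[1][2] =       -2.0f * N.z * N.y;
    R[2][2] = 1.0f - 2.0f * N.z * N.z;
    R[3][0] = 2.0f * d * N.x;                 // translation part
    R[3][1] = 2.0f * d * N.y;
    R[3][2] = 2.0f * d * N.z;
    return R;
}
// Usage sketch: reflect the world before viewing it, and flip the winding
// because a reflection changes handedness.
glm::mat4 mirroredView = viewMatrix * makeReflectionMatrix (mirrorPos, glm::normalize (mirrorNormal));
glFrontFace (GL_CW);   // restore with glFrontFace(GL_CCW) after the mirrored pass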

Related

Issue with Picking (custom unProject() function)

I'm currently working on an STL file viewer. It uses an Arcball camera:
To provide more features in this viewer (which can handle more than one object) I would like to implement click-to-select. To achieve it, I have used picking (pseudo code I have used).
At this time, my code to check for any 3D object between 2 points works. However, the conversion of the mouse position to a correct set of vectors is far from working:
glm::vec3 range = transform.GetPosition() + ( transform.GetFront() * 1000.0f);
// x and y are cursor position on the screen
glm::vec3 start = UnProject(x,y, transform.GetPosition().z);
glm::vec3 end = UnProject(x,y,range.z);
/*
The code which iterates over all objects in the scene and checks for collision
between my start / end and the object hitbox
*/
As you can see I have tried (maybe it is stupid) to set the z distance between my start and my end to 1000 * the Front vector of my camera. But it's not working: the set of vectors I get is incoherent.
For example, placing the camera at 0 0 0 with a front of 0 0 -1 gives me this set of vectors:
Start : 0.0000~ , 0.0000~ , 0.0000~
End : 0.0000~ , 0.0000~ , 0.0000~
which is (by my logic) incoherent; I would have expected something more like (Start: 0, 0, 0) (End: 0, 0, -1000).
I think there's an issue with my UnProject function :
glm::vec3 UnProject(float winX, float winY, float winZ)
{
    // Compute (projection x modelView) ^ -1:
    glm::mat4 modelView = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    const glm::mat4 m = glm::inverse(projection * modelView);
    // Need to invert Y since screen Y-origin point down,
    // while 3D Y-origin points up (this is an OpenGL only requirement):
    winY = ScreenSize.cy - winY;
    // Transformation of normalized coordinates between -1 and 1:
    glm::vec4 in;
    in.x = winX / ScreenSize.cx * 2.0 - 1.0;
    in.y = winY / ScreenSize.cy * 2.0 - 1.0;
    in.z = 2.0 * winZ - 1.0;
    in.w = 1.0;
    // To world coordinates:
    glm::vec4 out(m * in);
    if (out.w == 0.0) // Avoid a division by zero
    {
        return glm::vec3(0.0f);
    }
    out.w = 1.0 / out.w;
    return glm::vec3(out.x * out.w, out.y * out.w, out.z * out.w);
}
Since this function is a basic rewrite of the pseudo code (from here), and I'm far from being good at mathematics, I don't really see what could go wrong...
PS: my view matrix (provided by GetViewMatrix()) is correct (since I use it to show my scene)
my projection matrix is also correct
the ScreenSize object carries my viewport size
I have found what's wrong: the returned vec3 should be made by dividing each component by the w component (the perspective divide) instead of being multiplied by it. Here is the new UnProject function:
glm::vec3 UnProject2(float winX, float winY, float winZ){
    glm::mat4 View = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    glm::mat4 viewProjInv = glm::inverse(projection * View);
    winY = ScreenSize.cy - winY;
    glm::vec4 clickedPointOnSreen;
    clickedPointOnSreen.x = ((winX - 0.0f) / (ScreenSize.cx)) * 2.0f - 1.0f;
    clickedPointOnSreen.y = ((winY - 0.0f) / (ScreenSize.cy)) * 2.0f - 1.0f;
    clickedPointOnSreen.z = 2.0f * winZ - 1.0f;
    clickedPointOnSreen.w = 1.0f;
    glm::vec4 clickedPointOrigin = viewProjInv * clickedPointOnSreen;
    return glm::vec3(clickedPointOrigin.x / clickedPointOrigin.w, clickedPointOrigin.y / clickedPointOrigin.w, clickedPointOrigin.z / clickedPointOrigin.w);
}
I also changed the way start and end are calculated :
glm::vec3 start = UnProject2(x,y,0.0f);
glm::vec3 end = UnProject2(x,y,1.0f);
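With the corrected start/end pair, one common way to test an object's hitbox is a segment vs. axis-aligned box (slab) test. The sketch below is illustrative only: boxMin and boxMax are assumed members of your objects, not names from the question.
#include <algorithm> // std::swap, std::max, std::min
#include <cmath>     // std::fabs
#include <glm/glm.hpp>
// Sketch: does the segment start->end pass through the box [boxMin, boxMax]?
bool segmentHitsAABB(const glm::vec3 &start, const glm::vec3 &end,
                     const glm::vec3 &boxMin, const glm::vec3 &boxMax)
{
    glm::vec3 dir = end - start;
    float tMin = 0.0f, tMax = 1.0f;                 // parameter range along the segment
    for (int i = 0; i < 3; ++i)
    {
        if (std::fabs(dir[i]) < 1e-8f)              // segment parallel to this slab
        {
            if (start[i] < boxMin[i] || start[i] > boxMax[i]) return false;
        }
        else
        {
            float t1 = (boxMin[i] - start[i]) / dir[i];
            float t2 = (boxMax[i] - start[i]) / dir[i];
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;          // slabs no longer overlap
        }
    }
    return true;
}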

GLUT torus colliding with camera

I want to implement collision for 6 toruses which are randomly distributed in the game area. It is a simple 3D space game using a perspective view, in first person. I saw a Stack Overflow answer suggesting to compute the distance from whatever (the player) to the torus cell and, if it is bigger than half or the whole cell size, you are colliding (plus or minus your coordinate system and map topology tweaks). But if we take only the distance, doesn't that mean we're only considering the z coordinates? So if the camera moves to that distance (without considering the x, y coordinates) it is always counted as a collision, which is wrong, right?
I'm hoping to do this using the AABB algorithm. Is it ok to consider the camera position and the torus position as 2 boxes and check the collision (box to box), or the camera as a point and the torus as a box (point to box)? Or can somebody suggest the best way to do it?
Below is the code that I've tried so far.
float im[16], m[16], znear = 0.1, zfar = 100.0, fovx = 45.0 * M_PI / 180.0;
glm::vec3 p0, p1, p2, p3, o, u, v;
// p0, p1, p2, p3 hold your znear camera screen corners in world coordinates
void ChangeSize(int w, int h)
{
    GLfloat fAspect;
    // Prevent a divide by zero
    if(h == 0)
        h = 1;
    // Set Viewport to window dimensions
    glViewport(0, 0, w, h);
    // Calculate aspect ratio of the window
    fAspect = (GLfloat)w*1.0/(GLfloat)h;
    // Set the perspective coordinate system
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // field of view of 45 degrees, near and far planes 1.0 and 1000
    // note that znear and zfar should typically have a ratio of 1000:1 to make sorting out z depth easier for the GPU
    gluPerspective(45.0f, fAspect, 0.1f, 300.0f); // may need to make larger depending on project
    // Modelview matrix reset
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // get camera matrix (must be in right place in code before model transformations)
    glGetFloatv(GL_MODELVIEW_MATRIX, im); // get camera inverse matrix
    matrix_inv(m, im); // m = inverse(im)
    u = glm::vec3(m[0], m[1], m[2]); // x axis
    v = glm::vec3(m[4], m[5], m[6]); // y axis
    o = glm::vec3(m[12], m[13], m[14]); // origin
    o -= glm::vec3(m[8], m[9], m[10]) * znear; // z axis offset
    // scale by FOV
    u *= znear * tan(0.5 * fovx);
    v *= znear * tan(0.5 * fovx / fAspect);
    // get rectangle corners
    p0 = o - u - v;
    p1 = o + u - v;
    p2 = o + u + v;
    p3 = o - u + v;
}
void matrix_inv(float* a, float* b) // a[16] = Inverse(b[16])
{
    float x, y, z;
    // transpose of rotation matrix
    a[0] = b[0];
    a[5] = b[5];
    a[10] = b[10];
    x = b[1]; a[1] = b[4]; a[4] = x;
    x = b[2]; a[2] = b[8]; a[8] = x;
    x = b[6]; a[6] = b[9]; a[9] = x;
    // copy projection part
    a[3] = b[3];
    a[7] = b[7];
    a[11] = b[11];
    a[15] = b[15];
    // convert origin: new_pos = - new_rotation_matrix * old_pos
    x = (a[0] * b[12]) + (a[4] * b[13]) + (a[8] * b[14]);
    y = (a[1] * b[12]) + (a[5] * b[13]) + (a[9] * b[14]);
    z = (a[2] * b[12]) + (a[6] * b[13]) + (a[10] * b[14]);
    a[12] = -x;
    a[13] = -y;
    a[14] = -z;
}
//Store torus coordinates
std::vector<std::vector<GLfloat>> translateTorus = { { 0.0, 1.0, -10.0, 1 }, { 0.0, 4.0, -6.0, 1 } , { -1.0, 0.0, -4.0, 1 },
{ 3.0, 1.0, -6.0, 1 }, { 1.0, -1.0, -9.0, 1 } , { 4.0, 1.0, -4.0, 1 } };
GLfloat xpos, ypos, zpos, flagToDisplayCrystal;
//Looping through 6 Torus
for (int i = 0; i < translateTorus.size(); i++) {
//Get the torus coordinates
xpos = translateTorus[i][0];
ypos = translateTorus[i][1];
zpos = translateTorus[i][2];
//This variable will work as a variable to display crystal after collision
flagToDisplayCrystal = translateTorus[i][3];
//p0 min, p2 max
//Creating a square using Torus index coordinates and radius
double halfside = 1.0 / 2;
//This (xpos+halfside), (xpos-halfside), (ypos+halfside), (ypos-halfside) are //created using Torus index and radius
float d1x = p0[0] - (xpos + halfside);
float d1y = p0[1] - (ypos + halfside);
float d2x = (xpos - halfside) - p2[0];
float d2y = (ypos - halfside) - p2[1];
//Collision is checking here
//For square's min z and max z is checking whether equal to camera's min //z and max z
if ((d1x > 0.0f || d1y > 0.0f || d2x > 0.0f || d2y > 0.0f) && p2[2] == zpos && p0[2] == zpos) {
//If there is collision update the variable as 0
translateTorus[i][3] = 0;
}
else {
if (flagToDisplayCrystal == 1) {
glPushMatrix();
glEnable(GL_TEXTURE_2D);
glTranslatef(xpos, ypos, zpos);
glRotatef(fPlanetRot, 0.0f, -1.0f, 0.0f);
glColor3f(0.0, 0.0, 0.0);
// Select the texture object
glBindTexture(GL_TEXTURE_2D, textures[3]);
glutSolidTorus(0.1, 1.0, 30, 30);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
}
}
}
As I mentioned in the comments, you've got 2 options: either use OpenGL rendering or compute entirely on the CPU side without it. Let's start with rendering first:
render your scene
but instead of the colors of the torus and other objects, use integer indexes (for example 0 empty space, 1 obstacle, 2 torus, ...). You can even have separate indexes for each object in the world so you know exactly which one is hit, etc.
So: clear the screen with the empty color, render your scene (using indexes instead of colors with glColor??(???)) without lighting or shading or whatever. But do not swap buffers!!! as that would show the stuff on screen and cause flickering.
read rendered screen and depth buffers
You simply use glReadPixels to copy your screen and depth buffers into CPU side memory (1D arrays); let's call them scr[], zed[].
scan the scr[] for color matching torus indexes
Simply loop through all pixels, and if a torus pixel is found, check its depth. If it is close enough to the camera you found your collision.
render normally
Now clear the screen again and render your scene with colors and lighting... Now you can swap buffers too.
Beware the depth buffer will be non-linear, which requires linearization to obtain the original depth in world units. For more about it and an example of reading both scr, zed see:
depth buffer got by glReadPixels is always 1
OpenGL 3D-raypicking with high poly meshes
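As a rough illustration of the read-back and linearization steps above (a sketch only; xs, ys, pixelIndex and the znear/zfar values are assumptions matching the gluPerspective call from the question, not code from this answer):
// Sketch: read back the index pass and linearize one depth sample (assumes <vector> is included).
std::vector<GLubyte> scr(size_t(xs) * size_t(ys) * 4);   // RGBA "index" colors
std::vector<GLfloat> zed(size_t(xs) * size_t(ys));       // non-linear depth in [0,1]
glReadPixels(0, 0, xs, ys, GL_RGBA, GL_UNSIGNED_BYTE, scr.data());
glReadPixels(0, 0, xs, ys, GL_DEPTH_COMPONENT, GL_FLOAT, zed.data());
// Linearize a depth sample back to an eye-space distance (standard perspective depth):
float znear = 0.1f, zfar = 300.0f;                       // must match your gluPerspective call
float zndc  = 2.0f * zed[pixelIndex] - 1.0f;             // [0,1] -> [-1,+1]
float zeye  = 2.0f * znear * zfar / (zfar + znear - zndc * (zfar - znear));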
The other approach is much faster in case you do not have too many toruses. You simply compute the intersection between the camera's znear plane and the torus, which boils down to either AABB vs. rectangle intersection or cylinder vs. rectangle intersection.
However, if you're not familiar with 3D vector math you might get lost quickly.
Let's assume the torus is described by an AABB. Then the intersection between that and a rectangle boils down to checking the intersection between a line (each edge of the AABB) and the rectangle: simply find the intersection between the plane and the line, and check if the point is inside the rectangle.
if our rectangle is defined by its vertexes in CW or CCW order (p0,p1,p2,p3) and the line by its endpoints q0,q1 then:
n = normalize(cross(p1-p0,p2-p1))        // rectangle normal
dq = normalize(q1-q0)                    // line direction
q = q0 + dq*dot(n,p0-q0)/dot(n,dq)       // plane/line intersection
So now just check if q is inside the rectangle. There are 2 ways: either test that all crosses between (q - edge_start) and (edge_end - edge_start) point in the same direction, or that all dots between each edge normal and (q - edge_point) have the same sign or are zero.
The problem is that both the AABB and the rectangle must be in the same coordinate system, so either transform the AABB into camera coordinates using the modelview matrix, or transform the rectangle into world coordinates using the inverse of the modelview. The latter is better, as you do it just once instead of transforming each torus's AABB ...
For more info about math side see:
Cone to box collision
Understanding 4x4 homogenous transform matrices
The rectangle itself is just extracted from your camera matrix (part of the modelview): the position and the x,y basis vectors give you the center and axes of your rectangle... The size must be derived from the perspective matrix (or the parameters you passed to it, especially the aspect ratio, FOV and znear).
Well, first you need to obtain the camera (view) matrix. The GL_MODELVIEW usually holds:
GL_MODELVIEW = Inverse(Camera)*Rendered_Object
so you need to find the place in your code where GL_MODELVIEW holds just the Inverse(Camera) transformation, and place this there:
float aspect=float(xs)/float(ys); // aspect from OpenGL window resolution
float im[16],m[16],znear=0.1,zfar=100.0,fovx=60.0*M_PI/180.0;
vec3 p0,p1,p2,p3,o,u,v; // 3D vectors
// this is how my perspective is set
// glMatrixMode(GL_PROJECTION);
// glLoadIdentity();
// gluPerspective(fovx*180.0/(M_PI*aspect),aspect,znear,zfar);
// get camera matrix (must be in right place in code before model transformations)
glGetFloatv(GL_MODELVIEW_MATRIX,im); // get camera inverse matrix
matrix_inv(m,im); // m = inverse(im)
u =vec3(m[ 0],m[ 1],m[ 2]); // x axis
v =vec3(m[ 4],m[ 5],m[ 6]); // y axis
o =vec3(m[12],m[13],m[14]); // origin
o-=vec3(m[ 8],m[ 9],m[10])*znear; // z axis offset
// scale by FOV
u*=znear*tan(0.5*fovx);
v*=znear*tan(0.5*fovx/aspect);
// get rectangle corners
p0=o-u-v;
p1=o+u-v;
p2=o+u+v;
p3=o-u+v;
// render it for debug
glColor3f(1.0,1.0,0.0);
glBegin(GL_QUADS);
glColor3f(1.0,0.0,0.0); glVertex3fv(p0.dat);
glColor3f(0.0,0.0,0.0); glVertex3fv(p1.dat);
glColor3f(0.0,0.0,1.0); glVertex3fv(p2.dat);
glColor3f(1.0,1.0,1.0); glVertex3fv(p3.dat);
glEnd();
Which basically loads the matrix into CPU side variables and inverts it like this:
void matrix_inv(float *a,float *b) // a[16] = Inverse(b[16])
{
float x,y,z;
// transpose of rotation matrix
a[ 0]=b[ 0];
a[ 5]=b[ 5];
a[10]=b[10];
x=b[1]; a[1]=b[4]; a[4]=x;
x=b[2]; a[2]=b[8]; a[8]=x;
x=b[6]; a[6]=b[9]; a[9]=x;
// copy projection part
a[ 3]=b[ 3];
a[ 7]=b[ 7];
a[11]=b[11];
a[15]=b[15];
// convert origin: new_pos = - new_rotation_matrix * old_pos
x=(a[ 0]*b[12])+(a[ 4]*b[13])+(a[ 8]*b[14]);
y=(a[ 1]*b[12])+(a[ 5]*b[13])+(a[ 9]*b[14]);
z=(a[ 2]*b[12])+(a[ 6]*b[13])+(a[10]*b[14]);
a[12]=-x;
a[13]=-y;
a[14]=-z;
}
And compute the corners with perspective in mind as described above...
I used GLSL-like vec3, but you can use any 3D math, even your own like float p0[3],.... You just need +, - and multiplication by a constant.
Now p0,p1,p2,p3 hold your znear camera screen corners in world coordinates.
[Edit1] example
I managed to put together a simple example for this. Here are the support functions used first:
//---------------------------------------------------------------------------
void glutSolidTorus(float r,float R,int na,int nb) // render torus(r,R)
{
float *pnt=new float[(na+1)*(nb+1)*3*2]; if (pnt==NULL) return;
float *nor=pnt+((na+1)*(nb+1)*3);
float ca,sa,cb,sb,a,b,da,db,x,y,z,nx,ny,nz;
int ia,ib,i,j;
da=2.0*M_PI/float(na);
db=2.0*M_PI/float(nb);
glBegin(GL_LINES);
for (i=0,a=0.0,ia=0;ia<=na;ia++,a+=da){ ca=cos(a); sa=sin(a);
for ( b=0.0,ib=0;ib<=nb;ib++,b+=db){ cb=cos(b); sb=sin(b);
z=r*ca;
x=(R+z)*cb; nx=(x-(R*cb))/r;
y=(R+z)*sb; ny=(y-(R*sb))/r;
z=r*sa; nz=sa;
pnt[i]=x; nor[i]=nx; i++;
pnt[i]=y; nor[i]=ny; i++;
pnt[i]=z; nor[i]=nz; i++;
}}
glEnd();
for (ia=0;ia<na;ia++)
{
i=(ia+0)*(nb+1)*3;
j=(ia+1)*(nb+1)*3;
glBegin(GL_QUAD_STRIP);
for (ib=0;ib<=nb;ib++)
{
glNormal3fv(nor+i); glVertex3fv(pnt+i); i+=3;
glNormal3fv(nor+j); glVertex3fv(pnt+j); j+=3;
}
glEnd();
}
delete[] pnt;
}
//---------------------------------------------------------------------------
const int AABB_lin[]= // AABB lines
{
0,1,
1,2,
2,3,
3,0,
4,5,
5,6,
6,7,
7,4,
0,4,
1,5,
2,6,
3,7,
-1
};
const int AABB_fac[]= // AABB quads
{
3,2,1,0,
4,5,6,7,
0,1,5,4,
1,2,6,5,
2,3,7,6,
3,0,4,7,
-1
};
void AABBSolidTorus(vec3 *aabb,float r,float R) // aabb[8] = AABB of torus(r,R)
{
R+=r;
aabb[0]=vec3(-R,-R,-r);
aabb[1]=vec3(+R,-R,-r);
aabb[2]=vec3(+R,+R,-r);
aabb[3]=vec3(-R,+R,-r);
aabb[4]=vec3(-R,-R,+r);
aabb[5]=vec3(+R,-R,+r);
aabb[6]=vec3(+R,+R,+r);
aabb[7]=vec3(-R,+R,+r);
}
//---------------------------------------------------------------------------
void matrix_inv(float *a,float *b) // a[16] = Inverse(b[16])
{
float x,y,z;
// transpose of rotation matrix
a[ 0]=b[ 0];
a[ 5]=b[ 5];
a[10]=b[10];
x=b[1]; a[1]=b[4]; a[4]=x;
x=b[2]; a[2]=b[8]; a[8]=x;
x=b[6]; a[6]=b[9]; a[9]=x;
// copy projection part
a[ 3]=b[ 3];
a[ 7]=b[ 7];
a[11]=b[11];
a[15]=b[15];
// convert origin: new_pos = - new_rotation_matrix * old_pos
x=(a[ 0]*b[12])+(a[ 4]*b[13])+(a[ 8]*b[14]);
y=(a[ 1]*b[12])+(a[ 5]*b[13])+(a[ 9]*b[14]);
z=(a[ 2]*b[12])+(a[ 6]*b[13])+(a[10]*b[14]);
a[12]=-x;
a[13]=-y;
a[14]=-z;
}
//---------------------------------------------------------------------------
const int QUAD_lin[]= // quad lines
{
0,1,
1,2,
2,3,
3,0,
-1
};
const int QUAD_fac[]= // quad quads
{
0,1,2,3,
-1
};
void get_perspective_znear(vec3 *quad) // quad[4] = world coordinates of 4 corners of screen at znear distance from camera
{
vec3 o,u,v; // 3D vectors
float im[16],m[16],znear,zfar,aspect,fovx;
// get stuff from perspective
glGetFloatv(GL_PROJECTION_MATRIX,m); // get perspective projection matrix
zfar =0.5*m[14]*(1.0-((m[10]-1.0)/(m[10]+1.0)));// compute zfar from perspective matrix
znear=zfar*(m[10]+1.0)/(m[10]-1.0); // compute znear from perspective matrix
aspect=m[5]/m[0];
fovx=2.0*atan(1.0/m[5])*aspect;
// get stuff from camera matrix (must be in right place in code before model transformations)
glGetFloatv(GL_MODELVIEW_MATRIX,im); // get camera inverse matrix
matrix_inv(m,im); // m = inverse(im)
u =vec3(m[ 0],m[ 1],m[ 2]); // x axis
v =vec3(m[ 4],m[ 5],m[ 6]); // y axis
o =vec3(m[12],m[13],m[14]); // origin
o-=vec3(m[ 8],m[ 9],m[10])*znear; // z axis offset
// scale by FOV
u*=znear*tan(0.5*fovx);
v*=znear*tan(0.5*fovx/aspect);
// get rectangle corners
quad[0]=o-u-v;
quad[1]=o+u-v;
quad[2]=o+u+v;
quad[3]=o-u+v;
}
//---------------------------------------------------------------------------
bool collideLineQuad(vec3 *lin,vec3 *quad) // return if lin[2] is colliding quad[4]
{
float t,l,u,v;
vec3 p,p0,p1,dp;
vec3 U,V,W;
// quad (rectangle) basis vectors
U=quad[1]-quad[0]; u=length(U); u*=u;
V=quad[3]-quad[0]; v=length(V); v*=v;
W=normalize(cross(U,V));
// convert line from world coordinates to quad local ones
p0=lin[0]-quad[0]; p0=vec3(dot(p0,U)/u,dot(p0,V)/v,dot(p0,W));
p1=lin[1]-quad[0]; p1=vec3(dot(p1,U)/u,dot(p1,V)/v,dot(p1,W));
dp=p1-p0;
// test if crossing the plane
if (fabs(dp.z)<1e-10) return false;
t=-p0.z/dp.z;
p=p0+(t*dp);
// test inside 2D quad (rectangle)
if ((p.x<0.0)||(p.x>1.0)) return false;
if ((p.y<0.0)||(p.y>1.0)) return false;
// inside line
if ((t<0.0)||(t>1.0)) return false;
return true;
}
//---------------------------------------------------------------------------
bool collideQuadQuad(vec3 *quad0,vec3 *quad1) // return if quad0[4] is colliding quad1[4]
{
int i;
vec3 l[2];
// lines vs. quads
for (i=0;QUAD_lin[i]>=0;)
{
l[0]=quad0[QUAD_lin[i]]; i++;
l[1]=quad0[QUAD_lin[i]]; i++;
if (collideLineQuad(l,quad1)) return true;
}
for (i=0;QUAD_lin[i]>=0;)
{
l[0]=quad1[QUAD_lin[i]]; i++;
l[1]=quad1[QUAD_lin[i]]; i++;
if (collideLineQuad(l,quad0)) return true;
}
// ToDo coplanar quads tests (not needed for AABB test)
return false;
}
//---------------------------------------------------------------------------
bool collideAABBQuad(vec3 *aabb,vec3 *quad) // return if aabb[8] is colliding quad[4]
{
int i;
vec3 q[4],n,p;
// test all AABB faces (rectangle) for intersection with quad (rectangle)
for (i=0;AABB_fac[i]>=0;)
{
q[0]=aabb[AABB_fac[i]]; i++;
q[1]=aabb[AABB_fac[i]]; i++;
q[2]=aabb[AABB_fac[i]]; i++;
q[3]=aabb[AABB_fac[i]]; i++;
if (collideQuadQuad(q,quad)) return true;
}
// test if one point of quad is fully inside AABB
for (i=0;AABB_fac[i]>=0;i+=4)
{
n=cross(aabb[AABB_fac[i+1]]-aabb[AABB_fac[i+0]],
aabb[AABB_fac[i+2]]-aabb[AABB_fac[i+1]]);
if (dot(n,quad[0]-aabb[AABB_fac[i+0]])>0.0) return false;
}
return true;
}
//---------------------------------------------------------------------------
And here is the usage (during rendering):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
int i;
float m[16];
mat4 m0,m1;
vec4 v4;
float aspect=float(xs)/float(ys);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0/aspect,aspect,0.1,20.0);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
static float anim=180.0; anim+=0.1; if (anim>=360.0) anim-=360.0;
glEnable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
vec3 line[2],quad[4],aabb[8]; // 3D vectors
get_perspective_znear(quad);
// store view matrix for latter
glMatrixMode(GL_MODELVIEW);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
m0=mat4(m[0],m[1],m[2],m[3],m[4],m[5],m[6],m[7],m[8],m[9],m[10],m[11],m[12],m[13],m[14],m[15]);
m0=inverse(m0);
// <<-- here should be for start that loop through your toruses
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// set/animate torus position
glTranslatef(0.3,0.3,3.5*(-1.0-cos(anim)));
glRotatef(+75.0,0.5,0.5,0.0);
// get actual matrix and convert it to the change
glGetFloatv(GL_MODELVIEW_MATRIX,m);
m1=m0*mat4(m[0],m[1],m[2],m[3],m[4],m[5],m[6],m[7],m[8],m[9],m[10],m[11],m[12],m[13],m[14],m[15]);
// render torus and compute its AABB
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glColor3f(1.0,1.0,1.0);
glutSolidTorus(0.1,0.5,36,36);
AABBSolidTorus(aabb,0.1,0.5);
glDisable(GL_LIGHT0);
glDisable(GL_LIGHTING);
// convert AABB to the same coordinates as quad
for (i=0;i<8;i++) aabb[i]=(m1*vec4(aabb[i],1.0)).xyz;
// restore original view matrix
glPopMatrix();
// render wireframe AABB
glColor3f(0.0,1.0,0.0);
glBegin(GL_LINES);
for (i=0;AABB_lin[i]>=0;i++)
glVertex3fv(aabb[AABB_lin[i]].dat);
glEnd();
/*
// render filled AABB for debug
glBegin(GL_QUADS);
for (i=0;AABB_fac[i]>=0;i++)
glVertex3fv(aabb[AABB_fac[i]].dat);
glEnd();
// render quad for debug
glBegin(GL_QUADS);
glColor3f(1.0,1.0,1.0);
for (i=0;QUAD_fac[i]>=0;i++)
glVertex3fv(quad[QUAD_fac[i]].dat);
glEnd();
*/
// render X on colision
if (collideAABBQuad(aabb,quad))
{
glColor3f(1.0,0.0,0.0);
glBegin(GL_LINES);
glVertex3fv(quad[0].dat);
glVertex3fv(quad[2].dat);
glVertex3fv(quad[1].dat);
glVertex3fv(quad[3].dat);
glEnd();
}
// <<-- here should be end of the for that loop through your toruses
glFlush();
SwapBuffers(hdc);
Just ignore the glutSolidTorus function as you already got it ... Here is a preview:
The red cross indicates a collision with the screen ...

Implement camera with off-axis projection

I'm trying to create a 3D viewer for a parallax barrier display, but I'm stuck with camera movements. You can see a parallax barrier display at: displayblocks.org
Multiple views are needed for this effect; this tutorial provides code for calculating the interViewpointDistance depending on the display properties, and thus for selecting the head position.
Here are the parts of the code involved in the matrix creation:
for (y = 0; y < viewsCountY; y++) {
    for (x = 0; x <= viewsCountX; x++) {
        viewMatrix = glm::mat4(1.0f);
        // selection of the head position
        float cameraX = (float(x - int(viewsCountX / 2))) * interViewpointDistance;
        float cameraY = (float(y - int(viewsCountY / 2))) * interViewpointDistance;
        camera.Position = glm::vec3(camera.Position.x + cameraX, camera.Position.y + cameraY, camera.Position.z);
        // Move the apex of the frustum to the origin.
        viewMatrix = glm::translate(viewMatrix, -camera.Position);
        projectionMatrix = get_off_Axis_Projection_Matrix();
        // render stuff
        // (...)
        // glfwSwapBuffers();
    }
}
The following code is the projection matrix function. I use Robert Kooima's paper on generalized perspective projection.
glm::mat4 get_off_Axis_Projection_Matrix() {
    glm::vec3 Pe = camera.Position;
    // screen corner coordinates (lower-left, lower-right, upper-left)
    glm::vec3 Pa = glm::vec3(-screenSizeX, -screenSizeY, 0.0);
    glm::vec3 Pb = glm::vec3(screenSizeX, -screenSizeY, 0.0);
    glm::vec3 Pc = glm::vec3(-screenSizeX, screenSizeY, 0.0);
    // Compute an orthonormal basis for the screen.
    glm::vec3 Vr = Pb - Pa;
    Vr = glm::normalize(Vr);
    glm::vec3 Vu = Pc - Pa;
    Vu = glm::normalize(Vu);
    glm::vec3 Vn = glm::cross(Vr, Vu);
    Vn = glm::normalize(Vn);
    // Compute the screen corner vectors.
    glm::vec3 Va = Pa - Pe;
    glm::vec3 Vb = Pb - Pe;
    glm::vec3 Vc = Pc - Pe;
    // Find the distance from the eye to the screen plane.
    float d = -glm::dot(Va, Vn);
    // Find the extent of the perpendicular projection.
    float left = glm::dot(Va, Vr) * const_near / d;
    float right = glm::dot(Vr, Vb) * const_near / d;
    float bottom = glm::dot(Vu, Va) * const_near / d;
    float top = glm::dot(Vu, Vc) * const_near / d;
    // Load the perpendicular projection.
    return glm::frustum(left, right, bottom, top, const_near, const_far + d);
}
These two methods work, and I can see that my multiple views are projected correctly.
But I can't manage to make a camera that works normally, like in an FPS, with tilt and pan.
This code, for example, gives me the "head tracking" effect (but with the mouse); it was handy to test projections, but it is not what I'm looking for.
float cameraX = (mouseX - windowWidth / 2) / (windowWidth * headDisplacementFactor);
float cameraY = (mouseY - windowHeight / 2) / (windowHeight * headDisplacementFactor);
camera.Position = glm::vec3(cameraX, cameraY, 60.0f);
viewMatrix = glm::translate(viewMatrix, -camera.Position);
My camera class works if the view matrix is created with lookAt. But with the off-axis projection, using lookAt will rotate the scene, and the correspondence between the near plane and the screen plane will be lost.
I may need to translate/rotate the screen corner coordinates Pa, Pb, Pc used to create the frustum, but I don't know how.
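For reference, Kooima's formulation composes the frustum above with a rotation into the screen basis and a translation of the eye, and a pan/tilt can then be expressed by rotating the screen corners about the eye. The following is a hedged sketch reusing the names from the question; camRot is an assumed camera rotation matrix, not code from the post.
// Sketch: Kooima's full composition. M^T rotates the world into the screen
// basis (Vr, Vu, Vn), and the final translate moves the eye Pe to the origin.
glm::mat4 M(1.0f);
M[0] = glm::vec4(Vr, 0.0f);   // screen right
M[1] = glm::vec4(Vu, 0.0f);   // screen up
M[2] = glm::vec4(Vn, 0.0f);   // screen normal
glm::mat4 P = glm::frustum(left, right, bottom, top, const_near, const_far);
glm::mat4 offAxis = P * glm::transpose(M) * glm::translate(glm::mat4(1.0f), -Pe);
// One option for pan/tilt: rotate the screen corners about the eye before
// building the frustum (camRot is assumed to hold the camera's yaw/pitch):
glm::vec3 Pa_rot = Pe + glm::mat3(camRot) * (Pa - Pe);   // likewise for Pb and Pc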

Rotate Object around origin as it faces origin in OpenGL with GLM?

I'm trying to make a simple animation where an object rotates around the world origin in OpenGL using the GLM lib. My idea is:
Send object to origin
Rotate it
Send back to original position
Make it look at what I want
Here's my implementation:
// Rotates object around point p
void rotate_about(float deltaTime, glm::vec3 p, bool ended) {
    glm::vec3 axis = glm::vec3(0, 1, 0); // rotation axis
    glm::mat4 scale_m = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, scale)); // scale matrix
    glm::mat4 rotation = getMatrix(Right, Up, Front, Position); // builds rotation matrix
    rotation = glm::translate(rotation, p - Position);
    rotation = glm::rotate(rotation, ROTATION_SPEED * deltaTime, axis);
    rotation = glm::translate(rotation, Position - p);
    Matrix = rotation * scale_m;
    // look at point P
    Front = glm::normalize(p - start_Position);
    Right = glm::normalize(glm::cross(WorldUp, Front));
    Up = glm::normalize(glm::cross(Right, Front));
    if (ended == true) { // if last iteration of my animation: saves position
        Position.x = Matrix[3][0];
        Position.y = Matrix[3][1];
        Position.z = Matrix[3][2];
    }
}
getMatrix() simply returns a 4x4 matrix as:
| Right.x  Right.y  Right.z  0 |
| Up.x     Up.y     Up.z     0 |
| Front.x  Front.y  Front.z  0 |
| Pos.x    Pos.y    Pos.z    1 |
I'm using this image as reference:
As it is, my model simply disappears when I start the animation. If I remove the lines below "//look at point P" it rotates around the origin, but twitches every time my animation restarts. I'm guessing I'm losing or mixing information I shouldn't somewhere.
How can I store my model's Front/Right/Up information so I can rebuild its matrix from scratch?
First edit: this is the effect I'm having when I don't try to make my model look at the point P, in this case the origin. When I do try, my model disappears. How can I make it look at where I want, and how can I get my model's new Front/Right/Up vectors after I finish rotating it?
This is the code I ran in the gif above
Operations like glm::translate() or glm::rotate() build a matrix from their parameters and multiply the input matrix by this new matrix.
This means that
rotation = glm::translate(rotation, Position - p );
can be expressed as (pseudo code):
rotation = rotation * translation(Position - p);
Note, that the matrix multiplication has to be "read" from the left to the right. (See GLSL Programming/Vector and Matrix Operations)
The operation translate * rotate causes a rotation around the origin of the object:
The operation rotate * translate causes a rotation around the origin of the world:
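To make the two orders concrete, here is a small illustrative sketch (objectPos and angle are placeholders, not names from the question):
// Sketch: the same translation and rotation composed in both orders.
glm::vec3 objectPos(2.0f, 0.0f, 0.0f);                        // placeholder position
float angle = glm::radians(45.0f);                            // placeholder angle
glm::mat4 T = glm::translate(glm::mat4(1.0f), objectPos);
glm::mat4 R = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 spinInPlace = T * R;   // rotate about the object's own origin, then place it
glm::mat4 orbitWorld  = R * T;   // place the object, then rotate it around the world origin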
The matrix glm::mat4 rotation (in the code of your question) is the current model matrix of your object.
It contains the position (translation) and the orientation of the object.
You want to rotate the object around the origin of the world.
To do so you have to create a matrix which contains the new rotation
glm::mat4 new_rot = glm::rotate(glm::mat4(1.0f), ROTATION_SPEED * deltaTime, axis);
Then you can calculate the final matrix as follows:
Matrix = new_rot * rotation * scale_m;
If you want to rotate an object around a point p and the object should always face the point p, then all you need is the position of the object (start_position) and the rotation axis.
In your case the rotation axis is the up vector of the world.
glm::vec3 WorldUp( 0.0f, 1.0f, 0.0f );
glm::vec3 start_position = ...;
float scale = ...;
glm::vec3 p = ...;
Calculate the rotation matrix and the new (rotated) position
glm::mat4 rotate = glm::rotate(glm::mat4(1.0f), ROTATION_SPEED * deltaTime, WorldUp);
glm::vec4 pos_rot_h = rotate * glm::vec4( start_position - p, 1.0f );
glm::vec3 pos_rot = glm::vec3( pos_rot_h ) + p;
Calculate the direction in which the object should "look"
glm::vec3 Front = glm::normalize(p - pos_rot);
You can use your function getMatrix to setup the current orientation matrix of the object:
glm::vec3 Right = glm::normalize(glm::cross(WorldUp, Front));
glm::mat4 pos_look = getMatrix(Right, WorldUp, Front, pos_rot);
Calculate the model matrix:
glm::mat4 scale_m = glm::scale(glm::mat4(1.0f), glm::vec3(scale));
Matrix = pos_look * scale_m;
The final code may look like this:
glm::mat4 getMatrix(const glm::vec3 &X, const glm::vec3 &Y, const glm::vec3 &Z, const glm::vec3 &T)
{
    return glm::mat4(
        glm::vec4( X, 0.0f ),
        glm::vec4( Y, 0.0f ),
        glm::vec4( Z, 0.0f ),
        glm::vec4( T, 1.0f ) );
}
void rotate_about(float deltaTime, glm::vec3 p, bool ended) {
    glm::mat4 rotate = glm::rotate(glm::mat4(1.0f), ROTATION_SPEED * deltaTime, WorldUp);
    glm::vec4 pos_rot_h = rotate * glm::vec4( start_position - p, 1.0f );
    glm::vec3 pos_rot = glm::vec3( pos_rot_h ) + p;
    glm::vec3 Front = glm::normalize(p - pos_rot);
    glm::vec3 Right = glm::normalize(glm::cross(WorldUp, Front));
    glm::mat4 pos_look = getMatrix(Right, WorldUp, Front, pos_rot);
    glm::mat4 scale_m = glm::scale(glm::mat4(1.0f), glm::vec3(scale));
    Matrix = pos_look * scale_m;
    if ( ended == true )
        Position = glm::vec3(Matrix[3]);
}
SOLUTION:
The problem was in this part:
rotation = glm::translate(rotation, p - Position );
rotation = glm::rotate(rotation, ROTATION_SPEED * deltaTime, axis);
rotation = glm::translate(rotation, Position - p );
if (ended == true) { //if last iteration of my animation: saves position
Position.x = Matrix[3][0];
Position.y = Matrix[3][1];
Position.z = Matrix[3][2];
}
Note that I was using the distance between the world origin and the model as the radius of the translation. However, after the animation ends I update the model's Position, which changes the result of p - Position, i.e., the orbit radius. When this happens the model "twitches", because it lost rotation information.
I solved it by using a different variable for the orbit radius, and applying the translation on the z-axis of the model. When the translation is applied on the x-axis, the model - which faces the camera initially - will end up sideways to the origin. However, applying the translation on the z-axis will end up with the model either facing or turned away from the origin, depending on the sign.
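One possible reading of that fix as code (a sketch only; orbitRadius is an assumed constant holding the initial distance to p, and the exact composition is a guess rather than the poster's code):
// Sketch: constant orbit radius applied on the model's local Z after rotating about the pivot p.
glm::mat4 m = getMatrix(Right, Up, Front, p);                        // orientation, pivot at p
m = glm::rotate(m, ROTATION_SPEED * deltaTime, glm::vec3(0, 1, 0));  // spin around the pivot
m = glm::translate(m, glm::vec3(0.0f, 0.0f, -orbitRadius));          // sign picks facing vs. back
Matrix = m * scale_m;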

OpenGL Matrix Camera controls, local rotation not functioning properly

So I'm trying to figure out how to manually create a camera class that creates a local frame for camera transformations. I've created a player object based on OpenGL SuperBible's GLFrame class.
I've got keyboard keys mapped to the MoveUp, MoveRight and MoveForward functions, and the horizontal and vertical mouse movements are mapped to the xRot variable and the rotateLocalY function. This is done to create an FPS style camera.
The problem, however, is in RotateLocalY. Translation works fine and so does the vertical mouse movement, but the horizontal movement scales all my objects down or up in a weird way. Besides the scaling, the rotation also seems to restrict itself to 180 degrees and rotates around the world origin (0,0,0) instead of my player's local position.
I figured that the scaling had something to do with normalizing vectors, but the GLFrame class (which I used for reference) never normalized any vectors and that class works just fine. Normalizing most of my vectors only solved the scaling; all the other problems were still there, so I'm figuring one piece of code is causing all these problems?
I can't seem to figure out where the problem lies, I'll post all the appropriate code here and a screenshot to show the scaling.
Player object
Player::Player()
{
location[0] = 0.0f; location[1] = 0.0f; location[2] = 0.0f;
up[0] = 0.0f; up[1] = 1.0f; up[2] = 0.0f;
forward[0] = 0.0f; forward[1] = 0.0f; forward[2] = -1.0f;
}
// Does all the camera transformation. Should be called before scene rendering!
void Player::ApplyTransform()
{
M3DMatrix44f cameraMatrix;
this->getTransformationMatrix(cameraMatrix);
glRotatef(xAngle, 1.0f, 0.0f, 0.0f);
glMultMatrixf(cameraMatrix);
}
void Player::MoveForward(GLfloat delta)
{
location[0] += forward[0] * delta;
location[1] += forward[1] * delta;
location[2] += forward[2] * delta;
}
void Player::MoveUp(GLfloat delta)
{
location[0] += up[0] * delta;
location[1] += up[1] * delta;
location[2] += up[2] * delta;
}
void Player::MoveRight(GLfloat delta)
{
// Get X axis vector first via cross product
M3DVector3f xAxis;
m3dCrossProduct(xAxis, up, forward);
location[0] += xAxis[0] * delta;
location[1] += xAxis[1] * delta;
location[2] += xAxis[2] * delta;
}
void Player::RotateLocalY(GLfloat angle)
{
// Calculate a rotation matrix first
M3DMatrix44f rotationMatrix;
// Rotate around the up vector
m3dRotationMatrix44(rotationMatrix, angle, up[0], up[1], up[2]); // Use up vector to get correct rotations even with multiple rotations used.
// Get new forward vector out of the rotation matrix
M3DVector3f newForward;
newForward[0] = rotationMatrix[0] * forward[0] + rotationMatrix[4] * forward[1] + rotationMatrix[8] * forward[2];
newForward[1] = rotationMatrix[1] * forward[1] + rotationMatrix[5] * forward[1] + rotationMatrix[9] * forward[2];
newForward[2] = rotationMatrix[2] * forward[2] + rotationMatrix[6] * forward[1] + rotationMatrix[10] * forward[2];
m3dCopyVector3(forward, newForward);
}
void Player::getTransformationMatrix(M3DMatrix44f matrix)
{
// Get Z axis (Z axis is reversed with camera transformations)
M3DVector3f zAxis;
zAxis[0] = -forward[0];
zAxis[1] = -forward[1];
zAxis[2] = -forward[2];
// Get X axis
M3DVector3f xAxis;
m3dCrossProduct(xAxis, up, zAxis);
// Fill in X column in transformation matrix
m3dSetMatrixColumn44(matrix, xAxis, 0); // first column
matrix[3] = 0.0f; // Set 4th value to 0
// Fill in the Y column
m3dSetMatrixColumn44(matrix, up, 1); // 2nd column
matrix[7] = 0.0f;
// Fill in the Z column
m3dSetMatrixColumn44(matrix, zAxis, 2); // 3rd column
matrix[11] = 0.0f;
// Do the translation
M3DVector3f negativeLocation; // Required for camera transform (right handed OpenGL system. Looking down negative Z axis)
negativeLocation[0] = -location[0];
negativeLocation[1] = -location[1];
negativeLocation[2] = -location[2];
m3dSetMatrixColumn44(matrix, negativeLocation, 3); // 4th column
matrix[15] = 1.0f;
}
Player object header
class Player
{
public:
//////////////////////////////////////
// Variables
M3DVector3f location;
M3DVector3f up;
M3DVector3f forward;
GLfloat xAngle; // Used for FPS divided X angle rotation (can't combine yaw and pitch since we'll also get a Roll which we don't want for FPS)
/////////////////////////////////////
// Functions
Player();
void ApplyTransform();
void MoveForward(GLfloat delta);
void MoveUp(GLfloat delta);
void MoveRight(GLfloat delta);
void RotateLocalY(GLfloat angle); // Only need rotation on local axis for FPS camera style. Then a translation on world X axis. (done in apply transform)
private:
void getTransformationMatrix(M3DMatrix44f matrix);
};
Applying transformations
// Clear screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// Apply camera transforms
player.ApplyTransform();
// Set up lights
...
// Use shaders
...
// Render the scene
RenderScene();
// Do post rendering operations
glutSwapBuffers();
and mouse
float mouseSensitivity = 500.0f;
float horizontal = (width / 2) - mouseX;
float vertical = (height / 2) - mouseY;
horizontal /= mouseSensitivity;
vertical /= (mouseSensitivity / 25);
player.xAngle += -vertical;
player.RotateLocalY(horizontal);
glutWarpPointer((width / 2), (height / 2));
Honestly, I think you are taking a way too complicated approach to your problem. There are many ways to create a camera. My favorite is using an R3 vector and a quaternion, but you could also work with an R3 vector and two floats (pitch and yaw).
The setup with two angles is simple:
glLoadIdentity();
glTranslatef(-pos[0], -pos[1], -pos[2]);
glRotatef(-yaw, 0.0f, 0.0f, 1.0f);
glRotatef(-pitch, 0.0f, 1.0f, 0.0f);
The tricky part now is moving the camera. You must do something along the lines of:
float ds = speed * dt;
position += transform_y(pitch, transform_z(yaw, Vector3(ds, 0, 0)));
How to do the transforms, I would have to look that up, but you could do it by using a rotation matrix.
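A hedged sketch of those transforms, written with GLM as elsewhere on this page and following the same order as the pseudo code above (rotate about Z by yaw, then about Y by pitch; angles in radians). This is an assumption, not the answerer's code; position, speed and dt are placeholders.
// Sketch: move along the local +X direction after applying yaw (about Z) and pitch (about Y).
glm::vec3 position(0.0f);
float yaw = 0.0f, pitch = 0.0f, speed = 5.0f, dt = 0.016f;
float ds = speed * dt;
glm::mat4 R = glm::rotate(glm::mat4(1.0f), pitch, glm::vec3(0.0f, 1.0f, 0.0f))   // transform_y
            * glm::rotate(glm::mat4(1.0f), yaw,   glm::vec3(0.0f, 0.0f, 1.0f));  // transform_z
position += glm::vec3(R * glm::vec4(ds, 0.0f, 0.0f, 0.0f));                      // local +X displacement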
Rotation is trivial, just add or subtract from the pitch and yaw values.
I like using a quaternion for the orientation because it is general, and thus you have a camera (or any entity, really) that is independent of any movement scheme. In this case you have a camera that looks like so:
class Camera
{
public:
// lots of stuff omitted
void setup();
void move_local(Vector3f value);
void rotate(float dy, float dz);
private:
mx::Vector3f position;
mx::Quaternionf orientation;
};
Then the setup code shamelessly uses gluLookAt; you could make a transformation matrix out of it, but I never got it to work right.
void Camera::setup()
{
// projection related stuff
mx::Vector3f eye = position;
mx::Vector3f forward = mx::transform(orientation, mx::Vector3f(1, 0, 0));
mx::Vector3f center = eye + forward;
mx::Vector3f up = mx::transform(orientation, mx::Vector3f(0, 0, 1));
gluLookAt(eye(0), eye(1), eye(2), center(0), center(1), center(2), up(0), up(1), up(2));
}
Moving the camera in the local frame is also simple:
void Camera::move_local(Vector3f value)
{
position += mx::transform(orientation, value);
}
The rotation is also straightforward.
void Camera::rotate(float dy, float dz)
{
mx::Quaternionf o = orientation;
o = mx::axis_angle_to_quaternion(dz, mx::Vector3f(0, 0, 1)) * o;
o = o * mx::axis_angle_to_quaternion(dy, mx::Vector3f(0, 1, 0));
orientation = o;
}
(Shameless plug):
If you are asking what math library I use, it is mathex. I wrote it...