Implementing bounding sphere collision in OpenGL - C++

I understand the basic principles of bounding sphere collision; however, implementing it is confusing me a little.
Say I have two cubes defined in the arrays cube1[] and cube2[], with each array consisting of the GLfloats that make up the cube's triangles. How can I first calculate the center point of each cube, and how would I get the radius of the sphere around it?
What mathematics are needed to calculate this?
EDIT: To clarify my question, assume I have a cube defined using the following array:
GLfloat cube[] = {
    2.0f, 3.0f, -4.0f, // triangle 1, top right
    3.0f, 3.0f, -4.0f,
    2.0f, 2.0f, -4.0f, // bottom right
    3.0f, 3.0f, -4.0f, // triangle 2, back face top left
    3.0f, 2.0f, -4.0f, // bottom left
    2.0f, 2.0f, -4.0f,
    2.0f, 3.0f, -3.0f, // triangle 1, front face top left
    2.0f, 2.0f, -3.0f, // bottom left
    3.0f, 3.0f, -3.0f, // Bottom right
    3.0f, 3.0f, -3.0f, // triangle 2, front face
    2.0f, 2.0f, -3.0f,
    3.0f, 2.0f, -3.0f, // Bottom right
    2.0f, 3.0f, -3.0f, // triangle 1, top face
    3.0f, 3.0f, -3.0f,
    2.0f, 3.0f, -4.0f,
    3.0f, 3.0f, -4.0f, // triangle 2, top face
    2.0f, 3.0f, -4.0f,
    3.0f, 3.0f, -3.0f,
    2.0f, 2.0f, -3.0f, // triangle 1, bottom face
    2.0f, 2.0f, -4.0f,
    3.0f, 2.0f, -3.0f,
    3.0f, 2.0f, -4.0f, // triangle 2, bottom face
    3.0f, 2.0f, -3.0f, // Bottom Right.
    2.0f, 2.0f, -4.0f,
    2.0f, 2.0f, -4.0f, // triangle 1, left face
    2.0f, 2.0f, -3.0f,
    2.0f, 3.0f, -4.0f,
    2.0f, 3.0f, -4.0f, // triangle 2, left face
    2.0f, 2.0f, -3.0f,
    2.0f, 3.0f, -3.0f,
    3.0f, 2.0f, -4.0f, // triangle 1, right face
    3.0f, 3.0f, -4.0f,
    3.0f, 2.0f, -3.0f,
    3.0f, 3.0f, -4.0f, // triangle 2, right face
    3.0f, 3.0f, -3.0f,
    3.0f, 2.0f, -3.0f,
};
Given this cube, I need to get the center point and keep track of it every time the cube translates. I believe I have done so, but feedback on whether this is correct would also be appreciated:
// Calculate initial center of the shape
glm::vec3 corner1 = glm::vec3(2.0f, 3.0f, -4.0f);
glm::vec3 corner2 = glm::vec3(2.0f, 2.0f, -4.0f);
glm::vec3 corner3 = glm::vec3(3.0f, 3.0f, -4.0f);
glm::vec3 corner4 = glm::vec3(3.0f, 2.0f, -4.0f);
glm::vec3 corner5 = glm::vec3(2.0f, 3.0f, -3.0f);
glm::vec3 corner6 = glm::vec3(2.0f, 2.0f, -3.0f);
glm::vec3 corner7 = glm::vec3(3.0f, 3.0f, -3.0f);
glm::vec3 corner8 = glm::vec3(3.0f, 2.0f, -3.0f);
GLfloat x = (corner1.x + corner2.x + corner3.x + corner4.x + corner5.x + corner6.x+ corner7.x + corner8.x)/8;
GLfloat y = (corner1.y + corner2.y + corner3.y + corner4.y + corner5.y + corner6.y+ corner7.y + corner8.y)/8;
GLfloat z = (corner1.z + corner2.z + corner3.z + corner4.z + corner5.z + corner6.z+ corner7.z + corner8.z)/8;
center = glm::vec4(x, y, z, 1.0f);
Translation is kept in check with the following function:
void Cube::Translate(double x, double y, double z)
{
    // Translation matrix for cube.
    glm::mat4 cubeTransMatrix = glm::mat4();
    cubeTransMatrix = glm::translate(cubeTransMatrix, glm::vec3(x, y, z));
    //center = cubeTransMatrix * center;

    //Move the cube
    for (int i = 0; i < sizeof(cube) / sizeof(GLfloat); i += 3) {
        glm::vec4 vector = glm::vec4(cube[i], cube[i+1], cube[i+2], 1.0f);
        glm::vec4 translate = cubeTransMatrix * vector;
        glm::vec4 translateCenter = cubeTransMatrix * center;

        center.x = translateCenter[0];
        center.y = translateCenter[1];
        center.z = translateCenter[2];

        cube[i]   = translate[0];
        cube[i+1] = translate[1];
        cube[i+2] = translate[2];
    }
}

The center point of a shape can be calculated in many ways, depending on what you want to consider the "center." For a cube, however, the center is generally taken to be the mean of its corner points, which is relatively simple to compute: just add up the corners' position vectors and divide by 8. Depending on the exact mesh of your cube you may have more vertices than that (a triangle list like the one above repeats each corner several times), but the eight unique corners are all you need.
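As a minimal sketch of that idea, written against the interleaved x,y,z array from the question (the function name is just illustrative): taking the midpoint of the per-axis minimum and maximum gives the same point as averaging the eight unique corners of an axis-aligned cube, without having to worry about how many times each corner appears in the triangle list.
#include <cstddef>
#include <GL/gl.h>       // GLfloat (it is just a float)
#include <glm/glm.hpp>   // glm::vec3, glm::min, glm::max
glm::vec3 ComputeCenter(const GLfloat* verts, std::size_t floatCount)
{
    glm::vec3 minV(verts[0], verts[1], verts[2]);
    glm::vec3 maxV = minV;
    for (std::size_t i = 3; i < floatCount; i += 3)
    {
        glm::vec3 v(verts[i], verts[i + 1], verts[i + 2]);
        minV = glm::min(minV, v); // component-wise minimum
        maxV = glm::max(maxV, v); // component-wise maximum
    }
    return (minV + maxV) * 0.5f;  // midpoint of the bounding box
}
// Usage: glm::vec3 center = ComputeCenter(cube, sizeof(cube) / sizeof(GLfloat));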
If you don't have access to the vertices themselves (you loaded a mesh, or are using the default cube built into GLUT, or something), you will need to keep track of the transformations applied to that cube. I might suggest using a "local" position vector or a local transformation matrix for each cube.
With OpenGL, matrices are column-major, so the top three values of the right-most column are your location in world coordinates after any global transformations have taken place.
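If you go the matrix route with GLM, that column is directly accessible; a small sketch, assuming model holds the cube's accumulated transformation:
#include <glm/glm.hpp>
// glm matrices are column-major, so model[3] is the fourth (right-most) column;
// for an affine transform its first three components are the translation part.
glm::vec3 WorldCenter(const glm::mat4& model, const glm::vec3& localCenter)
{
    return glm::vec3(model * glm::vec4(localCenter, 1.0f));
}
// If the local center is the origin, this reduces to:
//     glm::vec3 worldPos = glm::vec3(model[3]);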
Detecting a collision is almost easier (once you get past the whole "predicting when the collision is going to take place" part, which I wouldn't worry about for your first implementation, if I were you). Spheres are simple shapes, and detecting whether two spheres intersect is even simpler. All you need to do is find the squared distance between the two sphere centers and compare it to the square of the sum of their radii.
If the squared distance is less than the squared sum of the radii, the spheres intersect. Otherwise, they do not.
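The snippets below assume a small collider type along these lines (the struct and its field names are placeholders for however you store your spheres):
#include <glm/glm.hpp>
// Hypothetical sphere collider used by the snippets below.
struct Sphere
{
    glm::vec3 position; // world-space center
    float     radius;
};
Sphere sphere0, sphere1;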
Just to illustrate how simple this calculation really is, I'll show you here:
float radiusSum    = sphere0.radius + sphere1.radius;
float radiusSumSqr = radiusSum * radiusSum;
float distX = sphere0.position.x - sphere1.position.x;
float distY = sphere0.position.y - sphere1.position.y;
float distZ = sphere0.position.z - sphere1.position.z;
// Since we already need to square these, we won't need to take the absolute value
// to accurately compare them to the radii
distX *= distX;
distY *= distY;
distZ *= distZ;
float sqrDist = (distX + distY + distZ);
if (sqrDist < radiusSumSqr)
{
    // They intersect
}
else
{
    // They do not intersect
}
Once you've detected the collision, assuming you want the spheres to behave as rigid-body colliders, moving them away from one another is also quite simple: you just take the distance by which the two spheres overlap and push them apart by it. Since the resolution step needs the per-axis differences, we should modify the previous code a bit so that distX, distY and distZ aren't overwritten when we square them:
// Since we already need to square these, we won't need to take the absolute value
// to accurately compare them to the radii
float distSqrX = distX * distX;
float distSqrY = distY * distY;
float distSqrZ = distZ * distZ;
float sqrDist = (distSqrX+distSqrY+distSqrZ);
Once we've done that, we can get to calculating the rest of the resolution for this collision. We'll be doing it in a very simple way (assuming neither object has mass, and there is no impact to calculate).
float totalRadius = sphere0.radius + sphere1.radius;// the sum of the two spheres' radii
float dist = sqrt(sqrDist);// the actual distance between the two shapes' centers
float minMovement = (totalRadius - dist);// the difference between the total radius and the actual distance tells us how far they intersect.
minMovement /= dist;// Divide by the distance to "scale" this movement so we can "scale" our distance vector (distX, distY, and distZ)
float mvmtX = fabsf(distX) * minMovement * 0.5f;// the minimum movement on the x-axis to resolve the collision (magnitude only; the if/else below picks the direction)
float mvmtY = fabsf(distY) * minMovement * 0.5f;// the minimum movement on the y-axis to resolve the collision
float mvmtZ = fabsf(distZ) * minMovement * 0.5f;// the minimum movement on the z-axis to resolve the collision
// For the sake of simplicity, we'll just have them "pop" out of each other, and won't
// be doing any interpolation to "smooth" the spheres' interaction.
//
// However, to ensure that we move the correct collider in the correct direction, we
// need to see which one is on which side of the other, along the three axes.
if (sphere0.position.x < sphere1.position.x)
{
    sphere0.position.x -= mvmtX;
    sphere1.position.x += mvmtX;
}
else
{
    sphere0.position.x += mvmtX;
    sphere1.position.x -= mvmtX;
}
// Repeat this process for the other two axes
if (sphere0.position.y < sphere1.position.y)
{
    sphere0.position.y -= mvmtY;
    sphere1.position.y += mvmtY;
}
else
{
    sphere0.position.y += mvmtY;
    sphere1.position.y -= mvmtY;
}
if (sphere0.position.z < sphere1.position.z)
{
    sphere0.position.z -= mvmtZ;
    sphere1.position.z += mvmtZ;
}
else
{
    sphere0.position.z += mvmtZ;
    sphere1.position.z -= mvmtZ;
}
Lastly, calculating the proper radius of the sphere to get the desired effect for collision detection about a cube can be done in one of three ways:
Using a circumscribed sphere (the sphere "touches" the corners of the cube), the radius is sqrt(3) * edgeLength * 0.5, i.e. half the cube's space diagonal. You will get an "over-reactive" collision detection system, in that it will report collisions reasonably far outside the volume of the cube, because the radius has to reach all the way out to the corners of the box. The largest error is at the center of each face of the cube, where the sphere extends past the cube by (sqrt(3) - 1) / 2 (roughly 0.37) times the edge length.
The second method is to use an inscribed sphere, where the sphere is tangent to the faces of the cube (it "touches" the center of each face) and has a radius of edgeLength * 0.5. Again there will be error, and this option tends to feel slightly worse, since it is "under-reactive" at the 8 corners rather than "over-reactive" at the 6 faces. The distance by which each corner is missed (the gap between a corner of the cube and the closest point on the sphere's surface) is the same as the overshoot of the previous option, roughly (sqrt(3) - 1) / 2, or 0.37, times the edge length.
The last method, and generally the most accurate overall, is to calculate the sphere so that it is tangent to the edges of the cube. Its radius is edgeLength / sqrt(2). This sphere "touches" the midpoint of each edge of the cube, overestimates on every face and underestimates at every corner, but the error is considerably smaller and usually more tolerable: the overshoot at a face center is (1/sqrt(2) - 1/2), about 0.21, times the edge length, and the gap at a corner is (sqrt(3)/2 - 1/sqrt(2)), about 0.16, times the edge length, roughly half the worst-case error of either of the other two choices.
The choice is yours as to which best suits your needs. The first has the best "corner" detection but the worst "face" detection, the second has the best "face" detection and the worst "corner" detection, and the third is more or less the average of the two, giving it the most reliable accuracy if all cases are a possibility.
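For reference, a small sketch that computes the three radii from the cube's edge length (for the cube in the question the edge length is 1.0, and the center computed earlier is the sphere's center in all three cases):
#include <cmath>
// The three bounding-sphere radius choices described above, for a cube
// with the given edge length.
float CircumscribedRadius(float edge) { return 0.5f * std::sqrt(3.0f) * edge; } // touches the corners
float EdgeTangentRadius(float edge)   { return edge / std::sqrt(2.0f); }        // touches the edge midpoints
float InscribedRadius(float edge)     { return 0.5f * edge; }                   // touches the face centers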

Related

From perspective to orthographic projections

I'm trying to change my camera projection from perspective to orthographic.
At the moment my code is working fine with the perspective projection
m_prespective = glm::perspective(70.0f, (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_prespective * glm::lookAt(m_position, m_forward, m_up);
But as soon as I change it to an orthographic projection I can't see my mesh anymore.
m_ortho = glm::ortho(0.0f, (float)DISPLAY_WIDTH, (float)DISPLAY_HEIGHT,5.0f, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_ortho * glm::lookAt(m_position, m_forward, m_up);
I don't understand what I'm doing wrong.
In the perspective projection the term (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT evaluates to the picture's aspect ratio. This number is going to be close to 1. The left and right clip plane distances at the near plane of a perspective projection are aspect * near_distance * tan(fovy / 2). More interesting, though, is the left-right extent of the view at the viewing distance, which in your case is abs(m_position.z) = mesh.radius.
Carrying this over to the orthographic projection, the left, right, top and bottom clip plane distances should be of the same order of magnitude; given that aspect is close to 1, the values for left, right, bottom and top should be close to abs(mesh.radius). The resolution of the display in pixels is totally irrelevant, except for the aspect ratio.
Furthermore, when using a perspective projection, the value for near should be chosen as large as possible while still keeping all desired geometry visible. Doing otherwise wastes precious depth buffer resolution.
float const view_distance = mesh.radius + 1;
float const aspect = (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT;
switch( typeof_projection ) {
case perspective:
    m_projection = glm::perspective(70.0f, aspect, 1.f, 1000.0f);
    break;

case ortho:
    m_projection = glm::ortho(
        -aspect * view_distance,
         aspect * view_distance,
        -view_distance,
         view_distance,
        -1000.0f, 1000.0f );
    break;
}
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -view_distance);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_projection * glm::lookAt(m_position, m_forward, m_up);

Projection Matrix only goes from 0 to 1

I am working on my projection matrix in C++.
If I use an orthographic matrix, the axis range goes from 0 to my screen size.
Now if I use my perspective matrix, the axis range goes from 0 to 1.
This is not good if I want to position my objects. I could divide their movement by the width and height, but I think there should be a better solution, just like with an orthographic matrix.
T aspect = (right - left) / (top - bottom);
T xScale = 1.0f / tan(fov / 2.0f);
T yScale = xScale / aspect;
return Matrix<T>(
yScale, 0.0f, 0.0f, 0.0f,
0.0f, xScale, 0.0f, 0.0f,
(left + right) / (left - right), (top + bottom) / (bottom - top), zFar / (zNear - zFar), -1.0f,
0.0f, 0.0f, (zNear * zFar) / (zNear - zFar), 0.0f);
That's my perspective matrix.
T farNear = zFar - zNear;
return Matrix<T>(
2.0f / (right - left), 0.0f, 0.0f, 0.0f,
0.0f, 2.0f / (top - bottom), 0.0f, 0.0f,
0.0f, 0.0f, 1.0f / farNear, 0.0f,
(left + right) / (left - right), (top + bottom) / (bottom - top), -zNear / farNear, 1.0f);
That's my orthographic matrix calculation.
So how can I fix it so that when I use my perspective matrix, the axis range goes from 0 to my screen size instead of 0 to 1?
This range you mention does not work that way in a perspective projection.
To figure out the width and height of your viewing volume, you need to know your field of view (in GL we typically define this using a vertical angle plus an aspect ratio) and the distance from the eye; the visible width and height grow with distance down the z-axis.
In an orthographic projection, the viewing volume has the same width and height no matter how far or close you are to the near clip plane. In this sort of projection, a point (x,y,...) at z=1.0 is the same distance from the edge of the screen as the same point (x,y,...) at z=100.0, and thus you can establish a single X and Y range for all points.
With a perspective projection as discussed here, the farther a point is from the near plane, the more pushed toward the center of the screen it gets because the visible coordinate space expands.
The only way you are going to have a single range of visible X and Y coordinates is if you keep Z constant. But if you keep Z constant, then why do you want a perspective projection in the first place?
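If you do want to know how large the visible region is at some chosen depth (for example, to position objects in screen-sized units at that depth), you can compute it from the field of view. A small sketch, assuming a symmetric frustum with the vertical field of view fovY given in radians:
#include <cmath>
// Visible width/height of a symmetric perspective frustum at distance z
// in front of the eye (fovY in radians, aspect = width / height).
void VisibleExtentAtDepth(float fovY, float aspect, float z,
                          float& outWidth, float& outHeight)
{
    outHeight = 2.0f * z * std::tan(fovY * 0.5f);
    outWidth  = outHeight * aspect;
}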

Rotate the vertexes of a cube using a rotation matrix

I'm trying to rotate a cube's vertexes with a rotation matrix but whenever I run the program the cube just disappears.
I'm using a rotation matrix that was given to us in a lecture that rotates the cube's x coordinates.
double moveCubeX = 0;
float xRotationMatrix[9] = {1, 0, 0,
0, cos(moveCubeX), sin(moveCubeX),
0, -sin(moveCubeX), cos(moveCubeX)
};
I'm adding to the moveCubeX variable with the 't' key on my keyboard
case 't':
moveCubeX += 5;
break;
And to do the matrix multiplication I'm using
glMultMatrixf();
However when I add this into my code when running it the cube has just disappeared. This is where I add in the glMultMatrixf() function.
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(pan, 0, -g_fViewDistance,
              pan, 0, -1,
              0, 1, 0);
    glRotatef(rotate_x, 1.0f, 0.0f, 0.0f); //Rotate the camera
    glRotatef(rotate_y, 0.0f, 1.0f, 0.0f); //Rotate the camera
    glMultMatrixf(xRotationMatrix);
I'm struggling to see where it is I have gone wrong.
OpenGL uses matrices of size 4x4. Therefore, your rotation matrix needs to be expanded to 4 rows and 4 columns, for a total of 16 elements:
float xRotationMatrix[16] = {1.0f, 0.0f, 0.0f, 0.0f,
0.0f, cos(moveCubeX), sin(moveCubeX), 0.0f,
0.0f, -sin(moveCubeX), cos(moveCubeX), 0.0f,
0.0f, 0.0f, 0.0f, 1.0f};
You will also need to be careful about the units for your angles. Since you add 5 to your angle every time the user presses a key, it looks like you're thinking in degrees. The standard cos() and sin() functions in C/C++ libraries expect the angle to be in radians.
In addition, it looks like your matrix is defined at a global level. If you do this, the elements will only be evaluated once at program startup. You will either have to make the matrix definition local to display(), so that the matrix is re-evaluated each time you draw, or update the matrix every time the angle changes.
For the second option, you can update only the matrix elements that depend on the angle every time the angle changes. In the function that modifies moveCubeX, add:
xRotationMatrix[5] = cos(moveCubeX);
xRotationMatrix[6] = sin(moveCubeX);
xRotationMatrix[9] = -sin(moveCubeX);
xRotationMatrix[10] = cos(moveCubeX);
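Putting those last two points together, a sketch of such an update helper (the function name is made up) that also converts the angle from degrees to radians:
#include <math.h>
// Call this whenever moveCubeX changes: it refreshes the four angle-dependent
// elements of the column-major 4x4 matrix and converts the angle from degrees
// (what the key handler accumulates) to radians (what cos()/sin() expect).
void updateXRotationMatrix(float xRotationMatrix[16], double angleDegrees)
{
    const double kPi = 3.14159265358979323846;
    double a = angleDegrees * kPi / 180.0;
    xRotationMatrix[5]  = (float) cos(a);
    xRotationMatrix[6]  = (float) sin(a);
    xRotationMatrix[9]  = (float)-sin(a);
    xRotationMatrix[10] = (float) cos(a);
}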

scaling different objects using mouse wheel

I use glfw and glm.
If I scroll up, I want to make the object bigger; when I scroll down, I want to make it smaller.
How do I do this?
I use this function to handle mouse scrolling.
static void mousescroll(GLFWwindow* window, double xoffset, double yoffset)
{
    if (yoffset > 0) {
        scaler += yoffset * 0.01; //make it bigger than current size
        world = glm::scale(world, glm::vec3(scaler, scaler, scaler));
    }
    else {
        scaler -= yoffset * 0.01; //make it smaller than current size
        world = glm::scale(world, glm::vec3(scaler, scaler, scaler));
    }
}
By default scaler is 1.0.
I can describe the problem like this: there is an object. If I scroll up, the value of scaler becomes 1.01, so the object becomes 1.01 times bigger. When I scroll up again, as far as I understand it, the object becomes 1.02 times bigger than its previous size (which was already 1.01 times the original)! But I want it to be 1.02 times the size of the original.
How can I solve this problem?
The world matrix looks like this:
glm::mat4 world = glm::mat4(
    glm::vec4(1.0f, 0.0f, 0.0f, 0.0f),
    glm::vec4(0.0f, 1.0f, 0.0f, 0.0f),
    glm::vec4(0.0f, 0.0f, 1.0f, 0.0f),
    glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
And I calculate the vertex positions in the shader:
gl_Position = world * vec4(Position, 1.0);
But I want it to be 1.02 times the size of the original.
Then reset the transform each time instead of accumulating the scales:
world = glm::scale(glm::mat4(1.0f), glm::vec3(scaler, scaler, scaler));
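In the context of the question's scroll callback that could look something like this (a sketch, assuming scaler and world are the same variables as above):
static void mousescroll(GLFWwindow* window, double xoffset, double yoffset)
{
    // yoffset is signed (positive when scrolling up), so one branch handles both directions.
    scaler += yoffset * 0.01;
    if (scaler < 0.01)
        scaler = 0.01; // keep the scale strictly positive

    // Rebuild the matrix from identity each time, so the factors don't compound.
    world = glm::scale(glm::mat4(1.0f), glm::vec3((float)scaler));
}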

Is it possible to rotate an object around its own axis and not around the base coordinate's axis?

I am following the OpenGL ES rotation examples from Google to rotate a simple square (not a cube) in my Android app, for example this code:
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
It works fine if you only rotate around one axis.
But if you rotate around one axis and after that rotate around another axis, the rotation is not right: the rotation is done around the axes of the base (global) coordinate system and not around the square's own coordinate system.
EDIT with code for Shahbaz
public void onDrawFrame(GL10 gl) {
    //Clear the screen and the depth buffer
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();
    //Drawing
    gl.glTranslatef(0.0f, 0.0f, z); //Move z units into the screen
    gl.glScalef(0.8f, 0.8f, 0.8f); //Scale it down so it fits on the screen
    //Rotate around the axes.
    gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
    gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
    gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
    //Draw the square
    square.draw(gl);
    //Rotation factors.
    xrot += xspeed;
    yrot += yspeed;
}
Draw of the square:
public void draw(GL10 gl) {
    gl.glFrontFace(GL10.GL_CCW);
    //gl.glEnable(GL10.GL_BLEND);
    //Bind our only previously generated texture in this case
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    //Point to our vertex buffer
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    //Enable vertex buffer
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //Draw the vertices as triangle strip
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    //Disable the client state before leaving
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //gl.glDisable(GL10.GL_BLEND);
}
VERTEX BUFFER VALUES:
private FloatBuffer vertexBuffer;
private float vertices[] =
{
-1.0f, -1.0f, 0.0f, //Bottom Left
1.0f, -1.0f, 0.0f, //Bottom Right
-1.0f, 1.0f, 0.0f, //Top Left
1.0f, 1.0f, 0.0f //Top Right
};
.
.
.
public Square(int resourceId) {
ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
vertexBuffer = byteBuf.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
.
.
.
The first thing you should know is that in OpenGL, transformation matrices are multiplied from the right. What does that mean? It means that the last transformation you write gets applied to the object first.
So let's look at your code:
gl.glScalef(0.8f, 0.8f, 0.8f);
gl.glTranslatef(0.0f, 0.0f, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
gl.glTranslatef(0.0f, 0.0f, z);
square.draw(gl);
This means that, first, the object is moved to (0.0f, 0.0f, z). Then it is rotated around Z, then around Y, then around X, then moved by (0.0f, 0.0f, -z) and finally scaled.
You got the scaling right. You put it first, so it gets applied last. You also got
gl.glTranslatef(0.0f, 0.0f, -z);
in the right place, because you first want to rotate the object then move it. Note that, when you rotate an object, it ALWAYS rotates around the base coordinate, that is (0, 0, 0). If you want to rotate the object around its own axes, the object itself should be in (0, 0, 0).
So, right before you write
square.draw(gl);
you should have the rotations. The way your code is right now, you move the object far (by writing
gl.glTranslatef(0.0f, 0.0f, z);
before square.draw(gl);) and THEN rotate which messes things up. Removing that line gets you much closer to what you need. So, your code will look like this:
gl.glScalef(0.8f, 0.8f, 0.8f);
gl.glTranslatef(0.0f, 0.0f, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
square.draw(gl);
Now the square should rotate in place.
Note: After you run this, you will see that the rotation of the square is rather awkward. For example, if you rotate around z by 90 degrees, then rotating around x will look like rotating around y, because of the previous rotation. For now this may be OK for you, but if you want it to look really good, you should do it like this:
Imagine, you are not rotating the object, but rotating a camera around the object, looking at the object. By changing xrot, yrot and zrot, you are moving the camera on a sphere around the object. Then, once finding out the location of the camera, you could either do the math and get the correct parameters to call glRotatef and glTranslatef or, use gluLookAt.
This requires some understanding of math and 3d imagination. So if you don't get it right the first day, don't get frustrated.
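For the orbit-camera idea, a rough desktop-GL sketch (names are illustrative; on Android you would use GLU.gluLookAt(gl, ...) instead): the eye sits on a sphere of radius r around the object, and yaw/pitch are the angles you accumulate instead of applying glRotatef to the object.
#include <math.h>
#include <GL/glu.h>   // gluLookAt
// Place the camera on a sphere of radius r around the object at (cx, cy, cz)
// and aim it at the object. yaw and pitch are in radians.
void orbitCamera(float cx, float cy, float cz, float r, float yaw, float pitch)
{
    float eyeX = cx + r * cosf(pitch) * sinf(yaw);
    float eyeY = cy + r * sinf(pitch);
    float eyeZ = cz + r * cosf(pitch) * cosf(yaw);
    gluLookAt(eyeX, eyeY, eyeZ,   // camera position
              cx,   cy,   cz,     // look at the object's center
              0.0,  1.0,  0.0);   // up vector
}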
Edit: Here is the idea of how to rotate around the rotated object's own coordinate axes.
First, let's say you do the rotation around z. Therefore you have
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
Now, the global Y unit vector is obviously (0, 1, 0), but the object has rotated and thus its Y unit vector has also rotated. This vector is given by:
[cos(zrot)  -sin(zrot)  0]   [0]   [-sin(zrot)]
[sin(zrot)   cos(zrot)  0] x [1] = [ cos(zrot)]
[0           0          1]   [0]   [ 0        ]
Therefore, your rotation around y, should be like this:
gl.glRotatef(yrot, -sin(zrot), cos(zrot), 0.0f); //Y-object
You can try this so far (disable rotation around x) and see that it looks like the way you want it (I did it, and it worked).
Now for x, it gets very complicated. Why? Because the X unit vector is not only rotated around the z axis first, but is afterwards also rotated around the (-sin(zrot), cos(zrot), 0) vector.
So the X unit vector in the object's coordinates is now
                   [cos(zrot)  -sin(zrot)  0]   [1]                      [cos(zrot)]
Rot_around_new_y * [sin(zrot)   cos(zrot)  0] x [0] = Rot_around_new_y * [sin(zrot)]
                   [0           0          1]   [0]                      [0        ]
Let's call this vector (u_x, u_y, u_z). Then your final rotation (the one around X), would be like this:
gl.glRotatef(xrot, u_x, u_y, u_z); //X-object
So! How to find the matrix Rot_around_new_y? See here about rotation around arbitrary axis. Go to section 6.2, the first matrix, get the 3*3 sub matrix rotation (that is ignore the rightmost column which is related to translation) and put (-sin(zrot), cos(zrot), 0) as the (u, v, w) axis and theta as yrot.
I won't do the math here because it requires a lot of effort and eventually I'm going to make a mistake somewhere around there anyway. However, if you are very careful and ready to double check them a couple of times, you could write it down and do the matrix multiplications.
Additional note: one way to calculate Rot_around_new_y is to use quaternions. A quaternion is a 4d vector [x*s, y*s, z*s, c] that corresponds to a rotation around the axis [x, y, z] by an angle theta, where s = sin(theta/2) and c = cos(theta/2) (note the half angle).
This [x, y, z] is our "new Y", i.e. [-sin(zrot), cos(zrot), 0], and the angle is yrot. The quaternion for the rotation around the new Y is thus:
q_Y = [-sin(zrot)*sin(yrot/2), cos(zrot)*sin(yrot/2), 0, cos(yrot/2)]
Finally, if you have a unit quaternion [a, b, c, d], the corresponding rotation matrix is:
[1 - 2b^2 - 2c^2    2ab - 2cd          2ac + 2bd      ]
[2ab + 2cd          1 - 2a^2 - 2c^2    2bc - 2ad      ]
[2ac - 2bd          2bc + 2ad          1 - 2a^2 - 2b^2]
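If you are on the C++ side with GLM (like the main question above), you don't have to expand any of this by hand; a sketch of the same idea with GLM's quaternion helpers (angles in radians; older GLM versions used degrees):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>   // glm::quat, glm::angleAxis, glm::mat4_cast
// Rotate around Z first, then around the object's (already rotated) Y axis.
glm::mat4 RotateAroundRotatedY(float zrot, float yrot)
{
    glm::quat qz   = glm::angleAxis(zrot, glm::vec3(0.0f, 0.0f, 1.0f));
    glm::vec3 newY = qz * glm::vec3(0.0f, 1.0f, 0.0f); // the object's Y axis after the Z rotation
    glm::quat qy   = glm::angleAxis(yrot, newY);
    return glm::mat4_cast(qy * qz); // quaternions compose right-to-left, like matrices
}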
I know next to nothing about OpenGL, but I imagine translating to 0, rotating and then translating back should work...
gl.glTranslatef(-x, -y, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
gl.glTranslatef(x, y, z);
I think you need quaternions to do what you want to do. Using rotations about the coordinate axes works some of the time, but ultimately suffers from "gimbal lock". This happens when the rotation you want passes close by a coordinate axis and creates an unwanted gyration as the rotation required around the axis approaches 180 degrees.
A quaternion is a mathematical object that represents a rotation about an arbitrary axis defined as a 3D vector. To use it in openGL you generate a matrix from the quaternion and multiply it by your modelview matrix. This will transform your world coordinates so that the square is rotated.
You can get more info here http://content.gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation
I have a Quaternion C++ class I could send you if it helps.
Try adding
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
before the render code for a single cube that's being rotated, and then
glPopMatrix();
after the rendering is done. It will give you an extra view matrix to work with without affecting your primary modelview matrix.
Essentially what this does is save the current modelview matrix, let you apply transformations just for this object, render, and then restore the matrix afterwards.
I'm using OpenTK; nevertheless, it's the same.
First move the object by half of its dimensions, then rotate, and then move it back:
model = Matrix4.CreateTranslation(new Vector3(-width/2, -height / 2, -depth / 2)) *
Matrix4.CreateRotationX(rotationX) *
Matrix4.CreateRotationY(rotationY) *
Matrix4.CreateRotationZ(rotationZ) *
Matrix4.CreateTranslation(new Vector3(width / 2, height / 2, depth / 2));
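For reference against the GLM-based code in the main question, the same center-rotate-restore pattern might look like this (a sketch; center is the object's center point and the angles are in radians):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::translate, glm::rotate
// Move the object so its center sits at the origin, rotate, then move it back.
// GLM applies these right-to-left, so the -center translation happens first.
glm::mat4 RotateAboutCenter(const glm::vec3& center, float xrot, float yrot, float zrot)
{
    glm::mat4 m(1.0f);
    m = glm::translate(m, center);
    m = glm::rotate(m, zrot, glm::vec3(0.0f, 0.0f, 1.0f));
    m = glm::rotate(m, yrot, glm::vec3(0.0f, 1.0f, 0.0f));
    m = glm::rotate(m, xrot, glm::vec3(1.0f, 0.0f, 0.0f));
    m = glm::translate(m, -center);
    return m;
}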