I'm trying to rotate a cube's vertices with a rotation matrix, but whenever I run the program the cube just disappears.
I'm using a rotation matrix that was given to us in a lecture; it rotates the cube around the x axis.
double moveCubeX = 0;
float xRotationMatrix[9] = {1, 0, 0,
                            0, cos(moveCubeX), sin(moveCubeX),
                            0, -sin(moveCubeX), cos(moveCubeX)
                           };
I'm adding to the moveCubeX variable with the 't' key on my keyboard
case 't':
    moveCubeX += 5;
    break;
And to do the matrix multiplication I'm using
glMultMatrixf();
However, when I add this call to my code, the cube just disappears when I run it. This is where I call glMultMatrixf():
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(pan, 0, -g_fViewDistance,
              pan, 0, -1,
              0, 1, 0);
    glRotatef(rotate_x, 1.0f, 0.0f, 0.0f); //Rotate the camera
    glRotatef(rotate_y, 0.0f, 1.0f, 0.0f); //Rotate the camera
    glMultMatrixf(xRotationMatrix);
I'm struggling to see where it is I have gone wrong.
OpenGL uses matrices of size 4x4. Therefore, your rotation matrix needs to be expanded to 4 rows and 4 columns, for a total of 16 elements:
float xRotationMatrix[16] = {1.0f, 0.0f, 0.0f, 0.0f,
0.0f, cos(moveCubeX), sin(moveCubeX), 0.0f,
0.0f, -sin(moveCubeX), cos(moveCubeX), 0.0f,
0.0f, 0.0f, 0.0f, 1.0f};
You will also need to be careful about the units for your angles. Since you add 5 to your angle every time the user presses a key, it looks like you're thinking in degrees, but the standard cos() and sin() functions in C/C++ expect the angle in radians, so you have to convert (multiply by M_PI / 180.0) before calling them.
In addition, it looks like your matrix is defined at global scope. If you do this, the elements will only be evaluated once, at program startup. You will either have to make the matrix definition local to display(), so that the matrix is re-evaluated each time you draw, or update the matrix every time the angle changes.
For the second option, you can update only the matrix elements that depend on the angle every time the angle changes. In the function that modifies moveCubeX, add:
xRotationMatrix[5] = cos(moveCubeX);
xRotationMatrix[6] = sin(moveCubeX);
xRotationMatrix[9] = -sin(moveCubeX);
xRotationMatrix[10] = cos(moveCubeX);
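Putting both fixes together, a minimal sketch of the 't' handler could look like this (assuming a GLUT keyboard callback and the globals shown above; glutPostRedisplay() is only needed if you don't already redraw continuously):
case 't':
    moveCubeX += 5.0 * M_PI / 180.0;   // step by 5 degrees, stored in radians
    xRotationMatrix[5]  =  cos(moveCubeX);
    xRotationMatrix[6]  =  sin(moveCubeX);
    xRotationMatrix[9]  = -sin(moveCubeX);
    xRotationMatrix[10] =  cos(moveCubeX);
    glutPostRedisplay();               // ask GLUT to redraw with the new matrix
    break;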
Related
For practice I am setting up a 2d/orthographic rendering pipeline in openGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2d shapes, and I cannot seem to figure out why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers; the most relevant one (2D opengl rotation causes sprite distortion) indicates that the problem was an incorrect ordering of transformations, but for now I am using just a view matrix and a projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(1.0); //(The model is just the identity matrix.)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
-wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
-wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
0, 1, 2, // first Triangle
2, 3, 0, // second Triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is correct to do.)
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use width and height instead of 1s, but this seems to break the system or display nothing.
I rotate with a static method that modifies a struct containing a glm quaternion (dividing time by 1000 to get seconds):
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: glm::angleAxis(angle, glm::vec3(x, y, z) * orientation)
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates about its center (0, 0), as I wanted, but its width and height distort, which means that I didn't set something up correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on stack overflow or elsewhere that might help debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question is--should I not set my coordinates to 1/-1 in the context of a 2d game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to glm::ortho by width and height, then how do I transform coordinates so that v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinate system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Account for the aspect ratio of the window you are displaying on in the first two parameters of glm::ortho. With a symmetric -1 to 1 range on both axes, anything you draw gets stretched to the proportions of the viewport, which is exactly the distortion you see when rotating; widening the x range by the aspect ratio compensates for that.
GLfloat aspectRatio = (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT; // cast so the division isn't done in integers
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);
I followed a guide to draw a Lorenz system in 2D.
I now want to extend my project and switch from 2D to 3D. As far as I know, I have to substitute the gluOrtho2D call with either gluPerspective or glFrustum. Unfortunately, whatever I try doesn't work.
This is my initialization code:
// set the background color
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
/* set the foreground (pen) color
glColor4f(1.0f, 1.0f, 1.0f, 1.0f); */
// set the foreground (pen) color
glColor4f(1.0f, 1.0f, 1.0f, 0.02f);
// enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// enable point smoothing
glEnable(GL_POINT_SMOOTH);
glPointSize(1.0f);
// set up the viewport
glViewport(0, 0, 400, 400);
// set up the projection matrix (the camera)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
//gluOrtho2D(-2.0f, 2.0f, -2.0f, 2.0f);
gluPerspective(45.0f, 1.0f, 0.1f, 100.0f); //Sets the frustum to perspective mode
// set up the modelview matrix (the objects)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
while to draw I do this:
glClear(GL_COLOR_BUFFER_BIT);
// draw some points
glBegin(GL_POINTS);
// go through the equations many times, drawing a point for each iteration
for (int i = 0; i < iterations; i++) {
    // compute a new point using the strange attractor equations
    float xnew = z * sin(a * x) + cos(b * y);
    float ynew = x * sin(c * y) + cos(d * z);
    float znew = y * sin(e * z) + cos(f * x);
    // save the new point
    x = xnew;
    y = ynew;
    z = znew;
    // draw the new point
    glVertex3f(x, y, z);
}
glEnd();
// swap the buffers
glutSwapBuffers();
The problem is that I don't see anything in my window. It's all black. What am I doing wrong?
The name "gluOrtho2D" is a bit misleading. In fact gluOrtho2D is probably the most useless function ever. The definition of gluOrtho2D is
void gluOrtho2D(
    GLdouble left,
    GLdouble right,
    GLdouble bottom,
    GLdouble top )
{
    glOrtho(left, right, bottom, top, -1, 1);
}
i.e. the only thing it does is call glOrtho with default values for near and far. Wow, how complicated and ingenious </sarcasm>.
Anyway, even if it's called ...2D, there's nothing 2-dimensional about it. The projection volume still has a depth range of [-1 ; 1] which is perfectly 3-dimensional.
Most likely the points generated lie outside the projection volume, which has a Z value range of [0.1 ; 100] in your case, but your points are confined to the range [-1 ; 1] in either axis (and IIRC the Z range of the strange attractor is entirely positive). So you have to apply some translation to see something. I suggest you choose
near = 1
far = 10
and apply a translation of Z: -5.5 to move things into the center of the viewing volume.
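For example, a minimal sketch of that suggestion (standard GL/GLU calls, with the values from above):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, 1.0f, 1.0f, 10.0f);   // near = 1, far = 10
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.5f);            // push the points into the middle of the viewing volume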
I'm writing a small 2D game engine (for educational purposes) in C++ and OpenGL 3.3. While writing the code I noticed that almost all sprites (if not all) use the same vertexBuffer values:
const float vertexBuffer[] =
{
-1.0f, -1.0f, 0.0f, 1.0f,
-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f
};
That is two triangles (using indexed drawing) in model space that form a square; the indexBuffer looks like:
const unsigned short indexBuffer[] = { 0, 1, 2, 2, 0, 3 };
Why am I using the same model-space values for all my sprites? Well, I use a different MVP matrix for each of them:
P (projection): The orthographic projection, usually with the same width and height as the glContext.
V (view): A lookAt transformation; it just sits on the z axis looking perpendicularly at the xy plane. This is also used to move the camera (follow the player, etc.).
M (model): this matrix is created using transformations belonging to each sprite:
glm::mat4 model = <translate> * <rotate> * <scale>
Where:
<translate> is the position of the sprite in screen-space
<rotate> the rotation of the sprite
<scale> The size of the sprite in pixels divided by 2. Why? Each corner of the model-space square corresponds to a vertex, and that square is centered on the origin, so if our sprite is 250x250 pixels, we scale by 125 px to each side in each axis, thus transforming our model-space square into a screen-space square (a rough sketch of the whole model matrix follows below).
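For concreteness, the model matrix could be assembled with GLM roughly like this (position and sizePx are assumed to be glm::vec2 and rotationZ an angle in radians; these are placeholder names, not my actual members):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(position, 0.0f))             // screen-space position
                * glm::rotate(glm::mat4(1.0f), rotationZ, glm::vec3(0.0f, 0.0f, 1.0f))   // rotation about z
                * glm::scale(glm::mat4(1.0f), glm::vec3(sizePx * 0.5f, 1.0f));           // half the pixel size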
So, if I have 5 sprites I'll call glDrawElements 5 times, with a different MVP and texture each time, but the same vertexBuffer, indexBuffer and uvCoordinates.
Do you think this is an error-prone approach to keep using in the future? Or should I instead apply the <translate> and <scale> transformations directly to the vertices when creating them, and leave the model matrix with only the rotation?
What must be changed to give me the impression of flying around the whole fixed scene? My current code only lets me look from a fixed viewpoint at objects that each rotate around themselves. Uncommenting glLoadIdentity() just stops their rotation. Note that 3dWidget::paintGL() is called by a timer every 20 ms.
void 3dWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glTranslatef(0.5f, 0.5f, 0.5f);
    glRotatef(3.0f, 1.0f, 1.0f, 1.0f);
    glTranslatef(-0.5f, -0.5f, -0.5f);
    glPushMatrix();
    //glLoadIdentity();
    for (int i = 0; i < m_cubes.count(); i++) {
        m_cubes[i]->render();
    }
    glPopMatrix();
}
void Cube::render() {
    glTranslatef(m_x, m_y, m_z);   // local position of this object
    glCallList(m_cubeId);          // render code is in createRenderCode()
    glTranslatef(-m_x, -m_y, -m_z);
}

void Cube::createRenderCode(int cubeId) {
    m_cubeId = cubeId;
    glVertexPointer(3, GL_FLOAT, 0, m_pCubePoints);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, m_pCubeColors);
    glNewList(m_cubeId, GL_COMPILE);
    {
        glEnableClientState(GL_COLOR_ARRAY);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, m_numPoints);
        glDisableClientState(GL_COLOR_ARRAY);
    }
    glEndList();
}
void 3dWidget::init(int w, int h)
{
    ...
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float aspect = w / (float)(h ? h : 1);
    glFrustum(-aspect, aspect, -1, 1, 10, 100);
    glTranslatef(0., 0., -12);
    glMatrixMode(GL_MODELVIEW);
}
EDIT: It seems it's important to know that 2 cubes are created with the following 3D position coordinates (m_x, m_y, m_z):
void 3dWidget::createScene()
{
    Cube* pCube = new Cube;
    pCube->create(0.5 /*size*/, -0.5 /*m_x*/, -0.5 /*m_y*/, -0.5 /*m_z*/);

    pCube = new Cube;
    pCube->create(0.5 /*size*/, +0.5 /*m_x*/, +0.5 /*m_y*/, +0.5 /*m_z*/);
}
Use gluLookAt to position the camera. You apply it to the modelview matrix before any object transforms.
Obviously, you'll have to figure out a path for the camera to follow. That's up to you and how you want the "flight" to proceed.
EDIT: Just to be clear, there's no camera concept, as such, in OpenGL. gluLookAt is just another transform that (when applied to the modelview matrix) has the effect of placing a camera at the prescribed location.
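A rough sketch of the idea (the eye/center variables are placeholders you would drive along your flight path, not names from the question):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ,            // camera position on the flight path
          centerX, centerY, centerZ,   // point the camera looks at
          0.0, 1.0, 0.0);              // up vector
for (int i = 0; i < m_cubes.count(); i++)
    m_cubes[i]->render();              // per-object transforms come after the camera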
If you really are just trying to rotate the world, your code seems to perform the transforms in a reasonable order. I can't see why your objects rotate around themselves rather than as a group. It might help to present an SSCCE using GLUT.
Now I've found the reason myself. It works as soon as I change paintGL() to
void 3dWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
#if 0  // not working
    glTranslatef(0.5f, 0.5f, 0.5f);
    glRotatef(3.0f, 1.0f, 1.0f, 1.0f);
    glTranslatef(-0.5f, -0.5f, -0.5f);
#else  // this works properly, they rotate horizontally around (0,0,0)
    glRotatef(3.0f, 0.0f, 1.0f, 0.0f);
#endif
    for (int i = 0; i < m_cubes.count(); i++) {
        m_cubes[i]->render();
    }
}
I don't understand exactly why, but apparently some transformations compensated each other in a way that made the objects just rotate around themselves. Thanks for your help anyway.
I think it's always better to rotate the scene than to move the camera with gluLookAt (besides the issue that finding the right formula for the angle of view is more difficult).
I am following the OpenGL ES rotation examples from Google to rotate a simple square (not a cube) in my Android app, for example this code:
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
It works fine if you only rotate around one axis.
But if you rotate around one axis and after that rotate around another axis, the rotation is not what I expect: the rotation is done around the axes of the base (global) coordinate system and not the square's own coordinate system.
EDIT with code for Shahbaz
public void onDrawFrame(GL10 gl) {
    // Clear the screen and the depth buffer
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();
    // Drawing
    gl.glTranslatef(0.0f, 0.0f, z);         //Move z units into the screen
    gl.glScalef(0.8f, 0.8f, 0.8f);          // Scale down so it fits on the screen
    // Rotate around the axes.
    gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f);   //X
    gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f);   //Y
    gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f);   //Z
    // Draw the square
    square.draw(gl);
    // Rotation increments.
    xrot += xspeed;
    yrot += yspeed;
}
Draw of the square:
public void draw(GL10 gl) {
    gl.glFrontFace(GL10.GL_CCW);
    //gl.glEnable(GL10.GL_BLEND);
    //Bind our only previously generated texture in this case
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    //Point to our vertex buffer
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    //Enable vertex buffer
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //Draw the vertices as triangle strip
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    //Disable the client state before leaving
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //gl.glDisable(GL10.GL_BLEND);
}
VERTEX BUFFER VALUES:
private FloatBuffer vertexBuffer;
private float vertices[] =
{
-1.0f, -1.0f, 0.0f, //Bottom Left
1.0f, -1.0f, 0.0f, //Bottom Right
-1.0f, 1.0f, 0.0f, //Top Left
1.0f, 1.0f, 0.0f //Top Right
};
.
.
.
public Square(int resourceId) {
    ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    vertexBuffer = byteBuf.asFloatBuffer();
    vertexBuffer.put(vertices);
    vertexBuffer.position(0);
.
.
.
The first thing you should know is that in OpenGL, transformation matrices are multiplied from the right. What does that mean? It means that the last transformation you write gets applied to the object first.
So let's look at your code:
gl.glScalef(0.8f, 0.8f, 0.8f);
gl.glTranslatef(0.0f, 0.0f, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
gl.glTranslatef(0.0f, 0.0f, z);
square.draw(gl);
This means that, first, the object is moved to (0.0f, 0.0f, z). Then it is rotated around Z, then around Y, then around X, then moved by (0.0f, 0.0f, -z) and finally scaled.
You got the scaling right. You put it first, so it gets applied last. You also got
gl.glTranslatef(0.0f, 0.0f, -z);
in the right place, because you first want to rotate the object and then move it. Note that when you rotate an object, it ALWAYS rotates around the origin of the base coordinate system, that is (0, 0, 0). If you want to rotate the object around its own axes, the object itself should be at (0, 0, 0) when the rotation is applied.
So, right before you write
square.draw(gl);
you should have the rotations. The way your code is right now, you first move the object away (by writing
gl.glTranslatef(0.0f, 0.0f, z);
before square.draw(gl);) and THEN rotate which messes things up. Removing that line gets you much closer to what you need. So, your code will look like this:
gl.glScalef(0.8f, 0.8f, 0.8f);
gl.glTranslatef(0.0f, 0.0f, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
square.draw(gl);
Now the square should rotate in place.
Note: After you run this, you will see that the rotation of the square is still rather awkward. For example, if you rotate around z by 90 degrees, then rotating around x looks like rotating around y because of the previous rotation. For now this may be fine for you, but if you want it to look really good, you should do it like this:
Imagine you are not rotating the object, but rotating a camera around the object, looking at the object. By changing xrot, yrot and zrot, you are moving the camera on a sphere around the object. Then, once you have found the location of the camera, you can either do the math and get the correct parameters to call glRotatef and glTranslatef, or use gluLookAt.
This requires some understanding of math and 3d imagination. So if you don't get it right the first day, don't get frustrated.
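A rough sketch of that camera idea (written in desktop GL/GLU C syntax just to show the math; xrot/yrot are treated as spherical angles in degrees and r is an assumed camera distance):
float el = xrot * (float)M_PI / 180.0f;   // elevation, in radians
float az = yrot * (float)M_PI / 180.0f;   // azimuth, in radians
float r  = 5.0f;                          // assumed distance from the object
float camX = r * cosf(el) * sinf(az);
float camY = r * sinf(el);
float camZ = r * cosf(el) * cosf(az);
glLoadIdentity();
gluLookAt(camX, camY, camZ,   // eye on a sphere around the object
          0.0f, 0.0f, 0.0f,   // looking at the object's center
          0.0f, 1.0f, 0.0f);  // up vector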
Edit: This is the idea of how to rotate around the rotated object's own axes:
First, let's say you do the rotation around z. Therefore you have
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
Now, the global Y unit vector is obviously (0, 1, 0), but the object has rotated and thus its Y unit vector has also rotated. This vector is given by:
[ cos(zrot)  -sin(zrot)  0 ]   [0]   [ -sin(zrot) ]
[ sin(zrot)   cos(zrot)  0 ] x [1] = [  cos(zrot) ]
[ 0           0          1 ]   [0]   [  0         ]
Therefore, your rotation around y, should be like this:
gl.glRotatef(yrot, -sin(zrot), cos(zrot), 0.0f); //Y-object
You can try this so far (disable rotation around x) and see that it looks like the way you want it (I did it, and it worked).
Now for x, it gets more complicated. Why? Because the X unit vector is not only rotated around the z axis first, but afterwards it is also rotated around the (-sin(zrot), cos(zrot), 0) vector.
So now the X unit vector in the object's coordinates is
                   [ cos(zrot)  -sin(zrot)  0 ]   [1]                      [ cos(zrot) ]
Rot_around_new_y * [ sin(zrot)   cos(zrot)  0 ] x [0] = Rot_around_new_y * [ sin(zrot) ]
                   [ 0           0          1 ]   [0]                      [ 0         ]
Let's call this vector (u_x, u_y, u_z). Then your final rotation (the one around X) would be like this:
gl.glRotatef(xrot, u_x, u_y, u_z); //X-object
So! How do you find the matrix Rot_around_new_y? See here about rotation around an arbitrary axis. Go to section 6.2, take the first matrix, extract the 3x3 rotation submatrix (that is, ignore the rightmost column, which is related to translation), and plug in (-sin(zrot), cos(zrot), 0) as the (u, v, w) axis and yrot as theta.
I won't do the math here because it requires a lot of effort and I'd eventually make a mistake somewhere anyway. However, if you are careful and ready to double-check it a couple of times, you can write it down and do the matrix multiplications.
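If you'd rather compute it numerically, here is a C-style sketch that rotates a vector around an arbitrary unit axis with Rodrigues' formula and uses it to get the object's X axis (the helper name rotateAroundAxis is made up for this example; note that cosf/sinf want radians, while the xrot/yrot/zrot you pass to glRotatef are in degrees):
#include <math.h>

// Rotate vector v around the unit axis k by 'angle' radians (Rodrigues' formula).
static void rotateAroundAxis(const float v[3], const float k[3], float angle, float out[3])
{
    float c = cosf(angle), s = sinf(angle);
    float dot = k[0]*v[0] + k[1]*v[1] + k[2]*v[2];
    float cross[3] = { k[1]*v[2] - k[2]*v[1],
                       k[2]*v[0] - k[0]*v[2],
                       k[0]*v[1] - k[1]*v[0] };
    for (int i = 0; i < 3; i++)
        out[i] = v[i]*c + cross[i]*s + k[i]*dot*(1.0f - c);
}

// X axis after the z rotation, then rotated around the new Y axis:
float zr = zrot * (float)M_PI / 180.0f, yr = yrot * (float)M_PI / 180.0f;
float xAfterZ[3] = {  cosf(zr), sinf(zr), 0.0f };   // R_z * (1, 0, 0)
float newY[3]    = { -sinf(zr), cosf(zr), 0.0f };   // the rotated Y axis from above
float u[3];
rotateAroundAxis(xAfterZ, newY, yr, u);
// then: gl.glRotatef(xrot, u[0], u[1], u[2]);      // X-object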
Additional note: one way to calculate Rot_around_new_y could also be using quaternions. A quaternion is defined as a 4d vector [xs, ys, zs, c], which corresponds to a rotation around the axis [x, y, z] by an angle theta, where s = sin(theta/2) and c = cos(theta/2) (note the half angles).
This [x, y, z] is our "new Y", i.e. [-sin(zrot), cos(zrot), 0]. The angle is yrot. The quaternion for rotation around Y is thus given as:
q_Y = [-sin(zrot)*sin(yrot/2), cos(zrot)*sin(yrot/2), 0, cos(yrot/2)]
Finally, if you have a quaternion [a, b, c, d], the corresponding rotation matrix is given as:
[ 1 - 2b^2 - 2c^2     2ab - 2cd           2ac + 2bd       ]
[ 2ab + 2cd           1 - 2a^2 - 2c^2     2bc - 2ad       ]
[ 2ac - 2bd           2bc + 2ad           1 - 2a^2 - 2b^2 ]
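A small C sketch of how that matrix could be handed to the fixed-function pipeline (the function name quatToGLMatrix is made up; the 16-float array is filled column-major, which is what glMultMatrixf expects, and on GL ES 1.x the call would be gl.glMultMatrixf(m, 0)):
// Build a column-major 4x4 OpenGL matrix from a unit quaternion [a, b, c, d].
void quatToGLMatrix(float a, float b, float c, float d, float m[16])
{
    m[0] = 1 - 2*b*b - 2*c*c;  m[4] = 2*a*b - 2*c*d;      m[8]  = 2*a*c + 2*b*d;      m[12] = 0;
    m[1] = 2*a*b + 2*c*d;      m[5] = 1 - 2*a*a - 2*c*c;  m[9]  = 2*b*c - 2*a*d;      m[13] = 0;
    m[2] = 2*a*c - 2*b*d;      m[6] = 2*b*c + 2*a*d;      m[10] = 1 - 2*a*a - 2*b*b;  m[14] = 0;
    m[3] = 0;                  m[7] = 0;                  m[11] = 0;                  m[15] = 1;
}
// Usage (desktop GL): glMultMatrixf(m);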
I know next-to-nothing about openGL, but I imagine translating to 0, rotating and then translating back should work...
gl.glTranslatef(-x, -y, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
gl.glTranslatef(x, y, z);
I think you need quaternions to do what you want to do. Using rotations about the coordinate axes works some of the time, but ultimately suffers from "gimbal lock". This happens when the rotation you want passes close by a coordinate axis and creates an unwanted gyration as the rotation required around the axis approaches 180 degrees.
A quaternion is a mathematical object that represents a rotation about an arbitrary axis defined as a 3D vector. To use it in openGL you generate a matrix from the quaternion and multiply it by your modelview matrix. This will transform your world coordinates so that the square is rotated.
You can get more info here http://content.gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation
I have a Quaternion C++ class I could send you if it helps.
Try adding
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
before the render code for a single cube that's being rotated, and then
glPopMatrix();
after the rendering is done. It gives you an extra copy of the modelview matrix to work with without affecting your primary one.
Essentially this saves the current modelview matrix, lets you render with temporary transforms, and then restores it.
I'm using opentk, nevertheless it's the same.
First move the object by half its size in each dimension, then rotate, and then move it back:
model = Matrix4.CreateTranslation(new Vector3(-width/2, -height / 2, -depth / 2)) *
Matrix4.CreateRotationX(rotationX) *
Matrix4.CreateRotationY(rotationY) *
Matrix4.CreateRotationZ(rotationZ) *
Matrix4.CreateTranslation(new Vector3(width / 2, height / 2, depth / 2));