Basically I have a local space where y points down and x points right, like
            |
            |
   ---------+---------> +x
            |
            v  +y
and the center is at (320, 240), so the upper-left corner is (0, 0). This is the convention some windowing systems use.
So I have this code
auto const proj = glm::ortho(0.0f, 640.0f, 0.0f, 480.0f);
auto const view = glm::lookAt(
    glm::vec3(320.0f, 240.0f, -1.0f), // eye position
    glm::vec3(320.0f, 240.0f, 0.0f),  // center
    glm::vec3(0.0f, -1.0f, 0.0f)      // up
);
I imagine I need to look at (320, 240, 0) from the negative z side towards positive z, with the "up" direction being negative y.
However, it doesn't seem to produce the right result:
auto const v = glm::vec2(320.0f, 240.0f);
auto const v2 = glm::vec2(0.0f, 0.0f);
auto const v3 = glm::vec2(640.0f, 480.0f);
auto const result = proj * view * glm::vec4(v, 0.0f, 1.0f); //expects: (0,0), gives (-1,-1)
auto const result2 = proj * view * glm::vec4(v2, 0.0f, 1.0f); //expects: (-1,1), gives (-2,0)
auto const result3 = proj * view * glm::vec4(v3, 0.0f, 1.0f); //expects: (1,-1), gives (0,-2)
That's not how view and projection work together. The view matrix determines which point gets mapped to [0, 0] in view space. You are trying to map the center of the visible area to [0, 0], but then you use a projection matrix that assumes [0, 0] is the top-left corner.
Since the view matrix is applied first, view * glm::vec4(v, 0.0f, 1.0f) = [0, 0]. Then the projection matrix gets applied, and since you defined [0, 0] as the top-left corner there, proj * [0, 0] results in [-1, -1].
I'm not 100% sure what you want to achieve, but if you want to use the given projection matrix, then the view matrix has to transform the scene in such a way that the top-left point of the visible area gets mapped to [0, 0].
You can also adjust the projection to use the range [-320, 320] (and [-240, 240] respectively) and keep mapping the center to [0, 0] with the view matrix.
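A minimal sketch of both options (my own code, assuming the mapping listed in the expected values above is the goal):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::ortho, glm::lookAt

// Option 1: drop the lookAt and let the projection do everything. Passing the
// window top (0) as the "top" parameter and the bottom (480) as "bottom" flips y:
auto const proj1 = glm::ortho(0.0f, 640.0f, 480.0f, 0.0f); // left, right, bottom, top
auto const view1 = glm::mat4(1.0f);

// Option 2: keep the original lookAt (it maps the center (320, 240) to [0, 0] and
// flips y via the (0, -1, 0) up vector) and center the projection on the origin:
auto const proj2 = glm::ortho(-320.0f, 320.0f, -240.0f, 240.0f);
auto const view2 = glm::lookAt(glm::vec3(320.0f, 240.0f, -1.0f),  // eye
                               glm::vec3(320.0f, 240.0f, 0.0f),   // center
                               glm::vec3(0.0f, -1.0f, 0.0f));     // up

// With either pair, (0, 0) maps to (-1, 1), (320, 240) to (0, 0), and (640, 480) to (1, -1).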
I am following the LearnOpenGL tutorials and have been tinkering with shadow casting. So far everything works correctly, but there is one very specific problem: I can't cast shadows from a purely vertical directional light. Let's add some code. My light space matrix looks like this:
glm::mat4 view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
return glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f) * view;
direction is, of course, the direction of the directional light. Everything works well until that direction vector is set to (0, -1, 0), because it is then parallel to the up vector (0, 1, 0). To construct the lookAt matrix, glm performs the cross product between the up vector and the difference between the center and the eye (which in this case is basically the direction), but that cross product yields the zero vector because the two vectors are parallel.
Knowing all of this, my question would be: how should I build my lookAt view matrix when the up vector and the direction of the light are parallel?
Edit: Thank you for your answer, I changed my code to this:
if (abs(direction.x) < FLT_EPSILON && abs(direction.z) < FLT_EPSILON)
    view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 0.0f, 1.0f));
else
    view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
return glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f) * view;
and now everything works fine !
When the up vector and the line of sight are parallel, the view matrix is undefined, because the cross product is (0, 0, 0).
The view matrix is an orthogonal matrix: each of its 3 axes is perpendicular to the plane formed by the other 2 axes, i.e. the angle between any two axes is 90°.
The view matrix is the inverse of the matrix that defines the viewing position and orientation. That matrix is defined by the parameters to glm::lookAt: 2 of the axes are specified by the line of sight and the up vector, and the 3rd axis is calculated by their cross product.
This all means you have to specify the matrix by 2 orthogonal directions. If the angle between the two vectors is not exactly 90°, glm::lookAt corrects for it, but the algorithm fails if the vectors are parallel.
Define a line of sight (direction) vector and an up vector at 90° to one another. If you rotate one of them, you have to rotate the other in the same way.
e.g. Lets assume you've a direction vector (line of sight) and an up-vector:
direction: (0, 0, 1)
up : (0, 1, 0)
If the direction vector is rotated by 90° then the up-vector has to be rotated by 90°, too:
direction: (0, -1, 0)
up : (0, 0, 1)
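The asker's edit already handles this with a hard-coded fallback; as a more general alternative, here is a hedged sketch (lightView and the 0.999 threshold are my own choices, not from the answer) that re-orthogonalizes the up vector against the light direction, so it stays valid however the light is rotated:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 lightView(glm::vec3 direction)
{
    direction = glm::normalize(direction);

    // Start from world up; if the light is (almost) vertical, fall back to world z.
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    if (glm::abs(glm::dot(direction, up)) > 0.999f)
        up = glm::vec3(0.0f, 0.0f, 1.0f);

    // Re-orthogonalize so up is exactly perpendicular to the line of sight.
    glm::vec3 right = glm::normalize(glm::cross(direction, up));
    up = glm::cross(right, direction);

    return glm::lookAt(-direction, glm::vec3(0.0f), up);
}
// The caller would still multiply by the same glm::ortho(...) as before.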
I'm trying to rotate a cube's vertices with a rotation matrix, but whenever I run the program the cube just disappears.
I'm using a rotation matrix that was given to us in a lecture for rotating around the x axis.
double moveCubeX = 0;
float xRotationMatrix[9] = {1, 0, 0,
                            0, cos(moveCubeX), sin(moveCubeX),
                            0, -sin(moveCubeX), cos(moveCubeX)
                           };
I'm adding to the moveCubeX variable with the 't' key on my keyboard
case 't':
    moveCubeX += 5;
    break;
And to do the matrix multiplication I'm using
glMultMatrixf();
However, when I add this to my code, the cube just disappears when I run it. This is where I add the glMultMatrixf() call:
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(pan, 0, -g_fViewDistance,
              pan, 0, -1,
              0, 1, 0);
    glRotatef(rotate_x, 1.0f, 0.0f, 0.0f); //Rotate the camera
    glRotatef(rotate_y, 0.0f, 1.0f, 0.0f); //Rotate the camera
    glMultMatrixf(xRotationMatrix);
I'm struggling to see where it is I have gone wrong.
OpenGL uses matrices of size 4x4. Therefore, your rotation matrix needs to be expanded to 4 rows and 4 columns, for a total of 16 elements:
float xRotationMatrix[16] = {1.0f, 0.0f,            0.0f,           0.0f,
                             0.0f, cos(moveCubeX),  sin(moveCubeX), 0.0f,
                             0.0f, -sin(moveCubeX), cos(moveCubeX), 0.0f,
                             0.0f, 0.0f,            0.0f,           1.0f};
You will also need to be careful about the units for your angles. Since you add 5 to your angle every time the user presses a key, it looks like you're thinking in degrees. The standard cos() and sin() functions in C/C++ libraries expect the angle to be in radians.
In addition, it looks like your matrix is defined at global scope. If you do that, the elements will only be evaluated once at program startup. You will either have to make the matrix definition local to the display() function, so that the matrix is re-evaluated each time you draw, or update the matrix every time the angle changes.
For the second option, you can update only the matrix elements that depend on the angle every time the angle changes. In the function that modifies moveCubeX, add:
xRotationMatrix[5] = cos(moveCubeX);
xRotationMatrix[6] = sin(moveCubeX);
xRotationMatrix[9] = -sin(moveCubeX);
xRotationMatrix[10] = cos(moveCubeX);
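Putting both suggestions together, a small sketch of what the key handler could do (rotateCubeX and the identity-initialized global are my own naming, not from the question; the element indices are the ones listed above):
#include <cmath>

float xRotationMatrix[16] = {1.0f, 0.0f, 0.0f, 0.0f,
                             0.0f, 1.0f, 0.0f, 0.0f,
                             0.0f, 0.0f, 1.0f, 0.0f,
                             0.0f, 0.0f, 0.0f, 1.0f};
double moveCubeX = 0.0; // accumulated angle in degrees

void rotateCubeX(double degrees)
{
    moveCubeX += degrees;
    const double kPi = 3.14159265358979323846;
    double radians = moveCubeX * kPi / 180.0;  // sin()/cos() expect radians

    // Refresh only the elements that depend on the angle.
    xRotationMatrix[5]  =  (float)cos(radians);
    xRotationMatrix[6]  =  (float)sin(radians);
    xRotationMatrix[9]  = -(float)sin(radians);
    xRotationMatrix[10] =  (float)cos(radians);
}
// In the keyboard handler: case 't': rotateCubeX(5.0); break;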
I'm writing a small 2D game engine (for educational purposes) in C++ and OpenGL 3.3. While writing the code I noticed that almost all sprites (if not all) use the same vertexBuffer values:
const float vertexBuffer[] =
{
    -1.0f, -1.0f, 0.0f, 1.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 0.0f, 1.0f,
     1.0f, -1.0f, 0.0f, 1.0f
};
That is 2 triangles (drawn with indexed rendering) in model space that form a square; the indexBuffer looks like:
const unsigned short indexBuffer[] = { 0, 1, 2, 2, 0, 3 };
Why am I using the same model-space values for all my sprites? Because I use a different MVP matrix for each of them:
P (projection): The orthographic camera transform, usually with the same width and height as the GL context.
V (view): A lookAt transformation; it just sits on the z axis looking perpendicularly at the xy plane. This is also used to move the camera (follow the player, etc.).
M (model): this matrix is created using transformations belonging to each sprite:
glm::mat4 model = <translate> * <rotate> * <scale>
Where:
<translate> is the position of the sprite in screen-space
<rotate> the rotation of the sprite
<scale> The size of the sprite in pixels divided by 2. Why? Each corner of the model-space square corresponds to a vertex, and the square is centered on the origin, so if our sprite is 250x250 pixels, we scale by 125 px on each side of each axis, turning our model-space square into a screen-space square.
So, if I have 5 sprites I'll call glDrawElements 5 times, with a different MVP and texture each time, but the same vertexBuffer, indexBuffer and uvCoordinates.
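For reference, a hedged sketch of how one per-sprite model matrix could be built under this scheme (Sprite and its fields are placeholder names of mine):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Sprite {
    glm::vec2 position;      // screen-space position
    float     rotation;      // rotation in radians
    glm::vec2 sizeInPixels;  // e.g. (250, 250)
};

glm::mat4 modelMatrix(const Sprite& s)
{
    glm::mat4 m(1.0f);
    m = glm::translate(m, glm::vec3(s.position, 0.0f));          // <translate>
    m = glm::rotate(m, s.rotation, glm::vec3(0.0f, 0.0f, 1.0f)); // <rotate> around z
    m = glm::scale(m, glm::vec3(s.sizeInPixels * 0.5f, 1.0f));   // <scale> = size / 2
    return m; // equivalent to <translate> * <rotate> * <scale>
}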
Do you think this is an error-prone approach to keep using in the future? Or should I instead apply the <translate> and <scale> transformations directly to the vertices when creating them, and leave the model matrix with only the rotation?
I am following the OpenGL ES rotation examples from Google to rotate a simple square (not a cube) in my Android app, for example this code:
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
It works fine if you only rotate around one axis.
But if you rotate around one axis and after that you rotate around another axis, the rotation is not right. I mean that the rotation is done around the axes of the base (global) coordinate system and not around the square's own coordinate system.
EDIT with code for Shahbaz
public void onDrawFrame(GL10 gl) {
    //Clear the screen and the depth buffer
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();
    //Drawing
    gl.glTranslatef(0.0f, 0.0f, z);          //Move z units into the screen
    gl.glScalef(0.8f, 0.8f, 0.8f);           //Scale it down so it fits on the screen
    //Rotate around the axes
    gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f);    //X
    gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f);    //Y
    gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f);    //Z
    //Draw the square
    square.draw(gl);
    //Rotation factors
    xrot += xspeed;
    yrot += yspeed;
}
The square's draw method:
public void draw(GL10 gl) {
    gl.glFrontFace(GL10.GL_CCW);
    //gl.glEnable(GL10.GL_BLEND);
    //Bind our only previously generated texture in this case
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    //Point to our vertex buffer
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    //Enable vertex buffer
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //Draw the vertices as triangle strip
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    //Disable the client state before leaving
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //gl.glDisable(GL10.GL_BLEND);
}
VERTEX BUFFER VALUES:
private FloatBuffer vertexBuffer;
private float vertices[] =
{
    -1.0f, -1.0f, 0.0f,   //Bottom Left
     1.0f, -1.0f, 0.0f,   //Bottom Right
    -1.0f,  1.0f, 0.0f,   //Top Left
     1.0f,  1.0f, 0.0f    //Top Right
};
.
.
.
public Square(int resourceId) {
    ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    vertexBuffer = byteBuf.asFloatBuffer();
    vertexBuffer.put(vertices);
    vertexBuffer.position(0);
.
.
.
The first thing you should know is that in OpenGL, transformation matrices are multiplied from the right. What does that mean? It means that the last transformation you write gets applied to the object first.
So let's look at your code:
gl.glScalef(0.8f, 0.8f, 0.8f);
gl.glTranslatef(0.0f, 0.0f, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
gl.glTranslatef(0.0f, 0.0f, z);
square.draw(gl);
This means that, first, the object is moved to (0.0f, 0.0f, z). Then it is rotated around Z, then around Y, then around X, then moved by (0.0f, 0.0f, -z) and finally scaled.
You got the scaling right. You put it first, so it gets applied last. You also got
gl.glTranslatef(0.0f, 0.0f, -z);
in the right place, because you first want to rotate the object and then move it. Note that when you rotate an object, it ALWAYS rotates around the origin of the base coordinate system, that is (0, 0, 0). If you want to rotate the object around its own axes, the object itself has to be at (0, 0, 0).
So, right before you write
square.draw(gl);
you should have the rotations. The way your code is right now, you move the object away first (by writing
gl.glTranslatef(0.0f, 0.0f, z);
before square.draw(gl);) and THEN rotate, which messes things up. Removing that line gets you much closer to what you need. So your code will look like this:
gl.glScalef(0.8f, 0.8f, 0.8f);
gl.glTranslatef(0.0f, 0.0f, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
square.draw(gl);
Now the square should rotate in place.
Note: After you run this, you will see that the rotation of the square is rather awkward. For example, if you rotate around z by 90 degrees, then rotating around x looks like rotating around y because of the previous rotation. For now this may be fine for you, but if you want it to look really good, you should do it like this:
Imagine you are not rotating the object, but rotating a camera around the object, looking at it. By changing xrot, yrot and zrot, you are moving the camera on a sphere around the object. Then, once you have found the location of the camera, you can either do the math and get the correct parameters to call glRotatef and glTranslatef, or use gluLookAt.
This requires some understanding of the math and some 3D visualization, so if you don't get it right on the first day, don't get frustrated.
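A hedged sketch of that camera-on-a-sphere idea, written in desktop C++/GLU terms (the question uses Android GL10, and the angle-to-position mapping below is just one possible convention, not the only one):
#include <cmath>
#include <GL/glu.h>

// Place the camera on a sphere of radius r around the object (at the origin)
// and look at it. xrot acts as elevation, yrot as azimuth, both in degrees.
void applyOrbitCamera(float xrotDeg, float yrotDeg, float r)
{
    const float degToRad = 3.14159265f / 180.0f;
    float pitch = xrotDeg * degToRad;
    float yaw   = yrotDeg * degToRad;

    float eyeX = r * std::cos(pitch) * std::sin(yaw);
    float eyeY = r * std::sin(pitch);
    float eyeZ = r * std::cos(pitch) * std::cos(yaw);

    gluLookAt(eyeX, eyeY, eyeZ,   // eye on the sphere
              0.0,  0.0,  0.0,    // look at the object
              0.0,  1.0,  0.0);   // world up (degenerates when looking straight down)
}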
Edit: This is the idea of how to rotate around the rotated object's own coordinates.
First, let's say you do the rotation around z. Therefore you have
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
Now, the global Y unit vector is obviously (0, 1, 0), but the object has rotated and thus its Y unit vector has also rotated. This vector is given by:
[cos(zrot)  -sin(zrot)  0]   [0]   [-sin(zrot)]
[sin(zrot)   cos(zrot)  0] x [1] = [ cos(zrot)]
[0           0          1]   [0]   [ 0        ]
Therefore, your rotation around y, should be like this:
gl.glRotatef(yrot, -sin(zrot), cos(zrot), 0.0f); //Y-object
You can try this so far (disable rotation around x) and see that it looks like the way you want it (I did it, and it worked).
Now for x, it gets very complicated. Why? Because the X unit vector is not only rotated around the z axis first, but afterwards it is also rotated around the (-sin(zrot), cos(zrot), 0) vector.
So now the X unit vector in the object's coordinates is
                   [cos(zrot)  -sin(zrot)  0]   [1]                      [cos(zrot)]
Rot_around_new_y * [sin(zrot)   cos(zrot)  0] x [0] = Rot_around_new_y * [sin(zrot)]
                   [0           0          1]   [0]                      [0        ]
Let's call this vector (u_x, u_y, u_z). Then your final rotation (the one around X), would be like this:
gl.glRotatef(xrot, u_x, u_y, u_z); //X-object
So! How do you find the matrix Rot_around_new_y? See here about rotation around an arbitrary axis. Go to section 6.2, take the first matrix, extract its 3x3 rotation submatrix (that is, ignore the rightmost column, which is related to translation), and put (-sin(zrot), cos(zrot), 0) as the (u, v, w) axis and yrot as theta.
I won't do the math here because it requires a lot of effort and eventually I'm going to make a mistake somewhere around there anyway. However, if you are very careful and ready to double check them a couple of times, you could write it down and do the matrix multiplications.
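If hand-multiplying those matrices feels error-prone, a hedged alternative (my own code, using GLM rather than anything from the original answer) is to let glm::rotate build Rot_around_new_y and apply it numerically:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Returns the object's X axis after the z-rotation followed by the rotation
// around the already-rotated Y axis (the "Rot_around_new_y" above).
glm::vec3 objectXAxis(float zrotDeg, float yrotDeg)
{
    float z = glm::radians(zrotDeg);

    glm::vec3 newY(-std::sin(z), std::cos(z), 0.0f);    // Y after the z-rotation
    glm::vec3 xAfterZ(std::cos(z), std::sin(z), 0.0f);  // X after the z-rotation

    // Rot_around_new_y: rotation by yrot around the rotated Y axis.
    glm::mat4 rotAroundNewY =
        glm::rotate(glm::mat4(1.0f), glm::radians(yrotDeg), newY);

    return glm::vec3(rotAroundNewY * glm::vec4(xAfterZ, 0.0f));
}
// Usage: glm::vec3 u = objectXAxis(zrot, yrot);
//        gl.glRotatef(xrot, u.x, u.y, u.z);  // X-object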
Additional note: one way to calculate Rot_around_new_y is to use quaternions. A quaternion is a 4d vector [xs, ys, zs, c] that corresponds to a rotation around the unit axis [x, y, z], where s and c are the sine and cosine of half the rotation angle.
This [x, y, z] is our "new Y", i.e. [-sin(zrot), cos(zrot), 0]. The angle is yrot. The quaternion for the rotation around the new Y is thus given as:
q_Y = [-sin(zrot)*sin(yrot/2), cos(zrot)*sin(yrot/2), 0, cos(yrot/2)]
Finally, if you have a quaternion [a, b, c, d], the corresponding rotation matrix is given as:
[1 - 2b^2 - 2c^2   2ab - 2cd         2ac + 2bd      ]
[2ab + 2cd         1 - 2a^2 - 2c^2   2bc - 2ad      ]
[2ac - 2bd         2bc + 2ad         1 - 2a^2 - 2b^2]
I know next-to-nothing about openGL, but I imagine translating to 0, rotating and then translating back should work...
gl.glTranslatef(-x, -y, -z);
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
gl.glRotatef(zrot, 0.0f, 0.0f, 1.0f); //Z
gl.glTranslatef(x, y, z);
I think you need quaternions to do what you want to do. Using rotations about the coordinate axes works some of the time, but ultimately suffers from "gimbal lock". This happens when the rotation you want passes close by a coordinate axis and creates an unwanted gyration as the rotation required around the axis approaches 180 degrees.
A quaternion is a mathematical object that represents a rotation about an arbitrary axis defined as a 3D vector. To use it in openGL you generate a matrix from the quaternion and multiply it by your modelview matrix. This will transform your world coordinates so that the square is rotated.
You can get more info here http://content.gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation
I have a Quaternion C++ class I could send you if it helps.
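A hedged sketch of that workflow using GLM's quaternion type instead of a hand-rolled class (the question itself is Android/GL10, so this is desktop C++ for illustration; glm::angleAxis and glm::mat4_cast are the relevant GLM calls, the rest is my own glue):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <GL/gl.h>

// Build a rotation about an arbitrary axis and multiply it into the
// fixed-function modelview matrix.
void applyQuaternionRotation(float angleRadians, const glm::vec3& axis)
{
    glm::quat q = glm::angleAxis(angleRadians, glm::normalize(axis));
    glm::mat4 rotation = glm::mat4_cast(q);   // quaternion -> 4x4 rotation matrix
    glMultMatrixf(glm::value_ptr(rotation));  // column-major, as GL expects
}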
Try adding
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
before the render code for a single cube that's being rotated, and then
glPopMatrix();
after the rendering is done. It will give you an extra matrix level to work with without affecting your primary modelview matrix.
Essentially what this does is save a copy of the current modelview matrix, render with the cube's transformations applied on top of it, and then restore the original.
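In context, the pattern would look roughly like this (a sketch in desktop C-style GL; drawCube() is a placeholder of mine for the cube's actual render code):
glMatrixMode(GL_MODELVIEW);
glPushMatrix();                        // save the current modelview matrix
glRotatef(xrot, 1.0f, 0.0f, 0.0f);     // per-cube rotation
glRotatef(yrot, 0.0f, 1.0f, 0.0f);
glRotatef(zrot, 0.0f, 0.0f, 1.0f);
drawCube();                            // placeholder for the cube's render code
glPopMatrix();                         // restore it, so other objects are unaffected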
I'm using OpenTK, but the idea is the same.
First move the object by half its size in each dimension, then rotate, then move it back:
model = Matrix4.CreateTranslation(new Vector3(-width / 2, -height / 2, -depth / 2)) *
        Matrix4.CreateRotationX(rotationX) *
        Matrix4.CreateRotationY(rotationY) *
        Matrix4.CreateRotationZ(rotationZ) *
        Matrix4.CreateTranslation(new Vector3(width / 2, height / 2, depth / 2));