I'm learning from these tutorials:
http://en.wikibooks.org/wiki/Category:OpenGL_Programming
http://www.opengl-tutorial.org/
I have modified the 7th lesson from http://www.opengl-tutorial.org/ so that the cube rotates. Now what I want to do is have two or three cubes, each at a different position, and make them all rotate, but I really don't know how to do that. So I'm asking and hoping for some help.
The rotation is made by this code:
glm::vec3 axis_y(0, 1, 0);
glm::mat4 anim = glm::rotate(glm::mat4(1.0f), angle, axis_y);
...
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix * anim;
I didn't go through the details of the tutorial, but in principle, you need to create a model matrix for each of the cubes, and then render each cube with its own value of MVP constructed from the cube's model matrix (and the global view & projection matrices).
The above can give you three identical cubes in different positions, rotations and scales. If you want three different objects, you'll need to load each of them separately, preferably into its own buffer object.
EDIT
I don't know the libraries the tutorial uses, but the principle of coding this could be along these lines:
for (int idxCube = 0; idxCube < 3; ++idxCube) {
    // offset each cube by 10 units along the x axis
    glm::mat4 offset = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f * idxCube, 0.0f, 0.0f));
    glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix * offset * anim;
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
    glDrawArrays(...);
}
This would give 3 cubes at positions (0, 0, 0), (10, 0, 0) and (20, 0, 0).
More generally, you'd just have one ModelMatrix for each cube.
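For example, a minimal sketch along those lines (reusing ProjectionMatrix, ViewMatrix, anim and MatrixID from above; the 12-triangle vertex count is an assumption based on the tutorial's cube):
// One model matrix per cube; each can carry its own position, rotation and scale.
std::vector<glm::mat4> modelMatrices = {
    glm::translate(glm::mat4(1.0f), glm::vec3( 0.0f, 0.0f, 0.0f)),
    glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f)),
    glm::translate(glm::mat4(1.0f), glm::vec3(20.0f, 0.0f, 0.0f))
};

for (const glm::mat4& model : modelMatrices) {
    glm::mat4 MVP = ProjectionMatrix * ViewMatrix * model * anim;
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, 12 * 3); // 12 triangles per cube (assumed)
}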
Related
I'm trying to visualize a simple quad made of -1 to 1 vertices along the x and y axes. Why does OpenGL clip the object? The code seems correct to me:
glm::mat4 m = glm::translate(glm::mat4{1.0f}, toGlmVec3(objectPosition));
glm::mat4 v = glm::lookAtLH(toGlmVec3(cameraPosition), toGlmVec3(objectPosition), glm::vec3(0, 1, 0));
glm::mat4 p = glm::perspective(glm::radians(50.f), float(640.f) / 480.f, 0.0001f, 100.f);
glm::mat4 mvp = /* p* */ v * m; // when I take p back, the object disappears completely
testShader.use();
testShader.setVector4("u_color", math::Vector4f(0.f, 1.f, 0.f, 1.f));
testShader.setMatrix4("u_mMVP", mvp);
The shader code contains only one line:
gl_Position = u_mMVP * vec4(a_Pos, 1.0);
After moving the camera a bit along the z axis, the object is clipped (screenshot omitted).
If I comment out v *, then it works fine and the object moves along the x and y axes on the screen.
Without the view matrix, only the model matrix, I can move the object along x and y (screenshot omitted).
So it looks like the rendering code is working fine, but what is wrong with the view and projection matrices?
The object is clipped by the near and far planes of the orthographic projection. If you don't explicitly set a projection matrix, the projection matrix is the identity matrix, and the near and far planes are at ±1.
Use glm::ortho to define a different projection matrix, e.g.:
glm::mat4 p = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -10.0f, 10.0f);
The orthographic projection matrix defines a cuboid viewing volume around the position of the viewer. All geometry outside of this volume is clipped.
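Putting it together with the matrices from the question, a minimal sketch (assuming m, v and testShader are unchanged):
glm::mat4 p = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -10.0f, 10.0f);
glm::mat4 mvp = p * v * m; // projection leftmost: it is applied last
testShader.setMatrix4("u_mMVP", mvp);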
I am following this site to learn ray tracing using compute shaders: https://github.com/LWJGL/lwjgl3-wiki/wiki/2.6.1.-Ray-tracing-with-OpenGL-Compute-Shaders-%28Part-I%29
My question: the tutorial details a procedure to get the perspective projection. I think I followed the steps correctly, but I am getting the wrong result, and I believe I made a mistake in my matrix computations.
My code for the perspective projection:
//Getting the perspective projection using glm::perspective
glm::mat4 projection = glm::perspective(60.0f, 1024.0f/768.0f, 1.0f, 2.0f);
//My Camera Position
glm::vec3 camPos=glm::vec3(3.0, 2.0, 7.0);
//My View matrix using glm::lookAt
glm::mat4 view = glm::lookAt(camPos, glm::vec3(0.0, 0.5, 0.0),glm::vec3(0.0, 1.0, 0.0));
//Calculating inverse of the view*projection
glm::mat4 inv = glm::inverse(view*projection);
//Calculating the rays from the camera position to the corners of the frustum, as detailed on the site.
glm::vec4 ray00=glm::vec4(-1, -1, 0, 1) * inv;
ray00 /= ray00.w;
ray00 -= glm::vec4(camPos,1.0);
glm::vec4 ray10 = glm::vec4(+1, -1, 0, 1) * inv;
ray10 /= ray10.w;
ray10 -= glm::vec4(camPos,1.0);
glm::vec4 ray01=glm::vec4(-1, 1, 0, 1) * inv;
ray01 /= ray01.w;
ray01 -= glm::vec4(camPos,1.0);
glm::vec4 ray11 = glm::vec4(+1, +1, 0, 1) * inv;
ray11 /= ray11.w;
ray11 -= glm::vec4(camPos,1.0);
Result of the above transformations: (screenshot omitted)
As additional information, I am dispatching my compute shader using:
//Dispatch the compute shader: 16x8 work groups covering a 1024x768 image
glDispatchCompute(1024 / 16, 768 / 8, 1);
I am also passing the values to the shader using the following:
//Querying the location for ray00 and assigning the value. Similarly for the rest
GLuint ray00Id = glGetUniformLocation(computeS, "ray00");
glUniform3f(ray00Id, ray00.x, ray00.y, ray00.z);
GLuint ray01Id = glGetUniformLocation(computeS, "ray01");
glUniform3f(ray01Id, ray01.x, ray01.y, ray01.z);
GLuint ray10Id = glGetUniformLocation(computeS, "ray10");
glUniform3f(ray10Id, ray10.x, ray10.y, ray10.z);
GLuint ray11Id = glGetUniformLocation(computeS, "ray11");
glUniform3f(ray11Id, ray11.x, ray11.y, ray11.z);
GLuint camId = glGetUniformLocation(computeS, "eye");
glUniform3f(camId, camPos.x, camPos.y, camPos.z);
Update, following derhass's suggestion: my image now looks like this (screenshot omitted).
The glm library uses the standard OpenGL matrix conventions, meaning that the matrices are created with the multiplication order Matrix * Vector in mind. So the following code is wrong:
//Calculating inverse of the view*projection
glm::mat4 inv = glm::inverse(view*projection);
The composition of the view matrix (transforming from world space to eye space) and the projection matrix (transforming from eye space to clip space) is projection * view, not view * projection as you put it (which would apply the projection before the view).
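A sketch of the corrected computation, keeping the variable names from the question; note that the same ordering rule applies to the vector transforms, which must be matrix * vector, not vector * matrix:
//Calculating inverse of projection*view (not view*projection)
glm::mat4 inv = glm::inverse(projection * view);

//Matrix on the left, vector on the right, per GLM/OpenGL conventions
glm::vec4 ray00 = inv * glm::vec4(-1, -1, 0, 1);
ray00 /= ray00.w;
ray00 -= glm::vec4(camPos, 1.0);
// ...and likewise for ray10, ray01 and ray11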
I have a triangle whose 3 vertices can be anywhere in space.
I attempted to get the max and min coordinates for it:
//mCoordinate is this vertex's position; called once per vertex to grow the box
void findBoundingBox(glm::vec3& minBB, glm::vec3& maxBB)
{
    minBB.x = std::min(minBB.x, mCoordinate.x);
    minBB.y = std::min(minBB.y, mCoordinate.y);
    minBB.z = std::min(minBB.z, mCoordinate.z);

    maxBB.x = std::max(maxBB.x, mCoordinate.x);
    maxBB.y = std::max(maxBB.y, mCoordinate.y);
    maxBB.z = std::max(maxBB.z, mCoordinate.z);
}
Now I tried to set:
glm::vec3 InverseViewDirection(50.0f, 200, 200); //Inverse View Direction
glm::vec3 LookAtPosition(0.0,0,0); // I can make it anywhere with barycentric coord, but this is the simple case
glm::vec3 setupVector(0.0, 1, 0);
I tried to set the orthographic view to wrap the triangle by:
myCamera.setProjectionMatrix(min.x, max.x, max.y,min.y, 0.0001f, 10000.0f);
But it's not neatly bounding the triangle in my view.
I've been stumped on this for a day, any pointers?
Bad output (screenshot omitted): I want the view to neatly bound the triangle.
Edit:
Based on a comment, I have tried to update the bounds with the view matrix (the model matrix is identity, so I'm ignoring it for now), but still no luck :(
glm::vec4 minSS = ((myCamera.getViewMatrix()) * glm::vec4(minWS, 0.0));
glm::vec4 maxSS = ((myCamera.getViewMatrix()) * glm::vec4(maxWS, 0.0));
myCamera.setProjectionMatrix(minSS.x, maxSS.x, maxSS.y, minSS.y, -200.0001f, 14900.0f);
You will need to apply all transformations that come before the perspective transformation to your input points when you calculate the bounding box.
In your code fragments, it looks like you're applying a viewing transform with an arbitrary viewpoint (50, 200, 200) as part of your rendering. You need to apply this same transformation to your input points before you feed them into your findBoundingBox() function.
In more mathematical terms, you typically have something like this in your vertex shader, with InputPosition being the original vertex coordinates:
gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * InputPosition;
To determine a projection matrix that will map all your points to a given range, you need to look at all points that the projection matrix is applied to. With the notation above, those points are ViewMatrix * ModelMatrix * InputPosition. So when you calculate the bounding box, the model and view matrices (or the modelview matrix if you combine them) needs to be applied to the input points.
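For example, a minimal sketch (assuming the triangle's world-space vertices are available in a container, and that setProjectionMatrix takes (left, right, top, bottom, near, far) as in the question):
// Transform each vertex into eye space first, then grow the box. Needs <limits>.
glm::vec3 minBB(std::numeric_limits<float>::max());
glm::vec3 maxBB(-std::numeric_limits<float>::max());

for (const glm::vec3& vertex : triangleVertices) { // assumed container of world-space points
    // w = 1.0 so the view matrix's translation is applied to the point
    glm::vec3 eye = glm::vec3(myCamera.getViewMatrix() * glm::vec4(vertex, 1.0f));
    minBB = glm::min(minBB, eye);
    maxBB = glm::max(maxBB, eye);
}

// In eye space the camera looks down -z, so near/far come from the z extent.
myCamera.setProjectionMatrix(minBB.x, maxBB.x, maxBB.y, minBB.y, -maxBB.z, -minBB.z);
Note also that the edit in the question transforms the corners with w = 0.0, which drops the view matrix's translation; points need w = 1.0.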
I'm working on a small graphics engine using OpenGL and I'm having some issues with my translation matrix. I'm using OpenGL 3.3, GLSL and C++.
The situation is this: I have defined a small cube which I want to render on screen. The cube uses its own coordinate system, so I created a model matrix to be able to transform the cube. To make things a bit easier for myself I started out with just a translation matrix as the cube's model matrix, and after a bit of coding I've managed to make everything work and the cube appears on the screen. Nothing too special, but there is one thing about my translation matrix that I find a bit odd.
Now as far as I know, a translation matrix is defined as follows:
1, 0, 0, x
0, 1, 0, y
0, 0, 1, z
0, 0, 0, 1
However, this does not work for me. When I define my translation matrix this way, nothing appears on the screen. It only works when I define my translation matrix like this:
1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
x, y, z, 1
Now I've been over my code several times to find out why this is the case, but I can't seem to find the reason. Or am I just wrong, and does a translation matrix need to be defined like the transposed one above?
My matrices are defined as a one-dimensional array going from left to right, top to bottom.
Here is some of my code that might help:
//this is called just before the cube is being rendered
void DisplayObject::updateMatrices()
{
    modelMatrix = identityMatrix();
    modelMatrix = modelMatrix * translateMatrix( xPos, yPos, zPos );

    /* update modelview-projection matrix */
    mvpMatrix = modelMatrix * (*projMatrix);
}
//this creates my translation matrix which causes the cube to disappear
const Matrix4 translateMatrix( float x, float y, float z )
{
    Matrix4 tranMatrix = identityMatrix();

    tranMatrix.data[3]  = x;
    tranMatrix.data[7]  = y;
    tranMatrix.data[11] = z;

    return Matrix4(tranMatrix);
}
This is my simple test vertex shader:
#version 150 core
in vec3 vPos;
uniform mat4 mvpMatrix;
void main()
{
    gl_Position = mvpMatrix * vec4(vPos, 1.0);
}
I've also run tests to check whether my matrix multiplication works, and it does.
I * randomMatrix is still just randomMatrix
I hope you guys can help.
Thanks
EDIT:
This is how I send the matrix data to OpenGL:
void DisplayObject::render()
{
    updateMatrices();

    glBindVertexArray(vaoID);
    glUseProgram(progID);

    glUniformMatrix4fv( glGetUniformLocation(progID, "mvpMatrix"), 1, GL_FALSE, &mvpMatrix.data[0] );

    glDrawElements(GL_TRIANGLES, bufferSize[index], GL_UNSIGNED_INT, 0);
}
mvpMatrix.data is a std::vector.
For OpenGL,
1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
x, y, z, 1
is the correct translation matrix.
Why? OpenGL uses column-major matrix ordering, which is the transpose of the matrix you initially presented (that one is in row-major ordering). Row-major ordering is used in most math textbooks and also in DirectX, so it is a common point of confusion for those new to OpenGL.
See: http://www.mindcontrol.org/~hplus/graphics/matrix-layout.html
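To make the layout concrete, here is an illustrative array (the variable names are my own, not from your Matrix4 class):
// Column-major: element (row r, column c) lives at flat index c*4 + r,
// so the translation occupies indices 12, 13 and 14.
float x = 1.0f, y = 2.0f, z = 3.0f; // example translation values
float translation[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,  // column 0
    0.0f, 1.0f, 0.0f, 0.0f,  // column 1
    0.0f, 0.0f, 1.0f, 0.0f,  // column 2
    x,    y,    z,    1.0f   // column 3 holds the translation
};
Equivalently, writing to data[12], data[13] and data[14] in your translateMatrix() (instead of 3, 7 and 11) produces the layout OpenGL expects when you pass GL_FALSE to glUniformMatrix4fv.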
Matrix multiplication is not commutative, so A*B is generally different from B*A. What does hold is that transposition reverses the order of the factors:
t(A * B) = t(B) * t(A)
So if your matrices are stored as the transpose of what the convention expects, you can compensate by swapping the multiplication order.
Try:
void DisplayObject::updateMatrices()
{
    modelMatrix = identityMatrix();
    modelMatrix = translateMatrix( xPos, yPos, zPos ) * modelMatrix;

    /* update modelview-projection matrix */
    mvpMatrix = modelMatrix * (*projMatrix);
}
I am working on rendering a terrain in OpenGL.
My code is the following:
void Render_Terrain(int k)
{
    GLfloat angle = (GLfloat) (k/40 % 360);

    //PROJECTION
    glm::mat4 Projection = glm::perspective(45.0f, 1.0f, 0.1f, 100.0f);

    //VIEW
    glm::mat4 View = glm::mat4(1.);
    //ROTATION
    //View = glm::rotate(View, angle * -0.1f, glm::vec3(1.f, 0.f, 0.f));
    //View = glm::rotate(View, angle * 0.2f, glm::vec3(0.f, 1.f, 0.f));
    //View = glm::rotate(View, angle * 0.9f, glm::vec3(0.f, 0.f, 1.f));

    View = glm::translate(View, glm::vec3(0.f, 0.f, -4.0f)); // x, y, z position ?

    //MODEL
    glm::mat4 Model = glm::mat4(1.0);

    glm::mat4 MVP = Projection * View * Model;
    glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MVP_matrix"), 1, GL_FALSE, glm::value_ptr(MVP));

    //Transfer additional information to the vertex shader
    glm::mat4 MV = Model * View;
    glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MV_matrix"), 1, GL_FALSE, glm::value_ptr(MV));

    glClearColor(0.0, 0.0, 0.0, 1.0);

    glDrawArrays(GL_LINE_STRIP, terrain_start, terrain_end);
}
I can do a rotation around the X,Y,Z axis, scale my terrain but I can't find a way to move the camera. I am using OpenGL 3+ and I am kinda new to graphics.
The best way to move the camera would be through the use of gluLookAt(), which simulates camera movement, since the camera itself cannot be moved at all. The function takes 9 parameters. The first three are the XYZ coordinates of the eye, which is where the camera is located. The next three are the XYZ coordinates of the center, which is the point the camera is looking at from the eye; it is always going to be the center of the screen. The last three are the XYZ coordinates of the up vector, which points vertically upwards from the eye. By manipulating these three XYZ triples you can simulate any camera movement you want.
Further details (see the sketch after this list):
- If you want, for example, to rotate around an object, you rotate your eye around the up vector.
- If you want to move forward or backwards, you add to or subtract from both the eye and the center points.
- If you want to tilt the camera left or right, you rotate your up vector around your look vector, where the look vector is center - eye.
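A minimal sketch of those operations with GLM (all variable names are my own; eye, center and up correspond to gluLookAt's parameters):
glm::vec3 eye(0.0f, 0.0f, 5.0f);    // camera position
glm::vec3 center(0.0f, 0.0f, 0.0f); // point being looked at
glm::vec3 up(0.0f, 1.0f, 0.0f);     // up vector

// Move forward/backwards: step eye and center along the look vector.
glm::vec3 look = glm::normalize(center - eye);
float speed = 0.1f;
eye    += speed * look;
center += speed * look;

// Rotate around an object: rotate the eye around the up axis through center.
float orbit = glm::radians(1.0f);
eye = center + glm::vec3(glm::rotate(glm::mat4(1.0f), orbit, up) * glm::vec4(eye - center, 0.0f));

// Tilt left/right: rotate the up vector around the look vector.
float roll = glm::radians(1.0f);
up = glm::vec3(glm::rotate(glm::mat4(1.0f), roll, look) * glm::vec4(up, 0.0f));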
gluLookAt operates on the deprecated fixed function pipeline, so you should use glm::lookAt instead.
You are currently using a constant vector for translation. In the commented out code (which I assume you were using to test rotation), you use angle to adjust the rotation. You should have a similar variable for translation. Then, you can change the glm::translate call to:
View = glm::translate(View, glm::vec3(x_transform, y_transform, z_transform)); // x, y, z position ?
and get translation.
You should probably pass more than one parameter into Render_Terrain, as translation and rotation together need at least six parameters.
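Since gluLookAt is deprecated, here is a sketch of the glm::lookAt equivalent mentioned above (reusing the hypothetical x_transform, y_transform, z_transform variables):
// Build the view matrix from eye/center/up instead of a plain translation.
glm::vec3 eye(x_transform, y_transform, z_transform); // camera position
glm::vec3 center(0.0f, 0.0f, 0.0f);                   // point the camera looks at
glm::vec3 up(0.0f, 1.0f, 0.0f);                       // world up
glm::mat4 View = glm::lookAt(eye, center, up);

glm::mat4 MVP = Projection * View * Model;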
In OpenGL the camera is always at (0, 0, 0). You need to set the matrix mode to GL_MODELVIEW, and then modify or set the model/view matrix using things like glTranslate, glRotate, glLoadMatrix, etc. in order to make it appear that the camera has moved. If you're using GLU, you can use gluLookAt to point the camera in a particular direction.
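A minimal fixed-function sketch of that idea (legacy OpenGL with GLU; the eye/center values are example numbers):
// The "camera" is baked into the modelview matrix in legacy OpenGL.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 2.0, 4.0,   // eye position
          0.0, 0.0, 0.0,   // center (look-at point)
          0.0, 1.0, 0.0);  // up vector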