I'm attempting to create a Camera class for a 3D OpenGL project. However, I cannot figure out how to actually apply the camera to my scene. I have these Camera functions (amongst others):
void Camera::update(){
    glm::vec3 direction(
        cos(_verticalAngle) * sin(_horizontalAngle),
        sin(_verticalAngle),
        cos(_verticalAngle) * cos(_horizontalAngle));
    glm::vec3 right = glm::vec3(
        sin(_horizontalAngle - 3.14f / 2.0f),
        0,
        cos(_horizontalAngle - 3.14f / 2.0f));
    glm::vec3 up = glm::cross(right, direction);
    _projectionMatrix = glm::perspective(_FoV, float(VIEWPORT_X) / float(VIEWPORT_Y), 0.1f, 250.0f);
    _viewMatrix = glm::lookAt(_position, _position + direction, up);
}
glm::mat4 Camera::getProjectionMatrix(){
    return _projectionMatrix;
}
glm::mat4 Camera::getViewMatrix(){
    return _viewMatrix;
}
They were created from a tutorial. I'm not sure whether they work, since I can't test them yet. What I want to do is get OpenGL to use the view and projection matrices to simulate a camera. How exactly do I tell OpenGL to use those projection and view matrices, so that it properly simulates a camera separate from the model's transformations? I'm aware OpenGL will not accept glm matrices by default, but I have seen this type of thing in a few tutorials:
glm::mat4 ProjectionMatrix = getProjectionMatrix();
glm::mat4 ViewMatrix = getViewMatrix();
glm::mat4 ModelMatrix = glm::mat4(1.0);
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
but glUniformMatrix4fv (which I think applies the camera transforms?) makes no sense to me. It always has something to do with shaders, of which I have none. I simply have a wireframe test mesh at the moment. Could someone provide me with a code snippet for this problem?
Use glLoadMatrixf() if you are not using shaders. If you just want to multiply the current matrix, use glMultMatrixf(). You can switch the current matrix mode with glMatrixMode(GL_PROJECTION) or glMatrixMode(GL_MODELVIEW). For example (this is your code):
glm::mat4 ProjectionMatrix = getProjectionMatrix();
//setup projection matrix for opengl
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMultMatrixf(glm::value_ptr(ProjectionMatrix));
or:
glMultMatrixf(&ProjectionMatrix[0][0]);
EDIT:
If you want to apply a transform to your model (model and view are combined in the fixed-function pipeline):
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(&viewMatrix[0][0]);
glMultMatrixf(&modelTransform[0][0]); //model * view
Draw_your_model();
You might need to set up your transform like this:
modelTransform = glm::translate(modelTransform, glm::vec3(-10, -10, -10));
Translating the modelview by (-10, -10, -10) is what places the camera at (10, 10, 10).
I don't know about using GLM, but I can help with the regular OpenGL part.
glUniformMatrix4fv updates a 4x4 uniform matrix at the location specified by MatrixID in a particular shader program.
I recommend working through Learning Modern 3D Graphics Programming, which is excellent as both a reference and guide.
For a discussion of how these uniforms are used within the GLSL shader program see:
Learning Modern 3D Graphics Programming - Chapter 3
Based on your code, you should do the following:
glm::mat4 ProjectionMatrix = getProjectionMatrix();
glm::mat4 ViewMatrix = getViewMatrix(); // the view matrix must be the inverse of the camera's world transform; glm::lookAt already returns that inverse, so don't invert it again
glm::mat4 ModelMatrix = glm::mat4(1.0);
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, glm::value_ptr(MVP)); // use glm::value_ptr to pass the matrix pointer
Also, to set up a proper camera matrix I would suggest GLM's built-in lookAt() method, which takes the eye, target, and up vectors and composes the final view matrix for you.
Related
I have been getting unexpected behavior while trying to rotate a basic cube. It may be helpful to know that translating the cube works correctly in the y and z directions. However, translating along the x axis is backwards (I negate only x to get proper results), and I haven't been able to figure out why.
Furthermore, rotating the cube has been a mess. Without any transform the cube appears correctly. Once I add a rotation transformation, the cube is not displayed until I change one of the x, y, z rotation values from 0 (putting all values back to 0 makes it disappear again). Once it appears, the cube won't rotate around whichever x, y, or z axis I first changed unless I change two or more of the coordinates. It also wobbles around its origin while rotating.
Below are snippets of my code that I believe have incorrect math.
/* Here's how I setup the matrices for a mvp matrix*/
proj = glm::perspective(glm::radians(90.0f), (960.0f / 540.0f), 0.1f, 400.0f);
view = glm::lookAt(glm::vec3(0, 0, -200), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
glm::mat4 model = glm::mat4(1.0f);
/* Here's how I transform the model matrix, note
translating works properly once the cube is visible*/
model = glm::translate(model, glm::vec3(-translation[0], translation[1], translation[2])); //negative x value
model = glm::rotate(model, 30.0f, rotation);
glm::mat4 mvp = proj * view * model;
shader->Bind();
shader->SetUniformMat4f("MVP", mvp);
renderer.Draw(*c_VAO, *c_EBO, *shader);
/* Here's how I use these values in my vertex shader */
layout(location = 0) in vec4 position;
...
uniform mat4 MVP;
...
void main()
{
gl_Position = MVP * position;
....
}
I've checked both the translation and rotation vectors values and they are as expected but I am still going mad trying to figure out this problem.
The unit of the angle of glm::rotate is radians. Use glm::radians to convert from degrees to radians:
model = glm::rotate(model, 30.0f, rotation); // wrong: 30 is interpreted as radians
model = glm::rotate(model, glm::radians(30.0f), rotation); // correct
I'm learning about computer graphics through modern OpenGL tutorials, and I'm having issues refactoring my code from GLM to a custom matrix math class. Using GLM I can achieve the desired effect (a pyramid-like shape rotating across the screen); however, using my math class I cannot get the correct transformation. For reference, this is what the transformation looks like with my math class. Using GLM, the shape translates along the x-axis (horizontally), which is what I want. I assume the GLSL code is correct, since it works with the GLM code.
In my Transform class, I have a function that returns a model matrix and which maps to the corresponding uniform variable in my shader class.
Matrix4f Transform::getModel() const // my math class - not working
{
Matrix4f transMat, rotMat, scaleMat;
transMat.initTranslation(trans.x, trans.y, trans.z);
rotMat.initRotation(rot.x, rot.y, rot.z);
scaleMat.initScale(scale.x, scale.y, scale.z);
return transMat * rotMat * scaleMat;
}
glm::mat4 Transform::getModel() const // glm - works fine
{
glm::mat4 transMat = glm::translate(glm::vec3(trans.x, trans.y, trans.z));
glm::mat4 scaleMat = glm::scale(glm::vec3(scale.x, scale.y, scale.z));
glm::mat4 rotX = glm::rotate(rot.x, glm::vec3(1.0, 0.0, 0.0));
glm::mat4 rotY = glm::rotate(rot.y, glm::vec3(0.0, 1.0, 0.0));
glm::mat4 rotZ = glm::rotate(rot.z, glm::vec3(0.0, 0.0, 1.0));
glm::mat4 rotMat = rotX * rotY * rotZ;
return transMat * rotMat * scaleMat;
}
I think the problem lies in my Matrix4<T> class, but there's quite a bit of code to show, so I will link it. The Matrix class is based on the tutorials I linked above.
P.S. If you're wondering why I'm using a custom math class instead of GLM, it is for learning purposes (I realize GLM is much better suited for this than my untested library).
Thanks for the helpful comments; all I needed to do was change the 3rd parameter of glUniformMatrix* to GL_TRUE, which makes OpenGL transpose my (row-major) transformation matrix on upload.
I've been learning OpenGL 3+ from various online resources and recently got confused about transformation (model) matrices. As far as I know, the proper order of multiplication is translationMatrix * rotationMatrix * scaleMatrix. If I understand correctly, the multiplication is applied backwards, so the scale is applied first, then the rotation, and lastly the translation.
I have a Transform class which stores the position, scale and origin as 2d vectors and rotation as a float. The method for calculating transformation matrix looks like this:
glm::mat4 Transform::getTransformationMatrix() const
{
glm::mat4 result = glm::mat4(1.0f);
result = glm::translate(result, glm::vec3(position, 0.0f));
result = glm::translate(result, glm::vec3(origin, 0.0f));
result = glm::rotate(result, rotation, glm::vec3(0, 0, 1));
result = glm::translate(result, glm::vec3(-origin, 0.0f));
result = glm::scale(result, glm::vec3(scale, 0.0f));
return result;
}
Here's the vertex shader code:
#version 330 core
layout(location = 0) in vec2 position;
uniform mat4 modelMatrix;
uniform mat4 projectionMatrix;
void main()
{
gl_Position = projectionMatrix * modelMatrix * vec4(position, 0.0, 1.0);
}
As you can see, I first translate and then rotate and scale, which is the opposite of what I have learnt. At first I had it the "proper" way (scale, rotate, translate), but then the sprite rotated around its initial position with a huge radius, not around the translated position, which is not what I want (I am making a 2d game with sprites). I don't understand why it works this way. Can someone explain? Do I have to keep separate methods for transform matrix calculations? Also, does it work the same in 3d space?
I have been attempting to rotate an object around its local coordinates and then move it based off of the rotated coordinates, but I have not been able to achieve the desired results.
To explain the problem in more depth: I have an object at a certain point in space, and I need to rotate it around its own origin (not the global origin) and then translate it along the newly rotated axes. After much experimenting, I have discovered that I can either rotate the object around its origin (but then the coordinate axes are not rotated with it), or have the object's local coordinates transform with it (but then it rotates around the global origin).
Currently my rotation/translation/scaling code looks like this:
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f), trans);
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(sx, sy, sz);
glm::vec3 myRotationAxis(0, 1, 0);
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), rot, myRotationAxis);
Model = myScalingMatrix * myRotationMatrix * myMatrix;
glm::mat4 MVP = Projection * View * Model;
I believe the problem is in this code, specifically the second line from the bottom, but I could be wrong and will post more code if it's needed.
I have also attempted to create an inverse matrix and use that at the start of the calculation, but that appears to do nothing (I can add the code I attempted this with if needed).
If any kind of elaboration is needed regarding this issue, feel free to ask and I will expand on the question.
Thanks.
EDIT 1:
Slightly modified code that was suggested in the answers section; it still gives the same bug though.
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(sx, sy, sz);
glm::vec3 myRotationAxis(0, 1, 0);
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), rot, myRotationAxis);
glm::vec4 trans(x, y, z, 1);
glm::vec4 vTrans = myRotationMatrix * trans;
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f), vTrans.x, vTrans.y, vTrans.z);
Model = myScalingMatrix * myRotationMatrix * myMatrix;
You need to apply your rotation matrix to the translation vector (trans).
So, assuming trans is a vec4, your code will be:
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(sx, sy, sz);
glm::vec3 myRotationAxis(0, 1, 0);
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), rot, myRotationAxis);
glm::vec4 vTrans = myRotationMatrix * trans;
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(vTrans)); // convert the vec4 to a vec3
Model = myScalingMatrix * myRotationMatrix * myMatrix;
glm::mat4 MVP = Projection * View * Model;
To complete the answer: if the model's center is not at (0,0,0), you will have to compute the bounds of your model and translate it by half of its size minus the bottom-left vertex. It's well explained here:
model local origin
Given the supplied code, this answer is the best available. If you want more details, supply some screenshots and details on your Projection and View matrix calculations.
How do I apply the drawing position in the world via shaders?
My vertex shader looks like this:
in vec2 position;
uniform mat4x4 model;
uniform mat4x4 view;
uniform mat4x4 projection;
void main() {
gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
}
Where position is the positions of the vertexes of the triangles.
I'm binding the matrices as follows.
view:
glm::mat4x4 view = glm::lookAt(
glm::vec3(0.0f, 1.2f, 1.2f), // camera position
glm::vec3(0.0f, 0.0f, 0.0f), // camera target
glm::vec3(0.0f, 0.0f, 1.0f)); // camera up axis
GLint view_uniform = glGetUniformLocation(shader, "view");
glUniformMatrix4fv(view_uniform, 1, GL_FALSE, glm::value_ptr(view));
projection:
glm::mat4x4 projection = glm::perspective(80.0f, 640.0f/480.0f, 0.1f, 100.0f);
GLint projection_uniform = glGetUniformLocation(shader, "projection");
glUniformMatrix4fv(projection_uniform, 1, GL_FALSE, glm::value_ptr(projection));
model transformation:
glm::mat4x4 model;
model = glm::translate(model, glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, static_cast<float>((glm::sin(currenttime)) * 360.0), glm::vec3(0.0, 0.0, 1.0));
GLint trans_uniform = glGetUniformLocation(shader, "model");
glUniformMatrix4fv(trans_uniform, 1, GL_FALSE, glm::value_ptr(model));
So this way I have to compute the position transformation each frame on the CPU. Is this the recommended way or is there a better one?
So this way I have to compute the position transformation each frame on the CPU. Is this the recommended way or is there a better one?
Yes. Calculating a new transform once per mesh on the CPU, and then applying it to all of that mesh's vertices in the vertex shader, is not a slow operation, and it needs to be done every frame.
In the render() method you usually do the following things:
- create the matrix for the camera (usually once per frame)
- for each object in the scene:
  - create its transformation (position) matrix
  - draw the object
The projection matrix can be created once per window resize, or together with the camera matrix.
Answer: your code is good; it is the basic way to draw and update objects. You could move to a framework/system that manages this automatically. You should not worry (right now) about the performance of those matrix creation procedures; they are very fast. Drawing is the more expensive part.
As jozxyqk wrote in one comment, you can create a combined ModelViewProjection matrix and send one matrix instead of three different ones.