I'm trying to write an orbital camera (based on glm::quat) for my OpenGL application.
I have a few questions:
Can I build the ViewMatrix from a RotationMatrix plus the camera's position?
camera_quat = glm::quat(glm::vec3(tmp_pitch, tmp_yaw, 0)) * camera_quat;
float pitch = glm::pitch(camera_quat); // glm's free functions; quat has no .pitch()/.yaw() members
float yaw = glm::yaw(camera_quat);
glm::mat4 rotate = glm::mat4_cast(camera_quat);
glm::vec3 view_direction(cos(yaw) * cos(pitch), sin(pitch), -sin(yaw) * cos(pitch));
camera_position = target - view_direction * radius;
glm::mat4 translate = glm::translate(camera_position);
glm::mat4 view_matrix = ???;
Is this line correct?
glm::vec3 view_direction(cos(yaw) * cos(pitch), sin(pitch), -sin(yaw) * cos(pitch));
P.S. Sorry if my English is bad; it is not my native language, I am Russian.
I hope you can help me. Thank you in advance!
If you change the translate matrix to
glm::mat4 translate = glm::translate(-camera_position);
then it is simply
glm::mat4 view_matrix = rotate * translate;
However, there is an easier way to get there. What you basically want to do is the following: move the camera to the target, rotate it there, then move it back a bit. This can be expressed in matrix form as (note that the view matrix is the inverse of the camera's model transform):
view_matrix = glm::translate(glm::vec3(0, 0, -radius)) * rotate * glm::translate(-target);
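Putting the pieces together, here is a minimal sketch of the whole computation under that formula; it assumes <glm/gtx/transform.hpp> for the one-argument glm::translate and that camera_quat holds the view-space rotation, as in the question:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/transform.hpp>

glm::mat4 OrbitViewMatrix(const glm::quat& camera_quat,
                          const glm::vec3& target, float radius)
{
    // Move to the target, rotate there, then back the camera off by radius.
    // The view matrix is the inverse of the camera's model transform.
    glm::mat4 rotate = glm::mat4_cast(camera_quat);
    return glm::translate(glm::vec3(0.f, 0.f, -radius)) * rotate *
           glm::translate(-target);
}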
I am working on an OpenGL project in Visual Studio. I'm trying to rotate the camera around the X and Y axes.
That's the math I should use.
I'm having trouble because I'm using glm::lookAt for the camera position, and it takes glm::vec3 as arguments.
Can someone explain how I can implement this in OpenGL?
P.S.: I can't use quaternions.
The lookAt function takes three inputs:
vec3 cameraPosition
vec3 cameraLookAt
vec3 cameraUp
From my past experience: if you want to move the camera, first find the transform matrix of the movement, then apply that matrix to these three vectors. The result is three new vec3 values, which are your new inputs to the lookAt function.
glm::vec3 newCameraPosition = glm::vec3(movementMat4 * glm::vec4(cameraPosition, 1.0f));
// same for the other two (use w = 0.0f for the up vector, since it is a direction, not a point)
Another approach is to find the inverse of the movement you want the camera to make and apply it to the whole scene, since moving the camera is equivalent to applying the inverse movement to every object while keeping the camera fixed :)
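A minimal sketch of the first approach, assuming the movement is a rotation about the world Y axis (the angle and the three camera vectors are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 RotateCameraY(float angle, // radians
                        glm::vec3 cameraPosition,
                        glm::vec3 cameraLookAt,
                        glm::vec3 cameraUp)
{
    // The movement: a rotation around the world Y axis.
    glm::mat4 movement = glm::rotate(glm::mat4(1.0f), angle,
                                     glm::vec3(0.0f, 1.0f, 0.0f));
    // Points transform with w = 1, directions with w = 0.
    cameraPosition = glm::vec3(movement * glm::vec4(cameraPosition, 1.0f));
    cameraLookAt   = glm::vec3(movement * glm::vec4(cameraLookAt,  1.0f));
    cameraUp       = glm::vec3(movement * glm::vec4(cameraUp,      0.0f));
    return glm::lookAt(cameraPosition, cameraLookAt, cameraUp);
}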
Below, the camera is rotated around the y-axis (it orbits in the xz-plane while looking at the origin).
const float radius = 10.0f;
float camX = sin(time) * radius;
float camZ = cos(time) * radius;
glm::vec3 cameraPos = glm::vec3(camX, 0.0, camZ);
glm::vec3 objectPos = glm::vec3(0.0, 0.0, 0.0);
glm::vec3 up = glm::vec3(0.0, 1.0, 0.0);
glm::mat4 view = glm::lookAt(cameraPos, objectPos, up);
Check out https://learnopengl.com/, it's a great site to learn!
I'm trying to build a raytracer, and I am using this article as a guide for the camera system.
The problem is that, after calculating the ray direction in camera space, I multiply it by the camera-to-world transformation matrix, and my camera seems to rotate in the wrong (opposite) direction; it works correctly only if I invert the transformation matrix before the multiplication.
Here is the code (I use glm library and right-handed coordinate system).
Initial data:
glm::vec3 origin_ = glm::vec3(0.f);// camera origin
const glm::vec3 kDirection = glm::vec3(0.f, 0.f, -1.f);
const glm::vec3 kUp = glm::vec3(0.f, 1.f, 0.f);
float aspect_ratio_ = (float)raster_height_ / raster_width_;
// bug !!! rotates in opposite direction (camera is actually tilted down)
glm::mat4 camera_to_world_ = glm::lookAtRH(origin_, glm::vec3(0.f, 0.2f, -1.f), kUp);
// works !!! (camera is tilted up)
glm::mat4 camera_to_world_ = glm::inverse(glm::lookAtRH(origin_, glm::vec3(0.f, 0.2f, -1.f), kUp));
And the function that generates camera rays:
// Calculate ray as if camera is located at 0,0,0 and pointing into negative z direction
// Then transform the ray direction to the desired place
// x,y - pixel coordinates of raster image
// calculate as if raster image (screen) is 1.0 unit away from origin (eye)
Ray Camera::GenRay(const uint32_t x, const uint32_t y) {
glm::vec3 ray_direction = kDirection;
// from raster space to NDC space
glm::vec2 pixel_ndc((x + 0.5f) / raster_width_, (y + 0.5f) / raster_height_);
// from NDC space to camera space
float scale = tan(fov_ / 2.0f);
ray_direction.x = (2.0f * pixel_ndc.x - 1.0f) * scale; // *aspect_ratio_;
ray_direction.y = (1.0f - 2.0f * pixel_ndc.y) * scale * aspect_ratio_;
// apply camera-to-world rotation matrix to direction
ray_direction = glm::vec3(camera_to_world_ * glm::vec4(ray_direction, 0.0f)); // w = 0: transform as a direction
return Ray(origin_, ray_direction, Ray::Type::kPrimary);
}
I really can't understand the root of the problem, so any help is appreciated.
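For what it's worth, glm::lookAtRH builds a view matrix, i.e. a world-to-camera transform, so inverting it to get camera-to-world is the expected fix rather than a workaround. Under that assumption, a minimal sketch of the same camera-to-world matrix built directly from the camera's basis vectors:
#include <glm/glm.hpp>

glm::mat4 CameraToWorld(const glm::vec3& origin,
                        const glm::vec3& direction, // normalized view direction
                        const glm::vec3& up)
{
    // Columns are the camera's basis vectors expressed in world space,
    // plus the camera origin; this equals inverse(lookAtRH(...)).
    glm::vec3 back    = -glm::normalize(direction);
    glm::vec3 right   = glm::normalize(glm::cross(up, back));
    glm::vec3 true_up = glm::cross(back, right);
    glm::mat4 m(1.0f);
    m[0] = glm::vec4(right,   0.0f);
    m[1] = glm::vec4(true_up, 0.0f);
    m[2] = glm::vec4(back,    0.0f);
    m[3] = glm::vec4(origin,  1.0f);
    return m;
}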
I am trying to render a 3D model using OpenGL, and for the projection and transformation matrices I am using glm. I've got my model on the screen and it works just as I intended, except for one small problem.
I am setting the model's translation matrix as
glm::translate(glm::vec3(0, 0, 4))
to move the model a little bit forward so that I can see it. Since in OpenGL, by default, negative z is out towards the 'camera' and positive z is forward, I expected this to work but it doesn't. It only works if I set it to
glm::translate(glm::vec3(0, 0, -4))
But this seems weird to me, as I am setting my zNear to 0.01 and zFar to 1000. Are glm's z-values flipped, or am I doing something wrong here?
Here is my code:
glm::mat4 rotation = glm::mat4(1.0f);
glm::mat4 translation = glm::translate(glm::vec3(0, 0, -4));
glm::mat4 scale = glm::mat4(1.0f);
glm::mat4 modelMatrix = translation * rotation * scale;
glm::mat4 projectionMatrix = glm::perspective(70.0f, aspectRatio, 0.01f, 1000.0f);
glm::mat4 transformationMatrix = projectionMatrix * modelMatrix;
When you call perspective() with near = 0.01 and far = 1000.0, those planes actually lie at z = -0.01 and z = -1000.0 in view space, so you should put the object's z-value into the range [-1000.0, -0.01].
Imagine a right-handed coordinate system and assume your eye sits at z = 0.0 by default, looking down the negative z-axis.
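A minimal sketch of that relationship, using the 70-degree field of view from the question (note that newer glm expects the angle in radians; the aspect ratio is a placeholder):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

float aspectRatio = 16.0f / 9.0f; // placeholder
glm::mat4 projectionMatrix = glm::perspective(glm::radians(70.0f),
                                              aspectRatio, 0.01f, 1000.0f);
// View-space z must land in [-1000.0, -0.01] to be visible:
glm::mat4 visible = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, -4)); // drawn
glm::mat4 culled  = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0,  4)); // behind the eye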
After reading through this article (http://3dgep.com/?p=1700), it seems I got my view matrix wrong. Here's how I compute the view matrix:
Mat4 Camera::Orientation() const
{
Quaternion rotation;
rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
return glm::toMat4(rotation);
}
Mat4 Camera::GetViewMatrix() const
{
return Orientation() * glm::translate(Mat4(1.0f), -mTranslation);
}
Supposedly, I am to invert this resulting matrix, but I have not done so, and it has worked excellently thus far; I'm not doing any inverting further down the pipeline either. Is there something I am missing here?
You already did the inversion. The view matrix is the inverse of the model transformation that positions the camera. This is:
ModelCamera = Translation(position) * Rotation
So the inverse is:
ViewMatrix = (Translation(position) * Rotation)^-1
= Rotation^-1 * Translation(position)^-1
The translation is inverted by negating the offset:
= Rotation^-1 * Translation(-position)
This leaves us with inverting the rotation. We can assume that the rotation returned by Orientation() is already this inverse. Thus, the original rotation of the camera model is
Rotation^-1 = RotationX(verticalAngle) * RotationY(horizontalAngle)
Rotation = (RotationX(verticalAngle) * RotationY(horizontalAngle))^-1
= RotationY(horizontalAngle)^-1 * RotationX(verticalAngle)^-1
= RotationY(-horizontalAngle) * RotationX(-verticalAngle)
So the angles you specify are actually the inverted angles that would rotate the camera. If you increase horizontalAngle, the camera should turn to the right (assuming a right-handed coordinate system). That's just a matter of definitions.
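As a sanity check, a small sketch building the view matrix both ways; assuming the question's Mat4/Quaternion/Vec3 aliases are plain glm types, the two results should match up to floating-point error:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 ViewFromAngles(float verticalAngle, float horizontalAngle,
                         const glm::vec3& position)
{
    // Orientation() from the question: already the inverse rotation.
    glm::quat r = glm::angleAxis(verticalAngle,   glm::vec3(1.f, 0.f, 0.f)) *
                  glm::angleAxis(horizontalAngle, glm::vec3(0.f, 1.f, 0.f));
    glm::mat4 view = glm::mat4_cast(r) *
                     glm::translate(glm::mat4(1.0f), -position);

    // Equivalent: invert the camera's model transform explicitly.
    glm::mat4 model = glm::translate(glm::mat4(1.0f), position) *
                      glm::mat4_cast(glm::inverse(r));
    // view and glm::inverse(model) are the same matrix (up to epsilon).
    return view;
}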
I have been attempting to rotate an object around its local coordinates and then move it along the rotated axes, but I have not been able to achieve the desired results.
To explain the problem in more depth: I have an object at a certain point in space, and I need to rotate it around its own origin (not the global origin) and then translate it along the newly rotated axes. After much experimenting, I have discovered that I can either rotate the object around its origin (but then its coordinate axes are not rotated with it), or have the object's local axes transform with it (but then it rotates around the global origin).
Currently my rotation/translation/scaling code looks like this:
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f), trans);
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(glm::vec3(sx, sy, sz));
glm::vec3 myRotationAxis(0, 1, 0);
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), rot, myRotationAxis);
Model = myScalingMatrix * myRotationMatrix * myMatrix;
glm::mat4 MVP = Projection * View * Model;
I believe the problem is in this code, specifically the second line from the bottom, but I could be wrong, and I will post more code if it's needed.
I have also attempted to create an inverse matrix and use that at the start of the calculation, but that appears to do nothing (I can add the code I attempted this with if needed).
If any kind of elaboration is needed regarding this issue, feel free to ask and I will expand on the question.
Thanks.
EDIT 1:
Slightly modified code that was suggested in the answers section; it still gives the same bug, though.
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(glm::vec3(sx, sy, sz));
glm::vec3 myRotationAxis(0, 1, 0);
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), rot, myRotationAxis);
glm::vec4 trans(x, y, z, 1);
glm::vec4 vTrans = myRotationMatrix * trans;
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(vTrans));
Model = myScalingMatrix * myRotationMatrix * myMatrix;
You need to apply your rotation matrix to the translation vector (trans).
So, assuming trans is a vec4, your code will be:
glm::mat4 Model = glm::mat4(1.f);
glm::mat4 myScalingMatrix = glm::scale(glm::vec3(sx, sy, sz));
glm::vec3 myRotationAxis(0, 1, 0);
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), rot, myRotationAxis);
glm::vec4 vTrans = myRotationMatrix * trans;
glm::mat4 myMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(vTrans)); // converts the vec4 to vec3
Model = myScalingMatrix * myRotationMatrix * myMatrix;
glm::mat4 MVP = Projection * View * Model;
So to complete the answer: if the model's center is not at (0,0,0), you will have to compute the bounds of your model and translate it so that its bounding-box center moves to the local origin, as sketched below.
It's well explained here:
model local origin
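Here is a minimal sketch of that centering step, assuming you already have the model's axis-aligned bounds:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Shift the model so its bounding-box center sits at the local origin;
// rotations then happen around the model itself, not the global origin.
glm::mat4 CenterModel(const glm::vec3& boundsMin, const glm::vec3& boundsMax)
{
    glm::vec3 center = boundsMin + (boundsMax - boundsMin) * 0.5f;
    return glm::translate(glm::mat4(1.0f), -center);
}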
According to the supplied code, this answer is the best available. If you want more details, supply some screenshots and details of your Projection and View matrix calculations.