I'm trying to make a 2D camera in OpenGL using glm::lookAt(). The problem is that once everything is rendered, I can't move the camera. I'm only trying to move it horizontally.
glm::mat4 projection = glm::ortho(0.0f, static_cast<GLfloat>(this->Width), static_cast<GLfloat>(this->Height), 0.0f, 0.1f, 500.0f);
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
This is the only thing that needs to change between frames: update cameraPos, then rebuild the view matrix from it. So, something like:
void Update()
{
    static glm::vec3 cameraPos(0, 0, -1);
    cameraPos.x += 0.1f;
    // ... etc
}
(Though, I'd suggest creating a Camera class, or at least storing the vec3 outside of this method, for anything beyond experimentation.) Everything else should work fine.
Typically, you'd also want to take into account the time difference between frames. There are many ways to measure this - whatever GL framework you're using probably provides a function for it - but you're probably running at 60Hz, so assume the time between frames is about 16.6ms. In which case, you might do something like:
float velocity = 10; // units per second
glm::vec3 cameraPos(0, 0, -3);
float deltaT = 16.6e-3f; // ~16.6 milliseconds

void Update()
{
    cameraPos.x += velocity * deltaT;
    glm::lookAt(cameraPos, .....); // rebuild the view matrix with the new position
}
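For completeness, here's roughly how that might fit together with the vectors from the question (just a sketch; how deltaT is measured and how the view matrix reaches the shader depend on your setup):

glm::vec3 cameraPos(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp(0.0f, 1.0f, 0.0f);
float velocity = 10.0f; // units per second

void Update(float deltaT) // deltaT = seconds since the last frame
{
    cameraPos.x += velocity * deltaT; // horizontal movement only
    glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
    // upload 'view' (usually combined with the projection) to the shader each frame
}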
You might run into a situation where everything renders the first frame, then disappears. In that case, drop the velocity to zero to make sure everything is working as it was before, then try some really small velocity values (like 0.001). It depends on how big your geometry is, the distance from the camera, and a bunch of other stuff. A camera Z of -3 is pretty close; you might try backing it off some more while you're working this out.
Good luck!
I am trying to create a 2D, top-down style camera in OpenGL. I would like to stick to the convention of using model-view-projection matrices, so that I can switch between a 3D view and a top-down view while the application runs. I am using the glm::lookAt method to create the view matrix.
However, there is something missing in my understanding. I am rendering a triangle on the screen, very close to [this tutorial][1], and that works perfectly fine (so no problems with windowing, display loops, vertex buffers, etc.). The triangle is centered at (0, 0), and its vertices are at -0.5/0.5 (so already in NDC).
I then added a uniform mat4 mpv; to the vertex shader. If I set the mpv matrix to:
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
I get the same, unmodified triangle as expected, since these are (from my understanding) the default values for OpenGL.
Now I thought that if I changed the Z value of the camera position it would have the same effect as zooming in and out, but all I get is the clear color; no triangle is rendered.
// Trying to simulate zoom in and out by changing z value of camera
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -3.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
So I printed the view matrix, and noticed that all I was doing was translating the Z value, which makes sense.
I then added an ortho projection matrix, to make sure everything is in NDC, but I still get nothing.
// *2 because I'm on a Mac/high-res screen and the framebuffer scale is 2.
// Doing projection * view in one step and just updating view uniform until I get it working.
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
Where is my misunderstanding taking place? I would like to:
Simulate a top down view where I can zoom in and out on the target.
Create a 2D camera that follows a target (racing car), so the camera_pos XY and target_pos XY will be the same.
Eventually add an option to switch to a 3D following camera, like a standard racing game 3rd person view, hence the MPV vs just using simple translations.
[1]: https://learnopengl.com/Getting-started/Hello-Triangle
The vertex coordinates are in the range [-0.5, 0.5], but the orthographic projection maps the cuboid volume with left, bottom, near point (0, 0, 0.1) and right, top, far point (800.0 * 2, 600.0 * 2, 100) onto the viewport.
Therefore, the triangle mesh just covers one fragment in the lower left of the viewport.
Change the orthographic projection from:
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
to:
view = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.1f, 100.0f) * view;
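Note that with an orthographic projection, moving the camera along Z has no visible effect (apart from near/far clipping), so the zoom you are after comes from scaling the ortho extents instead. A rough sketch (the zoom factor and extents here are illustrative; with zoom = 1 this reduces to the [-1, 1] projection above):

float zoom = 2.0f;                 // > 1 zooms in, < 1 zooms out
float halfWidth = 1.0f / zoom;
float halfHeight = 1.0f / zoom;
glm::mat4 projection = glm::ortho(-halfWidth, halfWidth, -halfHeight, halfHeight, 0.1f, 100.0f);
view = projection * view;          // same "projection * view" combination as above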
I have a problem with my ray generation that I do not understand. The direction of my ray is computed incorrectly. I ported this code from DirectX 11, where it works fine, to Vulkan, so I was surprised I could not get it to work:
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1);
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - camPos.xyz);
Yet this code works perfectly:
vec4 nearPos = inverseViewProj * vec4(screenPos, 0, 1);
nearPos /= nearPos.w;
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1);
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - nearPos.xyz);
[Edit] Matrix and camera positions are set like this:
const glm::mat4 clip(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f, 0.0f, 0.0f, 0.5f, 1.0f);
projMatrix = clip * glm::perspectiveFov(FieldOfView, float(ViewWidth), float(ViewHeight), NearZ, FarZ);
viewMatrix = glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = viewMatrix[3];
[Edit2] What I see on screen is correct if I start at the origin. However, if I move left, for example, it looks as if I am moving right. All my rays seem to be perturbed. In some cases, strafing the camera looks as if I am orbiting around a different point in space. I assume the camera position is not equal to the singularity of my perspective matrix, yet I cannot figure out why.
I think I am misunderstanding something basic. What am I missing?
Thanks to the comments I have found the problem. I was building my view matrix incorrectly, in the exact same way as in this post:
glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
This is equivalent to translating first and then rotating, which of course leads to something unwanted. In addition, the Position was negated, and camPos was taken from the last column of the view matrix instead of the inverse view matrix, which is wrong.
It was not noticeable with my fractal raycaster simply because I never moved far from the origin. That, and the fact that there is no point of reference in such an environment.
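For reference, a corrected version might look roughly like this (a sketch; it assumes Position and Rotation describe the camera's placement in world space):

glm::mat4 cameraWorld = glm::translate(glm::mat4(1.0f), Position) * glm::toMat4(Rotation); // orient the camera, then place it at Position
viewMatrix = glm::inverse(cameraWorld);
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = glm::vec4(Position, 1.0f); // equivalently, the last column of the inverse view matrix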
I'm trying to implement a camera all by myself in OpenGL (I use GLFW and GLM).
For now I don't have a class for it; I will create one later. Here is my attempt at coding the camera movements. It works fine with simple mouse movements, but otherwise the camera tilts sideways. I'm still new to OpenGL, so I don't have a lot to show, but my problem is illustrated here: http://imgur.com/a/p9xXQ
I have a few (global, for now) variables:
float lastX = 0.0f, lastY = 0.0f, yaw = 0.0f, pitch = 0.0f;
glm::vec3 cameraPos(0.0f, 0.0f, 3.0f);
glm::vec3 cameraUp(0.0f, 1.0f, 0.0f); // As a reminder, x points to the right, y points upwards and z points towards you
glm::vec3 cameraFront(0.0f, 0.0f, -1.0f);
With these, I can create a view matrix this way:
glm::mat4 view;
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
I want to be able to rotate my camera horizontally (yaw) and vertically (pitch), i.e. look up, down, left and right on my screen. For this, it should be enough to rotate the cameraFront and cameraUp vectors appropriately and then update the view matrix with the updated vectors.
My cursor position callback (together with the rotation helper it uses) looks like this:
glm::vec3 rotateAroundAxis(glm::vec3 toRotate, float angle, glm::vec3 axisDirection, glm::vec3 axisPoint) { // angle in radians
    toRotate -= axisPoint;
    glm::mat4 rotationMatrix(1.0f);
    rotationMatrix = glm::rotate(rotationMatrix, angle, axisDirection);
    glm::vec4 result = rotationMatrix * glm::vec4(toRotate, 1.0f);
    toRotate = glm::vec3(result.x, result.y, result.z);
    toRotate += axisPoint;
    return toRotate;
}

void mouseCallback(GLFWwindow* window, double xpos, double ypos) {
    const float maxPitch = float(M_PI) - float(M_PI) / 180.0f;
    glm::vec3 cameraRight = -glm::cross(cameraUp, cameraFront);
    float xOffset = xpos - lastX;
    float yOffset = ypos - lastY;
    lastX = xpos;
    lastY = ypos;
    float sensitivity = 0.0005f;
    xOffset *= sensitivity;
    yOffset *= sensitivity;
    yaw += xOffset; // useless here
    pitch += yOffset;
    if (pitch > maxPitch) {
        yOffset = 0.0f;
    }
    if (pitch < -maxPitch) {
        yOffset = 0.0f;
    }
    cameraFront = rotateAroundAxis(cameraFront, -xOffset, cameraUp, cameraPos);
    cameraFront = rotateAroundAxis(cameraFront, -yOffset, cameraRight, cameraPos);
    cameraUp = rotateAroundAxis(cameraUp, -yOffset, cameraRight, cameraPos);
}
As I said, it works fine for simple up-down, left-right camera movements, but when I start to move my mouse in circles or like a madman, the camera starts to rotate longitudinally (roll).
I've tried to force cameraRight.y = cameraPos.y so that the cameraRight vector doesn't tilt upwards/downwards due to numerical errors, but it doesn't solve the problem. I've also tried adding a (global) cameraRight vector to keep track of it instead of computing it every time, so that the end of the function looks like this:
cameraFront = rotateAroundAxis(cameraFront, -xOffset, cameraUp, cameraPos);
cameraRight = rotateAroundAxis(cameraRight, -xOffset, cameraUp, cameraPos);
cameraFront = rotateAroundAxis(cameraFront, -yOffset, cameraRight, cameraPos);
cameraUp = rotateAroundAxis(cameraUp, -yOffset, cameraRight, cameraPos);
but it doesn't solve the problem. Any advice?
It seems you have the global X-axis to the right, the Y-axis going deep into the screen, and the Z-axis going up, and your local camera axis system is similar.
The desired behaviour is to rotate the camera about its current position: left-right mouse movement is a rotation around the global Z axis, and up-down mouse movement is a rotation around the local X axis. Think a bit about these rotations until you understand them well, and why one is around a global direction while the other is around a local one. Imagining a security camera and its movements helps to visualize the axis systems and rotations.
The goal is to get the parameters used to define the view transformation by the lookAt function.
First, rotate around the local X axis. We convert this local vector into the global axis system by inverting the current view matrix, which you call view:
glm::vec3 currGlobalX = glm::normalize(glm::vec3(glm::inverse(view) * glm::vec4(1.0f, 0.0f, 0.0f, 0.0f)));
We need to rotate not only the cameraUp vector, but also the current target, defined in global coordinates, which is what you call cameraPos + cameraFront:
cameraUp = rotateAroundAxis(cameraUp, -yOffset, currGlobalX, glm::vec3(0.0f, 0.0f, 0.0f)); // vector, no translation needed
cameraUp = glm::normalize(cameraUp);
currentTarget = rotateAroundAxis(currentTarget, -yOffset, currGlobalX, cameraPos); // point, needs the translation
Now rotate around the global Z axis:
cameraUp = rotateAroundAxis(cameraUp, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f)); // vector, no translation needed
cameraUp = glm::normalize(cameraUp);
currentTarget = rotateAroundAxis(currentTarget, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), cameraPos); // point, needs the translation
Finally, update view:
view = glm::lookAt(cameraPos, currentTarget, cameraUp);
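Putting the pieces together, the callback could look roughly like this (just a sketch: it assumes view and a currentTarget initialized to cameraPos + cameraFront are accessible here, and it reuses the rotateAroundAxis helper from the question):

void mouseCallback(GLFWwindow* window, double xpos, double ypos) {
    const float sensitivity = 0.0005f;
    float xOffset = float(xpos - lastX) * sensitivity;
    float yOffset = float(ypos - lastY) * sensitivity;
    lastX = xpos;
    lastY = ypos;

    // Current local X axis of the camera, expressed in global coordinates.
    glm::vec3 currGlobalX = glm::normalize(glm::vec3(glm::inverse(view) * glm::vec4(1.0f, 0.0f, 0.0f, 0.0f)));

    // Up-down mouse movement: rotate around the local X axis.
    cameraUp = glm::normalize(rotateAroundAxis(cameraUp, -yOffset, currGlobalX, glm::vec3(0.0f)));
    currentTarget = rotateAroundAxis(currentTarget, -yOffset, currGlobalX, cameraPos);

    // Left-right mouse movement: rotate around the global Z axis.
    cameraUp = glm::normalize(rotateAroundAxis(cameraUp, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f)));
    currentTarget = rotateAroundAxis(currentTarget, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), cameraPos);

    view = glm::lookAt(cameraPos, currentTarget, cameraUp);
}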
I've been stuck on this for two days now, and I'm unsure where else to look. I'm rendering two 3D cubes using OpenGL, and trying to apply a local rotation to each cube in the scene in response to pressing a button.
I've got to the point where my cubes rotate in 3D space, but they're both rotating about the world-space origin instead of their own local origins.
(couple second video)
https://www.youtube.com/watch?v=3mrK4_cCvUw
After scouring the internet, the appropriate formula for calculating the MVP is as follows:
auto const model = TranslationMatrix * RotationMatrix * ScaleMatrix;
auto const modelview = projection * view * model;
Each of my cubes has its own "model", which is defined as follows:
struct model
{
    glm::vec3 translation;
    glm::quat rotation;
    glm::vec3 scale = glm::vec3{1.0f};
};
When I press a button on my keyboard, I create a quaternion representing the new angle and multiply it with the previous rotation quaternion, updating it in place.
The function looks like this:
template<typename TData>
void rotate_entity(TData &data, ecst::entity_id const eid, float const angle,
                   glm::vec3 const& axis) const
{
    auto &m = data.get(ct::model, eid);
    auto const q = glm::angleAxis(glm::degrees(angle), axis);
    m.rotation = q * m.rotation;
    // I'm a bit unsure on this last line above; I've also tried the following without fully understanding the difference:
    // m.rotation = m.rotation * q;
}
The axis is provided by the user like so:
// inside user-input handling function
float constexpr ANGLE = 0.2f;
...
// y-rotation
case SDLK_u: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, 1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
case SDLK_i: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, -1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
My GLSL vertex shader is pretty straightforward, based on what I've found in the example code out there:
// attributes input to the vertex shader
in vec4 a_position; // position value
// output of the vertex shader - input to fragment
// shader
out vec3 v_uv;
uniform mat4 u_mvmatrix;
void main()
{
    gl_Position = u_mvmatrix * a_position;
    v_uv = vec3(a_position.x, a_position.y, a_position.z);
}
Inside my draw code, the exact code I'm using to calculate the MVP for each cube is:
...
auto const& model = shape.model();
auto const tmatrix = glm::translate(glm::mat4{}, model.translation);
auto const rmatrix = glm::toMat4(model.rotation);
auto const smatrix = glm::scale(glm::mat4{}, model.scale);
auto const mmatrix = tmatrix * rmatrix * smatrix;
auto const mvmatrix = projection * view * mmatrix;
// simple wrapper that does logging and forwards to glUniformMatrix4fv()
p.set_uniform_matrix_4fv(logger, "u_mvmatrix", mvmatrix);
Earlier in my program, I calculate my view/projection matrices like so:
auto const windowheight = static_cast<GLfloat>(hw.h);
auto const windowwidth = static_cast<GLfloat>(hw.w);
auto projection = glm::perspective(60.0f, (windowwidth / windowheight), 0.1f, 100.0f);
auto view = glm::lookAt(
    glm::vec3(0.0f, 0.0f, 1.0f),  // camera position
    glm::vec3(0.0f, 0.0f, -1.0f), // look-at target
    glm::vec3(0.0f, 1.0f, 0.0f)); // "up" vector
The positions of my cubes in world-space are on the Z axis, so they should be visible:
cube0.set_world_position(0.0f, 0.0f, 0.0f, 1.0f);
cube1.set_world_position(-0.7f, 0.7f, 0.0f, 1.0f);
// set_world_position() is called exactly once, before my game enters its main loop,
// and never again after that. It just modifies the vertex used as the center of the
// shape; it doesn't modify the model matrix at all.
So, my question is: is this the appropriate way to update a rotation for an object?
Should I be storing a quaternion directly in my object's "model"?
Should I be storing my translation and scaling as separate vec3's?
Is there an easier way to do this? I've been reading and re-reading anything I can find, but I don't see anyone doing this in the same way.
This tutorial is a bit short on details, specifically how to apply a rotation to an existing rotation (I believe this is just multiplying the quaternions together, which is what I'm doing inside rotate_entity(...) above).
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-17-quaternions/
https://github.com/opengl-tutorials/ogl/blob/master/tutorial17_rotations/tutorial17.cpp#L306-L311
Does it make more sense to store the resulting "MVP" matrix myself as my "model" and apply glm::translate/glm::scale/glm::rotate operations on that matrix directly? (I tried this last option earlier, but I couldn't figure out how to get it to work either.)
Thanks!
edit: better link
Generally, you don't want to modify the positions of your model's individual vertices on the CPU; that's the entire purpose of the vertex program. The purpose of the model matrix is to position the model in the world, and that happens in the vertex program.
To rotate a model around its center, you need to first move the center to the origin, then rotate it, then move the center to its final position. So let's say you have a cube that stretches from (0,0,0) to (1,1,1). You need to:
Translate the cube by (-0.5, -0.5, -0.5)
Rotate by the angle
Translate the cube by (0.5, 0.5, 0.5)
Translate the cube to wherever it belongs in the scene
You can combine the last 2 translations into a single one, and of course, you can collapse all of these transformations into a single matrix that is your model matrix.
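In GLM, that composition might look roughly like this for the (0,0,0) to (1,1,1) cube above (a sketch; the 0.5 offsets are specific to that example, and "model" stands for the per-cube struct from the question):

glm::mat4 toOrigin = glm::translate(glm::mat4(1.0f), glm::vec3(-0.5f));                   // step 1: move the cube's center to the origin
glm::mat4 rotation = glm::toMat4(model.rotation);                                         // step 2: rotate about that center
glm::mat4 place    = glm::translate(glm::mat4(1.0f), glm::vec3(0.5f) + model.translation); // steps 3 + 4 combined
glm::mat4 modelMatrix = place * rotation * toOrigin;                                       // applied right-to-left to each vertex

The scale matrix from the question would slot in next to the rotation (place * rotation * smatrix * toOrigin), so scaling also happens about the cube's center.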
In my code I am trying to run two (at the moment, probably more in the future) matrix transformations on my world matrix.
Like so:
D3DXMatrixRotationY(&worldMatrix, rotation);
D3DXMatrixTranslation(&worldMatrix, 0.0f, -1.0f, 0.0f);
where rotation is a changing float and worldMatrix is a D3DXMATRIX. My problem is that only the last line of code in the transformation statements works. So in the above code, the worldMatrix would get translated, but not rotated. But if I switch the order of the two statements, the worldMatrix will get rotated, but not translated. However, I played around with it, and this code works just fine:
D3DXMatrixRotationY(&worldMatrix, rotation);
D3DXMATRIX temp = worldMatrix;
D3DXMatrixTranslation(&worldMatrix, 0.0f, -1.0f, 0.0f);
worldMatrix *= temp;
After this, the worldMatrix is translated and rotated. Why doesn't it work if I only use the variables and don't include the temp matrix? Thank you!!
D3DXMatrixTranslation takes an output parameter as its 1st parameter. The created matrix is written to that matrix, overwriting the elements already present in it. The matrices are not automatically multiplied by that call.
Your new code is fine; you could also write it like this:
D3DXMATRIX rot;
D3DXMATRIX trans;
D3DXMatrixRotationY(&rot, rotation);
D3DXMatrixTranslation(&trans, 0.0f, -1.0f, 0.0f);
// With D3DX's row-vector convention, rot * trans applies the rotation first, then the translation.
D3DXMATRIX world = rot * trans;