How to make 2D zoom in OpenGL (GLFW, glad)? - C++

I'm trying to implement a simple paint program and now I have a problem with zoom: I can't figure out how to do it. I tried to adapt the code from here, but it didn't work; I just get a black screen. What is my problem?
I'm not using GLUT or GLEW!
Here is my camera code:
.h
class Camera2d
{
public:
    Camera2d(const glm::vec3& pos = glm::vec3(0.f, 0.f, 0.f),
             const glm::vec3& up = glm::vec3(0.f, 1.f, 0.f));
    //~Camera2d();

    void setZoom(const float& zoom);
    float getZoom() const noexcept;

    glm::mat4 getViewMatrix() const noexcept;

    void mouseScrollCallback(const float& yOffset);
protected:
    void update();
private:
    // Camera zoom
    float m_zoom;
    // Euler angles
    float m_yaw;
    float m_pitch;
public:
    // Camera attributes
    glm::vec3 position;
    glm::vec3 worldUp;
    glm::vec3 front;
    glm::vec3 up;
    glm::vec3 right;
};
.cpp
Camera2d::Camera2d(
    const glm::vec3& pos /* = glm::vec3(0.f, 0.f, 0.f) */,
    const glm::vec3& up /* = glm::vec3(0.f, 1.f, 0.f) */
)
    : m_zoom(45.f)
    , m_yaw(-90.f)
    , m_pitch(0.f)
    , position(pos)
    , worldUp(up)
    , front(glm::vec3(0.f, 0.f, -1.f))
{
    this->update();
}

void Camera2d::setZoom(const float& zoom)
{
    this->m_zoom = zoom;
}

float Camera2d::getZoom() const noexcept
{
    return this->m_zoom;
}

glm::mat4 Camera2d::getViewMatrix() const noexcept
{
    return glm::lookAt(this->position, this->position + this->front, this->up);
}

void Camera2d::mouseScrollCallback(const float& yOffset)
{
    if (m_zoom >= 1.f && m_zoom <= 45.f)
        m_zoom -= yOffset;
    else if (m_zoom <= 1.f)
        m_zoom = 1.f;
    else if (m_zoom >= 45.f)
        m_zoom = 45.f;
}

void Camera2d::update()
{
    // Calculate the new front vector
    glm::vec3 _front;
    _front.x = cos(glm::radians(this->m_yaw)) * cos(glm::radians(this->m_pitch));
    _front.y = sin(glm::radians(this->m_pitch));
    _front.z = cos(glm::radians(this->m_pitch)) * sin(glm::radians(this->m_yaw));
    this->front = glm::normalize(_front);
    // Also re-calculate the right and up vectors. Normalize them, because their
    // length gets closer to 0 the more you look up or down, which results in
    // slower movement.
    this->right = glm::normalize(glm::cross(this->front, this->worldUp));
    this->up = glm::normalize(glm::cross(this->right, this->front));
}
and in main I try something like this in the render loop:
// pass projection matrix to shader
glm::mat4 projection = glm::perspective(glm::radians(camera.getZoom()),
                                        static_cast<float>(WIDTH) / static_cast<float>(HEIGHT),
                                        0.1f,
                                        10000.f);
shaderProg.setMat4("projection", projection);
// camera view transformation
glm::mat4 view = camera.getViewMatrix();
shaderProg.setMat4("view", view);
Here I have just one model; it's my white background texture:
glm::mat4 model = glm::translate(glm::mat4(1.f), glm::vec3(0.f, 0.f, 0.f));
model = glm::rotate(model, glm::radians(0.f), glm::vec3(1.0f, 0.3f, 0.5f));
shaderProg.setMat4("model", model);
All the code is on GitHub: here

You're working in 2D; forget about the camera, forget about projection, forget about following OpenGL tutorials: they're aimed at 3D graphics.
What you need is just a rectangle that fills your screen. Start with the vertices at the corners of the screen, starting from the top-left corner and moving counterclockwise: (-1,1,0) (-1,-1,0) (1,-1,0) (1,1,0). Forget about Z; you're working in 2D.
You draw on a texture, and the texture coordinates are (0,1) (0,0) (1,0) (1,1), in the same order as above. Zooming is now just a matter of scaling the rectangle. You have one matrix to determine the scale and one to determine the position. Forget about rotations, front vectors and all that stuff. In the vertex shader you scale and then translate the vertices as usual, in this order. Done.
To interact, you can for example have mouse wheel up increase the scale factor and mouse wheel down decrease it, and hold click and move the mouse to change the (x, y) position. Again, forget about Z. Throw those values into the vertex shader and do the usual transformations.
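A minimal sketch of this approach in the question's glm/GLFW setting; the g_scale / g_offset variables, the 1.1 zoom step and the scrollCallback name are assumptions for illustration, while the "model" uniform mirrors the question's shader:

// Zoom/pan state for the fullscreen rectangle.
float g_scale = 1.f;            // mouse wheel up/down changes this
glm::vec2 g_offset{0.f, 0.f};   // click-and-drag changes this

// Register with glfwSetScrollCallback(window, scrollCallback).
void scrollCallback(GLFWwindow* /*window*/, double /*xOffset*/, double yOffset)
{
    g_scale *= (yOffset > 0.0) ? 1.1f : 1.f / 1.1f; // wheel up zooms in
}

// In the render loop: model = translate * scale, so the vertices are
// scaled first and translated second, as described above.
glm::mat4 model = glm::translate(glm::mat4(1.f), glm::vec3(g_offset, 0.f))
                * glm::scale(glm::mat4(1.f), glm::vec3(g_scale, g_scale, 1.f));
shaderProg.setMat4("model", model);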

Related

OpenGL - Translate an Object on an arbitrary axis

I have a sphere and also the axis through its origin. I want to translate the sphere up and down along its axis. In its starting position that's no problem, as the axis starts out parallel to the global y axis, but as soon as I rotate the sphere (and therefore also the sphere axis) around the z axis, it gets complicated.
My first thought was to normalize the axis and simply use that as the translation vector in the translation matrix. Then multiplying the translation matrix with the normalized axis should push the sphere one unit along the axis.
Here is the code I already have:
class Object
{
public:
    inline Object()
        : vao(0),
          positionBuffer(0),
          colorBuffer(0),
          indexBuffer(0),
          elements(0),
          vertices(0)
    {}

    inline ~Object() { // GL context must exist on destruction
        glDeleteVertexArrays(1, &vao);
        glDeleteBuffers(1, &indexBuffer);
        glDeleteBuffers(1, &colorBuffer);
        glDeleteBuffers(1, &positionBuffer);
    }

    GLuint vao;            // vertex-array-object ID
    GLuint positionBuffer; // ID of vertex-buffer: position
    GLuint colorBuffer;    // ID of vertex-buffer: color
    GLuint indexBuffer;    // ID of index-buffer
    GLuint elements;       // number of elements
    vector<glm::vec3> vertices;
    glm::vec3 mp;
    glm::mat4x4 model;     // model matrix
};

glm::vec3 axis = glm::normalize(glm::vec3{
    sphereax.vertices[0].x - sphereax.vertices[1].x,
    sphereax.vertices[0].y - sphereax.vertices[1].y,
    sphereax.vertices[0].z - sphereax.vertices[1].z}
);
translateObject(earth, axis);

void translateObject(Object &obj, glm::vec3 &translation)
{
    glm::mat4x4 trans_mat = glm::translate(glm::mat4(1.0f), translation);
    for (int i = 0; i < obj.vertices.size(); i++)
    {
        obj.vertices[i] = glm::vec3(glm::vec4(obj.vertices[i], 1.0f) * trans_mat);
    }
    obj.mp = glm::vec3(glm::vec4(obj.mp, 1.0f) * trans_mat);
}
The translation matrix in translateObject() seems to be right, yet multiplying one of the points by the transformation matrix shows no effect.
I suggest doing the following:
Rotate the sphere around its rotation axis.
Translate the sphere by its distance from the center along the x axis.
Rotate the sphere around the center of the world.

glm::vec3 rotation_axis = ...;   // sphere axis
float axis_rot_angle = ...;      // current axis rotation angle in radians
float distance_to_center = ...;  // distance from the center of the world
float world_rot_angle = ...;     // current world rotation angle in radians

glm::mat4 sphere_rot   = glm::rotate(glm::mat4(1.0f), axis_rot_angle, rotation_axis);
glm::mat4 sphere_trans = glm::translate(glm::mat4(1.0f), glm::vec3(distance_to_center, 0.0f, 0.0f));
glm::mat4 world_rot    = glm::rotate(glm::mat4(1.0f), world_rot_angle, glm::vec3(0.0f, 0.0f, 1.0f));
glm::mat4 model_mat    = world_rot * sphere_trans * sphere_rot;
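As a side note on why the original translateObject() showed no effect: glm multiplies matrices with column vectors, so a point must be transformed as trans_mat * vec, not vec * trans_mat; with the operands swapped, the translation column ends up in the w component and x, y, z stay unchanged. A minimal corrected loop, using the same names as the question's code and shown only as a sketch:

void translateObject(Object &obj, glm::vec3 &translation)
{
    glm::mat4x4 trans_mat = glm::translate(glm::mat4(1.0f), translation);
    for (std::size_t i = 0; i < obj.vertices.size(); i++)
    {
        // Matrix on the left: glm uses column vectors.
        obj.vertices[i] = glm::vec3(trans_mat * glm::vec4(obj.vertices[i], 1.0f));
    }
    obj.mp = glm::vec3(trans_mat * glm::vec4(obj.mp, 1.0f));
}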

How to center a directional light depth map on the camera position?

Right now I am working on shadow maps in my game engine. In the code below I compute the view-projection matrix for a directional light source. I have a fixed projection-box size (=50), so right now it lights up a box of (-50; 50) in all directions, positioned at the world center. It works correctly, but I want it to follow the camera in such a way that the camera's position is always the center of this box. How do I do this?
Matrix4x4 DirectionalLight::GetMatrix() const
{
    Vector3 position = Camera::GetPosition();
    float sizeLx = -this->ProjectionSize;
    float sizeRx = +this->ProjectionSize;
    float sizeLy = -this->ProjectionSize;
    float sizeRy = +this->ProjectionSize;
    float sizeLz = -this->ProjectionSize;
    float sizeRz = +this->ProjectionSize;
    Matrix4x4 OrthoProjection = MakeOrthographicMatrix(sizeLx, sizeRx, sizeLy, sizeRy, sizeLz, sizeRz);
    Matrix4x4 LightView = MakeViewMatrix(
        this->Direction,
        MakeVector3(0.0f, 0.0f, 0.0f),
        MakeVector3(0.0f, 1.0f, 0.0f)
    );
    return OrthoProjection * LightView;
}
I am using glm as my math library; most functions are aliases/wrappers: MakeOrthographicMatrix is glm::ortho, MakeViewMatrix is glm::lookAt.
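One common way to achieve this, sketched here with the question's own wrappers (a sketch under that assumption, not a tested implementation), is to translate both the light's eye and its target by the camera position, so the relative eye/target offset stays the same but the ortho box follows the camera:

Matrix4x4 DirectionalLight::GetMatrix() const
{
    // Recenter on the camera: same eye/target offset as before, but the
    // target is now the camera position instead of the world origin.
    Vector3 center = Camera::GetPosition();
    float sizeL = -this->ProjectionSize;
    float sizeR = +this->ProjectionSize;
    Matrix4x4 OrthoProjection = MakeOrthographicMatrix(sizeL, sizeR, sizeL, sizeR, sizeL, sizeR);
    Matrix4x4 LightView = MakeViewMatrix(
        center + this->Direction,        // eye, shifted along with the camera
        center,                          // target = camera position
        MakeVector3(0.0f, 1.0f, 0.0f));  // up
    return OrthoProjection * LightView;
}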

Ray Tracer Shadow Problems

I am working on a ray tracer, but I have been stuck for days on the shadow part.
My shadow is acting really weird. Here is an image of the ray tracer:
The black part should be the shadow.
The origin of the ray is always (0.f, -10.f, -500.f), because this is a perspective projection and that is the eye of the camera. When the ray hits a plane, the hit point is always the origin of the ray, but with the sphere it is different: it is based on the position of the sphere. There is never an intersection between the plane and the sphere because the origins differ hugely.
I also tried to add a shadow on a box, but that doesn't work either. The shadow between two spheres does work!
If someone wants to see the intersection code, let me know.
Thanks for taking the time to help me!
Camera
Camera::Camera(float a_fFov, const Dimension& a_viewDimension, vec3 a_v3Eye, vec3 a_v3Center, vec3 a_v3Up) :
    m_fFov(a_fFov),
    m_viewDimension(a_viewDimension),
    m_v3Eye(a_v3Eye),
    m_v3Center(a_v3Center),
    m_v3Up(a_v3Up)
{
    // Calculate the x, y and z axes
    vec3 v3ViewDirection = (m_v3Eye - m_v3Center).normalize();
    vec3 v3U = m_v3Up.cross(v3ViewDirection).normalize();
    vec3 v3V = v3ViewDirection.cross(v3U);

    // Calculate the aspect ratio of the screen
    float fAspectRatio = static_cast<float>(m_viewDimension.m_iHeight) /
                         static_cast<float>(m_viewDimension.m_iWidth);
    float fViewPlaneHalfWidth = tanf(m_fFov / 2.f);
    float fViewPlaneHalfHeight = fAspectRatio * fViewPlaneHalfWidth;

    // The bottom left of the plane
    m_v3ViewPlaneBottomLeft = m_v3Center - v3V * fViewPlaneHalfHeight - v3U * fViewPlaneHalfWidth;

    // The amount we need to increment to get the direction.
    // The width and height are based on the field of view.
    m_v3IncrementX = (v3U * 2.f * fViewPlaneHalfWidth);
    m_v3IncrementY = (v3V * 2.f * fViewPlaneHalfHeight);
}

Camera::~Camera()
{
}

const Ray Camera::GetCameraRay(float iPixelX, float iPixelY) const
{
    vec3 v3Target = m_v3ViewPlaneBottomLeft + m_v3IncrementX * iPixelX + m_v3IncrementY * iPixelY;
    vec3 v3Direction = (v3Target - m_v3Eye).normalize();
    return Ray(m_v3Eye, v3Direction);
}
Camera setup
Scene::Scene(const Dimension& a_Dimension) :
    m_Camera(1.22173f, a_Dimension, vec3(0.f, -10.f, -500.f), vec3(0.f, 0.f, 0.f), vec3(0.f, 1.f, 0.f))
{
    // Setup sky light
    Color ambientLightColor(0.2f, 0.1f, 0.1f);
    m_AmbientLight = new AmbientLight(0.1f, ambientLightColor);
    // Setup shapes
    CreateShapes();
    // Setup lights
    CreateLights();
    // Setup bias
    m_fBias = 1.f;
}
Scene objects
Sphere* sphere2 = new Sphere();
sphere2->SetRadius(50.f);
sphere2->SetCenter(vec3(0.f, 0.f, 0.f));
sphere2->SetMaterial(matte3);
Plane* plane = new Plane(true);
plane->SetNormal(vec3(0.f, 1.f, 0.f));
plane->SetPoint(vec3(0.f, 0.f, 0.f));
plane->SetMaterial(matte1);
Scene light
PointLight* pointLight1 = new PointLight(1.f, Color(0.1f, 0.5f, 0.7f), vec3(0.f, -200.f, 0.f), 1.f, 0.09f, 0.032f);
Shade function
for (const Light* light : a_Lights) {
    vec3 v3LightDirection = (light->m_v3Position - a_Contact.m_v3Hitpoint).normalized();
    light->CalcDiffuseLight(a_Contact.m_v3Point, a_Contact.m_v3Normal, m_fKd, lightColor);
    Ray lightRay(a_Contact.m_v3Point + a_Contact.m_v3Normal * a_fBias, v3LightDirection);
    bool test = a_RayTracer.ShadowTrace(lightRay, a_Shapes);
    vec3 normTest = a_Contact.m_v3Normal;
    float test2 = normTest.dot(v3LightDirection);
    // No shadow
    if (!test) {
        a_ResultColor += lightColor * !test * test2;
    }
    else {
        a_ResultColor = Color(); // Test code - change color to black.
    }
}
You have several bugs:
In Sphere::Collides, m_fCollisionTime is not set when t2 >= t1.
In Sphere::Collides, if m_fCollisionTime is negative, the ray doesn't actually intersect the sphere (this causes the strange shadow on top of the ball).
Put the plane lower, and you'll see the shadow of the ball.
You need to check for the nearest collision when shooting a ray from the eye (just try it: swap the order of the objects, and the sphere suddenly ends up behind the plane); a sketch of such a loop is shown below.
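A minimal sketch of that nearest-collision loop; the Shape interface and the Collides(ray, time) signature are assumptions modeled on the question's code:

// Keep only the closest positive collision time along the eye ray.
float fNearest = std::numeric_limits<float>::max(); // needs <limits>
const Shape* pHitShape = nullptr;
for (const Shape* pShape : a_Shapes)
{
    float fTime;
    if (pShape->Collides(ray, fTime) && fTime > 0.f && fTime < fNearest)
    {
        fNearest = fTime;
        pHitShape = pShape; // shade this shape; anything farther is occluded
    }
}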
With these fixed, you'll get this:

Raycasting (Mouse Picking) while using a Perspective VS Orthographic Projection in OpenGL

I am struggling to understand how to change my algorithm so raycasting (used for MousePicking) works with both a Perspective projection and an Orthographic projection.
Currently I have a scene with 3D objects that have axis-aligned bounding boxes attached to them.
While rendering the scene using a perspective projection (created with glm::perspective), I can successfully use raycasting and my mouse to "pick" different objects in my scene. Here is a demonstration.
If I render the same scene using an orthographic projection, with the camera positioned above the scene facing down (looking down the Y axis; imagine a level editor for a game), I am unable to raycast correctly from where the user clicks on the screen, so I can't get MousePicking working while rendering with an orthographic projection. Here is a demonstration of it not working.
My algorithm at a high level:
auto const coords = mouse.coords();
glm::vec2 const mouse_pos{coords.x, coords.y};
glm::vec3 ray_dir, ray_start;

if (perspective) { // This "works"
    auto const ar  = aspect_rate;
    auto const fov = field_of_view;
    glm::mat4 const proj_matrix = glm::perspective(fov, ar, f.near, f.far);
    auto const& target_pos = camera.target.get_position();
    glm::mat4 const view_matrix = glm::lookAt(target_pos, target_pos, glm::vec3{0, -1, 0});

    ray_dir   = Raycast::calculate_ray_into_screen(mouse_pos, proj_matrix, view_matrix, view_rect);
    ray_start = camera.world_position();
}
else if (orthographic) { // This "doesn't work"
    glm::vec3 const POS     = glm::vec3{50};
    glm::vec3 const FORWARD = glm::vec3{0, -1, 0};
    glm::vec3 const UP      = glm::vec3{0, 0, -1};

    // 1024, 768 with NEAR 0.001 and FAR 10000
    //glm::mat4 proj_matrix = glm::ortho(0, 1024, 0, 768, 0.0001, 10000);
    glm::mat4 proj_matrix = glm::ortho(0, 1024, 0, 768, 0.0001, 100);

    // Look down at the scene from above
    glm::mat4 view_matrix = glm::lookAt(POS, POS + FORWARD, UP);

    // convert the mouse screen coordinates into world coordinates for the cube/ray test
    auto const p0 = screen_to_world(mouse_pos, view_rect, proj_matrix, view_matrix, 0.0f);
    auto const p1 = screen_to_world(mouse_pos, view_rect, proj_matrix, view_matrix, 1.0f);

    ray_start = p0;
    ray_dir = glm::normalize(p1 - p0);
}
bool const intersects = ray_intersects_cube(logger, ray_dir, ray_start,
                                            eid, tr, cube, distances);
In perspective mode, we cast a ray into the scene and see if it intersects with the cube surrounding the object.
In orthographic mode, I'm casting two rays from the screen (one at z=0, the other at z=1) and creating a ray between those two points. I set the ray start point to where the mouse pointer is (with z=0) and use the ray direction just calculated as input to the same ray/cube intersection algorithm.
My question is this:
Since the MousePicking works using the perspective projection but not the orthographic projection:
Is it reasonable to assume the same ray_cube intersection algorithm can be used with a perspective/orthographic projection?
Is my thinking about setting the ray_start and ray_dir variables in the orthographic case correct?
Here is the source for the ray/cube collision algorithm in use.
glm::vec3
Raycast::calculate_ray_into_screen(glm::vec2 const& point, glm::mat4 const& proj,
                                   glm::mat4 const& view, Rectangle const& view_rect)
{
    // When doing mouse picking, we want our ray to be pointed "into" the screen
    float constexpr Z = -1.0f;
    return screen_to_world(point, view_rect, proj, view, Z);
}

bool
ray_cube_intersect(Ray const& r, Transform const& transform, Cube const& cube,
                   float& distance)
{
    auto const& cubepos = transform.translation;
    glm::vec3 const minpos = cube.min * transform.scale;
    glm::vec3 const maxpos = cube.max * transform.scale;
    std::array<glm::vec3, 2> const bounds{{minpos + cubepos, maxpos + cubepos}};

    float txmin = (bounds[    r.sign[0]].x - r.orig.x) * r.invdir.x;
    float txmax = (bounds[1 - r.sign[0]].x - r.orig.x) * r.invdir.x;
    float tymin = (bounds[    r.sign[1]].y - r.orig.y) * r.invdir.y;
    float tymax = (bounds[1 - r.sign[1]].y - r.orig.y) * r.invdir.y;

    if ((txmin > tymax) || (tymin > txmax)) {
        return false;
    }
    if (tymin > txmin) {
        txmin = tymin;
    }
    if (tymax < txmax) {
        txmax = tymax;
    }

    float tzmin = (bounds[    r.sign[2]].z - r.orig.z) * r.invdir.z;
    float tzmax = (bounds[1 - r.sign[2]].z - r.orig.z) * r.invdir.z;

    if ((txmin > tzmax) || (tzmin > txmax)) {
        return false;
    }
    distance = tzmin;
    return true;
}
Edit: the math space-conversion functions I'm using:
namespace boomhs::math::space_conversions
{

inline glm::vec4
clip_to_eye(glm::vec4 const& clip, glm::mat4 const& proj_matrix, float const z)
{
    auto const inv_proj = glm::inverse(proj_matrix);
    glm::vec4 const eye_coords = inv_proj * clip;
    return glm::vec4{eye_coords.x, eye_coords.y, z, 0.0f};
}

inline glm::vec3
eye_to_world(glm::vec4 const& eye, glm::mat4 const& view_matrix)
{
    glm::mat4 const inv_view = glm::inverse(view_matrix);
    glm::vec4 const ray = inv_view * eye;
    glm::vec3 const ray_world = glm::vec3{ray.x, ray.y, ray.z};
    return glm::normalize(ray_world);
}

inline constexpr glm::vec2
screen_to_ndc(glm::vec2 const& scoords, Rectangle const& view_rect)
{
    float const x = ((2.0f * scoords.x) / view_rect.right()) - 1.0f;
    float const y = ((2.0f * scoords.y) / view_rect.bottom()) - 1.0f;

    auto const assert_fn = [](float const v) {
        assert(v <= 1.0f);
        assert(v >= -1.0f);
    };
    assert_fn(x);
    assert_fn(y);
    return glm::vec2{x, -y};
}

inline glm::vec4
ndc_to_clip(glm::vec2 const& ndc, float const z)
{
    return glm::vec4{ndc.x, ndc.y, z, 1.0f};
}

inline glm::vec3
screen_to_world(glm::vec2 const& scoords, Rectangle const& view_rect, glm::mat4 const& proj_matrix,
                glm::mat4 const& view_matrix, float const z)
{
    glm::vec2 const ndc = screen_to_ndc(scoords, view_rect);
    glm::vec4 const clip = ndc_to_clip(ndc, z);
    glm::vec4 const eye = clip_to_eye(clip, proj_matrix, z);
    glm::vec3 const world = eye_to_world(eye, view_matrix);
    return world;
}

} // namespace boomhs::math::space_conversions
I worked on this for several days because I ran into the same problem.
The unproject methods we are used to working with work 100% correctly here as well, even with an orthographic projection. But with an orthographic projection, the direction vector going from the camera position into the screen is always the same, so unprojecting the cursor the usual way does not work as intended in this case.
What you want to do is keep the camera direction vector as it is, but in order to get the ray origin you need to shift the camera position according to the current mouse position on screen.
My approach (C#, but you'll get the idea):
Vector3 worldUpDirection = new Vector3(0, 1, 0); // if your world is y-up

// Get mouse coordinates (2d) relative to window position:
Vector2 mousePosRelativeToWindow = GetMouseCoordsRelativeToWindow(); // (0,0) would be top left window corner

// Get camera direction vector:
Vector3 camDirection = Vector3.Normalize(cameraTarget - cameraPosition);

// Get x and y coordinates relative to frustum width and height.
// glOrthoWidth and glOrthoHeight are the sizeX and sizeY values
// you created your projection matrix with. If your frustum has a width of 100,
// x would become -50 when the mouse is left and +50 when the mouse is right.
float x = +(2.0f * mousePosRelativeToWindow.X / viewportWidth - 1) * (glOrthoWidth / 2);
float y = -(2.0f * mousePosRelativeToWindow.Y / viewPortHeight - 1) * (glOrthoHeight / 2);

// Now, you want to calculate the camera's local right and up vectors
// (depending on the camera's current view direction):
Vector3 cameraRight = Vector3.Normalize(Vector3.Cross(camDirection, worldUpDirection));
Vector3 cameraUp = Vector3.Normalize(Vector3.Cross(cameraRight, camDirection));

// Finally, calculate the ray origin:
Vector3 rayOrigin = cameraPosition + cameraRight * x + cameraUp * y;
Vector3 rayDirection = camDirection;
Now you have the ray origin and the ray direction for your orthographic projection.
With these you can run any ray-plane/volume-intersections as usual.
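Since the rest of this thread uses glm, here is the same idea as a hedged C++ sketch; variable names such as mousePos, orthoWidth and viewportWidth mirror the C# version and are assumptions:

glm::vec3 const worldUp{0.f, 1.f, 0.f};                       // y-up world
glm::vec3 const camDir = glm::normalize(cameraTarget - cameraPosition);

// Map the cursor into [-w/2, +w/2] x [-h/2, +h/2] of the ortho frustum.
float const x = +(2.f * mousePos.x / viewportWidth  - 1.f) * (orthoWidth  / 2.f);
float const y = -(2.f * mousePos.y / viewportHeight - 1.f) * (orthoHeight / 2.f);

// Camera-local right and up vectors:
glm::vec3 const camRight = glm::normalize(glm::cross(camDir, worldUp));
glm::vec3 const camUp    = glm::normalize(glm::cross(camRight, camDir));

// Shift the origin across the view plane; the direction stays fixed.
glm::vec3 const rayOrigin    = cameraPosition + camRight * x + camUp * y;
glm::vec3 const rayDirection = camDir;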

Camera Translation in OpenGL

I am trying to write a class to walk through the world in OpenGL, but I am having problems with the mathematics. My idea is to use glm's lookAt function to place the observer at the position I want, and then just operate on the points that I pass to the function.
I think the rotation functions I made are correct, but the translation part in the walk method seems to be wrong: when I try to walk through the world, if I just translate or just rotate, things go right, but when I do both, things get messed up.
Here is the class so far:
#ifndef OBSERVER_H
#define OBSERVER_H

#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

class Observer {
private:
    glm::vec3 eye, center, upp;
public:
    glm::mat4 view;

    Observer() {}
    ~Observer() {}

    void initialize(glm::vec3 eye, glm::vec3 center, glm::vec3 upp);
    void walk(GLfloat distance);
    void pitch(GLfloat pitch);
    void yaw(GLfloat yaw);
    void roll(GLfloat roll);
    void setView();
};

void Observer::initialize(glm::vec3 eye, glm::vec3 center, glm::vec3 upp)
{
    this->eye = eye;
    this->center = center;
    this->upp = upp;
}

void Observer::walk(GLfloat distance)
{
    glm::vec3 vector = glm::normalize(center - eye);
    glm::vec3 translate = vector * distance - vector;
    eye += translate;
    center += translate;
    upp += translate;
}

void Observer::roll(GLfloat roll) {
    glm::mat4 rotate(1.0f);
    rotate = glm::rotate(rotate, roll, glm::vec3(center - eye));
    center = glm::vec3(rotate * glm::vec4(center, 1.0f));
    upp = glm::vec3(rotate * glm::vec4(upp, 1.0f));
}

void Observer::yaw(GLfloat yaw) {
    glm::mat4 rotate(1.0f);
    rotate = glm::rotate(rotate, yaw, glm::vec3(upp - eye));
    center = glm::vec3(rotate * glm::vec4(center, 1.0f));
    upp = glm::vec3(rotate * glm::vec4(upp, 1.0f));
}

void Observer::pitch(GLfloat pitch) {
    glm::mat4 rotate(1.0f);
    glm::vec3 cross = glm::cross(center - eye, upp - eye);
    rotate = glm::rotate(rotate, pitch, cross);
    center = glm::vec3(rotate * glm::vec4(center, 1.0f));
    upp = glm::vec3(rotate * glm::vec4(upp, 1.0f));
}

void Observer::setView()
{
    view = glm::lookAt(eye, center, upp);
}

#endif
So right before I start drawing things I set the view matrix with this class in another part of the program. Can someone tell me if my math is right?
When you walk, you only want to transform the eye and center positions, not the upp vector. Just remove the upp += translate; line.
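For reference, walk() with that fix applied (only a sketch; the vector * distance - vector term is kept exactly as in the question, although note it makes the step length distance - 1 rather than distance):

void Observer::walk(GLfloat distance)
{
    glm::vec3 vector = glm::normalize(center - eye);
    glm::vec3 translate = vector * distance - vector;
    eye += translate;
    center += translate;
    // upp is left alone: glm::lookAt treats it as an up *direction*,
    // not a point, so it must not be translated with the observer.
}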