I want to select an object in an OpenGL/GLFW renderer and move it around the scene. Currently I am able to create a ray and its direction. Now I want to check whether this ray intersects an object in the renderer, so that I can mark that object as selected. How can I check if the ray hits my object?
glm::vec3 CreateRay()
{
    // mouse position to normalized device coordinates in [-1, 1]
    float mouseX = xPos / (width * 0.5f) - 1.0f;
    float mouseY = yPos / (height * 0.5f) - 1.0f;

    glm::mat4 invVP = glm::inverse(proj * view);
    glm::vec4 screenPos = glm::vec4(mouseX, -mouseY, 1.0f, 1.0f); // point on the far plane
    glm::vec4 worldPos = invVP * screenPos;
    worldPos /= worldPos.w; // perspective divide

    // direction from the camera through the far-plane point
    glm::vec3 dir = glm::normalize(glm::vec3(worldPos) - pCamera->GetCameraPosition());
    return dir;
}
glm::vec3 rayDirection = CreateRay();
glm::vec3 rayStartPosition = pCamera->GetCameraPosition();
glm::vec3 rayEndPosition = rayStartPosition + rayDirection * 2.0f;
I'm trying to make a visual "simulation" of the solar system using OpenGL and am using this function to orbit a planet around the sun (in a circular orbit).
glm::vec3 application::orbit(glm::vec3 thisPlanet, glm::vec3 otherPlanet, float rotSpeed, const time &dt)
{
    const float pi = atanf(1.0f) * 4.0f;
    float radius = glm::distance(thisPlanet, otherPlanet);
    // recover the current angle from the position (note: acosf only
    // returns values in [0, pi])
    float angle = acosf((thisPlanet.x - otherPlanet.x) / radius) - pi;
    angle += dt.as_seconds() * rotSpeed;
    if (angle > 2.0f * pi)
        angle -= 2.0f * pi;
    float x = otherPlanet.x + cosf(angle) * radius;
    float z = otherPlanet.z + sinf(angle) * radius;
    return glm::vec3(x, thisPlanet.y, z);
}
The function is called every frame like this:
void application::tick(const time &dt)
{
    if (m_keyboard.key_released(GLFW_KEY_ESCAPE)) {
        m_running = false;
    }
    m_controller.update(m_keyboard, m_mouse, dt);
    m_cube_rotation += dt.as_seconds();
    m_mercury_position = orbit(m_mercury_position, m_sun_position, 2.0f, dt);
    // glm::mat4 world = glm::translate(glm::mat4(1.0f), m_cube_position)
    //     * glm::rotate(glm::mat4(1.0f), m_cube_rotation, glm::normalize(glm::vec3(1.0f, 1.0f, -1.0f)));
    glm::mat4 sun = glm::translate(glm::scale(glm::mat4(1.0f), glm::vec3(2.0f, 2.0f, 2.0f)), m_sun_position)
        * glm::rotate(glm::mat4(1.0f), m_cube_rotation, glm::normalize(glm::vec3(1.0f, 1.0f, -1.0f)));
    glm::mat4 mercury = glm::translate(glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, 1.0f)), m_mercury_position)
        * glm::rotate(glm::mat4(1.0f), m_cube_rotation, glm::normalize(glm::vec3(1.0f, 1.0f, -1.0f)));
    //m_crate.set_transform(world);
    m_sun.set_transform(sun);
    m_mercury.set_transform(mercury);
    const int frames_per_second = int(1.0f / dt.as_seconds());
    const int frame_timing_ms = int(dt.as_milliseconds());
    m_overlay.pre_frame(m_width, m_height);
    m_overlay.push_line("FPS: %d (%dms)", frames_per_second, frame_timing_ms);
}
Why doesn't the planet move?
This is how I would construct the model matrix:
mat4 model_mat;
// given the planet's travel direction `dir`, build an orthonormal basis
const vec3 n_forward = normalize(dir);
const vec3 n_up = vec3(0, 1, 0);
const vec3 n_left = cross(n_up, n_forward);

// construct a basis and a translation
model_mat[0] = vec4(n_left, 0.0f);
model_mat[1] = vec4(n_forward, 0.0f);
model_mat[2] = vec4(n_up, 0.0f);
model_mat[3] = vec4(p, 1.0f);
The position p is given by the parametric circle equation x = cx + r cos(t), z = cz + r sin(t), where t is the angle (driven by the elapsed time), r is the circular orbit radius, and (cx, cz) is the orbit center. Also see: https://www.mathopenref.com/coordparamcircle.html
Does this help at all?
When the camera is moved around, why do my rays still start at the origin (0, 0, 0) even though the camera position has been updated?
It works fine if I start the program with my camera at the default position (0, 0, 0). But once I move my camera, for instance pan to the right, and click some more, the lines still come from (0, 0, 0) when they should start from wherever the camera is. Am I doing something terribly wrong? I've checked to make sure they're being updated in the main loop. I've used the code snippet below, referenced from:
picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone
// 1. Get mouse coordinates, then normalize to [-1, 1]
float x = (2.0f * lastX) / width - 1.0f;
float y = 1.0f - (2.0f * lastY) / height;

// 2. Move from clip space to world space
glm::mat4 inverseWorldMatrix = glm::inverse(proj * view);
glm::vec4 near_vec = glm::vec4(x, y, -1.0f, 1.0f);
glm::vec4 far_vec = glm::vec4(x, y, 1.0f, 1.0f);
glm::vec4 startRay = inverseWorldMatrix * near_vec;
glm::vec4 endRay = inverseWorldMatrix * far_vec;

// 3. Perspective divide
startRay /= startRay.w;
endRay /= endRay.w;

glm::vec3 direction = glm::normalize(glm::vec3(endRay - startRay));

// start the ray from the camera position
glm::vec3 startPos = glm::vec3(camera.GetPosition());
glm::vec3 endPos = startPos + direction * someLength;
In the first screenshot I click to cast some rays; in the second I move my camera to the right and click some more, but the initial rays still start at (0, 0, 0). What I'm looking for is for the rays to come out of wherever the camera is, as with the red rays in the third image. (Sorry for the confusion: the red lines are supposed to shoot out into the distance, not up.)
// and these are my matrices
// projection
glm::mat4 proj = glm::perspective(glm::radians(camera.GetFov()), (float)width / height, 0.1f, 100.0f);
// view
glm::mat4 view = camera.GetViewMatrix(); // This returns glm::lookAt(this->Position, this->Position + this->Front, this->Up);
// model
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f));
It's hard to tell where in the code the problem lies. But I use this function for ray casting, adapted from code from scratch-a-pixel and learnopengl:
vec3 rayCast(double xpos, double ypos, mat4 projection, mat4 view) {
    // converts a 2d window position (xpos, ypos) to a normalized 3d direction
    float x = (2.0f * xpos) / WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / HEIGHT;
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);

    // eye space to clip space is a multiply by projection, so
    // clip space to eye space is the inverse projection
    vec4 ray_eye = inverse(projection) * ray_clip;

    // convert the point into a forward direction (w = 0)
    ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);

    // world space to eye space is a multiply by view, so
    // eye space to world space is the inverse view
    vec4 inv_ray_wor = inverse(view) * ray_eye;
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}
where you can draw your line with startPos = camera.Position and endPos = camera.Position + rayCast(...) * scalar_amount.
I'm drawing a 2D tilemap using OpenGL and I would like to know where the mouse position corresponds to in my scene. This is what I currently have:
To draw this screen this projection is used
glm::mat4 projection = glm::perspective(
    glm::radians(45.0f),
    (float)screenWidth / (float)screenHeight,
    1.0f,
    100.0f
);
Then this camera is used to move and zoom the tilemap
glm::vec3 camera(0.0f, 0.0f, -1.00f);
Which then translates into a camera view
glm::mat4 cameraView = glm::translate(state.projection, camera);
That finally gets passed through a uniform to the vertex shader
#version 330 core
layout(location = 0) in vec2 aPosition;
uniform mat4 uCameraView;
void main() {
    gl_Position = uCameraView * vec4(aPosition.x, aPosition.y, 0.0f, 1.0f);
}
This shader receives normalized vertex positions, which means I never know how many pixels wide a tile is on my screen.
Now I'm trying to calculate where the mouse would land inside my scene if it were projected as a ray into the tilemap. If I can get the position of that collision, I will know which tile the mouse is hovering over.
What would be the best approach to find this coordinate?
In the end I found this solution to map the mouse pixel coordinates to the perspective:
glm::vec4 tile = glm::translate(projection, glm::vec3(0.0f, 0.0f, camera.z)) *
glm::vec4(size.tile.regular, size.tile.regular, camera.z, 1.0f);
glm::vec3 ndcTile =
glm::vec3(tile.x / tile.w, tile.y / tile.w, tile.z / tile.w);
float pixelUnit = windowWidth * ndcTile.x;
float pixelCameraX = (camera.x / size.tile.regular) * pixelUnit;
float pixelCameraY = (camera.y / size.tile.regular) * pixelUnit;
float originX = (windowWidth / 2.0f) + pixelCameraX;
float originY = (windowHeight / 2.0f) - pixelCameraY;
float tileX = (state.input.pixelCursorX - originX) / pixelUnit;
float tileY = (state.input.pixelCursorY - originY) / pixelUnit;
selectedTileX = tileX > 0 ? tileX : tileX - 1;
selectedTileY = tileY > 0 ? tileY : tileY - 1;
I am having trouble understanding how to translate the camera. I can already rotate the camera successfully, but I am still confused about translation. I include the code for rotating the camera, since both translating and rotating use the lookAt function. The homework says translating the camera means that both the eye and the center should be moved by the same amount; I understand I can change the parameters of the lookAt function to implement this.
The signature of the lookAt function is:
Lookat(cameraPos, center, up)
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 10.0f);
glm::vec3 center(0.0f, 0.0f, 0.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
modelViewProjectionMatrix.Perspective(glm::radians(fov), float(width) / float(height), 0.1f, 100.0f);
modelViewProjectionMatrix.LookAt(cameraPos, center, cameraUp);
void CursorPositionCallback(GLFWwindow* lWindow, double xpos, double ypos)
{
    int state = glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT);
    if (state == GLFW_PRESS)
    {
        if (firstMouse)
        {
            lastX = xpos;
            lastY = ypos;
            firstMouse = false;
        }
        float xoffset = xpos - lastX;
        float yoffset = lastY - ypos;
        lastX = xpos;
        lastY = ypos;
        yaw += xoffset;
        pitch += yoffset;
        // orbit the camera at distance 5 around the center
        glm::vec3 front;
        front.x = center[0] + 5.0f * cos(glm::radians(yaw)) * cos(glm::radians(pitch));
        front.y = center[1] + 5.0f * sin(glm::radians(pitch));
        front.z = center[2] + 5.0f * sin(glm::radians(yaw)) * cos(glm::radians(pitch));
        cameraPos = front;
    }
}
If you want to translate the camera by an offset, you have to add the same vector (glm::vec3 offset) to both the camera position (cameraPos) and the camera target (center):
center = center + offset;
cameraPos = cameraPos + offset;
When you calculate a new target of the camera (center) from pitch and yaw angles, you have to update the up vector (cameraUp) of the camera too:
glm::vec3 front(
cos(glm::radians(pitch)) * cos(glm::radians(yaw)),
sin(glm::radians(pitch)),
cos(glm::radians(pitch)) * sin(glm::radians(yaw))
);
glm::vec3 up(
-sin(glm::radians(pitch)) * cos(glm::radians(yaw)),
cos(glm::radians(pitch)),
-sin(glm::radians(pitch)) * sin(glm::radians(yaw))
);
cameraPos = center + front * 5.0f;
cameraUp = up;
To translate the camera along the x axis (from left to right) in view space, you have to calculate the right vector as the cross product of the vector to the target (front) and the up vector (cameraUp or up):
glm::vec3 right = glm::cross(front, up);
The y axis (from bottom to top) in view space is the up vector itself.
To translate by the scalars trans_x and trans_y, the scaled right and up vectors have to be added to the camera position (cameraPos) and the camera target (center):
center = center + right * trans_x + up * trans_y;
cameraPos = cameraPos + right * trans_x + up * trans_y;
Use the manipulated vectors to set the view matrix:
modelViewProjectionMatrix.LookAt(cameraPos, center, cameraUp);
My goal is to navigate in the view port using the mouse.
Every frame that the mouse moves, I recalculate the cameraFront and cameraUp vectors and then the view matrix. The problem is that the view matrix sometimes introduces rotation around the z axis (roll), which I don't expect.
I am not sure what I am doing wrong.
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }
    float xoffset = xpos - lastX;
    float yoffset = ypos - lastY;
    lastX = xpos;
    lastY = ypos;

    float sensitivity = 0.05f;
    xoffset *= sensitivity;
    yoffset *= sensitivity;

    // yaw around the current up vector, then pitch around the right vector
    glm::quat rotY = glm::angleAxis(glm::radians(xoffset), cameraUp);
    cameraFront = glm::normalize(rotY * cameraFront);
    glm::vec3 rightAxis = glm::cross(cameraUp, cameraFront);
    glm::quat rotX = glm::angleAxis(glm::radians(yoffset), rightAxis);
    cameraFront = glm::normalize(rotX * cameraFront);
    cameraUp = glm::normalize(glm::cross(cameraFront, rightAxis));
}
In the while loop I recalculate the view matrix:
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
I am learning OpenGL from a tutorial which shows an example of how to navigate in the scene, but I am trying to do it differently.
Can anyone see my mistake?