I'm trying to implement an editor-like placing mode with OpenGL: when you click somewhere on the screen, an object gets placed at that position.
So far this is my code:
void RayCastMouse(double posx, double posy)
{
    glm::fvec2 NDCCoords = glm::fvec2((2.0 * posx) / float(SCR_WIDTH) - 1.f,
                                      (-2.0 * posy) / float(SCR_HEIGHT) + 1.f);
    glm::mat4 viewProjectionInverse = glm::inverse(projection * camera.GetViewMatrix());
    glm::vec4 worldSpacePosition(NDCCoords.x, NDCCoords.y, 0.0f, 1.0f);
    glm::vec4 worldRay = viewProjectionInverse * worldSpacePosition;
    printf("X=%f / y=%f / z=%f\n", worldRay.x, worldRay.y, worldRay.z);
    m_Boxes.emplace_back(worldRay.x, 0, worldRay.z);
}
The problem is that the object isn't placed at the correct position; the worldRay vec3 is used to translate the model matrix.
Can anyone please help with this? I will really appreciate it.
By the way, dividing the worldRay xyz components by worldRay.w sets the object at the camera position.
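For reference, a sketch of the usual fix, under the assumption (suggested by the emplace_back call) that objects sit on the ground plane y = 0: unproject the click at the near and far planes, apply the perspective divide to each, and intersect the resulting ray with that plane.
glm::mat4 invVP = glm::inverse(projection * camera.GetViewMatrix());
glm::vec4 nearPt = invVP * glm::vec4(NDCCoords.x, NDCCoords.y, -1.0f, 1.0f); // near plane
glm::vec4 farPt  = invVP * glm::vec4(NDCCoords.x, NDCCoords.y,  1.0f, 1.0f); // far plane
glm::vec3 origin = glm::vec3(nearPt) / nearPt.w; // the perspective divide matters here
glm::vec3 target = glm::vec3(farPt)  / farPt.w;
glm::vec3 dir    = glm::normalize(target - origin);
if (dir.y != 0.0f)                    // ray not parallel to the ground plane
{
    float t = -origin.y / dir.y;      // solve origin.y + t * dir.y = 0
    if (t > 0.0f)                     // intersection is in front of the camera
        m_Boxes.emplace_back(origin.x + t * dir.x, 0, origin.z + t * dir.z);
}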
I have what I believed to be a basic need: from the "2D position of the mouse on the screen", I need to get "the closest 3D point in the 3D world". This looks like a common ray-tracing problem (even if ray tracing is not my goal).
I googled and read a lot: the topic looks messy, and lots of things unfortunately get intricate quickly. My initial problem involves lots of 3D points whose coordinates I do not know (meshes or point clouds from the internet), so it's impossible to know what result to expect! Thus, I decided to create simple shapes (triangle, quadrangle, cube) with points that I know (each coord of each point is 0.f or 0.5f in the local frame), and to see if I can "recover" the 3D point positions from the mouse cursor when I move it on the screen.
Note: all coords of all points of all shapes are known values like 0.f or 0.5f. For example, with the triangle:
float vertices[] = {
    -0.5f, -0.5f, 0.0f, // bottom left
     0.5f, -0.5f, 0.0f, // bottom right
     0.0f,  0.5f, 0.0f  // top
};
What I do
I have a 3D OpenGL renderer where I added a GUI to have controls on the rendered scene.
Transformations: tx, ty, tz, rx, ry, rz are controls that change the model matrix. In code:
// create transformations: model represents local to world transformation
model = glm::mat4(1.0f); // initialize matrix to identity matrix first
model = glm::translate(model, glm::vec3(tx, ty, tz));
model = glm::rotate(model, glm::radians(rx), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, glm::radians(ry), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::rotate(model, glm::radians(rz), glm::vec3(0.0f, 0.0f, 1.0f));
ourShader.setMat4("model", model);
model changes only the position of the shape in the world and has no connection with the position of the camera (that's what I understand from the tutorials).
Camera: from here, I ended up with a camera class that holds the view and proj matrices. In code:
// get view and projection from camera
view = cam.getViewMatrix();
ourShader.setMat4("view", view);
proj = cam.getProjMatrix((float)SCR_WIDTH, (float)SCR_HEIGHT, near, 100.f);
ourShader.setMat4("proj", proj);
The camera is a fly-like camera that can be moved with the mouse or the keyboard arrows; it does not act on model, but only on view and proj (that's what I understand from the tutorials).
The shader then uses model, view and proj this way:
layout (location = 0) in vec3 aPos; // vertex position attribute

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

void main()
{
    // note that we read the multiplication from right to left
    gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
Screen to world: as glm::unProject didn't always return the results I expected, I added a control to skip it and back-project by hand instead. In code, first I get the cursor mouse position frame3DPos following this:
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouseCursorCallback(GLFWwindow* window, double xposIn, double yposIn)
{
    // screen to world transformation
    xposScreen = xposIn;
    yposScreen = yposIn;
    int windowWidth = 0, windowHeight = 0; // size in screen coordinates.
    glfwGetWindowSize(window, &windowWidth, &windowHeight);
    int frameWidth = 0, frameHeight = 0; // size in pixels.
    glfwGetFramebufferSize(window, &frameWidth, &frameHeight);
    glm::vec2 frameWinRatio = glm::vec2(frameWidth, frameHeight) /
                              glm::vec2(windowWidth, windowHeight);
    glm::vec2 screen2DPos = glm::vec2(xposScreen, yposScreen);
    glm::vec2 frame2DPos = screen2DPos * frameWinRatio; // window / frame sizes may be different.
    frame2DPos = frame2DPos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention.
    glm::vec3 frame3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
    frame3DPos.x = frame2DPos.x;
    frame3DPos.y = frameHeight - 1.0f - frame2DPos.y; // GL's window origin is at the bottom left.
    frame3DPos.z = 0.f;
    glReadPixels((GLint) frame3DPos.x, (GLint) frame3DPos.y, // CAUTION: cast to GLint.
                 1, 1, GL_DEPTH_COMPONENT,
                 GL_FLOAT, &zbufScreen); // CAUTION: GL_DOUBLE is NOT supported.
    frame3DPos.z = zbufScreen; // z-buffer.
And then I can call glm::unProject or not (back-projecting by hand) according to the GUI controls:
glm::vec3 world3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
if (screen2WorldUsingGLM) {
    glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
    world3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
} else {
    glm::mat4 trans = proj * view * model;
    glm::vec4 frame4DPos(frame3DPos, 1.f);
    frame4DPos = glm::inverse(trans) * frame4DPos;
    world3DPos.x = frame4DPos.x / frame4DPos.w;
    world3DPos.y = frame4DPos.y / frame4DPos.w;
    world3DPos.z = frame4DPos.z / frame4DPos.w;
}
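(As an aside: the by-hand branch above feeds raw window coordinates into the inverse matrix, whereas glm::unProject first remaps them to NDC in [-1, 1]. A sketch of an equivalent by-hand version, assuming the same variables as above and GL's default [0, 1] depth range:)
glm::vec4 ndc;
ndc.x = 2.0f * frame3DPos.x / (float) frameWidth  - 1.0f; // window x -> [-1, 1]
ndc.y = 2.0f * frame3DPos.y / (float) frameHeight - 1.0f; // window y -> [-1, 1]
ndc.z = 2.0f * frame3DPos.z - 1.0f;                       // depth [0, 1] -> [-1, 1]
ndc.w = 1.0f;
glm::vec4 tmp = glm::inverse(proj * view * model) * ndc;
world3DPos = glm::vec3(tmp) / tmp.w;                      // perspective divide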
Question: the glm::unProject doc says Map the specified window coordinates (win.x, win.y, win.z) into object coordinates, but I am not sure I understand what object coordinates are. Does object coordinates refer to the local, world, view or clip space described here?
Z-buffering is always enabled, whether the shape is 2D (triangle, quadrangle) or 3D (cube). In code:
glEnable(GL_DEPTH_TEST); // Enable z-buffer.
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the z-buffer
In pictures, here is what I get. The camera is positioned at (0., 0., 0.) and looks "ahead" (front = -z, as the z-axis points from the screen towards me). The shape is positioned (using tx, ty, tz, rx, ry, rz) "in front of the camera" with tz = -5 (5 units along the front vector of the camera).
What I get
Triangle in initial setting
I get correct xpos and ypos in the world frame, but an incorrect zpos = 0. (z-buffering is enabled). I expected zpos = -5 (as tz = -5).
Question: why is zpos incorrect?
If I do not use glm::unProject, I get outer-space results.
Question: why doesn't "back-projecting" by hand return results consistent with glm::unProject? Is this logical? Are they different operations? (I believed they should be equivalent, but they are obviously not.)
Triangle moved with translation
After a translation of about tx = 0.5, I still get the same coordinates (local frame), where I expected the previous coords translated along the x-axis. Not using glm::unProject returns outer-space results here too...
Question: why is the translation (applied by model, not by view nor proj) ignored?
Cube in initial setting
I get correct xpos, ypos and zpos?!... So why does this not work the same way with the "2D" triangle (which is a "3D" one to me, so they should behave the same)?
Cube moved with translation
Translating along ty this time seems to have no effect (I still get the same local-frame coordinates).
Question: as with the triangle, why is the translation ignored?
What I'd like to get
The main question is: why is the model transformation ignored? If this is to be expected, I'd like to understand why.
If there's a way to recover the "true" position of the shape in the world (including the model transformation) from the position of the mouse cursor, I'd like to understand how.
As I am new to OpenGL, I didn't get that object coordinates in the glm::unProject doc is another way to refer to local space. Solution: pass view * model to glm::unProject and apply model again, or pass view alone to glm::unProject, as explained here: Screen Coordinates to World Coordinates.
This fixes all the weird behaviors I observed.
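A minimal sketch of that solution, reusing the view, proj, frame3DPos and viewport variables from above (the depth buffer already contains the fully model-transformed geometry, which is why view alone suffices for world space):
glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
// Passing view alone: the result is in WORLD space, model transformation included.
glm::vec3 worldPos = glm::unProject(frame3DPos, view, proj, viewport);
// Passing view * model instead yields LOCAL (object-space) coordinates:
glm::vec3 localPos = glm::unProject(frame3DPos, view * model, proj, viewport);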
So I have a camera object in my scene graph which I want to rotate around another object while still looking at it.
So far, the code I've tried just keeps moving the camera's position back and forth to the right by a small amount.
Here is the code I've tried using in my game update loop:
//ang is set to 75.0f
camera.position += camera.right * glm::vec3(cos(ang * deltaTime), 1.0f, 1.0f);
I'm not really sure where I'm going wrong. I've looked at other code that rotates around an object, and it uses cosine and sine, but since I'm only translating along the x-axis I thought I would only need this.
First you have to create a rotated vector. This can be done with glm::rotateZ (from <glm/gtx/rotate_vector.hpp>). Note that since GLM version 0.9.6 the angle has to be given in radians.
float ang = ....; // angle per second in radians
float timeSinceStart = ....; // seconds since the start of the animation
float dist = ....; // distance from the camera to the target
glm::vec3 cameraVec = glm::rotateZ(glm::vec3(dist, 0.0f, 0.0f), ang * timeSinceStart);
Furthermore, you must know the point around which the camera should turn, probably the position of the object:
glm::vec3 objectPosition = .....; // position of the object where the camera looks to
The new position of the camera is the position of the object, displaced by the rotation vector:
camera.position = objectPosition + cameraVec;
The target of the camera has to be the object position, because the camera should look to the object:
camera.front = glm::normalize(objectPosition - camera.position);
The up vector of the camera should be the z-axis (rotation axis):
camera.up = glm::vec3(0.0f, 0.0f, 1.0f);
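Putting the pieces together, a sketch of a per-frame update (assuming a Camera type with the position, front and up members used above):
#define GLM_ENABLE_EXPERIMENTAL // required for GLM's experimental gtx headers
#include <glm/glm.hpp>
#include <glm/gtx/rotate_vector.hpp>

void updateOrbit(Camera& camera, const glm::vec3& objectPosition,
                 float dist, float ang, float timeSinceStart)
{
    // Rotate the offset vector around the z-axis as time advances.
    glm::vec3 cameraVec = glm::rotateZ(glm::vec3(dist, 0.0f, 0.0f), ang * timeSinceStart);
    camera.position = objectPosition + cameraVec;                    // orbit position
    camera.front = glm::normalize(objectPosition - camera.position); // look at the object
    camera.up = glm::vec3(0.0f, 0.0f, 1.0f);                         // rotation axis as up
}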
So I want to use quaternions and angles to control my camera using my mouse.
I accumulate the vertical/horizontal angles like this:
void Camera::RotateCamera(const float offsetHorizontalAngle, const float offsetVerticalAngle)
{
    mHorizontalAngle += offsetHorizontalAngle;
    mHorizontalAngle = std::fmod(mHorizontalAngle, 360.0f);
    mVerticalAngle += offsetVerticalAngle;
    mVerticalAngle = std::fmod(mVerticalAngle, 360.0f);
}
and compute my orientation like this:
Mat4 Camera::Orientation() const
{
    Quaternion rotation;
    rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
    return glm::toMat4(rotation);
}
and the forward vector, which I need for glm::lookAt, like this:
Vec3 Camera::Forward() const
{
    return Vec3(glm::inverse(Orientation()) * Vec4(0.0f, 0.0f, -1.0f, 0.0f));
}
I think that should do the trick, but I do not know how to get the actual angles in my example game. All I have is the current and previous mouse locations in window coordinates... how can I get proper angles from that?
EDIT: on second thought... my RotateCamera() can't be right; I am experiencing a rubber-banding effect due to the angles resetting after reaching 360 degrees... so how do I accumulate angles properly? Can I just sum them up endlessly?
Take a cross section of the viewing frustum through your mouse position:
Theta is half of your FOV.
p is your projection plane distance (don't worry - it will cancel out).
Let x be the mouse's horizontal window coordinate, w the window width, and x' the corresponding offset from the center of the projection plane. From simple ratios it is clear that:
x' / (p * tan(theta)) = 2x/w - 1
But from simple trigonometry:
x' = p * tan(psi)
So:
psi = atan((2x/w - 1) * tan(theta))
Just calculate the angle psi for each of your mouse positions and subtract to get the difference.
A similar formula can be found for the vertical angle phi (window y grows downward, hence the flipped sign):
phi = atan((1 - 2y/h) * tan(theta) / A)
Where A is your aspect ratio (width / height) and h is the window height.
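A sketch of that computation (assuming theta is half the horizontal FOV in radians, and window coordinates with the origin at the top left):
#include <cmath>

// Convert a mouse position to view angles: psi (horizontal), phi (vertical).
void mouseToAngles(double mouseX, double mouseY, double w, double h,
                   double theta, double& psi, double& phi)
{
    double A = w / h; // aspect ratio
    psi = std::atan((2.0 * mouseX / w - 1.0) * std::tan(theta));
    phi = std::atan((1.0 - 2.0 * mouseY / h) * std::tan(theta) / A);
}
Compute the angles for the previous and the current mouse position, then pass the differences (converted to whatever units RotateCamera expects) as the offsets.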
I'm despairing over the task of zooming in on the current mouse position in OpenGL. I've tried a lot of different things and read other posts on this, but I couldn't adapt the possible solutions to my specific problem. As far as I understand it, you have to get the current window coordinates of the mouse cursor, then unproject them to get world coordinates, and finally translate to those world coordinates.
To find the current mouse position, I use the following code in my GLUT mouse callback function every time the right mouse button is clicked:
if(button == 2)
{
    mouse_current_x = x;
    mouse_current_y = y;
    ...
Next up, I unproject the current mouse position in my display function before setting up the ModelView and Projection matrices, which also seems to work perfectly fine:
// Unproject Window Coordinates
float mouse_current_z;
glReadPixels(mouse_current_x, mouse_current_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouse_current_z);
glm::vec3 windowCoordinates = glm::vec3(mouse_current_x, mouse_current_y, mouse_current_z);
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, (float)width, (float)height);
glm::vec3 worldCoordinates = glm::unProject(windowCoordinates, modelViewMatrix, projectionMatrix, viewport);
printf("(%f, %f, %f)\n", worldCoordinates.x, worldCoordinates.y, worldCoordinates.z);
Now the translation is where the trouble starts. Currently I'm drawing a cube with dimensions (dimensionX, dimensionY, dimensionZ) and translating to the center of that cube, so my zooming happens around that center point as well. I'm achieving the zoom by translating in the z-direction (dolly):
// Set ModelViewMatrix
modelViewMatrix = glm::mat4(1.0); // Start with the identity as the transformation matrix
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(0.0, 0.0, -translate_z)); // Zoom in or out by translating in z-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_x, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the whole scene in x-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_y, glm::vec3(0.0f, 1.0f, 0.0f)); // Rotate the whole scene in y-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, -90.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the camera by 90 degrees in negative x-direction to get a frontal look at the scene
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(-dimensionX/2.0f, -dimensionY/2.0f, -dimensionZ/2.0f)); // Translate the origin to be the center of the cube
glBindBuffer(GL_UNIFORM_BUFFER, globalMatricesUBO);
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
I tried replacing the translation to the center of the cube with a translation to the worldCoordinates vector, but this didn't work. I also tried scaling the vector by the width or height.
Am I missing out on some essential step here?
Maybe this won't work in your case, but to me this seems like the best way to handle it: use gluLookAt() to look at the xyz position of the mouse click that you have already found, then change gluPerspective() to a smaller angle of view to achieve the actual zoom.
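In the asker's GLM-based setup, a rough equivalent might look like this, ignoring the scene rotations (eye and zoomFactor are assumed names; recent GLM versions expect the field of view in radians):
// Look at the unprojected click position and narrow the field of view to zoom.
glm::vec3 eye(0.0f, 0.0f, translate_z); // wherever the camera currently sits
modelViewMatrix  = glm::lookAt(eye, worldCoordinates, glm::vec3(0.0f, 1.0f, 0.0f));
projectionMatrix = glm::perspective(glm::radians(45.0f / zoomFactor), // smaller FOV = zoom in
                                    (float)width / (float)height, 0.1f, 100.0f);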
I have looked at a ton of quaternion examples on several sites including this one, but found none that answer this, so here goes...
I want to orbit my camera around a large sphere and have the camera always point at the center of the sphere. To make things easy, the sphere is located at {0,0,0} in the world. I am using a quaternion for camera orientation and a vector for camera position. The problem is that the camera position orbits the sphere perfectly as it should, but always looks in one constant direction, straight forward, instead of adjusting to point at the center as it orbits.
This must be something simple, but I am new to quaternions... what am I missing?
I'm using C++, DirectX9. Here is my code:
// Note: g_camRotAngleYawDir and g_camRotAnglePitchDir are set to either 1.0f, -1.0f according to keypresses, otherwise equal 0.0f
// Also, g_camOrientationQuat is just an identity quat to begin with.
float theta = 0.05f;
D3DXQUATERNION g_deltaQuat( 0.0f, 0.0f, 0.0f, 1.0f );
D3DXQuaternionRotationYawPitchRoll(&g_deltaQuat, g_camRotAngleYawDir * theta, g_camRotAnglePitchDir * theta, g_camRotAngleRollDir * theta);
D3DXQUATERNION tempQuat = g_camOrientationQuat;
D3DXQuaternionMultiply(&g_camOrientationQuat, &tempQuat, &g_deltaQuat);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
[EDIT - Feb 13, 2012]
Ok, here's my understanding so far:
1. Move the camera using an angular delta each frame.
2. Get a vector from the center to the camera position.
3. Call quaternionRotationBetweenVectors with a z-facing unit vector and the target vector.
4. Use the result of that function for the orientation of the view matrix, and put the camera position in the translation portion of the view matrix.
Here's the new code (called every frame)...
// This part orbits the position around the sphere according to deltas for yaw, pitch, roll
D3DXQuaternionRotationYawPitchRoll(&deltaQuat, yawDelta, pitchDelta, rollDelta);
D3DXMatrixRotationQuaternion(&mat1, &deltaQuat);
D3DXVec3Transform(&g_camPosition, &g_camPosition, &mat1);
// This part adjusts the orientation of the camera to point at the center of the sphere
dir1 = normalize(vec3(0.0f, 0.0f, 0.0f) - g_camPosition);
QuaternionRotationBetweenVectors(&g_camOrientationQuat, vec3(0.0f, 0.0f, 1.0f), &dir1);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
...I tried that solution out, without success. What am I doing wrong?
It should be as easy as doing the following whenever you update the position (assuming the camera is pointing along the z-axis):
direction = normalize(center - position)
orientation = quaternionRotationBetweenVectors(vec3(0,0,1), direction)
It is fairly easy to find examples of how to implement quaternionRotationBetweenVectors. You could start with the question "Finding quaternion representing the rotation from one vector to another".
Here's an untested sketch of an implementation using the DirectX9 API:
D3DXQUATERNION* quaternionRotationBetweenVectors(__inout D3DXQUATERNION* result, __in const D3DXVECTOR3* v1, __in const D3DXVECTOR3* v2)
{
    // Rotation axis: perpendicular to both input vectors.
    D3DXVECTOR3 axis;
    D3DXVec3Cross(&axis, v1, v2);
    D3DXVec3Normalize(&axis, &axis);
    // Rotation angle between the two vectors. Note this breaks down when
    // v1 and v2 are (anti-)parallel: the cross product is zero and the
    // axis cannot be normalized.
    float angle = acos(D3DXVec3Dot(v1, v2));
    D3DXQuaternionRotationAxis(result, &axis, angle);
    return result;
}
This implementation expects v1, v2 to be normalized.
You need to show how you calculate the angles. Is there a reason why you aren't using D3DXMatrixLookAtLH?
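For reference, a sketch of the D3DXMatrixLookAtLH route, reusing the question's globals (and assuming g_camPosition is a D3DXVECTOR3):
D3DXVECTOR3 at(0.0f, 0.0f, 0.0f); // center of the sphere
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXMatrixLookAtLH(&g_viewMatrix, &g_camPosition, &at, &up); // position + target -> view matrix
g_direct3DDevice9->SetTransform(D3DTS_VIEW, &g_viewMatrix);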