Raytracing camera rotates in the wrong direction - C++

I'm building a raytracer, and I followed this article on how to build the camera system.
The problem: after calculating the ray direction in camera space, I multiply it by the camera-to-world transformation matrix, and the camera rotates in the wrong (opposite) direction. It works correctly if I invert the transformation matrix before the multiplication.
Here is the code (I use the glm library and a right-handed coordinate system).
Initial data:
glm::vec3 origin_ = glm::vec3(0.f); // camera origin
const glm::vec3 kDirection = glm::vec3(0.f, 0.f, -1.f);
const glm::vec3 kUp = glm::vec3(0.f, 1.f, 0.f);
float aspect_ratio_ = (float)raster_height_ / raster_width_;
// bug !!! rotates in opposite direction (camera is actually tilted down)
glm::mat4 camera_to_world_ = glm::lookAtRH(origin_, glm::vec3(0.f, 0.2f, -1.f), kUp);
// works !!! (camera is tilted up)
glm::mat4 camera_to_world_ = glm::inverse(glm::lookAtRH(origin_, glm::vec3(0.f, 0.2f, -1.f), kUp));
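(From what I understand, glm::lookAtRH builds a view matrix, i.e. a world-to-camera transform, which would explain why inverting it gives a working camera-to-world matrix. A minimal sketch of that assumption:)
// assumption: lookAtRH maps world space into camera space (a view matrix)
glm::mat4 world_to_camera = glm::lookAtRH(origin_, glm::vec3(0.f, 0.2f, -1.f), kUp);
glm::mat4 camera_to_world_ = glm::inverse(world_to_camera); // camera-to-world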
And the function that generates camera rays:
// Calculate the ray as if the camera were located at (0,0,0), pointing in the negative z direction,
// then transform the ray direction to the desired place.
// x,y - pixel coordinates of the raster image
// calculated as if the raster image (screen) is 1.0 unit away from the origin (eye)
Ray Camera::GenRay(const uint32_t x, const uint32_t y) {
    glm::vec3 ray_direction = kDirection;
    // from raster space to NDC space
    glm::vec2 pixel_ndc((x + 0.5f) / raster_width_, (y + 0.5f) / raster_height_);
    // from NDC space to camera space
    float scale = tan(fov_ / 2.0f);
    ray_direction.x = (2.0f * pixel_ndc.x - 1.0f) * scale; // *aspect_ratio_;
    ray_direction.y = (1.0f - 2.0f * pixel_ndc.y) * scale * aspect_ratio_;
    // apply camera-to-world rotation matrix to direction (w = 0: rotate only, ignore translation)
    ray_direction = glm::vec3(camera_to_world_ * glm::vec4(ray_direction, 0.0f));
    return Ray(origin_, ray_direction, Ray::Type::kPrimary);
}
I really can't understand the root of the problem, so any help is appreciated.

Related

For mouse-click ray casting, why aren't my starting rays updating to my camera position after I move my camera?

When the camera is moved around, why are my starting rays still stuck at origin (0, 0, 0) even though the camera position has been updated?
It works fine if I start the program with my camera position at the default 0, 0, 0. But once I move my camera, for instance pan to the right, and click some more, the lines still come from 0, 0, 0 when they should start from wherever the camera is. Am I doing something terribly wrong? I've checked to make sure they're being updated in the main loop. I've used the code snippet below, referenced from:
picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone
// 1. Get mouse coordinates then normalize
float x = (2.0f * lastX) / width - 1.0f;
float y = 1.0f - (2.0f * lastY) / height;
// 2. Move from clip space to world space
glm::mat4 inverseWorldMatrix = glm::inverse(proj * view);
glm::vec4 near_vec = glm::vec4(x, y, -1.0f, 1.0f);
glm::vec4 far_vec = glm::vec4(x, y, 1.0f, 1.0f);
glm::vec4 startRay = inverseWorldMatrix * near_vec;
glm::vec4 endRay = inverseWorldMatrix * far_vec;
// perspective divide
startRay /= startRay.w;
endRay /= endRay.w;
glm::vec3 direction = glm::vec3(endRay - startRay);
// start the ray points from the camera position
glm::vec3 startPos = glm::vec3(camera.GetPosition());
glm::vec3 endPos = glm::vec3(startPos + direction * someLength);
In the first screenshot I click some rays; in the second I move my camera to the right and click some more, but the initial starting rays are still at 0, 0, 0. What I'm looking for is for the rays to come out from wherever the camera position is, i.e. the red rays in the third image. Sorry for the confusion: the red lines are supposed to shoot out into the distance, not up.
// and these are my matrices
// projection
glm::mat4 proj = glm::perspective(glm::radians(camera.GetFov()), (float)width / height, 0.1f, 100.0f);
// view
glm::mat4 view = camera.GetViewMatrix(); // This returns glm::lookAt(this->Position, this->Position + this->Front, this->Up);
// model
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f));
It's hard to tell where in the code the problem lies. But I use this function for ray casting, adapted from code from Scratchapixel and LearnOpenGL:
vec3 rayCast(double xpos, double ypos, mat4 projection, mat4 view) {
    // converts a position from the 2d xpos, ypos to a normalized 3d direction
    float x = (2.0f * xpos) / WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / HEIGHT;
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);
    // eye space to clip space is a multiply by the projection matrix, so
    // clip space to eye space is the inverse projection
    vec4 ray_eye = inverse(projection) * ray_clip;
    // convert the point to a forward direction
    ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
    // world space to eye space is a multiply by the view matrix, so
    // eye space to world space is the inverse view
    vec4 inv_ray_wor = (inverse(view) * ray_eye);
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}
where you can draw your line with startPos = camera.Position and endPos = camera.Position + rayCast(...) * scalar_amount.
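A minimal usage sketch (mouseX, mouseY and someLength are hypothetical placeholders, not from the original):
vec3 dir = rayCast(mouseX, mouseY, proj, view);
vec3 startPos = camera.Position;            // the ray starts at the camera
vec3 endPos = startPos + dir * someLength;  // and extends into the scene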

How do I change quaternion rotation to use the local camera axis rather than the world axis in DirectX 11

I'm currently trying to rotate the camera around its local axes based on keyboard/mouse input. The code I currently have uses DirectXMath and works nicely; however, it rotates around the world axes rather than the camera's local axes. Because of this, some of the rotations are not as expected and cause issues as the camera rotates. For example, when we tilt the camera, its Y axis changes, so we then want to rotate around a different axis to get the expected result.
What am I doing wrong in the code or what do I need to change in order to rotate around its local axis?
The parameters are vector.x, vector.y, vector.z (the axis to rotate around, e.g. (1.0f, 0.0f, 0.0f)) and an angle theta.
//define our camera matrix
XMFLOAT4X4 cameraMatrix;
//position, lookat, up values for the camera
XMFLOAT3 position;
XMFLOAT3 up;
XMFLOAT3 lookat;
void Camera::rotate(XMFLOAT3 vector, float theta) {
    XMStoreFloat4x4(&cameraMatrix, XMMatrixIdentity());
    //set our view quaternion to our current camera's lookat position
    XMVECTOR viewQuaternion = XMQuaternionIdentity();
    viewQuaternion = XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f);
    //set the rotation vector based on our parameter, i.e. (1.0f, 0.0f, 0.0f)
    //to rotate around the x axis
    XMVECTOR rotationVector = XMVectorSet(vector.x, vector.y, vector.z, 0.0f);
    //create a rotation quaternion to rotate around our vector, with a specified angle, theta
    XMVECTOR rotationQuaternion = XMVectorSet(
        XMVectorGetX(rotationVector) * sin(theta / 2),
        XMVectorGetY(rotationVector) * sin(theta / 2),
        XMVectorGetZ(rotationVector) * sin(theta / 2),
        cos(theta / 2));
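    //(note: assuming the axis is normalized, this hand-built quaternion is
    //equivalent to XMQuaternionRotationAxis(rotationVector, theta))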
    //get our rotation quaternion's inverse
    XMVECTOR rotationInverse = XMQuaternionInverse(rotationQuaternion);
    //new view quaternion = [ newView = ROTATION * VIEW * INVERSE ROTATION ]
    //multiply our rotation quaternion with our view quaternion
    XMVECTOR newViewQuaternion = XMQuaternionMultiply(rotationQuaternion, viewQuaternion);
    //multiply the result of our calculation above with the inverse rotation
    //to get our new view values
    newViewQuaternion = XMQuaternionMultiply(newViewQuaternion, rotationInverse);
    //take the new lookat values from our newViewQuaternion and put them into the camera
    lookat = XMFLOAT3(XMVectorGetX(newViewQuaternion), XMVectorGetY(newViewQuaternion), XMVectorGetZ(newViewQuaternion));
    //build our camera matrix using XMMatrixLookAtLH
    XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(
        XMVectorSet(position.x, position.y, position.z, 0.0f),
        XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
        XMVectorSet(up.x, up.y, up.z, 0.0f)));
}
The view matrix is then set:
//store our camera's matrix inside the view matrix
XMStoreFloat4x4(&_view, camera->getCameraMatrix() );
Edit:
I have tried an alternative solution without using quaternions, and I can get the camera to rotate correctly around its own axis. However, the camera's lookat values now never change, and after I stop using the mouse/keyboard, the camera snaps back to its original position.
void Camera::update(float delta) {
    XMStoreFloat4x4(&cameraMatrix, XMMatrixIdentity());
    //do we have a rotation?
    //this is set as we try to rotate, around a current axis such as
    //(1.0f, 0.0f, 0.0f)
    if (rotationVector.x != 0.0f || rotationVector.y != 0.0f || rotationVector.z != 0.0f) {
        //yes, we have an axis to rotate around
        //create our axis vector to rotate around
        XMVECTOR axisVector = XMVectorSet(rotationVector.x, rotationVector.y, rotationVector.z, 0.0f);
        //create our rotation matrix using XMMatrixRotationAxis, rotating around this axis by a specified angle theta
        XMMATRIX rotationMatrix = XMMatrixRotationAxis(axisVector, 2.0f * delta);
        //create our camera's view matrix
        XMMATRIX viewMatrix = XMMatrixLookAtLH(
            XMVectorSet(position.x, position.y, position.z, 0.0f),
            XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
            XMVectorSet(up.x, up.y, up.z, 0.0f));
        //multiply our camera's view matrix by the rotation matrix
        //make sure the rotation is on the right to ensure local axis rotation
        XMMATRIX finalCameraMatrix = viewMatrix * rotationMatrix;
        /* this piece of code allows the camera to correctly rotate and it doesn't
           snap back to its original position, as the lookat coordinates are being set
           each time. However, this will make the camera rotate around the world axis
           rather than the local axis. Which brings us to the same problem we had
           with the quaternion rotation */
        //XMVECTOR look = XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0);
        //XMVECTOR finalLook = XMVector3Transform(look, rotationMatrix);
        //lookat.x = XMVectorGetX(finalLook);
        //lookat.y = XMVectorGetY(finalLook);
        //lookat.z = XMVectorGetZ(finalLook);
        //finally store the finalCameraMatrix into our camera matrix
        XMStoreFloat4x4(&cameraMatrix, finalCameraMatrix);
    } else {
        //no rotation, don't apply the rotation matrix
        XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(
            XMVectorSet(position.x, position.y, position.z, 0.0f),
            XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
            XMVectorSet(up.x, up.y, up.z, 0.0f)));
    }
}
An example can be seen here: https://i.gyazo.com/f83204389551eff427446e06624b2cf9.mp4
I think I am missing setting the actual lookat value to the new lookat value, but I'm not sure how to calculate the new value, or how to extract it from the new view matrix (which I have already tried).
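For what it's worth, a minimal sketch of one common approach (my own assumption, not from the original post): derive the axis from the camera's current basis each frame, rotate both the look direction and the up vector with it, and store the results back so the rotation persists:
XMVECTOR pos = XMLoadFloat3(&position);
XMVECTOR dir = XMVector3Normalize(XMVectorSubtract(XMLoadFloat3(&lookat), pos)); // current view direction
XMVECTOR upV = XMLoadFloat3(&up);
//local right axis; for a left-handed basis this is cross(up, forward)
XMVECTOR right = XMVector3Normalize(XMVector3Cross(upV, dir));
//e.g. pitch: rotate around the *local* right axis instead of the world x axis
XMVECTOR q = XMQuaternionRotationAxis(right, theta);
dir = XMVector3Rotate(dir, q);
upV = XMVector3Rotate(upV, q);
XMStoreFloat3(&lookat, XMVectorAdd(pos, dir)); // write back so the rotation persists
XMStoreFloat3(&up, upV);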

Pixel-perfect projection matrix?

I'm trying to understand how far I should place the camera in the lookat function (or the object in the model matrix) to get pixel-perfect coordinates to pass to the vertex shader.
This is actually simple with orthographic projection matrices, but I fail to visualize how the math would work for perspective projection.
Here's the perspective matrix I'm using:
glm::mat4 projection = glm::perspective(45.0f, (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 10000.0f);
vertex multiplication in the shader is as simple as:
gl_Position = projection * view * model * vec4(position.xy, 0.0f, 1.0);
I'm basically trying to show a quad on screen that needs to be rotated and show perspective effects (hence why I can't use orthographic projection), but I'd like to specify in pixel coordinates where and how big it should appear on screen.
Well, it can only have pixel coordinates in one z-plane if you want to use a trapezoidal view frustum.
Basic Math
If you use a standard camera, the basic math for a camera at (0,0,0), with alpha being the vertical FOV (45° in your case), would be:
target_y = tan(alpha/2) * z_distance * ((pixel_y/height)*2 - 1)
target_x = tan(alpha/2) * z_distance * ((pixel_x/width)*2 - 1) * aspect_ratio
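A quick numeric check of those formulas (my own example values): with alpha = 45°, z_distance = 10 and a pixel at the top edge (pixel_y = height), the inner term is (height/height)*2 - 1 = 1, so:
float target_y = tanf(glm::radians(45.0f) / 2.0f) * 10.0f * 1.0f; // ≈ 4.14 world units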
Reversing projection
As for the general case: you can "un-project" to find where a 3D point has to be, before all the transforms, in order to end up at a specific point on screen. Basically you need to undo the math.
gl_Position = projection * view * model * vec4(position.xy, 0.0f, 1.0);
So if you have your final position and want to revert it you do:
unprojection = model^-1 * view^-1 * projection^-1 * gl_Position // not actual GLSL notation, '^-1' being the inverse
This is basically what functions like gluUnProject or glm::gtc::matrix_transform::unProject do.
But you should note that the final clip-space after you apply the projection matrix is typically [-1,-1,0] to [1,1,1], so if you want to enter pixel coordinates you can apply an additional matrix to transform into that space.
Something like:
screenToClip = [ 2/width, 0,        0, -1 ]
               [ 0,       2/height, 0, -1 ]
               [ 0,       0,        1,  0 ]
               [ 0,       0,        0,  1 ]
would transform [0,0,0,1] to [-1,-1,0,1] and [width,height,0,1] to [1,1,0,1]
Also, you're probably best off trying some z-value like 0.5 to make sure that you're well within the view frustum and not clipping near the front or back.
You can achieve this effect with a 60 degree field of view. Basically you want to place the camera at a distance from the viewing plane such that the camera forms an equilateral triangle with the center points of the top and bottom edges of the screen.
Here's some code to do that:
float fovy = 60.0f; // vertical field of view - degrees
float aspect = nScreenWidth / nScreenHeight;
float zNearClip = 0.1f;
float zFarClip = nScreenHeight * 2.0f;
float degToRad = MF_PI / 180.0f;
float fH = tanf(fovy * degToRad / 2.0f) * zNearClip;
float fW = fH * aspect;
glFrustum(-fW, fW, -fH, fH, zNearClip, zFarClip);
float nCameraDistance = sqrtf(nScreenHeight * nScreenHeight - 0.25f * nScreenHeight * nScreenHeight);
glTranslatef(0, 0, -nCameraDistance);
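(That distance works out to sqrt(3)/2 ≈ 0.866 times the screen height: the height of an equilateral triangle whose side is nScreenHeight.)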
You can also use a 90 degree fov. In that case the camera distance is 1/2 the height of the window. However, this has a lot of foreshortening.
In the 90 degree case, you could instead push the camera out by the full height, but then apply a 2x scale to the x and y components (i.e. glScalef(2.0f, 2.0f, 1.0f)).
I'll extend PeterT's answer and leave here the practical code I used to find the world coordinates of one of the frustum's planes through unprojection. This assumes a basic view matrix (camera at (0,0,0)):
glm::mat4 projection = glm::perspective(45.0f, (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 500.0f);
glm::mat4 projectionInv = glm::inverse(projection);
std::vector<glm::vec4> NDCCube;
NDCCube.push_back(glm::vec4(-1.0f, -1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, -1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, -1.0f, 1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f, -1.0f, 1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f, 1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, 1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f, 1.0f, 1.0f, 1.0f));
std::vector<glm::vec3> frustumVertices;
for (int i = 0; i < 8; i++)
{
    // multiply by the inverse projection matrix, then perspective-divide
    // to obtain the frustum vertex
    glm::vec4 tempvec = projectionInv * NDCCube.at(i);
    tempvec /= tempvec.w;
    frustumVertices.push_back(glm::vec3(tempvec));
}
Keep in mind that these coordinates would not end up on screen if your perspective far distance is lower than the one I set in the projection matrix.
If you happen to know the world-coordinate width of "some item" that you want to display pixel-exact, this ends up being a bit of trivial trigonometry (works for either the y FOV or the x FOV):
S = width of the item in world coordinates
T = "pixel exact" size of the item (say, the width of its texture in pixels)
h = Z distance to the object
r = total screen resolution (width or height, depending on which FOV you want)
The frustum width at distance h is a = 2 * h * tan(Phi / 2), and for the item to be pixel exact it must also equal (r / T) * S, so:
a = 2 * h * tan(Phi / 2) = (r / T) * S
Theta = atan(2 * h / a)
Phi = 180 - 2 * Theta
where a is the base of your triangle, b = h / cos(Phi / 2) are its two equal sides, h is its height, Theta is each of the two equal base angles of the isosceles triangle, and Phi is the resulting FOV.
So the end code might look something like:
float frustumWidth = (float(ScreenWidth) / TextureWidth) * InWorldItemWidth;
float theta = glm::degrees(atan((2 * zDistance) / frustumWidth));
float PixelPerfectFOV = 180 - 2 * theta;
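A quick sanity check with made-up numbers (my example, not from the answer): with ScreenWidth = TextureWidth = 800, InWorldItemWidth = 2 and zDistance = 1, frustumWidth = 2, theta = atan(2/2) = 45°, and PixelPerfectFOV = 90°. That checks out: with a 90° FOV the frustum half-width at z = 1 is tan(45°) = 1, i.e. a full width of 2.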

Camera/View matrix

After reading through this article (http://3dgep.com/?p=1700), it seems I have my view matrix wrong. Here's how I compute it:
Mat4 Camera::Orientation() const
{
    Quaternion rotation;
    rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
    return glm::toMat4(rotation);
}
Mat4 Camera::GetViewMatrix() const
{
    return Orientation() * glm::translate(Mat4(1.0f), -mTranslation);
}
Supposedly, I am to invert this resulting matrix, but I have not done so, it has worked excellently thus far, and I'm not doing any inverting further down the pipeline either. Is there something I am missing here?
You already did the inversion. The view matrix is the inverse of the model transformation that positions the camera. This is:
ModelCamera = Translation(position) * Rotation
So the inverse is:
ViewMatrix = (Translation(position) * Rotation)^-1
= Rotation^-1 * Translation(position)^-1
The translation is inverted by negating the offset:
= Rotation^-1 * Translation(-position)
This leaves us with inverting the rotation. We can assume that the rotation your Orientation() returns is already the inverted rotation:
Rotation^-1 = RotationX(verticalAngle) * RotationY(horizontalAngle)
Thus, the original rotation of the camera model is
Rotation = (RotationX(verticalAngle) * RotationY(horizontalAngle))^-1
         = RotationY(horizontalAngle)^-1 * RotationX(verticalAngle)^-1
         = RotationY(-horizontalAngle) * RotationX(-verticalAngle)
So the angles you specify are actually the inverted angles that would rotate the camera. If you increase horizontalAngle, the camera should turn to the right (assuming a right-handed coordinate system). That's just a matter of definitions.
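To make that concrete, a small sanity check in the question's own notation (my sketch, not from the original answer; Orientation() as defined above returns Rotation^-1):
Mat4 Rinv = Orientation(); // Rotation^-1, as derived above
Mat4 modelCamera = glm::translate(Mat4(1.0f), mTranslation) * glm::inverse(Rinv);
Mat4 view = Rinv * glm::translate(Mat4(1.0f), -mTranslation); // == GetViewMatrix()
// view equals glm::inverse(modelCamera) up to floating-point error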

Unprojecting 2d screen position into 3d world space

I am using the glm maths library for the following problem: converting a 2d screen position into 3d world space.
In an attempt to track down the problem, I have simplified the code to the following:
float screenW = 800.0f;
float screenH = 600.0f;
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, screenW, screenH);
glm::mat4 tmpView(1.0f);
glm::mat4 tmpProj = glm::perspective( 90.0f, screenW/screenH, 0.1f, 100000.0f);
glm::vec3 screenPos = glm::vec3(0.0f, 0.0f, 1.0f);
glm::vec3 worldPos = glm::unProject(screenPos, tmpView, tmpProj, viewport);
Now with glm::unProject in this case I would expect worldPos to be (0, 0, 1). However, it comes through as (127100.12, -95325.094, -95325.094).
Am I misunderstanding what glm::unProject is supposed to do? I have traced through the function and it seems to be working OK.
The Z component in screenPos corresponds to the values in the depth buffer. So 0.0f is the near clip plane and 1.0f is the far clip plane.
If you want to find the world pos that is one unit away from the screen, you can rescale the vector:
worldPos = worldPos / (worldPos.z * -1.f);
Note also that a screenPos of 0,0 designates the bottom-left corner of the screen, while in world space 0,0 is the center of the screen. So 0,0,1 should give you (-1.3333, -1, -1), and 400,300,1 should give you (0, 0, -1).
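A minimal sketch of that rescaling with the setup above (my own check, assuming the identity view matrix from the question):
glm::vec3 worldPos = glm::unProject(glm::vec3(0.0f, 0.0f, 1.0f), tmpView, tmpProj, viewport);
// bring the far-plane point to one unit in front of the camera (z = -1 in view
// space, which here equals world space since tmpView is identity)
worldPos /= (worldPos.z * -1.0f);
// expect roughly (-1.3333f, -1.0f, -1.0f) for the 90° fov at 800x600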