Total internal reflection bugs in "Ray Tracing in One Weekend" - C++

I am following "Ray Tracing in One Weekend" to build a ray tracer on my own. Everything was fine until I reached the dielectric material.
Refraction works well (I am not completely sure; see images 3 and 4), but when I add total internal reflection, the sphere gets a black edge. The images are listed below:
img1 - black edge from total internal reflection
img2 - black edge from total internal reflection
img3 - dielectric without total internal reflection
img4 - dielectric without total internal reflection
My analysis
I debugged my program and found that total internal reflection happens at the edge of the sphere, and the ray bounces infinitely inside the sphere until it exceeds the bounce limit, so it returns (0.f, 0.f, 0.f) for the result color.
I don't think this infinite internal reflection is right, but I have compared my code with the one in the book and could not find any difference.
The scatter method is here:
bool Dilectric::scatter(const Ray& input, const HitRecord& rec, glm::vec3& attenuation, Ray& scatterRay) const
{
    glm::vec3 dir;
    if (rec.frontFace)
    {
        // ray is from air to inside surface, only refraction happens
        float ratio = 1.f / m_refractIndex;
        dir = GfxLib::refract(glm::normalize(input.direction()), rec.n, ratio);
    }
    else
    {
        // ray is from inside surface to air, need to think of total internal reflection
        float ratio = m_refractIndex;
        float cosTheta = std::fmin(glm::dot(-input.direction(), rec.n), 1.f);
        float sinTheta = std::sqrt(1.f - cosTheta * cosTheta);
        bool internalReflection = (ratio * sinTheta) > 1.f;
        if (internalReflection)
        {
            dir = GfxLib::reflect(glm::normalize(input.direction()), rec.n);
        }
        else
        {
            dir = GfxLib::refract(glm::normalize(input.direction()), rec.n, ratio);
        }
    }
    scatterRay.setOrigin(rec.pt);
    scatterRay.setDirection(dir);
    // m_albedo is set to vec3(1.f)
    attenuation = m_albedo;
    return true;
}
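For comparison, here is a sketch of the book's scatter logic, reconstructed from its dielectric listing and adapted to the types above (it assumes rec.n follows the book's convention of always pointing against the incident ray). Note that the book normalizes the incoming direction before computing cos θ and applies the same TIR test on both sides of the surface:
// Sketch of the book's version, adapted to the types used above.
// cosTheta is computed from the *normalized* direction, and the
// front-face-adjusted ratio feeds one TIR test for both cases.
bool Dilectric::scatter(const Ray& input, const HitRecord& rec, glm::vec3& attenuation, Ray& scatterRay) const
{
    float ratio = rec.frontFace ? (1.f / m_refractIndex) : m_refractIndex;
    glm::vec3 unitDir = glm::normalize(input.direction());
    float cosTheta = std::fmin(glm::dot(-unitDir, rec.n), 1.f);
    float sinTheta = std::sqrt(1.f - cosTheta * cosTheta);
    bool cannotRefract = (ratio * sinTheta) > 1.f;
    glm::vec3 dir = cannotRefract ? GfxLib::reflect(unitDir, rec.n)
                                  : GfxLib::refract(unitDir, rec.n, ratio);
    scatterRay.setOrigin(rec.pt);
    scatterRay.setDirection(dir);
    attenuation = m_albedo;
    return true;
}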
The outer method, rayColor, is here:
glm::vec3 RrtTest::rayColor(const Ray& ray, const HittableList& objList, int reflectDepth)
{
    if (reflectDepth <= 0) return glm::vec3(0.f);
    HitRecord rec;
    // use 0.001 instead of 0.f to fix shadow acne
    if (objList.hit(ray, 0.001f, FLT_MAX, rec) && rec.hitInd >= 0)
    {
        Ray scatterRay;
        glm::vec3 attenu{ 1.f };
        std::shared_ptr<Matl> mat = objList.at(rec.hitInd)->getMatl();
        if (!mat->scatter(ray, rec, attenu, scatterRay))
            return glm::vec3(0.f);
        glm::vec3 retColor = rayColor(scatterRay, objList, --reflectDepth);
        return attenu * retColor;
    }
    else
    {
        glm::vec3 startColor{ 1.f }, endColor{ 0.5f, 0.7f, 1.f };
        float t = (ray.direction().y + 1.f) * 0.5f;
        return GfxLib::blend(startColor, endColor, t);
    }
}
The reflect method is here:
glm::vec3 GfxLib::reflect(const glm::vec3& directionIn, const glm::vec3& n)
{
    float b = glm::dot(directionIn, n);
    glm::vec3 refDir = directionIn - 2 * b * n;
    return glm::normalize(refDir);
}
However, I am not sure whether my analysis is right. Can anyone lend me a hand and point me to a solution? The main logic of the rendering is here (including the whole demo).
I will really appreciate your advice!

Related

Sphere/Plane collision detection

I am trying to get a sphere and a plane to collide properly. With the below code, they do collide, but only for a little while. I am at a loss when it comes to planes, so there must be something I have overlooked.
What I am trying to discern with the collision (which takes place during the narrowphase) is the depth, the normal and the point of contact. These are used later when resolving impulse etc.
The problem is that distance becomes < 0 after only a few iterations (the sphere is rolling, and then just goes under the plane).
auto distance = glm::dot(sphere->GetPosition() - glm::vec3(planeCollider->GetDistance()), planeCollider->GetNormal());
auto normal = planeCollider->GetNormal() * distance;
if (distance < 0.0f)
{
    return false;
}
if (distance == 0.0f)
{
    penetrationDepth = sphereCollider->GetRadius();
    contactNormal = glm::vec3(0.0f, 1.0f, 0.0f);
    contactPoint = bodyA->GetPosition();
}
else
{
    penetrationDepth = sphereCollider->GetRadius() - distance;
    contactNormal = normal;
    contactPoint = normal * (distance - sphereCollider->GetRadius());
}
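For reference, a common way to compute the depth, normal and contact point looks like the sketch below. It assumes GetNormal() returns a unit normal n and GetDistance() the plane's scalar offset d, so the plane satisfies dot(n, x) = d (both are assumptions, since those accessors aren't shown):
// Sketch of a standard sphere/plane contact, under the assumptions above.
glm::vec3 n = planeCollider->GetNormal();
float signedDist = glm::dot(sphere->GetPosition(), n) - planeCollider->GetDistance();
if (signedDist > sphereCollider->GetRadius())
{
    return false; // sphere entirely on the positive side, no contact
}
penetrationDepth = sphereCollider->GetRadius() - signedDist;
contactNormal = n;                                      // unit normal, not scaled by distance
contactPoint = sphere->GetPosition() - n * signedDist;  // center projected onto the plane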

Ray Tracer Shadow Problems

I am working on a ray tracer, but I have been stuck for days on the shadow part.
My shadow is acting really weird. Here is an image of the ray tracer:
The black part should be the shadow.
The origin of the ray is always (0.f, -10.f, -500.f), because this is a perspective projection and that is the eye of the camera. When the ray hits a plane, the hit point is always the origin of the ray, but with the sphere it is different, because it is based on the position of the sphere. There is never an intersection between the plane and the sphere because their origins are hugely different.
I also tried to add a shadow on a box, but this doesn't work either. The shadow between two spheres does work!
If someone wants to see the intersection code, let me know.
Thanks for taking the time to help me!
Camera
Camera::Camera(float a_fFov, const Dimension& a_viewDimension, vec3 a_v3Eye, vec3 a_v3Center, vec3 a_v3Up) :
    m_fFov(a_fFov),
    m_viewDimension(a_viewDimension),
    m_v3Eye(a_v3Eye),
    m_v3Center(a_v3Center),
    m_v3Up(a_v3Up)
{
    // Calculate the x, y and z axis
    vec3 v3ViewDirection = (m_v3Eye - m_v3Center).normalize();
    vec3 v3U = m_v3Up.cross(v3ViewDirection).normalize();
    vec3 v3V = v3ViewDirection.cross(v3U);
    // Calculate the aspect ratio of the screen
    float fAspectRatio = static_cast<float>(m_viewDimension.m_iHeight) /
                         static_cast<float>(m_viewDimension.m_iWidth);
    float fViewPlaneHalfWidth = tanf(m_fFov / 2.f);
    float fViewPlaneHalfHeight = fAspectRatio * fViewPlaneHalfWidth;
    // The bottom left of the plane
    m_v3ViewPlaneBottomLeft = m_v3Center - v3V * fViewPlaneHalfHeight - v3U * fViewPlaneHalfWidth;
    // The amount we need to increment to get the direction. The width and height are based on the field of view.
    m_v3IncrementX = (v3U * 2.f * fViewPlaneHalfWidth);
    m_v3IncrementY = (v3V * 2.f * fViewPlaneHalfHeight);
}

Camera::~Camera()
{
}

const Ray Camera::GetCameraRay(float iPixelX, float iPixelY) const
{
    vec3 v3Target = m_v3ViewPlaneBottomLeft + m_v3IncrementX * iPixelX + m_v3IncrementY * iPixelY;
    vec3 v3Direction = (v3Target - m_v3Eye).normalize();
    return Ray(m_v3Eye, v3Direction);
}
Camera setup
Scene::Scene(const Dimension& a_Dimension) :
    m_Camera(1.22173f, a_Dimension, vec3(0.f, -10.f, -500.f), vec3(0.f, 0.f, 0.f), vec3(0.f, 1.f, 0.f))
{
    // Setup sky light
    Color ambientLightColor(0.2f, 0.1f, 0.1f);
    m_AmbientLight = new AmbientLight(0.1f, ambientLightColor);
    // Setup shapes
    CreateShapes();
    // Setup lights
    CreateLights();
    // Setup bias
    m_fBias = 1.f;
}
Scene objects
Sphere* sphere2 = new Sphere();
sphere2->SetRadius(50.f);
sphere2->SetCenter(vec3(0.f, 0.f, 0.f));
sphere2->SetMaterial(matte3);
Plane* plane = new Plane(true);
plane->SetNormal(vec3(0.f, 1.f, 0.f));
plane->SetPoint(vec3(0.f, 0.f, 0.f));
plane->SetMaterial(matte1);
Scene light
PointLight* pointLight1 = new PointLight(1.f, Color(0.1f, 0.5f, 0.7f), vec3(0.f, -200.f, 0.f), 1.f, 0.09f, 0.032f);
Shade function
for (const Light* light : a_Lights) {
    vec3 v3LightDirection = (light->m_v3Position - a_Contact.m_v3Hitpoint).normalized();
    light->CalcDiffuseLight(a_Contact.m_v3Point, a_Contact.m_v3Normal, m_fKd, lightColor);
    Ray lightRay(a_Contact.m_v3Point + a_Contact.m_v3Normal * a_fBias, v3LightDirection);
    bool test = a_RayTracer.ShadowTrace(lightRay, a_Shapes);
    vec3 normTest = a_Contact.m_v3Normal;
    float test2 = normTest.dot(v3LightDirection);
    // No shadow
    if (!test) {
        a_ResultColor += lightColor * !test * test2;
    }
    else {
        a_ResultColor = Color(); // Test code - change color to black.
    }
}
You have several bugs:
In Sphere::Collides, m_fCollisionTime is not set when t2 >= t1.
In Sphere::Collides, if m_fCollisionTime is negative, the ray actually doesn't intersect the sphere (this causes the strange shadow on top of the ball).
Put the plane lower, and you'll see the shadow of the ball.
You need to check for the nearest collision when shooting a ray from the eye (just try it: swap the order of the objects, and the sphere suddenly ends up behind the plane); see the sketch below.
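A minimal sketch of that nearest-hit loop, assuming each shape can report its hit distance t along the ray (the Collides signature here is hypothetical):
// Keep the closest positive hit instead of the first shape that reports one.
const Shape* closest = nullptr;
float tClosest = FLT_MAX;
for (const Shape* shape : a_Shapes)
{
    float t;
    if (shape->Collides(ray, t) && t > 0.f && t < tClosest)
    {
        tClosest = t;
        closest = shape;
    }
}
// Shade using 'closest' (if any), not the first hit found.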
With these fixed, you'll get this:

Transparent sphere is mostly black after implementing refraction in a ray tracer

NOTE: I've edited my code. See below the divider.
I'm implementing refraction in my (fairly basic) ray tracer, written in C++. I've been following (1) and (2).
I get the result below. Why is the center of the sphere black?
The center sphere has a transmission coefficient of 0.9 and a reflective coefficient of 0.1. Its index of refraction is 1.5 and it is placed 1.5 units away from the camera. The other two spheres just use diffuse lighting, with no reflective/refractive component. I placed these two differently coloured spheres behind and in front of the transparent sphere to ensure that I don't see a reflection instead of a transmission.
I've made the background colour (the colour achieved when a ray from the camera does not intersect with any object) a colour other than black, so the center of the sphere is not just the background colour.
I have not implemented the Fresnel effect yet.
My trace function looks like this (verbatim copy, with some parts omitted for brevity):
bool isInside(Vec3f rayDirection, Vec3f intersectionNormal) {
    return dot(rayDirection, intersectionNormal) > 0;
}

Vec3f trace(Vec3f origin, Vec3f ray, int depth) {
    // (1) Find object intersection
    std::shared_ptr<SceneObject> intersectionObject = ...;
    // (2) Compute diffuse and ambient color contribution
    Vec3f color = ...;

    bool isTotalInternalReflection = false;
    if (intersectionObject->mTransmission > 0 && depth < MAX_DEPTH) {
        Vec3f transmissionDirection = refractionDir(
            ray,
            normal,
            1.5f,
            isTotalInternalReflection
        );
        if (!isTotalInternalReflection) {
            float bias = 1e-4 * (isInside(ray, normal) ? -1 : 1);
            Vec3f transmissionColor = trace(
                add(intersection, multiply(normal, bias)),
                transmissionDirection,
                depth + 1
            );
            color = add(
                color,
                multiply(transmissionColor, intersectionObject->mTransmission)
            );
        }
    }

    if (intersectionObject->mSpecular > 0 && depth < MAX_DEPTH) {
        Vec3f reflectionDirection = computeReflectionDirection(ray, normal);
        Vec3f reflectionColor = trace(
            add(intersection, multiply(normal, 1e-5)),
            reflectionDirection,
            depth + 1
        );
        float intensity = intersectionObject->mSpecular;
        if (isTotalInternalReflection) {
            intensity += intersectionObject->mTransmission;
        }
        color = add(
            color,
            multiply(reflectionColor, intensity)
        );
    }

    return truncate(color, 1);
}
If the object is transparent then it computes the direction of the transmission ray and recursively traces it, unless the refraction causes total internal reflection. In that case, the transmission component is added to the reflection component and thus the color will be 100% of the traced reflection color.
I add a little bias to the intersection point in the direction of the normal (inverted if inside) when recursively tracing the transmission ray. If I don't do that, then I get this result:
The computation for the direction of the transmission ray is performed in refractionDir. This function assumes that we will not have a transparent object inside another, and that the outside material is air, with a coefficient of 1.
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
    float relativeIndexOfRefraction = 1.0f / refractionIndex;
    float cosi = -dot(ray, normal);
    if (isInside(ray, normal)) {
        // We should be reflecting across a normal inside the object, so
        // re-orient the normal to be inside.
        normal = multiply(normal, -1);
        relativeIndexOfRefraction = refractionIndex;
        cosi *= -1;
    }
    assert(cosi > 0);
    float base = (
        1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) *
        (1 - cosi * cosi)
    );
    if (base < 0) {
        isTotalInternalReflection = true;
        return ray;
    }
    return add(
        multiply(ray, relativeIndexOfRefraction),
        multiply(normal, relativeIndexOfRefraction * cosi - sqrtf(base))
    );
}
Here's the result when the spheres are further away from the camera:
And closer to the camera:
Edit: I noticed a couple of bugs in my code.
When I add bias to the intersection point, it should be in the same direction as the transmission. I was adding it in the wrong direction by adding a negative bias when inside the sphere. This doesn't make sense: when the ray is coming from inside the sphere, it will transmit to the outside of the sphere (when TIR is avoided).
Old code:
add(intersection, multiply(normal, bias))
New code:
add(intersection, multiply(transmissionDirection, 1e-4))
Similarly, the normal that refractionDir receives is the surface normal pointing away from the center of the sphere. The normal I want to use when computing the transmission direction is one pointing outside if the transmission ray is going to go outside the object, or inside if the transmission ray is going to go inside the object. Thus, the surface normal pointing out of the sphere should be inverted if we're entering the sphere, that is, if the ray is outside.
New code:
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
    float relativeIndexOfRefraction;
    float cosi = -dot(ray, normal);
    if (isInside(ray, normal)) {
        relativeIndexOfRefraction = refractionIndex;
        cosi *= -1;
    } else {
        relativeIndexOfRefraction = 1.0f / refractionIndex;
        normal = multiply(normal, -1);
    }
    assert(cosi > 0);
    float base = (
        1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) * (1 - cosi * cosi)
    );
    if (base < 0) {
        isTotalInternalReflection = true;
        return ray;
    }
    return add(
        multiply(ray, relativeIndexOfRefraction),
        multiply(normal, sqrtf(base) - relativeIndexOfRefraction * cosi)
    );
}
However, this all still gives me an unexpected result:
I've also added some unit tests. They pass the following:
A ray entering the center of the sphere parallel with the normal will transmit through the sphere without being bent (this tests two refractionDir calls, one outside and one inside).
Refraction at 45 degrees from the normal through a glass slab will bend inside the slab by 15 degrees towards the normal, away from the original ray direction. Its direction when it exits the sphere will be the original ray direction.
Similar test at 75 degrees.
Ensuring that total internal reflection happens when a ray is coming from inside the object and is at 45 degrees or wider.
I'll include one of the unit tests here and you can find the rest at this gist.
TEST_CASE("Refraction at 75 degrees from normal through glass slab") {
    Vec3f rayDirection = normalize(Vec3f({ 0, -sinf(5.0f * M_PI / 12.0f), -cosf(5.0f * M_PI / 12.0f) }));
    Vec3f normal({ 0, 0, 1 });
    bool isTotalInternalReflection;

    Vec3f refraction = refractionDir(rayDirection, normal, 1.5f, isTotalInternalReflection);
    REQUIRE(refraction[0] == 0);
    REQUIRE(refraction[1] == Approx(-sinf(40.0f * M_PI / 180.0f)).margin(0.03f));
    REQUIRE(refraction[2] == Approx(-cosf(40.0f * M_PI / 180.0f)).margin(0.03f));
    REQUIRE(!isTotalInternalReflection);

    refraction = refractionDir(refraction, multiply(normal, -1), 1.5f, isTotalInternalReflection);
    REQUIRE(refraction[0] == Approx(rayDirection[0]));
    REQUIRE(refraction[1] == Approx(rayDirection[1]));
    REQUIRE(refraction[2] == Approx(rayDirection[2]));
    REQUIRE(!isTotalInternalReflection);
}

Raycasting (Mouse Picking) while using a Perspective vs. Orthographic Projection in OpenGL

I am struggling to understand how to change my algorithm to handle raycasting (utilized for MousePicking) using a Perspective projection and an Orthographic projection.
Currently I have a scene with 3D objects that have AxisAligned bounding boxes attached to them.
While rendering the scene using a perspective projection (created with glm::perspective) I can successfully use raycasting and my mouse to "pick" different objects in my scene. Here is a demonstration.
If I render the same scene using an orthographic projection, with the camera positioned above the scene facing down (looking down the Y axis, like a level editor for a game), I am unable to correctly cast a ray from where the user clicks on the screen, so I cannot get MousePicking working while rendering with an orthographic projection. Here is a demonstration of it not working.
My algorithm at a high level:
auto const coords = mouse.coords();
glm::vec2 const mouse_pos{coords.x, coords.y};
glm::vec3 ray_dir, ray_start;
if (perspective) { // This "works"
    auto const ar = aspect_rate;
    auto const fov = field_of_view;
    glm::mat4 const proj_matrix = glm::perspective(fov, ar, f.near, f.far);
    auto const& target_pos = camera.target.get_position();
    glm::mat4 const view_matrix = glm::lookAt(target_pos, target_pos, glm::vec3{0, -1, 0});
    ray_dir = Raycast::calculate_ray_into_screen(mouse_pos, proj_matrix, view_matrix, view_rect);
    ray_start = camera.world_position();
}
else if (orthographic) { // This "doesn't work"
    glm::vec3 const POS = glm::vec3{50};
    glm::vec3 const FORWARD = glm::vec3{0, -1, 0};
    glm::vec3 const UP = glm::vec3{0, 0, -1};
    // 1024, 768 with NEAR 0.001 and FAR 10000
    //glm::mat4 proj_matrix = glm::ortho(0, 1024, 0, 768, 0.0001, 10000);
    glm::mat4 proj_matrix = glm::ortho(0, 1024, 0, 768, 0.0001, 100);
    // Look down at the scene from above
    glm::mat4 view_matrix = glm::lookAt(POS, POS + FORWARD, UP);
    // convert the mouse screen coordinates into world coordinates for the cube/ray test
    auto const p0 = screen_to_world(mouse_pos, view_rect, proj_matrix, view_matrix, 0.0f);
    auto const p1 = screen_to_world(mouse_pos, view_rect, proj_matrix, view_matrix, 1.0f);
    ray_start = p0;
    ray_dir = glm::normalize(p1 - p0);
}
bool const intersects = ray_intersects_cube(logger, ray_dir, ray_start,
                                            eid, tr, cube, distances);
In perspective mode, we cast a ray into the scene and see if it intersects with the cube surrounding the object.
In orthographic mode, I'm casting two rays from the screen (one at z=0, the other at z=1) and creating a ray between those two points. I set the ray start point to where the mouse pointer is (with z=0) and use the ray direction just calculated as inputs into the same ray_cube_intersection algorithm.
My question is this:
Since the MousePicking works using the Perspective projection, but not using an Orthographic projection:
Is it reasonable to assume the same ray_cube intersection algorithm can be used with a perspective/orthographic projection?
Is my thinking about setting the ray_start and ray_dir variables in the orthographic case correct?
Here is the source for the ray/cube collision algorithm in use.
glm::vec3
Raycast::calculate_ray_into_screen(glm::vec2 const& point, glm::mat4 const& proj,
                                   glm::mat4 const& view, Rectangle const& view_rect)
{
    // When doing mouse picking, we want our ray to be pointed "into" the screen
    float constexpr Z = -1.0f;
    return screen_to_world(point, view_rect, proj, view, Z);
}

bool
ray_cube_intersect(Ray const& r, Transform const& transform, Cube const& cube,
                   float& distance)
{
    auto const& cubepos = transform.translation;
    glm::vec3 const minpos = cube.min * transform.scale;
    glm::vec3 const maxpos = cube.max * transform.scale;
    std::array<glm::vec3, 2> const bounds{{minpos + cubepos, maxpos + cubepos}};

    float txmin = (bounds[    r.sign[0]].x - r.orig.x) * r.invdir.x;
    float txmax = (bounds[1 - r.sign[0]].x - r.orig.x) * r.invdir.x;
    float tymin = (bounds[    r.sign[1]].y - r.orig.y) * r.invdir.y;
    float tymax = (bounds[1 - r.sign[1]].y - r.orig.y) * r.invdir.y;

    if ((txmin > tymax) || (tymin > txmax)) {
        return false;
    }
    if (tymin > txmin) {
        txmin = tymin;
    }
    if (tymax < txmax) {
        txmax = tymax;
    }

    float tzmin = (bounds[    r.sign[2]].z - r.orig.z) * r.invdir.z;
    float tzmax = (bounds[1 - r.sign[2]].z - r.orig.z) * r.invdir.z;

    if ((txmin > tzmax) || (tzmin > txmax)) {
        return false;
    }
    distance = tzmin;
    return true;
}
Edit: the math space conversion functions I'm using:
namespace boomhs::math::space_conversions
{
inline glm::vec4
clip_to_eye(glm::vec4 const& clip, glm::mat4 const& proj_matrix, float const z)
{
    auto const inv_proj = glm::inverse(proj_matrix);
    glm::vec4 const eye_coords = inv_proj * clip;
    return glm::vec4{eye_coords.x, eye_coords.y, z, 0.0f};
}

inline glm::vec3
eye_to_world(glm::vec4 const& eye, glm::mat4 const& view_matrix)
{
    glm::mat4 const inv_view = glm::inverse(view_matrix);
    glm::vec4 const ray = inv_view * eye;
    glm::vec3 const ray_world = glm::vec3{ray.x, ray.y, ray.z};
    return glm::normalize(ray_world);
}

inline constexpr glm::vec2
screen_to_ndc(glm::vec2 const& scoords, Rectangle const& view_rect)
{
    float const x = ((2.0f * scoords.x) / view_rect.right()) - 1.0f;
    float const y = ((2.0f * scoords.y) / view_rect.bottom()) - 1.0f;

    auto const assert_fn = [](float const v) {
        assert(v <= 1.0f);
        assert(v >= -1.0f);
    };
    assert_fn(x);
    assert_fn(y);
    return glm::vec2{x, -y};
}

inline glm::vec4
ndc_to_clip(glm::vec2 const& ndc, float const z)
{
    return glm::vec4{ndc.x, ndc.y, z, 1.0f};
}

inline glm::vec3
screen_to_world(glm::vec2 const& scoords, Rectangle const& view_rect, glm::mat4 const& proj_matrix,
                glm::mat4 const& view_matrix, float const z)
{
    glm::vec2 const ndc = screen_to_ndc(scoords, view_rect);
    glm::vec4 const clip = ndc_to_clip(ndc, z);
    glm::vec4 const eye = clip_to_eye(clip, proj_matrix, z);
    glm::vec3 const world = eye_to_world(eye, view_matrix);
    return world;
}
} // namespace boomhs::math::space_conversions
I worked on this for several days because I ran into the same problem.
The unproject methods that we are used to working with work 100% correctly here as well - even with an orthographic projection. But with an orthographic projection, the direction vector going from the camera position into the screen is always the same, so unprojecting the cursor in the same way does not work as intended in this case.
What you want to do is keep the camera direction vector as it is, but in order to get the ray origin you need to shift the camera position according to the current mouse position on screen.
My approach (C#, but you'll get the idea):
Vector3 worldUpDirection = new Vector3(0, 1, 0); // if your world is y-up
// Get mouse coordinates (2d) relative to window position:
Vector2 mousePosRelativeToWindow = GetMouseCoordsRelativeToWindow(); // (0,0) would be top left window corner
// get camera direction vector:
Vector3 camDirection = Vector3.Normalize(cameraTarget - cameraPosition);
// get x and y coordinates relative to frustum width and height.
// glOrthoWidth and glOrthoHeight are the sizeX and sizeY values
// you created your projection matrix with. If your frustum has a width of 100,
// x would become -50 when the mouse is left and +50 when the mouse is right.
float x = +(2.0f * mousePosRelativeToWindow.X / viewportWidth - 1) * (glOrthoWidth / 2);
float y = -(2.0f * mousePosRelativeToWindow.Y / viewPortHeight - 1) * (glOrthoHeight / 2);
// Now, you want to calculate the camera's local right and up vectors
// (depending on the camera's current view direction):
Vector3 cameraRight = Vector3.Normalize(Vector3.Cross(camDirection, worldUpDirection));
Vector3 cameraUp = Vector3.Normalize(Vector3.Cross(cameraRight, camDirection));
// Finally, calculate the ray origin:
Vector3 rayOrigin = cameraPosition + cameraRight * x + cameraUp * y;
Vector3 rayDirection = camDirection;
Now you have the ray origin and the ray direction for your orthographic projection.
With these you can run any ray-plane/volume-intersections as usual.
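Translated to the glm-based C++ of the question, the same idea might look like the sketch below; orthoWidth/orthoHeight (the extents the ortho matrix was built with), viewportWidth/viewportHeight, cameraPosition and cameraTarget are assumed names:
// Sketch of the answer's approach with glm, under the naming assumptions above.
glm::vec3 const worldUp{0.f, 1.f, 0.f};
// worldUp must not be parallel to the view direction; for a camera looking
// straight down the Y axis, substitute the camera's own up vector here.
glm::vec3 const camDir = glm::normalize(cameraTarget - cameraPosition);
// Map the mouse position to [-w/2, +w/2] x [-h/2, +h/2] on the view plane.
float const x =  (2.f * mouse_pos.x / viewportWidth - 1.f) * (orthoWidth / 2.f);
float const y = -(2.f * mouse_pos.y / viewportHeight - 1.f) * (orthoHeight / 2.f);
glm::vec3 const camRight = glm::normalize(glm::cross(camDir, worldUp));
glm::vec3 const camUp    = glm::normalize(glm::cross(camRight, camDir));
// Shift the ray origin across the view plane; the direction stays fixed.
ray_start = cameraPosition + camRight * x + camUp * y;
ray_dir   = camDir;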

OpenGL Matrix Camera controls, local rotation not functioning properly

So I'm trying to figure out how to manually create a camera class that creates a local frame for camera transformations. I've created a player object based on OpenGL SuperBible's GLFrame class.
I have keyboard keys mapped to the MoveUp, MoveRight and MoveForward functions, and the horizontal and vertical mouse movements are mapped to the xAngle variable and the RotateLocalY function. This is done to create an FPS-style camera.
The problem, however, is in RotateLocalY. Translation works fine, and so does the vertical mouse movement, but the horizontal movement scales all my objects down or up in a weird way. Besides the scaling, the rotation also seems to restrict itself to 180 degrees and rotates around the world origin (0,0,0) instead of my player's local position.
I figured that the scaling had something to do with normalizing vectors, but the GLFrame class (which I used for reference) never normalized any vectors and works just fine. Normalizing most of my vectors only solved the scaling; all the other problems were still there, so I'm figuring one piece of code is causing all these problems?
I can't seem to figure out where the problem lies. I'll post all the appropriate code here and a screenshot to show the scaling.
Player object
Player::Player()
{
    location[0] = 0.0f; location[1] = 0.0f; location[2] = 0.0f;
    up[0] = 0.0f; up[1] = 1.0f; up[2] = 0.0f;
    forward[0] = 0.0f; forward[1] = 0.0f; forward[2] = -1.0f;
}

// Does all the camera transformation. Should be called before scene rendering!
void Player::ApplyTransform()
{
    M3DMatrix44f cameraMatrix;
    this->getTransformationMatrix(cameraMatrix);
    glRotatef(xAngle, 1.0f, 0.0f, 0.0f);
    glMultMatrixf(cameraMatrix);
}

void Player::MoveForward(GLfloat delta)
{
    location[0] += forward[0] * delta;
    location[1] += forward[1] * delta;
    location[2] += forward[2] * delta;
}

void Player::MoveUp(GLfloat delta)
{
    location[0] += up[0] * delta;
    location[1] += up[1] * delta;
    location[2] += up[2] * delta;
}

void Player::MoveRight(GLfloat delta)
{
    // Get X axis vector first via cross product
    M3DVector3f xAxis;
    m3dCrossProduct(xAxis, up, forward);
    location[0] += xAxis[0] * delta;
    location[1] += xAxis[1] * delta;
    location[2] += xAxis[2] * delta;
}

void Player::RotateLocalY(GLfloat angle)
{
    // Calculate a rotation matrix first
    M3DMatrix44f rotationMatrix;
    // Rotate around the up vector
    m3dRotationMatrix44(rotationMatrix, angle, up[0], up[1], up[2]); // Use up vector to get correct rotations even with multiple rotations used.
    // Get new forward vector out of the rotation matrix
    M3DVector3f newForward;
    newForward[0] = rotationMatrix[0] * forward[0] + rotationMatrix[4] * forward[1] + rotationMatrix[8] * forward[2];
    newForward[1] = rotationMatrix[1] * forward[1] + rotationMatrix[5] * forward[1] + rotationMatrix[9] * forward[2];
    newForward[2] = rotationMatrix[2] * forward[2] + rotationMatrix[6] * forward[1] + rotationMatrix[10] * forward[2];
    m3dCopyVector3(forward, newForward);
}

void Player::getTransformationMatrix(M3DMatrix44f matrix)
{
    // Get Z axis (Z axis is reversed with camera transformations)
    M3DVector3f zAxis;
    zAxis[0] = -forward[0];
    zAxis[1] = -forward[1];
    zAxis[2] = -forward[2];
    // Get X axis
    M3DVector3f xAxis;
    m3dCrossProduct(xAxis, up, zAxis);
    // Fill in X column in transformation matrix
    m3dSetMatrixColumn44(matrix, xAxis, 0); // first column
    matrix[3] = 0.0f; // Set 4th value to 0
    // Fill in the Y column
    m3dSetMatrixColumn44(matrix, up, 1); // 2nd column
    matrix[7] = 0.0f;
    // Fill in the Z column
    m3dSetMatrixColumn44(matrix, zAxis, 2); // 3rd column
    matrix[11] = 0.0f;
    // Do the translation
    M3DVector3f negativeLocation; // Required for camera transform (right handed OpenGL system. Looking down negative Z axis)
    negativeLocation[0] = -location[0];
    negativeLocation[1] = -location[1];
    negativeLocation[2] = -location[2];
    m3dSetMatrixColumn44(matrix, negativeLocation, 3); // 4th column
    matrix[15] = 1.0f;
}
Player object header
class Player
{
public:
    //////////////////////////////////////
    // Variables
    M3DVector3f location;
    M3DVector3f up;
    M3DVector3f forward;
    GLfloat xAngle; // Used for FPS divided X angle rotation (can't combine yaw and pitch since we'll also get a Roll which we don't want for FPS)
    /////////////////////////////////////
    // Functions
    Player();
    void ApplyTransform();
    void MoveForward(GLfloat delta);
    void MoveUp(GLfloat delta);
    void MoveRight(GLfloat delta);
    void RotateLocalY(GLfloat angle); // Only need rotation on local axis for FPS camera style. Then a translation on world X axis. (done in apply transform)
private:
    void getTransformationMatrix(M3DMatrix44f matrix);
};
Applying transformations
// Clear screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// Apply camera transforms
player.ApplyTransform();
// Set up lights
...
// Use shaders
...
// Render the scene
RenderScene();
// Do post rendering operations
glutSwapBuffers();
And the mouse handling:
float mouseSensitivity = 500.0f;
float horizontal = (width / 2) - mouseX;
float vertical = (height / 2) - mouseY;
horizontal /= mouseSensitivity;
vertical /= (mouseSensitivity / 25);
player.xAngle += -vertical;
player.RotateLocalY(horizontal);
glutWarpPointer((width / 2), (height / 2));
Honestly, I think you are taking a way too complicated approach to your problem. There are many ways to create a camera. My favorite is using an R3 vector and a quaternion, but you could also work with an R3 vector and two floats (pitch and yaw).
The setup with two angles is simple:
glLoadIdentity();
glTranslatef(-pos[0], -pos[1], -pos[2]);
glRotatef(-yaw, 0.0f, 0.0f, 1.0f);
glRotatef(-pitch, 0.0f, 1.0f, 0.0f);
The tricky part now is moving the camera. You must do something along the lines of:
float ds = speed * dt;
position += transform_y(pitch, transform_z(yaw, Vector3(ds, 0, 0)));
How to do the transforms I would have to look up, but you could do it by using a rotation matrix.
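For illustration, a glm-based sketch of that transform (glm is an assumption here; this answer's own mx library presumably has equivalents). It rotates the local movement vector first by yaw around Z, then by pitch around Y, matching the glRotatef order above; angles are in radians:
// Hypothetical helper: rotate a camera-local move vector into world space.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 move_world(float yaw, float pitch, glm::vec3 const& localMove)
{
    glm::mat4 const rotZ = glm::rotate(glm::mat4(1.f), yaw, glm::vec3(0.f, 0.f, 1.f));
    glm::mat4 const rotY = glm::rotate(glm::mat4(1.f), pitch, glm::vec3(0.f, 1.f, 0.f));
    return glm::vec3(rotY * rotZ * glm::vec4(localMove, 0.f));
}
// usage: position += move_world(yaw, pitch, glm::vec3(ds, 0.f, 0.f));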
Rotation is trivial, just add or subtract from the pitch and yaw values.
I like using a quaternion for the orientation because it is general; you get a camera (or any entity, for that matter) that is independent of any movement scheme. In this case you have a camera that looks like so:
class Camera
{
public:
    // lots of stuff omitted
    void setup();
    void move_local(Vector3f value);
    void rotate(float dy, float dz);

private:
    mx::Vector3f position;
    mx::Quaternionf orientation;
};
Then the setup code shamelessly uses gluLookAt; you could make a transformation matrix out of it, but I never got that to work right.
void Camera::setup()
{
    // projection related stuff
    mx::Vector3f eye = position;
    mx::Vector3f forward = mx::transform(orientation, mx::Vector3f(1, 0, 0));
    mx::Vector3f center = eye + forward;
    mx::Vector3f up = mx::transform(orientation, mx::Vector3f(0, 0, 1));
    gluLookAt(eye(0), eye(1), eye(2), center(0), center(1), center(2), up(0), up(1), up(2));
}
Moving the camera in local frame is also simple:
void Camera::move_local(Vector3f value)
{
    position += mx::transform(orientation, value);
}
The rotation is also straightforward.
void Camera::rotate(float dy, float dz)
{
    mx::Quaternionf o = orientation;
    o = mx::axis_angle_to_quaternion(dz, mx::Vector3f(0, 0, 1)) * o;
    o = o * mx::axis_angle_to_quaternion(dy, mx::Vector3f(0, 1, 0));
    orientation = o;
}
(Shameless plug):
If you are asking what math library I use, it is mathex. I wrote it...