NOTE: I've edited my code. See below the divider.
I'm implementing refraction in my (fairly basic) ray tracer, written in C++. I've been following (1) and (2).
I get the result below. Why is the center of the sphere black?
The center sphere has a transmission coefficient of 0.9 and a reflective coefficient of 0.1. Its index of refraction is 1.5 and it is placed 1.5 units away from the camera. The other two spheres just use diffuse lighting, with no reflective/refractive component. I placed these two differently coloured spheres behind and in front of the transparent sphere to ensure that I don't see a reflection instead of a transmission.
I've made the background colour (the colour achieved when a ray from the camera does not intersect with any object) a colour other than black, so the center of the sphere is not just the background colour.
I have not implemented the Fresnel effect yet.
My trace function looks like this (verbatim copy, with some parts omitted for brevity):
bool isInside(Vec3f rayDirection, Vec3f intersectionNormal) {
return dot(rayDirection, intersectionNormal) > 0;
}
Vec3f trace(Vec3f origin, Vec3f ray, int depth) {
// (1) Find object intersection
std::shared_ptr<SceneObject> intersectionObject = ...;
// (2) Compute diffuse and ambient color contribution
Vec3f color = ...;
bool isTotalInternalReflection = false;
if (intersectionObject->mTransmission > 0 && depth < MAX_DEPTH) {
Vec3f transmissionDirection = refractionDir(
ray,
normal,
1.5f,
isTotalInternalReflection
);
if (!isTotalInternalReflection) {
float bias = 1e-4 * (isInside(ray, normal) ? -1 : 1);
Vec3f transmissionColor = trace(
add(intersection, multiply(normal, bias)),
transmissionDirection,
depth + 1
);
color = add(
color,
multiply(transmissionColor, intersectionObject->mTransmission)
);
}
}
if (intersectionObject->mSpecular > 0 && depth < MAX_DEPTH) {
Vec3f reflectionDirection = computeReflectionDirection(ray, normal);
Vec3f reflectionColor = trace(
add(intersection, multiply(normal, 1e-5)),
reflectionDirection,
depth + 1
);
float intensity = intersectionObject->mSpecular;
if (isTotalInternalReflection) {
intensity += intersectionObject->mTransmission;
}
color = add(
color,
multiply(reflectionColor, intensity)
);
}
return truncate(color, 1);
}
If the object is transparent then it computes the direction of the transmission ray and recursively traces it, unless the refraction causes total internal reflection. In that case, the transmission component is added to the reflection component and thus the color will be 100% of the traced reflection color.
I add a little bias to the intersection point in the direction of the normal (inverted if inside) when recursively tracing the transmission ray. If I don't do that, then I get this result:
The computation for the direction of the transmission ray is performed in refractionDir. This function assumes that we will not have a transparent object inside another, and that the outside medium is air, with an index of refraction of 1.
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
float relativeIndexOfRefraction = 1.0f / refractionIndex;
float cosi = -dot(ray, normal);
if (isInside(ray, normal)) {
// We should be reflecting across a normal inside the object, so
// re-orient the normal to be inside.
normal = multiply(normal, -1);
relativeIndexOfRefraction = refractionIndex;
cosi *= -1;
}
assert(cosi > 0);
float base = (
1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) *
(1 - cosi * cosi)
);
if (base < 0) {
isTotalInternalReflection = true;
return ray;
}
return add(
multiply(ray, relativeIndexOfRefraction),
multiply(normal, relativeIndexOfRefraction * cosi - sqrtf(base))
);
}
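For reference, the formula this function is meant to implement (my own restatement, with $\mathbf{d}$ the unit ray direction, $\mathbf{n}$ the unit normal on the incident side, $\eta$ the relative index of refraction, and $\cos\theta_i = -\mathbf{d}\cdot\mathbf{n}$) is

$$\mathbf{t} = \eta\,\mathbf{d} + \left(\eta\cos\theta_i - \sqrt{1 - \eta^2\bigl(1 - \cos^2\theta_i\bigr)}\right)\mathbf{n},$$

with total internal reflection whenever the expression under the square root (the base variable above) is negative.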
Here's the result when the spheres are further away from the camera:
And closer to the camera:
Edit: I noticed a couple of bugs in my code.
When I add bias to the intersection point, it should be in the direction of the transmission ray. I was adding it in the wrong direction by applying a negative bias when inside the sphere. That doesn't make sense: when the ray is coming from inside the sphere, it will transmit to the outside of the sphere (as long as TIR is avoided).
Old code:
add(intersection, multiply(normal, bias))
New code:
add(intersection, multiply(transmissionDirection, 1e-4))
Similarly, the normal that refractionDir receives is the surface normal pointing away from the center of the sphere. The normal I want to use when computing the transmission direction is one pointing outside if the transmission ray is going to go outside the object, or inside if the transmission ray is going to go inside the object. Thus, the surface normal pointing out of the sphere should be inverted if we're entering the sphere, that is, if the ray is coming from outside.
New code:
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
float relativeIndexOfRefraction;
float cosi = -dot(ray, normal);
if (isInside(ray, normal)) {
relativeIndexOfRefraction = refractionIndex;
cosi *= -1;
} else {
relativeIndexOfRefraction = 1.0f / refractionIndex;
normal = multiply(normal, -1);
}
assert(cosi > 0);
float base = (
1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) * (1 - cosi * cosi)
);
if (base < 0) {
isTotalInternalReflection = true;
return ray;
}
return add(
multiply(ray, relativeIndexOfRefraction),
multiply(normal, sqrtf(base) - relativeIndexOfRefraction * cosi)
);
}
However, this all still gives me an unexpected result:
I've also added some unit tests. The following cases pass:
A ray entering the center of the sphere parallel to the normal will transmit through the sphere without being bent (this tests two refractionDir calls, one outside and one inside).
Refraction at 45 degrees from the normal through a glass slab will bend inside the slab by 15 degrees towards the normal, away from the original ray direction. Its direction when it exits the slab will be the original ray direction.
A similar test at 75 degrees.
Ensuring that total internal reflection happens when a ray is coming from inside the object and is at 45 degrees or wider.
I'll include one of the unit tests here and you can find the rest at this gist.
TEST_CASE("Refraction at 75 degrees from normal through glass slab") {
Vec3f rayDirection = normalize(Vec3f({ 0, -sinf(5.0f * M_PI / 12.0f), -cosf(5.0f * M_PI / 12.0f) }));
Vec3f normal({ 0, 0, 1 });
bool isTotalInternalReflection;
Vec3f refraction = refractionDir(rayDirection, normal, 1.5f, isTotalInternalReflection);
REQUIRE(refraction[0] == 0);
REQUIRE(refraction[1] == Approx(-sinf(40.0f * M_PI / 180.0f)).margin(0.03f));
REQUIRE(refraction[2] == Approx(-cosf(40.0f * M_PI / 180.0f)).margin(0.03f));
REQUIRE(!isTotalInternalReflection);
refraction = refractionDir(refraction, multiply(normal, -1), 1.5f, isTotalInternalReflection);
REQUIRE(refraction[0] == Approx(rayDirection[0]));
REQUIRE(refraction[1] == Approx(rayDirection[1]));
REQUIRE(refraction[2] == Approx(rayDirection[2]));
REQUIRE(!isTotalInternalReflection);
}
Related
I am following "Ray Tracing in One Weekend" to build a ray tracer on my own. Everything is OK until I met Dilectric Material.
The refraction performs well (I am not sure, we can see it in image 3 and 4), but when I add the total internal reflection, the sphere gets a black edge, the images are listed as below:
img1 - black edge for total internal reflection
img2 - black edge for total internal reflection
img3 - dielectric without total internal reflection
img4 - dielectric without total internal reflection
My analysis
I debugged my program and found that total internal reflection happens at the edge of the sphere, and the ray bounces infinitely inside the sphere until it exceeds the bounce limit, so it returns (0.f, 0.f, 0.f) as the result color.
I don't think this infinite internal bouncing is right, but I have compared my code with the one in the book and could not find any problem.
The scatter method is here:
bool Dilectric::scatter(const Ray& input, const HitRecord& rec, glm::vec3& attenuation, Ray& scatterRay) const
{
glm::vec3 dir;
if (rec.frontFace)
{
// ray is from air to inside surface, only refraction happens
float ratio = 1.f / m_refractIndex;
dir = GfxLib::refract(glm::normalize(input.direction()), rec.n, ratio);
}
else
{
// ray is from inside surface to air, need to think of total internal reflection
float ratio = m_refractIndex;
float cosTheta = std::fmin(glm::dot(-input.direction(), rec.n), 1.f);
float sinTheta = std::sqrt(1.f - cosTheta * cosTheta);
bool internalReflection = (ratio * sinTheta) > 1.f;
if (internalReflection)
{
dir = GfxLib::reflect(glm::normalize(input.direction()), rec.n);
}
else
{
dir = GfxLib::refract(glm::normalize(input.direction()), rec.n, ratio);
}
}
scatterRay.setOrigin(rec.pt);
scatterRay.setDirection(dir);
// m_albedo is set to vec3(1.f)
attenuation = m_albedo;
return true;
}
The outer method, rayColor, is here:
glm::vec3 RrtTest::rayColor(const Ray& ray, const HittableList& objList, int reflectDepth)
{
if (reflectDepth <= 0) return glm::vec3(0.f);
HitRecord rec;
// use 0.001 instead of 0.f to fix shadow acne
if (objList.hit(ray, 0.001f, FLT_MAX, rec) && rec.hitInd >= 0)
{
Ray scatterRay;
glm::vec3 attenu{ 1.f };
std::shared_ptr<Matl> mat = objList.at(rec.hitInd)->getMatl();
if (!mat->scatter(ray, rec, attenu, scatterRay))
return glm::vec3(0.f);
glm::vec3 retColor = rayColor(scatterRay, objList, --reflectDepth);
return attenu * retColor;
}
else
{
glm::vec3 startColor{ 1.f }, endColor{ 0.5f, 0.7f, 1.f };
float t = (ray.direction().y + 1.f) * 0.5f;
return GfxLib::blend(startColor, endColor, t);
}
}
The reflect method is here:
glm::vec3 GfxLib::reflect(const glm::vec3& directionIn, const glm::vec3& n)
{
float b = glm::dot(directionIn, n);
glm::vec3 refDir = directionIn - 2 * b * n;
return glm::normalize(refDir);
}
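For completeness, GfxLib::refract follows the book's formulation; I'm assuming it computes roughly the following (a sketch, not the exact code from my library):
#include <cmath>
#include <glm/glm.hpp>
// Sketch of the refract helper in the book's formulation. uv is the (unit)
// incoming direction, n the (unit) normal on the incident side, and
// ratio = eta_incident / eta_transmitted.
glm::vec3 refractSketch(const glm::vec3& uv, const glm::vec3& n, float ratio)
{
    float cosTheta = std::fmin(glm::dot(-uv, n), 1.f);
    glm::vec3 outPerp = ratio * (uv + cosTheta * n);                                      // component parallel to the surface
    glm::vec3 outParallel = -std::sqrt(std::fabs(1.f - glm::dot(outPerp, outPerp))) * n;  // component along -n
    return outPerp + outParallel;
}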
However, I am not sure whether my analysis is right; can anyone lend me a hand and give me a solution? The main logic of the rendering is here (including the full demo).
I would really appreciate your advice!
I am writing a ray tracer from scratch. The example renders two spheres using ray-sphere intersection detection. When the spheres are close to the center of the screen, they look fine. However, when I move the camera, or if I adjust the spheres' positions so they are closer to the edge, they become distorted.
This is the ray casting code:
void Renderer::RenderThread(int start, int span)
{
// pCamera holds the position, rotation, and fov of the camera
// pRenderTarget is the screen to render to
// calculate the camera space to world space matrix
Mat4 camSpaceMatrix = Mat4::Get3DTranslation(pCamera->position.x, pCamera->position.y, pCamera->position.z) *
Mat4::GetRotation(pCamera->rotation.x, pCamera->rotation.y, pCamera->rotation.z);
// use the cameras origin as the rays origin
Vec3 origin(0, 0, 0);
origin = (camSpaceMatrix * origin.Vec4()).Vec3();
// this for loop loops over all the pixels on the screen
for ( int p = start; p < start + span; ++p ) {
// get the pixel coordinates on the screen
int px = p % pRenderTarget->GetWidth();
int py = p / pRenderTarget->GetWidth();
// in ray tracing, ndc space is from [0, 1]
Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());
// in ray tracing, screen space is [-1, 1]
Vec2 screen(2 * ndc.x - 1, 1 - 2 * ndc.y);
// scale x by aspect ratio
screen.x *= (float)pRenderTarget->GetWidth() / pRenderTarget->GetHeight();
// scale screen by the field of view
// fov is currently set to 90
screen *= tan((pCamera->fov / 2) * (PI / 180));
// screen point is the pixels point in camera space,
// give a z value of -1
Vec3 camSpace(screen.x, screen.y, -1);
camSpace = (camSpaceMatrix * camSpace.Vec4()).Vec3();
// the rays direction is its point on the cameras viewing plane
// minus the cameras origin
Vec3 dir = (camSpace - origin).Normalized();
Ray ray = { origin, dir };
// find where the ray intersects with the spheres
// using ray-sphere intersection algorithm
Vec4 color = TraceRay(ray);
pRenderTarget->PutPixel(px, py, color);
}
}
The FOV is set to 90. I have seen other people with this problem, but in their case it was because they were using a very high FOV value. I don't think there should be issues with 90. The issue persists even if the camera is not moved at all; any object close to the edge of the screen appears distorted.
When in doubt, you can always check out what other renderers are doing. I always compare my results and settings to Blender. Blender 2.82, for example, has a default field of view of 39.6 degrees.
I also feel inclined to point out that this is wrong:
Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());
If you want to get the center of the pixel, then it should be 0.5f:
Vec2 ndc((px + 0.5f) / pRenderTarget->GetWidth(), (py + 0.5f) / pRenderTarget->GetHeight());
Also, and this is really a nit-picky kind of thing, your intervals are open intervals, not the closed ones you mention in the source comments. The image plane coordinates never reach 0 or 1, and your camera space coordinates are never exactly -1 or 1. When the image plane coordinates are eventually converted to pixel coordinates, the intervals are left-closed: [0, width) and [0, height).
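Putting the 0.5f offset into the mapping you already have, the per-pixel computation would look roughly like this (a free-standing sketch with plain floats instead of your Vec2/Vec3 types; width, height and fovDegrees stand in for your render-target and camera values):
#include <cmath>
struct CamDir { float x, y, z; };
// Map a pixel (px, py) to a camera-space ray direction, sampling the pixel center.
CamDir PixelToCameraSpaceDir(int px, int py, int width, int height, float fovDegrees)
{
    const float PI = 3.14159265358979f;
    float ndcX = (px + 0.5f) / width;                   // [0, 1), pixel centers
    float ndcY = (py + 0.5f) / height;
    float screenX = 2.0f * ndcX - 1.0f;                 // (-1, 1)
    float screenY = 1.0f - 2.0f * ndcY;                 // flip so +y points up
    float aspect = float(width) / float(height);
    float scale = std::tan((fovDegrees * 0.5f) * PI / 180.0f);
    return CamDir{ screenX * aspect * scale, screenY * scale, -1.0f };  // normalize, then transform to world space
}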
Good luck on your ray tracer!
I am trying to get a sphere and a plane to collide properly. With the below code, they do collide, but only for a little while. I am at a loss when it comes to planes, so there must be something I have overlooked.
What I am trying to discern with the collision (which takes place during the narrowphase) is the depth, the normal and the point of contact. These are used later when resolving impulse etc.
The problem is that distance becomes < 0 after only a few iterations (the sphere is rolling, and then just goes under the plane).
auto distance = glm::dot(sphere->GetPosition() - glm::vec3(planeCollider->GetDistance()), planeCollider->GetNormal());
auto normal = planeCollider->GetNormal() * distance;
if (distance < 0.0f)
{
return false;
}
if (distance == 0.0f)
{
penetrationDepth = sphereCollider->GetRadius();
contactNormal = glm::vec3(0.0f, 1.0f, 0.0f);
contactPoint = bodyA->GetPosition();
}
else
{
penetrationDepth = sphereCollider->GetRadius() - distance;
contactNormal = normal;
contactPoint = normal * (distance - sphereCollider->GetRadius());
}
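For reference, my understanding of the conventional sphere-vs-plane test is roughly the following (a sketch, assuming the plane is stored as a unit normal n and a scalar offset d with dot(n, x) = d for points x on the plane):
#include <glm/glm.hpp>
// Sketch of a conventional sphere-vs-plane narrowphase test.
bool SpherePlaneContact(const glm::vec3& center, float radius,
                        const glm::vec3& n, float d,
                        float& penetrationDepth, glm::vec3& contactNormal,
                        glm::vec3& contactPoint)
{
    float signedDistance = glm::dot(n, center) - d;   // distance of the center above the plane
    if (signedDistance > radius)                      // sphere entirely on the positive side
        return false;
    penetrationDepth = radius - signedDistance;       // how far the sphere has sunk in
    contactNormal = n;                                // plane normal, not scaled by the distance
    contactPoint = center - n * signedDistance;       // center projected onto the plane
    return true;
}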
Let's say there is a grid terrain for a game, composed of tiles that are each made of two triangles built from four vertices. How would we find the Y (up) position of a point between the four vertices?
I have tried this:
float diffZ1 = lerp(heights[0], heights[2], zOffset);
float diffZ2 = lerp(heights[1], heights[3], zOffset);
float yPosition = lerp(diffZ1, diffZ2, xOffset);
Here zOffset and xOffset are the z and x offsets from the first vertex of the tile, as fractions between 0 and 1. This works for flat surfaces but not so well on bumpy terrain.
I expect this has something to do with the terrain being made from triangles, whereas the approach above only really works on flat planes. I'm not sure, though; does anybody know what's going wrong?
This may better explain what's going on here:
In the code above "heights[]" is an array of the Y coordinate of surrounding vertices v0-3.
Triangle 1 is made of vertex 0, 2 and 1.
Triangle 2 is made of vertex 1, 2 and 3.
I wish to find the Y coordinate of p1 when its x and z coordinates lie between v0-v3.
So I have tried determining which triangle the point is between through this function:
bool PointInTriangle(float3 pt, float3 pa, float3 pb, float3 pc)
{
// Compute vectors
float2 v0 = pc.xz - pa.xz;
float2 v1 = pb.xz - pa.xz;
float2 v2 = pt.xz - pa.xz;
// Compute dot products
float dot00 = dot(v0, v0);
float dot01 = dot(v0, v1);
float dot02 = dot(v0, v2);
float dot11 = dot(v1, v1);
float dot12 = dot(v1, v2);
// Compute barycentric coordinates
float invDenom = 1.0f / (dot00 * dot11 - dot01 * dot01);
float u = (dot11 * dot02 - dot01 * dot12) * invDenom;
float v = (dot00 * dot12 - dot01 * dot02) * invDenom;
// Check if point is in triangle
return (u >= 0.0f) && (v >= 0.0f) && (u + v <= 1.0f);
}
This isn't giving me the results I expected.
I am then trying to find the y coordinate of point p1 inside each triangle:
// Position of point p1
float3 pos = input[0].PosI;
// Calculate point and normal for triangles
float3 p1 = tile[0];
float3 n1 = (tile[2] - p1) * (tile[1] - p1); // <-- Error, cross needed
// = cross(tile[2] - p1, tile[1] - p1);
float3 p2 = tile[3];
float3 n2 = (tile[2] - p2) * (tile[1] - p2); // <-- Error
// = cross(tile[2] - p2, tile[1] - p2);
float newY = 0.0f;
// Determine triangle & get y coordinate inside correct triangle
if(PointInTriangle(pos, tile[0], tile[1], tile[2]))
{
newY = p1.y - ((pos.x - p1.x) * n1.x + (pos.z - p1.z) * n1.z) / n1.y;
}
else if(PointInTriangle(input[0].PosI, tile[3], tile[2], tile[1]))
{
newY = p2.y - ((pos.x - p2.x) * n2.x + (pos.z - p2.z) * n2.z) / n2.y;
}
Using the following to find the correct triangle:
if((1.0f - xOffset) <= zOffset)
inTri1 = true;
And correcting the code above to use the correct cross function seems to have solved the problem.
Because your 4 vertices may not be on a plane, you should consider each triangle separately. First find the triangle that the point resides in, and then use the following StackOverflow discussion to solve for the Z value (note the different naming of the axes). I personally like DanielKO's answer much better, but the accepted answer should work too:
Linear interpolation of three 3D points in 3D space
EDIT: For the 2nd part of your problem (finding the triangle that the point is in):
Because the projection of your tiles onto the xz plane (as you define your coordinates) are perfect squares, finding the triangle that the point resides in is a very simple operation. Here I'll use the terms left-right to refer to the x axis (from lower to higher values of x) and bottom-top to refer to the z axis (from lower to higher values of z).
Each tile can only be split in one of two ways. Either (A) via a diagonal line from the bottom-left corner to the top-right corner, or (B) via a diagonal line from the bottom-right corner to the top-left corner.
For any tile that's split as A:
Check if x' > z', where x' is the distance from the left edge of the tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x' > z' then your point is in the bottom-right triangle; otherwise it's in the upper-left triangle.
For any tile that's split as B: Check if x" > z', where x" is the distance from the right edge of your tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x" > z' then your point is in the lower-left triangle; otherwise it's in the upper-right triangle.
(Minor note: Above I assume your tiles aren't rotated in the xz plane; i.e. that they are aligned with the axes. If that's not correct, simply rotate them to align them with the axes before doing the above checks.)
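Putting the triangle selection and the plane interpolation together, a rough sketch for a tile split as A could look like this (the corner naming is my assumption: v0 = bottom-left, v1 = bottom-right, v2 = top-left, v3 = top-right, with x'/z' the distances from the left/bottom edges as above):
struct V3 { float x, y, z; };
static V3 sub(const V3& a, const V3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(const V3& a, const V3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
// Height on a tile split along the bottom-left -> top-right diagonal ("split A").
float HeightOnTileA(const V3& v0, const V3& v1, const V3& v2, const V3& v3,
                    float xPrime, float zPrime)
{
    // Pick the triangle: more to the right than up means the bottom-right triangle.
    V3 a, b, c;
    if (xPrime > zPrime) { a = v0; b = v1; c = v3; }   // bottom-right triangle
    else                 { a = v0; b = v3; c = v2; }   // upper-left triangle
    // Solve the triangle's plane equation n.(p - a) = 0 for the y of the query point.
    V3 n = cross(sub(b, a), sub(c, a));
    float px = v0.x + xPrime;
    float pz = v0.z + zPrime;
    return a.y - ((px - a.x) * n.x + (pz - a.z) * n.z) / n.y;
}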
So I'm trying to figure out how to manually create a camera class that builds a local frame for camera transformations. I've created a player object based on the OpenGL SuperBible's GLFrame class.
I have keyboard keys mapped to the MoveUp, MoveRight and MoveForward functions, and the horizontal and vertical mouse movements are mapped to the xAngle variable and the RotateLocalY function. This is done to create an FPS-style camera.
The problem, however, is in RotateLocalY. Translation works fine, and so does the vertical mouse movement, but the horizontal movement scales all my objects down or up in a weird way. Besides the scaling, the rotation also seems to restrict itself to 180 degrees and rotates around the world origin (0, 0, 0) instead of my player's local position.
I figured the scaling had something to do with normalizing vectors, but the GLFrame class (which I used for reference) never normalizes any vectors and works just fine. Normalizing most of my vectors only solved the scaling; all the other problems were still there, so I suspect one piece of code is causing all of these problems.
I can't seem to figure out where the problem lies, so I'll post all the relevant code here along with a screenshot to show the scaling.
Player object
Player::Player()
{
location[0] = 0.0f; location[1] = 0.0f; location[2] = 0.0f;
up[0] = 0.0f; up[1] = 1.0f; up[2] = 0.0f;
forward[0] = 0.0f; forward[1] = 0.0f; forward[2] = -1.0f;
}
// Does all the camera transformation. Should be called before scene rendering!
void Player::ApplyTransform()
{
M3DMatrix44f cameraMatrix;
this->getTransformationMatrix(cameraMatrix);
glRotatef(xAngle, 1.0f, 0.0f, 0.0f);
glMultMatrixf(cameraMatrix);
}
void Player::MoveForward(GLfloat delta)
{
location[0] += forward[0] * delta;
location[1] += forward[1] * delta;
location[2] += forward[2] * delta;
}
void Player::MoveUp(GLfloat delta)
{
location[0] += up[0] * delta;
location[1] += up[1] * delta;
location[2] += up[2] * delta;
}
void Player::MoveRight(GLfloat delta)
{
// Get X axis vector first via cross product
M3DVector3f xAxis;
m3dCrossProduct(xAxis, up, forward);
location[0] += xAxis[0] * delta;
location[1] += xAxis[1] * delta;
location[2] += xAxis[2] * delta;
}
void Player::RotateLocalY(GLfloat angle)
{
// Calculate a rotation matrix first
M3DMatrix44f rotationMatrix;
// Rotate around the up vector
m3dRotationMatrix44(rotationMatrix, angle, up[0], up[1], up[2]); // Use up vector to get correct rotations even with multiple rotations used.
// Get new forward vector out of the rotation matrix
M3DVector3f newForward;
newForward[0] = rotationMatrix[0] * forward[0] + rotationMatrix[4] * forward[1] + rotationMatrix[8] * forward[2];
newForward[1] = rotationMatrix[1] * forward[1] + rotationMatrix[5] * forward[1] + rotationMatrix[9] * forward[2];
newForward[2] = rotationMatrix[2] * forward[2] + rotationMatrix[6] * forward[1] + rotationMatrix[10] * forward[2];
m3dCopyVector3(forward, newForward);
}
void Player::getTransformationMatrix(M3DMatrix44f matrix)
{
// Get Z axis (Z axis is reversed with camera transformations)
M3DVector3f zAxis;
zAxis[0] = -forward[0];
zAxis[1] = -forward[1];
zAxis[2] = -forward[2];
// Get X axis
M3DVector3f xAxis;
m3dCrossProduct(xAxis, up, zAxis);
// Fill in X column in transformation matrix
m3dSetMatrixColumn44(matrix, xAxis, 0); // first column
matrix[3] = 0.0f; // Set 4th value to 0
// Fill in the Y column
m3dSetMatrixColumn44(matrix, up, 1); // 2nd column
matrix[7] = 0.0f;
// Fill in the Z column
m3dSetMatrixColumn44(matrix, zAxis, 2); // 3rd column
matrix[11] = 0.0f;
// Do the translation
M3DVector3f negativeLocation; // Required for camera transform (right handed OpenGL system. Looking down negative Z axis)
negativeLocation[0] = -location[0];
negativeLocation[1] = -location[1];
negativeLocation[2] = -location[2];
m3dSetMatrixColumn44(matrix, negativeLocation, 3); // 4th column
matrix[15] = 1.0f;
}
Player object header
class Player
{
public:
//////////////////////////////////////
// Variables
M3DVector3f location;
M3DVector3f up;
M3DVector3f forward;
GLfloat xAngle; // Used for FPS divided X angle rotation (can't combine yaw and pitch since we'll also get a Roll which we don't want for FPS)
/////////////////////////////////////
// Functions
Player();
void ApplyTransform();
void MoveForward(GLfloat delta);
void MoveUp(GLfloat delta);
void MoveRight(GLfloat delta);
void RotateLocalY(GLfloat angle); // Only need rotation on local axis for FPS camera style. Then a translation on world X axis. (done in apply transform)
private:
void getTransformationMatrix(M3DMatrix44f matrix);
};
Applying transformations
// Clear screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// Apply camera transforms
player.ApplyTransform();
// Set up lights
...
// Use shaders
...
// Render the scene
RenderScene();
// Do post rendering operations
glutSwapBuffers();
And the mouse handling:
float mouseSensitivity = 500.0f;
float horizontal = (width / 2) - mouseX;
float vertical = (height / 2) - mouseY;
horizontal /= mouseSensitivity;
vertical /= (mouseSensitivity / 25);
player.xAngle += -vertical;
player.RotateLocalY(horizontal);
glutWarpPointer((width / 2), (height / 2));
Honestly, I think you are taking way too complicated an approach to your problem. There are many ways to create a camera. My favorite is using an R3 vector and a quaternion, but you could also work with an R3 vector and two floats (pitch and yaw).
The setup with two angles is simple:
glLoadIdentity();
glTranslatef(-pos[0], -pos[1], -pos[2]);
glRotatef(-yaw, 0.0f, 0.0f, 1.0f);
glRotatef(-pitch, 0.0f, 1.0f, 0.0f);
The tricky part now is moving the camera. You must do something along the lines of:
float ds = speed * dt;
position += transform_y(pitch, transform_z(yaw, Vector3(ds, 0, 0)));
I would have to look up exactly how to do the transforms, but you could do it with rotation matrices.
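Roughly, those helpers could be plain rotation matrices applied to the local movement vector; a sketch (hypothetical transform_z/transform_y, angles in radians, Z-up to match the glRotatef calls above):
#include <cmath>
struct Vector3 { float x, y, z; };
// Rotate v around the world Z axis (yaw) by `angle`.
Vector3 transform_z(float angle, Vector3 v)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x - s * v.y, s * v.x + c * v.y, v.z };
}
// Rotate v around the world Y axis (pitch) by `angle`.
Vector3 transform_y(float angle, Vector3 v)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}
// The movement step from above then becomes:
// Vector3 step = transform_y(pitch, transform_z(yaw, Vector3{ ds, 0, 0 }));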
Rotation is trivial, just add or subtract from the pitch and yaw values.
I like using a quaternion for the orientation because it is general, and thus you have a camera (or any entity, really) that is independent of any movement scheme. In this case you have a camera that looks like so:
class Camera
{
public:
// lots of stuff omitted
void setup();
void move_local(Vector3f value);
void rotate(float dy, float dz);
private:
mx::Vector3f position;
mx::Quaternionf orientation;
};
Then the setup code shamelessly uses gluLookAt; you could build the transformation matrix yourself, but I never got that to work right.
void Camera::setup()
{
// projection related stuff
mx::Vector3f eye = position;
mx::Vector3f forward = mx::transform(orientation, mx::Vector3f(1, 0, 0));
mx::Vector3f center = eye + forward;
mx::Vector3f up = mx::transform(orientation, mx::Vector3f(0, 0, 1));
gluLookAt(eye(0), eye(1), eye(2), center(0), center(1), center(2), up(0), up(1), up(2));
}
Moving the camera in local frame is also simple:
void Camera::move_local(Vector3f value)
{
position += mx::transform(orientation, value);
}
The rotation is also straightforward.
void Camera::rotate(float dy, float dz)
{
mx::Quaternionf o = orientation;
o = mx::axis_angle_to_quaternion(dz, mx::Vector3f(0, 0, 1)) * o; // yaw (horizontal) around world Z
o = o * mx::axis_angle_to_quaternion(dy, mx::Vector3f(0, 1, 0)); // pitch (vertical) around local Y
orientation = o;
}
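For reference (this is the standard convention, not necessarily mathex's exact API), an axis-angle pair with angle θ and unit axis a maps to the quaternion q = (cos(θ/2), sin(θ/2)·a), which is what axis_angle_to_quaternion is expected to produce here.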
(Shameless plug):
If you are asking what math library I use, it is mathex. I wrote it...