gluLookAt Issues - opengl

I've been trying to create a roller coaster simulator in OpenGL that uses a series of gluLookAt calls to make the camera 'ride' the roller coaster. The coaster itself is based on a B-spline curve with control points in the coords array. b0(u), b1(u), etc. are the blending functions for B-spline curves, with bprime0(u), etc. being their derivatives. Here's the relevant part of my code:
for (int i = 0; i <= 10; i++) {
    for (float u = 0; u <= 1.1; u += 0.1) {
        // point on the current B-spline segment
        x = (b0(u)*coords[(i)%10].x + b1(u)*coords[(i+1)%10].x
           + b2(u)*coords[(i+2)%10].x + b3(u)*coords[(i+3)%10].x)*2.0f;
        y = (b0(u)*coords[(i)%10].y + b1(u)*coords[(i+1)%10].y
           + b2(u)*coords[(i+2)%10].y + b3(u)*coords[(i+3)%10].y)*2.0f;
        z = (b0(u)*coords[(i)%10].z + b1(u)*coords[(i+1)%10].z
           + b2(u)*coords[(i+2)%10].z + b3(u)*coords[(i+3)%10].z)*2.0f;
        // tangent (derivative) at the same point
        xprime = (bprime0(u)*coords[(i)%10].x + bprime1(u)*coords[(i+1)%10].x
                + bprime2(u)*coords[(i+2)%10].x + bprime3(u)*coords[(i+3)%10].x)*-2.0f;
        yprime = (bprime0(u)*coords[(i)%10].y + bprime1(u)*coords[(i+1)%10].y
                + bprime2(u)*coords[(i+2)%10].y + bprime3(u)*coords[(i+3)%10].y)*-2.0f;
        zprime = (bprime0(u)*coords[(i)%10].z + bprime1(u)*coords[(i+1)%10].z
                + bprime2(u)*coords[(i+2)%10].z + bprime3(u)*coords[(i+3)%10].z)*-2.0f;
        // normalize the tangent vector
        Coords nvector = {xprime, yprime, zprime};
        float magn = sqrt(nvector.x*nvector.x + nvector.y*nvector.y + nvector.z*nvector.z);
        nvector.x /= magn;
        nvector.y /= magn;
        nvector.z /= magn;
        glLoadIdentity();
        if (rotateCam == 1) {
            theta += 0.0001;
            if (theta > 360) {
                theta = 0;
            }
            gluLookAt(20*cos(theta), 15, 20*sin(theta), 0, 0, 0, 0, 1, 0);
        }
        else {
            printf("%f\t%f\t%f\n", x+xprime, y+yprime, z+zprime);
            gluLookAt(x, y+1, z, x+xprime, y+yprime, z+zprime, 0, 1, 0);
        }
    }
}
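For reference, b0..b3 are intended to be the standard uniform cubic B-spline blending functions, with bprime0..bprime3 their derivatives. A sketch of the textbook definitions (my actual implementation may differ in layout, but should compute the same values):
float b0(float u) { return (1 - u) * (1 - u) * (1 - u) / 6.0f; }
float b1(float u) { return (3*u*u*u - 6*u*u + 4) / 6.0f; }
float b2(float u) { return (-3*u*u*u + 3*u*u + 3*u + 1) / 6.0f; }
float b3(float u) { return u * u * u / 6.0f; }
// derivatives with respect to u
float bprime0(float u) { return -(1 - u) * (1 - u) / 2.0f; }
float bprime1(float u) { return (3*u*u - 4*u) / 2.0f; }
float bprime2(float u) { return (-3*u*u + 2*u + 1) / 2.0f; }
float bprime3(float u) { return u * u / 2.0f; }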
The spacebar switches the 'rotateCam' variable, which is supposed to switch between two viewing modes; one which circles the camera around the coaster (the 'if' statement) and one which rides the coaster (the 'else' statement).
Here's the thing: the circling mode works fine, and switching between modes works fine, but the camera is always stationary in 'ride' mode. The printf statement shows that x, xprime, y, yprime, etc. are all changing with each timer tick, but the camera never moves.
If more code is needed let me know.

gluLookAt doesn't position the camera, it only rotates it to the correct angle. After this it is up to you to also translate it. So this should do the trick:
gluLookAt(x, y+1, z, x+xprime, y+yprime, z+zprime, 0, 1, 0);
glTranslated(x, y+1, z);

Related

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture of the output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value (calculated by the ray-sphere intersection formula) to the origin (where my "camera" is, at (0, 0, 0)); in other words, e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (the light's coords minus the hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
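To spell that out in equation form (my reading of the book's notation, where e is the eye point, d the ray direction, c the sphere center, and I the light intensity):
p = e + t*d
l = normalize(light - p)
v = normalize(-d)
n = normalize(p - c)
pixel = color * I * max(0, n · l)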
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I notice something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect a vector whose components all lie within the range [-r, r]; but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units away from each other along the z axis? The two z values are within 1 of each other in absolute value, and that's what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist); //.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), both with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = (part * part) - part3 * ((part2 * part2) - (r * r));
    if (discriminant > 0)
    {
        float t_add = (d * part2 + sqrt(discriminant)) / part3;
        float t_sub = (d * part2 - sqrt(discriminant)) / part3;
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = (d * part2 + sqrt(discriminant)) / part3;
        float t_sub = (d * part2 - sqrt(discriminant)) / part3;
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
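For completeness, here's roughly what the corrected root computation looks like (a sketch with renamed variables; as elsewhere in my code, operator* on two vectors is a dot product):
// t = (-d·(e-c) ± sqrt((d·(e-c))² - (d·d)·((e-c)·(e-c) - r²))) / (d·d)
Vector3f neg_d = d.negated();
Vector3f e_min_c = (e - c).toVector();
float d_dot_d = d * d;
float b = d * e_min_c;
float discriminant = (b * b) - d_dot_d * ((e_min_c * e_min_c) - (r * r));
if (discriminant >= 0)
{
    float t_add = ((neg_d * e_min_c) + sqrt(discriminant)) / d_dot_d;
    float t_sub = ((neg_d * e_min_c) - sqrt(discriminant)) / d_dot_d;
    float t = fmin(t_add, t_sub);
    // accept the hit only if t > 0, as before
}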

Zooming towards center of Camera on 2d Plane

Once again, camera zooming on a 2D plane. I searched a lot and know that there are similar questions, but I have obviously been unable to apply what I found.
Basically, I multiply the distance of all elements to the origin by mouseDelta, which is a double between 0.5 and 1. This works fine for all elements, but since the camera's anchor (camX, camY) is its upper-left corner, the objects in the camera's focus change their position relative to the focus. I want to zoom "towards" the focus. Here is what I have, but it behaves really strangely:
camX and camY, as mentioned, are the coordinates for the upper left of the cam.
mouseDelta is the zoom level that's stored globally and is changed by each wheel event.
screenX is the width of the screen/window (fullscreen anyways)
screenY is the height of the screen/window
if (newEvent.type == sf::Event::MouseWheelMoved) // zoom
{
    mouseDelta += ((double)newEvent.mouseWheel.delta) / 20;
    if (mouseDelta > 1) { mouseDelta = 1; }
    else if (mouseDelta < 0.5) { mouseDelta = 0.5; }
    // resize graphics
    for (int i = 0; i < core->universe->world->nodes.size(); i++) {
        core->universe->world->nodes.at(i).pic->setSize(mouseDelta);
    }
    for (int i = 0; i < core->universe->world->links.size(); i++) {
        core->universe->world->links.at(i).pic->setSize(mouseDelta);
    }
    camX = (camX + screenX/2) - (camX + screenX/2)*mouseDelta;
    camY = (camY + screenY/2) - (camY + screenY/2)*mouseDelta;
}
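For clarity, the relationship I think needs to hold is: the world point under the screen center before the zoom change should still be under the screen center afterwards. A sketch of that (oldDelta and newDelta are hypothetical names for the zoom level before and after the wheel event):
// world coordinate currently under the screen center (positions are scaled by the zoom level)
double focusX = (camX + screenX / 2) / oldDelta;
double focusY = (camY + screenY / 2) / oldDelta;
// re-anchor the top-left corner so that same point stays centered at the new zoom
camX = focusX * newDelta - screenX / 2;
camY = focusY * newDelta - screenY / 2;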

Bullet algorithm having trouble with rotation on the X

Here is what I'm trying to do: shoot a bullet from the center of the screen. I have an X and a Y rotation angle. The problem is that the Y component (which is modified by rotation about the X axis) is really not working as intended. Here is what I have.
float yrotrad, xrotrad;
yrotrad = (Camera.roty / 180.0f * 3.141592654f);
xrotrad = (Camera.rotx / 180.0f * 3.141592654f);
Vertex3f Pos;
// get camera position
pls.x = Camera.x;
pls.y = Camera.y;
pls.z = Camera.z;
for (float i = 0; i < 60; i++)
{
    // add the rotation vector
    pls.x += float(sin(yrotrad));
    pls.z -= float(cos(yrotrad));
    pls.y += float(sin(twopi - xrotrad));
    // translate camera coords to cube coords
    Pos.x = ceil(pls.x / 3);
    Pos.y = ceil(pls.y / 3);
    Pos.z = ceil(pls.z / 3);
    if (!CubeIsEmpty(Pos.x, Pos.y, Pos.z)) // remove first cube that made contact
    {
        delete GetCube(Pos.x, Pos.y, Pos.z);
        SetCube(0, Pos.x, Pos.y, Pos.z);
        return;
    }
}
This is almost identical to how I move the player: I add the directional vector to the camera, then find which cube the player is on. If I remove the pls.y += float(sin(twopi - xrotrad)); line, then I clearly see that on the X and Z everything points as it should. When I add it back, it almost works, but not quite. What I observed from rendering out spheres along the trajectory is that the further up or down I look, the more offset the trajectory becomes, rather than staying aligned to the camera's center. What am I doing wrong?
Thanks
What happens is very difficult to explain. I'd expect the bullet at time 0 to always be at the center of the screen, but it behaves oddly. If I'm looking straight at the horizon, up to roughly ±20 degrees upward, it's fine, but beyond that it stops following.
I set up my matrix like this:
void CCubeGame::SetCameraMatrix()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(Camera.rotx, 1, 0, 0);
    glRotatef(Camera.roty, 0, 1, 0);
    glRotatef(Camera.rotz, 0, 0, 1);
    glTranslatef(-Camera.x, -Camera.y, -Camera.z);
}
and change the angle like this:
void CCubeGame::MouseMove(int x, int y)
{
    if (!isTrapped)
        return;
    int diffx = x - lastMouse.x;
    int diffy = y - lastMouse.y;
    lastMouse.x = x;
    lastMouse.y = y;
    Camera.rotx += (float)diffy * 0.2;
    Camera.roty += (float)diffx * 0.2;
    if (Camera.rotx > 90)
    {
        Camera.rotx = 90;
    }
    if (Camera.rotx < -90)
    {
        Camera.rotx = -90;
    }
    if (isTrapped)
    {
        if (fabs(ScreenDimensions.x/2 - x) > 1 || fabs(ScreenDimensions.y/2 - y) > 1)
        {
            resetPointer();
        }
    }
}
You need to scale X and Z by cos(xrotrad) (in other words, multiply by cos(xrotrad)).
Imagine you're pointing straight down the Z axis but looking straight up. You don't want the bullet to shoot down the Z axis at all; this is why you need to scale it. (It's basically the same thing you're doing between X and Z, but now between the XZ vector and Y.)
pls.x += float(sin(yrotrad) * cos(xrotrad));
pls.z -= float(cos(yrotrad) * cos(xrotrad));
pls.y += float(sin(twopi - xrotrad));
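To see why, note that the step you add each iteration is really a unit direction vector in spherical coordinates. A self-contained sketch (directionFromAngles is a hypothetical helper; angles in radians, same sign conventions as the snippet above):
#include <cmath>

struct Vec3 { float x, y, z; };

// Unit direction from yaw (rotation about Y) and pitch (rotation about X), in radians.
Vec3 directionFromAngles(float yaw, float pitch)
{
    Vec3 d;
    d.x =  sinf(yaw) * cosf(pitch);
    d.y = -sinf(pitch);              // same as sin(2*pi - pitch)
    d.z = -cosf(yaw) * cosf(pitch);
    return d;                        // unit length: sin^2 + cos^2 = 1
}
At pitch = ±90°, cos(pitch) is 0, so the X and Z components vanish entirely, which is exactly what you want when looking straight up or down.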

How to draw an Arc in OpenGL

While making a little Pong game in C++ OpenGL, I decided it'd be fun to create arcs (semicircles) when stuff bounces. I decided to skip Bezier curves for the moment and just go with straight algebra, but I didn't get far. My algebra follows a simple quadratic function (y = ±sqrt(mx + c)).
This little excerpt is just an example I've yet to fully parameterize; I just wanted to see how it would look. When I draw this, however, it gives me a straight vertical line where the curve's tangent slope approaches -1.0 / 1.0.
Is this a limitation of the GL_LINE_STRIP style or is there an easier way to draw semi-circles / arcs? Or did I just completely miss something obvious?
void Ball::drawBounce()
{ float piecesToDraw = 100.0f;
float arcWidth = 10.0f;
float arcAngle = 4.0f;
glBegin(GL_LINE_STRIP);
for (float i = 0.0f; i < piecesToDraw; i += 1.0f) // Positive Half
{ float currentX = (i / piecesToDraw) * arcWidth;
glVertex2f(currentX, sqrtf((-currentX * arcAngle)+ arcWidth));
}
for (float j = piecesToDraw; j > 0.0f; j -= 1.0f) // Negative half (go backwards in X direction now)
{ float currentX = (j / piecesToDraw) * arcWidth;
glVertex2f(currentX, -sqrtf((-currentX * arcAngle) + arcWidth));
}
glEnd();
}
Thanks in advance.
What is the purpose of sqrtf((-currentX * arcAngle) + arcWidth)? When i > 25, that expression becomes imaginary (its argument goes negative). The proper way of doing this would be to use sin()/cos() to generate the X and Y coordinates of a semicircle, as stated in your question. If you want to use a parabola instead, the cleaner way would be to calculate y = H - H(x/W)^2.
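For example, a minimal sketch of the sin()/cos() approach (drawArc and its parameters are hypothetical; angles in radians):
void drawArc(float cx, float cy, float radius, float startAngle, float endAngle, int segments)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= segments; ++i)
    {
        // step the angle evenly from startAngle to endAngle
        float angle = startAngle + (endAngle - startAngle) * i / segments;
        glVertex2f(cx + radius * cosf(angle), cy + radius * sinf(angle));
    }
    glEnd();
}
Calling drawArc(x, y, 10.0f, 0.0f, 3.141592654f, 100) draws the upper semicircle of radius 10 centered on (x, y).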

Rotating coordinates around an axis

I'm representing a shape as a set of coordinates in 3D, and I'm trying to rotate the whole object around an axis (in this case the Z axis, but I'd like to rotate around all three once I get it working).
I've written some code to do this using a rotation matrix:
// Coord is a 3D vector of floats
// pos is a coordinate
// angles is a 3D vector; each component is the angle of rotation around that
// component's axis, in radians
Coord<float> Polymers::rotateByMatrix(Coord<float> pos, const Coord<float> &angles)
{
    float xrot = angles[0];
    float yrot = angles[1];
    float zrot = angles[2];
    // z axis rotation
    pos[0] = (cosf(zrot) * pos[0]) - (sinf(zrot) * pos[1]);
    pos[1] = (sinf(zrot) * pos[0]) + (cosf(zrot) * pos[1]);
    return pos;
}
The image below shows the object I'm trying to rotate (looking down the Z axis) before the rotation is attempted; each small sphere indicates one of the coordinates I'm trying to rotate.
(image: http://www.cs.nott.ac.uk/~jqs/notsquashed.png)
The rotation is performed for the object by the following code:
// loop over each coordinate in the object
for (int k = start; k < finish; ++k)
{
    Coord<float> pos = mp[k - start];
    // move object away from origin to test rotation around origin
    pos += Coord<float>(5.0, 5.0, 5.0);
    pos = rotateByMatrix(pos, rots);
    // wrap particle position: this just wraps the coordinates around if they are
    // outside of the volume and writes the results to the positions array,
    // so it shouldn't affect the rotation
    for (int l = 0; l < 3; ++l)
    {
        // wrap to ensure toroidal space
        if (pos[l] < origin[l]) pos[l] += dims[l];
        if (pos[l] >= (origin[l] + dims[l])) pos[l] -= dims[l];
        parts->m_hPos[k * 4 + l] = pos[l];
    }
}
The problem is that when I perform the rotation in this way, with the angles parameter set to (0.0,0.0,1.0) it works (sort of), but the object gets deformed, like so:
(image: http://www.cs.nott.ac.uk/~jqs/squashed.png)
which is not what I want. Can anyone tell me what I'm doing wrong and how I can rotate the entire object around the axis without deforming it?
Thanks
nodlams
Where you do your rotation in rotateByMatrix, you compute the new pos[0] but then feed that into the next line when computing the new pos[1]. So the pos[0] you're using to compute the new pos[1] is not the input value but the output. Store the results in a temporary and return that:
Coord<float> tmp;
tmp[0] = (cosf(zrot) * pos[0] - (sinf(zrot) * pos[1]));
tmp[1] = (sinf(zrot) * pos[0] + cosf(zrot) * pos[1]);
return tmp;
Also, pass the pos into the function as a const reference.
const Coord<float> &pos
Plus, you should compute the sin and cos values once, store them in temporaries, and reuse them.
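Putting all three suggestions together, something like this (a sketch, assuming your Coord type supports copying and const indexing):
Coord<float> Polymers::rotateByMatrix(const Coord<float> &pos, const Coord<float> &angles)
{
    float zrot = angles[2];
    float cosz = cosf(zrot);   // computed once, reused below
    float sinz = sinf(zrot);
    Coord<float> tmp = pos;    // keep the input intact
    tmp[0] = cosz * pos[0] - sinz * pos[1];
    tmp[1] = sinz * pos[0] + cosz * pos[1];
    return tmp;
}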