Mouse movement with GLUT - OpenGL

I am trying to learn the basics of OpenGL with GLUT, and I am following a tutorial from a website.
I cannot understand how to move the camera in space rather than along just two coordinates.
Could you help me?
I'm registering the callback with glutPassiveMotionFunc(mouseMove).
//MOUSE MOVEMENT----------------------------------------------------------
void mouseMove(int x, int y) {
    xOrigin = x;
    // this will only be true when the left button is down
    if (xOrigin >= 0) {
        // update deltaAngle
        deltaAngle = (x - xOrigin) * 0.001f;
        // update camera's direction
        lx = x + sin(angle + deltaAngle);
        lz = y - cos(angle + deltaAngle);
    }
    else {
        deltaAngle = (x + xOrigin) * 0.001f;
        // update camera's direction
        lx = x + sin(angle + deltaAngle);
        lz = y - cos(angle + deltaAngle);
    }
}

There are multiple coordinate systems in OpenGL: screen coordinates, eye coordinates, world coordinates, and so on.
The x and y you get from the mouse callback are screen coordinates, which start at (0,0) in the upper-left corner of the window.
The camera, on the other hand, works on a different level. You didn't mention which version of OpenGL you are using, but in any case you can read the manual page for gluLookAt() to learn more about the eye coordinate system.
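For full 3D movement the usual approach is to keep two angles, yaw and pitch, accumulate the mouse deltas into them, derive a direction vector, and pass it to gluLookAt(). Here is a minimal sketch for the fixed-function pipeline; angleX, angleY, and the cam* variables are illustrative names, not from your tutorial:

#include <GL/glut.h>
#include <math.h>

float angleX = 0.0f, angleY = 0.0f;           // yaw and pitch, in radians
float camX = 0.0f, camY = 1.0f, camZ = 5.0f;  // camera position

void setCamera() {
    // Direction on the unit sphere: yaw (angleX) rotates around the
    // vertical axis, pitch (angleY) tilts the view up and down.
    float dirX = cos(angleY) * sin(angleX);
    float dirY = sin(angleY);
    float dirZ = -cos(angleY) * cos(angleX);

    glLoadIdentity();
    gluLookAt(camX, camY, camZ,                       // eye position
              camX + dirX, camY + dirY, camZ + dirZ,  // point looked at
              0.0f, 1.0f, 0.0f);                      // up vector
}

In mouseMove you would then add the horizontal delta to angleX and the vertical delta to angleY (clamping angleY to just under ±90°), instead of writing lx/lz directly.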

Related

I have a device reporting a left-handed-coordinate angle and magnitude; how do I represent that as a line on the screen drawn from the center?

The device I am using generates vectors like this:
How do I translate polar coordinates (angle and magnitude) from a left-handed coordinate system to a Cartesian line drawn on a screen whose origin is the middle of the screen?
I am displaying the line on a WT32-SC01 screen using C++. There is a tft.drawLine function, but its arguments are normal pixel locations, in which case (0,0) is the upper-left corner of the screen.
This is what I have so far (abbreviated):
....
int screen_height = tft.height();
int screen_width = tft.width();

// Device can read to 12 m and reports in mm
float zoom_factor = (screen_width / 2.0) / 12000.0;

int originY = (int)(screen_height / 2);
int originX = (int)(screen_width / 2);

// Offset is for screen scrolling. No screen offset to start
int offsetX = 0;
int offsetY = 0;
...

// ld06 holds the reported angles and distances.
Coord coord = polarToCartesian(ld06.angles[i], ld06.distances[i]);
drawVector(coord, WHITE);

Coord polarToCartesian(float theta, float r) {
    // cos() and sin() take radians
    float rad = theta * 0.017453292519;
    Coord converted = {
        (int)(r * cos(rad)),
        (int)(r * sin(rad))
    };
    return converted;
}
void drawVector(Coord coord, int color) {
    // Cartesian relative to the center of the screen, factoring zoom and pan
    int destX = (int)(zoom_factor * coord.x) + originX + offsetX;
    int destY = originY - (int)(zoom_factor * coord.y) + offsetY;
    // From the middle of the screen (originX, originY) to destination (x, y)
    tft.drawLine(originX, originY, destX, destY, color);
}
I have something drawing on the screen, but now I have to translate from a left-handed coordinate system, and the whole plane is rotated 90 degrees. How do I do that?
If I understood correctly, your coordinate system has x pointing to the right and y pointing to the bottom, and you used the formula for the standard math coordinate system where y points up, so multiplying your sin by -1 should do the trick (if it doesn't, try multiplying random things by -1; it often works for this kind of problem).
Assuming (from your image) that your coordinate system has x going right, y going up, the angle measured clockwise from the y axis, (0,0) as the center of your polar coordinates, and goniometric functions that accept radians, then:
#include <math.h>
float x,y,ang,r;
const float deg = M_PI/180.0;
// ang = <0,360> // your angle
// r >= 0 // your radius (magnitude)
x = r*sin(ang*deg);
y = r*cos(ang*deg);
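To draw that on a display where pixel y grows downward (like the WT32-SC01 in the question), flip the y term when converting to pixels. A minimal sketch under the same assumptions; polarToPixel is an illustrative name, and the 12000 mm range scaling is taken from the question:

#include <math.h>

const float deg = M_PI / 180.0;

// Convert a device reading (angle in degrees, clockwise from the y axis,
// distance in mm) to a screen pixel with the origin mid-screen.
void polarToPixel(float ang, float r,
                  int screen_width, int screen_height,
                  int &px, int &py) {
    float zoom = (screen_width / 2.0f) / 12000.0f; // 12 m maps to half the width
    float x = r * sin(ang * deg);
    float y = r * cos(ang * deg);
    px = screen_width  / 2 + (int)(zoom * x);
    py = screen_height / 2 - (int)(zoom * y);      // minus: pixel y grows down
}

tft.drawLine(screen_width / 2, screen_height / 2, px, py, WHITE); then draws the vector from the center.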

OpenGL Sphere deforms when setting center coordinate to high values

So I am drawing a sphere, not with the "subdividing an icosahedron" approach, but using triangle strips and the parametric equation of the sphere.
Here is my code:
glBegin(GL_TRIANGLE_STRIP);
for(float i = -PI/2; i < PI/2; i += 0.01f)
{
    temp = i + 0.01f;
    for(float j = 0; j < 2*PI; j += 0.01f)
    {
        temp -= 0.01f;
        glVertex3f(cx + rad * cos(j) * cos(temp), cy + rad * cos(temp) * sin(j), cz + rad * sin(temp));
        temp += 0.01f;
        glVertex3f(cx + rad * cos(j) * cos(temp), cy + rad * cos(temp) * sin(j), cz + rad * sin(temp));
    }
}
glEnd();
The approach is as follows. Imagine a circle in the XY plane; this is drawn by the inner loop. Now imagine the XY plane moved up or down along the Z axis, with the radius changing because it's a sphere; this is done by the outer loop.
The first vertex of each strip segment is emitted for the circle with the XY plane at its initial position. After temp += 0.01f the plane has moved up by 0.01 and the second vertex is emitted. This is how the strip is built, as sketched more directly below.
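For reference, the same two-ring strip can be written without the temp bookkeeping, emitting one vertex on the current latitude ring and one on the next per step (a sketch, not the poster's code; PI, rad, cx, cy, cz as above):

glBegin(GL_TRIANGLE_STRIP);
for (float i = -PI/2; i < PI/2; i += 0.01f) {
    float i2 = i + 0.01f; // the neighbouring latitude ring
    for (float j = 0; j < 2*PI; j += 0.01f) {
        // one vertex on the current ring, one on the ring above it
        glVertex3f(cx + rad*cos(j)*cos(i),  cy + rad*cos(i)*sin(j),  cz + rad*sin(i));
        glVertex3f(cx + rad*cos(j)*cos(i2), cy + rad*cos(i2)*sin(j), cz + rad*sin(i2));
    }
}
glEnd();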
The problem is that if cx = cy = cz = 0, or any low value like 2 or 3, the sphere looks fine. However, if I put, for example, cx = 15, cy = 15, cz = -6, the sphere gets deformed. Here is the picture.
If I use GL_POINTS, this is what I'm getting:
Sorry, a very stupid mistake: I wasn't converting the values I put into glFrustum correctly, hence a weird FOV was being generated. That solved the issue. Thanks
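For anyone hitting the same thing: glFrustum takes the left/right/bottom/top extents of the near plane, not an angle, so a field of view has to be converted first. A minimal sketch of that conversion (my own illustration, not the poster's exact fix), equivalent to what gluPerspective does:

#include <math.h>

// Set up a perspective frustum from a vertical field of view in degrees.
void setPerspective(double fovyDegrees, double aspect, double zNear, double zFar) {
    double top   = zNear * tan(fovyDegrees * M_PI / 360.0); // half the FOV, in radians
    double right = top * aspect;
    glFrustum(-right, right, -top, top, zNear, zFar);
}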

OpenGL mouse camera issue (gluLookAt)

Hello, I am having a strange issue with my mouse movement in OpenGL. Here is my code for moving the camera with my mouse:
void camera(int x, int y)
{
    GLfloat xoff = x - lastX;
    GLfloat yoff = lastY - y; // Reversed since y-coordinates range from bottom to top
    lastX = x;
    lastY = y;

    GLfloat sensitivity = 0.5f;
    xoff *= sensitivity;
    yoff *= sensitivity;

    yaw += xoff;   // yaw is x
    pitch += yoff; // pitch is y

    // Limit up and down camera movement to 90 degrees
    if (pitch > 89.0)
        pitch = 89.0;
    if (pitch < -89.0)
        pitch = -89.0;

    // Update camera position and viewing angle
    Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
    Front.y = sin(convertToRads(pitch));
    Front.z = sin(convertToRads(yaw)) * cos(convertToRads(pitch));
}
convertToRads() is a small function I created to convert the mouse coordinates to radians.
With this code I can move my camera however I want, but if I try to look all the way up, around 45 degrees it rotates once or twice around the x axis and then continues to increase along the y axis. I can't tell whether I have done something wrong, so if anyone could help I would appreciate it.
It seems you have misplaced a parenthesis:
Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
instead of:
Front.x = cos(convertToRads(yaw)) * cos(convertToRads(pitch));
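With the parenthesis fixed, the three components form a unit direction vector that can go straight into gluLookAt (a sketch; camPos is an illustrative name, not from the question):

Front.x = cos(convertToRads(yaw)) * cos(convertToRads(pitch));
Front.y = sin(convertToRads(pitch));
Front.z = sin(convertToRads(yaw)) * cos(convertToRads(pitch));

// Look from camPos toward camPos + Front, with +Y as up.
gluLookAt(camPos.x, camPos.y, camPos.z,
          camPos.x + Front.x, camPos.y + Front.y, camPos.z + Front.z,
          0.0f, 1.0f, 0.0f);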

DirectX11 Mouse Control

I have a 3D program in DirectX and I want to give the mouse control of the camera. The problem is that the mouse moves right off the screen (in windowed mode), and then the camera doesn't turn anymore. I tried to use SetCursorPos to just lock it in place after the mouse is moved; that way I could get a dx and then set the mouse back to the center of the screen. I ended up getting an endless white screen. Here is my camera/mouse movement code so far. If you need any more information, just ask.
void PhysicsApp::OnMouseMove(WPARAM btnState, int x, int y)
{
    // Make each pixel correspond to a quarter of a degree.
    float dx = XMConvertToRadians(0.25f*static_cast<float>(x - mLastMousePos.x));
    float dy = XMConvertToRadians(0.25f*static_cast<float>(y - mLastMousePos.y));

    // Update angles based on input to orbit camera around box.
    mTheta += -dx;
    mPhi += -dy;

    // Update player's direction to always face forward
    playerRotation.y = -mTheta;

    // Restrict the angle mPhi.
    mPhi = MathHelper::Clamp(mPhi, 0.1f, MathHelper::Pi - 0.1f);

    if( (btnState & MK_RBUTTON) != 0 )
    {
        // Make each pixel correspond to 0.05 unit in the scene.
        float dx = 0.05f*static_cast<float>(x - mLastMousePos.x);
        float dy = 0.05f*static_cast<float>(y - mLastMousePos.y);

        // Update the camera radius based on input.
        mRadius += dx - dy;

        // Restrict the radius.
        mRadius = MathHelper::Clamp(mRadius, 5.0f, 50.0f);
    }

    mLastMousePos.x = x;
    mLastMousePos.y = y;
}
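One way to implement the recentering described in the question: after handling a move, warp the cursor back to the middle of the client area and make sure the synthetic WM_MOUSEMOVE the warp generates produces a zero delta. A minimal Win32 sketch; mhMainWnd and RecenterCursor are hypothetical names, while GetClientRect, ClientToScreen, and SetCursorPos are standard Win32 calls:

void PhysicsApp::RecenterCursor()
{
    RECT rc;
    GetClientRect(mhMainWnd, &rc);
    POINT center = { (rc.right - rc.left) / 2, (rc.bottom - rc.top) / 2 };

    // Store the warp target so the WM_MOUSEMOVE it triggers yields a zero
    // delta instead of a huge jump (a likely cause of the camera snapping).
    mLastMousePos.x = center.x;
    mLastMousePos.y = center.y;

    ClientToScreen(mhMainWnd, &center); // SetCursorPos expects screen coordinates
    SetCursorPos(center.x, center.y);
}

Calling this at the end of OnMouseMove keeps the cursor inside the window so relative deltas keep arriving; raw input (WM_INPUT) is the more robust long-term alternative.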

Picking Ray is inaccurate

I'm trying to implement a picking ray via the instructions from this website.
Right now I basically only want to be able to click on the ground to order my little figure to walk towards that point.
Since my ground plane is flat, non-rotated, and non-translated, I have to find the x and z coordinates of my picking ray where y hits 0.
So far so good; this is what I've come up with:
//some constants
float HEIGHT = 768.f;
float LENGTH = 1024.f;
float fovy = 45.f;
float nearClip = 0.1f;
//mouse position on screen
float x = MouseX;
float y = HEIGHT - MouseY;
//GetView() returns the viewing direction, not the lookAt point.
glm::vec3 view = cam->GetView();
glm::normalize(view);
glm::vec3 h = glm::cross(view, glm::vec3(0,1,0) ); //cameraUp
glm::normalize(h);
glm::vec3 v = glm::cross(h, view);
glm::normalize(v);
// convert fovy to radians
float rad = fovy * 3.14 / 180.f;
float vLength = tan(rad/2) * nearClip; //nearClippingPlaneDistance
float hLength = vLength * (LENGTH/HEIGHT);
v *= vLength;
h *= hLength;
// translate mouse coordinates so that the origin lies in the center
// of the view port
x -= LENGTH / 2.f;
y -= HEIGHT / 2.f;
// scale mouse coordinates so that half the view port width and height
// becomes 1
x /= (LENGTH/2.f);
y /= (HEIGHT/2.f);
glm::vec3 cameraPos = cam->GetPosition();
// linear combination to compute intersection of picking ray with
// view port plane
glm::vec3 pos = cameraPos + (view*nearClip) + (h*x) + (v*y);
// compute direction of picking ray by subtracting intersection point
// with camera position
glm::vec3 dir = pos - cameraPos;
//Get intersection between ray and the ground plane
pos -= (dir * (pos.y/dir.y));
At this point I'd expect pos to be the point where my picking ray hits my ground plane.
When I try it, however, I get something like this:
(The mouse cursor wasn't recorded)
It's hard to see since the ground has no texture, but the camera is tilted, like in most RTS games.
My pitiful attempt to model a remotely human-looking being in Blender marks the point where the intersection happened according to my calculation.
So it seems that the transformation between view and dir went wrong somewhere, and my ray ended up pointing in the wrong direction.
The gap between the calculated position and the actual position increases the farther I move my mouse away from the center of the screen.
I've found out that:
HEIGHT and LENGTH aren't accurate. Since Windows cuts away a few pixels for borders, it would be more accurate to use 1006 x 728 as the window resolution. I guess that could account for small discrepancies.
If I increase fovy from 45 to about 78 I get a fairly accurate ray. So maybe there's something wrong with what I use as fovy. I'm explicitly calling glm::perspective(45.f, 1.38f, 0.1f, 500.f) (fovy, aspect ratio, fNear, fFar respectively).
So here's where I am lost. What do I have to do in order to get an accurate ray?
PS: I know that there are functions and libraries that have this implemented, but I try to stay away from these things for learning purposes.
Here's working code that does cursor-to-3D conversion using depth-buffer info:
glGetIntegerv(GL_VIEWPORT, @fViewport);
glGetDoublev(GL_PROJECTION_MATRIX, @fProjection);
glGetDoublev(GL_MODELVIEW_MATRIX, @fModelview);

// fViewport already contains viewport offsets
PosX := X;
PosY := ScreenY - Y; // In OpenGL the Y axis is inverted and starts from the bottom

glReadPixels(PosX, PosY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, @vz);
gluUnProject(PosX, PosY, vz, fModelview, fProjection, fViewport, @wx, @wy, @wz);

XYZ.X := wx;
XYZ.Y := wy;
XYZ.Z := wz;
If you want to test only the ray/plane intersection, this is the second part, without the depth buffer:
gluUnProject(PosX, PosY, 0, fModelview, fProjection, fViewport, @x1, @y1, @z1); // Near
gluUnProject(PosX, PosY, 1, fModelview, fProjection, fViewport, @x2, @y2, @z2); // Far

// No intersection by default
Result := False;
XYZ.X := 0;
XYZ.Y := 0;
XYZ.Z := aZ;

if z2 < z1 then
  SwapFloat(z1, z2);

if (z1 <> z2) and InRange(aZ, z1, z2) then
begin
  D := 1 - (aZ - z1) / (z2 - z1);
  XYZ.X := Lerp(x1, x2, D);
  XYZ.Y := Lerp(y1, y2, D);
  Result := True;
end;
I find it rather different from what you are doing, but maybe that will make more sense.
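Since the question already uses glm, the same near/far unproject approach translates directly to glm::unProject (a sketch with assumed names; view, proj, and the window size are whatever your application already has):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Returns the point where the picking ray through the cursor hits the
// ground plane y = 0 (assumes the ray is not parallel to the plane).
glm::vec3 pickGround(float mouseX, float mouseY,
                     const glm::mat4 &view, const glm::mat4 &proj,
                     float width, float height)
{
    glm::vec4 viewport(0.0f, 0.0f, width, height);
    float y = height - mouseY; // window coordinates start at the bottom

    // Unproject the cursor at the near (z = 0) and far (z = 1) planes.
    glm::vec3 nearPt = glm::unProject(glm::vec3(mouseX, y, 0.0f), view, proj, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(mouseX, y, 1.0f), view, proj, viewport);

    glm::vec3 dir = farPt - nearPt;
    float t = -nearPt.y / dir.y;   // ray parameter where y reaches 0
    return nearPt + t * dir;
}

This also sidesteps the manual fovy/aspect bookkeeping, since the projection matrix itself is used for the unprojection.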