DirectX 11 Mouse Control - C++

I have a 3D program in DirectX and I want to give mouse control to the camera. The problem is that the mouse moves right off the screen (in windowed mode) and then the camera doesn't turn anymore. I tried to use SetCursorPos to lock it in place after the mouse is moved; that way I could get a dx and then set the mouse back to the center of the screen. I ended up with an endless white screen. Here is my camera/mouse movement code so far. If you need any more information, just ask.
void PhysicsApp::OnMouseMove(WPARAM btnState, int x, int y)
{
    // Make each pixel correspond to a quarter of a degree.
    float dx = XMConvertToRadians(0.25f*static_cast<float>(x - mLastMousePos.x));
    float dy = XMConvertToRadians(0.25f*static_cast<float>(y - mLastMousePos.y));

    // Update angles based on input to orbit camera around box.
    mTheta += -dx;
    mPhi   += -dy;

    // Update player's direction to always face forward.
    playerRotation.y = -mTheta;

    // Restrict the angle mPhi.
    mPhi = MathHelper::Clamp(mPhi, 0.1f, MathHelper::Pi - 0.1f);

    if ((btnState & MK_RBUTTON) != 0)
    {
        // Make each pixel correspond to 0.05 units in the scene.
        float dx = 0.05f*static_cast<float>(x - mLastMousePos.x);
        float dy = 0.05f*static_cast<float>(y - mLastMousePos.y);

        // Update the camera radius based on input.
        mRadius += dx - dy;

        // Restrict the radius.
        mRadius = MathHelper::Clamp(mRadius, 5.0f, 50.0f);
    }

    mLastMousePos.x = x;
    mLastMousePos.y = y;
}
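A common fix for the white-screen/feedback problem is to warp the cursor back to the center every frame and skip the WM_MOUSEMOVE that SetCursorPos itself generates; without that guard, the synthetic move feeds back into the delta calculation. A minimal sketch of the guard logic (MouseDelta and DeltaFromCenter are illustrative names, not from the code above):

```cpp
#include <cassert>

// After handling a real WM_MOUSEMOVE you call SetCursorPos(centerX, centerY);
// that call itself queues another WM_MOUSEMOVE at the center, which must be
// ignored or the deltas feed back on themselves.
struct MouseDelta { int dx; int dy; bool synthetic; };

MouseDelta DeltaFromCenter(int x, int y, int centerX, int centerY)
{
    // A move that lands exactly on the center is the echo of our own
    // SetCursorPos and carries no user input.
    if (x == centerX && y == centerY)
        return { 0, 0, true };
    return { x - centerX, y - centerY, false };
}
```

In OnMouseMove you would use dx/dy in place of x - mLastMousePos.x when synthetic is false, then re-center with SetCursorPos. Raw Input (WM_INPUT) avoids the warping entirely and is generally the more robust option for camera control.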

Related

I have a device reporting a left-handed coordinate angle and magnitude; how do I represent that as a line on the screen from the center?

The device I am using generates vectors like this:
How do I translate polar coordinates (angle and magnitude) from a left-handed coordinate system to a Cartesian line, drawn on a screen where the origin point is the middle of the screen?
I am displaying the line on a WT32-SC01 screen using C++. There is a tft.drawLine function, but its arguments are normal pixel locations, in which case 0,0 is the upper-left corner of the screen.
This is what I have so far (abbreviated):
....
int screen_height = tft.height();
int screen_width = tft.width();

// Device can read to 12m and reports in mm
float zoom_factor = (screen_width / 2.0) / 12000.0;

int originY = (int)(screen_height / 2);
int originX = (int)(screen_width / 2);

// Offset is for screen scrolling. No screen offset to start
int offsetX = 0;
int offsetY = 0;
...
// ld06 holds the reported angles and distances.
Coord coord = polarToCartesian(ld06.angles[i], ld06.distances[i]);
drawVector(coord, WHITE);

Coord polarToCartesian(float theta, float r) {
    // cos() and sin() take radians
    float rad = theta * 0.017453292519;
    Coord converted = {
        (int)(r * cos(rad)),
        (int)(r * sin(rad))
    };
    return converted;
}

void drawVector(Coord coord, int color) {
    // Cartesian relative to the center of the screen, factoring zoom and pan
    int destX = (int)(zoom_factor * coord.x) + originX + offsetX;
    int destY = originY - (int)(zoom_factor * coord.y) + offsetY;
    // From the middle of the screen (originX, originY) to destination x,y
    tft.drawLine(originX, originY, destX, destY, color);
}
I have something drawing on the screen, but now I have to translate from a left-handed coordinate system, and the whole plane is rotated 90 degrees. How do I do that?
If I understood correctly, your coordinate system has x pointing to the right and y pointing to the bottom, and you used the formula for the standard math coordinate system where y points up, so multiplying your sin by -1 should do the trick (if it doesn't, try multiplying random things by -1; it often works for this kind of problem).
I'm assuming (from your image) that your coordinate system has x going right, y going up, the angle measured clockwise from the y axis, (0,0) at the center of your polar coordinates, and trigonometric functions that accept radians. Then:
#include <math.h>
float x,y,ang,r;
const float deg = M_PI/180.0;
// ang = <0,360> // your angle
// r >= 0 // your radius (magnitude)
x = r*sin(ang*deg);
y = r*cos(ang*deg);
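Putting this answer's formula together with the screen mapping from the question, the full conversion from a reported (angle, distance) pair to a drawable pixel might look like the sketch below. The names zoom_factor, originX, and originY follow the question's code; Pixel and polarToPixel are illustrative, and std::lround is used instead of a plain cast to avoid truncation bias:

```cpp
#include <cmath>

// Left-handed polar (x right, y up, angle clockwise from +y) to screen
// pixels, where screen y grows downward from the upper-left corner.
struct Pixel { int x; int y; };

Pixel polarToPixel(float angleDeg, float r_mm,
                   float zoom_factor, int originX, int originY)
{
    const float deg = 3.14159265358979f / 180.0f;
    // Angle measured clockwise from the +y axis, so sin/cos are swapped
    // relative to the standard math convention.
    float x = r_mm * std::sin(angleDeg * deg);
    float y = r_mm * std::cos(angleDeg * deg);
    // Screen y grows downward, so the y component is subtracted.
    return { originX + (int)std::lround(zoom_factor * x),
             originY - (int)std::lround(zoom_factor * y) };
}
```

The result can be passed straight to tft.drawLine from the origin, as in the question's drawVector.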

OpenGL mouse camera issue (gluLookAt)

Hello, I am having a strange issue with my mouse movement in OpenGL. Here is my code for moving the camera with my mouse:
void camera(int x, int y)
{
    GLfloat xoff = x - lastX;
    GLfloat yoff = lastY - y; // Reversed since y-coordinates range from bottom to top
    lastX = x;
    lastY = y;

    GLfloat sensitivity = 0.5f;
    xoff *= sensitivity;
    yoff *= sensitivity;

    yaw += xoff;   // yaw is x
    pitch += yoff; // pitch is y

    // Limit up and down camera movement to 90 degrees
    if (pitch > 89.0)
        pitch = 89.0;
    if (pitch < -89.0)
        pitch = -89.0;

    // Update camera position and viewing angle
    Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
    Front.y = sin(convertToRads(pitch));
    Front.z = sin(convertToRads(yaw)) * cos(convertToRads(pitch));
}
convertToRads() is a small function I created to convert the angles to radians.
With this code I can move my camera however I want, but if I try to look all the way up, when I reach around 45 degrees the camera rotates 1-2 times around the x-axis and then continues up the y-axis. I can't figure out what I have done wrong, so if anyone could help I would appreciate it.
It seems you have misplaced a parenthesis:
Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
instead of:
Front.x = cos(convertToRads(yaw)) * cos(convertToRads(pitch));
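A quick way to see why the grouping matters: with the corrected formula, Front is a standard spherical direction and always has unit length, while the broken grouping cos(yaw * cos(pitch)) does not, which is what produced the erratic flips near 45 degrees. A small self-contained check (Vec3 and frontFromYawPitch are illustrative names):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// The corrected formula: a unit direction vector derived from yaw
// and pitch, matching the fixed line in the answer above.
Vec3 frontFromYawPitch(double yawDeg, double pitchDeg)
{
    const double toRad = std::acos(-1.0) / 180.0;
    double yaw = yawDeg * toRad;
    double pitch = pitchDeg * toRad;
    return { std::cos(yaw) * std::cos(pitch),
             std::sin(pitch),
             std::sin(yaw) * std::cos(pitch) };
}
```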

Mouse movement with GLUT

I am trying to learn the basics of OpenGL GLUT, and I am following a tutorial on a site.
I cannot understand how to move the camera freely in space and not just along two axes.
Could you help me?
I'm using a glutPassiveMotionFunc(mouseMove) callback.
// MOUSE MOVEMENT ----------------------------------------------------------
void mouseMove(int x, int y) {
    xOrigin = x;
    // this will only be true when the left button is down
    if (xOrigin >= 0) {
        // update deltaAngle
        deltaAngle = (x - xOrigin) * 0.001f;
        // update camera's direction
        lx = x + sin(angle + deltaAngle);
        lz = y - cos(angle + deltaAngle);
    }
    else {
        deltaAngle = (x + xOrigin) * 0.001f;
        // update camera's direction
        lx = x + sin(angle + deltaAngle);
        lz = y - cos(angle + deltaAngle);
    }
}
There are multiple coordinate systems in OpenGL: screen coordinates, eye coordinates, world coordinates, and so on.
The x and y you get from the mouse callback function refer to screen coordinates, which start at (0,0) in the upper-left corner of the window.
The camera, on the other hand, works on a different level. You didn't mention which version of OpenGL you are using, but in any case you can read the manual page for gluLookAt() to learn more about the eye coordinate system.
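For reference, the usual glutPassiveMotionFunc pattern keeps the previous cursor position, accumulates a yaw angle from the horizontal delta, and derives a unit look direction from the accumulated angle. A minimal sketch (lastX, yaw, lx, lz are illustrative names; note that the question's code assigns xOrigin = x before computing x - xOrigin, so its delta is always zero):

```cpp
#include <cmath>

static int lastX = -1;       // -1 = no previous sample yet
static float yaw = 0.0f;     // radians; 0 means looking down -z
static float lx = 0.0f, lz = -1.0f;

void mouseMove(int x, int /*y*/)
{
    // Accumulate yaw from the horizontal delta since the last event.
    if (lastX >= 0)
        yaw += (x - lastX) * 0.001f;  // sensitivity factor
    lastX = x;
    // Unit look direction in the xz plane.
    lx =  std::sin(yaw);
    lz = -std::cos(yaw);
}
```

lx and lz would then feed gluLookAt(camX, camY, camZ, camX + lx, camY, camZ + lz, 0, 1, 0).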

Picking Ray is inaccurate

I'm trying to implement a picking ray via instructions from this website.
Right now I basically only want to be able to click on the ground to order my little figure to walk towards this point.
Since my ground plane is flat, non-rotated, and non-translated, I'd have to find the x and z coordinates of my picking ray when y hits 0.
So far so good, this is what I've come up with:
// some constants
float HEIGHT = 768.f;
float LENGTH = 1024.f;
float fovy = 45.f;
float nearClip = 0.1f;

// mouse position on screen
float x = MouseX;
float y = HEIGHT - MouseY;

// GetView() returns the viewing direction, not the lookAt point.
glm::vec3 view = cam->GetView();
view = glm::normalize(view); // glm::normalize returns a copy; it does not modify in place
glm::vec3 h = glm::cross(view, glm::vec3(0, 1, 0)); // cameraUp
h = glm::normalize(h);
glm::vec3 v = glm::cross(h, view);
v = glm::normalize(v);

// convert fovy to radians
float rad = fovy * 3.14f / 180.f;
float vLength = tan(rad / 2) * nearClip; // nearClippingPlaneDistance
float hLength = vLength * (LENGTH / HEIGHT);
v *= vLength;
h *= hLength;

// translate mouse coordinates so that the origin lies in the center
// of the view port
x -= LENGTH / 2.f;
y -= HEIGHT / 2.f;

// scale mouse coordinates so that half the view port width and height
// becomes 1
x /= (LENGTH / 2.f);
y /= (HEIGHT / 2.f);

glm::vec3 cameraPos = cam->GetPosition();

// linear combination to compute intersection of picking ray with
// view port plane
glm::vec3 pos = cameraPos + (view * nearClip) + (h * x) + (v * y);

// compute direction of picking ray by subtracting the camera
// position from the intersection point
glm::vec3 dir = pos - cameraPos;

// Get intersection between ray and the ground plane
pos -= (dir * (pos.y / dir.y));
At this point I'd expect pos to be the point where my picking ray hits my ground plane.
When I try it, however, I get something like this:
(The mouse cursor wasn't recorded)
It's hard to see since the ground has no texture, but the camera is tilted, like in most RTS games.
My pitiful attempt to model a remotely human looking being in Blender marks the point where the intersection happened according to my calculation.
So it seems that the transformation between view and dir somewhere messed up and my ray ended up pointing in the wrong direction.
The gap between the calculated position and the actual position increases the farther I move my mouse away from the center of the screen.
I've found out that:
HEIGHT and LENGTH aren't accurate. Since Windows cuts away a few pixels for borders, it'd be more accurate to use 1006x728 as the window resolution. I guess that could account for small discrepancies.
If I increase fovy from 45 to about 78 I get a fairly accurate ray, so maybe there's something wrong with what I use as fovy. I'm explicitly calling glm::perspective(45.f, 1.38f, 0.1f, 500.f) (fovy, aspect ratio, fNear, fFar respectively).
So here's where I am lost. What do I have to do in order to get an accurate ray?
PS: I know that there are functions and libraries that have this implemented, but I try to stay away from these things for learning purposes.
Here's working code that does cursor to 3D conversion using depth buffer info:
glGetIntegerv(GL_VIEWPORT, @fViewport);
glGetDoublev(GL_PROJECTION_MATRIX, @fProjection);
glGetDoublev(GL_MODELVIEW_MATRIX, @fModelview);

// fViewport already contains viewport offsets
PosX := X;
PosY := ScreenY - Y; // In OpenGL the Y axis is inverted and starts from the bottom

glReadPixels(PosX, PosY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, @vz);
gluUnProject(PosX, PosY, vz, fModelview, fProjection, fViewport, @wx, @wy, @wz);

XYZ.X := wx;
XYZ.Y := wy;
XYZ.Z := wz;
If you want to test only the ray/plane intersection, this is the second part without the depth buffer:
gluUnProject(PosX, PosY, 0, fModelview, fProjection, fViewport, @x1, @y1, @z1); // Near
gluUnProject(PosX, PosY, 1, fModelview, fProjection, fViewport, @x2, @y2, @z2); // Far

// No intersection
Result := False;
XYZ.X := 0;
XYZ.Y := 0;
XYZ.Z := aZ;

if z2 < z1 then
  SwapFloat(z1, z2);

if (z1 <> z2) and InRange(aZ, z1, z2) then
begin
  D := 1 - (aZ - z1) / (z2 - z1);
  XYZ.X := Lerp(x1, x2, D);
  XYZ.Y := Lerp(y1, y2, D);
  Result := True;
end;
I find it rather different from what you are doing, but maybe it will make more sense.
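However the ray is built (the manual frustum math in the question, or two gluUnProject calls as in this answer), the ground-plane hit itself reduces to a single scalar parameter. A sketch of that last step in isolation, with illustrative names (Vec3, hitGround); dir need not be normalized:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// The same linear step as the question's pos -= dir * (pos.y / dir.y),
// written out with the two degenerate cases handled.
bool hitGround(Vec3 origin, Vec3 dir, Vec3& out)
{
    if (std::fabs(dir.y) < 1e-6f)  // ray parallel to the ground plane
        return false;
    float t = -origin.y / dir.y;   // parameter where the ray reaches y = 0
    if (t < 0.0f)                  // plane lies behind the ray origin
        return false;
    out = { origin.x + t * dir.x, 0.0f, origin.z + t * dir.z };
    return true;
}
```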

How do I translate mouse movement to camera panning?

I use Direct3D 11 to write an application, and my camera's target vector is determined by the variables xdelta, ydelta, and zdelta.
I have to pan my view in XY as I move my mouse across the screen with the RMB and LMB pressed.
I figured that I need to add the mouse-movement delta in VIEW space so that it pans in X and Y relative to my view, not world X and Y.
However, being new to this, I'm not sure how to convert my VIEW coordinates back to WORLD coordinates.
I hope that I am following all of the formatting rules, as I don't post here enough to remember all of them exactly.
Any help would be appreciated. Below is my snippet of code. Perhaps there is also a better way.
if (m_Input->RMBPressed() == true && m_Input->LMBPressed() == true) // Pan View
{
    if (curX != mouseX || curY != mouseY) {
        // Get target coordinates in view space
        D3DXVECTOR3 targetView(mView(2,0), mView(2,1), mView(2,2));
        // Add mouse XY delta vector to target view coordinates
        D3DXVECTOR3 delta((float)curX - (float)mouseX,
                          (float)curY - (float)mouseY, zdelta);
        targetView += delta;
        // Convert back to world coordinates
    }
}
I'm trying a different approach that I believe is correct, but it still doesn't appear to be working.
I get the delta in X and Y from the screen as my mouse moves and store them in the variables "xdelta" and "ydelta".
I then create a transformation matrix
D3DXMATRIX mTrans;
I then populate the values in the matrix
// Rotation
mTrans(0,0) = delta.x;
mTrans(1,1) = delta.y;
// Translation
mTrans(0,3) = delta.x;
mTrans(1,3) = delta.y;
Now to get their corresponding View coordinates, I think I should multiply it by the View Matrix, which I call mView.
mTrans = mTrans * mView;
Now, adding these transformed values to my current target X and Y (determined by the variables "target_x" and "target_y") should move my target vector relative to my view X and Y axes (i.e., orthogonal to my current view).
target_x += mTrans(0,3);
target_y += mTrans(1,3);
But it doesn't. It moves my target along the world X and Y axes, not my view X and Y axes.
MORE SNIPPETS
I do use the D3DXMatrixLookAtLH function, but I'm trying to calculate the change in the target location based on my mouse movements to get a new target to feed into that function. I added some code snippets:
if (m_Input->RMBPressed() == true && m_Input->LMBPressed() == true) // Pan View
{
    if (curX != mouseX || curY != mouseY) {
        // Get mouse XY delta vector
        D3DXVECTOR3 delta((float)curX - (float)mouseX, (float)curY - (float)mouseY, zdelta);

        // Create transformation matrix
        D3DXMATRIX mTemp;
        // Rotation
        mTemp(0,0) = delta.x;
        mTemp(1,1) = delta.y;
        mTemp(2,2) = delta.z;
        // Translation
        mTemp(0,3) = delta.x;
        mTemp(1,3) = delta.y;
        mTemp(2,3) = delta.z;

        // Convert to view coordinates
        mTemp = mTemp * mView;
        xdelta += mTemp(0,3);
        ydelta += mTemp(1,3);

        // Convert spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from z.
        float height = mRadius * cosf(mPhi);
        float sideRadius = mRadius * sinf(mPhi);
        float x = xdelta + sideRadius * cosf(mTheta);
        float z = zdelta + sideRadius * sinf(mTheta);
        float y = ydelta + height;

        // Build the view matrix.
        D3DXVECTOR3 pos(x, y, z);
        D3DXVECTOR3 target(xdelta, ydelta, zdelta);
        D3DXVECTOR3 up(0.0f, 1.0f, 0.0f); // Y axis is up. Looking down Z.
        D3DXMatrixLookAtLH(&mView, &pos, &target, &up);
First of all, thank you for all of your information. It is helping me slowly understand DirectX matrices.
Am I correct in the logic below?
Assume my mouse change is 5.0 in X and 5.0 in Y on the screen; that is my delta. Z would always be 0.0.
I could find the view coordinates as follows:
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f); // Y axis is up. Looking down Z.
D3DXVECTOR3 delta(5.0f, 5.0f, 0.0f);
D3DXVECTOR3 origin(0.0f, 0.0f, 0.0f);

D3DXMATRIX mTemp;
D3DXMatrixIdentity(&mTemp);
D3DXMatrixLookAtLH(&mTemp, &delta, &origin, &up);
I should now have the view coordinates of the XY delta stored in the mTemp view matrix?
If so, is the best way to proceed to add the XY delta view coordinates to the mView XY coordinates and then translate back to world coordinates, to get the actual world XY values I have to set the target to?
I'm at a loss as to how to achieve this. It's really not clear to me, and the books I purchased on the subject are not clear either.
You can calculate world coordinates from local coordinates by multiplying your local coords by the world matrix.
If you want your camera to move, then just define its current position using a 3D vector, and then use D3DXMatrixLookAtLH to calculate the view matrix from that position.
Check out this tutorial for more details: http://www.braynzarsoft.net/index.php?p=D3D11WVP
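As a sketch of the pan itself: with the layout produced by D3DXMatrixLookAtLH, column 0 of the view matrix's 3x3 part is the camera's world-space right axis and column 1 is its up axis, so panning is just moving the eye and target along those two axes. Plain arrays stand in for D3DXMATRIX / D3DXVECTOR3 here, and panTarget is an illustrative name:

```cpp
// Move the target along the camera's right and up axes, read straight
// out of the view matrix, so the pan is relative to the view rather
// than the world axes.
void panTarget(const float view[4][4], float dx, float dy,
               float target[3])
{
    for (int i = 0; i < 3; ++i) {
        float right = view[i][0];  // column 0: camera right axis
        float up    = view[i][1];  // column 1: camera up axis
        target[i] += dx * right + dy * up;
    }
}
```

Applying the same offset to the eye position and then rebuilding the matrix with D3DXMatrixLookAtLH keeps the viewing direction unchanged while the view slides sideways.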