WM_MOUSEMOVE not working with FPS camera implementation in Direct3D - c++

hey guys,
I am trying to implement an FPS-style camera. The mouse-look code is running, but the camera rotates in all directions without me even touching the mouse. Basically, the yaw and the pitch are getting wrong values from the mouse even when the mouse itself isn't moving.
Here is the code from the Win32 message loop:
case WM_MOUSEMOVE:
gCamera->Yaw() = (float)LOWORD(lparam);
gCamera->Pitch() = (float)HIWORD(lparam);
break;
The Yaw and Pitch methods return references to the data members mPitch and mYaw, and through them I do the rotations for the basis vectors (right, up and look vectors).
Just to clarify, WM_MOUSEMOVE is getting input (I checked through debugging), but it is getting very high and very wrong values even though I am not moving the mouse, and the camera is rotating in every direction like it just ate some rocket fuel.
P.S.: I had to typecast the values because I am using the yaw and the pitch to create matrices, so I have to use floats.
Appreciate the help, guys

Keep in mind your units. The WM_MOUSEMOVE lparam x and y values are in logical (screen pixel) coordinates, but most rotation values in games are expected in degrees or radians. For instance, if the mouse is at, say, (400, 300) but your camera class expects radians, then you're feeding hundreds of radians (dozens of full rotations) into your transform math, potentially leading to crazy movement even though you're not moving the mouse. The solution in such a case is to convert your logical units into radians or degrees, using a scale factor of your choosing.
In response to further comments:
One way to think about it is to ask yourself the question: how many logical units of movement (by the mouse) do you want to correspond to 360 degrees of rotation?
For instance, if you decided you wanted mouse movement across the full width of the window to correspond to 360 degrees, then the mathematical relationship is
screenW * scaleFactor = 2 * PI
Solve for scaleFactor then apply it to future mouse values using:
mouseX * scaleFactor = orientationInRadians
Keep in mind, this approach would link an absolute mouse location to an absolute camera orientation (for at least one DOF), so you may instead want to track changes in mouse position, rather than absolute mouse position; and then calculate changes in orientation (radians) to apply to existing orientation. The same formula can be used to convert the delta (change amount) from logical coords to radians.
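As a concrete sketch of the delta-based version (the helper name, the globals in the comment, and the choice of one full turn per window width are my assumptions, not from the question):

```cpp
#include <cmath>

const float TWO_PI = 6.28318530718f;

// Convert a change in mouse position (pixels) into a change in
// orientation (radians), where screenW pixels of travel == 360 degrees.
float pixelsToRadians(int deltaPixels, int screenW) {
    float scaleFactor = TWO_PI / (float)screenW;
    return (float)deltaPixels * scaleFactor;
}

// In the window procedure this would be used roughly like so
// (gLastX / gLastY / gScreenW are hypothetical globals):
//   case WM_MOUSEMOVE: {
//       int x = (int)(short)LOWORD(lparam);
//       int y = (int)(short)HIWORD(lparam);
//       gCamera->Yaw()   += pixelsToRadians(x - gLastX, gScreenW);
//       gCamera->Pitch() += pixelsToRadians(y - gLastY, gScreenW);
//       gLastX = x; gLastY = y;
//   } break;
```

The same function converts either an absolute position or a delta; only what you pass in changes.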

Related

OpenCV Camera to Object Horizontal Angle Calculation

So, I'm a high school student and the lead programmer on my local robotics team, and this year I decided to try out OpenCV and do some vision processing on our robot.
From my vision code, I need to know a few things about some objects on our competition field. These things are: distance (ft), horizontal angle from camera, and horizontal distance from camera (ft). Essentially, one large right triangle.
I already have the camera successfully detecting these objects and putting a boundingRect around them. With a gyroscope on our robot, we should be able to get our robot to ~90 degree angle to the object once it is detected (as it's a set angle on the field). Thus, I can calculate distance just based on an empirically made function of the area of the boundingRect of the object.
The horizontal angle of the object from the camera, however, I'm not exactly sure how to approach. Once I have that, though, I can do some simple trig and get the horizontal distance.
-
So here's what we have/know: the distance to the object in ft, the object is at ~90 degrees to the camera, the camera has a horizontal FOV of 67 degrees with a resolution of 800x600, the real-world dimensions of the object, and a boundingRect around the object.
How would I, using all of this information, calculate the horizontal angle from the camera to the object?
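One way to get that angle is the standard pinhole-camera relation: derive the focal length in pixels from the FOV, then take the arctangent of the boundingRect center's pixel offset over that focal length. A sketch (the function name is mine, not from the question):

```cpp
#include <cmath>

// Horizontal angle (degrees) from the camera axis to a pixel column.
// With an 800px-wide image and a 67-degree horizontal FOV:
//   focal length in pixels f = (width / 2) / tan(hfov / 2)
//   angle = atan((objectCenterX - width / 2) / f)
double horizontalAngleDeg(double objectCenterX, double imageWidth, double hfovDeg) {
    const double PI = 3.14159265358979323846;
    double halfFovRad = (hfovDeg / 2.0) * PI / 180.0;
    double focalPx = (imageWidth / 2.0) / std::tan(halfFovRad);
    double offsetPx = objectCenterX - imageWidth / 2.0;
    return std::atan(offsetPx / focalPx) * 180.0 / PI;
}
```

Feeding in the x-center of the boundingRect gives the horizontal angle; combined with the distance already known, simple trig then yields the horizontal distance.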

Finding the angle of a coordinate inside a circle

Right down to business: basically, I am making a small mini game which has characters running on top of a flat clock with the clock hand rotating around, and the characters have to avoid it by jumping.
The part I'm struggling with is coding the collision. The clock hand is just a set model that is rotated by applying matrices, and for whatever reason box collision will not work.
So my theory is: because I know the angle that the clock hand is currently being rotated by, is there some mathematical way to calculate the angle of the player relative to the centre point of the circle, so that it can be checked against the clock hand angle?
Sure.
float angle = atan2(y_handle - y_center, x_handle - x_center);
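To turn that into a collision check, compare the player's angle against the clock hand's current angle, wrapping the difference into [-PI, PI] so angles just below 2*PI and just above 0 register as close. A sketch (all names here are mine):

```cpp
#include <cmath>

const float PI_F = 3.14159265f;

// Angle of a point relative to the circle's centre.
float angleTo(float x, float y, float xCenter, float yCenter) {
    return std::atan2(y - yCenter, x - xCenter);
}

// Signed difference between two angles, wrapped into [-PI, PI].
float angleDiff(float a, float b) {
    float d = std::fmod(a - b, 2.0f * PI_F);
    if (d >  PI_F) d -= 2.0f * PI_F;
    if (d < -PI_F) d += 2.0f * PI_F;
    return d;
}

// True when the player's angle is within halfWidth radians of the hand.
bool hitByHand(float playerAngle, float handAngle, float halfWidth) {
    return std::fabs(angleDiff(playerAngle, handAngle)) < halfWidth;
}
```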

Tracking circular mouse movement in OpenGL

I am working on a simple mesh viewer implementation in C++ with basic functionality such as translation, rotation, scaling.
I'm stuck with implementing the rotation of the object along the z-axis using the mouse. What I want to implement is the following:
Click and drag the mouse vertically (almost vertical will do, as I use a simple threshold to filter slight deviations along the horizontal axis) to rotate the object along y-axis (this part is done).
Click and drag the mouse horizontally just as described above to rotate the object along x-axis (this part is done too).
For z-axis rotation, I want to detect a circular (or along an arc) mouse movement. I'm stuck with this part, and don't know how to implement this.
For the above two, I just use atan2() to determine the angle of movement. But how do I detect circular movements?
The only way to deal with this is to have a delay between the user starting to make the motion and the object rotating:
When the user clicks and begins to move the mouse, you need to determine whether it's going to become a straight-line movement or a circular one. This will require a certain amount of data to be collected before that judgement can be made.
The most extreme case would be requiring the user to make one complete circle first, then the rotation begins (in reality you could do much better than this). Just how small you are able to cut this period down to will depend on a) how precise you dictate your users' actions must be, and b) how good you are with pattern-recognition algorithms.
To get you started, here's an outline of an extremely poor algorithm:
On user click, store the x and y coordinates.
Every 1/10 of a second, store the new coordinates and call process_for_pattern.
In process_for_pattern you're looking for:
A period where the x coordinates and the y coordinates regularly both increase, both decrease, or one increases and one decreases. Over time if this pattern changes such that either the x or the y begins to reverse whilst the other continues as it was, then at that moment you can be fairly sure you've got a circle.
This algorithm would require the user to draw a quarter circle before it was detected, and it does not account for size, direction, or largely irregular movements.
If you really want to continue with this method you can get a much better algorithm, but you might want to reconsider your control method.
Perhaps you should define a screen region (e.g. at the window boundaries) which, when clicked, will initiate arc movement - or use some other modifier, a button or whatever.
Then at mouse click you capture the coordinates and the center of rotation (the mesh axis) in 2D screen space. This gets you a vector (mesh center, button-down pos).
On every mouse move you calculate a new vector (mesh center, mouse pos) and the angle between the two vectors is the angle of rotation.
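In code, that vector-angle idea might look like the following (a sketch; the function names are mine). atan2 of each vector gives its angle, and the difference, wrapped into [-PI, PI], is the rotation to apply:

```cpp
#include <cmath>

// Angle of the vector from the centre (cx, cy) to the point (px, py).
float vectorAngle(float cx, float cy, float px, float py) {
    return std::atan2(py - cy, px - cx);
}

// Rotation between the button-down position and the current mouse
// position, measured around the centre and wrapped into [-PI, PI].
float rotationDelta(float cx, float cy,
                    float downX, float downY,
                    float moveX, float moveY) {
    const float PI = 3.14159265f;
    float d = vectorAngle(cx, cy, moveX, moveY)
            - vectorAngle(cx, cy, downX, downY);
    if (d >  PI) d -= 2.0f * PI;
    if (d < -PI) d += 2.0f * PI;
    return d;
}
```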
I don't think it works like that...
You could convert mouse wheel rotation to z-axis rotation, or use a quaternion camera orientation, which can rotate along every axis almost intuitively...
The opposite is true for a quaternion camera: if one tries to rotate the mesh along a straight line, the mesh appears to rotate slightly around some other weird axis -- and to compensate, one intuitively tries to follow a slightly curved trajectory.
It's not exactly what you want, but should come close enough.
Choose a circular region within which your movements numbered 1 and 2 work as described (in the picture this would be some region smaller than the red circle). When the user clicks outside that circular region, you save the initial click position (shown in green). This defines a point with a certain angle relative to the x-axis of your screen (which you can find easily with some trig), and it also defines the radius of the circle on which the user is working (in red). The release of the mouse adds a second point (blue). You then find the angle this point makes with the center of the screen and the x-axis, just like before, and project that angle onto the circle whose radius was determined by the first click. The dark red arc defines the amount of rotation of the model.
This should be enough to get you started.
That will not be a good input method, I think, because you will always need some travel distance to discriminate between a line and a curve, which means some input delay. Here is an alternative:
Only mouse movements whose line crosses the center of the screen are considered vertical; the same goes for horizontal movements. In all other cases the movement is considered a rotation, and to calculate its amplitude, compute the angle between the last mouse location and the current location relative to the center of the screen.
Alternatively you could use the center of the selected mesh if your application works like that.
You can't detect the "circular, along an arc" mouse movement with anywhere near the precision needed for 3d model viewing. What you want is something like this: http://thetechartist.com/?p=80
You nominate an axis (x, y, or z) using either keyboard shortcuts or on-screen axis indicators that you can grab with the mouse.
This will be much more precise than trying to detect an "arc" gesture. Any "arc" recognition would necessarily involve a delay while you accumulate enough mouse samples to decide whether an arc gesture has begun or not. Gesture recognition like this is non-trivial (I've done some gesture work with the Wii-mote). Similarly, even your simple "vertical" and "horizontal" mouse movement detection will require a delay for the same reason. Any "simple threshold to filter slight deviations" will make it feel dampened and weird.
For 3d viewing you want 1:1 mouse responsiveness, and that means just explicitly nominating an axis with a shortcut key or UI etc. For x-axis rotation, just restrict it to mouse x, y-axis to mouse y if you like. For z you could similarly restrict to x or y mouse input, or just take the total 2d mouse distance travelled. It depends what feels nicest to you.
As an alternative, you could try coding up support for a 3D mouse like the 3dConnexion SpaceExplorer.

Trying to implement a mouse look "camera" in OpenGL/SFML

I've been using OpenGL with SFML 1.6 for some time now, and it has been a blast! With one exception: I can't seem to implement a camera class correctly. You see, I am trying to create a C++ class called "Camera". Here are my functions:
Camera::Strafe(float fSpeed)
checks whether the WASD keys are pressed, and if so, move the camera at "fSpeed" in their respective directions.
Camera::MouseMove(int currentX, int currentY)
should provide a first-person mouse look, taking in the current mouse coordinates and rotating the camera accordingly. My Strafe() implementation works fine, but I can't seem to get MouseMove() right.
I already know from reading other resources on OpenGL mouse look implementations that I must center the mouse after every frame, and I have that part down. But that's about it. I can't figure out how to actually rotate the camera on the spot from the mouse coordinates. Probably need to use some trig, I bet.
I've done something similar to this (it was a 3rd person camera). If I remember what I did correctly, I took the change in mouse position and used that to calculate two angles (I did that with some trig, I believe). One angle gave me horizontal rotation, the other gave me vertical rotation. Pitch, Yaw and Roll specifically, although I can't remember which refers to which direction. There is also one you have to do before the other, or else things will rotate funny. I'm pretty sure it was pitch first, then yaw or roll.
Hopefully it should be obvious what the change in mouse position did: it allowed mouse sensitivity. If I moved the mouse fast, I would get a larger change, and so I would rotate "faster."
EDIT: Ok, I looked at my code and it's a very simple calculation.
This was done with C#, so bear with me for syntax:
_angles.X += MathHelper.ToDegrees(changeInX / 100);
_angles.Y += MathHelper.ToDegrees(changeInY / 100);
My angles were stored in a two-dimensional vector (since I only rotated on two axes). You'll see I took my changeInX and changeInY values and simply divided them by 100 to get some arbitrary radian value, then converted that number to degrees. Adjust the 100 for sensitivity. Keep in mind, no solid math was done here to figure this out; I just did some trial and error until I got something that worked well.

3D Rotation in OpenGL and Local Rotation

I am trying to prototype a space flight sim in OpenGL, but after reading many articles online I still have difficulty with getting the rotations to work correctly (I did have a quaternion camera that I didn't understand well, but it drifts and has other odd behaviors).
I am trying to do the following:
1) Local rotation - when the user presses arrow keys, rotation occurs relative to the viewport (rotating "up" is toward the top of the screen, for example). Two keys, such as Z and X, will control the "roll" of the ship (rotation around the current view).
2) The rotations will be stored in Axis-angle format (which is most natural for OpenGL and a single rotate call with the camera vector should rotate the scene properly). Therefore, given the initial Angle-axis vector, and one or more of the local rotations noted above (we could locally call "X" the left/right axis, "Y" the top/bottom axis, and "Z" the roll axis), I would like the end result to be a new Axis-angle vector.
3) Avoid quaternions and minimize the use of matrices (for some reason I find both unintuitive). Instead of matrix notation, please just show in pseudocode the vector components and what's happening.
4) You should be able to rotate in a direction (using the arrow keys) 360 degrees and return to the starting view without drifting. Preferably, if the user presses one combination and then reverses it, they would expect to be able to return to near their original orientation.
5) The starting state for the camera is at coordinates (0,0,0) facing the Axis-angle vector (0,0,1,0 - z-axis with no starting rotation). "up" is (0,1,0).
Using the Euler angles approach is wrong for a space sim. I tried that approach and quickly had to give up. The player wants all degrees of freedom, and Euler angles don't provide that, or complicate it enormously.
What you really, really want are quaternions. This is a part of my update code.
Quaternion qtmp1, qtmp2, qtmp3;
Rotation r(........);
qtmp1.CreateFromAxisAngle(1., 0., 0., r.j*m_updatediff);
qtmp2.CreateFromAxisAngle(0., 1., 0., r.i*m_updatediff);
qtmp3.CreateFromAxisAngle(0., 0., 1., r.k*m_updatediff);
m_rotq = qtmp1 * qtmp2 * qtmp3 * m_rotq;
r.i, r.j and r.k contain the current speed of rotation around a certain axis. Getting a spacesim-like feel is just a matter of multiplying these quaternions.
Everything else is just a complication. With Euler's angles, you can play all day long -- in fact, all year long -- but you will just make loads of messy code.
Your daily recommendation: quaternions.
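For reference, a minimal quaternion with just the two operations the snippet above relies on might look like this (a generic textbook sketch, not the answerer's actual Quaternion class):

```cpp
#include <cmath>

struct Quat {
    float w = 1, x = 0, y = 0, z = 0;

    // Unit quaternion for a rotation of 'angle' radians about the
    // (assumed normalized) axis (ax, ay, az).
    static Quat fromAxisAngle(float ax, float ay, float az, float angle) {
        Quat q;
        float s = std::sin(angle * 0.5f);
        q.w = std::cos(angle * 0.5f);
        q.x = ax * s;
        q.y = ay * s;
        q.z = az * s;
        return q;
    }

    // Hamilton product: composes two rotations, as in the
    // qtmp1 * qtmp2 * qtmp3 * m_rotq line above.
    Quat operator*(const Quat& r) const {
        Quat q;
        q.w = w*r.w - x*r.x - y*r.y - z*r.z;
        q.x = w*r.x + x*r.w + y*r.z - z*r.y;
        q.y = w*r.y - x*r.z + y*r.w + z*r.x;
        q.z = w*r.z + x*r.y - y*r.x + z*r.w;
        return q;
    }
};
```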