Trying to implement a mouse look "camera" in OpenGL/SFML - c++

I've been using OpenGL with SFML 1.6 for some time now, and it has been a blast! With one exception: I can't seem to implement a camera class correctly. You see, I am trying to create a C++ class called "Camera". Here are my functions:
Camera::Strafe(float fSpeed)
checks whether the WASD keys are pressed and, if so, moves the camera at "fSpeed" in the corresponding direction.
Camera::MouseMove(int currentX, int currentY)
should provide a first-person mouse look, taking in the current mouse coordinates and rotating the camera accordingly. My Strafe() implementation works fine, but I can't seem to get MouseMove() right.
I already know from reading other resources on OpenGL mouse look implementations that I must center the mouse after every frame, and I have that part down. But that's about it. I can't seem to get how to actually rotate the camera on the spot from the mouse coordinates. Probably need to use some trig, I bet.

I've done something similar to this (it was a 3rd person camera). If I remember what I did correctly, I took the change in mouse position and used that to calculate two angles (I did that with some trig, I believe). One angle gave me horizontal rotation, the other gave me vertical rotation. Pitch, Yaw and Roll specifically, although I can't remember which refers to which direction. There is also one you have to do before the other, or else things will rotate funny. I'm pretty sure it was pitch first, then yaw or roll.
Hopefully it's obvious what the change in mouse position did: it provided mouse sensitivity. If I moved the mouse quickly, the change was larger, so the camera rotated "faster."
EDIT: Ok, I looked at my code and it's a very simple calculation.
This was done with C#, so bear with me for syntax:
_angles.X += MathHelper.ToDegrees(changeInX / 100);
_angles.Y += MathHelper.ToDegrees(changeInY / 100);
My angles were stored in a two-dimensional vector (since I only rotated on two axes). You'll see I took my changeInX and changeInY values and simply divided them by 100 to get some arbitrary radian value, then converted that number to degrees. Adjust the 100 for sensitivity. Keep in mind, no solid math was done here to figure this out; I just did some trial and error until I got something that worked well.
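For the original C++/SFML question, a minimal sketch of the same idea might look like the following (the member names, the screen-center values and the sensitivity divisor are just placeholders for whatever your Camera class uses):

void Camera::MouseMove(int currentX, int currentY)
{
    // Change in mouse position since the cursor was re-centered last frame.
    float deltaX = static_cast<float>(currentX - m_centerX);
    float deltaY = static_cast<float>(currentY - m_centerY);

    // Scale the deltas into angles; the divisor controls sensitivity.
    m_yaw   += deltaX / m_sensitivity;   // horizontal look
    m_pitch += deltaY / m_sensitivity;   // vertical look

    // Clamp pitch so the camera cannot flip over the top or bottom.
    if (m_pitch >  89.0f) m_pitch =  89.0f;
    if (m_pitch < -89.0f) m_pitch = -89.0f;
}

// When building the view each frame, apply pitch before yaw, then translate
// by the negated camera position:
//     glRotatef(m_pitch, 1.0f, 0.0f, 0.0f);
//     glRotatef(m_yaw,   0.0f, 1.0f, 0.0f);
//     glTranslatef(-m_posX, -m_posY, -m_posZ);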

Related

Rotation and Movement with rigid body in Bullet Physics

I have made a rigid body for the player and have been trying to get the rigid body moving along with the player's controls.
What I mean is that whenever I press forward I want the rigid body to move forward in the direction the player is facing, same with back, left, right. So far I'm able to use apply force to move the rigid body in static directions.
My question, plainly, is: how do I move the player's rigid body in the direction the player is facing?
Other Details:
I don't really want to use kinematic bodies if it's not necessary, mostly because they're very fiddly at the moment
I'm using glfw3 for input
It's quite surprising that you wouldn't see how to do that after you actually managed to apply forces in static directions to something you configured with Bullet.
Come on, you HAVE the skill to figure it out.
Here, just a push in the (right) direction (hehe), ahem. Just take the vector of the facing direction (which could be determined by the camera, 1st or 3rd person view, or even something else...).
Congrats: this vector, scaled by some factor k, is your force.
You should also modulate this force according to speed; you don't need to accelerate to infinite speed, just accelerate a lot at first and then regulate the force so it tends toward the desired walk speed.
Then, the side directions are obtained by rotating the facing vector by 90 degrees around the standing axis (most likely the vertical). You can obtain that by simply swapping components and multiplying one of them by -1: (x, y, z) becomes (y, -x, z).
To go backwards, it's just (-x, -y, -z) on the facing vector.
So your up key is not bound to (0, 1, 0) but to facing_dir. This facing direction can change with the mouse or some other view controls, like the numeric keys 2, 6, 8, 4 for example. Or you could drop up/left/right/down for movement and use W, A, S, D like everybody else, and use the direction keys to rotate the facing direction (plus the mouse).
It is much more difficult to obtain the facing vector from mouse movement or direction keys than to work out how to apply the force, so if you already have the facing vector I'm puzzled that you even have a problem.
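To make the force part concrete, here is a rough sketch using Bullet's btRigidBody API (the facing vector, the k factor and the walk-speed cap are assumptions you'd tune yourself, and a z-up world is assumed to match the component swap above):

#include <btBulletDynamicsCommon.h>

// Push the player's body along the facing direction, capped at a desired walk speed.
void pushPlayer(btRigidBody* body, const btVector3& facing, btScalar kForce, btScalar maxWalkSpeed)
{
    btVector3 dir = facing.normalized();

    // Accelerate hard at first, then stop pushing once we reach walking speed.
    if (body->getLinearVelocity().length() < maxWalkSpeed)
        body->applyCentralForce(dir * kForce);
}

// Side direction: rotate the facing vector 90 degrees around the vertical (z-up),
// i.e. (x, y, z) becomes (y, -x, z); going backwards is simply -facing.
btVector3 strafeRight(const btVector3& facing)
{
    return btVector3(facing.y(), -facing.x(), facing.z());
}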

moving object in the world towards a stationary camera

I want to move the camera forward, which is equivalent to moving the world back towards camera. I'm using Glut and glTranslate would do the job, but my question is how should I use it?
Suppose initially I start with glLoadIdentity(), then I set up the look at point using gluLookAt, and then I did some translation/rotation to the model. In this case how should I use glTranslate to translate the object in the world so that they can move with respect to the camera instead of their own origin/coordinate?
I thought I could save the current matrix using glGet, load the identity matrix, do the translation I wanted, and then multiply the previous matrix back in using glMultMatrix. But this didn't work for me.
Also, if I want to enable yaw/pitch using glRotate, how should I do it? (Again in the sense of rotating the world so it seems like the camera is rotating.)
Sorry for my poor wording or any conceptual mistakes. I'm quite new to OpenGL and graphics programming in general, and I'm still trying to fully understand the OpenGL pipeline, especially the matrix part. Any detailed explanation of that will also be greatly appreciated!
From reading your question, it sounds to me like what you're trying to do is simulate camera movement by translating every other object in the world about a fixed point (the camera)
While you're correct in saying that moving the camera actually moves everything else in the world about it, you seem to be going about it the wrong way. After all, look how much difficulty you're having just moving one box. Now imagine you have hundreds! Not much fun :)
Fortunately, there is a function that can help you, and you're already using it! gluLookAt (http://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml) is your guy. What it does under the hood is create a matrix (not sure what a matrix is? Give this a read: http://solarianprogrammer.com/2013/05/22/opengl-101-matrices-projection-view-model/) that every other point in the world is multiplied by. This multiplication translates each point until it's in its correct position relative to the camera. So you are correct in saying that moving the camera actually moves the whole world relative to the camera, but this way we can do it all in one pass instead of having to calculate the new position of each point manually.
So, you want to move the camera forward on the z axis? Just call gluLookAt, but pass in a value of eyez that is less than when you previously called gluLookAt. Here's an example:
gluLookAt(0,0,3,0,0,0,0,1,0); // This is our starting position, (0,0,3)
gluLookAt(0,0,2,0,0,0,0,1,0); // And this is our ending position. Notice that the eyez value has decreased by one
As for how to rotate, take a look at the second group of three parameters, the "center" parameters. Those determine what point is in the center of the camera, that is, what the camera is looking at. In the previous example, the center point was (0,0,0). You can rotate the camera by moving these points around. How you do it is a pretty complicated topic with a good bit of math thrown in, but the following links should help a bit:
http://ogldev.atspace.co.uk/www/tutorial15/tutorial15.html
http://www.fastgraph.com/makegames/3drotation/
http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
Don't get discouraged if it seems too hard, keep at it! Feel free to ask me if you need clarification on this answer.
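To make the center-point idea concrete, here is a minimal sketch (not the answerer's code) of deriving a look direction from yaw/pitch angles and feeding eye + direction into gluLookAt; the angle convention used here is just one common choice:

#include <GL/glu.h>
#include <cmath>

// Build the view from a position and two angles (radians): yaw turns left/right,
// pitch looks up/down. The center point is simply eye + look direction.
void applyCamera(float eyeX, float eyeY, float eyeZ, float yaw, float pitch)
{
    float dirX =  cosf(pitch) * sinf(yaw);
    float dirY =  sinf(pitch);
    float dirZ = -cosf(pitch) * cosf(yaw);

    gluLookAt(eyeX, eyeY, eyeZ,                       // eye position
              eyeX + dirX, eyeY + dirY, eyeZ + dirZ,  // center = eye + direction
              0.0f, 1.0f, 0.0f);                      // up vector
}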

Tracking circular mouse movement in OpenGL

I am working on a simple mesh viewer implementation in C++ with basic functionality such as translation, rotation, scaling.
I'm stuck with implementing the rotation of the object around the z-axis using the mouse. What I want to implement is the following:
Click and drag the mouse vertically (almost vertical will do, as I use a simple threshold to filter slight deviations along the horizontal axis) to rotate the object around the y-axis (this part is done).
Click and drag the mouse horizontally, just as described above, to rotate the object around the x-axis (this part is done too).
For z-axis rotation, I want to detect a circular (or along-an-arc) mouse movement. I'm stuck with this part and don't know how to implement it.
For the above two, I just use atan2() to determine the angle of movement. But how do I detect circular movements?
The only way to deal with this is to have a delay between the user starting to make the motion and the object rotating:
When the user clicks and begins to move the mouse, you need to determine whether it's going to become a straight-line movement or a circular one. This requires a certain amount of data to be collected before that judgement can be made.
The most extreme case would be requiring the user to make one complete circle first, then the rotation begins (in reality you could do much better than this). Just how small you are able to cut this period down to will depend on a) how precisely you dictate your users' actions must be, and b) how good you are with pattern-recognition algorithms.
To get you started, here's an outline of an extremely poor algorithm:
On user click store the x and y coordinates.
Every 1/10 of a second store the new coordinates and process_for_pattern.
In process_for_pattern you're looking for:
A period where the x coordinates and the y coordinates regularly both increase, both decrease, or one increases and one decreases. Over time if this pattern changes such that either the x or the y begins to reverse whilst the other continues as it was, then at that moment you can be fairly sure you've got a circle.
This algorithm would require the user to draw a quarter circle before it was detected, and it does not account for size, direction, or largely irregular movements.
If you really want to continue with this method you can get a much better algorithm, but you might want to reconsider your control method.
Perhaps you should define a screen region (e.g. at the window boundaries) which, when clicked, initiates arc movement, or use some other modifier, a button or whatever.
Then at the mouse click you capture the coordinates and the center of rotation (the mesh axis) in 2D screen space. This gets you a vector (mesh center, button-down pos).
On every mouse move you calculate a new vector (mesh center, mouse pos), and the angle between the two vectors is the angle of rotation.
I don't think it works like that...
You could map mouse wheel rotation to the z-axis, or use a quaternion camera orientation, which is able to rotate around every axis almost intuitively...
The opposite is true for the quaternion camera: if one tries to rotate the mesh along a straight line, the mesh appears to rotate slightly around some other, unexpected axis, and to compensate, one intuitively tries to follow a slightly curved trajectory.
It's not exactly what you want, but should come close enough.
Choose a circular region within which your movements numbered 1 and 2 work as described (in the picture this would be some region smaller than the red circle). However, when the user clicks outside the circular region, you save the initial click position (shown in green). This defines a point which has a certain angle relative to the x-axis of your screen (you can find this easily with some trig), and it also defines the radius of the circle on which the user is working (in red). The release of the mouse adds a second point (blue). You then find the angle this point has relative to the center of the screen and the x-axis (just like before). You then project that angle onto the circle with the radius determined by the first click. The dark red arc defines the amount of rotation of the model.
This should be enough to get you started.
That will not be a good input method, I think, because you will always need some travel distance to discriminate between a line and a curve, which means some input delay. Here is an alternative:
Only vertical mouse movements whose line crosses the center of the screen are considered vertical. Same for horizontal. Anything else is considered a rotation, and to calculate its amplitude, calculate the angle between the last mouse location and the current location relative to the center of the screen.
Alternatively you could use the center of the selected mesh if your application works like that.
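Both of the angle-based suggestions above boil down to the same calculation; a minimal sketch (all names are placeholders) could be:

#include <cmath>

// Change in the cursor's angle around a chosen center (screen center or mesh center),
// wrapped into (-pi, pi]. The sign gives the rotation direction.
float rotationDelta(float centerX, float centerY,
                    float lastX, float lastY,
                    float curX,  float curY)
{
    const float PI = 3.14159265f;
    float prevAngle = atan2f(lastY - centerY, lastX - centerX);
    float curAngle  = atan2f(curY  - centerY, curX  - centerX);

    float delta = curAngle - prevAngle;
    if (delta >  PI) delta -= 2.0f * PI;   // handle wrap-around at +/-180 degrees
    if (delta < -PI) delta += 2.0f * PI;
    return delta;
}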
You can't detect the "circular, along an arc" mouse movement with anywhere near the precision needed for 3d model viewing. What you want is something like this: http://thetechartist.com/?p=80
You nominate an axis (x, y, or z) using either keyboard shortcuts or on-screen axis indicators that you can grab with the mouse.
This will be much more precise than trying to detect an "arc" gesture. Any "arc" recognition would necessarily involve a delay while you accumulate enough mouse samples to decide whether an arc gesture has begun or not. Gesture recognition like this is non-trivial (I've done some gesture work with the Wii-mote). Similarly, even your simple "vertical" and "horizontal" mouse movement detection will require a delay for the same reason. Any "simple threshold to filter slight deviations" will make it feel dampened and weird.
For 3d viewing you want 1:1 mouse responsiveness, and that means just explicitly nominating an axis with a shortcut key or UI etc. For x-axis rotation, just restrict it to mouse x, y-axis to mouse y if you like. For z you could similarly restrict to x or y mouse input, or just take the total 2d mouse distance travelled. It depends what feels nicest to you.
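As a rough sketch of the nominated-axis idea (the Axis enum and the sensitivity value are made up for illustration, and the sum of the deltas is just a simple signed proxy for total mouse travel):

// Map raw mouse deltas 1:1 onto a rotation about whichever axis is currently nominated.
enum class Axis { X, Y, Z };

void applyMouseRotation(Axis axis, int mouseDX, int mouseDY,
                        float& rotX, float& rotY, float& rotZ,
                        float sensitivity)
{
    switch (axis)
    {
        case Axis::X: rotX += mouseDY * sensitivity; break;             // vertical drag
        case Axis::Y: rotY += mouseDX * sensitivity; break;             // horizontal drag
        case Axis::Z: rotZ += (mouseDX + mouseDY) * sensitivity; break; // total mouse travel
    }
}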
As an alternative, you could try coding up support for a 3D mouse like the 3dConnexion SpaceExplorer.

DirectX C++ camera movement

I'm fairly new to DirectX so this may sound really basic.
I have started working on a first-person game where you can walk through rooms. The language I am coding in is C++ and I'm using DirectX to help me create my game.
So far I have all the rooms drawn with doors etc., but I'm a bit stuck on how to make a first-person camera and allow the user to move forwards, backwards and side to side using the arrow keys on the keyboard.
The simpler the better, as I am a beginner.
Could anyone help me out with this or point me in the right direction?
Thanks in advance
There are a lot of tutorials on the web covering this topic, so Google will certainly help you.
As for the basics: You will want to store your position and camera rotation. Assuming Z is your up-axis, you should use the arrows to change only the X and Y.
Let's also say that your camera orientation is stored as a composition of rotations around the Z axis (movement direction) and the X axis (looking up and down).
Simple class:
class Player
{
protected:
    float3 Position;        // Z-up (float3/float2 here are simple vector structs)
    float2 CameraRotation;  // X for turning, Y for looking up-down

public:
    void MoveForward()
    {
        Position.X += -cosf(CameraRotation.X) * PLAYER_SPEED;
        Position.Y += -sinf(CameraRotation.X) * PLAYER_SPEED;
    }

    // When using any other arrow, just add a multiple of PI/2 to the camera rotation:
    // PI for backwards, +PI/2 for a left strafe and -PI/2 for a right strafe.
    // If you don't want to use the mouse, use the left and right arrows to modify the
    // camera rotation, and moving forward and backward will look the same, just with
    // different signs.
};
The '-' signs in front of the sinf and cosf functions are there because you will probably want this kind of behavior; feel free to change them.
As for the camera, you will have to track the mouse delta between frames. In every frame, compare the mouse position with the previous one, then multiply the difference by your turning and looking speed and set it directly on the camera.
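A minimal sketch of that per-frame delta (getMousePosition() stands in for whatever input query you use, and yaw/pitch correspond to CameraRotation.X/Y above):

void getMousePosition(int& x, int& y); // assumed to exist in your input layer

void updateCameraLook(float& yaw, float& pitch, float lookSpeed)
{
    static int lastX = 0, lastY = 0;

    int x = 0, y = 0;
    getMousePosition(x, y);

    yaw   += (x - lastX) * lookSpeed;  // turning
    pitch += (y - lastY) * lookSpeed;  // looking up/down

    lastX = x;
    lastY = y;
}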
Hope this helped.
I am more of an OpenGL guy, so I cannot help you with the technical side; what I can do is give you a direction.
In general, a 3D camera has:
Translation - where the camera is (x, y, z)
Rotation - angle of the camera around each axis
What you want to do is related to the translation part only:
Monitor the user input, and capture when he presses one of the arrow keys.
When an arrow key is down, you want to start changing the camera translation. If the user presses the up key, you would add a constant value to the corresponding camera translation component (x) each iteration of your main game loop until he releases the key. If he presses the down key, you would want to subtract that value instead of adding it.
When he releases the key, your code needs to stop adding that value to the camera translation.
Let's assume that your game runs at 60 Hz, and you add, say, 1/60 of a unit to the camera translation each iteration for each direction the user wants to go. If the user held the up arrow key down for 2 seconds, the camera would have moved forward 2 units.
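A minimal sketch of that loop (isKeyDown() wraps Win32 input via GetAsyncKeyState, and which translation component counts as "forward" depends on your own camera convention):

#include <windows.h>

bool isKeyDown(int virtualKey)
{
    return (GetAsyncKeyState(virtualKey) & 0x8000) != 0;
}

// Called once per iteration of the game loop; step would be e.g. 1.0f / 60.0f.
void updateCameraTranslation(float& forward, float& side, float step)
{
    if (isKeyDown(VK_UP))    forward += step;  // move forward
    if (isKeyDown(VK_DOWN))  forward -= step;  // move backward
    if (isKeyDown(VK_LEFT))  side    -= step;  // strafe left
    if (isKeyDown(VK_RIGHT)) side    += step;  // strafe right
}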
This is the "theory" in general, now I can only point you to web pages I found that may be useful for solving the technical side of your problem:
DirectX camera movement - I'm guessing this article has a lot more than you need, but it looked pretty good and I think that you should read it anyway... But you can just skip to the View Transformation part.
Input handling - nothing much to say, regular Win32 input handling. If you are not familiar with win32 input handling I think that you should take an hour or two to learn that first.
Alright that's it, hope I helped

How to determine if mouse is moving clockwise or counterclockwise?

I have an MFC application where the user has to move the mouse around a circle's circumference with a dragging movement. I need to retrieve the number of degrees covered during this mouse-drag "rotation", and I need to know whether it's clockwise or counterclockwise.
At first, to determine the rotation direction, I was comparing the x-coordinate of the current mouse position with the mouse position where the user clicked to initiate the drag. That works well until the user rotates more than 180 degrees.
How can I handle the other half of the circle?
You'll need at least three ordered points to determine whether someone is moving clockwise or counterclockwise over time. With only two points, it isn't obvious whether (for instance) someone rotated 90 degrees or -270 degrees. So simply taking the cross product of the start and end won't work.
Try sampling the mouse during the dragging to get the additional information you need, and then taking incremental cross products between each pair of consecutive points. That will tell you what you want to know. However, you'll need to sample fast enough that no rotation of more than 180 degrees could have occurred; otherwise you'll wind up in an ambiguous situation again.
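For illustration, a small sketch of that incremental test, interpreting "consecutive points" as vectors from the circle's center to two consecutive sampled mouse positions (screen coordinates usually have Y pointing down, so which sign means clockwise depends on your setup):

// z-component of the 2D cross product of the two center-to-sample vectors;
// positive and negative values indicate opposite rotation directions.
float crossFromCenter(float centerX, float centerY,
                      float prevX, float prevY,
                      float curX,  float curY)
{
    float ax = prevX - centerX, ay = prevY - centerY;
    float bx = curX  - centerX, by = curY  - centerY;
    return ax * by - ay * bx;
}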
These might help you.
http://en.wikipedia.org/wiki/Atan2
http://www.phy.syr.edu/courses/java-suite/crosspro.html
And here is a simple example of recognizing gestures (it's in flash but the idea is the important bit)
http://www.bytearray.org/?p=91
Read about cross products. Computing a cross product between the X and Y vectors (differences from the start point) will always reliably give the rotation direction.