I'm fairly new to DirectX, so this may sound really basic.
I have started working on a first-person game where you can walk through rooms. The language I am coding in is C++, and I'm using DirectX to help me create my game.
So far I have all the rooms drawn, with doors etc., but I'm a bit stuck on how to make a first-person camera and allow the user to move forwards, backwards and side to side using the arrow keys on the keyboard.
The simpler the better, as I am a beginner.
Could anyone help me out with this or point me in the right direction?
Thanks in advance
There are a lot of tutorials on the web covering this topic, so Google will certainly help you.
As for the basics: you will want to store your position and camera rotation. Assuming Z is your up axis, the arrow keys should change only X and Y.
Let's also say that your camera orientation is stored as a composition of rotations around the Z axis (movement direction) and the X axis (looking up/down).
Simple class:
#include <cmath>

const float PLAYER_SPEED = 0.1f;   // movement step per frame, tune to taste

struct float2 { float X, Y; };     // minimal helpers in case you don't have them
struct float3 { float X, Y, Z; };

class Player
{
protected:
    float3 Position;        // world position, Z-up
    float2 CameraRotation;  // X = turning (yaw), Y = looking up/down

public:
    void MoveForward()
    {
        Position.X += -cosf(CameraRotation.X) * PLAYER_SPEED;
        Position.Y += -sinf(CameraRotation.X) * PLAYER_SPEED;
    }
    // For the other arrows, just add a multiple of PI/2 to the rotation used above:
    // PI for backwards, +PI/2 for a left strafe and -PI/2 for a right strafe.
    // If you don't want to use the mouse, use the left and right arrows to modify
    // CameraRotation.X; MoveForward and MoveBackward will then look the same,
    // just with different signs.
};
The '-' signs in front of sinf and cosf are there because that's probably the behaviour you want; feel free to change them.
As for the camera, you will have to compute the mouse delta between frames: in every frame, compare the mouse position with the previous one, multiply the difference by your turning and looking speeds, and apply it to the camera rotation.
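A minimal sketch of that per-frame update, assuming the float2 type from the class above; the function name, the speed constants and how you obtain the mouse delta are all placeholders for your own input code:

// Hypothetical per-frame update: feed it the mouse movement since the last
// frame (current position minus the position you re-centered to).
void UpdateCameraRotation(float2& cameraRotation, int deltaX, int deltaY)
{
    const float TURN_SPEED = 0.005f;   // radians per pixel of horizontal movement
    const float LOOK_SPEED = 0.005f;   // radians per pixel of vertical movement

    cameraRotation.X += deltaX * TURN_SPEED;   // turn left/right
    cameraRotation.Y += deltaY * LOOK_SPEED;   // look up/down

    // clamp the up/down angle so the camera can't flip over the top
    if (cameraRotation.Y >  1.5f) cameraRotation.Y =  1.5f;
    if (cameraRotation.Y < -1.5f) cameraRotation.Y = -1.5f;
}

Re-center the cursor after reading it, so the next frame's delta stays small.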
Hope this helped.
I am more of an OpenGL guy, so I cannot help you with the technical side; what I can do is give you a direction.
In general, a 3D camera has:
Translation - where the camera is (x, y, z)
Rotation - angle of the camera around each axis
What you want to do is related to the translation part only:
Monitor the user input and capture when one of the arrow keys is pressed.
While an arrow key is down, you want to keep changing the camera translation. If the user presses the up key, you add a constant value to the corresponding camera translation component (x) on each iteration of your main game loop until the key is released. If the down key is pressed, you subtract that value instead of adding it.
When the key is released, your code stops adding that value to the camera translation.
Let's assume that your game runs at 60 Hz and that you add, say, 1/60 units to the camera translation each iteration for each direction the user wants to go. If the user held the up arrow key down for 2 seconds, the camera would have moved forward 2 units.
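In code that boils down to checking the key state every iteration and nudging the translation by a fixed step. A rough sketch, where IsKeyDown and the KEY_ constants stand in for whatever your input layer (Win32, DirectInput, ...) provides:

// Hypothetical game-loop movement: while an arrow key is held, add a small
// step to the camera translation every iteration; stop as soon as it's released.
struct Vec3 { float x, y, z; };

const float STEP = 1.0f / 60.0f;   // units per iteration at a 60 Hz loop

void UpdateCameraTranslation(Vec3& cameraPos)
{
    if (IsKeyDown(KEY_UP))    cameraPos.x += STEP;   // move forward
    if (IsKeyDown(KEY_DOWN))  cameraPos.x -= STEP;   // move backward
    if (IsKeyDown(KEY_LEFT))  cameraPos.y += STEP;   // strafe left
    if (IsKeyDown(KEY_RIGHT)) cameraPos.y -= STEP;   // strafe right
}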
This is the "theory" in general, now I can only point you to web pages I found that may be useful for solving the technical side of your problem:
DirectX camera movement - I'm guessing this article has a lot more than you need, but it looked pretty good and I think that you should read it anyway... But you can just skip to the View Transformation part.
Input handling - nothing much to say, regular Win32 input handling. If you are not familiar with win32 input handling I think that you should take an hour or two to learn that first.
Alright that's it, hope I helped
I have made a rigid body for the player and have been trying to get the rigid body moving along with the player's controls.
What I mean is that whenever I press forward, I want the rigid body to move forward in the direction the player is facing, and the same for back, left and right. So far I'm only able to apply a force to move the rigid body in static directions.
My question, plainly, is: how do I move the player's rigid body in the direction the player is facing?
Other details:
I don't really want to use kinematic bodies if it's not necessary, mostly because they're very fiddly at the moment
I'm using glfw3 for input
It's quite amazing that you don't see how to do this after you actually managed to apply forces in static directions to something you configured with Bullet.
Come on, you HAVE the skill to figure it out.
Here, just a push in the right direction (hehe), ahem. Just take the vector of the facing direction (which could be determined by the camera, 1st or 3rd person view, or even something else...).
Congrats, this vector, times some factor k, is your force.
You should also modulate the force according to the current speed; you don't need to accelerate to infinite speed, just accelerate a lot at first and then regulate the force so it tends towards the desired walk speed.
The side directions are then obtained by rotating the facing vector 90 degrees around the standing axis (most surely the vertical). You can obtain that by simply swapping two components and multiplying one of them by -1: x,y,z becomes y,-x,z.
To go backward, it's just -x,-y,-z on the facing vector.
So your up key is not bound to 0,1,0 but to facing_dir. This facing direction can change with the mouse or some other view controls, like the numeric keys 2,6,8,4 for example. Or you could drop up/left/right/down for movement and use w,a,s,d like everybody else, and use the direction keys to rotate the facing direction (+mouse).
It is much more difficult to obtain the facing vector from mouse movement or direction keys than to work out how to apply the force, so if you already have the facing vector I'm puzzled that you even have a problem.
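To make that concrete, here is a rough sketch using Bullet's btRigidBody (the facing vector, the walk-speed cap and the factor k are assumptions you would tune; only the Bullet calls themselves are the library's real API):

#include <btBulletDynamicsCommon.h>

// Push the body in the direction the player is facing. 'facingDir' is the
// (normalized, mostly horizontal) facing vector you already get from the view.
void ApplyMovementForce(btRigidBody* body, const btVector3& facingDir,
                        bool forward, bool back, bool left, bool right)
{
    const float k = 50.0f;          // force factor, tune to taste
    const float walkSpeed = 6.0f;   // desired maximum walking speed

    // side vector: the facing vector rotated 90 degrees around the vertical
    // (z-up here, as described above): x,y,z becomes y,-x,z
    btVector3 side(facingDir.y(), -facingDir.x(), facingDir.z());

    btVector3 dir(0, 0, 0);
    if (forward) dir += facingDir;
    if (back)    dir -= facingDir;  // backward is just the negated facing vector
    if (left)    dir -= side;       // swap these two signs if left/right come out mirrored
    if (right)   dir += side;

    // only keep pushing while we are below the desired walk speed
    btVector3 vel = body->getLinearVelocity();
    float horizSpeed = btVector3(vel.x(), vel.y(), 0).length();  // ignore falling speed
    if (dir.length2() > 0 && horizSpeed < walkSpeed)
        body->applyCentralForce(k * dir.normalized());
}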
I'm working on a little mobile game with Cocos2D-X and Box2D.
The point where I got stuck is the movement of a Box2D body (the main actor) and the corresponding sprite. I want to:
move this body with a constant velocity along the x-axis, no matter whether it's rolling (it's a circle shape) upwards or downwards
keep the body nearly sticking to the ground on which it's rolling
keep the body and the corresponding sprite in the center of the screen.
What I tried:
in the update() method I used body->SetLinearVelocity(b2Vec2(x,y)) with higher/lower values whenever the body's velocity strayed from the constant value I wanted it to keep
I set very high y-values in body->SetLinearVelocity(b2Vec2(x,y))
I first tried CCFollow with my playerSprite, but it also scrolls along the y-axis and I only need to scroll along the x-axis, so I decided instead to move the whole layer containing the ambience (platforms etc.) to the left of the screen and my player body & player sprite to the right, adjusting the speed values to keep the player in the center of the screen.
Well...
...it didn't work as I wanted it to, because each time I set the velocity manually (I also tried body->ApplyLinearImpulse(...) when the body is moving upwards, as well as playing around with the value of velocityIterations in world->Step(...)) there is a small delay, which pushes the player body more or less far away from the center of the screen.
...it also didn't work as I expected, because I needed to adjust the x-values when the body was moving upwards to keep it from slowing down, and that made the body even less sticky to the ground...
...CCFollow did a good job, except that I didn't want to scroll along the y-axis as well, and it forces the given sprite to start in the center of the screen. Moving the whole layer brought no good results either; I spent a long time adjusting the movement speeds of the layer and the body so that they cancel each other out and the player stays nearly in the center of the screen...
So my question is:
Does anyone have any kind of new approach for me to solve this cohesive bunch of problems?
Cheers,
Seb
To make the body easy to control, the main figure to which the force is applied should be round. This matters because of the way collisions are processed; more details in this article: Why does the character get stuck?
For processing collisions with the actual contour of the body you can use additional fixtures and sensors with an id, or category and mask bits. For constant velocity it is often better to use SetLinearVelocity, because even with impulses the velocity gets lost on sharp uphill sections or when jumping. If you want to use an impulse to drive the body's velocity, you need code of this kind:
b2Vec2 vel = m_pB2Body->GetLinearVelocity();
float desiredVel = mMoveSpeed.x;                  // the x speed you want to hold
float velChange = desiredVel - vel.x;             // how far we are from that speed
float impulse = m_pB2Body->GetMass() * velChange; // impulse = mass * delta-v
m_pB2Body->ApplyLinearImpulse( b2Vec2(impulse, mMoveSpeed.y), m_pB2Body->GetWorldCenter());
This lets you maintain a constant speed most of the time. Do not forget that these functions must be called every frame in your game loop. You can also combine these approaches depending on the situation: for example, if you need a small acceleration at the beginning, you can use ApplyForce on the body, and once the desired speed is reached switch to ApplyLinearImpulse or SetLinearVelocity. How to use them correctly is described here: Moving at constant speed
If you use a world with normal gravity (b2Vec2(0, -9.81)), that should not be a problem.
I answered this question here: Cocos2D-x - Issues when using CCFollow. I use this code; it may be useful to you:
// clamp the player to the scrollable bounds, then shift mGameNode so that the
// clamped player position lines up with the fixed screen-space scroll bound
CCPoint position = ccpClamp(playerPosition, mLeftBounds, mRightBounds);
CCPoint diff = ccpSub(mWorldScrollBound, mGameNode->convertToWorldSpace(position));
CCPoint newGameNodePosition = ccpAdd(mGameNode->getPosition(), mGameNode->getParent()->convertToNodeSpace(diff));
mGameNode->setPosition(newGameNodePosition);
P.S. If you are new to Box2D, it is advisable to read all the iforce2d articles (tuts); they are among the best on the web, as is his Box2D editor, RUBE. At one time they really helped me.
I do not know if this is possible, but I have an idea:
Keep the circle at a fixed position and move the background relative to it. For example, if during the game the circle has a velocity of 5 towards the left, keep the circle fixed and move the screen with velocity 5 towards the right. If the circle has velocity 5 towards the left and the screen has velocity 3 towards the right, keep the circle fixed and move the screen with velocity 8 towards the right, and so on. This should allow you to keep the circle fixed at the center of the screen.
Another method would be to translate the entire screen along with the ball: make everything on the screen an object that can have a velocity, and give every other object the x-component of the ball's (circle's) velocity. That way, whenever the circle moves, all the other objects keep up with it.
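One way to sketch the first idea in a Cocos2D-X update loop (mGameLayer, mPlayerBody and PTM_RATIO are assumed names from your own project): every frame, scroll the layer containing the world by the opposite of the distance the body travelled, so the player's sprite stays put on screen.

// Called every frame from update(dt). The body keeps its physics-driven motion;
// the layer holding the world (platforms, player sprite) is shifted the other
// way by the same amount, so on screen the player stays near the center.
void GameScene::scrollWorld(float dt)
{
    const float PTM_RATIO = 32.0f;                    // pixels per Box2D meter
    b2Vec2 vel = mPlayerBody->GetLinearVelocity();    // meters per second

    float dxPixels = vel.x * PTM_RATIO * dt;          // horizontal distance this frame
    CCPoint pos = mGameLayer->getPosition();
    mGameLayer->setPosition(ccp(pos.x - dxPixels, pos.y));  // scroll the world back
}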
I am working on a simple mesh viewer implementation in C++ with basic functionality such as translation, rotation, scaling.
I'm stuck on implementing rotation of the object around the z-axis using the mouse. What I want to implement is the following:
Click and drag the mouse vertically (almost vertical will do, as I use a simple threshold to filter slight deviations along the horizontal axis) to rotate the object around the y-axis (this part is done).
Click and drag the mouse horizontally, just as described above, to rotate the object around the x-axis (this part is done too).
For z-axis rotation, I want to detect a circular (or arc-like) mouse movement. I'm stuck on this part and don't know how to implement it.
For the above two, I just use atan2() to determine the angle of movement. But how do I detect circular movements?
The only way to deal with this is to have a delay between the user starting the motion and the object rotating:
When the user clicks and begins to move the mouse, you need to determine whether it's going to become a straight-line movement or a circular one. This requires a certain amount of data to be collected before that judgement can be made.
The most extreme case would be to require the user to make one complete circle first, and only then begin the rotation (in reality you could do much better than that). Just how short you can cut this period down will depend on a) how precise you dictate your users' actions must be, and b) how good you are with pattern-recognition algorithms.
To get you started, here's an outline of an extremely poor algorithm:
On user click, store the x and y coordinates.
Every 1/10 of a second, store the new coordinates and call process_for_pattern.
In process_for_pattern you're looking for:
A period where the x and y coordinates regularly both increase, both decrease, or one increases while the other decreases. If, over time, this pattern changes such that either x or y begins to reverse while the other continues as it was, then at that moment you can be fairly sure you've got a circle.
This algorithm would require the user to draw a quarter circle before it was detected, and it does not account for size, direction, or largely irregular movements.
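A bare-bones version of that pattern check might look like this (purely illustrative, names made up; the samples are the points you stored every 1/10 of a second):

#include <vector>

struct Pt { int x, y; };

// Very crude pattern check: look at the signs of dx and dy between consecutive
// samples. If one axis reverses direction while the other does not, the path
// has curved, and we guess the user is drawing an arc/circle.
bool LooksLikeCircle(const std::vector<Pt>& samples)
{
    if (samples.size() < 3)
        return false;                                 // not enough data yet

    int prevSignX = 0, prevSignY = 0;
    for (size_t i = 1; i < samples.size(); ++i)
    {
        int signX = (samples[i].x > samples[i - 1].x) - (samples[i].x < samples[i - 1].x);
        int signY = (samples[i].y > samples[i - 1].y) - (samples[i].y < samples[i - 1].y);

        bool xFlipped = (prevSignX != 0 && signX != 0 && signX != prevSignX);
        bool yFlipped = (prevSignY != 0 && signY != 0 && signY != prevSignY);

        if (xFlipped != yFlipped)                     // exactly one axis reversed
            return true;

        if (signX != 0) prevSignX = signX;
        if (signY != 0) prevSignY = signY;
    }
    return false;
}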
If you really want to continue with this method you can get a much better algorithm, but you might want to reconsider your control method.
Perhaps you should define a screen region (e.g. near the window boundaries) which, when clicked, initiates the arc movement - or use some other modifier, a button or whatever.
Then on mouse click you capture the coordinates and the center of rotation (the mesh axis) in 2D screen space. This gives you a vector (mesh center, button-down position).
On every mouse move you calculate a new vector (mesh center, mouse position), and the angle between the two vectors is the angle of rotation.
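The angle itself falls out of two atan2 calls; a small sketch (screen-space coordinates, with the mesh center already projected to 2D):

#include <cmath>

// Signed angle (radians) between the vector (center -> button-down position)
// and the vector (center -> current mouse position). The sign tells you which
// way to spin the mesh around its z-axis.
float ArcRotationAngle(float centerX, float centerY,
                       float clickX,  float clickY,
                       float mouseX,  float mouseY)
{
    float startAngle   = std::atan2(clickY - centerY, clickX - centerX);
    float currentAngle = std::atan2(mouseY - centerY, mouseX - centerX);

    float delta = currentAngle - startAngle;

    // wrap into (-pi, pi] so crossing the +/-180 degree boundary doesn't jump
    const float PI = 3.14159265358979f;
    while (delta >  PI) delta -= 2.0f * PI;
    while (delta <= -PI) delta += 2.0f * PI;
    return delta;
}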
I don't think it works like that...
You could map mouse-wheel rotation to the z-axis, or use a quaternion camera orientation, which can rotate around every axis almost intuitively...
The opposite is true for the quaternion camera: if one tries to rotate the mesh along a straight line, the mesh appears to rotate slightly around some other, unexpected axis -- and to compensate, one intuitively follows a slightly curved trajectory.
It's not exactly what you want, but should come close enough.
Choose a circular region within which your movements numbered 1 and 2 work as described (in the picture this would be some region smaller than the red circle). However, when the user clicks outside the circular region, you save the initial click position (shown in green). This defines a point with a certain angle relative to the x-axis of your screen (you can find it easily with some trig), and it also defines the radius of the circle on which the user is working (in red). The release of the mouse adds a second point (blue). You then find the angle this point makes with the center of the screen and the x-axis (just like before) and project that angle onto the circle with the radius determined by the first click. The dark red arc defines the amount of rotation of the model.
This should be enough to get you started.
I don't think that will be a good input method, because you will always need some travel distance to discriminate between a line and a curve, which means input delay. Here is an alternative:
Only vertical mouse movements whose line crosses the center of the screen are considered vertical; the same goes for horizontal. Every other case is considered a rotation, and to calculate its amplitude, compute the angle between the last mouse location and the current one relative to the center of the screen.
Alternatively, you could use the center of the selected mesh if your application works that way.
You can't detect the "circular, along an arc" mouse movement with anywhere near the precision needed for 3d model viewing. What you want is something like this: http://thetechartist.com/?p=80
You nominate an axis (x, y, or z) using either keyboard shortcuts or on-screen axis indicators that you can grab with the mouse.
This will be much more precise than trying to detect an "arc" gesture. Any "arc" recognition would necessarily involve a delay while you accumulate enough mouse samples to decide whether an arc gesture has begun or not. Gesture recognition like this is non-trivial (I've done some gesture work with the Wii-mote). Similarly, even your simple "vertical" and "horizontal" mouse movement detection will require a delay for the same reason. Any "simple threshold to filter slight deviations" will make it feel dampened and weird.
For 3d viewing you want 1:1 mouse responsiveness, and that means just explicitly nominating an axis with a shortcut key or UI etc. For x-axis rotation, just restrict it to mouse x, y-axis to mouse y if you like. For z you could similarly restrict to x or y mouse input, or just take the total 2d mouse distance travelled. It depends what feels nicest to you.
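For illustration, the nominated-axis idea reduces to something like this (the enum, the angle variables and the sensitivity are all placeholders for your own viewer state):

enum class RotationAxis { X, Y, Z };

// Per-frame update: the raw mouse delta is applied 1:1 to whichever axis the
// user has nominated via a shortcut key or an on-screen gizmo.
void ApplyMouseRotation(RotationAxis axis, int mouseDeltaX, int mouseDeltaY,
                        float& rotX, float& rotY, float& rotZ)
{
    const float SENSITIVITY = 0.01f;   // radians per pixel, tune to taste

    switch (axis)
    {
    case RotationAxis::X: rotX += mouseDeltaY * SENSITIVITY; break;  // drag vertically
    case RotationAxis::Y: rotY += mouseDeltaX * SENSITIVITY; break;  // drag horizontally
    case RotationAxis::Z: rotZ += mouseDeltaX * SENSITIVITY; break;  // or use the total 2D distance
    }
}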
As an alternative, you could try coding up support for a 3D mouse like the 3dConnexion SpaceExplorer.
I've been using OpenGL with SFML 1.6 for some time now, and it has been a blast! With one exception: I can't seem to implement a camera class correctly. You see, I am trying to create a C++ class called "Camera". Here are my functions:
Camera::Strafe(float fSpeed)
checks whether the WASD keys are pressed and, if so, moves the camera at "fSpeed" in their respective directions.
Camera::MouseMove(int currentX, int currentY)
should provide a first-person mouse look, taking in the current mouse coordinates and rotating the camera accordingly. My Strafe() implementation works fine, but I can't seem to get MouseMove() right.
I already know from reading other resources on OpenGL mouse-look implementations that I must re-center the mouse after every frame, and I have that part down. But that's about it; I can't seem to work out how to actually rotate the camera on the spot from the mouse coordinates. Probably need to use some trig, I bet.
I've done something similar to this (it was a 3rd-person camera). If I remember correctly, I took the change in mouse position and used it to calculate two angles (with some trig, I believe). One angle gave me horizontal rotation, the other gave me vertical rotation: pitch, yaw and roll specifically, although I can't remember which refers to which direction. There is also an order you have to apply them in, or else things will rotate funny; I'm pretty sure it was pitch first, then yaw or roll.
Hopefully it is obvious what the change in mouse position did: it gives you mouse sensitivity. If I moved the mouse fast, I would get a larger change and so would rotate "faster."
EDIT: Ok, I looked at my code and it's a very simple calculation.
This was done with C#, so bear with me for syntax:
_angles.X += MathHelper.ToDegrees(changeInX / 100); // horizontal mouse delta -> first rotation angle
_angles.Y += MathHelper.ToDegrees(changeInY / 100); // vertical mouse delta -> second rotation angle
My angles were stored in a two-dimensional vector (since I only rotated on two axes). You'll see I took my changeInX and changeInY values and simply divided them by 100 to get some arbitrary radian value, then converted that number to degrees. Adjust the 100 for sensitivity. Keep in mind, no solid math was done here to figure this out; I just did some trial and error until I got something that worked well.
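In C++ the same idea, plus turning the two accumulated angles into a look direction, could be sketched like this (angles in radians; the 0.005f plays the role of the 1/100 sensitivity above):

#include <cmath>

// Accumulate yaw/pitch from the per-frame mouse delta, then build a unit
// "forward" vector you can feed to gluLookAt (eye, eye + forward, up) or to
// your own view matrix.
void MouseLook(float& yaw, float& pitch, int deltaX, int deltaY,
               float& dirX, float& dirY, float& dirZ)
{
    const float SENSITIVITY = 0.005f;          // radians per pixel of mouse travel

    yaw   += deltaX * SENSITIVITY;
    pitch -= deltaY * SENSITIVITY;

    // keep pitch just shy of straight up/down so the view never flips
    const float LIMIT = 1.55f;
    if (pitch >  LIMIT) pitch =  LIMIT;
    if (pitch < -LIMIT) pitch = -LIMIT;

    // spherical -> Cartesian (y-up convention)
    dirX = std::cos(pitch) * std::sin(yaw);
    dirY = std::sin(pitch);
    dirZ = std::cos(pitch) * std::cos(yaw);
}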
I have an MFC application where the user has to move the mouse around a circle's circumference with a dragging movement. I need to retrieve the number of degrees covered during this mouse-drag "rotation", and I need to know whether it's clockwise or counterclockwise.
At first, to determine the rotation direction, I compared the x-coordinate of the current mouse position with the mouse position where the user clicked to initiate the dragging. That works well until the user rotates past 180 degrees.
How can I handle the other half of the circle?
You'll need at least three ordered points to determine whether someone is moving clockwise or counterclockwise over time. With only two points, it isn't obvious whether (for instance) someone rotated 90 degrees or -270 degrees, so simply taking the cross product of the start and end won't work.
Try sampling the mouse during the drag to get the additional information you need, and then take incremental cross products between each pair of consecutive points. That will tell you what you want to know. However, you'll need to sample fast enough that no rotation of more than 180 degrees can occur between samples; otherwise you'll wind up in an ambiguous situation again.
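Sketched out, the incremental version looks like this (the samples are the mouse positions you collected during the drag, expressed relative to the circle center, oldest first):

#include <vector>
#include <cmath>

struct Point { double x, y; };   // mouse position relative to the circle center

// Sum the signed angles between consecutive sample vectors. The sign of each
// 2D cross product says which way that step turned; the running total is the
// overall rotation (positive = counterclockwise, negative = clockwise).
double AccumulatedRotation(const std::vector<Point>& samples)
{
    double total = 0.0;
    for (size_t i = 1; i < samples.size(); ++i)
    {
        const Point& a = samples[i - 1];
        const Point& b = samples[i];

        double cross = a.x * b.y - a.y * b.x;   // z-component of a x b
        double dot   = a.x * b.x + a.y * b.y;

        total += std::atan2(cross, dot);        // signed angle of this step, radians
    }
    return total;   // multiply by 180/pi for degrees
}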
These might help you.
http://en.wikipedia.org/wiki/Atan2
http://www.phy.syr.edu/courses/java-suite/crosspro.html
And here is a simple example of recognizing gestures (it's in flash but the idea is the important bit)
http://www.bytearray.org/?p=91
Read about cross products. Computing a cross product between the X and Y vectors (differences from the start point) will always reliably give the rotation direction.