I'm trying to write a first-person camera using OpenGL, but I've run into a problem. Let me walk you through my code.
First of all, I have this message handler that gives me the mouse X and Y positions:
case WM_MOUSEMOVE:
    CameraManager.oldMouseX = CameraManager.mouseX;
    CameraManager.oldMouseY = CameraManager.mouseY;
    CameraManager.mouseX = GET_X_LPARAM(lParam);
    CameraManager.mouseY = GET_Y_LPARAM(lParam);
    mousePoint.x = CameraManager.mouseX - CameraManager.oldMouseX;
    mousePoint.y = CameraManager.mouseY - CameraManager.oldMouseY;
    mousePoint.z = 0.f;
    CameraManager.lookAt(mousePoint);
    App.onRender();
    break;
Here I compute the difference between the old mouse position and the new one (so I know whether to increase or decrease the angle), then call the lookAt function on my CameraManager, which does the following:
if (point.x > 0.f) {
    camAngle.x -= 0.3f;
}
else if (point.x < 0.f) {
    camAngle.x += 0.3f;
}
float radiansX = MathUtils::CalculateRadians(camAngle.x);
//eye.x = sinf(radiansX);
//center.x += sinf(radiansX);
//center.z += (-cosf(radiansX));
Update();
And then my update does:
glLoadIdentity();
gluLookAt(eye.x, eye.y, eye.z,
          center.x, center.y, center.z,
          up.x, up.y, up.z);
I've read a lot of stuff on how to refresh the eye and center, but I couldn't get anything to work.
Any advice?
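For reference, here is the kind of update I think I'm supposed to do inside lookAt (just a sketch of my current understanding, reusing my existing camAngle, eye, center and Update members, with no pitch handled yet):
// Sketch: derive the look-at target from the yaw angle every frame,
// instead of accumulating offsets into center.
float radiansX = MathUtils::CalculateRadians(camAngle.x);
center.x = eye.x + sinf(radiansX);
center.y = eye.y;                  // no pitch yet
center.z = eye.z - cosf(radiansX);
Update();                          // Update() calls gluLookAt(eye, center, up)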
I am new to raylib and wanted to make a little 2D ball thing, but I don't know how to stop the sprite from going off the screen. My check only works for two edges and not the others. Would anybody please help?
My C++ file:
game.cpp:
#include <raylib.h>

int main() {
    InitWindow(800, 600, "My Game!");
    // Vector2 ballPosition = {400.0f, 300.0f};
    Vector2 ballPosition = { (float)800/2, (float)600/2 };
    SetTargetFPS(60);
    while (!WindowShouldClose()) {
        if (IsKeyDown(KEY_RIGHT)) ballPosition.x += 2.0f;
        if (IsKeyDown(KEY_LEFT)) ballPosition.x -= 2.0f;
        if (IsKeyDown(KEY_UP)) ballPosition.y -= 2.0f;
        if (IsKeyDown(KEY_DOWN)) ballPosition.y += 2.0f;
        DrawText("move the ball with arrow keys", 10, 10, 20, DARKGRAY);
        Texture2D ball = LoadTexture("ball.png");
        DrawTexture(ball, ballPosition.x - 50, ballPosition.y - 50, WHITE);
        if (ballPosition.x < 0) ballPosition.x = 0;
        if (ballPosition.x > 800) ballPosition.x = 800;
        if (ballPosition.y < 0) ballPosition.y = 0;
        if (ballPosition.y > 600) ballPosition.y = 600;
        BeginDrawing();
        ClearBackground(RAYWHITE);
        // Vector2 mousePosition = GetMousePosition();
        // ballPosition = mousePosition;
        // DrawCircleV(ballPosition, 20.0f, BLUE);
        EndDrawing();
    }
    CloseWindow();
    return 0;
}
Took me a while, but your problem is a combination of the offset used when drawing the ball texture and how you check the bounds of the screen (assuming the image is 50x50; you didn't specify).
Notice I've locally added a circle directly at ballPosition (right after the DrawTexture call):
DrawCircleV(ballPosition, 2.0f, MAROON);
Since that point sits at the bottom-right of the image, your bounds check is only correct for the right and bottom edges. For the left and top edges the entire image disappears before ballPosition reaches the edge.
In my opinion you should draw the image centered on ballPosition and adjust the bounds check to take the image size into account:
const float ballHalfSize = ball.width / 2.0f; // assuming square image
DrawTexture(ball, int(ballPosition.x - ballHalfSize), int(ballPosition.y - ballHalfSize), WHITE);
if ((ballPosition.x - ballHalfSize) < 0) ballPosition.x = ballHalfSize;
if ((ballPosition.x + ballHalfSize) > 800) ballPosition.x = 800 - ballHalfSize;
if ((ballPosition.y - ballHalfSize) < 0) ballPosition.y = ballHalfSize;
if ((ballPosition.y + ballHalfSize) > 600) ballPosition.y = 600 - ballHalfSize;
With this new version I get correct collision behavior on all 4 edges of the screen.
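For completeness, here is a minimal sketch of how I would arrange the whole program (same assumptions as above: an 800x600 window and a square ball.png). The texture is loaded once before the loop, and every draw call sits between BeginDrawing() and EndDrawing():
#include <raylib.h>

int main() {
    const int screenWidth = 800;
    const int screenHeight = 600;
    InitWindow(screenWidth, screenHeight, "My Game!");
    SetTargetFPS(60);

    // Load the texture once, outside the loop, so it isn't re-loaded every frame.
    Texture2D ball = LoadTexture("ball.png");
    const float ballHalfSize = ball.width / 2.0f;   // assuming a square image
    Vector2 ballPosition = { screenWidth / 2.0f, screenHeight / 2.0f };

    while (!WindowShouldClose()) {
        // Input and movement
        if (IsKeyDown(KEY_RIGHT)) ballPosition.x += 2.0f;
        if (IsKeyDown(KEY_LEFT))  ballPosition.x -= 2.0f;
        if (IsKeyDown(KEY_UP))    ballPosition.y -= 2.0f;
        if (IsKeyDown(KEY_DOWN))  ballPosition.y += 2.0f;

        // Keep the whole image on screen
        if ((ballPosition.x - ballHalfSize) < 0) ballPosition.x = ballHalfSize;
        if ((ballPosition.x + ballHalfSize) > screenWidth) ballPosition.x = screenWidth - ballHalfSize;
        if ((ballPosition.y - ballHalfSize) < 0) ballPosition.y = ballHalfSize;
        if ((ballPosition.y + ballHalfSize) > screenHeight) ballPosition.y = screenHeight - ballHalfSize;

        // All drawing between BeginDrawing()/EndDrawing()
        BeginDrawing();
        ClearBackground(RAYWHITE);
        DrawText("move the ball with arrow keys", 10, 10, 20, DARKGRAY);
        DrawTexture(ball, int(ballPosition.x - ballHalfSize), int(ballPosition.y - ballHalfSize), WHITE);
        EndDrawing();
    }

    UnloadTexture(ball);
    CloseWindow();
    return 0;
}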
I want to implement conventional FPS controls. When the spacebar is pressed, the camera should move upward and then come back down to the grid, simulating a jump. Right now I am only able to move the camera upward by a fixed distance, but I don't know how to make it come down.
This is my camera class:
Camera::Camera(glm::vec3 cameraPosition, glm::vec3 cameraFront, glm::vec3 cameraUp) {
    position = glm::vec3(cameraPosition);
    front = glm::vec3(cameraFront);
    up = glm::vec3(cameraUp);
    // Set to predefined defaults
    yawAngle = -90.0f;
    pitchAngle = 0.0f;
    fieldOfViewAngle = 45.0f;
}

Camera::~Camera() {}

void Camera::panCamera(float yaw) {
    yawAngle += yaw;
    updateCamera();
}

void Camera::tiltCamera(float pitch) {
    pitchAngle += pitch;
    // Ensure pitch is in bounds for both + and - values to prevent irregular behavior
    pitchAngle > 89.0f ? pitchAngle = 89.0f : NULL;
    pitchAngle < -89.0f ? pitchAngle = -89.0f : NULL;
    updateCamera();
}

void Camera::zoomCamera(float zoom) {
    if (fieldOfViewAngle >= MIN_ZOOM && fieldOfViewAngle <= MAX_ZOOM) {
        fieldOfViewAngle -= zoom * 0.1;
    }
    // Limit zoom values to prevent irregular behavior
    fieldOfViewAngle <= MIN_ZOOM ? fieldOfViewAngle = MIN_ZOOM : NULL;
    fieldOfViewAngle >= MAX_ZOOM ? fieldOfViewAngle = MAX_ZOOM : NULL;
}

glm::mat4 Camera::calculateViewMatrix() {
    return glm::lookAt(position, position + front, up);
}

void Camera::updateCamera() {
    front.x = cos(glm::radians(yawAngle)) * cos(glm::radians(pitchAngle));
    front.y = sin(glm::radians(pitchAngle));
    front.z = sin(glm::radians(yawAngle)) * cos(glm::radians(pitchAngle));
    front = glm::normalize(front);
}

void Camera::moveForward(float speed) {
    position += speed * front;
}

void Camera::moveBackward(float speed) {
    position -= speed * front;
}

void Camera::moveLeft(float speed) {
    position -= glm::normalize(glm::cross(front, up)) * speed;
}

void Camera::moveRight(float speed) {
    position += glm::normalize(glm::cross(front, up)) * speed;
}

void Camera::moveUpward(float speed) {
    position.y += speed;
}
This is how I implement it in main.cpp:
if (glfwGetKey(window, GLFW_KEY_SPACE) == GLFW_PRESS)
{
    camera.moveUpward(cameraSpeed);
}
Can someone help me?
You can simulate a jump by applying an acceleration to the y-position. You'll need to add three new attributes to your camera class: acceleration (vec3), velocity (vec3), and onGround (bool).
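For example, the new members could look like this in your Camera class (just a sketch, assuming glm types like the rest of your code):
// New camera members used by jump() and update() below:
glm::vec3 acceleration = glm::vec3(0.0f);
glm::vec3 velocity = glm::vec3(0.0f);
bool onGround = true;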
void Camera::jump() {
    // Apply an upwards acceleration of 10 units. You can experiment with
    // this value or make it physically correct and calculate the
    // best value, but this requires the rest of your program to
    // be in well-defined units.
    this->onGround = false;
    this->acceleration.y = 10.0f;
}

void Camera::update() {
    // Your other update code

    // The acceleration should decrease with gravity.
    // Eventually it'll become negative.
    this->acceleration.y += GRAVITY; // Define 'GRAVITY' as a negative constant.

    // Only update if we're not on the ground.
    if (!this->onGround) {
        // Add the acceleration to the velocity and the velocity to
        // the position.
        this->velocity.y += this->acceleration.y;
        this->position.y += this->velocity.y;
        if (this->position.y <= 0) {
            // When you're on the ground, reset the variables.
            this->onGround = true;
            this->acceleration.y = 0;
            this->velocity.y = 0;
            this->position.y = 0;
        }
    }
}
And in your event handling you'll call the jump method.
if (glfwGetKey(window, GLFW_KEY_SPACE) == GLFW_PRESS)
{
    camera.jump();
}
However, this code probably shouldn't be on the camera, but instead on a Physics object of some sort. But it'll work.
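To sketch what I mean (names and constants here are just placeholders): the physics body owns the jump state, and the camera simply follows its position each frame.
#include <glm/glm.hpp>

constexpr float GRAVITY = -0.5f; // placeholder value, tune to your units

// Hypothetical separation: jump logic lives on a physics body, not the camera.
struct PhysicsBody {
    glm::vec3 position = glm::vec3(0.0f);
    glm::vec3 velocity = glm::vec3(0.0f);
    glm::vec3 acceleration = glm::vec3(0.0f);
    bool onGround = true;

    void jump() {
        if (onGround) {
            onGround = false;
            acceleration.y = 10.0f;
        }
    }

    void update() {
        acceleration.y += GRAVITY;
        if (!onGround) {
            velocity.y += acceleration.y;
            position.y += velocity.y;
            if (position.y <= 0.0f) { // ground plane at y = 0
                onGround = true;
                acceleration.y = 0.0f;
                velocity.y = 0.0f;
                position.y = 0.0f;
            }
        }
    }
};

// Each frame: body.update(); then copy body.position.y into the camera's position.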
I have to move a ball at an angle in an SFML window and keep it within the window bounds (like the DVD logo), but my current function moves it to the bottom and doesn't "bounce". It slides across the bottom and stops once it reaches the other corner. The initial position is (1, 1).
void Bubble::updatePosition() {
    if (isTopBottom()) {
        do {
            _x += .1;
            _y += -.2;
        } while (!isTopBottom());
    }
    else if (isLeftRight()) {
        do {
            _x += -.1;
            _y += .2;
        } while (!isLeftRight());
    }
    else {
        _x += .1;
        _y += .2;
    }
    _bubble.setPosition(_x, _y);
}
isLeftRight and isTopBottom are boolean functions that check whether the ball has reached the edges.
Simple Solution
Use velocities and manipulate those on collision; then, use the velocity to update the position.
Check each edge separately and decide on one relevant velocity component based on that.
e.g. (following your values)
// Positions:
float x = 1.f;
float y = 1.f;

// Velocities:
float vx = 0.1f;
float vy = 0.2f;

// ... then, inside loop:

// Check collisions (and adjust velocity):
if (x < 0.f)
    vx = 0.1f;
else if (x > 640.f)
    vx = -0.1f;
if (y < 0.f)
    vy = 0.2f;
else if (y > 640.f)
    vy = -0.2f;

// update position (still inside loop):
x += vx;
y += vy;
Cleaner Solution
This is the same as the simple solution above, but since you tagged SFML, you can use SFML vectors to keep the two components together. I've also renamed the variables to be clearer, and pulled the window size and the velocity amounts out of the logic instead of hard-coding them:
const sf::Vector2f windowSize(640.f, 640.f);
const sf::Vector2f velocityAmount(0.1f, 0.2f);

sf::Vector2f position(1.f, 1.f);
sf::Vector2f velocity = velocityAmount;

// ... then, inside loop:

// Check collisions (and adjust velocity):
if (position.x < 0.f)
    velocity.x = velocityAmount.x;
else if (position.x > windowSize.x)
    velocity.x = -velocityAmount.x;
if (position.y < 0.f)
    velocity.y = velocityAmount.y;
else if (position.y > windowSize.y)
    velocity.y = -velocityAmount.y;

// update position (still inside loop):
position += velocity;
Note that the velocity is simply the amount added to the position on each iteration of the loop, and that the velocity does not change while the ball is not colliding with an edge.
Initial Problem
The initial problem you had is that the ball always moves in the same direction (towards the bottom-right) whenever it is not touching an edge. This means it will never be allowed to rise away from the bottom edge (or move away from the right edge).
I am working on a Direct3D9 space simulator game in which I need a camera that holds the position and point of view of the player's spaceship. For the moment I have limited my code to moving backward and forward, up and down, and strafing. Below is my code, and it has a problem. Everything is wrapped in a class; all vectors are initialized to D3DXVECTOR3(0.0f, 0.0f, 0.0f) in the constructor, except LocalUp (D3DXVECTOR3(0.0f, 1.0f, 0.0f)) and LocalAhead (D3DXVECTOR3(0.0f, 0.0f, 1.0f)), and the floats are set to 0.0f.
D3DXVECTOR3 Position, LookAt, PosDelta, PosDeltaWorld, WorldAhead, WorldUp, LocalUp,
            LocalAhead, Velocity;
D3DXMATRIX View, CameraRotation;
float SpeedX, SpeedY, SpeedZ;
void Update(float ElapsedTime)
{
    SpeedX = 0.0f;
    SpeedY = 0.0f;
    if (IsKeyDown('A'))
    {
        SpeedX = -0.02f;
        Velocity.x -= SpeedX;
    }
    if (IsKeyDown('D'))
    {
        SpeedX = 0.02f;
        Velocity.x += SpeedX;
    }
    if (IsKeyDown('X'))
    {
        SpeedZ += 0.01f;
        Velocity.z += SpeedZ;
    }
    if (IsKeyDown('Z'))
    {
        SpeedZ -= 0.01f;
        Velocity.z -= SpeedZ;
    }
    if (IsKeyDown('W'))
    {
        SpeedY = 0.02f;
        Velocity.y += SpeedY;
    }
    if (IsKeyDown('S'))
    {
        SpeedY = -0.02f;
        Velocity.y -= SpeedY;
    }
    D3DXVec3Normalize(&Velocity, &Velocity);
    PosDelta.x = Velocity.x * SpeedX;
    PosDelta.y = Velocity.y * SpeedY;
    PosDelta.z = Velocity.z * SpeedZ;
    D3DXMatrixRotationYawPitchRoll(&CameraRotation, 0, 0, 0);
    D3DXVec3TransformCoord(&WorldUp, &LocalUp, &CameraRotation);
    D3DXVec3TransformCoord(&WorldAhead, &LocalAhead, &CameraRotation);
    D3DXVec3TransformCoord(&PosDeltaWorld, &PosDelta, &CameraRotation);
    Position += PosDeltaWorld;
    LookAt = Position + WorldAhead;
    D3DXMatrixLookAtLH(&View, &Position, &LookAt, &WorldUp);
}
The "D3DXMatrixPerspectiveFovLH" and "IDirect3DDevice9::SetTransform" function are called in another part of the application. As they are working fine I will no longer talk about them.
The problem is that whenever the Z-axis speed is fairly high and I strafe or move laterally (separately or at the same time), the camera's Z-axis speed decreases. Moreover, once the speed is almost 0 and I then press the key that increases it, the direction of the vector inverts and then comes back to normal. This also happens when reversing the vector's direction at fairly high speeds (e.g. pressing 'X' and then immediately pressing 'Z'). Can anybody explain why this is happening and how I can solve it?
I'll also ask another question: how can I slowly decrease the strafe and Y-axis speed when no key is pressed? I want an inertia effect in the game.
If there is anyone able to help me, please respond!
EDIT: NEW CODE:
void NewFrontiers3DEntityPlayer::OnFrameUpdate(float ElapsedTime)
{
    State.SpeedX = 0.0f;
    State.SpeedY = 0.0f;
    if (IsKeyDown(State.Keys[CAM_STRAFE_LEFT]))
        State.SpeedX = -0.02f;
    if (IsKeyDown(State.Keys[CAM_STRAFE_RIGHT]))
        State.SpeedX = 0.02f;
    if (IsKeyDown(State.Keys[CAM_MOVE_FORWARD]))
    {
        State.SpeedZ += 0.01f;
    }
    if (IsKeyDown(State.Keys[CAM_MOVE_BACKWARD]))
    {
        State.SpeedZ -= 0.01f;
    }
    if (IsKeyDown(State.Keys[CAM_MOVE_UP]))
        State.SpeedY = 0.02f;
    if (IsKeyDown(State.Keys[CAM_MOVE_DOWN]))
        State.SpeedY = -0.02f;
    State.Velocity.x = State.SpeedX;
    State.Velocity.y = State.SpeedY;
    State.Velocity.z = State.SpeedZ;
    D3DXVec3Normalize(&State.Velocity, &State.Velocity);
    State.PosDelta.x = State.Velocity.x * ElapsedTime;
    State.PosDelta.y = State.Velocity.y * ElapsedTime;
    State.PosDelta.z = State.Velocity.z * ElapsedTime;
    D3DXMatrixRotationYawPitchRoll(&State.CameraRotation, 0, 0, 0);
    D3DXVec3TransformCoord(&State.WorldUp, &State.LocalUp, &State.CameraRotation);
    D3DXVec3TransformCoord(&State.WorldAhead, &State.LocalAhead, &State.CameraRotation);
    D3DXVec3TransformCoord(&State.PosDeltaWorld, &State.PosDelta, &State.CameraRotation);
    State.Position += State.PosDeltaWorld;
    State.LookAt = State.Position + State.WorldAhead;
    D3DXMatrixLookAtLH(&State.View, &State.Position, &State.LookAt, &State.WorldUp);
    return;
}
"State" is a structure that holds all the information about the camera.
I would guess your speed changes when you move in more than one direction at once because you normalize your velocity.
For example, moving in Z:
Velocity = (0, 0, 0.01)
Speed = (0, 0, 0.01)
Normalized Velocity = (0, 0, 1)
PosDelta = (0, 0, 0.01)
and moving in X+Z:
Velocity = (0.02, 0, 0.01)
Speed = (0.02, 0, 0.01)
Normalized Velocity = (0.894, 0, 0.447)
PosDelta = (0.018, 0, 0.0045)
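You can check those numbers with a few lines of plain C++ (no D3DX needed; this just mirrors the normalize-then-multiply step, assuming Velocity equals the per-axis speeds as in the example above):
#include <cmath>
#include <cstdio>

int main() {
    // Two cases from above: moving only in Z, and moving in X and Z at once.
    float speeds[2][3] = { {0.0f, 0.0f, 0.01f}, {0.02f, 0.0f, 0.01f} };
    for (const auto& s : speeds) {
        float len = std::sqrt(s[0] * s[0] + s[1] * s[1] + s[2] * s[2]);
        // Normalize the velocity, then multiply component-wise by the speeds,
        // which is what the PosDelta computation does.
        std::printf("PosDelta = (%g, %g, %g)\n",
                    (s[0] / len) * s[0], (s[1] / len) * s[1], (s[2] / len) * s[2]);
    }
    return 0;
}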
Regarding the inversion of direction, I'm guessing it may be related in part to your rather unusual way of combining velocity and speed (more on that below), and possibly to the imprecision of floats. On the latter point, what do you think the following code outputs (disregarding potential compiler optimizations)?
float test1 = 0.0f;
test1 += 0.1f;
test1 += 0.1f;
test1 += 0.1f;
test1 += 0.1f;
test1 += 0.1f;
test1 -= 0.1f;
test1 -= 0.1f;
test1 -= 0.1f;
test1 -= 0.1f;
test1 -= 0.1f;
printf("%g\n", test1);
It (likely) won't output the obvious answer of 0, since 0.1 cannot be exactly represented in base 2; it prints 1.49012e-008 on my system. What could be happening is that you are close to 0 but not exactly at it, which may cause the apparent coordinate inversion. You can get rid of this by rounding the speeds to a certain accuracy.
Your overall method of handling the velocity/speed/position is a little strange and may be the source of your difficulties. For example, I would expect Velocity to be a vector representing the velocity of the player and not a normalized vector as you have. I would do something like:
SpeedX = 0.0f;
SpeedY = 0.0f;
SpeedZ = 0.0f;
if(IsKeyDown('A')) SpeedX += -0.02f;
if(IsKeyDown('D')) SpeedX += 0.02f;
...
Velocity.x += SpeedX;
Velocity.y += SpeedY;
Velocity.z += SpeedZ;
D3DXVECTOR3 NormVelocity;
D3DXVec3Normalize(&NormVelocity, &Velocity);
//Round velocity here if you need to
Velocity.x = floor(Velocity.x * 10000) / 10000.0f;
...
float FrameTime = 1; //Use last frame time here
PosDelta.x = Velocity.x * FrameTime;
PosDelta.y = Velocity.y * FrameTime;
PosDelta.z = Velocity.z * FrameTime;
That gets rid of your velocity changing when moving in more than one direction. It also lets you properly compensate for changing frame rates if you set the FrameTime to be the time of the last frame (or a value derived from it). It also correctly stops the player from moving when they try to move in two opposite directions at once.
As for your last question regarding the decay of the Y-velocity, there are a few ways to do it. You could simply do something like:
Velocity.y *= 0.7f;
every frame (adjust the constant to suit your needs). A more accurate model would be to do something like:
if (Velocity.y > 0) {
    Velocity.y -= 0.001f; // Pick constant to suit
    if (Velocity.y < 0) Velocity.y = 0;
}
An even better way would be to use the last frame time to account for varying frame rates like:
Velocity.y -= FrameTime * 0.2f; //Pick constant to suit
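If you use the frame-time version, keep the clamp from above so the velocity doesn't overshoot past zero, e.g.:
if (Velocity.y > 0) {
    Velocity.y -= FrameTime * 0.2f; // Pick constant to suit
    if (Velocity.y < 0) Velocity.y = 0;
}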
Here is what I'm trying to do: fire a bullet out of the center of the screen. I have an X and a Y rotation angle. The problem is that the Y movement (which is driven by the rotation around X) is really not working as intended. Here is what I have:
float yrotrad, xrotrad;
yrotrad = (Camera.roty / 180.0f * 3.141592654f);
xrotrad = (Camera.rotx / 180.0f * 3.141592654f);
Vertex3f Pos;
// get camera position
pls.x = Camera.x;
pls.y = Camera.y;
pls.z = Camera.z;
for (float i = 0; i < 60; i++)
{
    // add the rotation vector
    pls.x += float(sin(yrotrad));
    pls.z -= float(cos(yrotrad));
    pls.y += float(sin(twopi - xrotrad));
    // translate camera coords to cube coords
    Pos.x = ceil(pls.x / 3);
    Pos.y = ceil((pls.y) / 3);
    Pos.z = ceil(pls.z / 3);
    if (!CubeIsEmpty(Pos.x, Pos.y, Pos.z)) // remove first cube that made contact
    {
        delete GetCube(Pos.x, Pos.y, Pos.z);
        SetCube(0, Pos.x, Pos.y, Pos.z);
        return;
    }
}
This is almost identical to how I move the player: I add the direction vector to the camera position, then find which cube the player is on. If I remove the pls.y += float(sin(twopi - xrotrad)); line, I clearly see that on X and Z everything points as it should. When I add it back, it almost works, but not quite: from rendering out spheres along the trajectory, I observed that the further up or down I look, the more offset it becomes, rather than staying aligned with the camera's center. What am I doing wrong?
Thanks
What happens is difficult to explain: I'd expect the bullet at time 0 to always be at the center of the screen, but it behaves oddly. If I'm looking straight at the horizon, up to about ±20 degrees upward, it's fine, but beyond that it stops following.
I set up my matrix like this:
void CCubeGame::SetCameraMatrix()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(Camera.rotx, 1, 0, 0);
    glRotatef(Camera.roty, 0, 1, 0);
    glRotatef(Camera.rotz, 0, 0, 1);
    glTranslatef(-Camera.x, -Camera.y, -Camera.z);
}
and change the angle like this:
void CCubeGame::MouseMove(int x, int y)
{
    if (!isTrapped)
        return;

    int diffx = x - lastMouse.x;
    int diffy = y - lastMouse.y;
    lastMouse.x = x;
    lastMouse.y = y;
    Camera.rotx += (float)diffy * 0.2;
    Camera.roty += (float)diffx * 0.2;
    if (Camera.rotx > 90)
    {
        Camera.rotx = 90;
    }
    if (Camera.rotx < -90)
    {
        Camera.rotx = -90;
    }
    if (isTrapped)
        if (fabs(ScreenDimensions.x / 2 - x) > 1 || fabs(ScreenDimensions.y / 2 - y) > 1) {
            resetPointer();
        }
}
You need to scale X and Z by cos(xrotrad) (in other words, multiply them by cos(xrotrad)).
Imagine you're facing straight down the Z axis but looking straight up. You don't want the bullet to shoot down the Z axis at all; this is why you need to scale it. (It's basically the same thing you're already doing between X and Z, but now applied to the XZ vector and Y.)
pls.x += float(sin(yrotrad)*cos(xrotrad)) ;
pls.z -= float(cos(yrotrad)*cos(xrotrad)) ;
pls.y += float(sin(twopi - xrotrad));