How to define how much CPU to use in an SFML game? - C++

I've made a game, but I don't know whether it will behave the same way on other devices. For example, if a computer has a faster CPU, will the player and enemies move faster? If so, is there a way in SFML to account for the CPU power available? The way the player and enemies move in my program is:
1. Check if the key is pressed.
2. If so, call move(x, y).
Or is there some operation I should be doing in the move function to compensate?
Thank you!

It sounds like you are worried about the physics of your game being affected by the game's framerate. Your intuition is serving you well! This is a significant problem, and one you'll want to address if you want your game to feel professional.
According to Glenn Fiedler in his Gaffer on Games article 'Fix Your Timestep!'
[A game loop that handles time improperly can make] the behavior of your physics simulation [depend] on the delta time you pass in. The effect could be subtle as your game having a slightly different “feel” depending on framerate or it could be as extreme as your spring simulation exploding to infinity, fast moving objects tunneling through walls and the player falling through the floor!
Logic dictates that you must decouple your update from the time it takes to draw a frame. A simple solution is to:
1. Pick an amount of time that can safely be processed in one update (your timestep).
2. Add the time that passes each frame to an accumulated pool of time.
3. Consume the pooled time in safe, fixed-size chunks.
In pseudocode:
time_pool = 0;
timestep = 0.01; // or whatever is safe for you!
old_time = get_current_time();

while (!closed) {
    new_time = get_current_time();
    time_pool += new_time - old_time;
    old_time = new_time;

    handle_input();

    while (time_pool > timestep)
    {
        consume_time(timestep); // update your gamestate
        time_pool -= timestep;
    }
    // note: leftover time is not lost; it stays in time_pool

    render();
}
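Applied to the question as asked, here is a minimal SFML 2 sketch of the same loop. The window size, speed, and key choice are illustrative, not taken from the question; the point is that movement is expressed in units per second and scaled by the timestep, so it behaves the same on any CPU.
#include <SFML/Graphics.hpp>

int main() {
    sf::RenderWindow window(sf::VideoMode(800, 600), "game");
    sf::RectangleShape player(sf::Vector2f(32.f, 32.f));
    const float speed = 200.f;      // pixels per second, not per frame
    const float timestep = 0.01f;   // seconds consumed per update
    float timePool = 0.f;
    sf::Clock clock;

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        timePool += clock.restart().asSeconds();   // time passed since last frame
        while (timePool > timestep) {
            if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right))
                player.move(speed * timestep, 0.f); // same distance per real second on any machine
            timePool -= timestep;
        }

        window.clear();
        window.draw(player);
        window.display();
    }
}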
It is worth noting that this method has its own problem: future frames have to consume the time produced by calls to consume_time. If a call to consume_time takes too long, the time it produces might require two calls to be made the next frame, then four, then eight, and so on. If you use this method, you will have to make sure consume_time is very efficient, and even then it is best to have a contingency plan, such as the one sketched below.
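One common contingency, shown here only as a sketch on top of the pseudocode above, is to cap the accumulated time so a single slow frame can never schedule an unbounded amount of catch-up work:
max_time_pool = 0.25;  // illustrative cap, in seconds
if (time_pool > max_time_pool)
    time_pool = max_time_pool;  // the game slows down instead of spiraling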
For a more thorough treatment I encourage you to read the linked article.

Related

Box2D: how can I get same physics simulation if I use slow motion?

I have a functional personal 2D game engine that uses Box2D. I am trying to implement fast and slow motion effects for my physics simulation. I implemented fast motion by calling the Step function more frequently; this way I get the same simulation result no matter how much I increase the physics update rate. The problem lies in slowing the simulation down. If I use the same technique, the simulation gives the same result, but it looks like frames are dropped because the physics is not updating as frequently, which is expected but not what I am looking for.
I tried a new way of updating the physics world by stepping it by updateRate * timeScale when timeScale is less than 1. That fixed the laggy updates, but now the simulation gives a different result compared to time scales of 1 or above.
I am looking to get the same simulation results for a time scale of 1, any number above 1, and any number below 1, and have the simulation still look smooth in slow motion. This is the code I have, for reference.
bool slowMotion = false;
float physicsScaledTimeFactor = Time::getPhysicsScaledTimeFactor();

if (physicsScaledTimeFactor >= 1) {
    float deltaUpdateTime = Time::getTimeRunningF() - Time::getPhysicsLoopRunTimeF();
    if (deltaUpdateTime < physicsTimeStep / physicsScaledTimeFactor)
        return;
}
else {
    slowMotion = true;
    float deltaUpdateTime = Time::getTimeRunningF() - Time::getPhysicsLoopRunTimeF();
    if (deltaUpdateTime < physicsTimeStep)
        return;
}

Time::updatePhysicsLoop();

// Update / step physics world
if (slowMotion)
    physicsWorld->Step(physicsTimeStep * physicsScaledTimeFactor, physicsVelocityIterations, physicsPositionIterations);
else
    physicsWorld->Step(physicsTimeStep, physicsVelocityIterations, physicsPositionIterations);
Here is what the simulation results look like for time scales of 1 or above:
Here is what they look like for time scales of less than 1:
As you can see, the squares end up in different positions. Is there a way to get the same simulation results under these conditions, or is this acceptable and should I just leave it as it is?
Update:
I checked other engines and they show the same laggy effect. Still, would there be a way to get the same simulation results? The slow-motion effect looks very smooth, and it would be nice to have it work as expected.
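For what it's worth, the fixed-timestep idea from the first answer above can be applied here as well. Box2D generally only reproduces the same trajectory if every call to Step receives the same timestep, so one approach is to keep the step size constant and instead scale how fast simulated time accumulates. This is only a sketch; frameTime and updatePhysics are illustrative names, while the other identifiers come from the code above.
static float accumulator = 0.0f;

void updatePhysics(b2World* physicsWorld, float frameTime, float timeScale) {
    accumulator += frameTime * timeScale;   // slow motion: timeScale < 1
    while (accumulator >= physicsTimeStep) {
        // The step size never changes, so the trajectory is identical at any time scale.
        physicsWorld->Step(physicsTimeStep, physicsVelocityIterations, physicsPositionIterations);
        accumulator -= physicsTimeStep;
    }
    // To keep slow motion looking smooth, render bodies interpolated by
    // (accumulator / physicsTimeStep) between their previous and current states.
}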

Cocos2D BezierBy with increasing speed over time

I'm pretty new to C++/Cocos2d, but I've been making pretty good progress. :)
What I want to do is animate a coin 'falling off' the screen after a player gets it. I've managed to successfully implement it 2 different ways, but each way has major downsides.
The goal: After a player gets a coin, the coin should 'jump', then 'fall' off of the screen. Ideally, the coin acts as if acted upon by gravity, so it jumps up with a fast speed, slows down to a stop, then proceeds to go downward at an increasing rate.
Attempts so far:
void Coin::tick(float dt) {
    velocityY += gravity * dt;
    float newX = coin->getPositionX() + velocityX;
    float newY = coin->getPositionY() + velocityY;
    coin->setPosition(newX, newY);
    // using MoveBy(dt, Vec2(newX, newY)) has the same result
}
// This is run on every 'update' of the main game loop.
This method does exactly what I would like it to do as far as movement, however, the frame rate gets extremely choppy and it starts to 'jump' between frames, sometimes quite significant distances.
ccBezierConfig bz;
bz.controlPoint_1 = Vec2(0, 0);
bz.controlPoint_2 = Vec2(20, 50); // These are just test values. Will normally be randomized to a degree.
bz.endPosition = Vec2(100, -2000);
auto coinDrop = BezierBy::create(2, bz);
coin->runAction(coinDrop);
This one has the benefit of a 'perfect' framerate with no choppiness whatsoever; however, the coin moves at a constant rate, which ruins the feeling of it falling and just makes it look like it's arbitrarily moving along some set path. (Which, well, it is.)
Has anybody run into a similar situation or know of a fix? Either to better handle the frame rate of the first one (MoveBy/To don't work - it still has the choppy effect), or to programmatically set the speed of the second one (changing speed going to/from certain points along the curve)?
Another idea I've had is to use a number of different MoveBy actions with different speeds, but that would produce awkward 'pointy' curves and awkward changes in speed, so it's not really a solution.
Any ideas/help are greatly appreciated. :)
Yes, I have run into a similar situation. This is where 'easing' comes in handy. There are many built-in easing functions that you can use, such as EaseIn or EaseOut. So your new code would look something like:
coin->runAction(cocos2d::EaseBounceOut::create(coinDrop));
This page shows the graphs for several common easing methods:
http://cocos2d-x.org/docs/programmers-guide/4/index.html
For your purposes (increasing speed over time) I would recommend trying the 'EaseIn' method.
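For example, a minimal sketch, where coinDrop is the BezierBy action created above and the 2.0f rate is just an illustrative value:
coin->runAction(cocos2d::EaseIn::create(coinDrop, 2.0f));
A higher rate makes the action start slower and finish faster, which gives the accelerating 'falling' feel without giving up the smooth, scheduler-driven animation of the second attempt.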

How to reduce OpenGL CPU usage and/or how to use OpenGL properly

I'm working on a Micromouse simulation application built with OpenGL, and I have a hunch that I'm not doing things properly. In particular, I'm suspicious of the way I am getting my (mostly static) graphics to refresh at a close-to-constant framerate (60 FPS). My approach is as follows:
1) Start a timer
2) Draw my shapes and text (about a thousand of them):
glBegin(GL_POLYGON);
for (Cartesian vertex : polygon.getVertices()) {
    std::pair<float, float> coordinates = getOpenGlCoordinates(vertex);
    glVertex2f(coordinates.first, coordinates.second);
}
glEnd();
and
glPushMatrix();
glScalef(scaleX, scaleY, 0);
glTranslatef(coordinates.first * 1.0/scaleX, coordinates.second * 1.0/scaleY, 0);
for (int i = 0; i < text.size(); i += 1) {
    glutStrokeCharacter(GLUT_STROKE_MONO_ROMAN, text.at(i));
}
glPopMatrix();
3) Call
glFlush();
4) Stop the timer
5) Sleep for (1/FPS - duration) seconds
6) Call
glutPostRedisplay();
The "problem" is that the above approach really hogs my CPU - the process is using something like 96-100%. I know that there isn't anything inherently wrong with using lots of CPU, but I feel like I shouldn't be using that much all of the time.
The kicker is that most of the graphics don't change from frame to frame. It's really just a single polygon moving over (and covering up) some static shapes. Is there any way to tell OpenGL to only redraw what has changed since the previous frame (with the hope it would reduce the number of glxxx calls, which I've deemed to be the source of the "problem")? Or, better yet, is my approach to getting my graphics to refresh even correct?
First and foremost, the biggest CPU hog with OpenGL is immediate mode... and you're using it (glBegin, glEnd). The problem with immediate mode is that every single vertex requires a couple of OpenGL calls, and because OpenGL uses thread-local state, each and every OpenGL call must go through some indirection. So the first step would be getting rid of that.
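As a sketch of the smallest step away from immediate mode, the polygon loop above could use legacy client-side vertex arrays instead; flat is an illustrative name, and the other types and helpers come from the question's code:
std::vector<float> flat;   // x0, y0, x1, y1, ...
for (Cartesian vertex : polygon.getVertices()) {
    std::pair<float, float> c = getOpenGlCoordinates(vertex);
    flat.push_back(c.first);
    flat.push_back(c.second);
}
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, flat.data());    // 2 floats per vertex
glDrawArrays(GL_POLYGON, 0, flat.size() / 2);    // one call instead of one per vertex
glDisableClientState(GL_VERTEX_ARRAY);
Putting that data in a vertex buffer object so it is not rebuilt every frame would be the natural next step, since most of the geometry is static.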
The next issue is how you're timing your display. If low latency between user input and display is not your ultimate goal, the standard approach would be to set up the window for double buffering, enable V-Sync with a swap interval of 1, and do a buffer swap (glutSwapBuffers) once the frame is rendered. The exact timings of what will block, and where, are unfortunately implementation dependent, but you're more or less guaranteed to hit your screen refresh frequency exactly, as long as your renderer can keep up (i.e. rendering a frame takes less time than a screen refresh interval).
glutPostRedisplay merely sets a flag for the main loop to call the display function if no further events are pending, so timing a frame redraw through that is not very accurate.
Last but not least, you may simply be misled by the way Windows accounts CPU time: time spent in driver context, which includes blocking while waiting for V-Sync, gets accounted as consumed CPU time even though it is in fact an interruptible sleep. However, you wrote that you already sleep in your code, which would mostly rule that out, because the go-to approach for getting a more reasonable accounting would be adding a Sleep(1) before or after the buffer swap.
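A sketch of the double-buffered, V-Synced structure described above, using plain GLUT; enabling a swap interval of 1 is platform specific (for example wglSwapIntervalEXT on Windows or glXSwapIntervalEXT on X11) and is assumed to be done elsewhere or forced in the driver settings:
#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the maze polygons and text here ...
    glutSwapBuffers();     // with V-Sync on, this paces the loop to the refresh rate
    glutPostRedisplay();   // ask GLUT to call display() again for the next frame
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);   // double buffering
    glutCreateWindow("micromouse");
    glutDisplayFunc(display);
    glutMainLoop();
}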
I found that putting the render thread to sleep helps reduce CPU usage, in my case from 26% to around 8%:
#include <chrono>
#include <iostream>
#include <thread>

void render_loop() {
    ...
    auto const start_time = std::chrono::steady_clock::now();
    auto const wait_time = std::chrono::milliseconds{ 17 };
    auto next_time = start_time + wait_time;
    while (true) {
        ...
        // wake up roughly every 17 ms, which is close to 60 frames per second
        auto then = std::chrono::steady_clock::now();
        std::this_thread::sleep_until(next_time);
        ...rendering jobs
        auto elapsed_time = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - then);
        std::cout << "ms: " << elapsed_time.count() << '\n';
        next_time += wait_time;
    }
}
I thought about trying to measure the frame rate while the thread is asleep, but there isn't any reason for my use case to attempt that. The result averaged around 16 ms, so I thought that was good enough.
Inspired by this post

How to pause an animation with OpenGL / glut

To achieve an animation, I am just redrawing things in a loop.
However, I need to be able to pause when a key is pressed. I know the way I'm doing it now is wrong, because it eats all of my cycles while the loop is running.
Which way is better, and will allow for a key-press pause and resume?
I tried using a bool flag, but obviously it didn't change the flag until the loop was done.
You have the correct, very basic architecture sorted, in that everything needs to be updated in a loop, but you need to make your loop a lot smarter for a game (or any other application requiring OpenGL animations).
However, I need to be able to pause when a key is pressed.
A basic way of doing this is to have a boolean value paused and to wrap the game into a loop.
while (!finished) {
    // `paused` is toggled asynchronously, e.g. from a keyboard callback
    while (!paused) {
        update();
        render();
    }
}
Typically, however, you still want to do things such as look at your inventory, craft items, and so on while your game is paused. Many games still run their main loop while the game is paused; they just don't let the actors know that any time has passed. For instance, it sounds like your animation frames simply have a number of game-frames to be visible for. That is a bad idea, because if the framerate is higher or lower on a different computer, the animation speed will look wrong on that computer. You can consider my answer here, and the linked samples, to see how you can achieve framerate-independent animation by specifying animation frames in terms of millisecond durations and passing the frame time into the update loop. Your main game loop then changes to look like this:
float previousTime = 0.0f;
float thisTime = 0.0f;
float framePeriod = 0.0f;
while (!finished) {
    thisTime = getTimeInMilliseconds();
    framePeriod = thisTime - previousTime;   // time elapsed since the previous frame
    update(framePeriod);
    render();
    previousTime = thisTime;
}
Now everything in the game that gets updated knows how much time has passed since the previous frame. This is helpful for all your physics calculations, since all of our physical formulae are in terms of time plus starting and decay factors (for instance, the SUVAT equations). The same information can be used for your animations to make them framerate independent, as I have described with some links to examples here.
To answer the next part of the question:
it eats all of my cycles when the loop is going on.
This is because you're using 100% of the CPU and never going to sleep. If we want, say, 30 fps on the target device (and we know this is achievable), then we know the period of one frame is 1/30th of a second. We've just calculated the time it takes to update and render our game, so we can sleep for the spare time:
float previousTime = 0.0f;
float thisTime = 0.0f;
float framePeriod = 0.0f;
float availablePeriod = 1000.0f / 30.0f;   // one frame at 30 fps, in milliseconds
while (!finished) {
    thisTime = getTimeInMilliseconds();
    framePeriod = thisTime - previousTime;
    update(framePeriod);
    render();
    previousTime = thisTime;
    if (framePeriod < availablePeriod)
        sleep(availablePeriod - framePeriod);   // sleep away the spare milliseconds
}
This technique is called framerate governance as you are manually controlling the rate at which you are rendering and updating.

Slow C++ DirectX 2D Game

I'm new to C++ and DirectX, I come from XNA.
I have developed a game like Fly The Copter.
What I've done is create a class named Wall.
While the game is running I draw all the walls.
In XNA I stored the walls in an ArrayList, and in C++ I've used a vector.
In XNA the game runs fast, but in C++ it runs really slowly.
Here's the C++ code:
void GameScreen::Update()
{
    //Update Walls
    int len = walls.size();
    for(int i = wallsPassed; i < len; i++)
    {
        walls.at(i).Update();
        if (walls.at(i).pos.x <= -40)
            wallsPassed += 2;
    }
}

void GameScreen::Draw()
{
    //Draw Walls
    int len = walls.size();
    for(int i = wallsPassed; i < len; i++)
    {
        if (walls.at(i).pos.x < 1280)
            walls.at(i).Draw();
        else
            break;
    }
}
In the Update method I decrease the X value by 4.
In the Draw method I call sprite->Draw (Direct3DXSprite).
That the only codes that runs in the game loop.
I know this is bad code; if you have an idea for improving it, please help.
Thanks, and sorry about my English.
Try replacing all occurrences of at() with the [] operator. For example:
walls[i].Draw();
and then turn on all optimisations. Both [] and at() are function calls - to get the maximum performance you need to make sure that they are inlined, which is what upping the optimisation level will do.
You can also do some minimal caching of a wall object - for example:
for(int i = wallsPassed; i < len; i++)
{
    Wall & w = walls[i];
    w.Update();
    if (w.pos.x <= -40)
        wallsPassed += 2;
}
Try to narrow down the cause of the performance problem (also termed profiling). I would try drawing only one object while continuing to update all the objects. If it's suddenly faster, then it's a DirectX drawing problem.
Otherwise, try drawing all the objects but updating only one wall. If that's faster, then your Update() function may be too expensive.
How fast is 'fast'?
How slow is 'really slow'?
How many sprites are you drawing?
How big is each one as an image file, and in pixels drawn on-screen?
How does performance scale (in XNA/C++) as you change the number of sprites drawn?
What difference do you get if you draw without updating, or vice versa?
Maybe you have just forgotten to turn on release mode :) I had some problems with that in the past - I thought my code was very slow, when it was just debug mode. If that's not it, you may have a problem in the rendering part, or with a huge number of objects. The code you provided looks good...
Have you tried multiple buffers (a.k.a. Double Buffering) for the bitmaps?
The typical scenario is to draw in one buffer, then while the first buffer is copied to the screen, draw in a second buffer.
Another technique is to have a huge "logical" screen in memory. The portion drawn on the physical display is a viewport, i.e. a view into a small area of the logical screen. Moving the background (or screen) then just requires a copy on the part of the graphics processor.
You can aid batching of sprite draw calls. Presumably your Draw call calls ID3DXSprite::Draw on your single ID3DXSprite instance with the relevant parameters.
You can get much improved performance by doing a call to ID3DXSprite::Begin (with the D3DXSPRITE_SORT_TEXTURE flag set) and then calling ID3DXSprite::End when you've done all your rendering. ID3DXSprite will then sort all your sprite calls by texture to decrease the number of texture switches and batch the relevant calls together. This will improve performance massively.
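A rough sketch of that Begin/End pattern around the existing draw loop; wallTexture, the position member, and the colour are illustrative, not taken from the question:
sprite->Begin(D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_ALPHABLEND);
for (int i = wallsPassed; i < len; i++)
{
    Wall & w = walls[i];
    if (w.pos.x >= 1280)
        break;
    D3DXVECTOR3 pos(w.pos.x, w.pos.y, 0.0f);
    sprite->Draw(wallTexture, NULL, NULL, &pos, 0xFFFFFFFF);   // queued, not drawn yet
}
sprite->End();   // sorts by texture and issues the batched draw calls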
It's difficult to say more, however, without seeing the internals of your Update and Draw calls. The above is only a guess...
Drawing every single wall with a separate draw call is a bad idea. Try to batch the data into a single vertex buffer/index buffer and submit it in a single draw call; that's a saner approach.
Anyway, to get an idea of WHY it runs slowly, try some CPU and GPU profilers (PerfHUD, Intel GPA, etc.) to find out first of all WHAT the bottleneck is (the CPU or the GPU). Then you can work on alleviating the problem.
The lookups into your list of walls are unlikely to be the source of your slowdown. The cost of drawing objects in 3D will typically be the limiting factor.
The important parts are your draw code, the flags you used to create the DirectX device, and the flags you use to create your textures. My stab in the dark... check that you initialize the device as HAL (hardware 3d) rather than REF (software 3d).
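As a sketch of that check, assuming a standard Direct3D 9 setup where d3d, hWnd and d3dpp are created elsewhere (the names are illustrative):
IDirect3DDevice9* device = NULL;
HRESULT hr = d3d->CreateDevice(
    D3DADAPTER_DEFAULT,
    D3DDEVTYPE_HAL,                       // hardware rasterization, not D3DDEVTYPE_REF
    hWnd,
    D3DCREATE_HARDWARE_VERTEXPROCESSING,  // try D3DCREATE_SOFTWARE_VERTEXPROCESSING if this fails
    &d3dpp,
    &device);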
Also, how many sprites are you drawing? Each draw call has a fair amount of overhead; if you make more than a couple of hundred per frame, that will be your limiting factor.