Simple C++ SFML program high CPU usage

I'm currently working on a platformer and trying to implement a timestep, but for framerate limits greater than 60 the CPU usage goes up from 1% to 25% and more.
I made this minimal program to demonstrate the issue. There are two comments in the code that describe the problem and what I have tested.
Note that the FPS stuff is not relevant to the problem (I think).
I tried to keep the code short and simple:
#include <memory>
#include <sstream>
#include <iomanip>
#include <SFML/Graphics.hpp>

int main() {
    // Window
    std::shared_ptr<sf::RenderWindow> window;
    window = std::make_shared<sf::RenderWindow>(sf::VideoMode(640, 480, 32), "Test", sf::Style::Close);

    /*
    When I use the setFramerateLimit() function below, the CPU usage is only 1% instead of 25%+.
    (And only if I set the limit to 60 or less. For example, 120 increases CPU usage to 25%+ again.)
    */
    //window->setFramerateLimit(60);

    // FPS text
    sf::Font font;
    font.loadFromFile("font.ttf");
    sf::Text fpsText("", font, 30);
    fpsText.setColor(sf::Color(0, 0, 0));

    // FPS
    float fps;
    sf::Clock fpsTimer;
    sf::Time fpsElapsedTime;

    /*
    When I set framerateLimit to 60 (or anything less than 60)
    instead of 120, CPU usage goes down to 1%.
    When the limit is greater, in this case 120, CPU usage is 25%+.
    */
    unsigned int framerateLimit = 120;
    sf::Time fpsStep = sf::milliseconds(1000 / framerateLimit);
    sf::Time fpsSleep;
    fpsTimer.restart();

    while (window->isOpen()) {
        // Update timer
        fpsElapsedTime = fpsTimer.restart();
        fps = 1000.0f / fpsElapsedTime.asMilliseconds();

        // Update FPS text
        std::stringstream ss;
        ss << "FPS: " << std::fixed << std::setprecision(0) << fps;
        fpsText.setString(ss.str());

        // Get events
        sf::Event evt;
        while (window->pollEvent(evt)) {
            switch (evt.type) {
            case sf::Event::Closed:
                window->close();
                break;
            default:
                break;
            }
        }

        // Draw
        window->clear(sf::Color(255, 255, 255));
        window->draw(fpsText);
        window->display();

        // Sleep
        fpsSleep = fpsStep - fpsTimer.getElapsedTime();
        if (fpsSleep.asMilliseconds() > 0) {
            sf::sleep(fpsSleep);
        }
    }

    return 0;
}
I don't want to use SFML's setFramerateLimit(), but my own sleep-based implementation, because I will use the FPS data to update my physics and such.
Is there a logic error in my code? I fail to see it, given that it works with a framerate limit of, for example, 60 or less. Is it because I have a 60 Hz monitor?
PS: Using SFML's window->setVerticalSyncEnabled() doesn't change the results.

I answered another similar question with this answer.
The thing is, this doesn't exactly help with CPU usage, but I tried your code and it works fine, under 1% CPU usage, at 120 FPS (and much more). When you make a game or interactive media with a game loop, you don't want to lose performance by sleeping; you want to use as much CPU time as the computer can give you. Instead of sleeping, you can process other data (loading assets, pathfinding algorithms, etc.), or just not put limits on rendering.
Here are some useful links and code:
Similar question: Movement Without Framerate Limit C++ SFML.
What you really need is a fixed timestep. Take a look at the SFML Game Development book source code. Here's the interesting snippet from Application.cpp:
const sf::Time Game::TimePerFrame = sf::seconds(1.f/60.f);

// ...

sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
while (mWindow.isOpen())
{
    sf::Time elapsedTime = clock.restart();
    timeSinceLastUpdate += elapsedTime;
    while (timeSinceLastUpdate > TimePerFrame)
    {
        timeSinceLastUpdate -= TimePerFrame;
        processEvents();
        update(TimePerFrame);
    }
    updateStatistics(elapsedTime);
    render();
}
If this is not really what you want, see "Fix your timestep!", which Laurent Gomila himself linked on the SFML forum.
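The book snippet fixes the update rate but renders whatever state the last update produced. The main extra idea in "Fix your timestep!" is to interpolate the rendered state between the previous and current physics states. Here is a minimal sketch of that, built on the snippet above; previousPosition, currentPosition and renderEntity() are hypothetical placeholders, not part of the book's code:

while (mWindow.isOpen())
{
    sf::Time elapsedTime = clock.restart();
    timeSinceLastUpdate += elapsedTime;
    while (timeSinceLastUpdate > TimePerFrame)
    {
        timeSinceLastUpdate -= TimePerFrame;
        previousPosition = currentPosition; // remember the last physics state
        processEvents();
        update(TimePerFrame);               // produces a new currentPosition
    }
    // Blend the two physics states by the leftover fraction of a step so
    // rendering stays smooth even though physics advances in fixed ticks.
    float alpha = timeSinceLastUpdate.asSeconds() / TimePerFrame.asSeconds();
    sf::Vector2f drawPos = previousPosition + alpha * (currentPosition - previousPosition);
    renderEntity(drawPos);
}

The leftover time in the accumulator tells you how far between two physics states the current frame falls, which is exactly what alpha encodes.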

I suggest using setFramerateLimit(), because it's natively implemented in SFML and will work a lot better.
To get the elapsed time, you must do:
fpsElapsedTime = fpsTimer.getElapsedTime();
If I had to implement something similar, I would do:
/* in the main loop */
fpsElapsedTime = fpsTimer.getElapsedTime();
if (fpsElapsedTime.asMilliseconds() >= (1000 / framerateLimit))
{
    fpsTimer.restart();
    // All your content
}
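A minimal self-contained sketch of that gating idea, assuming the question's window and framerateLimit variables; note that 1000 / framerateLimit is integer division, so 120 yields 8 ms rather than 8.33 ms:

sf::Clock fpsTimer;
const unsigned int framerateLimit = 120;
const sf::Int32 stepMs = 1000 / framerateLimit; // integer division: 8 ms, not 8.33 ms

while (window->isOpen())
{
    if (fpsTimer.getElapsedTime().asMilliseconds() >= stepMs)
    {
        fpsTimer.restart();
        // poll events, update, and draw here
    }
    else
    {
        sf::sleep(sf::milliseconds(1)); // yield the CPU instead of busy-waiting
    }
}

Without the sf::sleep in the else branch, this loop spins at full speed and brings back the 25%+ CPU usage from the question.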
One other thing: use sf::Color::White or sf::Color::Black instead of sf::Color(255, 255, 255).
Hope this helps :)

Related

Make Windows MFC Game Loop Faster

I am creating a billiards game and am having major problems with tunneling at high speeds. I figured using linear interpolation for animations would help quite a bit, but the problem persists. To see this, I drew a circle at the previous few positions an object has been; at the highest velocity the ball can travel, the gaps between successive positions are very large.
Surely these increments of advancement are much too large, even after using linear interpolation.
At each frame, every object's location is updated based on the amount of time since the window was last drawn. I noticed that the average time for the window to be redrawn is somewhere between 70 and 80 ms. I would really like this game to work at 60 FPS, so this is about 4 or 5 times longer than what I am looking for.
Is there a way to change how often the window is redrawn? Here is how I am currently redrawing the screen:
#include "pch.h"
#include "framework.h"
#include "ChildView.h"
#include "DoubleBufferDC.h"
const int FrameDuration = 16;
void CChildView::OnPaint()
{
CPaintDC paintDC(this); // device context for painting
CDoubleBufferDC dc(&paintDC); // device context for painting
Graphics graphics(dc.m_hDC); // Create GDI+ graphics context
mGame.OnDraw(&graphics);
if (mFirstDraw)
{
mFirstDraw = false;
SetTimer(1, FrameDuration, nullptr);
LARGE_INTEGER time, freq;
QueryPerformanceCounter(&time);
QueryPerformanceFrequency(&freq);
mLastTime = time.QuadPart;
mTimeFreq = double(freq.QuadPart);
}
LARGE_INTEGER time;
QueryPerformanceCounter(&time);
long long diff = time.QuadPart - mLastTime;
double elapsed = double(diff) / mTimeFreq;
mLastTime = time.QuadPart;
mGame.Update(elapsed);
}
void CChildView::OnTimer(UINT_PTR nIDEvent)
{
RedrawWindow(NULL, NULL, RDW_UPDATENOW);
Invalidate();
CWnd::OnTimer(nIDEvent);
}
EDIT: Upon request, here is how the actual drawing is done:
void CGame::OnDraw(Gdiplus::Graphics* graphics)
{
    // Draw the background
    graphics->DrawImage(mBackground.get(), 0, 0,
        mBackground->GetWidth(), mBackground->GetHeight());

    mTable->Draw(graphics);

    Pen pen(Color(500, 128, 0), 1);
    Pen penW(Color(1000, 1000, 1000), 1);

    this->mCue->Draw(graphics);

    for (shared_ptr<CBall> ball : this->mBalls)
    {
        ball->Draw(graphics);
    }
    for (shared_ptr<CBall> ball : this->mSunkenSolidBalls)
    {
        ball->Draw(graphics);
    }
    for (shared_ptr<CBall> ball : this->mSunkenStripedBalls)
    {
        ball->Draw(graphics);
    }

    this->mPowerBar->Draw(graphics);
}
Game::OnDraw will call Draw on all of the game items, which draw on the graphics object they receive as an argument.

SDL image disappears after 15 seconds

I'm learning SDL and I have a frustrating problem. Code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate, the image that was loaded disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is a simple thing I'm missing, but I can't see it.
void update(){
    rect1.x = 62 * int((SDL_GetTicks() / 100) % 12);
    /* 62 is the width of a frame, 12 is the number of frames */
}

void shark(){
    surface = IMG_Load("s1.png");
    if (surface != 0){
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }

    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;

    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h + 30; // enlarging the image
    rect2.w = rect1.w + 30;

    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}

void render(){
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderPresent(renderer);
    SDL_RenderClear(renderer);
}
And in main:
update();
shark();
render();
The SDL_image header is included, the library is linked, and the DLL exists. Could the DLL be broken?
I left out rest of the program to keep it simple. If this is not enough, I can post the whole thing.
Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will quickly run out of video memory (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated), at which point you will no longer be able to create new textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, then just load it once, before your main loop.
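A minimal sketch of that restructuring, reusing the question's globals (renderer, texture, rect1, rect2); the running flag is a stand-in for whatever keeps the main loop alive:

// Load the texture once, before the loop; the loop then only draws it.
SDL_Surface* surface = IMG_Load("s1.png");
if (surface != 0){
    texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface); // the surface is no longer needed after upload
}

while (running){
    update();                                      // advances rect1.x
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
    render();
}

SDL_DestroyTexture(texture); // free it once, after the loop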
Regarding this line: int ( (SDL_GetTicks() / 100) % 12);
SDL_GetTicks() returns the number of milliseconds that have elapsed since the library initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL AMOUNT OF TIME since your application started, not the time since the last frame.
You're supposed to keep count of the last time and update the application with how much time has passed since the last update.
Uint32 currentTime = SDL_GetTicks();
int deltaTime = (int)(currentTime - lastTime);
lastTime = currentTime; // declared previously

update(deltaTime);
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using deltaTime is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably want to keep your own counter of how much time has passed (since you started an animation, say).
But there's nothing wrong with that line for your program anyhow.

Fixed Timestep at 2500 FPS?

I am using SFML to make a 2D platformer. I have read so many timestep articles, but they don't work well for me. I am implementing something like a 2500 FPS timestep: on my desktop PC it's amazingly smooth, while on my laptop it gets 300 FPS (I checked with Fraps); it's not that smooth on the laptop, but still playable.
Here are the code snippets:
sf::Clock clock;
const sf::Time TimePerFrame = sf::seconds(1.f/2500.f);
sf::Time TimeSinceLastUpdate = sf::Time::Zero;
sf::Time elapsedTime;
Those are the variables, and here is the game loop:
while(!quit){
    elapsedTime = clock.restart();
    TimeSinceLastUpdate += elapsedTime;

    while (TimeSinceLastUpdate > TimePerFrame){
        TimeSinceLastUpdate -= TimePerFrame;
        Player::instance()->handleAll();
    }

    Player::instance()->render();
}
In Player.h, I've got movement constants:
const float GRAVITY = 0.35f / 2500.0f;       // uses += every frame
const float JUMP_SPEED = -400.0f / 2500.0f;  // SPACE -> movementSpeed.y = JUMP_SPEED;
// when the character is touching the ground:
const float LAND_ACCEL = 0.075f / 2500.0f;   // these use +=
const float LAND_DECEL = 1.5f / 2500.0f;
const float LAND_FRICTION = 0.5f / 2500.0f;
const float LAND_STARTING_SPEED = 0.075f;    // this uses =, instead of +=
In the handleAll function of the Player class, there is:
cImage.move(movementSpeed);
checkCollision();
And lastly, the checkCollision function simply checks whether the character's master bounding box intersects an object's rectangle on each side, sets the x or y speed to 0, and then fixes the overlap by setting the character's position to the edge.
// Collision
if (masterBB().intersects(objectsIntersecting[i]->GetAABB())) {
    // HORIZONTAL
    if (leftBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x < 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left + objectsIntersecting[i]->GetAABB().width + leftBB().width,
                           cImage.getPosition().y);
    }
    else if (rightBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x > 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left - rightBB().width,
                           cImage.getPosition().y);
    }

    // VERTICAL
    if (movementSpeed.y < 0 && topBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x,
                           objectsIntersecting[i]->GetAABB().top + objectsIntersecting[i]->GetAABB().height + masterBB().height / 2);
    }
    if (movementSpeed.y > 0 && bottomBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x,
                           objectsIntersecting[i]->GetAABB().top - masterBB().height / 2);
        // and some state updates
    }
}
I have tried to use a 60 FPS timestep like a million times, but all the speed variables become too slow. I can't simply apply *2500.0f / 60.0f to all the constants; it doesn't feel the same. If I pick constants that are close, it feels "ok", but then when a collision happens, the character's position gets reset constantly and it flies out of the map, I guess because of the big overlap with the object caused by the high speed constants applied every frame...
I should add: the book I took the timestep code from normally uses
cImage.move(movementSpeed * TimePerFrame.asSeconds());
but as you saw, I just put /2500.0f into every constant and don't use it.
So, is 1/2500 seconds per frame good? If not, how can I change all of these to 1/60.0f?
You're doing it wrong.
Your monitor most likely has a refresh rate of 60 Hz (= 60 FPS), so trying to render an image at 2500 FPS is a huge waste of resources. If the only reason for choosing 2500 FPS is that your movement doesn't work the same otherwise, haven't you considered that the problem might be with the movement code?
Ideally you'd implement a fixed timestep (the famous article); that way your physics can run at whatever rate you want (2500 "FPS" would still be crazy, so don't do it) and is independent of your rendering rate. So even if you get varying FPS, it won't influence your physics.
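To make that concrete, here is a hedged sketch of how the question's constants could be expressed per second and scaled by the timestep, instead of baking /2500.0f into each one. The per-second values below are just the question's constants multiplied back by 2500; treat them as illustrative, not tuned:

// Sketch: express movement constants in units per second, then scale by the
// fixed timestep each update. Values are the question's constants times 2500
// and are illustrative only.
const float GRAVITY    = 875.0f;   // 0.35f * 2500, in px/s^2
const float JUMP_SPEED = -400.0f;  // px/s
const float LAND_ACCEL = 187.5f;   // 0.075f * 2500, in px/s^2
const sf::Time TimePerFrame = sf::seconds(1.f / 60.f);

// inside the fixed-timestep update:
float dt = TimePerFrame.asSeconds();
movementSpeed.y += GRAVITY * dt;  // same effect at any update rate
cImage.move(movementSpeed * dt);  // as the book does

With the constants in per-second units, switching from 1/2500 to 1/60 is just a change to TimePerFrame; nothing else needs retuning.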

How to adjust a game loop frame rate?

I'm currently trying to program my first game in SFML (and my first overall), but I ran into the problem that I get a stutter about once a second. During such a stutter the frametime is 3 to 4 times higher than normal, which is really noticeable unless I run at really high FPS (300+).
No problem (at least at the moment), as performance is not an issue, but:
when doing that, my movement method really freaks out and moves way, way slower than it's supposed to.
My movement method:
void Player::Update(float frametime){
    mMovementSpeedTimefactor = frametime * 60 / 1000.0f;
    setMovementVector(sf::Vector2f(mMovementVector.x * mMovementSpeedTimefactor,
                                   mMovementVector.y * mMovementSpeedTimefactor));
    validateMovement();
    //std::cout << mTestClock->restart().asMilliseconds() << std::endl;
    moveObject();
    this->updateAnimation();
}
frametime is the frame time in milliseconds, and I multiply by 60, as my movement speed is set as a value in pixels per second and not per frame.
The movement speed is 5, so the character should move 5 px per second, whatever FPS (and therefore frametime) I have.
But that gives me really jumpy movement, as those "stutter frames" result in a jump, and on non-stuttering frames the player moves a lot slower than it should.
My main loop is really simple, just:
while(mMainWindow->isOpen()){
    HandleEvents();
    Update();
    Render();
}
while using the built-in frame limiter set to 300 FPS (I tried writing my own, but I get the very same result as long as I use sf::sleep to regulate FPS so the CPU core isn't running at 100% load).
So yeah, I could just set my standard speed to 1 instead of 5, but setFramerateLimit is not very accurate, so I get some variation in movement speed that I really don't like.
Does anyone have an idea what I could best do? Maybe I can't see the forest for the trees (I actually have no idea if you say that in English :P), but as this is my first game, I have no experience to look back upon.
Similar question: Movement Without Framerate Limit C++ SFML.
What you really need is a fixed timestep.
Take a look at the SFML Game Development book source code. Here's the interesting snippet from Application.cpp:
const sf::Time Game::TimePerFrame = sf::seconds(1.f/60.f);

[...]

sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
while (mWindow.isOpen())
{
    sf::Time elapsedTime = clock.restart();
    timeSinceLastUpdate += elapsedTime;
    while (timeSinceLastUpdate > TimePerFrame)
    {
        timeSinceLastUpdate -= TimePerFrame;
        processEvents();
        update(TimePerFrame);
    }
    updateStatistics(elapsedTime);
    render();
}
EDIT: If this is not really what you want, see "Fix your timestep!", which Laurent Gomila himself linked on the SFML forum.
When frametime is really high or really low, your calculations may not work correctly because of float precision issues. I suggest setting the standard speed to, say, 500 and mMovementSpeedTimeFactor to frametime * 60 / 10.0f, and checking whether the issue still happens.
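For illustration, a sketch of that suggestion applied to the question's Update method. The 500 and 10.0f values are the answer's guesses, and mMovementSpeed is a hypothetical member name; note that scaling both the speed and the divisor by 100 multiplies the overall movement scale by 10000, so the numbers would need retuning together:

// Sketch of the precision suggestion: keep intermediate values large so the
// per-frame factor isn't a tiny float. 500 and 10.0f come from the answer
// above; mMovementSpeed is a hypothetical member.
void Player::Update(float frametime){
    mMovementSpeed = 500.0f;                           // was 5
    mMovementSpeedTimefactor = frametime * 60 / 10.0f; // was / 1000.0f
    setMovementVector(sf::Vector2f(mMovementVector.x * mMovementSpeedTimefactor,
                                   mMovementVector.y * mMovementSpeedTimefactor));
    validateMovement();
    moveObject();
    this->updateAnimation();
}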

Simulated time in a game loop using c++

I am building a 3D game from scratch in C++ using OpenGL and SDL on Linux, as a hobby and to learn more about this area of programming.
I'm wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:
void main_loop()
{
    while(!quit)
    {
        handle_events();
        DrawScene();
        ...
        SDL_Delay(time_left());
    }
}
I am using SDL_Delay() and time_left() to maintain a framerate of about 33 FPS.
I had thought that I just need a few global variables like
int current_hour = 0;
int current_mins = 0; // used by handle_time() below
int num_days = 0;
Uint32 prev_ticks = 0;
Then a function like:
void handle_time()
{
    Uint32 current_ticks;
    Uint32 dticks;

    current_ticks = SDL_GetTicks();
    dticks = current_ticks - prev_ticks; // get difference since last time

    // if difference is greater than 30000 (half a minute), increment game mins
    if(dticks >= 30000) {
        prev_ticks = current_ticks;
        current_mins++;
        if(current_mins >= 60) {
            current_mins = 0;
            current_hour++;
        }
        if(current_hour > 23) {
            current_hour = 0;
            num_days++;
        }
    }
}
and then call the handle_time() function in the main loop.
It compiles and runs (using printf to write the time to the console at the moment), but I am wondering if this is the best way to do it. Are there easier or more efficient ways?
I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.
What you want to do is to use a constant timestep which you get by accumulating time deltas. If you want 33 updates per second, then your constant timestep should be 1/33. You could also call this the update frequency. You should also decouple the game logic from the rendering as they don't belong together. You want to be able to use a low update frequency while rendering as fast as the machine allows. Here is some sample code:
running = true;
unsigned int t = 0;                /* simulated time in milliseconds */
unsigned int timestep = 1000 / 33; /* one update every ~30 ms, i.e. 33 updates per second */
unsigned int t_accum = 0, lt = 0, ct = 0;
while(running){
    while(SDL_PollEvent(&event)){
        switch(event.type){
            ...
        }
    }
    ct = SDL_GetTicks();
    t_accum += ct - lt;
    lt = ct;
    while(t_accum >= timestep){
        t += timestep; /* this is our actual time, in milliseconds. */
        t_accum -= timestep;
        for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
            integrate(en, (float)t * 0.001f, timestep);
        }
    }
    /* This should really be in a separate thread, synchronized with a mutex */
    std::vector<Entity> tmpEntities(entities.size());
    for(int i = 0; i < entities.size(); ++i){
        float alpha = (float)t_accum / (float)timestep;
        tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
    }
    Render(tmpEntities);
}
This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics should be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of advancing time in fixed intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value inside the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth, no matter how large the timestep is.
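For reference, interpolateState is not defined in the snippet; a minimal sketch of what it might look like for a state holding just a position (the State struct and its fields are assumptions, shaped to match the call above, which passes a weight for each state):

// Hypothetical sketch of interpolateState: a plain linear blend between two
// saved physics states, weighted as in the call above. The State struct and
// its fields are assumptions; the snippet does not define them.
struct State {
    float x, y; // position; velocity etc. could be blended the same way
};

State interpolateState(const State& last, float wLast,
                       const State& current, float wCurrent)
{
    State out;
    out.x = last.x * wLast + current.x * wCurrent;
    out.y = last.y * wLast + current.y * wCurrent;
    return out;
}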
Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half a minute), your solution is fine for keeping track of time in the game.
However, for most game events that need to happen every so often, you should probably base them off the main game loop instead of actual time, so they happen in the same proportions at a different FPS; a minimal sketch of that follows below.
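Here is a hedged sketch of that idea, counting loop iterations rather than reading the clock; the tick constant is illustrative, sized for the ~33 FPS loop from the question:

// Sketch: schedule a recurring game event by counting main-loop iterations
// instead of wall-clock time. TICKS_PER_GAME_MINUTE is illustrative: at
// ~33 FPS, 33 * 30 iterations is about half a real minute, matching the
// question's one game minute per half real minute.
unsigned long tick = 0;
const unsigned long TICKS_PER_GAME_MINUTE = 33 * 30;

while(!quit)
{
    handle_events();
    DrawScene();

    ++tick;
    if(tick % TICKS_PER_GAME_MINUTE == 0) {
        current_mins++; // advance simulated time in lockstep with the loop
    }

    SDL_Delay(time_left());
}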
One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.
I am not a Linux developer, but you might want to have a look at using Timers instead of polling for the ticks.
http://linux.die.net/man/2/timer_create
EDIT: SDL seems to support timers: SDL_SetTimer
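For what it's worth, a minimal sketch using SDL's timer API (SDL_AddTimer is the SDL2 spelling; timers need SDL_INIT_TIMER, and the callback runs on a separate thread, so here it only pushes an event for the main loop to handle):

#include <SDL.h>

// Timer callback: fires every 'interval' ms on SDL's timer thread and pushes
// a user event, so the actual game-time update happens in the main loop.
Uint32 tick_callback(Uint32 interval, void* /*param*/)
{
    SDL_Event event;
    SDL_zero(event);
    event.type = SDL_USEREVENT;
    SDL_PushEvent(&event);
    return interval; // return the same interval to keep the timer running
}

// During setup (assuming SDL_Init included SDL_INIT_TIMER):
//   SDL_TimerID timer = SDL_AddTimer(30000, tick_callback, NULL); // half a minute
// Then handle SDL_USEREVENT in the event loop to advance the game minutes.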