How to keep the CPU usage down while running an SDL program? - c++

I've made a very basic window with SDL and want to keep it running until I press the X on the window.
#include "SDL.h"
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
int main(int argc, char **argv)
{
SDL_Init( SDL_INIT_VIDEO );
SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
SDL_HWSURFACE | SDL_DOUBLEBUF );
SDL_WM_SetCaption( "SDL Test", 0 );
SDL_Event event;
bool quit = false;
while (quit != false)
{
if (SDL_PollEvent(&event)) {
if (event.type == SDL_QUIT) {
quit = true;
}
}
SDL_Delay(80);
}
SDL_Quit();
return 0;
}
I tried adding SDL_Delay() at the end of the while-clause and it worked quite well.
However, 80 ms seemed to be the highest value I could use to keep the program running smoothly, and even then the CPU usage is about 15-20%.
Is this the best way to do this, or do I just have to live with the fact that it already uses this much CPU at this point?

I know this is an older post, but I myself just came across this issue with SDL when starting up a little demo project. Like user 'thebuzzsaw' noted, the best solution is to use SDL_WaitEvent to reduce the CPU usage of your event loop.
Here's how it would look in your example for anyone looking for a quick solution to it in the future. Hope it helps!
#include "SDL.h"
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
int main(int argc, char **argv)
{
SDL_Init( SDL_INIT_VIDEO );
SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
SDL_HWSURFACE | SDL_DOUBLEBUF );
SDL_WM_SetCaption( "SDL Test", 0 );
SDL_Event event;
bool quit = false;
while (quit == false)
{
if (SDL_WaitEvent(&event) != 0) {
switch (event.type) {
case SDL_QUIT:
quit = true;
break;
}
}
}
SDL_Quit();
return 0;
}

I would definitely experiment with fully blocking functions (such as SDL_WaitEvent). I have an OpenGL application in Qt, and I noticed the CPU usage hovers between 0% and 1%. It spikes to maybe 4% during "usage" (moving the camera and/or causing animations).
I am working on my own windowing toolkit. I have noticed I can achieve similar CPU usage when I use blocking event loops. This will complicate any timers you may depend on, but it is not terribly difficult to implement timers with this new approach.
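For example, here is a minimal sketch (my own, assuming SDL 1.2 with the timer subsystem, as used in the question above) of how a periodic timer can coexist with a blocking event loop: an SDL_AddTimer callback pushes a user event, which wakes the SDL_WaitEvent loop once per interval.

#include "SDL.h"

// Periodic callback: pushes a user event so that a blocking
// SDL_WaitEvent() loop wakes up once per interval.
static Uint32 tick(Uint32 interval, void * /*param*/)
{
    SDL_Event ev;
    ev.type = SDL_USEREVENT;
    ev.user.code = 0;
    ev.user.data1 = 0;
    ev.user.data2 = 0;
    SDL_PushEvent(&ev);
    return interval;   // keep the timer running
}

int main(int, char **)
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER);
    SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);

    SDL_TimerID timer = SDL_AddTimer(16, tick, 0);   // roughly 60 wakeups per second

    SDL_Event event;
    bool quit = false;
    while (!quit)
    {
        if (SDL_WaitEvent(&event) != 0) {
            if (event.type == SDL_QUIT) {
                quit = true;
            } else if (event.type == SDL_USEREVENT) {
                // update animations / run timer-driven work here
            }
        }
    }

    SDL_RemoveTimer(timer);
    SDL_Quit();
    return 0;
}

The loop itself still blocks; the timer just injects a wake-up event at the rate you choose, so you only pay CPU time when there is actually something to do.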

I just figured out how to reduce CPU usage in my game from 50% down to < 10%.
Your program is much simpler, so simply using SDL_Delay() should be enough.
What I did was:
Use SDL_DisplayFormat() when loading images, so that blitting is faster. This brought the CPU usage down to about 30%.
I then found that blitting the game's background (a big one-piece .png file) was eating most of my CPU. I searched the Internet for a solution, but all I found was the same answer - just use SDL_Delay(). Finally, I found out that the problem was embarrassingly simple - SDL_DisplayFormat() was converting my 24-bit images to 32-bit. So I set my display BPP to 24, which brought CPU usage down to ~20%. Bringing it down to 16-bit solved the problem for me, and the CPU usage is under 10% now.
Of course this means loss of color detail, but as my game is a simplistic 2D game with not too detailed graphics, this was OK.
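For reference, a short sketch of that SDL_DisplayFormat() step (my own fragment, assuming SDL 1.2, a display already set up with SDL_SetVideoMode, and a made-up file name):

// Convert a freshly loaded surface to the display's pixel format once,
// so every later blit avoids a per-frame format conversion.
SDL_Surface *loaded = SDL_LoadBMP("background.bmp");   // hypothetical file
SDL_Surface *optimized = 0;
if (loaded) {
    optimized = SDL_DisplayFormat(loaded);   // matches the screen's BPP
    SDL_FreeSurface(loaded);                 // the original copy is no longer needed
}
// blit 'optimized' instead of 'loaded' from here on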

In order to really understand this, you need to understand threading. In a threaded application, the program runs until it is waiting for something, and then it tells the OS that something else can run. In essence, you are doing this with the SDL_Delay command. If there were no delay at all, I suspect your program would be running at near 100% capacity.
The amount of time you put in the delay statement only matters if the other commands take a significant amount of time. In general, I would make the delay about as long as it takes to service the poll, but not more than, say, 10 ms. What will happen is that the OS will wait at least that length of time, allowing other applications to run in the background.
As to what you can do to improve this, well, it looks like there isn't a whole lot you can do. However, take note that if another process were running and taking a significant amount of CPU power, your program's share would decrease.
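To illustrate the pattern described above (a sketch only, not the poster's code): drain every pending event, do the per-frame work, then hand a few milliseconds back to the OS with SDL_Delay.

SDL_Event event;
bool quit = false;
while (!quit)
{
    // Drain all pending events before sleeping again.
    while (SDL_PollEvent(&event)) {
        if (event.type == SDL_QUIT) {
            quit = true;
        }
    }

    // ... update and draw here ...

    SDL_Delay(10);   // yield the rest of the time slice to the OS
}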

Related

SDL_Texture renders black after resize unless it is redrawn

I've got a bit of a nasty bug for you folks. (Yes, it's probably my bug and not SDL's.) I have been in the process of writing a modern C++ wrapper for SDL and everything appears to be working as intended. However, my Texture class has a strange bug: if it is redrawn after a resize, it looks fine, but if it is not, it becomes entirely black. Here is what that looks like:
Before the resize
After the resize
I can't exactly post just one part of the code here, so here is the entire folder (hosted on GitLab): SDL wrapper
Here is a small program that reproduces the error using this library:
#include "sdl_wrapper/context.hh"
#include "sdl_wrapper/video/context.hh"
#include "sdl_wrapper/render/renderer.hh"
#include "sdl_wrapper/render/texture.hh"
#include "sdl_wrapper/colors.hh"
#include "SDL2/SDL_events.h"
int main()
{
sdl::Context sdlContext;
sdl::video::Context videoContext = sdlContext.initVideo();
sdl::video::Window window = videoContext.createWindow("test", 0, 0, 800, 600).resizable().build();
int width = 800;
int height = 600;
sdl::render::Renderer renderer = window.createRenderer().targetTexture().build();
sdl::render::Texture example(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 400, 300);
renderer.setTarget(example);
renderer.setDrawColor(sdl::colors::Blue);
renderer.clear();
renderer.resetTarget();
bool run = true;
while (run)
{
SDL_Event e;
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT)
{
run = false;
break;
}
}
renderer.setDrawColor({0x44, 0x44, 0x44, 0xff});
renderer.clear();
renderer.copy(example, std::nullopt, {{width / 4, height / 4, example.getWidth(), example.getHeight()}});
renderer.present();
}
}
To reproduce, simply run this program and resize the window. There will be a blue square that becomes black after the resize.
I would greatly appreciate it if someone could point me in the right direction here. I would really like to avoid redrawing on every resize (feel free to argue with me on that point if I am misguided).
SDL doesn't promise to preserve the contents of target textures. There are cases, especially with Direct3D or on mobile, where their data is lost due to some big state change. Changing the window size may not sound that big, but on some hardware/driver configurations it causes problems; I suppose that's the reason why SDL detects the resize and drops all renderer data. You get an SDL_RENDER_TARGETS_RESET event when you need to redraw your render targets.
That shouldn't happen with e.g. the OpenGL renderer implementation (that may sound great, but the reasons behind it are not so great); on Windows, SDL2 defaults to Direct3D, which can be changed by calling SDL_SetHint or setting environment variables.
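For example, a sketch of how the question's event loop could react to that event (redrawTexture() is a hypothetical helper that repeats the setTarget / clear-blue / resetTarget steps; it is not part of the posted code):

while (SDL_PollEvent(&e) != 0)
{
    if (e.type == SDL_QUIT)
    {
        run = false;
        break;
    }
    else if (e.type == SDL_RENDER_TARGETS_RESET)
    {
        // The contents of all render-target textures were discarded;
        // paint them again before the next copy().
        redrawTexture(renderer, example);   // hypothetical helper
    }
}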

SDL + OpenGL, irregular frame times with periodic spikes

I was developing an application with SDL and glad and noticed some small freezes in the animations. I wrote a separate project with minimal functionality and found that the issue appears even with a very basic setup.
This is the code:
#include <iostream>
#include <vector>
#include <fstream>
#include <SDL.h>
#include <glad/glad.h>

using namespace std;

// the application will close after this amount of time
const float MILLISECONDS_TO_CLOSE = 10 * 1000;

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window* window =
        SDL_CreateWindow(
            "tracer",
            100, 100,
            800, 600,
            SDL_WINDOW_OPENGL
        );

    SDL_GLContext context = SDL_GL_CreateContext(window);
    // SDL_GL_SetSwapInterval(1); // this line mitigates the problem but just slightly

    if (!gladLoadGL())
    {
        cout << "gladLoadGL failed" << endl;
    }

    const GLubyte *oglVersion = glGetString(GL_VERSION);
    std::cout << "This system supports OpenGL Version: " << oglVersion << std::endl;

    const GLubyte *gpuVendor = glGetString(GL_VENDOR);
    std::cout << "Graphics card: " << gpuVendor << std::endl;

    glClearColor(0.15f, 0.15f, 0.15f, 1.0f);

    vector<unsigned> deltas;
    deltas.reserve(10 * MILLISECONDS_TO_CLOSE);

    static unsigned firstTime, prevTime, curTime;
    firstTime = prevTime = curTime = SDL_GetTicks();

    while (true)
    {
        // compute delta time
        curTime = SDL_GetTicks();
        unsigned dt = curTime - prevTime;
        prevTime = curTime;
        deltas.push_back(dt);

        // close the application after some time
        if (curTime - firstTime > MILLISECONDS_TO_CLOSE) break;

        // handle closing events
        SDL_Event event;
        if (SDL_PollEvent(&event))
        {
            if ((event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE) || event.type == SDL_QUIT) break;
        }

        glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapWindow(window);
    }

    // save recorded deltas to a file
    fstream f("out.txt", ios::out | ios::trunc);
    for (unsigned dt : deltas) f << dt << endl;
    f << flush;
    f.close();

    return 0;
}
The program records the time between frames for 10 seconds and saves the result in a text file.
I plotted the data using Python and got this:
The horizontal axis shows the time of each frame in milliseconds. The vertical axis shows the time passed since the previous frame, also in milliseconds.
As you can see, the time between frames is very irregular and there are periodic spikes (about 1 second between spikes).
I have uploaded the CMake project to a github repository, so you can test it if you wish.
I have tested with both my integrated and dedicated GPUs (Intel 530HD and NVIDIA GTX 960M).
The SDL version is 2.0.5.
I tested it on Windows 10 and Ubuntu 16.04 LTS.
EDIT:
I have ported the application to GLFW and the same thing happens, so it is very unlikely that there is a bug in SDL. I have updated the git repo accordingly; now there are two CMake projects.
EDIT2:
I have tested it on another computer and it works fine.
I have no clue what's happening. Could it be a hardware problem? Then why doesn't it happen when I run other applications?
A driver problem? Very unlikely, because it happens both with the Intel and NVIDIA GPUs, and on both Ubuntu and Windows.
I've had similar behavior (though with longer and lower plateaus, not spikes) caused by the CPU overheating and hence deciding to lower its clock rate temporarily. If that happens with your PC, you can try upgrading your internal coolers or adding external ones. Or just lower the temperature setting on the air conditioner in your room.
Naturally, it was always more of an issue during summer.
Here's where I've posted about it.

Simple C++ SFML program high CPU usage

I'm currently working on a platformer and trying to implement a timestep, but for framerate limits greater than 60 the CPU usage goes up from 1% to 25% and more.
I made this minimal program to demonstrate the issue. There are two comments (lines 10-13, lines 26-30) in the code that describe the problem and what I have tested.
Note that the FPS stuff is not relevant to the problem (I think).
I tried to keep the code short and simple:
#include <memory>
#include <sstream>
#include <iomanip>
#include <SFML/Graphics.hpp>

int main() {
    // Window
    std::shared_ptr<sf::RenderWindow> window;
    window = std::make_shared<sf::RenderWindow>(sf::VideoMode(640, 480, 32), "Test", sf::Style::Close);

    /*
    When I use the setFramerateLimit() function below, the CPU usage is only 1% instead of 25%+
    (And only if I set the limit to 60 or less. For example 120 increases CPU usage to 25%+ again.)
    */
    //window->setFramerateLimit(60);

    // FPS text
    sf::Font font;
    font.loadFromFile("font.ttf");
    sf::Text fpsText("", font, 30);
    fpsText.setColor(sf::Color(0, 0, 0));

    // FPS
    float fps;
    sf::Clock fpsTimer;
    sf::Time fpsElapsedTime;

    /*
    When I set framerateLimit to 60 (or anything less than 60)
    instead of 120, CPU usage goes down to 1%.
    When the limit is greater, in this case 120, CPU usage is 25%+
    */
    unsigned int framerateLimit = 120;
    sf::Time fpsStep = sf::milliseconds(1000 / framerateLimit);
    sf::Time fpsSleep;

    fpsTimer.restart();

    while (window->isOpen()) {
        // Update timer
        fpsElapsedTime = fpsTimer.restart();
        fps = 1000.0f / fpsElapsedTime.asMilliseconds();

        // Update FPS text
        std::stringstream ss;
        ss << "FPS: " << std::fixed << std::setprecision(0) << fps;
        fpsText.setString(ss.str());

        // Get events
        sf::Event evt;
        while (window->pollEvent(evt)) {
            switch (evt.type) {
            case sf::Event::Closed:
                window->close();
                break;
            default:
                break;
            }
        }

        // Draw
        window->clear(sf::Color(255, 255, 255));
        window->draw(fpsText);
        window->display();

        // Sleep
        fpsSleep = fpsStep - fpsTimer.getElapsedTime();
        if (fpsSleep.asMilliseconds() > 0) {
            sf::sleep(fpsSleep);
        }
    }

    return 0;
}
I don't want to use SFML's setFramerateLimit(), but my own implementation with the sleep, because I will use the FPS data to update my physics and stuff.
Is there a logic error in my code? I fail to see it, given that it works with a framerate limit of, for example, 60 (or less). Is it because I have a 60 Hz monitor?
PS: Using SFML's window->setVerticalSyncEnabled() doesn't change the results.
I answered another similar question with this answer.
The thing is, it doesn't exactly help you with CPU usage, but I tried your code and it works fine under 1% CPU usage at 120 FPS (and much more). When you make a game or interactive media with a "game loop", you don't want to lose performance by sleeping; you want to use as much CPU time as the computer can give you. Instead of sleeping, you can process other data, like loading assets, running a pathfinding algorithm, etc., or just not put limits on rendering.
I provide some useful links and code; here they are:
Similar question: Movement Without Framerate Limit C++ SFML.
What you really need is a fixed time step. Take a look at the SFML Game Development book source code. Here's the interesting snippet from Application.cpp:
const sf::Time Game::TimePerFrame = sf::seconds(1.f/60.f);

// ...

sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
while (mWindow.isOpen())
{
    sf::Time elapsedTime = clock.restart();
    timeSinceLastUpdate += elapsedTime;
    while (timeSinceLastUpdate > TimePerFrame)
    {
        timeSinceLastUpdate -= TimePerFrame;

        processEvents();
        update(TimePerFrame);
    }
    updateStatistics(elapsedTime);
    render();
}
If this is not really what you want, see "Fix your timestep!", which Laurent Gomila himself linked in the SFML forum.
I suggest using setFramerateLimit(), because it's natively implemented in SFML and will work a lot better.
To get the elapsed time, you must do:
fpsElapsedTime = fpsTimer.getElapsedTime();
If I had to implement something similar, I would do:
/* in the main loop */
fpsElapsedTime = fpsTimer.getElapsedTime();
if (fpsElapsedTime.asMilliseconds() >= (1000 / framerateLimit))
{
    fpsTimer.restart();
    // All your content
}
Another thing: use sf::Color::White or sf::Color::Black instead of sf::Color(255, 255, 255).
Hope this helps :)

High CPU usage with SDL + OpenGL

I have a modern CPU (AMD FX 4170) and a modern GPU (NVidia GTX 660). Yet this simple program manages to fully use one of my CPU's cores. This means it uses one 4.2 GHz core to draw nothing at 60 FPS. What is wrong with this program?
#include <SDL/SDL.h>

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO);
    SDL_SetVideoMode(800, 600, 0, SDL_OPENGL | SDL_RESIZABLE);

    while (true)
    {
        Uint32 now = SDL_GetTicks();
        SDL_GL_SwapBuffers();

        int delay = 1000 / 60 - (SDL_GetTicks() - now);
        if (delay > 0) SDL_Delay(delay);
    }
    return 0;
}
It turns out that NVidia's drivers implement waiting for vsync with a busy loop, which causes SDL_GL_SwapBuffers() to use 100% CPU. Turning off vsync in the NVidia Control Panel removes this problem.
Loops use as much computing power as they can. The main problem may be located in:
int delay = 1000 / 60 - (SDL_GetTicks() - now);
Your delay duration may be less than zero, so the operation may become just an infinite loop without any waiting. You need to control the value of the delay variable.
Moreover, in this link it is proposed that SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1); can be used to enable vsync so that it will not use all of the CPU.
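Putting both suggestions together, a minimal sketch (my own, assuming SDL 1.2, where SDL_GL_SWAP_CONTROL must be set before SDL_SetVideoMode) could look like this:

#include <SDL/SDL.h>

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_VIDEO);

    // Ask the driver to sync buffer swaps to the display's refresh rate.
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);
    SDL_SetVideoMode(800, 600, 0, SDL_OPENGL | SDL_RESIZABLE);

    while (true)
    {
        Uint32 now = SDL_GetTicks();
        SDL_GL_SwapBuffers();

        // Compute the remaining time in signed arithmetic; skip the delay
        // when the frame already took longer than 1000/60 ms.
        int delay = 1000 / 60 - int(SDL_GetTicks() - now);
        if (delay > 0) SDL_Delay(delay);
    }
    return 0;
}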

SFML Keyboard input delay

I'm getting started with the SFML library for C++, and I've come to a point where everything is working, but not the way I'd like it to.
My question is: how can I move an object with the keyboard smoothly, without that little delay?
#include <SFML/System.hpp>
#include <SFML/Graphics.hpp>

using namespace sf;

const int SCREEN_WIDTH = 800;
const int SCREEN_HEIGHT = 600;

int main(){
    VideoMode VMode(SCREEN_WIDTH, SCREEN_HEIGHT, 32);
    RenderWindow screen(VMode, "Empty Window");
    Event event;
    Keyboard key;

    Color screencolor(0, 150, 0);

    Texture pietexture;
    pietexture.loadFromFile("image.png");

    Sprite pie(pietexture);
    pie.setPosition(150, 150);

    screen.draw(pie);
    screen.display();

    bool on = true;
    while(on){
        screen.clear(Color(0, 255, 0));

        while(screen.pollEvent(event)){
            if(event.type == Event::Closed){
                screen.close();
                on = false;
            }
            else if(event.type == Event::KeyPressed){
                if(key.isKeyPressed(Keyboard::Left)){
                    pie.move(-10, 0);
                }
                if(key.isKeyPressed(Keyboard::Right)){
                    pie.move(10, 0);
                }
                if(key.isKeyPressed(Keyboard::Up)){
                    pie.move(0, -10);
                }
                if(key.isKeyPressed(Keyboard::Down)){
                    pie.move(0, 10);
                }
            }
        }
        screen.draw(pie);
        screen.display();
    }
}
What it currently does is make a little pause after a keypress and then move normally; how can I make it move without that little pause at the beginning?
Instead of moving the object on keypress events, set a variable to indicate which direction it should be moving, and at what speed. On key release events, reset the speed to 0. Handle the movement separately from the event-processing section, regulated by a combination of those variables and some kind of timing system.
Use sf::Keyboard::isKeyPressed(key) to poll the key status in real time. As a rule of thumb, don't use events where a highly responsive interface is needed.
You will also need a timing mechanism to regulate speed. There is a sf::Clock which you can use for this purpose.
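A small sketch of that approach (my own; the speed value, image file, and window size are assumptions, not taken from the question):

#include <SFML/Graphics.hpp>

int main(){
    sf::RenderWindow screen(sf::VideoMode(800, 600, 32), "Smooth movement");

    sf::Texture pietexture;
    pietexture.loadFromFile("image.png");
    sf::Sprite pie(pietexture);
    pie.setPosition(150, 150);

    const float speed = 200.f;   // pixels per second (assumed value)
    sf::Clock clock;

    while (screen.isOpen()){
        // Events are only used for things that really are events (closing, etc.).
        sf::Event event;
        while (screen.pollEvent(event)){
            if (event.type == sf::Event::Closed)
                screen.close();
        }

        // Poll the real-time key state every frame; no key-repeat delay is involved.
        sf::Vector2f velocity(0.f, 0.f);
        if (sf::Keyboard::isKeyPressed(sf::Keyboard::Left))  velocity.x -= speed;
        if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right)) velocity.x += speed;
        if (sf::Keyboard::isKeyPressed(sf::Keyboard::Up))    velocity.y -= speed;
        if (sf::Keyboard::isKeyPressed(sf::Keyboard::Down))  velocity.y += speed;

        // Scale by the elapsed time so movement speed is independent of the framerate.
        float dt = clock.restart().asSeconds();
        pie.move(velocity * dt);

        screen.clear(sf::Color(0, 255, 0));
        screen.draw(pie);
        screen.display();
    }
}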