Make Windows MFC Game Loop Faster - C++

I am creating a billiards game and am having major problems with tunneling at high speeds. I figured using linear interpolation for animations would help quite a bit, but the problem persists. To see this, I drew a circle at the previous few positions an object has been. At the highest velocity the ball can travel, the path looks like this:
Surely, these increments of advancement are much too large, even after using linear interpolation.
At each frame, every object's location is updated based on the amount of time since the window was last drawn. I noticed that the average time for the window to be redrawn is somewhere between 70 and 80ms. I would really like this game to run at 60 fps, so this is about 4 to 5 times longer than the roughly 16ms frame I am aiming for.
Is there a way to change how often the window is redrawn? Here is how I am currently redrawing the screen
#include "pch.h"
#include "framework.h"
#include "ChildView.h"
#include "DoubleBufferDC.h"
const int FrameDuration = 16;
void CChildView::OnPaint()
{
CPaintDC paintDC(this); // device context for painting
CDoubleBufferDC dc(&paintDC); // device context for painting
Graphics graphics(dc.m_hDC); // Create GDI+ graphics context
mGame.OnDraw(&graphics);
if (mFirstDraw)
{
mFirstDraw = false;
SetTimer(1, FrameDuration, nullptr);
LARGE_INTEGER time, freq;
QueryPerformanceCounter(&time);
QueryPerformanceFrequency(&freq);
mLastTime = time.QuadPart;
mTimeFreq = double(freq.QuadPart);
}
LARGE_INTEGER time;
QueryPerformanceCounter(&time);
long long diff = time.QuadPart - mLastTime;
double elapsed = double(diff) / mTimeFreq;
mLastTime = time.QuadPart;
mGame.Update(elapsed);
}
void CChildView::OnTimer(UINT_PTR nIDEvent)
{
    RedrawWindow(NULL, NULL, RDW_UPDATENOW);
    Invalidate();
    CWnd::OnTimer(nIDEvent);
}
EDIT: Upon request, here is how the actual drawing is done:
void CGame::OnDraw(Gdiplus::Graphics* graphics)
{
    // Draw the background
    graphics->DrawImage(mBackground.get(), 0, 0,
        mBackground->GetWidth(), mBackground->GetHeight());

    mTable->Draw(graphics);

    // Note: GDI+ Color channels are bytes, so each component must be 0-255.
    Pen pen(Color(255, 128, 0), 1);
    Pen penW(Color(255, 255, 255), 1);

    this->mCue->Draw(graphics);

    for (shared_ptr<CBall> ball : this->mBalls)
    {
        ball->Draw(graphics);
    }
    for (shared_ptr<CBall> ball : this->mSunkenSolidBalls)
    {
        ball->Draw(graphics);
    }
    for (shared_ptr<CBall> ball : this->mSunkenStripedBalls)
    {
        ball->Draw(graphics);
    }

    this->mPowerBar->Draw(graphics);
}
CGame::OnDraw calls Draw on all of the game items, and each item draws onto the graphics object it receives as an argument.
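One common way to reduce tunneling regardless of how often the window is redrawn is to subdivide a long frame into several short physics steps. Below is a minimal sketch of that idea against the code above; the AdvanceGame helper and the MaxStep constant are hypothetical names, not part of the original project:

#include <algorithm> // for std::min

// Hypothetical substepping helper, assuming mGame.Update(seconds) as above.
const double MaxStep = 0.005; // never advance physics more than 5 ms at once

void CChildView::AdvanceGame(double elapsed)
{
    // Split one long frame into several short steps so that a fast ball
    // cannot pass through a rail or pocket between two collision checks.
    while (elapsed > 0.0)
    {
        double step = std::min(elapsed, MaxStep);
        mGame.Update(step);
        elapsed -= step;
    }
}

OnPaint would then call AdvanceGame(elapsed) instead of mGame.Update(elapsed); collision tests still run once per step, so the redraw interval no longer limits the physics accuracy.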

Related

How to make player animation in SFML?

I wanted to make the player a bit more realistic instead of just hovering around like a ghost. This is my code, just some basic gravity and movement. I want to add jumping, walking, and turn-left/turn-right animations. How do I do that?
Here is the code:
void Game::initTexture()
{
    if (!texture.loadFromFile("/home/webmaster/Documents/vscode-sfml/src/Ball_and_Chain_Bot/pixil-frame-0(1).png"))
    {
        std::cout << "failed to load texture\n";
        return; // bail out; the setup below only makes sense with a loaded texture
    }
    rect.setTexture(texture);
    rect.setPosition(this->window->getView().getCenter());
    rect.setScale(2, 2);
}
rendering:
void Game::render()
{
    this->window->clear(sf::Color(239, 235, 216));
    this->window->draw(rect);
    this->window->display();
}
You forgot to show your actual code, the headers are not very useful. Regardless:
1. Define some animations. Each animation should have a target length and a number of frames that cover that target. At the beginning it is easiest to make every frame equally long, but nobody stops you from having frame 1 take 0.2s, frame 2 0.8s, and frame 3 0.15s.
2. Add code to keep track of a "current animation" that properly cycles through the frames on the proper timescale (i.e. show each frame for 0.25s if you have 4 frames and a target of 1s). Some animations may cycle, such as the "running" or "idle" animation. A common technique for storing animations is a "texture atlas" that contains all frames of an animation. You can then use sf::Sprite::setTextureRect to select a part of the texture to draw.
3. Update your movement and input code to change the animation whenever the state of the character changes.
Let us define the frames of an animation in terms of sf::IntRect sections of a given sprite sheet: (example sprite sheet)
std::vector<sf::IntRect> idle {
    {0, 0, 35, 61}
};

std::vector<sf::IntRect> runningRight {
    {657, 473, 43, 52}, // first frame is at 657x473 and is 43x52 pixels
    // other frames.
};
We can define an Animation class with the following data and methods:
class Animation {
public:
    Animation(std::vector<sf::IntRect> frames, float duration, bool cycles = true)
        : frames(std::move(frames)), frameTime(duration / this->frames.size()), cycles(cycles) {
        reset();
    }

    void reset() {
        currentFrame = 0;
        currentFrameTime = 0;
    }

    void update(float dt);

    const sf::IntRect& getCurrentRect() const { return frames[currentFrame]; }

private:
    // Stored by value (not a const reference) so that an Animation can be
    // reassigned later, e.g. currentAnimation = Animation{runningRight, 1.0f};
    std::vector<sf::IntRect> frames;
    float frameTime;
    bool cycles;
    int currentFrame;
    float currentFrameTime;
};
This implements most of step 2: keeping track of which frame should be on screen, assuming update(dt) is called every frame.
Now all that remains is the update method:
void Animation::update(float dt) {
    currentFrameTime += dt;
    while (currentFrameTime >= frameTime) {
        currentFrameTime -= frameTime;
        if (cycles) {
            currentFrame = (currentFrame + 1) % static_cast<int>(frames.size());
        } else if (currentFrame + 1 < static_cast<int>(frames.size())) {
            ++currentFrame; // non-cycling animations stop on their last frame
        }
    }
}
Finally, to hook this up, create the following variables:
sf::Texture textureAtlas = ...;
Animation currentAnimation{idle, 10.0f};
sf::Sprite player(textureAtlas, currentAnimation.getCurrentRect());
In your game's update() code, call currentAnimation.update(dt).
In the render function, make sure to call player.setTextureRect(currentAnimation.getCurrentRect()).
If you receive input, do something like currentAnimation = Animation{runningRight, 1.0f};
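Putting the pieces together, the per-frame wiring could look like this minimal sketch (assuming an sf::RenderWindow named window, plus the player and currentAnimation variables from the snippets above):

sf::Clock frameClock;
while (window.isOpen()) {
    // ... event handling ...
    float dt = frameClock.restart().asSeconds();

    currentAnimation.update(dt);
    player.setTextureRect(currentAnimation.getCurrentRect());

    window.clear();
    window.draw(player);
    window.display();
}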

How is SFML so fast?

I need to draw some graphics in c++, pixel by pixel on a window. In order to do this I create a SFML window, sprite and texture. I draw my desired graphics to a uint8_t array and then update the texture and sprite with it. This process takes about 2500 us. Drawing two triangles which fill the entire window takes only 10 us. How is this massive difference possible? I've tried multithreading the pixel-by-pixel drawing, but the difference of two orders of magnitude remains. I've also tried drawing the pixels using a point-map, with no improvement. I understand that SFML uses some GPU-acceleration in the background, but simply looping and assigning the values to the pixel array already takes hundreds of microseconds.
Does anyone know of a more effective way to assign the values of pixels in a window?
Here is an example of the code I'm using to compare the speed of triangle and pixel-by-pixel drawing:
#include <SFML/Graphics.hpp>
#include <chrono>
#include <iostream>
#include <cmath>
using namespace std::chrono;

uint8_t* pixels;

int main(int, char const**)
{
    const unsigned int width = 1200;
    const unsigned int height = 1200;

    sf::RenderWindow window(sf::VideoMode(width, height), "MA: Rasterization Test");
    pixels = new uint8_t[width * height * 4];

    sf::Texture pixels_texture;
    pixels_texture.create(width, height);
    sf::Sprite pixels_sprite(pixels_texture);

    sf::Clock clock;
    sf::VertexArray triangle(sf::Triangles, 3);
    triangle[0].position = sf::Vector2f(0, height);
    triangle[1].position = sf::Vector2f(width, height);
    triangle[2].position = sf::Vector2f(width / 2, height - std::sqrt(std::pow(width, 2) - std::pow(width / 2, 2)));
    triangle[0].color = sf::Color::Red;
    triangle[1].color = sf::Color::Blue;
    triangle[2].color = sf::Color::Green;

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed) {
                window.close();
            }
            if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::Escape) {
                window.close();
            }
        }

        window.clear(sf::Color(255, 255, 255, 255));

        // Pixel-by-pixel
        // (long long: microseconds since the epoch overflow a 32-bit int)
        long long us = duration_cast<microseconds>(system_clock::now().time_since_epoch()).count();
        for (unsigned int i = 0; i != width * height * 4; ++i) {
            pixels[i] = 255;
        }
        pixels_texture.update(pixels);
        window.draw(pixels_sprite);
        long long duration = duration_cast<microseconds>(system_clock::now().time_since_epoch()).count() - us;
        std::cout << "Background: " << duration << " us\n";

        // Triangle
        us = duration_cast<microseconds>(system_clock::now().time_since_epoch()).count();
        window.draw(triangle);
        duration = duration_cast<microseconds>(system_clock::now().time_since_epoch()).count() - us;
        std::cout << "Triangle: " << duration << " us\n";

        window.display();
    }
    return EXIT_SUCCESS;
}
Graphics on modern devices are drawn by the graphics card, and the drawing speed depends largely on how much data you send to graphics memory. That's why drawing just two triangles is fast.
As for multithreading: if you are using OpenGL (I don't remember what SFML uses, but it should be similar), what you think of as drawing is basically sending commands and data to the graphics card, so multithreading on the CPU is not very useful here; the graphics card runs its own pipeline for this work.
If you are curious about how graphics cards work, this tutorial is the
book you should read.
P.S. Regarding your edit: I guess the difference between 2500us and 10us is because your loop fills an entire texture (even if it is a pure white background), and sending a texture to the graphics card takes time, while drawing the triangle only sends a few points. (Also, you should probably start timing after the fill loop.) Still, I suggest reading the tutorial; creating a texture pixel by pixel suggests a misunderstanding of how the GPU works.
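To see where the time actually goes, one could split the question's measurement in two. This is a sketch reusing the pixels, width, height, and pixels_texture variables from the code above; it separates the CPU-side fill from the CPU-to-GPU transfer the P.S. describes:

sf::Clock timer;
for (unsigned int i = 0; i != width * height * 4; ++i)
    pixels[i] = 255;                                   // CPU-side fill
sf::Int64 fillUs = timer.restart().asMicroseconds();

pixels_texture.update(pixels);                         // CPU -> GPU copy
sf::Int64 uploadUs = timer.restart().asMicroseconds();

std::cout << "fill: " << fillUs << " us, upload: " << uploadUs << " us\n";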

Why is my UWP game slower in release than in debug mode?

I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than it is in debug mode.
My game draws a 3D view (Dungeon Master style) and has a UI part that draws over the 3D view. Because the 3D view can slow down to a small number of frames per second (FPS), I decided to make the UI part of my game always run at 60 FPS.
Here is how the main game loop looks, in some pseudocode:
Gameloop start
    Update game data
    Copy the finished 3D view from its buffer to the screen
    Draw the UI part
    3D view loop start
        If there is no more time to draw more textures on the 3D view, exit the 3D view loop
        Draw one texture to the 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}

// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    //      Render UI part here       //
    //********************************//

    //**********************************//
    // Render 3D view to 960x540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen(); // 3D view buffer screen

    bool screen960GotFullDrawn = false;
    bool stillEnoughTimeLeft = true;
    while (stillEnoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillEnoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }

    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}

void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}

void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - 2 * m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - 3 * m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}

bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime));
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    return m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime;
}
In debug mode the game's UI runs at 60 FPS, and the 3D view at about 1 FPS on my PC. Even there, I'm not sure why I have to stop drawing textures at 45% of one frame time and call Present to keep 60 FPS; if I wait longer I only get 30 FPS. (This value is set in enoughTimeForOneMoreTexture() in RitonTimer.)
In release mode it drops dramatically: about 10 FPS for the UI part and 1 FPS for the 3D part. I have tried to find out why for the last 2 days and haven't found it.
I also have another small question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
I have put my game on my OneDrive so everyone can download the source files, try to compile it, and see if you get the same results as I do:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU is probably not waiting for a drawing instruction to finish, but simply adds it to a command list which is forwarded to the GPU at its own pace in a separate task (or maybe the GPU caches the commands itself). That would explain it all.
My plan was to draw the UI first and then draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, and then present the frame to the swap chain. The UI would always run at 60 FPS and the 3D view would run as fast as the system allows (also at 60 FPS if everything can be drawn within 95% of the frame time).
This didn't work, because the queue probably cached all the draw instructions my 3D view issued in one frame time (I was testing with 150000 large texture-draw instructions for the 3D view), so of course the UI ended up as slow as the 3D view, or close to it.
That is also why, even in debug mode, I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested this with a lower value in release mode to verify the theory, and indeed I also get 60 FPS for the UI when I stop the drawing at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly, I commented out your code that checks whether you've got time to render another texture. Don't do that. Everything the player sees should render within a single frame. If your frame is taking more than 16ms (with a 60 FPS target), look for expensive operations, for calls that are made repeatedly and add up to some unexpected cost, or for code that does something every frame when it only needs to do it once per frame or per resize, etc.
So the issue is that you were rendering very large textures, and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). A bit of overdraw is fine, and sometimes preferable to being pedantic, but you were drawing 1000x2000 textures over and over again, so you were absolutely killing the pixel shader; it simply can't fill that many pixels. I didn't bother looking at the code that tries to control texture rendering based on the frame time remaining; for what you're trying to do, that's not helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;

for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x * w, y * h, w, h);
    }
}
and in RenderNextTextureToScreen(int x, int y, int w, int h):

m_squareBuffer.sizeX = w; // was 1000
m_squareBuffer.sizeY = h; // was 2000
m_squareBuffer.posX = x;  // was (float)(rand() % 1920)
m_squareBuffer.posY = y;  // was (float)(rand() % 1080)
See how this code renders much smaller textures: each texture is 64x64 and there is no overdraw.
And just be aware that the GPU isn't all-powerful. It can do a lot if you use it right, but if you throw crazy operations at it, you can grind it to a halt, just like the CPU. So try to render things that 'look normal', things you can imagine being in a game; you'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering-limiter code was broken. It wasn't working properly because the 3D view was running at 1 FPS, so who knows what its behaviour was. With the changes I've made, the program runs faster in release mode, as expected. Your clock code now shows 600-1600 FPS in release mode for me.

Is there a reasonable limit to how many images SDL can render? [duplicate]

I am programming a raycasting game using SDL2.
When drawing the floor, I need to call SDL_RenderCopy pixel by pixel. This leads to a bottleneck which drops the framerate below 10 FPS.
I am looking for performance boosts but can't seem to find any.
Here's a rough overview of the performance drop:
int main() {
    while (true) {
        for (int x = 0; x < 800; x++) {
            for (int y = 0; y < 600; y++) {
                SDL_Rect src = { 0, 0, 1, 1 };
                SDL_Rect dst = { x, y, 1, 1 };
                SDL_RenderCopy(ren, tx, &src, &dst); // this drops the framerate below 10
            }
        }
        SDL_RenderPresent(ren);
    }
}
You should probably be using texture streaming for this. Basically, you create an SDL_Texture with access type SDL_TEXTUREACCESS_STREAMING, and then each frame you 'lock' the texture, update the pixels you require, then 'unlock' the texture again. The texture is then rendered in a single SDL_RenderCopy call.
LazyFoo Example -
http://lazyfoo.net/tutorials/SDL/42_texture_streaming/index.php
Exploring Galaxy -
http://slouken.blogspot.co.uk/2011/02/streaming-textures-with-sdl-13.html
Other than that, calling SDL_RenderCopy 480,000 times a frame is always going to kill your framerate.
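A minimal sketch of that streaming approach, assuming ren is the question's SDL_Renderer and an 800x600 floor (the pixel value written here is just a placeholder for whatever the ray caster computes):

// Created once, outside the main loop:
SDL_Texture* streamTex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                           SDL_TEXTUREACCESS_STREAMING, 800, 600);

// Each frame: lock, write pixels on the CPU, unlock, then draw once.
void* pixels;
int pitch; // bytes per row, filled in by SDL
if (SDL_LockTexture(streamTex, NULL, &pixels, &pitch) == 0) {
    for (int y = 0; y < 600; y++) {
        Uint32* row = (Uint32*)((Uint8*)pixels + y * pitch);
        for (int x = 0; x < 800; x++) {
            row[x] = 0xFF203040; // ARGB placeholder for the computed floor pixel
        }
    }
    SDL_UnlockTexture(streamTex);
}
SDL_RenderCopy(ren, streamTex, NULL, NULL); // one copy instead of 480,000
SDL_RenderPresent(ren);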
You are calling SDL_RenderCopy() 800 * 600 = 480,000 times in each frame! It is normal for performance to drop.

Simple C++ SFML program high CPU usage

I'm currently working on a platformer and trying to implement a timestep, but for framerate limits greater than 60 the CPU usage goes up from 1% to 25% and more.
I made this minimal program to demonstrate the issue. There are two comments in the code (one above the commented-out setFramerateLimit() call, one above the framerateLimit variable) that describe the problem and what I have tested.
Note that the FPS stuff is not relevant to the problem (I think).
I tried to keep the code short and simple:
#include <memory>
#include <sstream>
#include <iomanip>
#include <SFML/Graphics.hpp>

int main() {
    // Window
    std::shared_ptr<sf::RenderWindow> window;
    window = std::make_shared<sf::RenderWindow>(sf::VideoMode(640, 480, 32), "Test", sf::Style::Close);

    /*
    When I use the setFramerateLimit() function below, the CPU usage is only 1% instead of 25%+
    (And only if I set the limit to 60 or less. For example 120 increases CPU usage to 25%+ again.)
    */
    //window->setFramerateLimit(60);

    // FPS text
    sf::Font font;
    font.loadFromFile("font.ttf");
    sf::Text fpsText("", font, 30);
    fpsText.setColor(sf::Color(0, 0, 0));

    // FPS
    float fps;
    sf::Clock fpsTimer;
    sf::Time fpsElapsedTime;

    /*
    When I set framerateLimit to 60 (or anything less than 60)
    instead of 120, CPU usage goes down to 1%.
    When the limit is greater, in this case 120, CPU usage is 25%+
    */
    unsigned int framerateLimit = 120;
    sf::Time fpsStep = sf::milliseconds(1000 / framerateLimit);
    sf::Time fpsSleep;

    fpsTimer.restart();
    while (window->isOpen()) {
        // Update timer
        fpsElapsedTime = fpsTimer.restart();
        fps = 1000.0f / fpsElapsedTime.asMilliseconds();

        // Update FPS text
        std::stringstream ss;
        ss << "FPS: " << std::fixed << std::setprecision(0) << fps;
        fpsText.setString(ss.str());

        // Get events
        sf::Event evt;
        while (window->pollEvent(evt)) {
            switch (evt.type) {
            case sf::Event::Closed:
                window->close();
                break;
            default:
                break;
            }
        }

        // Draw
        window->clear(sf::Color(255, 255, 255));
        window->draw(fpsText);
        window->display();

        // Sleep
        fpsSleep = fpsStep - fpsTimer.getElapsedTime();
        if (fpsSleep.asMilliseconds() > 0) {
            sf::sleep(fpsSleep);
        }
    }

    return 0;
}
I don't want to use SFML's setFramerateLimit(), but my own implementation with the sleep, because I will use the FPS data to update my physics and other things.
Is there a logic error in my code? I fail to see it, given that it works with a framerate limit of, for example, 60 or less. Is it because I have a 60 Hz monitor?
PS: Using SFML's window->setVerticalSyncEnabled() doesn't change the results.
I answered another similar question with this answer.
The thing is, it's not exactly helping you with CPU usage, but I tried your code and it runs under 1% CPU usage at 120 FPS (and much more). When you make a game or any interactive media with a game loop, you don't want to lose performance by sleeping; you want to use as much CPU time as the computer can give you. Instead of sleeping, you can process other data, like loading assets or running a pathfinding algorithm, or just not limit the rendering at all.
I provide some useful links and code, here it is:
Similar question: Movement Without Framerate Limit C++ SFML.
What you really need is a fixed time step. Take a look at the SFML Game
Development book source code. Here's the interesting snippet from
Application.cpp:
const sf::Time Game::TimePerFrame = sf::seconds(1.f / 60.f);
// ...

sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
while (mWindow.isOpen())
{
    sf::Time elapsedTime = clock.restart();
    timeSinceLastUpdate += elapsedTime;
    while (timeSinceLastUpdate > TimePerFrame)
    {
        timeSinceLastUpdate -= TimePerFrame;
        processEvents();
        update(TimePerFrame);
    }
    updateStatistics(elapsedTime);
    render();
}
If this is not really what you want, see "Fix your timestep!"
which Laurent Gomila himself linked in the SFML forum.
I suggest using setFramerateLimit(), because it is natively implemented in SFML and will work a lot better.
Also, to get the elapsed time you must do:
fpsElapsedTime = fpsTimer.getElapsedTime();
If I had to implement something similar, I would do:

/* in the main loop */
fpsElapsedTime = fpsTimer.getElapsedTime();
if (fpsElapsedTime.asMilliseconds() >= (1000 / framerateLimit))
{
    fpsTimer.restart();
    // All your content
}

One other thing: use sf::Color::White or sf::Color::Black instead of sf::Color(255, 255, 255).
Hope this helps :)