When I try to show an fps counter in my program with this code:
sf::Clock clock;
sf::Time time = clock.getElapsedTime();
std::cout << 1.0f / time.asSeconds() << std::endl;
clock.restart().asSeconds();
It just repeats the numbers "500000" and "333333", even though I set the framerate limit to 60. Any ideas on how to fix this?
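Numbers like 500000 and 333333 are 1 / 2 µs and 1 / 3 µs, which suggests the clock is being constructed (or was just restarted) immediately before getElapsedTime() is called, so only a few microseconds of work are being timed rather than a whole frame. A minimal sketch of a working counter, assuming SFML 2.x (the window setup is only illustrative): create the clock once outside the loop and use the value returned by restart() as the frame time.

#include <SFML/Graphics.hpp>
#include <iostream>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "FPS");
    window.setFramerateLimit(60);

    sf::Clock clock; // created once, outside the loop
    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        window.display();

        // restart() returns the time since the last restart,
        // i.e. the duration of this whole frame
        float frameTime = clock.restart().asSeconds();
        std::cout << 1.0f / frameTime << std::endl;
    }
}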
I'm making a game using the SFML library. I want to implement a function that shows on screen the number of seconds since the program started, increasing until the window is closed. I tried this:
sf::Clock clock;
while (window.isOpen())
{
    sf::Time elapsed = clock.restart();
    updateGame(elapsed);
}
But I have no idea how it works, or even whether it's the right function.
Here is my code so far: https://github.com/basmaashouur/GamesLib/blob/master/cards/main.cpp
There are multiple ways to get the number of seconds.
First, you can use a dedicated sf::Clock for this that is never restarted:
sf::Clock clock; // never restarted
const unsigned int seconds = static_cast<unsigned int>(clock.getElapsedTime().asSeconds());
As an alternative, you can use an sf::Time to accumulate the time between frames (e.g. inside your updateGame() function):
sf::Clock clock;
sf::Time time;

// once per frame:
time += clock.restart();
const unsigned int seconds = static_cast<unsigned int>(time.asSeconds());
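To actually draw the value on screen, which is what the question asks for, the seconds count can be fed to an sf::Text each frame. A rough sketch, assuming a font file is available (the filename is a placeholder):

#include <SFML/Graphics.hpp>
#include <string>

// ... inside main(), after creating the window ...
sf::Font font;
if (!font.loadFromFile("arial.ttf")) // placeholder path
    return -1;

sf::Text secondsText;
secondsText.setFont(font);
secondsText.setCharacterSize(24);

sf::Clock clock; // never restarted
while (window.isOpen())
{
    // ... event handling ...

    const unsigned int seconds =
        static_cast<unsigned int>(clock.getElapsedTime().asSeconds());
    secondsText.setString(std::to_string(seconds));

    window.clear();
    window.draw(secondsText);
    window.display();
}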
I'm measuring execution time for several functions in an image processing program in C++. In particular, I want to have the actual execution time for capturing a frame with my USB camera.
The problem is that the results don't seem consistent with the camera parameters: the camera is supposed to run at 30 fps at most, yet I often measure less than the ~33 ms per frame I would expect. For instance, I get a lot of 12 ms intervals, and that really seems too little.
Here is the code:
#include <time.h>
#include <sys/time.h>
#include <unistd.h>  // usleep
#include <iostream>

double get_wall_time(){
    struct timeval time;
    if (gettimeofday(&time, NULL)){
        // Handle error
        return 0;
    }
    return (double)time.tv_sec + (double)time.tv_usec * .000001;
}

int main(){
    while (true) {
        double previoustime = get_wall_time();
        this->camera.readFrame(); // excerpt from a member function; camera is a class member
        double currenttime = get_wall_time();
        std::cout << currenttime - previoustime << std::endl;
        // Other stuff
        // ...
        // ...
        usleep(3000);
    }
}
As #Revolver_Ocelot remarked, you are measuring the time from the end of one get_wall_time call to the end of the next, which covers only readFrame and leaves out the rest of the loop (the other stuff and the usleep). To fix your code, do this:
double currenttime = get_wall_time();
while (true) {
    double previoustime = currenttime;
    this->camera.readFrame();
    // ... other stuff ...
    currenttime = get_wall_time();
    std::cout << currenttime - previoustime << std::endl;
}
Can you spot the difference? This code measures the interval between consecutive passes, which is what you want for computing frames per second.
The speed at which you can read your camera will not be the same as the rate at which it will complete a new frame. Your camera could be recording at 30 FPS and you could be reading it at 15 FPS or 90 FPS, thus subsampling or oversampling the frame stream.
The upper limit on oversampling is 1 / (the time it takes to read an image and store it).
That's what #Jacob Hull meant by blocking: if readFrame just returns the last frame it has, it doesn't block until a new frame arrives, and you get measurements like the ones you're seeing.
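To separate the two effects, it can help to time both the readFrame call itself and the interval between consecutive passes. A sketch of that idea, reusing get_wall_time from the question (the camera object is the same hypothetical one):

double previous = get_wall_time();
while (true) {
    double before = get_wall_time();
    camera.readFrame(); // the question's hypothetical camera object
    double after = get_wall_time();

    std::cout << "readFrame: " << (after - before)   << " s, "
              << "interval: "  << (after - previous) << " s" << std::endl;
    previous = after;

    usleep(3000); // the rest of the loop also lands in the interval
}

If readFrame blocks until a new frame arrives, both numbers should approach 33 ms at 30 fps; if it merely returns the latest stored frame, the readFrame time stays tiny and the interval only reflects the rest of the loop.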
I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:
I'm getting the deltaTime through this method that is executed each frame:
void Time::calc_deltaTime() {
    double currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
}
deltaTime has the values I would expect (around 0.012 to 0.016).
Then I'm using deltaTime to delay the frame through the Windows Sleep function, like this:
void Time::limitToMAXFPS() {
    if (1.0 / MAXFPS > deltaTime)
        Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}
MAXFPS is equal to 60, and I'm multiplying by 1000 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 fps (around 72 fps).
I also tried this busy-wait method using a while loop:
void Time::limitToMAXFPS() {
    double diff = 1.0 / MAXFPS - deltaTime;
    if (diff > 0) {
        double t = glfwGetTime();
        while (glfwGetTime() - t < diff) { }
    }
}
But I'm still getting more than 60 fps (around 72)... Am I doing something wrong, or is there a better way of doing this?
How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.
Why not simply use a loop whose body executes only when more than a certain time interval has passed? Try:
const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS; // approx ~16.666 ms

bool running = true;
double lastTime = 0.0;

while (running) {
    double time = glfwGetTime();
    double deltaTime = time - lastTime;
    if (deltaTime >= maxPeriod) {
        lastTime = time;
        // code here gets called with max FPS
    }
}
Last time I used GLFW, it seemed to self-limit to 60 fps anyway. If you are doing anything high-performance oriented (a game or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.
Sleep can be very inaccurate. A commonly seen phenomenon is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.
Is Sleep() inaccurate?
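A common workaround (a standard technique, not something from the answers above) is a hybrid limiter: sleep for most of the remaining frame time, then busy-wait for the last couple of milliseconds, so the coarse sleep granularity stops mattering. A portable sketch using std::chrono:

#include <chrono>
#include <thread>

void limitFrame(std::chrono::steady_clock::time_point frameStart, double maxFPS)
{
    using namespace std::chrono;
    const auto period = duration_cast<steady_clock::duration>(
        duration<double>(1.0 / maxFPS));
    const auto target = frameStart + period;

    // Sleep coarsely, leaving ~2 ms of slack for the scheduler's granularity
    auto now = steady_clock::now();
    if (target - now > milliseconds(2))
        std::this_thread::sleep_for(target - now - milliseconds(2));

    // Spin for the remainder to hit the target precisely
    while (steady_clock::now() < target) { /* busy-wait */ }
}

Called at the end of each frame with the time point captured at the frame's start, this keeps the wasted busy-wait down to a couple of milliseconds per frame while still hitting the target period accurately.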
I've given up trying to limit the fps like this... As you said, Windows is very inconsistent with Sleep: my fps average is always 64, not 60. The problem is that Sleep takes an integer (or long integer) argument, so I was casting it with static_cast, but I really need to pass it a double; sleeping 16 milliseconds each frame is different from sleeping 16.666... That's probably the cause of the extra 4 fps (I think).
I also tried:
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));
and the same thing happens with sleep_for. Then I tried passing the fractional remainder of the milliseconds on to chrono::microseconds and chrono::nanoseconds, using all three together for better precision, but guess what: I still get the freaking 64 fps.
Another weird thing: sometimes (yes, this is completely random), when I change 1000.0 to a const integer so the expression becomes (1.0 / MAXFPS - deltaTime) * 1000, my fps simply jumps to 74 for some reason, even though the expressions are completely equal and nothing should change. Both of them are double expressions; I don't think any type promotion is happening here.
So I decided to force V-sync through the wglSwapIntervalEXT(1) function in order to avoid screen tearing, and then to multiply deltaTime into every value that might vary depending on the speed of the computer running my game. It's going to be a pain, because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help, though.
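For reference, the deltaTime-scaling approach mentioned above usually looks something like the following minimal sketch (the Player struct and speed value are made up for illustration):

struct Player { double x = 0.0, y = 0.0; };

// Speeds are expressed in units per second, so multiplying by the
// frame's deltaTime (in seconds) makes movement frame-rate independent.
void updatePlayer(Player& player, double deltaTime)
{
    const double speedX = 250.0; // units per second (illustrative)
    player.x += speedX * deltaTime;
}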
I've recently started using GLFW for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 fps:
auto start = std::chrono::steady_clock::now();
while (!glfwWindowShouldClose(window))
{
    ++frames;
    auto now = std::chrono::steady_clock::now();
    auto diff = now - start;
    auto end = now + std::chrono::milliseconds(16); // target wake-up time, ~60 fps

    if (diff >= std::chrono::seconds(1))
    {
        start = now;
        std::cout << "FPS: " << frames << std::endl;
        frames = 0;
    }

    glfwPollEvents();
    processTransition(countit);
    render.TickTok();
    render.RenderBackground();
    render.RenderCovers(countit);

    std::this_thread::sleep_until(end);
    glfwSwapBuffers(window);
}
To add to that, you can easily adjust the FPS target by adjusting end.
With that said, I know GLFW limited me to 60 fps anyway, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
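For context, glfwSwapInterval applies to the current OpenGL context, so it has to be called after the context is made current. A minimal sketch of the setup order (window creation omitted):

glfwMakeContextCurrent(window); // swap interval is a property of the current context
glfwSwapInterval(0);            // 0 = uncapped; 1 = wait for vsync on each swap

while (!glfwWindowShouldClose(window))
{
    // ... render, sleep_until, glfwSwapBuffers(window) as above ...
}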
Are you sure your Sleep function accepts floating-point values? If it only accepts an int, your sleep will end up being Sleep(0), which would explain your issue.
I am using SFML 2.1 with C++. I want to know how to handle the player's movement with a delay. For example:
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right))
{
    player.move(5, 0);
}
Now we want the player to move 5 spaces, but we want it to take 2 seconds to do it.
How can we do it in SFML?
You have to use sf::Clock in order to time your game:
http://www.sfml-dev.org/documentation/2.1-fr/classsf_1_1Clock.php
One way to do this is to introduce the concept of a frame. Every second contains several frames, say 60, and each frame is a snapshot of object state at a given time point within that second. You compute the objects' states for the new frame, render it on the screen at that time point, and then continue on to the next frame.
In your example, say we have 10 frames per second. The speed of the player is 5/2 = 2.5 units/second, so the player is at 0.25 units at 0.1 seconds, 0.5 at 0.2 seconds, 0.75 at 0.3 seconds, and so on.
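In SFML terms, a sketch of that idea might look like the following (a minimal illustration; player is assumed to be an sf::Sprite or other transformable, and event handling is omitted):

const float distance = 5.0f;             // total units to move
const float duration = 2.0f;             // seconds the move should take
const float speed = distance / duration; // 2.5 units per second

float remaining = 0.0f; // distance still to cover for the current move
sf::Clock frameClock;

while (window.isOpen())
{
    float dt = frameClock.restart().asSeconds(); // frame time in seconds

    // Start a new move when the key is pressed and no move is in progress
    if (remaining <= 0.0f && sf::Keyboard::isKeyPressed(sf::Keyboard::Right))
        remaining = distance;

    if (remaining > 0.0f)
    {
        float step = speed * dt;
        if (step > remaining) step = remaining; // don't overshoot
        player.move(step, 0.0f);
        remaining -= step;
    }

    window.clear();
    window.draw(player);
    window.display();
}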
int main()
{
    sf::Clock Clock;
    float LastTime = 0;

    // Main Game Loop
    while (Window.isOpen())
    {
        /* ... */
        float CurTime = Clock.restart().asSeconds();
        float FPS = 1.0 / CurTime;
        LastTime = CurTime;

        /* Then you could multiply the object's move speed on each axis by FPS */
        Player->Shape.move(Vel.x * FPS, Vel.y * FPS);

        Window.clear();
        Window.display();
    }
    return 0;
}
Since CurTime = Clock.restart().asSeconds(), you will (more than likely) have to significantly adjust your objects' velocity values to compensate.
I'm trying to make a simple tile-based platformer in C++ and SDL2. My framerate stays at 59-60 fps, but when I start to hold down a key, it loses about 10 fps. This happens even when I don't call update or retrieve the keystates. This is the code inside my game loop:
//keys = (Uint8 *)SDL_GetKeyboardState(NULL);
elapsed = SDL_GetTicks() - current;
current += elapsed;
timeSinceSecond += elapsed;
//update(keys, elapsed / 1000.0);
draw();
frames++;
if (timeSinceSecond >= 1000) {
    timeSinceSecond = 0;
    cout << frames << endl;
    frames = 0;
}
next = SDL_GetTicks();
if (next - current < 1000.0 / framerate) {
    SDL_Delay(1000.0 / framerate - (next - current));
}
Any ideas on why this is happening? Could it be that it's a problem with SDL2? I haven't tried this with SDL 1.2.
SDL_Delay will not work the way you want: it is not precise enough (it has roughly 10-millisecond precision), so it is impossible to hit the required number of frames per second this way. Use vsync instead. Another thing: printing to stderr/stdout is slow while a console is visible. If you're printing something when a key is pressed, or if pressing the key somehow increases the amount of text being printed, the game will slow down.
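For reference, a sketch of enabling vsync with SDL2's renderer API (with a raw OpenGL context you would call SDL_GL_SetSwapInterval(1) instead):

#include <SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("game",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600, 0);

    // PRESENTVSYNC makes SDL_RenderPresent wait for the display refresh,
    // capping the loop at the monitor's refresh rate without SDL_Delay
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT)
                running = false;

        SDL_RenderClear(renderer);
        // ... draw ...
        SDL_RenderPresent(renderer); // blocks until vsync
    }

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}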