Setting Frame Rate? Is this a good idea? [duplicate] - c++

This question already has answers here:
What's the usual way of controlling frame rate?
Closed 10 years ago.
I'm an amateur when it comes to programming, but I want to ask whether this is an efficient way of handling things.
Right now my program updates itself on every step, but I am looking to limit it to a lower frame rate. My current idea is to set a clock in main so that every 30 ticks or so (for example) the game updates itself. However, within that slot of time I am looking to update the different parts of the program separately (for instance, every 10 seconds), with the program updating the screen at the end of that period. I figured this would help to alleviate some of the "pressure" (assuming there is any).

I wouldn't go that way. It's going to be much better/easier/cleaner, especially when starting out, to update the game/screen as often as possible (e.g. put it in a while (true) loop). Then on each iteration, figure out the elapsed time and use it accordingly (e.g. move an object one pixel for every 20 ms elapsed).
The reason this is a better starting point is that you'll be hard-pressed to guarantee exactly 30 fps, and the game will behave weirdly otherwise (e.g. if a slow computer can only manage 15 fps, you don't want objects moving at twice the speed), not to mention drift, individual slow frames, etc.


C++11 Most accurate way to pause execution for a certain amount of time? [duplicate]

This question already has answers here:
How to guarantee exact thread sleep interval? (4 answers)
accurate sampling in c++ (2 answers)
Closed 4 years ago.
I'm currently working on some C++ code that reads from a video file, parses the video/audio streams into their constituent units (such as FLV tags), and sends them back out in order to "restream" the data.
Because my input comes from a file but I want to simulate a proper frame rate when restreaming, I am considering ways to sleep the thread that reads the file, so that frames are extracted at the rate one would expect from a typical 30 or 60 FPS stream.
One solution is the obvious std::this_thread::sleep_for call, passing in a number of milliseconds derived from my FPS. Another solution I'm considering is a condition variable, using std::condition_variable::wait_for with the same idea.
I'm a little stuck, because I know the first solution doesn't guarantee exact precision -- the sleep will last at least as long as the argument I pass in, but may in theory be longer. And I know that std::condition_variable::wait_for requires lock reacquisition, which takes some time too. Is there a better solution than the ones I'm considering? If not, what's the best methodology for pausing execution with as precise a granularity as possible?
This:
auto start = now();
while (now() < start + wait_for)
    ;  // busy-wait until the deadline
now() is a placeholder for the most accurate time-measuring method available on the system.
This is the analogue to sleep as what spinlock is to a mutex. Like a spinlock, it will consume all the CPU cycles while pausing, but it is what you asked for: The most accurate way to pause execution. There is trade-off between accuracy and CPU-usage-efficiency: You must choose which is more important to have for your program.
Why is it more accurate than std::this_thread::sleep_for?
Because sleep_for yields execution of the thread. As a consequence, it can never have better granularity than the process scheduler of the operating system has (assuming there are other processes competing for time).
The live loop shown above which doesn't voluntarily give up its time slice will achieve the highest granularity provided by the clock that is used for measurement.
Of course, the time slice granted by the scheduler will eventually run out, and that might happen near the time we should be resuming. The only way to reduce that effect is to increase the priority of our thread. There is no standard way of affecting the priority of a thread in C++. The only way to get rid of the effect completely is to run on a non-multitasking system.
On multi-CPU systems, one trick you might want to use is to set the thread affinity so that the OS thread won't be migrated to other hardware threads, which would introduce latency. Likewise, you might want to set the affinity of your other threads so they stay off the time-measuring thread's CPU. There is no standard tool for setting thread affinity either.
Let T be the time you wish to sleep for and let G be the maximum time that sleep_for could possibly overshoot.
If T is greater than G, then it will be more efficient to use sleep_for for T - G time units, and only use the live loop for the final G - O time units (where O is the time that sleep_for was observed to overshoot).
Figuring out what G is for your target system can be quite tricky however. There is no standard tool for that. If you over-estimate, you'll waste more cycles than necessary. If you under-estimate, your sleep may overshoot the target.
In case you're wondering what is a good choice for now(), the most appropriate tool provided by the standard library is std::chrono::steady_clock. However, that is not necessarily the most accurate tool available on your system. What tool is the most accurate depends on what system you're targeting.

How do I profile hiccups in performance?

Usually profiling data is gathered by randomly sampling the stack of the running program to see which function is executing; over a running period it is possible to be statistically sure which methods/function calls eat the most time and need intervention in case of bottlenecks.
However, this concerns overall application/game performance. Sometimes there are singular, isolated hiccups that cause usability trouble anyway (the user notices them, lag is introduced in some internal mechanism, etc.). With regular profiling over a few seconds of execution it is not possible to tell which calls are responsible. Even if a hiccup lasts long enough (say 30 ms, which still isn't much) to reveal some method that is called too often, we will still miss the execution of many other methods that are simply "skipped" by the random sampling.
So are there any techniques to profile hiccups, in order to keep the frame rate more stable after fixing those kinds of "rare bottlenecks"? I'm assuming the use of languages like C# or C++.
This has been answered before, but I can't find it, so here goes...
The problem is that the DrawFrame routine sometimes takes too long.
Suppose it normally takes less than 1000/30 = 33ms, but once in a while it takes longer than 33ms.
At the beginning of DrawFrame, set a timer interrupt that will expire after, say, 40ms.
Then at the end of DrawFrame, disable the interrupt.
So if it triggers, you know DrawFrame is taking an unusually long time.
Put a breakpoint in the interrupt handler, and when it gets there, examine the stack.
Chances are pretty good that you have caught it in the process of doing the costly thing.
That's a variation on random pausing.

Find time it takes for a C++ function to run [duplicate]

This question already has answers here:
How do I profile C++ code running on Linux? (19 answers)
Measuring execution time of a function in C++ (14 answers)
Closed 7 years ago.
I'm debugging a large C++ project (Linux environment) and one binary appears to be taking more time to run than I expect. How can I see a breakdown of how much time each function call in each source file takes, so I can find the problem(s)?
There's another way to find the problem(s) than by getting a breakdown of function times.
Run it under a debugger, and manually interrupt it several times, and each time examine the call stack.
If you look at each level of the call stack that's in your code, you can see exactly why that moment in time is being spent.
Suppose you have a speed problem that, when fixed, will save some fraction of time, like 30%.
That means each stack sample that you examine has at least a 30% chance of happening during the problem.
So, turning it around, if you see it doing something that could be eliminated, and you see it on more than one sample, you've found your problem! (or at least one of them) **
That's the random-pausing technique.
It will find any problem that timers will, and problems that they won't.
** You might have to think about it a bit. If you see it doing something on a single sample, that doesn't mean much.
Even if the code is only doing a thousand completely different things, none of them taking significant time, it has to stop somewhere.
But if you see it doing something, and you see it on more than one sample, and you haven't taken a lot of samples, the probability that you would hit the same insignificant thing twice is very very small.
So it is far more likely that its probability is significant.
In fact, a reasonable guess of its probability is the number of samples in which you saw it, divided by the total number of samples.
#include <iostream>
#include <ctime>

int main() {
    std::clock_t start = std::clock();
    // code to be timed goes here
    double duration = (std::clock() - start) / (double)CLOCKS_PER_SEC;
    std::cout << duration << std::endl;  // elapsed CPU time, in seconds
}
You can create your own timer class. At the start of each block, call a method that resets the timer variable to zero,
and read the timer at the end of the block. You can do this
in various blocks of the code. Once you have identified the code block that
takes the most time, you can add internal timers too. If you want to try a standard tool, I would recommend gprof: http://www.thegeekstuff.com/2012/08/gprof-tutorial/

How to locate a certain series of assembly instruction within a period of time?

Suppose I'm observing the memory of an application (e.g. Calculator) and I want to find out which series of instructions is executed within a period of time, say 10:20 AM - 10:21 AM on 25/08/14.
At 10:20 AM, I press the execution button (getting the result of the computation).
And I want to find out all the associated instructions and memory calls in the execution process.
I know I can do this in a simple way, e.g. by iteratively searching for input values on the calculator. However, in other cases it becomes difficult to search for the corresponding value because of complex layers of pointers.
My question:
Is it possible to implement this(finding out instructions or calls within a period of time) in C++?
Try starting with statistical profiling. If the execution in question is not just a brief moment, and takes at least a few statistical-timer periods, you'll get enough data to dig into. Multiple runs will increase the accuracy of the results.

Making C++ run at full speed [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
To compare C++ and Java on certain tasks, I have made two similar programs, one in Java and one in C++. When I run the Java one it takes 25% CPU without fluctuation, which is what you would expect on a quad core. However, the C++ version only uses about 8% and fluctuates heavily. I run both programs on the same computer, on the same OS, with the same programs active in the background. How do I make C++ use one full core? These are two programs, neither interrupted by anything. They both ask for some info and then enter an infinite loop until you exit the program, giving feedback on how many calculations per second they perform.
The code:
http://pastebin.com/5rNuR9wA
http://pastebin.com/gzSwgBC1
http://pastebin.com/60wpcqtn
To answer some questions:
I'm basically looping a bunch of code and seeing how many times per second it loops. The problem is: it doesn't use all the CPU it could. The whole point is to have the same processor do the same task in Java and in C++ and compare the number of loops per second. But if one uses an irregular amount of CPU time while the other loops stably at a certain percentage, they are hard to compare. By the way, if I ask it to execute this:
while(true){}
it takes 25%, why doesn't it do that with my code?
----edit:----
After some experimenting, it seems that my code starts to use less than 25% CPU if I use a cout statement. It isn't clear to me why a cout would cause the program to use less CPU (I guess it pauses until the statement is written, which apparently takes a while?).
With this knowledge I will reprogram both programs (to keep them comparable) and just let it report the results after 60 seconds instead of every time it completed a loop.
Thanks for all the help, some of the tips were really helpful. After I discovered the answer someone also turned out to give this as an answer, so even if I wouldn't have found it myself I would have gotten the answer. Thanks!
(though I would like to know why a std::cout takes such an amount of time)
Your main loop has a cout in it, which will call out to the OS to write the accumulated output at some point. Either OS time is not counted against your app, or it causes some disk IO or other activity that forces your program to wait.
It's probably not accurate to compare both of these running at the same time without considering the fact that they will compete for cpu time. The OS will automatically choose the scheduling for these two tasks which can be affected by which one started first and a multitude of other criteria.
Running them both at the same time would require configuring the scheduling so that each one is confined to one (or two) CPUs, with each application on different CPUs. This can be done by having each main function start a separate thread that performs all the work, and then setting the CPU where that thread will run. In C++11 this can be done using a std::thread and then setting the underlying CPU affinity by getting the native_handle and setting it there.
I'm not sure how to do this in Java but I'm sure the process is similar.