Basically I need a function that makes x decrement to 0 over a certain time period (40 seconds).
This seems pretty simple in theory, but I haven't been able to do it for a while now.
static auto decrement = [](int start_value, int end_value, int time) {
    // I need this function to decrement start_value until it reaches end_value.
    // This should happen over a set time as well, in this case 40 seconds.
};
int cool_variable = decrement(2000, 0, 40); //40 seconds, the time should be expected in seconds
@DavidSchwartz has a great comment that should be considered a serious solution:
Why not just compute the correct value of cool_variable based on the clock whenever you need its value?
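That suggestion could be sketched like this (the `value_at` helper and its signature are my own illustration, not code from the comment): compute the current value from the elapsed time whenever it is needed, with no dedicated thread at all.

```cpp
#include <chrono>

// Linear interpolation from start_value down to end_value over `total`.
// The caller supplies elapsed = steady_clock::now() - t0.
int value_at(std::chrono::nanoseconds elapsed,
             int start_value, int end_value,
             std::chrono::nanoseconds total)
{
    if (elapsed >= total)
        return end_value;  // clamp once the time period is over
    return start_value
         + static_cast<int>((end_value - start_value) * elapsed / total);
}
```

With this approach nothing ever has to sleep or decrement; a reader just calls `value_at(std::chrono::steady_clock::now() - t0, 2000, 0, total)` on demand.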
That being said, this is an answer to the actual question: How to write this function:
decrement = [](int start_value, int end_value, int time)
Where cool_variable starts with the value start_value and decrements at a steady rate until it equals end_value, with the total amount of time for this multi-decrement operation being time seconds.
This is a function with a time deadline. It is well-established that for problems with a deadline, one should lean towards *_until solutions as opposed to *_for solutions in handling the time aspect. This implies that instead of sleeping for some time duration between decrements, we need to sleep until it is time to decrement from some value to the next lower value.
The use of sleep_until allows a somewhat varying time for each iteration of the decrement loop, while ensuring that the total time of the full loop closely approximates the total desired time.
To achieve the use of sleep_until, we need a (presumably) linear function:
duration next_time(int value) {return a0 + a1 * value;}
where next_time(start_value) == 0s and next_time(end_value) == seconds{time}.
We have two equations and two unknowns: a0 and a1. Solving gives a1 = time / (end_value - start_value) and a0 = -start_value * time / (end_value - start_value), which combine into our desired next_time function:
auto next_time = [&](int value)
{
    return (value - start_value) * time / (end_value - start_value);
};
Now for each value of cool_variable, one can sleep_until(t0 + next_time(cool_variable)) where t0 is the time where you want cool_variable == start_value (and thus want to sleep for 0 seconds).
The next most important thing (after use of sleep_until) is to use <chrono>. int time is an error-prone API that has no place in modern C++. The type of time should be a <chrono> duration such as seconds (or perhaps some other unit of time). Let's start with seconds:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
std::atomic<int> cool_variable{0};
void
decrement(int start_value, int end_value, std::chrono::seconds time)
{
    using namespace std;
    using namespace std::chrono;
    auto next_time = [&](int value)
    {
        return (value - start_value) * nanoseconds{time} / (end_value - start_value);
    };
    auto t0 = steady_clock::now();
    for (cool_variable = start_value; cool_variable >= end_value; --cool_variable)
    {
        this_thread::sleep_until(t0 + next_time(cool_variable));
        cout << cool_variable << endl;
    }
}
cool_variable is stored as an atomic<int> so that it can be concurrently read by other threads to avoid undefined behavior.
The input time variable is converted to nanoseconds precision in the computation so that the argument to sleep_until can be as precise as is practical.
Note that the current time need only be computed once, prior to the decrement loop.
Just as an example, cool_variable is printed to the terminal on each iteration. This is of course not necessary, and just used for demonstration purposes.
This can now be called like so:
decrement(2000, 0, 40s);
It can also be instructive to wrap the call to decrement with timing information in order to ensure that it is behaving as intended:
auto t0 = system_clock::now();
decrement(2000, 0, 40s);
auto t1 = system_clock::now();
std::cout << (t1-t0)/1s << '\n';
This will output each value of cool_variable between 2000 and 0 (inclusive), and then say how many seconds it took to do the operation (hopefully 40 in this example).
Finally, one minor simplification can be made:
Since we want time in nanoseconds for the computation, it is actually simpler to accept nanoseconds in the API, relieving us of the need to convert seconds to nanoseconds internally:
void
decrement(int start_value, int end_value, std::chrono::nanoseconds time)
{
    using namespace std;
    using namespace std::chrono;
    auto next_time = [&](int value)
    {
        return (value - start_value) * time / (end_value - start_value);
    };
    auto t0 = steady_clock::now();
    for (cool_variable = start_value; cool_variable >= end_value; --cool_variable)
    {
        this_thread::sleep_until(t0 + next_time(cool_variable));
        cout << cool_variable << endl;
    }
}
The client code need not change at all:
decrement(2000, 0, 40s);
The 40s argument will implicitly convert to 40'000'000'000ns at the call site. And this is why it is so important to use <chrono> types for time. Had we not done this, this final (minor) simplification would not have been minor at all. It would have required changing client code at the call site, which in real-world applications is often impractical.
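If you want to convince yourself of that implicit conversion, a compile-time check might look like this (a small sketch, not part of the original answer):

```cpp
#include <chrono>
#include <type_traits>

// seconds -> nanoseconds is lossless, so the conversion is implicit;
// the reverse (nanoseconds -> seconds) would require duration_cast.
static_assert(std::is_convertible<std::chrono::seconds,
                                  std::chrono::nanoseconds>::value,
              "seconds implicitly converts to nanoseconds");
```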
In Summary
Use sleep_until.
Use <chrono>.
Related
I am a beginner to C++, trying to improve my skills by working on a project.
I am trying to have my program call a certain function 100 times a second for 30 seconds.
I thought that this would be a common, well documented problem but so far I did not manage to find a solution.
Could anyone provide me with an implementation example or point me towards one?
Notes: my program is intended to be single-threaded and to use only the standard library.
There are two reasons you couldn't find a trivial answer:
This statement "I am trying to have my program call a certain function 100 times a second for 30 seconds" is not well-defined.
Timing and scheduling is a very complicated problem.
In a practical sense, if you just want something to run approximately 100 times a second for 30 seconds, assuming the function doesn't take long to run, you can say something like:
for (int i = 0; i < 3000; i++) {
    do_something();
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
This is an approximate solution.
Problems with this solution:
If do_something() takes longer than around 0.01 milliseconds, the error accumulates across the 3000 iterations and your timing will eventually be way off.
Most operating systems do not have very accurate sleep timing. There is no guarantee that asking to sleep for 10 milliseconds will wait for exactly 10 milliseconds. It will usually be approximately accurate.
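You can observe this yourself: sleep_for guarantees only a lower bound on the wait, and usually overshoots a little. A minimal measurement sketch (the helper name is mine):

```cpp
#include <chrono>
#include <thread>

// Ask for a sleep and report how long it actually took, in nanoseconds.
// sleep_for blocks for at least the requested duration, typically a bit more.
long long measured_sleep_ns(std::chrono::nanoseconds requested)
{
    auto t0 = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(requested);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0)
        .count();
}
```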
You can use std::this_thread::sleep_until and calculate the end time of the sleep according to desired frequency:
#include <chrono>
#include <iostream>
#include <thread>
#include <type_traits>

void f()
{
    static int counter = 0;
    std::cout << counter << '\n';
    ++counter;
}

int main() {
    using namespace std::chrono_literals;
    using Clock = std::chrono::steady_clock;
    // conversion to ms is needed to prevent truncation in the integral division
    constexpr auto period = std::chrono::duration_cast<std::chrono::milliseconds>(1s) / 100;
    constexpr auto repetitions = 30s / period;
    auto const start = Clock::now();
    for (std::remove_const_t<decltype(repetitions)> i = 1; i <= repetitions; ++i)
    {
        f();
        std::this_thread::sleep_until(start + period * i);
    }
}
Note that this code will not keep to the 100 Hz schedule if f() takes more than 10 ms to complete.
Note: The exact duration of the individual sleep_until calls may be off, but because each wake-up time is calculated from the fixed start point, sleep_until keeps any errors from accumulating.
You can't time it perfectly, but you can try like this:
using std::chrono::steady_clock;
using namespace std::this_thread;

auto running{ true };
auto frameTime{ std::chrono::duration_cast<steady_clock::duration>(std::chrono::duration<float>{1.0F / 100.0F}) };
auto delta{ steady_clock::duration::zero() };
while (running) {
    auto t0{ steady_clock::now() };
    while (delta >= frameTime) {
        call_your_function(frameTime);
        delta -= frameTime;
    }
    if (const auto dt{ delta + steady_clock::now() - t0 }; dt < frameTime) {
        sleep_for(frameTime - dt);
        delta += steady_clock::now() - t0;
    }
    else {
        delta += dt;
    }
}
What is the best way in C++11 to implement a high-resolution timer that continuously checks for time in a loop, and executes some code after it passes a certain point in time? e.g. check what time it is in a loop from 9am onwards and execute some code exactly at 11am. I require the timing to be precise (i.e. no more than 1 microsecond after 9am).
I will be implementing this program on Linux CentOS 7.3, and have no issues with dedicating CPU resources to execute this task.
Instead of implementing this manually, you could use e.g. a systemd.timer. Make sure to specify the desired accuracy which can apparently be as precise as 1us.
a high-resolution timer that continuously checks for time in a loop,
First of all, you do not want to continuously check the time in a loop; that's extremely inefficient and simply unnecessary.
...executes some code after it passes a certain point in time?
Ok so you want to run some code at a given time in the future, as accurately as possible.
The simplest way is to simply start a background thread, compute how long until the target time (in the desired resolution) and then put the thread to sleep for that time period. When your thread wakes up, it executes the actual task. This should be accurate enough for the vast majority of needs.
The std::chrono library provides calls which make this easy:
System clock in std::chrono
High resolution clock in std::chrono
Here's a snippet of code which does what you want using the system clock (which makes it easier to set a wall clock time):
// c++ --std=c++11 ans.cpp -o ans
#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <thread>
// do some busy work
int work(int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++)
    {
        sum += i;
    }
    return sum;
}
std::chrono::system_clock::time_point make_scheduled_time(int yyyy, int mm, int dd, int HH, int MM, int SS)
{
    tm datetime = tm{};
    datetime.tm_year = yyyy - 1900; // Year since 1900
    datetime.tm_mon = mm - 1;       // Month since January
    datetime.tm_mday = dd;          // Day of the month [1-31]
    datetime.tm_hour = HH;          // Hour of the day [00-23]
    datetime.tm_min = MM;
    datetime.tm_sec = SS;
    time_t ttime_t = mktime(&datetime);
    std::chrono::system_clock::time_point scheduled = std::chrono::system_clock::from_time_t(ttime_t);
    return scheduled;
}
void do_work_at_scheduled_time()
{
    using period = std::chrono::system_clock::period;
    auto sched_start = make_scheduled_time(2019, 9, 17, // date
                                           00, 14, 00); // time
    // Wait until the scheduled time to actually do the work
    std::this_thread::sleep_until(sched_start);
    // Figure out how close to scheduled time we actually awoke
    auto actual_start = std::chrono::system_clock::now();
    auto start_delta = actual_start - sched_start;
    float delta_ms = float(start_delta.count()) * period::num / period::den * 1e3f;
    std::cout << "worker: awoken within " << delta_ms << " ms" << std::endl;
    // Now do some actual work!
    int sum = work(12345);
}
int main()
{
    std::thread worker(do_work_at_scheduled_time);
    worker.join();
    return 0;
}
On my laptop, the typical latency is about 2-3ms. If you use the high_resolution_clock you should be able to get even better results.
There are other APIs you could use too, such as Boost, where you could use ASIO to implement a high-resolution timeout.
I require the timing to be precise (i.e. no more than 1 microsecond after 9am).
Do you really need it to be accurate to the microsecond? Consider that at this resolution, you will also need to take into account all sorts of other factors, including system load, latency, clock jitter, and so on. Your code can start to execute at close to that time, but that's only part of the problem.
My suggestion would be to use timer_create(). This allows you to get notified by a signal at a given time. You can then implement your action in the signal handler.
In any case you should be aware that the accuracy of course depends on the system clock accuracy.
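A hedged sketch of the timer_create() approach (assumptions: Linux/POSIX; the helper name run_once_after and the flag-based handler are mine). A one-shot timer delivers SIGRTMIN after roughly `ms` milliseconds, and the handler records that it ran:

```cpp
#include <csignal>
#include <ctime>
#include <unistd.h>

static volatile sig_atomic_t fired = 0;
static void on_timer(int) { fired = 1; }

bool run_once_after(long ms)
{
    struct sigaction sa = {};
    sa.sa_handler = on_timer;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGRTMIN, &sa, nullptr) != 0)
        return false;

    struct sigevent sev = {};
    sev.sigev_notify = SIGEV_SIGNAL;      // deliver a signal on expiry
    sev.sigev_signo = SIGRTMIN;
    timer_t id;
    if (timer_create(CLOCK_REALTIME, &sev, &id) != 0)
        return false;

    struct itimerspec its = {};           // one-shot: it_interval stays zero
    its.it_value.tv_sec = ms / 1000;
    its.it_value.tv_nsec = (ms % 1000) * 1000000L;
    timer_settime(id, 0, &its, nullptr);

    while (!fired)
        pause();                          // returns when the signal arrives
    timer_delete(id);
    return true;
}
```

Keep real work out of the signal handler; setting a flag and acting on it in normal code, as above, avoids the async-signal-safety restrictions.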
So I'm trying to call a function every n seconds. The below is a simple representation of what I'm trying to achieve. I wanted to know if the below method is the only way to achieve this. I would love if the "if" condition can be avoided.
#include <stdio.h>
#include <time.h>
void print_hello(int i) {
    printf("hello\n");
    printf("%d\n", i);
}

int main() {
    time_t start_t, end_t;
    double diff_t;
    time(&start_t);
    int i = 0;
    while (1) {
        time(&end_t);
        // printf("here in main");
        i = i + 1;
        diff_t = difftime(end_t, start_t);
        if (diff_t == 5) {
            // printf("Execution time = %f\n", diff_t);
            print_hello(i);
            time(&start_t);
        }
    }
    return 0;
}
The usage of time in the OP's program can be reduced to something like
// get tStart;
// set tEnd = tStart + x;
do {
// get t;
} while (t < tEnd);
This is what is called busy-wait.
It might be used to write code with the most precise timing, as well as in other special cases. The drawback is that the waiting consumes full CPU load. (You might even be able to hear this, by the rising fan noise.)
In general, however, spinning is considered an anti-pattern and should be avoided, as processor time that could be used to execute a different task is instead wasted on useless activity.
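For completeness, the busy-wait pattern the OP's loop amounts to can be written with `<chrono>` like this (a minimal sketch; the helper name is mine):

```cpp
#include <chrono>

// Spin at full CPU load until the deadline passes. Most precise,
// but burns a core for the whole wait.
void busy_wait(std::chrono::steady_clock::duration d)
{
    auto tEnd = std::chrono::steady_clock::now() + d;
    while (std::chrono::steady_clock::now() < tEnd)
        ;  // spin
}
```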
Another option is to delegate the wake-up to the system, which reduces the load of process/thread to minimum while waiting:
#include <chrono>
#include <iostream>
#include <thread>
void print_hello(int i)
{
    std::cout << "hello\n"
              << i << '\n';
}

int main()
{
    using namespace std::chrono_literals; // to support e.g. 5s for 5 seconds
    auto tStart = std::chrono::system_clock::now();
    for (int i = 1; i <= 3; ++i) {
        auto tEnd = tStart + 2s;
        std::this_thread::sleep_until(tEnd);
        print_hello(i);
        tStart = tEnd;
    }
}
Output:
hello
1
hello
2
hello
3
Live Demo on coliru
(I had to reduce the number of iterations and the waiting times to prevent a TLE in the online compiler.)
std::this_thread::sleep_until
Blocks the execution of the current thread until specified sleep_time has been reached.
The clock tied to sleep_time is used, which means that adjustments of the clock are taken into account. Thus, the duration of the block might, but might not, be less or more than sleep_time - Clock::now() at the time of the call, depending on the direction of the adjustment. The function also may block for longer than until after sleep_time has been reached due to scheduling or resource contention delays.
The last sentence mentions the drawback of this solution: the OS may decide to wake up the thread/process later than requested. That may happen e.g. if the OS is under high load. In the “normal” case, the latency shouldn't be more than a few milliseconds. So, the latency might be tolerable.
Please note how tEnd and tStart are updated in the loop. The actual wake-up time is not used as the new base, to prevent latencies from accumulating.
So I have a program that evaluates a polynomial in two different ways: Horner's method and a naive method. I'm trying to compare their run times, but depending on the order in which I place the function calls, their times change. For example, if I place the Horner method first, it takes longer; if I try the naive method first, then that one takes longer instead. The Horner method should be much, much faster since it has only one loop, while the naive method has a nested loop. So I figured it must be the way I'm using the clocks from the chrono library. I tried both high_resolution_clock and system_clock, but the same thing happens. Any help/comments are welcome.
#include <cstdlib>
#include <iostream>
#include <chrono>
#include "Polynomial.h"

int main(int argc, char** argv) {
    double c[5] = {5, 0, -3, 1, -8};
    int degree = 4;
    Polynomial obj(c, degree);

    auto start = std::chrono::high_resolution_clock::now();
    std::cout << "Horner Evaluation: " << obj.hornerEval(-2) << ", ";
    auto elapsed = std::chrono::high_resolution_clock::now() - start;
    auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count();
    std::cout << duration << " nanoseconds" << std::endl;

    auto start2 = std::chrono::high_resolution_clock::now();
    std::cout << "Naive Evaluation: " << obj.naiveEval(-2) << ", ";
    auto elapsed2 = std::chrono::high_resolution_clock::now() - start2;
    auto duration2 = std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed2).count();
    std::cout << duration2 << " nanoseconds" << std::endl;
}
You didn't include all the code, but from the description it looks like a caching effect.
When the first method runs, the CPU cache is cold (the data is not yet in the cache), so it takes more time to process (memory is slow compared to cache).
When the second method is called, all (or most, depending on data size) of the data is already available in the cache: the cache is hot.
Solution: call both methods once outside the timed section first to warm up the cache, then do the measurements.
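The warm-up idea can be sketched with a stand-in workload, since the Polynomial class is not shown (the timing helper and all names here are mine):

```cpp
#include <chrono>
#include <numeric>
#include <vector>

// Generic "time one call" helper; returns elapsed nanoseconds.
template <class F>
long long time_ns(F&& f)
{
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0)
        .count();
}

// Stand-in workload that touches every element of the data.
long long sum_of(const std::vector<int>& v)
{
    return std::accumulate(v.begin(), v.end(), 0LL);
}

long long measure_warm(const std::vector<int>& v)
{
    sum_of(v);                           // warm-up pass: populate the cache
    return time_ns([&] { sum_of(v); }); // timed pass runs on a hot cache
}
```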
Like one of the previous answers already said, it's probably something with the cache; the prefetcher can maybe better determine which memory to load into cache in the naiveEval method. Here is a talk about benchmarking C++ code, for further information, for example on the effect of cold starts on benchmarking: https://www.youtube.com/watch?v=zWxSZcpeS8Q
I basically have a school project testing how long different sort algorithms take to sort n numbers. I decided to use the Boost library with C++ to record the time. I am at the point where I am not sure how to do it; I have googled it and found people using different ways, for example:
auto start = boost::chrono::high_resolution_clock::now();
auto end = boost::chrono::high_resolution_clock::now();
auto time = (end-start).count();
or
boost::chrono::system_clock::now();
or
boost::chrono::steady_clock::now()
or even using something like this
boost::timer::cpu_timer and boost::timer::auto_cpu_time
or
boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time( );
so I want to be sure I'm doing it right. This is what I have now:
typedef boost::chrono::duration<double, boost::nano> boost_nano;

auto start_t = boost::chrono::high_resolution_clock::now();
// call function
auto end_t = boost::chrono::high_resolution_clock::now();
boost_nano time = (end_t - start_t);
cout << time.count();
so am I on the right track?
You likely want the high resolution timer.
You can use either that of boost::chrono or std::chrono.
Boost Chrono has some support for IO builtin, so it makes it easier to report times in a human friendly way.
I usually use a wrapper similar to this:
template <typename Caption, typename F>
auto timed(Caption const& task, F&& f) {
    using namespace boost::chrono;
    struct _ {
        high_resolution_clock::time_point s;
        Caption const& task;
        ~_() { std::cout << " -- (" << task << " completed in " << duration_cast<milliseconds>(high_resolution_clock::now() - s) << ")\n"; }
    } timing { high_resolution_clock::now(), task };
    return f();
}
Which reports time taken in milliseconds.
The good part here is that you can time construction and similar:
std::vector<int> large = timed("generate data", [] {
return generate_uniform_random_data(); });
But also, general code blocks:
timed("do_step2", [] {
// step two is foo and bar:
foo();
bar();
});
And it works if e.g. foo() throws, just fine.
DEMO
Live On Coliru
int main() {
    return timed("demo task", [] {
        sleep(1);
        return 42;
    });
}
Prints
-- (demo task completed in 1000 milliseconds)
42
I typically use time(0) to control the duration of a loop. time(0) is simply one time measurement that, because of its own short duration, has the least impact on everything else going on (and you can even run a do-nothing loop to capture how much to subtract from any other loop measurement effort).
So in a loop running for 3 (or 10 seconds), how many times can the loop invoke the thing you are trying to measure?
Here is an example of how my older code measures the duration of 'getpid()'
uint32_t spinPidTillTime0SecChange(volatile int& pid)
{
    uint32_t spinCount = 1; // getpid() invocation count
    // no measurement, just spinning
    ::time_t tStart = ::time(nullptr);
    ::time_t tEnd = tStart;
    while (0 == (tEnd - tStart)) // (tStart == tEnd)
    {
        pid = ::getpid();
        tEnd = ::time(nullptr);
        spinCount += 1;
    }
    return spinCount;
}
Invoke this 3 (or 10) times, adding the return values together. To make it easy, discard the first measurement (because it probably will be a partial second).
Yes, I am sure there is a C++11 way of accessing what time(0) accesses.
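For reference, one C++11 spelling of the same second-resolution timestamp might be (a small sketch; the helper name is mine):

```cpp
#include <chrono>
#include <ctime>

// C++11 equivalent of time(0): take the wall clock and convert to time_t.
std::time_t now_seconds()
{
    return std::chrono::system_clock::to_time_t(
        std::chrono::system_clock::now());
}
```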
Use std::chrono::steady_clock or std::chrono::high_resolution_clock (if it is steady - see below) and not std::chrono::system_clock for measuring run time in C++11 (or use its boost equivalent). The reason is (quoting system_clock's documentation):
on most systems, the system time can be adjusted at any moment
while steady_clock is monotonic and is better suited for measuring intervals:
Class std::chrono::steady_clock represents a monotonic clock. The time
points of this clock cannot decrease as physical time moves forward.
This clock is not related to wall clock time, and is best suitable for
measuring intervals.
Here's an example:
auto start = std::chrono::steady_clock::now();
// do something
auto finish = std::chrono::steady_clock::now();
double elapsed_seconds = std::chrono::duration_cast<
std::chrono::duration<double> >(finish - start).count();
A small practical tip: if you are measuring run time and want to report seconds std::chrono::duration_cast<std::chrono::seconds> is rarely what you need because it gives you whole number of seconds. To get the time in seconds as a double use the example above.
As suggested by Gregor McGregor, you can use a high_resolution_clock which may sometimes provide higher resolution (although it can be an alias of steady_clock), but beware that it may also be an alias of system_clock, so you might want to check is_steady.
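Putting those recommendations together, a minimal sketch might look like this (the static_assert documents the monotonicity guarantee; the helper name is mine):

```cpp
#include <chrono>

// The standard guarantees steady_clock is monotonic.
static_assert(std::chrono::steady_clock::is_steady,
              "steady_clock must be monotonic");

// Measure the elapsed wall time of a callable, in fractional seconds.
template <class F>
double elapsed_seconds(F&& f)
{
    auto start = std::chrono::steady_clock::now();
    f();
    auto finish = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(finish - start).count();
}
```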