Incorrect Interval Timer for a CallBack function in C++

I found this class on the web to implement a callback function that asynchronously does some work while I stay on the main thread. This is the class:
#include "callbacktimer.h"
CallBackTimer::CallBackTimer()
:_execute(false)
{}
CallBackTimer::~CallBackTimer() {
if( _execute.load(std::memory_order_acquire) ) {
stop();
};
}
void CallBackTimer::stop()
{
_execute.store(false, std::memory_order_release);
if( _thd.joinable() )
_thd.join();
}
void CallBackTimer::start(int interval, std::function<void(void)> func)
{
if( _execute.load(std::memory_order_acquire) ) {
stop();
};
_execute.store(true, std::memory_order_release);
_thd = std::thread([this, interval, func]()
{
while (_execute.load(std::memory_order_acquire)) {
func();
std::this_thread::sleep_for(
std::chrono::milliseconds(interval)
);
}
});
}
bool CallBackTimer::is_running() const noexcept {
return ( _execute.load(std::memory_order_acquire) &&
_thd.joinable() );
}
The problem is that if I schedule a job to run every millisecond, it is for some reason repeated every 64 milliseconds instead of every 1 millisecond. This snippet gives an idea:
#include "callbacktimer.h"
int main()
{
CallBackTimer cBT;
int i = 0;
cBT.start(1, [&]()-> void {
i++;
});
while(true)
{
std::cout << i << std::endl;
Sleep(1000);
}
return 0;
}
Here I should see 1000, 2000, 3000, and so on, on the standard output, but I don't...

It's quite hard to do something on a PC at a 1 ms interval. Thread scheduling happens at 1/64 s, which is about 16 ms.
When you try to sleep for 1 ms, the thread will likely sleep for 1/64 s instead, assuming no other thread is scheduled to run. While your main thread sleeps for one second, your callback timer can therefore run at most about 64 times during that interval, which matches the increments of 64 you observe.
See also How often per second does Windows do a thread switch?
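You can observe this quantisation yourself by timing a batch of 1 ms sleeps. A small self-contained sketch (the ~1560 ms figure in the comment assumes the default 15.6 ms Windows timer resolution):
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    for (int i = 0; i < 100; ++i)
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    std::chrono::duration<double, std::milli> elapsed = clock::now() - start;
    // with a 15.6 ms scheduling quantum this prints something near 1560, not 100
    std::cout << "100 x sleep_for(1ms) took " << elapsed.count() << " ms\n";
}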
You can try multimedia timers which may go down to 1 millisecond.
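For example, here is a minimal sketch along those lines (Windows only, link against winmm.lib; onTick and g_ticks are just illustrative names):
#include <windows.h>
#include <mmsystem.h>   // multimedia timer API; e.g. #pragma comment(lib, "winmm.lib")
#include <iostream>

volatile LONG g_ticks = 0;

void CALLBACK onTick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
{
    InterlockedIncrement(&g_ticks);   // the "work" done every millisecond
}

int main()
{
    timeBeginPeriod(1);                                          // request 1 ms timer resolution
    MMRESULT timerId = timeSetEvent(1, 1, onTick, 0, TIME_PERIODIC);
    Sleep(1000);
    std::cout << g_ticks << " ticks in one second\n";            // ideally close to 1000
    timeKillEvent(timerId);
    timeEndPeriod(1);
}
With timeBeginPeriod(1) in effect the periodic callback should fire close to 1000 times per second, which is what the original snippet expected.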
I'm trying to implement a chronometer in Qt which should also show microseconds.
Well, you can show microseconds, I guess. But your function won't run every microsecond.

Related

How to get local hour efficiently?

I'm developing a service. Currently I need to get the local hour for every request; since that involves a system call, it costs too much.
In my case, some deviation like 200ms is OK for me.
So what's the best way to maintain a variable storing local_hour, and update it every 200ms?
static int32_t GetLocalHour() {
    time_t t = std::time(nullptr);
    if (t == -1) { return -1; }
    struct tm *time_info_ptr = localtime(&t);
    return (nullptr != time_info_ptr) ? time_info_ptr->tm_hour : -1;
}
If you want your main thread to spend as little time as possible on getting the current hour you can start a background thread to do all the heavy lifting.
For all things time use std::chrono types.
Here is the example, which uses quite a few (very useful) multithreading building blocks from C++.
#include <chrono>
#include <future>
#include <condition_variable>
#include <mutex>
#include <atomic>
#include <iostream>
// building blocks
// std::future/std::async, to start a loop/function on a separate thread
// std::atomic, to be able to read/write threadsafely from a variable
// std::chrono, for all things time
// std::condition_variable, for communicating between threads. Basically a signal that only indicates that something has changed that might be interesting
// lambda functions : anonymous functions that are useful in this case for starting the asynchronous calls and to setup predicates (functions returning a bool)
// std::mutex : threadsafe access to a bit of code
// std::unique_lock : to automatically unlock a mutex when code goes out of scope (also needed for condition_variable)
// helper to convert time to start of day
using days_t = std::chrono::duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24> >::type>;
// class that has an asynchronously running loop that updates two variables (threadsafe)
// m_hours and m_seconds (m_seconds so output is a bit more interesting)
class time_keeper_t
{
public:
time_keeper_t() :
m_delay{ std::chrono::milliseconds(200) }, // update loop period
m_future{ std::async(std::launch::async,[this] {update_time_loop(); }) } // start update loop
{
std::unique_lock<std::mutex> lock{ m_mtx };
// wait until the asynchronous loop has started.
// this can take a bit of time since the OS needs to schedule a thread for it
m_cv.wait(lock, [this] {return m_started; });
}
~time_keeper_t()
{
// threadsafe stopping of the mainloop
// to avoid problems that the thread is still running but the object
// with members is deleted.
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_stop = true;
m_cv.notify_all(); // this will wakeup the loop and stop
}
// future.get will wait until the loop also has finished
// this ensures no member variables will be accessed
// by the loop thread and it is safe to fully destroy this instance
m_future.get();
}
// inline to avoid extra calls
inline int hours() const
{
return m_hours;
}
// inline to avoid extra calls
inline int seconds() const
{
return m_seconds;
}
private:
void update_time()
{
m_now = std::chrono::steady_clock::now();
std::chrono::steady_clock::duration tp = m_now.time_since_epoch();
// calculate back till start of day
days_t days = std::chrono::duration_cast<days_t>(tp);
tp -= days;
// calculate hours since start of day
auto hours = std::chrono::duration_cast<std::chrono::hours>(tp);
tp -= hours;
m_hours = hours.count();
// seconds within the current minute (just to make the output a bit more interesting)
auto seconds = std::chrono::duration_cast<std::chrono::seconds>(tp);
m_seconds = seconds.count() % 60;
}
void update_time_loop()
{
std::unique_lock<std::mutex> lock{ m_mtx };
update_time();
// loop has started and has initialized all things time with values
m_started = true;
m_cv.notify_all();
// stop condition for the main loop, put in a predicate lambda
auto stop_condition = [this]()
{
return m_stop;
};
while (!m_stop)
{
// wait until m_cv is signaled or m_delay timed out
// a condition variable allows instant response and thus
// is better than just having a sleep here.
// (imagine a delay of seconds, that would also mean stopping could
// take seconds, this is faster)
m_cv.wait_for(lock, m_delay, stop_condition);
if (!m_stop) update_time();
}
}
std::atomic<int> m_hours;
std::atomic<int> m_seconds;
std::mutex m_mtx;
std::condition_variable m_cv;
bool m_started{ false };
bool m_stop{ false };
std::chrono::steady_clock::time_point m_now;
std::chrono::steady_clock::duration m_delay;
std::future<void> m_future;
};
int main()
{
time_keeper_t time_keeper;
// the mainloop now just can ask the time_keeper for seconds
// or in your case hours. The only time needed is the time
// to return an int (atomic) instead of having to make a full
// api call to get the time.
for (std::size_t n = 0; n < 30; ++n)
{
std::cout << "seconds now = " << time_keeper.seconds() << "\n";
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
return 0;
}
You don't need to query the local time for every request because the hour doesn't change every 200 ms. Just update the local-hour variable every hour.
The most correct solution would be registering for a timer event, like a scheduled task on Windows or a cron job on Linux, that runs at the start of every hour. Alternatively, create a timer that fires every hour and updates the variable.
Timer creation depends on the platform: on Windows use SetTimer, on Linux use timer_create. Here's a very simple solution using boost::asio which assumes that you start it on the exact hour. You'll need to make some modifications to allow it to start at any time, for example by creating a one-shot timer or by sleeping until the next hour.
#include <chrono>
#include <cstdint>
#include <ctime>
#include <thread>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

using namespace std::chrono_literals;

int32_t get_local_hour()
{
    time_t t = std::time(nullptr);
    if (t == -1) { return -1; }
    struct tm *time_info_ptr = localtime(&t);
    return (nullptr != time_info_ptr) ? time_info_ptr->tm_hour : -1;
}

static int32_t local_hour = get_local_hour();
bool running = true;

// Timer callback, called every hour: re-arm the timer, then refresh the cached hour
void update_local_hour(const boost::system::error_code& /*e*/,
                       boost::asio::deadline_timer* t)
{
    if (running)
    {
        t->expires_at(t->expires_at() + boost::posix_time::hours(1));
        t->async_wait(boost::bind(update_local_hour,
                                  boost::asio::placeholders::error, t));
        local_hour = get_local_hour();
    }
}

int main()
{
    boost::asio::io_service io;
    // Timer that fires every hour and updates the local_hour variable
    boost::asio::deadline_timer t(io, boost::posix_time::hours(1));
    t.async_wait(boost::bind(update_local_hour,
                             boost::asio::placeholders::error, &t));
    running = true;
    // io.run() blocks while the timer keeps rescheduling itself, so run it on a worker thread
    std::thread io_thread([&io] { io.run(); });
    std::this_thread::sleep_for(3h);
    running = false; // stop the timer; the loop winds down after its next tick
    io_thread.join();
}
Now just use local_hour directly instead of GetLocalHour(). Since the timer thread writes local_hour while request threads read it, consider making it a std::atomic<int32_t>.

libuv - Limiting the callback rate of an idle event without blocking the thread and without multithreading

I'm using libsourcey which uses libuv as its underlying I/O networking layer.
Everything is set up and seems to run (I haven't tested anything at all yet since I'm only prototyping and experimenting). However, I require that, next to the application loop (the one that comes with libsourcey, which relies on libuv's loop), an "idle function" is also called. As it is now, the idle callback is invoked on every loop cycle, which is very CPU consuming. I need a way to limit the call rate of the uv_idle_cb without blocking the calling thread, which is the same one the application uses to process I/O data (not sure about this last statement, correct me if I'm mistaken).
The idle function will be managing several different aspects of the application and it needs to run only x times within 1 second. Also, everything needs to run on the same thread (I'm planning to upgrade an older application's network infrastructure, which runs entirely single-threaded).
This is the code I have so far, which also includes the test I did with sleeping the thread inside the callback; but that blocks everything, so even the 2nd idle callback I set up ends up with the same call rate as the 1st one.
struct TCPServers
{
CTCPManager<scy::net::SSLSocket> ssl;
};
int counter = 0;
void idle_cb(uv_idle_t *handle)
{
printf("Idle callback %d TID %d\n", counter, std::this_thread::get_id());
counter++;
std::this_thread::sleep_for(std::chrono::milliseconds(1000 / 25));
}
int counter2 = 0;
void idle_cb2(uv_idle_t *handle)
{
printf("Idle callback2 %d TID %d\n", counter2, std::this_thread::get_id());
counter2++;
std::this_thread::sleep_for(std::chrono::milliseconds(1000 / 50));
}
class CApplication : public scy::Application
{
public:
CApplication() : scy::Application(), m_uvIdleCallback(nullptr), m_bUseSSL(false)
{}
void start()
{
run();
if (m_uvIdleCallback)
uv_idle_start(&m_uvIdle, m_uvIdleCallback);
if (m_uvIdleCallback2)
uv_idle_start(&m_uvIdle2, m_uvIdleCallback2);
}
void stop()
{
scy::Application::stop();
uv_idle_stop(&m_uvIdle);
if (m_bUseSSL)
scy::net::SSLManager::instance().shutdown();
}
void bindIdleEvent(uv_idle_cb cb)
{
m_uvIdleCallback = cb;
uv_idle_init(loop, &m_uvIdle);
}
void bindIdleEvent2(uv_idle_cb cb)
{
m_uvIdleCallback2 = cb;
uv_idle_init(loop, &m_uvIdle2);
}
void initSSL(const std::string& privateKeyFile = "", const std::string& certificateFile = "")
{
scy::net::SSLManager::instance().initNoVerifyServer(privateKeyFile, certificateFile);
m_bUseSSL = true;
}
private:
uv_idle_t m_uvIdle;
uv_idle_t m_uvIdle2;
uv_idle_cb m_uvIdleCallback;
uv_idle_cb m_uvIdleCallback2;
bool m_bUseSSL;
};
int main()
{
CApplication app;
app.bindIdleEvent(idle_cb);
app.bindIdleEvent2(idle_cb2);
app.initSSL();
app.start();
TCPServers srvs;
srvs.ssl.start("127.0.0.1", 9000);
app.waitForShutdown([&](void*) {
srvs.ssl.shutdown();
});
app.stop();
system("PAUSE");
return 0;
}
Thanks in advance if anyone can help out.
Solved the problem by using uv_timer_t and uv_timer_cb (I hadn't dug into libuv's docs yet). CPU usage went down drastically and nothing gets blocked.
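For reference, a minimal sketch of that approach against plain libuv (outside libsourcey; names are illustrative):
#include <uv.h>
#include <cstdio>

int counter = 0;

void timer_cb(uv_timer_t* /*handle*/)
{
    std::printf("Timer callback %d\n", counter++);
}

int main()
{
    uv_loop_t* loop = uv_default_loop();
    uv_timer_t timer;
    uv_timer_init(loop, &timer);
    // first fire after 0 ms, then repeat every 40 ms (about 25 calls per second)
    uv_timer_start(&timer, timer_cb, 0, 40);
    return uv_run(loop, UV_RUN_DEFAULT);
}
Unlike an idle handle, the timer only wakes the loop when it is due, so the CPU is not spinning between calls.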

Timer callback with low precision in microseconds with thread and lock in C++

I wrote a timer-callback class, but its callbacks don't fire as fast as requested.
class Manager
{
    ...
    void CallFunction(Function<Treturn>* m_function)
    {
        do
        {
            if (m_Status == TimerStatus::Paused)
            {
                unique_lock<mutex> lock(m_MutexLock);
                m_Notifier.wait(lock);
            }
            while (m_Status != TimerStatus::Stoped && m_Status != TimerStatus::Paused)
            {
                unique_lock<mutex> lock(m_MutexLock);
                m_Notifier.wait_for(lock, m_Interval);
                (*m_function)();
            }
        } while (m_Status == TimerStatus::Paused);
    }
    ...
}
But when I set the timer to call the function every 1 ms, it doesn't; it takes longer, for example 10 ms. I need help improving the code so that the callback (the event function of this timer) is called every 1 ms and runs in one thread with a lock. How can I do this? Sample use of this class:
Manager tester;
TimerCallback<microseconds> m_timer(chrono::microseconds(10),Core::Utility::ptr_fun(&tester,&Manager::runner));

Calling functions at timed intervals using threads

I'm building a simulator to test student code for a very simple robot. I need to run two functions (to update the robot sensors and the robot position) on separate threads at regular time intervals. My current implementation is highly processor-inefficient because it has a thread dedicated to simply incrementing numbers to keep track of the position in the code. My latest idea is that I may be able to use sleep to provide the time delay between updating the sensor values and the robot position. My first question is: is this efficient? Second: is there any way to do a similar thing but measure clock cycles instead of seconds?
Putting a thread to sleep by waiting on a mutex-like object is generally efficient. A common pattern involves waiting on a mutex with a timeout. When the timeout is reached, the interval is up. When the mutex is released, that is the signal for the thread to terminate.
Pseudocode:
void threadMethod() {
for(;;) {
bool signalled = this->mutex.wait(1000);
if(signalled) {
break; // Signalled, owner wants us to terminate
}
// Timeout, meaning our wait time is up
doPeriodicAction();
}
}
void start() {
this->mutex.enter();
this->thread.start(threadMethod);
}
void stop() {
this->mutex.leave();
this->thread.join();
}
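A C++11 sketch of the same pattern, using a condition variable with wait_for in place of the timed mutex wait (class and method names are illustrative):
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

class PeriodicWorker {
public:
    void start(std::chrono::milliseconds interval)
    {
        worker_ = std::thread([this, interval] {
            std::unique_lock<std::mutex> lock(mtx_);
            // wait_for returns false on timeout (interval elapsed) and true when
            // stop() signalled us, mirroring the "timeout means do the periodic action" pattern
            while (!cv_.wait_for(lock, interval, [this] { return stop_; }))
                doPeriodicAction();
        });
    }
    void stop()
    {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            stop_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }
private:
    void doPeriodicAction() { std::cout << "tick\n"; }
    std::thread worker_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool stop_ = false;
};

int main()
{
    PeriodicWorker w;
    w.start(std::chrono::milliseconds(250));
    std::this_thread::sleep_for(std::chrono::seconds(2));
    w.stop();   // wakes the worker immediately instead of waiting out the interval
}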
On Windows systems, timeouts are generally specified in milliseconds and are accurate to roughly within 16 milliseconds (timeBeginPeriod() may be able to improve this). I do not know of a CPU cycle-triggered synchronization primitive. There are lightweight mutexes called "critical sections" that spin the CPU for a few thousand cycles before delegating to the OS thread scheduler. Within this time they are fairly accurate.
On Linux systems the accuracy may be a bit higher (high-frequency timer or tickless kernel) and, in addition to mutexes, there are "futexes" (fast userspace mutexes), which are similar to Windows' critical sections.
I'm not sure I grasped what you're trying to achieve, but if you want to test student code, you might want to use a virtual clock and control the passing of time yourself. For example by calling a processInputs() and a decideMovements() method that the students have to provide. After each call, 1 time slot is up.
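A minimal sketch of that idea (StudentRobot, processInputs and decideMovements are hypothetical names for whatever interface the students implement):
#include <iostream>

// hypothetical interface the students fill in
struct StudentRobot {
    void processInputs()   { /* read simulated sensor values */ }
    void decideMovements() { /* update simulated motor commands */ }
};

void runSimulation(StudentRobot& robot, int totalSlots)
{
    for (int slot = 0; slot < totalSlots; ++slot) {
        robot.processInputs();
        robot.decideMovements();
        // one virtual time slot has elapsed; no real-time sleeping or threads needed
    }
    std::cout << "simulated " << totalSlots << " time slots\n";
}

int main()
{
    StudentRobot robot;
    runSimulation(robot, 1000);
}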
This C++11 code uses std::chrono::high_resolution_clock to measure subsecond timing, and std::thread to run three threads. The std::this_thread::sleep_for() function is used to sleep for a specified time.
#include <iostream>
#include <thread>
#include <vector>
#include <chrono>
void seconds()
{
using namespace std::chrono;
high_resolution_clock::time_point t1, t2;
for (unsigned i=0; i<10; ++i) {
std::cout << i << "\n";
t1 = high_resolution_clock::now();
std::this_thread::sleep_for(std::chrono::seconds(1));
t2 = high_resolution_clock::now();
duration<double> elapsed = duration_cast<duration<double> >(t2-t1);
std::cout << "\t( " << elapsed.count() << " seconds )\n";
}
}
int main()
{
std::vector<std::thread> t;
t.push_back(std::thread{[](){
std::this_thread::sleep_for(std::chrono::seconds(3));
std::cout << "awoke after 3\n"; }});
t.push_back(std::thread{[](){
std::this_thread::sleep_for(std::chrono::seconds(7));
std::cout << "awoke after 7\n"; }});
t.push_back(std::thread{seconds});
for (auto &thr : t)
thr.join();
}
It's hard to know whether this meets your needs because there are a lot of details missing from the question. Under Linux, compile with:
g++ -Wall -Wextra -pedantic -std=c++11 timers.cpp -o timers -lpthread
Output on my machine:
0
( 1.00014 seconds)
1
( 1.00014 seconds)
2
awoke after 3
( 1.00009 seconds)
3
( 1.00015 seconds)
4
( 1.00011 seconds)
5
( 1.00013 seconds)
6
awoke after 7
( 1.0001 seconds)
7
( 1.00015 seconds)
8
( 1.00014 seconds)
9
( 1.00013 seconds)
Other C++11 standard features that may be of interest include timed_mutex and promise/future.
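For instance, a promise/future pair can double as both the periodic timeout and the stop signal; a small sketch:
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

void periodic(std::future<void> stop)
{
    // wait_for keeps returning timeout until the promise is fulfilled
    while (stop.wait_for(std::chrono::milliseconds(500)) == std::future_status::timeout)
        std::cout << "tick\n";
}

int main()
{
    std::promise<void> stop_signal;
    std::thread worker(periodic, stop_signal.get_future());
    std::this_thread::sleep_for(std::chrono::seconds(3));
    stop_signal.set_value();   // wakes the worker immediately and ends the loop
    worker.join();
}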
Yes, your theory is correct. You can use sleep to put some delay between executions of a function by a thread. Efficiency depends on how wide you can make that delay and still get the desired result. You would have to explain the details of your implementation; for example, we don't know whether the two threads are dependent (in that case you have to take care of synchronization, which would burn some cycles).
Here's one way to do it. I'm using C++11: threads, atomics and a high-resolution clock. The scheduler calls back a function that takes dt seconds, which is the time elapsed since the last call. The loop can be stopped by calling the stop() method or by returning false from the callback function.
Scheduler code
#include <thread>
#include <chrono>
#include <functional>
#include <atomic>
#include <system_error>
class ScheduledExecutor {
public:
ScheduledExecutor()
{}
ScheduledExecutor(const std::function<bool(double)>& callback, double period)
{
initialize(callback, period);
}
void initialize(const std::function<bool(double)>& callback, double period)
{
callback_ = callback;
period_ = period;
keep_running_ = false;
}
void start()
{
keep_running_ = true;
sleep_time_sum_ = 0;
period_count_ = 0;
th_ = std::thread(&ScheduledExecutor::executorLoop, this);
}
void stop()
{
keep_running_ = false;
try {
th_.join();
}
catch(const std::system_error& /* e */)
{ }
}
double getSleepTimeAvg()
{
//TODO: make this function thread safe by using atomic types
//right now this is not implemented for performance and that
//return of this function is purely informational/debugging purposes
return sleep_time_sum_ / period_count_;
}
unsigned long getPeriodCount()
{
return period_count_;
}
private:
typedef std::chrono::high_resolution_clock clock;
template <typename T>
using duration = std::chrono::duration<T>;
void executorLoop()
{
clock::time_point call_end = clock::now();
while (keep_running_) {
clock::time_point call_start = clock::now();
duration<double> since_last_call = call_start - call_end;
if (period_count_ > 0 && !callback_(since_last_call.count()))
break;
call_end = clock::now();
duration<double> call_duration = call_end - call_start;
double sleep_for = period_ - call_duration.count();
sleep_time_sum_ += sleep_for;
++period_count_;
if (sleep_for > MinSleepTime)
std::this_thread::sleep_for(std::chrono::duration<double>(sleep_for));
}
}
private:
double period_;
std::thread th_;
std::function<bool(double)> callback_;
std::atomic_bool keep_running_;
static constexpr double MinSleepTime = 1E-9;
double sleep_time_sum_;
unsigned long period_count_;
};
Example usage
bool worldUpdator(World& w, double dt)
{
w.update(dt);
return true;
}
int main() {
//create world for your simulator
World w(...);
//start scheduler loop for every 2ms calls
ScheduledExecutor exec;
exec.initialize(
std::bind(worldUpdator, std::ref(w), std::placeholders::_1),
2E-3);
exec.start();
//main thread just checks on the results every now and then
while (true) {
if (exec.getPeriodCount() % 10000 == 0) {
std::cout << exec.getSleepTimeAvg() << std::endl;
}
}
}
There are also other, related questions on SO.

Basic timer with std::thread and std::chrono

I'm trying to implement a basic timer with the classic methods: start() and stop(). I'm using C++11 with std::thread and std::chrono.
Start method: creates a new thread that sleeps for a given interval, then executes a given std::function. This process is repeated while a 'running' flag is true.
Stop method: just sets the 'running' flag to false.
I created and started a Timer object that shows "Hello!" every second; then from another thread I try to stop the timer, but I can't. The Timer never stops.
I think the problem is with th.join() [*], which blocks execution until the thread has finished; but when I remove the th.join() line the program obviously finishes before the timer starts to count.
So, my question is: how do I run a thread without stopping the other threads?
#include <iostream>
#include <thread>
#include <chrono>
#include <functional>
using namespace std;
class Timer
{
thread th;
bool running = false;
public:
typedef std::chrono::milliseconds Interval;
typedef std::function<void(void)> Timeout;
void start(const Interval &interval,
const Timeout &timeout)
{
running = true;
th = thread([=]()
{
while (running == true) {
this_thread::sleep_for(interval);
timeout();
}
});
// [*]
th.join();
}
void stop()
{
running = false;
}
};
int main(void)
{
Timer tHello;
tHello.start(chrono::milliseconds(1000),
[]()
{
cout << "Hello!" << endl;
});
thread th([&]()
{
this_thread::sleep_for(chrono::seconds(2));
tHello.stop();
});
th.join();
return 0;
}
Output:
Hello!
Hello!
...
...
...
Hello!
In Timer::start, you create a new thread in th and then immediately join it with th.join(). Effectively, start won't return until that spawned thread exits. Of course, it won't ever exit because nothing will set running to false until after start returns...
Don't join a thread until you intend to wait for it to finish. In this case, in stop after setting running = false is probably the correct place.
Also - although it's not incorrect - there's no need to make another thread in main to call this_thread::sleep_for. You can simply do so with the main thread:
int main()
{
Timer tHello;
tHello.start(chrono::milliseconds(1000), []{
cout << "Hello!" << endl;
});
this_thread::sleep_for(chrono::seconds(2));
tHello.stop();
}
Instead of placing the join in start, place it after running = false in stop. Then the stop method will effectively wait until the thread has completed before returning.
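Putting both points together, a minimal sketch of the corrected class (running is additionally made std::atomic, which the original code did not do, so the worker thread reliably sees the change made by stop()):
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

class Timer
{
    std::thread th;
    std::atomic<bool> running{false};

public:
    typedef std::chrono::milliseconds Interval;
    typedef std::function<void(void)> Timeout;

    void start(const Interval &interval, const Timeout &timeout)
    {
        running = true;
        th = std::thread([=]()
        {
            while (running) {
                std::this_thread::sleep_for(interval);
                timeout();
            }
        });
        // no join here: start() returns immediately and the timer keeps running
    }

    void stop()
    {
        running = false;
        if (th.joinable())
            th.join();   // wait for the worker thread to finish
    }
};

int main()
{
    Timer tHello;
    tHello.start(std::chrono::milliseconds(1000), [] { std::cout << "Hello!" << std::endl; });
    std::this_thread::sleep_for(std::chrono::seconds(2));
    tHello.stop();
}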