C++ callback timer implementation

I have found the following implementation for a callback timer to use in my C++ application. However, this implementation requires me to "join" the thread from the start caller, which effectively blocks the caller of the start function.
What I would really like to do is the following.
Someone can call foo(data) multiple times and the data items are stored in a db.
Whenever foo(data) is called, it starts a timer for a few seconds.
While the timer is counting down, foo(data) can be called several more times and multiple items can be stored, but erase is not called until the timer finishes.
Whenever the timer is up, the "remove" function is called once to remove all the records from the db.
Basically, I want to be able to do a task, wait a few seconds, and then perform a single batch task B.
class CallBackTimer {
public:
/**
* Constructor of the CallBackTimer
*/
CallBackTimer() :_execute(false) { }
/**
* Destructor
*/
~CallBackTimer() {
if (_execute.load(std::memory_order_acquire)) {
stop();
};
}
/**
* Stops the timer
*/
void stop() {
_execute.store(false, std::memory_order_release);
if (_thd.joinable()) {
_thd.join();
}
}
/**
* Start the timer function
* @param interval Repeating duration in milliseconds; 0 indicates that func will run only once
* @param delay Time in milliseconds to wait before the first callback
* @param func Callback function
*/
void start(int interval, int delay, std::function<void(void)> func) {
if(_execute.load(std::memory_order_acquire)) {
stop();
};
_execute.store(true, std::memory_order_release);
_thd = std::thread([this, interval, delay, func]() {
std::this_thread::sleep_for(std::chrono::milliseconds(delay));
if (interval == 0) {
func();
stop();
} else {
while (_execute.load(std::memory_order_acquire)) {
func();
std::this_thread::sleep_for(std::chrono::milliseconds(interval));
}
}
});
}
/**
* Check if the timer is currently running
* @return bool, true if timer is running, false otherwise.
*/
bool is_running() const noexcept {
return ( _execute.load(std::memory_order_acquire) && _thd.joinable() );
}
private:
std::atomic<bool> _execute;
std::thread _thd;
};
I have tried modifying the above code using thread.detach(). However, I am running into issues with the detached thread not being able to write to (erase from) the database.
Any help and suggestions are appreciated!

Rather than using threads you could use std::async. The following class will process the queued strings, in order, 4 seconds after the last string is added. Only one async task is launched at a time, and std::async takes care of all the threading for you.
If there are unprocessed items in the queue when the class is destroyed, the async task stops without waiting and those items aren't processed (but this would be easy to change if it's not your desired behaviour).
#include <iostream>
#include <string>
#include <future>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <chrono>
#include <queue>
class Batcher
{
public:
Batcher()
: taskDelay( 4 ),
startTime( std::chrono::steady_clock::now() ) // only used for debugging
{
}
void queue( const std::string& value )
{
std::unique_lock< std::mutex > lock( mutex );
std::cout << "queuing '" << value << "' at " << std::chrono::duration_cast< std::chrono::milliseconds >( std::chrono::steady_clock::now() - startTime ).count() << "ms\n";
work.push( value );
// increase the time to process the queue to "now + 4 seconds"
timeout = std::chrono::steady_clock::now() + taskDelay;
if ( !running )
{
// launch a new asynchronous task which will process the queue
task = std::async( std::launch::async, [this]{ processWork(); } );
running = true;
}
}
~Batcher()
{
std::unique_lock< std::mutex > lock( mutex );
// stop processing the queue
closing = true;
bool wasRunning = running;
condition.notify_all();
lock.unlock();
if ( wasRunning )
{
// wait for the async task to complete
task.wait();
}
}
private:
std::mutex mutex;
std::condition_variable condition;
std::chrono::seconds taskDelay;
std::chrono::steady_clock::time_point timeout;
std::queue< std::string > work;
std::future< void > task;
bool closing = false;
bool running = false;
std::chrono::steady_clock::time_point startTime;
void processWork()
{
std::unique_lock< std::mutex > lock( mutex );
// loop until std::chrono::steady_clock::now() > timeout
auto wait = timeout - std::chrono::steady_clock::now();
while ( !closing && wait > std::chrono::seconds( 0 ) )
{
condition.wait_for( lock, wait );
wait = timeout - std::chrono::steady_clock::now();
}
if ( !closing )
{
std::cout << "processing queue at " << std::chrono::duration_cast< std::chrono::milliseconds >( std::chrono::steady_clock::now() - startTime ).count() << "ms\n";
while ( !work.empty() )
{
std::cout << work.front() << "\n";
work.pop();
}
std::cout << std::flush;
}
else
{
std::cout << "aborting queue processing at " << std::chrono::duration_cast< std::chrono::milliseconds >( std::chrono::steady_clock::now() - startTime ).count() << "ms with " << work.size() << " remaining items\n";
}
running = false;
}
};
int main()
{
Batcher batcher;
batcher.queue( "test 1" );
std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
batcher.queue( "test 2" );
std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
batcher.queue( "test 3" );
std::this_thread::sleep_for( std::chrono::seconds( 2 ) );
batcher.queue( "test 4" );
std::this_thread::sleep_for( std::chrono::seconds( 5 ) );
batcher.queue( "test 5" );
}

Related

How could one delay a function without the use of sleep / suspending the code?

I need to delay a function by x amount of time. The problem is that I can't use sleep or any other function that suspends the code (that's because the function is a loop that contains more functions; sleeping/suspending one will sleep/suspend all of them).
Is there a way I could do it?
If you want to execute some specific code at a certain time interval and don't want to use threads (and can't suspend), then you have to keep track of time and execute the specific code once the delay time has been exceeded.
Example (pseudo):
timestamp = getTime();
while (true) {
if (getTime() - timestamp > delay) {
//main functionality
//reset timer
timestamp = getTime();
}
//the other functionality you mentioned
}
With this approach, you invoke a specific function at every time interval specified by delay. The other functions will be invoked on each iteration of the loop.
In other words, it makes no difference if you delay a function or execute it at specific time intervals.
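A minimal C++ sketch of this polling approach using std::chrono::steady_clock (the work functions here are placeholders, not something from the question):
#include <chrono>
#include <iostream>
// Placeholder stand-ins for the "main functionality" and the rest of the loop.
void delayed_work() { std::cout << "delayed work\n"; }
void other_work()   { /* runs on every iteration */ }
int main()
{
    using clock = std::chrono::steady_clock;
    const auto delay    = std::chrono::milliseconds(500);
    const auto runUntil = clock::now() + std::chrono::seconds(3); // bounded stand-in for "while (true)"
    auto timestamp = clock::now();
    while (clock::now() < runUntil)
    {
        if (clock::now() - timestamp > delay)
        {
            delayed_work();               // main functionality
            timestamp = clock::now();     // reset the timer
        }
        other_work();                     // keeps running without being suspended
    }
}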
Assuming that you need to run functions with their own arguments inside a loop, with a custom delay, and wait for them to finish before each iteration:
#include <cstdio>
void func_to_be_delayed(const int &idx = -1, const unsigned &ms = 0)
{
printf("Delayed response[%d] by %d ms!\n", idx, ms);
}
#include <chrono>
#include <future>
template<typename T, typename ... Ta>
void delay(const unsigned &ms_delay, T &func, Ta ... args)
{
std::chrono::time_point<std::chrono::high_resolution_clock> start = std::chrono::high_resolution_clock::now();
double elapsed;
do {
std::chrono::time_point<std::chrono::high_resolution_clock> end = std::chrono::high_resolution_clock::now();
elapsed = std::chrono::duration<double, std::milli>(end - start).count();
} while(elapsed <= ms_delay);
func(args...);
}
int main()
{
func_to_be_delayed();
const short iterations = 5;
for (int i = iterations; i >= 0; --i)
{
auto i0 = std::async(std::launch::async, [i]{ delay((i+1)*1000, func_to_be_delayed, i, (i+1)*1000); } );
// Will arrive with difference from previous
auto i1 = std::async(std::launch::async, [i]{ delay(i*1000, func_to_be_delayed, i, i*1000); } );
func_to_be_delayed();
// Loop will wait for all calls
}
}
Note: this method will potentially spawn an additional thread on each call with the std::launch::async policy.
The standard solution is to implement an event loop.
If you use some library, framework or system API, then most probably it already provides something similar to solve this kind of problem.
For example, Qt has QApplication, which provides this loop, and there is QTimer (a minimal sketch is shown below).
boost::asio has io_context, which provides an event loop in which a timer such as boost::asio::deadline_timer can be run.
You can also try to implement such an event loop yourself.
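For example, a minimal sketch of the Qt approach mentioned above (assuming Qt 5.4 or newer for the lambda overload of QTimer::singleShot):
#include <QCoreApplication>
#include <QTimer>
#include <QDebug>
int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    // Schedule a one-shot callback on the event loop, 5 seconds from now.
    QTimer::singleShot(5000, [&app] {
        qDebug() << "boom";
        app.quit();              // leave the event loop afterwards
    });
    qDebug() << "start";
    return app.exec();           // run the Qt event loop
}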
Example with Boost:
#include <boost/asio.hpp>
#include <boost/date_time.hpp>
#include <exception>
#include <iostream>
void printTime(const std::string& label)
{
auto timeLocal = boost::posix_time::second_clock::local_time();
boost::posix_time::time_duration durObj = timeLocal.time_of_day();
std::cout << label << " time = " << durObj << '\n';
}
int main() {
boost::asio::io_context io_context;
try {
boost::asio::deadline_timer timer{io_context};
timer.expires_from_now(boost::posix_time::seconds(5));
timer.async_wait([](const boost::system::error_code& error){
if (!error) {
printTime("boom");
} else {
std::cerr << "Error: " << error << '\n';
}
});
printTime("start");
io_context.run();
} catch (const std::exception& e) {
std::cerr << e.what() << '\n';
}
return 0;
}
https://godbolt.org/z/nEbTvMhca
C++20 introduces coroutines, which could be a good solution too.

C++ ThreadSafe time counting class

I am trying to build a simple thread-safe time counter class. The code I managed to write is the following:
#include <iostream>
#include <chrono>
#include <mutex>
#include <condition_variable>
/* Get timestamp in microseconds */
static inline uint64_t micros()
{
return (uint64_t)std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now().time_since_epoch()).count();
}
class Timer
{
public:
explicit Timer() = default;
/**
* @brief Restart the counter
*/
void Restart()
{
std::unique_lock<std::mutex> mlock(_mutex);
{
this->_PreviousUs = micros();
this->_IsRunning = true;
}
mlock.unlock();
_cond.notify_one();
}
/**
* @brief Stop the timer
*/
void Stop()
{
std::unique_lock<std::mutex> mlock(_mutex);
{
this->_IsRunning = false;
}
mlock.unlock();
_cond.notify_one();
}
/**
* @brief Check whether the counter is started or not
* @return true if timer is running, false otherwise
*/
bool IsRunning()
{
std::unique_lock<std::mutex> mlock(_mutex);
bool tmp = _IsRunning;
mlock.unlock();
_cond.notify_one();
return tmp;
}
/**
* @brief Calculate number of elapsed milliseconds from current timestamp
* @return Return elapsed milliseconds
*/
uint64_t ElapsedMs()
{
std::unique_lock<std::mutex> mlock(_mutex);
uint64_t tmp = _PreviousUs;
mlock.unlock();
_cond.notify_one();
return ( (micros() - tmp) / 1000u );
}
/**
* @brief Calculate number of elapsed microseconds from current timestamp
* @return Return elapsed microseconds
*/
uint64_t ElapsedUs()
{
std::unique_lock<std::mutex> mlock(_mutex);
uint64_t tmp = _PreviousUs;
mlock.unlock();
_cond.notify_one();
return ( micros() - tmp );
}
private:
/** Timer's state */
bool _IsRunning = false;
/** Thread sync for read/write */
std::mutex _mutex;
std::condition_variable _cond;
/** Remember when timer was started */
uint64_t _PreviousUs = 0;
};
The usage is simple. I just create a global variable and then access it from a few different threads.
/* global variable */
Timer timer;
..............................
/* restart in some methods */
timer.Restart();
...............................
/* From some other threads */
if(timer.IsRunning())
{
// retrieve time since Restart() then do something
timer.ElapsedMs();
// Restart eventually
timer.Restart();
}
It is working under Linux and is fine for now. But the piece of code which is worrying me is this:
std::unique_lock<std::mutex> mlock(_mutex);
uint64_t tmp = _PreviousUs;
mlock.unlock();
_cond.notify_one();
return ( micros() - tmp );
I have to create a temporary variable every time I check the elapsed time, for the sake of "thread safety".
Is there any way to improve my code and keep it thread safe at the same time?
PS: I know that I could just use the micros() function to count time as simply as possible, but my plan is to develop this class further in the future.
Later edit: My question is not really how I get the timestamps. My question is how I read/write _PreviousUs safely, given that the same instance of the Timer class will be shared across multiple threads.
Your class doesn't look right.
There is an example of how to measure time in the documentation for std::chrono::duration_cast:
#include <iostream>
#include <chrono>
#include <ratio>
#include <thread>
void f()
{
std::this_thread::sleep_for(std::chrono::seconds(1));
}
int main()
{
auto t1 = std::chrono::high_resolution_clock::now();
f();
auto t2 = std::chrono::high_resolution_clock::now();
// floating-point duration: no duration_cast needed
std::chrono::duration<double, std::milli> fp_ms = t2 - t1;
// integral duration: requires duration_cast
auto int_ms = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1);
// converting integral duration to integral duration of shorter divisible time unit:
// no duration_cast needed
std::chrono::duration<long, std::micro> int_usec = int_ms;
std::cout << "f() took " << fp_ms.count() << " ms, "
<< "or " << int_ms.count() << " whole milliseconds "
<< "(which is " << int_usec.count() << " whole microseconds)" << std::endl;
}
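As for the later edit (how to read/write _PreviousUs safely): a minimal sketch using std::atomic instead of the mutex/condition_variable pair could look like the following. The class and member names are illustrative, and steady_clock is used rather than high_resolution_clock so the measurements are not affected by system clock adjustments.
#include <atomic>
#include <chrono>
#include <cstdint>
class AtomicTimer
{
public:
    void Restart()
    {
        _startUs.store(NowUs(), std::memory_order_release);
        _running.store(true, std::memory_order_release);
    }
    void Stop() { _running.store(false, std::memory_order_release); }
    bool IsRunning() const { return _running.load(std::memory_order_acquire); }
    uint64_t ElapsedUs() const
    {
        return NowUs() - _startUs.load(std::memory_order_acquire);
    }
    uint64_t ElapsedMs() const { return ElapsedUs() / 1000u; }
private:
    static uint64_t NowUs()
    {
        using namespace std::chrono;
        return duration_cast<microseconds>(
            steady_clock::now().time_since_epoch()).count();
    }
    std::atomic<bool>     _running{false};
    std::atomic<uint64_t> _startUs{0};
};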

c++ work queues with blocking

This question should be a little simpler than my last few. I've implemented the following work queue in my program:
Pool.h:
// tpool class
// It's always closed. :glasses:
#ifndef __POOL_H
#define __POOL_H
class tpool {
public:
tpool( std::size_t tpool_size );
~tpool();
template< typename Task >
void run_task( Task task ){
boost::unique_lock< boost::mutex > lock( mutex_ );
if( 0 < available_ ) {
--available_;
io_service_.post( boost::bind( &tpool::wrap_task, this, boost::function< void() > ( task ) ) );
}
}
private:
boost::asio::io_service io_service_;
boost::asio::io_service::work work_;
boost::thread_group threads_;
std::size_t available_;
boost::mutex mutex_;
void wrap_task( boost::function< void() > task );
};
extern tpool dbpool;
#endif
pool.cpp:
#include <boost/asio/io_service.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include "pool.h"
tpool::tpool( std::size_t tpool_size ) : work_( io_service_ ), available_( tpool_size ) {
for ( std::size_t i = 0; i < tpool_size; ++i ){
threads_.create_thread( boost::bind( &boost::asio::io_service::run, &io_service_ ) );
}
}
tpool::~tpool() {
io_service_.stop();
try {
threads_.join_all();
}
catch( ... ) {}
}
void tpool::wrap_task( boost::function< void() > task ) {
// run the supplied task
try {
task();
} // suppress exceptions
catch( ... ) {
}
boost::unique_lock< boost::mutex > lock( mutex_ );
++available_;
}
tpool dbpool( 50 );
The problem, though, is that not all my calls to run_task() are being completed by worker threads. I'm not sure if it's because the task is not entering the queue or because it vanishes when the thread that created it exits.
So my question is: is there anything special I have to give to boost::thread to make it wait until the queue is unlocked? And what is the expected lifetime of a task entered into a queue? Do the tasks go out of scope when the thread that created them exits? If so, how can I prevent that from happening?
Edit: I've made the following changes to my code:
template< typename Task >
void run_task( Task task ){ // add item to the queue
io_service_.post( boost::bind( &tpool::wrap_task, this, boost::function< void() > ( task ) ) );
}
and am now seeing all entries being entered correctly. However, I am left with one lingering question: what is the lifetime of tasks added to the queue? Do they cease to exist once the thread that created them exits?
Well, that's really quite simple: you're rejecting the posted tasks!
template< typename Task >
void run_task(Task task){
boost::unique_lock<boost::mutex> lock( mutex_ );
if(0 < available_) {
--available_;
io_service_.post(boost::bind(&tpool::wrap_task, this, boost::function< void() > ( task )));
}
}
Note that the lock only "waits" until the mutex is not owned by another thread. That may well already be the case, possibly while available_ is already 0. Now the line
if(0 < available_) {
is simply a condition check. It isn't "magical" just because you're holding mutex_ locked (the program doesn't even know that a relation exists between mutex_ and available_). So, if available_ <= 0, you will just skip posting the job.
Solution #1
You should let the io_service do the queueing for you; this is likely what you wanted to achieve in the first place. Instead of keeping track of "available" threads, the io_service does that work for you. You control how many threads it may use by running the io_service on that many threads. Simple.
Since io_service is already thread-safe, you can do without the lock.
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
// tpool class
// It's always closed. :glasses:
#ifndef __POOL_H
#define __POOL_H
class tpool {
public:
tpool( std::size_t tpool_size );
~tpool();
template<typename Task>
void run_task(Task task){
io_service_.post(task);
}
private:
// note the order of destruction of members
boost::asio::io_service io_service_;
boost::asio::io_service::work work_;
boost::thread_group threads_;
};
extern tpool dbpool;
#endif
#include <boost/asio/io_service.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
//#include "pool.h"
tpool::tpool(std::size_t tpool_size) : work_(io_service_) {
for (std::size_t i = 0; i < tpool_size; ++i)
{
threads_.create_thread(
boost::bind(&boost::asio::io_service::run, &io_service_)
);
}
}
tpool::~tpool() {
io_service_.stop();
try {
threads_.join_all();
}
catch(...) {}
}
void foo() { std::cout << __PRETTY_FUNCTION__ << "\n"; }
void bar() { std::cout << __PRETTY_FUNCTION__ << "\n"; }
int main() {
tpool dbpool(50);
dbpool.run_task(foo);
dbpool.run_task(bar);
boost::this_thread::sleep_for(boost::chrono::seconds(1));
}
For shutdown purposes, you will want to enable "clearing" the io_service::work object, otherwise your pool will never exit.
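A sketch of that shutdown idea (assumed, not part of the code above): hold the work object in a boost::optional so it can be destroyed on demand, which lets io_service::run() return once the already-queued tasks have drained.
#include <boost/asio.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>
class graceful_pool {
public:
    explicit graceful_pool(std::size_t size)
        : work_(boost::asio::io_service::work(io_service_))
    {
        for (std::size_t i = 0; i < size; ++i)
            threads_.create_thread([this] { io_service_.run(); });
    }
    template <typename Task>
    void run_task(Task task) { io_service_.post(task); }
    ~graceful_pool()
    {
        work_.reset();        // "clear" the work object: run() may now return
        threads_.join_all();  // waits for queued tasks to finish, then joins
    }
private:
    // note the order of destruction of members
    boost::asio::io_service io_service_;
    boost::optional<boost::asio::io_service::work> work_;
    boost::thread_group threads_;
};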
Solution #2
Don't use io_service; instead, roll your own queue implementation with a condition variable to notify a worker thread of new work being posted. Again, the number of workers is determined by the number of threads in the group.
#include <boost/thread.hpp>
#include <boost/phoenix.hpp>
#include <boost/optional.hpp>
#include <boost/atomic.hpp>
#include <boost/function.hpp>
#include <deque>
#include <iostream>
using namespace boost;
using namespace boost::phoenix::arg_names;
class thread_pool
{
private:
mutex mx;
condition_variable cv;
typedef function<void()> job_t;
std::deque<job_t> _queue;
thread_group pool;
boost::atomic_bool shutdown;
static void worker_thread(thread_pool& q)
{
while (auto job = q.dequeue())
(*job)();
}
public:
thread_pool() : shutdown(false) {
for (unsigned i = 0; i < boost::thread::hardware_concurrency(); ++i)
pool.create_thread(bind(worker_thread, ref(*this)));
}
void enqueue(job_t job)
{
lock_guard<mutex> lk(mx);
_queue.push_back(std::move(job));
cv.notify_one();
}
optional<job_t> dequeue()
{
unique_lock<mutex> lk(mx);
namespace phx = boost::phoenix;
cv.wait(lk, phx::ref(shutdown) || !phx::empty(phx::ref(_queue)));
if (_queue.empty())
return none;
auto job = std::move(_queue.front());
_queue.pop_front();
return std::move(job);
}
~thread_pool()
{
shutdown = true;
{
lock_guard<mutex> lk(mx);
cv.notify_all();
}
pool.join_all();
}
};
void the_work(int id)
{
std::cout << "worker " << id << " entered\n";
// no more synchronization; the pool size determines max concurrency
std::cout << "worker " << id << " start work\n";
this_thread::sleep_for(chrono::seconds(2));
std::cout << "worker " << id << " done\n";
}
int main()
{
thread_pool pool; // uses 1 thread per core
for (int i = 0; i < 10; ++i)
pool.enqueue(bind(the_work, i));
}

Boost synchronization

I have NUM_THREADS threads, with the following code in each thread:
/*
Calculate some_value;
*/
//Critical section to accumulate all thresholds
{
boost::mutex::scoped_lock lock(write_mutex);
T += some_value;
num_threads++;
if (num_threads == NUM_THREADS){
T = T/NUM_THREADS;
READY = true;
cond.notify_all();
num_threads = 0;
}
}
//Wait for average threshold to be ready
if (!READY)
{
boost::unique_lock<boost::mutex> lock(wait_mutex);
while (!READY){
cond.wait(lock);
}
}
//End critical section
/*
do_something;
*/
Basically, I want all the threads to wait for the READY signal before continuing. num_threads is set to 0 and READY is false before the threads are created. Once in a while, a deadlock occurs. Can anyone help please?
All the boost variables are globally declared as follows:
boost::mutex write_mutex;
boost::mutex wait_mutex;
boost::condition cond;
The code has a race condition on the READY flag (which I assume is just a bool variable). What may happen (i.e. one possible variant of thread execution interleaving) is:
Thread T1:                                  Thread T2:

if (!READY)
{
    unique_lock<mutex> lock(wait_mutex);    mutex::scoped_lock lock(write_mutex);
    while (!READY)                          /* ... */
    {                                       READY = true;
        /* !!! */                           cond.notify_all();
        cond.wait(lock);
    }
}
The code testing the READY flag is not synchronized with the code setting it (note the locks are different for these critical sections). And when T1 is in a "hole" between the flag test and waiting at cond, T2 may set the flag and send a signal to cond which T1 may miss.
The simplest solution is to lock the right mutex for the update of READY and condition notification:
/*...*/
T = T/NUM_THREADS;
{
boost::mutex::scoped_lock lock(wait_mutex);
READY = true;
cond.notify_all();
}
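For completeness, the matching wait side then looks like this (a sketch; the point is that wait_mutex now guards both the notification above and the test/wait below):
{
    boost::unique_lock<boost::mutex> lock(wait_mutex);
    while (!READY)
    {
        cond.wait(lock);
    }
}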
It looks like Boost.Thread's barriers might be what you need.
Here's a working example that averages values provided by several worker threads. Each worker thread uses the same shared barrier (via the accumulator instance) to synchronize with the others.
#include <cstdlib>
#include <iostream>
#include <vector>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
boost::mutex coutMutex;
typedef boost::lock_guard<boost::mutex> LockType;
class Accumulator
{
public:
Accumulator(int count) : barrier_(count), sum_(0), count_(count) {}
void accumulateAndWait(float value)
{
{
// Increment value
LockType lock(mutex_);
sum_ += value;
}
barrier_.wait(); // Wait for the other threads to wait on the barrier.
}
void wait() {barrier_.wait();} // Wait on barrier without changing sum.
float sum() {LockType lock(mutex_); return sum_;} // Return current sum
float average() {LockType lock(mutex_); return sum_ / count_;}
// Reset the sum. The barrier is automatically reset when triggered.
void reset() {LockType lock(mutex_); sum_ = 0;}
private:
typedef boost::lock_guard<boost::mutex> LockType;
boost::barrier barrier_;
boost::mutex mutex_;
float sum_;
int count_;
};
/* Posts a value for the accumulator to add and waits for other threads
to do the same. */
void workerFunction(Accumulator& accumulator)
{
// Sleep for a random amount of time before posting value
int randomMilliseconds = std::rand() % 3000;
boost::posix_time::time_duration randomDelay =
boost::posix_time::milliseconds(randomMilliseconds);
boost::this_thread::sleep(randomDelay);
// Post some random value
float value = std::rand() % 100;
{
LockType lock(coutMutex);
std::cout << "Thread " << boost::this_thread::get_id() << " posting "
<< value << " after " << randomMilliseconds << "ms\n";
}
accumulator.accumulateAndWait(value);
float avg = accumulator.average();
// Print a message to indicate this thread is past the barrier.
{
LockType lock(coutMutex);
std::cout << "Thread " << boost::this_thread::get_id() << " unblocked. "
<< "Average = " << avg << "\n" << std::flush;
}
}
int main()
{
int workerThreadCount = 5;
Accumulator accumulator(workerThreadCount);
// Create and launch worker threads
boost::thread_group threadGroup;
for (int i=0; i<workerThreadCount; ++i)
{
threadGroup.create_thread(
boost::bind(&workerFunction, boost::ref(accumulator)));
}
// Wait for all worker threads to finish
threadGroup.join_all();
{
LockType lock(coutMutex);
std::cout << "All worker threads finished\n" << std::flush;
}
/* Pause a bit before exiting, to give worker threads a chance to
print their messages. */
boost::this_thread::sleep(boost::posix_time::seconds(1));
}
I get the following output:
Thread 0x100100f80 posting 72 after 1073ms
Thread 0x100100d30 posting 44 after 1249ms
Thread 0x1001011d0 posting 78 after 1658ms
Thread 0x100100ae0 posting 23 after 1807ms
Thread 0x100101420 posting 9 after 1930ms
Thread 0x100101420 unblocked. Average = 45.2
Thread 0x100100f80 unblocked. Average = 45.2
Thread 0x100100d30 unblocked. Average = 45.2
Thread 0x1001011d0 unblocked. Average = 45.2
Thread 0x100100ae0 unblocked. Average = 45.2
All worker threads finished

Implementing an event timer using boost::asio

The sample code looks long, but actually it's not so complicated :-)
What I'm trying to do is: when a user calls EventTimer.Start(), it will execute the callback handler (which is passed into the ctor) every interval milliseconds, repeatCount times.
You just need to look at the function EventTimer::Stop()
#include <iostream>
#include <string>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/function.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <ctime>
#include <sys/timeb.h>
#include <Windows.h>
std::string CurrentDateTimeTimestampMilliseconds() {
double ms = 0.0; // Milliseconds
struct timeb curtime;
ftime(&curtime);
ms = (double) (curtime.millitm);
char timestamp[128];
time_t now = time(NULL);
struct tm *tp = localtime(&now);
sprintf(timestamp, "%04d%02d%02d-%02d%02d%02d.%03.0f",
tp->tm_year + 1900, tp->tm_mon + 1, tp->tm_mday, tp->tm_hour, tp->tm_min, tp->tm_sec, ms);
return std::string(timestamp);
}
class EventTimer
{
public:
static const int kDefaultInterval = 1000;
static const int kMinInterval = 1;
static const int kDefaultRepeatCount = 1;
static const int kInfiniteRepeatCount = -1;
static const int kDefaultOffset = 10;
public:
typedef boost::function<void()> Handler;
EventTimer(Handler handler = NULL)
: interval(kDefaultInterval),
repeatCount(kDefaultRepeatCount),
handler(handler),
timer(io),
exeCount(-1)
{
}
virtual ~EventTimer()
{
}
void SetInterval(int value)
{
// if (value < 1)
// throw std::exception();
interval = value;
}
void SetRepeatCount(int value)
{
// if (value < 1)
// throw std::exception();
repeatCount = value;
}
bool Running() const
{
return exeCount >= 0;
}
void Start()
{
io.reset(); // I don't know why I have to put io.reset here,
// since it's already been called in Stop()
exeCount = 0;
timer.expires_from_now(boost::posix_time::milliseconds(interval));
timer.async_wait(boost::bind(&EventTimer::EventHandler, this));
io.run();
}
void Stop()
{
if (Running())
{
// How to reset everything when stop is called???
//io.stop();
timer.cancel();
io.reset();
exeCount = -1; // Reset
}
}
private:
virtual void EventHandler()
{
// Execute the requested operation
//if (handler != NULL)
// handler();
std::cout << CurrentDateTimeTimestampMilliseconds() << ": exeCount = " << exeCount + 1 << std::endl;
// Check if one more time of handler execution is required
if (repeatCount == kInfiniteRepeatCount || ++exeCount < repeatCount)
{
timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(interval));
timer.async_wait(boost::bind(&EventTimer::EventHandler, this));
}
else
{
Stop();
std::cout << CurrentDateTimeTimestampMilliseconds() << ": Stopped" << std::endl;
}
}
private:
int interval; // Milliseconds
int repeatCount; // Number of times to trigger the EventHandler
int exeCount; // Number of executed times
boost::asio::io_service io;
boost::asio::deadline_timer timer;
Handler handler;
};
int main()
{
EventTimer etimer;
etimer.SetInterval(1000);
etimer.SetRepeatCount(1);
std::cout << CurrentDateTimeTimestampMilliseconds() << ": Started" << std::endl;
etimer.Start();
// boost::thread thrd1(boost::bind(&EventTimer::Start, &etimer));
Sleep(3000); // Keep the main thread active
etimer.SetInterval(2000);
etimer.SetRepeatCount(1);
std::cout << CurrentDateTimeTimestampMilliseconds() << ": Started again" << std::endl;
etimer.Start();
// boost::thread thrd2(boost::bind(&EventTimer::Start, &etimer));
Sleep(5000); // Keep the main thread active
}
/* Current Output:
20110520-125506.781: Started
20110520-125507.781: exeCount = 1
20110520-125507.781: Stopped
20110520-125510.781: Started again
*/
/* Expected Output (timestamp might be slightly different with some offset)
20110520-125506.781: Started
20110520-125507.781: exeCount = 1
20110520-125507.781: Stopped
20110520-125510.781: Started again
20110520-125512.781: exeCount = 1
20110520-125512.781: Stopped
*/
I don't know why my second call to EventTimer::Start() does not work at all. My questions are:
1. What should I do in EventTimer::Stop() in order to reset everything so that the next call to Start() will work?
2. Is there anything else I have to modify?
3. If I use another thread to start EventTimer::Start() (see the commented code in the main function), when does that thread actually exit?
Thanks.
Peter
As Sam hinted, depending on what you're attempting to accomplish, most of the time it is considered a design error to stop an io_service. You do not need to stop()/reset() the io_service in order to reschedule a timer.
Normally you would leave a thread or thread pool running attached to an io_service, and then you would schedule whatever events you need with the io_service. With the io_service machinery in place, leave it up to the io_service to dispatch your scheduled work as requested; then you only have to deal with the events or work requests that you schedule with the io_service.
It's not entirely clear to me what you are trying to accomplish, but there are a couple of things that are incorrect in the code you have posted:
io_service::reset() should only be invoked after a previous invocation of io_service::run() has stopped or run out of work, as the documentation describes.
You should not need explicit calls to Sleep(); the call to io_service::run() will block as long as it has work to do.
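A minimal sketch of that pattern (illustrative only, not the poster's code): the io_service keeps running on its own thread thanks to a work object, and the timer is simply rescheduled without any stop()/reset().
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>
int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::work work(io);       // keeps run() from returning
    boost::thread runner([&io] { io.run(); });    // io_service thread stays up
    boost::asio::deadline_timer timer(io);
    timer.expires_from_now(boost::posix_time::seconds(1));
    timer.async_wait([](const boost::system::error_code&) { std::cout << "first\n"; });
    boost::this_thread::sleep(boost::posix_time::seconds(2));
    // Reschedule on the same io_service - no stop()/reset() required.
    timer.expires_from_now(boost::posix_time::seconds(1));
    timer.async_wait([](const boost::system::error_code&) { std::cout << "second\n"; });
    boost::this_thread::sleep(boost::posix_time::seconds(2));
    io.stop();       // let run() return so the thread can be joined
    runner.join();
}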
I figured it out, but I don't know why I have to put io.reset() in Start(), since it's already been called in Stop().
See the updated code in the post.