Accurate continuous timer callback - c++

I've got an application where I want to display a frame every x milliseconds.
Previously I did it like this:
class SomeClass
{
    boost::thread thread_;
    boost::timer timer_;
public:
    SomeClass() : thread_([=]{ Display(); })
    {
    }

    void Display()
    {
        double wait = 1.0 / fps * 1000.0;
        while (isRunning_)
        {
            double elapsed = timer_.elapsed() * 1000.0;
            if (elapsed < wait)
                boost::this_thread::sleep(boost::posix_time::milliseconds(static_cast<unsigned int>(wait - elapsed)));
            timer_.restart();
            // ... Get Frame. This can block while no frames are being rendered.
            // ... Display Frame.
        }
    }
};
However, I don't think this solution has very good accuracy. I might be wrong?
I was hoping to use boost::asio::deadline_timer instead, but I'm unsure how to use it.
This is what I've tried, which doesn't seem to wait at all. It seems to just render the frames as fast as it can.
class SomeClass
{
    boost::thread thread_;
    boost::asio::io_service io_;
    boost::asio::deadline_timer timer_;
public:
    SomeClass() : timer_(io_, boost::posix_time::milliseconds(1.0 / fps * 1000.0))
    {
        timer_.async_wait([=]{ Display(); });
        thread_ = boost::thread([=]{ io_.run(); });
    }

    void Display()
    {
        double wait = 1.0 / fps * 1000.0;
        while (isRunning_)
        {
            timer_.expires_from_now(boost::posix_time::milliseconds(wait)); // Could this overflow?
            // ... Get Frame. This can block while no frames are being rendered.
            // ... Display Frame.
            timer_.async_wait([=]{ Display(); });
        }
    }
};
What am I doing wrong? And if I got this solution working, would it be better than the first?

Here's a fairly trivial example using boost::asio::deadline_timer; hopefully it helps:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <iostream>

class Timer : public boost::enable_shared_from_this<Timer>
{
public:
    Timer( boost::asio::io_service& io_service ) :
        _io_service( io_service ),
        _timer( io_service )
    {
    }

    void start()
    {
        _timer.expires_from_now( boost::posix_time::seconds( 0 ) );
        _timer.async_wait(
            boost::bind(
                &Timer::handler,
                shared_from_this(),
                boost::asio::placeholders::error
                )
            );
    }

private:
    void handler( const boost::system::error_code& error )
    {
        if ( error ) {
            std::cerr << error.message() << std::endl;
            return;
        }
        std::cout << "handler" << std::endl;

        _timer.expires_from_now( boost::posix_time::seconds( 1 ) );
        _timer.async_wait(
            boost::bind(
                &Timer::handler,
                shared_from_this(),
                boost::asio::placeholders::error
                )
            );
    }

private:
    boost::asio::io_service& _io_service;
    boost::asio::deadline_timer _timer;
};

int main()
{
    boost::asio::io_service io_service;
    boost::shared_ptr<Timer> timer( new Timer( io_service ) );
    timer->start();
    io_service.run();
}

Remember that the accuracy with which a frame is displayed is limited by the refresh rate of your display (typically ~17 ms for a 60 Hz display, or ~13 ms for a 75 Hz display). If you're not syncing to the display refresh, you have an indeterminate latency of 0-17 ms on top of whatever timing method you use, so the timer accuracy doesn't really need to be much better than 10 ms (even 1 ms is probably overkill).

Related

Incorrect Interval Timer for a CallBack function in C++

I found this class on the web; it implements a callback that asynchronously does some work while I'm on the main thread. This is the class:
#include "callbacktimer.h"
CallBackTimer::CallBackTimer()
:_execute(false)
{}
CallBackTimer::~CallBackTimer() {
if( _execute.load(std::memory_order_acquire) ) {
stop();
};
}
void CallBackTimer::stop()
{
_execute.store(false, std::memory_order_release);
if( _thd.joinable() )
_thd.join();
}
void CallBackTimer::start(int interval, std::function<void(void)> func)
{
if( _execute.load(std::memory_order_acquire) ) {
stop();
};
_execute.store(true, std::memory_order_release);
_thd = std::thread([this, interval, func]()
{
while (_execute.load(std::memory_order_acquire)) {
func();
std::this_thread::sleep_for(
std::chrono::milliseconds(interval)
);
}
});
}
bool CallBackTimer::is_running() const noexcept {
return ( _execute.load(std::memory_order_acquire) &&
_thd.joinable() );
}
The problem here is that if I schedule a job to run every millisecond, it is repeated every 64 milliseconds instead of every 1 millisecond, and I don't know why. This snippet gives an idea:
#include "callbacktimer.h"
int main()
{
CallBackTimer cBT;
int i = 0;
cBT.start(1, [&]()-> void {
i++;
});
while(true)
{
std::cout << i << std::endl;
Sleep(1000);
}
return 0;
}
Here I should see 1000, 2000, 3000, and so on on the standard output. But I don't...
It's quite hard to do something on a PC at a 1 ms interval. On Windows, thread scheduling happens at 1/64 s, which is ~16 ms.
When you try to sleep for 1 ms, the thread will likely sleep for 1/64 s instead, given that no other thread is scheduled to run. As your main thread sleeps for one second, your callback timer may run up to 64 times during that interval.
See also How often per second does Windows do a thread switch?
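You can see this on your own machine with a quick measurement loop (a standalone sketch, not from the original question; it simply times std::this_thread::sleep_for):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono;
    for (int i = 0; i < 10; ++i) {
        auto start = steady_clock::now();
        std::this_thread::sleep_for(milliseconds(1));
        auto us = duration_cast<microseconds>(steady_clock::now() - start);
        // With the default ~15.6 ms Windows timer resolution this tends to
        // print values around 15000 us rather than the requested 1000 us.
        std::cout << "slept " << us.count() << " us" << std::endl;
    }
}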
You can try multimedia timers, which may go down to 1 millisecond.
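For example, something along these lines (a sketch using the winmm multimedia timer API; link against winmm.lib, and note that timeSetEvent is documented as obsolete in favor of timer queues):

#include <windows.h>
#include <mmsystem.h> // timeBeginPeriod, timeSetEvent; link with winmm.lib
#include <iostream>

volatile LONG g_ticks = 0;

void CALLBACK on_tick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
{
    InterlockedIncrement(&g_ticks); // keep the callback short and non-blocking
}

int main()
{
    timeBeginPeriod(1);                                          // request 1 ms timer resolution
    MMRESULT id = timeSetEvent(1, 1, on_tick, 0, TIME_PERIODIC); // fire every 1 ms
    Sleep(1000);
    timeKillEvent(id);
    timeEndPeriod(1);
    std::cout << g_ticks << " ticks in one second" << std::endl; // ideally close to 1000
    return 0;
}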
I'm trying to implement a chronometer in Qt which should also show microseconds.
Well, you can show microseconds, I guess. But your function won't run every microsecond.
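In other words: measure with a high-resolution clock, and merely display at whatever rate is feasible. A rough sketch of the idea (plain std::chrono rather than Qt):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono;
    auto start = steady_clock::now();

    // Update the display every ~50 ms, but compute the elapsed time with
    // microsecond resolution each time: the reading stays accurate even
    // though the update rate is coarse.
    for (int i = 0; i < 20; ++i) {
        std::this_thread::sleep_for(milliseconds(50));
        auto us = duration_cast<microseconds>(steady_clock::now() - start);
        std::cout << us.count() << " us elapsed" << std::endl;
    }
}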

Timeout in C++ using Boost datetime

How can I implement a while loop with a timeout in C++ using boost::datetime?
Something like:
#define TIMEOUT 12

while (some_boost_datetime_expression(TIMEOUT))
{
    do_something(); // do it until the timeout expires
}
// timeout expired
Use boost::asio::deadline_timer for timeouts. Constantly checking a value in a loop is a waste of CPU.
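A minimal single-threaded sketch of that suggestion (do_something() is a hypothetical placeholder for the loop's work; io.poll() runs any handlers that are ready without blocking):

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(12));

    bool expired = false;
    timer.async_wait([&expired](const boost::system::error_code&) {
        expired = true;
    });

    while (!expired)
    {
        // do_something(); // hypothetical unit of work
        io.poll(); // invokes the timer handler once the deadline has passed
    }
    std::cout << "timeout expired" << std::endl;
}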
You'll first want to mark the time you start, then calculate the difference between the current time and the start time. No built-in boost datetime expression will work exactly like you describe. In boost datetime terminology (http://www.boost.org/doc/libs/1_51_0/doc/html/date_time.html), the duration of your timeout is a "time duration", and the point you start at is a "time point".
Suppose you want to be accurate to within a second, and have a 4 minute 2 second interval.
using namespace boost::posix_time;
ptime start = second_clock::local_time();
gives you a time point to start your timing
ptime end = start + minutes(4)+seconds(2);
gives you a point in time 4 minutes and 2 seconds from now.
And then
( second_clock::local_time() < end )
is true if and only if the current time is before the end time.
(Disclaimer: this is not based on actually having written any boost datetime code before, just on reading the docs and example code on the boost website.)
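Putting those pieces together into the shape the question asks for (do_something() is a placeholder):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

using namespace boost::posix_time;

int main()
{
    ptime start = second_clock::local_time();    // time point: when we began
    ptime end = start + minutes(4) + seconds(2); // time point: when to stop

    while (second_clock::local_time() < end)
    {
        // do_something(); // hypothetical work, repeated until the timeout
    }
    std::cout << "timeout expired" << std::endl;
}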
You can just check the time difference:
boost::posix_time::ptime now = boost::posix_time::microsec_clock::local_time();
while ((boost::posix_time::microsec_clock::local_time() - now) < boost::posix_time::milliseconds(TIMEOUT))
{
    // do something
}
But instead of doing something like that you might rethink your design.
This can easily be done with Boost.Asio. Start a deadline_timer as one async process; it stops the event loop when it expires. Keep posting your work to the same event loop while it is running. A working solution:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

class timed_job
{
public:
    timed_job( int timeout ) :
        timer_( io_service_, boost::posix_time::seconds( timeout ) ) // Deadline timer
    {
    }

    void start()
    {
        // Start the timer
        timer_.async_wait( boost::bind( &timed_job::stop, this ) );

        // Post your work
        io_service_.post( boost::bind( &timed_job::do_work, this ) );

        io_service_.run();
        std::cout << "stopped." << std::endl;
    }

private:
    void stop()
    {
        std::cout << "call stop..." << std::endl;
        io_service_.stop();
    }

    void do_work()
    {
        std::cout << "running..." << std::endl;

        // Keep posting the work.
        io_service_.post( boost::bind( &timed_job::do_work, this ) );
    }

private:
    boost::asio::io_service io_service_;
    boost::asio::deadline_timer timer_;
};

int main()
{
    timed_job job( 5 );
    job.start();
    return 0;
}

boost deadline_timer issue

Here is the implementation of a test class wrapping a thread with a timer.
The strange thing is that with the deadline set to 500 milliseconds it works, but if I set it to 1000 milliseconds it does not. What am I doing wrong?
#include "TestTimer.hpp"
#include "../SysMLmodel/Package1/Package1.hpp"
TestTimer::TestTimer(){
thread = boost::thread(boost::bind(&TestTimer::classifierBehavior,this));
timer = new boost::asio::deadline_timer(service,boost::posix_time::milliseconds(1000));
timer->async_wait(boost::bind(&TestTimer::timerBehavior, this));
};
TestTimer::~TestTimer(){
}
void TestTimer::classifierBehavior(){
service.run();
};
void TestTimer::timerBehavior(){
std::cout<<"timerBehavior\r";
timer->expires_at(timer->expires_at() + boost::posix_time::milliseconds(1000));
timer->async_wait(boost::bind(&TestTimer::timerBehavior,this));
}
UPDATE 1
I have noticed that the program gets stuck (or at least the standard output in the console does) for many seconds, about 30; then a lot of "timerBehavior" strings are printed out together, as if they had been queued somewhere.
Your program might have several problems. First, from what you have shown, it's hard to say whether the program stops before the timer has had a chance to trigger. Second, you do not flush your output; use std::endl if you want to flush the output after a newline. Third, if your thread runs the io_service.run() function, the thread may find an empty io queue, in which case run() returns immediately; the io_service::work class prevents this. Here is my example, built from your code, that should work as expected:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

class TestTimer
{
public:
    TestTimer()
        : service()
        , work( service )
        , thread( boost::bind( &TestTimer::classifierBehavior, this ) )
        , timer( service, boost::posix_time::milliseconds( 1000 ) )
    {
        timer.async_wait( boost::bind( &TestTimer::timerBehavior, this ) );
    }

    ~TestTimer()
    {
        thread.join();
    }

private:
    void classifierBehavior()
    {
        service.run();
    }

    void timerBehavior()
    {
        std::cout << "timerBehavior" << std::endl;
        timer.expires_at( timer.expires_at() + boost::posix_time::milliseconds( 1000 ) );
        timer.async_wait( boost::bind( &TestTimer::timerBehavior, this ) );
    }

    boost::asio::io_service service;
    boost::asio::io_service::work work;
    boost::thread thread;
    boost::asio::deadline_timer timer;
};

int main()
{
    TestTimer test;
}

Intermittently no data delivered through boost::asio / io completion port

Problem
I am using boost::asio for a project where two processes on the same machine communicate using TCP/IP. One generates data to be read by the other, but I am encountering a problem where intermittently no data is being sent through the connection. I've boiled this down to a very simple example below, based on the async tcp echo server example.
The processes (source code below) start out fine, delivering data at a fast rate from the sender to the receiver. Then all of a sudden, no data at all is delivered for about five seconds. Then data is delivered again until the next inexplicable pause. During these five seconds, the processes eat 0% CPU and no other processes seem to do anything in particular. The pause is always the same length - five seconds.
I am trying to figure out how to get rid of these stalls and what causes them.
CPU usage during an entire run:
Notice how there are three dips of CPU usage in the middle of the run - a "run" is a single invocation of the server process and the client process. During these dips, no data was delivered. The number of dips and their timing differs between runs - some times no dips at all, some times many.
I am able to affect the "probability" of these stalls by changing the size of the read buffer - for instance if I make the read buffer a multiple of the send chunk size it appears that this problem almost goes away, but not entirely.
Source and test description
I've compiled the below code with Visual Studio 2005, using Boost 1.43 and Boost 1.45. I have tested on Windows Vista 64 bit (on a quad-core) and Windows 7 64 bit (on both a quad-core and a dual-core).
The server accepts a connection and then simply reads and discards data. Whenever a read is performed a new read is issued.
The client connects to the server, then puts a bunch of packets into a send queue. After this it writes the packets one at the time. Whenever a write has completed, the next packet in the queue is written. A separate thread monitors the queue size and prints this to stdout every second. During the io stalls, the queue size remains exactly the same.
I have tried to use scatter/gather I/O (writing multiple packets in one system call), but the result is the same. If I disable I/O completion ports in Boost using BOOST_ASIO_DISABLE_IOCP, the problem appears to go away, but at the price of significantly lower throughput.
// Example is adapted from async_tcp_echo_server.cpp which is
// Copyright (c) 2003-2010 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Start program with -s to start as the server
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0501
#endif

#include <iostream>
#include <list>
#include <memory>
#include <vector>
#include <tchar.h>
#include <windows.h> // Sleep, InterlockedExchangeAdd, InterlockedDecrement
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

#define PORT "1234"

using namespace boost::asio::ip;
using namespace boost::system;

class session {
public:
    session(boost::asio::io_service& io_service) : socket_(io_service) {}

    void do_read() {
        socket_.async_read_some(boost::asio::buffer(data_, max_length),
            boost::bind(&session::handle_read, this, _1, _2));
    }

    boost::asio::ip::tcp::socket& socket() { return socket_; }

protected:
    void handle_read(const error_code& ec, size_t bytes_transferred) {
        if (!ec) {
            do_read();
        } else {
            delete this;
        }
    }

private:
    tcp::socket socket_;
    enum { max_length = 1024 };
    char data_[max_length];
};

class server {
public:
    explicit server(boost::asio::io_service& io_service)
        : io_service_(io_service)
        , acceptor_(io_service, tcp::endpoint(tcp::v4(), atoi(PORT)))
    {
        session* new_session = new session(io_service_);
        acceptor_.async_accept(new_session->socket(),
            boost::bind(&server::handle_accept, this, new_session, _1));
    }

    void handle_accept(session* new_session, const error_code& ec) {
        if (!ec) {
            new_session->do_read();
            new_session = new session(io_service_);
            acceptor_.async_accept(new_session->socket(),
                boost::bind(&server::handle_accept, this, new_session, _1));
        } else {
            delete new_session;
        }
    }

private:
    boost::asio::io_service& io_service_;
    boost::asio::ip::tcp::acceptor acceptor_;
};

class client {
public:
    explicit client(boost::asio::io_service& io_service)
        : io_service_(io_service)
        , socket_(io_service)
        , work_(new boost::asio::io_service::work(io_service))
    {
        io_service_.post(boost::bind(&client::do_init, this));
    }

    ~client() {
        packet_thread_.join();
    }

protected:
    void do_init() {
        // Connect to the server
        tcp::resolver resolver(io_service_);
        tcp::resolver::query query(tcp::v4(), "localhost", PORT);
        tcp::resolver::iterator iterator = resolver.resolve(query);
        socket_.connect(*iterator);

        // Start packet generation thread
        packet_thread_.swap(boost::thread(
            boost::bind(&client::generate_packets, this, 8000, 5000000)));
    }

    typedef std::vector<unsigned char> packet_type;
    typedef boost::shared_ptr<packet_type> packet_ptr;

    void generate_packets(long packet_size, long num_packets) {
        // Add a single dummy packet multiple times, then start writing
        packet_ptr buf(new packet_type(packet_size, 0));
        write_queue_.insert(write_queue_.end(), num_packets, buf);
        queue_size = num_packets;
        do_write_nolock();

        // Wait until all packets are sent.
        while (long queued = InterlockedExchangeAdd(&queue_size, 0)) {
            std::cout << "Queue size: " << queued << std::endl;
            Sleep(1000);
        }

        // Exit from run(), ignoring socket shutdown
        work_.reset();
    }

    void do_write_nolock() {
        const packet_ptr& p = write_queue_.front();
        async_write(socket_, boost::asio::buffer(&(*p)[0], p->size()),
            boost::bind(&client::on_write, this, _1));
    }

    void on_write(const error_code& ec) {
        if (ec) { throw system_error(ec); }
        write_queue_.pop_front();
        if (InterlockedDecrement(&queue_size)) {
            do_write_nolock();
        }
    }

private:
    boost::asio::io_service& io_service_;
    tcp::socket socket_;
    boost::shared_ptr<boost::asio::io_service::work> work_;
    long queue_size;
    std::list<packet_ptr> write_queue_;
    boost::thread packet_thread_;
};

int _tmain(int argc, _TCHAR* argv[]) {
    try {
        boost::asio::io_service io_svc;
        bool is_server = argc > 1 && 0 == _tcsicmp(argv[1], _T("-s"));
        std::auto_ptr<server> s(is_server ? new server(io_svc) : 0);
        std::auto_ptr<client> c(is_server ? 0 : new client(io_svc));
        io_svc.run();
    } catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}
So my question is basically:
How do I get rid of these stalls?
What causes this to happen?
Update: There appears to be some correlation with disk activity, contrary to what I stated above: starting a large directory copy on the disk while the test is running seems to increase the frequency of the io stalls. Could this be Windows I/O prioritization kicking in? Since the pauses are always the same length, it does sound somewhat like a timeout somewhere in the OS I/O code...
Adjust boost::asio::socket_base::send_buffer_size and receive_buffer_size.
Adjust max_length to a larger number. Since TCP is stream oriented, don't think of it as receiving single packets; this is most likely causing some sort of "gridlock" between the TCP send/receive windows.
I recently encountered a very similar sounding problem, and have a solution that works for me. I have an asynchronous server/client written in asio that sends and receives video (and small request structures), and I was seeing frequent 5 second stalls just as you describe.
Our fix was to increase the size of the socket buffers on each end, and to disable the Nagle algorithm.
pSocket->set_option( boost::asio::ip::tcp::no_delay( true) );
pSocket->set_option( boost::asio::socket_base::send_buffer_size( s_SocketBufferSize ) );
pSocket->set_option( boost::asio::socket_base::receive_buffer_size( s_SocketBufferSize ) );
It might be that only one of the above options is critical, but I've not investigated this further.
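For reference, such options can be set any time after the socket has been opened (i.e. after connect/accept) and before data starts flowing. A standalone sketch; the 1 MB buffer size is an arbitrary placeholder, not a recommendation:

#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::tcp;

int main()
{
    boost::asio::io_service io_service;
    tcp::socket socket(io_service);
    socket.open(tcp::v4()); // options require an open socket

    socket.set_option(tcp::no_delay(true)); // disable the Nagle algorithm
    socket.set_option(boost::asio::socket_base::send_buffer_size(1024 * 1024));
    socket.set_option(boost::asio::socket_base::receive_buffer_size(1024 * 1024));

    // The OS may clamp the requested sizes; read back what was actually set.
    boost::asio::socket_base::send_buffer_size actual;
    socket.get_option(actual);
    std::cout << "send buffer: " << actual.value() << " bytes" << std::endl;
    return 0;
}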

BOOST ASIO - How to write console server

I have to write an asynchronous TCP server.
The TCP server has to be managed via a console
(e.g.: remove a client, show a list of all connected clients, etc.).
The problem is: how can I attach (or write) a console which can call the above functionalities?
Does this console have to be a client? Should I run this console client as a separate thread?
I have read a lot of tutorials and I couldn't find a solution to my problem.
ServerTCP code
class ServerTCP
{
public:
    ServerTCP(boost::asio::io_service& A_ioService, unsigned short A_uPortNumber = 13)
        : m_ioService(A_ioService), m_tcpAcceptor(A_ioService, tcp::endpoint(tcp::v4(), A_uPortNumber))
    {
        start();
    }

private:
    void start()
    {
        ClientSessionPtr spClient(new ClientSession(m_tcpAcceptor.io_service(), m_connectedClients));
        m_tcpAcceptor.async_accept(spClient->getSocket(),
            boost::bind(&ServerTCP::handleAccept, this, spClient,
                boost::asio::placeholders::error));
    }

    void handleAccept(ClientSessionPtr A_spNewClient, const boost::system::error_code& A_nError)
    {
        if (!A_nError)
        {
            A_spNewClient->start();
            start();
        }
    }

    boost::asio::io_service& m_ioService;
    tcp::acceptor m_tcpAcceptor;
    Clients m_connectedClients;
};
Main function:
try
{
    boost::asio::io_service ioService;
    ServerTCP server(ioService);
    ioService.run();
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << "\n";
}
Hello Sam. Thanks for the reply. Could you be so kind as to show me some piece of code, or some links to examples related to this problem?
Probably I don't understand "... single threaded server ..." correctly.
In fact, in the "console" where I want to manage server operations, I need something like below:
int main()
{
    cout << "Options: q - close server, s - show clients";
    while (1)
    {
        char key = _getch();
        switch (key)
        {
        case 'q':
            closeServer();
            break;
        case 's':
            showClients();
            break;
        }
    }
}
The problem is: how can I attach (or write) a console which can call the above functionalities? Does this console have to be a client? Should I run this console client as a separate thread?
You don't need a separate thread; use a posix::stream_descriptor and assign STDIN_FILENO to it. Use async_read and handle the requests in the read handlers.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>

using namespace boost::asio;

class Input : public boost::enable_shared_from_this<Input>
{
public:
    typedef boost::shared_ptr<Input> Ptr;

    static void create( io_service& io_service )
    {
        Ptr input( new Input( io_service ) );
        input->read();
    }

private:
    explicit Input( io_service& io_service ) :
        _input( io_service )
    {
        _input.assign( STDIN_FILENO );
    }

    void read()
    {
        async_read(
            _input,
            boost::asio::buffer( &_command, sizeof(_command) ),
            boost::bind(
                &Input::read_handler,
                shared_from_this(),
                placeholders::error,
                placeholders::bytes_transferred
                )
            );
    }

    void read_handler( const boost::system::error_code& error, size_t bytes_transferred )
    {
        if ( error ) {
            std::cerr << "read error: " << boost::system::system_error(error).what() << std::endl;
            return;
        }

        if ( _command != '\n' ) {
            std::cout << "command: " << _command << std::endl;
        }

        this->read();
    }

private:
    posix::stream_descriptor _input;
    char _command;
};

int main()
{
    io_service io_service;
    Input::create( io_service );
    io_service.run();
}
If I understand the OP correctly, he/she wants to run an async TCP server that is controlled via a console, i.e. the console is used as the user interface.
In that case you don't need a separate client application to query the server for connected clients, etc.:
You need to spawn a thread that calls the io_service::run method. Currently you are calling this from main. Since your server will probably be scoped in main, you need to do something like pass a reference to the server to the new thread. The io_service could e.g. be a member of the server class (unless your application has other requirements, in which case pass both the server and the io_service to the new thread).
Add the corresponding methods, such as showClients, closeServer, etc., to your server class.
Make sure that these calls, which are triggered via the console, are thread-safe.
In your closeServer method you could, for instance, call io_service::stop, which would result in the server ending. A minimal sketch of this approach follows below.
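Here is that sketch, with a stand-in server class (showClients/closeServer and their bodies are hypothetical; posting the calls through the io_service keeps them on the server's thread, which takes care of the thread safety):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>

class Server
{
public:
    Server() : m_work(m_ioService) {} // work keeps run() from returning early
    void run() { m_ioService.run(); }

    // Called from the console thread; post() hands the call over to the
    // io_service thread, so no explicit locking is needed.
    void showClients() { m_ioService.post(boost::bind(&Server::doShowClients, this)); }
    void closeServer() { m_ioService.stop(); }

private:
    void doShowClients() { std::cout << "0 clients connected" << std::endl; }

    boost::asio::io_service m_ioService;
    boost::asio::io_service::work m_work;
};

int main()
{
    Server server;
    boost::thread serverThread(boost::bind(&Server::run, &server));

    std::cout << "Options: q - close server, s - show clients" << std::endl;
    char key;
    while (std::cin >> key)
    {
        if (key == 'q') { server.closeServer(); break; }
        if (key == 's') { server.showClients(); }
    }
    serverThread.join();
    return 0;
}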