Waiting-for-interrupt loop - C++

I need a construct for my project that waits for some time, but when there is an interrupt (e.g. an incoming UDP packet) it leaves the wait, does something, and afterwards restarts the waiting.
How can I implement this? My first idea was using while(wait(2000)), but wait is a void function...
Thank you!

I would put the loop inside a function:
void awesomeFunction() {
    bool loop = true;
    while (loop) {
        wait(2000);
        ...
        ...
        if (conditionMet)
            loop = false;
    }
}
Then I would put this function inside another loop:
while (programRunning) {
    awesomeFunction();
    /* Loop ended, do stuff... */
}

There are a few things I am not clear about from the question. Is this a multi-threaded application, where one thread handles (say) the UDP packets, and the other waits for the event, or is this single-threaded? You also didn't mention what operating system this is, which is relevant. So I am going to assume Linux, or something that supports the poll API, or something similar (like select).
Let's assume a single threaded application that waits for UDP packets. The main idea is that once you have the socket's file descriptor, you have an infinite loop on a call to poll. For instance:
#include <poll.h>
// ...
void handle_packets() {
    // m_fd was created with `socket` and `bind` or `connect`.
    struct pollfd pfd = {.fd = m_fd, .events = POLLIN};
    int timeout;
    timeout = -1;   // Wait indefinitely
    // timeout = 2000; // Wait for 2 seconds
    while (true) {
        pfd.revents = 0;
        if (poll(&pfd, 1, timeout) < 0)
            break;  // poll failed; handle/log the error as appropriate
        if ((pfd.revents & POLLIN) != 0) {
            handle_single_packet(); // Method to actually read and handle the packet
        }
        if ((pfd.revents & (POLLERR | POLLHUP)) != 0) {
            break; // return on error or hangup
        }
    }
}
A simple example of select can be found here.
If you are looking at a multi-threaded application, trying to communicate between the two threads, then there are several options. Two of which are:
Use the same mechanism above. The file descriptor is the result of a call to pipe. The thread sleeping gets the read end of the pipe. The thread waking gets the write end, and writes a character when it's time to wake up.
Use C++'s std::condition_variable. It is documented here, with a complete example. This solution depends on your context, e.g., whether you have a variable that you can wait on, or what has to be done.
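For the std::condition_variable option, here is a minimal sketch (the names pending, post and worker are invented for this illustration, they are not from the question): a worker waits with a timeout, and any other thread pushes work and notifies it.
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<std::function<void()>> pending;
bool stop = false;

void worker() {
    std::unique_lock<std::mutex> lock(m);
    while (!stop) {
        // Wake up when work arrives, when stop is requested, or after 2 seconds.
        cv.wait_for(lock, std::chrono::seconds(2),
                    [] { return stop || !pending.empty(); });
        while (!pending.empty()) {
            auto task = std::move(pending.front());
            pending.pop();
            lock.unlock();          // don't hold the lock while running the task
            task();
            lock.lock();
        }
    }
}

void post(std::function<void()> task) {
    {
        std::lock_guard<std::mutex> lock(m);
        pending.push(std::move(task));
    }
    cv.notify_one();                // wake the worker immediately
}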
Other interrupts can also be caught in this way. Signals, for instance, have a signalfd. Timer events have timerfd. This depends a lot on what you need, and in what environment you are running. For instance, timerfd is Linux-specific.

Related

Creating a dispatch queue / thread handler in C++ with pipes: FIFOs overfilling

Threads are resource-heavy to create and use, so often a pool of threads will be reused for asynchronous tasks. A task is packaged up, and then "posted" to a broker that will enqueue the task on the next available thread.
This is the idea behind dispatch queues (e.g. Apple's Grand Central Dispatch) and thread handlers (Android's Looper mechanism).
Right now, I'm trying to roll my own. In fact, I'm plugging a gap in Android whereby there is an API for posting tasks in Java, but not in the native NDK. However, I'm keeping this question platform independent where I can.
Pipes are the ideal choice for my scenario. I can easily poll the file descriptor of the read-end of a pipe(2) on my worker thread, and enqueue tasks from any other thread by writing to the write-end. Here's what that looks like:
#include <unistd.h>     // pipe, read, write
#include <functional>

int taskRead, taskWrite;

void setup() {
    // Create the pipe
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Set up a routine that is called when task_r reports new data
    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task - this is unsafe! See below.
        (*taskPtr)();

        // Clean up
        delete taskPtr;
    });
}

void post(const std::function<void(void)>& task) {
    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    // Write the pointer to the pipe - this may block if the FIFO is full!
    ::write(taskWrite, &taskPtr, sizeof(taskPtr));
}
This code puts a std::function on the heap, and passes the pointer to the pipe. The function_that_polls_file_descriptor then calls the provided expression to read the pipe and execute the function. Note that there are no safety checks in this example.
This works great 99% of the time, but there is one major drawback. Pipes have a limited size, and if the pipe is filled, then calls to post() will hang. This in itself is not unsafe, until a call to post() is made within a task.
auto evil = []() {
    // Post a new task back onto the queue
    post({});

    // Not enough new tasks, let's make more!
    for (int i = 0; i < 3; i++) {
        post({});
    }
    // Now for each time this task is posted, 4 more tasks will be added to the queue.
};

post(evil);
post(evil);
...
If this happens, then the worker thread will be blocked, waiting to write to the pipe. But the pipe's FIFO is full, and the worker thread is not reading anything from it, so the entire system is in deadlock.
What can be done to ensure that calls to post() emanating from the worker thread always succeed, allowing the worker to continue processing the queue in the event it is full?
Thanks to all the comments and other answers in this post, I now have a working solution to this problem.
The trick I've employed is to prioritise worker threads by checking which thread is calling post(). Here is the rough algorithm:
pipe ← NON-BLOCKING-PIPE()
overflow ← Ø

POST(task)
    success ← WRITE(task, pipe)
    IF NOT success THEN
        IF THREAD-IS-WORKER() THEN
            overflow ← overflow ∪ {task}
        ELSE
            WAIT(pipe)
            POST(task)
Then on the worker thread:
LOOP FOREVER
    task ← READ(pipe)
    RUN(task)
    FOR EACH overtask ∈ overflow
        RUN(overtask)
    overflow ← Ø
The wait is performed with pselect(2), adapted from the answer by #Sigismondo.
Here's the algorithm implemented in my original code example that will work for a single worker thread (although I haven't tested it after copy-paste). It can be extended to work for a thread pool by having a separate overflow queue for each thread.
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>
#include <cerrno>
#include <functional>
#include <queue>
#include <thread>

int taskRead, taskWrite;

// These variables are only allowed to be modified by the worker thread
std::thread::id workerId;
std::queue<std::function<void(void)>*> overflow;
bool overflowInUse;

void setup() {
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Make the pipe non-blocking to check pipe overflows manually
    ::fcntl(taskWrite, F_SETFL, ::fcntl(taskWrite, F_GETFL, 0) | O_NONBLOCK);

    // Save the ID of this worker thread to compare later
    workerId = std::this_thread::get_id();
    overflowInUse = false;

    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task
        (*taskPtr)();
        delete taskPtr;

        // Run any tasks that were posted to the overflow
        while (!overflow.empty()) {
            taskPtr = overflow.front();
            overflow.pop();

            (*taskPtr)();
            delete taskPtr;
        }

        // Release the overflow mechanism if applicable
        overflowInUse = false;
    });
}

bool write(std::function<void(void)>* taskPtr, bool blocking = true) {
    ssize_t rc = ::write(taskWrite, &taskPtr, sizeof(taskPtr));

    // Failure handling
    if (rc < 0) {
        // If blocking is allowed, wait for pipe to become available
        int err = errno;
        if ((err == EAGAIN || err == EWOULDBLOCK) && blocking) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(taskWrite, &fds);
            ::pselect(taskWrite + 1, nullptr, &fds, nullptr, nullptr, nullptr);

            // Try again
            return write(taskPtr);
        }
        // Otherwise return false
        return false;
    }
    return true;
}

void post(const std::function<void(void)>& task) {
    auto* taskPtr = new std::function<void(void)>(task);

    if (std::this_thread::get_id() == workerId) {
        // The worker thread gets 1st-class treatment.
        // It won't be blocked if the pipe is full, instead
        // using an overflow queue until the overflow has been cleared.
        if (!overflowInUse) {
            bool success = write(taskPtr, false);
            if (!success) {
                overflow.push(taskPtr);
                overflowInUse = true;
            }
        } else {
            overflow.push(taskPtr);
        }
    } else {
        write(taskPtr);
    }
}
Make the pipe write file descriptor non-blocking, so that write fails with EAGAIN when the pipe is full.
One improvement is to increase the pipe buffer size.
Another is to use a UNIX socket/socketpair and increase the socket buffer size.
Yet another solution is to use a UNIX datagram socket which many worker threads can read from, but only one gets the next datagram. In other words, you can use a datagram socket as a thread dispatcher.
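For what it's worth, a rough sketch of the buffer-size suggestions above (F_SETPIPE_SZ is Linux-specific; SO_SNDBUF works on a UNIX socketpair). Note this only enlarges the buffers, it does not by itself remove the deadlock scenario:
#include <fcntl.h>        // F_SETPIPE_SZ (Linux-specific; may require _GNU_SOURCE with some compilers)
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Linux-specific: request a larger pipe buffer (the kernel may round it up).
    int p[2];
    ::pipe(p);
    if (::fcntl(p[1], F_SETPIPE_SZ, 1024 * 1024) < 0)
        perror("F_SETPIPE_SZ");

    // Alternative: a UNIX datagram socketpair with a larger send buffer.
    int sv[2];
    ::socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);
    int size = 1024 * 1024;
    if (::setsockopt(sv[1], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) < 0)
        perror("SO_SNDBUF");

    return 0;
}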
You can use the good old select to determine whether the file descriptors are ready to be used for writing:
The file descriptors in writefds will be watched to see if
space is available for write (though a large write may still block).
Since you are writing a pointer, your write() cannot be classified as large at all.
Clearly you must be ready to handle the fact that a post may fail, and then be ready to retry it later... otherwise you will be facing indefinitely growing pipes, until your system breaks again.
More or less (not tested):
bool post(const std::function<void(void)>& task) {
    bool post_res = false;

    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    fd_set wfds;
    struct timeval tv;
    int retval;

    FD_ZERO(&wfds);
    FD_SET(taskWrite, &wfds);

    // Don't wait at all
    tv.tv_sec = 0;
    tv.tv_usec = 0;

    retval = select(taskWrite + 1, NULL, &wfds, NULL, &tv);

    // select() returns 0 when no FD's are ready
    if (retval == -1) {
        // handle error condition
    } else if (retval > 0) {
        // Write the pointer to the pipe. This write will succeed
        ::write(taskWrite, &taskPtr, sizeof(taskPtr));
        post_res = true;
    }

    if (!post_res)
        delete taskPtr;   // not written to the pipe; free the copy so it doesn't leak

    return post_res;
}
If you are only looking at Android/Linux, using a pipe is not state of the art; using an event file descriptor (eventfd) together with epoll is the way to go.
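As a rough, single-threaded illustration of the eventfd + epoll suggestion (not a drop-in replacement for the dispatch queue above; in the real thing the write would happen on a producer thread and the epoll_wait loop on the worker):
#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int efd = ::eventfd(0, EFD_NONBLOCK);   // counter-based wakeup fd
    int ep  = ::epoll_create1(0);

    epoll_event ev{};
    ev.events  = EPOLLIN;
    ev.data.fd = efd;
    ::epoll_ctl(ep, EPOLL_CTL_ADD, efd, &ev);

    // Producer side: bump the counter to wake the worker.
    uint64_t one = 1;
    ::write(efd, &one, sizeof(one));

    // Worker side: block until the eventfd (or any other registered fd) is ready.
    epoll_event out[4];
    int n = ::epoll_wait(ep, out, 4, -1);
    for (int i = 0; i < n; ++i) {
        if (out[i].data.fd == efd) {
            uint64_t count;
            ::read(efd, &count, sizeof(count));   // drain the counter
            printf("woken up, %llu pending notifications\n", (unsigned long long)count);
        }
    }

    ::close(ep);
    ::close(efd);
    return 0;
}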

Event Scheduling in C++

I am building an application in which I receive socket data. I need to reply to this received data a few seconds later (say 8 seconds after). So I want to know whether there is a way to schedule an event which sends the socket data after 8 seconds automatically. I don't want to sleep unnecessarily for 8 seconds in the receiving thread or any other thread. This is what I have written so far for receiving socket data, which runs in a pthread.
struct sockaddr_in StSocketAddress;   // (declaration was missing from the snippet)
long DataSock_fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

StSocketAddress.sin_family = AF_INET;                        // address family
StSocketAddress.sin_addr.s_addr = inet_addr("10.10.10.10");  // load ip address
StSocketAddress.sin_port = htons(1234);                      // load port number

// bind the above socket to the above mentioned address; result less than 0 means error in binding
if (bind(DataSock_fd, (struct sockaddr *)&StSocketAddress, sizeof(StSocketAddress)) < 0)
{
    close(DataSock_fd);              // close the socket
    perror("error while binding\n");
    exit(EXIT_FAILURE);              // exit the program
}

char Buff[1024];
long lSize = recvfrom(DataSock_fd, (char *)Buff, sizeof(Buff), 0, NULL, NULL);
But I am stuck at scheduling an event that sends data after 8 seconds.
Take a look at this SO answer.
You could use std::async (from <future>) like this to solve your problem:
auto f = std::async(std::launch::async, [] {
    std::this_thread::sleep_for(std::chrono::seconds(5));
    printf("(5 seconds later) Hello");
});
You can either use boost::sleep, or chrono::sleep_for, or chrono::sleep_until,
but if you don't want to call sleep, my best suggestion is to use a std::mutex and block the thread that receives the information until Time.currentTime - startTime == 8.
Approach-1
Since you don't have a C++11-enabled compiler, and assuming you are not using frameworks such as Qt/Boost, please check whether the following code answers your question. It is a simple async timer implementation using pthreads.
Sample code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <time.h>

#define TIME_TO_WAIT_FOR_SEND_SECS (8)
#define FAIL_STATUS_CODE (-1)
#define SUCCESS_STATUS_CODE (0)

typedef void (*TimerThreadCbk)(void *);

typedef struct tTimerThreadInitParams
{
    int            m_DurationSecs; /* Duration of the timer */
    TimerThreadCbk m_Callback;     /* Timer callback */
    void *         m_pAppData;     /* App data */
} tTimerThreadInitParams;

void PrintCurrTime()
{
    time_t timer;
    char buffer[26];
    struct tm* tm_info;

    time(&timer);
    tm_info = localtime(&timer);

    strftime(buffer, 26, "%Y-%m-%d %H:%M:%S", tm_info);
    puts(buffer);
}

void* TimerThreadEntry(void *a_pTimerThreadInitParams)
{
    tTimerThreadInitParams *pTimerThreadInitParams = (tTimerThreadInitParams *)a_pTimerThreadInitParams;

    if (NULL != pTimerThreadInitParams)
    {
        /* Do validation of init params */
        sleep(pTimerThreadInitParams->m_DurationSecs);
        pTimerThreadInitParams->m_Callback(pTimerThreadInitParams->m_pAppData);
    }
    else
    {
        printf("pTimerThreadInitParams is (nil)\n");
    }
    return NULL;
}

void TimerCallbackForSend(void *a_pAppData)
{
    (void)a_pAppData;
    /* Perform action on timer expiry using a_pAppData */
    printf("TimerCallbackForSend triggered at: ");
    PrintCurrTime();
}

int main()
{
    /* Timer thread initialization parameters */
    pthread_t TimerThread;
    tTimerThreadInitParams TimerInitParams = {};

    TimerInitParams.m_DurationSecs = TIME_TO_WAIT_FOR_SEND_SECS;
    TimerInitParams.m_Callback = TimerCallbackForSend;

    /* Print current time */
    printf("Starting timer at:");
    PrintCurrTime();

    /* Create timer thread */
    if (pthread_create(&TimerThread, NULL, TimerThreadEntry, &TimerInitParams))
    {
        fprintf(stderr, "Error creating thread\n");
        return FAIL_STATUS_CODE;
    }
    else
    {
        printf("TimerThread created\n");
    }

    /* Wait for the second thread to finish */
    if (pthread_join(TimerThread, NULL))
    {
        fprintf(stderr, "Error joining thread\n");
        return FAIL_STATUS_CODE;
    }
    else
    {
        printf("TimerThread finished\n");
    }

    return SUCCESS_STATUS_CODE;
}
Sample output:
Starting timer at:2017-08-08 20:55:33
TimerThread created
TimerCallbackForSend triggered at: 2017-08-08 20:55:41
TimerThread finished
Notes:
This is a scratch custom implementation. You can rename main as ScheduleTimer, which will be a generic API which spawns a thread and invokes the registered callback in its own context.
Just now saw that you don't want to sleep in any of the threads.
Approach-2
Refer to "C: SIGALRM - alarm to display message every second" for SIGALRM. Maybe in the signal handler you can post an event to a queue which your thread will be monitoring; a rough sketch follows.
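A bare-bones sketch of Approach-2 (the name wakeup_pipe is invented for the example; only async-signal-safe calls such as write() belong in the handler, and the worker simply watches the read end of the pipe):
#include <signal.h>
#include <unistd.h>
#include <cstdio>

static int wakeup_pipe[2];            // read end is watched by the worker thread

static void on_alarm(int) {
    // Only async-signal-safe functions may be called here.
    char byte = 1;
    ::write(wakeup_pipe[1], &byte, 1);
}

int main() {
    ::pipe(wakeup_pipe);

    struct sigaction sa{};
    sa.sa_handler = on_alarm;
    sa.sa_flags   = SA_RESTART;       // restart the blocking read after the handler runs
    sigemptyset(&sa.sa_mask);
    ::sigaction(SIGALRM, &sa, nullptr);

    ::alarm(8);                       // deliver SIGALRM after 8 seconds

    char byte;
    ::read(wakeup_pipe[0], &byte, 1); // the worker blocks here until the alarm fires
    printf("8 seconds elapsed, send the reply now\n");
    return 0;
}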
Sleeping, whether by a C++ wrapper or by the system's nanosleep function -- it cannot be said often enough -- is... wrong. Unless precision and reliability doesn't matter at all, do not sleep. Never.
For anything related to timing, use a timer.
If portability is not a high priority, and since the question is tagged "Linux", a timerfd would be one of the best solutions.
The timerfd can be waited upon with select/poll/epoll while waiting for something to be received, and other stuff (signals, events) at the same time. That's very elegant, and it is quite performant, too.
Admitted, since you are using UDP, there is the temptation to not wait for readiness in the first place but to just have recvfrom block. There is however nothing inherently wrong with waiting for readiness. For moderate loads, the extra syscall doesn't matter, but for ultra-high loads, you might even consider going a step further into non-portable land and use recvmmsg to receive several datagrams in one go as indicated by the number of datagrams reported by epoll (see code example on the recvmmsg man page, which combines recvmmsg with epoll_wait).
With everything file-descriptor based like this, you have it all in one single event loop, in one single thread, reliable and efficient. No trickery needed, no need to be extra smart, no worries about concurrency issues.
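To make the timerfd idea concrete, here is a minimal sketch with poll; in the real program the UDP socket from the question would sit in the same pollfd array, so one loop serves both the receive path and the delayed send:
#include <sys/timerfd.h>
#include <poll.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    // One-shot timer that expires 8 seconds from now.
    int tfd = ::timerfd_create(CLOCK_MONOTONIC, 0);
    itimerspec spec{};
    spec.it_value.tv_sec = 8;                 // first (and only) expiration
    ::timerfd_settime(tfd, 0, &spec, nullptr);

    pollfd fds[1];
    fds[0] = {tfd, POLLIN, 0};
    // In the real program this array would also hold the UDP socket fd.

    ::poll(fds, 1, -1);                       // wait for the timer (or other fds)
    if (fds[0].revents & POLLIN) {
        uint64_t expirations;
        ::read(tfd, &expirations, sizeof(expirations));   // acknowledge the timer
        printf("timer fired, send the delayed reply here\n");
    }

    ::close(tfd);
    return 0;
}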

Exit an infinite looping thread elegantly

I keep running into this problem of trying to run a thread with the following properties:
1. runs in an infinite loop, checking some external resource, e.g. data from the network or a device,
2. gets updates from its resource promptly,
3. exits promptly when asked to,
4. uses the CPU efficiently.
First approach
One solution I have seen for this is something like the following:
void class::run()
{
    while(!exit_flag)
    {
        if (resource_ready)
            use_resource();
    }
}
This satisfies points 1, 2 and 3, but being a busy waiting loop, uses 100% CPU.
Second approach
A potential fix for this is to put a sleep statement in:
void class::run()
{
    while(!exit_flag)
    {
        if (resource_ready)
            use_resource();
        else
            sleep(a_short_while);
    }
}
We now don't hammer the CPU, so we address 1 and 4, but we could wait up to a_short_while unnecessarily when the resource is ready or we are asked to quit.
Third approach
A third option is to do a blocking read on the resource:
void class::run()
{
    while(!exit_flag)
    {
        obtain_resource();
        use_resource();
    }
}
This will satisfy 1, 2, and 4 elegantly, but now we can't ask the thread to quit if the resource does not become available.
Question
The best approach seems to be the second one, with a short sleep, so long as the tradeoff between CPU usage and responsiveness can be achieved.
However, this still seems suboptimal, and inelegant to me. This seems like it would be a common problem to solve. Is there a more elegant way to solve it? Is there an approach which can address all four of those requirements?
This depends on the specifics of the resources the thread is accessing, but basically, to do it efficiently with minimal latency, the resources need to provide an API for doing an interruptible blocking wait.
On POSIX systems, you can use the select(2) or poll(2) system calls to do that, if the resources you're using are files or file descriptors (including sockets). To allow the wait to be preempted, you also create a dummy pipe which you can write to.
For example, here's how you might wait for a file descriptor or socket to become ready or for the code to be interrupted:
// Dummy pipe used for sending interrupt message
int interrupt_pipe[2];
int should_exit = 0;

void class::run()
{
    // Set up the interrupt pipe
    if (pipe(interrupt_pipe) != 0)
        ;  // Handle error

    int fd = ...;  // File descriptor or socket etc.

    while (!should_exit)
    {
        // Set up a file descriptor set with fd and the read end of the dummy
        // pipe in it
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        FD_SET(interrupt_pipe[0], &fds);   // watch the *read* end of the pipe
        int maxfd = std::max(fd, interrupt_pipe[0]);

        // Wait until one of the file descriptors is ready to be read
        int num_ready = select(maxfd + 1, &fds, NULL, NULL, NULL);
        if (num_ready == -1)
            ;  // Handle error

        if (FD_ISSET(fd, &fds))
        {
            // fd can now be read/recv'ed from without blocking
            read(fd, ...);
        }
    }
}

void class::interrupt()
{
    should_exit = 1;

    // Send a dummy message to the *write* end of the pipe to wake up the select() call
    char msg = 0;
    write(interrupt_pipe[1], &msg, 1);
}

class::~class()
{
    // Clean up pipe etc.
    close(interrupt_pipe[0]);
    close(interrupt_pipe[1]);
}
If you're on Windows, the select() function still works for sockets, but only for sockets, so you should use WaitForMultipleObjects to wait on a resource handle and an event handle. For example:
// Event used for sending interrupt message
HANDLE interrupt_event;
int should_exit = 0;

void class::run()
{
    // Set up the interrupt event as an auto-reset event
    interrupt_event = CreateEvent(NULL, FALSE, FALSE, NULL);
    if (interrupt_event == NULL)
        ;  // Handle error

    HANDLE resource = ...;  // File or resource handle etc.

    while (!should_exit)
    {
        // Wait until one of the handles becomes signaled
        HANDLE handles[2] = {resource, interrupt_event};
        DWORD which_ready = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (which_ready == WAIT_FAILED)
            ;  // Handle error
        else if (which_ready == WAIT_OBJECT_0)
        {
            // resource can now be read from without blocking
            ReadFile(resource, ...);
        }
    }
}

void class::interrupt()
{
    // Signal the event to wake up the waiting thread
    should_exit = 1;
    SetEvent(interrupt_event);
}

class::~class()
{
    // Clean up event etc.
    CloseHandle(interrupt_event);
}
You get an efficient solution if your obtain_resource() function supports a timeout value:
while(!exit_flag)
{
    obtain_resource_with_timeout(a_short_while);
    if (resource_ready)
        use_resource();
}
This effectively combines the sleep() with the obtain_resource() call.
Check out the manpage for nanosleep:
If the nanosleep() function returns because it has been interrupted by a signal, the function returns a value of -1 and sets errno to indicate the interruption.
In other words, you can interrupt sleeping threads by sending a signal (the sleep manpage says something similar). This means you can use your 2nd approach, and use an interrupt to immediately wake the thread if it's sleeping.
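For example, here is a small sketch of waking a sleeping thread early with pthread_kill and a no-op SIGUSR1 handler (names are invented for the example; note there is still a race if the signal arrives before nanosleep is entered, so a real program must also re-check its exit flag):
#include <pthread.h>
#include <signal.h>
#include <time.h>
#include <cstdio>

static void on_wakeup(int) { /* no-op: its only job is to interrupt nanosleep */ }

static void* worker(void*) {
    timespec req{2, 0};                       // intend to sleep 2 seconds
    if (nanosleep(&req, nullptr) == -1)
        printf("sleep interrupted early, re-check exit flag / resource\n");
    else
        printf("slept the full interval\n");
    return nullptr;
}

int main() {
    struct sigaction sa{};
    sa.sa_handler = on_wakeup;                // deliberately no SA_RESTART
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, nullptr);

    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    pthread_kill(t, SIGUSR1);                 // wake the sleeper (may race with nanosleep)
    pthread_join(t, nullptr);
    return 0;
}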
Use the Gang of Four Observer Pattern:
http://home.comcast.net/~codewrangler/tech_info/patterns_code.html#Observer
Callback, don't block.
Self-Pipe trick can be used here.
http://cr.yp.to/docs/selfpipe.html
Assuming that you are reading the data from a file descriptor:
Create a pipe and select() for readability on the pipe input as well as on the resource you are interested in.
Then when data arrives on the resource, the thread wakes up and does the processing; otherwise it sleeps.
To terminate the thread, send it a signal and, in the signal handler, write something to the pipe (I would say something which will never come from the resource you are interested in, something like NULL, for illustrating the point). The select call returns, and the thread, on reading the input, knows that it got the poison pill, so it is time to exit, and it calls pthread_exit().
EDIT: Better way will be just to see that the data came on the pipe and hence just exit rather than checking the value which came on that pipe.
The Win32 API uses more or less this approach:
someThreadLoop( ... )
{
    MSG msg;
    int retVal;

    while( (retVal = ::GetMessage( &msg, TaskContext::winHandle_, 0, 0 )) > 0 )
    {
        ::TranslateMessage( &msg );
        ::DispatchMessage( &msg );
    }
}
GetMessage itself blocks until any type of message is received, and therefore uses no processing time while waiting (see the GetMessage documentation). If a WM_QUIT is received, it returns false, exiting the thread function gracefully. This is a variant of the producer/consumer mentioned elsewhere.
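For completeness, asking such a loop to quit from another thread is just a matter of posting WM_QUIT to it; a fragment (threadId is assumed to have been captured when the worker was created, and the worker must already have a message queue, e.g. after its first GetMessage/PeekMessage call):
#include <windows.h>

// GetMessage in the worker returns 0 once it retrieves WM_QUIT,
// which ends the while loop shown above.
void requestQuit(DWORD threadId) {
    ::PostThreadMessage(threadId, WM_QUIT, 0, 0);
}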
You can use any variant of a producer/consumer, and the pattern is often similar. One could argue that one would want to split the responsibility concerning quitting and obtaining of a resource, but OTOH quitting could depend on obtaining a resource too (or could be regarded as one of the resources - but a special one). I would at least abstract the producer consumer pattern and have various implementations thereof.
Therefore:
AbstractConsumer:
void AbstractConsumer::threadHandler()
{
    do
    {
        try
        {
            process( dequeNextCommand() );
        }
        catch( const base_except& ex )
        {
            log( ex );
            if( ex.isCritical() ){ throw; }
            //else we don't want loop to exit...
        }
        catch( const std::exception& ex )
        {
            log( ex );
            throw;
        }
    }
    while( !terminated() );
}

virtual void /*AbstractConsumer::*/process( std::unique_ptr<Command>&& command ) = 0;

//Note:
//  Either may or may not block until resource arrives, but typically blocks on
//  a queue that is signalled as soon as a resource is available.
virtual std::unique_ptr<Command> /*AbstractConsumer::*/dequeNextCommand() = 0;

virtual bool /*AbstractConsumer::*/terminated() const = 0;
I usually encapsulate a command to execute a function in the context of the consumer, but the pattern in the consumer is always the same.
Any (well, at least most) of the approaches mentioned above will do the following: a thread is created, then it's blocked waiting for the resource, then it's deleted.
If you're worried about efficiency, this is not the best approach when waiting for IO. On Windows at least, you'll allocate around 1 MB of memory in user mode, plus some in the kernel, for just one additional thread. What if you have many such resources? Having many waiting threads will also increase context switches and slow down your program. What if a resource takes longer to become available and many requests are made? You may end up with tons of waiting threads.
Now, the solution to it (again, on Windows, but I'm sure there should be something similar on other OSes) is using a thread pool (the one provided by Windows). On Windows this will not only create a limited number of threads, it'll be able to detect when a thread is waiting for IO and will steal the thread from there and reuse it for other operations while waiting.
See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686766(v=vs.85).aspx
Also, for more fine-grained control but still having the ability to give up the thread when waiting for IO, see IO completion ports (I think they use a threadpool inside anyway): http://msdn.microsoft.com/en-us/library/windows/desktop/aa365198(v=vs.85).aspx

using libev with multiple threads

I want to use libev with multiple threads for the handling of TCP connections. What I want is:
The main thread listens for incoming connections, accepts the connections and forwards each connection to a worker thread.
I have a pool of worker threads. The number of threads depends on the number of CPUs. Each worker thread has an event loop. The worker thread listens for whether it can write on the TCP socket or whether something is available for reading.
I looked into the documentation of libev and I know this can be done with libev, but I can't find any example of how I have to do it.
Does someone has an example?
I think that I have to use the ev_loop_new() API for the worker threads, and for the main thread I have to use ev_default_loop()?
Regards
The following code can be extended to multiple threads
//This program is demo for using pthreads with libev.
//Try using Timeout values as large as 1.0 and as small as 0.000001
//and notice the difference in the output
//(c) 2009 debuguo
//(c) 2013 enthusiasticgeek for stack overflow
//Free to distribute and improve the code. Leave credits intact

#include <ev.h>
#include <stdio.h> // for puts
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t lock;
double timeout = 0.00001;
ev_timer timeout_watcher;
int timeout_count = 0;

ev_async async_watcher;
int async_count = 0;

struct ev_loop* loop2;

void* loop2thread(void* args)
{
    printf("Inside loop 2"); // Here one could initiate another timeout watcher
    ev_loop(loop2, 0);       // similar to the main loop - call it say timeout_cb1
    return NULL;
}

static void async_cb (EV_P_ ev_async *w, int revents)
{
    //puts ("async ready");
    pthread_mutex_lock(&lock);     //Don't forget locking
    ++async_count;
    printf("async = %d, timeout = %d \n", async_count, timeout_count);
    pthread_mutex_unlock(&lock);   //Don't forget unlocking
}

static void timeout_cb (EV_P_ ev_timer *w, int revents) // Timer callback function
{
    //puts ("timeout");
    if (ev_async_pending(&async_watcher) == false) { //the event has not yet been processed (or even noted) by the event loop? (i.e. Is it serviced? If yes then proceed to)
        ev_async_send(loop2, &async_watcher); //Sends/signals/activates the given ev_async watcher, that is, feeds an EV_ASYNC event on the watcher into the event loop.
    }

    pthread_mutex_lock(&lock);     //Don't forget locking
    ++timeout_count;
    pthread_mutex_unlock(&lock);   //Don't forget unlocking

    w->repeat = timeout;
    ev_timer_again(loop, &timeout_watcher); //Start the timer again.
}

int main (int argc, char** argv)
{
    if (argc < 2) {
        puts("Timeout value missing.\n./demo <timeout>");
        return -1;
    }
    timeout = atof(argv[1]);

    struct ev_loop *loop = EV_DEFAULT;  //or ev_default_loop (0);

    //Initialize pthread
    pthread_mutex_init(&lock, NULL);
    pthread_t thread;

    // This loop sits in the pthread
    loop2 = ev_loop_new(0);

    //This block is specifically used for pre-empting the thread (i.e. temporary interruption and suspension of a task, without asking for its cooperation, with the intention to resume that task later.)
    //This takes into account thread safety
    ev_async_init(&async_watcher, async_cb);
    ev_async_start(loop2, &async_watcher);
    pthread_create(&thread, NULL, loop2thread, NULL);

    ev_timer_init (&timeout_watcher, timeout_cb, timeout, 0.); // Non repeating timer. The timer starts repeating in the timeout callback function
    ev_timer_start (loop, &timeout_watcher);

    // now wait for events to arrive
    ev_loop(loop, 0);

    //Wait on threads for execution
    pthread_join(thread, NULL);

    pthread_mutex_destroy(&lock);
    return 0;
}
Using libev within different threads at the same time is fine as long as each of them runs its own loop[1].
The c++ wrapper in libev (ev++.h) always uses the default loop instead of letting you specify which one you want to use. You should use the C header instead (ev.h) which allows you to specify which loop to use (e.g. ev_io_start takes a pointer to an ev_loop but the ev::io::start doesn't).
You can signal another thread's ev_loop safely through ev_async.
[1]http://doc.dvgu.ru/devel/ev.html#threads_and_coroutines

Closing a thread with select() system call statement?

I have a thread to monitor a serial port using the select system call; the run function of the thread is as follows:
void <ProtocolClass>::run()
{
    int fd = mPort->GetFileDescriptor();
    fd_set readfs;
    int maxfd = fd + 1;
    int res;

    struct timeval Timeout;
    Timeout.tv_usec = 0;
    Timeout.tv_sec  = 3;   // note: set up here but never actually passed to select() below

    //BYTE ack_message_frame[ACKNOWLEDGE_FRAME_SIZE];

    while (true)
    {
        usleep(10);
        FD_ZERO(&readfs);
        FD_SET(fd, &readfs);

        res = select(maxfd, &readfs, NULL, NULL, NULL);
        if (res < 0)
            perror("\nselect failed");
        else if (res == 0)
            puts("TIMEOUT");
        else if (FD_ISSET(fd, &readfs))
        {   //IF INPUT RECEIVED
            qDebug("************RECEIVED DATA****************");
            FlushBuf();
            qDebug("\nReading data into a read buffer");
            int bytes_read = mPort->ReadPort(mBuf, 1000);
            mFrameReceived = false;
            for (int i = 0; i < bytes_read; i++)
            {
                qDebug("%x", mBuf[i]);
            }
            //if a complete frame has been received, write the acknowledge message frame to the port.
            if (bytes_read > 0)
            {
                qDebug("\nAbout to Process Received bytes");
                ProcessReceivedBytes(mBuf, bytes_read);
                qDebug("\n Processed Received bytes");
                if (mFrameReceived)
                {
                    int no_bytes = mPort->WritePort(mAcknowledgeMessage, ACKNOWLEDGE_FRAME_SIZE);
                }//if frame received
            }//if bytes read > 0
        } //if input received
    }//end while
}
The problem is when I exit from this thread, using
delete <protocolclass>::instance();
the program crashes with a glibc error of malloc memory corruption. On checking the core with gdb, it was found that when exiting, the thread was still processing data, hence the error. The destructor of the protocol class looks as follows:
<ProtocolClass>::~<ProtocolClass>()
{
    delete [] mpTrackInfo;   //delete data
    wait();
    mPort->ClosePort();
    s_instance = NULL;       //static instance of singleton
    delete mPort;
}
Is this due to select? Do the semantics for destroying objects change when select is involved? Can someone suggest a clean way to destroy threads involving a select call?
Thanks
I'm not sure what threading library you use, but you should probably signal the thread in one way or another that it should exit, rather than killing it.
The simplest way would be to keep a boolean that is set true when the thread should exit, and use a timeout on the select() call to check it periodically.
void ProtocolClass::StopThread ()
{
    kill_me = true;
    // Wait for thread to die
    Join();
}

void ProtocolClass::run ()
{
    struct timeval tv;
    ...
    while (!kill_me) {
        ...
        tv.tv_sec = 1;
        tv.tv_usec = 0;

        res = select (maxfd, &readfds, NULL, NULL, &tv);

        if (res < 0) {
            // Handle error
        }
        else if (res != 0) {
            ...
        }
    }
}
You could also set up a pipe and include it in readfds, and then just write something to it from another thread. That would avoid waking up every second and bring down the thread without delay.
Also, you should of course never use a boolean variable like that without some kind of lock, ...
Are the threads still looking at mpTrackInfo after you delete it?
Not seeing the code, it is hard to tell.
But I would think that the first thing the destructor should do is wait for any threads to die (preferably with some form of join() to make sure they are all accounted for). Once they are dead you can start cleaning up the data.
Your thread is more than just memory with some members, so just deleting it and counting on the destructor is not enough. Since I don't know Qt threads, I think this link can put you on your way:
trolltech message
Two possible problems:
What is mpTrackInfo? You delete it before you wait for the thread to exit. Does the thread use this data somewhere, maybe even after it's been deleted?
How does the thread know it's supposed to exit? The loop in run() seems to run forever, which should cause wait() in the destructor to wait forever.