This is a pretty basic scenario but I'm not finding many helpful resources. I have a C++ program running on Linux that does file processing: it reads lines, does various transformations, and writes data into a database. There are certain variables (stored in the database) that affect the processing, which I'm currently re-reading at every iteration because I want processing to be as up to date as possible, though a slight lag is OK. But those variables change pretty rarely, and the reads are expensive over time (10 million plus rows a day). I could space out the reads to every n iterations or simply restart the program when a variable changes, but those seem hackish.
What I would like to do instead is have the program trigger a reread of the variables when it receives a SIGHUP. Everything I'm reading about signal handling talks about the C signal library, which I'm not sure how to tie into my program's classes. The Boost signal libraries seem to be more about inter-object communication than handling OS signals.
Can anybody help? It seems like this should be incredibly simple, but I'm pretty rusty with C++.
I would handle it just like you might handle it in C. I think it's perfectly fine to have a stand-alone signal handler function, since you'll just be posting to a semaphore or setting a variable or some such, which another thread or object can inspect to determine if it needs to re-read the settings.
#include <signal.h>
#include <stdio.h>

/* or you might use a semaphore to notify a waiting thread */
static volatile sig_atomic_t sig_caught = 0;

void handle_sighup(int signum)
{
    /* in case we registered this handler for multiple signals */
    if (signum == SIGHUP) {
        sig_caught = 1;
    }
}

int main(int argc, char* argv[])
{
    /* you may also prefer sigaction() instead of signal() */
    signal(SIGHUP, handle_sighup);

    while (1) {
        if (sig_caught) {
            sig_caught = 0;
            printf("caught a SIGHUP. I should re-read settings.\n");
        }
    }

    return 0;
}
You can test sending a SIGHUP by using kill -1 `pidof yourapp`.
I'd recommend checking out this link which gives the details on registering a signal.
Unless I'm mistaken, one important thing to remember is that a non-static member function carries an implicit this parameter, so its type doesn't match the plain function pointer that signal() expects and it can't be used as a signal handler. I believe you'll need to register either a static member function or some kind of global function. From there, if you have a specific object whose method you want to take care of the update, you'll need a way to reference that object.
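For example, a rough sketch of that pattern might look like the following (the Settings class, its reload_from_database() method, and the flag name are made up for illustration; only the flag is touched inside the handler):

#include <csignal>

class Settings {
public:
    static void install_handler()
    {
        // a static member function has an ordinary function type,
        // so it can be passed to signal()
        std::signal(SIGHUP, &Settings::handle_sighup);
    }

    // called from the main processing loop, not from the handler
    void reload_if_requested()
    {
        if (reload_requested) {
            reload_requested = 0;
            reload_from_database(); // hypothetical: re-read the variables here
        }
    }

private:
    static void handle_sighup(int) { reload_requested = 1; }
    void reload_from_database() { /* ... query the database ... */ }

    static volatile std::sig_atomic_t reload_requested;
};

volatile std::sig_atomic_t Settings::reload_requested = 0;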
There are several possibilities; it would not necessarily be overkill to implement all of them:
Respond to a specific signal, just like C does. C++ works the same way. See the documentation for signal().
Trigger on the modification timestamp of some file changing, like the database if it is stored in a flat file.
Trigger once per hour, or once per day (whatever makes sense).
You can define a Boost signal corresponding to the OS signal and tie the Boost signal to your slot to invoke the respective handler.
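If you go the Boost route, one possible arrangement (just a sketch, assuming Boost.Signals2; the names here are made up) is to keep the actual OS handler minimal and fire the Boost signal from the main loop:

#include <csignal>
#include <boost/signals2.hpp>

static volatile std::sig_atomic_t got_sighup = 0;
static void on_sighup(int) { got_sighup = 1; }

// objects that care about the settings connect their slots to this
boost::signals2::signal<void()> settings_changed;

int main()
{
    std::signal(SIGHUP, on_sighup);
    settings_changed.connect([] { /* re-read the variables from the database */ });

    while (true) {
        if (got_sighup) {
            got_sighup = 0;
            settings_changed(); // normal context here, so slots may do real work
        }
        // ... process the next batch of rows ...
    }
}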
Related
Currently, I am using the usart_read_buffer_job function provided by the ASF library. I placed this function inside the while(1) loop as below:
int main()
{
    /* some pieces of code for initialization */

    while (1)
    {
        usart_read_buffer_job();

        while (1) /* the second infinite loop */
        {
            /* some other pieces of code */
        }
    }
}
It works perfectly well for the first interrupt handler call. However, after returning from the handler, the interrupt handler was never invoked again: the program kept running within the second infinite loop and never reached usart_read_buffer_job() again, which is probably the cause of the handler's malfunction.
In this case, my purpose is to jump into the USART interrupt handler regardless of the number of infinite loops being executed in main(). Of course, without ASF the issue could be solved by setting up the handler manually, but I still wonder how this issue could be solved with other functions provided by ASF.
Looking forward to getting the response from the community soon.
Thank you,
Thanks for the very quick response.
The code I am working on is confidential. Hence, I can only share the ASF library functions with you and explain briefly how they work.
In the ASF, we typically have two functions for handling the interrupt, namely usart_read_buffer_job and usart_read_job.
Before using these two functions, the callback is registered and enabled with two other functions:
usart_register_callback: Registers a callback function, which is implemented by the user.
usart_enable_callback: The callback function will be called from the interrupt handler when the conditions for the callback type are met.
And these two functions above are placed in the initialization code as shown in the question.
Then, depending on the design purpose, the handlers are called whenever a group of characters or a single character is received via the UART peripheral, using usart_read_buffer_job or usart_read_job respectively.
usart_read_buffer_job: Sets up the driver to read from the USART to a given buffer. If registered and enabled, a callback function will be called.
usart_read_job: Sets up the driver to read data from the USART module to the data pointer given. If registered and enabled, a callback will be called when the receiving is completed.
You could find more details about these functions on http://asf.atmel.com/docs/latest/samd21/html/group__asfdoc__sam0__sercom__usart__group.html
In this case, assuming that the main program stalls in some unexpected infinite loop, the handlers should still run at any time when a command arrives from the UART peripheral, and perform some important tasks, for example to recover from the problem.
I hope this explanation makes my previous question clearer, and I hope to get a response from you all soon.
First of all, do not put an infinite loop inside an infinite loop!
If you find yourself doing this, it indicates a probable design flaw. Please revise your design.
(Let's call it the first rule)
Second, you seem to be using event-driven I/O (rather than polling) by registering a handler/callback.
Here is a second rule: you never call a handler yourself.
You register a callback function (handler) to be called when the event occurs.
If you are doing the initialization and configuration correctly, the code should work following this scheme:
void initialization()
{
    /* device and other initialization */
    ...

    usart_register_callback(...); /* register usart_read_callback() here */
    usart_enable_callback(...);

    usart_read_buffer_job(...);   /* start the first read job so the callback can fire */
}

int main()
{
    initialization();

    while (1)
    {
        /* some other pieces of code */
    }
}

void usart_read_callback(...)
{
    usart_write_buffer_job(...); /* e.g. echo or process the data that was received */
    usart_read_buffer_job(...);  /* re-arm the read job so the callback is called again */
}
usart_read_buffer_job() will only invoke the callback one time, so after the callback has been dealt with, you must invoke usart_read_buffer_job() again (perhaps at the end of the callback if the processing is finished).
Only one infinite loop can run unless you have some kind of separate tasks (such as in FreeRTOS), each with their own loop.
I tried to break my problem down into a small example. The real problem is a more complex communication:
I have a function that triggers a communication: it connects and sends messages to a server. If there is an answer, the Client class emits a signal containing the answer.
void communicate()
{
    client.setUpMessage();   // the answer is emitted as a signal and
                             // processed in the slot 'reactToAnswer(...)'
    client.sendMessage("HelloWorld");
}
void reactToAnswer(QString answer)
{
    parser.parseAnswer(); // an error could occur here
}
What if an error is detected in the slot in which the response is processed? I would like to stop the execution of the function communicate(). This means that the function client.sendMessage("HelloWorld") should no longer be executed.
In my naivety I tried to handle the problem with exceptions:
void communicate()
{
    try
    {
        client.setUpMessage();   // the answer is emitted as a signal and
                                 // processed in the slot 'reactToAnswer(...)'
        client.sendMessage("HelloWorld");
    }
    catch (const myException&)
    {
        // do something
    }
}

void reactToAnswer(QString answer)
{
    if (!parser.parseAnswer())
    {
        throw myException();
    }
}
This does not work: throwing an exception from a slot invoked by a Qt signal is undefined behaviour. The usual way is to reimplement QApplication::notify() (or QCoreApplication::notify()), but that does not work for me. There is already a QApplication for the GUI, and I want the communication class (a QObject) to stand alone; everything should be handled within this class.
I hope I explained the problem comprehensibly. I do not insist on using exceptions; other ways to stop the communication are fine for me as well.
Thanks in advance!
I'm not sure that what you are trying to accomplish is a particularly good fit for the signals-and-slots paradigm... perhaps you want to go with just a regular old function call instead? i.e. something like:
void communicate()
{
    QString theAnswer; // will be written to by setUpMessage() unless an error occurs
    if (client.setUpMessage(theAnswer))
    {
        reactToAnswer(theAnswer);
        client.sendMessage("HelloWorld");
    }
}
The reason that signals-and-slots aren't a good fit is that signals are designed to be connectable to multiple slots at once, and the order in which the slot-methods are called is undefined -- so if a slot-method tries to interfere with the signal-emitting process in the way you describe, the behavior is rather unpredictable (because you don't know how many other connected slot-methods, if any, had already been called as part of the signal emission before your particular slot-method hit the brakes). And of course if you ever switch to queued/asynchronous signals, then it won't work at all, because the slot will be called in a different context entirely, long after the signal-emitting function has already returned.
That said, if you absolutely must use signals-and-slots for this, you can have your slot emit its own error-has-occurred signal, which can be connected back to a slot in the original signal-emitting class. That slot could then set a boolean (or whatever), and your communicate() method could then check the state of that boolean (right after client.setUpMessage() has returned) to decide whether or not to continue executing or return early.
(I don't recommend that though -- signals-and-slots are there to make your program less complicated, and in this case I think using them instead of a regular function call actually makes your program more complicated, with no corresponding benefit)
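For completeness, here is a rough sketch of that flag idea, condensed so the slot and communicate() live in the same class. Client, Parser, and the answerReceived signal name are assumptions based on the question, and the connection is assumed to be direct (same thread), so the slot runs before setUpMessage() returns:

#include <QObject>
#include <QString>

class Communicator : public QObject
{
    Q_OBJECT
public:
    Communicator()
    {
        // direct connection: reactToAnswer() runs before setUpMessage() returns
        connect(&client, &Client::answerReceived,
                this, &Communicator::reactToAnswer);
    }

    void communicate()
    {
        errorOccurred = false;        // reset before starting
        client.setUpMessage();        // may invoke reactToAnswer() synchronously
        if (errorOccurred)
            return;                   // bail out; sendMessage() is skipped
        client.sendMessage("HelloWorld");
    }

public slots:
    void reactToAnswer(QString answer)
    {
        if (!parser.parseAnswer())
            errorOccurred = true;     // remember the failure instead of throwing
    }

private:
    Client client;                    // assumed from the question
    Parser parser;                    // assumed from the question
    bool errorOccurred = false;
};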
I'm working in C++ on Unix.
Say I have a long running function that does something, for example read stuff from file and parse it. In this function I keep count of the things I read from the file in a local variable num_read.
I want to catch CTRL+c in a custom signal handler and print the value of num_read.
The only way I can think of is allocating num_read on the heap and storing its address in a global variable that can be accessed by my signal handler. Is there a more elegant way?
The answer is no. There is no way of communicating between a signal handler and the rest of the code except through global variables.
Also, you can only do a very, very limited number of things in a signal handler. You cannot use << on an std::ostream, for example, nor can you call printf. The usual way of handling signals under Unix is to catch them in a separate thread. The alternative (which works for other OSs as well) is to define a global variable of type volatile sig_atomic_t, which is set in the signal handler and polled in the main loop. (In your case, for example, you might poll it every time you update num_read.)
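A minimal sketch of that polling approach (the handler and flag names are just for illustration):

#include <csignal>
#include <cstdio>

static volatile std::sig_atomic_t got_sigint = 0;
static void handle_sigint(int) { got_sigint = 1; }

int main()
{
    std::signal(SIGINT, handle_sigint);

    long num_read = 0;
    for (;;) {                       // stands in for "read stuff from file and parse it"
        /* read and parse one line ... */
        ++num_read;

        if (got_sigint) {            // polled from normal code, so printf is fine here
            got_sigint = 0;
            std::printf("read %ld records so far\n", num_read);
        }
    }
}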
Besides the traditional Unix way with signal handlers, there is another option:
since Linux kernel 2.6.22 there is the signalfd() function. You can obtain an ordinary file descriptor and poll it (using select or epoll) for incoming signals. When a signal arrives you handle it in plain userspace code, with none of the usual signal-handler restrictions, so you can call whatever you want...
as far as I know, OS X has a similar feature in kqueue (search this site or the internet for EVFILT_SIGNAL and kqueue)
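A rough sketch of the signalfd() approach on Linux (error handling omitted; in a real program you would add the descriptor to your select()/epoll() set instead of blocking in read()):

#include <sys/signalfd.h>
#include <signal.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);

    // block normal delivery so the signal is only reported through the fd
    sigprocmask(SIG_BLOCK, &mask, nullptr);

    int sfd = signalfd(-1, &mask, 0);

    for (;;) {
        struct signalfd_siginfo si;
        ssize_t n = read(sfd, &si, sizeof(si));   // blocks until a signal arrives
        if (n == (ssize_t)sizeof(si) && si.ssi_signo == SIGINT) {
            std::printf("got SIGINT via signalfd -- ordinary userspace code\n");
        }
    }
}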
Let's say that I have two libraries (A and B), and each has one function that listens on sockets. These functions use select(); they return an event immediately if data has arrived, otherwise they wait for some time (timeout) and then return NULL:
A_event_t* A_wait_for_event(int timeout);
B_event_t* B_wait_for_event(int timeout);
Now, I use them in my program:
int main (int argc, char *argv[]) {
    // Init A
    // Init B
    // .. do some other initialization

    A_event_t *evA;
    B_event_t *evB;

    for (;;) {
        evA = A_wait_for_event(50);
        evB = B_wait_for_event(50);

        // do some work based on events
    }
}
Each library has its own sockets (e.g. udp socket) and it is not accessible from outside.
PROBLEM: This is not very efficient. If, for example, there are a lot of events waiting to be delivered by B_wait_for_event, they always have to wait until A_wait_for_event times out, which effectively limits the throughput of library B and of my program.
Normally, one could use threads to separate the processing, BUT what if processing some event requires calling a function of the other library, and vice versa? Example:
if (evA != 0 && evA == A_EVENT_1) {
    B_do_something();
}

if (evB != 0 && evB == B_EVENT_C) {
    A_do_something();
}
So, even if I could create two threads and separate the functionality of the two libraries, these threads would have to exchange events between them (probably through a pipe). This would still limit performance, because one thread would be blocked in the X_wait_for_event() function and could not receive data from the other thread immediately.
How to solve this?
This solution may not be available depending on the libraries you're using, but the best solution is not to call functions in individual libraries that wait for events. Each library should support hooking into an external event loop. Then your application uses a single loop which contains a poll() or select() call that waits on all of the events that all of the libraries you use want to wait for.
glib's event loop is good for this because many libraries already know how to hook into it. But if you don't use something as elaborate as glib, the normal approach is this:
Loop forever:
    Start with an infinite timeout and an empty set of file descriptors
    For each library you use:
        Call a setup function in the library which is allowed to add file descriptors
        to your set and/or shorten (but not lengthen) the timeout.
    Run poll()
    For each library you use:
        Call a dispatch function in the library that responds to any events that
        might have occurred when the poll() returned.
Yes, it's still possible for an earlier library to starve a later library, but it works in practice.
If the libraries you use don't support this kind of setup & dispatch interface, add it as a feature and contribute the code upstream!
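A sketch of what such a main loop could look like (A_setup/A_dispatch and B_setup/B_dispatch are hypothetical names for the per-library hooks described above, not a real API):

#include <poll.h>

int main(void)
{
    for (;;) {
        struct pollfd fds[64];
        nfds_t nfds = 0;
        int timeout_ms = -1;  /* start with an infinite timeout */

        /* each library adds its descriptors and may only shorten the timeout */
        A_setup(fds, &nfds, &timeout_ms);
        B_setup(fds, &nfds, &timeout_ms);

        poll(fds, nfds, timeout_ms);

        /* each library checks its own descriptors and handles any events */
        A_dispatch(fds, nfds);
        B_dispatch(fds, nfds);
    }
}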
(I'm moving this to an answer since it's getting too long for a comment)
If you are in a situation where you're not allowed to call A_do_something in one thread while another thread is executing A_wait_for_event (and similarly for B), then I'm pretty sure you can't do anything efficient, and have to settle between various evils.
The most obvious improvement is to immediately take action upon getting an event, rather than trying to read from both: i.e. order your loop
Wait for an A event
Maybe do something in B
Wait for a B event
Maybe do something in A
Other mitigations you could do are
Try to predict whether an A event or a B event is more likely to come next, and wait on that first. (e.g. if they come in streaks, then after getting and handling an A event, you should go back to waiting for another A event)
Fiddle with the timeout values to strike a balance between spin loops and too much blocking. (maybe even adjust dynamically)
EDIT: You might check the APIs for your library; they might already offer a way to deal with the problem. For example, they might allow you to register callbacks for events, and get notifications of events through the callback, rather than polling wait_for_event.
Another thing is if you can create new file descriptors for the library to listen on. e.g. If you create a new pipe and hand one end to library A, then if thread #1 is waiting for an A event, thread #2 can write to the pipe to make an event happen, thus forcing #1 out of wait_for_event. With the ability to kick threads out of the wait_for_event functions at will, all sorts of new options become available.
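As a sketch of that pipe trick (A_watch_fd() is a hypothetical hook; it assumes library A lets you add an extra descriptor to the set it select()s on inside A_wait_for_event()):

#include <unistd.h>

int wake_pipe[2];

void setup_wakeup(void)
{
    pipe(wake_pipe);            /* [0] = read end, [1] = write end */
    A_watch_fd(wake_pipe[0]);   /* hypothetical: hand the read end to library A */
}

/* called from thread #2 to kick thread #1 out of A_wait_for_event() */
void wake_thread_waiting_on_A(void)
{
    char byte = 0;
    write(wake_pipe[1], &byte, 1);  /* read end becomes readable -> select() returns */
}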
A possible solution is to use two threads that wait for events, plus a boost::condition_variable in the "main" thread which "does something". A similar, but not exact, solution is here
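A sketch of that idea, using std::condition_variable for brevity (boost::condition_variable is analogous): the waiter threads only enqueue events, and the main thread does the actual work. Note that this still assumes it is safe to call A_do_something()/B_do_something() from the main thread while another thread sits in X_wait_for_event(), the concern raised in the other answer.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<A_event_t*> a_events;   // event types from the question
std::queue<B_event_t*> b_events;

void wait_on_A()
{
    for (;;) {
        if (A_event_t* ev = A_wait_for_event(50)) {
            std::lock_guard<std::mutex> lock(m);
            a_events.push(ev);
            cv.notify_one();
        }
    }
}

// wait_on_B() looks the same, using B_wait_for_event() and b_events

int main()
{
    std::thread ta(wait_on_A);
    // std::thread tb(wait_on_B);

    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !a_events.empty() || !b_events.empty(); });

        // do the actual work (A_do_something()/B_do_something()) from this thread only
        while (!a_events.empty()) { /* handle a_events.front() */ a_events.pop(); }
        while (!b_events.empty()) { /* handle b_events.front() */ b_events.pop(); }
    }
}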
I was looking for some info about reentrancy, and I came across signals and threads. What is the difference between the two?
Please advise.
Many thanks.
You are comparing apples and oranges. Signal programming is event-driven programming and can be used to influence threads; however, the signal-programming paradigm can also be used in a single-threaded application.
To understand signals it is best to start by thinking about a single-threaded program. This program is doing whatever it does with its one thread, and then a signal is delivered to it. If the program has registered a signal handler (a function to call) for that signal, the normal execution of the program is put on hold for a little bit while the signal handler function runs (very much like a hardware interrupt interrupts the operating system to run an interrupt service routine). So with the code:
#include <stdio.h>
#include <signal.h>
#include <unistd.h> // for alarm

volatile int x = 0;

void foo(int sig_num) {
    x = sig_num;
}

int main(void) {
    unsigned long long count = 0;

    signal(SIGALRM, foo);
    alarm(1); // This is a POSIX function and may not be in all hosted
              // C implementations.
              // SIGALRM will be sent to this process in 1 second.

    while (!x) {
        printf("not x\n");
        count++;
    }

    printf("x is %i and count = %llu\n", x, count);
}
The program will loop until someone sends it a signal (how this happens may differ by platform). If the signal SIGALRM is sent then foo will set x and the loop will exit. Exactly where in the loop foo is called is not clear. It could happen just between the print and incrementing the count, just after the while conditional is tested, during the print, ... lots of places, really. This is why signals may pose a concurrency or reentrancy problem -- they can change things without the other code knowing that it happened.
The reason that x was declared as volatile was that without that many compilers might think "hey, no one in main changes x and main doesn't call any other functions, so x never changes" and optimize out the loop test. Specifying volatile tells the C compiler that this variable can be changed by unseen forces (such as signal handlers, other threads, or sometimes even hardware in the case of memory mapped device control registers).
It was pretty easy to make sure that x was looked out for properly between both the signal handler and the main execution code because x is just an integer (load and store for it were probably single instructions assembly each), it was only altered by the one thing (the signal handler, rather than the main code) in this case, and it was only used as a simple boolean value. If x were some other type, such as a string, then since signals can interrupt you at any time, the signal handler may overwrite part of the string while the main code was in the middle of reading the string. This could have results as bad as someone freezing time while you were in the middle of brushing your teeth, replacing your toothbrush with a cobra, and then unfreezing time.
A little bit more on signals -- they are part of the C language, but most of their use is not really covered by C. Many of the Linux, Unix, and POSIX functions that have to do with signals are not part of the C language, but it is difficult to come up with reasonable (and small) examples of signal use that doesn't rely on something not in the C standard, which is why I used the alarm function. The raise function, which is part of C, can be used to send a signal to yourself, but it is more difficult to make examples for.
As scary as signals may seem now, most systems have more functions that make them much more easy to use.
threads, finally
Threads execute concurrently, while signals interrupt. While there are some threading libraries that actually implement threading in such a way that this is not really the case, it is best to think of threads this way. Since computer programs are actually very limited in their ability to see what is going on, threads can get in each other's way just like signal handlers can get in the way of the main execution code (probably more often than signal handlers, though).
Imagine that you are about to brush your teeth again, but this time you are deaf and blind. Now your roommate, who is also deaf and blind, comes in to fix the sink with some silicone sealer. Just as you reach for the toothpaste he lays the tube of silicone down right on top of the tube of toothpaste, and you grab the tube of silicone instead of the toothpaste. Remember, since you are both blind and deaf (and somehow not bumping into each other) you both assume that no one else is using the sink, so you never realize that you have just put the silicone on your toothbrush, and your roommate doesn't realize that he is trying to fill the cracks between the tile and the back of the sink with toothpaste.
Luckily there are ways that threads can communicate to each other that something is currently in use so other threads should stay away (like locking the door while you brush your teeth).
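In C++ terms, that "lock the door" idea is a mutex. Here is a small illustrative sketch (not from the original answer) using C++11 std::thread and std::mutex:

#include <mutex>
#include <string>
#include <thread>

std::mutex sink_mutex;               // the "lock on the bathroom door"
std::string tube_on_sink = "toothpaste";

void brush_teeth()
{
    std::lock_guard<std::mutex> lock(sink_mutex); // wait until the sink is free
    // while we hold the lock, nobody can swap the tube underneath us
    std::string grabbed = tube_on_sink;
    // ... brush teeth with `grabbed` ...
}

void fix_sink()
{
    std::lock_guard<std::mutex> lock(sink_mutex);
    tube_on_sink = "silicone";       // safe: the brusher cannot grab it mid-swap
    // ... seal the cracks ...
    tube_on_sink = "toothpaste";
}

int main()
{
    std::thread t1(brush_teeth);
    std::thread t2(fix_sink);
    t1.join();
    t2.join();
}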
A thread lives inside a process, whereas signals exist across the whole system; a signal can be delivered to a process, or to a specific thread inside a process.