Using a timer with zmq - C++

I am working on a project where I have to use zmq_poll, but I do not completely understand what it does.
This is how I tried to implement it:
zmq_pollitem_t timer_open(void){
    zmq_pollitem_t items[1];
    if( items[0].socket == nullptr ){
        printf("error socket %s: %s\n", zmq_strerror(zmq_errno()));
        return;
    }
    else{
        items[0].socket = gsock;
    }
    items[0].fd = -1;
    items[0].events = ZMQ_POLLIN;
    // get a timer
    items[0].fd = timerfd_create( CLOCK_REALTIME, 0 );
    if( items[0].fd == -1 )
    {
        printf("timerfd_create() failed: errno=%d\n", errno);
        items[0].socket = nullptr;
        return;
    }
    int rc = zmq_poll(items, 1, -1);
    if(rc == -1){
        printf("error poll %s: %s\n", zmq_strerror(zmq_errno()));
        return;
    }
    else
        return items[0];
}
I am very new to this topic. I have to modify an old existing project and replace its functions with the zmq ones. On other websites I saw examples that used two items and called zmq_poll in an endless loop. I have read the documentation but still could not properly understand how this works. These are the other two functions I have implemented; I do not know if it is correct to implement them like this:
void timer_set(zmq_pollitem_t items[], long msec, ipc_timer_mode_t mode) {
    struct itimerspec t;
    ...
    timerfd_settime( items[0].fd, 0, &t, NULL );
}

void timer_close(zmq_pollitem_t items[]){
    if( items[0].fd != -1 )
        close(items[0].fd);
    items[0].socket = nullptr;
}
I am not sure if I need the zmq_poll function because I am using a timer.
EDIT:
void some_function_timer_example() {
    // We want to wait on two timers
    zmq_pollitem_t items[2];

    // Setup first timer
    ipc_timer_open_(&items[0]);
    ipc_timer_set_(&items[0], 1000, IPC_TIMER_ONE_SHOT);

    // Setup second timer
    ipc_timer_open_(&items[1]);
    ipc_timer_set_(&items[1], 1000, IPC_TIMER_ONE_SHOT);

    // Now wait for the timers in a loop
    while (1) {
        //ipc_timer_set_(&items[0], 1000, IPC_TIMER_REPEAT);
        //ipc_timer_set_(&items[1], 5000, IPC_TIMER_REPEAT);
        int rc = zmq_poll(items, 2, -1);
        assert(rc >= 0); /* Returned events will be stored in items[].revents */
        if (items[0].revents & ZMQ_POLLIN) {
            // Process task
            std::cout << "revents: 1" << std::endl;
        }
        if (items[1].revents & ZMQ_POLLIN) {
            // Process weather update
            std::cout << "revents: 2" << std::endl;
        }
    }
}
Now it still prints very fast and does not wait; it only waits at the beginning. When timer_set is inside the loop it waits properly, but only if the waiting times are the same, e.g. ipc_timer_set(&items[1], 1000, ...) and ipc_timer_set(&items[0], 1000, ...).
So how do I have to change this? Or is this the correct behavior?

zmq_poll works like select, but it allows some additional stuff. For instance, you can poll regular synchronous file descriptors and ØMQ's special async sockets in the same call.
In your case you can use the timer fd as you have tried to do, but you need to make a few small changes.
First you have to consider how you will invoke these timers. I think the use case is that you want to create multiple timers and wait for them. This would typically be the function in your current code that might be using a loop for the timer (either using select() or whatever else they might be doing).
It would be something like this:
void some_function() {
    // We want to wait on two timers
    zmq_pollitem_t items[2];

    // Setup first timer
    ipc_timer_open(&items[0]);
    ipc_timer_set(&items[0], 1000, IPC_TIMER_REPEAT);

    // Setup second timer
    ipc_timer_open(&items[1]);
    ipc_timer_set(&items[1], 5000, IPC_TIMER_ONE_SHOT);

    // Now wait for the timers in a loop
    while (1) {
        int rc = zmq_poll(items, 2, -1);
        assert(rc >= 0); /* Returned events will be stored in items[].revents */
    }
}
Now, you need to fix the ipc_timer_open. It will be very simple - just create the timer fd.
// Takes a pointer to pre-allocated zmq_pollitem_t and returns 0 for success, -1 for error
int ipc_timer_open(zmq_pollitem_t *items){
    items[0].socket = NULL;
    items[0].events = ZMQ_POLLIN;
    // get a timer
    items[0].fd = timerfd_create( CLOCK_REALTIME, 0 );
    if( items[0].fd == -1 )
    {
        printf("timerfd_create() failed: errno=%d\n", errno);
        return -1; // error
    }
    return 0;
}
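The timer_set counterpart (which the question elides with "...") would then arm that fd with timerfd_settime. A minimal sketch, assuming ipc_timer_mode_t, IPC_TIMER_ONE_SHOT and IPC_TIMER_REPEAT are your project's own names rather than anything from zmq: a one-shot timer leaves it_interval at zero, a repeating timer sets it to the same period.

// Sketch: arm the timerfd created by ipc_timer_open.
// msec is the timeout in milliseconds; mode decides one-shot vs. periodic.
int ipc_timer_set(zmq_pollitem_t *items, long msec, ipc_timer_mode_t mode) {
    struct itimerspec t = {};

    // First expiration after 'msec' milliseconds
    t.it_value.tv_sec  = msec / 1000;
    t.it_value.tv_nsec = (msec % 1000) * 1000000L;

    if (mode == IPC_TIMER_REPEAT) {
        // Periodic: fire again every 'msec' milliseconds
        t.it_interval = t.it_value;
    }
    // For IPC_TIMER_ONE_SHOT, it_interval stays zero => single expiration

    if (timerfd_settime(items[0].fd, 0, &t, NULL) == -1) {
        printf("timerfd_settime() failed: errno=%d\n", errno);
        return -1;
    }
    return 0;
}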
Edit: Added as reply to comment, since this is long:
From the documentation:
If both socket and fd are set in a single zmq_pollitem_t, the ØMQ socket referenced by socket shall take precedence and the value of fd shall be ignored.
So if you are passing the fd, you have to set socket to NULL. I am not even clear where gsock is coming from. Is this in the documentation? I couldn't find it.
And when will it break out of the while(1) loop?
This is application logic, and you have to code it according to what you require. zmq_poll just keeps returning every time one of the timers fires. In this example, every second zmq_poll returns because the first timer (which is a repeat) keeps triggering. But at 5 seconds, it will also return because of the second timer (which is a one-shot). It's up to you to decide when you exit the loop. Do you want this to go on forever? Do you need to check a different condition to exit the loop? Do you want to do this, say, 100 times and then return? You can code whatever logic you want on top of this code.
And what kind of events are returned back
ZMQ_POLLIN since timer fds behave like readable file descriptors.
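One detail worth adding, and a likely explanation for the "prints very fast" behaviour in the edit: a timerfd stays readable until its 8-byte expiration counter is read, so with level-triggered polling you should drain it whenever ZMQ_POLLIN is reported. A minimal sketch of the handling inside the poll loop, based on the items[] layout above:

if (items[0].revents & ZMQ_POLLIN) {
    uint64_t expirations = 0;
    // Reading clears the readable state; otherwise zmq_poll returns again immediately
    if (read(items[0].fd, &expirations, sizeof(expirations)) != sizeof(expirations)) {
        printf("read() on timerfd failed: errno=%d\n", errno);
    }
    // ... handle the timer tick here
}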

Related

Thread query SDL_Net

Running my listen function in a separate thread seems to use up a lot of CPU.
Is it considered OK to use delays to reduce CPU usage, or am I using threads all wrong?
// Running in a separate thread
void Server::listen()
{
    while (m_running)
    {
        if (SDLNet_UDP_Recv(m_socket, m_packet) > 0)
        {
            // Handle Packet Function
        }
    }
}
From the SDLNet_UDP_Recv reference
This is a non-blocking call, meaning if there's no data ready to be received the function will return.
That means if there's nothing to receive then SDLNet_UDP_Recv will return immediately with 0, your loop will iterate and call SDLNet_UDP_Recv again, which returns 0, and so on. This loop will never sleep or pause, so of course it will use as much CPU as it can.
A possible solution is indeed to add some kind of delay or sleep in the loop.
I would suggest something like
while (m_running)
{
    int res = 0;
    while (m_running && (res = SDLNet_UDP_Recv(...)) > 0)
    {
        // Handle message
    }
    if (res < 0)
    {
        // Handle error
    }
    else if (m_running /* && res == 0 */)
    {
        // Small delay or sleep
    }
}
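For the "small delay or sleep" part, SDL's own SDL_Delay is usually enough. A filled-in sketch of the same loop, assuming m_socket and m_packet from the question and a hypothetical handlePacket helper:

while (m_running)
{
    int res = 0;
    // Drain everything that is currently queued on the socket
    while (m_running && (res = SDLNet_UDP_Recv(m_socket, m_packet)) > 0)
    {
        handlePacket(m_packet);   // placeholder for your packet handling
    }
    if (res < 0)
    {
        // SDLNet_UDP_Recv reports errors as -1; SDLNet_GetError() has the details
    }
    else if (m_running)
    {
        SDL_Delay(1);             // yield the CPU for ~1 ms instead of spinning
    }
}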

C++ How to exit out of a while loop recvfrom()

I'm trying to create a UDP broadcast program to check for local game servers, but I'm having some trouble with the receiving end. Since the number of servers alive is unknown at all times, you must have a loop that only exits when you stop it. So in this bit of code here:
while(1) // start a while loop
{
    if (recvfrom(sd, buff, BUFFSZ, 0, (struct sockaddr *)&peer, &psz) < 0) // recvfrom() function call
    {
        cout << red << "Fatal: Failed to receive data" << white << endl;
        return;
    }
    else
    {
        cout << green << "Found Server :: " << white;
        cout << yellow << inet_ntoa(peer.sin_addr) << ":" << htons(peer.sin_port);
        cout << endl;
    }
}
I wish to run this recvfrom() loop until I press Ctrl + C. I've tried setting up handlers and such (from related questions), but they're all either too complicated for me or too simple - just exiting the program as a demonstration. Here's my problem:
The program hangs on recvfrom until it receives a connection (my guess), so, there's never a chance for it to specifically wait for input. How can I set up an event that will work into this nicely?
Thanks!
In the CTRL-C handler, set a flag, and use that flag as the condition in the while loop.
Oh, and if you're not on a POSIX system where system calls can be interrupted by signals, you might want to make the socket non-blocking and use e.g. select (with a small timeout) to poll for data.
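On POSIX, the flag approach can look roughly like the sketch below; keep_running and handle_sigint are names made up for this example, and the handler is installed with sigaction without SA_RESTART so that a blocking recvfrom() returns with EINTR on Ctrl+C:

#include <csignal>
#include <cerrno>

static volatile sig_atomic_t keep_running = 1;

static void handle_sigint(int)
{
    keep_running = 0;          // only set a flag; do no real work in a signal handler
}

// during setup: no SA_RESTART, so a blocked recvfrom() is interrupted
struct sigaction sa = {};
sa.sa_handler = handle_sigint;
sigaction(SIGINT, &sa, NULL);

while (keep_running)
{
    ssize_t n = recvfrom(sd, buff, BUFFSZ, 0, (struct sockaddr *)&peer, &psz);
    if (n < 0)
    {
        if (errno == EINTR)
            continue;          // interrupted by Ctrl+C: the loop condition is re-checked
        // real error handling here
        break;
    }
    // handle the received datagram
}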
Windows has a couple of problems with a scheme like this. The major problem is that function calls cannot be interrupted by the CTRL-C handler. Instead you have to poll in the loop whether there is anything to receive, while also checking the "exit loop" flag.
It could be done something like this:
bool ExitRecvLoop = false;
BOOL CtrlHandler(DWORD type)
{
if (type == CTRL_C_EVENT)
{
ExitRecvLoop = true;
return TRUE;
}
return FALSE; // Call next handler
}
// ...
SetConsoleCtrlHandler((PHANDLER_ROUTINE) CtrlHandler, TRUE);
while (!ExitRecvLoop)
{
fd_set rs;
FD_ZERO(&rs);
FD_SET(sd, &rs);
timeval timeout = { 0, 1000 }; // One millisecond
if (select(sd + 1, &rs, NULL, NULL, &timeout) < 0)
{
// Handle error
}
else
{
if (FD_ISSET(sd, &rs))
{
// Data to receive, call `recvfrom`
}
}
}
You might have to make the socket non-blocking for this to work (see the ioctlsocket function for how to).
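Making a Windows socket non-blocking is a one-liner with ioctlsocket and the FIONBIO flag:

u_long nonBlocking = 1;   // 1 = non-blocking, 0 = blocking
if (ioctlsocket(sd, FIONBIO, &nonBlocking) != 0)
{
    // ioctlsocket failed; WSAGetLastError() gives the reason
}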
Thread off your recvfrom() loop so that your main thread can wait for user input. When the user requests a stop, close the fd from the main thread; recvfrom() will then return immediately with an error, allowing your recvfrom() thread to exit.

Child process is blocked by full pipe, cannot read in parent process

I have roughly created the following code to call a child process:
// pipe meanings
const int READ = 0;
const int WRITE = 1;

int fd[2];
// Create pipes
if (pipe(fd))
{
    throw ...
}
p_pid = fork();
if (p_pid == 0) // in the child
{
    close(fd[READ]);
    if (dup2(fd[WRITE], fileno(stdout)) == -1)
    {
        throw ...
    }
    close(fd[WRITE]);
    // Call exec
    execv(argv[0], const_cast<char* const*>(&argv[0]));
    _exit(-1);
}
else if (p_pid < 0) // fork has failed
{
    throw ...
}
else // in the parent
{
    close(fd[WRITE]);
    p_stdout = new std::ifstream(fd[READ]);
}
Now, if the subprocess does not write too much to stdout, I can wait for it to finish and then read the stdout from p_stdout. If it writes too much, the write blocks and the parent waits for it forever.
To fix this, I tried to wait with WNOHANG in the parent; if the child is not finished, read all available output from p_stdout using readsome, sleep a bit, and try again. Unfortunately, readsome never reads anything:
while (true)
{
    if (waitid(P_PID, p_pid, &info, WEXITED | WNOHANG) != 0)
        throw ...;
    else if (info.si_pid != 0) // waiting has succeeded
        break;

    char tmp[1024];
    size_t sizeRead;
    sizeRead = p_stdout->readsome(tmp, 1024);
    if (sizeRead > 0)
        s_stdout.write(tmp, sizeRead);
    sleep(1);
}
The question is: Why does this not work and how can I fix it?
edit: If there were only one child, simply using read instead of readsome would probably work, but the process has multiple children and needs to react as soon as one of them terminates.
As sarnold suggested, you need to change the order of your calls: read first, wait last. Even if your method worked, you might miss the last read, i.e. you could exit the loop before you read the last set of bytes that was written.
The problem might be that ifstream is non-blocking. I've never liked iostreams; even in my C++ projects, I always liked the simplicity of C's stdio functions (i.e. FILE*, fprintf, etc). One way to get around this is to check whether the descriptor is readable before reading. You can use select to determine if there is data waiting on that pipe. You're going to need select if you are going to read from multiple children anyway, so you might as well learn it now.
As for a quick isreadable function, try something like this (please note I haven't tried compiling this):
bool isreadable(int fd, int timeoutSecs)
{
    struct timeval tv = { timeoutSecs, 0 };
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(fd, &readSet);   // watch this descriptor for readability
    return select(fd + 1, &readSet, NULL, NULL, &tv) == 1;
}
Then in your parent code, do something like:
while (true) {
    if (isreadable(fd[READ], 1)) {
        // read fd[READ];
        if (bytes <= 0)
            break;
    }
}
waitpid(pid, NULL, 0);
I'd suggest re-writing the code so that it doesn't call waitpid(2) until after read(2) calls on the pipe return 0 to signify end-of-file. Once you get the end-of-file return from your read calls, you know the child is dead, and you can finally waitpid(2) for it.
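A minimal sketch of that ordering, using plain read(2) on the pipe's fd[READ] from the question and the same s_stdout target stream:

// Read until the child closes its end of the pipe, then reap it.
char tmp[1024];
ssize_t n;
while ((n = read(fd[READ], tmp, sizeof(tmp))) > 0)
{
    s_stdout.write(tmp, n);       // consume output as it arrives, so the pipe never fills up
}
// n == 0 means EOF: the child has exited (or at least closed its stdout)
int status = 0;
waitpid(p_pid, &status, 0);       // now this will not block for long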
Another option is to de-couple the reading from the reaping even further and perform the wait calls in a SIGCHLD signal handler asynchronously to the reading operations.
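A sketch of that decoupled variant: a SIGCHLD handler that reaps any finished children with WNOHANG, installed once during setup (the names and the omitted error handling are illustrative):

#include <sys/wait.h>
#include <csignal>

static void on_sigchld(int)
{
    // Reap every child that has exited; WNOHANG keeps the handler from blocking
    while (waitpid(-1, NULL, WNOHANG) > 0)
    {
        // optionally record which children have finished
    }
}

// during setup:
struct sigaction sa = {};
sa.sa_handler = on_sigchld;
sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
sigaction(SIGCHLD, &sa, NULL);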

select() behaviour for writeability?

I have an fd_set "write_set" which contains sockets that I want to use in a send(...) call. When I call select(maxsockfd+1, NULL, &write_set, NULL, &tv), it always returns 0 (timeout) although I haven't sent anything over the sockets in the write_set yet and it should be possible to send data.
Why is this? Shouldn't select return instantly when it's possible to send data over the sockets in write_set?
Thanks!
Edit: My code..
// _read_set and _write_set are the master sets
fd_set read_set = _read_set;
fd_set write_set = _write_set;
// added this for testing, the socket is a member of RemoteChannelConnector.
std::list<RemoteChannelConnector*>::iterator iter;
for (iter = _acceptingConnectorList->begin(); iter != _acceptingConnectorList->end(); iter++) {
if(FD_ISSET((*iter)->getSocket(), &write_set)) {
char* buf = "a";
int ret;
if ((ret = send((*iter)->getSocket(), buf, 1, NULL)) == -1) {
std::cout << "error." << std::endl;
} else {
std::cout << "success." << std::endl;
}
}
}
struct timeval tv;
tv.tv_sec = 10;
tv.tv_usec = 0;
int status;
if ((status = select(_maxsockfd, &read_set, &write_set, NULL, &tv)) == -1) {
// Terminate process on error.
exit(1);
} else if (status == 0) {
// Terminate process on timeout.
exit(1);
} else {
// call send/receive
}
When I run it with the code for testing if my socket is actually in the write_set and if it is possible to send data over the socket, I get a "success"...
I don't believe that you're allowed to copy-construct fd_set objects. The only guaranteed way is to completely rebuild the set using FD_SET before each call to select. Also, you're writing to the list of sockets to be selected on, before ever calling select. That doesn't make sense.
Can you use poll instead? It's a much friendlier API.
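If poll is an option, checking a single socket for writability looks roughly like this (sock stands in for whichever descriptor you want to test):

#include <poll.h>

struct pollfd pfd;
pfd.fd = sock;                   // the socket to test
pfd.events = POLLOUT;            // interested in writability
pfd.revents = 0;

int rc = poll(&pfd, 1, 10000);   // wait up to 10 seconds
if (rc > 0 && (pfd.revents & POLLOUT)) {
    // safe to call send() without blocking
}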
Your code is very confused. First, you don't seem to be setting any of the bits in the fd_set. Secondly, you test the bits before you even call select.
Here is how the flow generally works...
1. Use FD_ZERO to zero out your set.
2. Go through, and for each file descriptor you're interested in the writeable state of, use FD_SET to set it.
3. Call select, passing it the address of the fd_set you've been calling FD_SET on as the write set, and observe the return value.
4. If the return value is > 0, then go through the write set and use FD_ISSET to figure out which ones are still set. Those are the ones that are writeable.
Your code does not at all appear to be following this pattern. Also, the important task of setting up the master set isn't being shown.
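Put together, the four steps above might look like the following sketch (sockets and maxfd are placeholder names standing in for your own bookkeeping of descriptors and the highest fd):

fd_set write_set;
FD_ZERO(&write_set);                                       // step 1

for (int s : sockets) {                                    // step 2: add every socket you care about
    FD_SET(s, &write_set);
}

struct timeval tv = { 10, 0 };                             // 10 second timeout
int n = select(maxfd + 1, NULL, &write_set, NULL, &tv);    // step 3

if (n > 0) {                                               // step 4
    for (int s : sockets) {
        if (FD_ISSET(s, &write_set)) {
            // s is writeable: send() on it will not block
        }
    }
}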

force exit from readline() function

I am writing a program in C++ which runs GNU readline in a separate thread. When the main thread exits I need to finish the thread in which the readline() function is called. readline() only returns when standard input arrives (Enter is pressed).
Is there any way to send input to the application or explicitly return from the readline function?
Thanks in advance.
Instead of returning from the main thread, call exit(errno). All other threads will be killed nastily!
Or, if you want to be nicer, and depending on your OS, you could send a signal to the readline thread, which would interrupt the syscall.
Or, if you want to be cleverer, you could run readline in async mode, using a select() loop with a timeout so that your thread never blocks in readline functions, and your thread can clean up after itself.
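The async-mode idea maps onto readline's callback interface: rl_callback_handler_install registers a line handler and rl_callback_read_char feeds readline one character whenever select reports stdin readable. A rough sketch (on_line and the stop flag are names invented for this example; stop would be set from another thread when it's time to quit):

#include <readline/readline.h>
#include <sys/select.h>
#include <unistd.h>
#include <cstdlib>

static volatile bool stop = false;

static void on_line(char *line)
{
    // called by readline once a complete line is available (line is NULL on EOF)
    // process the line here, then free it
    free(line);
}

void readline_loop()
{
    rl_callback_handler_install("> ", on_line);
    while (!stop) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        struct timeval tv = { 0, 100000 };               // 100 ms timeout
        int rc = select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv);
        if (rc > 0 && FD_ISSET(STDIN_FILENO, &fds))
            rl_callback_read_char();                     // hand the pending input to readline
        // on timeout we just re-check 'stop'
    }
    rl_callback_handler_remove();                        // restore the terminal state
}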
I experimented with this situation as well. I thought perhaps one could call close(STDIN_FILENO), which does cause readline to return on the other thread, but for some reason it leaves the terminal in a bad state (doesn't echo characters so you can't see what you're typing). However, a call to the 'reset' command will fix this, so the full alternative is:
close(STDIN_FILENO);
pthread_join(...); // or whatever to wait for thread exit
system("reset -Q"); // -Q to avoid displaying cruft
However, the final better solution I used, inspired by the other suggestions, was to override rl_getc:
rl_getc_function = getc; // stdio's getc passes
and then you can use pthread_kill() to send a signal to interrupt the getc, which returns a -1 to readline, which returns a NULL to the calling thread so you can exit cleanly instead of looping for the next input (the same as would happen if the user EOF'd by ctrl-D)
Now you can have your cake (easy blocking readlines) and eat it too (be able to stop by external event without screwing up the terminal)
C++ standard input is not designed to be thread-safe. So, even if there were a method to programmatically stop it from waiting for input, you wouldn't be able to call it from another thread. Of course, there could be an implementation-specific way to do so.
Old thread, but the readline API still seems underexplored here.
In order to interrupt readline, I first disabled readline's signal handlers.
Don't mind the ugly global_buffer I'm using - it's just an example.
http://www.delorie.com/gnu/docs/readline/rlman_43.html
Reader Thread:
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int isBufferReady = 0;
char global_buffer[2500]; /// Assuming that reads will not be any bigger

void *reader_thread(void *arg)
{
    rl_getc_function = getc;
    rl_catch_signals = 0;
    rl_catch_sigwinch = 0;

    char *input;
    while ( (input = readline( NULL )) )
    {
        int i = strlen(input) - 1;
        if ( i < 0 )  /// Empty line (just Enter pressed) - stop reading
            return NULL;
        /// Due to TAB there might be whitespace at the end
        while ( i > 0 )
        {
            if ( isspace(input[i]) )
            {
                input[i] = '\0';
            }
            else
            {
                break;
            }
            i--;
        }

        pthread_mutex_lock(&lock);
        read_file_function( input, global_buffer );
        free(input);
        isBufferReady = 1;
        pthread_mutex_unlock(&lock);
    }
    printf( "Im closed \n" );
    return NULL;
}
Signal handler:
volatile int keepRunning = 1;

void SIG_handler(int signal)
{
    static int sig_count = 0;
    switch ( signal )
    {
        case SIGUSR2:
        {
            /// Yeah I know I should not printf in a signal handler
            printf( "USR2: %d \n", sig_count++ );
            break;
        }
        default:
        {
            printf( " SIGHANDLE\n" );
            keepRunning = 0;
            break;
        }
    }
}
main:
int main( int argc, char *argv[] )
{
    pthread_t file_reader;

    { /// Signal Handler registration
        struct sigaction sigact = {{0}};
        sigact.sa_handler = SIG_handler;
        // sigact.sa_flags = SA_RESTART;
        sigaction(SIGINT , &sigact, NULL);
        sigaction(SIGQUIT, &sigact, NULL);
        sigaction(SIGTERM, &sigact, NULL);
        sigaction(SIGHUP , &sigact, NULL);
        // sigaction(SIGUSR1, &sigact, NULL);
        sigaction(SIGUSR2, &sigact, NULL);
    }

    pthread_create( &file_reader, NULL, reader_thread, NULL );

    while(keepRunning)
    {
        pthread_mutex_lock(&lock);
        if( !isBufferReady )
        {
            // ... fill in global_buffer according to some algorithm
        }
        pthread_mutex_unlock(&lock);
        usleep(10);

        pthread_mutex_lock(&lock);
        if(isBufferReady)
            isBufferReady = 0;
        // ... some operation on the 'global_buffer', like writing its contents to a socket
        pthread_mutex_unlock(&lock);
        usleep(10);
    }

    signal(SIGINT, SIG_DFL);
    pthread_cancel( file_reader );
    pthread_join( file_reader, NULL );
    pthread_mutex_destroy(&lock);
    rl_cleanup_after_signal();
    return 0;
}
With this (nowhere near perfect) code snippet I was finally able to interrupt readline without the flakiness described previously.
I used this code snippet for interactive debugging purposes, where I had prepared packets in simple text files and read those files in with the help of readline.