I am trying to catch a SIGVTALRM sent by setitimer, and I have no idea why it doesn't work. Here's my code:
void time(int time) {
    cout << "time" << endl;
    exit(0);
}

int main(void) {
    signal(SIGVTALRM, time);
    itimerval tv;
    tv.it_value.tv_sec = 5;
    tv.it_value.tv_usec = 0;
    tv.it_interval.tv_sec = 5;
    tv.it_interval.tv_usec = 0;
    setitimer(ITIMER_VIRTUAL, &tv, NULL);
    while (true) {
        cout << "waiting" << endl;
    }
    return 0;
}
For some reason it never invokes time(). I don't know whether the signal isn't being caught or whether it was never sent in the first place.
It should be pretty simple. Any ideas? Thanks.
Are you sure it is not working?
Everything looks fine to me. Maybe you are just not waiting long enough. Since you are printing "waiting" inside the loop and you are using the virtual timer, the clock only advances while the process is actually running on the CPU (time spent on I/O is not counted). So in reality your timer might only expire after considerably more than 5 seconds of wall-clock time.
Try commenting out the printing part.
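If you want to convince yourself the timer does fire, here is a minimal sketch that burns CPU time instead of printing, so the ITIMER_VIRTUAL clock actually advances. The handler name on_vtalrm and the flag are my own additions, and the handler only sets a flag because cout and exit are not async-signal-safe:

#include <csignal>
#include <sys/time.h>
#include <iostream>

volatile sig_atomic_t fired = 0;

void on_vtalrm(int) { fired = 1; }   // only touch a sig_atomic_t flag in the handler

int main() {
    signal(SIGVTALRM, on_vtalrm);
    itimerval tv = {};
    tv.it_value.tv_sec = 5;          // first expiry after 5 seconds of CPU time
    setitimer(ITIMER_VIRTUAL, &tv, NULL);
    volatile unsigned long spin = 0;
    while (!fired)
        ++spin;                      // pure computation, so the virtual clock ticks
    std::cout << "time" << std::endl;
    return 0;
}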
It is due to the signal() function. As mentioned in http://manpages.ubuntu.com/manpages//precise/en/man2/signal.2.html:
The behavior of signal() varies across UNIX versions, and has also varied historically across different versions of Linux. Avoid its use: use sigaction(2) instead.
So the main function should be something like this:
int main(void) {
    itimerval tv;
    struct sigaction sa;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sa.sa_handler = timer_handler;
    if (sigaction(SIGVTALRM, &sa, NULL) == -1) {
        printf("error with: sigaction\n");
        exit(EXIT_FAILURE);
    }
    tv.it_value.tv_sec = 5;
    tv.it_value.tv_usec = 0;
    tv.it_interval.tv_sec = 5;
    tv.it_interval.tv_usec = 0;
    setitimer(ITIMER_VIRTUAL, &tv, NULL);
    while (true) {
        cout << "waiting" << endl;
    }
    return 0;
}
In a C++ application running on a Raspberry Pi, I am using a loop in a thread to continuously wait for SocketCAN messages and process them. The messages come in at around 1kHz, as verified using candump.
After waiting for poll() to return and reading the data, I read the timestamp using ioctl() with SIOCGSTAMP. I then compare the timestamp with the previous one, and this is where it gets weird:
Most of the time, the difference is around 1ms, which is expected. But sometimes (probably when the data processing takes longer than usual or gets interrupted by the scheduler) it is much bigger, up to a few hundred milliseconds. In those instances, the messages that should have come in in the meantime (visible in candump) are lost.
How is that possible? If there is a delay somewhere, shouldn't the incoming messages simply be buffered? Why do they get lost?
This is the slightly simplified code:
while(!done)
{
    struct pollfd fd = {.fd = canSocket, .events = POLLIN};
    int pollRet = poll(&fd, 1, 20); // 20ms timeout
    if(pollRet < 0)
    {
        std::cerr << "Error polling canSocket" << errno << std::endl;
        done = true;
        return;
    }
    if(pollRet == 0) // timeout; never happens, as expected
    {
        std::cout << "canSocket poll timeout" << std::endl;
        if(done) break;
        continue;
    }
    struct canfd_frame frame;
    int size = sizeof(frame);
    int readLength = read(canSocket, &frame, size);
    if(readLength < 0) throw std::runtime_error("CAN read failed");
    else if(readLength < size) throw std::runtime_error("CAN read incomplete");
    struct timeval timestamp;
    ioctl(canSocket, SIOCGSTAMP, &timestamp);
    uint64_t timestamp_us = (uint64_t)timestamp.tv_sec * 1e6 + (uint64_t)timestamp.tv_usec;
    static uint64_t timestamp_us_last = 0;
    if((timestamp_us - timestamp_us_last) > 20000)
    {
        std::cout << "timestamp difference large: " << (timestamp_us - timestamp_us_last) << std::endl; // this sometimes happens, why?
    }
    timestamp_us_last = timestamp_us;
    // data processing
}
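One thing worth checking (not shown in the question) is whether the CAN socket's own receive buffer is overflowing while the loop is busy processing: each raw CAN socket has its own queue, so frames dropped from this socket's queue can still show up in candump, which reads from a separate socket. A small diagnostic sketch, assuming canSocket is the already-opened socket (needs <sys/socket.h>; the 1 MiB value is just an arbitrary example):

// Inspect (and optionally enlarge) the receive buffer of the CAN socket.
int rcvbuf = 0;
socklen_t len = sizeof(rcvbuf);
if (getsockopt(canSocket, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
    std::cout << "SO_RCVBUF is " << rcvbuf << " bytes" << std::endl;

int wanted = 1 << 20;  // 1 MiB, arbitrary example value
if (setsockopt(canSocket, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted)) != 0)
    std::cerr << "setsockopt(SO_RCVBUF) failed: " << errno << std::endl;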
In C++ I am running a bash command. The command is "echo | openssl s_client -connect zellowork.io:443"
But if this fails I want it to time out in 4 seconds. The typical "/usr/bin/timeout 4 /usr/bin/sh -c" before the command does not work when run from the C++ code.
So I was trying to make a function that uses popen to send out the command and then waits for up to 4 seconds for the command to complete before it returns. The difficulty I have is that fgets blocks: it waits for 20 seconds (on this command) before it unblocks and fails, and I cannot find any way to check whether there is something to read in a stream before I call fgets. Here is my code.
ExecuteCmdReturn Utils::executeCmdWithTimeout(string cmd, int ms)
{
    ExecuteCmdReturn ecr;
    ecr.success = false;
    ecr.outstr = "";
    FILE *in;
    char buff[4096];
    u64_t startTime = TWTime::ticksSinceStart();
    u64_t stopTime = startTime + ms;
    if(!(in = popen(cmd.c_str(), "r"))){
        return ecr;
    }
    fseek(in,0,SEEK_SET);
    stringstream ss("");
    long int lastPos = 0;
    long int newPos = 0;
    while (TWTime::ticksSinceStart() < stopTime) {
        newPos = ftell(in);
        if (newPos > lastPos) {
            lastPos = newPos;
            if (fgets(buff, sizeof(buff), in) == NULL) {
                break;
            } else {
                ss << buff;
            }
        } else {
            msSleep(10);
        }
    }
    auto rc = pclose(in);
    ecr.success = true;
    ecr.outstr = ss.str();
    return ecr;
}
Use std::async to express that you may get your result asynchronously (a std::future<ExecuteCmdReturn>)
Use std::future<T>::wait_for to timeout waiting for the result.
Here's an example:
First, a surrogate for your executeCmdWithTimeout function that randomly sleeps between 0 and 5 seconds.
int do_something_silly()
{
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<> distribution(0, 5);
    auto sleep_time = std::chrono::seconds(distribution(gen));
    std::cout << "Sleeping for " << sleep_time.count() << " seconds\n";
    std::this_thread::sleep_for(sleep_time);
    return 42;
}
Then, launching the task asynchronously and timing out on it:
int main()
{
    using namespace std::chrono_literals; // needed for the 3s literal below
    auto silly_result = std::async(std::launch::async, [](){ return do_something_silly();});
    auto future_status = silly_result.wait_for(3s);
    switch(future_status)
    {
        case std::future_status::timeout:
            std::cout << "timed out\n";
            break;
        case std::future_status::ready:
            std::cout << "finished. Result is " << silly_result.get() << std::endl;
            break;
        case std::future_status::deferred:
            std::cout << "The function hasn't even started yet.\n";
    }
}
I used a lambda here even though I didn't strictly need to; in your situation it will make things easier, since it looks like you are calling a member function and you'll want to capture [this].
Live Demo
In your case, main would become ExecuteCmdReturn Utils::executeCmdWithTimeout(string cmd, int ms) and do_something_silly would become a private helper, named something like executeCmdWithTimeout_impl.
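As a sketch of that shape (the helper name executeCmdWithTimeout_impl is the hypothetical one suggested above; I've used std::packaged_task on a detached std::thread rather than std::async here, so that abandoning the result on timeout does not block in the future's destructor; needs <future>, <thread>, <chrono>):

ExecuteCmdReturn Utils::executeCmdWithTimeout(string cmd, int ms)
{
    // Run the blocking popen/fgets work on its own (detached) thread.
    std::packaged_task<ExecuteCmdReturn()> task([this, cmd] { return executeCmdWithTimeout_impl(cmd); });
    std::future<ExecuteCmdReturn> result = task.get_future();
    std::thread(std::move(task)).detach();

    if (result.wait_for(std::chrono::milliseconds(ms)) == std::future_status::ready)
        return result.get();

    // Timed out: give up on the result. Note that the child process started by
    // popen keeps running in the background until it exits on its own.
    ExecuteCmdReturn ecr;
    ecr.success = false;
    ecr.outstr = "";
    return ecr;
}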
If you time out waiting for the process to complete, you can optionally kill the process so that you aren't wasting any extra cycles.
If you find yourself creating many short-lived threads like this, consider thread pooling. I've had a lot of success with boost::thread_pool (and if you end up going that direction, consider using Boost.Process for handling your process creation).
So, I am writing a small winsock app and I need to make a multi-client server.
I decided to use a thread for every new connection. The problem is that I don't know how to pass multiple pieces of data to a thread, so I use a struct.
Struct:
typedef struct s_par {
    char lttr;
    SOCKET clientSocket;
} par;
_stdcall:
unsigned __stdcall ClientSession(void *data) {
    par param = data;
    char ch = param.lttr;
    SOCKET clntSocket = param.clientSocket;
    // ..working with client
}
Main:
int main() {
    unsigned seed = time (0);
    srand(seed);
    /*
    ..........
    */
    SOCKET clientSockets[nMaxClients-1];
    char ch = 'a' + rand()%26;
    while(true) {
        cout << "Waiting for clients(MAX " << nMaxClients << "." << endl;
        while ((clientSockets[nClient] = accept(soketas, NULL, NULL))&&(nClient < nMaxClients)) {
            par param;
            // Create a new thread for the accepted client (also pass the accepted client socket).
            if(clientSockets[nClient] == INVALID_SOCKET) {
                cout << "bla bla" << endl;
                exit(1);
            }
            cout << "Succesfull connection." << endl;
            param.clientSocket = clientSockets[nClient];
            param.lttr = ch;
            unsigned threadID;
            HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, &ClientSession, &param, 0, &threadID);
            nClient++;
        }
The problem is that I get data type conversion errors. Could someone suggest an easy way to pass this struct to a thread?
With each round of your while-loop you're doing two ill-advised activities:
Passing the address of an automatic variable that will be destroyed with each cycle of the loop.
Leaking a thread HANDLE returned from _beginthreadex
Neither of those is good. Ideally your thread proc should look something like this:
unsigned __stdcall ClientSession(void *data)
{
    par * param = reinterpret_cast<par*>(data);
    char ch = param->lttr;
    SOCKET clntSocket = param->clientSocket;
    // ..working with client
    delete param;
    return 0U;
}
And the caller side should do something like this:
par *param = new par;
param->clientSocket = clientSockets[nClient];
param->lttr = ch;
...
HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, &ClientSession, param, 0, &threadID);
if (hThread != NULL)
    CloseHandle(hThread);
else
    delete param; // probably report error here as well
That should be enough to get you going. I would also advise taking some time to learn about the C++11 threading model; it makes much of this considerably more elegant (and portable!).
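For instance, here is a rough sketch of the same hand-off with std::thread, which copies the struct into the thread by value, so there is nothing to new/delete and no HANDLE to leak (the signature of ClientSession changes accordingly, and detaching is assumed to be acceptable here):

#include <thread>

void ClientSession(par param)            // takes its own copy of the struct
{
    char ch = param.lttr;
    SOCKET clntSocket = param.clientSocket;
    // ..working with client
}

// inside the accept loop:
par param;
param.clientSocket = clientSockets[nClient];
param.lttr = ch;
std::thread(ClientSession, param).detach();   // param is copied into the thread before this returns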
Best of luck.
OK, so I have a thread that is meant to add to a vector of players, but whenever I call push_back I get a memory access violation. I've taken out all other code where the vector is used outside of this thread.
I can read the size of the vector before this happens, but I just cannot push_back into it.
The vector looks like this:
std::vector<A_Player> &clientsRef;
and the thread it is used in is:
void NetworkManager::TCPAcceptClient(){
    std::cout << "Waiting to accept that client that pinged us" << std::endl;
    fd_set fd;
    timeval tv;
    FD_ZERO(&fd);
    FD_SET(TCPListenSocket, &fd);
    tv.tv_sec = 5; // seconds
    tv.tv_usec = 0; // microseconds
    A_Player thePlayer;
    thePlayer.sock = SOCKET_ERROR;
    if (select(0, &fd, NULL, NULL, &tv) > 0){ // using select to allow a timeout if the client fails to connect
        if (thePlayer.sock == SOCKET_ERROR){
            thePlayer.sock = accept(TCPListenSocket, NULL, NULL);
        }
        thePlayer.playerNumber = clientsRef.size() + 1;
        thePlayer.isJumping = false;
        thePlayer.X = 0;
        thePlayer.Y = 0;
        thePlayer.Z = 0;
        clientsRef.push_back(thePlayer);
        clientHandler = std::thread(&NetworkManager::ClientRecieve, this);
        clientHandler.detach();
    }
    else{
        std::cout << "Client connection timed out!!!!!" << std::endl;
    }
}
Can anyone give me some insight into why this doesn't work?
Kind regards
My psychic debugging skills tell me that your clientsRef reference is referencing a destroyed local vector. Take a look at the code where you set the reference.
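In other words, a (hypothetical, condensed) pattern like the following will blow up, because the member reference outlives the vector it was bound to:

#include <vector>

struct A_Player { int playerNumber; /* ... */ };

class NetworkManager {
public:
    NetworkManager(std::vector<A_Player> &clients) : clientsRef(clients) {}
    void TCPAcceptClient();   // pushes into clientsRef, as in the question
private:
    std::vector<A_Player> &clientsRef;
};

NetworkManager *makeManager()
{
    std::vector<A_Player> players;        // local vector
    return new NetworkManager(players);   // clientsRef is bound to the local...
}                                         // ...which is destroyed here, so clientsRef now dangles

If that is what is happening, size() can still appear to return something plausible while push_back corrupts memory; the fix is to make sure the vector outlives every NetworkManager that references it, for example by owning it by value inside the class or keeping it alive at an outer scope.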
I found that a reference wouldn't work for the problem I had, so I converted it to use a pointer instead, which worked fine.
I have an application that periodically (on a timer) checks some data storage.
Like this:
#include <iostream>
#include <cerrno>
#include <cstring>
#include <cstdlib>
#include <sys/fcntl.h>
#include <unistd.h>
// EPOLL & TIMER
#include <sys/epoll.h>
#include <sys/timerfd.h>

int main(int argc, char **argv)
{
    /* epoll instance */
    int efd = epoll_create1(EPOLL_CLOEXEC);
    if (efd < 0)
    {
        std::cerr << "epoll_create error: " << strerror(errno) << std::endl;
        return EXIT_FAILURE;
    }
    struct epoll_event ev;
    struct epoll_event events[128];

    /* timer instance */
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
    struct timespec ts;
    // first expiration 3 seconds after program start
    ts.tv_sec = 3;
    ts.tv_nsec = 0;
    struct itimerspec new_timeout;
    struct itimerspec old_timeout;
    bzero(&new_timeout, sizeof(new_timeout));
    bzero(&old_timeout, sizeof(old_timeout));
    // value
    new_timeout.it_value = ts;
    // no interval;
    // timer will be rearmed in the epoll_wait event handling below
    new_timeout.it_interval.tv_sec =
        new_timeout.it_interval.tv_nsec = 0;
    // Add the timer descriptor to epoll.
    if (tfd != -1)
    {
        ev.events = EPOLLIN | EPOLLERR /*| EPOLLET*/;
        ev.data.ptr = &tfd;
        epoll_ctl(efd, EPOLL_CTL_ADD, tfd, &ev);
    }
    int flags = 0;
    if (timerfd_settime(tfd, flags, &new_timeout, &old_timeout) < 0)
    {
        std::cerr << "timerfd_settime error: " << strerror(errno) << std::endl;
    }
    int numEvents = 0;
    int timeout = 0;
    bool checkTimer = false;
    while (1)
    {
        checkTimer = false;
        numEvents = epoll_wait(efd, events, 128, timeout);
        if (numEvents > 0)
        {
            for (int i = 0; i < numEvents; ++i)
            {
                if (events[i].data.ptr == &tfd)
                {
                    std::cout << "timeout" << std::endl;
                    checkTimer = true;
                }
            }
        }
        else if(numEvents == 0)
        {
            continue;
        }
        else
        {
            std::cerr << "An error occurred: " << strerror(errno) << std::endl;
        }
        if (checkTimer)
        {
            /* Check data storage */
            uint64_t value;
            ssize_t readBytes;
            //while ( (readBytes = read(tfd, &value, 8)) > 0)
            //{
            //    std::cout << "\tread: '" << value << "'" << std::endl;
            //}
            itimerspec new_timeout;
            itimerspec old_timeout;
            new_timeout.it_value.tv_sec = rand() % 3 + 1;
            new_timeout.it_value.tv_nsec = 0;
            new_timeout.it_interval.tv_sec =
                new_timeout.it_interval.tv_nsec = 0;
            timerfd_settime(tfd, flags, &new_timeout, &old_timeout);
        }
    }
    return EXIT_SUCCESS;
}
This is a simplified description of my app.
After each timeout, the timer needs to be rearmed with a value that differs from one timeout to the next.
Questions are:
Is it necessary to add timerfd to epoll (epoll_ctl) with EPOLLET flag?
Is it necessary to read timerfd after each timeout?
Is it necessary to epoll_wait infinitely (timeout = -1)?
You can do this in one of two modes, edge triggered or level triggered. If you choose the edge-triggered route then you must pass EPOLLET and do not need to read the timerfd after each wakeup. The fact that you receive an event from epoll means one or more timeouts have fired. Optionally you may read the timerfd and it will return the number of timeouts that have fired since you last read it.
If you choose the level-triggered route then you don't need to pass EPOLLET, but you must read the timerfd after each wakeup. If you do not, you will immediately be woken up again until you consume the timeout.
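"Consuming the timeout" here just means reading the 8-byte expiration counter from the timerfd, essentially the read that is commented out in the question; a minimal sketch:

uint64_t expirations = 0;
ssize_t n = read(tfd, &expirations, sizeof(expirations));  // 8 bytes: number of expirations since the last read
if (n == sizeof(expirations))
    std::cout << "timer fired " << expirations << " time(s)" << std::endl;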
You should either pass -1 to epoll as the timeout or some positive value. If you pass 0, like you do in the example, then you will never go to sleep; you'll just spin waiting for the timeout to fire. That's almost certainly undesirable behaviour.
Answers to the questions:
Is it necessary to add timerfd to epoll (epoll_ctl) with EPOLLET flag?
No. Adding EPOLLET (edge trigger) does change the behavior of receiving events. Without EPOLLET, you'll continuously receive the event from epoll_wait related to the timerfd until you've read() from the timerfd. With EPOLLET, you'll NOT receive additional events beyond the first one, even if new expirations occur, until you've read() from the timerfd and a new expiration occurs.
Is it necessary to read timerfd after each timeout?
Yes, in order to keep receiving events (only) when a new expiration occurs (see above). No, if a periodic timer is not used (single expiration only) and you close the timerfd without reading it.
Is it necessary to epoll_wait infinitely (timeout = -1)?
No. You can use epoll_wait's timeout instead of a timerfd. I personally think it is easier to use a timerfd than to keep calculating the next timeout for epoll, especially if you expect multiple timeout intervals; keeping track of what your next task is when a timeout occurs is much easier when it is tied to the specific event that woke you up.