Buffer underruns on X310 when transmitting and receiving on the same channel - C++

I'm running an X310 across dual 10 Gigabit Ethernet, outfitted with twin Basic TX/RX daughterboards, on UHD version 3.11.0. Ideally, I would like two simultaneous transmit and receive streams, using both channels to transmit and receive; I don't want to have to use two X310s to get two receive and transmit streams.
When I transmit and receive at the same time on the same channel, I get a lot of U's printed to the console signaling underflows, no matter what the rate. However, if I transmit and receive on separate channels (tx_streamer with stream_args on channel 1 and rx_streamer with stream_args on channel 0), it works just fine.
I've attached the source code of a complete but simple program that will hopefully demonstrate my problem. In this program, two threads are created: a transmit thread and a receive thread. The receive thread is constantly receiving data to a buffer and overwriting that buffer with new data. The transmit thread is constantly transmitting 0's from a prefilled buffer.
If anyone has an X310 running across 10 Gb Ethernet, could you compile and run my program to test whether this problem occurs for you as well?
Here's what we have already tested:
I'm running on a server system with two 12-core Intel Xeon processors (https://ark.intel.com/products/91767/Intel-Xeon-Processor-E5-2650-v4-30M-Cache-2_20-GHz). My network card is the recommended X520-DA2. Someone previously suggested NUMA could be the issue, but I don't think that's the case, since the program works when we switch to transmitting and receiving on separate channels.
Since the program works just fine when transmitting and receiving on separate channels, I'm led to believe this is not a CPU power issue.
I've tested transmit-only and receive-only operation. We can transmit at 200 MS/s across both channels and we can receive at 200 MS/s across both channels, but we cannot transmit and receive on the same channel. This suggests our network card is working properly and we can handle the high rate.
I've tried my program on UHD 3.10.2 and the problem still occurs.
I've tried setting the tx_metadata time_spec to wait 2 seconds before transmitting. The problem still occurs.
I've tried running the example program txrx_loopback_from_file and that works for simultaneous receive and transmit, but I have no idea why.
From the last point, I'm led to believe that I am somehow calling the UHD API incorrectly, but I have no idea where the error is. Any help would be greatly appreciated.
Thanks,
Jason
#include <iostream>
#include <iomanip>
#include <stdlib.h>
#include <vector>
#include <csignal>
#include <complex>
#include <thread>
#include <boost/format.hpp>
#include <uhd/utils/thread_priority.hpp>
#include <uhd/utils/safe_main.hpp>
#include <uhd/usrp/multi_usrp.hpp>
#include <uhd/types/tune_request.hpp>

typedef std::complex<short> Complex;
// Constants and signal variables
static bool stop_signal_called = false;
const int NUM_CHANNELS = 1;
const int BUFF_SIZE = 64000;

// function prototypes
void recvTask(Complex *buff, uhd::rx_streamer::sptr rx_stream);
void txTask(Complex *buff, uhd::tx_streamer::sptr tx_stream, uhd::tx_metadata_t md);

void sig_int_handler(int){
    std::cout << "Interrupt Signal Received" << std::endl;
    stop_signal_called = true;
}
int UHD_SAFE_MAIN(int argc, char *argv[]) {
    uhd::set_thread_priority_safe();
    //type=x300,addr=192.168.30.2,second_addr=192.168.40.2
    std::cout << std::endl;
    std::cout << boost::format("Creating the usrp device") << std::endl;
    uhd::usrp::multi_usrp::sptr usrp = uhd::usrp::multi_usrp::make(std::string("type=x300,addr=192.168.30.2"));
    std::cout << std::endl;

    //set stream args
    uhd::stream_args_t stream_args("sc16");
    double samp_rate_tx = 10e6;
    double samp_rate_rx = 10e6;
    uhd::tune_request_t tune_request(0);

    //lock mboard clocks
    usrp->set_clock_source(std::string("internal"));

    //set rx parameters
    usrp->set_rx_rate(samp_rate_rx);
    usrp->set_rx_freq(tune_request);
    usrp->set_rx_gain(0);

    //set tx parameters
    usrp->set_tx_rate(samp_rate_tx);
    usrp->set_tx_freq(tune_request);
    usrp->set_tx_gain(0);

    std::signal(SIGINT, &sig_int_handler);
    std::cout << "Press Ctrl + C to stop streaming..." << std::endl;

    //create buffers, 2 per channel (1 for tx, 1 for rx)
    //transmitting complex shorts -> typedef'd as Complex
    Complex *rx_buffs[NUM_CHANNELS];
    Complex *tx_buffs[NUM_CHANNELS];
    for (int i = 0; i < NUM_CHANNELS; i++){
        rx_buffs[i] = new Complex[BUFF_SIZE];
        tx_buffs[i] = new Complex[BUFF_SIZE];
        //only transmitting 0's
        std::fill(tx_buffs[i], tx_buffs[i] + BUFF_SIZE, 0);
    }
//////////////////////////////////////////////////////////////////////////////
////////////////START RECEIVE AND TRANSMIT THREADS////////////////////////////
//////////////////////////////////////////////////////////////////////////////
    printf("setting up threading\n");
    //reset usrp time
    usrp->set_time_now(uhd::time_spec_t(0.0));

    //set up RX streams and threads
    std::thread rx_threads[NUM_CHANNELS];
    uhd::rx_streamer::sptr rx_streams[NUM_CHANNELS];
    for (int i = 0; i < NUM_CHANNELS; i++){
        stream_args.channels = std::vector<size_t>(1, i);
        rx_streams[i] = usrp->get_rx_stream(stream_args);
        //setup streaming
        auto stream_mode = uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS;
        uhd::stream_cmd_t stream_cmd(stream_mode);
        stream_cmd.num_samps = 0;
        stream_cmd.stream_now = true;
        stream_cmd.time_spec = uhd::time_spec_t();
        rx_streams[i]->issue_stream_cmd(stream_cmd);
        //start rx thread
        std::cout << "Starting rx thread " << i << std::endl;
        rx_threads[i] = std::thread(recvTask, rx_buffs[i], rx_streams[i]);
    }

    //set up TX streams and threads
    std::thread tx_threads[NUM_CHANNELS];
    uhd::tx_streamer::sptr tx_streams[NUM_CHANNELS];

    //set up TX metadata
    uhd::tx_metadata_t md;
    md.start_of_burst = true;
    md.end_of_burst = false;
    md.has_time_spec = true;
    //start transmitting 2 seconds later
    md.time_spec = uhd::time_spec_t(2.0);
    for (int i = 0; i < NUM_CHANNELS; i++){
        //does not work when we transmit and receive on the same channel;
        //if we change to stream_args.channels = std::vector<size_t>(1, 1), this works for 1 channel.
        stream_args.channels = std::vector<size_t>(1, i);
        tx_streams[i] = usrp->get_tx_stream(stream_args);
        //start the thread
        std::cout << "Starting tx thread " << i << std::endl;
        tx_threads[i] = std::thread(txTask, tx_buffs[i], tx_streams[i], md);
    }

    printf("Waiting to join threads\n");
    for (int i = 0; i < NUM_CHANNELS; i++){
        //join threads
        tx_threads[i].join();
        rx_threads[i].join();
    }
    return EXIT_SUCCESS;
}
//////////////////////////////////////////////////////////////////////////////
////////////////RECEIVE AND TRANSMIT THREAD FUNCTIONS/////////////////////////
//////////////////////////////////////////////////////////////////////////////
void recvTask(Complex *buff, uhd::rx_streamer::sptr rx_stream){
    uhd::rx_metadata_t md;
    unsigned overflows = 0;
    //receive loop
    while(!stop_signal_called){
        size_t amount_received = rx_stream->recv(buff, BUFF_SIZE, md, 3.0);
        if (amount_received != BUFF_SIZE){ printf("receive not equal\n"); }
        //handle the error codes
        switch(md.error_code){
        case uhd::rx_metadata_t::ERROR_CODE_NONE:
            break;
        case uhd::rx_metadata_t::ERROR_CODE_TIMEOUT:
            std::cerr << "T";
            continue;
        case uhd::rx_metadata_t::ERROR_CODE_OVERFLOW:
            overflows++;
            std::cerr << "Got an Overflow Indication" << std::endl;
            continue;
        default:
            std::cout << boost::format("Got error code 0x%x, exiting loop...") % md.error_code << std::endl;
            goto done_loop;
        }
    }
done_loop:
    //tell receive to stop streaming
    auto stream_cmd = uhd::stream_cmd_t(uhd::stream_cmd_t::STREAM_MODE_STOP_CONTINUOUS);
    rx_stream->issue_stream_cmd(stream_cmd);
    //finished
    std::cout << "Overflows=" << overflows << std::endl << std::endl;
}
void txTask(Complex *buff, uhd::tx_streamer::sptr tx_stream, uhd::tx_metadata_t md){
    //transmit loop
    while(!stop_signal_called){
        size_t samples_sent = tx_stream->send(buff, BUFF_SIZE, md);
        md.start_of_burst = false;
        md.has_time_spec = false;
    }
    //send a mini EOB packet
    md.end_of_burst = true;
    tx_stream->send("", 0, md);
    printf("End transmit\n");
}

Related

SocketCAN with C++ on Raspberry Pi: messages lost when read is delayed

In a C++ application running on a Raspberry Pi, I am using a loop in a thread to continuously wait for SocketCAN messages and process them. The messages come in at around 1kHz, as verified using candump.
After waiting for poll() to return and reading the data, I read the timestamp using ioctl() with SIOCGSTAMP. I then compare the timestamp with the previous one, and this is where it gets weird:
Most of the time, the difference is around 1ms, which is expected. But sometimes (probably when the data processing takes longer than usual or gets interrupted by the scheduler) it is much bigger, up to a few hundred milliseconds. In those instances, the messages that should have come in in the meantime (visible in candump) are lost.
How is that possible? If there is a delay somewhere, shouldn't the incoming messages get buffered? Why do they get lost?
This is the slightly simplified code:
while(!done)
{
    struct pollfd fd = {.fd = canSocket, .events = POLLIN};
    int pollRet = poll(&fd, 1, 20); // 20ms timeout
    if(pollRet < 0)
    {
        std::cerr << "Error polling canSocket: " << errno << std::endl;
        done = true;
        return;
    }
    if(pollRet == 0) // timeout, never happens as expected
    {
        std::cout << "canSocket poll timeout" << std::endl;
        if(done) break;
        continue;
    }
    struct canfd_frame frame;
    int size = sizeof(frame);
    int readLength = read(canSocket, &frame, size);
    if(readLength < 0) throw std::runtime_error("CAN read failed");
    else if(readLength < size) throw std::runtime_error("CAN read incomplete");
    struct timeval timestamp;
    ioctl(canSocket, SIOCGSTAMP, &timestamp);
    uint64_t timestamp_us = (uint64_t)timestamp.tv_sec * 1000000ULL + (uint64_t)timestamp.tv_usec;
    static uint64_t timestamp_us_last = 0;
    if((timestamp_us - timestamp_us_last) > 20000)
    {
        std::cout << "timestamp difference large: " << (timestamp_us - timestamp_us_last) << std::endl; // this sometimes happens, why?
    }
    timestamp_us_last = timestamp_us;
    // data processing
}

Qt C++: issue receiving data from Ohaus Aviator 7000 weighing scale

I am trying to establish a serial port connection to my Aviator 7000 weighing scale using Qt C++. The expected result would be successful communication through the use of a byte command.
Sadly, I don't receive any bytes back from the scale. Below you can find what I have tried so far:
const int Max_attempts = 5;
const int Max_sleep = 125;
int attempts = 0;
while (true)
{
    int enq {5};
    QByteArray bytes;
    bytes.setNum(enq);
    m_serial->write(bytes);
    m_serial->waitForReadyRead(Max_sleep);
    if (m_serial->bytesAvailable() != 0)
    {
        qDebug() << m_serial->bytesAvailable();
        qDebug() << "connected" << m_serial->readAll();
        break;
    }
    attempts += 1;
    if (attempts == Max_attempts)
    {
        qDebug() << "no connection established";
        break;
    }
}
Kind regards,
Tibo
According to this manual you are supposed to send a byte 0x05 but you are sending 0x35 (the character "5").
Use
bytes.append('\x05');

Pthreads: Worker Thread Waits Forever for Signal From Controller Thread

I'm working on a thread scheduling assignment, which consists of a controller thread indicating when a worker thread ("bus") is allowed to leave its station. There would normally be many bus threads fed through a text file that could be at one of several stations, but this problem can be seen by creating a single bus with pre-defined values (id, direction, loading and crossing time) and a single station, which I've done here to minimize the code.
The general workflow of each bus thread is as follows:
1. Create a structure which contains a cond_var, lock, and flag (initialized to 0). This will be pushed to the first queue.
2. Create a Bus class object with id, direction, etc. This will be pushed to the second queue. Both queues represent the same station.
3. Wait for the signal from the controller to begin loading. Once received, sleep for the bus' loading time.
4. On wakeup, push the created structure and the bus object to their queues.
5. Wait for the crossing signal from the controller...
This last part is the issue; although I've verified that the controller IS continuously sending the signal, the thread will not stop waiting. It just hangs forever and never gets to cross. I'm not sure if I'm sending the wrong signal or if the problem is something else, but I'm quite stuck and would appreciate any available help. The first 2 code windows are for the controller and bus functions respectively, which are the heart of the issue and what I'm hoping someone is willing to look at. The third is the header for my structures/classes, and final is the entire compilable main function if necessary.
busController()
void *busController (void* n){
    pthread_mutex_lock(&load_lock);
    int remainingBuss = totalBusCount;
    readyToLoad = true;
    pthread_cond_broadcast(&loadReady_cond);
    pthread_mutex_unlock(&load_lock);
    while (true){
        //if there's a single bus in all queues (the first of several scenarios to come)
        if(loadedBusCount == 1){
            pthread_mutex_lock(&SouthCondition.front().lock_var);
            SouthCondition.front().flag = 1;
            pthread_cond_signal(&SouthCondition.front().cond_var);
            //NOTE: printing &SouthCondition.front().cond_var shows different address than bus thread is waiting for
            pthread_mutex_unlock(&SouthCondition.front().lock_var);
        }
    }
}
busFunction()
void *busFunction (void* t){
    CondStruct condition; //will be pushed to the first queue; contains cond_var, lock, and condition flag for the bus
    pthread_mutex_init(&condition.lock_var, NULL);
    pthread_cond_init(&condition.cond_var, NULL);
    condition.flag = 0;
    struct BusStructure *data = (struct BusStructure *) t;
    //create Bus object which takes its data from the bus structure for easy access inside queue
    Bus bus(data->dir, data->id, data->loadTime, data->crossTime);
    //wait for signal to begin loading from controller thread
    pthread_mutex_lock(&load_lock);
    while(readyToLoad == false){
        pthread_cond_wait(&loadReady_cond, &load_lock);
    }
    cout << "BUS HAS BEEN UNLOCKED AND CAN PROCEED TO START LOADING" << endl;
    pthread_mutex_unlock(&load_lock);
    usleep((data->loadTime) * 1000000); //sleep to simulate load time
    //bus has now loaded, add to relevant queue
    pthread_mutex_lock(&busLoading_lock);
    cout << "Pushing bus " << bus.getID() << " to queue" << endl;
    pthread_mutex_lock(&busToQueue_lock);
    SouthCondition.push(condition);
    SouthBus.push(bus);
    pthread_mutex_unlock(&busToQueue_lock);
    loadedBusCount++;
    pthread_mutex_unlock(&busLoading_lock);
    //wait for crossing signal from controller (THIS IS THE PROBLEM AREA, BUS WAITS FOREVER)
    pthread_mutex_lock(&condition.lock_var);
    while(condition.flag == 0){
        cout << "Thread " << bus.getID() << " is now waiting for " << &condition.cond_var << endl;
        pthread_cond_wait(&condition.cond_var, &condition.lock_var);
    }
    cout << "TIME TO CROSS, WHICH ISN'T IMPLEMENTED YET. THIS PRINTOUT IS THE SUCCESS." << endl;
    pthread_mutex_unlock(&condition.lock_var);
}
StructClassMCVE.h
#ifndef StructClassMCVE_H
#define StructClassMCVE_H

struct BusStructure{
public:
    char dir;
    int id;
    double loadTime;
    double crossTime;
};

class Bus{
public:
    char dir;
    int id;
    double loadTime;
    double crossTime;
    Bus(char d, int i, double l, double c){
        dir = d;
        id = i;
        loadTime = l;
        crossTime = c;
    }
    ~Bus(){}
    char getDirection(){return dir;}
    int getID(){return id;}
    double getLoadingTime(){return loadTime;}
    double getCrossingTime(){return crossTime;}
};

struct CondStruct{
    pthread_cond_t cond_var;
    pthread_mutex_t lock_var;
    int flag;
};

#endif
mainMCVE.cpp:
#include <iostream>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <cstring>
#include <sstream>
#include <cstdlib>
#include <queue>
#include <math.h>
#include <time.h>
#include <unistd.h>
#include <pthread.h>
#include <ctype.h>
#include <vector>
#include <sys/wait.h>
#include <fstream>
#include "StructClassMCVE.h"
using namespace std;

int totalBusCount;
int loadedBusCount = 0;
pthread_cond_t loadReady_cond;
bool readyToLoad = false;
pthread_mutex_t load_lock;
pthread_mutex_t busLoading_lock;
pthread_mutex_t busToQueue_lock;
queue<CondStruct> SouthCondition;
queue<Bus> SouthBus;
void *busFunction (void* t){
    CondStruct condition; //will be pushed to the first queue; contains cond_var, lock, and condition flag for the bus
    pthread_mutex_init(&condition.lock_var, NULL);
    pthread_cond_init(&condition.cond_var, NULL);
    condition.flag = 0;
    struct BusStructure *data = (struct BusStructure *) t;
    //create Bus object which takes its data from the bus structure for easy access inside queue
    Bus bus(data->dir, data->id, data->loadTime, data->crossTime);
    //wait for signal to begin loading from controller thread
    pthread_mutex_lock(&load_lock);
    while(readyToLoad == false){
        pthread_cond_wait(&loadReady_cond, &load_lock);
    }
    cout << "BUS HAS BEEN UNLOCKED AND CAN PROCEED TO START LOADING" << endl;
    pthread_mutex_unlock(&load_lock);
    usleep((data->loadTime) * 1000000); //sleep to simulate load time
    //bus has now loaded, add to relevant queue
    pthread_mutex_lock(&busLoading_lock);
    cout << "Pushing bus " << bus.getID() << " to queue" << endl;
    pthread_mutex_lock(&busToQueue_lock);
    SouthCondition.push(condition);
    SouthBus.push(bus);
    pthread_mutex_unlock(&busToQueue_lock);
    loadedBusCount++;
    pthread_mutex_unlock(&busLoading_lock);
    //wait for crossing signal from controller (THIS IS THE PROBLEM AREA, BUS WAITS FOREVER)
    pthread_mutex_lock(&condition.lock_var);
    while(condition.flag == 0){
        cout << "Thread " << bus.getID() << " is now waiting for " << &condition.cond_var << endl;
        pthread_cond_wait(&condition.cond_var, &condition.lock_var);
    }
    cout << "TIME TO CROSS, WHICH ISN'T IMPLEMENTED YET. THIS PRINTOUT IS THE SUCCESS." << endl;
    pthread_mutex_unlock(&condition.lock_var);
}
void *busController (void* n){
    pthread_mutex_lock(&load_lock);
    int remainingBuss = totalBusCount;
    readyToLoad = true;
    pthread_cond_broadcast(&loadReady_cond);
    pthread_mutex_unlock(&load_lock);
    while (true){
        //if there's a single bus in all queues (the first of several scenarios to come)
        if(loadedBusCount == 1){
            pthread_mutex_lock(&SouthCondition.front().lock_var);
            SouthCondition.front().flag = 1;
            pthread_cond_signal(&SouthCondition.front().cond_var);
            //NOTE: printing &SouthCondition.front().cond_var shows different address than bus thread is waiting for
            pthread_mutex_unlock(&SouthCondition.front().lock_var);
        }
    }
}
int main(int argc, char **argv){
    totalBusCount = 1;
    pthread_t thread[2];
    //create bus thread
    struct BusStructure *busEntry = new struct BusStructure;
    busEntry->id = 0;
    busEntry->dir = 'S';
    busEntry->loadTime = 0.6;
    busEntry->crossTime = 1.2;
    pthread_create(&thread[0], NULL, busFunction, (void *) busEntry);
    //create controller thread
    pthread_create(&thread[1], NULL, busController, NULL);
    for(int i = 0; i < 2; i++){
        pthread_join(thread[i], NULL);
    }
}

How does computer-driven ignition timing on gas engines work?

I have been dabbling with writing a C++ program that would control spark timing on a gas engine and have been running into some trouble. My code is very simple. It starts by creating a second thread that emulates the output signal of a Hall effect sensor triggered once per engine revolution. My main code processes the fake sensor output, recalculates engine RPM, and then determines the time necessary to wait for the crankshaft to rotate to the correct angle to send spark to the engine. The problem I'm running into is that I am using a sleep function with millisecond resolution, and at higher RPMs I am losing a significant amount of data.
My question is: how are real automotive ECUs programmed to control spark accurately at high RPMs?
My code is as follows:
#include <iostream>
#include <Windows.h>
#include <process.h>
#include <fstream>
#include "GetTimeMs64.cpp"
using namespace std;
void HEEmulator(void * );
int HE_Sensor1;
int *sensor;
HANDLE handles[1];
bool run;
bool *areRun;
int main( void )
{
    int sentRpm = 4000;
    areRun = &run;
    sensor = &HE_Sensor1;
    *sensor = 1;
    run = TRUE;
    int rpm, advance, dwell, oHE_Sensor1, spark;
    oHE_Sensor1 = 1;
    advance = 20;
    uint64 rtime1, rtime2, intTime, curTime, sparkon, sparkoff;
    handles[0] = (HANDLE)_beginthread(HEEmulator, 0, &sentRpm);
    ofstream myfile;
    myfile.open("output.out");
    intTime = GetTimeMs64();
    rtime1 = intTime;
    rpm = 0;
    spark = 0;
    dwell = 10000;
    sparkoff = 0;
    while(run == TRUE)
    {
        rtime2 = GetTimeMs64();
        curTime = rtime2 - intTime;
        myfile << "Current Time = " << curTime << " ";
        myfile << "HE_Sensor1 = " << HE_Sensor1 << " ";
        myfile << "RPM = " << rpm << " ";
        myfile << "Spark = " << spark << " ";
        if(oHE_Sensor1 != HE_Sensor1)
        {
            if(HE_Sensor1 > 0)
            {
                rpm = (1/(double)(rtime2-rtime1))*60000;
                dwell = (1-((double)advance/360))*(rtime2-rtime1);
                rtime1 = rtime2;
            }
            oHE_Sensor1 = HE_Sensor1;
        }
        if(rtime2 >= (rtime1 + dwell))
        {
            spark = 1;
            sparkoff = rtime2 + 2;
        }
        if(rtime2 >= sparkoff)
        {
            spark = 0;
        }
        myfile << "\n";
        Sleep(1);
    }
    myfile.close();
    return 0;
}
void HEEmulator(void *arg)
{
    int *rpmAd = (int*)arg;
    int rpm = *rpmAd;
    int milliseconds = (1/(double)rpm)*60000;
    for(int i = 0; i < 10; i++)
    {
        *sensor = 1;
        Sleep(milliseconds * 0.2);
        *sensor = 0;
        Sleep(milliseconds * 0.8);
    }
    *areRun = FALSE;
}
A desktop PC is not a real-time processing system.
When you use Sleep to pause a thread, you don't have any guarantees that it will wake up exactly after the specified amount of time has elapsed. The thread will be marked as ready to resume execution, but it may still have to wait for the OS to actually schedule it. From the documentation of the Sleep function:
Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses.
Also, the resolution of the system clock ticks is limited.
To more accurately simulate an ECU and the attached sensors, you should not use threads. Your simulation should not even depend on the passage of real time. Instead, use a single loop that updates the state of your simulation (both ECU and sensors) with each tick. This also means that your simulation should include the clock of the ECU.

timerfd and read

I have an application that periodically (on a timer) checks some data storage.
Like this:
#include <iostream>
#include <cerrno>
#include <cstring>
#include <cstdlib>
#include <sys/fcntl.h>
#include <unistd.h>
// EPOLL & TIMER
#include <sys/epoll.h>
#include <sys/timerfd.h>
int main(int argc, char **argv)
{
    /* epoll instance */
    int efd = epoll_create1(EPOLL_CLOEXEC);
    if (efd < 0)
    {
        std::cerr << "epoll_create error: " << strerror(errno) << std::endl;
        return EXIT_FAILURE;
    }
    struct epoll_event ev;
    struct epoll_event events[128];
    /* timer instance */
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
    struct timespec ts;
    // first expiration 3 seconds after program start
    ts.tv_sec = 3;
    ts.tv_nsec = 0;
    struct itimerspec new_timeout;
    struct itimerspec old_timeout;
    bzero(&new_timeout, sizeof(new_timeout));
    bzero(&old_timeout, sizeof(old_timeout));
    // value
    new_timeout.it_value = ts;
    // no interval;
    // timer will be re-armed in the epoll_wait event handler
    new_timeout.it_interval.tv_sec =
        new_timeout.it_interval.tv_nsec = 0;
    // add the timer descriptor to epoll
    if (tfd != -1)
    {
        ev.events = EPOLLIN | EPOLLERR /*| EPOLLET*/;
        ev.data.ptr = &tfd;
        epoll_ctl(efd, EPOLL_CTL_ADD, tfd, &ev);
    }
    int flags = 0;
    if (timerfd_settime(tfd, flags, &new_timeout, &old_timeout) < 0)
    {
        std::cerr << "timerfd_settime error: " << strerror(errno) << std::endl;
    }
    int numEvents = 0;
    int timeout = 0;
    bool checkTimer = false;
    while (1)
    {
        checkTimer = false;
        numEvents = epoll_wait(efd, events, 128, timeout);
        if (numEvents > 0)
        {
            for (int i = 0; i < numEvents; ++i)
            {
                if (events[i].data.ptr == &tfd)
                {
                    std::cout << "timeout" << std::endl;
                    checkTimer = true;
                }
            }
        }
        else if (numEvents == 0)
        {
            continue;
        }
        else
        {
            std::cerr << "An error occurred: " << strerror(errno) << std::endl;
        }
        if (checkTimer)
        {
            /* check data storage */
            uint64_t value;
            ssize_t readBytes;
            //while ( (readBytes = read(tfd, &value, 8)) > 0)
            //{
            //    std::cout << "\tread: '" << value << "'" << std::endl;
            //}
            itimerspec new_timeout;
            itimerspec old_timeout;
            new_timeout.it_value.tv_sec = rand() % 3 + 1;
            new_timeout.it_value.tv_nsec = 0;
            new_timeout.it_interval.tv_sec =
                new_timeout.it_interval.tv_nsec = 0;
            timerfd_settime(tfd, flags, &new_timeout, &old_timeout);
        }
    }
    return EXIT_SUCCESS;
}
This is a simplified description of my app.
After each timeout, the timer needs to be rearmed with a value that is different for each timeout.
Questions are:
Is it necessary to add timerfd to epoll (epoll_ctl) with EPOLLET flag?
Is it necessary to read timerfd after each timeout?
Is it necessary to epoll_wait infinitely (timeout = -1)?
You can do this in one of two modes, edge triggered or level triggered. If you choose the edge-triggered route then you must pass EPOLLET and do not need to read the timerfd after each wakeup. The fact that you receive an event from epoll means one or more timeouts have fired. Optionally, you may read the timerfd and it will return the number of timeouts that have fired since you last read it.
If you choose the level-triggered route then you don't need to pass EPOLLET, but you must read the timerfd after each wakeup. If you do not, you will immediately be woken up again until you consume the timeout.
You should pass either -1 or some positive value to epoll as the timeout. If you pass 0, like you do in the example, then you will never go to sleep; you'll just spin waiting for the timeout to fire. That's almost certainly undesirable behaviour.
Answers to the questions:
Is it necessary to add timerfd to epoll (epoll_ctl) with EPOLLET flag?
No. Adding EPOLLET (edge trigger) changes the behavior of receiving events. Without EPOLLET, you'll continuously receive the event from epoll_wait related to the timerfd until you've read() from the timerfd. With EPOLLET, you will NOT receive additional events beyond the first one, even if new expirations occur, until you've read() from the timerfd and a new expiration occurs.
Is it necessary to read timerfd after each timeout?
Yes, in order to continue receiving events when new expirations occur (see above). No if a periodic timer is not used (single expiration only) and you close the timerfd without reading it.
Is it necessary to epoll_wait infinitely (timeout = -1)?
No. You can use epoll_wait's timeout instead of a timerfd. I personally think it is easier to use a timerfd than to keep calculating the next timeout for epoll, especially if you expect multiple timeout intervals; keeping tabs on your next task when a timeout occurs is much easier when it is tied to the specific event that woke you up.