Strings read from FIFO teleport to other lines - c++

My code reads lines from a FIFO, constructs a functor for each, and queues it with a Boost thread pool. I was having a problem with occasional work items failing, but when I added logging the problem disappeared.
I've narrowed it down to where a std::ofstream is opened, but I can't figure out why that matters. By moving that line I was able to make the problem reoccur. Sometimes part of a line goes missing and reappears somewhere else (either prepended to or overwriting another line). This doesn't appear to be random: given the same input the error always occurs in the same place, but if the input lines are reordered the error occurs in a different location.
As an example
# This line should read
# /dev/shm/test/bluetooth-active-symbolic.symbolic.png.csv.plot.png
# But it is missing the leading "/dev/"
shm/test/bluetooth-active-symbolic.symbolic.png.csv.plot.png
# This line should read
# POOLCMD_CLOSE
# But the missing text reappeared here
/dev/POOLCMD_CLOSE
Code
Where we fork to the background
switch (fork()) {
case -1:
    std::cerr << "Error forking\n";
    return 1;
case 0:
    close(0);
    close(1);
    close(2);
    dup2(stdout_file, fileno(stdout));
    dup2(stdout_file, fileno(stderr));
    setsid();
    went_background = true;
    // If this ofstream line is moved elsewhere, things break
    log_file = std::ofstream("poolcmd-log.txt");
    break;
default:
    exit(0);
}
Where we read from the fifo
read_threads.push_back(
    decltype(read_threads)::value_type(new std::thread([&]() {
        char* line = 0;
        size_t n = 0;
        while (true) {
            auto read = getline(&line, &n, fifo);
            locked_print(log_file, "read: ");
            locked_print(log_file, line);
            if (read >= 0) {
                std::string tmp(line, read - 1);
                if (tmp == "POOLCMD_CLOSE") {
                    locked_print(log_file, "Understood POOLCMD_CLOSE\n");
                    break;
                }
                tp.add(Executer(cmd, {tmp}, quiet));
            }
            else {
                locked_print(log_file, "read < 0\n");
                break;
            }
        }
        free(line);
        fclose(fifo);
        remove(fifo_name.c_str());
    })));

Related

Do input redirection and capture command output (Custom shell-like program)

I'm writing a custom shell where I'm trying to add support for input/output redirection and pipes, just like a standard shell. I'm stuck at the point where I cannot do input redirection, but output redirection works perfectly. My implementation looks something like this (only the relevant part); you can assume that (string) input is non-empty:
void execute() {
    ... // stuff before execution and initialization of variables
    int *fds;
    std::string content;
    std::string input = readFromAFile(in_file); // for input redirection
    for (int i = 0; i < commands.size(); i++) {
        fds = subprocess(commands[i]);
        dprintf(fds[1], "%s", input.data()); // write to write-end of pipe
        close(fds[1]);
        content += readFromFD(fds[0]); // read from read-end of pipe
        close(fds[0]);
    }
    ... // stuff after execution
}
int *subprocess(std::string &cmd) {
    std::string s;
    int *fds = new int[2];
    pipe(fds);
    pid_t pid = fork();
    if (pid == -1) {
        std::cerr << "Fork failed.";
    }
    if (pid == 0) {
        dup2(fds[1], STDOUT_FILENO);
        dup2(fds[0], STDIN_FILENO);
        close(fds[1]);
        close(fds[0]);
        system(cmd.data());
        exit(0); // child terminates
    }
    return fds;
}
My thought is that subprocess returns a pipe (fd_in, fd_out) and the parent can write to the write end and read from the read end afterwards. However, when I try an input redirection such as sort < in.txt, the program just hangs. I think there is a deadlock because each process is waiting for the other to write or read; however, the parent writes to the write end, closes it, and only then reads from the read end. How should I handle this case?
When I did a bit of searching, I saw this answer, which is similar to my original thinking except that it mentions creating two pipes. I did not quite understand this part. Why do we need two separate pipes?
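For reference, here is a minimal sketch of the two-pipe layout the linked answer seems to describe: one pipe carries data to the child's stdin and a second carries its stdout back, so the two directions never share a single pipe. Names like run_child are hypothetical, not taken from the question:
#include <sys/types.h>
#include <unistd.h>

// Sketch only: parent_to_child feeds the child's stdin,
// child_to_parent carries the child's stdout back to the parent.
pid_t run_child(const char *cmd, int *in_fd, int *out_fd) {
    int parent_to_child[2], child_to_parent[2];
    pipe(parent_to_child);
    pipe(child_to_parent);
    pid_t pid = fork();
    if (pid == 0) {
        dup2(parent_to_child[0], STDIN_FILENO);   // read end becomes stdin
        dup2(child_to_parent[1], STDOUT_FILENO);  // write end becomes stdout
        close(parent_to_child[0]); close(parent_to_child[1]);
        close(child_to_parent[0]); close(child_to_parent[1]);
        execlp("/bin/sh", "sh", "-c", cmd, (char *)nullptr);
        _exit(127);
    }
    // The parent must close the ends it does not use, otherwise the child
    // never sees EOF on its stdin and both sides can block forever.
    close(parent_to_child[0]);
    close(child_to_parent[1]);
    *in_fd = parent_to_child[1];   // parent writes the redirected input here
    *out_fd = child_to_parent[0];  // parent reads the command's output here
    return pid;
}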

Sampling keyboard input at a certain rate in C++

What I want to do is sample the keyboard input at a certain rate (e.g. 10-20 Hz). I use a while loop that reads from stdin (I use read because I want to read asynchronously, i.e. I don't want to press Enter every time) and then I pause before a new cycle starts to keep the sampling frequency stable.
The user presses the left/right arrow to give a command (to a robot). If nothing is pressed, the output is 0.
The problem is that, during the pause, the stdin buffer keeps filling up (I suppose), and so the read returns old input. The result is that the output is delayed: if I press left the output immediately changes to 1, but when I release it takes some seconds to return to 0. I want to remove this delay.
My aim is to sample just the most recent key pressed, in order to synchronize the user input and the output command without delay. Is there a way? Thank you in advance.
This is the method I'm using:
void key_reader::keyLoop()
{
    char c;
    bool dirty = false;
    int read_flag;
    // get the console in raw mode
    tcgetattr(kfd, &cooked);
    memcpy(&raw, &cooked, sizeof(struct termios));
    raw.c_lflag &= ~(ICANON | ECHO);
    // Setting a new line, then end of file
    raw.c_cc[VEOL] = 1;
    raw.c_cc[VEOF] = 2;
    tcsetattr(kfd, TCSANOW, &raw);
    //FD_ZERO(&set); /* clear the set */
    //FD_SET(kfd, &set); /* add our file descriptor to the set */
    //timeout.tv_sec = 0;
    //timeout.tv_usec = 10000;
    if (fcntl(kfd, F_SETFL, O_NONBLOCK) == -1)
    {
        perror("fcntl:"); // an error occurred
        exit(-1);
    }
    puts("Reading from keyboard");
    puts("---------------------------");
    puts("Use arrow keys to move the turtle.");
    ros::Rate r(10);
    while (ros::ok())
    {
        read_flag = read(kfd, &c, 1);
        switch (read_flag)
        {
        case -1:
            // -1 with errno set to EAGAIN means no input is available yet
            if (errno == EAGAIN)
            {
                // no input yet
                direction = 0;
                break;
            }
            else
            {
                perror("read:");
                exit(2);
            }
        // 0 means all bytes are read and EOF (end of conversation)
        case 0:
            // no input yet
            direction = 0;
            break;
        default:
            ROS_DEBUG("value: 0x%02X\n", c);
            switch (c)
            {
            case KEYCODE_L:
                ROS_DEBUG("LEFT");
                direction = 1;
                dirty = true;
                break;
            case KEYCODE_R:
                ROS_DEBUG("RIGHT");
                direction = -1;
                dirty = true;
                break;
            }
        }
        continuos_input::input_command cmd;
        cmd.type = "Keyboard";
        cmd.command = direction;
        cmd.stamp = ros::Time::now();
        key_pub.publish(cmd);
        r.sleep();
    }
}
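For reference, one way to get the behaviour the question asks for (sampling only the most recent key each cycle) is to drain everything currently buffered on the non-blocking descriptor and keep just the last byte. This is a hedged sketch, not code from the question:
#include <unistd.h>

// Sketch only: read until the non-blocking descriptor is empty and keep
// the last byte, so keypresses buffered during the pause are discarded
// instead of being replayed one per cycle.
int readLatestKey(int kfd, char &latest)
{
    char c;
    int got = 0;
    while (read(kfd, &c, 1) == 1)   // kfd must already be O_NONBLOCK
    {
        latest = c;
        got = 1;
    }
    return got;                      // 1 if at least one byte arrived this cycle
}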
I feel that the issue is with your subscriber rather than your publisher. I can see that you have used ros::Rate to limit the publishing rate to 10 Hz. Do confirm the publishing rate using the Topic Monitor in rqt. Also, setting a lower queue size for the publisher might help. I can't give a more definitive answer without seeing your subscriber node.
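As a hedged illustration of the queue-size suggestion (the topic name and helper function are made up; the message type is the one from the question): advertising with a queue size of 1 means only the newest command waits in the outgoing queue.
#include <ros/ros.h>
#include <continuos_input/input_command.h>  // header path assumed from the message type above

void setupPublisher(ros::NodeHandle &nh, ros::Publisher &key_pub)
{
    // Queue size of 1: older, unsent messages are dropped in favour of the newest one.
    key_pub = nh.advertise<continuos_input::input_command>("input_command", 1);
}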

Read Syntax from configuration file

I'm trying to write a configuration reader in C/C++ (using Low-Level I/O).
The configuration contains directions like:
App example.com {
use: python vauejs sass;
root: www/html;
ssl: Enabled;
}
How could I read the content into a std::map or struct? Google did not give me the results I'm looking for yet. I hope SO has some ideas...
What I've got so far:
// File Descriptor
int fd;
// Open File
const errno_t ferr = _sopen_s(&fd, _file, _O_RDONLY, _SH_DENYWR, _S_IREAD);
// Handle returned value
switch (ferr) {
// The given path is a directory, or the file is read-only, but an open-for-writing operation was attempted.
case EACCES:
    perror("[Error] Following error occurred while reading configuration");
    return false;
    break;
// Invalid oflag, shflag, or pmode argument, or pfh or filename was a null pointer.
case EINVAL:
    perror("[Error] Following error occurred while reading configuration");
    return false;
    break;
// No more file descriptors available.
case EMFILE:
    perror("[Error] Following error occurred while reading configuration");
    return false;
    break;
// File or path not found.
case ENOENT:
    perror("[Error] Following error occurred while reading configuration");
    return false;
    break;
}
// Notify Screen
if (pDevMode)
    std::printf("[Configuration]: '_sopen_s' was able to open file \"%s\".\n", _file);
// Allocate memory for buffer
buffer = new (std::nothrow) char[4098 * 4];
// Did the allocation succeed?
if (buffer == nullptr) {
    _close(fd);
    std::perror("[Error] Following error occurred while reading configuration");
    return false;
}
// Notify Screen
if (pDevMode)
    std::printf("[Configuration]: Buffer allocation succeeded.\n");
// Reading content from file
const std::size_t size = _read(fd, buffer, (4098 * 4));
If you put your buffer into a std::string you can piece together a solution from various answers about splitting strings on SO.
The essential structure seems to be "stuff { key:value \n key:value \n }" with varying amounts of whitespace. Many questions have been asked about trimming a string. Splitting a string can happen in several ways, e.g.
std::string config = "App example.com {\n"
                     " use: python vauejs sass;\n"
                     " root: www / html; \n"
                     " ssl: Enabled;"
                     "}";
std::istringstream ss(config);
std::string token;
std::getline(ss, token, '{');
std::cout << token << "... ";
std::getline(ss, token, ':');
//use handy trim function - loads of e.g.s on SO
std::cout << token << " = ";
std::getline(ss, token, '\n');
// trim function required...
std::cout << token << "...\n\n";
//as many times or in a loop..
//then check for closing }
If you have more complicated parsing to do, consider a full-on parser.
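Since the snippet above leans on a trim helper, here is one hedged example of what such a function might look like (just one of the many variants on SO):
#include <string>

// Strips leading and trailing whitespace; returns "" if the input is all whitespace.
std::string trim(const std::string &s)
{
    const char *ws = " \t\r\n";
    const auto begin = s.find_first_not_of(ws);
    if (begin == std::string::npos)
        return "";
    const auto end = s.find_last_not_of(ws);
    return s.substr(begin, end - begin + 1);
}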

Losing data with GetOverlappedResult?

I have a thread that constantly looks for new data. If the data is not already in the serial buffer, ReadFile and GetOverlappedResult seem to tell me there's data and that they read it, but they don't transfer it into my buffer...
int read()
{
    if (state == 0)
    {
        memset(bytes, '\0', sizeof(amount_to_read));
        readreturn = ReadFile(h, bytes, amount_to_read, NULL, osReader);
        if (readreturn <= 0)
        {
            errorcode = GetLastError();
            if (errorcode != ERROR_IO_PENDING)
            {
                SetEAIError(ERROR_INTERNALERROR);
                return -1;
            }
        }
    }
    if (GetOverlappedResult(h, osReader, &dwRead, FALSE) == false)
    {
        errorcode = GetLastError();
        if (errorcode == ERROR_IO_INCOMPLETE || errorcode == 0)
        {
            if (dwRead > 0)
            {
                return 1;
            }
            // timeout
            SetEAIError(ERROR_EAITIMEOUT);
            return -1;
        }
        else
        {
            // other error
            SetEAIError(ERROR_WIN_ERROR);
            return -1;
        }
    }
    else
    {
        // read succeeded, check if we read the amount required
        if (dwRead != amount_to_read)
        {
            if (dwRead == 0)
            {
                // nothing read, treat as timeout
                SetEAIError(ERROR_EAINOREAD);
                return -1;
            }
            else
            {
                //memcpy_s(bytes, sizeof(bytes), readbuf, dwRead);
                SetEAIError(ERROR_PARTIALREAD);
                *_bytesRead = dwRead;
                return -1;
            }
        }
        else
        {
            if (strlen((char*)bytes) == 0)
            {
                // nothing read, treat as timeout
                SetEAIError(ERROR_EAINOREAD);
                return -1;
            }
            //memcpy_s(bytes, sizeof(bytes), readbuf, dwRead);
            *_bytesRead = dwRead;
            return 0;
        }
    }
}
This is what the error codes mean:
ERROR_TIMEOUT - switches the state to 1 so that the next call does not start a new read and only calls GetOverlappedResult again
ERROR_INTERNALERROR, ERROR_EAINOREAD - reset state to 0
ERROR_PARTIALREAD - starts a new read with a new number of bytes to read
If I switch GetOverlappedResult to blocking (pass TRUE), it works every time.
If I switch my thread to only read when I know there is data there, it works every time.
But if there is no data there yet, then when data does arrive it seems to "lose" it: my read-amount parameter dwRead shows the correct number of bytes read (I can see the read with a port monitor), but the bytes are not stored in my char*.
I constantly get ERROR_EAINOREAD.
What am I doing wrong?
I do not want to use flags; I want to just use ReadFile and GetOverlappedResult. I should be able to accomplish this with the code I have... I assume.
The problem was exactly what was stated: the data was getting lost. The REASON it was getting lost is that the bytes buffer passed into ReadFile is a local variable in the parent thread. Being local, it gets re-initialized each cycle, so when I come into the read again, skip the ReadFile, and go to GetOverlappedResult, I am potentially working with a different area of memory.
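A minimal sketch of that fix, assuming the handle was opened with FILE_FLAG_OVERLAPPED; the names are illustrative, and the only point is that the buffer and the OVERLAPPED outlive the pending read instead of being locals recreated every cycle:
#include <windows.h>

// Sketch only: buf and ov stay alive for as long as the read is pending.
struct SerialReader {
    HANDLE     h = INVALID_HANDLE_VALUE;
    OVERLAPPED ov = {};
    char       buf[256] = {};
    bool       pending = false;

    void init()
    {
        // Manual-reset event that GetOverlappedResult polls for completion.
        ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    }

    // Polls once; returns the number of bytes now available in buf (0 if none yet).
    DWORD poll(DWORD amount_to_read)
    {
        DWORD bytes_read = 0;
        if (!pending)
        {
            if (ReadFile(h, buf, amount_to_read, NULL, &ov))
                pending = true;                          // completed immediately
            else if (GetLastError() == ERROR_IO_PENDING)
                pending = true;                          // in flight; keep buf and ov alive
            else
                return 0;                                // real error, handle elsewhere
        }
        if (GetOverlappedResult(h, &ov, &bytes_read, FALSE))
        {
            pending = false;                             // done; buf now holds the data
            ResetEvent(ov.hEvent);
        }
        return bytes_read;
    }
};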

Ofstream not writing to file C++

I have this method which is supposed to take a buffer and write some content to a file:
void writeTasksToDevice()
{
    TaskInfo *task;
    unsigned int i = lastTaskWritten;
    printf("writing elihsa\n");
    outputFile.write(" Elisha2", 7);
    //pthread_mutex_lock(&fileMutex);
    for (; i < (*writingTasks).size(); i++)
    {
        task = (*writingTasks).at(i);
        if (NULL == task)
        {
            printf("ERROR!!! in writeTasksToDevice - there's a null task in taskQueue. By "
                   "design that should NEVER happen\n");
            exit(-1);
        }
        if (true == task->wasItWritten)
        {
            //continue;
        }
        else // we've found a task to write!
        {
            printf("trying to write buffer to file\n");
            printf("buffer = %s, length = %d\n", task->buffer, task->length); // <==== this print is OK, printing what is wanted
            outputFile.write(task->buffer, task->length); // <==== should write here
            printf("done writing file\n");
        }
    }
    //pthread_mutex_unlock(&fileMutex);
    // TODO: check if we should go to sleep and wait for new tasks
    // and then go to sleep
}
the buffer content is:
task->buffer: elishaefla
task->length: 10
I opened the stream in another init function using:
outputFile.open(fileName, ios::app);
if (NULL == outputFile)
{
    //print error;
    return -1;
}
But at the end the file is empty; nothing is being written.
Any idea why?
You did not provide enough information to answer the question with certainty, but here are some of the issues you might be facing:
You did not flush the buffer of the ofstream
You did not close the file that you are trying to open later on (if I'm correct, outputFile is a global variable, so it is not closed automatically until the end of the program)
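A minimal sketch of both suggestions, assuming outputFile is a long-lived std::ofstream as in the question (the function names are illustrative):
#include <fstream>
#include <iostream>

std::ofstream outputFile;  // long-lived stream, as in the question

void writeAndFlush(const char *buffer, std::streamsize length)
{
    outputFile.write(buffer, length);
    outputFile.flush();                 // push buffered bytes out to the file now
    if (!outputFile)
        std::cerr << "write failed\n";  // write errors only show up on the stream state
}

void shutdownLogging()
{
    outputFile.close();                 // flushes any remaining data and releases the file
}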