Opening 2 pipes in C++ from 1 program

I have a program that I am writing for an embedded device, and I am trying to use pipes to pass messages. Before getting to message passing between my program and another, I built test code to ensure that everything works properly, and ran into a problem. Note that the embedded device doesn't support C++11 (hence the use of pthreads).
The code in question:
void* testSender(void *ptr) {
    std::cout << "Beginning testSender" << std::endl;
    int pipe = open("/dev/rtp10", O_WRONLY);
    if (pipe < 0) {
        std::cout << "write pipe failed to open" << std::endl;
    } else {
        std::cout << "write pipe successfully opened" << std::endl;
    }
    std::cout << "Ending testSender" << std::endl;
    return NULL;
}

void* testReceiver(void *ptr) {
    std::cout << "Beginning testReceiver" << std::endl;
    int pipe = open("/dev/rtp10", O_RDONLY);
    if (pipe < 0) {
        std::cout << "read pipe failed to open" << std::endl;
    } else {
        std::cout << "read pipe successfully opened" << std::endl;
    }
    std::cout << "Ending testReceiver" << std::endl;
    return NULL;
}

void testOpenClosePipes() {
    std::cout << "Beginning send/receive test" << std::endl;
    pthread_t sendThread, receiveThread;
    pthread_create(&sendThread, NULL, &testSender, NULL);
    pthread_create(&receiveThread, NULL, &testReceiver, NULL);
    std::cout << "waiting for send and receive test" << std::endl;
    pthread_join(receiveThread, NULL);
    pthread_join(sendThread, NULL);
    std::cout << "Done testing open send/receive" << std::endl;
}
The function testOpenClosePipes() is called from my main thread and after calling it I get the following output:
Beginning send/receive test
waiting for send and receive test
Beginning testReceiver
Beginning testSender
write pipe failed to open
Ending testSender
and then the program hangs. I believe that this is because the read pipe has been opened and is then waiting for a sender to connect to the pipe, but I could be wrong there. Note that if I start the receive thread before I start the send thread, then the result is as follows:
Beginning send/receive test
waiting for send and receive test
Beginning testSender
Beginning testReceiver
read pipe failed to open
Ending testReceiver
From what I have read about pipes so far, what appears to be happening is that one end (either the send or the receive side) opens correctly and then blocks until the other end of the pipe has been opened. However, the other end fails to open, which leaves the program hanging because the successfully opened end is still waiting for its peer before it can move on. I am unable to figure out why this is happening, and am looking for help with that.

After reviewing the problem, it turns out the issue isn't actually in how the pipes are being used: /dev/rtp* opens a pipe for the embedded system's special application, and is not a pipe that can connect one Linux process to another. This is solved by using a different path and first creating that FIFO with the mkfifo command before attempting to open it.
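A minimal sketch of that fix for the sender side (the FIFO path /tmp/testfifo is an example, not the path used on the device):

#include <fcntl.h>     // open, O_WRONLY
#include <sys/stat.h>  // mkfifo
#include <cerrno>
#include <cstring>
#include <iostream>

int main() {
    // Create the FIFO once; EEXIST just means it already exists.
    if (mkfifo("/tmp/testfifo", 0666) < 0 && errno != EEXIST) {
        std::cout << "mkfifo failed: " << std::strerror(errno) << std::endl;
        return 1;
    }
    // This open() blocks until the other program opens the FIFO for reading.
    int fd = open("/tmp/testfifo", O_WRONLY);
    if (fd < 0) {
        std::cout << "write pipe failed to open: " << std::strerror(errno) << std::endl;
        return 1;
    }
    std::cout << "write pipe successfully opened" << std::endl;
    // ... write messages, then close(fd) ...
    return 0;
}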

How about checking the errno value for the failed open call?
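For example, a minimal sketch of that check applied to the open() call from the question:

#include <cerrno>
#include <cstring>
#include <fcntl.h>
#include <iostream>

int main() {
    // Same open() call as in testSender(), but reporting why it failed.
    int fd = open("/dev/rtp10", O_WRONLY);
    if (fd < 0) {
        std::cout << "write pipe failed to open: "
                  << std::strerror(errno) << " (errno " << errno << ")" << std::endl;
        return 1;
    }
    return 0;
}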


Is there any way to determine stdin content size in bytes in C++? [duplicate]

This question already has answers here:
How to construct a C++ fstream from a POSIX file descriptor? (8 answers)
Closed 2 years ago.
I'm new to programming, and I'm trying to write a C++ program for Linux which creates a child process that executes an external program. The output of this program should be redirected to the main program and saved into a string variable, preserving all the spaces and new lines. I don't know how many lines/characters the output will contain.
This is the basic idea:
#include <iostream>
#include <string>
#include <cerrno>
#include <cstring>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int pipeDescriptors[2];
    pipe(pipeDescriptors);
    pid_t pid = fork();
    if (pid == -1)
    {
        std::cerr << __LINE__ << ": fork() failed!\n" <<
            std::strerror(errno) << '\n';
        return 1;
    }
    else if (!pid)
    {
        // Child process
        close(pipeDescriptors[0]); // Not gonna read from here
        if (dup2(pipeDescriptors[1], STDOUT_FILENO) == -1) // Redirect output to the pipe
        {
            std::cerr << __LINE__ << ": dup2() failed!\n" <<
                std::strerror(errno) << '\n';
            return 1;
        }
        close(pipeDescriptors[1]); // Not needed anymore
        execlp("someExternalProgram", "someExternalProgram", NULL);
    }
    else
    {
        // Parent process
        close(pipeDescriptors[1]); // Not gonna write here
        pid_t stdIn = dup(STDIN_FILENO); // Save the standard input for further usage
        if (dup2(pipeDescriptors[0], STDIN_FILENO) == -1) // Redirect input to the pipe
        {
            std::cerr << __LINE__ << ": dup2() failed!\n" <<
                std::strerror(errno) << '\n';
            return 1;
        }
        close(pipeDescriptors[0]); // Not needed anymore
        int childExitCode;
        wait(&childExitCode);
        if (childExitCode == 0)
        {
            std::string childOutput;
            char c;
            while (std::cin.read(&c, sizeof(c)))
            {
                childOutput += c;
            }
            // Do something with childOutput...
        }
        if (dup2(stdIn, STDIN_FILENO) == -1) // Restore the standard input
        {
            std::cerr << __LINE__ << ": dup2() failed!\n" <<
                std::strerror(errno) << '\n';
            return 1;
        }
        // Some further code goes here...
    }
    return 0;
}
The problem with the above code is that when the std::cin read loop consumes the last byte in the input stream, it doesn't actually "know" that this byte is the last one and tries to read further, which sets failbit and eofbit for std::cin, so I cannot read from the standard input later anymore. std::cin.clear() resets those flags, but stdin still remains unusable.
If I could get the precise size in bytes of the stdin content without going beyond the last character in the stream, I would be able to use std::cin.read() to read exactly that many bytes into a string variable. But I guess there is no way to do that.
So how can I solve this problem? Should I use an intermediate file for writing the output of the child process into it and reading it later from the parent process?
The child process writes into the pipe but the parent doesn't read the pipe until the child process terminates. If the child writes more than the pipe buffer size, it blocks waiting for the parent to read the pipe, but the parent is blocked waiting for the child to terminate, leading to a deadlock.
To avoid that, the parent process must keep reading the pipe until EOF and only then use wait to get the child process exit status.
E.g.:
// Read the entire child output.
std::string child_stdout{std::istreambuf_iterator<char>{std::cin},
                         std::istreambuf_iterator<char>{}};

// Get the child exit status.
int childExitCode;
if (wait(&childExitCode) == -1)
    std::abort(); // wait failed.
You may also like to open a new istream from the pipe file descriptor to avoid messing up std::cin state.
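As a rough sketch of that idea (assuming the pipeDescriptors array from the question, and reading the pipe directly with read() instead of redirecting std::cin):

#include <string>
#include <unistd.h>   // read, close
#include <sys/wait.h> // wait

// Drain the pipe's read end into a string until EOF, then reap the child.
// 'fd' is assumed to be pipeDescriptors[0] from the question.
std::string readChildOutput(int fd)
{
    std::string childOutput;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        childOutput.append(buf, static_cast<std::size_t>(n));
    close(fd);

    int childExitCode;
    wait(&childExitCode); // Only wait after EOF to avoid the deadlock.
    return childOutput;
}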

Named pipe file descriptor

Currently I am writing a C/C++ program for the Linux operating system.
I want to use a named pipe to communicate a PID (process ID) between two programs.
The pipe has been created and is visible in the directory.
The Get PID program says that the file descriptor returns 3, while it should return 0 if it could open the pipe. What am I doing wrong?
Get PID
// Several includes
using namespace std;

int main(int argc, char *argv[]) {
    pid_t pid;
    int sig = 22;
    int succesKill;
    int iFIFO;
    char sPID[5] = {0,1,2,3,'\0'};

    iFIFO = open("IDpipe" , O_RDONLY);
    if(iFIFO != 0)
    {
        cerr << "File descriptor does not return 0, but: " << iFIFO << endl;
        return EXIT_FAILURE;
    }
    read(iFIFO, sPID, strlen(sPID));
    cerr << "In sPID now is: " << sPID << endl;
    close(iFIFO);
    pid = atoi(sPID);
    cout << "The PID I will send signals to is: " << pid << "." << endl;
    while(1)
    {
        succesKill = kill(pid, sig);
        cout << "Tried to send signal" << endl;
        sleep(5);
    }
    return EXIT_SUCCESS;
}
Send PID
// Several includes
using namespace std;

void catch_function(int signo);

volatile sig_atomic_t iAmountSignals = 0;

int main(void) {
    pid_t myPID;
    int iFIFO;
    char sPID[5] = {'l','e','e','g','\0'};

    myPID = getpid();
    sprintf(sPID, "%d", myPID);
    cout << "My PID is: " << sPID << endl;

    iFIFO = open("IDpipe" , O_WRONLY);
    if(iFIFO == -1)
    {
        cerr << "Pipe can't be opened for writing, error: " << errno << endl;
        return EXIT_FAILURE;
    }
    write(iFIFO, sPID, strlen(sPID));
    close(iFIFO);

    if (signal(22, catch_function) == SIG_ERR) {
        cerr << "An error occurred while setting a signal handler." << endl;
        return EXIT_FAILURE;
    }
    cout << "Raising the interactive attention signal." << endl;
    if (raise(22) != 0) {
        cerr << "Error raising the signal." << endl;
        return EXIT_FAILURE;
    }
    while(1)
    {
        cout << "iAmountSignals is: " << iAmountSignals << endl;
        sleep(1);
    }
    cout << "Exit." << endl;
    return EXIT_SUCCESS;
}

void catch_function(int signo) {
    switch(signo) {
    case 22:
        cout << "Caught a signal 22" << endl;
        if(iAmountSignals == 9)
            {iAmountSignals = 0;}
        else
            {++iAmountSignals;}
        break;
    default:
        cerr << "Thats the wrong signal.." << endl;
        break;
    }
}
Terminal output: (screenshot omitted)
open() returns the newly created file descriptor. It cannot return 0 for the simple reason that the new process already has a file descriptor 0. That would be standard input.
The return value of 3 is the expected result from open() in this case, because that is the next available file descriptor after standard input, output, and error. If open() couldn't open the file, it would return -1.
But besides that, your code also has a bunch of other bugs:
sprintf(sPID, "%d",myPID);
// ...
write(iFIFO, sPID, strlen(sPID));
If your process ID happens to be only 3 digits long (which is possible), this will write three bytes to the pipe.
If your process ID happens to be five digits long (which is even more likely), the sprintf() writes 5 digits plus the '\0' byte, for a total of six bytes, into the five-byte sPID buffer, overrunning the array and resulting in undefined behavior.
The actual results are, of course, undefined, but a typical C++ implementation will end up clobbering the first byte of whatever is the next variable on the stack, which is:
int iFIFO;
which is your file descriptor. So, if your luck runs out and your new process gets a five-digit process ID, and this is a little-endian C++ implementation with no padding between the variables, then the low-order byte of iFIFO gets set to 0; and if the code got compiled without any optimizations, the iFIFO file descriptor itself ends up as 0. Hilarity ensues.
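For what it's worth, a sender-side sketch that avoids the overrun (the buffer size is chosen generously, and snprintf() truncates instead of overrunning):

#include <cstdio>    // snprintf
#include <fcntl.h>   // open
#include <unistd.h>  // getpid, write, close

int main() {
    char sPID[16];   // room for any PID plus the '\0' terminator
    int len = snprintf(sPID, sizeof(sPID), "%d", static_cast<int>(getpid()));

    int iFIFO = open("IDpipe", O_WRONLY);
    if (iFIFO == -1 || len <= 0)
        return 1;
    write(iFIFO, sPID, len);   // send just the digits; the reader adds its own '\0'
    close(iFIFO);
    return 0;
}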
Furthermore, on the other side of the pipe:
char sPID[5] = {0,1,2,3,'\0'};
// ...
read(iFIFO, sPID, strlen(sPID));
Because the first byte of sPID is always set to 0, strlen(sPID) is always 0, so this will always execute read(iFIFO, sPID, 0) and not read anything.
After that:
pid = atoi(sPID);
atoi() expects a '\0'-terminated string. read() only reads whatever it reads; it will not '\0'-terminate what it ends up reading. It is your responsibility to place a '\0' that terminates the read input (and, of course, to make sure that the read buffer is big enough) before using atoi().
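A minimal sketch of a corrected receive path under those constraints (buffer size is just an example):

#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>
#include <iostream>

int main() {
    char sPID[16] = {0};                 // big enough for any PID plus the terminator
    int iFIFO = open("IDpipe", O_RDONLY);
    if (iFIFO == -1) {
        std::cerr << "open failed" << std::endl;
        return EXIT_FAILURE;
    }
    // Read at most sizeof(sPID) - 1 bytes, then terminate whatever was read.
    ssize_t n = read(iFIFO, sPID, sizeof(sPID) - 1);
    close(iFIFO);
    if (n <= 0) {
        std::cerr << "read failed or pipe was empty" << std::endl;
        return EXIT_FAILURE;
    }
    sPID[n] = '\0';
    pid_t pid = std::atoi(sPID);
    std::cout << "The PID I will send signals to is: " << pid << "." << std::endl;
    return EXIT_SUCCESS;
}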
Your logic appears to be incorrect.
if(iFIFO != 0)
should be
if(iFIFO == -1)
since open returns -1 on error. Otherwise it returns a valid file descriptor.

How to use QProcess write correctly?

I need a program to communicate with a subprocess that relies on input and output. The problem is that I am apparently not able to use QProcess correctly.
The code further down should create a QProcess, start it and enter the main while loop. In there it prints all the output created by the subprocess to the console and subsequently asks the user for input which is then passed to the subprocess via write(...).
Originally I had two problems emerging from this scenario:
1. The printf's of the subprocess could not be read by the parent process.
2. scanf in the subprocess is not receiving the strings sent via write.
As for (1), I came to realize that this is a problem caused by the buffering of the subprocess' stdout. This problem can be solved easily with fflush(stdout) calls or manipulations regarding its flushing behavior.
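For instance, the child could disable stdout buffering once at startup instead of sprinkling fflush() calls (a minimal sketch, not the original subprocess code):

#include <cstdio>

int main() {
    // Make stdout unbuffered so every printf() reaches the parent immediately,
    // even when stdout is a pipe rather than a terminal.
    std::setvbuf(stdout, NULL, _IONBF, 0);

    std::printf("This is a simple demo application.\n"); // visible without fflush()
    // ... rest of the echo loop from the question ...
    return 0;
}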
The second problem is the one I can't wrap my head around. write gets called and even returns the correct number of sent bytes. The subprocess, however, does not continue its execution, because no new data is written to its output. The scanf seems not to be receiving the data sent. The output given by the program is:
Subprocess should have started.
124 bytes available!
Attempting to read:
Read: This is a simple demo application.
Read: It solely reads stdin and echoes its contents.
Read: Input exit to terminate.
Read: ---------
Awaiting user input: test
Written 5 bytes
No line to be read...
Awaiting user input:
I am seriously stuck right here. Google + heavy thinking having failed on me, I want to pass this on to you as my last beacon of hope. In case I am just failing to see the forest for all the trees, my apologies.
In case this information is necessary: I am working on 64bit MacOS X using Qt5 and the clang compiler. The subprocess-code is compiled with gcc on the same machine.
Thank you very much in advance,
NR
Main-Code:
#include <iostream>
#include <string>
#include <QProcess>
#include <QString>
#include <QByteArray>

int main() {
    // Command to execute the subprocess
    QString program = "./demo";
    QProcess sub;
    sub.start(program, QProcess::Unbuffered | QProcess::ReadWrite);

    // Check whether the subprocess is starting correctly.
    if (!sub.waitForStarted()) {
        std::cout << "Subprocess could not be started!" << std::endl;
        sub.close();
        return 99;
    }
    std::cout << "Subprocess should have started." << std::endl;

    // Check if the subprocess has written its starting message to the output.
    if (!sub.waitForReadyRead()) {
        std::cout << "No data available for reading. An error must have occurred." << std::endl;
        sub.close();
        return 99;
    }
    while (1) {
        // Try to read the subprocess' output
        if (!sub.canReadLine()) {
            std::cout << "No line to be read..." << std::endl;
        } else {
            std::cout << sub.bytesAvailable() << " bytes available!" << std::endl;
            std::cout << "Attempting to read..." << std::endl;
            while (sub.canReadLine()) {
                QByteArray output = sub.readLine();
                std::cout << "Read: " << output.data();
            }
        }
        std::cout << "Awaiting user input: ";
        std::string input;
        getline(std::cin, input);
        if (input.compare("exit") == 0) break;

        qint64 a = sub.write(input.c_str());
        qint64 b = sub.write("\n");
        sub.waitForBytesWritten();
        std::cout << "Written " << a + b << " bytes" << std::endl;
    }
    std::cout << "Terminating..." << std::endl;
    sub.close();
}
Subprocess-Code:
#include <stdio.h>
#include <string.h>

int main() {
    printf("This is a simple demo application.\n");
    printf("It reads stdin and echoes its contents.\n");
    printf("Input \"exit\" to terminate.\n");
    while (1) {
        char str[256];
        printf("Input: ");
        fflush(stdout);
        scanf("%s", str);
        if (strcmp(str, "exit") == 0) return 0;
        printf("> %s\n", str);
    }
}
P.s: Since this is my first question on SO, please tell me if something is wrong concerning the asking style.
Solution
After many more trials and errors, I managed to come up with a solution to the problem. Adding a call to waitForReadyRead() causes the main process to wait until new output is written by the subprocess. The working code is:
...
sub.waitForBytesWritten();
std::cout << "Written " << a + b << " bytes" << std::endl;
// Wait for new output
sub.waitForReadyRead();
...
I still don't have a clue why it works this way. I guess it somehow relates to the blocking of the main process by getline() vs blocking by waitForReadyRead(). To me it appears as if getline() blocks everything, including the subprocess, causing the scanf call never to be processed due to race conditions.
It would be great, if someone who understands could drop an explanation.
Thank you for your help :)
NR
This will not work. You are waiting for the sent bytes to be written but you are not waiting for the echo. Instead you are entering the getline() function waiting for new user input. Keep in mind that two processes are involved here where each process can be delayed to any degree.
Apart from this, you should consider building your Qt application asynchronously (having an event loop) instead of trying the synchronous approach. This way your Qt application can do things in parallel, e.g. reading input or waiting for input from the remote process, while still not being blocked and still able to accept user input.
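A rough sketch of that asynchronous style (assuming the same ./demo subprocess from the question; signal/slot connections replace the wait* calls):

#include <QCoreApplication>
#include <QProcess>
#include <QTextStream>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    QProcess sub;

    // Print whatever the child writes, whenever it arrives, without blocking.
    QObject::connect(&sub, &QProcess::readyReadStandardOutput, [&sub]() {
        QTextStream(stdout) << sub.readAllStandardOutput();
    });

    // Feed input only once the child is actually running; a real application
    // would hook this up to user input instead of hard-coded lines.
    QObject::connect(&sub, &QProcess::started, [&sub]() {
        sub.write("test\n");
        sub.write("exit\n");
    });

    // Leave the event loop when the child exits.
    QObject::connect(&sub,
                     static_cast<void (QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
                     [&app](int, QProcess::ExitStatus) { app.quit(); });

    sub.start("./demo", QProcess::Unbuffered | QProcess::ReadWrite);
    return app.exec();
}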

dup2( ) causing child process to terminate early

So I'm writing a program that involves the creation of 2 sets of pipes so that a parent process can write to a child process and the child process can write back...
I have the following code for my child process:
if(pid==0){ //child process
    cout << "executing child" << endl;
    close(fd1[WRITE_END]);
    close(fd2[READ_END]);
    if(dup2(fd1[READ_END],STDIN_FILENO) < 0 || dup2(fd2[WRITE_END],STDOUT_FILENO) < 0){
        cerr << "dup2 failed" << endl;
        exit(1);
    }
    cout << "test output" << endl;
    close(fd2[WRITE_END]);
    close(fd1[READ_END]);
    read(fd1[READ_END],buf,BUFFER_SIZE);
    cout << "Child process read " << buf << endl;
    execl("/bin/sort", "sort", "-nr", NULL);
} else { //... parent process
When I run my program, all I get as output from the child process is executing child but no test output.
However, when I remove the if-statement handling the dup2 calls, my output does include test output.
Any ideas as to why dup2 causes my child process to not finish terminating?
(and by the way, originally, my two dup2's were done in separate if statements... when I put the test output below the dup2(fd1[READ_END],STDIN_FILENO) < 0 test, it outputs, but not when I put it below the other dup2 conditional test, so I'm convinced that that's where my issue is)
Thanks in advance
The call to dup2(fd2[WRITE_END], STDOUT_FILENO) connects STDOUT (which is used by the C++ cout stream) to your fd2 pipe. So 'test output' gets written to the pipe instead of the terminal.
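A self-contained sketch of that behavior (not the original program; it redirects its own stdout into a pipe and reads the message back, while diagnostics go to std::cerr, which is not redirected):

#include <iostream>
#include <unistd.h>

int main() {
    int fd[2];
    if (pipe(fd) < 0) return 1;

    dup2(fd[1], STDOUT_FILENO);   // redirect stdout into the pipe's write end
    close(fd[1]);

    std::cout << "test output" << std::endl;  // lands in the pipe, not the terminal
    std::cerr << "diagnostics still visible on the terminal" << std::endl;

    char buf[64] = {0};
    ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  // read our own message back
    if (n > 0)
        std::cerr << "read from pipe: " << buf;     // prints "test output"
    return 0;
}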

Qt C++ and QSerialDevice: Windows 7 USB->Serial Port Reading/Writing

I am attempting to read from/write to an RS-232 capable device. This works without issue on Linux. The device is connected via a Digitus USB/Serial Adapter.
The device shows up in Device Manager as COM4.
void PayLife::run() {
    this->sendingData = 0;
    this->running = true;
    qDebug() << "Starting PayLife Thread";
    this->port = new AbstractSerial();
    this->port->setDeviceName(this->addy);
    QByteArray ba;
    if (port->open(AbstractSerial::ReadWrite | AbstractSerial::Unbuffered)) {
        if (!port->setBaudRate(AbstractSerial::BaudRate19200)) {
            qDebug() << "Set baud rate " << AbstractSerial::BaudRate19200 << " error.";
            goto end_thread;
        };
        if (!port->setDataBits(AbstractSerial::DataBits7)) {
            qDebug() << "Set data bits " << AbstractSerial::DataBits7 << " error.";
            goto end_thread;
        }
        if (!port->setParity(AbstractSerial::ParityEven)) {
            qDebug() << "Set parity " << AbstractSerial::ParityEven << " error.";
            goto end_thread;
        }
        if (!port->setStopBits(AbstractSerial::StopBits1)) {
            qDebug() << "Set stop bits " << AbstractSerial::StopBits1 << " error.";
            goto end_thread;
        }
        if (!port->setFlowControl(AbstractSerial::FlowControlOff)) {
            qDebug() << "Set flow " << AbstractSerial::FlowControlOff << " error.";
            goto end_thread;
        }
        while(this->running) {
            if ((port->bytesAvailable() > 0) || port->waitForReadyRead(900)) {
                ba.clear();
                ba = port->read(1024);
                qDebug() << "Readed is : " << ba.size() << " bytes";
            }
            else {
                qDebug() << "Timeout read data in time : " << QTime::currentTime();
            }
        }
    }
end_thread:
    this->running = false;
}
On Linux, I don't use QSerialDevice, just regular serial reading/writing.
No matter what, I always get:
Starting PayLife Thread
Readed is : 0 bytes
Timeout read data in time : QTime("16:27:43")
Timeout read data in time : QTime("16:27:44")
Timeout read data in time : QTime("16:27:45")
Timeout read data in time : QTime("16:27:46")
I am not exactly sure why.
Note, I tried first to use regular Windows API reading and writing with the same results, i.e. it just doesn't read any data from the device.
I am 100% sure that there is always something to read from the device, as it spams ENQ across the connection.
You should generate the Doxygen documentation of QSerialDevice if you haven't already done so. The problem seems to be explained there.
On Windows in unbuffered mode:
Necessary to avoid the values of CharIntervalTimeout and
TotalReadConstantTimeout equal to 0. In theory, it was planned that at
zero values of timeouts method AbstractSerial::read() will read the
data which are in the buffer device driver (not to be confused with
the buffer AbstractSerial!) and return them immediately. But for
unknown reasons, this reading always returns 0, not depending on
whether or not a ready-made data in the buffer.
Because read waits for the data in unbuffered mode, I guess waitForReadyRead doesn't do anything useful in that mode.