dup2() causing child process to terminate early - C++

So I'm writing a program that creates two pipes so that a parent process can write to a child process and the child process can write back...
I have the following code for my child process:
if (pid == 0) { // child process
    cout << "executing child" << endl;
    close(fd1[WRITE_END]);
    close(fd2[READ_END]);
    if (dup2(fd1[READ_END], STDIN_FILENO) < 0 || dup2(fd2[WRITE_END], STDOUT_FILENO) < 0) {
        cerr << "dup2 failed" << endl;
        exit(1);
    }
    cout << "test output" << endl;
    close(fd2[WRITE_END]);
    close(fd1[READ_END]);
    read(fd1[READ_END], buf, BUFFER_SIZE);
    cout << "Child process read " << buf << endl;
    execl("/bin/sort", "sort", "-nr", NULL);
} else { // ... parent process
When I run my program, all I get as output from the child process is "executing child" but no "test output".
However, when I remove the if-statement wrapping the dup2 calls, my output does include "test output".
Any ideas as to why dup2 causes my child process to terminate early?
(By the way, my two dup2 calls were originally in separate if statements. When I put the test output after the dup2(fd1[READ_END], STDIN_FILENO) < 0 test it prints, but not when I put it after the other dup2 test, so I'm convinced that's where my issue is.)
Thanks in advance

The call to dup2(fd2[WRITE_END], STDOUT_FILENO) connects STDOUT (which the C++ cout stream writes to) to your fd2 pipe. So "test output" gets written into the pipe rather than to the terminal. The child isn't terminating early; you just can't see its output anymore.

Related

How to use waitpid when execv returns an error?

My program loops through a vector of strings and runs a program to do some work. Each entry in the vector has its own associated program. The child processes are created using fork() and execv() in the loop. The parent process waits until each child process has returned before continuing the loop, using waitpid(). The child processes in my test environment (for now) each print a message, sleep(), and print another message.
The code works perfectly fine as long as no execv() call returns -1 (for example, because the file wasn't found).
std::vector<std::string> files{ "foo", "bar", "foobar" };
for (size_t i = 0; i < files.size(); i++)
{
    pid_t pid_fork = fork();
    if (pid_fork == -1)
    {
        std::cout << "error: could not fork process" << std::endl;
    } else if (pid_fork > 0)
    {
        std::cout << "this is the parent" << std::endl;
        int pid_status;
        pid_t child_ret = waitpid(pid_fork, &pid_status, 0);
        std::cout << "child_ret: " << child_ret << std::endl;
        if (child_ret == -1)
        {
            std::cout << "error waiting for child " << pid_fork << std::endl;
        } else
        {
            if (WIFEXITED(pid_status))
            {
                std::cout << "child process exit status: " << WEXITSTATUS(pid_status) << std::endl;
                if (WEXITSTATUS(pid_status) == 0)
                {
                    std::cout << "updating db that file has been loaded: " << files[i] << std::endl;
                    /* some code to update a DB table */
                } else
                {
                    std::cout << "exit status = FAILED" << std::endl;
                }
            }
        }
    } else
    {
        std::cout << "this is the child" << std::endl;
        char *args[] = {NULL};
        if (execv(("./etl/etl_" + files[i]).c_str(), args) == -1)
        {
            std::cout << "could not load ./etl/etl_" << files[i] << std::endl;
            /* DB insert of failed "load" here */
            return EXIT_FAILURE;
        }
    }
}
/* some more code here writing stuff to a database before cleanup and returning from main */
Output:
this is the parent
this is the child
hello from etl_foo
etl_foo is done
child_ret: 77388
child process exit status: 0
this is the parent
this is the child
hello from etl_bar
etl_bar is done
child_ret: 77389
child process exit status: 0
this is the parent
this is the child
hello from etl_foobar
etl_foobar is done
child_ret: 77390
child process exit status: 0
If, however, I cause execv() to return -1 (because I deleted etl_foobar), the parent process seems to no longer wait for the child process to return:
this is the child
hello from etl_foo
etl_foo is done
child_ret: 77620
child process exit status: 0
this is the parent
this is the child
hello from etl_bar
etl_bar is done
child_ret: 77621
child process exit status: 0
this is the parent
this is the child
could not load ./etl_foobar
-> here the end of the parent code is reached, the DB is updated and the parent returns (?)
-> I expect the program to be done at this stage, however... this happens
child_ret: 77622
terminate called after throwing an instance of 'sql::SQLException'
what(): Lost connection to MySQL server during query
Aborted (core dumped)
It seems the code block after pid_t child_ret = waitpid(pid_fork, &pid_status, 0); is executed, which I don't understand. The parent has already returned, yet part of the parent's code is still executed and fails, since the connection object for the DB connection was deleted just before the parent returned.
The desired behavior is that, upon discovering that execv() returned -1, the child process exits and returns control to the waiting parent, which then finishes the remaining code and returns in an orderly manner, the same way it does when there is no error in execv(). Thank you!
Edit: User Sneftel pointed out that the child process in the failure case actually doesn't exit, which I have changed now. The parent process is hence now waiting for all children to return, including those where execv fails.
Nevertheless, I still have the issue that whenever a child returns with EXIT_FAILURE, the following loop iteration runs up until the next DB insert is attempted, where I continue to get the "lost MySQL connection" error plus a core dump. I'm not sure what the origin of this is.

Opening 2 pipes in c++ from 1 program

I have a program that I am writing for an embedded device, and I am trying to use pipes to pass messages. Before I get to passing messages between my program and another, I was building test code to ensure that everything works properly, and encountered a problem. Note that the embedded device doesn't support C++11 (hence the use of pthreads).
The code in question:
void* testSender(void *ptr) {
    std::cout << "Beginning testSender" << std::endl;
    int pipe = open("/dev/rtp10", O_WRONLY);
    if (pipe < 0) {
        std::cout << "write pipe failed to open" << std::endl;
    } else {
        std::cout << "write pipe successfully opened" << std::endl;
    }
    std::cout << "Ending testSender" << std::endl;
    return NULL;
}

void* testReceiver(void *ptr) {
    std::cout << "Beginning testReceiver" << std::endl;
    int pipe = open("/dev/rtp10", O_RDONLY);
    if (pipe < 0) {
        std::cout << "read pipe failed to open" << std::endl;
    } else {
        std::cout << "read pipe successfully opened" << std::endl;
    }
    std::cout << "Ending testReceiver" << std::endl;
    return NULL;
}

void testOpenClosePipes() {
    std::cout << "Beginning send/receive test" << std::endl;
    pthread_t sendThread, receiveThread;
    pthread_create(&sendThread, NULL, &testSender, NULL);
    pthread_create(&receiveThread, NULL, &testReceiver, NULL);
    std::cout << "waiting for send and receive test" << std::endl;
    pthread_join(receiveThread, NULL);
    pthread_join(sendThread, NULL);
    std::cout << "Done testing open send/receive" << std::endl;
}
The function testOpenClosePipes() is called from my main thread and after calling it I get the following output:
Beginning send/receive test
waiting for send and receive test
Beginning testReceiver
Beginning testSender
write pipe failed to open
Ending testSender
and then the program hangs. I believe that this is because the read pipe has been opened and is then waiting for a sender to connect to the pipe, but I could be wrong there. Note that if I start the receive thread before I start the send thread, then the result is as follows:
Beginning send/receive test
waiting for send and receive test
Beginning testSender
Beginning testReceiver
read pipe failed to open
Ending testReceiver
From what I have read about pipes so far, what appears to happen is that one end (either send or receive) opens correctly and then blocks until the other end of the pipe has been opened. The other end, however, fails to open, leaving the system hanging because the open end is waiting for a connection that will never arrive. I am unable to figure out why this is happening, and am looking for help with that.
After reviewing my problem, it appears the issue isn't actually in the use of the pipes themselves: /dev/rtp* opens a channel to the embedded system's special application and is not a pipe that can connect one Linux process to another. This is solved by using a different path and first creating the pipe with the mkfifo command before attempting to open it.
How about checking the errno value for the failed open call?

Best way to create child process in linux and handle possible failing

I have a parent process that has to create a few child processes. The best way I've found is fork + execl. But the parent process then needs to know whether the execl of a given child failed or not, and I don't know how to implement that.
int pid = fork();
if (pid < 0) {
    std::cout << "ERROR on fork." << std::endl;
} else if (pid == 0) {
    execl("/my/program/full/path", "program", (char *)NULL);
    exit(1);
} else {
    if (/* child's execl failed */) {
        std::cout << "it failed" << std::endl;
    } else {
        std::cout << "child born" << std::endl;
    }
}
I think this idea is not good:
int status(0);
sleep(100); // NB: sleep() takes seconds, not milliseconds
int res = waitpid(pid, &status, WNOHANG);
if (res < 0 && errno == ECHILD) {
    std::cout << "it failed" << std::endl;
} else {
    std::cout << "child born" << std::endl;
}
because it's not good to hope that the child process will have died within 100 milliseconds; I want to know for sure, as soon as it happens.
I also think that creating shared memory or a special pipe connection for such a check is a cannon against bees.
There has to be a simpler solution that I just haven't found yet.
What is the best way to achieve that?
As a general solution you can register a signal handler (SIGUSR1) in the parent using sigaction().
In the child: unregister the signal handler; if the execl() call fails, send SIGUSR1 to the parent.
In the parent: store every child pid in a std::set. When all children have been created, spawn a separate thread to track them. In the thread function just call wait() and remove each reaped pid from the set. Another way is to listen for the SIGCHLD signal (but that leads to a more complex solution, so if spawning another thread is an option I'd use the thread).
When the set is empty, we are done.

How to use QProcess write correctly?

I need a program to communicate with a subprocess that relies on input and output. The problem is that I am apparently not able to use QProcess correctly.
The code further down should create a QProcess, start it, and enter the main while loop. In there it prints all the output produced by the subprocess to the console and then asks the user for input, which is passed to the subprocess via write(...).
Originally I had two problems emerging from this scenario:
The printf's of the subprocess could not be read by the parent process.
scanf in the subprocess is not receiving the strings sent via write.
As for (1), I came to realize that this is caused by the buffering of the subprocess's stdout. It can be solved easily with fflush(stdout) calls or by changing the stream's flushing behavior.
The second problem is the one I can't wrap my head around. write gets called and even returns the correct number of sent bytes. The subprocess, however, does not continue its execution, because no new data is written to its output. The scanf seems not to receive the data sent. The output given by the program is:
Subprocess should have started.
124 bytes available!
Attempting to read:
Read: This is a simple demo application.
Read: It solely reads stdin and echoes its contents.
Read: Input exit to terminate.
Read: ---------
Awaiting user input: test
Written 5 bytes
No line to be read...
Awaiting user input:
I am seriously stuck right here. Google + heavy thinking having failed on me, I want to pass this on to you as my last beacon of hope. In case I am just failing to see the forest for all the trees, my apologies.
In case this information is necessary: I am working on 64bit MacOS X using Qt5 and the clang compiler. The subprocess-code is compiled with gcc on the same machine.
Thank you very much in advance,
NR
Main-Code:
int main() {
    // Command to execute the subprocess
    QString program = "./demo";
    QProcess sub;
    sub.start(program, QProcess::Unbuffered | QProcess::ReadWrite);
    // Check whether the subprocess starts correctly.
    if (!sub.waitForStarted()) {
        std::cout << "Subprocess could not be started!" << std::endl;
        sub.close();
        return 99;
    }
    std::cout << "Subprocess should have started." << std::endl;
    // Check if the subprocess has written its starting message to the output.
    if (!sub.waitForReadyRead()) {
        std::cout << "No data available for reading. An error must have occurred." << std::endl;
        sub.close();
        return 99;
    }
    while (1) {
        // Try to read the subprocess' output
        if (!sub.canReadLine()) {
            std::cout << "No line to be read..." << std::endl;
        } else {
            std::cout << sub.bytesAvailable() << " bytes available!" << std::endl;
            std::cout << "Attempting to read..." << std::endl;
            while (sub.canReadLine()) {
                QByteArray output = sub.readLine();
                std::cout << "Read: " << output.data();
            }
        }
        std::cout << "Awaiting user input: ";
        std::string input;
        getline(std::cin, input);
        if (input.compare("exit") == 0) break;
        qint64 a = sub.write(input.c_str());
        qint64 b = sub.write("\n");
        sub.waitForBytesWritten();
        std::cout << "Written " << a + b << " bytes" << std::endl;
    }
    std::cout << "Terminating..." << std::endl;
    sub.close();
}
Subprocess-Code:
#include <stdio.h>
#include <string.h>

int main() {
    printf("This is a simple demo application.\n");
    printf("It reads stdin and echoes its contents.\n");
    printf("Input \"exit\" to terminate.\n");
    while (1) {
        char str[256];
        printf("Input: ");
        fflush(stdout);
        scanf("%s", str);
        if (strcmp(str, "exit") == 0) return 0;
        printf("> %s\n", str);
    }
}
P.s: Since this is my first question on SO, please tell me if something is wrong concerning the asking style.
Solution
After much more trial and error, I managed to come up with a solution to the problem. Adding a call to waitForReadyRead() makes the main process wait until new output is written by the subprocess. The working code is:
...
sub.waitForBytesWritten();
std::cout << "Written " << a + b << " bytes" << std::endl;
// Wait for new output
sub.waitForReadyRead();
...
I still don't have a clue why it works this way. I guess it somehow relates to blocking on getline() vs. blocking on waitForReadyRead(). To me it appears as if getline() blocks everything, including the subprocess, so the scanf call is never processed due to a race condition.
It would be great, if someone who understands could drop an explanation.
Thank you for your help :)
NR
This will not work reliably. You are waiting for the sent bytes to be written, but you are not waiting for the echo; instead you enter getline() and wait for new user input. Keep in mind that two processes are involved here, and each process can be delayed to any degree.
Apart from this, you should consider building your Qt application asynchronously (with an event loop) instead of using the synchronous approach. That way your Qt application can do things in parallel, e.g. reading or waiting for input from the remote process while still not being blocked and still able to accept user input.

Errno 22 after call to fdopen()

I am getting an error when calling fdopen, and it sets errno to 22. I am using exec to start a child process. The child calls fdopen on file descriptor 4. The first child works and sends data back to the parent, and errno is 0. After the parent creates the next child process, fdopen(4, "w") is called again, which is when errno is set to 22.
From what I've read, errno 22 (EINVAL) from fdopen() could mean the mode argument is incorrect. I also read that it could be an error from fcntl, which could mean a bad file descriptor. I specify file descriptor 4, and it works for the first child process. Could that be why errno is being set to 22 when I try to create another FILE*?
I cannot figure out why it works for one child process but not the next. Can anyone shed some light on this for me?
Here is the code:
int main(int argc, char* argv[])
{
    cout << "Child " << argv[argc-1] << " starting" << endl;
    //close(3);
    if (argc < 2) fatal("Not enough arguments provided to ChildMain");
    int id = atoi(argv[argc-1]);
    //Child kid((int) *argv[1]);
    cout << "Error before fdopen(): " << errno << endl;
    FILE* out = fdopen(4, "w");
    if (out == NULL)
    {
        cout << "Child ID: " << id << endl;
        cout << "\tError: " << errno << endl << endl;
        return 1;
    }
    int ret = fprintf(out, "%d", id);
    fflush(out);
    return 0;
}
For the first child process, the file descriptor's number happens to be 4. For the second child process, 4 is already in use in the parent, so the descriptor gets some other number. The child will either have to search for the file descriptor, or the parent will have to communicate it to the child in the environment, on the child's command line, or some other way.