I have a process that forks in order to execute a subprocess, which receives input from stdin and writes to stdout.
My code in short is as follows:
int fd[2];
int fd2[2];
if (pipe(fd) < 0 || pipe(fd2) < 0)
    throw exception;

pid_t p = fork();
if (p == 0) // child
{
    close(fd[0]);  // not needed
    dup2(fd[1], STDOUT_FILENO);
    dup2(fd[1], STDERR_FILENO);
    close(fd2[1]); // not needed
    // what if write calls in the parent process execute first?
    // how do I handle that situation?
    dup2(fd2[0], STDIN_FILENO);
    string cmd = "./childbin";
    if (execl(cmd.c_str(), cmd.c_str(), (char *) NULL) == -1)
    {
        exit(-1);
    }
    exit(-1);
}
else if (p > 0) // parent
{
    close(fd[1]);  // not needed
    close(fd2[0]);
    if (write(fd2[1], command.c_str(), command.size()) < 0)
    {
        throw exception;
    }
    close(fd2[1]);
    // waits for child to finish.
    // child process actually hangs on reading forever from stdin.
    while ((pidret = waitpid(p, &status, WNOHANG)) == 0)
        .......
}
The child process remains waiting forever for data on stdin. Is there maybe a race condition between the child and parent processes? I think that could be the problem, but I am not quite sure, and I am also not sure how to fix it.
Thanks in advance.
Update:
Some useful information.
The parent process is a daemon and this code runs several times per second. It works 97% of the time; in the remaining ~3% of cases, the child process remains stuck in the state described above.
UPDATE 2
After adding validation to the dup2 calls, there is no error there; the following condition is never raised.
if(dup2(...) == -1) {
syslog(...)
}
You're missing a wait; that is why, in 3% of the cases, the parent runs ahead of the child. See the example at the bottom.
Also, you should call close on the file descriptors you don't use before doing anything else.
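For example, a minimal sketch of the parent branch with those changes applied; run_parent_side is a hypothetical helper name, and fd, fd2, p and command are the variables from the question:
#include <cerrno>
#include <stdexcept>
#include <string>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical helper: closes the unused pipe ends first, sends `command` to
// the child's stdin, then blocks until the child exits.
void run_parent_side(int fd[2], int fd2[2], pid_t p, const std::string &command)
{
    close(fd[1]);   // write end of the child's stdout pipe: unused in the parent
    close(fd2[0]);  // read end of the child's stdin pipe: unused in the parent

    if (write(fd2[1], command.c_str(), command.size()) < 0)
        throw std::runtime_error("write to child failed");
    close(fd2[1]);  // delivers EOF to the child's stdin

    // If ./childbin writes a lot, read fd[0] here before waiting so the child
    // cannot block on a full pipe.

    // Block until the child exits instead of polling with WNOHANG; retry if a
    // signal interrupts the wait.
    int status;
    pid_t ret;
    do {
        ret = waitpid(p, &status, 0);
    } while (ret < 0 && errno == EINTR);
}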
Related
I'm trying to create a parent and a child processes that would communicate through a pipe.
I've setup the child to listen to its parent through a pipe, with a read command running in a while loop.
In order to debug my program I print debug messages to the standard output (note that my read command is set to the pipe with a file descriptor different than 0 or 1).
For some reason these debug messages are being received by the read command in my child process. I can't understand why this is happening. What could be causing it? What elegant solution do I have (apart from writing to standard error instead of standard output)?
This code causes an endless loop, because the cout message just triggers another read. Why? Note that the child process exits upon receiving the CHILD_EXIT_CODE value from its parent.
int myPipe[2];
pipe(myPipe);
if(fork() == 0)
{
int readPipe = myPipe[0];
char readBuffer[256]; // declared elsewhere in the original question; shown here so the snippet is self-contained
while(true)
{
ssize_t nBytes = read(readPipe, readBuffer, sizeof(readBuffer));
std::cout << readBuffer << "\n";
int newPosition = atoi(readBuffer);
if(newPosition == CHILD_EXIT_CODE)
{
exit(0);
}
}
}
Edit: Code creating the pipe and fork
I do not know what your parent process is doing (you did not post its code), but from your description it seems that your parent and child processes are sharing the same stdout stream (the child inherits copies of the parent's set of open file descriptors; see man fork).
I guess what you should do is attach the stdout and stderr streams in your parent process to the write ends of your pipes (you need one more pipe for the stderr stream).
This is what I would try if I were in your situation (in my opinion you are missing dup2):
pid_t pid; /*Child or parent PID.*/
int out[2], err[2]; /*Store pipes file descriptors. Write ends attached to the stdout*/
/*and stderr streams.*/
// Init value as error.
out[0] = out[1] = err[0] = err[1] = -1;
/*Creating pipes, they will be attached to the stderr and stdout streams*/
if (pipe(out) < 0 || pipe(err) < 0) {
/* Error: you should log it */
exit (EXIT_FAILURE);
}
if ((pid=fork()) == -1) {
/* Error: you should log it */
exit (EXIT_FAILURE);
}
if (pid != 0) {
/*Parent process*/
/*Attach stderr and stdout streams to your pipes (their write end)*/
if ((dup2(out[1], 1) < 0) || (dup2(err[1], 2) < 0)) {
/* Error: you should log it */
/* The child is going to be an orphan process you should kill it before calling exit.*/
exit (EXIT_FAILURE);
}
/*WHATEVER YOU DO WITH YOUR PARENT PROCESS*/
/* The child is going to be an orphan process you should kill it before calling exit.*/
exit(EXIT_SUCCESS);
}
else {
/*Child process*/
}
You should not forget a couple of things:
wait or waitpid to release the resources associated with the child process when it dies. wait or waitpid must be called from the parent process.
If you use wait or waitpid you might have to think about blocking SIGCHLD before calling fork, and in that case you should unblock SIGCHLD in your child process right after fork, at the beginning of your child process code (a child created via fork(2) inherits a copy of its parent's signal mask; see sigprocmask). A sketch covering these points follows this list.
Something that is forgotten many times: be aware of the EINTR error. dup2, waitpid/wait, read and many others are affected by it.
If your parent process dies before your child process, you should try to kill the child process if you do not want it to become an orphan.
Take a look at _exit. Perhaps you should use it in your child process instead of exit.
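A minimal sketch covering the first points above (blocking SIGCHLD around fork, unblocking it in the child, and retrying waitpid on EINTR); the child body is just a placeholder:
#include <errno.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Block SIGCHLD before fork so the parent cannot lose the signal. */
    sigset_t block, saved;
    sigemptyset(&block);
    sigaddset(&block, SIGCHLD);
    sigprocmask(SIG_BLOCK, &block, &saved);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: restore the inherited mask right after fork. */
        sigprocmask(SIG_SETMASK, &saved, NULL);
        /* ... child code or exec goes here ... */
        _exit(0);
    }

    /* Parent: reap the child, retrying waitpid if interrupted (EINTR). */
    int status;
    pid_t ret;
    do {
        ret = waitpid(pid, &status, 0);
    } while (ret == -1 && errno == EINTR);

    sigprocmask(SIG_SETMASK, &saved, NULL); /* restore the original mask */
    return 0;
}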
Extracted from Unix Network Programming, Vol. 1, Third Edition, Section 5.10, "wait and waitpid Functions":
#include "unp.h"
void
sig_chld(int signo)
{
pid_t pid;
int stat;
while ( (pid = waitpid(-1, &stat, WNOHANG)) > 0) {
printf("child %d terminated\n", pid);
}
return;
}
...
// in server code
Signal(SIGCHLD, sig_chld); // used to prevent any zombies from being left around
...
// in client code
// The client establishes five connections with the server and then immediately exits
...
Reference waitpid:
Return Value
waitpid(): on success, returns the process ID of the child whose state
has changed; if WNOHANG was specified and one or more child(ren)
specified by pid exist, but have not yet changed state, then 0 is
returned. On error, -1 is returned.
Based on the above documentation, waitpid will return 0 if no child process has terminated at that moment. If I understood correctly, this will cause the function sig_chld to break out of the while loop.
Question: How, then, can we guarantee that this signal handler collects all terminated child processes?
while ( (pid = waitpid(-1, &stat, WNOHANG)) > 0) {
printf("child %d terminated\n", pid);
You wouldn't be in the signal handler if you didn't have a child to handle. The loop is there because, while you are in the handler itself, a second or third child could have changed state or terminated, sending SIGCHLDs that would not be queued. Thus the loop actually prevents you from missing those possible dead children. waitpid will return 0, or error out with -1 (ECHILD), when there are no more children to be reaped at the moment.
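For reference, Signal() in the excerpt is the book's wrapper around sigaction(); with plain POSIX calls the registration might look roughly like this (install_sigchld_handler is just an illustrative name, and printf mirrors the book's example even though it is not async-signal-safe):
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>

static void sig_chld(int signo)
{
    pid_t pid;
    int stat;
    (void)signo;
    while ((pid = waitpid(-1, &stat, WNOHANG)) > 0)
        printf("child %d terminated\n", (int)pid);
}

int install_sigchld_handler(void)
{
    struct sigaction sa;
    sa.sa_handler = sig_chld;  /* reap children as they terminate */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;  /* restart interrupted slow system calls */
    return sigaction(SIGCHLD, &sa, NULL);
}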
I have the following code that fork()s two children from a common parent and implements a pipeline between them. When I call wait() in the parent only once, the program runs perfectly. However, if I call wait() twice (to reap both children), the program does nothing and must be force-exited.
Can someone tell me why I can't wait for both children here?
int main()
{
int status;
int pipeline[2];
pipe(pipeline);
pid_t pid_A, pid_B;
if( !(pid_A = fork()) )
{
dup2(pipeline[1], 1);
close(pipeline[0]);
close(pipeline[1]);
execl("/bin/ls", "ls", 0);
}
if( !(pid_B = fork()) )
{
dup2(pipeline[0], 0);
close(pipeline[0]);
close(pipeline[1]);
execl("/usr/bin/wc", "wc", 0);
}
wait(&status);
wait(&status);
}
You need to close both ends of the pipe in the parent after you fork the children. The problem is that the parent still holds the write end of the pipe open, so wc never sees end-of-file and keeps waiting for input. The first wait cleans up ls, but the second waits for wc, which is blocked on a pipe that will never be closed.
Process B (wc) does not terminate until it receives end-of-file on its input stream. The other end of the pipe is shared as both the output stream of process A, and as pipeline[1] in the parent process, so you will need to close(pipeline[1]) in the parent process before waiting for process B.
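Putting both answers together, a minimal corrected sketch of the program from the question might look like this:
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    int status;
    int pipeline[2];
    pipe(pipeline);

    if (!fork())                  // child A: ls writes into the pipe
    {
        dup2(pipeline[1], 1);
        close(pipeline[0]);
        close(pipeline[1]);
        execl("/bin/ls", "ls", (char *)NULL);
        _exit(1);
    }
    if (!fork())                  // child B: wc reads from the pipe
    {
        dup2(pipeline[0], 0);
        close(pipeline[0]);
        close(pipeline[1]);
        execl("/usr/bin/wc", "wc", (char *)NULL);
        _exit(1);
    }

    close(pipeline[0]);           // parent keeps neither end open,
    close(pipeline[1]);           // so wc sees EOF once ls exits

    wait(&status);                // reaps one child (e.g. ls)
    wait(&status);                // reaps the other (wc) once it has read EOF
    return 0;
}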
I have created roughly the following code to call a child process:
// pipe meanings
const int READ = 0;
const int WRITE = 1;
int fd[2];
// Create pipes
if (pipe(fd))
{
throw ...
}
p_pid = fork();
if (p_pid == 0) // in the child
{
close(fd[READ]);
if (dup2(fd[WRITE], fileno(stdout)) == -1)
{
throw ...
}
close(fd[WRITE]);
// Call exec
execv(argv[0], const_cast<char*const*>(&argv[0]));
_exit(-1);
}
else if (p_pid < 0) // fork has failed
{
throw ...
}
else // in the parent
{
close(fd[WRITE]);
// note: constructing an ifstream from a file descriptor relies on a non-standard extension
p_stdout = new std::ifstream(fd[READ]);
}
Now, if the subprocess does not write too much to stdout, I can wait for it to finish and then read the stdout from p_stdout. If it writes too much, the write blocks and the parent waits for it forever.
To fix this, I tried to wait with WNOHANG in the parent and, if it is not finished, read all available output from p_stdout using readsome, sleep a bit, and try again. Unfortunately, readsome never reads anything:
while (true)
{
if (waitid(P_PID, p_pid, &info, WEXITED | WNOHANG) != 0)
throw ...;
else if (info.si_pid != 0) // waiting has succeeded
break;
char tmp[1024];
size_t sizeRead;
sizeRead = p_stdout->readsome(tmp, 1024);
if (sizeRead > 0)
s_stdout.write(tmp, sizeRead);
sleep(1);
}
The question is: Why does this not work and how can I fix it?
edit: If there were only one child, simply using read instead of readsome would probably work, but the process has multiple children and needs to react as soon as one of them terminates.
As sarnold suggested, you need to change the order of your calls: read first, wait last. Even if your method worked, you might miss the last read; that is, you would exit the loop before you read the last set of bytes that was written.
The problem might be that the ifstream is non-blocking. I've never liked iostreams; even in my C++ projects I always preferred the simplicity of C's stdio functions (FILE*, fprintf, etc.). One way to get around this is to check whether the descriptor is readable. You can use select to determine whether there is data waiting on that pipe. You are going to need select anyway if you are going to read from multiple children, so you might as well learn it now.
As for a quick isreadable function, try something like this (please note I haven't tried compiling this):
bool isreadable(int fd, int timeoutSecs)
{
    struct timeval tv = { timeoutSecs, 0 };
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(fd, &readSet);  // watch this descriptor for readability
    return select(fd + 1, &readSet, NULL, NULL, &tv) == 1;
}
Then in your parent code, do something like:
while (true) {
if (isreadable(fd[READ], 1)) {
// read fd[READ];
if (bytes <= 0)
break;
}
}
int status;
waitpid(pid, &status, 0); // wait for this specific child; wait() takes a status pointer, not a pid
I'd suggest re-writing the code so that it doesn't call waitpid(2) until after read(2) calls on the pipe return 0 to signify end-of-file. Once you get the end-of-file return from your read calls, you know the child is dead, and you can finally waitpid(2) for it.
Another option is to de-couple the reading from the reaping even further and perform the wait calls in a SIGCHLD signal handler asynchronously to the reading operations.
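A rough sketch of the first suggestion, assuming you read the pipe's raw descriptor (fd[READ] in the question) rather than going through the ifstream, and that there is a single child; drain_then_reap is a hypothetical helper name:
#include <cerrno>
#include <ostream>
#include <sys/wait.h>
#include <unistd.h>

// Drain the child's output until EOF, then reap it; only after read() has
// returned 0 can waitpid() be called without risking an endless wait.
void drain_then_reap(int readFd, std::ostream &out, pid_t childPid)
{
    char buf[1024];
    for (;;)
    {
        ssize_t n = read(readFd, buf, sizeof(buf));
        if (n > 0)
            out.write(buf, n);   // forward the child's output
        else if (n == 0)
            break;               // EOF: all write ends of the pipe are closed
        else if (errno != EINTR)
            break;               // real error; on EINTR just retry
    }
    close(readFd);

    int status;
    while (waitpid(childPid, &status, 0) == -1 && errno == EINTR)
        ;                        // retry the wait if a signal interrupts it
}
With several children you would multiplex the descriptors with select/poll instead, and still defer each waitpid until that child's descriptor reaches EOF.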
I'm trying to execute an external program from inside my Linux C++ program.
I'm calling system("gedit") to launch an instance of the Gedit editor. However, my problem is that while the Gedit window is open, my C++ program waits for it to exit.
How can I call an external program without waiting for it to exit?
You will need to use fork and exec:
int fork_rv = fork();
if (fork_rv == 0)
{
// we're in the child
execl("/path/to/gedit", "gedit", 0);
// in case execl fails
_exit(1);
}
else if (fork_rv == -1)
{
// error could not fork
}
You will also need to reap your child so as not to leave a zombie process.
void reap_child(int sig)
{
int status;
waitpid(-1, &status, WNOHANG);
}
int main()
{
signal(SIGCHLD, reap_child);
...
}
In regards to zombie processes, you have a second option. It uses a bit more resources (this flavor forks twice), but the benefit is that you can keep your wait closer to your fork, which is nicer in terms of maintenance.
int fork_rv = fork();
if (fork_rv == 0)
{
fork_rv = fork();
if (fork_rv == 0)
{
// we're in the child
execl("/path/to/gedit", "gedit", 0);
// if execl fails
_exit(1);
}
else if (fork_rv == -1)
{
// fork fails
_exit(2);
}
_exit(0);
}
else if (fork_rv != -1)
{
// parent wait for the child (which will exit quickly)
int status;
waitpid(fork_rv, &status, 0);
}
else if (fork_rv == -1)
{
// error could not fork
}
What this last flavor does is create a child, which in turn creates a grandchild, and the grandchild is what execs your gedit program. The child itself exits immediately and the parent process can reap it right away. So there is an extra fork, but you keep all the code in one place.
Oh, let me say it!
http://en.wikipedia.org/wiki/Fork-exec
Fork! :)
First, did you try launching it in the background with system("gedit &")?
If that does not work, try spawning a new thread and running gedit from there; a sketch follows below.
I presume that you are not concerned with the result of the edit, or the contents of the edited file?
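For the thread option, a minimal sketch assuming C++11's std::thread is available; system() reaps the editor internally, so no zombie is left behind, and with detach() its exit status is simply discarded:
#include <cstdlib>
#include <thread>

int main()
{
    // Run the blocking system() call in its own thread so the main
    // program does not wait for the editor window to close.
    std::thread editor([] { std::system("gedit"); });
    editor.detach();

    // ... the rest of the program continues immediately ...
    return 0;
}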