C/C++ Linux fork() and exec()

I'm using fork() to create a child process. From the child process I use exec() to launch a new process. My code is below:
......
pid = fork();
if (pid > 0) {
    WriteLog("Parent Process");
    // Do something
} else if (pid == 0) {
    WriteLog("Child process");
    int ret = execl(ShellScript, ShellScript, (char *)NULL);
    if (ret == -1)
        WriteLog("Launch process fail");
} else {
    WriteLog("Can't create child process");
}
......
Note: the WriteLog function opens the file, writes the log entry, and closes the file (so it is flushed).
ShellScript launches a new C/C++ process.
I run my program for a long time and the code above is called many times. Sometimes (rarely) a problem happens: the new process cannot be launched successfully even though the child process is created successfully (I have checked this carefully). What is extremely confusing when this problem happens is that the "Child process" log is not printed even though the child process was created successfully.
In the normal case (no error) the "Child process" and "Parent process" logs are printed the same number of times.
In the abnormal case they are not, even though the child process is always created successfully. The "Launch process fail" and "Can't create child process" logs are not printed in this case.
Please advise.

Remember that stdio(3) is buffered. Always call fflush(NULL); (see fflush(3) for more) before fork. Add a \n (newline) at the end of every printf(3) format string (or else follow them with fflush(NULL); ...).
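To see why a line like "Child process" can vanish, here is a minimal sketch (not the asker's program) showing an unflushed stdio buffer being discarded by a successful exec; the same thing can happen whenever logging goes through buffered stdio without a flush:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // Run as:  ./a.out > out.txt
    // With stdout redirected to a file it is fully buffered, so the text below
    // sits in the stdio buffer; a successful execl() replaces the process image
    // and that buffer is silently discarded.
    printf("Child process");                    // no newline, no fflush
    execl("/bin/true", "true", (char *)NULL);   // on success, the buffer is lost
    perror("execl");                            // only reached if execl fails
    return 1;
}
Calling fflush(NULL) right before fork() and before exec*() avoids both this lost-line case and the duplicated-buffer variant of the problem.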
The function execl(3) (perhaps you want execlp?) can fail (and sets errno on failure).
} else if (pid == 0) {
    printf("Child process\n");
    fflush(NULL);
    execl("/bin/foo", "foo", "arg1", NULL);
    // if we are here, execl has failed
    perror("Launch process fail");
}
On error, fork(2) fails by returning -1 and sets errno(3) (see also perror(3) and strerror(3)). So your last else should be
} else {
    perror("Can't create child process");
    fflush(NULL);
}
You might want to use strace(1) (notably as strace -f yourprog ...) to understand the syscalls involved (see syscalls(2) ...).
Your WriteLog should probably use strerror (on the errno value saved at the beginning of WriteLog ...). I suggest something like
void WriteLog(const char* msg) {
    int e = errno;
    if (e)
        syslog(LOG_ERR, "%s [%s]", msg, strerror(e));
    else
        syslog(LOG_ERR, "%s", msg);
}
See syslog(3).
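If you switch WriteLog to syslog as sketched above, it is conventional (not required) to call openlog(3) once at startup so entries carry a fixed identifier and the PID; a minimal example (the ident "myprog" is just a placeholder):
#include <syslog.h>

int main(void) {
    openlog("myprog", LOG_PID | LOG_CONS, LOG_USER);  // "myprog" is a placeholder ident
    syslog(LOG_INFO, "starting up");
    /* ... fork/exec work ... */
    closelog();
    return 0;
}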
There are limits on the number of fork-ed processes, see setrlimit(2) with RLIMIT_NPROC and the bash ulimit builtin.
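If that limit is the culprit, fork() itself fails with EAGAIN rather than the child silently misbehaving, so it is worth logging errno on fork failure; a small sketch for inspecting the limit with getrlimit(2):
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        puts("soft RLIMIT_NPROC: unlimited");
    else
        printf("soft RLIMIT_NPROC: %llu\n", (unsigned long long)rl.rlim_cur);
    // Once this many processes/threads exist for the user, fork() returns -1
    // with errno set to EAGAIN.
    return 0;
}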
Read also Advanced Linux Programming.

Related

Linux - child reading from pipe receives debug messages sent to standard output

I'm trying to create a parent and a child process that communicate through a pipe.
I've set up the child to listen to its parent through a pipe, with a read call running in a while loop.
In order to debug my program I print debug messages to standard output (note that my read call uses the pipe's file descriptor, which is different from 0 or 1).
For some reason these debug messages are being received by the read call in my child process. I can't understand why this is happening. What could be causing this? What elegant solution do I have to solve it (apart from writing to standard error instead of standard output)?
This code causes an endless loop because of the cout message that just triggers another read. Why? Notice that the child process exits upon receiving a CHILD_EXIT_CODE value from the parent.
int myPipe[2];
pipe(myPipe);
if (fork() == 0)
{
    int readPipe = myPipe[0];
    char readBuffer[128] = {0};
    while (true)
    {
        ssize_t nBytes = read(readPipe, readBuffer, sizeof(readBuffer));
        std::cout << readBuffer << "\n";
        int newPosition = atoi(readBuffer);
        if (newPosition == CHILD_EXIT_CODE)
        {
            exit(0);
        }
    }
}
Edit: Code creating the pipe and fork
I do not know what your parent process is doing (you did not post that code), but from your description it seems like your parent and child processes are sharing the same stdout stream (the child inherits copies of the parent's set of open file descriptors; see man fork).
I guess what you should do is attach the stdout and stderr streams in your parent process to the write side of your pipes (you need one more pipe for the stderr stream).
This is what I would try if I were in your situation (in my opinion you are missing dup2):
pid_t pid;          /*Child or parent PID.*/
int out[2], err[2]; /*Store pipes file descriptors. Write ends attached to the stdout*/
                    /*and stderr streams.*/
// Init value as error.
out[0] = out[1] = err[0] = err[1] = -1;
/*Creating pipes, they will be attached to the stderr and stdout streams*/
if (pipe(out) < 0 || pipe(err) < 0) {
    /* Error: you should log it */
    exit(EXIT_FAILURE);
}
if ((pid = fork()) == -1) {
    /* Error: you should log it */
    exit(EXIT_FAILURE);
}
if (pid != 0) {
    /*Parent process*/
    /*Attach stderr and stdout streams to your pipes (their write end)*/
    if ((dup2(out[1], 1) < 0) || (dup2(err[1], 2) < 0)) {
        /* Error: you should log it */
        /* The child is going to be an orphan process; you should kill it before calling exit.*/
        exit(EXIT_FAILURE);
    }
    /*WHATEVER YOU DO WITH YOUR PARENT PROCESS*/
    /* The child is going to be an orphan process; you should kill it before calling exit.*/
    exit(EXIT_SUCCESS);
}
else {
    /*Child process*/
}
You should not forget a couple of things:
wait or waitpid to release the resources associated with the child process when it dies; wait or waitpid must be called from the parent process.
If you use wait or waitpid, you might have to think about blocking SIGCHLD before calling fork, and in that case you should unblock SIGCHLD in the child right after fork, at the beginning of the child's code (a child created via fork(2) inherits a copy of its parent's signal mask; see sigprocmask). A sketch of this pattern follows this list.
Something that is often forgotten: be aware of the EINTR error. dup2, waitpid/wait, read and many others are affected by it.
If your parent process dies before your child process, you should try to kill the child process if you do not want it to become an orphan.
Take a look at _exit. Perhaps you should use it in your child process instead of exit.
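Regarding the SIGCHLD point above, here is a minimal sketch (not tied to the asker's code) of blocking SIGCHLD around fork and restoring the mask in the child:
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    sigset_t block, oldmask;
    sigemptyset(&block);
    sigaddset(&block, SIGCHLD);

    /* Block SIGCHLD so it cannot arrive between fork() and the moment the
       parent has finished setting up its handling of the child. */
    if (sigprocmask(SIG_BLOCK, &block, &oldmask) < 0) {
        perror("sigprocmask");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: the mask is inherited, so restore it right away. */
        sigprocmask(SIG_SETMASK, &oldmask, NULL);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }

    /* Parent: install its SIGCHLD handling here, then unblock. */
    sigprocmask(SIG_SETMASK, &oldmask, NULL);
    return 0;
}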

waitpid/wexitstatus returning 0 instead of correct return code

I have the helper function below, used to execute a command and get the return value on posix systems. I used to use popen, but it is impossible to get the return code of an application with popen if it runs and exits before popen/pclose gets a chance to do its work.
The following helper function creates a process fork, uses execvp to run the desired external process, and then the parent uses waitpid to get the return code. I'm seeing odd cases where it's refusing to run.
When called with wait = true, waitpid should return the exit code of the application no matter what. However, I'm seeing stdout output that indicates the return code should be non-zero, yet the return code is zero. Testing the external process in a regular shell, then echoing $?, returns non-zero, so it's not a problem with the external process not returning the right code. If it's of any help, the external process being run is mount(8) (yes, I know I can use mount(2), but that's beside the point).
I apologize in advance for a code dump. Most of it is debugging/logging:
inline int ForkAndRun(const std::string &command, const std::vector<std::string> &args, bool wait = false, std::string *output = NULL)
{
    std::string debug;
    std::vector<char*> argv;
    for (size_t i = 0; i < args.size(); ++i)
    {
        argv.push_back(const_cast<char*>(args[i].c_str()));
        debug += "\"";
        debug += args[i];
        debug += "\" ";
    }
    argv.push_back((char*)NULL);
    neosmart::logger.Debug("Executing %s", debug.c_str());
    int pipefd[2];
    if (pipe(pipefd) != 0)
    {
        neosmart::logger.Error("Failed to create pipe descriptor when trying to launch %s", debug.c_str());
        return EXIT_FAILURE;
    }
    pid_t pid = fork();
    if (pid == 0)
    {
        close(pipefd[STDIN_FILENO]); //child isn't going to be reading
        dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
        close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd
        dup2(pipefd[STDOUT_FILENO], STDERR_FILENO);
        if (execvp(command.c_str(), &argv[0]) != 0)
        {
            exit(EXIT_FAILURE);
        }
        return 0;
    }
    else if (pid < 0)
    {
        neosmart::logger.Error("Failed to fork when trying to launch %s", debug.c_str());
        return EXIT_FAILURE;
    }
    else
    {
        close(pipefd[STDOUT_FILENO]);
        int exitCode = 0;
        if (wait)
        {
            waitpid(pid, &exitCode, wait ? __WALL : (WNOHANG | WUNTRACED));
            std::string result;
            char buffer[128];
            ssize_t bytesRead;
            while ((bytesRead = read(pipefd[STDIN_FILENO], buffer, sizeof(buffer)-1)) != 0)
            {
                buffer[bytesRead] = '\0';
                result += buffer;
            }
            if (wait)
            {
                if ((WIFEXITED(exitCode)) == 0)
                {
                    neosmart::logger.Error("Failed to run command %s", debug.c_str());
                    neosmart::logger.Info("Output:\n%s", result.c_str());
                }
                else
                {
                    neosmart::logger.Debug("Output:\n%s", result.c_str());
                    exitCode = WEXITSTATUS(exitCode);
                    if (exitCode != 0)
                    {
                        neosmart::logger.Info("Return code %d", (exitCode));
                    }
                }
            }
            if (output)
            {
                result.swap(*output);
            }
        }
        close(pipefd[STDIN_FILENO]);
        return exitCode;
    }
}
Note that the command is run OK with the correct parameters, the function proceeds without any problems, and WIFEXITED returns TRUE. However, WEXITSTATUS returns 0, when it should be returning something else.
Probably isn't your main issue, but I think I see a small problem. In your child process, you have...
dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd
dup2(pipefd[STDOUT_FILENO], STDERR_FILENO); //but wait, this pipe is closed!
But I think what you want is:
dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
dup2(pipefd[STDOUT_FILENO], STDERR_FILENO);
close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd for both, can close
I don't have much experience with forks and pipes in Linux, but I did write a similar function pretty recently. You can take a look at the code to compare, if you'd like. I know that my function works.
execAndRedirect.cpp
I'm using the mongoose library, and grepping my code for SIGCHLD revealed that using mg_start from mongoose results in setting SIGCHLD to SIG_IGN.
From the waitpid man page, on Linux a SIGCHLD set to SIG_IGN will not create a zombie process, so waitpid will fail if the process has already successfully run and exited - but will run OK if it hasn't yet. This was the cause of the sporadic failure of my code.
Simply re-setting SIGCHLD after calling mg_start to a void function that does absolutely nothing was enough to keep the zombie records from being immediately erased.
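A sketch of that reset (my wording, not the original code): install a handler whose body does nothing, which is enough to undo the SIG_IGN "auto-reap" behaviour so waitpid() can see the exit status again; call it right after mg_start().
#include <signal.h>

static void sigchld_noop(int signo) {
    (void)signo;   /* deliberately empty: merely having a handler (instead of
                      SIG_IGN) makes the kernel keep the zombie for waitpid() */
}

static void restore_sigchld(void) {
    struct sigaction sa;
    sa.sa_handler = sigchld_noop;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;   /* avoid spurious EINTR in blocking calls */
    sigaction(SIGCHLD, &sa, NULL);
}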
Per @Geoff_Montee's advice, there was a bug in my redirect of STDERR, but this was not responsible for the problem, as execvp does not store the return value in STDERR or even STDOUT, but rather in the kernel object associated with the parent process (the zombie record).
@jilles' warning about non-contiguity of vector in C++ does not apply to C++03 and up (it is only a concern for C++98, though in practice most C++98 compilers used contiguous storage anyway) and was not related to this issue. However, the advice on reading from the pipe before blocking and checking the output of waitpid is spot-on.
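For reference, a hedged sketch of that ordering (the names DrainAndReap, fd and pid are placeholders, not from the original code): drain the pipe to EOF first so a child that fills the pipe buffer cannot deadlock against a parent blocked in waitpid, then reap and check waitpid's return value before trusting the status.
#include <string>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// fd: the parent's read end of the pipe; pid: the child's PID.
int DrainAndReap(int fd, pid_t pid, std::string *output) {
    char buffer[128];
    ssize_t n;
    while ((n = read(fd, buffer, sizeof(buffer) - 1)) > 0) {
        buffer[n] = '\0';
        if (output) *output += buffer;
    }

    int status = 0;
    if (waitpid(pid, &status, 0) != pid)
        return -1;                   // waitpid failed (e.g. ECHILD when SIGCHLD is SIG_IGN)
    if (WIFEXITED(status))
        return WEXITSTATUS(status);  // the command's real exit code
    return -1;                       // killed by a signal, stopped, etc.
}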
I've found that pclose does NOT block and wait for the process to end, contrary to the documentation (this is on CentOS 6). I've found that I need to call pclose and then call waitpid(pid,&status,0); to get the true return value.

Catch stderr and stdout from external program in C++

I am trying to write a program that runs an external program.
I know that I can catch stdout, and I can catch stdout and stderr together, BUT the question is: can I catch stderr and stdout separately?
I mean, for example, stderr in a variable STDERR and stdout in a variable STDOUT; I want them separated.
Also I need the exit code of the external program in a variable.
On Windows you must fill in STARTUPINFO for CreateProcess to catch the standard streams, and you can use the GetExitCodeProcess function to get the termination status. There is an example of how to redirect the standard streams into the parent process: http://msdn.microsoft.com/en-us/library/windows/desktop/ms682499.aspx
On a Linux-like OS you probably want to use fork (rather than execve alone), and working with a forked process is another story.
On both Windows and Linux, redirecting streams follows the same general approach: you create several pipes (one for each stream), redirect the child process's streams into those pipes, and then the parent process reads data from those pipes.
Sample code for Linux:
int fd[2];
if (pipe(fd) == -1) {
    perror("pipe");
    exit(EXIT_FAILURE);
}
pid_t cpid = fork();
if (cpid == -1) {
    perror("fork");
    exit(EXIT_FAILURE);
}
if (cpid == 0) { // child
    close(fd[0]);                 // close the unused read end
    dup2(fd[1], STDERR_FILENO);
    fprintf(stderr, "Hello, World!\n");
    exit(EXIT_SUCCESS);
} else { // parent
    close(fd[1]);                 // close the write end so read() can see EOF
    char ch;
    while (read(fd[0], &ch, 1) > 0)
        printf("%c", ch);
    exit(EXIT_SUCCESS);
}
EDIT: If you need to catch streams from another program, use the same strategy as above: first fork, second use pipes (as in the code above), then execve the other program in the child process, and use this code in the parent process to wait for execution to end and catch the return code:
int status;
if (waitpid(cpid, &status, 0) < 0) {
perror("waitpid");
exit(EXIT_FAILURE);
}
You can find more details in man pages pipe, dup2 and waitpid.
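Since the question asks for stdout and stderr separately, here is a hedged sketch with two pipes (one per stream); the command "ls / /no/such/dir" is just an example that writes to both streams and exits non-zero. It reads the pipes one after the other, which is fine for small outputs; large outputs would need poll(2)/select(2) so the child never blocks on a full pipe.
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int outfd[2], errfd[2];
    if (pipe(outfd) == -1 || pipe(errfd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                              /* child */
        dup2(outfd[1], STDOUT_FILENO);
        dup2(errfd[1], STDERR_FILENO);
        close(outfd[0]); close(outfd[1]);
        close(errfd[0]); close(errfd[1]);
        execlp("ls", "ls", "/", "/no/such/dir", (char *)NULL);
        _exit(127);                              /* exec failed */
    }

    /* parent: close unused write ends, then drain both pipes */
    close(outfd[1]); close(errfd[1]);
    char buf[256];
    ssize_t n;
    printf("--- captured stdout ---\n");
    while ((n = read(outfd[0], buf, sizeof(buf))) > 0) fwrite(buf, 1, n, stdout);
    printf("--- captured stderr ---\n");
    while ((n = read(errfd[0], buf, sizeof(buf))) > 0) fwrite(buf, 1, n, stdout);

    int status;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("exit code: %d\n", WEXITSTATUS(status));
    return 0;
}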

Zombie process and fork

I have code like this:
c = fork();
if (c == 0) {
    close(fd[READ]);
    if (dup2(fd[WRITE], STDOUT_FILENO) != -1)
        execlp("ssh", "ssh", host, "ls", NULL);
    _exit(1);
}
close(fd[WRITE]);
fd[READ] and fd[WRITE] are pipe file descriptors.
When I run it continuously, there are a lot of zombie processes when I use ps ax. How do I rectify this? Is it because I am not using the parent to wait for the exit status of the child process?
If you have no intention of waiting for your child processes, set the SIGCHLD handler to SIG_IGN to have the kernel automatically reap your children, e.g.
signal(SIGCHLD, SIG_IGN);
Yes, the parent must wait for the child's return status. You can do it asynchronously by catching SIGCHLD in the parent process and then calling waitpid in the signal handler.
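A minimal sketch of that asynchronous approach (my example, not from the original answer): the handler reaps every terminated child with WNOHANG so one signal delivery can cover several exits, and it only uses async-signal-safe calls.
#include <errno.h>
#include <signal.h>
#include <stddef.h>
#include <sys/wait.h>

static void reap_children(int signo) {
    (void)signo;
    int saved_errno = errno;                 /* waitpid may clobber errno */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;                                    /* reap every exited child */
    errno = saved_errno;
}

static int install_sigchld_reaper(void) {
    struct sigaction sa;
    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
    return sigaction(SIGCHLD, &sa, NULL);    /* 0 on success, -1 on error */
}
If you need each child's exit status rather than just reaping, record it from the handler (for example via a self-pipe) instead of calling non-async-signal-safe code there.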
Yes, waitpid() should be called from the parent. waitpid() will clean up any child process of the parent that is currently in the terminated state.
You can add the code below to your program:
int status;
pid_t ret;
if (c > 0)
{
    while (1) {
        ret = waitpid(-1, &status, 0);
        if (ret > 0) {
            if (WIFEXITED(status)) {
                if (WEXITSTATUS(status) == 0) {
                    printf("child process terminated normally and successfully\n");
                }
                else {
                    printf("child process terminated normally and unsuccessfully\n");
                }
            }
            else {
                printf("child process terminated abnormally and unsuccessfully\n");
            }
        }
        if (ret < 0) {
            break;
        }
    }
}
FYI: more on waitpid.
The first parameter is set to -1 so that waitpid() will clean up any child process of this parent that is currently in the terminated state. It can also be positive, in which case waitpid() cleans up only that specific child process. The most common use is -1; also refer to the manual page of waitpid().
The second parameter is used to extract the termination/exit status code of the child process; waitpid() fills in the status value when the call is made.
The last parameter is the flags field. In most cases it is set to 0, meaning the default behaviour of the system call; if you really need flags (such as WNOHANG), refer to the manual page of waitpid().
Note:
In the code you submitted, _exit(1) is reached when execlp() fails (the exec family of functions only returns if an error has occurred) and also when dup2() fails. You can make the execlp() failure explicit by checking its return value and calling _exit() in that case.
The modified code could look like this:
c = fork();
if (c == 0) {
    close(fd[READ]);
    int ret_execlp = -1;
    if (dup2(fd[WRITE], STDOUT_FILENO) != -1)
        ret_execlp = execlp("ssh", "ssh", host, "ls", NULL);
    if (ret_execlp == -1) {
        printf("execlp failed");
        _exit(1);
    }
}
close(fd[WRITE]);
I appreciate the above two answers and hope this one adds more clarity. Thank you.

How to handle execvp(...) errors after fork()?

I do the regular thing:
fork()
execvp(cmd, ...) in the child
If execvp fails because cmd is not found, how can I notice this error in the parent process?
The well-known self-pipe trick can be adapted for this purpose.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <sysexits.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int pipefds[2];
    int count, err;
    pid_t child;

    if (pipe(pipefds)) {
        perror("pipe");
        return EX_OSERR;
    }
    if (fcntl(pipefds[1], F_SETFD, fcntl(pipefds[1], F_GETFD) | FD_CLOEXEC)) {
        perror("fcntl");
        return EX_OSERR;
    }

    switch (child = fork()) {
    case -1:
        perror("fork");
        return EX_OSERR;
    case 0:
        close(pipefds[0]);
        execvp(argv[1], argv + 1);
        write(pipefds[1], &errno, sizeof(int));
        _exit(0);
    default:
        close(pipefds[1]);
        while ((count = read(pipefds[0], &err, sizeof(errno))) == -1)
            if (errno != EAGAIN && errno != EINTR) break;
        if (count) {
            fprintf(stderr, "child's execvp: %s\n", strerror(err));
            return EX_UNAVAILABLE;
        }
        close(pipefds[0]);
        puts("waiting for child...");
        while (waitpid(child, &err, 0) == -1)
            if (errno != EINTR) {
                perror("waitpid");
                return EX_SOFTWARE;
            }
        if (WIFEXITED(err))
            printf("child exited with %d\n", WEXITSTATUS(err));
        else if (WIFSIGNALED(err))
            printf("child killed by %d\n", WTERMSIG(err));
    }
    return err;
}
The above is a complete program; here it is in action:
$ ./a.out foo
child's execvp: No such file or directory
$ (sleep 1 && killall -QUIT sleep &); ./a.out sleep 60
waiting for child...
child killed by 3
$ ./a.out true
waiting for child...
child exited with 0
How this works:
Create a pipe, and make the write endpoint CLOEXEC: it auto-closes when an exec is successfully performed.
In the child, try to exec. If it succeeds, we no longer have control, but the pipe is closed. If it fails, write the failure code to the pipe and exit.
In the parent, try to read from the other end of the pipe. If read returns zero, the pipe was closed and the child must have exec'd successfully. If read returns data, it is the failure code that the child wrote.
You terminate the child (by calling _exit()) and then the parent can notice this (through e.g. waitpid()). For instance, your child could exit with an exit status of -1 to indicate failure to exec. One caveat with this is that it is impossible to tell from your parent whether the child in its original state (i.e. before exec) returned -1 or if it was the newly executed process.
As suggested in the comments below, using an "unusual" return code would be appropriate to make it easier to distinguish between your specific error and one from the exec()'ed program. Common ones are 1, 2, 3 etc. while higher numbers 99, 100, etc. are more unusual. You should keep your numbers below 255 (unsigned) or 127 (signed) to increase portability.
Since waitpid blocks your application (or rather, the thread calling it) you will either need to put it on a background thread or use the signalling mechanism in POSIX to get information about child process termination. See the SIGCHLD signal and the sigaction function to hook up a listener.
You could also do some error checking before forking, such as making sure the executable exists.
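For example (a sketch with a placeholder path; note that this kind of check is only a hint, since the file can change between the check and the exec, so the exec's own error handling is still needed):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *cmd = "/bin/ls";             /* placeholder full path */
    /* access(2) only checks a concrete path; a bare name like "ls" would
       additionally need a PATH search, the way execvp does it. */
    if (access(cmd, X_OK) != 0) {
        perror(cmd);
        return 1;
    }
    puts("looks executable; proceed with fork()/execvp()");
    return 0;
}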
If you use something like Glib, there are utility functions to do this, and they come with pretty good error reporting. Take a look at the "spawning processes" section of the manual.
1) Use _exit() not exit() - see http://opengroup.org/onlinepubs/007908775/xsh/vfork.html - NB: applies to fork() as well as vfork().
2) The problem with doing more complicated IPC than the exit status, is that you have a shared memory map, and it's possible to get some nasty state if you do anything too complicated - e.g. in multithreaded code, one of the killed threads (in the child) could have been holding a lock.
Not only should you wonder how you can notice the error in the parent process; you should also keep in mind that you must notice it. That is especially true for multithreaded applications.
After execvp you must place a call to a function that terminates the process in any case. You should not call any complex functions that interact with the C library (such as stdio), since their effects may interfere with the pthreads or libc machinery of the parent process. So you cannot print a message with printf() in the child process; you have to inform the parent about the error instead.
The easiest way, among others, is to pass a return code. Supply a nonzero argument to the _exit() function (see the note below) used to terminate the child, and then examine the return code in the parent. Here's an example:
int pid, stat;
pid = fork();
if (pid == 0) {
    // Child process
    execvp(cmd, args);      // args: NULL-terminated argument vector for cmd
    if (errno == ENOENT)
        _exit(-1);          // seen by the parent as exit status 255
    _exit(-2);              // seen by the parent as exit status 254
}
wait(&stat);
if (WIFEXITED(stat) && WEXITSTATUS(stat) >= 254) { // exec failed in the child
    ...
}
You might think of using exit() instead of _exit(), but that would be incorrect: exit() performs part of the C-library cleanup that should happen only when the parent process terminates. _exit() does not do that cleanup.
Well, you could use the wait/waitpid functions in the parent process. You can specify a status variable that holds info about the status of the process that terminated. The downside is that the parent process is blocked until the child process finishes execution.
Any time exec fails in a subprocess, you should use kill(getpid(), SIGKILL), and the parent should always have a signal handler for SIGCHLD and tell the user of the program, in an appropriate way, that the process was not started successfully.