Catch stderr and stdout from external program in C++

I am trying to write a program that runs an external program.
I know that I can capture stdout, and I can capture stdout and stderr together, but the question is: can I capture stderr and stdout separately? For example, stderr in a variable STDERR and stdout in a variable STDOUT.
I also need the exit code of the external program in a variable.

On Windows you must fill in STARTUPINFO for CreateProcess to capture the standard streams, and you can use the GetExitCodeProcess function to get the termination status. There is an example of how to redirect the standard streams into the parent process at http://msdn.microsoft.com/en-us/library/windows/desktop/ms682499.aspx
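For example, a hedged sketch of that Windows setup (error handling mostly omitted, and the "cmd.exe /c dir" command line is just a hypothetical example) might look like this; it captures only stdout, and a second pipe wired to si.hStdError would capture stderr separately:

#include <windows.h>
#include <stdio.h>

int main(void) {
    SECURITY_ATTRIBUTES sa = { sizeof(sa) };
    sa.bInheritHandle = TRUE;                         // the child must inherit the write end

    HANDLE outRead, outWrite;
    CreatePipe(&outRead, &outWrite, &sa, 0);
    SetHandleInformation(outRead, HANDLE_FLAG_INHERIT, 0);  // keep the read end private

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = outWrite;
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);   // or a second pipe's write end
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi;
    char cmdline[] = "cmd.exe /c dir";                // hypothetical command
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(outWrite);                            // or ReadFile never reports EOF

    char buf[4096];
    DWORD n;
    while (ReadFile(outRead, buf, sizeof buf, &n, NULL) && n > 0)
        fwrite(buf, 1, n, stdout);

    WaitForSingleObject(pi.hProcess, INFINITE);
    DWORD exitCode = 0;
    GetExitCodeProcess(pi.hProcess, &exitCode);       // the child's termination status
    printf("exit code: %lu\n", exitCode);

    CloseHandle(outRead);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}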
On a Linux-like OS you probably want to use fork rather than a bare execve (which would replace your own process), and working with a forked process is another story.
On both Windows and Linux the general approach to redirecting streams is the same: you create several pipes (one for each stream), redirect the child process's streams into those pipes, and let the parent process read the data from them.
Sample code for Linux:
int fd[2];
if (pipe(fd) == -1) {
    perror("pipe");
    exit(EXIT_FAILURE);
}

pid_t cpid = fork();
if (cpid == -1) {
    perror("fork");
    exit(EXIT_FAILURE);
}

if (cpid == 0) {            // child
    close(fd[0]);           // close the unused read end
    dup2(fd[1], STDERR_FILENO);
    fprintf(stderr, "Hello, World!\n");
    exit(EXIT_SUCCESS);
} else {                    // parent
    close(fd[1]);           // close the write end, or read() never sees EOF
    char ch;
    while (read(fd[0], &ch, 1) > 0)
        printf("%c", ch);
    exit(EXIT_SUCCESS);
}
EDIT: If you need to capture the streams of another program, use the same strategy as above: first fork, then set up the pipes (as in the code above), then execve the other program in the child process, and use this code in the parent process to wait for the execution to finish and collect the return code:
int status;
if (waitpid(cpid, &status, 0) < 0) {
    perror("waitpid");
    exit(EXIT_FAILURE);
}
You can find more details in the man pages for pipe, dup2 and waitpid.
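To answer the original question directly, here is a minimal sketch (Linux only, error handling kept short, and "/bin/ls /no/such/dir ." is just a hypothetical command chosen so both streams produce output) that captures stdout and stderr into separate variables and also collects the exit code. Note that it drains stdout before stderr; for commands that write a lot to both streams you would want to poll/select both pipes instead, otherwise the child can block on a full pipe buffer.

#include <string>
#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    int outfd[2], errfd[2];
    if (pipe(outfd) == -1 || pipe(errfd) == -1) { perror("pipe"); return 1; }

    pid_t cpid = fork();
    if (cpid == -1) { perror("fork"); return 1; }

    if (cpid == 0) {                        // child
        dup2(outfd[1], STDOUT_FILENO);      // stdout -> first pipe
        dup2(errfd[1], STDERR_FILENO);      // stderr -> second pipe
        close(outfd[0]); close(outfd[1]);   // close the originals after dup2
        close(errfd[0]); close(errfd[1]);
        execl("/bin/ls", "ls", "/no/such/dir", ".", (char*)NULL);
        perror("execl");                    // only reached if exec failed
        _exit(127);
    }

    // parent: close the write ends so read() sees EOF when the child exits
    close(outfd[1]);
    close(errfd[1]);

    std::string out, err;
    char buf[4096];
    ssize_t n;
    while ((n = read(outfd[0], buf, sizeof buf)) > 0) out.append(buf, n);
    while ((n = read(errfd[0], buf, sizeof buf)) > 0) err.append(buf, n);
    close(outfd[0]); close(errfd[0]);

    int status;
    waitpid(cpid, &status, 0);
    int exit_code = WIFEXITED(status) ? WEXITSTATUS(status) : -1;

    printf("STDOUT:\n%s\nSTDERR:\n%s\nexit code: %d\n",
           out.c_str(), err.c_str(), exit_code);
    return 0;
}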

Related

C/C++ linux fork() and exec()

I use fork() to create a child process. From the child process I use exec() to launch a new process. My code is as below:
......
pid = fork();
if (pid > 0) {
    WriteLog("Parent Process");
    //Do something
} else if (pid == 0) {
    WriteLog("Child process");
    int ret = execl(ShellScript, ShellScript, (char*)NULL);
    if (ret == -1)
        WriteLog("Launch process fail");
} else {
    WriteLog("Can't create child process");
}
......
Note: the WriteLog function opens the file, writes the log message, and closes the file (so it is flushed).
ShellScript launches a new C/C++ process.
I run my program as a long-running process and the code above is called many times. Sometimes (rarely) a problem occurs where the new process cannot be launched successfully, even though the child process is created successfully (I have checked this carefully). What is especially confusing when this problem happens is that the "Child process" log line is not printed, even though the child process was created successfully.
In the normal case (no errors) the "Child process" and "Parent process" log lines are printed the same number of times.
In the abnormal case they are not the same, even though the child process is always created successfully. The "Launch process fail" and "Can't create child process" logs are not printed in this case either.
Please help me understand this.
Remember that stdio(3) is buffered. Always call fflush(NULL); (see fflush(3) for more) before fork. Add a \n (newline) at the end of every printf(3) format string (or else follow each call with fflush(NULL);).
The function execl(3) (perhaps you want execlp?) can fail (and sets errno on failure).
} else if (pid == 0) {
    printf("Child process\n");
    fflush(NULL);
    execl("/bin/foo", "foo", "arg1", NULL);
    // if we are here execl has failed
    perror("Launch process fail");
}
On error, fork(2) fails by returning -1 and sets errno(3) (see also perror(3) and strerror(3)). So your last else should be
} else {
    perror("Can't create child process");
    fflush(NULL);
}
You might want to use strace(1) (notably as strace -f yourprog ...) to understand the involved syscalls (see syscalls(2)...)
Your WriteLog should probably use strerror (on the errno value saved at the beginning of WriteLog). I suggest something like
void WriteLog(const char* msg) {
    int e = errno;
    if (e)
        syslog(LOG_ERR, "%s [%s]", msg, strerror(e));
    else
        syslog(LOG_ERR, "%s", msg);
}
See syslog(3).
There are limits on the number of fork-ed processes, see setrlimit(2) with RLIMIT_NPROC and the bash ulimit builtin.
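As a small hedged sketch (Linux assumed) of how to inspect and, if needed, raise that limit from inside the program:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NPROC, &rl) == 0)
        printf("RLIMIT_NPROC: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;      /* raise the soft limit up to the hard limit */
    if (setrlimit(RLIMIT_NPROC, &rl) != 0)
        perror("setrlimit");
    return 0;
}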
Read also Advanced Linux Programming.

Linux - child reading from pipe receives debug messages sent to standard output

I'm trying to create a parent and a child process that communicate through a pipe.
I've set up the child to listen to its parent through a pipe, with a read command running in a while loop.
In order to debug my program I print debug messages to the standard output (note that my read command reads from the pipe through a file descriptor different from 0 or 1).
For some reason these debug messages are being received by the read command in my child process. I can't understand why this is happening. What could be causing it? What elegant solution do I have (apart from writing to standard error instead of standard output)?
This code causes an endless loop, because the cout message just triggers another read. Why? Notice that the child process exits upon receiving a CHILD_EXIT_CODE value from the parent.
int myPipe[2];
pipe(myPipe);
if(fork() == 0)
{
    int readPipe = myPipe[0];
    char readBuffer[256];      // buffer for the data read from the pipe
    while(true)
    {
        ssize_t nBytes = read(readPipe, readBuffer, sizeof(readBuffer));
        std::cout << readBuffer << "\n";
        int newPosition = atoi(readBuffer);
        if(newPosition == CHILD_EXIT_CODE)
        {
            exit(0);
        }
    }
}
Edit: Code creating the pipe and fork
I do not know what your parent process is doing (you did not post its code), but from your description it seems like your parent and child processes are sharing the same stdout stream (the child inherits copies of the parent's set of open file descriptors; see man fork).
I guess what you should do is attach the stdout and stderr streams in your parent process to the write side of your pipes (you need one more pipe for the stderr stream).
This is what I would try if I were in your situation (in my opinion you are missing dup2):
pid_t pid;          /* Child or parent PID. */
int out[2], err[2]; /* Store pipe file descriptors. Write ends attached to the
                       stdout and stderr streams. */

// Init values as error.
out[0] = out[1] = err[0] = err[1] = -1;

/* Creating pipes, they will be attached to the stderr and stdout streams */
if (pipe(out) < 0 || pipe(err) < 0) {
    /* Error: you should log it */
    exit(EXIT_FAILURE);
}

if ((pid = fork()) == -1) {
    /* Error: you should log it */
    exit(EXIT_FAILURE);
}

if (pid != 0) {
    /* Parent process */
    /* Attach stderr and stdout streams to your pipes (their write end) */
    if ((dup2(out[1], 1) < 0) || (dup2(err[1], 2) < 0)) {
        /* Error: you should log it */
        /* The child is going to be an orphan process, you should kill it before calling exit. */
        exit(EXIT_FAILURE);
    }
    /* WHATEVER YOU DO WITH YOUR PARENT PROCESS */
    /* The child is going to be an orphan process, you should kill it before calling exit. */
    exit(EXIT_SUCCESS);
}
else {
    /* Child process */
}
You should not forget a couple of things:
Call wait or waitpid to release the resources associated with the child process when it dies; wait or waitpid must be called from the parent process.
If you use wait or waitpid you might have to think about blocking SIGCHLD before calling fork, and in that case you should unblock SIGCHLD in your child process right after the fork, at the beginning of your child process code (a child created via fork(2) inherits a copy of its parent's signal mask; see sigprocmask).
Something that is often forgotten: be aware of the EINTR error. dup2, waitpid/wait, read and many others can fail with it; see the retry sketch after this list.
If your parent process dies before your child process you should try to kill the child process if you do not want it to become an orphan one.
Take a look at _exit. Perhaps you should use it in your child process instead of exit.
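For the EINTR point above, a minimal sketch of a retry wrapper (plain C, nothing beyond standard headers assumed):

#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>

static pid_t waitpid_retry(pid_t pid, int *status, int options) {
    pid_t r;
    do {
        r = waitpid(pid, status, options);
    } while (r == -1 && errno == EINTR);   /* interrupted by a signal: try again */
    return r;
}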

Redirect bash stdin and stdout in c++

I need help to get the following to work. I need to start a bash process from C++; this bash process needs to accept input from stdin and write its output to stdout as normal.
From a different process I need to write commands to that stdin, which will then actually be executed in bash as above, and then I'm interested in the result from stdout.
This is what I've tried so far, but the output does not make sense to me at all...
if (pipe(pipeBashShell)) {
    fprintf(stderr, "Pipe error!\n");
    exit(1);
}
if ((pipePId = fork()) == -1) {
    fprintf(stderr, "Fork error. Exiting.\n"); /* something went wrong */
    exit(1);
}
if (pipePId == 0) { //this is the child process
    dup2(pipeBashShell[0], STDIN_FILENO);
    dup2(pipeBashShell[1], STDOUT_FILENO);
    dup2(pipeBashShell[1], STDERR_FILENO);
    static char* bash[] = {"/bin/bash", "-i", NULL};
    if (execv(*bash, bash) == -1) {
        fprintf(stderr, "execv Error!");
        exit(1);
    }
    exit(0);
} else {
    char buf[512];
    memset(buf, 0x00, sizeof(buf));
    sprintf(buf, "ls\n");
    int byteswritten = write(pipeBashShell[1], buf, strlen(buf));
    int bytesRead = read(pipeBashShell[0], buf, sizeof(buf));
    write(STDOUT_FILENO, buf, strlen(buf));
    exit(0);
}
The output of the code above is as follows:
' (main)
bash:: command not found gerhard#gerhard-work-pc:~/workspaces/si/si$ gerhard
orkspaces/si/si$ gerhard# gerhard-work-pc:~/workspa
....
The command I'm trying to send to bash is "ls", which should give me a directory listing.
Am I missing something here?
You have created one pipe (with two ends) and you are trying to use it for bi-directional communication -- from your main process to bash and vice versa. You need two separate pipes for that.
The way you have connected the file descriptors makes bash talk to itself -- it interprets its prompt as a command which it cannot find, and then interprets the error messages as subsequent commands.
Edit:
The correct setup works as follows:
prepare two pipes:
int parent2child[2], child2parent[2];
pipe(parent2child);
pipe(child2parent);
fork()
in the parent process:
close(parent2child[0]);
close(child2parent[1]);
// write to parent2child[1], read from child2parent[0]
in the child process:
close(parent2child[1]);
close(child2parent[0]);
dup2(parent2child[0], STDIN_FILENO);
dup2(child2parent[1], STDOUT_FILENO);
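Putting those steps together, a minimal sketch (error checks omitted; it runs /bin/bash non-interactively, sends it "ls\n" and reads the listing back) might look like this:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int parent2child[2], child2parent[2];
    pipe(parent2child);
    pipe(child2parent);

    pid_t pid = fork();
    if (pid == 0) {                               /* child: becomes bash */
        close(parent2child[1]);
        close(child2parent[0]);
        dup2(parent2child[0], STDIN_FILENO);
        dup2(child2parent[1], STDOUT_FILENO);
        dup2(child2parent[1], STDERR_FILENO);
        execl("/bin/bash", "bash", (char*)NULL);
        _exit(127);                               /* only reached if exec failed */
    }

    /* parent */
    close(parent2child[0]);
    close(child2parent[1]);

    const char *cmd = "ls\n";
    write(parent2child[1], cmd, strlen(cmd));
    close(parent2child[1]);                       /* EOF makes bash exit when done */

    char buf[4096];
    ssize_t n;
    while ((n = read(child2parent[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);

    waitpid(pid, NULL, 0);
    return 0;
}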

popen simultaneous read and write [duplicate]

This question already has answers here:
Can popen() make bidirectional pipes like pipe() + fork()?
(6 answers)
Closed 3 years ago.
Is it possible to read from and write to a file descriptor returned by popen? I have an interactive process I'd like to control through C. If this isn't possible with popen, is there any way around it?
As already answered, popen works in one direction. If you need to read and write, you can create a pipe with pipe(), spawn a new process with fork() and the exec functions, and then redirect its input and output with dup2(). Anyway, I prefer exec over popen, as it gives you better control over the process (e.g. you know its pid).
EDITED:
As comments suggested, a pipe can be used in one direction only. Therefore you have to create separate pipes for reading and writing. Since the example posted before was wrong, I deleted it and created a new, correct one:
#include <unistd.h>
#include <sys/wait.h>
#include <sys/prctl.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    pid_t pid = 0;
    int inpipefd[2];
    int outpipefd[2];
    char buf[256];
    char msg[256];
    int status;

    pipe(inpipefd);
    pipe(outpipefd);
    pid = fork();
    if (pid == 0)
    {
        // Child
        dup2(outpipefd[0], STDIN_FILENO);
        dup2(inpipefd[1], STDOUT_FILENO);
        dup2(inpipefd[1], STDERR_FILENO);

        // ask kernel to deliver SIGTERM in case the parent dies
        prctl(PR_SET_PDEATHSIG, SIGTERM);

        // replace tee with your process
        execl("/usr/bin/tee", "tee", (char*) NULL);
        // Nothing below this line should be executed by child process. If so,
        // it means that the execl function wasn't successful, so lets exit:
        exit(1);
    }

    // The code below will be executed only by parent. You can write and read
    // from the child using pipefd descriptors, and you can send signals to
    // the process using its pid by kill() function. If the child process will
    // exit unexpectedly, the parent process will obtain SIGCHLD signal that
    // can be handled (e.g. you can respawn the child process).

    // close unused pipe ends
    close(outpipefd[0]);
    close(inpipefd[1]);

    // Now, you can write to outpipefd[1] and read from inpipefd[0]:
    while (1)
    {
        printf("Enter message to send\n");
        scanf("%s", msg);
        if (strcmp(msg, "exit") == 0) break;

        write(outpipefd[1], msg, strlen(msg));
        ssize_t n = read(inpipefd[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';   // null-terminate what was actually read
        printf("Received answer: %s\n", buf);
    }

    kill(pid, SIGKILL); // send SIGKILL signal to the child process
    waitpid(pid, &status, 0);
    return 0;
}
The reason popen() and friends don't offer bidirectional communication is that it would be deadlock-prone, due to buffering in the subprocess. All the makeshift pipework and socketpair() solutions discussed in the answers suffer from the same problem.
Under UNIX, most commands cannot be trusted to read one line and immediately process it and print it, except if their standard output is a tty. The reason is that stdio buffers output in userspace by default, and defers the write() system call until either the buffer is full or the stdio stream is closed (typically because the program or script is about to exit after having seen EOF on input). If you write to such a program's stdin through a pipe, and now wait for an answer from that program's stdout (without closing the ingress pipe), the answer is stuck in the stdio buffers and will never come out - This is a deadlock.
You can trick some line-oriented programs (eg grep) into not buffering by using a pseudo-tty to talk to them; take a look at libexpect(3). But in the general case, you would have to re-run a different subprocess for each message, allowing to use EOF to signal the end of each message and cause whatever buffers in the command (or pipeline of commands) to be flushed. Obviously not a good thing performance-wise.
See more info about this problem in the perlipc man page (it's for bi-directional pipes in Perl but the buffering considerations apply regardless of the language used for the main program).
You want something often called popen2. Here's a basic implementation without error checking (found by a web search, not my code):
// http://media.unpythonic.net/emergent-files/01108826729/popen2.c
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include "popen2.h"   // assumed to declare struct popen2 { pid_t child_pid; int to_child, from_child; }

int popen2(const char *cmdline, struct popen2 *childinfo) {
    pid_t p;
    int pipe_stdin[2], pipe_stdout[2];

    if(pipe(pipe_stdin)) return -1;
    if(pipe(pipe_stdout)) return -1;

    //printf("pipe_stdin[0] = %d, pipe_stdin[1] = %d\n", pipe_stdin[0], pipe_stdin[1]);
    //printf("pipe_stdout[0] = %d, pipe_stdout[1] = %d\n", pipe_stdout[0], pipe_stdout[1]);

    p = fork();
    if(p < 0) return p; /* Fork failed */
    if(p == 0) { /* child */
        close(pipe_stdin[1]);
        dup2(pipe_stdin[0], 0);
        close(pipe_stdout[0]);
        dup2(pipe_stdout[1], 1);
        execl("/bin/sh", "sh", "-c", cmdline, NULL);
        perror("execl"); exit(99);
    }
    childinfo->child_pid = p;
    childinfo->to_child = pipe_stdin[1];
    childinfo->from_child = pipe_stdout[0];
    close(pipe_stdin[0]);
    close(pipe_stdout[1]);
    return 0;
}

//#define TESTING
#ifdef TESTING
int main(void) {
    char buf[1000];
    struct popen2 kid;
    popen2("tr a-z A-Z", &kid);
    write(kid.to_child, "testing\n", 8);
    close(kid.to_child);
    memset(buf, 0, 1000);
    read(kid.from_child, buf, 1000);
    printf("kill(%d, 0) -> %d\n", kid.child_pid, kill(kid.child_pid, 0));
    printf("from child: %s", buf);
    printf("waitpid() -> %d\n", waitpid(kid.child_pid, NULL, 0));
    printf("kill(%d, 0) -> %d\n", kid.child_pid, kill(kid.child_pid, 0));
    return 0;
}
#endif
popen() can only open the pipe in read or write mode, not both. Take a look at this thread for a workaround.
In one of the netresolve backends I'm talking to a script and therefore I need to write to its stdin and read from its stdout. The following function executes a command with stdin and stdout redirected to a pipe. You can use it and adapt it to your liking.
static bool
start_subprocess(char *const command[], int *pid, int *infd, int *outfd)
{
    int p1[2], p2[2];

    if (!pid || !infd || !outfd)
        return false;

    if (pipe(p1) == -1)
        goto err_pipe1;
    if (pipe(p2) == -1)
        goto err_pipe2;
    if ((*pid = fork()) == -1)
        goto err_fork;

    if (*pid) {
        /* Parent process. */
        *infd = p1[1];
        *outfd = p2[0];
        close(p1[0]);
        close(p2[1]);
        return true;
    } else {
        /* Child process. */
        dup2(p1[0], 0);
        dup2(p2[1], 1);
        close(p1[0]);
        close(p1[1]);
        close(p2[0]);
        close(p2[1]);
        execvp(*command, command);
        /* Error occurred. */
        fprintf(stderr, "error running %s: %s", *command, strerror(errno));
        abort();
    }

err_fork:
    close(p2[1]);
    close(p2[0]);
err_pipe2:
    close(p1[1]);
    close(p1[0]);
err_pipe1:
    return false;
}
https://github.com/crossdistro/netresolve/blob/master/backends/exec.c#L46
(I used the same code in Can popen() make bidirectional pipes like pipe() + fork()?)
Use forkpty (it's non-standard, but the API is very nice, and you can always drop in your own implementation if you don't have it) and exec the program you want to communicate with in the child process.
Alternatively, if tty semantics aren't to your liking, you could write something like forkpty but using two pipes, one for each direction of communication, or using socketpair to communicate with the external program over a unix socket.
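A hedged sketch of the forkpty route (Linux: <pty.h>, link with -lutil; on BSDs the header differs), using bc purely as a hypothetical interactive program. The pty defeats the stdio buffering problem described above, although the master may echo the input back before the answer:

#include <pty.h>        // forkpty; link with -lutil on Linux
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid == -1) { perror("forkpty"); return 1; }

    if (pid == 0) {                           // child: its stdio is the pty slave
        execlp("bc", "bc", "-q", (char*)NULL);
        _exit(127);
    }

    write(master, "2+2\n", 4);                // talk to the program through the master side
    char buf[256];
    ssize_t n;
    for (int i = 0; i < 2 && (n = read(master, buf, sizeof buf - 1)) > 0; ++i) {
        buf[n] = '\0';
        printf("got: %s", buf);               // typically the echoed "2+2" first, then "4"
    }

    close(master);
    waitpid(pid, NULL, 0);
    return 0;
}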
You can't use popen to use two-way pipes.
In fact, some OSs don't support two-way pipes, in which case a socket-pair (socketpair) is the only way to do it.
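A small sketch of the socketpair() route (assuming a POSIX system and using tr as a stand-in for the real program): one bidirectional AF_UNIX socket is shared by parent and child, and shutdown() gives you the half-close that a single pipe cannot:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) { perror("socketpair"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                 // child: use sv[1] as both stdin and stdout
        close(sv[0]);
        dup2(sv[1], STDIN_FILENO);
        dup2(sv[1], STDOUT_FILENO);
        execlp("tr", "tr", "a-z", "A-Z", (char*)NULL);
        _exit(127);
    }

    close(sv[1]);
    write(sv[0], "hello\n", 6);
    shutdown(sv[0], SHUT_WR);       // send EOF to the child but keep reading

    char buf[64];
    ssize_t n = read(sv[0], buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("child said: %s", buf); }

    waitpid(pid, NULL, 0);
    return 0;
}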
popen works for me in both directions (read and write).
I have been using a popen() pipe in both directions,
reading and writing a child process's stdin and stdout with the file descriptor returned by popen(command, "w").
It seems to work fine.
I assumed it would work before I knew better, and it does.
According to the posts above this shouldn't work, which worries me a little bit.
gcc on raspbian (Raspberry Pi Debian).

fork() and pipes() in c

What is fork and what is pipe?
Any scenarios explaining why their use is necessary will be appreciated.
What are the differences between fork and pipe in C?
Can we use them in C++?
I need to know this because I want to implement a program in C++ which can access live video input, convert its format, and write it to a file.
What would be the best approach for this?
I have used x264 for this. So far I have implemented the part of conversion on a file format.
Now I have to implement it on a live stream.
Is it a good idea to use pipes? Capture video in another process and feed it to the other?
A pipe is a mechanism for interprocess communication. Data written to the pipe by one process can be read by another process. The primitive for creating a pipe is the pipe function. This creates both the reading and writing ends of the pipe. It is not very useful for a single process to use a pipe to talk to itself. In typical use, a process creates a pipe just before it forks one or more child processes. The pipe is then used for communication either between the parent and child processes, or between two sibling processes.

A familiar example of this kind of communication can be seen in all operating system shells. When you type a command at the shell, it will spawn the executable represented by that command with a call to fork. A pipe is opened to the new child process and its output is read and printed by the shell.

This page has a full example of the fork and pipe functions. For your convenience, the code is reproduced below:
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

/* Read characters from the pipe and echo them to stdout. */
void
read_from_pipe (int file)
{
  FILE *stream;
  int c;
  stream = fdopen (file, "r");
  while ((c = fgetc (stream)) != EOF)
    putchar (c);
  fclose (stream);
}

/* Write some random text to the pipe. */
void
write_to_pipe (int file)
{
  FILE *stream;
  stream = fdopen (file, "w");
  fprintf (stream, "hello, world!\n");
  fprintf (stream, "goodbye, world!\n");
  fclose (stream);
}

int
main (void)
{
  pid_t pid;
  int mypipe[2];

  /* Create the pipe. */
  if (pipe (mypipe))
    {
      fprintf (stderr, "Pipe failed.\n");
      return EXIT_FAILURE;
    }

  /* Create the child process. */
  pid = fork ();
  if (pid == (pid_t) 0)
    {
      /* This is the child process.
         Close other end first. */
      close (mypipe[1]);
      read_from_pipe (mypipe[0]);
      return EXIT_SUCCESS;
    }
  else if (pid < (pid_t) 0)
    {
      /* The fork failed. */
      fprintf (stderr, "Fork failed.\n");
      return EXIT_FAILURE;
    }
  else
    {
      /* This is the parent process.
         Close other end first. */
      close (mypipe[0]);
      write_to_pipe (mypipe[1]);
      return EXIT_SUCCESS;
    }
}
Just like other C functions you can use both fork and pipe in C++.
There are stdin and stdout for common input and output.
A common style is like this:
input -> process -> output
But with a pipe, it becomes:
input -> process1 -> (tmp_output) -> (tmp_input) -> process2 -> output
pipe is the function that creates that temporary connection and returns its two ends: fd[0] (the read end, tmp_input) and fd[1] (the write end, tmp_output).
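A hedged sketch of that picture: wiring "ls" into "wc -l" with one pipe, which is essentially what the shell does for "ls | wc -l":

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                 // process1: ls, its output goes into the pipe
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char*)NULL);
        _exit(127);
    }
    if (fork() == 0) {                 // process2: wc, its input comes from the pipe
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char*)NULL);
        _exit(127);
    }

    close(fd[0]); close(fd[1]);        // the parent must close both ends
    while (wait(NULL) > 0)             // reap both children
        ;
    return 0;
}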