Capturing stdout from a system() command optimally [duplicate] - c++

This question already has answers here:
How do I execute a command and get the output of the command within C++ using POSIX?
I'm trying to start an external application through system() - for example, system("ls"). I would like to capture its output as it happens so I can send it to another function for further processing. What's the best way to do that in C/C++?

From the popen manual:
#include <stdio.h>
FILE *popen(const char *command, const char *type);
int pclose(FILE *stream);

Try the popen() function. It executes a command, like system(), but sends the command's output to a pipe instead; a pointer to the stream is returned so you can read it.
FILE *lsofFile_p = popen("lsof", "r");
if (!lsofFile_p)
{
return -1;
}
char buffer[1024];
// read the command's output line by line; hand each buffer to your own processing
while (fgets(buffer, sizeof(buffer), lsofFile_p) != NULL)
{
// ... process buffer here ...
}
pclose(lsofFile_p);

EDIT: misread question as wanting to pass output to another program, not another function. popen() is almost certainly what you want.
system() gives you full access to the shell. If you want to continue using it, you can
redirect its output to a temporary file, e.g. system("ls > tempfile.txt"), but choosing a secure temporary file is a pain. Or you can even redirect it through another program: system("ls | otherprogram");
Some may recommend the popen() function. This is what you want if you can process the output yourself:
FILE *output = popen("ls", "r");
which will give you a FILE pointer you can read from with the command's output on it.
You can also use the pipe() call to create a connection, in combination with fork() to create new processes, dup2() to change their standard input and output, exec() to run the new programs, and wait() in the main program to wait for them. This is just setting up the pipeline much like the shell would. See the pipe() man page for details and an example.

The popen() family doesn't redirect stderr; I wrote popen3() for that purpose.
Here's a bowdlerised version of my popen3():
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
int popen3(int fd[3], const char **const cmd) {
int i, e;
int p[3][2];
pid_t pid;
// set all the FDs to invalid
for(i=0; i<3; i++)
p[i][0] = p[i][1] = -1;
// create the pipes
for(int i=0; i<3; i++)
if(pipe(p[i]))
goto error;
// and fork
pid = fork();
if(-1 == pid)
goto error;
// in the parent?
if(pid) {
// parent
fd[STDIN_FILENO] = p[STDIN_FILENO][1];
close(p[STDIN_FILENO][0]);
fd[STDOUT_FILENO] = p[STDOUT_FILENO][0];
close(p[STDOUT_FILENO][1]);
fd[STDERR_FILENO] = p[STDERR_FILENO][0];
close(p[STDERR_FILENO][1]);
// success
return 0;
} else {
// child
dup2(p[STDIN_FILENO][0],STDIN_FILENO);
close(p[STDIN_FILENO][1]);
dup2(p[STDOUT_FILENO][1],STDOUT_FILENO);
close(p[STDOUT_FILENO][0]);
dup2(p[STDERR_FILENO][1],STDERR_FILENO);
close(p[STDERR_FILENO][0]);
// here we try and run it
execv(*cmd,const_cast<char*const*>(cmd));
// if we are there, then we failed to launch our program
perror("Could not launch");
fprintf(stderr," \"%s\"\n",*cmd);
_exit(EXIT_FAILURE);
}
error:
// preserve the original errno across the cleanup close() calls
e = errno;
for(i=0; i<3; i++) {
close(p[i][0]);
close(p[i][1]);
}
errno = e;
return -1;
}
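For illustration, here is a minimal usage sketch (not part of the original answer): it runs /bin/ls -l, reads the child's stdout, ignores stderr, and additionally needs <sys/wait.h> for wait().
int fd[3];
const char *cmd[] = { "/bin/ls", "-l", NULL };
if (popen3(fd, cmd) == 0) {
    char buf[4096];
    ssize_t n;
    close(fd[STDIN_FILENO]);                    // nothing to send to the child
    while ((n = read(fd[STDOUT_FILENO], buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);      // forward the child's stdout
    close(fd[STDOUT_FILENO]);
    close(fd[STDERR_FILENO]);
    wait(NULL);                                 // reap the child
}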

The most efficient way is to use the stdout file descriptor directly, bypassing the FILE stream:
#include <fcntl.h>
#include <unistd.h>
pid_t popen2(const char *command, int *infp, int *outfp)
{
int p_stdin[2], p_stdout[2];
pid_t pid;
if (pipe(p_stdin) == -1)
return -1;
if (pipe(p_stdout) == -1) {
close(p_stdin[0]);
close(p_stdin[1]);
return -1;
}
pid = fork();
if (pid < 0) {
close(p_stdin[0]);
close(p_stdin[1]);
close(p_stdout[0]);
close(p_stdout[1]);
return pid;
} else if (pid == 0) {
close(p_stdin[1]);
dup2(p_stdin[0], 0);
close(p_stdout[0]);
dup2(p_stdout[1], 1);
dup2(::open("/dev/null", O_WRONLY), 2);
/// Close all other descriptors for the safety sake.
for (int i = 3; i < 4096; ++i) {
::close(i);
}
setsid();
execl("/bin/sh", "sh", "-c", command, NULL);
_exit(1);
}
close(p_stdin[0]);
close(p_stdout[1]);
if (infp == NULL) {
close(p_stdin[1]);
} else {
*infp = p_stdin[1];
}
if (outfp == NULL) {
close(p_stdout[0]);
} else {
*outfp = p_stdout[0];
}
return pid;
}
To read output from the child, use popen2() like this:
int child_stdout = -1;
pid_t child_pid = popen2("ls", NULL, &child_stdout);
if (child_pid < 0) {
handle_error();
}
char buff[128];
ssize_t bytes_read = read(child_stdout, buff, sizeof(buff));
To both write and read:
int child_stdin = -1;
int child_stdout = -1;
pid_t child_pid = popen2("grep 123", &child_stdin, &child_stdout);
if (!child_pid) {
handle_error();
}
const char text = "1\n2\n123\n3";
ssize_t bytes_written = write(child_stdin, text, sizeof(text) - 1);
char buff[128];
ssize_t bytes_read = read(child_stdout, buff, sizeof(buff));

The functions popen() and pclose() could be what you're looking for.
Take a look at the glibc manual for an example.

In Windows, instead of using system(), use CreateProcess, redirect the output to a pipe and connect to the pipe.
I'm guessing this is also possible in some POSIX way?
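A minimal sketch of that Win32 approach, with error handling omitted; the "cmd /c dir" command line is just a placeholder:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // make created handles inheritable
    HANDLE hRead, hWrite;
    CreatePipe(&hRead, &hWrite, &sa, 0);
    SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0);   // parent keeps the read end private

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = hWrite;                                // child's stdout/stderr go into the pipe
    si.hStdError = hWrite;
    si.hStdInput = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi;
    char cmd[] = "cmd /c dir";                             // command line must be writable
    CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);
    CloseHandle(hWrite);                                   // parent no longer needs the write end

    char buf[256];
    DWORD n;
    while (ReadFile(hRead, buf, sizeof(buf), &n, NULL) && n > 0)
        fwrite(buf, 1, n, stdout);                         // forward the child's output

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    CloseHandle(hRead);
    return 0;
}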

Actually, I just checked, and:
popen is problematic because the process is forked, so if you need to wait for the shell command to finish, you're in danger of missing it. In my case, my program closed even before the pipe got to do its work.
I ended up using a system() call with the tar command on Linux. The return value from system() was the result of tar.
So: if you need the return value, not only is there no need to use popen, it probably won't do what you want.

This page, capture_the_output_of_a_child_process_in_c, describes the limitations of using popen vs. the fork/exec/dup2/STDOUT_FILENO approach.
I'm having problems capturing tshark output with popen,
and I'm guessing that this limitation might be my problem:
It returns a stdio stream as opposed to a raw file descriptor, which
is unsuitable for handling the output asynchronously.
I'll come back to this answer if I have a solution with the other approach.

I'm not entirely certain that it's possible in standard C, as two different processes don't typically share memory space. The simplest way I can think of is to have the second program redirect its output to a text file (programname > textfile.txt) and then read that text file back in for processing. However, that may not be the best way.
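For what it's worth, a minimal sketch of that temp-file approach; the fixed /tmp path is purely illustrative (a real program should use mkstemp() because of the security issue mentioned earlier):
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // run the command with its output redirected to a file (illustrative path)
    if (std::system("ls > /tmp/ls_output.txt") != 0)
        return 1;
    // read the file back in for processing
    std::ifstream in("/tmp/ls_output.txt");
    std::string line;
    while (std::getline(in, line))
        std::cout << "captured: " << line << '\n';
    return 0;
}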

Related

Do input redirection and capture command output (Custom shell-like program)

I'm writing a custom shell where I'm trying to add support for input/output redirection and pipes, just like a standard shell. I'm stuck at the point where input redirection doesn't work, although output redirection works perfectly. My implementation is roughly this (only the related part); you can assume that (string) input is non-empty:
void execute() {
... // stuff before execution and initialization of variables
int *fds;
std::string content;
std::string input = readFromAFile(in_file); // for input redirection
for (int i = 0; i < commands.size(); i++) {
fds = subprocess(commands[i]);
dprintf(fds[1], "%s", input.data()); // write to write-end of pipe
close(fds[1]);
content += readFromFD(fds[0]); // read from read-end of pipe
close(fds[0]);
}
... // stuff after execution
}
int *subprocess(std::string &cmd) {
std::string s;
int *fds = new int[2];
pipe(fds);
pid_t pid = fork();
if (pid == -1) {
std::cerr << "Fork failed.";
}
if (pid == 0) {
dup2(fds[1], STDOUT_FILENO);
dup2(fds[0], STDIN_FILENO);
close(fds[1]);
close(fds[0]);
system(cmd.data());
exit(0); // child terminates
}
return fds;
}
My thought is that subprocess() returns a pipe (fd_in, fd_out), and the parent can write to the write end and read from the read end afterwards. However, when I try an input redirection such as sort < in.txt, the program just hangs. I think there is a deadlock because each side is waiting on the other (one to write, the other to read); however, after the parent writes to the write end it closes it, and then reads from the read end. How should I handle this case?
When I did a bit of searching, I saw this answer, which is similar to my original thinking except that it mentions creating two pipes. I did not quite understand that part. Why do we need two separate pipes?

How to get pid of process executed with system() command in c++

When we use the system() command, the program normally waits until it completes, but I am executing a process using system() on a load-balancing server, so the program moves on to the next line right after issuing the command. Please note that the process may not be complete by then.
system("./my_script");
// after this I want to see whether it is complete or not using its pid.
// But how do i Know PID?
IsScriptExecutionComplete();
Simple answer: you can't.
The purpose of system() is to block while the command is being executed.
But you can 'cheat' like this:
pid_t system2(const char * command, int * infp, int * outfp)
{
int p_stdin[2];
int p_stdout[2];
pid_t pid;
if (pipe(p_stdin) == -1)
return -1;
if (pipe(p_stdout) == -1) {
close(p_stdin[0]);
close(p_stdin[1]);
return -1;
}
pid = fork();
if (pid < 0) {
close(p_stdin[0]);
close(p_stdin[1]);
close(p_stdout[0]);
close(p_stdout[1]);
return pid;
} else if (pid == 0) {
close(p_stdin[1]);
dup2(p_stdin[0], 0);
close(p_stdout[0]);
dup2(p_stdout[1], 1);
dup2(::open("/dev/null", O_RDONLY), 2);
/// Close all other descriptors for the safety sake.
for (int i = 3; i < 4096; ++i)
::close(i);
setsid();
execl("/bin/sh", "sh", "-c", command, NULL);
_exit(1);
}
close(p_stdin[0]);
close(p_stdout[1]);
if (infp == NULL) {
close(p_stdin[1]);
} else {
*infp = p_stdin[1];
}
if (outfp == NULL) {
close(p_stdout[0]);
} else {
*outfp = p_stdout[0];
}
return pid;
}
Here you can have not only the PID of the process, but also its stdin and stdout. Have fun!
Not an expert on this myself, but if you look at the man page for system:
system() executes a command specified in command by calling /bin/sh -c command, and returns after the command has been completed
You can go into the background within the command/script you're executing (and return immediately), but I don't think there's a specific provision in system for that case.
Ideas I can think of are:
Your command might return the pid through the return code.
Your code might want to look up the name of the command in the active processes (e.g. /proc APIs in unix-like environments).
You might want to launch the command yourself (instead of through a shell) using fork/exec; see the sketch below.
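A minimal sketch of that last idea, reusing the hard-coded "./my_script" from the question: fork, run the script via the shell yourself, keep the pid, and poll it with waitpid(WNOHANG):
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                              // child: run the script through the shell
        execl("/bin/sh", "sh", "-c", "./my_script", (char *)NULL);
        _exit(127);                              // exec failed
    }
    // ... do other work, then check without blocking whether the script has finished:
    int status;
    pid_t done = waitpid(pid, &status, WNOHANG);
    if (done == 0)
        printf("still running (pid %d)\n", (int)pid);
    else if (done == pid && WIFEXITED(status))
        printf("finished with exit code %d\n", WEXITSTATUS(status));
    return 0;
}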
As the other answers said, std::system blocks until the command completes anyway. However, if you want to run the child process asynchronously and you are OK with Boost, you can use Boost.Process (ref):
#include <boost/process.hpp>
namespace bp = boost::process;
bp::child c(bp::search_path("echo"), "hello world");
std::cout << c.id() << std::endl;
// ... do something with ID ...
c.wait();
You can check the exit status of your command with the following code:
int ret = system("./my_script");
if (WIFEXITED(ret) && !WEXITSTATUS(ret))
{
printf("Completed successfully\n"); ///successful
}
else
{
printf("execution failed\n"); //error
}

waitpid/wexitstatus returning 0 instead of correct return code

I have the helper function below, used to execute a command and get the return value on posix systems. I used to use popen, but it is impossible to get the return code of an application with popen if it runs and exits before popen/pclose gets a chance to do its work.
The following helper function creates a process fork, uses execvp to run the desired external process, and then the parent uses waitpid to get the return code. I'm seeing odd cases where it's refusing to run.
When called with wait = true, waitpid should return the exit code of the application no matter what. However, I'm seeing stdout output that indicates the return code should be non-zero, yet the return code is zero. Testing the external process in a regular shell and then echoing $? returns non-zero, so it's not a problem with the external process failing to return the right code. If it's of any help, the external process being run is mount(8) (yes, I know I can use mount(2), but that's beside the point).
I apologize in advance for a code dump. Most of it is debugging/logging:
inline int ForkAndRun(const std::string &command, const std::vector<std::string> &args, bool wait = false, std::string *output = NULL)
{
std::string debug;
std::vector<char*> argv;
for(size_t i = 0; i < args.size(); ++i)
{
argv.push_back(const_cast<char*>(args[i].c_str()));
debug += "\"";
debug += args[i];
debug += "\" ";
}
argv.push_back((char*)NULL);
neosmart::logger.Debug("Executing %s", debug.c_str());
int pipefd[2];
if (pipe(pipefd) != 0)
{
neosmart::logger.Error("Failed to create pipe descriptor when trying to launch %s", debug.c_str());
return EXIT_FAILURE;
}
pid_t pid = fork();
if (pid == 0)
{
close(pipefd[STDIN_FILENO]); //child isn't going to be reading
dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd
dup2(pipefd[STDOUT_FILENO], STDERR_FILENO);
if (execvp(command.c_str(), &argv[0]) != 0)
{
exit(EXIT_FAILURE);
}
return 0;
}
else if (pid < 0)
{
neosmart::logger.Error("Failed to fork when trying to launch %s", debug.c_str());
return EXIT_FAILURE;
}
else
{
close(pipefd[STDOUT_FILENO]);
int exitCode = 0;
if (wait)
{
waitpid(pid, &exitCode, wait ? __WALL : (WNOHANG | WUNTRACED));
std::string result;
char buffer[128];
ssize_t bytesRead;
while ((bytesRead = read(pipefd[STDIN_FILENO], buffer, sizeof(buffer)-1)) != 0)
{
buffer[bytesRead] = '\0';
result += buffer;
}
if (wait)
{
if ((WIFEXITED(exitCode)) == 0)
{
neosmart::logger.Error("Failed to run command %s", debug.c_str());
neosmart::logger.Info("Output:\n%s", result.c_str());
}
else
{
neosmart::logger.Debug("Output:\n%s", result.c_str());
exitCode = WEXITSTATUS(exitCode);
if (exitCode != 0)
{
neosmart::logger.Info("Return code %d", (exitCode));
}
}
}
if (output)
{
result.swap(*output);
}
}
close(pipefd[STDIN_FILENO]);
return exitCode;
}
}
Note that the command is run OK with the correct parameters, the function proceeds without any problems, and WIFEXITED returns TRUE. However, WEXITSTATUS returns 0, when it should be returning something else.
Probably isn't your main issue, but I think I see a small problem. In your child process, you have...
dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd
dup2(pipefd[STDOUT_FILENO], STDERR_FILENO); //but wait, this pipe is closed!
But I think what you want is:
dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
dup2(pipefd[STDOUT_FILENO], STDERR_FILENO);
close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd for both, can close
I don't have much experience with forks and pipes in Linux, but I did write a similar function pretty recently. You can take a look at the code to compare, if you'd like. I know that my function works.
execAndRedirect.cpp
I'm using the mongoose library, and grepping my code for SIGCHLD revealed that using mg_start from mongoose results in setting SIGCHLD to SIG_IGN.
From the waitpid man page, on Linux a SIGCHLD set to SIG_IGN will not create a zombie process, so waitpid will fail if the process has already successfully run and exited - but will run OK if it hasn't yet. This was the cause of the sporadic failure of my code.
Simply re-setting SIGCHLD, after calling mg_start, to a handler that does absolutely nothing was enough to keep the zombie records from being immediately erased.
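For illustration, a minimal sketch of that workaround; the handler and function names are mine, and the mg_start() call is elided because its arguments depend on the mongoose version:
#include <signal.h>

static void sigchld_noop(int sig) { (void)sig; }   // does nothing, but is not SIG_IGN

void restore_sigchld_after_mg_start(void)
{
    // call this right after mg_start(...): mongoose sets SIGCHLD to SIG_IGN,
    // which makes the kernel auto-reap children so waitpid() can't see their status
    signal(SIGCHLD, sigchld_noop);
}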
Per #Geoff_Montee's advice, there was a bug in my redirect of STDERR, but this was not responsible for the problem as execvp does not store the return value in STDERR or even STDOUT, but rather in the kernel object associated with the parent process (the zombie record).
#jilles' warning about non-contiguity of vector in C++ does not apply for C++03 and up (only valid for C++98, though in practice, most C++98 compilers did use contiguous storage, anyway) and was not related to this issue. However, the advice on reading from the pipe before blocking and checking the output of waitpid is spot-on.
I've found that pclose does NOT block and wait for the process to end, contrary to the documentation (this is on CentOS 6). I've found that I need to call pclose and then call waitpid(pid,&status,0); to get the true return value.

Linux: Executing child process with piped stdin/stdout

Using Linux and C++, I would like a function that does the following:
string f(string s)
{
string r = system("foo < s");
return r;
}
Obviously the above doesn't work, but you get the idea. I have a string s that I would like to pass as the standard input of a child process execution of application "foo", and then I would like to record its standard output to string r and then return it.
What combination of Linux syscalls or POSIX functions should I use?
I'm using Linux 3.0 and do not need the solution to work with older systems.
The code provided by eerpini does not work as written. Note, for example, that the pipe ends that are closed in the parent are used afterwards. Look at
close(wpipefd[1]);
and the subsequent write to that closed descriptor. This is just transposition, but it shows this code has never been used. Below is a version that I have tested. Unfortunately, I changed the code style, so this was not accepted as an edit of eerpini's code.
The only structural change is that I only redirect the I/O in the child (note the dup2 calls are only in the child path.) This is very important, because otherwise the parent's I/O gets messed up. Thanks to eerpini for the initial answer, which I used in developing this one.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#define PIPE_READ 0
#define PIPE_WRITE 1
int createChild(const char* szCommand, char* const aArguments[], char* const aEnvironment[], const char* szMessage) {
int aStdinPipe[2];
int aStdoutPipe[2];
int nChild;
char nChar;
int nResult;
if (pipe(aStdinPipe) < 0) {
perror("allocating pipe for child input redirect");
return -1;
}
if (pipe(aStdoutPipe) < 0) {
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
perror("allocating pipe for child output redirect");
return -1;
}
nChild = fork();
if (0 == nChild) {
// child continues here
// redirect stdin
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1) {
exit(errno);
}
// redirect stdout
if (dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1) {
exit(errno);
}
// redirect stderr
if (dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1) {
exit(errno);
}
// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
// run child process image
// replace this with any exec* function you find easier to use ("man exec")
nResult = execve(szCommand, aArguments, aEnvironment);
// if we get here at all, an error occurred, but we are in the child
// process, so just exit
exit(nResult);
} else if (nChild > 0) {
// parent continues here
// close unused file descriptors, these are for child only
close(aStdinPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
// Include error check here
if (NULL != szMessage) {
write(aStdinPipe[PIPE_WRITE], szMessage, strlen(szMessage));
}
// Just a char by char read here, you can change it accordingly
while (read(aStdoutPipe[PIPE_READ], &nChar, 1) == 1) {
write(STDOUT_FILENO, &nChar, 1);
}
// done with these in this example program, you would normally keep these
// open of course as long as you want to talk to the child
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
} else {
// failed to create child
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
}
return nChild;
}
Since you want bidirectional access to the process, you would have to do explicitly with pipes what popen does behind the scenes. I am not sure if any of this changes in C++, but here is a pure C example:
void piped(char *str){
int wpipefd[2];
int rpipefd[2];
int defout, defin;
defout = dup(stdout);
defin = dup (stdin);
if(pipe(wpipefd) < 0){
perror("Pipe");
exit(EXIT_FAILURE);
}
if(pipe(rpipefd) < 0){
perror("Pipe");
exit(EXIT_FAILURE);
}
if(dup2(wpipefd[0], 0) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(dup2(rpipefd[1], 1) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(fork() == 0){
close(defout);
close(defin);
close(wpipefd[0]);
close(wpipefd[1]);
close(rpipefd[0]);
close(rpipefd[1]);
//Call exec here. Use the exec* family of functions according to your need
}
else{
if(dup2(defin, 0) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(dup2(defout, 1) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
close(defout);
close(defin);
close(wpipefd[1]);
close(rpipefd[0]);
//Include error check here
write(wpipefd[1], str, strlen(str));
//Just a char by char read here, you can change it accordingly
while(read(rpipefd[0], &ch, 1) != -1){
write(stdout, &ch, 1);
}
}
}
Effectively you do this :
Create pipes and redirect stdout and stdin to the ends of the two pipes (note that on Linux, pipe() creates unidirectional pipes, so you need two pipes for this purpose).
Exec will now start a new process which has the ends of the pipes for stdin and stdout.
Close the unused descriptors, write the string to the pipe and then start reading whatever the process might dump to the other pipe.
dup() creates a duplicate entry in the file descriptor table, while dup2() makes a given descriptor refer to the same open file as another.
Note: as mentioned by Ammo in his solution, what I provided above is more or less a template; it will not run if you just try to execute it, since the exec* call is clearly missing, so the child will terminate almost immediately after the fork().
Ammo's code has some error handling bugs. The child process is returning after dup failure instead of exiting. Perhaps the child dups can be replaced with:
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1 ||
dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1 ||
dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1
)
{
exit(errno);
}
// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);

popen simultaneous read and write [duplicate]

This question already has answers here:
Can popen() make bidirectional pipes like pipe() + fork()?
Is it possible to read from and write to a file descriptor returned by popen? I have an interactive process I'd like to control through C. If this isn't possible with popen, is there any way around it?
As already answered, popen works in one direction. If you need to read and write, you can create a pipe with pipe(), spawn a new process with fork() and the exec functions, and then redirect its input and output with dup2(). Anyway, I prefer exec over popen, as it gives you better control over the process (e.g. you know its pid).
EDITED:
As comments suggested, a pipe can be used in one direction only. Therefore you have to create separate pipes for reading and writing. Since the example posted before was wrong, I deleted it and created a new, correct one:
#include<unistd.h>
#include<sys/wait.h>
#include<sys/prctl.h>
#include<signal.h>
#include<stdlib.h>
#include<string.h>
#include<stdio.h>
int main(int argc, char** argv)
{
pid_t pid = 0;
int inpipefd[2];
int outpipefd[2];
char buf[256];
char msg[256];
int status;
pipe(inpipefd);
pipe(outpipefd);
pid = fork();
if (pid == 0)
{
// Child
dup2(outpipefd[0], STDIN_FILENO);
dup2(inpipefd[1], STDOUT_FILENO);
dup2(inpipefd[1], STDERR_FILENO);
//ask kernel to deliver SIGTERM in case the parent dies
prctl(PR_SET_PDEATHSIG, SIGTERM);
//replace tee with your process
execl("/usr/bin/tee", "tee", (char*) NULL);
// Nothing below this line should be executed by child process. If so,
// it means that the execl function wasn't successfull, so lets exit:
exit(1);
}
// The code below will be executed only by parent. You can write and read
// from the child using pipefd descriptors, and you can send signals to
// the process using its pid by kill() function. If the child process will
// exit unexpectedly, the parent process will obtain SIGCHLD signal that
// can be handled (e.g. you can respawn the child process).
//close unused pipe ends
close(outpipefd[0]);
close(inpipefd[1]);
// Now, you can write to outpipefd[1] and read from inpipefd[0] :
while(1)
{
printf("Enter message to send\n");
scanf("%s", msg);
if(strcmp(msg, "exit") == 0) break;
write(outpipefd[1], msg, strlen(msg));
ssize_t n = read(inpipefd[0], buf, sizeof(buf) - 1);
if (n <= 0) break;
buf[n] = '\0'; // terminate what was actually read before printing it
printf("Received answer: %s\n", buf);
}
kill(pid, SIGKILL); //send SIGKILL signal to the child process
waitpid(pid, &status, 0);
}
The reason popen() and friends don't offer bidirectional communication is that it would be deadlock-prone, due to buffering in the subprocess. All the makeshift pipework and socketpair() solutions discussed in the answers suffer from the same problem.
Under UNIX, most commands cannot be trusted to read one line and immediately process it and print it, except if their standard output is a tty. The reason is that stdio buffers output in userspace by default, and defers the write() system call until either the buffer is full or the stdio stream is closed (typically because the program or script is about to exit after having seen EOF on input). If you write to such a program's stdin through a pipe, and now wait for an answer from that program's stdout (without closing the ingress pipe), the answer is stuck in the stdio buffers and will never come out - This is a deadlock.
You can trick some line-oriented programs (e.g. grep) into not buffering by using a pseudo-tty to talk to them; take a look at libexpect(3). But in the general case, you would have to re-run a different subprocess for each message, allowing you to use EOF to signal the end of each message and cause whatever buffers in the command (or pipeline of commands) to be flushed. Obviously not a good thing performance-wise.
See more info about this problem in the perlipc man page (it's for bi-directional pipes in Perl but the buffering considerations apply regardless of the language used for the main program).
You want something often called popen2. Here's a basic implementation without error checking (found by a web search, not my code):
// http://media.unpythonic.net/emergent-files/01108826729/popen2.c
#include <sys/types.h>
#include <sys/wait.h>   // waitpid (used by the TESTING main below)
#include <signal.h>     // kill
#include <string.h>     // memset
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include "popen2.h"
int popen2(const char *cmdline, struct popen2 *childinfo) {
pid_t p;
int pipe_stdin[2], pipe_stdout[2];
if(pipe(pipe_stdin)) return -1;
if(pipe(pipe_stdout)) return -1;
//printf("pipe_stdin[0] = %d, pipe_stdin[1] = %d\n", pipe_stdin[0], pipe_stdin[1]);
//printf("pipe_stdout[0] = %d, pipe_stdout[1] = %d\n", pipe_stdout[0], pipe_stdout[1]);
p = fork();
if(p < 0) return p; /* Fork failed */
if(p == 0) { /* child */
close(pipe_stdin[1]);
dup2(pipe_stdin[0], 0);
close(pipe_stdout[0]);
dup2(pipe_stdout[1], 1);
execl("/bin/sh", "sh", "-c", cmdline, NULL);
perror("execl"); exit(99);
}
childinfo->child_pid = p;
childinfo->to_child = pipe_stdin[1];
childinfo->from_child = pipe_stdout[0];
close(pipe_stdin[0]);
close(pipe_stdout[1]);
return 0;
}
//#define TESTING
#ifdef TESTING
int main(void) {
char buf[1000];
struct popen2 kid;
popen2("tr a-z A-Z", &kid);
write(kid.to_child, "testing\n", 8);
close(kid.to_child);
memset(buf, 0, 1000);
read(kid.from_child, buf, 1000);
printf("kill(%d, 0) -> %d\n", kid.child_pid, kill(kid.child_pid, 0));
printf("from child: %s", buf);
printf("waitpid() -> %d\n", waitpid(kid.child_pid, NULL, 0));
printf("kill(%d, 0) -> %d\n", kid.child_pid, kill(kid.child_pid, 0));
return 0;
}
#endif
popen() can only open the pipe in read or write mode, not both. Take a look at this thread for a workaround.
In one of the netresolve backends I'm talking to a script, and therefore I need to write to its stdin and read from its stdout. The following function executes a command with stdin and stdout redirected to pipes. You can use it and adapt it to your liking.
static bool
start_subprocess(char *const command[], int *pid, int *infd, int *outfd)
{
int p1[2], p2[2];
if (!pid || !infd || !outfd)
return false;
if (pipe(p1) == -1)
goto err_pipe1;
if (pipe(p2) == -1)
goto err_pipe2;
if ((*pid = fork()) == -1)
goto err_fork;
if (*pid) {
/* Parent process. */
*infd = p1[1];
*outfd = p2[0];
close(p1[0]);
close(p2[1]);
return true;
} else {
/* Child process. */
dup2(p1[0], 0);
dup2(p2[1], 1);
close(p1[0]);
close(p1[1]);
close(p2[0]);
close(p2[1]);
execvp(*command, command);
/* Error occurred. */
fprintf(stderr, "error running %s: %s", *command, strerror(errno));
abort();
}
err_fork:
close(p2[1]);
close(p2[0]);
err_pipe2:
close(p1[1]);
close(p1[0]);
err_pipe1:
return false;
}
https://github.com/crossdistro/netresolve/blob/master/backends/exec.c#L46
(I used the same code in Can popen() make bidirectional pipes like pipe() + fork()?)
Use forkpty (it's non-standard, but the API is very nice, and you can always drop in your own implementation if you don't have it) and exec the program you want to communicate with in the child process.
Alternatively, if tty semantics aren't to your liking, you could write something like forkpty but using two pipes, one for each direction of communication, or using socketpair to communicate with the external program over a unix socket.
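A minimal sketch of the forkpty route, assuming Linux/glibc (<pty.h>, link with -lutil); driving bc here is just an illustrative choice of interactive program:
#include <pty.h>        // forkpty (link with -lutil on glibc)
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int master;                                   // parent's end of the pseudo-tty
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0) { perror("forkpty"); return 1; }
    if (pid == 0) {                               // child: stdin/stdout/stderr are the pty slave
        execlp("bc", "bc", "-q", (char *)NULL);
        _exit(127);
    }
    write(master, "2+2\n", 4);                    // send a line to the child
    char buf[256];
    ssize_t n = read(master, buf, sizeof(buf) - 1);   // note: the pty echoes input, so this may
    if (n > 0) {                                      // contain the echoed "2+2" before the "4"
        buf[n] = '\0';
        printf("got: %s", buf);
    }
    return 0;
}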
You can't use popen to use two-way pipes.
In fact, some OSs don't support two-way pipes, in which case a socket-pair (socketpair) is the only way to do it.
popen works for me in both directions (read and write).
I have been using a popen() pipe in both directions,
reading and writing a child process's stdin and stdout with the file descriptor returned by popen(command, "w").
It seems to work fine.
I assumed it would work before I knew better, and it does.
According to the posts above this shouldn't work, which worries me a little bit.
gcc on Raspbian (Raspberry Pi Debian)