This question follows from my attempts to implement
http://www.microhowto.info/howto/capture_the_output_of_a_child_process_in_c.html
and
https://linux.die.net/man/2/pipe
I'm writing a shell program; the intention is that, eventually, it can execute commands and pipe their output to another program. As such, I need to capture the stdout of a child process directly, rather than letting it go to the terminal. I attempted to follow the guides above, but I have a problem: the pipe is always empty. It just doesn't work, and I have no clue why. Here's my code:
int pipefd[2];
pid_t pid;
pid = fork();
char buf;
const char* arg = "/bin/ls";
char *args[] = {"/bin/ls", (char *) 0};
if (pipe(pipefd) == -1) {
perror("pipe");
exit(EXIT_FAILURE);
}
if(pid<0) {
std::cout << "Fork() failed!." << std::endl;
exit(EXIT_FAILURE);
} else if (pid == 0) { //According to everything I could find on the internet, pipe should work.
dup2(pipefd[1], STDOUT_FILENO); // It does not. I don't know why.
close(pipefd[1]);
close(pipefd[0]);
execv(arg, args);
std::cout << "Child Error! " << errno << std::endl;
perror("execv");
exit(EXIT_FAILURE);
} else {
close(pipefd[1]);
wait(NULL);
while (read(pipefd[0], &buf, 1) > 0){
write(STDOUT_FILENO, &buf, 1);
}
write(STDOUT_FILENO, "\n", 1);
close(pipefd[0]);
}
I'm on a laptop running Ubuntu 15.04.
Also, the pipe DOES work if I write and read within a single process.
Edit: the execv() itself also works; if I remove the dup2(), the output goes straight to the terminal.
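For comparison, here is a minimal sketch (not from the original post) of the same flow with pipe() called before fork(). In the snippet above, fork() runs before pipe(), so the parent and the child each create their own, unrelated pipe, and the one the parent reads from is never written to:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main() {
    int pipefd[2];
    if (pipe(pipefd) == -1) {            // create the pipe BEFORE forking
        perror("pipe");
        return EXIT_FAILURE;
    }
    pid_t pid = fork();                  // both processes now share pipefd
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                      // child: stdout -> write end of the pipe
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[1]);
        close(pipefd[0]);
        char *args[] = { (char *)"/bin/ls", (char *)0 };
        execv("/bin/ls", args);
        perror("execv");                 // only reached if execv failed
        _exit(EXIT_FAILURE);
    }
    close(pipefd[1]);                    // parent: close unused write end, then read until EOF
    char buf;
    while (read(pipefd[0], &buf, 1) > 0)
        write(STDOUT_FILENO, &buf, 1);
    close(pipefd[0]);
    wait(NULL);                          // reap the child
    return 0;
}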
I apologize if everything is not perfect ;)
I am writing a C++ program that, when it receives information from a sensor, displays a picture full screen with feh.
The problem is that every time I want to go from one image to the next, a new instance of feh is opened, until the computer eventually crashes because all the memory is used up...
How can I make opening a new image close the previous one?
This is my current command line:
system("feh -F ressources/icon_communication.png&");
I should mention that I also trigger a sound, but that one is not a problem, because it stops by itself when the sound ends:
system("paplay /home/pi/demo_ecran_interactif/ressources/swip.wav&");
Tried this as a test and it works! Thanks @paul-sanders!
#include <iostream>
#include <chrono>
#include <thread>
#include <unistd.h>
#include <signal.h>
using namespace std;
pid_t display_image_file (const char *image_file)
{
pid_t pid = fork ();
if (pid == -1)
{
std::cout << "Could not fork, error: " << errno << "\n";
return -1;
}
if (pid != 0) // parent
return pid;
// child
execlp ("feh", "-F", image_file, NULL); // only returns on failure
std::cout << "Couldn't exec feh for image file " << image_file << ", error: " << errno << "\n";
return -1;
}
int main()
{
pid_t pid = display_image_file ("nav.png");
if (pid != -1)
{
std::this_thread::sleep_for (std::chrono::milliseconds (2000));
kill (pid, SIGKILL);
}
pid_t pid2 = display_image_file ("sms2.png");
}
Soooooooooo, the goal here (in terms of your test program) seems to be:
display nav.png in feh
wait 2 seconds
close (that instance of) feh
display sms2.png in feh
And if you can get the test program doing that then you will be on your way (I'm not going to worry my pretty little head about your sound issue (because it's 30+ degrees here today), but once you have your test program running right then you will probably be able to figure out how to solve that one yourself).
So, two issues that I see in your code here:
you're not making any effort to close the first instance of 'feh'
execlp() doesn't do quite what you probably think it does (specifically, it never returns, unless it fails for some reason).
So what I think you need to do is something like this (code untested, might not even compile and you need to figure out the right header files to #include, but it should at least get you going):
pid_t display_image_file (const char *image_file)
{
pid_t pid = fork ();
if (pid == -1)
{
std::cout << "Could not fork, error: " << errno << "\n";
return -1;
}
if (pid != 0) // parent
return pid;
// child
execlp ("feh", "-F", image_file, NULL); // only returns on failure
std::cout << "Couldn't exec feh for image file " << image_file << ", error: " << errno << "\n";
return -1;
}
int main()
{
pid_t pid = display_image_file ("nav.png");
if (pid != -1)
{
std::this_thread::sleep_for (std::chrono::milliseconds (2000));
kill (pid, SIGKILL);
}
pid_t pid = display_image_file ("sms2.png");
// ...
}
Does that help?
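One follow-up worth noting (it isn't covered above): after kill(), the old feh instance stays in the process table as a zombie until the parent reaps it, so long-running use would still accumulate defunct entries. Here is a small, untested sketch of a hypothetical switch_image() helper that reaps the previous instance before starting the next one, reusing display_image_file() from above:

#include <sys/types.h>
#include <sys/wait.h>   // waitpid
#include <signal.h>     // kill

pid_t display_image_file (const char *image_file);   // defined in the code above

// Hypothetical helper: kill and reap the previous feh (if any), then start the next image.
// SIGTERM would be a gentler choice than SIGKILL if feh should clean up after itself.
pid_t switch_image (pid_t previous, const char *image_file)
{
    if (previous > 0)
    {
        kill (previous, SIGKILL);
        waitpid (previous, nullptr, 0);   // reap it so it doesn't linger as a zombie
    }
    return display_image_file (image_file);
}

Each sensor event would then just call pid = switch_image (pid, "sms2.png"); with whatever image applies.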
I'm redirecting output from a child process:
int pipefd[2];
pipe(pipefd);
pid_t pid = fork(); /* Create a child process */
switch (pid) {
case -1: /* Error */
cout << "Uh-Oh! fork() failed.\n";
exit(1);
case 0: /* Child process */
close(pipefd[0]);
dup2(pipefd[1], 1);
dup2(pipefd[1], 2);
close(pipefd[1]);
execv(args[0], (char * const *)args);
cout << "execv() error" << endl;
exit(1);
default: /* Parent process */
close(pipefd[1]);
char buffer[1024];
ssize_t bytes_read = 0;
bytes_read = read(pipefd[0], buffer, sizeof(buffer));
if(bytes_read == -1) {
cout << "read() error" << endl;
exit(1);
}
close(pipefd[0]);
if(bytes_read > 0) {
buffer[bytes_read-1] = '\0'; // Overwrite the newline
}
int status, exit_pid;
while(true) {
exit_pid = waitpid(pid, &status, 0);
if(exit_pid == -1) {
cout << "waitpid() error: " << strerror(errno) << endl;
exit(1);
}
else {
return WEXITSTATUS(status);
}
}
}
This works fine when I run it as an isolated piece of code. But when I integrate it into my multithreaded environment, a horrible thing happens: the read() call somehow reads output from other threads of the parent process, as if it were the output coming through the child process's pipe.
Has anyone encountered such a thing?
I'm on OS X.
Well, I have a solution, even though I don't completely understand why this happened.
But first, it should be clear that this behavior is neither normal nor expected. A child process created with fork() does not inherit any running threads from its parent (so the unexpected output must have come from threads in the parent), and it has its own descriptor table. So when the child process calls dup2() to alter its output descriptors, that shouldn't have any effect on the threads in the parent process.
The problem occurred only in cases where the execv() call failed. In those cases, I expected the termination of the child process to close all its file descriptors, but that didn't happen, or at least it didn't have the same effect as calling close() explicitly. Adding explicit close() calls after execv() solved the problem:
execv(args[0], (char * const *)args);
close(1);
close(2);
exit(1);
Closing the write-end descriptor of the pipe is what causes the read operation on the read end to return 0, which is how the reader knows there is nothing more to read.
However, I still don't know the following:
Why isn't the call to exit() in the child process equivalent to explicitly calling close()?
Even if the pipe's write end isn't closed, why does reading from the read end produce output from threads in the parent process, instead of blocking or returning some error?
If anybody can shed light on this, it will be appreciated.
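A general side note on pipes in multithreaded programs (this is a possible contributing factor, not something established by the post): any pipe descriptor that is open at the moment another thread forks and execs is inherited by that unrelated child, which can keep the write end alive (so the reader never sees EOF) and, depending on what that child does with its descriptors, put surprising data on the pipe. A common mitigation is to mark the pipe descriptors close-on-exec as soon as they are created; the descriptors dup2()'d onto stdout/stderr in the intended child are unaffected, because dup2() clears the close-on-exec flag on the new descriptor. A rough sketch (on Linux, pipe2() with O_CLOEXEC does the same thing atomically; fcntl() is shown because the post is on OS X):

#include <fcntl.h>
#include <unistd.h>

// Sketch: mark both pipe ends close-on-exec right after creating them, so they are
// not leaked into processes exec'd by other threads.
int make_cloexec_pipe (int pipefd[2])
{
    if (pipe (pipefd) == -1)
        return -1;
    fcntl (pipefd[0], F_SETFD, FD_CLOEXEC);
    fcntl (pipefd[1], F_SETFD, FD_CLOEXEC);
    return 0;
}

// In the intended child, dup2(pipefd[1], 1) and dup2(pipefd[1], 2) still work as before:
// descriptors 1 and 2 do not carry FD_CLOEXEC, so they survive the execv().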
I've got a function for starting a process and then returning its stdout and exit code. However, I've noticed that it claims every process returns an exit code of 1. I control the executable being invoked, and I had it print its exit code to stdout, so I've confirmed that when it "failed", it in fact returned 0 from main. I also invoked the executable directly from the shell and confirmed the expected stdout and exit code (0). So the fault must lie on the side of the caller. I've also confirmed that WIFEXITED doesn't return false; it returns true, as if the child had exited normally (which it did).
This code worked fine before I needed to capture stdout, so it must have something to do with that. I tried looking into the "child has already terminated" issue, but that's not occurring in this case: waitpid() behaves exactly like I'd expect and just doesn't care that the child might have already terminated while I was nomming up the stdout.
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <iostream>
Wide::Driver::ProcessResult Wide::Driver::StartAndWaitForProcess(std::string name, std::vector<std::string> args, Util::optional<unsigned> timeout) {
int filedes[2];
pipe(filedes);
pid_t pid = fork();
if (pid == 0) {
while ((dup2(filedes[1], STDOUT_FILENO) == -1) && (errno == EINTR)) {}
freopen("/dev/null", "rw", stdin);
freopen("/dev/null", "rw", stderr);
//close(filedes[0]);
std::vector<const char*> cargs;
cargs.push_back(name.c_str());
for (auto&& arg : args)
cargs.push_back(arg.c_str());
cargs.push_back(nullptr);
execv(name.c_str(), const_cast<char* const*>(&cargs[0]));
}
std::string std_out;
close(filedes[1]);
char buffer[4096];
while (1) {
ssize_t count = read(filedes[0], buffer, sizeof(buffer));
if (count == -1) {
if (errno == EINTR) {
continue;
} else {
perror("read");
exit(1);
}
} else if (count == 0) {
break;
} else {
std_out += std::string(buffer, buffer + count);
}
}
close(filedes[0]);
int status;
ProcessResult result;
result.std_out = std_out;
waitpid(pid, &status, 0);
if (!WIFEXITED(status))
result.exitcode = 1;
else {
result.exitcode = WEXITSTATUS(status);
if (result.exitcode != 0) {
std::cout << name << " failed with code " << result.exitcode << "\n";
std::cout << "stdout: " << result.std_out;
}
}
return result;
}
Why on earth is waitpid() giving me this strange result and how can I fix it?
I've confirmed on IRC that it is an LLVM issue. The exit code the process printed out is what I returned from main; a static destructor or other such code can still run afterwards and call exit(1). This is triggered by redirecting stderr, so you basically can't see the error message, and if you don't redirect stderr, the problem doesn't occur at all. That's why executing from the shell, where stderr is not redirected, looks fine.
So, despite the shell and my own return statement agreeing, the process was in fact returning an exit code of 1.
Apparently the issue is resolved in trunk, or should be, but I am still using 3.6.
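One hardening note on the child branch in the code above (an editorial aside, not part of the original fix): if execv() itself fails, the child simply falls out of the if (pid == 0) block and continues running the parent-side reading code in a second process. A conventional way to end that branch looks something like this (perror() needs <cstdio>):

    execv(name.c_str(), const_cast<char* const*>(&cargs[0]));
    // Only reached if execv() failed: report and leave immediately, without falling
    // through into the parent-side code or running atexit()/static-destructor handlers.
    perror("execv");
    _exit(127);   // 127 is the conventional "could not exec" status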
I have a multi-threaded C++03 application that presently uses popen() to invoke itself (same binary) and ssh (different binary) again in a new process and reads the output. However, when porting to the Android NDK this poses some issues, such as not having permission to access ssh, so I'm linking Dropbear ssh into my application to try to avoid that issue. Further, my current popen solution requires that stdout and stderr be merged together into a single FD, which is a bit messy, and I'd like to stop doing that.
I would think the pipe code could be simplified by using fork() instead, but I wonder how to drop all of the parent's stack/memory, which is not needed in the child of the fork. Here is a snippet of the old working code:
#include <iostream>
#include <stdio.h>
#include <string>
#include <errno.h>
using std::endl;
using std::cerr;
using std::cout;
using std::string;
void
doPipe()
{
// Redirect stderr to stdout with '2>&1' so that we see any error messages
// in the pipe output.
const string selfCmd = "/path/to/self/binary arg1 arg2 arg3 2>&1";
FILE *fPtr = ::popen(selfCmd.c_str(), "r");
const int bufSize = 4096;
char buf[bufSize + 1];
if (fPtr == NULL) {
cerr << "Failed attempt to popen '" << selfCmd << "'." << endl;
} else {
cout << "Result of: '" << selfCmd << "':\n";
while (true) {
if (::fgets(buf, bufSize, fPtr) == NULL) {
if (!::feof(fPtr)) {
cerr << "Failed attempt to fgets '" << selfCmd << "'." << endl;
}
break;
} else {
cout << buf;
}
}
if (pclose(fPtr) == -1) {
if (errno != 10) {
cerr << "Failed attempt to pclose '" << selfCmd << "'." << endl;
}
}
cout << "\n";
}
}
So far, this is loosely what I have done to convert to fork(), but fork() needlessly duplicates the entire parent process's memory space. Further, it does not quite work, because the parent never sees EOF on the outFD it is reading from the pipe. Where else do I need to close the FDs for this to work? And how can I do something like execlp() without supplying a binary path (not easily available on Android), but instead start over with the same binary, a fresh image, and new args?
#include <iostream>
#include <stdio.h>
#include <string>
#include <errno.h>
using std::endl;
using std::cerr;
using std::cout;
using std::string;
int
selfAction(int argc, char *argv[], int &outFD, int &errFD)
{
pid_t childPid; // Process id used for current process.
// fd[0] is the read end of the pipe and fd[1] is the write end of the pipe.
int fd[2]; // Pipe for normal communication between parent/child.
int fdErr[2]; // Pipe for error communication between parent/child.
// Create a pipe for IPC between child and parent.
const int pipeResult = pipe(fd);
if (pipeResult) {
cerr << "selfAction normal pipe failed: " << errno << ".\n";
return -1;
}
const int errorPipeResult = pipe(fdErr);
if (errorPipeResult) {
cerr << "selfAction error pipe failed: " << errno << ".\n";
return -1;
}
// Fork - error.
if ((childPid = fork()) < 0) {
cerr << "selfAction fork failed: " << errno << ".\n";
return -1;
} else if (childPid == 0) { // Fork -> child.
// Close read end of pipe.
::close(fd[0]);
::close(fdErr[0]);
// Close stdout and set fd[1] to it, this way any stdout of the child is
// piped to the parent.
::dup2(fd[1], STDOUT_FILENO);
::dup2(fdErr[1], STDERR_FILENO);
// Close write end of pipe.
::close(fd[1]);
::close(fdErr[1]);
// Exit child process.
exit(main(argc, argv));
} else { // Fork -> parent.
// Close write end of pipe.
::close(fd[1]);
::close(fdErr[1]);
// Provide fd's to our caller for stdout and stderr:
outFD = fd[0];
errFD = fdErr[0];
return 0;
}
}
void
doFork()
{
int argc = 4;
char *argv[4] = { "/path/to/self/binary", "arg1", "arg2", "arg3" };
int outFD = -1;
int errFD = -1;
int result = selfAction(argc, argv, outFD, errFD);
if (result) {
cerr << "Failed to execute selfAction." << endl;
return;
}
FILE *outFile = fdopen(outFD, "r");
FILE *errFile = fdopen(errFD, "r");
const int bufSize = 4096;
char buf[bufSize + 1];
if (outFile == NULL) {
cerr << "Failed attempt to open fork file." << endl;
return;
} else {
cout << "Result:\n";
while (true) {
if (::fgets(buf, bufSize, outFile) == NULL) {
if (!::feof(outFile)) {
cerr << "Failed attempt to fgets." << endl;
}
break;
} else {
cout << buf;
}
}
if (::close(outFD) == -1) {
if (errno != 10) {
cerr << "Failed attempt to close." << endl;
}
}
cout << "\n";
}
if (errFile == NULL) {
cerr << "Failed attempt to open fork file err." << endl;
return;
} else {
cerr << "Error result:\n";
while (true) {
if (::fgets(buf, bufSize, errFile) == NULL) {
if (!::feof(errFile)) {
cerr << "Failed attempt to fgets err." << endl;
}
break;
} else {
cerr << buf;
}
}
if (::close(errFD) == -1) {
if (errno != 10) {
cerr << "Failed attempt to close err." << endl;
}
}
cerr << "\n";
}
}
There are two kinds of child processes created in this fashion with different tasks in my application:
SSH to another machine and invoke a server that will communicate back to the parent that is acting as a client.
Compute a signature, delta, or merge file using rsync.
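As an aside (not from the original post): on Linux and Android, the running executable can be re-executed without knowing its installation path via /proc/self/exe; the exec replaces the child's image entirely, which also drops the copied parent memory. A rough, untested sketch of what the child branch could do instead of exit(main(argc, argv)); note that execv() needs a NULL-terminated argument vector:

    // Hypothetical replacement for the child branch of selfAction():
    char *child_argv[] = { argv[0], argv[1], argv[2], argv[3], (char *) 0 };
    ::execv ("/proc/self/exe", child_argv);   // restart this same binary with a fresh image
    ::perror ("execv");                       // only reached if the exec failed
    ::_exit (127);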
First of all, popen is a very thin wrapper on top of fork() followed by exec() [plus some calls to pipe(), dup2() and so on to manage the ends of a pipe].
Second, the memory is only duplicated in the form of "copy-on-write" pages, meaning that unless one of the processes writes to some page, the actual physical memory is shared between the two processes.
It does mean, of course, that the OS has to create a memory map with 4-8 bytes per 4KB [in typical cases], probably plus some internal OS data to track how many copies there are of each page; but as long as a page remains the same as the parent's, the child uses the parent process's internal data for it. Compared to everything else involved in creating a new process and loading an executable file into it, this is a pretty small part of the time. Since you are almost immediately doing exec, not much of the parent process's memory will be touched, so very little copying will actually happen.
My advice would be that if popen works, keep using popen. If popen doesn't quite do what you want for some reason, then use fork + exec - but make sure you know what the reason for doing so is.
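To illustrate the "thin wrapper" point, here is roughly what popen(command, "r") amounts to internally; this is a simplified sketch, not the actual C library source (the real popen() also records the child pid so that pclose() can wait for it):

#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

// Simplified sketch of popen(command, "r"): pipe + fork + dup2 + exec of a shell.
FILE *popen_read_sketch (const char *command)
{
    int fd[2];
    if (pipe (fd) == -1)
        return NULL;
    pid_t pid = fork ();
    if (pid == -1) {
        close (fd[0]);
        close (fd[1]);
        return NULL;
    }
    if (pid == 0) {                       // child: stdout -> write end, then run the shell
        close (fd[0]);
        dup2 (fd[1], STDOUT_FILENO);
        close (fd[1]);
        execl ("/bin/sh", "sh", "-c", command, (char *) NULL);
        _exit (127);                      // only reached if the exec failed
    }
    close (fd[1]);                        // parent: wrap the read end in a FILE*
    return fdopen (fd[0], "r");
}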
Possible Duplicate:
bash: force exec’d process to have unbuffered stdout
I need to read a binary's (fceux, an NES emulator) stdout to get some info, and kill it when I receive particular input, for a genetic algorithm I'm experimenting with. The problem is that the program doesn't flush its output, so I receive the output only when the process ends; but it never ends, because I'm the one who is supposed to kill it.
So is there a way to read the child's unflushed buffer? A solution outside C++ would also do, so that I could add some flushing and then finally read it in C++ (but that's getting a little dirty). I've also tried Python, but didn't find a way to do it there either.
Here is a chunk of my code:
int fd[2];
pid_t pid;
pipe (fd);
if ((pid = fork ()) == 0)
{
close (fd[0]);
dup2 (fd[1], STDOUT_FILENO);
execl ("/usr/bin/fceux", "fceux", "Super Mario Bros.zip", NULL)
perror ("fork");
}
else
{
close (fd[1]);
char buf[1];
std::string res;
while (read (fd[0], buf, 1) > 0)
{
std::cout << "Read" << std::endl;
res += buf[0]; // buf is a single char and not NUL-terminated, so append the character itself
if (res.find ("score") != std::string::npos)
{
std::cout << "KILL" << std::endl;
kill (pid, SIGKILL);
}
}
close (fd[0]);
}
return 0;
Buffering set with setbuf(stdout, NULL) just before execl() does not survive the exec: the new program reinitializes its own stdio, and because its stdout is a pipe rather than a terminal it will be fully buffered. The usual workarounds are to give the child a pseudo-terminal to write to (so its stdout is line-buffered, as it would be on a tty), or to launch it through a wrapper such as stdbuf -o0 or unbuffer.
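Here is a sketch of the pseudo-terminal approach, assuming glibc (forkpty() is declared in <pty.h> and needs -lutil at link time) and assuming fceux leaves the default stdio buffering in place; untested:

#include <pty.h>        // forkpty()
#include <unistd.h>
#include <signal.h>
#include <sys/wait.h>
#include <string>

int main ()
{
    int master;
    pid_t pid = forkpty (&master, NULL, NULL, NULL);   // child's stdin/stdout/stderr become a pty
    if (pid == 0)
    {
        execl ("/usr/bin/fceux", "fceux", "Super Mario Bros.zip", (char *) NULL);
        _exit (127);                                   // only reached if the exec fails
    }
    char buf[256];
    std::string res;
    ssize_t n;
    while ((n = read (master, buf, sizeof buf)) > 0)   // output arrives promptly, because the
    {                                                  // child's stdio sees a terminal and line-buffers
        res.append (buf, n);
        if (res.find ("score") != std::string::npos)
        {
            kill (pid, SIGKILL);
            break;
        }
    }
    close (master);
    waitpid (pid, NULL, 0);                            // reap the child
    return 0;
}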