Possible duplicate: bash: force exec'd process to have unbuffered stdout
I need to read a binary's (fceux, the NES emulator) stdout to get some info, and kill it when I receive special input, for a genetic algorithm I'm experimenting with. So basically, the problem is that the program doesn't flush its output, so I only receive the output when the process ends, but it never ends because I'm supposed to kill it.
So is there a way to read the child's unflushed buffer? It doesn't even have to be in C++; I could add some flush elsewhere and then finally read it in C++ (but that's getting a little dirty). I also tried Python, but didn't find a way to do it there either.
Here is a chunk of my code:
#include <cstdio>
#include <iostream>
#include <signal.h>
#include <string>
#include <unistd.h>

int main ()
{
    int fd[2];
    pid_t pid;
    pipe (fd);
    if ((pid = fork ()) == 0)
    {
        close (fd[0]);
        dup2 (fd[1], STDOUT_FILENO);
        execl ("/usr/bin/fceux", "fceux", "Super Mario Bros.zip", (char *) NULL);
        perror ("execl");   // only reached if execl fails
        _exit (1);
    }
    else
    {
        close (fd[1]);
        char buf[1];
        std::string res;
        while (read (fd[0], buf, 1) > 0)
        {
            std::cout << "Read" << std::endl;
            res += buf[0];   // buf is not null-terminated, so append the single char
            if (res.find ("score") != std::string::npos)
            {
                std::cout << "KILL" << std::endl;
                kill (pid, SIGKILL);
            }
        }
        close (fd[0]);
    }
    return 0;
}
Call setbuf(stdout, NULL) just before execl(). It makes stdout unbuffered.
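If stdout is still block-buffered after the exec (the new program sets up its own stdio buffering when it starts), another workaround, not part of the answer above, is to launch the emulator through GNU coreutils' stdbuf, which forces the exec'd program's stdio output to be unbuffered. A sketch of just the child branch, assuming stdbuf is installed and that fceux writes its output through stdio:
// Hedged alternative: run fceux under stdbuf so its stdout is unbuffered.
// Only the child branch changes; execlp() finds stdbuf via PATH.
close (fd[0]);
dup2 (fd[1], STDOUT_FILENO);
execlp ("stdbuf", "stdbuf", "-o0",                 // -o0 = unbuffered stdout
        "/usr/bin/fceux", "Super Mario Bros.zip", (char *) NULL);
perror ("execlp");   // only reached if execlp fails
_exit (1);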
Possible duplicate: How to construct a c++ fstream from a POSIX file descriptor?
I'm new to programming, and I'm trying to write a C++ program for Linux that creates a child process, which in turn executes an external program. The output of this program should be redirected to the main program and saved into a string variable, preserving all spaces and new lines. I don't know how many lines or characters the output will contain.
This is the basic idea:
#include <iostream>
#include <string>
#include <cstring>
#include <unistd.h>
#include <sys/wait.h>
#include <cerrno>      // for errno
int main()
{
int pipeDescriptors[2];
pipe(pipeDescriptors);
pid_t pid = fork();
if (pid == -1)
{
std::cerr << __LINE__ << ": fork() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
else if (!pid)
{
// Child process
close(pipeDescriptors[0]); // Not gonna read from here
if (dup2(pipeDescriptors[1], STDOUT_FILENO) == -1) // Redirect output to the pipe
{
std::cerr << __LINE__ << ": dup2() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
close(pipeDescriptors[1]); // Not needed anymore
execlp("someExternalProgram", "someExternalProgram", NULL);
// execlp() only returns on failure
std::cerr << __LINE__ << ": execlp() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
else
{
// Parent process
close(pipeDescriptors[1]); // Not gonna write here
int stdIn = dup(STDIN_FILENO); // Save the standard input (a plain file descriptor) for later restoration
if (dup2(pipeDescriptors[0], STDIN_FILENO) == -1) // Redirect input to the pipe
{
std::cerr << __LINE__ << ": dup2() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
close(pipeDescriptors[0]); // Not needed anymore
int childExitCode;
wait(&childExitCode);
if (childExitCode == 0)
{
std::string childOutput;
char c;
while (std::cin.read(&c, sizeof(c)))
{
childOutput += c;
}
// Do something with childOutput...
}
if (dup2(stdIn, STDIN_FILENO) == -1) // Restore the standard input
{
std::cerr << __LINE__ << ": dup2() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
// Some further code goes here...
}
return 0;
}
The problem with the above code is that when std::cin.read() reads the last byte in the input stream, it doesn't actually "know" that this byte is the last one and tries to read further, which sets failbit and eofbit on std::cin, so I cannot read from the standard input later anymore. std::cin.clear() resets those flags, but stdin still remains unusable.
If I could get the precise size in bytes of the stdin content without going beyond the last character in the stream, I would be able to use std::cin.read() to read exactly that many bytes into a string variable. But I guess there is no way to do that.
So how can I solve this problem? Should I use an intermediate file for writing the output of the child process into it and reading it later from the parent process?
The child process writes into the pipe, but the parent doesn't read the pipe until the child process terminates. If the child writes more than the pipe buffer size, it blocks waiting for the parent to read the pipe, while the parent is blocked waiting for the child to terminate, leading to a deadlock.
To avoid that, the parent process must keep reading the pipe until EOF and only then call wait to get the child process's exit status.
E.g.:
// Read the entire child output.
std::string child_stdout{std::istreambuf_iterator<char>{std::cin},
                         std::istreambuf_iterator<char>{}};
// Only then get the child exit status.
int childExitCode;
if (wait(&childExitCode) == -1)
    std::abort(); // wait failed
You may also like to open a new istream from the pipe file descriptor to avoid messing up std::cin's state.
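There is no portable way to build a stream from a raw POSIX file descriptor, but with GCC/libstdc++ the stdio_filebuf extension (see the duplicate linked above) keeps it short. A sketch, assuming GCC, where fd would be the pipe's read end (pipeDescriptors[0]) kept open instead of being dup2'd onto standard input:
// GCC-specific sketch: wrap the pipe's read end in its own istream instead of
// redirecting std::cin. __gnu_cxx::stdio_filebuf is a libstdc++ extension.
#include <ext/stdio_filebuf.h>
#include <istream>
#include <iterator>
#include <string>

std::string read_all(int fd)
{
    __gnu_cxx::stdio_filebuf<char> buf(fd, std::ios::in);
    std::istream in(&buf);
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}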
My apologies if everything is not perfect ;)
I am writing a C++ program that, when it receives information from a sensor, shows a picture full screen with feh.
The problem is that when I want to go from one image to another, it opens a new feh each time, until the computer crashes because all the memory is used up...
How can I make opening a new image close the previous one?
This is my current command line:
system("feh -F ressources/icon_communication.png&");
I should mention that I also trigger a sound, but that is not a problem because the player closes automatically at the end of the sound:
system("paplay /home/pi/demo_ecran_interactif/ressources/swip.wav&");
Tried this as a test and it works! Thanks @paul-sanders!
#include <iostream>
#include <chrono>
#include <thread>
#include <unistd.h>
#include <signal.h>
#include <cerrno>
using namespace std;
pid_t display_image_file (const char *image_file)
{
pid_t pid = fork ();
if (pid == -1)
{
std::cout << "Could not fork, error: " << errno << "\n";
return -1;
}
if (pid != 0) // parent
return pid;
// child
execlp ("feh", "feh", "-F", image_file, (char *) NULL); // only returns on failure; first "feh" is argv[0]
std::cout << "Couldn't exec feh for image file " << image_file << ", error: " << errno << "\n";
return -1;
}
int main()
{
pid_t pid = display_image_file ("nav.png");
if (pid != -1)
{
std::this_thread::sleep_for (std::chrono::milliseconds (2000));
kill (pid, SIGKILL);
}
pid_t pid2 = display_image_file ("sms2.png");
}
Soooooooooo, the goal here (in terms of your test program) seems to be:
display nav.png in feh
wait 2 seconds
close (that instance of) feh
display sms2.png in feh
And if you can get the test program doing that, you will be on your way. I'm not going to worry my pretty little head about your sound issue (it's 30+ degrees here today), but once you have your test program running right, you will probably be able to figure out how to solve that one yourself.
So, two issues that I see in your code here:
you're not making any effort to close the first instance of 'feh'
execlp() doesn't do quite what you probably think it does (specifically, it never returns, unless it fails for some reason).
So what I think you need to do is something like this (code untested, might not even compile and you need to figure out the right header files to #include, but it should at least get you going):
pid_t display_image_file (const char *image_file)
{
pid_t pid = fork ();
if (pid == -1)
{
std::cout << "Could not fork, error: " << errno << "\n";
return -1;
}
if (pid != 0) // parent
return pid;
// child
execlp ("feh", "feh", "-F", image_file, (char *) NULL); // only returns on failure; first "feh" is argv[0]
std::cout << "Couldn't exec feh for image file " << image_file << ", error: " << errno << "\n";
return -1;
}
int main()
{
pid_t pid = display_image_file ("nav.png");
if (pid != -1)
{
std::this_thread::sleep_for (std::chrono::milliseconds (2000));
kill (pid, SIGKILL);
}
pid = display_image_file ("sms2.png"); // reuse pid; redeclaring it here would not compile
// ...
}
Does that help?
This question follows from my attempts to implement
http://www.microhowto.info/howto/capture_the_output_of_a_child_process_in_c.html
and
https://linux.die.net/man/2/pipe
I'm writing a shell program; the intention is that, eventually, it can execute commands and pipe them to another program. As such, I require the stdout of a child process directly, rather than outputting to terminal. I attempted to use the above guides, but I have a problem: The pipe is always empty. It just doesn't work. I have absolutely no clue why. Here's my code:
int pipefd[2];
pid_t pid;
pid = fork();
char buf;
const char* arg = "/bin/ls";
char *args[] = {"/bin/ls", (char *) 0};
if (pipe(pipefd) == -1) {
perror("pipe");
exit(EXIT_FAILURE);
}
if(pid<0) {
std::cout << "Fork() failed!." << std::endl;
exit(EXIT_FAILURE);
} else if (pid == 0) { //According to everything I could find on the internet, pipe should work.
dup2(pipefd[1], STDOUT_FILENO); // It does not. I don't know why.
close(pipefd[1]);
close(pipefd[0]);
execv(arg, args);
std::cout << "Child Error! " << errno << std::endl;
perror("execv");
exit(EXIT_FAILURE);
} else {
close(pipefd[1]);
wait(NULL);
while (read(pipefd[0], &buf, 1) > 0){
write(STDOUT_FILENO, &buf, 1);
}
write(STDOUT_FILENO, "\n", 1);
close(pipefd[0]);
}
I'm on a laptop with Ubuntu 15.04
Also, the pipe DOES work if I write and read within a single process.
Edit: Also, the execv does work: if I remove the dup2, the output goes directly to the terminal.
I'm redirecting output from a child process:
int pipefd[2];
pipe(pipefd);
pid_t pid = fork(); /* Create a child process */
switch (pid) {
case -1: /* Error */
cout << "Uh-Oh! fork() failed.\n";
exit(1);
case 0: /* Child process */
close(pipefd[0]);
dup2(pipefd[1], 1);
dup2(pipefd[1], 2);
close(pipefd[1]);
execv(args[0], (char * const *)args);
cout << "execv() error" << endl;
exit(1);
default: /* Parent process */
close(pipefd[1]);
char buffer[1024];
ssize_t bytes_read = 0; // read() returns ssize_t, so -1 can be checked for
bytes_read = read(pipefd[0], buffer, sizeof(buffer));
if(bytes_read == -1) {
cout << "read() error" << endl;
exit(1);
}
close(pipefd[0]);
if(bytes_read > 0) {
buffer[bytes_read-1] = '\0'; // Overwrite the newline
}
int status, exit_pid;
while(true) {
exit_pid = waitpid(pid, &status, 0);
if(exit_pid == -1) {
cout << "waitpid() error: " << strerror(errno) << endl;
exit(1);
}
else {
return WEXITSTATUS(status);
}
}
}
This works fine when I run it as an isolated piece of code. But when I integrate it into my multithreaded environment, a horrible thing happens: the read() call somehow reads the output of other threads of the parent process, as if it were the output coming through the child's pipe.
Has anyone encountered such a thing?
I'm on OS X.
Well, I have a solution even though I don't completely understand why this happened.
But first, it should be clear that this behavior is neither normal nor expectable. A child process created with fork() does not inherit any running threads from its parent (so the unexpected output must come from the parent threads). And it has its own descriptor table. So when the child process calls dup2() to alter its output descriptors, that shouldn't have any effect on the threads in the parent process.
The problem occurred only in cases where the execv() call failed. In those cases, I expected the termination of the child process to close all of its file descriptors. But that didn't happen, or at least it didn't have the same effect as calling close() explicitly. So adding explicit close() calls after execv() solved the problem:
execv(args[0], (char * const *)args);
close(1);
close(2);
exit(1);
The close of the write-end descriptor of the pipe is what will cause the read operation on the read-end to receive 0, thus knowing not to read anymore.
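That EOF behaviour can be seen in isolation with a short self-contained sketch (not taken from the code above): once the last descriptor referring to the write end is closed, read() on the read end returns 0.
#include <cstdio>
#include <unistd.h>

int main()
{
    int fd[2];
    if (pipe(fd) != 0)
        return 1;

    write(fd[1], "x", 1);
    close(fd[1]);                   // last write end is now closed

    char c;
    while (read(fd[0], &c, 1) > 0)  // reads the 'x', then gets 0 (EOF)
        ;
    std::puts("read() returned 0: EOF");
    close(fd[0]);
    return 0;
}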
However, I still don't know the following:
Why isn't the call to exit() in the child process equivalent to explicitly calling close() ?
Even if the pipe write-end isn't closed, why does reading from the read-end produce output of threads in the parent process, instead of blocking or returning some error?
If anybody can shed light on this, it will be appreciated.
This question follows from my attempt to implement the instructions in:
Linux Pipes as Input and Output
How to send a simple string between two programs using pipes?
http://tldp.org/LDP/lpg/node11.html
My question is along the lines of the question in: Linux Pipes as Input and Output, but more specific.
Essentially, I am trying to replace:
/directory/program < input.txt > output.txt
using pipes in C++ in order to avoid using the hard drive. Here's my code:
//LET THE PLUMBING BEGIN
int fd_p2c[2], fd_pFc[2], bytes_read;
// "p2c" = pipe_to_child, "pFc" = pipe_from_child (see above link)
pid_t childpid;
char readbuffer[80];
string program_name;// <---- includes program name + full path
string gulp_command;// <---- includes my line-by-line stdin for program execution
string receive_output = "";
pipe(fd_p2c);//create pipe-to-child
pipe(fd_pFc);//create pipe-from-child
childpid = fork();//create fork
if (childpid < 0)
{
cout << "Fork failed" << endl;
exit(-1);
}
else if (childpid == 0)
{
dup2(0,fd_p2c[0]);//close stdout & make read end of p2c into stdout
close(fd_p2c[0]);//close read end of p2c
close(fd_p2c[1]);//close write end of p2c
dup2(1,fd_pFc[1]);//close stdin & make read end of pFc into stdin
close(fd_pFc[1]);//close write end of pFc
close(fd_pFc[0]);//close read end of pFc
//Execute the required program
execl(program_name.c_str(),program_name.c_str(),(char *) 0);
exit(0);
}
else
{
close(fd_p2c[0]);//close read end of p2c
close(fd_pFc[1]);//close write end of pFc
//"Loop" - send all data to child on write end of p2c
write(fd_p2c[1], gulp_command.c_str(), (strlen(gulp_command.c_str())));
close(fd_p2c[1]);//close write end of p2c
//Loop - receive all data to child on read end of pFc
while (1)
{
bytes_read = read(fd_pFc[0], readbuffer, sizeof(readbuffer));
if (bytes_read <= 0)//if nothing read from buffer...
break;//...break loop
receive_output += readbuffer;//append data to string
}
close(fd_pFc[0]);//close read end of pFc
}
I am absolutely sure that the above strings are initialized properly. However, two things happen that don't make sense to me:
(1) The program I am executing reports that the "input file is empty." Since I am not calling the program with "<" it should not be expecting an input file. Instead, it should be expecting keyboard input. Furthermore, it should be reading the text contained in "gulp_command."
(2) The program's report (provided via standard output) appears in the terminal. This is odd because the purpose of this piping is to transfer stdout to my string "receive_output." But since it is appearing on screen, that indicates to me that the information is not being passed correctly through the pipe to the variable. If I implement the following at the end of the if statement,
cout << receive_output << endl;
I get nothing, as though the string is empty. I appreciate any help you can give me!
EDIT: Clarification
My program currently communicates with another program using text files. My program writes a text file (e.g. input.txt), which is read by the external program. That program then produces output.txt, which is read by my program. So it's something like this:
my code -> input.txt -> program -> output.txt -> my code
Therefore, my code currently uses,
system("program < input.txt > output.txt");
I want to replace this process using pipes. I want to pass my input as standard input to the program, and have my code read the standard output from that program into a string.
Your primary problem is that you have the arguments to dup2() reversed. You need to use:
dup2(fd_p2c[0], 0); // Duplicate read end of pipe to standard input
dup2(fd_pFc[1], 1); // Duplicate write end of pipe to standard output
I got suckered into misreading what you wrote as OK until I put error checking on the set-up code and got unexpected values from the dup2() calls, which told me what the trouble was. When something goes wrong, insert the error checks you skimped on before.
You also did not ensure null termination of the data read from the child; this code does.
Working code (with diagnostics), using cat as the simplest possible 'other command':
#include <unistd.h>
#include <cstdlib>
#include <string>
#include <iostream>
using namespace std;
int main()
{
int fd_p2c[2], fd_c2p[2], bytes_read;
pid_t childpid;
char readbuffer[80];
string program_name = "/bin/cat";
string gulp_command = "this is the command data sent to the child cat (kitten?)";
string receive_output = "";
if (pipe(fd_p2c) != 0 || pipe(fd_c2p) != 0)
{
cerr << "Failed to pipe\n";
exit(1);
}
childpid = fork();
if (childpid < 0)
{
cout << "Fork failed" << endl;
exit(-1);
}
else if (childpid == 0)
{
if (dup2(fd_p2c[0], 0) != 0 ||
close(fd_p2c[0]) != 0 ||
close(fd_p2c[1]) != 0)
{
cerr << "Child: failed to set up standard input\n";
exit(1);
}
if (dup2(fd_c2p[1], 1) != 1 ||
close(fd_c2p[1]) != 0 ||
close(fd_c2p[0]) != 0)
{
cerr << "Child: failed to set up standard output\n";
exit(1);
}
execl(program_name.c_str(), program_name.c_str(), (char *) 0);
cerr << "Failed to execute " << program_name << endl;
exit(1);
}
else
{
close(fd_p2c[0]);
close(fd_c2p[1]);
cout << "Writing to child: <<" << gulp_command << ">>" << endl;
int nbytes = gulp_command.length();
if (write(fd_p2c[1], gulp_command.c_str(), nbytes) != nbytes)
{
cerr << "Parent: short write to child\n";
exit(1);
}
close(fd_p2c[1]);
while (1)
{
bytes_read = read(fd_c2p[0], readbuffer, sizeof(readbuffer)-1);
if (bytes_read <= 0)
break;
readbuffer[bytes_read] = '\0';
receive_output += readbuffer;
}
close(fd_c2p[0]);
cout << "From child: <<" << receive_output << ">>" << endl;
}
return 0;
}
Sample output:
Writing to child: <<this is the command data sent to the child cat (kitten?)>>
From child: <<this is the command data sent to the child cat (kitten?)>>
Note that you will need to be careful to ensure you don't get deadlocked with your code. If you have a strictly synchronous protocol (so the parent writes a message and reads a response in lock-step), you should be fine, but if the parent is trying to write a message that's too big to fit in the pipe to the child while the child is trying to write a message that's too big to fit in the pipe back to the parent, then each will be blocked writing while waiting for the other to read.
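For the non-lock-step case, one way (not shown in the code above) to avoid that mutual blocking is to multiplex the two pipe ends with poll(), so the parent writes to the child and drains the child's output as each becomes ready. A sketch under those assumptions, where to_child is fd_p2c[1] and from_child is fd_c2p[0]:
#include <poll.h>
#include <string>
#include <unistd.h>

// Interleave writing `input` to the child and collecting its output, so that
// neither side blocks forever on a full pipe. Real code would also ignore
// SIGPIPE in case the child exits before all the input has been written.
void pump(int to_child, int from_child, const std::string &input, std::string &output)
{
    size_t written = 0;
    bool write_open = true;

    while (true)
    {
        pollfd fds[2];
        fds[0] = { from_child, POLLIN, 0 };
        fds[1] = { to_child, static_cast<short>(write_open ? POLLOUT : 0), 0 };

        if (poll(fds, write_open ? 2 : 1, -1) < 0)
            break;                              // poll error

        if (fds[0].revents & (POLLIN | POLLHUP))
        {
            char buf[4096];
            ssize_t n = read(from_child, buf, sizeof(buf));
            if (n <= 0)
                break;                          // EOF or error: child is done
            output.append(buf, n);
        }

        if (write_open && (fds[1].revents & (POLLOUT | POLLERR)))
        {
            ssize_t n = write(to_child, input.data() + written, input.size() - written);
            if (n > 0)
                written += n;
            if (n < 0 || written == input.size())
            {
                close(to_child);                // send EOF to the child
                write_open = false;
            }
        }
    }
}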
It sounds like you're looking for coprocesses. You can program them in C/C++, but since they are already available in the (bash) shell, it's easier to use the shell, right?
First start the external program with the coproc builtin:
coproc external_program
The coproc starts the program in the background and stores the file descriptors to communicate with it in an array shell variable. Now you just need to start your program connecting it to those file descriptors:
your_program <&${COPROC[0]} >&${COPROC[1]}
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <string.h>
#include <iostream>
using namespace std;
int main() {
int i, status, len;
char str[21]; // large enough for the reversed 20-byte buffer plus the terminator
mknod("pipe", S_IFIFO | S_IRUSR | S_IWUSR, 0); //create named pipe
pid_t pid = fork(); // create new process
/* Process A */
if (pid == 0) {
int myPipe = open("pipe", O_WRONLY); // returns a file descriptor for the pipe
cout << "\nThis is process A having PID= " << getpid(); //Get pid of process A
cout << "\nEnter the string: ";
cin >> str;
len = strlen(str);
write(myPipe, str, len); //Process A write to the named pipe
cout << "Process A sent " << str;
close(myPipe); //closes the file descriptor fields.
}
/* Process B */
else {
int myPipe = open("pipe", O_RDONLY); //Open the pipe and returns file descriptor
char buffer[21];
int pid_child;
pid_child = wait(&status); //wait until any one child process terminates
int length = read(myPipe, buffer, 20); //reads up to size bytes from pipe with descriptor fields, store results
// in buffer;
cout<< "\n\nThis is process B having PID= " << getpid();//Get pid of process B
buffer[length] = '\0';
cout << "\nProcess B received " << buffer;
i = 0;
//Reverse the string
for (length = length - 1; length >= 0; length--)
str[i++] = buffer[length];
str[i] = '\0';
cout << "\nReverse of string is " << str;
close(myPipe);
}
unlink("pipe");
return 0;
}