My goal is to:
Pipe stdin to stdin of child process.
Pipe stdout of child process to stdout.
Pipe stderr of child process to stderr.
I have looked at these:
http://www.jukie.net/bart/blog/popenRWE
and
http://jineshkj.wordpress.com/2006/12/22/how-to-capture-stdin-stdout-and-stderr-of-child-program/
but am having trouble doing what I listed.
If you want to connect the child process's stdin/stdout/stderr to your stdin/stdout/stderr, you don't have to do anything: the child inherits them automatically.
Note that this doesn't give your application any access to the data -- it just goes directly between the child process application and the original streams. So it's not really "wrapping" anything.
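For example, here is a minimal POSIX sketch (assuming fork()/execlp() are available; "cat" is just a placeholder child program) where the child reads our stdin and writes to our stdout/stderr without any extra plumbing:

#include <sys/wait.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        // Child: no dup2()/pipe() calls needed -- it already shares the
        // parent's stdin, stdout and stderr.
        execlp("cat", "cat", (char*)NULL);   // "cat" is just a placeholder child
        _exit(127);                          // only reached if exec fails
    }
    waitpid(pid, NULL, 0);                   // wait for the child to finish
    return 0;
}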
Related
I am trying to receive data from a child process over an anonymous pipe in Windows. I know how to do this using standard I/O streams but these are being used for other purposes. I also know how to do this in Linux or OSX using fork(), pipe() and execv().
In Windows, you can create a pipe with CreatePipe() and make one end non-inheritable with SetHandleInformation(). Then, for stdout and stderr, you can pass a STARTUPINFO with hStdOutput or hStdError set to CreateProcess() to hand the other end to the child. After the call to CreateProcess() the parent must close its handle to the child's end of the pipe. This is all explained in detail in Creating a Child Process with Redirected Input and Output on MSDN. However, I have not found a way to pass a HANDLE, other than via stderr, stdout or stdin, to the child.
I've tried converting the HANDLE to a string with something like this:
#include <sstream>

std::ostringstream str;
str << std::hex << "0x" << handle;
std::string handleArg = str.str();
And then passing it as a command-line argument and converting it back to a HANDLE (which is just a void *) in the child process. Although the child process apparently inherits the pipe HANDLE, passing it this way failed to work, so I assumed the actual value of the HANDLE must be different in the child than in the parent.
I know I can use a named pipe to do this but it seems it should be possible to do this with anonymous pipes.
So how can I pass a pipe HANDLE to a child process in Windows?
Update1: Sample code in this MSDN article seems to indicate that, at least with socket handles, you can pass them as a string to the child.
Update2: Turns out I made a mistake. See my answer below.
Turns out you can pass a HANDLE to a child process as a command-line argument by converting it to a string and then, in the child process, converting it back to a HANDLE (which is just a void *). Sample code using a HANDLE to a socket can be found here.
To make this work you need to make sure you follow Creating a Child Process with Redirected Input and Output closely. It's important that you close the child process's end of the pipe after calling CreateProcess() and get all the inheritance settings right.
Note: I had tried passing the HANDLE as a string on the command line before, but I was doing it wrong. My mistake was passing the HANDLE as an int to boost::iostreams::file_descriptor(), which made it treat the value as a file descriptor instead of a Windows HANDLE.
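A rough sketch of the approach, with error handling omitted and "child.exe" used as a placeholder command line (the names below are illustrative, not the exact original code):

#include <windows.h>
#include <cstdint>
#include <cstdlib>
#include <sstream>
#include <string>

// --- parent ---
SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };    // make pipe handles inheritable
HANDLE readEnd = NULL, writeEnd = NULL;
CreatePipe(&readEnd, &writeEnd, &sa, 0);
SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0);  // keep the parent's end private

std::ostringstream cmd;                                 // encode the handle value as text
cmd << "child.exe " << reinterpret_cast<uintptr_t>(writeEnd);
std::string cmdLine = cmd.str();

STARTUPINFOA si = { sizeof(si) };
PROCESS_INFORMATION pi = {};
CreateProcessA(NULL, &cmdLine[0], NULL, NULL,
               TRUE /* bInheritHandles */, 0, NULL, NULL, &si, &pi);
CloseHandle(writeEnd);                                  // parent must drop the child's end

// --- child (inside its main(int argc, char* argv[])) ---
// argv[1] holds the numeric handle value; it is valid here because the
// handle was inherited with the same value.
HANDLE inherited = reinterpret_cast<HANDLE>(
    static_cast<uintptr_t>(strtoull(argv[1], NULL, 10)));
DWORD written = 0;
WriteFile(inherited, "hello\n", 6, &written, NULL);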
Why not use the method shown here?
1. Call the GetStdHandle function to get the current standard output handle; save this handle so you can restore the original standard output handle after the child process has been created.
2. Call the SetStdHandle function to set the standard output handle to the write handle to the pipe. Now the parent process can create the child process.
3. Call the CloseHandle function to close the write handle to the pipe. After the child process inherits the write handle, the parent process no longer needs its copy.
4. Call SetStdHandle to restore the original standard output handle.
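Roughly, that sequence could look like this (a hedged sketch; pipeWrite is assumed to be the inheritable write end returned by CreatePipe(), and "child.exe" is a placeholder):

#include <windows.h>

void launchWithRedirectedStdout(HANDLE pipeWrite)
{
    HANDLE oldStdout = GetStdHandle(STD_OUTPUT_HANDLE);  // 1. save the original
    SetStdHandle(STD_OUTPUT_HANDLE, pipeWrite);          // 2. stdout now points at the pipe

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    char cmdLine[] = "child.exe";
    CreateProcessA(NULL, cmdLine, NULL, NULL,
                   TRUE, 0, NULL, NULL, &si, &pi);       // child inherits the redirected stdout

    CloseHandle(pipeWrite);                              // 3. parent no longer needs the write end
    SetStdHandle(STD_OUTPUT_HANDLE, oldStdout);          // 4. restore the original
}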
I am writing a program in openFrameworks, a C++ framework. I want to start another app and communicate with it over stdin and stdout. I can start a new thread conveniently using the ofThread class. I had planned on creating two pipes and redirecting the thread's stdin and stdout to the pipes (using dup2), but unfortunately this redirects them for the whole app, not just the thread.
Is there a way I can start another app and be able to reads its output and provide it input?
Instead of another thread you'll need to create a child process using the fork() function (which may itself involve another thread internally).
The difference is that fork() creates a complete copy of the parent process's environment, which is what an exec() call then replaces within the scope of the child process, while calling exec() directly from a thread shares all the resources of its parent process (thread) and thus might lead to unexpected concurrency (race condition) problems.
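A minimal sketch of that pattern, assuming POSIX pipe()/fork()/dup2()/exec() ("sort" is just a placeholder for the other app):

#include <sys/wait.h>
#include <unistd.h>

int main()
{
    int toChild[2], fromChild[2];
    pipe(toChild);                            // parent writes -> child's stdin
    pipe(fromChild);                          // child's stdout -> parent reads

    pid_t pid = fork();
    if (pid == 0) {
        // The dup2() calls happen after fork(), so they only affect the
        // child's descriptors, not the whole parent application.
        dup2(toChild[0], STDIN_FILENO);
        dup2(fromChild[1], STDOUT_FILENO);
        close(toChild[0]);   close(toChild[1]);
        close(fromChild[0]); close(fromChild[1]);
        execlp("sort", "sort", (char*)NULL);  // placeholder child program
        _exit(127);                           // only reached if exec fails
    }

    close(toChild[0]);                        // parent keeps only the ends it uses
    close(fromChild[1]);
    write(toChild[1], "b\na\n", 4);           // provide the child some input
    close(toChild[1]);                        // EOF so the child can finish

    char buf[64];
    read(fromChild[0], buf, sizeof(buf));     // read the child's output
    close(fromChild[0]);
    waitpid(pid, NULL, 0);
    return 0;
}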
If your "another app" is implemented as a subthread within your existing program, you don't need to redirect stdin and stdout to communicate with it over pipes. Just pass the pipe file descriptors to the subthread when you start it up. (You can use fdopen to wrap file descriptors in FILE objects. If you have dup2 and pipe, you have fdopen as well.)
In my C++ program I need to execute a bash script. I then need to get the result of running the script back into my C++ program.
I have two possibilities:
1. use system("script.sh"). In script.sh I redirect the output to a file, which is processed after I return to the C++ program.
2. use popen
I am interested in which of these methods is preferred, considering that the output returned from script.sh could be big (100 MB). Thanks.
When using system the parent process is blocked until the child process terminates. The child process will run with full performance.
popen will start the child process but not wait until it has ended. So the parent process can continue to do whatever it wants while the child is running; it can, for example, read the output of the child process. The parent process can decide whether to read from the child's output pipe in a blocking or non-blocking way, depending on how much else it has to do. The child will run in parallel and write its output to the pipe. It might block when writing if the parent process is not reading from the pipe and the pipe's buffer limit is reached, so the parent process should keep reading the output.
The system approach is a bit simpler. But popen gives you the possibility to read the process's output while it is still running. And you don't need the extra file (space). So I'd use popen.
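A minimal popen sketch along those lines (assuming the script lives at "./script.sh"; the output is consumed in chunks, so it never has to land in a temporary file):

#include <cstdio>
#include <string>

std::string runScript()
{
    std::string output;
    FILE* pipe = popen("./script.sh", "r");        // start the script, read its stdout
    if (!pipe)
        return output;

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0)
        output.append(buf, n);                     // or process each chunk and discard it

    pclose(pipe);                                  // waits for the script to finish
    return output;
}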
I have one application where the parent process launches jobs over a distributed farm system like lsf/vnc.
Now what I want is that whenever an error is reported in its respective log by any of the launched jobs, the error should be redirected to the main stdout of the parent process, so that there is no need to monitor the log of each job separately.
I have never used pipes/semaphores in my code but I can learn that if needed.
Please suggest some efficient solution. I am working on Linux/Solaris platform.
Thanks
Depending on how you launch the subprocesses there are different mechanics how to set their standard handles.
In general, you'll have to set their stderr handle to be the same as your stdout handle.
Keep in mind that this has nothing to do with the "logs" that you mention; it's about what your subject says (redirecting stderr).
If you want the stderr of the children to be the same as the stdout of the parent, then you may be able to simply launch the parent with its stderr tied to its stdout. If cmd is the command to launch the parent, try:
$ cmd 2>&1
You should probably use the dup2() library call to duplicate STDERR in the child process onto the desired file descriptor, for example onto STDOUT or onto any other descriptor that was opened earlier by the parent process and inherited by the child process after fork().
Try reading the manual page for the dup2 call.
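A short sketch of that idea, assuming the jobs are started with fork()/exec() ("job" is a placeholder command):

#include <sys/wait.h>
#include <unistd.h>

int main()
{
    if (fork() == 0) {
        // Child: make stderr share the parent's stdout descriptor
        // before exec-ing the job.
        dup2(STDOUT_FILENO, STDERR_FILENO);
        execlp("job", "job", (char*)NULL);   // placeholder command
        _exit(127);
    }
    wait(NULL);
    return 0;
}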
I am working on a project that has multiple C++ executables that communicate using named pipes. The main application (App1) spawns the rest of the applications. When spawning, it closes STDIN for the children using:
close(STDIN_FILENO);
And it redirects STDOUT and STDERR to other files that are specific to the child processes. This makes it so that the output from App1 is only from App1 and none of the children. It also allows App1 to accept input from STDIN and not let it get captured by the child processes.
One of the child processes is a Qt application. When spawned, it is using as much CPU as it can, slowing my computer considerably. If I do not close STDIN for the child processes, this behavior stops (but the children capture STDIN instead of the main process, which I don't want).
Why does this happen and how can I prevent the Qt applications from using all the CPU cycles?
Maybe give the Qt app what it wants? Use dup2 after fork but before exec? dup2 will replace a given file descriptor with another so you can replace stdin with a file. Quick example:
#include <fcntl.h>
#include <unistd.h>

if (fork() == 0)
{
    // child: open the file that will serve as the new stdin
    int somefd = open("somefile", O_RDONLY);
    // replace stdin (0) with somefd before exec-ing
    if (dup2(somefd, 0) == -1)
    {
        // cunning plan failed
        _exit(1);
    }
    close(somefd);   // stdin now refers to the file; drop the extra descriptor
    // exec Qt app here
}
I think I figured out what the issue was while fixing another issue I was having. I was closing the STDIN file descriptor before redirecting the STDERR and STDOUT file descriptors, which was messing up the descriptor numbers that freopen() relies on when I used it to redirect them.
I moved the close() of STDIN to after the redirection, and don't seem to have the problem anymore.
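For illustration, a rough sketch of that order (placeholder log paths, not my exact code):

#include <cstdio>
#include <unistd.h>

// Redirect the child's stdout/stderr to its log files first, then drop stdin,
// so freopen() never sees a recycled descriptor 0.
void setupChildStreams(const char* outPath, const char* errPath)
{
    freopen(outPath, "w", stdout);   // child's stdout -> its own log file
    freopen(errPath, "w", stderr);   // child's stderr -> its own log file
    close(STDIN_FILENO);             // only now close stdin
}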