I have an application where a parent process launches jobs over a distributed farm system such as LSF/VNC.
What I want is that whenever any of the launched jobs reports an error in its respective log, the error is redirected to the main stdout of the parent process, so that there is no need to monitor each job's log separately.
I have never used pipes/semaphores in my code, but I can learn them if needed.
Please suggest an efficient solution. I am working on Linux/Solaris platforms.
Thanks
Depending on how you launch the subprocesses, there are different mechanisms for setting their standard handles.
In general, you'll have to set their stderr handle to be the same as your stdout handle.
Keep in mind that this has nothing to do with the "logs" that you mention; it's about what your subject says (redirecting stderr).
If you want the stderr of the children to be the same as the stdout of the parent, then you may be able to simply launch the parent with its stderr tied to its stdout. If cmd is the command to launch the parent, try:
$ cmd 2>&1
You should probably use the dup2() library call to duplicate STDERR in the child process onto the desired file descriptor, for example STDOUT or any other descriptor that was opened earlier by the parent process and inherited by the child after fork().
See the manual page for the dup2 call.
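A minimal sketch of that idea, assuming the child should send its stderr to the stdout it inherited from the parent ("job.sh" is only a placeholder for whatever job you launch):

    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>

    int main() {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: make stderr (fd 2) point to the same place as stdout (fd 1),
            // which was inherited from the parent.
            if (dup2(STDOUT_FILENO, STDERR_FILENO) == -1) {
                perror("dup2");
                _exit(1);
            }
            // Replace the child image with the job; "job.sh" is a placeholder.
            execlp("/bin/sh", "sh", "job.sh", (char *)nullptr);
            perror("execlp");
            _exit(1);
        }
        waitpid(pid, nullptr, 0);   // Parent waits for the job to finish.
        return 0;
    }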
For my computer science class final project, I need to interact with a SQL database. The only problem is, my prof won't install the SQL C++ API for me. Is there a way I can still interact with SQL without the API?
If I'm understanding your question correctly, you want your program to be able to launch a child process (an SQL command line program in this case), and then be able to read the text it receives from the child process's stdout and/or stderr, and write text to the child process's stdin, the same way a user would if he/she were running that program interactively.
The answer is yes, it is possible to do this, although it takes some work. Under Linux/Unix/MacOSX, you can call forkpty() to spawn a child process -- the parent process will get a file descriptor (via forkpty's first argument) that it can use to communicate with the child process's stdin and stdout. In the child process, you can then call execvp() (or one of its variants) to run the SQL program in that process.
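A rough sketch of that flow (hedged: "sqlite3" and "test.db" are only placeholders for whatever SQL command-line client you drive; on Linux you'll also need to link with -lutil):

    #include <pty.h>       // forkpty on Linux; use <util.h> on macOS/BSD
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int master_fd;                       // parent's end of the pseudo-terminal
        pid_t pid = forkpty(&master_fd, nullptr, nullptr, nullptr);
        if (pid == 0) {
            // Child: stdin/stdout/stderr are now the pty slave; run the SQL client.
            execlp("sqlite3", "sqlite3", "test.db", (char *)nullptr);
            _exit(1);                        // only reached if exec fails
        }
        // Parent: write a query to the child's stdin and read its reply.
        const char *query = "SELECT 1;\n";
        write(master_fd, query, strlen(query));
        char buf[4096];
        ssize_t n = read(master_fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child said: %s\n", buf);
        }
        return 0;
    }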
Under Windows, it's a bit more complex -- you'll need to set up some pipes and then call CreateProcess() to launch the child process, and communicate with it through those pipes. Microsoft has a page on the topic (including example code) here.
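For reference, the Windows side usually looks roughly like the following sketch of the CreatePipe/CreateProcess pattern ("child.exe" is a placeholder for the program you launch):

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Create a pipe; the child writes to hWrite, the parent reads from hRead.
        SECURITY_ATTRIBUTES sa = { sizeof(sa), nullptr, TRUE };  // inheritable handles
        HANDLE hRead = nullptr, hWrite = nullptr;
        CreatePipe(&hRead, &hWrite, &sa, 0);
        SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0);  // keep the read end private

        STARTUPINFOA si = {};
        si.cb = sizeof(si);
        si.dwFlags = STARTF_USESTDHANDLES;
        si.hStdOutput = hWrite;
        si.hStdError  = hWrite;
        si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

        PROCESS_INFORMATION pi = {};
        char cmd[] = "child.exe";                      // placeholder command line
        if (CreateProcessA(nullptr, cmd, nullptr, nullptr,
                           TRUE /* inherit handles */, 0, nullptr, nullptr, &si, &pi)) {
            CloseHandle(hWrite);                       // parent doesn't write to the pipe
            char buf[4096];
            DWORD n = 0;
            while (ReadFile(hRead, buf, sizeof(buf), &n, nullptr) && n > 0)
                fwrite(buf, 1, n, stdout);             // forward the child's output
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hProcess);
            CloseHandle(pi.hThread);
        }
        CloseHandle(hRead);
        return 0;
    }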
In my C++ Windows app I start multiple child processes and I want them to inherit the parent's stdout/stderr, so that if the output of my app is redirected to some file, that file will also contain the output of all child processes my app creates.
Currently I do that using CreateProcess without output redirection. MSDN has a sample showing how to redirect output: Creating a Child Process with Redirected Input and Output, but I want to see what alternatives I have. The simplest is to use system and call it from a blocking thread that waits for the child to exit. All output is then piped back to the parent's stdout/stderr; however, in the parent process I do not get a chance to process the stdout data that comes from the child.
There are also other functions to start processes on Windows: spawn and exec, which might be easier to port to POSIX systems.
What should I use if I want it to work on Linux/OSX? What options do I have if I want it to work on UWP aka WinRT? I might be totally OK with system called from a blocking thread, but perhaps I'd prefer to have more control over the process PID (to be able to terminate it) and over the process's stdout/stderr, to prepend each line with child##: for example.
The Boost libraries recently released version 1.64, which includes the new boost::process library.
In it, you're given a C++ way to redirect output to a pipe or asio::streambuf, from which you can create a std::string or std::istream to read whatever your child process wrote.
You can read up on the boost::process tutorials here, which show some simple examples of reading child output. It makes heavy use of boost::asio, so I highly recommend you read up on that too.
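A minimal sketch using boost::process's synchronous pipe stream ("ls" is just a stand-in for your child program, and the child##: prefix from the question is added as each line arrives):

    #include <boost/process.hpp>
    #include <iostream>
    #include <string>

    namespace bp = boost::process;

    int main() {
        bp::ipstream out;                                   // pipe the parent reads from
        bp::child c(bp::search_path("ls"), "-l",
                    bp::std_out > out);                     // redirect the child's stdout

        std::string line;
        while (c.running() && std::getline(out, line))
            std::cout << "child##: " << line << '\n';       // prefix each line as asked

        c.wait();                                           // reap the child
        return c.exit_code();
    }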
In my C++ program I need to execute a bash script and then return the result of running the script to my C++ program.
I have two possibilities:
1. use system("script.sh"). In script.sh I redirect the output to a file which is processed after I return to the C++ program.
2. use popen
I am interested in which of these methods is preferred, considering that the output returned from script.sh could be big (100 MB). Thanks.
When using system the parent process is blocked until the child process terminates. The child process will run with full performance.
popen will start the child process, but not wait until it has ended. So the parent process can continue to do whatever it wants while the child is running; for example, it can read the output of the child process. The parent process can decide whether to read blocking or non-blocking from the child's output pipe, depending on how much other work the parent process has to do. The child will run in parallel and write its output to the pipe. It might block when writing if the parent process is not reading from the pipe and the pipe's buffer is full, so the parent process should keep reading the output.
The system approach is a bit simpler. But popen gives you the possibility to read the process's output while it is still running. And you don't need the extra file (space). So I'd use popen.
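A minimal popen sketch for the 100 MB case, reading the script's output in chunks as it is produced ("./script.sh" as in the question; it must be executable):

    #include <cstdio>

    int main() {
        FILE *fp = popen("./script.sh", "r");   // run the script, read its stdout
        if (!fp) {
            perror("popen");
            return 1;
        }
        char buf[64 * 1024];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0) {
            // Process each chunk as it arrives instead of buffering all 100 MB.
            fwrite(buf, 1, n, stdout);
        }
        int status = pclose(fp);                // also yields the script's exit status
        return status == -1 ? 1 : 0;
    }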
My goal is to:
Pipe stdin to stdin of child process.
Pipe stdout of child process to stdout.
Pipe stderr of child process to stderr.
I have looked at these:
http://www.jukie.net/bart/blog/popenRWE
and
http://jineshkj.wordpress.com/2006/12/22/how-to-capture-stdin-stdout-and-stderr-of-child-program/
but am having trouble doing what I listed.
If you want to connect the child process's stdin/stdout/stderr to your stdin/stdout/stderr you don't have to do anything, it inherits them automatically.
Note that this doesn't give your application any access to the data -- it just goes directly between the child process application and the original streams. So it's not really "wrapping" anything.
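To illustrate the point, a bare-bones fork/exec sketch in which the child simply inherits the parent's stdin, stdout, and stderr (no pipes, no dup2; "ls" is just a placeholder):

    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>

    int main() {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: file descriptors 0, 1 and 2 are already the parent's
            // stdin/stdout/stderr -- nothing to set up.
            execlp("ls", "ls", "-l", (char *)nullptr);
            perror("execlp");
            _exit(1);
        }
        waitpid(pid, nullptr, 0);
        return 0;
    }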
The situation is that I have a program started through system() or CreateProcess().
Now, is it possible to do stuff as that program outputs data to the console? I mean as the program outputs it: not wait for the end, gather the data and then process it, but rather, at the moment the external program prints data to the console, get hold of that data, process it, and output something else on the console.
The easiest way is usually to start the program with _popen(your_program, "r");. That will return a FILE * you can read from, and what it reads will be whatever the child writes to its standard output. When you read EOF on that file, it means the child process has terminated. This makes it relatively easy to read and process the output from the child in real time.
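A minimal sketch using _popen on Windows ("your_program.exe" is a placeholder); on POSIX systems the same code works with popen/pclose:

    #include <cstdio>

    int main() {
        FILE *fp = _popen("your_program.exe", "r");  // child's stdout becomes readable here
        if (!fp)
            return 1;
        char line[4096];
        while (fgets(line, sizeof(line), fp)) {
            // Process each line the moment the child prints it.
            printf("child##: %s", line);
        }
        _pclose(fp);   // EOF above means the child has terminated
        return 0;
    }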
On Linux, create a named pipe:
system("mkfifo pipename")
Then open the pipe in the first program, and start the program with:
system("program > pipename")
I'm not sure how to do this on Windows.
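Fleshing that idea out a little (a sketch under the answer's assumptions; "pipename" and "program" are the placeholders above, mkfifo() from <sys/stat.h> is used instead of shelling out, and the writer is backgrounded so the FIFO can be opened for reading without deadlocking):

    #include <sys/stat.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        mkfifo("pipename", 0600);                // create the named pipe (FIFO)
        system("program > pipename &");          // launch the writer in the background
        FILE *fp = fopen("pipename", "r");       // blocks until the writer opens the FIFO
        if (!fp)
            return 1;
        char line[4096];
        while (fgets(line, sizeof(line), fp))
            printf("got: %s", line);             // process the program's output live
        fclose(fp);
        remove("pipename");                      // clean up the FIFO
        return 0;
    }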
Call AllocConsole before creating the child process, or use the AttachConsole(ChildPID) function in the parent process.
After that, you may use any of the ReadConsoleXXX or WriteConsoleXXX functions.
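A rough sketch of the AttachConsole route (hedged: childPid is assumed to be the PID of an already-running console child, and the parent detaches from its own console first since a process can only be attached to one console at a time):

    #include <windows.h>
    #include <cstring>

    // childPid is assumed to be the process ID of a console child process.
    bool writeToChildConsole(DWORD childPid, const char *msg) {
        FreeConsole();                    // detach from our own console first
        if (!AttachConsole(childPid))     // attach to the child's console
            return false;
        // Open the attached console's screen buffer explicitly.
        HANDLE out = CreateFileA("CONOUT$", GENERIC_WRITE, FILE_SHARE_WRITE,
                                 nullptr, OPEN_EXISTING, 0, nullptr);
        DWORD written = 0;
        WriteConsoleA(out, msg, (DWORD)strlen(msg), &written, nullptr);
        CloseHandle(out);
        FreeConsole();                    // detach again when done
        return written > 0;
    }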