Read stdout of a process in itself using C++

Suppose we have some_function, and it prints its result to stdout instead of returning it. Changing its definition is out of our scope and there's no alternative to it, so we're left with the option of reading the result from stdout. Hence the question:
How to read the stdout of a C++ program in itself?
It is possible to get the pid. I searched for a way to get an fd of the same program, but I'm not able to find anything.
#include <unistd.h>
#include <sys/types.h>
#include <iostream>
#include <string>

void some_function() {
    std::cout << "Hello World";
}

int main() {
    int pid = ::getpid();
    std::string s = //What to write here.
    std::cout << "Printing";
    some_function(); // This function prints "Hello World" to the screen
    std::cout << s;  // "PrintingHello World"
    return 0;
}
How to attach a pipe to the same process, i.e. without creating a child process?
Some might think of creating a child process and calling some_function in it, so that its stdout can be read in the parent process. But no: some_function depends on the process which calls it, and hence we want to call it in the very same process instead of creating a child process.

This isn't hard to do, but IMO it's quite a hack, and it won't work with a multithreaded program:
// make a temp file to store the function's stdout
// (mkstemp needs a modifiable array, not a string literal)
char tmpl[] = "/tmp/stdout.XXXXXX";
int newStdOut = mkstemp( tmpl );
// save the original stdout
int tmpStdOut = dup( STDOUT_FILENO );
// flush anything already buffered in stdout
fflush( stdout );
// now point the stdout file descriptor to the file
dup2( newStdOut, STDOUT_FILENO );
// call the function we want to collect the stdout from
some_function();
// make sure stdout is empty
fflush( stdout );
// restore original stdout
dup2( tmpStdOut, STDOUT_FILENO );
// the tmp file now contains whatever some_function() wrote to stdout
Error checking, proper headers, syncing C stdout with C++ cout, and reading from and cleaning up the temp file are left as exercises... ;-)
Note that you can't safely use a pipe - the function can write enough to fill up the pipe, and you can't read from the pipe because you've called the function.

How to read stdout of C++ program in itself?
There are very few reasons to do that and that is usually (but not always) a design bug.
Be aware of an important thing (at least in a single-threaded program): if your program both reads from its "stdout" and writes (as usual) to it, it can end up in a deadlock, unable to write because the pipe is full, and never reaching the read that would drain it.
So a program which both reads and writes the same thing (actually, the two sides of the same pipe(7)) should use some multiplexing call like poll(2).
Once you understand that, you'll have some event loop. And before that, you'll make a pipe(7) using pipe(2) (and dup2(2)).
However, pipe to self is a good thing in some signal(7) handling (see signal-safety(7)). That trick is even recommended in Qt Unix signal handling.
Read more about Unix system programming, e.g. ALP or some newer book. Read also intro(2) & syscalls(2).
I have looked for pipe and it requires fd
Wrong. Read pipe(2) much more carefully: on success it fills an array of two file descriptors. Of course it can fail (see errno(3), perror(3) & strerror(3)).
Maybe you just need popen(3). Or std::ostringstream. Or open_memstream(3).
Consider we have some_function and it prints its result to stdout instead of returning it. Changing its definition is out of our scope and there's no alternative to it
If some_function is your code, or is some free software, you could and probably should improve it to give a result somewhere....

Related

Send Character TO CONIN$ (Windows Console)

If you want to spawn a Windows console in an otherwise SUBSYSTEM:WINDOWS application you can use this code:
if (AllocConsole())
{
FILE* file = nullptr;
_wfreopen_s(&file, L"CONIN$", L"r", stdin);
_wfreopen_s(&file, L"CONOUT$", L"w", stdout);
_wfreopen_s(&file, L"CONOUT$", L"w", stderr);
}
The _wfreopen_s function maps stdin to CONIN$ and stores the resulting FILE* in the file variable (which we are effectively discarding).
What I'd like to do is instead map an input from something other than stdin, for example, another file stream and then write that stream to CONIN$.
For a larger picture of what I'm trying to do here, I've got a secondary thread running std::getline(std::cin... which blocks. I'd like the thread context object to just send a \n to the console to break the blocking call.
If there are other ideas, I'm open. The alternative currently is that I print a message to the console that says "Shutting down, press ENTER to quit..." Which, I guess, also works ;)
What I tried was creating FILE* conin = new FILE();, then doing a memcpy to fill it with a \n, and then calling WriteFile on that pointer, thinking that it might write the file stream out to CONIN$. The code compiles, and the contents of the FILE* appear to be correct (0x0a), but it does not appear to send that stream to the console.
I tested this by putting a std::cout line above and below the code testing the stream write. If it worked, I'd expect the two lines to be on separate lines, but they always show up on the same line, suggesting that I'm not sending the file stream.
Thanks for reading!
You should not discard the FILE* handle, otherwise you won't be able to manipulate it, in particular you won't be able to properly flush/close it if required.
If you're working with threads, simply give the FILE* to the thread that requires it. Threads share the same memory space.
If you're working with processes, then you should create a pipe between the two processes involved (see Win32 API for CreatePipe for details), and connect one's stdout to the other's stdin.

How to write in stdout after using freopen [duplicate]

This question already has answers here:
How to redirect the output back to the screen after freopen("out.txt", "a", stdout)
(6 answers)
Closed 8 years ago.
After freopen-ing stdout, how can I print on the terminal?
freopen("out", "w", stdout); // reopen stdout
/* something */
printf("Now I want to print this on terminal");
I believe this is what you are looking for:
Once I've used freopen, how can I get the original stdout (or stdin) back?
There's no portable solution. But the link also explains a possible solution using your own stream and a non-portable solution that'll work on most posix systems.
There isn't a good way. If you need to switch back, the best solution
is not to have used freopen in the first place. Try using your own
explicit output (or input) stream variable, which you can reassign at
will, while leaving the original stdout (or stdin) undisturbed. For
example, declare a global
FILE *ofp;
and replace all calls to printf( ... ) with fprintf(ofp, ... ).
(Obviously, you'll have to check for calls to putchar and puts, too.)
Then you can set ofp to stdout or to anything else.
You might wonder if you could skip freopen entirely, and do something
like
FILE *savestdout = stdout;
stdout = fopen(file, "w"); /* WRONG */
leaving yourself able to restore stdout later by doing
stdout = savestdout; /* WRONG */
but code like this is not likely to work, because stdout (and stdin
and stderr) are typically constants which cannot be reassigned (which
is why freopen exists in the first place).
It may be possible, in a nonportable way, to save away information
about a stream before calling freopen to open some file in its place,
such that the original stream can later be restored. The most
straightforward and reliable way is to manipulate the underlying file
descriptors using a system-specific call such as dup or dup2, if
available. Another is to copy or inspect the contents of the FILE
structure, but this is exceedingly nonportable and unreliable.
Under some systems, you might be able to reopen a special device file
(such as /dev/fd/1 under modern versions of Unix) which is still
attached to (for example) the original standard output. You can, under
some systems, explicitly re-open the controlling terminal, but this
isn't necessarily what you want, since the original input or output
(i.e. what stdin or stdout had been before you called freopen) could
have been redirected from the command line.
You can do it by:
#include <fstream>
#include <iostream>
std::ofstream out("out.txt");
out << "something";
then
std::cout << "something";

Capturing child stdout to a buffer

I'm developing a cross-platform project currently. On Windows I had a class that ran a process/script (using a command line), waited for it to end, and read everything from its stdout/stderr into a buffer. I then printed the output to a custom 'console'. Note: this was not a redirection of child stdout to parent stdout, just a pipe from child stdout to parent.
I'm new to OSX/unix-like APIs, but I understand the canonical way of doing something like this is forking and piping the stdouts together. However, I don't want to redirect it to stdout; I would like to capture the output. It should work pretty much like this (pseudocode, resemblance with unix functions purely coincidental):
class program
{
    string name, cmdline;
    string output;

    program(char * name, char * cmdline)
        : name(name), cmdline(cmdline) {};

    int run()
    {
        // run program - spawn it as a new process
        int pid = exec(name, cmdline);
        // wait for it to finish
        wait(pid);

        char buf[size];
        int n;
        // read output of program's stdout
        // keep appending data until there's nothing left to read
        while (read(pid, buf, size, &n))
            output.append(buf, n);

        // return exit code of process
        return getexitcode(pid);
    }

    const string & getOutput() { return output; }
};
How would i go about doing this on OSX?
Edit:
Okay, so I studied the relevant APIs and it seems that some kind of fork/exec combo is unavoidable. The problem at hand is that my process is very large, and forking it really seems like a bad idea (I see that some unix implementations can't do it if the parent process takes up 50%+ of the system RAM).
Can't I avoid this scheme in any way? I see that vfork() might be a possible contender, so maybe I could try to mimic the popen() function using vfork. But then again, most man pages state that vfork may very well just be fork().
You have a library call to do just that: popen. It returns a FILE* stream, and you can read from that stream until EOF. It's part of stdio, so you can do that on OSX, but on other systems as well. Just remember to pclose() the stream.
#include <stdio.h>
FILE * popen(const char *command, const char *mode);
int pclose(FILE *stream);
If you want to keep the output with absolutely no redirection, the only thing one can think of is using something like "tee": a command which splits the output to a file but maintains its own stdout. It's fairly easy to implement that in code as well, but it might not be necessary in this case.

Getting the output of Tcl Interpreter

I am trying to get the output of the Tcl interpreter as described in the answer to this question: Tcl C API: redirect stdout of embedded Tcl interp to a file without affecting the whole program. Instead of writing the data to a file, I need to get it through a pipe. I changed Tcl_OpenFileChannel to Tcl_MakeFileChannel and passed the write end of the pipe to it. Then I called Tcl_Eval with some puts. No data came out at the read end of the pipe.
#include <sys/wait.h>
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <tcl.h>
#include <iostream>
int main() {
    int pfd[2];
    if (pipe(pfd) == -1) { perror("pipe"); exit(EXIT_FAILURE); }
    /*
    int saved_flags = fcntl(pfd[0], F_GETFL);
    fcntl(pfd[0], F_SETFL, saved_flags | O_NONBLOCK);
    */
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_Channel chan;
    int rc;
    int fd;

    /* Get the channel bound to stdout.
     * Initialize the standard channels as a byproduct
     * if this wasn't already done. */
    chan = Tcl_GetChannel(interp, "stdout", NULL);
    if (chan == NULL) {
        return TCL_ERROR;
    }

    /* Duplicate the descriptor used for stdout. */
    fd = dup(1);
    if (fd == -1) {
        perror("Failed to duplicate stdout");
        return TCL_ERROR;
    }

    /* Close stdout channel.
     * As a byproduct, this closes the FD 1, we've just cloned. */
    rc = Tcl_UnregisterChannel(interp, chan);
    if (rc != TCL_OK)
        return rc;

    /* Duplicate our saved stdout descriptor back.
     * dup() semantics are such that if it doesn't fail,
     * we get FD 1 back. */
    rc = dup(fd);
    if (rc == -1) {
        perror("Failed to reopen stdout");
        return TCL_ERROR;
    }

    /* Get rid of the cloned FD. */
    rc = close(fd);
    if (rc == -1) {
        perror("Failed to close the cloned FD");
        return TCL_ERROR;
    }

    chan = Tcl_MakeFileChannel((ClientData)(intptr_t)pfd[1],
                               TCL_WRITABLE | TCL_READABLE);
    if (chan == NULL)
        return TCL_ERROR;

    /* Since stdout channel does not exist in the interp,
     * this call will make our file channel the new stdout. */
    Tcl_RegisterChannel(interp, chan);

    rc = Tcl_Eval(interp, "puts test");
    if (rc != TCL_OK) {
        fputs("Failed to eval", stderr);
        return 2;
    }

    char buf;
    while (read(pfd[0], &buf, 1) > 0) {
        std::cout << buf;
    }
}
I've no time at the moment to tinker with the code (might do that later), but I think this approach is flawed, as I see two problems with it:
If stdout is connected to something which is not an interactive console (the runtime usually checks this with a call to isatty(2)), full buffering could be (and I think will be) engaged. So unless your call to puts in the embedded interpreter outputs enough bytes to fill up or overflow Tcl's channel buffer (8 KiB, ISTR) and then the downstream system buffer (see the next point; I think it won't be less than 4 KiB, the size of a single memory page on a typical HW platform), nothing will come up at the read side.
You could test this by changing your Tcl script to flush stdout, like this:
puts one
flush stdout
puts two
You should then be able to read the four bytes output by the first puts from the pipe's read end.
A pipe is two FDs connected via a buffer (of a defined but system-dependent size). As soon as the write side (your Tcl interp) fills up that buffer, the write call which will hit the "buffer full" condition will block the writing process unless something reads from the read end to free up space in the buffer. Since the reader is the same process, such a condition has a perfect chance to deadlock since as soon as the Tcl interp is stuck trying to write to stdout, the whole process is stuck.
Now the question is: could this be made working?
The first problem might be partially fixed by turning off buffering for that channel on the Tcl side (e.g. fconfigure stdout -buffering none). This (supposedly) won't affect the buffering the system provides for the pipe.
The second problem is harder, and I can only think of two possibilities to fix it:
Create a pipe then fork(2) a child process ensuring its standard output stream is connected to the pipe's write end. Then embed the Tcl interpreter in that process and do nothing to the stdout stream in it as it will be implicitly connected to the child process standard output stream attached, in turn, to the pipe. You then read in your parent process from the pipe until the write side is closed.
This approach is more robust than using threads (see the next point) but it has one potential downside: if you need to somehow affect the embedded Tcl interpreter in some ways which are not known up front before the program is run (say, in response to the user's actions), you will have to set up some sort of IPC between the parent and the child processes.
Use threading and embed the Tcl interp into a separate thread: then ensure that reads from the pipe happen in another (let's call it "controlling") thread.
This approach might superficially look simpler than forking a process but then you get all the hassles related to proper synchronization common for threading. For instance, a Tcl interpreter must not be accessed directly from threads other than the one in which the interp was created. This implies not only concurrent access (which is kind of obvious by itself) but any access at all, including synchronized, because of possible TLS issues. (I'm not exactly sure this holds true, but I have a feeling this is a big can of worms.)
So, having said all that, I wonder why you seem to systematically reject suggestions to implement a custom "channel driver" for your interp and just use it to provide the implementation for the stdout channel in your interp? This would create a super-simple single-thread fully-synchronized implementation. What's wrong with this approach, really?
Also observe that if you decided to use a pipe in the hope it will serve as a sort of "anonymous file", then this is wrong: a pipe assumes both sides work in parallel, while in your code you first make the Tcl interp write everything it has to write and only then try to read it. This is asking for trouble, as I've described; but if the pipe was picked just to avoid messing with a file, then you're doing it wrong, and on a POSIX system the course of action could be:
Use mkstemp() to create and open a temporary file.
Immediately delete it with unlink(), using the name mkstemp() filled into the template buffer you passed it.
Since the file still has an open FD (the one returned by mkstemp()), it disappears from the file system but its storage is not reclaimed yet, so it can still be written to and read from.
Make this FD the interp's stdout. Let the interp write everything it has to.
After the interp is finished, lseek() the FD back to the beginning of the file and read from it.
Close the FD when done; the space it occupied on the underlying filesystem will then be reclaimed.

Pipes between Python and C++ don't get closed

I am spawning a process in python using subprocess and want to read output from the program using pipes. The C++ program does not seem to close the pipe though, even when explicitly telling it to close.
#include <cstdlib>
#include <ext/stdio_filebuf.h>
#include <iostream>
int main(int argc, char **argv) {
int fd = atoi(argv[1]);
__gnu_cxx::stdio_filebuf<char> buffer(fd, std::ios::out);
std::ostream stream(&buffer);
stream << "Hello World" << std::endl;
buffer.close();
return 0;
}
I invoke this small program with this python snippet:
import os
import subprocess
read, write = os.pipe()
proc = subprocess.Popen(["./dummy", str(write)])
data = os.fdopen(read, "r").read()
print data
The read() method does not return, as the fd is not closed. Opening and closing the write fd in python solves the problem. But it seems like a hack to me. Is there a way to close the fd in my C++ process?
Thanks a lot!
Spawning a child process on Linux (all POSIX OSes, really) is usually accomplished via fork and exec. After fork, both processes have the file open. The C++ process closes it, but the pipe stays open until the parent process closes its copy of the fd as well, so read() never sees EOF. This is normal for code using fork, and is usually handled by a wrapper around fork: the parent should close the write end right after spawning the child. Read the man page for pipe. Python has no way of knowing which descriptors are being transferred to the child, though, and therefore doesn't know what to close in the parent vs the child process.
POSIX file descriptors are local to a process; the child only sees the write descriptor because it was inherited across fork/exec (on Python 3.2+, where subprocess defaults to close_fds=True, you would additionally have to pass pass_fds=[write] to Popen for that to happen).
Perhaps the easiest way would be to have the C++ process write its output to stdout (with cout <<), and have Python call Popen with stdout=PIPE and read proc.stdout (or use proc.communicate()) instead of using fdopen. This should work on Windows, too.
For passing the file descriptor as a command-line argument, see Ben Voigt's answer.