Getting the output of Tcl Interpreter - c++

I am trying to get the output of the Tcl interpreter as described in the answer to this question: Tcl C API: redirect stdout of embedded Tcl interp to a file without affecting the whole program. Instead of writing the data to a file, I need to get it through a pipe. I changed Tcl_OpenFileChannel to Tcl_MakeFileChannel and passed the write end of the pipe to it. Then I called Tcl_Eval with some puts commands. No data arrived at the read end of the pipe.
#include <sys/wait.h>
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <tcl.h>
#include <iostream>
int main() {
    int pfd[2];
    if (pipe(pfd) == -1) { perror("pipe"); exit(EXIT_FAILURE); }
    /*
    int saved_flags = fcntl(pfd[0], F_GETFL);
    fcntl(pfd[0], F_SETFL, saved_flags | O_NONBLOCK);
    */
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_Channel chan;
    int rc;
    int fd;
    /* Get the channel bound to stdout.
     * Initialize the standard channels as a byproduct
     * if this wasn't already done. */
    chan = Tcl_GetChannel(interp, "stdout", NULL);
    if (chan == NULL) {
        return TCL_ERROR;
    }
    /* Duplicate the descriptor used for stdout. */
    fd = dup(1);
    if (fd == -1) {
        perror("Failed to duplicate stdout");
        return TCL_ERROR;
    }
    /* Close stdout channel.
     * As a byproduct, this closes the FD 1, we've just cloned. */
    rc = Tcl_UnregisterChannel(interp, chan);
    if (rc != TCL_OK)
        return rc;
    /* Duplicate our saved stdout descriptor back.
     * dup() semantics are such that if it doesn't fail,
     * we get FD 1 back. */
    rc = dup(fd);
    if (rc == -1) {
        perror("Failed to reopen stdout");
        return TCL_ERROR;
    }
    /* Get rid of the cloned FD. */
    rc = close(fd);
    if (rc == -1) {
        perror("Failed to close the cloned FD");
        return TCL_ERROR;
    }
    chan = Tcl_MakeFileChannel((void*)pfd[1], TCL_WRITABLE | TCL_READABLE);
    if (chan == NULL)
        return TCL_ERROR;
    /* Since stdout channel does not exist in the interp,
     * this call will make our file channel the new stdout. */
    Tcl_RegisterChannel(interp, chan);
    rc = Tcl_Eval(interp, "puts test");
    if (rc != TCL_OK) {
        fputs("Failed to eval", stderr);
        return 2;
    }
    char buf;
    while (read(pfd[0], &buf, 1) > 0) {
        std::cout << buf;
    }
}

I've no time at the moment to tinker with the code (might do that later) but I think this approach is flawed as I see two problems with it:
If stdout is connected to something which is not an interactive console (the runtime usually checks for this with a call to isatty(2)), full buffering could be (and, I think, will be) engaged. So unless your call to puts in the embedded interpreter outputs enough bytes to fill up or overflow Tcl's channel buffer (8 KiB, ISTR) and then the downstream system buffer (see the next point), which, I think, won't be less than 4 KiB (the size of a single memory page on a typical hardware platform), nothing will come up at the read side.
You could test this by changing your Tcl script to flush stdout, like this:
puts one
flush stdout
puts two
You should then be able to read the four bytes output by the first puts from the pipe's read end.
A pipe is two FDs connected via a buffer (of a defined but system-dependent size). As soon as the write side (your Tcl interp) fills up that buffer, the write call that hits the "buffer full" condition blocks the writing process until something reads from the read end to free up space. Since the reader here is the same process, such a condition has a perfect chance to deadlock: as soon as the Tcl interp is stuck trying to write to stdout, the whole process is stuck.
Now the question is: could this be made to work?
The first problem might be partially fixed by turning off buffering for that channel on the Tcl side. This (supposedly) won't affect buffering provided for the pipe by the system.
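For reference, a one-line sketch of that first fix, assuming chan is the pipe-backed channel registered in the question's code (untested):

/* Sketch: disable Tcl-side buffering on the channel; this does not
 * affect the kernel's own pipe buffer. */
Tcl_SetChannelOption(interp, chan, "-buffering", "none");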
The second problem is harder, and I can only think of two possibilities to fix it:
Create a pipe, then fork(2) a child process, ensuring its standard output stream is connected to the pipe's write end. Then embed the Tcl interpreter in that process and do nothing to its stdout stream, as it will implicitly be connected to the child's standard output, attached, in turn, to the pipe. In your parent process you then read from the pipe until the write side is closed (see the sketch after this list).
This approach is more robust than using threads (see the next point), but it has one potential downside: if you need to somehow affect the embedded Tcl interpreter in ways not known up front before the program is run (say, in response to the user's actions), you will have to set up some sort of IPC between the parent and the child processes.
Use threading and embed the Tcl interp into a separate thread: then ensure that reads from the pipe happen in another (let's call it "controlling") thread.
This approach might superficially look simpler than forking a process, but then you get all the hassles of proper synchronization common to threading. For instance, a Tcl interpreter must not be accessed from threads other than the one in which it was created. This prohibition covers not only concurrent access (which is kind of obvious by itself) but any access at all, including synchronized access, because of possible TLS issues. (I'm not exactly sure this holds true, but I have a feeling this is a big can of worms.)
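A minimal sketch of the fork-based variant (untested; error handling omitted; the child's stdout simply is the pipe, so no channel surgery is needed):

/* Sketch of the fork-based approach (error checks omitted). */
int pfd[2];
pipe(pfd);
pid_t pid = fork();
if (pid == 0) {                        /* child: host the Tcl interp */
    close(pfd[0]);                     /* child never reads */
    dup2(pfd[1], 1);                   /* pipe's write end becomes stdout */
    close(pfd[1]);
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_Eval(interp, "puts one; puts two; flush stdout");
    Tcl_DeleteInterp(interp);
    _exit(0);                          /* closing stdout signals EOF upstream */
}
close(pfd[1]);                         /* parent never writes */
char buf[256];
ssize_t n;
while ((n = read(pfd[0], buf, sizeof buf)) > 0)
    fwrite(buf, 1, (size_t)n, stderr); /* or accumulate it somewhere */
close(pfd[0]);
waitpid(pid, NULL, 0);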
So, having said all that, I wonder why you seem to systematically reject suggestions to implement a custom "channel driver" for your interp and just use it to provide the implementation for the stdout channel in your interp? This would create a super-simple single-thread fully-synchronized implementation. What's wrong with this approach, really?
Also observe that if you picked a pipe in the hope it would serve as a sort of "anonymous file", then this is wrong: a pipe assumes both sides work in parallel, while your code first makes the Tcl interp write everything it has to write and only then tries to read it. This is asking for trouble, as I've described. If the pipe was chosen just to avoid messing with a file, you're doing it wrong, and on a POSIX system the course of action could be the following (a sketch follows the list):
Use mkstemp() to create and open a temporary file.
Immediately delete it using the name mkstemp() wrote in place of the template you passed it.
Since the file still has an open FD (the one returned by mkstemp()), it disappears from the file system but is not deallocated, and it can still be written to and read from.
Make this FD the interp's stdout. Let the interp write everything it has to.
After the interp is finished, lseek() the FD back to the beginning of the file and read from it.
Close the FD when done; the space it occupied on the underlying filesystem will be reclaimed.
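A sketch of that temp-file route, with hypothetical names, no error checking, and assuming <stdint.h> for intptr_t (the channel/stdout swap is abbreviated; it follows the same unregister-then-register dance as the question's code):

/* Sketch: an unlinked temp file as the interp's "anonymous" stdout. */
char tmpl[] = "/tmp/tclout.XXXXXX";
int fd = mkstemp(tmpl);                   /* create and open */
unlink(tmpl);                             /* gone from the FS; FD keeps it alive */

Tcl_Interp *interp = Tcl_CreateInterp();
/* ...unregister the real stdout channel first, as in the question... */
Tcl_Channel chan = Tcl_MakeFileChannel((ClientData)(intptr_t)fd, TCL_WRITABLE);
Tcl_RegisterChannel(interp, chan);        /* becomes the interp's stdout */
Tcl_Eval(interp, "puts test; flush stdout");

lseek(fd, 0, SEEK_SET);                   /* rewind and read it all back */
char buf[256];
ssize_t n;
while ((n = read(fd, buf, sizeof buf)) > 0)
    fwrite(buf, 1, (size_t)n, stdout);
close(fd);                                /* storage reclaimed here */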

Related

Exchange data between "Virtualbox guest OS COM port" and "Windows host pipe" [closed]

Assume you map a VirtualBox guest OS COM port to a Windows host pipe (\\.\pipe\*), with VirtualBox being the server side of the pipe.
It is possible to read/write the Windows host pipe from the client side (from a Windows program) via <stdio.h>, i.e. to access the pipe (and the guest COM port) as a regular stream (via FILE, fopen, fgetc, fputc, etc.).
1.
The client-side pipe behaviour can depend on conventions set by the VirtualBox server-side pipe implementation.
If you are not sure about the server-side pipe considerations, you can run into unexpected trouble while transferring data, among it:
the client-side pipe stops receiving rx data (the Windows program begins to read 0x00);
//the following code will not help to 'read after write'
int rx_ch= fgetc(fc);   //begins to read 0x00 here
if(rx_ch == EOF)break;
the guest machine engine can hang the guest OS window during COM port exchange via the pipe (probably by pipe queue overflow);
etc.
By testing it was found that there is a way to read/write the client-side pipe in half-duplex mode via a single FILE handle opened for read/write:
(for MinGW the Windows pipe directory is visible as "//./pipe")
fopen the pipe in "rb+" mode;
mandatory fseek calls (with a correct offset) between any read and any write operation on the pipe.
Though "fseek before append" is always mandatory for "r+" mode FILEs, here the fseek does nothing visible on the pipe; it just puts the internal pipe state into the correct condition to carry out the read and write pipe operations.
3.
example
enum{ PR_SYN= 0x16U };
fc= fopen("//./pipe/dos1","rb+"); assert(fc);
fi= fopen("./isw","rb"); assert(fi);
fo= fopen("./osw","ab"); assert(fo);
//
long fc_fpos_rx= 0;
for(;;){
    //rx
    assert( !fseek(fc,fc_fpos_rx,SEEK_SET) );
    int rx_ch= fgetc(fc);
    if(rx_ch == EOF)break;
    fc_fpos_rx= ftell(fc); assert( fc_fpos_rx != -1L );
    if(rx_ch != PR_SYN)fputc(rx_ch,fo);
    //tx
    assert( !fseek(fc,0,SEEK_END) );
    int tx_ch= fi? fgetc(fi): PR_SYN;
    if(tx_ch == EOF)tx_ch= PR_SYN;
    fputc(tx_ch,fc);
}
Any suggestions to improve reading/writing the Windows host pipe from the client side are welcome.
picture: guest-host pipe file transfer (https://i.stack.imgur.com/mGccG.png)
Let's continue to explore <stdio.h> stream access.
The original program did not work better because only a few of the errors returned by library calls were checked by assert() in the program.
If we check more return values we get more error messages; we get an error message on every wrong fputc/fgetc, which means stdio knows about the errors, but only we do not.
//fgetc emits pipe error if there was no fseek
int rx_ch= fgetc(fc);
if(rx_ch == EOF){ assert( feof(fc) ); break; }
...
//fputc emits pipe error if there was no fseek
assert( fputc(tx_ch,fc) != EOF );
...
The improved version of the initial example:
example2
enum{ PR_SYN= 0x16U };
fc= fopen("//./pipe/dos1","rb+"); assert(fc);
//unbuffered is always required for selected AUX port protocol "per char" exchange
assert( !setvbuf(fc, 0, _IONBF, 0) );
fi= fopen("./isw","rb"); assert(fi);
fo= fopen("./osw","ab"); assert(fo);
//
for(;;){
    //rx
    assert( !fseek(fc,0,SEEK_SET) );
    int rx_ch= fgetc(fc);
    if(rx_ch == EOF){ assert( feof(fc) ); break; }
    if(rx_ch != PR_SYN)assert( fputc(rx_ch,fo) != EOF );
    //tx
    assert( !fseek(fc,0,SEEK_END) );
    int tx_ch= fi? fgetc(fi): PR_SYN;
    if(tx_ch == EOF){ assert( feof(fi) ); tx_ch= PR_SYN; }
    assert( fputc(tx_ch,fc) != EOF );
}
We could collect more error messages with more asserts, but the messages could not help us understand "what the error is", because when we tried to access //./pipe/* from the client side we assumed:
we must not fseek() on pipes (we should always get an ESPIPE error on every fseek());
we cannot violate the OS quotas on the number of open client-side pipe handles (the VirtualBox pipe server side declares the quotas).
But in real <stdio.h> life it is not just "can", it is "must"; we should assume instead:
we must always DO fseek() on pipes when interleaving read/write requests (otherwise every fputc/fgetc fails);
we can get around the OS quota on "the number of open client-side pipe handles" by using dup() to create a separated "rb"+"ab" pair of access handles;
we must know that dup() cannot "elevate" the RW access of a handle created by open() (dup() can only further restrict the existing RW access).
Ridiculously, the real <stdio.h> conditions are the strict opposite of our expectations, and we must find out why.
1.1
The OS quotas mean we cannot do this:
fcr= fopen("//./pipe/dos1", "rb"); assert(fcr);
fcw= fopen("//./pipe/dos1", "ab"); assert(fcw); //fcw always fails
Due to the OS quotas we must use "r+" mode to read/write through one bi-directional handle:
fc= fopen("//./pipe/dos1", "rb+"); assert(fc);
As we know, Windows has no POSIX "named FIFOs": the Windows kernel does not provide a "standard pipe server side" open to any number of pipe clients whose permission to connect is defined by the file system chmod/chown attributes.
In our case VirtualBox provides its own access rules on its own pipe server side for "//./pipe/dos1", and the "standard rule" on Windows is "single client only".
1.2
One user gave us the abstract answer "dup this to anything", so we can write code that creates two separate single-direction streams fcr/fcw.
The split fcr/fcw streams have none of the "r+" read/write interleave problems of the original bi-directional fc stream.
Mode "a"/"w" is often not what we need for the "w" direction, so we use "r+" as the "w" direction and, to avoid r/w interleave problems, simply never read from the "r+" stream.
{
    //here the regular job for 'rb+' access
    //a split FILE can be buffered;
    //it is the program's responsibility to understand whether the r/w buffers are shared or separate,
    //and to provide coherence of multiple shared r/w buffers
    //assert( !setvbuf(fc, 0, _IONBF, 0) );

    //must be created with 'r+' access (O_RDWR);
    //the original "rb+" will be used for the "w" direction (O_WRONLY);
    //dup() cannot "elevate" the RW attr, i.e. cannot create a "w" direction (O_WRONLY) from an "r" direction (O_RDONLY)
    fcw= fopen("//./pipe/dos1", "rb+"); assert(fcw);
    //here the function body to do the FILE split, the regular job for 'rb+' access
    {
        int dfcw, dfcr;
        dfcw= fileno(fcw); assert(dfcw != -01L);
        dfcr= dup(dfcw);   assert(dfcr != -01L);
        fcr= fdopen(dfcr, "rb"); assert(fcr);
        //drop the dfcw, dfcr values
    }
    //we can declare a function
    //to do the FILE split, the regular job for 'rb+' access
    FILE
    //returns an "r" mode dup'ed fcr from the original "r+"("w") mode fcw
    *fdup_r(FILE *const fcw);
}
1.3
The next improved version of the initial example:
example3
//
FILE
//returns an "r" mode dup'ed fcr from the original "r+"("w") mode fcw
*fdup_r(
    FILE *const fcw
){
    int dfcw, dfcr;
    dfcw= fileno(fcw); assert(dfcw != -01L);
    dfcr= dup(dfcw);   assert(dfcr != -01L);
    //fcr
    return fdopen(dfcr, "rb");
    //drop the dfcw, dfcr values
}
//
{
    enum{ PR_SYN= 0x16U };
    //pipe
    //open the pipe in "r+"("w") mode as fcw
    fcw= fopen("//./pipe/dos1","rb+"); assert(fcw);
    //get an "r" mode dup'ed fcr from the original "r+"("w") mode fcw
    fcr= fdup_r(fcw); assert(fcr);
    //an unbuffered pipe is always required for our AUX port protocol's "per char" exchange
    assert( !setvbuf(fcr, 0, _IONBF, 0) );
    assert( !setvbuf(fcw, 0, _IONBF, 0) );
    //allow substituting a regular file for the pipe
    assert( !fseek(fcr,0,SEEK_SET) );
    assert( !fseek(fcw,0,SEEK_END) );
    //local files
    fi= fopen("./isw","rb"); assert(fi);
    //append + reopen as r+
    fo= fopen("./osw","ab+"); assert(fo); fclose(fo);
    fo= fopen("./osw","rb+"); assert(fo); assert( !fseek(fo,0,SEEK_END) );
    //exchange loop
    for(;;){
        //rx
        int rx_ch= fgetc(fcr);
        if(rx_ch == EOF){ assert( feof(fcr) ); break; }
        if(rx_ch != PR_SYN)assert( fputc(rx_ch,fo) != EOF );
        //tx
        int tx_ch= fi? fgetc(fi): PR_SYN;
        if(tx_ch == EOF){ assert( feof(fi) ); tx_ch= PR_SYN; }
        assert( fputc(tx_ch,fcw) != EOF );
    }
}
Now the example looks better; there is no more:
fseek on the pipe;
fseek on every write to the local file;
fseek twice per loop pass on every r/w interleave for the pipe.
There are some questions about <stdlib.h>.
About dup() behaviour.
A guess about dup()'s behaviour can explain why:
dup() can get around the OS quota on the number of open handles;
dup() cannot "elevate" the RW access of a system_handler created by open() (it can only further restrict the existing RW access).
It looks like a process handle is an abstract structure { system_handler data, local_handler data }.
open() creates both members of the structure, but dup() can change only the local_handler field, copying the same system_handler field into the new handle.
That means that, from the OS point of view, all dup'ed handles perform the same access via the same system_handler, as if we had called read() on the same system_handler directly several times.
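That picture agrees with observable POSIX behaviour: dup()ed descriptors share a single file offset. A tiny hypothetical check ("data.bin" is a placeholder name; error checks omitted):

/* dup()ed descriptors share one offset in the "system" part. */
int fd1 = open("data.bin", O_RDONLY);
int fd2 = dup(fd1);
char c;
read(fd1, &c, 1);                                /* advance the shared offset via fd1 */
printf("%ld\n", (long)lseek(fd2, 0, SEEK_CUR));  /* prints 1, not 0 */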
Why must we not fseek() on pipes?
stdio rules aside, we cannot fseek() on devices which are incapable of seeking because there is no file pointer there; such devices are represented as a pair {input_port, output_port}:
there is no position of data inside either IO direction (no "input position" or "output position" exists);
there is no ordering of data between the two IO directions (the "input_port position" is not less than, greater than, or equal to the "output position");
many devices follow these restrictions.
And the "posix prog man" claims
"for fseek() the value of the file offset on devices which are incapable of seeking is undefined".
that means stdio is ignoring {offset,whence} parameters of fseek() call on FILE which are incapable of seeking and fseek(fc, rand(), rand()) also will work in our example2 instead of assert( !fseek(fc,0,0) ).
The "posix prog man" claims
"the behavior of fseek() on devices which are incapable of seeking is implementation-defined"
that means we must not call fseek() on devices which are incapable of seeking, fseek() must be used for directly purpose only, in order to set file pointer and not for any other jobs.
3.1
The next question is the ability to substitute a regular file for the trivial pipe in the same program.
In order for the program to work with a regular file as if the file were a trivial pipe, we can restrict the program's fseek() usage to two cases:
sequential reading of the file starting from its beginning: fseek(fi,0,SEEK_SET);
sequential writing to the file by appending after its end: fseek(fo,0,SEEK_END).
That means a pipe should "define up to completeness" the otherwise prohibited fseek() by allowing these two fseek() calls with parameters (0,SEEK_SET)/(0,SEEK_END).
The fseek() parameters are meaningless for a pipe, but they provide compatibility of the pipe with a regular file (SEEK_SET refers to reading the pipe, SEEK_END to writing it).
Sure, the purpose of stdio is to spare us from digging into stream details and guessing stream attribute combinations; I had never looked so closely before.
3.2
We can imagine an stdio-based program that does random access to a regular file by fseek(offset) and fread(size)/fwrite(size). Later we decide to feed the program from a pipe instead of a regular file, using externally created "gathered" pipe data with "chunks" of the same (offset,size).
In that case we would need the stdio behaviour of ignoring the {offset, whence} parameters of fseek() on a pipe.
But in reality such a "gathered" pipe is just improper use of a pipe; the technique produces unreliable code and leads to runtime errors, and runtime errors are among the most unpleasant things you can get.
Improper, that is, in comparison with "mapping a regular file onto a pipe", which is implemented by allowing only the two special fseek(fi,0,SEEK_SET)/fseek(fi,0,SEEK_END) calls for a pipe.
Why do we need fseek() for r/w interleaving in buffered "r+" mode?
A FILE provides a single buffer for IO and reads/writes data from the lower-level handle in "chunks" through that buffer (reducing IO system calls).
When we fseek() and the new file pointer points to data already in the buffer, no IO system calls seem to be needed.
But when, in "r+" mode, we change the IO direction (read->write or write->read), we need to flush the dirty write buffer and drop (invalidate, by LRU rules) the now-obsolete read buffer.
And flushing alone is not enough: most of the time the pointer in the FILE and the pointer in the lower-level handle differ, so we need an fseek to change the IO direction.
consider
//read
fseek(0,0)
    seek(0,0)
    //here the FILE and the lower-level handle both point to 0
fgetc
    //read into buf
    read(0,4K)
    //here the FILE points to buf[1]
    //and the lower-level handle points to file[4K]
    //the char was returned from FILE buf[0]
//write
fseek(100,0)
    //drop the input buf
    //here the previous input [1..4K) is dropped
    //!HERE, "on devices which are incapable of seeking", we cannot fseek:
    //all dropped data would be lost, we cannot "push back" the read data into the pipe;
    //the input stream would be damaged just because we tried to write after a read
    //"on devices which are incapable of seeking", just split the pipe by `fdup_r()`
    //or disable buffering by `setvbuf(FILE, 0, _IONBF, 0)`
    //seek to the new write pos 100
    seek(100,0)
    //here the FILE and the lower-level handle both point to 100
fputc
    //write into buf
    //here the FILE points to 101
    //the char was placed into FILE buf[0]
    //here the lower-level handle still points to 100
    //write(100,1) was not called
So, "on devices which are incapable of seeking", in buffered "r+" mode, just split the pipe by fdup_r() or disable buffering by setvbuf(FILE, 0, _IONBF, 0).
As a result.
Earlier we claimed "posix provides a reliable way to access streams", but now we see this was not entirely true; there are some issues here.
And I have never seen this information about dup() and "r+" mode in programming references.
Issues in exploring <stdio.h> stream access.
1.
bugfixes
Earlier I wrote: "When we fseek(), if the new file pointer points to data that is already in the buffer, nothing needs to be done by IO system calls."
By stdio convention this is not true: fseek() always resets the FILE* buffer; only serial getc/putc calls skip IO system calls.
The convention was chosen by the stdio designers to simplify coherent FILE* access, at the price of fseek() efficiency.
1.2
"assert(??? != 01L)" means "assert(??? != -01L)".
1.3
In the function fdup_r(), the comment "//drop dfcr" is wrong: we can never just drop dfcr, because fdopen() may fail (the function has since been renamed fdup_pipe_r() and closes dfcr on failure).
2.
Some summary info.
Conceptually there are two parts to a file handle: a "system" part and a "per_process" part. The "system" part holds a "per_process" references list accessed by [process pid]:
references_list item {
    long owner_pid;
    ulong num_of_the_process_dups;
};
A references_list item is removed when owner_pid terminates or num_of_the_process_dups drops to zero; the system handle is closed when no references_list items remain.
system {
    ulong lseek_pos;
    references_list per_process[];
    ubits system_io_restrictions;
}
process {
    long system_fildes;
    ubits per_process_io_restrictions;
}
2.1
There are several ways to create a file handle; they involve different parts of it:
IO call   system   per_process   effect
open      +        +             check access (chown,chmod) and create both parts anew
fork      -        +             dup the "per_process" part for the new process, same io_restrictions
dup       -        +             dup the "per_process" part for the current process, same or stricter io_restrictions
dopen     +        +             dup both parts, same or stricter io_restrictions (not supported by posix/stdlib)
close     ?        +             close the "per_process" part; close the "system" part if no "per_process" refs remain
2.2
We consider lseek_pos to live in the common "system" part of the file handle in order to simplify read/write calls to the handle interleaved by different processes without using IPC locks; at first glance there is no way to get the same behaviour with a private lseek_pos in the "per_process" part.
A process-coherent pair { lseek, read/write } would require an IPC lock for IO access interleaved by different processes.
dup() does for a file handle much the same as fork() does, but for the current process.
2.3
"fildes dopen(fildes, io_restriction)" would work similarly to "FILE* fdopen(fildes, io_restriction)".
Like fdopen() or dup(), dopen() would not check access (chown,chmod) and would use the existing system_io_restrictions created by open(); io_restriction corresponds to the open() attributes "O_RW/O_RO/O_WO".
dopen is not supported by posix/stdlib.
dopen() would work with regular files and pipes.
For pipes the file handle has no lseek() interface (lseek() returns ESPIPE), so for pipes an ordinary dup() works very close to dopen().
2.4
Our function fdup_pipe_r() (not from any library) is intended for bi-directional "r+" pipes only: it dup()s, from an original bi-directional "r+" (O_RW) FILE* stream created by open()/fopen(), a new separate uni-directional "r" (O_RO) FILE* stream, in order to use:
the "r+" FILE* stream for uni-directional, arbitrarily buffered, write-only pipe access;
the "r" FILE* stream for uni-directional, arbitrarily buffered, read-only pipe access.
The function fdup_pipe_r() is what allows us to access the guest UART 8250 port from the host machine under VirtualBox.
3.
stdio FILE* dups
Bi-directional "r+" streams:
for bi-directional "r+" `FILE*` access, a "pipe" is NOT a base interface for any "regular file":
    code intended for a "pipe" with interleaved r/w access cannot substitute any "regular file",
    because interleaving r/w on a "regular file" requires `fseek()/ftell()`;
for bi-directional "r+" `FILE*` access, a "pipe" cannot be buffered:
    it is required to call `setvbuf(_IONBF)` for the "pipe",
    since interleaving r/w on the "pipe" would require "pushing back" the input `FILE*` buffer;
for bi-directional "r+" `FILE*` access, a "pipe" should be split by `fdup_pipe_r()`
    into two separate uni-directional "r"/"w" `FILE*` accesses,
    to improve interleaved r/w access in code intended for a "pipe".
Uni-directional "r"/"w" streams:
for uni-directional "r"/"w" `FILE*` access, a "pipe" IS a base interface for SOME "regular files":
    code intended for a "pipe" with uni-directional r/w access can substitute such a "regular file",
    one in which the data really is placed one-by-one, as a pipe places it;
    the "pipe" interface should then be "defined up to completeness" by two fake calls:
    `rewind()`, the same as `fseek(0,SEEK_SET)`;
    "append()", the same as `fseek(0,SEEK_END)`;
for uni-directional "r"/"w" `FILE*` access (including "r+" pipes split by `fdup_pipe_r()`),
    a "pipe" can be buffered with any `_IO?BF` type (via a `setvbuf()` call);
    so an "r+" pipe split by `fdup_pipe_r()` into two derived uni-directional "r"/"w" pipes
    provides arbitrarily buffered interleaved r/w access in code intended for a "pipe"
    (the two derived uni-directional "r"/"w" pipes working together);
it is prohibited, in code intended for a "pipe" with uni-directional "r"/"w" `FILE*` access,
    to substitute an arbitrary "regular file" (one without one-by-one placed data)
    via fake `fseek(rand(),SEEK_SET)` calls in the code:
    if code needs `fseek(rand(),SEEK_SET)` calls, the code is not intended for pipes;
    such code follows the "regular file" interface (through its `fseek()` requests),
    and a "regular file" is not a base interface for a "pipe";
    an attempt to substitute a "pipe" where the "regular file" interface is expected violates "type checking".
The next improved version of the initial example:
example4
//
FILE
//returns an "r" mode dup'ed fr from the original "r+"("w") mode fw
*fdup_pipe_r(
    FILE *const fw
){
    int dfw, dfr;
    //assume "fileno" returns the same "fildes" value that is stored in the FILE*
    //("fileno" does not make a "dup")
    dfw= fileno(fw); assert(dfw != -01L);
    dfr= dup(dfw);   assert(dfr != -01L);
    //fr
    FILE *fr;
    fr= fdopen(dfr, "rb"); if(!fr)close(dfr);
    return fr;
    //drop the dfw, dfr values
}
//
{
    enum{ PR_SYN= 0x16U };
    //pipe
    //open the pipe in "r+"("w") mode as fcw
    fcw= fopen("//./pipe/dos1","rb+"); assert(fcw);
    //get an "r" mode dup'ed fcr from the original "r+"("w") mode fcw
    fcr= fdup_pipe_r(fcw); assert(fcr);
    //an unbuffered pipe is always required for our AUX port protocol's "per char" exchange,
    //but for split pipes any buffering type "_IO?BF" could be set
    assert( !setvbuf(fcr, 0, _IONBF, 0) );
    assert( !setvbuf(fcw, 0, _IONBF, 0) );
    //an "r+" pipe under interleaved r/w access never allows substituting a regular file for the pipe
    //("rewind()/append()" calls cannot help)
    //assert( !fseek(fcr,0,SEEK_SET) );
    //assert( !fseek(fcw,0,SEEK_END) );
    //our AUX port protocol's local files
    fi= fopen("./isw","rb"); assert(fi);
    //"a" (create if it does not exist) + reopen as "r+" + append()
    fo= fopen("./osw","ab+"); assert(fo); fclose(fo);
    fo= fopen("./osw","rb+"); assert(fo); assert( !fseek(fo,0,SEEK_END) );
    //our AUX port protocol's exchange loop
    //terminated by Ctrl+C
    for(;;){
        //rx
        int rx_ch= fgetc(fcr);
        //"assert( feof(fcr) )" throws an error if fgetc(fcr) returned EOF but !feof(fcr) (better: check errno);
        //otherwise the endless loop is intentional
        if(rx_ch == EOF){ assert( feof(fcr) ); break; }
        if(rx_ch != PR_SYN)assert( fputc(rx_ch,fo) != EOF );
        //tx
        int tx_ch= fi? fgetc(fi): PR_SYN;
        //"assert( feof(fi) )" throws an error if fgetc(fi) returned EOF but !feof(fi) (better: check errno);
        //otherwise the endless loop is intentional
        if(tx_ch == EOF){ assert( feof(fi) ); tx_ch= PR_SYN; }
        assert( fputc(tx_ch,fcw) != EOF );
    }
}
Now the example looks better: the guest UART 8250 port is accessed by simple stdio reads/writes.

read stdout of a process in itself using c++

Consider we have some_function, and it prints its result to stdout instead of returning it. Changing its definition is out of our scope and there's no alternative to it. We're left with the option of reading it from stdout. So the question:
How to read the stdout of a C++ program in the program itself?
It is possible to get the pid. I searched for whether we can get an fd of the same program, but I was not able to find anything.
#include <unistd.h>
#include <sys/types.h>
#include <iostream>
#include <string>
using namespace std;
void some_function(){
    std::cout<<"Hello World";
}
int main(){
    int pid = ::getpid();
    string s = //What to write here.
    cout<<"Printing";
    some_function(); //This function prints "Hello World" to the screen
    cout<<s; //"PrintingHello World"
    return 0;
}
How to attach a pipe to the same process, i.e. without creating a child process?
Some might think of creating a child process and calling some_function in it, so as to read its stdout in the parent process. But no: some_function depends on the process which calls it, and hence we want to call it in that very process instead of creating a child process.
This isn't hard to do, but IMO it's quite a hack, and it won't work with a multithreaded program:
// make a temp file to store the function's stdout
// (mkstemp needs a modifiable template, not a string literal)
char tmpl[] = "/tmp/stdout.XXXXXXX";
int newStdOut = mkstemp( tmpl );
// save the original stdout
int tmpStdOut = dup( STDOUT_FILENO );
// flush anything buffered on the current stdout
fflush( stdout );
// now point the stdout file descriptor to the file
dup2( newStdOut, STDOUT_FILENO );
// call the function we want to collect the stdout from
some_function();
// make sure everything it wrote reaches the file
fflush( stdout );
// restore original stdout
dup2( tmpStdOut, STDOUT_FILENO );
close( tmpStdOut );
// the tmp file now contains whatever some_function() wrote to stdout
Error checking, proper headers, syncing C stdout with C++ cout, and reading from and cleaning up the temp file are left as exercises... ;-)
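For what it's worth, a hedged sketch of those exercises, building on the snippet above (assumes <string>, <unistd.h>, and the tmpl array from the revised snippet):

// Sketch of the "exercises": read the captured output back.
lseek( newStdOut, 0, SEEK_SET );          // rewind the temp file
std::string s;
char buf[4096];
ssize_t n;
while ( ( n = read( newStdOut, buf, sizeof buf ) ) > 0 )
    s.append( buf, (size_t)n );           // collect what some_function() wrote
close( newStdOut );
unlink( tmpl );                           // remove the temp file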
Note that you can't safely use a pipe - the function can write enough to fill up the pipe, and you can't drain the pipe while you're blocked inside the function call.
How to read stdout of C++ program in itself?
There are very few reasons to do that and that is usually (but not always) a design bug.
Be aware of an important thing (at least in a single-threaded program): if your program is both reading from its "stdout" and writing to it (as usual), it could get stuck in a deadlock, unable to read and thus never reaching any output routine, or unable to write because the pipe is full.
So a program which both reads and writes the same thing (actually, the two sides of the same pipe(7)) should use some multiplexing call like poll(2). See also this.
Once you understand that, you'll have some event loop. And before that, you'll make a pipe(7) using pipe(2) (and dup2(2)).
However, pipe to self is a good thing in some signal(7) handling (see signal-safety(7)). That trick is even recommended in Qt Unix signal handling.
Read more about Unix system programming, e.g. ALP or some newer book. Read also intro(2) & syscalls(2).
I have looked for pipe and it requires fd
Wrong. Read much more carefully pipe(2); on success it fills an array of two file descriptors. Of course it could fail (see errno(3) & perror(3) & strerror(3))
Maybe you just need popen(3). Or std::ostringstream. Or open_memstream(3).
Consider we have some_function, and it prints its result to stdout instead of returning it. Changing its definition is out of our scope and there's no alternative to it
If some_function is your code, or is some free software, you could and probably should improve it to give a result somewhere....

Using pipe() and fork() to read from a file and output to the console/new file

I'm trying to learn how to use the pipe() and fork() system calls. I'm using pipe and fork to create parent and child processes, where the child reads a character from the text file and then sends it through the pipe to the parent, which outputs the character to the console; the desired result is that the entire text gets printed to the console. Later I'm going to do some text processing on the file, with the child process reading and processing and then sending the updated text to the parent, but for now I just want to make sure I'm getting the basics of pipe() correct.
example file:
This is a test file; it is 1 of many.
Others will follow.
Relevant code:
pid = fork();
ifstream fin;
fin.open(inputFilename);
fin.get(inputChar);
if (pid == -1)
{
    perror("Trouble");
    exit(2);
}
else if (pid == 0) //child process that reads the text file and writes to the parent
{
    close(pipefds[0]);
    while (!fin.eof())
    {
        write(pipefds[1], &inputChar, sizeof(inputChar));
        fin.get(inputChar);
    }
    close(pipefds[1]);
    exit(0);
}
else
{
    close(pipefds[1]);
    read(pipefds[0], readbuffer, sizeof(readbuffer));
    cout << readbuffer << endl;
    close(pipefds[0]);
    exit(0);
}
fin.close();
However, when I compile and run, the output is of varying length. Sometimes it prints the whole file, other times just a few letters or half of a line, such as:
This i
I've tried going through the man pages and researching more, but I haven't been able to find any answers. What exactly is going on in my program such that it sometimes reads everything from the file but other times doesn't? Any help is greatly appreciated!
It looks as though you're trying to read all the data from the pipe with one call to read(2). But, as with any I/O operation, this may always return fewer bytes than you requested. You should always check the return value of read(2) and write(2) system calls (and others), to make sure that they acted as expected.
In this case, you should loop until you get some independent notification from the child process that they're done sending data. This can be signaled in this case by read(2) returning 0, meaning that the child closed their end of the pipe.
You are assuming that the parent can read everything written to the pipe by the child via one read() call. That might be a safe assumption for a pipe if the child were writing everything via a single write() call, as long as the overall data size did not exceed the size of the pipe's internal buffer. It is not at all safe when, as in this case, the child is sending data via many little writes.
How much data the parent actually gets will depend in part on how its one read() call is ordered relative to the child's writes. Inasmuch as the two are separate processes and you're employing no IPC other than the pipe itself, it's basically unpredictable how much data the parent will successfully read.
In the general case, one must assume that the reader will need to perform multiple read() calls to read all data that are sent. It must keep calling read() and processing the resulting data appropriately until read's return value indicates that an I/O error has occurred or that the end of the file has been reached. Note well that end of file does not mean just that no more bytes are available now, but that no more bytes will ever be available. That happens after all processes have closed all copies of the write end of the pipe.
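Concretely, the parent's single read() in the question could be replaced by a loop along these lines (a sketch reusing the question's variables; error handling abbreviated):

// Parent: drain the pipe until EOF instead of a single read().
close(pipefds[1]);                 // must close the unused write end first,
                                   // or EOF will never arrive
char readbuffer[128];
ssize_t n;
while ((n = read(pipefds[0], readbuffer, sizeof(readbuffer))) > 0)
    cout.write(readbuffer, n);     // forward exactly n bytes
if (n == -1)
    perror("read");
close(pipefds[0]);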

Returning output from bash script to calling C++ function

I am writing a baby program for practice. What I am trying to accomplish is basically a simple little GUI which displays services (for Linux); with buttons to start, stop, enable, and disable services (Much like the msconfig application "Services" tab in Windows). I am using C++ with Qt Creator on Fedora 21.
I want to create the GUI with C++, populate it with the list of services by calling bash scripts, and call bash scripts on button clicks to perform the appropriate action (enable, disable, etc.).
But when the C++ GUI calls a bash script (using system("path/to/script.sh")), the return value only reflects the exit status. How do I receive the output of the script itself, so that I can in turn display it in the GUI?
As a conceptual example: if I were trying to display the output of (systemctl --type service | cut -d " " -f 1) in a GUI I have created in C++, how would I go about doing that? Is this even the correct way to do what I am trying to accomplish? If not,
What is the right way? and
Is there still a way to do it using my current method?
I have looked for a solution to this problem but I can't find information on how to return values from Bash to C++, only how to call Bash scripts from C++.
We're going to take advantage of the popen function here.
std::string exec(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[128];
    std::string result = "";
    while (!feof(pipe)) {
        if (fgets(buffer, 128, pipe) != NULL)
            result += buffer;
    }
    pclose(pipe);
    return result;
}
This function takes a command as an argument, and returns the output as a string.
NOTE: this will not capture stderr! A quick and easy workaround is to redirect stderr to stdout, with 2>&1 at the end of your command.
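For instance, a hypothetical call for the command from the question, with stderr merged as suggested:

std::string services = exec("systemctl --type service | cut -d ' ' -f 1 2>&1");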
Here is documentation on popen. Happy coding :)
You have to run the command using popen instead of system and then read from the returned file pointer in a loop.
Here is a simple example for the command ls -l
#include <stdio.h>
#include <stdlib.h>
int main() {
    FILE *process;
    char buff[1024];
    process = popen("ls -l", "r");
    if (process != NULL) {
        /* check fgets's return value: looping on !feof() alone
         * would print the last buffer twice at EOF */
        while (fgets(buff, sizeof(buff), process) != NULL) {
            printf("%s", buff);
        }
        pclose(process);
    }
    return 0;
}
The long approach - which gives you complete control of stdin, stdout, and stderr of the child process, at the cost of fairly significant complexity - involves using fork and execve directly.
Before forking, set up your endpoints for communication - pipe works well, or socketpair. I'll assume you've invoked something like below:
int childStdin[2], childStdout[2], childStderr[2];
pipe(childStdin);
pipe(childStdout);
pipe(childStderr);
After fork, in child process before execve:
dup2(childStdin[0], 0); // childStdin read end to fd 0 (stdin)
dup2(childStdout[1], 1); // childStdout write end to fd 1 (stdout)
dup2(childStderr[1], 2); // childStderr write end to fd 2 (stderr)
.. then close all of childStdin, childStdout, and childStderr.
After fork, in parent process:
close(childStdin[0]);  // parent keeps only the write end of the child's stdin
close(childStdout[1]); // and only the read ends of the child's stdout...
close(childStderr[1]); // ...and stderr
Now, your parent process has complete control of the std i/o of the child process - and must safely multiplex childStdin[1], childStdout[0], and childStderr[0], while also monitoring for SIGCLD and eventually using a wait-series call to check the process termination code. pselect is particularly good for dealing with SIGCLD while dealing with std i/o asynchronously. See also select or poll of course.
If you want to merge the child's stdout and stderr, just dup2(childStdout[1], 2) and get rid of childStderr entirely.
The man pages should fill in the blanks from here. So that's the hard way, should you need it.
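To tie the snippets together, here is a compressed end-to-end sketch (stdout pipe only; add the stdin/stderr pipes the same way; error checks omitted):

/* Sketch: capture a child's stdout via fork + exec (stdout only). */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int childStdout[2];
    pipe(childStdout);

    pid_t pid = fork();
    if (pid == 0) {                         /* child */
        dup2(childStdout[1], 1);            /* write end becomes stdout */
        close(childStdout[0]);
        close(childStdout[1]);
        char *argv[] = { (char *)"ls", (char *)"-l", NULL };
        execv("/bin/ls", argv);             /* an execve(2) variant */
        _exit(127);                         /* only reached on failure */
    }

    close(childStdout[1]);                  /* parent: close the write end */
    char buf[4096];
    ssize_t n;
    while ((n = read(childStdout[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(childStdout[0]);

    int status;
    waitpid(pid, &status, 0);               /* reap; inspect the status */
    return 0;
}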

Capturing child stdout to a buffer

I'm developing a cross-platform project currently. On Windows I had a class that ran a process/script (from a command line), waited for it to end, and read everything from its stdout/stderr into a buffer. I then printed the output to a custom 'console'. Note: this was not a redirection of the child's stdout to the parent's stdout, just a pipe from the child's stdout to the parent.
I'm new to OSX/unix-like APIs, but I understand the canonical way of doing something like this is forking and piping the stdouts together. However, I don't want to redirect it to my stdout; I would like to capture the output. It should work pretty much like this (pseudocode, resemblance with unix functions purely coincidental):
class program
{
    string name, cmdline;
    string output;
    program(char * name, char * cmdline)
        : name(name), cmdline(cmdline) {};
    int run()
    {
        // run program - spawn it as a new process
        int pid = exec(name, cmdline);
        // wait for it to finish
        wait(pid);
        char buf[size];
        int n;
        // read output of program's stdout
        // keep appending data until there's nothing left to read
        while (read(pid, buf, size, &n))
            output.append(buf, n);
        // return exit code of process
        return getexitcode(pid);
    }
    const string & getOutput() { return output; }
};
How would I go about doing this on OSX?
Edit:
OK, so I studied the relevant APIs, and it seems that some kind of fork/exec combo is unavoidable. The problem at hand is that my process is very large, and forking it really seems like a bad idea (I see that some unix implementations can't do it if the parent process takes up 50%+ of the system RAM).
Can't I avoid this scheme in any way? I see that vfork() might be a possible contender, so maybe I could try to mimic the popen() function using vfork. But then again, most man pages state that vfork might very well just be fork().
You have a library call to do just that: popen. It gives you back a FILE pointer, and you can read from it until EOF. It's part of stdio, so you can do this on OSX, but on other systems as well. Just remember to pclose() the stream.
#include <stdio.h>
FILE * popen(const char *command, const char *mode);
int pclose(FILE *stream);
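In terms of the pseudocode class above, run() could be sketched over popen like this (a hypothetical helper, not your class's final API; joining name and cmdline is left to the caller):

// Sketch: capture a child's stdout into a string via popen (POSIX).
#include <cstdio>
#include <string>
#include <sys/wait.h>

int run_and_capture(const std::string &cmd, std::string &output)
{
    FILE *p = popen(cmd.c_str(), "r");   // spawn; read the child's stdout
    if (!p) return -1;
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, p)) > 0)
        output.append(buf, n);           // captured, never echoed to our stdout
    int status = pclose(p);              // waits; returns a wait()-style status
    return status == -1 ? -1 : WEXITSTATUS(status);
}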
If you want to keep the output with absolutely no redirection, the only thing I can think of is using something like tee - a command which splits the output to a file while maintaining its own stdout. It's fairly easy to implement that in code as well, but it might not be necessary in this case.