Alright, I'm using a pipe to communicate with the children of my process.
First of all I tried to make a backup copy of my original fds so I can access them later, but somehow the program just gets stuck when duplicating the fds.
int pipeFd [2];
int pid;
pipe (pipeFd);
//Safeguard of the Original FDs
int fdSG [2];
perror ("fdsg create");
dup2 (1, fdSG [1]);
perror ("dup2 sfg1");
dup2 (0, fdSG [0]);
perror ("dup2 sfg2");
dup2 (pipeFd [1], 1);
The program gets stuck at the last instruction shown here.
The terminal output is the following:
fdsg create: Success
dup2 sfg1: Bad file descriptor
dup2 sfg2: Bad file descriptor
dup2: Bad file descriptor
Does any of you have any clue why this is happening?
From the code you've shown, you haven't initialised fdSG. That's not correct: both arguments of dup2 need to be valid file descriptors.
Since you seem to want to copy an fd rather than replace an existing one, you should use dup for those backup copies instead; it picks a free fd and uses that. (Alternatively, you could initialise fdSG to valid fds.)
From the manpage:
dup() uses the lowest-numbered unused descriptor for the new descriptor.
I want to associate a file descriptor with a file pointer and use the file pointer for writing.
I put together program io.cc below:
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <iostream>
using namespace std;

int main() {
    ssize_t nbytes;
    const int fd = 3;
    char c[100] = "Testing\n";
    nbytes = write(fd, (void *) c, strlen(c)); // Line #1
    FILE * fp = fdopen(fd, "a");
    fprintf(fp, "Writing to file descriptor %d\n", fd);
    cout << "Testing alternate writing to stdout and to another fd" << endl;
    fprintf(fp, "Writing again to file descriptor %d\n", fd);
    close(fd); // Line #2
    return 0;
}
I can alternately comment lines 1 and/or 2, compile/run
./io 3> io_redirect.txt
and check the contents of io_redirect.txt.
Whenever line 1 is not commented, it produces in io_redirect.txt the expected line Testing\n.
If line 2 is commented, I get the expected lines
Writing to file descriptor 3
Writing again to file descriptor 3
in io_redirect.txt.
But if it is not commented, those lines do not show up in io_redirect.txt.
Why is that?
What is the correct way of using fdopen?
NOTE.
This seems to be the right approach for a (partial) answer to Smart-write to arbitrary file descriptor from C/C++
I say "partial" since it only lets me use C-style fprintf.
I would still like to also use C++-style stream <<.
EDIT:
I was forgetting about fclose(fp).
That "closes" part of the question.
Why is that?
The opened stream ("stream" is an opened FILE*) is block buffered, so nothing gets written to the destination before the file is flushed. Exiting from an application closes all open streams, which flushes the stream.
Because you close the underlying file descriptor before flushing the stream, the behavior of your program is undefined. I would really recommend reading POSIX 2.5.1 Interaction of File Descriptors and Standard I/O Streams (which is written in horrible language, nonetheless), from which:
... if two or more handles are used, and any one of them is a stream, the application shall ensure that their actions are coordinated as described below. If this is not done, the result is undefined.
...
For the first handle, the first applicable condition below applies. ...
...
If it is a stream which is open for writing or appending (but not also open for reading), the application shall either perform an fflush(), or the stream shall be closed.
A "handle" is a file descriptor or a stream. An "active handle" is the last handle that you did something with.
The fp stream is the active handle that is open for appending to file descriptor 3. Because fp is an active handle and is not flushed and you switch the active handle to fd with close(fd), the behavior of your program is undefined.
My guess, and what most probably happens, is that your C standard library implementation calls fflush(fp) after main returns; because fd is already closed, the internal write(3, ...) call fails, and nothing is written to the output.
What is the correct way of using fdopen?
The usage you presented is the correct way of using fdopen.
So I am trying to implement the following command line statement in c++ by using dup2() and execvp(): wc < inputFile.txt then return to my command line. So basically I am forking a process and executing that command in the child process.
However my code produces the following error: wc: stdin: read: Bad file descriptor
Here is my code:
int file_desc = open(fileName.c_str(), O_WRONLY | O_APPEND);
int stdin = dup(0);
dup2(file_desc,0);
execvp (args2[0], args2); // now execute
dup2(stdin, 0);
So my thought process was that I needed to redirect standard input (index 0 of the file descriptor table) to the file descriptor of the file, since index 0 is always stdin and that's where input is read from. Then, after executing, I replace it with the original standard input. So I am confused about what I am doing wrong.
The file_desc is opened only for writing (O_WRONLY) - try opening it for reading (O_RDONLY).
You might also want to:
dup2() between fork() and exec() instead of saving and restoring stdin — fewer system calls, and it avoids a race in multi-threaded apps.
close file_desc in the parent process
close file_desc in the child process after the dup2 (and before the exec)
I am trying to get the output of the Tcl interpreter as described in the answer to this question: Tcl C API: redirect stdout of embedded Tcl interp to a file without affecting the whole program. Instead of writing the data to a file, I need to get it through a pipe. I changed Tcl_OpenFileChannel to Tcl_MakeFileChannel and passed the write end of the pipe to it. Then I called Tcl_Eval with some puts. No data came out at the read end of the pipe.
#include <sys/wait.h>
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <tcl.h>
#include <iostream>
int main() {
    int pfd[2];
    if (pipe(pfd) == -1) { perror("pipe"); exit(EXIT_FAILURE); }
    /*
    int saved_flags = fcntl(pfd[0], F_GETFL);
    fcntl(pfd[0], F_SETFL, saved_flags | O_NONBLOCK);
    */
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_Channel chan;
    int rc;
    int fd;

    /* Get the channel bound to stdout.
     * Initialize the standard channels as a byproduct
     * if this wasn't already done. */
    chan = Tcl_GetChannel(interp, "stdout", NULL);
    if (chan == NULL) {
        return TCL_ERROR;
    }

    /* Duplicate the descriptor used for stdout. */
    fd = dup(1);
    if (fd == -1) {
        perror("Failed to duplicate stdout");
        return TCL_ERROR;
    }

    /* Close stdout channel.
     * As a byproduct, this closes the FD 1, we've just cloned. */
    rc = Tcl_UnregisterChannel(interp, chan);
    if (rc != TCL_OK)
        return rc;

    /* Duplicate our saved stdout descriptor back.
     * dup() semantics are such that if it doesn't fail,
     * we get FD 1 back. */
    rc = dup(fd);
    if (rc == -1) {
        perror("Failed to reopen stdout");
        return TCL_ERROR;
    }

    /* Get rid of the cloned FD. */
    rc = close(fd);
    if (rc == -1) {
        perror("Failed to close the cloned FD");
        return TCL_ERROR;
    }

    chan = Tcl_MakeFileChannel((void*)pfd[1], TCL_WRITABLE | TCL_READABLE);
    if (chan == NULL)
        return TCL_ERROR;

    /* Since stdout channel does not exist in the interp,
     * this call will make our file channel the new stdout. */
    Tcl_RegisterChannel(interp, chan);

    rc = Tcl_Eval(interp, "puts test");
    if (rc != TCL_OK) {
        fputs("Failed to eval", stderr);
        return 2;
    }

    char buf;
    while (read(pfd[0], &buf, 1) > 0) {
        std::cout << buf;
    }
}
I've no time at the moment to tinker with the code (might do that later) but I think this approach is flawed as I see two problems with it:
If stdout is connected to something which is not an interactive console (the runtime usually checks this with a call to isatty(2)), full buffering could be (and I think will be) engaged. So unless your call to puts in the embedded interpreter outputs enough bytes to fill up or overflow Tcl's channel buffer (8 KiB, ISTR) and then the downstream system buffer (see the next point), which, I think, won't be less than 4 KiB (the size of a single memory page on a typical hardware platform), nothing will come up at the read side.
You could test this by changing your Tcl script to flush stdout, like this:
puts one
flush stdout
puts two
You should then be able to read the four bytes output by the first puts from the pipe's read end.
A pipe is two FDs connected via a buffer (of a defined but system-dependent size). As soon as the write side (your Tcl interp) fills up that buffer, the write call which will hit the "buffer full" condition will block the writing process unless something reads from the read end to free up space in the buffer. Since the reader is the same process, such a condition has a perfect chance to deadlock since as soon as the Tcl interp is stuck trying to write to stdout, the whole process is stuck.
Now the question is: could this be made working?
The first problem might be partially fixed by turning off buffering for that channel on the Tcl side. This (supposedly) won't affect buffering provided for the pipe by the system.
The second problem is harder, and I can only think of two possibilities to fix it:
Create a pipe then fork(2) a child process ensuring its standard output stream is connected to the pipe's write end. Then embed the Tcl interpreter in that process and do nothing to the stdout stream in it as it will be implicitly connected to the child process standard output stream attached, in turn, to the pipe. You then read in your parent process from the pipe until the write side is closed.
This approach is more robust than using threads (see the next point) but it has one potential downside: if you need to somehow affect the embedded Tcl interpreter in some ways which are not known up front before the program is run (say, in response to the user's actions), you will have to set up some sort of IPC between the parent and the child processes.
Use threading and embed the Tcl interp into a separate thread: then ensure that reads from the pipe happen in another (let's call it "controlling") thread.
This approach might superficially look simpler than forking a process, but then you get all the hassles related to proper synchronization common to threading. For instance, a Tcl interpreter must not be accessed from threads other than the one in which the interp was created. This applies not only to concurrent access (which is kind of obvious by itself) but to any access at all, including synchronized access, because of possible TLS issues. (I'm not exactly sure this holds true, but I have a feeling this is a big can of worms.)
So, having said all that, I wonder why you seem to systematically reject suggestions to implement a custom "channel driver" for your interp and just use it to provide the implementation for the stdout channel in your interp? This would create a super-simple single-thread fully-synchronized implementation. What's wrong with this approach, really?
Also observe that if you decided to use a pipe in hope it will serve as a sort of "anonymous file", then this is wrong: a pipe assumes both sides work in parallel. And in your code you first make the Tcl interp write everything it has to write and then try to read this. This is asking for trouble, as I've described, but if this was invented just to not mess with a file, then you're just doing it wrong, and on a POSIX system the course of actions could be:
Use mkstemp() to create and open a temporary file.
Immediately unlink it, using the name mkstemp() wrote into the template buffer you passed it.
Since the file still has an open FD (returned by mkstemp()), it will disappear from the file system, but its contents will not be deleted, and it can still be written to and read from through that FD.
Make this FD an interp's stdout. Let the interp write everything it has to.
After the interp is finished, lseek() the FD back to the beginning of the file and read from it.
Close the FD when done; the space it occupied on the underlying filesystem will then be reclaimed.
If I create pipe in unix this way:
int fds[] = {0, 0};
pipe(fds);
Then make FILE * from fds[0] this way:
FILE *pipe_read = fdopen(fds[0], "rt");
Then how should I close this file (pipe_read)?
fclose(pipe_read)
pclose(pipe_read)
close(fileno(pipe_read))
fdopen returns a FILE* so you should fclose it. This will also close the underlying file descriptor as well.
The pclose call is meant for closing a handle created with popen, a function you use for running a command and connecting to it with pipes.
The close call will close the underlying file descriptor but, unfortunately, before the file handle has had a chance to flush its data out - in other words, you're likely to lose data.
You should use fclose(pipe_read).
close() closes the file descriptor in the kernel. It's not enough, because the FILE pointer is never freed and its buffered data is never flushed. So you should use fclose() on pipe_read, which also takes care of closing the file descriptor.
I have a failing C program, and I've narrowed it down to a fork()ed child trying to close stdout and stderr, which were closed by its parent process before calling fork(). I assume those streams were passed on to the child process.
How can I tell whether a stream is closed in C before attempting to close it with something like fclose(stdout)?
C programs on UNIX expect to have a file descriptors 0, 1 and 2 open when they are started. If you do not want them to go anywhere, open /dev/null and dup it to those file descriptors.
If you're working at that level, you probably shouldn't be using the C standard library's buffered FILE pointers (stdin & co), but rather with the underlying file descriptor integers themselves.
You should be able to do some harmless operation, such as maybe a lseek(fd, 0, SEEK_SET) on the file to detect if the underlying descriptor is valid.
You can use ftell() to check that: if it returns -1, your stream is most likely closed.
After the call to fclose(), any use of stream results in undefined behavior.
So if it's the FILE* stdout, you can't use stdout anymore at all, not even to check if it's valid/open.
You could use the file descriptor for stdout directly; it's fd 1.
struct stat stbuf;
if (fstat(1, &stbuf) == -1) {
    if (errno == EBADF) {
        /* stdout isn't open/valid */
    }
}