Named pipe (FIFO) gets blocked although there is data to read - C++

I'm having some problems with named pipes, or "FIFOs", in C++. I have two executable files: one tries to read, the other tries to write. The reader is meant to be executed only once; to keep the example simple, it reads 10 times and then exits. The writer, however, should be executed many times (in my original program it can't be executed twice at once: you have to wait for it to finish before running it again).
The problem with this code is that it only prints an incoming message when another one arrives. It seems to block until it receives another message: the read() line blocks even though there is data to read, and it resumes when I send new data.
I tried another thing: as you can see, the writer closes its file descriptor. The reader opens the file descriptor twice, because otherwise it would hit EOF and unblock. I tried eliminating those lines (the writer would not close its fd, and the reader would open the fd just once, dropping the second open()). But for some reason it still unblocks if I do that. Why does that happen?
This is my code:
Writer:
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdlib>
#include <iostream>
#include <string>

int main () {
    int fd;
    static const std::string FILE_FIFO = "/tmp/archivo_fifo";
    mknod ( FILE_FIFO.c_str(), S_IFIFO|0666, 0 );
    std::string mess = "Hii!! Example";
    //open:
    fd = open ( FILE_FIFO.c_str(), O_WRONLY );
    //write:
    write ( fd, mess.c_str(), mess.length() );
    std::cout << "[Writer] I wrote " << mess << std::endl;
    //close:
    close ( fd );
    fd = -1;
    std::cout << "[Writer] END" << std::endl;
    exit ( 0 );
}
Reader:
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdlib>
#include <iostream>
#include <string>

int main () {
    int i, fd;
    static const int BUFFSIZE = 100;
    static const std::string name = "/tmp/archivo_fifo";
    mknod ( name.c_str(), S_IFIFO|0666, 0 );
    char buffer[BUFFSIZE];
    i = 0;
    fd = open ( name.c_str(), O_RDONLY );
    while (true) {
        i++;
        std::cout << "Waiting to read Fifo: " << i << std::endl;
        ssize_t bytesLeidos = read ( fd, buffer, BUFFSIZE );
        // open the FIFO again so the next read() blocks instead of returning 0 (EOF)
        fd = open ( name.c_str(), O_RDONLY );
        // build the string from the bytes actually read (buffer is not NUL-terminated)
        std::string mess ( buffer, bytesLeidos );
        std::cout << "[Reader] I read: " << mess << std::endl;
        sleep(3);
        if (i == 10) break;
    }
    close ( fd );
    fd = -1;
    unlink ( name.c_str() );
    std::cout << "[Reader] END" << std::endl;
    exit ( 0 );
}
Thanks in advance. And please excuse my poor English

You should use the select() call to find out whether any data is available on the pipe's fd.
Have a look at
http://en.wikipedia.org/wiki/Select_(Unix)
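For instance, a minimal sketch (not from the question; the 3-second timeout is arbitrary) that waits for the FIFO descriptor to become readable before calling read(). Note that select() also reports the descriptor as readable at EOF, i.e. once every writer has closed its end:

#include <sys/select.h>
#include <unistd.h>

// returns true if fd has data (or EOF) to read within `seconds` seconds
bool wait_for_data(int fd, int seconds)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;

    // select() returns the number of ready descriptors, 0 on timeout, -1 on error
    return select(fd + 1, &readfds, nullptr, nullptr, &tv) > 0;
}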

You've opened the file in blocking mode:
If some process has the pipe open for writing and O_NONBLOCK is clear, read() shall block the calling thread until some data is written or the pipe is closed by all processes that had the pipe open for writing.
Depending on your goals, you should either synchronize the readers and writers of your pipe, or use non-blocking mode for the reader. Read about poll, epoll and select.
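A sketch of the non-blocking variant (illustrative only, reusing the question's FIFO path): open the FIFO with O_NONBLOCK and let poll() decide when read() is worth calling:

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main()
{
    // O_NONBLOCK also makes this open() return immediately,
    // even when no writer has the FIFO open yet
    int fd = open("/tmp/archivo_fifo", O_RDONLY | O_NONBLOCK);
    pollfd pfd{fd, POLLIN, 0};

    for (;;) {   // loop exit condition omitted for brevity
        if (poll(&pfd, 1, 3000) > 0 && (pfd.revents & POLLIN)) {
            char buffer[100];
            ssize_t n = read(fd, buffer, sizeof buffer); // never blocks now
            if (n > 0) { /* handle n bytes */ }
            // n == 0 means every writer closed its end (EOF)
        }
    }
}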

I've been reading more about named pipes and now I understand the problem. I wrote:
the reader opens the file descriptor twice, because otherwise it would hit EOF and unblock. I tried eliminating those lines (the writer would not close its fd, and the reader would open the fd just once, dropping the second open()). But for some reason it still unblocks if I do that. Why does that happen?
It unblocks because the other process exits, so the OS closes its file descriptor anyway. That's why it unblocks even though I never wrote close(fd).
A blocking read on a FIFO can only unblock when:
1) there is data to read, or
2) the other program closed its file descriptor. If there is no data to read and the writer closed its file descriptor (even if the reader still has the FIFO open), read() returns 0 and unblocks.
So my solution was to redesign my program so that the writer's file descriptor stays open all the time. Which means: there is only one executable now. I'm pretty sure I could have done it with two executables, but I would probably need semaphores or something similar to synchronize, so the reader wouldn't try to read while the writer's fd is closed.
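For the record, a sketch of a two-executable variant of that idea (an assumption on my part, not the poster's code; it relies on Linux behavior, where opening a FIFO with O_RDWR succeeds immediately, something POSIX leaves unspecified): the reader holds its own write end open, so the FIFO never loses its last writer and read() blocks for data instead of returning 0 between runs of the writer:

#include <fcntl.h>
#include <unistd.h>
#include <iostream>
#include <string>

int main()
{
    // O_RDWR: this process is also a writer, so the FIFO never loses its
    // last writer and read() blocks for data instead of returning 0 (EOF)
    int fd = open("/tmp/archivo_fifo", O_RDWR);

    char buffer[100];
    for (int i = 0; i < 10; ++i) {
        ssize_t n = read(fd, buffer, sizeof buffer); // blocks until data arrives
        if (n > 0)
            std::cout << "[Reader] I read: " << std::string(buffer, n) << std::endl;
    }
    close(fd);
}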


Is output read from popen()ed FILE* complete before pclose()?

pclose()'s man page says:
The pclose() function waits for the associated process to terminate and returns the exit status of the command as returned by wait4(2).
I feel like this means if the associated FILE* created by popen() was opened with type "r" in order to read the command's output, then you're not really sure the output has completed until after the call to pclose(). But after pclose(), the closed FILE* must surely be invalid, so how can you ever be certain you've read the entire output of command?
To illustrate my question by example, consider the following code:
// main.cpp
#include <iostream>
#include <cstdio>
#include <cerrno>
#include <cstring>
#include <sys/types.h>
#include <sys/wait.h>

int main( int argc, char* argv[] )
{
    FILE* fp = popen( "someExecutableThatTakesALongTime", "r" );
    if ( ! fp )
    {
        std::cout << "popen failed: " << errno << " " << strerror( errno )
                  << std::endl;
        return 1;
    }
    char buf[512] = { 0 };
    fread( buf, sizeof buf, 1, fp );
    std::cout << buf << std::endl;
    // If we're only certain the output-producing process has terminated after the
    // following pclose(), how do we know the content retrieved above with fread()
    // is complete?
    int r = pclose( fp );
    // But if we wait until after the above pclose(), fp is invalid, so
    // there's nowhere from which we could retrieve the command's output anymore,
    // right?
    std::cout << "exit status: " << WEXITSTATUS( r ) << std::endl;
    return 0;
}
My questions, as inline above: if we're only certain the output-producing child process has terminated after the pclose(), how do we know the content retrieved with the fread() is complete? But if we wait until after the pclose(), fp is invalid, so there's nowhere from which we could retrieve the command's output anymore, right?
This feels like a chicken-and-egg problem, but I've seen code similar to the above all over, so I'm probably misunderstanding something. I'm grateful for an explanation on this.
TL;DR executive summary: how do we know the content retrieved with the fread() is complete? — we've got an EOF.
You get an EOF when the child process closes its end of the pipe. This can happen when it calls close explicitly or exits. Nothing can come out of your end of the pipe after that. After getting an EOF you don't know whether the process has terminated, but you do know for sure that it will never write anything to the pipe.
By calling pclose you close your end of the pipe and wait for termination of the child. When pclose returns, you know that the child has terminated.
If you call pclose without getting an EOF, and the child tries to write stuff to its end of the pipe, it will fail (in fact it will get a SIGPIPE and probably die).
There is absolutely no room for any chicken-and-egg situation here.
Read the documentation for popen more carefully:
The pclose() function shall close a stream that was opened by popen(), wait for the command to terminate, and return the termination status of the process that was running the command language interpreter.
It blocks and waits.
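In other words, the usual pattern is: read until fread() returns 0 (EOF), then call pclose(). A minimal sketch (not the poster's code):

#include <cstdio>

// run a command, stream its entire output to stdout, return its wait status
int run_and_drain(const char * cmd)
{
    FILE * fp = popen(cmd, "r");
    if (!fp) return -1;

    char buf[512];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        fwrite(buf, 1, n, stdout);   // or accumulate into a std::string

    return pclose(fp);               // the child closed its end; reap it
}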
I learned a couple things while researching this issue further, which I think answer my question:
Essentially: yes it is safe to fread from the FILE* returned by popen prior to pclose. Assuming the buffer given to fread is large enough, you will not "miss" output generated by the command given to popen.
Going back and carefully considering what fread does: it effectively blocks until (size * nmemb) bytes have been read or end-of-file (or error) is encountered.
Thanks to C - pipe without using popen, I understand better what popen does under the hood: it does a dup2 to redirect its stdout to the write-end of the pipe it uses. Importantly: it performs some form of exec to execute the specified command in the forked process, and after this child process terminates, its open file descriptors, including 1 (stdout) are closed. I.e. termination of the specified command is the condition by which the child process' stdout is closed.
Next, I went back and thought more carefully about what EOF really is in this context. At first, I was under the loosey-goosey and mistaken impression that "fread tries to read from a FILE* as fast as it can and returns/unblocks after the last byte is read". That's not quite true: as noted above, fread will read/block until its target number of bytes is read or EOF or error is encountered. The FILE* returned by popen comes from an fdopen of the read end of the pipe used by popen, so its EOF occurs when the child process' stdout, which was dup2ed with the write end of the pipe, is closed.
So, in the end what we have is: popen creates a pipe whose write end gets the output of a child process running the specified command, and whose read end is fdopened to a FILE* passed to fread. Assuming fread's buffer is big enough, fread will block until EOF occurs, which corresponds to closure of the write end of popen's pipe resulting from termination of the executing command. I.e. because fread blocks until EOF is encountered, and EOF occurs after the command running in popen's child process terminates, it's safe to use fread (with a sufficiently large buffer) to capture the complete output of the command given to popen.
Grateful if anyone can verify my inferences and conclusions.
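To make that mechanism concrete, here is a rough, hypothetical sketch of what popen(cmd, "r") does internally, following the description above (error handling omitted; the function name is made up):

#include <cstdio>
#include <unistd.h>

FILE * popen_read_sketch(const char * cmd)
{
    int fds[2];
    pipe(fds);                        // fds[0] = read end, fds[1] = write end

    if (fork() == 0) {                // child
        dup2(fds[1], STDOUT_FILENO);  // child's stdout now feeds the pipe
        close(fds[0]);
        close(fds[1]);
        execl("/bin/sh", "sh", "-c", cmd, (char *)nullptr);
        _exit(127);                   // only reached if exec fails
    }

    close(fds[1]);                    // parent keeps only the read end
    return fdopen(fds[0], "r");       // EOF on this stream <=> the child's
                                      // stdout (write end) has been closed
}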
popen() is just a shortcut for a series of fork, dup2, execv, fdopen, etc. It gives us easy access to the child's STDOUT and STDIN via file stream operations.
After popen(), the parent and the child process execute independently.
pclose() is not a 'kill' function; it just waits for the child process to terminate. Since it closes the stream before waiting, output the child generates while pclose() executes can be lost.
To avoid this data loss, call pclose() only when we know the child process has already terminated: an fgets() call returns NULL, fread() returns from blocking with 0, or the shared stream reaches its end and feof() returns true.
Here is an example of using popen() with fread(). This function returns -1 if executing the process failed, 0 if OK. The child's output data is returned in szResult.
#include <cstdio>
#include <string>

int exec_command( const char * szCmd, std::string & szResult ){
    printf("Execute command : [%s]\n", szCmd );
    FILE * pFile = popen( szCmd, "r");
    if(!pFile){
        printf("Execute command : [%s] FAILED !\n", szCmd );
        return -1;
    }
    char buf[256];
    //check whether the output stream has ended
    while( !feof(pFile) ){
        //try to read 255 bytes from the stream; this operation is BLOCKING ...
        int nRead = fread(buf, 1, 255, pFile);
        //nRead may be 0 because the stream is closed or the program caught an error signal
        if( nRead > 0 ){
            buf[nRead] = '\0';
            szResult += buf;
        }
        else if( ferror(pFile) ){
            //read error (not EOF): bail out instead of spinning
            break;
        }
    }
    //the child process has already terminated. Clean it up or we get another zombie in the process table.
    pclose(pFile);
    printf("Exec command [%s] return : \n[%s]\n", szCmd, szResult.c_str() );
    return 0;
}
Note that all file operations on the returned stream work in BLOCKING mode (the stream is opened without the O_NONBLOCK flag). fread() can block forever when the child process hangs and never terminates, so use popen() only with trusted programs.
To take more control over the child process and avoid blocking file operations, we should call fork/vfork/execlv, etc. ourselves, set the O_NONBLOCK flag on the opened pipe descriptors, and use poll() or select() from time to time to determine whether there is data, then use the read() function to read from the pipe.
Use waitpid() with WNOHANG periodically to see if the child process has terminated.
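A sketch of that manual approach (the names are illustrative; read_fd is assumed to be the read end of a pipe we created with pipe() before forking, with the write end serving as the child's stdout):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <string>

// Collect the child's output without ever blocking forever: poll() with a
// timeout decides when read() is worth calling, and waitpid(..., WNOHANG)
// reaps the child as soon as it terminates.
bool drain_child(int read_fd, pid_t child, std::string & out)
{
    fcntl(read_fd, F_SETFL, fcntl(read_fd, F_GETFL) | O_NONBLOCK);
    pollfd pfd{read_fd, POLLIN, 0};
    int status = 0;
    bool reaped = false;

    for (;;) {
        if (poll(&pfd, 1, 100) > 0) {               // wait at most 100 ms
            char buf[256];
            ssize_t n = read(read_fd, buf, sizeof buf);
            if (n > 0)
                out.append(buf, n);
            else if (n == 0)                        // EOF: write end closed
                break;
        }
        if (!reaped && waitpid(child, &status, WNOHANG) == child)
            reaped = true;                          // terminated; keep draining
    }
    if (!reaped)
        waitpid(child, &status, 0);                 // reap to avoid a zombie
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}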

What's the best way to copy a file in a way that I can easily cancel the copy while it is in progress?

I am using ReadFileEx to read some bytes from a file and WriteFileEx to write some bytes to a device. This action repeats until all file bytes are read and written to the device.
The reason I use the Ex APIs is that requesting overlapped I/O from the OS keeps the UI thread responsive, updating a progress bar while the read/write functions do their work.
The process begins with a ReadFileEx that is passed a MY_OVERLAPPED structure and a ReadCompletionRoutine. Once the read is done, the read completion routine is called. Inside the routine, a WriteFileEx is issued and WriteCompletionRoutine is called. Inside the write completion routine, another ReadFileEx is issued after the offset of the MY_OVERLAPPED structure is advanced to the next position. That is, the two completion routines call each other whenever a read or write finishes.
Notice that the above only executes while the calling thread is in an alertable state. I use a while loop to keep the thread alertable, repeatedly checking whether a global state variable is set to TRUE. The state variable, completed, is set to TRUE inside WriteCompletionRoutine once the whole procedure is done.
FYI, MY_OVERLAPPED is a self-defined structure that inherits from the OVERLAPPED structure so that I can add two more pieces of information to it.
Now, my question: I would like to add a cancel function so that the user can cancel everything that has been started. What I do is pretty simple: I set the completed variable to TRUE when a cancel button is pressed, so the while loop breaks and the alertable state stops, and the completion routines won't be executed. But I don't know how to cancel the overlapped requests issued by Read/WriteFileEx and their completion routines along with the MY_OVERLAPPED structure (see the //******* part in the code). My code now crashes once the cancel button is pressed, and the cancel part is the one causing the crash. Please help, thank you.
//In MyClass.h========================================
struct MY_OVERLAPPED: OVERLAPPED {
    MyClass *event;
    unsigned long long count;
};

//In MyClass.cpp - main===============================
MY_OVERLAPPED overlap;
memset(&overlap, 0, sizeof(overlap));
//point to this class (MyClass), so all variables can later be accessed
overlap.event = this;
//set read position
overlap.Offset = 0;
overlap.OffsetHigh = 0;
overlap.count = 0;
//start the first read request: read 524288 bytes, which will be written in ReadCompletionRoutine
ReadFileEx(overlap.event->hSource, overlap.event->data, 524288, &overlap, ReadCompletionRoutine);
while(completed != true) {
    updateProgress(overlap.count);
    SleepEx(0, TRUE);
}
//********
CancelIo(overlap.event->hSource);
CancelIo(overlap.event->hDevice);
//********

//In MyClass.cpp - CALLBACKs===============================
void CALLBACK ReadCompletionRoutine(DWORD errorCode, DWORD bytestransfered, LPOVERLAPPED lpOverlapped)
{
    //type cast to MY_OVERLAPPED
    MY_OVERLAPPED *overlap = static_cast<MY_OVERLAPPED*>(lpOverlapped);
    //write 524288 bytes and continue reading the next 524288 bytes in WriteCompletionRoutine
    WriteFileEx(overlap->event->hDevice, overlap->event->data, 524288, overlap, WriteCompletionRoutine);
}

void CALLBACK WriteCompletionRoutine(DWORD errorCode, DWORD bytestransfered, LPOVERLAPPED lpOverlapped)
{
    MY_OVERLAPPED *overlap = static_cast<MY_OVERLAPPED*>(lpOverlapped);
    if(overlap->count < fileSize/524288) {
        //set new offset to 524288*i, i = overlap->count, for the next block read
        overlap->count = (overlap->count) + 1;
        LARGE_INTEGER location;
        location.QuadPart = 524288*(overlap->count);
        overlap->Offset = location.LowPart;
        overlap->OffsetHigh = location.HighPart;
        ReadFileEx(overlap->event->hSource, overlap->event->data, 524288, overlap, ReadCompletionRoutine);
    }
    else {
        completed = TRUE;
    }
}
Note that I prefer not to use multi-thread programming. Other than that, any better way of accomplishing the same goals is appreciated. Please and feel free to provide detail code and explanations. Thanks.
I actually would use a background thread for this, because modern C++ makes this very easy. Much easier, certainly, than what you are trying to do at the moment. So please try to shed any preconceptions you might have that this is the wrong approach for you and please try to read this post in the spirit in which it is intended. Thanks.
First up, here's some very simple proof of concept code which you can compile and run for yourself to try it out. At first sight, this might look a bit 'so what?', but bear with me, I'll explain at the end:
#define _CRT_SECURE_NO_WARNINGS

#include <iostream>
#include <cstdio>
#include <thread>
#include <chrono>
#include <memory>
#include <atomic>

int usage ()
{
    std::cout << "Usage: copy_file infile outfile\n";
    return 255;
}

void copy_file (FILE *infile, FILE *outfile, std::atomic_bool *cancel)
{
    constexpr int bufsize = 32768;
    std::unique_ptr <char []> buf (new char [bufsize]);
    std::cout << "Copying: ";
    while (1)
    {
        if (*cancel)
        {
            std::cout << "\nCopy cancelled";
            break;
        }
        size_t bytes_read = fread (buf.get (), 1, bufsize, infile);
        if (bytes_read == 0)
        {
            // Check for error here, then break out of the loop
            break;
        }
        size_t bytes_written = fwrite (buf.get (), 1, bytes_read, outfile);
        // Again, check for error etc
        std::cout << ".";
    }
    std::cout << "\nCopy complete\n";
    // Now probably something like PostMessage here to alert your main loop that the copy is complete
}

int main (int argc, char **argv)
{
    if (argc < 3) return usage ();
    FILE *infile = fopen (argv [1], "rb");
    if (infile == NULL)
    {
        std::cout << "Cannot open input file " << argv [1] << "\n";
        return 255;
    }
    FILE *outfile = fopen (argv [2], "wb");
    if (outfile == NULL)
    {
        std::cout << "Cannot open output file " << argv [2] << "\n";
        fclose (infile);
        return 255;
    }
    std::atomic_bool cancel = false;
    std::thread copy_thread = std::thread (copy_file, infile, outfile, &cancel);
    std::this_thread::sleep_for (std::chrono::milliseconds (200));
    cancel = true;
    copy_thread.join (); // waits for thread to complete
    fclose (infile);
    fclose (outfile); // + error check!
    std::cout << "Program exit\n";
}
And when I run this on my machine, I get something like:
background_copy_test bigfile outfile
Copying:
.....................................................................................
..............
Copy cancelled
Copy complete
Program exit
So, what's noteworthy about this? Well, in no particular order:
It's dead simple.
It's standard C++. There are no Windows calls in there at all (and I did that deliberately, to try to make a point).
It's foolproof.
It 'just works'.
Now of course, you're not going to put your main thread to sleep while you're copying the file in real life. No no no. Instead, you're going to just kick the copy off via std::thread and then put up your 'Copying...' dialog with a Cancel button in it (presumably this would be a modal dialog).
Then:
If that button is pressed, just set cancel to true and the magic will then happen.
Have copy_file send your 'copying' dialog a WM_APP+nnn message when it is done. It can also do that to have the dialog update its progress bar (I'm leaving all that stuff to you).
Don't omit that call to join() before you destroy copy_thread or it goes out of scope!
What else? Well, to get your head around this properly, study a bit of modern C++. cppreference is a useful site, but you should really read a good book. Then you should be able to apply the lessons learned here to your particular use-case.
Edit: It occurs to me to say that you might do better to create your thread in the WM_INITDIALOG handler for your 'Copying' dialog. Then you can pass the dialog's HWND to copy_file so that it knows where to send those messages to. Just a thought.
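For instance, a hypothetical sketch (the message IDs and helper names are invented, not part of the code above): give copy_file the dialog's HWND and have it post messages that the dialog procedure handles:

#include <windows.h>

// hypothetical message IDs: anything at or above WM_APP is application-defined
constexpr UINT WM_COPY_PROGRESS = WM_APP + 1;   // wParam = bytes copied so far
constexpr UINT WM_COPY_DONE     = WM_APP + 2;

// called from the copy thread; PostMessage is safe to call across threads
void report_progress(HWND dlg, WPARAM bytes_so_far)
{
    PostMessage(dlg, WM_COPY_PROGRESS, bytes_so_far, 0);
}

void report_done(HWND dlg)
{
    PostMessage(dlg, WM_COPY_DONE, 0, 0);
}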
And you have a fair bit of reading to do if you're going to profit from this post. But then again, you should. And this post is going to achieve precisely nothing, I fear. Shame.

Close the stdin of boost::process child

I'm trying to spawn a process and write a string to its stdin, with Boost 1.64.0.
The current code is:
bp::opstream inStream;
bp::ipstream outStream;
bp::ipstream errStream;
bp::child child(
    command, // the command line
    bp::shell,
    bp::std_out > outStream,
    bp::std_err > errStream,
    bp::std_in < inStream);
// read the outStream/errStream in threads
child.wait();
The problem is that the child executable waits for EOF on its stdin, so child.wait() hangs indefinitely…
I tried to use asio::buffer, std_in.close(), … but no luck.
The only hack I found was to delete() the inStream… and that's not really reliable.
How am I supposed to "notify" the child process and close its stdin with the new boost::process library?
Thanks!
I tried to use asio::buffer, std_in.close()
This works. Of course it only works if you pass it to a launch function (the bp::child constructor, bp::system, etc).
If you need to pass data and then close the stream, simply close the associated file descriptor. I do something like this:
boost::asio::async_write(input, bp::buffer(_stdin_data), [&input](auto ec, auto bytes_written){
    if (ec) {
        logger.log(LOG_WARNING) << "Standard input rejected: " << ec.message() << " after " << bytes_written << " bytes written";
    }
    may_fail([&] { input.close(); });
});
Where input is
bp::async_pipe input(ios);
Also, check that the process is not actually stuck sending output! If you fail to consume the output, the child blocks as soon as the pipe's buffer fills up.
Close the pipe by calling inStream.close(); when you're done writing to it. You can also close it at launch time with bp::std_in.close().
The asio solution of course also works and avoids the danger of deadlocks.
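Putting the simple approach together with the question's code, a minimal sketch (Boost 1.64 as in the question; output reading and error handling omitted, and the child command is a stand-in):

#include <boost/process.hpp>
#include <string>

namespace bp = boost::process;

int main()
{
    std::string command = "cat";       // hypothetical child that echoes stdin
    bp::opstream inStream;
    bp::ipstream outStream;

    bp::child child(command, bp::shell,
                    bp::std_out > outStream,
                    bp::std_in  < inStream);

    inStream << "Hello child!" << std::endl;  // endl flushes the stream
    inStream.close();       // the child's stdin now reaches EOF...
    child.wait();           // ...so this no longer hangs
    // (remember to drain outStream, ideally from another thread,
    // so the child cannot block on a full stdout pipe)
}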

Open an ifstream on a pipe with no data without blocking

I am trying to use an ifstream to open a named pipe that will eventually have data written to it.
std::cout << "Opening " << name << std::endl;
std::ifstream manual_shutdown_file(name.c_str());
std::cout << "Opened " << name << std::endl;
When I run the program, it blocks in the ifstream constructor. I see "Opening name" printed to the console, but the "Opened" statement never appears.
I know that I am connecting to the pipe, because if I execute
$ echo foo > name
from a shell, then the constructor returns and the "Opened" statement is printed. Is there no way to open a pipe before it has data in it, even if I do not want to read from it immediately?
Calling open on the read end of a pipe will block until the write end is opened.
You can use the O_NONBLOCK flag to open the file descriptor for the pipe, but there is no standard way to then use the fd with std::ifstream, see here.
Guessing at your requirement, I'd say a small class that opens the fd and presents a polling signal interface would suit, something like:
namespace blah
{
    class signal_t
    {
    private:
        int fd;
        // note: define sensible copy/move semantics
        signal_t(const signal_t&) = delete;
        signal_t& operator=(const signal_t&) = delete;
    public:
        signal_t(const char* named_pipe); // open fd, set O_NONBLOCK
        void notify() const; // write 1 byte to fd as signal
        bool poll() const; // attempt to read from fd, return true if signalled.
        ~signal_t(); // close fd
    };
}
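A possible implementation sketch of that interface (an assumption on my part, not from the answer; it relies on Linux behavior, where opening a FIFO with O_RDWR succeeds immediately, something POSIX leaves unspecified):

#include <fcntl.h>
#include <unistd.h>
#include <stdexcept>

namespace blah
{
    signal_t::signal_t(const char* named_pipe)
    {
        // O_RDWR: open() never waits for a peer, and notify() can't SIGPIPE
        fd = ::open(named_pipe, O_RDWR | O_NONBLOCK);
        if (fd < 0)
            throw std::runtime_error("cannot open named pipe");
    }

    void signal_t::notify() const
    {
        const char c = 1;
        ::write(fd, &c, 1);             // the single byte is the signal
    }

    bool signal_t::poll() const
    {
        char c;
        return ::read(fd, &c, 1) == 1;  // non-blocking: no byte, no signal
    }

    signal_t::~signal_t()
    {
        ::close(fd);
    }
}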
Since opening an input pipe via ifstream blocks until there is someone writing to it, you can always just let the ifstream block. Then, to unblock it from another thread, create your own ofstream on that same pipe and immediately close it. This unblocks the ifstream and marks it with eof. This is much easier and less error-prone than messing with platform-specific controls of the file handles.
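A sketch of that trick (the FIFO path is illustrative and assumed to already exist, e.g. created with mkfifo):

#include <fstream>
#include <thread>

int main()
{
    const char* name = "/tmp/some_fifo";       // assumed created with mkfifo

    // opening the write end from a second thread lets both opens rendezvous
    std::thread unblocker([name] {
        std::ofstream ofs(name);               // open the write end...
    });                                        // ...and close it: reader gets EOF

    std::ifstream manual_shutdown_file(name);  // no longer blocks forever
    unblocker.join();
}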
You actually can open a std::ifstream on a named pipe without blocking for a writer, but you must set the flags as though you were also going to write to the stream.
Try std::ifstream pipe_stream(filename, std::ifstream::in | std::ifstream::out), or stream.open(filename, std::ifstream::in | std::ifstream::out).

Using posix pipe() and dup() with C++ to redirect I/O problems

I have to modify a simple shell I wrote for a previous homework assignment to handle I/O redirection, and I'm having trouble getting the pipes to work. When I write and read directly on the duplicated file descriptors in the separate processes, the pipe works; but if I use anything like printf, fprintf, gets, fgets, etc. to check whether the output shows up in the pipe, it goes to the console instead, even though the file descriptors for stdin and stdout are clearly copies of the pipe's ends (I don't know if that's the correct way to phrase it, but the point is clear, I think).
I am 99.9% sure that I am doing everything as it should be done, at least in plain C, such as closing all the file descriptors appropriately after the dup(), and file I/O works fine, so this seems like a detail I am not aware of and cannot find any information on. I've spent most of the day trying different things, and the past few hours googling to figure out whether I could redirect cin and cout to the pipe to see if that would fix it, but it seems like more trouble than it's worth at this point.
Shouldn't this work just by redirecting stdin and stdout, since cin and cout are supposed to be synced with stdio? I thought it should, especially since the commands are probably written in C and would therefore use stdio. However, if I try a command like "cat [file1] [file2] | sort", it prints the result of cat [file1] [file2] to the command line, and sort gets no input, so it produces no output. It's also clear that cout and cin are not affected by the dup() either, so I put two and two together and came to this conclusion.
Here is a somewhat shortened version of my code, minus all the error checking and the like, which I am confident I am handling well. I can post the full code if it comes to that, but it's a lot, so I'll start with this.
I rewrote the function so that the parent forks off a child for each command and connects them with pipes as necessary, then waits for the child processes to die. Again: write and read on file descriptors 0 and 1 work (i.e. they write to and read from the pipe); stdio calls on the FILE pointers stdin and stdout do not (they do not write to the pipe).
Thanks a lot, this has been killing me...
UPDATE: I wasn't changing the string cmd for each of the different commands, so it didn't appear to work: the pipe just fed the same command, and the final output was the same... Sorry for the dumbness, but thanks, because I found the problem with strace.
int call_execv( string cmd, vector<string> &argv, int argc,
                vector<int> &redirect )
{
    int result = 0, pid, /* some other declarations */;
    bool file_in, file_out, pipe_in, pipe_out;
    queue<int*> pipes; // never has more than 2 pipes

    // parse, fork, exec, & loop if there's a pipe until no more pipes
    do
    {
        /* some declarations for variables used in parsing */
        file_in = file_out = pipe_in = pipe_out = false;

        // parse the next command and set some flags
        while( /* there's more redirection */ )
        {
            string symbol = /* next redirection symbol */
            if( symbol == ">" )
            {
                /* set flags, get filename, etc */
            }
            else if( symbol == "<" )
            {
                /* set flags, get filename, etc */
            }
            else if( pipe_out = (symbol == "|") )
            {
                /* set flags, and... */
                int tempPipes[2];
                pipes.push( pipe(tempPipes) );
                break;
            }
        }
        /* ... set some more flags ... */

        // fork child
        pid = fork();
        if( pid == 0 ) // child
        {
            /* if pipe_in and pipe_out are set, there are two pipes in the queue.
               the old pipe's read end is dup'd to stdin, the new pipe's write
               end is dup'd to stdout, and the other two FDs are closed */
            /* if only pipe_in or pipe_out is set, there is one pipe in the queue.
               the unused end is closed in whichever if statement evaluates */
            /* if neither pipe_in nor pipe_out is set, no pipe is in the queue */

            // redirect stdout
            if( pipe_out ){
                // close the newest pipe's read end
                close( pipes.back()[P_READ] );
                // dup the newest pipe's write end
                dup2( pipes.back()[P_WRITE], STDOUT_FILENO );
                // close the newest pipe's write end
                close( pipes.back()[P_WRITE] );
            }
            else if( file_out )
                freopen(outfile.c_str(), "w", stdout);

            // redirect stdin
            if( pipe_in ){
                close( pipes.front()[P_WRITE] );
                dup2( pipes.front()[P_READ], STDIN_FILENO );
                close( pipes.front()[P_READ] );
            }
            else if ( file_in )
                freopen(infile.c_str(), "r", stdin);

            // create argument list and exec
            char **arglist = make_arglist( argv, start, end );
            execv( cmd.c_str(), arglist );
            cout << "Execution failed." << endl;
            exit(-1); // this only executes if execv fails
        } // end child

        /* close the newest pipe's write end because the child is writing to it.
           the older pipe's write end is closed already */
        if( pipe_out )
            close( pipes.back()[P_WRITE] );

        // remove pipes that have been read from the front of the queue
        if( init_count > 0 )
        {
            close( pipes.front()[P_READ] ); // close FD first
            pipes.pop(); // pop from queue
        }
    } while ( pipe_out );

    // wait for each child process to die
    return result;
}
Whatever the problem is, you are not checking any return values. How do you know whether the pipe() and dup2() calls succeeded? Have you verified that stdout and stdin really point to the pipe right before execv? Does execv keep the file descriptors you give it? Not sure; here is the corresponding paragraph from the execve documentation:
By default, file descriptors remain open across an execve(). File descriptors that are marked close-on-exec are closed; see the description of FD_CLOEXEC in fcntl(2). (If a file descriptor is closed, this will cause the release of all record locks obtained on the underlying file by this process. See fcntl(2) for details.) POSIX.1-2001 says that if file descriptors 0, 1, and 2 would otherwise be closed after a successful execve(), and the process would gain privilege because the set-user-ID or set-group-ID permission bit was set on the executed file, then the system may open an unspecified file for each of these file descriptors. As a general principle, no portable program, whether privileged or not, can assume that these three file descriptors will remain closed across an execve().
You should add more debug output and see what really happens. Did you use strace -f (to follow children) on your program?
The following:
queue<int*> pipes; // never has more than 2 pipes
// ...
int tempPipes[2];
pipes.push( pipe(tempPipes) );
is not supposed to work. It should not even compile, since the result of pipe() is int, not int*. Not only that: tempPipes goes out of scope and its contents get lost.
It should be something like this:
struct PipeFds
{
    int fds[2];
};

std::queue<PipeFds> pipes;

PipeFds p;
pipe(p.fds); // check the return value
pipes.push(p);