I have a simple program (in C) that creates two child processes, waits on an inherited pipe for each, and puts the output in a file.
Everything works well, except that after some write/read cycles on the two pipes, when the child ends, the call to ReadFile blocks, waiting for data on the pipe. I use the following pattern:
...
// create pipe1
CreatePipe(&hReadDup, &hWrite, &saAttr, 0);
// make a non-inheritable duplicate of the read end for the parent
DuplicateHandle(GetCurrentProcess(), hReadDup,
                GetCurrentProcess(), &hRead,
                0, FALSE, DUPLICATE_SAME_ACCESS);
CloseHandle(hReadDup);

si.cb = sizeof(si);
si.dwFlags = STARTF_USESTDHANDLES;
si.hStdOutput = hWrite;
CreateProcess(NULL,
              const_cast<LPWSTR>(cmd2.c_str()), // the command to execute
              NULL,
              NULL,
              TRUE,                             // inherit handles
              0,
              NULL,
              NULL,
              &si,
              &pi);
...
CloseHandle(hWrite); // EDIT: this was the operation not properly done!
while (cont) {
    ...
    cont = ReadFile(hRead, buf, 50, &actual, NULL);
    ...
}
...
The last call (after the child process exits) blocks.
Any idea why (and, if not, how can I debug this)?
I found the solution myself (it was actually a coding error).
I wasn't properly closing the parent's write handle of the pipe (hWrite), so the synchronous ReadFile wasn't able to report the child process's termination back to me.
If somebody has the same problem, make sure you close the inheritable handle of the pipe before starting the I/O operations on that pipe (as MSDN notes somewhere; I can't find the exact page again).
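To make the fix concrete, here is a minimal sketch of the corrected sequence, assuming saAttr was created with bInheritHandle = TRUE. It uses SetHandleInformation() in place of the DuplicateHandle() dance; function and variable names other than the Win32 calls are illustrative.

```cpp
#include <windows.h>
#include <string>

void RunChildAndDrainPipe(std::wstring cmd)
{
    SECURITY_ATTRIBUTES saAttr{ sizeof(saAttr), NULL, TRUE };
    HANDLE hRead = NULL, hWrite = NULL;

    CreatePipe(&hRead, &hWrite, &saAttr, 0);
    // Make sure the read end is NOT inherited by the child.
    SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOW si{ sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = hWrite;
    si.hStdError  = hWrite;
    PROCESS_INFORMATION pi{};

    CreateProcessW(NULL, &cmd[0], NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);

    // The key point: close the parent's copy of the write end *before*
    // reading, or ReadFile() never sees end-of-file.
    CloseHandle(hWrite);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    char buf[512];
    DWORD actual = 0;
    // ReadFile() now fails with ERROR_BROKEN_PIPE once the child exits.
    while (ReadFile(hRead, buf, sizeof(buf), &actual, NULL) && actual > 0) {
        // ... consume buf ...
    }
    CloseHandle(hRead);
}
```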
You are calling ReadFile() in synchronous mode. As long as the pipe is open, ReadFile() will block waiting for more data. If you leave open the process and thread handles that CreateProcess() returns, you prevent the child process from fully exiting, so the pipe may never be closed on the child's end. Before entering your reading loop, close the handles that CreateProcess() returned; the pipe can then close properly when the child process fully terminates, and ReadFile() will report an error back to you when it can no longer read from the pipe. Alternatively, switch to overlapped I/O on the pipe so you can monitor the child process with WaitForSingleObject() or GetExitCodeProcess() while the loop is running, and detect when the child terminates regardless of the pipe's state.
In your case, all is well: you had access to both processes on the pipe. If you did not, or if you just wanted to interrupt the ReadFile call, then CancelSynchronousIo is your friend: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363789(v=vs.85).aspx
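As a hypothetical sketch of that approach (Vista and later): a watchdog thread can cancel a ReadFile() that is blocking on another thread. The timeout and thread-handle plumbing here are made up for illustration.

```cpp
#include <windows.h>

// param is assumed to be a handle to the thread currently blocked in ReadFile().
DWORD WINAPI Watchdog(void *param)
{
    HANDLE hReaderThread = (HANDLE)param;
    Sleep(5000);  // e.g. give the reader five seconds

    if (!CancelSynchronousIo(hReaderThread)) {
        // ERROR_NOT_FOUND means no synchronous I/O was pending at that moment.
        DWORD err = GetLastError();
        (void)err;
    }
    return 0;  // the blocked ReadFile() fails with ERROR_OPERATION_ABORTED
}
```

Note that the thread handle passed in must have the THREAD_TERMINATE access right, and the cancellation races with the read: the call is a best-effort interrupt, not a guarantee.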
Related
I have a Win32 program. This program uses the CreateProcess function to run another program as a child process. I want the parent process to close if the child process exits or crashes for any reason.
How can I do this?
You can use the WaitForSingleObject function on the created process' handle, like so:
STARTUPINFO si {sizeof(si)};
PROCESS_INFORMATION pi {};
CreateProcessW(/*your arguments here*/);
WaitForSingleObject(pi.hProcess, INFINITE);
Note that if you do use INFINITE as the wait time, the function blocks until the process terminates. If you want the parent process to be doing other things in the meantime, it's best to put that code in a separate thread.
If you want the parent process to be a complete wrapper for the created process, use GetExitCodeProcess when you're done to obtain the child process' exit code.
DWORD dwExit;
GetExitCodeProcess(pi.hProcess, &dwExit);
This code was just a simple example. All three functions I mentioned in my answer can fail, and robust code would check their return values and act accordingly in the case of failure.
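Putting the pieces together with the error checking mentioned above might look like the sketch below. The function name and the placeholder command line are illustrative; only the Win32 calls are real.

```cpp
#include <windows.h>
#include <stdio.h>

// Runs cmdline, waits for it to finish, and stores its exit code.
// Returns 0 on success, -1 on failure.
int RunAndWait(wchar_t *cmdline, DWORD *exitCode)
{
    STARTUPINFOW si{ sizeof(si) };
    PROCESS_INFORMATION pi{};

    if (!CreateProcessW(NULL, cmdline, NULL, NULL, FALSE, 0,
                        NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return -1;
    }

    // Block until the child terminates for any reason (exit or crash).
    if (WaitForSingleObject(pi.hProcess, INFINITE) != WAIT_OBJECT_0 ||
        !GetExitCodeProcess(pi.hProcess, exitCode)) {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return -1;
    }

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```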
I am trying to receive data from a child process over an anonymous pipe in Windows. I know how to do this using standard I/O streams but these are being used for other purposes. I also know how to do this in Linux or OSX using fork(), pipe() and execv().
In Windows, you can create a pipe with CreatePipe() and make one end non-inheritable with SetHandleInformation(). Then, for stdout and stderr, you can pass a STARTUPINFO with hStdOutput or hStdError set to CreateProcess() to hand the other end to the child. After the call to CreateProcess(), the parent must close its handle to the child's end of the pipe. This is all explained in detail in Creating a Child Process with Redirected Input and Output on MSDN. However, I have not found a way to pass a HANDLE to the child other than via stderr, stdout or stdin.
I've tried converting the HANDLE to a string with something like this:
std::ostringstream str;
str << "0x" << std::hex << handle;
std::string handleArg = str.str();
And then passing it as a command-line argument and converting it back to a HANDLE (which is just a void *) in the child process. Although the child process apparently inherits the pipe HANDLE, the actual value of the HANDLE must be different from the parent's, because passing it this way fails to work.
I know I can use a named pipe to do this but it seems it should be possible to do this with anonymous pipes.
So how can I pass a pipe HANDLE to a child process in Windows?
Update1: Sample code in this MSDN article seems to indicate that, at least with socket handles, you can pass them as a string to the child.
Update2: Turns out I made a mistake. See my answer below.
It turns out you can pass a HANDLE to a child process as a command-line argument by converting it to a string and then, in the child process, converting it back to a HANDLE (which is just a void *). Sample code using a HANDLE to a socket can be found here.
To make this work you need to make sure you follow Creating a Child Process with Redirected Input and Output closely. It's important that you close the child process's end of the pipe after calling CreateProcess() and get all the inheritance settings right.
Note, I tried passing the HANDLE as a string on the command line before but I was doing it wrong. My mistake was in passing the HANDLE as an int to boost::iostreams::file_descriptor() which made it treat it as a file descriptor instead of a Windows HANDLE.
Why not use the method shown here?

1. Call the GetStdHandle function to get the current standard output handle; save this handle so you can restore the original standard output handle after the child process has been created.
2. Call the SetStdHandle function to set the standard output handle to the write handle to the pipe. Now the parent process can create the child process.
3. Call the CloseHandle function to close the write handle to the pipe. After the child process inherits the write handle, the parent process no longer needs its copy.
4. Call SetStdHandle to restore the original standard output handle.
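The four steps above can be sketched as follows (error checks omitted for brevity; the wrapper function name is made up, and hPipeWrite is assumed to be an inheritable write handle from CreatePipe()):

```cpp
#include <windows.h>

void CreateChildWithRedirectedStdout(wchar_t *cmdline,
                                     HANDLE hPipeWrite /* inheritable */)
{
    // 1. Save the current standard output handle.
    HANDLE hSavedStdout = GetStdHandle(STD_OUTPUT_HANDLE);

    // 2. Point stdout at the pipe's write end, then create the child.
    SetStdHandle(STD_OUTPUT_HANDLE, hPipeWrite);

    STARTUPINFOW si{ sizeof(si) };
    PROCESS_INFORMATION pi{};
    CreateProcessW(NULL, cmdline, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);

    // 3. The child has inherited the write end; close the parent's copy.
    CloseHandle(hPipeWrite);

    // 4. Restore the original standard output handle.
    SetStdHandle(STD_OUTPUT_HANDLE, hSavedStdout);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}
```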
I have a program that launches another console program as its child process and communicates with it using anonymous pipes. I redirected both its stdin and stdout. The pseudocode is like:
// Create pipes for stdin and stdout
CreatePipe(&std_in_rd, &std_in_wr, NULL, 0);
CreatePipe(&std_out_rd, &std_out_wr, NULL, 0);
// redirection
startup_info.hStdOutput = std_out_wr;
startup_info.hStdInput = std_in_rd;
// create the process
CreateProcess(...);
// send command to the child process.
WriteFile(hWriteStdin, ...);
// receive feedback from the child process.
ReadFile(hReadStdout, ...);
But the child process needs time to process the commands, and I don't know how long I should wait before reading its output.
I used a loop that calls PeekNamedPipe to check whether the pipe can be read, but this method is not good because it consumes a lot of CPU.
The child process was not written by me and I can't modify its code.
How can I get informed when the child process has finished writing, like a hook?
Thanks.
You should try using ReadFileEx() to read from the pipe asynchronously.
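One caveat with that suggestion: ReadFileEx() needs a handle opened for overlapped I/O, and handles returned by CreatePipe() do not support that, so in practice this usually means switching to a uniquely named pipe created with FILE_FLAG_OVERLAPPED. A sketch of the completion-routine pattern, with illustrative names and a global buffer for brevity:

```cpp
#include <windows.h>
#include <stdio.h>

static char g_buf[1024];

// Called by the system when the read completes, while the issuing thread
// is in an alertable wait.
static void CALLBACK OnReadDone(DWORD err, DWORD nread, LPOVERLAPPED ov)
{
    if (err == 0 && nread > 0) {
        fwrite(g_buf, nread, 1, stdout);  // data arrived: consume it
    }
    // ... issue the next ReadFileEx() from here to keep reading ...
}

void StartAsyncRead(HANDLE hPipe /* read end, FILE_FLAG_OVERLAPPED */)
{
    static OVERLAPPED ov{};
    ReadFileEx(hPipe, g_buf, sizeof(g_buf), &ov, OnReadDone);
    // The callback only runs when this thread enters an alertable wait,
    // e.g. SleepEx(INFINITE, TRUE) or WaitForSingleObjectEx(..., TRUE).
}
```

This way the parent sleeps until data is actually available, instead of spinning on PeekNamedPipe.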
I've written two short programs that use anonymous pipes to communicate. The parent process shares the pipe handles by setting the standard IO handles for the child:
// -- Set STARTUPINFO for the spawned process -------------------------
ZeroMemory(&m_ChildSI, sizeof(STARTUPINFO));
GetStartupInfo(&m_ChildSI);
m_ChildSI.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
m_ChildSI.wShowWindow = SW_HIDE;
m_ChildSI.hStdError = m_pipeChild.WritePipeHandle();
m_ChildSI.hStdOutput = m_pipeChild.WritePipeHandle();
m_ChildSI.hStdInput = m_pipeParent.ReadPipeHandle();
The child acquires a read pipe handle with a call to GetStdHandle:
hReadPipe = GetStdHandle(STD_INPUT_HANDLE)
My question is:
The pipe handles are created by the parent process, which calls CloseHandle() on them once the parent and child have finished communicating.
Does the child also have to call CloseHandle() on its handles? I was thinking that because these are the standard IO handles, they'd be deallocated automatically when the process exits.
thanks!
On Win32, kernel objects such as pipes are referenced by one or more user-mode handles. When all handles are closed, the underlying object can be destroyed.
The handles in each process, while they might have the same value and might refer to the same object, are distinct handles and should be closed separately.
I just read in the document Pipe Handle Inheritance on MSDN that:
"When the child has finished with the pipe, it should close the pipe handle by calling CloseHandle or by terminating, which automatically closes the handle."
Any handle can be left open when the application terminates; Windows will free the resources automatically. But it is better practice to close them manually so everything stays logical and coherent. Leaving handles open can lead to bugs and leaks when the code is reused or modified.
I have written a program, a.exe, which launches another program I wrote, b.exe, using the CreateProcess function. The caller creates two pipes and passes the write ends of both pipes to CreateProcess as the stdout/stderr handles for the child process. This is virtually the same as the Creating a Child Process with Redirected Input and Output sample on MSDN.
Since it doesn't seem to be possible to use a single synchronization call that waits for the process to exit or for data to become available on either stdout or stderr (the WaitForMultipleObjects function doesn't work on pipes), the caller runs two threads which both perform (blocking) ReadFile calls on the read ends of the stdout/stderr pipes. Here's the exact code of the 'read thread procedure' used for stdout/stderr (I didn't write this code myself; I assume a colleague did):
DWORD __stdcall ReadDataProc( void *handle )
{
    char buf[ 1024 ];
    DWORD nread;

    while ( ReadFile( (HANDLE)handle, buf, sizeof( buf ), &nread, NULL ) &&
            GetLastError() != ERROR_BROKEN_PIPE ) {
        if ( nread > 0 ) {
            fwrite( buf, nread, 1, stdout );
        }
    }
    fflush( stdout );
    return 0;
}
a.exe then uses a simple WaitForSingleObject call to wait until b.exe terminates. Once that call returns, the two reading threads terminate (because the pipes are broken) and the reading ends of both pipes are closed using CloseHandle.
Now, the problem I hit is this: b.exe might (depending on user input) launch external processes which live longer than b.exe itself; daemon processes, basically. What happens in that case is that the write ends of the stdout/stderr pipes are inherited by the daemon process, so the pipe is never broken. This means that the WaitForSingleObject call in a.exe returns (because b.exe finished), but the CloseHandle call on either of the pipes blocks, because both reading threads are still sitting in their (blocking!) ReadFile calls.
How can I solve this without terminating both reading threads with brute force (TerminateThread) after b.exe returned? If possible, I'd like to avoid any solutions which involve polling of the pipes and/or the process, too.
UPDATE: Here's what I tried so far:
Not having b.exe inherit the handles from a.exe: this doesn't work. MSDN specifically says that the handles passed to CreateProcess must be inheritable.
Clearing the inheritable flag on stdout/stderr inside b.exe: doesn't seem to have any effect (it would have surprised me if it did).
Having the ReadDataProc procedure (which reads from both pipes) check whether b.exe is actually still running, in addition to checking for ERROR_BROKEN_PIPE. This didn't work, of course (but I only realized that afterwards), because the thread is blocked in the ReadFile call.
Either use a named pipe and asynchronous ReadFile, or parse the output read from the pipe looking for the end (which may be too complicated in your case).
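The first option might look like the sketch below: an overlapped ReadFile() on a pipe opened with FILE_FLAG_OVERLAPPED, waiting on both the read event and the child's process handle, so the loop wakes up when b.exe exits even if a daemon keeps the write end open. Names other than the Win32 calls are illustrative.

```cpp
#include <windows.h>
#include <stdio.h>

void DrainUntilChildExits(HANDLE hPipe /* FILE_FLAG_OVERLAPPED */,
                          HANDLE hProcess)
{
    char buf[1024];
    OVERLAPPED ov{};
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);

    for (;;) {
        DWORD nread = 0;
        if (!ReadFile(hPipe, buf, sizeof(buf), &nread, &ov) &&
            GetLastError() != ERROR_IO_PENDING) {
            break;  // pipe broken or hard error
        }

        // Wake up on either read completion or child termination.
        HANDLE waits[2] = { ov.hEvent, hProcess };
        DWORD which = WaitForMultipleObjects(2, waits, FALSE, INFINITE);

        if (which == WAIT_OBJECT_0) {           // the read completed
            if (GetOverlappedResult(hPipe, &ov, &nread, FALSE) && nread > 0) {
                fwrite(buf, nread, 1, stdout);
            }
        } else {                                // the child exited
            CancelIo(hPipe);                    // abandon the pending read
            break;
        }
    }
    CloseHandle(ov.hEvent);
}
```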
"What happens in that case is that the writing ends of the stdout/stderr pipes are inherited to that daemon process, so the pipe is never broken."
Daemons should close their inherited file descriptors.
It seems that on Windows versions prior to Windows Vista (which introduced the CancelSynchronousIo function), there is no way around terminating the reading threads using TerminateThread.
A suitable alternative (suggested by adf88) might be to use asynchronous ReadFile calls, but that's not possible in my case (too many changes to the existing code would be required).
Set a global flag (bool exit_flag) and then write something to the pipe from a.exe: the blocked ReadFile returns, the reading thread sees the flag, and exits cleanly.