Anonymous Pipes - C++

I've written two short programs that use anonymous pipes to communicate. The parent process shares the pipe handles by setting the standard IO handles for the child:
// -- Set STARTUPINFO for the spawned process -------------------------
ZeroMemory(&m_ChildSI, sizeof(STARTUPINFO));
GetStartupInfo(&m_ChildSI);
m_ChildSI.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
m_ChildSI.wShowWindow = SW_HIDE;
m_ChildSI.hStdError = m_pipeChild.WritePipeHandle();
m_ChildSI.hStdOutput = m_pipeChild.WritePipeHandle();
m_ChildSI.hStdInput = m_pipeParent.ReadPipeHandle();
The child acquires a read pipe handle with a call to GetStdHandle:
hReadPipe = GetStdHandle(STD_INPUT_HANDLE);
My question is:
The pipe handles are created by the parent process, which calls CloseHandle() on them once parent and child have finished communicating.
Does the child have to call CloseHandle() as well? I was thinking that because these are the standard IO handles, they'd be deallocated automatically when the process exits.
thanks!

On Win32, kernel objects such as pipes are referenced by one or more user-mode handles. When all handles are closed, the underlying object can be destroyed.
The handles in each process, while they might have the same value and might refer to the same object, are distinct handles and should be closed separately.

I just read in the document Pipe Handle Inheritance on MSDN that:
"When the child has finished with the pipe, it should close the pipe handle by calling CloseHandle or by terminating, which automatically closes the handle."

Any handle can be left unclosed when the application terminates; Windows will free the resources automatically. But it is better practice to close them manually so everything stays logical and coherent. Leaving handles open can lead to bugs and leaks when the code is reused or modernized.

Related

Keeping track of background processes internally in cpp shell

I'm writing a shell in cpp and I was hoping to get some advice. I have a command that will do an exec in the background, and I'm trying to keep track of which background processes are still running. I thought maybe I could keep track of the PID and do a string find on /proc/, but it seems to stay longer than it should. I'm testing it by using the sleep command, but it seems to always linger around wherever I look long after it should've finished. I'm probably just not doing the right thing to see if it is still running though.
Thanks in advance for any help.
Assuming you are spawning off the child process via fork() or forkpty(), one reasonably good way to track the child process's condition is to have the parent process create a connected-socket-pair (e.g. via socketpair()) before forking, and have the child process call dup2() to make one end of that socket-pair its stdin/stdout/stderr file descriptor, e.g.:
// Note: error-checking has been removed for clarity
int temp[2];
(void) socketpair(AF_UNIX, SOCK_STREAM, 0, temp);
pid_t pid = fork();
if (pid == 0)
{
// We are the child process!
(void) close(temp[0]);  // close the parent's end, or the parent will never see EOF
(void) dup2(temp[1], STDIN_FILENO);
(void) dup2(temp[1], STDOUT_FILENO);
(void) dup2(temp[1], STDERR_FILENO);
(void) close(temp[1]);  // the duplicated descriptors keep the socket open
// call exec() here...
}
else (void) close(temp[1]);  // parent keeps only temp[0]
The benefit of this is that the parent process now has a file descriptor (temp[0]) connected to the stdin, stdout, and stderr of the child process. The parent can select() on that descriptor to find out whenever the child has written text to its stdout or stderr streams, and can then read() the descriptor to retrieve what the child wrote (useful if you want to display that text to the user; otherwise you can just throw it away). Most importantly, the parent will know when the child process has closed its stdout and stderr streams, because then the parent's next read() on that file descriptor will return 0, i.e. EOF.
Since the OS will automatically close the child process's streams whenever it exits for any reason (including crashing), this is a pretty reliable way to get notified that the child process has gone away.
The only potential gotcha is that the child process could (for whatever reason) manually call close(STDOUT_FILENO) and close(STDERR_FILENO), and yet still remain running; in that case the parent process would see the socket-pair connection closing as usual, and wrongly think the child process had gone away when in fact it hadn't. Fortunately it's pretty rare for a program to do that, so unless you need to be super-robust you can probably ignore that corner case.
On a POSIX-like system, after you create any child processes using fork, you should clean up those child processes by calling wait or waitpid from the parent process. The name "wait" is used because the functions are most commonly used when the parent has nothing to do until a child exits or is killed, but waitpid can also be used (by passing WNOHANG) to check on whether a child process is finished without making the parent process wait.
Note that at least on Linux, when a child process has exited or been killed but the parent process has not "waited" for the child, the kernel keeps some information about the child process in memory, as a "zombie process". This is done so that a later "wait" can correctly fetch the information about the child's exit code or fatal signal. These zombie processes do have entries in /proc, which may be why you see a child "stay longer than it should", if that's how you were checking.

Close parent process if child closed or crashed

I have a Win32 program. This program uses the CreateProcess function to run another program. I want the parent process to close if the child process exits or crashes for any reason.
How can I do it?
You can use the WaitForSingleObject function on the created process' handle, like so:
STARTUPINFO si {sizeof(si)};
PROCESS_INFORMATION pi {};
CreateProcessW(/* ...your other arguments here..., */ &si, &pi);
WaitForSingleObject(pi.hProcess, INFINITE);
Note that if you use INFINITE as the wait time, the function blocks until the process terminates. If you want the parent process to be doing other things in the meantime, it's best to have this code in a separate thread.
If you want the parent process to be a complete wrapper for the created process, use GetExitCodeProcess when you're done to obtain the child process's exit code.
DWORD dwExit;
GetExitCodeProcess(pi.hProcess, &dwExit);
This code was just a simple example. All three functions I mentioned in my answer can fail, and robust code would check their return values and act accordingly in the case of failure.

How to Prevent stdout and stdin Deadlock after CreateProcess [duplicate]

I have a simple program (in C) that creates two child processes, each waiting on an inherited pipe, and puts the output in a file.
Everything works well, except that after some write/read cycles on the two pipes, when the child ends, the call to ReadFile blocks, waiting for data on the pipe. I use the following pattern:
...
//create pipe1
CreatePipe(&hReadDup,&hWrite,&saAttr,0);
DuplicateHandle(GetCurrentProcess(),hReadDup,GetCurrentProcess(),&hRead,0,FALSE,DUPLICATE_SAME_ACCESS);
CloseHandle(hReadDup);
si.cb = sizeof(si);
si.dwFlags = STARTF_USESTDHANDLES;
si.hStdOutput = hWrite;
CreateProcess( NULL,
const_cast<LPWSTR>(cmd2.c_str()), //the command to execute
NULL,
NULL,
TRUE,
0,
NULL,
NULL,
&si, //si.
&pi
);
...
CloseHandle(hWrite); // EDIT: this was the operation not properly done!
while(cont){
...
cont = ReadFile(hRead,buf,50, &actual,NULL);
...
}
...
The last call (after the child process exits) blocks.
Any idea why (and, if not, how to debug this)?
I found out the solution myself (which was actually a coding error).
I wasn't closing the parent's write handle of the pipe properly (hWrite), so the synchronous ReadFile wasn't able to report the child process's termination back to me.
If somebody has the same problem, make sure you close the inheritable handle of the pipe before starting the I/O operation on that pipe (as MSDN reports; I cannot find the page again).
You are calling ReadFile() in synchronous mode. As long as the pipe is open, ReadFile() will block waiting for more data. If you leave open the process and thread handles that CreateProcess() returns to you, that will prevent the child process from fully exiting, so the pipe may not get closed on the child's end. Before entering your reading loop, close the handles that CreateProcess() returned, allowing the pipe to close properly when the child process fully terminates; ReadFile() can then report an error back to you once it can no longer read from the pipe. Alternatively, switch to overlapped I/O on the pipe and monitor the child process with WaitForSingleObject() or GetExitCodeProcess() while the loop is running, so you can detect when the child process terminates regardless of the pipe's state.
In your case everything worked out, since you had access to both processes on the pipe. If, however, you did not, or just wanted to interrupt the ReadFile call, then CancelSynchronousIo is your friend: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363789(v=vs.85).aspx

How does shell pipe the child process? [duplicate]

This question already has an answer here:
Breaking down shell scripts; What happens under the hood?
(1 answer)
Closed 9 years ago.
Recently I've been studying Linux inter-process communication, but I have some problems understanding the pipe mechanism.
I know that a pipe is a pair of file descriptors created by the parent process; the parent then passes the file descriptors to its child process, and the child can operate on them.
But since the child process gets a completely new virtual memory image when exec() is called after fork(), how can the parent process pass this information to the child? Is there anything that I have missed?
A file descriptor is a handle to a resource managed by the operating system (kernel). When you create a pipe, the kernel creates facilities so data can be sent from one end of the pipe to the other.
This data is sent via the kernel.
When you fork(), the child inherits all file descriptors, which means they inherit the data structure that is managed by the kernel that the file descriptors refer to.
So now the file descriptor refers to the very same kernel resource in the child and the parent. Since the kernel resource lives in the kernel, that part is shared between the 2 processes, it is not duplicated like the user space memory.
Basically, you write() data to one end of the pipe, that data is copied into a buffer in the kernel. You can then read() that data, and it gets copied from the kernel buffer into memory space of the reading process. After a fork(), both child and parent refer to that same buffer in the kernel which was created with pipe().
When a process exec()s another program, the new program generally inherits the parent's standard file descriptors: stdin (0), stdout (1), stderr (2). When a shell creates a pipeline, it uses the dup2() call to duplicate one end of the pipe onto the desired descriptor number, forcing the right descriptors into the child's standard slots.
// pseudo-code:
// create the pipe
int pipe_end[2];
pipe(pipe_end);
// "back up" stdin
int save_in = dup(0);
// position the pipe to stdin for the benefit of the child
dup2(pipe_end[0], 0);
// start the child
fork() && exec();
// restore stdin
close(0);
dup2(save_in, 0);
// write to the child
write(pipe_end[1], ...);
The information isn't passed to the child process - it's done by an implicit convention. The parent knows it should dup2 the fds into slots 0,1,2, and the child knows to read/write from those descriptors. You're right that there's no magic involved across the exec, the child really does get zero information from its parent, aside from the argument and environment vectors. It's just that the unix platform has these conventions, so the child knows the relevant fds it's looking to use, and the parent knows which numbers to pick for the fds.
For processes where you need to pass more than two or three fds, the parent does indeed have to explicitly pass the number. Here are some processes on my machine where this is clearly happening (it's probably stuffed in an environment variable in other places):
klauncher --fd=8
/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session

Kill child process with cleanup

Is there a way to kill my app's child process and perform its cleanup (calling destructors and atexit functions), similar to exit(exit_code), but in another process?
If you are on Windows, you probably start your child processes with CreateProcess, which takes a PROCESS_INFORMATION as its last parameter.
CreateProcess on MSDN
Process Information on MSDN
Option 1:
This process information contains a handle to the process started in the hProcess member.
You can store this handle and use it to kill your child processes.
You probably want to send WM_CLOSE and/or WM_QUIT to "cleanly" end the process. Here is a KB article on what to do: KB how to cleanly kill win32 processes
Option 2:
Here is a discussion on how to properly kill a process tree: Terminate a process tree on windows
There's no simple Win32 API for that kind of thing. The OS doesn't care what language your program's source code was written in, the compiled program appears to it as just a sequence of CPU instructions plus data.
The cleanest way would be to establish some kind of a communication channel between the processes (e.g. via shared memory) and simply request process termination.
You can achieve the same by starting the child process as a debugged process and then using debug APIs to alter the child's behavior, but that's too intrusive and not very straightforward to implement.