Child process is able to change parent epoll state - c++

I am trying to figure out why a child process is able to change its parent's epoll state.
I have a program that declares a static pointer to an epoll wrapper object:
static EventManager* evMgrPtr = NULL;
The parent process initializes it and uses it to watch a listening socket (the parent is basically a daemon that occasionally needs to respond to health-check requests by accepting them through the listening socket).
The children do something totally different; however, the program DOES NOT do a fork/exec. Rather, the children carry on and run a piece of code in the same translation unit.
pid_t pid = fork();
switch (pid) {
case -1:
    YREPL_LOG_FATAL("Couldn't start server process ");
    exit(EXIT_OK);
case 0:
#ifndef __FreeBSD__
    assert( closeThisFd != -1 );
    evMgr.unregisterSocketEvent( closeThisFd );
    close( closeThisFd );
#endif
    close(outpipe[0]);
    close(errpipe[0]);
    dup2(outpipe[1], 1);
    dup2(errpipe[1], 2);
    close(outpipe[1]);
    close(errpipe[1]);
The problem is that after I do evMgrPtr->unregisterSocketEvent( closeThisFd ) in the child process, I find that the parent stops watching the listening socket as well!!!
Can anyone shed some light on why this is happening? I thought that once a fork is executed the parent and child use copy-on-write, so whatever the child does to its copy of the epoll object should not be reflected in the parent, right?

It seems that you use an epoll-based event loop. Since the file descriptor for the epoll object itself is shared between child and parent, removing a file descriptor from the epoll set in the child also affects the parent process :). Please read man epoll and man epoll_create.
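For illustration, here is a minimal sketch, with placeholder names (parent_epfd, listen_fd) rather than the question's own, of what the child can do instead of calling unregisterSocketEvent: drop its copies of the descriptors and, if it needs an event loop of its own, create a fresh epoll instance. Closing the child's duplicated fds does not disturb the parent's registrations, whereas EPOLL_CTL_DEL edits the single interest list both processes share.

#include <sys/epoll.h>
#include <unistd.h>

/* Run in the child right after fork(). parent_epfd is the inherited epoll
 * descriptor, listen_fd the listening socket the child must not keep. */
static int child_detach_from_parent_loop(int parent_epfd, int listen_fd)
{
    /* Do NOT call epoll_ctl(parent_epfd, EPOLL_CTL_DEL, listen_fd, NULL):
     * that removes listen_fd from the one kernel epoll instance the parent
     * is still using.  Just drop the child's references instead. */
    close(listen_fd);
    close(parent_epfd);

    /* If the child needs its own event loop, give it a private instance. */
    return epoll_create1(0);
}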


What does the fork_rv return?

I know that fork() creates a duplicate process (a clone), meaning two identical copies of the address space are created - one for the parent and one for the child. The new process becomes a child process of the caller. However, I am confused as to what is inside fork_rv (see the comment in the code below).
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fork_rv;

    printf("Before: my pid is %d\n", getpid());
    fork_rv = fork();

    if (fork_rv == -1)
        perror("fork");
    else if (fork_rv == 0)
        printf("I am the child. my pid=%d\n", getpid());
    else
        printf("I am the parent. my child is %d\n", fork_rv); /* What is inside fork_rv? What gets printed exactly? The address of the child? */

    return 0;
}
Quoting from the Linux manual page for fork:
On success, the PID of the child process is returned in the parent, and
0 is returned in the child. On failure, -1 is returned in the parent, no child process is created, and errno is set appropriately.
While @Brian's answer is already correct, maybe explaining the logic behind these return values makes it easier to understand:
-1 for error should be clear.
0 as value for the child process makes sense because the child process can always get its own pid (getpid()) as well as the pid of its parent process (getppid()).
All values > 0 are the pid of the new child process that was created, returned to the parent process. Since a process can have multiple child processes, a 'get child pid' function is not possible: the pid of which child should it return? And with a 'get children pids' function that returned a list of all child processes' pids, it would be cumbersome to find the pid of the latest, new child process.
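To make the return values concrete, here is a hedged variant of the snippet above showing what the parent can do with the returned pid (wait for that specific child with waitpid()) and how the child can recover both pids itself:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t fork_rv = fork();

    if (fork_rv == -1) {
        perror("fork");
    } else if (fork_rv == 0) {
        /* child: fork() returned 0, but the pids are still available */
        printf("child: my pid=%d, my parent=%d\n", (int)getpid(), (int)getppid());
    } else {
        /* parent: fork_rv holds the child's pid, usable with waitpid() */
        int status;
        waitpid(fork_rv, &status, 0);
        printf("parent: child %d has exited\n", (int)fork_rv);
    }
    return 0;
}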

Daemon child can't execute library

I am writing a Linux daemon to execute my code. My code makes a call to a third party library. If I execute my code from the parent then everything runs fine, but if I execute my code directly from a child the call to the third party library never returns. And if I create a second executable that executes my code and I have the daemon run the executable then everything runs fine.
Why can't I call my code from the child process?
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    // Our process ID and Session ID
    pid_t pid, sid;

    fflush(stdout);

    // Fork off the parent process
    pid = fork();
    if (pid < 0)
        exit(EXIT_FAILURE);

    // If we got a good PID, then we can exit the parent process.
    if (pid > 0)
        exit(EXIT_SUCCESS);

    // Change the file mode mask
    umask(0);

    // Open any logs here

    // Redirect the standard descriptors to /dev/null
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    close(STDERR_FILENO);
    if (open("/dev/null", O_RDONLY) == -1)
        exit(EXIT_FAILURE);
    if (open("/dev/null", O_WRONLY) == -1)
        exit(EXIT_FAILURE);
    if (open("/dev/null", O_WRONLY) == -1)
        exit(EXIT_FAILURE);

    // Create a new SID for the child process
    sid = setsid();
    if (sid < 0)
        exit(EXIT_FAILURE);

    // Change the current working directory
    if ((chdir("/")) < 0)
        exit(EXIT_FAILURE);

    // doesn't work
    MyObject ob;
    ob.start();

    // works
    //execlp("/home/root/NextGenAutoGuidance", "NextGenAutoGuidance", (char*)NULL);

    while (1)
    {
        sleep(60);
    }

    exit(EXIT_SUCCESS);
}
I have tried declaring my object as a global and as a static global, and I have also tried doing a new/delete of my object.
The only way the call to the third party library will return is if my object is started from the parent process.
How can I create the daemon so that I don't have to call an external binary to run correctly?
Edit:
I should add that I have also tried not killing the parent, and I have the same problem.
After many hours of digging I found the cause and solution of my problem.
Cause:
I had a private, global static class object in the MyObject class that started a thread which called the third-party library.
Because the class object was global, it was created before the fork, even though I declared MyObject after the fork. As soon as the static object was created it started a thread that called the third-party library, and inside the library function it hit a mutex. Threads are not copied when you fork, so after the fork the parent process was killed, and the child process created a new static class object, which started a new thread, which called the library function and came to the same mutex. Because the parent was killed before leaving the library function, it never released that mutex, so the child process was stuck waiting for the mutex to be released.
Solution:
Don't create threads on object creation; wait until after the fork and let the child create its own threads.
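As a rough C illustration of that rule (the function names here are invented for the example, not taken from the question), the worker thread is started by an explicit call made after fork() rather than by a static object's constructor; build with -lpthread:

#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

static pthread_t worker;

static void *worker_main(void *arg)
{
    /* this is where the third-party library would be called */
    (void)arg;
    return NULL;
}

/* Call this only in the process that will actually use the library,
 * i.e. in the child, after fork() has returned. */
static int start_worker(void)
{
    return pthread_create(&worker, NULL, worker_main, NULL);
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        start_worker();               /* the thread, and any mutexes the   */
        /* ... child work ... */      /* library takes, now live entirely  */
        pthread_join(worker, NULL);   /* inside the child process          */
    }
    return 0;
}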

fork and exec many different processes, and obtain results from each one

I have managed to fork and exec a different program from within my app. I'm currently working out how to wait until the process called from exec returns a result through a pipe or stdout. However, can I have a group of processes using a single fork, or do I have to fork many times and call the same program again? Can I get a PID for each different process? I want my app to call the same program I'm currently calling many times but with different parameters: I want a group of 8 processes of the same program running and returning results via pipes. Can someone please point me in the right direction? I've gone through the linux.die man pages, but they are quite spartan and cryptic in their descriptions. Is there an ebook or PDF with more detailed information? Thank you!
pid_t pID = fork();
if (pID == 0) {
    int proc = execl(BOLDAGENT, BOLDAGENT, "-u", "2", "-c", "walkevo.xml", NULL);
    std::cout << strerror(errno) << std::endl;   // reached only if execl failed
}
For example, how can I tell by PID which child (according to its parameter XML file) produced which result (via pipe or stdout), so that I can act accordingly? Do I have to encapsulate the child processes in an object and work from there, or can I group them all together?
One fork syscall makes only one new process (one PID). You should organize some data structures (e.g. an array of pids, an array of the parent's ends of the pipes, etc.), do 8 forks from the main program (every child will do an exec), and then wait for the children.
Each fork() returns the PID of the new child to the parent. You can store this pid and the associated information like this:
#define MAX_CHILD 8

pid_t pids[MAX_CHILD];
int pipe_fd[MAX_CHILD];

for (int child = 0; child < MAX_CHILD; child++) {
    int fds[2];
    int ret;

    /* create a pipe; the parent keeps the read end in pipe_fd[child] */
    if (pipe(fds) == -1) {
        /* handle the error */
    }

    ret = fork();
    if (ret) {              /* parent */
        close(fds[1]);      /* close the alien (write) half of the pipe */
        pipe_fd[child] = fds[0];
        pids[child] = ret;  /* save the pid */
    } else {                /* child */
        close(fds[0]);      /* close the alien (read) half of the pipe */
        /* write results to fds[1], or dup2(fds[1], 1) to send stdout through the pipe */
        /* We are child #child, exec the needed program */
        execl(BOLDAGENT, BOLDAGENT, "-u", "2", "-c", "walkevo.xml", NULL);
        /* no more code can run here: exec does not return unless there is an error! */
    }
}
/* here you can do a `select` to wait for data from several pipes; select tells you which fds have data waiting, and you can map an fd back to a pid via the two arrays */
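Expanding on the select comment above, here is a sketch of how the parent could map a ready pipe back to the child's pid; it assumes the pids[] and pipe_fd[] arrays filled in by the loop above, with pipe_fd[] holding the parent's read ends:

#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

static void collect_one_result(pid_t pids[], int pipe_fd[], int nchildren)
{
    fd_set rfds;
    int maxfd = -1;

    FD_ZERO(&rfds);
    for (int i = 0; i < nchildren; i++) {
        FD_SET(pipe_fd[i], &rfds);
        if (pipe_fd[i] > maxfd)
            maxfd = pipe_fd[i];
    }

    /* block until at least one child has written something */
    if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
        return;

    for (int i = 0; i < nchildren; i++) {
        if (FD_ISSET(pipe_fd[i], &rfds)) {
            char buf[256];
            ssize_t n = read(pipe_fd[i], buf, sizeof buf);
            if (n > 0) {
                /* this result came from the child whose pid is pids[i] */
            }
        }
    }
}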
It's mind-bending at first, but you seem to grasp that, when you call fork( ):
the calling process (the "parent") is essentially duplicated by the operating system, and the duplicate process becomes the "child", with a unique PID all its own;
the returned value from the fork( ) call is either the integer 0¹, meaning that the program receiving the 0 return is the "child", or it is the non-zero integer PID of that forked child; and
the new child process is entered into the scheduling queue for execution; the parent remains in the scheduling queue and continues to execute as before.
It is this ( 0 .xor. non-0 ) return from fork( ) that tells the program which role it's playing at this instant: if 0 is returned, the program is the child process; if anything else is returned, the program is the parent process.
If the program playing the parent role wants many children, he has to fork( ) each one separately; there's no such thing as multiple children sharing a fork( ).
Intermediate results certainly can be sent via a pipe.
As for calling each child with different parameters, there's really nothing special to do: you can be sure that, when the child gets control, he will have (copies of) exactly the same variables as does the parent. So communicating parameters to the child is a matter of the parent's setting up variable values he wants the child to operate on; and then calling fork( ).
¹ More accurately: fork( ) returns a value of type pid_t, which these days is identical to an integer on quite a few systems.
It's been a while since I've worked in C/C++, but a few points:
The Wikipedia fork-exec page provides a starting point to learn about forking and execing. Google is your friend here too.
As osgx's answer says, fork() can only give you one subprocess, so you'll have to call it 8 times to get 8 processes and then each one will have to exec the other program.
fork() returns the PID of the child process to the main process and 0 to the subprocess, so you should be able to do something like:
int pid = fork();
if (pid == 0) {
/* exec new program here */
} else {
/* continue with parent process stuff */
}
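Putting the pieces together, a hedged sketch of giving each child its own parameters before its exec and then collecting every child by the pid that fork() returned. The BOLDAGENT path and the per-child walkevoN.xml file names below are made up for the example:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BOLDAGENT "./boldagent"   /* placeholder path to the program */

int main(void)
{
    pid_t pids[8];

    for (int i = 0; i < 8; i++) {
        char cfg[64];
        snprintf(cfg, sizeof cfg, "walkevo%d.xml", i);   /* hypothetical per-child file */

        pids[i] = fork();
        if (pids[i] == 0) {
            execl(BOLDAGENT, BOLDAGENT, "-u", "2", "-c", cfg, (char *)NULL);
            _exit(127);                    /* only reached if execl failed */
        }
    }

    for (int i = 0; i < 8; i++) {          /* collect each child by its pid */
        int status;
        waitpid(pids[i], &status, 0);
    }
    return 0;
}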

child waiting for another child

Is there a way for a forked child to examine another forked child, so that if the other forked child takes more time than usual to perform its chores, the first child may perform predefined steps?
If so, sample code will be greatly appreciated.
Yes. Simply fork the process to be watched, from the process to watch it.
if (fork() == 0) {
// we are the watcher
pid_t watchee_pid = fork();
if (watchee_pid != 0) {
// wait and/or handle timeout
int status;
waitpid(watchee_pid, &status, WNOHANG);
} else {
// we're being watched. do stuff
}
} else {
// original process
}
To emphasise: There are 3 processes. The original, the watcher process (that handles timeout etc.) and the actual watched process.
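Note that the waitpid(..., WNOHANG) call above returns immediately whether or not the watched process has finished, so in practice the watcher has to poll. A small sketch; the one-second polling interval and the SIGTERM at the end are arbitrary choices for illustration:

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* watcher: give the watched child up to `limit` seconds to finish */
static void watch(pid_t watchee_pid, int limit)
{
    int status;

    for (int elapsed = 0; elapsed < limit; elapsed++) {
        if (waitpid(watchee_pid, &status, WNOHANG) == watchee_pid)
            return;                        /* it finished in time */
        sleep(1);
    }

    /* took too long: perform the predefined steps, e.g. terminate it */
    kill(watchee_pid, SIGTERM);
    waitpid(watchee_pid, &status, 0);
}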
To do this, you'll need to use some form of IPC, and a named shared memory segment makes perfect sense here. Your first child could read a value in a named segment which the other child sets once it has completed its work. Your first child could set a timeout and, once that timeout expires, check the value; if the value is not set, then do what you need to do.
The code can vary greatly depending on whether you use C or C++; you need to decide which. If C++, you can use boost::interprocess for this, which has lots of examples of shared memory usage. If C, then you'll have to put this together using native calls for your OS; again, this should be fairly straightforward: start at shmget().
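In C, a minimal sketch of that shared-memory flag idea using System V shmget(); the segment is created in the common parent so both children inherit it, and the 60-second busy-wait is deliberately crude:

#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* parent creates an anonymous segment; both children inherit the id */
    int shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0600);
    volatile int *done = (volatile int *)shmat(shmid, NULL, 0);
    *done = 0;

    if (fork() == 0) {                /* worker child */
        /* ... do the chores ... */
        *done = 1;                    /* signal completion */
        _exit(0);
    }

    if (fork() == 0) {                /* watcher child */
        for (int i = 0; i < 60 && !*done; i++)
            sleep(1);                 /* crude timeout */
        if (!*done) {
            /* worker is late: perform the predefined steps here */
        }
        _exit(0);
    }

    /* a real program would wait for both children and then
     * shmctl(shmid, IPC_RMID, NULL) to remove the segment */
    return 0;
}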
Here is some illustrative code that could help you solve the problem in a Linux environment.
pid_t pid = fork();
if (pid == -1) {
printf("fork: %s", strerror(errno));
exit(1);
} else if (pid > 0) {
/* parent process */
int i = 0;
int secs = 60; /* 60 secs for the process to finish */
while(1) {
/* check if process with pid exists */
if (exist(pid) && i > secs) {
/* do something accordingly */
}
sleep(1);
i++;
}
} else {
/* child process */
/* child logic here */
exit(0);
}
... those 60 seconds are not very strict; you could use a timer if you want stricter timing. But if your system doesn't need hard real-time guarantees, it should be fine like this.
exist(pid) refers to a function you would write yourself that looks into /proc/<pid>, where pid is the process id of the child process.
Optionally, you can implement exist(pid) using libraries designed to extract information from the /proc directory, such as procps.
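If you would rather not parse /proc, a rougher way to write such an exist(pid) check is to probe the process with signal 0; keep in mind that an exited child still counts as existing until it has been waited on:

#include <errno.h>
#include <signal.h>
#include <sys/types.h>

/* rough equivalent of the exist(pid) used above */
static int exist(pid_t pid)
{
    /* signal 0 delivers nothing but still performs the existence check */
    return kill(pid, 0) == 0 || errno == EPERM;
}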
The only processes you can wait on are your own direct child processes - not siblings, not your parent, not grandchildren, etc. Depending on your program's needs, Matt's solution may work for you. If not, here are some other alternatives:
Forget about waiting and use another form of IPC. For robustness, it needs to be something where unexpected termination of the process you're waiting on results in your receiving an event. The best one I can think of is opening a pipe which both processes share, and giving the writing end of the pipe to the process you want to wait for (make sure no other processes keep the writing end open!). When the process holding the writing end terminates, it will be closed, and the reading end will then indicate EOF (read will block on it until the writing end is closed, then return a zero-length read). A minimal sketch of this appears after these alternatives.
Forget about IPC and use threads. One advantage of threads is that the atomicity of a "process" is preserved. It's impossible for individual threads to be killed or otherwise terminate outside of the control of your program, so you don't have to worry about race conditions with process ids and shared resource allocation in the system-global namespace (IPC objects, filenames, sockets, etc.). All synchronization primitives exist purely within your process's address space.
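Here is a minimal sketch of the pipe-based approach from the first alternative above. For a sibling watcher the idea is the same: the watcher inherits the read end, every process other than the watched one closes the write end, and read() returning 0 means the watched process is gone.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf;

    pipe(fds);

    if (fork() == 0) {                /* the process being waited on */
        close(fds[0]);                /* keep only the write end */
        /* ... do work; the write end is closed automatically on exit ... */
        _exit(0);
    }

    /* watcher side: crucial that nobody else keeps the write end open */
    close(fds[1]);
    ssize_t n = read(fds[0], &buf, 1);    /* blocks until the writer is gone */
    if (n == 0)
        printf("watched process terminated\n");
    return 0;
}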

Problem: recvmsg(pfd[0], &message, MSG_WAITALL) always returns -1 instead of blocking?

I'm making a server which spawns a child upon connection (using fork) and uses a pipe to send another socket to this child when another connection comes in. The idea is to let the child process manage two connections in a 2-player network game mode.
The IPC pipe between parent and child is pfd[2].
Basically, in the child process, I do recvmsg(pfd[0], &message, MSG_WAITALL) to wait for the 2nd socket to be passed from the parent.
However, recvmsg never blocks and always returns -1.
I've already set pfd[0] to blocking as follows:
// set to blocking pipe
int oldfl;
oldfl = fcntl(pfd[0], F_GETFL);
if (oldfl == -1) {
perror("fcntl F_GETFL");
exit(1);
}
fcntl(pfd[0], F_SETFL, oldfl & ~O_NONBLOCK);
How can I make the child to be blocked at recvmsg?
Thanks a million for any hint.
recvmsg() does not work on pipes; it works only on sockets. When recvmsg() returns -1 you should check the errno value; for a pipe it is most likely ENOTSOCK (socket operation on non-socket).
You can use Unix domain sockets instead of a pipe to pass file descriptors between processes.
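For the descriptor passing itself, here is a hedged sketch using a socketpair(AF_UNIX, SOCK_STREAM, 0, sv) created before the fork; the send_fd()/recv_fd() helper names are invented for the example. The child's recv_fd(sv[1]) blocks until the parent actually sends, which is the behaviour the question was after:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* parent: ship `fd` to the peer over the Unix-domain socket `chan` */
static int send_fd(int chan, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg;

    memset(&msg, 0, sizeof msg);
    memset(&u, 0, sizeof u);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
}

/* child: blocks in recvmsg() until the parent sends a descriptor */
static int recv_fd(int chan)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg;
    int fd = -1;

    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    if (recvmsg(chan, &msg, 0) <= 0)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}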