In my C/C++ server application, which runs on Mac (Darwin Kernel Version 10.4.0), I'm forking child processes and want these children to not inherit the server's file handles (files, sockets, pipes, ...). It seems that by default all handles are inherited; worse, netstat shows that the child processes are listening on the server's port. How can I do this kind of fork?
Normally, after fork() but before exec(), one calls getrlimit(RLIMIT_NOFILE, &rl); and then closes all file descriptors below that limit (except the ones that should stay open).
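A minimal sketch of that cleanup, assuming only stdin, stdout, and stderr should stay open in the child (illustrative, not your server's actual code):

    #include <sys/resource.h>
    #include <unistd.h>

    // Close every inherited descriptor except 0, 1, and 2. Call this in the
    // child right after fork(). Closing an unused fd just fails with EBADF.
    static void close_inherited_fds(void)
    {
        struct rlimit rl;
        rlim_t max_fd = 1024;                      // fallback guess if getrlimit fails
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
            max_fd = rl.rlim_cur;

        for (rlim_t fd = 3; fd < max_fd; ++fd)
            close((int)fd);
    }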
Also, close-on-exec can be set on file descriptors using fcntl(), so that they get closed automatically on exec(). This, however, is not thread-safe, because another thread can fork() after this thread opens a new file descriptor but before it sets the close-on-exec flag.
On Linux this problem has been solved by adding the O_CLOEXEC flag to functions like open(), so that no extra call is required to set the close-on-exec flag.
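Hedged sketches of both approaches; note that O_CLOEXEC and SOCK_CLOEXEC are Linux-specific, and that none of these flags closes anything on a plain fork() without exec():

    #include <fcntl.h>
    #include <sys/socket.h>

    // Racy variant: set FD_CLOEXEC after the descriptor already exists.
    static void set_cloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD);
        if (flags != -1)
            fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }

    // Atomic variants (Linux): the flag is set at creation time, so there is
    // no window for another thread to fork() and exec() in between.
    static int open_log_cloexec(void)
    {
        return open("/tmp/example.log", O_WRONLY | O_CREAT | O_CLOEXEC, 0644);  // path is hypothetical
    }

    static int socket_cloexec(void)
    {
        return socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
    }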
Nope, you need to close them yourself, since only you know which ones need to stay open.
Basically no. You have to do that yourself. pthread_atfork() might help, but it will still be tedious.
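A hypothetical sketch of the pthread_atfork() idea, assuming the application keeps its own list of descriptors that children must not inherit and that the list is not being modified while fork() runs:

    #include <pthread.h>
    #include <unistd.h>
    #include <vector>

    // Descriptors the application does not want children to inherit.
    // Filling this list is the application's responsibility (hypothetical).
    static std::vector<int> g_private_fds;

    // Runs in the child immediately after fork(); close() is async-signal-safe.
    static void close_private_fds_in_child(void)
    {
        for (int fd : g_private_fds)
            close(fd);
    }

    static void install_fork_handler(void)
    {
        pthread_atfork(nullptr, nullptr, close_private_fds_in_child);
    }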
In my C++ Windows app I start multiple child processes and I want them to inherit the parent's stdout/stderr, so that if my app's output is redirected to a file, that file will also contain the output of all the child processes my app creates.
Currently I do that using CreateProcess without output redirection. MSDN has a sample showing how to redirect output: Creating a Child Process with Redirected Input and Output, but I want to see what alternatives I have. The simplest is to use system() and call it from a blocking thread that waits for the child to exit. All output is then piped back to the parent's stdout/stderr; however, in the parent process I do not get a chance to process the stdout data that comes from the child.
There are also other functions to start processes on Windows, such as spawn and exec, which might be easier to port to POSIX systems.
What should I use if I want it to work on Linux/OSX? What options do I have if I want it to work on UWP aka WinRT? I might be totally OK with system() called from a blocking thread, but I'd prefer to have more control over the process PID (to be able to terminate it) and over its stdout/stderr, for example to prepend each line with child##:.
The Boost libraries recently released version 1.64, which includes the new boost::process library.
In it, you're given a C++ way to redirect output to a pipe or an asio::streambuf, from which you can create a std::string or std::istream to read whatever your child process wrote.
You can read up on the boost::process tutorials here, which show some simple examples of reading child output. It makes heavy use of boost::asio, so I highly recommend you read up on that too.
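A minimal sketch, assuming Boost 1.64+; the "ls -l" command line and the child##: prefix are purely illustrative:

    #include <boost/process.hpp>
    #include <iostream>
    #include <string>

    namespace bp = boost::process;

    int main()
    {
        bp::ipstream out;                                            // pipe carrying the child's stdout
        bp::child c(bp::search_path("ls"), "-l", bp::std_out > out); // command line is illustrative

        std::string line;
        while (std::getline(out, line))
            std::cout << "child##: " << line << '\n';                // parent can process/prefix output

        c.wait();
        return c.exit_code();
    }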
I have a Linux C++ application which spawns and interacts with another process through a Unix domain socket. This new process basically just displays an icon of the currently running process in the taskbar, with some menu items attached to the icon.
Problem:
When the main application is closed gracefully, the UDS file is removed.
But if the application crashes, the UDS file is not removed and it lingers.
Is there any way of removing the UDS file upon application crash through coding?
Is there any way of removing the UDS file upon application crash through coding?
Yes. There are several ways, depending on how comfortable you are with using potentially non-portable capabilities.
Using a separate process:
Use a separate process to monitor your application; perhaps one you've written for this purpose. When this monitoring process detects that your application has ended, it checks for the Unix domain socket file. If found, it deletes it. It then restarts the application (if needed).
Using "abstract socket":
I believe you can also use an "abstract socket", though I have not tried this myself.
The Linux manual page for Unix domain sockets describes an extension called "abstract sockets". It explains: "Abstract sockets automatically disappear when all open references to the socket are closed."
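A hypothetical sketch of binding into the abstract namespace (Linux-only); the leading NUL byte in sun_path is what makes the name abstract, so no filesystem entry is ever created and nothing lingers after a crash:

    #include <cstddef>
    #include <cstring>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    // Returns a bound (not yet listening) abstract-namespace socket, or -1.
    static int bind_abstract(const char *name)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        sockaddr_un addr;
        std::memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        addr.sun_path[0] = '\0';                                   // abstract namespace marker
        std::strncpy(addr.sun_path + 1, name, sizeof(addr.sun_path) - 2);

        socklen_t len = offsetof(sockaddr_un, sun_path) + 1 + std::strlen(name);
        if (bind(fd, (sockaddr *)&addr, len) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }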
Using "close-behind semantics":
The NOTES section of the Linux Unix domain socket manual page states: "The usual UNIX close-behind semantics apply; the socket can be unlinked at any time and will be finally removed from the filesystem when the last reference to it is closed". That is, call bind() to create the socket, wait till the client has connected, then unlink() the socket, then go on to the code that might crash. Once the socket is removed from the directory entry, however, new client connection attempts will fail.
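A minimal sketch of that sequence, assuming listen_fd is already bound to path and listening:

    #include <sys/socket.h>
    #include <unistd.h>

    // Wait for the single expected client, then unlink the socket path so a
    // later crash leaves nothing behind. Existing connections keep working;
    // new connection attempts will fail after the unlink.
    static int accept_then_unlink(int listen_fd, const char *path)
    {
        int client_fd = accept(listen_fd, nullptr, nullptr);
        if (client_fd < 0)
            return -1;
        unlink(path);
        return client_fd;
    }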
Using a potential workaround:
Use SO_REUSEADDR on your socket before your bind call. This may allow the application to restart without needing to delete the socket. I do not know if this behavior is well defined though for Unix sockets. It may work on one platform but not another.
Problem: When main application is closed gracefully the UDS file is removed. But in case of an application crash, this UDS file is not removed and it lingers.
Another way to handle the Unix domain socket file (the portable/standard version of it) is to delete the socket file in your application before it goes about creating it. So before your application calls bind(), it would call unlink(). As long as it's the sole process that creates this file, things should be fine with respect to avoiding races.
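A minimal sketch, assuming a hard-coded socket path and that this is the only process that ever creates it:

    #include <cstring>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    // Remove any stale socket file left by a crash, then bind and listen.
    static int listen_unix(const char *path)
    {
        unlink(path);                              // ignore errors: the file usually won't exist

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        sockaddr_un addr;
        std::memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (bind(fd, (sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, SOMAXCONN) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }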
Beware, though, that using unlink() can open a potential security vulnerability if your application runs with heightened privileges (for instance using the set-user-ID capability to run as, say, root). Make sure the user cannot tell the application what path to use for the socket, and that none of the directories in which the socket will reside is modifiable by the user. Otherwise, a user could tell the application that the socket's full path is something like /etc/passwd and run it to have that file deleted, even though the user themselves would not have had the privilege to do so.
This potential for damage is of course mitigated by things like using a least-privileged account for the set-user-ID privilege, or by avoiding set-user-ID altogether. Another mitigation is to not let the user tell the application what path to use for its socket, for example by using a hard-coded pathname in directories the user cannot write to.
Not sure if this helps, but you can detect an orphaned Unix socket.
You can try locking a file or the socket on start-up. If the lock succeeds, the socket is orphaned and can be removed. This works because file locks are released by the OS when a process terminates for any reason.
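A hypothetical sketch using a companion lock file next to the socket (both paths are assumptions); flock() locks are released by the kernel when the owning process dies, so acquiring the lock means any existing socket file is stale:

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    // Returns true if we own the socket now (and removed any stale file).
    // Keep lock_fd open for the lifetime of the process.
    static bool remove_if_orphaned(const char *sock_path, const char *lock_path)
    {
        int lock_fd = open(lock_path, O_CREAT | O_RDWR, 0600);
        if (lock_fd < 0)
            return false;

        if (flock(lock_fd, LOCK_EX | LOCK_NB) == 0) {
            unlink(sock_path);                     // no live owner: the file is stale
            return true;
        }
        close(lock_fd);                            // another instance owns the socket
        return false;
    }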
Alternatively, bind to that unix socket. bind succeeds only if the socket name is unused.
A common server socket pattern on Linux/UNIX systems is to listen on a socket, accept a connection, and then fork() to process the connection.
So, it seems that after you accept() and fork(), once you're inside the child process, you will have inherited the listening file descriptor of the parent process. I've read that at this point, you need to close the listening socket file descriptor from within the child process.
My question is, why? Is this simply to reduce the reference count of the listening socket? Or is it so that the child process itself will not be used by the OS as a candidate for routing incoming connections? If it's the latter, I'm a bit confused for two reasons:
(A) What tells the OS that a certain process is a candidate for accepting connections on a certain file descriptor? Is it the fact that the process has called accept()? Or is it the fact that the process has called listen()?
(B) If it's the fact that the process has called listen(), don't we have a race condition here? What if this happens:
Parent Process listens on socket S.
An incoming connection goes to the Parent Process.
The Parent Process forks a child; the child has a copy of socket S.
BEFORE the child is able to call close(S), a second incoming connection goes to the Child Process.
The Child Process never calls accept() (because it's not supposed to), so the incoming connection gets dropped.
What prevents the above condition from happening? And more generally, why should a child process close the listening socket?
Linux queues up pending connections. A call to accept(), from either the parent or the child process, will take the next connection from that queue.
Not closing the socket in the child process is a resource leak, but not much more. The parent will still grab all incoming connections, because it's the only one that calls accept(), but if the parent exits, the listening socket will continue to exist because it's still open in the child, even if the child never uses it.
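For reference, a minimal sketch of the accept-then-fork pattern under discussion, with the child closing the inherited listening descriptor; handle_client() stands in for a hypothetical per-connection handler:

    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void serve(int listen_fd)
    {
        for (;;) {
            int conn_fd = accept(listen_fd, nullptr, nullptr);
            if (conn_fd < 0)
                continue;

            pid_t pid = fork();
            if (pid == 0) {                            // child: handle one connection
                close(listen_fd);                      // drop the inherited listening socket
                // handle_client(conn_fd);             // hypothetical handler
                close(conn_fd);
                _exit(0);
            }
            close(conn_fd);                            // parent keeps only the listening socket
            while (waitpid(-1, nullptr, WNOHANG) > 0)  // reap finished children
                ;
        }
    }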
The incoming connection will be 'delivered' to whichever process calls accept(). After a fork(), and before the descriptor is closed, either process could accept the connection.
So as long as you never accept any connections in the child process, and the parent continues to accept connections, everything will work fine.
But if you plan to never accept connections in your child process, why would you want to keep resources for the socket in this process?
The interesting question is what happens if both processes call accept() on the socket. I could not find definitive information on this at the moment. What I could find is that you can be sure every connection is delivered to only one of these processes.
In the socket() manual, a paragraph says:
SOCK_CLOEXEC
    Set the close-on-exec (FD_CLOEXEC) flag on the new file descriptor. See the description of the O_CLOEXEC flag in open(2) for reasons why this may be useful.
Unfortunately, that doesn't do anything when you call fork(); it only applies when you call execv() and similar functions. Anyway, reading the open() manual we see:
O_CLOEXEC (since Linux 2.6.23)
Enable the close-on-exec flag for the new file descriptor. Specifying this flag permits a program to avoid additional fcntl(2) F_SETFD operations to set the FD_CLOEXEC flag.
Note that the use of this flag is essential in some multithreaded programs, because using a separate fcntl(2) F_SETFD operation to set the FD_CLOEXEC flag does not suffice to avoid race conditions where one thread opens a file descriptor and attempts to set its close-on-exec flag using fcntl(2) at the same time as another thread does a fork(2) plus execve(2). Depending on the order of execution, the race may lead to the file descriptor returned by open() being unintentionally leaked to the program executed by the child process created by fork(2). (This kind of race is in principle possible for any system call that creates a file descriptor whose close-on-exec flag should be set, and various other Linux system calls provide an equivalent of the O_CLOEXEC flag to deal with this problem.)
Okay so what does all of that mean?
The idea is very simple. If you leave a file descriptor open when you call execve(), you give the child process access to that file descriptor and thus it may be given access to data that it should not have access to.
When you create a service which fork()s and then executes code, that code often starts by dropping rights: the main apache2 service runs as root, but all the forked children actually run as the httpd or www user (it is important for the main process to be root in order to open ports 80 and 443; any port under 1024, actually). Now, if a hacker is somehow able to gain control of such a child process, they at least won't have access to that file descriptor if it was closed very early on. This is much safer.
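As an aside, a minimal sketch of that "dropping rights" step, assuming the unprivileged account is called www (the name is just an example):

    #include <grp.h>
    #include <pwd.h>
    #include <unistd.h>

    // Call in the forked child while still root; drop groups before the user ID.
    static int drop_to_user(const char *name)
    {
        struct passwd *pw = getpwnam(name);
        if (!pw)
            return -1;
        if (setgroups(0, nullptr) != 0)        // drop supplementary groups
            return -1;
        if (setgid(pw->pw_gid) != 0)           // group before user, while still root
            return -1;
        if (setuid(pw->pw_uid) != 0)
            return -1;
        return 0;
    }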
On the other hand, my apache2 example works differently: it first opens a socket and binds it to port 80, 443, etc., then creates children with fork(), and each child calls accept() (which blocks by default). The first incoming connection wakes up one of the children by returning from its accept() call. So I guess that one is not that risky after all. The child will even keep that connection open and call accept() again, up to the maximum defined in your settings (something like 100 by default; it depends on the OS you use). After that many accept() calls, the child process exits and the server creates a new instance. This is to make sure that the memory footprint doesn't grow too much.
So in your case it may not be that important. However, if a hacker takes over your process, they could accept other connections and handle them with their own version of your server... something to think about. If your service is internal (it only runs on your intranet), the danger is smaller (although from what I read, most corporate theft is committed by employees working there...).
The child process won't be listening on the socket unless accept() is called, in which case incoming connections can go to either process.
A child process inherits all file descriptors from its parent. A child process should close all listening sockets to avoid conflicts with its parent.
I have a problem with my multithreaded networking server program.
I have a main thread that listens for new client connections. I use Linux epoll to get I/O event notifications. For each incoming event, I create a thread that accept()s the new connection and assigns an fd to it. Under heavy load, it can happen that the same fd is assigned twice, causing my program to crash.
My question is: how can the system re-assign an fd that is still used by another thread?
Presumably there is a race condition here - but without seeing your code it's hard to diagnose.
You would be better off accepting on the main thread and then passing the accepted socket to the new thread.
If you pass your listening socket to a new thread to then perform the accept, you're going to hit a race condition.
For further information you can look here: https://stackoverflow.com/a/4687952/516138
And this is a good background on networking efficiency (although perhaps a bit out of date).
You should call accept() on the same thread that you are calling epoll_wait() on. Otherwise you are inviting race conditions.
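A minimal sketch of that arrangement, assuming listen_fd is already registered with the epoll instance; handle_client() is a hypothetical per-connection handler:

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <thread>
    #include <unistd.h>

    // accept() happens only on the epoll thread; the connected fd is handed
    // to a worker thread, so the listening socket is never raced over.
    static void event_loop(int epoll_fd, int listen_fd)
    {
        epoll_event events[64];
        for (;;) {
            int n = epoll_wait(epoll_fd, events, 64, -1);
            for (int i = 0; i < n; ++i) {
                if (events[i].data.fd == listen_fd) {
                    int conn_fd = accept(listen_fd, nullptr, nullptr);
                    if (conn_fd >= 0)
                        std::thread([conn_fd] {
                            // handle_client(conn_fd);   // hypothetical handler
                            close(conn_fd);
                        }).detach();
                } else {
                    // ... handle other registered descriptors here ...
                }
            }
        }
    }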
File descriptors are managed on a per-process basis. This means they are unique within each process, and that multiple threads in the same process share the same file descriptors.
Having an accept syscall return the same file descriptor inside the same process is a very strong indication that some of your threads are closing the previous "version" of the repeated file descriptor.
Issues like this one can be difficult to debug in complex software. One way to identify this on a Linux system is to use the strace command. You can run strace -f -e trace=close,accept4,accept,pipe,open <your program>. That will print the specified syscalls along with which thread is calling each one.
I'm new to working with forking and I am having trouble understanding how to achieve what I want. I'll try to explain as best I can.
I have Process A which is a functional Berkeley socket server running on Linux.
I need Process A to load a program from disk into a separate, non-blocking process (Process B) running in the background. Then Process A needs to pass control of its sockets to Process B. Lastly, Process A needs to end, leaving Process B running.
I'm unclear on what's needed to pass the sockets to a new process if the old one ends, and on the best way to create a non-blocking new process that allows the original process to end.
There's nothing special you need to do. Just make sure the close-on-exec flag is cleared for any file descriptors you want Process B to inherit, and set for any file descriptors you don't want Process B to inherit. Then call exec to replace Process A with Process B. Process B will start with all inheritable file descriptors intact.
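A minimal sketch of that hand-off; the program path and the convention of passing the descriptor number as a command-line argument are both assumptions:

    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    // Make the socket inheritable, then replace this process with Process B.
    static void hand_off_to_b(int sock_fd)
    {
        int flags = fcntl(sock_fd, F_GETFD);
        if (flags != -1)
            fcntl(sock_fd, F_SETFD, flags & ~FD_CLOEXEC);       // clear close-on-exec

        char fd_arg[16];
        std::snprintf(fd_arg, sizeof(fd_arg), "%d", sock_fd);   // tell B which fd to use
        execl("/usr/local/bin/process_b", "process_b", fd_arg, (char *)nullptr);  // path is hypothetical

        _exit(1);   // only reached if exec failed
    }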
If you need to pass an open file (such as a socket) without relying on inheritance through fork, you can pass the descriptor over a channel connecting the two processes. On STREAMS-based systems this is done with ioctl() and I_SENDFD over a named pipe which connects the processes (here is a very detailed description; there is a corresponding mechanism for receiving it). The variation for Unix domain sockets uses sendmsg() with an SCM_RIGHTS control message.
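A minimal sketch of the Unix-domain-socket variation (sending side only), assuming uds_fd is an already-connected AF_UNIX socket between the two processes:

    #include <cstring>
    #include <sys/socket.h>
    #include <sys/uio.h>

    // Send one descriptor to the peer with an SCM_RIGHTS control message.
    static int send_fd(int uds_fd, int fd_to_pass)
    {
        char data = 'F';                            // must send at least one byte of data
        iovec iov;
        iov.iov_base = &data;
        iov.iov_len = sizeof(data);

        union {
            char buf[CMSG_SPACE(sizeof(int))];      // space for one int-sized fd
            cmsghdr align;                          // ensures proper alignment
        } ctrl;
        std::memset(&ctrl, 0, sizeof(ctrl));

        msghdr msg = {};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl.buf;
        msg.msg_controllen = sizeof(ctrl.buf);

        cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return (sendmsg(uds_fd, &msg, 0) >= 0) ? 0 : -1;
    }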