Unix domain socket file not removed on application crash - C++

I have a Linux C++ application which spawns and interacts with another process through a Unix domain socket. The new process basically just displays a taskbar icon for the currently running process, with some menu items attached to the icon.
Problem:
When the main application is closed gracefully, the UDS file is removed.
But if the application crashes, the UDS file is not removed and it lingers.
Is there any way to remove the UDS file after an application crash, in code?

Is there any way to remove the UDS file after an application crash, in code?
Yes. There are several ways, depending on whether you are okay with using potentially non-portable capabilities.
Using a separate process:
Use a separate process to monitor your application; perhaps one you've written for this purpose. When this monitoring process detects that your application has ended, it checks for the Unix domain socket file. If found, it deletes it. It then restarts the application (if needed).
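A minimal sketch of that idea, assuming placeholder paths for the application binary and the socket file:

```cpp
// Hypothetical monitor process: runs the app, and cleans up its socket file
// whenever the app ends. Paths are placeholders for illustration.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    const char* app = "./myapp";                 // placeholder binary
    const char* sock_path = "/tmp/myapp.sock";   // placeholder socket path
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) return 1;
        if (pid == 0) {                  // child: run the application
            execl(app, app, (char*)nullptr);
            _exit(127);                  // exec failed
        }
        int status = 0;
        waitpid(pid, &status, 0);        // block until the app exits or crashes
        unlink(sock_path);               // delete the stale socket file, if any
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            break;                       // graceful exit: stop restarting
        sleep(1);                        // crashed: brief pause, then restart
    }
    return 0;
}
```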
Using "abstract socket":
I believe you can also use an "abstract socket", though I have not tried this myself.
The Linux manual page for Unix domain sockets describes an extension called "abstract sockets". It explains: "Abstract sockets automatically disappear when all open references to the socket are closed."
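A sketch of what binding to an abstract address looks like on Linux; the name is a placeholder. The first byte of sun_path is a NUL and the name never appears in the filesystem, so there is nothing left to clean up after a crash:

```cpp
// Linux-only: bind a Unix domain socket to an abstract address.
#include <sys/socket.h>
#include <sys/un.h>
#include <cstddef>
#include <cstring>
#include <unistd.h>

int make_abstract_listener() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    // Abstract address: sun_path[0] == '\0', then the name (no trailing NUL).
    const char name[] = "myapp-ctl";             // placeholder name
    memcpy(addr.sun_path + 1, name, sizeof(name) - 1);
    socklen_t len = offsetof(sockaddr_un, sun_path) + 1 + (sizeof(name) - 1);

    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), len) < 0 ||
        listen(fd, 5) < 0) {
        close(fd);
        return -1;
    }
    return fd;  // vanishes automatically when the last reference is closed
}
```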
Using "close-behind semantics":
The notes section of the Linux Unix domain socket manual page states: "The usual UNIX close-behind semantics apply; the socket can be unlinked at any time and will be finally removed from the filesystem when the last reference to it is closed." I.e., call bind to create the socket, wait until the client has connected, then unlink the socket, then go on to the code that might crash. Once the socket is removed from the directory entry, however, new client connection attempts will fail.
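A sketch of that sequence, assuming a single expected client and a placeholder path (error checking omitted for brevity):

```cpp
// Close-behind sketch: bind, accept the one client, then unlink the path.
#include <sys/socket.h>
#include <sys/un.h>
#include <cstring>
#include <unistd.h>

int accept_then_unlink(const char* path /* e.g. "/tmp/myapp.sock" */) {
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(srv, 1);
    int client = accept(srv, nullptr, nullptr);  // wait for the client

    unlink(path);  // remove the directory entry; the open socket lives on
    // ...the code that might crash runs here; no stale file can be left...
    return client; // note: new clients can no longer connect() to this path
}
```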
Using a potential workaround:
Use SO_REUSEADDR on your socket before your bind call. This may allow the application to restart without needing to delete the socket file. I do not know whether this behavior is well defined for Unix sockets, though; it may work on one platform but not another.
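For completeness, setting the option looks like this; as said above, whether it helps with a leftover AF_UNIX socket file is platform-dependent, so treat it as an experiment:

```cpp
#include <sys/socket.h>

// Attempt SO_REUSEADDR before bind(); for AF_UNIX sockets the effect is
// platform-dependent and may be none at all.
bool try_reuseaddr(int fd) {
    int on = 1;
    return setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) == 0;
}
```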
Problem: When the main application is closed gracefully, the UDS file is removed. But if the application crashes, the UDS file is not removed and it lingers.
Another way to handle the Unix domain socket file (the portable/standard version of it) is to delete the socket file before creating it: before your application calls bind, have it call unlink. So long as it is the sole process that creates this file, things should be copacetic with respect to avoiding races.
Beware, though, that using unlink could open a potential security vulnerability if your application runs with heightened privileges (for instance, using the set-user-ID capability to run as, say, root). Make sure the user cannot tell the application what path to use for the socket, and that none of the directories in which the socket will reside is modifiable by the user. Otherwise, a user could tell the application that the socket's full path was something like /etc/passwd and run it to have that file deleted, even though the user themselves would not have had the privilege to do so.
This potential for damage is of course mitigated by things like using a least-privileged account for the set-user-ID privilege, or by avoiding set-user-ID altogether. Another mitigation is to not let the user tell the application what path to use for its socket, for example by using a hard-coded pathname in directories to which the user has no write privilege.
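A sketch of the portable pattern, using a hard-coded placeholder path as suggested above:

```cpp
// Portable cleanup: remove any stale socket file before bind().
#include <sys/socket.h>
#include <sys/un.h>
#include <cstring>
#include <unistd.h>

int make_listener() {
    // Hard-coded, not user-supplied, to avoid the unlink() vulnerability.
    static const char kPath[] = "/run/myapp/myapp.sock";  // placeholder

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    unlink(kPath);  // removes a leftover from a crash; ENOENT is harmless

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, kPath, sizeof(addr.sun_path) - 1);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(fd, 5) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```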

Not sure if this helps, but you can detect an orphaned Unix socket.
You can try locking a file (or the socket itself) on start-up. If the lock succeeds, the socket is orphaned and can be removed. This works because file locks are released by the OS when a process is terminated for any reason.
Alternatively, bind to that Unix socket: bind succeeds only if the socket name is unused.
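A sketch of the lock-file variant, with placeholder paths; flock() locks are dropped by the kernel when the owning process dies, no matter how:

```cpp
// Orphan detection: if we can take the lock, the previous owner is gone,
// so its socket file is stale and can be removed.
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

bool reclaim_stale_socket(const char* lock_path, const char* sock_path) {
    int lock_fd = open(lock_path, O_CREAT | O_RDWR, 0600);
    if (lock_fd < 0) return false;
    if (flock(lock_fd, LOCK_EX | LOCK_NB) != 0) {
        close(lock_fd);    // lock is held: another instance is alive
        return false;
    }
    unlink(sock_path);     // lock acquired: the socket is orphaned
    // Keep lock_fd open for the life of this process so others see the lock.
    return true;
}
```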

Related

Many-to-one two-way communication of separate programs

I'm trying to make two-way, many-to-one communication between programs in Linux.
My plan is the following: one program called the "driver", which talks to the hardware, needs to communicate with an unknown number of applications in Linux.
I read that one of the most common ways of inter-process communication is "named pipes".
The question I haven't found an answer to yet is: how should new programs notify the driver that a new program is running, so that one more connection (named pipe) between the new program and the driver is established?
All programs will be written in C++.
In essence, what you've described is a server/client relationship between programs; what each program does on either side of the communications bridge is probably irrelevant.
Even though the question suggests these processes are intended to run on the same machine, networking is still available to you via localhost.
If you're not wedded to pipes, why not use a port for the driver (server) known to each program (client), to which the server listens?
That's pretty much the underlying philosophy of X-Windows, I believe.
Plus, there should be lots of reliable code out there to get you started.
I also think sockets may be a better solution, but if you really want named pipes, I'd do it this way:
The server opens a pipe named channel_request for reading. Any new client opens it for writing and writes a unique ID (the PID should work). The server reads this ID and creates a named pipe called channel_[id]. The client then opens channel_[id] for reading and can start receiving data.
Note that Linux pipes are unidirectional, so if you want two-way communication as shown in your diagram, you will need to open both a channel_[id]_out and a channel_[id]_in.
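A sketch of the client side of that handshake; the /tmp locations and the polling loop are illustrative choices, not requirements:

```cpp
// Client side of the named-pipe handshake described above.
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string>

int connect_to_driver() {
    pid_t id = getpid();

    // 1. Announce ourselves on the shared request pipe.
    int req = open("/tmp/channel_request", O_WRONLY);
    if (req < 0) return -1;
    std::string msg = std::to_string(id) + "\n";
    write(req, msg.data(), msg.size());
    close(req);

    // 2. The server creates channel_[id]; open our private pipe for reading.
    std::string channel = "/tmp/channel_" + std::to_string(id);
    int in = -1;
    for (int i = 0; i < 50 && in < 0; ++i) {  // poll until the server made it
        in = open(channel.c_str(), O_RDONLY);
        if (in < 0) usleep(100000);
    }
    return in;  // for two-way traffic, also open channel_[id]_in for writing
}
```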

C++ logger with multiple-process support

Multiple processes write to the same file simultaneously. If the file size exceeds a limit (for example 10 MB), the current file is renamed (sample.txt to sample1.txt, a rolling appender) and a new file is created under the same name.
My issue: with multiple processes writing at the same time, when the size limit is reached and the file is closed, if one of the processes is still writing to the same file, the file rolling does not happen. Can anyone help?
One strategy that I've used, which also works on a distributed computing system across multiple machines:
If you create a library which packages log messages and sends them via TCP to a destination, then you can have as many processes as you like writing to the same logger. You'd need a server at that destination to receive the log messages and write them to a single file.
Generally, inter-process communication occurs via either shared memory or networking. With networking you can go not only inter-process but also inter-machine. If you use the destination localhost or 127.0.0.1, the packet never actually reaches the network card; most drivers are smart enough to just hand the packet to any listening processes, which gives good performance too.
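A sketch of the client half of such a library, writing newline-terminated messages to a log server on the loopback interface; the port number is a placeholder:

```cpp
// Minimal log client: each process connects to a local log server and sends
// lines; a single server serializes all of them into one file.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdint>
#include <string>

class TcpLogger {
    int fd_ = -1;
public:
    explicit TcpLogger(uint16_t port) {            // e.g. 5140 (placeholder)
        fd_ = socket(AF_INET, SOCK_STREAM, 0);
        if (fd_ < 0) return;
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");  // never leaves the box
        if (connect(fd_, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
            close(fd_);
            fd_ = -1;
        }
    }
    ~TcpLogger() { if (fd_ >= 0) close(fd_); }
    void log(const std::string& line) {
        if (fd_ < 0) return;
        std::string msg = line + "\n";
        write(fd_, msg.data(), msg.size());
    }
};
```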

Make the second instance of a Qt5 application transfer command line arguments to the first instance

I'm making a Qt 5 application that reads files of a certain type, and I want to limit it to one instance. I also want the 2nd instance to transfer its command-line arguments to the 1st, so that the 1st opens a file when the user double-clicks on that file.
Most information I've found deals with simply disabling a 2nd instance, not with passing data. I've found QLocalServer, but apparently its socket is not destroyed when the application crashes on GNU/Linux. I've also found boost::interprocess::message_queue, but it looks like I would need a dedicated thread to read from it. Here's the closest thing I've found: https://github.com/itay-grudev/SingleApplication/ It provides a signal I can listen to, but unfortunately doesn't provide an option to pass the command line.
What is the best solution? The OSes I care about are GNU/Linux, Mac, and Windows, and preferably Android.
Another method is to create and bind a Unix domain socket using a predefined socket name (or a local TCP socket on platforms that do not support Unix sockets; on Linux, an abstract socket name avoids leaving a stale file behind after a crash). bind succeeds only for the first instance of your application, and the OS releases the name when the application terminates for any reason. When bind() fails, it means another instance of the application is already running; the second instance can then connect() and use the socket to pass its command-line arguments to the first instance.
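A sketch of that bind-or-connect test with a plain Unix socket and an abstract name (Linux); in Qt the same idea maps naturally onto QLocalServer/QLocalSocket. The socket name is a placeholder:

```cpp
// Single-instance check via an abstract Unix socket (Linux).
// Returns a listening fd for the first instance; otherwise returns -1 and
// leaves a connected fd in out_client for passing argv to the first instance.
#include <sys/socket.h>
#include <sys/un.h>
#include <cstddef>
#include <cstring>
#include <unistd.h>

int single_instance(int& out_client) {
    const char name[] = "myapp-single";  // placeholder abstract name
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    memcpy(addr.sun_path + 1, name, sizeof(name) - 1);  // abstract: leading NUL
    socklen_t len = offsetof(sockaddr_un, sun_path) + 1 + (sizeof(name) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), len) == 0) {
        listen(fd, 5);
        return fd;        // first instance: accept() peers and read their argv
    }
    // bind failed: an instance already owns the name; hand our argv over.
    connect(fd, reinterpret_cast<sockaddr*>(&addr), len);
    out_client = fd;
    return -1;
}
```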
OK, I've followed #peppe's suggestion and used the code in this example, and it worked.

IPC methods for local processes with multiple separate groups

I’m new to IPC and I’m trying to implement a secure IPC method (not related to encryption).
I’m developing a system in C++ using Visual Studio 2010 (but it will be ported to other platforms: Linux/MacOS/FreeBSD). The system has a process “A” that needs to send and receive XML to/from another process “B” on the same computer, but there will be around 14 processes like “B” (B1, B2, ..., B14) that need to send/receive XML to/from process “A”.
Process “A” will act as a proxy/bridge between the “B” processes: all data/XML that a process “B” sends goes to process “A”, and only process “A” sends data/XML to the “B” processes.
I’m looking for an IPC method to exchange this data between process “A” and “B1…B14”. Shared memory sounds good for this, but any process can read/write the address space, so it isn’t secure (I know it is possible to set access permissions).
I’m trying to find an IPC method that:
1- Must be a local-only method; I need to avoid remote connections.
2- For security reasons, when a process opens a “channel of communication” to send/receive data, no other process may use the same “channel” (unlike shared memory or a Boost message queue, where anyone can write on the channel, or a named pipe, where another instance can be opened with the given name); I want to keep out fake/malicious processes. TCP sounds good for this, because two processes cannot listen on the same port (but it isn’t local-only).
3- Process “A” will be a service, and some “B” processes will run as services too, while other “B” processes will run as an unprivileged user, so this must not be an administrator-only feature.
4- This project will be closed-source, so I can’t use code/libraries under the GPL license.
5- If possible, cross-platform (Windows/Linux/MacOS/FreeBSD).
Can someone suggest a suitable IPC technique, either built into the OS or requiring a third-party library?
Short answer:
Windows Pipes for Win32.
Anonymous local sockets for Linux (and family).
Long answer:
On the Windows platform there are the following commonly used alternatives:
Memory mapped files
Named Pipes
Network sockets (mostly IP)
The unfortunate fact is that none of the above is local-only by nature: files are shared through storage access, pipes are reachable via common RPC/LPC routing, and IP is subject to routing/forwarding configuration (even when using loopback).
I personally recommend using pipes on Win32. They act more or less like local sockets on Linux (with some differences, though).
On the Linux platform:
Shared memory
Pipes
Local sockets (including anonymous ones).
Pipes and local sockets are secure, and each has its own benefits in different scenarios. Since you have a multiple-client/single-server scenario, I would favor local (AF_LOCAL) socket programming. You can use either named sockets (with file-based access control) or anonymous ones. Both options are pretty secure (unless the attacker gains local access).
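A sketch of the named-socket variant with file-based access control; the path, permissions, and backlog are placeholders chosen for this A/B scenario:

```cpp
// Named AF_LOCAL server socket restricted by filesystem permissions.
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/stat.h>
#include <cstring>
#include <unistd.h>

int make_restricted_listener() {
    const char* path = "/run/myapp/a.sock";  // directory owned by the service
    int fd = socket(AF_LOCAL, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_un addr{};
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    unlink(path);                            // clear any stale entry
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    chmod(path, 0660);  // only the service's user/group may connect
    listen(fd, 14);     // one backlog slot per expected "B" client
    return fd;
}
```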
Links
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365780(v=vs.85).aspx
http://manpages.ubuntu.com/manpages/lucid/man7/unix.7.html

What happens to a named pipe if server crashes?

I know little about pipes, but I have used one to connect two processes in my code in Visual C++. The pipe is working well, but I need to add error handling to it, so I want to know what will happen to a pipe if the server that created it crashes, and how I can recognize that from the client process.
Also, what will happen if the client process tries to access the same pipe after the server crash, if no error handling is in place?
Edit:
What impact will there be on memory if I keep creating new pipes (say, using the system time as the pipe name) while the previous ones are broken because of a server crash? Will these broken pipes be removed from memory?
IIRC, the ReadFile or WriteFile call will return FALSE and GetLastError() will return ERROR_BROKEN_PIPE (or ERROR_PIPE_NOT_CONNECTED).
I guess this kind of handling is implemented in your code; if not, you'd better add it ;-)
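A sketch of that check on the client side, assuming a pipe handle opened earlier with CreateFile; which of the two error codes you see can depend on how the server end went away:

```cpp
// Client-side read with broken-pipe detection (Win32).
#include <windows.h>
#include <cstdio>

bool read_from_pipe(HANDLE pipe, char* buf, DWORD size, DWORD& got) {
    if (ReadFile(pipe, buf, size, &got, nullptr))
        return true;
    DWORD err = GetLastError();
    if (err == ERROR_BROKEN_PIPE || err == ERROR_PIPE_NOT_CONNECTED) {
        // The server end is gone (crash or exit): close and reconnect later.
        std::fprintf(stderr, "pipe disconnected (error %lu)\n", err);
    }
    return false;
}
```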
I just want to throw this out there.
If you want a survivable method for transferring data between two applications, you might consider using MSMQ, or even bringing in BizTalk or another messaging platform.
There are several things to consider:
1. What happens if the server is rebooted or loses power?
2. What happens if the server application becomes unresponsive?
3. What happens if the server application is killed or goes away completely?
4. What is the appropriate response of a client application in each of the above?
Each of those contexts represents a potential loss of data. If that data loss is unacceptable, then named pipes are not the mechanism you should be using; instead, you need to persist the messages somehow.
MSMQ, storing to a database, or even leveraging BizTalk can take care of the survivability of the message itself.
If #1 or #3 happens, then the named pipe goes away and must be recreated by a new instance of your server application. If #2 happens, the pipe won't go away until someone either reboots the server or kills the server app and starts it again.
Regardless, the client application needs to handle the above issues; they boil down to connection-failure problems. Depending on what the client does, you might have it move into a wait state and ping the server every so often to see whether it has come back (see the sketch below).
Without knowing the nature of the data and the communication processes involved, it's hard to recommend a proper approach.
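A sketch of the wait-and-ping idea from above, on the client, with a placeholder pipe name:

```cpp
// Reconnect loop: wait for the server's pipe to come back after a failure.
#include <windows.h>

HANDLE reconnect_pipe() {
    const char* name = R"(\\.\pipe\myapp)";    // placeholder pipe name
    for (;;) {
        HANDLE h = CreateFileA(name, GENERIC_READ | GENERIC_WRITE,
                               0, nullptr, OPEN_EXISTING, 0, nullptr);
        if (h != INVALID_HANDLE_VALUE)
            return h;                          // server is back
        if (GetLastError() == ERROR_PIPE_BUSY)
            WaitNamedPipeA(name, 5000);        // all instances busy: wait
        else
            Sleep(2000);                       // pipe gone: ping again later
    }
}
```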