Is ECHILD generated by a call to system(3) a failure? (C++)

I'm currently working on a C++ server that takes requests and spawns new processes to handle them. Those child processes sometimes have to call system(3) to invoke other programs (third-party ones over which I have no control). The server is being ported to a new hardware platform, so I have to retain compatibility across multiple systems, going back to kernel 2.4.20. I'm currently ignoring children (signal(SIGCHLD, SIG_IGN)), and this works fine on the old kernel. However, when I run the server on the newer kernels I'm porting to (2.6, 3.2) on different hardware, the system call fails, with system(3) setting errno to ECHILD. What changed in the kernel, and what's the proper way of handling children if I can't ignore them? (Note: when I register a handler for SIGCHLD following Beej's example, it works fine.)
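For reference, a rough sketch of the kind of SIGCHLD handler the question alludes to, in the style of Beej's guide (the function names are illustrative, not taken from the original code):

    #include <sys/wait.h>
    #include <signal.h>
    #include <cerrno>
    #include <cstring>

    // Reap every exited child without blocking, so no zombies accumulate.
    // The loop matters: one SIGCHLD delivery may stand in for several
    // children that exited around the same time.
    static void sigchld_handler(int)
    {
        int saved_errno = errno;               // waitpid() may clobber errno
        while (waitpid(-1, nullptr, WNOHANG) > 0)
            ;
        errno = saved_errno;
    }

    int install_sigchld_handler()
    {
        struct sigaction sa;
        std::memset(&sa, 0, sizeof sa);
        sa.sa_handler = sigchld_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;              // restart interrupted syscalls
        return sigaction(SIGCHLD, &sa, nullptr);
    }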

Related

Is there a cross-platform way to make a process-based socket server in C++?

It's something that seems deceptively simple, but comes with a lot of nasty details and compatibility problems. I have some code that kinda works on Linux and... sorta works on Windows, but both versions have various problems, for what seems like a common and simple task. I know async is all the rage these days, but I have good reasons to want a process per connection.
I'm writing a server that hosts simulation processes. So each connection is long-running and CPU intensive. But more importantly, these simulators (Ngspice, Xyce) have global state and sometimes segfault or reach unrecoverable errors. So it is essential that each connection has its own process so they can run/crash in parallel and not mess with each other's state.
Another semi-important detail is that the protocol is based on Capnp RPC, which has a nice cross-platform async API, but not a blocking one. So what I do is have my own blocking accept loop that forks a new process and then starts the Capnp event loop in the new process.
So I started with a simple accept loop, added a ton of ifdefs to support Windows, then added fork to make it multi-process, and then a SIGCHLD handler to try to avoid zombie processes. But Windows doesn't have fork, and if many clients disconnect simultaneously I still get zombies.
My current code lives here: https://github.com/NyanCAD/SimServer/blob/1ba47205904fe57196498653ece828c572579717/main.cpp
I'm fine with either some more ifdefs and hacks to make Windows work and avoid zombies, or some sort of library that offers a ready-made multiprocess socket server or the functionality for writing one. The important part is that it can accept a socket in a new process and pass the raw FD to the Capnp event loop.
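For the POSIX side, here is a minimal sketch of a fork-per-connection accept loop that reaps children robustly; looping waitpid with WNOHANG in the SIGCHLD handler is what prevents zombies when several clients disconnect at once. The port number and connection handling are illustrative assumptions, not taken from the linked code:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cerrno>

    static void reap_children(int)
    {
        int saved = errno;
        while (waitpid(-1, nullptr, WNOHANG) > 0) {}   // collect all exited children
        errno = saved;
    }

    int main()
    {
        struct sigaction sa {};
        sa.sa_handler = reap_children;
        sa.sa_flags = SA_RESTART;                      // keep accept() restartable
        sigemptyset(&sa.sa_mask);
        sigaction(SIGCHLD, &sa, nullptr);

        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        sockaddr_in addr {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(5923);                   // illustrative port
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        listen(srv, SOMAXCONN);

        for (;;) {
            int client = accept(srv, nullptr, nullptr);
            if (client < 0)
                continue;                              // e.g. EINTR
            if (fork() == 0) {                         // child owns the connection
                close(srv);
                // ... hand `client` to the Capnp event loop here ...
                close(client);
                _exit(0);
            }
            close(client);                             // parent keeps only srv
        }
    }

This covers the POSIX half only; on Windows one common substitute for fork is spawning a fresh process and handing it the socket (e.g. via WSADuplicateSocket), which still needs its own ifdef path.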

Unix domain socket file not removed on application crash

I have a Linux C++ application which spawns and interacts with another process through a Unix domain socket. The new process basically just displays an icon for the currently running process in the taskbar, with some menu items shown in the icon's menu.
Problem:
When the main application is closed gracefully, the UDS file is removed.
But if the application crashes, the UDS file is not removed and it lingers.
Is there any way of removing the UDS file upon application crash through coding?
Yes. There are several ways, depending on how comfortable you are with using potentially non-portable capabilities.
Using a separate process:
Use a separate process to monitor your application; perhaps one you've written for this purpose. When this monitoring process detects that your application has ended, it checks for the Unix domain socket file. If found, it deletes it. It then restarts the application (if needed).
Using "abstract socket":
I believe you can also use an "abstract socket", though I have not tried this myself.
The Linux manual page for Unix domain sockets describes an extension called "abstract sockets" and explains: "Abstract sockets automatically disappear when all open references to the socket are closed."
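A minimal sketch of binding an abstract socket on Linux; the leading NUL byte in sun_path is what marks the name as abstract (the name itself is an illustrative choice):

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <cstddef>
    #include <cstring>

    int bind_abstract_socket()
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        sockaddr_un addr {};
        addr.sun_family = AF_UNIX;
        // sun_path[0] stays '\0': this selects the abstract namespace, so no
        // filesystem entry is created and nothing lingers after a crash.
        static const char name[] = "my-app-socket";    // illustrative name
        std::memcpy(addr.sun_path + 1, name, sizeof name - 1);

        socklen_t len = offsetof(sockaddr_un, sun_path) + 1 + sizeof name - 1;
        if (bind(fd, reinterpret_cast<sockaddr*>(&addr), len) < 0)
            return -1;
        return fd;
    }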
Using "close-behind semantics":
The NOTES section of the Linux Unix domain socket manual page says: "The usual UNIX close-behind semantics apply; the socket can be unlinked at any time and will be finally removed from the filesystem when the last reference to it is closed." I.e., call bind to create the socket, wait till the client has connected, then unlink the socket, then go about with the code that might crash. Once the socket is removed from the directory entry, however, new client connection attempts will fail.
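A sketch of that ordering, assuming a single expected client (the path is an illustrative choice and error handling is elided): bind and listen, wait for the client, then unlink before entering the code that might crash:

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstring>

    int serve_one_client(const char* path)   // e.g. "/tmp/myapp.sock" (illustrative)
    {
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un addr {};
        addr.sun_family = AF_UNIX;
        std::strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        listen(srv, 1);

        int client = accept(srv, nullptr, nullptr);   // wait for the one client

        unlink(path);   // remove the directory entry now; the established
                        // connection keeps working, and a later crash leaves
                        // no stale socket file behind
        return client;  // go on with the code that might crash
    }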
Using a potential workaround:
Use SO_REUSEADDR on your socket before your bind call. This may allow the application to restart without needing to delete the socket. I do not know if this behavior is well defined though for Unix sockets. It may work on one platform but not another.
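If you experiment with this, the call itself is a one-liner before bind (a sketch; as said above, its effect on Unix domain sockets is not guaranteed to be the same everywhere):

    #include <sys/socket.h>

    // Set SO_REUSEADDR on a socket before bind(); returns 0 on success.
    int set_reuseaddr(int fd)
    {
        int one = 1;
        return setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
    }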
Another way to handle the Unix domain socket file (the portable/standard version of it) is to delete the socket file in your application before it goes about creating it. So before your application calls bind, it would use unlink. So long as it's the sole process that would be creating this file, things should be copacetic w.r.t. avoiding races.
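A sketch of that unlink-before-bind sequence (the path is an illustrative choice; error handling is mostly elided):

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstring>

    int create_listener(const char* path)    // e.g. "/tmp/myapp.sock" (illustrative)
    {
        unlink(path);                        // remove any stale file left by a
                                             // crash; harmless if none exists
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un addr {};
        addr.sun_family = AF_UNIX;
        std::strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);
        if (bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0)
            return -1;
        listen(srv, SOMAXCONN);
        return srv;
    }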
Beware, though, that using unlink could open a potential security vulnerability if your application runs with heightened privileges (for instance using the set-user-ID capability to run as, say, root). Make sure the user cannot tell the application what path to use for the socket and that none of the directories in which the socket will reside are modifiable by the user. Otherwise, a user could tell the application that the socket's full path is something like /etc/passwd and run it to have that file deleted, even though the user themselves would not have had the privilege to do so.
This potential for damage is of course mitigated by things like using a least-privileged account for the set-user-ID privilege, or by avoiding set-user-ID altogether. Another mitigation is to not let the user tell the application what path to use for its socket, perhaps by hard-coding a pathname whose directories the user has no write privileges to.
Not sure if that helps, but you can detect an orphaned Unix socket.
You can try locking a file or the socket on start-up. If the lock succeeds, the socket is orphaned and can be removed, because file locks are released by the OS when a process terminates for any reason.
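A sketch of the locking approach (the lock-file path is an illustrative assumption): the running server holds the lock for its whole lifetime, so a new instance that manages to take it knows any leftover socket file is orphaned:

    #include <sys/file.h>
    #include <fcntl.h>
    #include <unistd.h>

    // Returns the held lock fd if we are the only instance (keep it open for
    // the life of the process), or -1 if a previous instance is still alive.
    int claim_socket_lock(const char* lockpath)  // e.g. "/tmp/myapp.lock" (illustrative)
    {
        int fd = open(lockpath, O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        if (flock(fd, LOCK_EX | LOCK_NB) < 0) {  // still locked: the previous
            close(fd);                           // instance is still running
            return -1;
        }
        // Lock acquired: any leftover socket file is orphaned and can be
        // unlink()ed before binding afresh. The OS releases the lock
        // automatically if this process dies, even on a crash.
        return fd;
    }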
Alternatively, bind to that Unix socket: bind succeeds only if the socket name is unused.

Qt seems to use lots of threads

I have used Qt quite a lot, but recently needed to debug the threads I have been creating and found many more threads than I was expecting.
So my program is a simple console-only (no GUI) Qt application (Linux).
Threads that I have created:
It has a main() (which executes the QCoreApplication), so that is the main thread.
A thread to process received data from the COM port (using the FTDI D2XX third-party driver code)
And that is all. When I do ps -T and find my application, there are 7 threads. I have two classes that are QObjects using signals and slots, so maybe they need a thread each for message handling; that would take me to 4 threads... so I am at a loss as to why I might have 7 threads for my application.
Can anyone explain more about what is going on? I can post code if needed. Note that I only use new QThread once in my code (for the moment).
Qt doesn't create any per-QObject threads. It creates helper threads for some platform-specific reasons, e.g. QProcess sometimes needs helper threads.
The FTDI D2XX Unix driver uses libusb, and that implementation is completely backwards and uses additional threads on top of the thread you've provided for it. Frankly, you shouldn't be using the D2XX driver on Linux or OS X. Just use the kernel driver.
You should simply run the D2XX driver in a trivial non-Qt test application that opens the device and reads from it continuously and see how many threads it spawns. You'll be dismayed...

Interprocess Communication in C++

I have a simple C++ application that generates reports on the back end of my web app (a simple LAMP setup). The problem is that the back end loads a data file that takes about 1.5 GB of memory. This won't scale very well if multiple users run it simultaneously, so my thought is to split it into several programs:
Program A is the main executable that is always running on the server, and always has the data loaded, and can actually run reports.
Program B is spawned from php, and makes a simple request to program A to get the info it needs, and returns the data.
So my questions are these:
What is a good mechanism for B to ask A to do something?
How should it work when A has nothing to do? I don't really want to be polling for tasks or otherwise spinning my tires.
Use a named mutex/event. Basically, this allows one process (process A in your case) to sit there waiting. Then process B comes along needing something done and signals the mutex/event; this wakes up process A, and you proceed.
If you are on Windows:
Mutex, Event
IPC on Linux works differently, but has the same capability:
Linux Stuff
Alternatively, for the C++ portion you can use one of the Boost IPC libraries, which are cross-platform. I'm not sure what PHP has available, but it will no doubt have something equivalent.
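On the POSIX side, one minimal way to get that wake-up pattern is a named semaphore; the semaphore name and the request hand-off are illustrative assumptions (Boost.Interprocess offers similar named primitives portably):

    #include <semaphore.h>
    #include <fcntl.h>

    // Process A: block until B signals that work is available.
    void wait_for_work()
    {
        sem_t* sem = sem_open("/myapp-work", O_CREAT, 0644, 0);  // illustrative name
        sem_wait(sem);      // sleeps here; no polling, no spinning
        // ... pick up the request (e.g. from a file or shared memory) and run it
        sem_close(sem);
    }

    // Process B: drop off a request, then wake A.
    void signal_work()
    {
        sem_t* sem = sem_open("/myapp-work", O_CREAT, 0644, 0);
        // ... write the request somewhere A can see it first ...
        sem_post(sem);      // wakes A
        sem_close(sem);
    }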
Use TCP sockets running on localhost.
Make the C++ application a daemon.
The PHP front-end creates a persistent connection to the daemon. pfsockopen
When a request is made, PHP sends it to the daemon, which processes it and sends the results back. PHP Sockets, C++ Sockets
EDIT
Added some links for reference. I might have some really bad C code that uses sockets for interprocess communication somewhere, but nothing handy.
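In that spirit, a minimal sketch of the daemon side: a loopback-only TCP server that reads one request per connection and writes a response back (the port and the framing are illustrative assumptions):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    int main()
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        sockaddr_in addr {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   // localhost only
        addr.sin_port = htons(9000);                     // illustrative port
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        listen(srv, 16);

        for (;;) {
            int c = accept(srv, nullptr, nullptr);
            if (c < 0)
                continue;
            char buf[4096];
            ssize_t n = read(c, buf, sizeof buf);        // read the request
            if (n > 0) {
                // ... run the report here; echoing a stub reply instead ...
                std::string reply = "report for: " + std::string(buf, n);
                write(c, reply.data(), reply.size());
            }
            close(c);   // one request per connection for brevity; a persistent
                        // connection (pfsockopen) would loop on read() instead
        }
    }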
IPC is easy in C++: just call the POSIX C API.
But what you're asking for would be much better served by a queue manager. Make the background daemon wait for a message on the queue, and have the front-end PHP just add the specification of the task it wants processed. Some queue managers allow the result of the task to be added to the same object, or you can define a separate queue for completion messages.
One of the best-known high-performance queue managers is RabbitMQ. Another that is very easy to use is MemcacheQ.
Or you could just add a table to MySQL for tasks, and have the background process periodically query it for unfinished ones. This works and can be very reliable (such setups are sometimes called ghetto queues), but it breaks down at high tasks-per-second rates.
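A sketch of that polling loop against a hypothetical tasks table, using the MySQL C API (the table layout, credentials, and one-second poll interval are all illustrative assumptions):

    #include <mysql/mysql.h>
    #include <unistd.h>
    #include <string>

    // Assumed table:
    //   CREATE TABLE tasks (id INT AUTO_INCREMENT PRIMARY KEY,
    //                       spec TEXT, done TINYINT DEFAULT 0);
    int main()
    {
        MYSQL* db = mysql_init(nullptr);
        mysql_real_connect(db, "localhost", "user", "pass", "reports",
                           0, nullptr, 0);            // illustrative credentials

        for (;;) {
            mysql_query(db, "SELECT id, spec FROM tasks WHERE done = 0");
            MYSQL_RES* res = mysql_store_result(db);
            while (MYSQL_ROW row = mysql_fetch_row(res)) {
                // ... run the report described by row[1] ...
                std::string upd = "UPDATE tasks SET done = 1 WHERE id = ";
                upd += row[0];
                mysql_query(db, upd.c_str());         // mark the task finished
            }
            mysql_free_result(res);
            sleep(1);                                 // the polling the answer warns about
        }
    }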

Getting ring 0 mode in C++ (Windows)

How can I get ring 0 operating mode for my process in Windows 7 (or Vista)?
Allowing arbitrary code to run in ring 0 violates basic OS security principles.
Only the OS kernel and device drivers run in ring 0. If you want to write ring 0 code, write a Windows device driver. This may be helpful.
Certain security holes may allow your code to run in ring 0 also, but this isn't portable because the hole might be fixed in a patch :P
Technically speaking, all processes have some threads spending some of their time in kernel mode (ring 0). Whenever a user-mode process makes a syscall into the OS, there is a transition where the thread gets into ring 0 via a 'gate'. Whenever a process needs to talk to a device, allocate more process-wide memory, or spawn new threads, a syscall is used to ask the OS to provide this service.
Therefore, if you want to have a process run some code in ring 0, you'll need to write a driver and then communicate with that driver through syscalls. The most common mechanism for this is an ioctl (I/O control) request; on Windows, user mode issues these through DeviceIoControl.
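A minimal sketch of the user-mode side of that conversation; the device name and the control code are illustrative assumptions that a matching kernel driver would have to define and handle:

    #include <windows.h>
    #include <winioctl.h>
    #include <cstdio>

    // Illustrative control code; it must match what the driver defines.
    #define IOCTL_MYDRV_PING CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, \
                                      METHOD_BUFFERED, FILE_ANY_ACCESS)

    int main()
    {
        // "MyDriver" is a hypothetical device name exposed by your driver.
        HANDLE dev = CreateFileW(L"\\\\.\\MyDriver",
                                 GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                 OPEN_EXISTING, 0, nullptr);
        if (dev == INVALID_HANDLE_VALUE) {
            std::printf("driver not present\n");
            return 1;
        }

        char out[64] = {};
        DWORD got = 0;
        // The ring transition happens inside this call: the I/O manager
        // dispatches the request to the driver's IRP_MJ_DEVICE_CONTROL handler.
        if (DeviceIoControl(dev, IOCTL_MYDRV_PING, nullptr, 0,
                            out, sizeof out, &got, nullptr))
            std::printf("driver replied with %lu bytes\n", (unsigned long)got);

        CloseHandle(dev);
        return 0;
    }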
Another thing to look at on the Windows platform is the UMDF (User-Mode Driver Framework). This lets you write, debug, and test a driver in user mode (running in ring 3) while it remains accessible to other drivers and processes in the system.
You cannot set kernel mode from a user mode process. That's how security works.