I have seen lately a couple of programs that can be launched as daemons (e.g. linphonecsh) but also offer a second invocation method that will exchange information with the running daemon. In the linphone case, linphonecsh with one set of parameters launches the daemon, but if invoked with a different set of parameters it can query the status of the daemon (call in progress, call duration, hangup, exit, etc.).
So, since I need to write an app that could go either way, app or daemon, I was wondering how one does this neat trick. I suppose UNIX domain sockets would work, as might named interprocess pipes. D-Bus, perhaps?
And where might one see a good C/C++ example of this?
Any suggestions and alternative approaches are welcome.
You have a few options:
Shared memory
Pipes
UNIX domain sockets
You should decide which one suits you best, based on the details of your task. I assume you're on Linux, so a chapter from the book "Advanced Linux Programming" on inter-process communication will help. It provides code examples, too.
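To make the UNIX domain socket option concrete for the linphonecsh-style question above, here is a minimal sketch in C++. The socket path and the "status" command are made up for illustration; the same binary can try connect() first and only become the daemon if nothing is listening yet:

    // Minimal sketch of a control channel over a UNIX domain socket.
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    static const char *SOCK_PATH = "/tmp/mydaemon.sock";  // hypothetical path

    // Daemon side: create the listening socket once at startup.
    int make_listener() {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un addr = {};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
        unlink(SOCK_PATH);                    // remove a stale socket file
        bind(fd, (sockaddr *)&addr, sizeof(addr));
        listen(fd, 4);
        return fd;
    }

    // Daemon side: answer one command per connection.
    void serve_once(int listen_fd) {
        int client = accept(listen_fd, nullptr, nullptr);
        if (client < 0) return;
        char cmd[64] = {0};
        if (read(client, cmd, sizeof(cmd) - 1) > 0 && strcmp(cmd, "status") == 0)
            write(client, "idle\n", 5);       // a real daemon reports call state here
        close(client);
    }

    // Client side: the same binary, invoked with different parameters,
    // connects to the running daemon and sends a command.
    int send_command(const char *cmd) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un addr = {};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
        if (connect(fd, (sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;  // no daemon listening -- the caller could launch one instead
        }
        write(fd, cmd, strlen(cmd) + 1);
        char reply[64] = {0};
        if (read(fd, reply, sizeof(reply) - 1) > 0)
            printf("%s", reply);
        close(fd);
        return 0;
    }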
I know I can use QSharedMemory to transfer information between applications (I've used it successfully), but I'd like to trigger a function on another application. For example: App1 has a button that, when pressed, triggers a function on App2 that prints "HelloWorld".
I've checked the IPC solutions page and I see that Linux supports D-Bus communication and that Qt uses it to extend signals and slots to the IPC level, which is essentially the kind of thing I want. But I'm working on Windows for now and need a solution that is compatible with both OSs.
I also see that perhaps I could use the TCP/IP facilities in the Qt library to establish a server/client connection, but given that I'm not so versed in it I'd prefer something simpler. But maybe TCP/IP is quite simple and with a few variables and function calls I could get it done. Please tell me if that's the case.
I've also thought of using a boolean flag in QSharedMemory that is set by the master app and is regularly checked (and cleared) by the slave app each time a looping timer runs out. But that means more overhead. It's not at all a clean solution, but it would work OK in my case.
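To make that concrete, here is roughly the polling scheme I have in mind, as a sketch; the segment key and the 100 ms interval are arbitrary, one side has to create(1) the segment while the other attach()es, and the shared memory object must outlive the timer:

    #include <QSharedMemory>
    #include <QTimer>
    #include <QDebug>

    // Master side: set the flag byte in the shared segment.
    void triggerRemote(QSharedMemory &shm) {
        shm.lock();
        static_cast<char *>(shm.data())[0] = 1;
        shm.unlock();
    }

    // Slave side: poll the flag on a timer and clear it when seen.
    void watchForTrigger(QSharedMemory &shm, QObject *parent) {
        auto *timer = new QTimer(parent);
        QObject::connect(timer, &QTimer::timeout, [&shm]() {
            shm.lock();
            char *flag = static_cast<char *>(shm.data());
            if (flag[0] == 1) {
                flag[0] = 0;               // clear so we fire once per press
                qDebug() << "HelloWorld";  // the "function on App2"
            }
            shm.unlock();
        });
        timer->start(100);  // poll every 100 ms -- the overhead I mentioned
    }

If there is something in Qt itself, like QLocalServer/QLocalSocket, that works on both OSs and avoids the polling, that would be even better.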
I'm new here and I have a question which I could not find answered by searching.
I've written a program accessing a database in C++ on Linux. Now I would like to give different UI processes (a graphical (possibly Qt) interface, a console UI, a web UI) access to this data - with client processes running on the same machine as the server. Which IPC methods would you recommend?
I looked into it, but mostly found sockets recommended - maybe an option, but not needed now, since everything runs on one machine.
The second-best thing I found was D-Bus.
Are there any best practices or howtos? Are there criteria I should heed to choose a method?
Thanks,
Mark
Hi, I'm working on a C++ project that I'm trying to keep OS independent, and I have two processes which need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two, asynchronously.
Client 1 will tell the intermediate process when data is ready, and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS independent, I don't really know what to use. I have looked into using MPI, but it doesn't really seem to fit this purpose. I have also looked into Boost.Asio, named pipes, RPCs, and RCF. I'm currently programming on Windows, but I'd like to avoid using the Win32 API so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then proceed to work until the end condition is met, sending data to the server as it becomes ready. The GUI will ask the intermediate process for data at regular intervals and will be told to wait if the model has not updated the data. As the data becomes available from the model, we also want to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we'll want the GUI to issue a command to the intermediate process to export (or load) the data).
My privileges to modify the back end/model are minimal, other than to make it adhere to the design outlined above. I have a decent amount of C++ experience, but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform independent. They work, with some minor differences, on both Windows and Unices (like Linux).
P.S. Windows does not support AF_UNIX sockets.
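For example, here is a sketch of how little platform-specific code is needed; only the startup call and a couple of names differ (a loopback-only listener, since the processes are on the same machine):

    // The same BSD socket code compiles on both platforms once the
    // startup call and a couple of names are wrapped.
    #ifdef _WIN32
      #include <winsock2.h>
    #else
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <unistd.h>
      #define closesocket close   // Windows closes sockets with closesocket()
    #endif

    bool init_sockets() {
    #ifdef _WIN32
        WSADATA wsa;                      // Winsock needs explicit startup
        return WSAStartup(MAKEWORD(2, 2), &wsa) == 0;
    #else
        return true;                      // nothing to do on Unix
    #endif
    }

    int make_server(unsigned short port) {
        int fd = (int)socket(AF_INET, SOCK_STREAM, 0);  // SOCKET on Windows
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // same-machine only
        addr.sin_port = htons(port);
        bind(fd, (sockaddr *)&addr, sizeof(addr));
        listen(fd, 8);
        return fd;
    }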
I'd check out the Boost.Interprocess library. If the two processes are on the same machine, it has a number of different ways to communicate between processes, and does so in a platform-independent manner.
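For instance, its message queues cover the producer/consumer pattern described in the question. A minimal sketch (the queue name and sizes are made up):

    #include <boost/interprocess/ipc/message_queue.hpp>
    #include <cstring>

    namespace bip = boost::interprocess;

    // Client 1 side: create the queue and push data as it becomes ready.
    void producer(const char *payload) {
        bip::message_queue::remove("model_data");      // clean up a stale queue
        bip::message_queue mq(bip::create_only, "model_data",
                              100 /*max msgs*/, 512 /*max msg size*/);
        mq.send(payload, std::strlen(payload) + 1, 0 /*priority*/);
    }

    // Client 2 side: block until data arrives (or use try_receive to poll,
    // which matches the "tell client 2 to wait" behaviour).
    void consumer() {
        bip::message_queue mq(bip::open_only, "model_data");
        char buf[512];
        bip::message_queue::size_type recvd;
        unsigned int priority;
        mq.receive(buf, sizeof(buf), recvd, priority);
    }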
I am not sure if you have considered the message format, but if you are sending structured data between processes you should look at Google Protocol Buffers. These relate to the content of the messages (what is passed) rather than how they are passed.
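A sketch of what the C++ side looks like, assuming a hypothetical Data message generated from a .proto definition such as message Data { int32 step = 1; double value = 2; }:

    #include <string>
    #include "data.pb.h"   // hypothetical generated header

    std::string pack(int step, double value) {
        Data msg;                       // generated message class
        msg.set_step(step);
        msg.set_value(value);
        std::string wire;
        msg.SerializeToString(&wire);   // serialize to a byte string
        return wire;
    }

    bool unpack(const std::string &wire, Data *out) {
        return out->ParseFromString(wire);  // false if the bytes are malformed
    }

The resulting byte string can then travel over whatever channel you pick: pipe, socket, or a Boost.Interprocess queue.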
boost::asio is platform independent, although using it doesn't imply C++ at both ends. Of course, when you are using C++ at both ends, you can use boost::asio as your form of transport.
These questions are quite general since they keep coming up for me in different situations. I'm hoping there are some basic principles/standard practices.
The typical requirements:
A program that acts like a "server", running in the background on Linux almost non-stop (restarting perhaps daily or weekly)
Handles client connections via some socket protocol
Has startup configuration files
Outputs to one or more log files
My questions:
Should I write the program as a "daemon"? What are things I should consider when choosing the daemon vs non-daemon route?
Where in the Linux folder hierarchy should the log files and configuration files go? Should I run it out of some user's home directory, or a sub-folder of some user's home directory? Or perhaps I should make a new folder, e.g. /my_server_abc/, and run it from there, writing log files to that directory as well?
Thanks
Should I write the program as a "daemon"?
No.
Do not try to daemonize yourself. Use OS-provided tools to make your application run in the background from a startup script, like start-stop-daemon on Debian/Ubuntu. systemd and Upstart can also handle this for you in their startup scripts, as can most init systems these days.
Writing a daemon has some pitfalls you might not expect, and most modern init systems don't expect you to send your own process to the background - doing so would only complicate their job. Staying in the foreground lets them, for example, generate reliable .pid files for tracking your application's process ID. If you daemonize yourself, the init system has to rely on your application to communicate its process ID correctly, since you generate new PIDs the init system can't track. This complicates things both for it and for you.
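For illustration, here is roughly what handing the job to systemd looks like; the service name, binary, and config path are hypothetical:

    # /etc/systemd/system/myserver.service (hypothetical name and paths)
    [Unit]
    Description=My example server
    After=network.target

    [Service]
    # Type=simple: the program stays in the foreground; systemd does the
    # backgrounding and tracks the PID itself.
    Type=simple
    ExecStart=/usr/local/bin/myserver --config /etc/myserver/server.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target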
Use libdaemon.
Follow the Filesystem Hierarchy Standard.
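For a server like the one described, that means roughly: configuration under /etc (e.g. a hypothetical /etc/myserver/), logs under /var/log (or via syslog), the binary in /usr/sbin or /usr/local/sbin, and a PID file under /var/run - not a user's home directory or an ad hoc top-level folder.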
I know you are thinking C/C++ when asking this question, but I think it's more general than that, and the logic used when designing is independent of the language used for the implementation.
There is a Python enhancement proposal (PEP 3143) describing a standard daemon process library. If you look at its section on correct daemon behavior, it describes how a daemon should act. There are also considerations of the differences between a 'service' and a daemon.
I think that should pretty well answer your general questions on daemons and their behavior. Check out W. Richard Stevens' home page, where you can find info on 'Unix Network Programming' (Prentice Hall), which has more information and best practices specific to C/C++ when coding daemons in a *nix environment.
You should write a real daemon (fork off to the background and release the tty). This makes it easy to run the software on system startup, which is best practice.
For logging you should keep your logs in the default place: /var/log. You may even want to use syslog for logging, as it is the default under Linux and you then don't need to care about logfile handling.
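For completeness, the classic daemonization sequence this refers to, plus syslog, looks roughly like this (a sketch, not production code):

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <syslog.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <cstdlib>

    void daemonize(const char *name) {
        pid_t pid = fork();
        if (pid < 0) std::exit(1);       // fork failed
        if (pid > 0) std::exit(0);       // parent exits; child carries on
        setsid();                        // new session: drop the controlling tty
        pid = fork();                    // second fork: never a session leader,
        if (pid < 0) std::exit(1);       // so we can't reacquire a tty
        if (pid > 0) std::exit(0);
        umask(027);
        chdir("/");                      // don't pin a mounted filesystem
        int devnull = open("/dev/null", O_RDWR);
        dup2(devnull, STDIN_FILENO);     // detach stdio from the terminal
        dup2(devnull, STDOUT_FILENO);
        dup2(devnull, STDERR_FILENO);
        if (devnull > STDERR_FILENO) close(devnull);
        openlog(name, LOG_PID, LOG_DAEMON);   // log via syslog from now on
        syslog(LOG_INFO, "daemon started");
    }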
I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell if my users aren't specifying dependency information correctly.
My question is:
I've got the DLL injection working, but I'm not all that familiar with Windows programming. What would be the best/fastest way to call back to the parent build process with all the millions of file I/O reports that the children will be generating? I've thought about having them write to a non-blocking socket, but have been wondering if maybe pipes, shared memory, or maybe COM would be better?
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of individual file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send each batch as a single packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
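A sketch of that batching idea; ReportBatcher is a hypothetical name, and send_packet stands in for whichever transport ends up being chosen:

    #include <cstddef>
    #include <string>

    // Accumulate I/O reports and flush them in ~4 KB packets instead of
    // sending millions of tiny messages.
    class ReportBatcher {
    public:
        explicit ReportBatcher(std::size_t flush_bytes = 4096)
            : flush_bytes_(flush_bytes) {}

        // Called for every file open/close event hooked by the injected DLL.
        void add(const std::string &report) {
            buffer_ += report;
            buffer_ += '\n';
            if (buffer_.size() >= flush_bytes_) flush();
        }

        void flush() {
            if (buffer_.empty()) return;
            send_packet(buffer_.data(), buffer_.size());
            buffer_.clear();
        }

    private:
        // Hypothetical transport hook: pipe, socket, whatever is picked.
        void send_packet(const char * /*data*/, std::size_t /*len*/) {}

        std::string buffer_;
        std::size_t flush_bytes_;
    };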
If you stay in the Windows world (none of your machines is Linux or whatever), named pipes are a good choice, because they are fast and can be accessed across machine boundaries. I think shared memory is out of the race, because it can't cross the machine boundary. Distributed COM allows you to formulate the contract in IDL, but I think XML messages via pipes are also OK. XML messages have the benefit of working completely independently of the channel: if you need Linux later, you can switch to TCP/IP transport and send the same XML messages.
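A minimal sketch of the named pipe server side; the pipe name is made up, and children on remote machines would open it as \\machine\pipe\build_reports:

    #include <windows.h>
    #include <cstdio>

    void serve_pipe() {
        HANDLE pipe = CreateNamedPipeA(
            "\\\\.\\pipe\\build_reports",         // hypothetical pipe name
            PIPE_ACCESS_INBOUND,                  // children write, we read
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES,
            0, 4096,                              // out/in buffer sizes
            0, nullptr);                          // default timeout, security
        if (pipe == INVALID_HANDLE_VALUE) return;

        if (ConnectNamedPipe(pipe, nullptr)) {    // block until a child connects
            char buf[4096];
            DWORD read = 0;
            while (ReadFile(pipe, buf, sizeof(buf) - 1, &read, nullptr) && read) {
                buf[read] = '\0';
                printf("report: %s\n", buf);      // hand off to the build process
            }
        }
        DisconnectNamedPipe(pipe);
        CloseHandle(pipe);
    }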
Some additional techniques with limitations:
Another forgotten but hot candidate is RPC (remote procedure calls). Lots of Windows services rely on it, but I think RPC is hard to program.
If you are on the same machine and you only need to send some status information, you can register a window message via RegisterWindowMessage() and send messages via SendMessage().
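A sketch of that technique; the message name is hypothetical, and SendNotifyMessage is used for the broadcast so a hung window can't block the sender:

    #include <windows.h>

    // Both processes call this and get the same message id for the same string.
    static const UINT WM_APP_STATUS = RegisterWindowMessageA("MyApp.Status");

    // Sender: broadcast a status code to all top-level windows.
    void notify_status(WPARAM status) {
        SendNotifyMessageA(HWND_BROADCAST, WM_APP_STATUS, status, 0);
    }

    // Receiver: handle it in the window procedure.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        if (msg == WM_APP_STATUS) {
            // react to the status value in wp
            return 0;
        }
        return DefWindowProcA(hwnd, msg, wp, lp);
    }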
Apart from all the suggestions from Thomas, you might also just use a common database to store the results. And if that is too slow, use one of the more modern (and fast) key/value stores (like Tokyo Cabinet, MemcacheDB, etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files, or capturing the output from the build tools?