Gammu-smsd runonreceive returns 0 but no program output - c++

I've written a C application that grabs some sensor data and puts it into a string. This string gets passed to gammu-smsd-inject for transmission by SMSD. For reference, my application launches gammu-smsd-inject using fork() & wait(). The program waits for gammu-smsd-inject to terminate and then exits itself.
My program works just fine: if I run it manually from a bash prompt it grabs the sensor data, calls gammu-smsd-inject and quits. The SMS appears in the database outbox and shortly afterwards I receive it on my phone.
I've added the absolute path to my program to the RunOnReceive directive of SMSD. When I send a text to SMSD, it is received in the inbox and from the log file I can see the daemon running my program. The log file then states that the process (my program) successfully exited (0), but I never receive any SMS and nothing is added to the database's outbox or sentitems tables.
Any idea what might be going on? I haven't posted a code listing as it's quite long, but it is available.
The only thing I could think of is that gammu-smsd-inject is perhaps being terminated (by a parent process somewhere up the tree) BEFORE it gets a chance to do any SQL work. Wouldn't that produce a non-zero exit code, though?

So the problem was which user was running the program. When I ran my application manually from bash, it was launched with my user ID, but when the SMSD daemon ran it, it was launched with a different ID, which was causing issues for some reason. I thought it was a problem with the user ID being used to access the MySQL database, but apparently not. In short, I don't actually know what the exact problem was, but by assigning my login's UID to the child process, everything suddenly worked.
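For reference, a minimal sketch of the workaround described above, assuming the parent process (launched by SMSD, typically as root) has enough privilege to call setuid(); TARGET_UID and the injector arguments are placeholders from my setup, not canonical values:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

// Placeholder: the numeric UID of the login account that worked when testing from bash.
static const uid_t TARGET_UID = 1000;

int send_sms(const char *number, const char *text) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {
        // Child: switch to the known-good UID before exec'ing the injector.
        if (setuid(TARGET_UID) != 0) {
            perror("setuid");
            _exit(EXIT_FAILURE);
        }
        execlp("gammu-smsd-inject", "gammu-smsd-inject",
               "TEXT", number, "-text", text, (char *)NULL);
        perror("execlp");        // only reached if exec fails
        _exit(EXIT_FAILURE);
    }
    int status = 0;
    waitpid(pid, &status, 0);    // parent waits for the injector to finish
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}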

Related

Which process takes the Ctrl + C when STDIN file descriptor is passed around?

First off, sorry that I was not able to provide a reduced example. At the moment, that is beyond my ability; in particular, my code that passes the file descriptors around wasn't working cleanly. I think I have only a fair understanding of how the code works at a high level.
The question is essentially: in the following (complicated) example, if the end user enters Ctrl + C, which process receives the SIGINT, and how do things end up happening that way?
The application works on the Command Line Interface (CLI, going forward). The user starts a client, which effectively sends a command to the server, prints some responses out, and terminates. The server, upon request, finds a proper worker executable, forks-and-execs that executable, and waits for it. Then the server constructs the response and sends it back to the client.
There are, however, some complications. The client starts the server if the server process is not already running -- there's one server process per user. When the server is fork-and-exec'ed, the line just after fork() has this:
if (pid == 0) {
daemon(0, 0);
// do more set up and exec
}
Another complication, which might be more important, is that when the client sends a request over a unix socket (which looks like #server_name), the client appears to send the three file descriptors for standard I/O, using techniques like this.
When the server fork-and-execs the worker executable, the server redirects the worker's standard I/O to the three file descriptors received from the client:
// just after fork(), in the child process' code
auto new_fd = fcntl(received_fd, F_DUPFD_CLOEXEC, 3);
dup2(new_fd, channel); // channel seems to be 0, 1, or 2
That piece of code runs for each of the three file descriptors. (The worker executable yet again creates a bunch of processes, but it does not pass the STDIN to its children.)
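For context, here is a minimal sketch of the receiving end of that technique (passing fds over a unix-domain socket with SCM_RIGHTS); sock_fd and recv_fd are placeholder names, not identifiers from the actual code:

#include <sys/socket.h>
#include <sys/uio.h>
#include <cstring>

// Receive a single file descriptor over a connected unix-domain socket.
// Returns the received fd, or -1 on error.
int recv_fd(int sock_fd) {
    char data = 0;
    struct iovec iov = { &data, sizeof(data) };

    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(sock_fd, &msg, 0) <= 0)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_level != SOL_SOCKET || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;

    int fd = -1;
    std::memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
    return fd;   // the caller then dup2()s this onto 0, 1, or 2 as shown above
}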
The question is what happens if the end user inputs Ctrl + C in the terminal. I thought the Bash shell takes it and generates and sends SIGINT to the processes with a particular session ID, perhaps the same as that of the bash shell's direct child process or the shell itself: the client, in this example.
However, it looks like the worker executable receives the signal, and I cannot confirm if the client receives the signal. I do not think the server process receives the signal but cannot confirm that. How could this happen?
If Bash takes the Ctrl + C first and delivers it to whatever processes, I thought the server had been detached from Bash (i.e. daemon(0, 0)) and had nothing to do with the bash process. I thought the server, and thus the worker processes, had different session IDs, and that is how it looked when I ran the ps -o command.
It's understandable that the user keyboard inputs (yes or no, etc) could be delivered to the worker process. I am not sure how Ctrl + C could be delivered to the worker process by just effectively sharing the standard input. I would like to understand how it works.
P.S. Thank you for the answers and comments! The answer was really helpful. It sounded like the client must get the signal, and the worker process must be stopped by some other mechanism. Based on that, I could look into the code more deeply. It turned out that the client indeed catches the signal and dies, which breaks the socket connection. The server detects when the fd is broken and signals the corresponding worker process. That is why the worker process looked like it was getting the signal from the terminal.
It's not Bash that sends the signal, but the tty driver. It sends it to the foreground process group, meaning all processes in the foreground group receive it.
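To see that in practice, here is a small sketch (my own, not from the code in question) that checks whether the calling process is in the terminal's foreground process group, i.e. whether it would receive the tty-generated SIGINT:

#include <unistd.h>
#include <cstdio>

int main() {
    // The tty driver delivers SIGINT to the foreground process group.
    // A process receives it only if its process group matches tcgetpgrp().
    pid_t own_pgrp = getpgrp();
    pid_t fg_pgrp  = tcgetpgrp(STDIN_FILENO);   // -1 if stdin is not a tty
    std::printf("pid=%d pgrp=%d foreground pgrp=%d -> %s\n",
                (int)getpid(), (int)own_pgrp, (int)fg_pgrp,
                own_pgrp == fg_pgrp ? "would receive SIGINT"
                                    : "would NOT receive SIGINT");
    return 0;
}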

log4cpp stops working properly after sometime

I have a log4cpp implementation in a multiple-process environment. The logger is configured once during initialization and is then shared among forked processes which serve HTTP requests.
During the first minute or so, I see the logs roll perfectly fine at the query-per-second load (say it runs at 100 qps).
After that, logging slows down dramatically. So I logged the pid as well and noticed that only one process gets to write to the log for a while (around 10-15 seconds), then another process starts writing, and so on and so forth. The processes don't die; they just don't get a chance to write.
This is different from what happens when the server starts. At that time, every other log line is written by a different process. (Also, I write one log line per process at the end of serving each request.)
At this point, I can't think of what could be going wrong.
This is how my log4cpp conf file looks
log4cpp.rootCategory=DEBUG,rootAppender
log4cpp.appender.rootAppender=org.apache.log4cpp.RollingFileAppender
log4cpp.appender.rootAppender.fileName=/tmp/mylogfile.log
log4cpp.appender.rootAppender.layout=org.apache.log4cpp.PatternLayout
log4cpp.appender.rootAppender.layout.ConversionPattern=%d|%p|%m%n
log4cpp.category.http.server.main=INFO,MAIN
log4cpp.additivity.http.server.main=false
log4cpp.appender.MAIN=org.apache.log4cpp.RollingFileAppender
log4cpp.appender.MAIN.maxBackupIndex=10
log4cpp.appender.MAIN.maxFileAge=1
log4cpp.appender.MAIN.append=true
log4cpp.appender.MAIN.fileName=/tmp/mylogfile.log
log4cpp.appender.MAIN.layout=org.apache.log4cpp.PatternLayout
log4cpp.appender.MAIN.layout.ConversionPattern=%d|%p|%m%n
Edit: more updates. Thanks @Botje for your time.
I see that whenever a new child process is created, it is only that process that gets to write to the log. That tells me that all the references the other processes were holding became invalid.
I also tried setting the additivity property to true. With that, the server starts out properly writing into /tmp/myfile.log and then switches to writing into /tmp/myfile.log.1 within a minute. And then it stops writing after a minute.
At that point the logs get directed to stderr, which is redirected to another log file.
Also, I did notice that the log4cpp FileAppender uses seek to determine the file size before writing log entries. If the file handle is shared between processes, that will cause writes to end up at the start of the file instead of the end. Even if you fix that, you still have multiple processes that think they are in charge of log file rotation.
I suggest you have all processes write to a common UDP/TCP/Unix socket and designate one process that collects all log entries and actually writes them to a file. You don't have to reinvent the wheel: you can use the syslog protocol and either the system syslog or a copy running in userspace.
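For illustration, a minimal sketch of that "single writer" idea using the standard syslog(3) API instead of a shared log4cpp appender; the identifier and facility below are placeholders:

#include <syslog.h>

int main() {
    // Each forked worker logs through syslog; the syslog daemon is the single
    // process that actually appends to the file, so the workers never compete
    // over file offsets or log rotation.
    openlog("http.server.main", LOG_PID, LOG_LOCAL0);
    syslog(LOG_INFO, "request served in %d ms", 42);
    closelog();
    return 0;
}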

How to stop a detached process in qt?

After starting a process with QProcess::startDetached, how can I stop it later?
Say the main program runs, then starts the detached process, which runs independently. The user closes the main program, then later opens it up again and wants to stop the process. How would I find the process and then stop it?
Is there a way I could prevent the application from starting the same process twice?
No, it will be decoupled from your application. You could get the PID of it and then send a SIGSTOP on Linux, but this is platform specific and will not work without POSIX support, for example with MSVC. You would need to hand-craft your own version there.
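A sketch of that approach on Linux, assuming a Qt version where the startDetached() overload with a pid out-parameter is available; the helper path is a placeholder, and note that SIGSTOP only suspends the process, while SIGTERM asks it to exit:

#include <QCoreApplication>
#include <QProcess>
#include <signal.h>        // ::kill (POSIX only)

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    // Start the detached helper and remember its PID; persist the PID
    // (e.g. in QSettings or a file) so a later run of the main program
    // can find it again.
    qint64 pid = 0;
    QProcess::startDetached("/usr/bin/myhelper", QStringList(), QString(), &pid);

    // ... later, possibly from a fresh run of the main program ...
    if (pid > 0)
        ::kill(static_cast<pid_t>(pid), SIGTERM);   // or SIGSTOP to suspend
    return 0;
}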
Is there a way I could prevent the application from starting the same process twice?
Yes, by using a lock file in the detached process. If that detached process happens to be written at least partially in Qt, you could use the QLockFile class.
If you happen to detach some platform specific process, then you have the same recurring issue again, for sure.
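For the lock-file idea, a minimal sketch of what the detached process itself could do with QLockFile; the file name is made up:

#include <QCoreApplication>
#include <QDir>
#include <QLockFile>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    // The lock is released when this process exits; a stale lock left behind
    // by a crash is detected via the PID stored inside the lock file.
    QLockFile lock(QDir::temp().filePath("myhelper.lock"));
    if (!lock.tryLock(100)) {
        // Another copy of the detached process is already running.
        return 0;
    }

    // ... do the detached process's actual work here ...
    return app.exec();
}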
Here's the answer I figured out:
I first start the detached process, which generates a unique id. That process writes its id to a file whenever it runs (on a 1-minute timer). Then, if another copy happens to run and sees that a previous one already wrote to the file, it just writes its own id to the file and doesn't run; when the next one runs and finds its own id already in the file, it shuts itself off and clears the file, so the run after that ends up running freely as the only one. This may end up skipping some time.
You can add a timestamp, too, as that might indicate the process hasn't run recently and help with deciding whether or not to shut it down. The issue was that if I just wrote the id to a file, then when I turned the phone off the file would still say it's running. The same applies if it crashes.

C++ Having Windows Service Start another Program

Is it possible to create a Windows service to create and maintain another process? For example, say I'm writing a program and a virus kills its process; could I have my Windows service running and basically 'watching' it? I already have the code for a regular application that stays running and executes the program if it's not currently running, to keep it alive.
I've never written a service before, but would it be that hard to just write this simple program, which basically checks whether the process is running and, if not, executes it and sleeps for a few minutes?
Thanks.
Yes, it is possible. It is not uncommon to see third-party apps have watchdog services to keep them running in case of crashes. A service can enumerate running processes using EnumProcesses(), and if the desired executable is not running then start a new copy of it using CreateProcessAsUser().
If the service is the one starting the executable process in the first place, or can find it after an enumeration, one optimization would be to keep an open handle to the process (returned by CreateProcess...(), or use OpenProcess() on the process ID an enumeration returns), and then use a wait function, like WaitForSingleObject(), to detect when the process stops running. That way, you don't have to enumerate processes to find out if the intended process is still running or not.
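As a rough sketch of that optimization (error handling trimmed, the executable path is a placeholder, and plain CreateProcess() is used for brevity where a real service would typically call CreateProcessAsUser() so the program runs in the user's session):

#include <windows.h>

// Keep one instance of the watched program alive. In a real service this loop
// would also wait on the service's stop event instead of blocking forever.
void watchdogLoop() {
    for (;;) {
        STARTUPINFOW si = {};
        si.cb = sizeof(si);
        PROCESS_INFORMATION pi = {};
        wchar_t cmd[] = L"C:\\MyApp\\myapp.exe";    // placeholder path

        if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                            0, nullptr, nullptr, &si, &pi)) {
            Sleep(60 * 1000);       // could not start it; retry later
            continue;
        }
        CloseHandle(pi.hThread);

        // Block until the child exits (no polling or enumeration needed),
        // then loop around and restart it.
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hProcess);
    }
}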

How to handle multiple concurrent instances at the same time?

That's the problem:
I don't like multiple instances of my program, which is why I've disabled them. My program opens a specific mime-type. On my system (Ubuntu 12.04), when I double-click one of these files, this is executed:
/usr/bin/myprogram /path/to/double/clicked/file.myextension
As I said, I don't like multiple instances, so if the program is already running and the user chooses to open one of these files, a DBus message is sent to the already running instance so that it can take care of the opened file. So, if there's an already running instance and the user chooses 3 files to open with my program and hits [Enter], the system executes:
/usr/bin/myprogram /path/to/double/clicked/file1.myextension
/usr/bin/myprogram /path/to/double/clicked/file2.myextension
/usr/bin/myprogram /path/to/double/clicked/file3.myextension
All of these instances detect the already running instance and send the opened file to it. No problems at all, so far.
But what if there isn't an already running instance and the user chooses to open 3 files at once with my program? The system will again call, concurrently:
/usr/bin/myprogram /path/to/double/clicked/file1.myextension
/usr/bin/myprogram /path/to/double/clicked/file2.myextension
/usr/bin/myprogram /path/to/double/clicked/file3.myextension
and each of these instances will realize that there's an already running instance, will try to send a DBus message to it, and will exit. So all 3 processes will do the same thing and nothing will end up running.
How can I avoid this problem?
PS: In order to detect if there are already running instances I implement the following code:
bool already_runs() {
    // pidof prints all matching PIDs on one line; a space means more than one instance
    return !system("pidof myprogram | grep \" \" > /dev/null");
}
I would use some shared memory to store the pid of the first process. The QSharedMemory class will help you here.
The first thing your program should do is try to create a shared memory segment (using your own made up key) and store your pid inside it. If the create call fails, then you can try to attach to the segment instead. If that succeeds then you can read the pid of the original process from it.
EDIT: also, remember to use lock() before writing to or reading from the shared memory, and then call unlock() when you are done.
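A minimal sketch of that scheme (the key name is made up and error handling is trimmed):

#include <QCoreApplication>
#include <QSharedMemory>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QSharedMemory shm(QStringLiteral("myprogram-instance"));   // made-up key

    if (shm.create(sizeof(qint64))) {
        // First instance: publish our pid for later instances to read.
        shm.lock();
        *static_cast<qint64 *>(shm.data()) = QCoreApplication::applicationPid();
        shm.unlock();
        // ... continue starting up normally and accept files from later instances ...
    } else if (shm.attach()) {
        // An instance already exists: read its pid, hand the file over, quit.
        shm.lock();
        qint64 firstPid = *static_cast<qint64 *>(shm.data());
        shm.unlock();
        Q_UNUSED(firstPid);
        // ... send the opened file to the first instance (e.g. via DBus) ...
        return 0;
    }
    return app.exec();
}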
The standard way to do this in DBus is to acquire your application's name on the bus; one instance will win the race and can become the running instance.
However, you should be able to do this using Qt functionality which will integrate better with the rest of your application; see Qt: Best practice for a single instance app protection.
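For completeness, a sketch of the "acquire the name on the bus" approach with QtDBus; the service name below is a placeholder:

#include <QCoreApplication>
#include <QDBusConnection>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QDBusConnection bus = QDBusConnection::sessionBus();

    // Only one process can own this well-known name; the winner becomes the
    // running instance, the losers forward their file argument to it and quit.
    if (!bus.registerService(QStringLiteral("org.example.myprogram"))) {
        // ... send the opened file to the name's owner via a DBus call ...
        return 0;
    }

    // ... we are the single instance: register objects and handle the files ...
    return app.exec();
}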