I'm working on an audio encoder cgi script that utilises libmp3lame.
I'm writing in a mixture of C/C++.
I plan to have an entry-point cgi that can spawn multiple encoding processes that run in the background. I need the encoding processes to be asynchronous as encoding can take several hours but I need the entry-point cgi to return instantly so the browser can continue about its business.
I have found several solutions for this (some complete/ some not) but there are still a few things I'd like to clear up.
Solution 1 (easiest): The entry-point cgi is a bash script which runs a C++ process cgi in the background by sending its output to /dev/null, i.e. appending > /dev/null 2>&1 & to the command (simples! but not very elegant).
Solution 2: Much like solution 1, except the entry-point cgi is in C++ and uses system() to run the process(es), again redirecting the output with > /dev/null 2>&1 &.
[question] This works well but I'm not sure if shared hosting companies allow use of the system() function. Is this the case?
Solution 3 (incomplete): I've looked into using fork()/pthread_create() to spawn separate processes/threads, which seems more elegant as I can stay in the realms of C. The only problem being: it seems that the parent doesn't exit until all its children have returned.
[question] Is there any way to get the parent to exit whilst allowing the children to continue in the background?
[idea] Maybe I can send the child processes' output to the black hole! Can I simply redirect stdout to /dev/null? If so, how do I do this?
I hope this makes sense to someone. I'm still a bit of a noob with C stuff so I may be missing very basic concepts (please have mercy!).
I'd be very grateful for any advice on this matter.
Many thanks in advance,
Josh
You probably want the standard Unix daemon technique, involving a double fork:
#include <stdlib.h>
#include <unistd.h>

void daemonize(void)
{
    if (fork()) exit(0);  // fork; parent exits
    setsid();             // become session leader, detach from the controlling terminal
    if (fork()) _exit(0); // fork again; second parent exits
    chdir("/");           // just so we don't mysteriously prevent fs unmounts later
    close(0);             // close stdin, stdout, stderr
    close(1);
    close(2);
}
Looks like modern Linux machines have a daemon() library function that presumably does the same thing.
It's possible that the first exit should be _exit, but this code has always worked for me.
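For comparison, here is a minimal sketch using daemon() (a BSD/glibc extension, not POSIX) instead of the hand-rolled version; the encoding work itself is only stubbed out as a comment:

#include <cstdlib>
#include <unistd.h>

int main()
{
    // daemon(nochdir, noclose): 0, 0 means chdir to "/" and redirect
    // stdin/stdout/stderr to /dev/null, roughly what daemonize() above does.
    if (daemon(0, 0) != 0)
        return EXIT_FAILURE;

    // ... long-running encoding work goes here ...
    return 0;
}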
I am trying to run multiple commands on Ubuntu from C++ code at the same time.
I used the system() call to run the commands, but the problem with system() is that it invokes only one command at a time while the rest wait.
Below is my sample code; maybe it will help show what I am trying to do.
The main thing is that I want to run all these commands at the same time, not one by one. Please help me.
Thanks in advance.
#include <cstdlib>
#include <string>
using namespace std;

int main()
{
    string command[3];
    command[0] = "ls -l";
    command[1] = "ls";
    command[2] = "cat main.cpp";
    for (int i = 0; i < 3; i++) {
        system(command[i].c_str()); // blocks until each command finishes
    }
}
You should read Advanced Linux Programming (a bit old, but freely available). You probably want (in the traditional way, like most shells do):
perhaps catch SIGCHLD (set the signal handler before fork, see signal(7) & signal-safety(7)...)
call fork(2) to create a new process. Be sure to check all three cases (failure with a negative returned pid_t, child with a 0 pid_t, parent with a positive pid_t). If you want to communicate with that process, use pipe(2) (read about pipe(7)...) before the fork.
in the child process, close some useless file descriptors, then run some exec function (or the underlying execve(2)) to run the needed program (e.g. /bin/ls)
call (in the parent, perhaps after having got a SIGCHLD) wait(2) or waitpid(2) or related functions.
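A minimal sketch of those steps, assuming /bin/ls as the program to run and keeping error handling to the essentials:

#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    pid_t pid = fork();
    if (pid < 0) {                         // failure: negative pid
        std::perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                        // child: 0 returned
        execl("/bin/ls", "ls", "-l", (char *)nullptr);
        std::perror("execl");              // only reached if exec failed
        _exit(127);
    }
    int status = 0;                        // parent: positive pid returned
    waitpid(pid, &status, 0);
    return 0;
}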
This is very usual. Several chapters of Advanced Linux Programming explain it better.
There is no need to use threads in your case.
However, notice that the role of ls and cat could be accomplished with various system calls (listed in syscalls(2)...), notably read(2) & stat(2). You might not even need to run other processes. See also opendir(3) & readdir(3)
Perhaps (notably if you communicate with several processes thru several pipe(7)-s) you might want to have some event loop using poll(2) (or the older select(2)). Some libraries provide an event loop (notably all GUI widget libraries).
You have a few options (as always):
Use threads (the C++ standard library implementation is good) to spawn multiple threads, each of which performs a system call and then terminates. join on the thread list to wait for them all to finish.
Use the *NIX fork system call to spawn a new process, then within each child process use exec to execute the desired command (see here for an example of "getting the right string to the right child"). The parent process can use waitpid to determine when all children have finished running, in order to move on with the program.
Append "&" to each of your commands, which'll tell the shell to run each one in the background (specifically, system will start the process in the background then return, without waiting for the result). Not tried this, don't know if it'll work. You can't then wait for the call to terminate though (thanks PSkocik).
Just pointing out - if you run those 3 specific commands at the same time, you're unlikely to be able to read the output as they'll all print text to the terminal at the same time.
If you do require reading the output from within the program (though not mentioned in your question), this is relevant (although it doesn't use system).
In my C++ Windows app I start multiple child processes and I want them to inherit the parent's stdout/stderr, so that if the output of my app is redirected to some file, that file also contains the output of all the child processes my app creates.
Currently I do that using CreateProcess without output redirection. MSDN has a sample showing how to redirect output: Creating a Child Process with Redirected Input and Output, but I want to see what alternatives I have. The simplest is to use system and call it from a blocking thread that waits for the child to exit. All output is then piped back to the parent's stdout/stderr; however, in the parent process I don't get a chance to process the stdout data that comes from the child.
There are also other functions to start processes on Windows: spawn, exec, which might be easier to port to POSIX systems.
What should I use if I want it to work on Linux/OSX? What options do I have if I want it to work on UWP aka WinRT? I might be totally OK with system called from a blocking thread, but perhaps I'd prefer to have more control over the process PID (to be able to terminate it) and over the process stdout/stderr, to prepend each line with child##: for example.
The boost libraries recently released version 1.64 which includes a new boost::process library.
In it, you're given a C++ way to redirect output to a pipe or asio::streambuf, from which you can create a std::string or std::istream to read whatever your child process wrote.
You can read up on boost::process tutorials here, which shows some simple examples of reading child output. It does make heavy use of boost::asio, so I highly recommend you read up on that too.
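For instance, a minimal sketch along the lines of the tutorial (ls -l here is just a stand-in for the actual child program) that reads the child's stdout line by line and prefixes each line:

#include <boost/process.hpp>
#include <iostream>
#include <string>

namespace bp = boost::process;

int main()
{
    bp::ipstream out;                                   // pipe stream connected to the child's stdout
    bp::child c(bp::search_path("ls"), "-l", bp::std_out > out);

    std::string line;
    while (c.running() && std::getline(out, line))
        std::cout << "child##: " << line << '\n';       // prepend a tag to each line, as mentioned above

    c.wait();                                           // reap the child
    return c.exit_code();
}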
I am developing a program that does various tasks using fork(). I start the program and everything works fine. I observed that after some time (1 day) I get flooded with <defunct> processes, over 600-700, where max forks is set to 500. This is the code:
int numforks = 0;
int maxf = 100;

// READ FROM FILE ...
while (fgets(nutt, 2048, fp))
{
    fflush(stdout);
    if (!(fork()))
    {
        some_time_intensive_function();
        exit(0);
    }
    else
    {
        numforks++;
        if (numforks >= maxf)
        {
            wait(NULL);
            numforks--;
        }
    }
}

// DON'T EXIT PROGRAM TILL ALL FORKS ARE FINISHED
while (numforks > 0)
{
    wait(NULL);
    numforks--;
}
// CLOSE READ FILE ...
This program keeps 500 forks open at all times, like a process pool.
I don't really understand what <defunct> processes are, but I heard that they aren't caused by errors in the child processes (like a SEG FAULT occurring), but rather by the parent process not waiting correctly.
I want to get rid of the <defunct>s; any ideas how to solve this?
I repeat, this happens only after some time, 1-2 days.
Thank you.
I think you have two problems:
Firstly, wait can return for reasons other than a child process terminating (and when that happens, no child is reaped, which leaves a defunct process behind). I think you need to pass in a non-null pointer and inspect the returned wait status; only decrement numforks if appropriate.
Secondly, numforks doesn't (effectively) limit the total number of child processes. If the parent process launches two processes, they will each go on to inherit numforks of 0 and 1, and then each of those child processes will launch 500 and 499 more subprocesses.
I think you need exit(0) (or break) after your time_consuming_process().
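Regarding the first point, a minimal sketch (a hypothetical helper, not code from the question) that only decrements the counter when wait() actually reaped a child:

#include <cerrno>
#include <sys/types.h>
#include <sys/wait.h>

static void reap_one(int &numforks)
{
    int status = 0;
    pid_t done = wait(&status);
    if (done > 0)        // a child really terminated and was reaped
        numforks--;
    // done == -1 with errno == EINTR means wait() was interrupted by a signal
    // and no child was reaped, so the counter is left untouched.
}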
(I assume you are running on Linux, or some other POSIX system like MacOSX)
Beware of orphan processes.
Read Advanced Linux Programming which has several chapters related to your issue.
You'd better keep the result of fork (in some pid_t variable or field), and handle all three cases (>0: fork was successful, you're in the parent; ==0: you're in the child process; <0: fork failed!). And you should probably call waitpid(2) appropriately. In the child process it is reasonable to call exit(3) (or execve(2)...)
Perhaps you should handle SIGCHLD signal. Read carefully signal(7).
(you don't show enough of your program, and an entire book is needed to explain all that)
As a rule of thumb you don't want to have many runnable processes. On a typical laptop or desktop computer, you should not have more than a dozen runnable processes. Use top(1) or ps(1) to list your processes (and notably to understand how many processes you have). Perhaps use (at least during debugging) the bash ulimit builtin (it calls setrlimit(2) from inside your shell) in your terminal, e.g. as ulimit -u 50, to limit the number of processes (to 50).
If coding in genuine C++11, you should consider using frameworks like Qt or POCO (both provide support for processes).
You should care about inter-process communication (perhaps with pipe(7)-s or socket(7)-s and some event loop, see poll(2) ...) and synchronization issues. Perhaps look into MPI or 0mq.
(you probably need to read a lot more)
Perhaps strace(1) might be helpful to debug your issues.
Don't forget to check every system call. See syscalls(2) & errno(3).
I'm building a failsafe application for professional video. The Qt application checks the 4 corners of the 2nd screen, and if they are a certain RGB value (I use a special background) the Qt program knows the video program crashed, so it sends a signal to the video mixer to fade to the other input.
Now I also want to add a check to see whether the video program has hung rather than crashed (it can happen that the video program doesn't respond but still shows an output, so I can't see the desktop on the 2nd screen). I know I can use QProcess to start an external process, but it's not that easy to hook it up to a process that is already running.
Now the question: how can I check whether the program crashed (i.e. is "not responding"), and detect this as quickly as possible so I can fade to the other video input? And what happens when my Qt program crashes: will it also exit the child process?
Thanks!
Using QProcess creates an attached process, so unfortunately it will be killed when your process dies. When you create a detached process using the static method QProcess::startDetached, you don't get the monitoring functionality.
You need to write a little platform-specific monitoring class that can launch a detached process and inform you of changes in its status. You need to use the native APIs in implementing that. QProcess's sources can be a good inspiration for where to start.
@KubaOber is partially correct in his statement. If you start and detach a process, you indeed lose the Qt way of communicating with it and monitoring what it does. However, your OS offers plenty of solutions to oversee what happens with it.
On Linux you can use:
pgrep to check if the process is running or not (execute the command as a child process and see if it returns 0 (process is running) or 1 (process is no longer running))
you can use proc filesystem to see when a process terminates (see here) and then use $? or a variable (as in described in the link) to check its exit status
kill gives you a great deal of control, along with pipes
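For example, a minimal sketch (the process name videoplayer is hypothetical) that runs pgrep from Qt and checks its exit code:

#include <QProcess>
#include <QString>
#include <QStringList>

bool isProcessRunning(const QString &name)
{
    // pgrep -x exits with 0 if a process with exactly that name is running, 1 otherwise
    int exitCode = QProcess::execute("pgrep", QStringList() << "-x" << name);
    return exitCode == 0;
}

// usage: if (!isProcessRunning("videoplayer")) { /* fade to the other input */ }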
You should note, however, that especially on Windows there are plenty of programs that do not follow the Unix convention for exit codes (0 = exited normally, anything else = an error occurred). Also, a crash is just an error state that the process ended up in. The exit code tells you that an error occurred, but you will probably not be able to tell whether it was a crash just by looking at it.
I am creating an application in C++ with GTK. When I press a button, a thread starts, and I need that work to keep running even if the window is closed. Is this possible?
Under a Unix system (and since Windows 10), you create another process using the fork() function. To run a program you then use execve() or similar.
However, that means you need to communicate with that other process using a pipe (see pipe() or pipe2()) or via the network.
Using a thread instead of a process allows you to run in the same memory space & process, and you can very easily share everything between multiple threads.
As far as I know, the gtk loop just returns once the user selects "Close Window" or a similar exit function. It would be up to your main() function to make sure it waits for all the threads to be done before exiting. For threads, this is usually done with a "join()". It will depend on the library you use to run your background process.
Note that in most cases people expect processes to exit whenever they ask them to exit. Showing a window saying that your process is still running in the background (is busy) is a good idea for a process which runs a GUI. Especially if you run your process from the console, it would not exit immediately after you close the window, so letting the user know what's happening is important; otherwise they are likely to hit Ctrl-C and kill the whole thing.
If you'd like the main to return but be able to keep the background threads running, it's a tad bit more complicated, but it uses both of the solutions I just mentioned:
create a pipe()
fork() (but no execve())
from within the forked app. (child) open Gtk window, background thread, etc.
when last Gtk window is closed, send message over pipe
parent process receives message and quits immediately
child process still attempts a "join()" to wait for the background thread
This way, the background process with the threads created in (3) can continue to run (your function still needs to wait for all the threads to end with the "join()" call), while the user has a sense that "the app. is done", since the console returns to the next line of the prompt even though a background process is still running.
The pipe() and wait on a message on the pipe() is not required if you don't mind having your application always running in the background.
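A rough sketch of that scheme, with the Gtk main loop and the background task only stubbed out as comments:

#include <unistd.h>
#include <sys/types.h>
#include <thread>

int main()
{
    int fds[2];
    if (pipe(fds) != 0)
        return 1;

    pid_t pid = fork();
    if (pid > 0) {                       // parent: wait for the "window closed" byte, then quit
        close(fds[1]);
        char byte = 0;
        (void)read(fds[0], &byte, 1);    // blocks until the child reports the window closed
        return 0;
    }

    // child: run the GUI and the background work
    close(fds[0]);
    std::thread worker([] { /* long-running background task */ });

    // ... run the Gtk main loop here; when the last window closes: ...
    char done = 1;
    (void)write(fds[1], &done, 1);       // parent returns to the prompt now
    worker.join();                       // child keeps running until the thread finishes
    return 0;
}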
Note: that usage of fork() is most often seen when creating processes that want to run in the background (i.e. services, often called daemons under Unix). That's how they get their PPID set to 1.
On Windows, you need to create a Windows Service or run the process in the background; on Linux, you need to create a daemon or run the process in the background. Services allow the process to start automatically on boot.