Use system() to create an independent child process - C++

I have written a program where I create a thread in main and use system() to start another process from that thread. I also start the same process with system() directly from main. The process started from the thread seems to stay alive even when the parent process dies, but the one called from main dies with the parent. Any ideas why this is happening?
Please find the code structure below:
void *thread_func(void *arg)
{
    system(command.c_str());
}

int main()
{
    pthread_create(&thread_id, NULL, thread_func, NULL);
    ....
    system(command.c_str());
    while (true)
    {
        ....
    }
    pthread_join(thread_id, NULL);
    return 0;
}

My suggestion is: don't do what you're doing. If you want to create an independently running child process, research the fork and exec family of functions, which is what system uses under the hood.
Threads aren't independent the way processes are. When your "main" process ends, all of its threads end as well. In your specific case the thread appears to keep running while the main process appears to end, but that's because of the pthread_join call: it simply waits for the thread to exit, so your process is in fact still alive. If you remove the join call, the thread (and your "command") will be terminated.
There are ways to detach threads so they run a little more independently (for example, you don't have to join a detached thread), but the main process still can't end. Instead you have to end only the main thread, which keeps the process running for as long as there are detached threads still running.
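For illustration, a minimal sketch of that detached-thread variant, reusing the question's structure (the command string is a placeholder):

#include <cstdlib>   // system
#include <pthread.h>
#include <string>

const std::string command = "./myLongProgram"; // placeholder command

void *thread_func(void *arg)
{
    system(command.c_str());
    return nullptr;
}

int main()
{
    pthread_t thread_id;
    pthread_create(&thread_id, nullptr, thread_func, nullptr);
    pthread_detach(thread_id); // a detached thread must not be joined

    // Terminate only the main thread; the process itself keeps running
    // until the last thread (here, the detached one) has finished.
    pthread_exit(nullptr);
}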
Using fork and exec is actually quite simple:
#include <cerrno>    // errno
#include <cstdlib>   // exit
#include <cstring>   // strerror
#include <iostream>
#include <unistd.h>  // fork, execl

pid_t pid = fork();
if (pid == 0)
{
    // We are in the child process; execute the command
    execl(command.c_str(), command.c_str(), (char *)NULL);
    // If execl returns, there was an error
    std::cout << "Exec error: " << errno << ", " << strerror(errno) << '\n';
    // Exit the child process
    exit(1);
}
else if (pid > 0)
{
    // The parent process; do whatever is needed here.
    // The parent can even exit while the child is still running,
    // since the child is an independent process.
}
else
{
    // Error forking; we are still in the parent process
    // (there is no child process at this point)
    std::cout << "Fork error: " << errno << ", " << strerror(errno) << '\n';
}
The exact variant of exec to use depends on command. If it's a valid path (absolute or relative) to an executable program, then execl works well. If it's a "command" to be found in the PATH, use execlp.
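For instance (hypothetical program name myprog; note that the exec argument list must end with a null pointer):

#include <unistd.h>

if (fork() == 0) {
    // command is a path (absolute or relative) to the executable:
    execl("/opt/tools/myprog", "myprog", (char *)NULL);
    // or, if command should instead be looked up in PATH:
    // execlp("myprog", "myprog", (char *)NULL);
    _exit(1); // only reached if exec failed
}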

There are two points here that I think you've missed:
First, system is a synchronous call. That means your program (or, at least, the thread calling system) waits for the child to complete. So if your command is long-running, both your main thread and your worker thread will be blocked until it completes.
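A minimal illustration of that blocking behaviour (assuming a POSIX shell that provides sleep):

#include <cstdlib>
#include <iostream>

int main()
{
    std::cout << "before\n";
    int status = std::system("sleep 5"); // blocks this thread for ~5 seconds
    std::cout << "after, status = " << status << '\n';
}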
Secondly, you are joining the worker thread at the end of main. This is the right thing to do, because unless you join or detach a thread you have undefined behaviour; however, it's not what you really intended to do. The end result is not that the child process continues after your main process ends: your main process is still alive! It is blocked on the pthread_join call, which is waiting to wrap up the worker thread, which is still running command.
In general, assuming you wish to spawn a new process entirely unrelated to your main process, threads are not the way to do it. Even if you detach your thread, it still belongs to your process, and you are still required to let it finish before your process terminates. You can't detach from a process using threads.
Instead, you'll need OS facilities such as fork and exec (or a friendly C++ wrapper around this functionality, such as Boost.Process). This is the only way to truly spawn a new process from within your program.
But, you can cheat! If command is a shell command, and your shell supports background jobs, you could put & at the end of the command (this is Bash syntax, for example) to make the system call:
Ask the shell to spin off a new process
Wait for it to do that
The new process will now continue to run in the background
For example:
const std::string command = "./myLongProgram &"; // note the trailing '&'
However, again, this is something of a hack, and a proper fork mechanism that resides within your program's logic should be preferred for maximum portability and predictability.

Kill all child processes that used exec, C++

I don't know if this is the best way, but I have a random number of child processes that have been exec'd, and I wanted to implement a way to kill them without using Ctrl+C. I was thinking of keeping a set of their pids and then checking that set whenever I want to kill them from the parent process.
The way I was trying to do it was something like this:
set<pid_t> pids;
pid_t id = fork();
if (id == 0)
{
    execlp("./somewhere", "./somewhere", something.c_str(), NULL);
    cout << "Didn't exec" << endl;
    exit(0);
}
pids.insert(id); // record the child's pid in the parent (not in the child)

for (auto i : pids)
{
    kill(i, something?)
}
I still don't quite know how to use the kill function or how pids work, so I don't know if this will work in any way. I just did a simple project in C for college and thought I could try something more complex in C++.
Anyway, the objective is for the parent process to be able to kill a single child process out of an undefined number of running child processes, or kill them all whenever the user types quit.
kill() on pid 0 sends the signal to all members of the calling process group:
If pid is 0, sig shall be sent to all processes (excluding an unspecified set of system processes) whose process group ID is equal to the process group ID of the sender, and for which the process has permission to send a signal.
If you want to kill only certain processes (as seems to be your case), take a look at Grouping child processes with setpgid().
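As an illustration of the per-process variant, a sketch under the assumption that the parent records each child's pid right after fork (the function names here are made up):

#include <signal.h>    // kill
#include <sys/wait.h>  // waitpid
#include <set>

std::set<pid_t> pids; // the parent fills this: pids.insert(id) after each successful fork

// Kill a single child out of the set.
void kill_one(pid_t pid)
{
    if (kill(pid, SIGTERM) == 0) {
        waitpid(pid, nullptr, 0); // reap it so it doesn't become a zombie
        pids.erase(pid);
    }
}

// Kill every child, e.g. when the user types "quit".
void kill_all()
{
    for (pid_t pid : pids) {
        kill(pid, SIGTERM);
        waitpid(pid, nullptr, 0);
    }
    pids.clear();
}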

How to run a thread infinitely without blocking the main thread in C++?

I am trying to make a native app, and I need a separate thread that freezes some values (constantly overwriting them, with a delay) in the background; I don't need it to return anything to main. But after creating the thread, when I detach from it, it does not do the freezing.
pthread_create(&frzTh, NULL, freezingNow, NULL);
pthread_detach(frzTh);
But if I join the thread then it performs the freezing, but my main thread gets blocked waiting for the child thread to finish, and since the child runs infinitely, there is no coming out.
pthread_create(&frzTh, NULL, freezingNow, NULL);
pthread_join(frzTh,NULL);
So I tried using fork() to create a child process instead of a thread. Now I am able to perform all tasks in parallel with my main, but this causes a lot of memory usage and leads to heating of the device.
pid_t pid_c = fork();
if (pid_c == 0 && freeze) {
    while (freeze) {
        Freeze();
        usleep(delay);
    }
}
So, what is the best way to do this?
The best example is the GameGuardian app and its freezing mechanism.
To do this properly, you need to have a mechanism by which the main thread can cause the child thread to exit (a simple std::atomic<bool> pleaseQuitNow that the child thread tests periodically, and the main thread sets to true before calling pthread_join(), will do fine).
As for why you need to call pthread_join() before exiting, rather than just allowing the main thread to exit while the child thread remains running: there is often run-time-environment code that executes after main() returns that tears down various run-time data structures that are shared by all threads in the process. If any threads are still running while the main-thread is tearing down these data structures, it is possible that the still-running thread(s) will try to access one of these data structures while it is in a destroyed or half-destroyed state, causing an occasional crash-on-exit.
(Of course, if your program never exits at all, or if you don't care about an occasional crash-on-exit, you could skip the orderly shutdown of your child thread; but since it's not difficult to implement, you're better off doing things the right way and avoiding embarrassment later when your app crashes at the end of a demo.)
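A minimal sketch of that mechanism, reusing the question's thread names (the Freeze() call and the delay are placeholders from the question):

#include <atomic>
#include <pthread.h>
#include <unistd.h>

std::atomic<bool> pleaseQuitNow{false};

void *freezingNow(void *arg)
{
    while (!pleaseQuitNow.load()) {
        // Freeze();        // the question's periodic work would go here
        usleep(100 * 1000); // 100 ms delay between iterations
    }
    return nullptr;
}

int main()
{
    pthread_t frzTh;
    pthread_create(&frzTh, nullptr, freezingNow, nullptr);

    // ... the main thread does its own work here ...

    pleaseQuitNow = true;         // ask the worker to exit
    pthread_join(frzTh, nullptr); // then wait for it to finish
    return 0;
}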
If you want to run something asynchronously alongside the main thread until main ends, I recommend promise/future in C++.
Here's an example; good luck :)
#include <future>
#include <iostream>
#include <thread>

void DoWork(std::promise<int> p)
{
    // do something (child thread)
    // save the value in p
    p.set_value(10);
}

int main(void)
{
    std::promise<int> p;
    auto future = p.get_future();
    std::thread worker{ DoWork, std::move(p) };
    // do something you want
    // retrieve the result (blocks until set_value has been called)
    int result = future.get();
    std::cout << result << '\n'; // prints 10
    worker.join(); // join before the thread object is destroyed
}

Closing pipe does not interrupt read() in child process spawned from thread

In a Linux application I'm spawning multiple programs via fork/execvp and redirecting the standard IO streams to pipes for IPC. I spawn a child process, write some data into the child's stdin pipe, close stdin, and then read the child's response from the stdout pipe. This worked fine until I executed multiple child processes at the same time, using an independent thread per child process.
As soon as I increase the number of threads, I often find that the child processes hang while reading from stdin, although read should immediately return EOF because the stdin pipe has been closed by the parent process.
I've managed to reproduce this behaviour in the following test program. On my systems (Fedora 23, Ubuntu 14.04; g++ 4.9, 5, 6 and clang 3.7) the program often simply hangs after three or four child processes have exited. Child processes that have not exited are hanging at read(). Killing any child process that has not exited causes all other child processes to magically wake up from read() and the program continues normally.
#include <chrono>
#include <cstdio>   // perror
#include <cstdlib>  // quick_exit
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

#include <sys/fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

#define HANDLE_ERR(CODE)     \
    {                        \
        if ((CODE) < 0) {    \
            perror("error"); \
            quick_exit(1);   \
        }                    \
    }

int main()
{
    std::mutex stdout_mtx;
    std::vector<std::thread> threads;
    for (size_t i = 0; i < 8; i++) {
        threads.emplace_back([&stdout_mtx] {
            int pfd[2]; // Create the communication pipe
            HANDLE_ERR(pipe(pfd));

            pid_t pid; // Fork this process
            HANDLE_ERR(pid = fork());
            if (pid == 0) {
                HANDLE_ERR(close(pfd[1])); // Child, close write end of pipe
                for (;;) { // Read data from pfd[0] until EOF or other error
                    char buffer;
                    ssize_t bytes;
                    HANDLE_ERR(bytes = read(pfd[0], &buffer, 1));
                    if (bytes < 1) {
                        break;
                    }
                    // Allow time for thread switching
                    std::this_thread::sleep_for(std::chrono::milliseconds(
                        100)); // This sleep is crucial for the bug to occur
                }
                quick_exit(0); // Exit, do not call C++ destructors
            }
            else {
                { // Some debug info
                    std::lock_guard<std::mutex> lock(stdout_mtx);
                    std::cout << "Created child " << pid << std::endl;
                }
                // Close the read end of the pipe
                HANDLE_ERR(close(pfd[0]));
                // Send some data to the child process
                HANDLE_ERR(write(pfd[1], "abcdef\n", 7));
                // Close the write end of the pipe, wait for the process to exit
                int status;
                HANDLE_ERR(close(pfd[1]));
                HANDLE_ERR(waitpid(pid, &status, 0));
                { // Some debug info
                    std::lock_guard<std::mutex> lock(stdout_mtx);
                    std::cout << "Child " << pid << " exited with status "
                              << status << std::endl;
                }
            }
        });
    }
    // Wait for all threads to complete
    for (auto &thread : threads) {
        thread.join();
    }
    return 0;
}
Compile using
g++ test.cpp -o test -pthread -std=c++11
Note that I'm perfectly aware that mixing fork and threads is potentially dangerous, but please keep in mind that in the original code I immediately call execvp after forking, and that I don't have any shared state between the child process and the main program, except for the pipes specifically created for IPC. My original code (without the threading part) can be found here.
To me this almost seems like a bug in the Linux kernel, since the program continues correctly as soon as I kill any of the hanging child processes.
This problem is caused by two fundamental principles of how fork and pipes work in Unix: (a) the pipe description is reference counted, so a pipe end is only closed once all file descriptors referring to it are closed; and (b) fork duplicates all open file descriptors of a process.
In the above code, the following race condition can happen: if a thread switch occurs between one thread's pipe and fork system calls, and another thread calls fork in that window, the first thread's pipe file descriptors are duplicated into an unrelated child, leaving the write/read ends open multiple times. Remember that all duplicates must be closed for an EOF to be generated, which will not happen while a stray duplicate is held by an unrelated process.
The best solution is to use the pipe2 system call with the O_CLOEXEC flag and to call exec in the child process immediately after creating a controlled duplicate of the file descriptor with dup2:
HANDLE_ERR(pipe2(pfd, O_CLOEXEC)); // needs <fcntl.h> (and _GNU_SOURCE on older glibc)
HANDLE_ERR(pid = fork());
if (pid == 0) {
    HANDLE_ERR(close(pfd[1]));              // Child, close write end of pipe
    HANDLE_ERR(dup2(pfd[0], STDIN_FILENO)); // dup2 clears FD_CLOEXEC on the copy
    HANDLE_ERR(execlp("cat", "cat", (char *)NULL));
}
Note that the FD_CLOEXEC flag is not copied by the dup2 system call, so the descriptor dup2'd onto stdin survives the exec. This way all child processes automatically close all the file descriptors they should not have received as soon as they reach the exec system call.
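If pipe2 is not available, the flag can be set after the fact with fcntl, though this reopens exactly the race window the man page excerpt below warns about (another thread may fork between the pipe and fcntl calls):

HANDLE_ERR(pipe(pfd));
HANDLE_ERR(fcntl(pfd[0], F_SETFD, FD_CLOEXEC));
HANDLE_ERR(fcntl(pfd[1], F_SETFD, FD_CLOEXEC));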
From the man page for open(2), on O_CLOEXEC:
O_CLOEXEC (since Linux 2.6.23)
Enable the close-on-exec flag for the new file descriptor. Specifying this flag permits a program to avoid additional fcntl(2) F_SETFD operations to set the FD_CLOEXEC flag. Note that the use of this flag is essential in some multithreaded programs, because using a separate fcntl(2) F_SETFD operation to set the FD_CLOEXEC flag does not suffice to avoid race conditions where one thread opens a file descriptor and attempts to set its close-on-exec flag using fcntl(2) at the same time as another thread does a fork(2) plus execve(2). Depending on the order of execution, the race may lead to the file descriptor returned by open() being unintentionally leaked to the program executed by the child process created by fork(2). (This kind of race is in principle possible for any system call that creates a file descriptor whose close-on-exec flag should be set, and various other Linux system calls provide an equivalent of the O_CLOEXEC flag to deal with this problem.)
The phenomenon of all child processes suddenly exiting when one child process is killed can be explained by comparison with the dining philosophers problem: just as killing one of the philosophers resolves the deadlock, killing one of the processes closes one of the duplicated file descriptors, triggering an EOF in another child process, which then exits and frees one of the duplicated file descriptors in turn, and so on.
Thank you to David Schwartz for pointing this out.

How to restart a multithreaded C++ program inside the code?

As I describe in the title, I would like to have a thread with an if statement that is checked every minute; if it is true, the whole program should restart. Any suggestions?
void* checkThread(void* arg)
{
    if (statement)
        // restart procedure
    sleep(60);
}

int main()
{
    pthread_create(&thread1, NULL, checkThread, main_object);
    pthread_create();
    pthread_create();
}
If you are going for the nuke-it-from-orbit approach (i.e. you don't want to trust your code to do a controlled shutdown reliably), then having the kill-and-auto-relaunch mechanism inside the same process space as the other code is not a very robust approach. For example, if one of the other threads were to crash, it would take your auto-restart thread down with it.
A more fail-safe approach would be to have your auto-restart-thread launch all of the other code in a sub-process (via fork(); calling exec() is allowable but not necessary in this case). After 60 seconds, the parent process can kill the child process it created (by calling kill() on the process ID that fork() returned) and then launch a new one.
The advantage of doing it this way is that the separating of memory spaces protects your relauncher-code from any bugs in the rest of the code, and the killing of the child process means that the OS will handle all the cleanup of memory and other resources for you, so there is less of a worry about things like memory or file-handle leaks.
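A minimal sketch of that approach; run_everything() is a hypothetical stand-in for the code that creates and runs your worker threads:

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

void run_everything(); // hypothetical: creates the worker threads and runs the program

int main()
{
    for (;;) {
        pid_t child = fork();
        if (child < 0)
            return 1; // fork failed
        if (child == 0) {
            run_everything(); // child process: run the real program
            _exit(0);
        }
        sleep(60); // parent: let the child run for a minute
        // (a real checker would test its restart condition here first)
        kill(child, SIGKILL);       // nuke the child from orbit
        waitpid(child, nullptr, 0); // reap it so it doesn't linger as a zombie
    }
}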
If you want a "nice" way to do it, you set a flag and then politely wait for the threads to finish before relaunching everything.
std::atomic<bool> kill_and_restart_everything{false};

void main_thread() {
    do {
        kill_and_restart_everything = false;

        // create your threads.
        pthread_create(&thread1, NULL, checkThread, main_object);
        pthread_create(&thread2, ...);
        pthread_create(&thread3, ...);

        // wait for your threads.
        pthread_join(thread1, nullptr);
        pthread_join(thread2, nullptr);
        pthread_join(thread3, nullptr);
    } while (kill_and_restart_everything);
}

void* checkThread(void* arg) {
    while (!kill_and_restart_everything) {
        if (statement)
            kill_and_restart_everything = true;
        else
            sleep(60);
    }
    return nullptr;
}

void* workerThread(void* arg) {
    // do stuff. periodically check
    if (kill_and_restart_everything) {
        // terminate this thread early.
        // do it cleanly too: release any resources, etc. (RAII is your friend here)
        return nullptr;
    }
    // do other stuff, remember to have that check happen fairly regularly.
}
This way, whenever if (statement) is true, it sets a boolean that tells each thread to shut down. The program then waits for each thread to finish, and starts everything over again.
Downsides: if you're using any global state, that data will not be cleaned up and can cause problems for you. And if a thread doesn't check your signal, you could be waiting a looooong time.
If you want to kill everything (nuke it from orbit) and restart, you could simply wrap this program in a shell script (which can then detect whatever condition you want, kill -9 the program, and relaunch it).
Use the exec system call to restart the process from the start of the program.
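A minimal sketch of that idea (Linux-specific: /proc/self/exe names the running binary; this assumes no open files or other state need to survive the restart):

#include <unistd.h>
#include <cstdio>

static char **g_argv; // saved in main() so the checker thread can restart us

void restart_program()
{
    execv("/proc/self/exe", g_argv); // replace this process with a fresh copy
    perror("execv");                 // only reached if execv failed
}

int main(int argc, char **argv)
{
    g_argv = argv;
    // ... create threads, etc. ...
}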
You can do it in two parts:
Part 1: one thread that checks the statement and sets a boolean to true when the program needs to restart.
This is the "checker" thread.
Part 2: one thread that does the actual work:
this one will "relaunch" the program for as long as needed.
This relaunch consists of a big loop.
In the loop:
create a thread that actually executes your program (the task you want executed),
end that task when the boolean is set to true,
and create another thread to replace the one that was terminated.
The main of your program then consists of launching the "checker" and the "relauncher".
Tell me if you have any questions or remarks; I can elaborate or add some code.

Avoiding the production of zombie processes in C++

Very strange bug, perhaps someone will see something I'm missing.
I have a C++ program which forks off a bash shell, and then passes commands to it.
Periodically, the commands will contain nonsense and the bash process will hang. I detect this using sem_timedwait, and then run a little function like this:
if (kill(*bash_pid, SIGKILL)) {
    cerr << "Error sending SIGKILL to the bash process!" << endl;
    exit(1);
} else {
    // collect exit status
    long counter = 0;
    do {
        pid = waitpid(*bash_pid, &status, WNOHANG);
        if (pid == 0) { // status not available yet
            sleep(1);
        }
        if (counter++ > 5) {
            cerr << "ERROR: Bash child process ignored SIGKILL >5 sec!" << endl;
        }
    } while (pid != *bash_pid && pid != -1);
    if (pid == -1) {
        cerr << "Failed to clean up zombie bash process!" << endl;
        exit(1);
    }
    // re-initialize bash process
    *bash_pid = init_bash();
}
Assuming I understand the workings of waitpid correctly, this should first send SIGKILL to the shell and then essentially sit in a polling loop, trying to reap the resulting zombie. Eventually it succeeds, and a new bash process is started with init_bash().
At least, that's what should happen. Instead, the child process's exit status is never collected and it continues to exist as a zombie. In spite of this, the parent exits the loop, manages to restart the bash process, and continues with normal execution. Eventually too many zombies accumulate and the system runs out of pids.
Additionally:
Fork is called in exactly one place in the program, inside init_bash.
Checks prevent init_bash from being called except once at the program's start and after a call to the function above.
Thoughts?
Articles that I have read indicate that a zombie process results when a child process exits but the parent never collects the child's exit status.
This article describes several ways to kill a zombie process from the command line. One technique is to use a signal other than SIGKILL, for instance SIGTERM.
This article has an answer which suggests SIGKILL should not be used.
One of the techniques is to kill the parent; its children, including any zombies, are then reparented to init, which cleans them up. The author notes that there appear to be child processes that just remain as zombies until the OS is restarted.
You do not mention the mechanism used to communicate the commands to the child process. However, one option may be to turn the child process loose by disconnecting it from its parent, similar to the way a child of a terminal process can be disconnected from the terminal session. That way the child becomes its own process and, if there is a problem, may exit without becoming a zombie.
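For example, the classic double-fork idiom implements exactly this kind of disconnect: the intermediate child exits immediately, the grandchild is reparented to init, and init reaps it when it terminates, so it can never linger as a zombie of your process (the exec'd program here is illustrative):

#include <sys/wait.h>
#include <unistd.h>

pid_t child = fork();
if (child == 0) {
    // Intermediate child: fork the real worker, then exit immediately.
    if (fork() == 0) {
        execl("/bin/bash", "bash", (char *)NULL); // grandchild: the real process
        _exit(1); // only reached if exec failed
    }
    _exit(0);
}
// Parent: reap the short-lived intermediate child; this cannot block for long.
waitpid(child, nullptr, 0);
// The grandchild now belongs to init, which will collect its exit status.

Note the trade-off: the parent can no longer waitpid on the grandchild to learn its exit status, which may or may not matter in your setup.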