I want to kill a process that I have the PID of, so my code is the following:
pid_t pid = 28880;
boost::process::child process { pid };
std::cout << "Running: " << process.running() << "\n";
process.terminate();
I noticed, though, that running() always returns false (no matter which PID I use), and based on the source code, terminate() isn't even called.
Digging a little deeper, it seems the Linux function waitpid is being called, and it always returns -1 (which means some error has occurred, rather than 0, which would mean the process is still running).
WIFSIGNALED returns 1 and WTERMSIG returns 5.
Am I doing something wrong here?
Boost.Process is a library for the execution (and manipulation) of child processes. Yours is not (necessarily) a child process, and it was not created by Boost.
Indeed running() doesn't work unless the process is attached. Using that constructor by definition results in a detached process.
This leaves the interesting question of why this constructor exists, but let's focus on the question at hand.
Detached processes cannot be joined or terminated.
The terminate() call is a no-op.
I would suggest writing the logic yourself - the code is not that complicated (POSIX kill or TerminateProcess).
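For instance, a minimal POSIX sketch (error handling kept to a bare minimum; the helper name is just for illustration) could be:
#include <sys/types.h> // pid_t
#include <signal.h>    // POSIX kill()
#include <cerrno>
#include <cstring>
#include <iostream>

// Send SIGTERM to a process we did not spawn ourselves.
bool terminate_pid(pid_t pid) {
    if (::kill(pid, SIGTERM) == -1) {
        std::cerr << "kill failed: " << std::strerror(errno) << "\n";
        return false;
    }
    return true;
}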
If you want, you can cheat by using implementation details from the library:
#include <boost/process.hpp>
#include <cstdlib>      // std::atoi
#include <iostream>
#include <string_view>
#include <vector>
namespace bp = boost::process;
int main(int argc, char** argv) {
for (std::string_view arg : std::vector(argv + 1, argv + argc)) {
bp::child::child_handle pid{std::atoi(arg.data())};
std::cout << "pid " << pid.id();
std::error_code ec;
bp::detail::api::terminate(pid, ec);
std::cout << " killed: " << ec.message() << std::endl;
}
}
With e.g.
cat &
./sotest $!
This prints the pid and the kill result for each argument.
I am using this really simple code to try to create a mutex
#include <windows.h>

int main(){
HANDLE hMutex = ::CreateMutex(nullptr, FALSE, L"SingleInstanceMutex");
if(!hMutex){
wchar_t buff[1000];
_snwprintf(buff, sizeof(buff) / sizeof(buff[0]), L"Failed to create mutex (Error: %lu)", ::GetLastError());
::MessageBox(nullptr, buff, L"Single Instance", MB_OK);
return 0x1;
} else {
::MessageBox(nullptr, L"Mutex Created", L"Single Instance", MB_OK);
}
return 0x0;
}
And I get the message "Mutex Created" as if it were created correctly, but when I search for it using the Sysinternals WinObj tool, I can't find it.
Also, if I start the program again while another instance is already running, I always get the "Mutex Created" message and never an error indicating that the mutex already exists.
I'm trying it on a Windows 7 VM.
What am I doing wrong?
Ah, I'm cross-compiling on Linux using:
i686-w64-mingw32-g++ -static-libgcc -static-libstdc++ Mutex.cpp
Thank you!
In order to use a Windows mutex (whether a named one like yours or an unnamed one), you need to use the following Win APIs:
CreateMutex - to obtain a handle to the mutex Windows kernel object. In the case of a named mutex (like yours), multiple processes can succeed in getting this handle. The first one will cause the OS to create a new named mutex, and the others will get a handle referring to that same mutex.
In case the function succeeds and you get a valid handle to the named mutex, you can determine whether the mutex already existed (i.e. that another process already created the mutex) by checking if GetLastError returns ERROR_ALREADY_EXISTS.
WaitForSingleObject - to lock the mutex for exclusive access. This function is actually not specific to mutex and is used for many kernel objects. See the link above for more info about Windows kernel objects.
ReleaseMutex - to unlock the mutex.
CloseHandle - to release the acquired mutex handle (as usual with Windows handles). The OS will automatically close the handle when the process exits, but it is good practice to do it explicitly.
A complete example:
#include <Windows.h>
#include <cstdio>   // std::getchar
#include <iostream>
int main()
{
// Create the mutex handle:
HANDLE hMutex = ::CreateMutex(nullptr, FALSE, L"SingleInstanceMutex");
if (!hMutex)
{
std::cout << "Failed to create mutex handle." << std::endl;
// Handle error: ...
return 1;
}
bool bAlreadyExisted = (GetLastError() == ERROR_ALREADY_EXISTS);
std::cout << "Succeeded to create mutex handle. Already existed: " << (bAlreadyExisted ? "YES" : "NO") << std::endl;
// Lock the mutex:
std::cout << "Atempting to lock ..." << std::endl;
DWORD dwRes = ::WaitForSingleObject(hMutex, INFINITE);
if (dwRes != WAIT_OBJECT_0)
{
std::cout << "Failed to lock the mutex" << std::endl;
// Handle error: ...
return 1;
}
std::cout << "Locked." << std::endl;
// Do something that requires the lock: ...
std::cout << "Press ENTER to unlock." << std::endl;
std::getchar();
// Unlock the mutex:
if (!::ReleaseMutex(hMutex))
{
std::cout << "Failed to unlock the mutex" << std::endl;
// Handle error: ...
return 1;
}
std::cout << "Unlocked." << std::endl;
// Free the handle:
if (!CloseHandle(hMutex))
{
std::cout << "Failed to close the mutex handle" << std::endl;
// Handle error: ...
return 1;
}
return 0;
}
Error handling:
As you can see in the documentation links above, when CreateMutex, ReleaseMutex, and CloseHandle fail, you should call GetLastError to get more info about the error. WaitForSingleObject returns a specific value upon error (see the documentation link above). This should be done as part of the // Handle error: ... sections.
Note:
Using a named mutex for IPC (interprocess communication) might be the only good use case for native Windows mutexes.
For a regular unnamed mutex it's better to use one of the available standard library mutex types: std::mutex, std::recursive_mutex, or std::recursive_timed_mutex (the last two support repeated locking by the same thread, similar to a Windows mutex).
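For illustration, a minimal sketch (names chosen just for this example):
#include <mutex>

std::mutex g_mutex;   // unnamed, process-local mutex
int g_counter = 0;

void increment()
{
    std::lock_guard<std::mutex> lock(g_mutex); // locks here, unlocks at end of scope
    ++g_counter;
}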
I am looking to put in some code (temporary at the moment, but as a thought experiment I am considering working it in long term) that would accomplish two things:
Allow toggling of logs via signals.
Trap all catchable signals and output pid/thread/signal info to the logs, followed by returning to default behavior.
Here is some compilable pseudo-code which shows what I would like to achieve.
#include <iostream>
#include <csignal>
#include <thread> // for this_thread
#include <unistd.h> // For getpid() and fork()
#include <cstdlib>
// What is a safe way of determining the number of
// available signals?
// ** Externally defined values
const int verbose_debug_signal = 30;
const int trace_logging_enabled_default = 0;
// ** Component Control Class Definition Start
volatile sig_atomic_t trace_logging_enabled
= trace_logging_enabled_default;
void handle_signal(int sig)
{
std::cout << "Process PID: " << std::dec << getpid()
<< " Caught signal " << std::dec
<< sig << " on thread 0x" << std::hex
<< std::this_thread::get_id() << "\n";
if(sig == verbose_debug_signal)
{
trace_logging_enabled = ~trace_logging_enabled;
std::cout << "Trace Logging "
<< (!!trace_logging_enabled ? "enabled\n" : "disabled\n");
}
else
{
signal(sig, SIG_DFL);
raise(sig);
}
}
class ComponentControlClass
{
public:
ComponentControlClass()
: trace_logging_enabled_ptr(&trace_logging_enabled)
{}
~ComponentControlClass(){}
private:
volatile sig_atomic_t *trace_logging_enabled_ptr;
};
int main()
{
for(int i = 1; i < NSIG; ++i)
{
signal(i, handle_signal);
}
// BAH! stupid SIP strikes again. I think I would
// have to sign this to make it work.
// pid_t children[3];
// for (auto& child : children)
//     child = fork();
while(1);
return 0;
}
Compiled with the -O3 flag to make sure the volatile fields don't get optimized away. The result of running the code in window 1 and sending kill -30 <pid> from window 2, exactly 4 times:
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging enabled
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging disabled
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging enabled
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging disabled
This is exactly what I hoped would happen. In this example the class would represent a component control class, which is the entry point for our shared objects in a homegrown framework. In other words, this is where I would need to pass external elements into the scheme, such as a pointer to the volatile flag indicating the logging state.
My question(s)
I've never seen anyone do anything like this so I am certain there is a good reason. I am trying to understand the implications.
I found a few articles on complex signal handling, and I was having trouble determining what applied. For all other signals aside from the one I am using to toggle I am trapping and re-enabling, so I wouldn't think this would be an issue. Is it?
When the toggling signal is sent, I would want to propagate the logging state to child processes. I haven't been able to simulate this easily on my Mac because of SIP, and I haven't quite figured out how I would do this; I've only confirmed that it won't happen by itself, because signals work on a per-thread basis.
Now that I'm bouncing between Linux and macOS, I am seeing that signal.h is quite different. My main concern is that different macros are used for the maximum signal number, and Apple seems to have only 32 signals where Linux uses 64. Is there a boilerplate way of approaching the signals (specifically the max/min values) which is more portable? Something like this, but hopefully more robust:
int sigmax = 0;
#if defined(__linux__)
sigmax = _NSIG;
#elif defined(__APPLE__)
sigmax = NSIG;
#endif
I'm handling incoming connections to a socket in a separate std::thread for each client connection. When trying to do a read() from the socket, the program crashes.
std::thread in_conn_th(handle_new_connection, in_socket); // <-- creating a new thread and passing the handle_new_connection function into the thread with the socket descriptor param
Here is the definition of handle_new_connection():
waiterr::operation_codes waiterr::Waiter::handle_new_connection(int incoming_socket) {
std::cout << "Here comes " << incoming_socket << "\n";
char buffer[30000] = {0};
int val_read = read(incoming_socket, buffer, 30000); // <-- Error
std::cout << "Here comes 2\n";
std::cout << buffer << std::endl << std::endl;
write(incoming_socket, "Some response", 13);
std::cout << "* Msg sent *\n";
close(incoming_socket);
return operation_codes(OK);
}
Error
shantanu#Shantanus-MacBook-Pro webserver % ./test1.o
* Waiting for new connection *
libc++abi: terminating
Here comes 4
zsh: abort ./test1.o
If I just call handle_new_connection() without spawning a new thread, the operation is successful and the response is shown in the client.
So I'm pretty sure it's some threading issue that I'm unaware of.
Environment -
Apple M1 Silicon; running g++ natively on ARM.
Edit
Function declaration for handle_new_connection():
static enum operation_codes handle_new_connection(int incoming_socket);
I used pthread_t instead of std::thread and it worked just fine.
Instead of
std::thread in_conn_th(handle_new_connection, in_socket);
I used
pthread_t in_conn_th;
pthread_create(&in_conn_th, NULL, handle_new_connection, (void*)(&in_socket));
And changed the function definition to receive the void *
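Roughly, the adapted function ended up looking like this (a sketch of the change, using the same names as above):
// pthread-compatible signature: takes and returns void*.
static void* handle_new_connection(void* arg) {
    int incoming_socket = *static_cast<int*>(arg); // the pointed-to int must stay valid until this runs
    // ... same body as before, using incoming_socket ...
    return nullptr;
}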
Do not forget to include the pthread.h header.
I have a C++ program which checks whether any action has to be done and, if so, starts the right processor program. However, since it runs every x minutes, it also uses lock files to check that the processor isn't still running.
I use the following function to acquire the lock:
int LockFile(string FileNameToLock) {
FileNameToLock += ".lock";
int fd = open(FileNameToLock.c_str(), O_RDWR | O_CREAT, 0666);
int rc = flock(fd, LOCK_EX | LOCK_NB);
if (rc || rc == -1) {
cout << errno << endl;
cout << strerror(errno) << endl;
return -1;
}
return fd;
}
The code that is being executed:
[...]
if (LockFile(ExecuteFileName, Extra) == -1) {
cout << "Already running!" << endl; //MAIN IS ALREADY RUNNING
//RETURNS ME Resource temporarily unavailable when processor is running from an earlier run
exit(EXIT_SUCCESS);
}
if (StartProcessor) { //PSEUDO
int LockFileProcessor = LockFile("Processor");
if (LockFileProcessor != -1) {
string Command = "nohup setsid ./Processor"; //NOHUP CREATES ZOMBIE?
Command += IntToString(result->getInt("Klantnummer"));
Command += " > /dev/null 2>&1 &"; //DISCARD OUTPUT
system(Command.c_str());
//STARTS PROCESSOR (AS ZOMBIE?)
}
}
The first run works well; however, when the main program runs again, LockFile returns -1, which means an error occurred (only when the processor is still running). The errno is 11, which results in the error message "Resource temporarily unavailable". Note that this only happens while the (zombie) processor is still running. (However, the main program has already terminated, which should close the file handle?)
For some reason, the zombie keeps the file handle to the lock file of the main script open???
I have no idea where to look for this problem.
SOLVED:
see my answer
No, errno 11 is EAGAIN/EWOULDBLOCK, which simply means that you cannot acquire the lock because the resource is already locked (see the documentation). You received that error (instead of blocking behaviour) because of the LOCK_NB flag.
EDIT: After some discussion it seems that the problem is due to flock() locks being preserved when subprocessing. To avoid this issue I recommend not using flock() for the lifetime of the process, but instead a touch-and-delete-at-exit strategy:
If file.lock exists then exit
Otherwise create file.lock and start processing
Delete file.lock at exit.
Of course there's a race condition here. In order to make it safe you would need another file with flock:
flock(common.flock)
If file.lock exists then exit
Otherwise create file.lock
Unlock flock(common.flock)
Start processing
Delete file.lock at exit
But this only matters if you expect simultaneous calls to main. If you don't (you said that a cron starts the process every 10min, no race here) then stick to the first version.
Side note: here's a simple implementation of such a (non-synchronized) file lock:
#include <string>
#include <fstream>
#include <stdexcept>
#include <cstdio>
// for sleep only
#include <chrono>
#include <thread>
class FileLock {
public:
FileLock(const std::string& path) : path { path } {
if (std::ifstream{ path }) {
// You may want to use a custom exception here
throw std::runtime_error("Already locked.");
}
std::ofstream file{ path };
};
~FileLock() {
std::remove(path.c_str());
};
private:
std::string path;
};
int main() {
// This will throw std::runtime_error if test.xxx exists
FileLock fl { "./test.xxx" };
std::this_thread::sleep_for(std::chrono::seconds { 5 });
// RAII: no need to delete anything here
return 0;
};
Requires C++11. Note that this implementation is not race-condition-safe, i.e. you generally need to flock() in the constructor, but in this situation it will probably be fine (i.e. when you don't start main in parallel).
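If you do need that, a rough sketch of guarding the existence check with flock() could look like the following (a hypothetical variant of the class above; error handling omitted):
#include <fcntl.h>     // open()
#include <sys/file.h>  // flock()
#include <unistd.h>    // close()
#include <cstdio>      // std::remove()
#include <fstream>
#include <stdexcept>
#include <string>

class SyncedFileLock {
public:
    SyncedFileLock(const std::string& path, const std::string& guardPath) : path { path } {
        int guard = ::open(guardPath.c_str(), O_RDWR | O_CREAT, 0666);
        ::flock(guard, LOCK_EX);              // serialize the check-and-create across processes
        if (std::ifstream{ path }) {
            ::flock(guard, LOCK_UN);
            ::close(guard);
            throw std::runtime_error("Already locked.");
        }
        std::ofstream file{ path };           // create the lock file
        ::flock(guard, LOCK_UN);
        ::close(guard);
    }
    ~SyncedFileLock() { std::remove(path.c_str()); }
private:
    std::string path;
};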
Since the system and fork calls seem to pass on the flock, I solved this issue by saving the commands to execute in a vector, unlocking the lock file, and only then looping over the Commands vector and executing each command. This leaves main with a very tiny window where it runs unlocked, but for now it seems to work great. This also survives ungraceful terminations.
[...]
if (LockFile(ExecuteFileName, Extra) == -1) {
cout << "Already running!" << endl; //MAIN IS ALREADY RUNNING
//RETURNS ME Resource temporarily unavailable when processor is running from an earlier run
exit(EXIT_SUCCESS);
}
vector<string> Commands;
if (StartProcessor) { //PSEUDO
int LockFileProcessor = LockFile("Processor");
if (LockFileProcessor != -1) {
string Command = "nohup setsid ./Processor"; //NOHUP CREATES ZOMBIE
Command += IntToString(result->getInt("Klantnummer"));
Command += " > /dev/null 2>&1 &"; //DISCARD OUTPUT
Commands.push_back(Command);
}
}
//UNLOCK MAIN
if (UnlockFile(LockFileMain)) {
for(auto const& Command: Commands) {
system(Command.c_str());
}
}
I am launching a command using the system() API (I am OK with using this API from C/C++). The command I pass may hang at times, and hence I would like to kill it after a certain timeout.
Currently I am using it as:
system("COMMAND");
I want to use it something like this:
Run a command using a system-independent API (I don't want to use CreateProcess since it is Windows-only).
Kill the process if it does not exit after X minutes.
Since system() is a platform-specific call, there cannot be a platform-independent way of solving your problem. However, system() is a POSIX call, so if it is supported on any given platform, the rest of the POSIX API should be as well. So, one way to solve your problem is to use fork() and kill().
There is a complication in that system() invokes a shell, which will probably spawn other processes, and I presume you want to kill all of them, so one way to do that is to use a process group. The basic idea is use fork() to create another process, place it in its own process group, and kill that group if it doesn't exit after a certain time.
A simple example - the program forks; the child process sets its own process group to be the same as its process ID, and uses system() to spawn an endless loop. The parent process waits 10 seconds then kills the process group, using the negative value of the child process PID. This will kill the forked process and any children of that process (unless they have changed their process group.)
Since the parent process is in a different group, the kill() has no effect on it.
#include <unistd.h>
#include <stdlib.h>
#include <signal.h>
#include <stdio.h>
int main() {
pid_t pid;
pid = fork();
if(pid == 0) { // child process
setpgid(getpid(), getpid());
system("while true ; do echo xx ; sleep 5; done");
} else { // parent process
sleep(10);
printf("Sleep returned\n");
kill(-pid, SIGKILL);
printf("killed process group %d\n", pid);
}
exit(0);
}
There is no standard, cross-platform system API. The hint is that they are system APIs! We're actually "lucky" that we get system, but we don't get anything other than that.
You could try to find some third-party abstraction.
Check the C++ thread-based attempt below for Linux (not tested).
#include <iostream>
#include <string>
#include <thread>
#include <stdio.h>
using namespace std;
// execute system command and get output
// http://stackoverflow.com/questions/478898/how-to-execute-a-command-and-get-output-of-command-within-c
std::string exec(const char* cmd) {
FILE* pipe = popen(cmd, "r");
if (!pipe) return "ERROR";
char buffer[128];
std::string result = "";
while(!feof(pipe)) {
if(fgets(buffer, 128, pipe) != NULL)
result += buffer;
}
pclose(pipe);
return result;
}
void system_task(string& cmd){
exec(cmd.c_str());
}
int main(){
// system commad that takes time
string command = "find /";
// run the command in a separate thread
std::thread t1(system_task, std::ref(command));
// gives some time for the system task
std::this_thread::sleep_for(chrono::milliseconds(200));
// get the process id of the system task
string query_command = "pgrep -u $LOGNAME " + command;
string process_id = exec(query_command.c_str());
// kill system task
cout << "killing process " << process_id << "..." << endl;
string kill_command = "kill " + process_id;
exec(kill_command.c_str());
if (t1.joinable())
t1.join();
cout << "continue work on main thread" << endl;
return 0;
}
I had a similar problem in a Qt/QML development: I wanted to start a bash command while continuing to process events on the Qt event loop, and kill the bash command if it took too long.
I came up with the following class, which I'm sharing here (see below) in the hope that it may be of some use to people with a similar problem.
Instead of calling a 'kill' command, I call a cleanupCommand supplied by the developer. Example: if I call myScript.sh and want to make sure it won't run for more than 10 seconds, I'll call it the following way:
SystemWithTimeout systemWithTimeout("myScript.sh", 10, "killall myScript.sh");
systemWithTimeout.start();
Code:
class SystemWithTimeout {
private:
bool m_childFinished = false ;
QString m_childCommand ;
int m_seconds ;
QString m_cleanupCmd ;
int m_period;
void startChild(void) {
int rc = system(m_childCommand.toUtf8().data());
if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout startChild: system returned %d", rc);
m_childFinished = true ;
}
public:
SystemWithTimeout(QString cmd, int seconds, QString cleanupCmd)
: m_childFinished {false}, m_childCommand {cmd}, m_seconds {seconds}, m_cleanupCmd {cleanupCmd}
{ m_period = 200; }
void setPeriod(int period) {m_period = period;}
void start(void) ;
};
void SystemWithTimeout::start(void)
{
m_childFinished = false ; // re-arm the boolean for 2nd and later calls to 'start'
qDebug()<<"systemWithTimeout"<<m_childCommand<<m_seconds;
QTime dieTime= QTime::currentTime().addSecs(m_seconds);
std::thread child(&SystemWithTimeout::startChild, this);
child.detach();
while (!m_childFinished && QTime::currentTime() < dieTime)
{
QTime then = QTime::currentTime();
QCoreApplication::processEvents(QEventLoop::AllEvents, m_period); // Process events during up to m_period ms (default: 200ms)
QTime now = QTime::currentTime();
int waitTime = m_period - then.msecsTo(now);
if (waitTime > 0)
    QThread::msleep(waitTime); // wait for the remainder of the period before looping again
}
if (!m_childFinished)
{
SYSLOG(LOG_NOTICE, "Killing command <%s> after timeout reached (%d seconds)", m_childCommand.toUtf8().data(), m_seconds);
int rc = system(m_cleanupCmd.toUtf8().data());
if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout 164: system returned %d", rc);
m_childFinished = true ;
}
}
I do not know of any portable way to do that in C or C++. As you ask for alternatives, I know it is possible in other languages. For example in Python, it is possible using the subprocess module.
import subprocess
cmd = subprocess.Popen("COMMAND", shell = True)
You can then test if COMMAND has ended with
if cmd.poll() is not None:
# cmd has finished
and you can kill it with :
cmd.terminate()
Even if you prefer to use the C language, you should read the documentation for the subprocess module, because it explains that internally it uses CreateProcess on Windows and os.execvp on POSIX systems to start the command, and it uses TerminateProcess on Windows and SIGTERM on POSIX to stop it.
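For comparison, here is a rough C++ sketch of that same start/poll/terminate pattern on POSIX (fork(), waitpid() with WNOHANG, then SIGTERM on timeout); it is an illustration of the idea, not what subprocess literally does internally:
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) {                        // child: run the command via the shell
        execl("/bin/sh", "sh", "-c", "COMMAND", (char*)nullptr);
        _exit(127);                        // only reached if exec fails
    }

    for (int i = 0; i < 60; ++i) {         // poll for up to ~60 seconds
        int status = 0;
        if (waitpid(pid, &status, WNOHANG) == pid)
            return 0;                      // command has finished
        sleep(1);
    }

    kill(pid, SIGTERM);                    // timeout: ask the command to terminate
    waitpid(pid, nullptr, 0);              // reap the child
    return 1;
}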