Unsuccessful management of specific processes in my C++ console application - c++

In this thread I solved part of my cases, and thanks to NathanOliver I've arrived at the following code so far:
int main() {
    //...
    bool proc1 = false, proc2 = false, proc3 = false, proc4 = false;
    while (true) {
        if (!proc1 && ProcessRunning("process1.exe")) {
            fun1("fun1.bat");
            proc1 = true;
        }
        if (!proc2 && ProcessRunning("process2.exe")) {
            fun1("fun2.bat");
            proc2 = true;
        }
        if (!proc3 && ProcessRunning("process3.exe")) {
            fun1("fun3.bat");
            proc3 = true;
        }
        if (!proc4 && ProcessRunning("process4.exe")) {
            fun1("fun4.bat");
            proc4 = true;
        }
    }
    return 0;
}
What I still can't get working is the following case:
I double-click app1 -> process1 starts.
While process1 is running, I double-click app2, so that process2 gets the same behaviour I described in my first thread:
if the loop finds process2 (the second if block), it creates the corresponding .bat file and
executes it (kill process2, which might have existed before I opened it; start it again; delete the .bat file generated by fun2(const char name[]){}).
Summary of the previous post:
int fun1(const char name[]) {
    ofstream file;
    file.open(name, ios::out);
    // start of what I write to the .bat
    file << "@echo off\n";
    file << "cd to specific path\n";
    file << "taskkill /im process.exe* /f\n";
    file << "start process.exe\n";
    file << "del \"%~f0\"\n";
    file.close();
    return system(name);
}
The remaining functions are exactly the same.

You are running the .bat file synchronously: system() does not return until the batch file finishes and produces an exit code (which you can check as the return value of system()), so your main program cannot continue in the meantime. To run the batch file asynchronously, use fork() on Linux-based systems or CreateProcess on Windows.
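For instance, on Windows, something along these lines would launch the generated batch file without blocking the polling loop (a sketch only, untested; RunBatchAsync is an illustrative name):
// Sketch: launch a .bat asynchronously with CreateProcess instead of the
// blocking system() call. The caller's while(true) loop keeps running
// while the batch file executes.
#include <windows.h>
#include <string>

bool RunBatchAsync(const std::string& batPath)
{
    std::string cmdLine = "cmd /c \"" + batPath + "\"";

    STARTUPINFOA si = {};
    si.cb = sizeof(si);
    PROCESS_INFORMATION pi = {};

    // CreateProcess returns as soon as the process is created;
    // it does not wait for it to finish.
    if (!CreateProcessA(NULL, &cmdLine[0], NULL, NULL, FALSE,
                        CREATE_NO_WINDOW, NULL, NULL, &si, &pi))
        return false;

    // We don't wait on the child, so just release the handles.
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return true;
}
Note that if you never wait on the process you also never see its exit code; if you need it later, keep pi.hProcess and use WaitForSingleObject plus GetExitCodeProcess.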

How about this? (C++11 Standard)
bool proc[] = {false, false, false, false};
while (true)
{
    for (int i = 0; i < 4; i++)
    {
        // processes and batch files are numbered 1..4, hence i + 1
        const std::string exeName = "process" + std::to_string(i + 1) + ".exe";
        const std::string funName = "fun" + std::to_string(i + 1) + ".bat";
        if (!proc[i] && ProcessRunning(exeName.c_str()))
        {
            fun1(funName.c_str());
            proc[i] = true;
        }
    }
}
This should make things a bit more flexible. But getting to your problem: the if condition evaluates !proc[i] first and then ProcessRunning(...); both must be true in order to enter the block.
Your fun1() file-writing code also seems to have an issue, at least in the comments: you taskkill process.exe, but it has to be process#.
I can help you more on that if you provide the full fun1 implementation, or at least the most relevant parts. Your loop should work, though; the problem has to be in fun1(...).
P.S.: I would also rename fun1(...) to something that makes more sense.

Related

C++ Ubuntu not releasing lock on lock file when terminated

I have a C++ program that checks whether any action has to be done and, if so, starts the right processor (another C++ program). However, since it runs every x minutes, it also checks, using lock files, whether the processor is still running.
I use the following function to acquire the lock:
int LockFile(string FileNameToLock) {
    FileNameToLock += ".lock";
    int fd = open(FileNameToLock.c_str(), O_RDWR | O_CREAT, 0666);
    int rc = flock(fd, LOCK_EX | LOCK_NB);
    if (rc == -1) {
        cout << errno << endl;
        cout << strerror(errno) << endl;
        return -1;
    }
    return fd;
}
The code that is being executed:
[...]
if (LockFile(ExecuteFileName, Extra) == -1) {
    cout << "Already running!" << endl; //MAIN IS ALREADY RUNNING
    //RETURNS ME Resource temporarily unavailable when processor is running from an earlier run
    exit(EXIT_SUCCESS);
}
if (StartProcessor) { //PSEUDO
    int LockFileProcessor = LockFile("Processor");
    if (LockFileProcessor != -1) {
        string Command = "nohup setsid ./Processor"; //NOHUP CREATES ZOMBIE?
        Command += IntToString(result->getInt("Klantnummer"));
        Command += " > /dev/null 2>&1 &"; //DISCARD OUTPUT
        system(Command.c_str());
        //STARTS PROCESSOR (AS ZOMBIE?)
    }
}
The first run works well; however, when the main script runs again, LockFile returns -1, which means an error occurred (only when the processor is still running). errno is 11, which gives the error message: Resource temporarily unavailable. Note that this only happens while the (zombie?) processor is still running. (The main script has already terminated, which should have closed its file handle?)
For some reason, the zombie keeps the file handle to the main script's lock file open?
I have no idea where to look for this problem.
SOLVED:
see my answer
No, 11 is EAGAIN/EWOULDBLOCK, which simply means that you cannot acquire the lock because the resource is already locked (see the documentation). You got that error (instead of blocking behaviour) because of the LOCK_NB flag.
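In code, the distinction looks like this (a sketch; TryLock is an illustrative name):
// Sketch: distinguishing "already locked" from a genuine error when using
// flock() with LOCK_NB, as in LockFile() above. EWOULDBLOCK (equal to EAGAIN
// on Linux) is the expected result when another process holds the lock.
#include <cerrno>
#include <sys/file.h>

enum class LockResult { Acquired, AlreadyLocked, Error };

LockResult TryLock(int fd) {
    if (flock(fd, LOCK_EX | LOCK_NB) == 0)
        return LockResult::Acquired;
    return (errno == EWOULDBLOCK) ? LockResult::AlreadyLocked
                                  : LockResult::Error;
}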
EDIT: After some discussion it seems that the problem is due to flock() locks being inherited by subprocesses. To avoid this issue I recommend not holding the flock() for the program's lifetime, but using a touch-and-delete-at-exit strategy instead:
If file.lock exists then exit
Otherwise create file.lock and start processing
Delete file.lock at exit.
Of course there's a race condition here. In order to make it safe you would need another file with flock:
flock(common.flock)
If file.lock exists then exit
Otherwise create file.lock
Unlock flock(common.flock)
Start processing
Delete file.lock at exit
But this only matters if you expect simultaneous calls to main. If you don't (you said that a cron starts the process every 10 min, so there is no race here), then stick to the first version.
Side note: here's a simple implementation of such a (non-synchronized) file lock:
#include <string>
#include <fstream>
#include <stdexcept>
#include <cstdio>
// for sleep only
#include <chrono>
#include <thread>

class FileLock {
public:
    FileLock(const std::string& path) : path{ path } {
        if (std::ifstream{ path }) {
            // You may want to use a custom exception here
            throw std::runtime_error("Already locked.");
        }
        std::ofstream file{ path };
    }
    ~FileLock() {
        std::remove(path.c_str());
    }
private:
    std::string path;
};

int main() {
    // This will throw std::runtime_error if test.xxx exists
    FileLock fl{ "./test.xxx" };
    std::this_thread::sleep_for(std::chrono::seconds{ 5 });
    // RAII: no need to delete anything here
    return 0;
}
Requires C++11. Note that this implementation is not race-condition-safe, i.e. you would generally need to flock() in the constructor, but in this situation it should be fine (i.e. when you don't start main in parallel).
I solved this issue. Since the system and fork calls seem to pass on the flock, I save the commands to execute in a vector, unlock the lock file right before looping over the Commands vector, and then execute each command. This leaves main with a very tiny window where it runs unlocked, but for now it seems to work great. It also survives ungraceful terminations.
[...]
if (LockFile(ExecuteFileName, Extra) == -1) {
    cout << "Already running!" << endl; //MAIN IS ALREADY RUNNING
    //RETURNS ME Resource temporarily unavailable when processor is running from an earlier run
    exit(EXIT_SUCCESS);
}
vector<string> Commands;
if (StartProcessor) { //PSEUDO
    int LockFileProcessor = LockFile("Processor");
    if (LockFileProcessor != -1) {
        string Command = "nohup setsid ./Processor"; //NOHUP CREATES ZOMBIE
        Command += IntToString(result->getInt("Klantnummer"));
        Command += " > /dev/null 2>&1 &"; //DISCARD OUTPUT
        Commands.push_back(Command);
    }
}
//UNLOCK MAIN
if (UnlockFile(LockFileMain)) {
    for (auto const& Command : Commands) {
        system(Command.c_str());
    }
}
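A related approach (a sketch, assuming a platform that supports O_CLOEXEC, e.g. Linux 2.6.23+): open the lock-file descriptor close-on-exec, so processes started via system() never inherit it and the flock cannot outlive the parent:
// Sketch: variant of the LockFile() above that marks the descriptor
// close-on-exec. Children spawned via system()/exec() then do not inherit
// the descriptor, so they cannot keep the flock alive after main exits.
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>
#include <string>

int LockFileCloexec(std::string fileNameToLock) {
    fileNameToLock += ".lock";
    int fd = open(fileNameToLock.c_str(), O_RDWR | O_CREAT | O_CLOEXEC, 0666);
    if (fd == -1)
        return -1;
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        close(fd); // don't leak the descriptor when the lock is taken
        return -1;
    }
    return fd;
}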

Is it possible to kill a command launched using the system API in C? If not, any alternatives?

I am launching a command using the system API (I am OK with using this API from C/C++). The command I pass may hang at times, so I would like to kill it after a certain timeout.
Currently I am using it as:
system("COMMAND");
I want to use it something like this: run the command via a system-independent API (I don't want to use CreateProcess, since it is Windows-only), and kill the process if it does not exit after X minutes.
Since system() is a platform-specific call, there cannot be a platform-independent way of solving your problem. However, system() is a POSIX call, so if it is supported on any given platform, the rest of the POSIX API should be as well. So, one way to solve your problem is to use fork() and kill().
There is a complication: system() invokes a shell, which will probably spawn other processes, and I presume you want to kill all of them. One way to do that is to use a process group. The basic idea is to use fork() to create another process, place it in its own process group, and kill that group if it doesn't exit after a certain time.
A simple example: the program forks; the child process sets its own process group to be the same as its process ID, and uses system() to spawn an endless loop. The parent process waits 10 seconds, then kills the process group by passing the negated child PID to kill(). This kills the forked process and any children of that process (unless they have changed their process group).
Since the parent process is in a different group, the kill() has no effect on it.
#include <unistd.h>
#include <stdlib.h>
#include <signal.h>
#include <stdio.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) { // child process
        setpgid(getpid(), getpid());
        system("while true ; do echo xx ; sleep 5; done");
    } else { // parent process
        sleep(10);
        printf("Sleep returned\n");
        kill(-pid, SIGKILL);
        printf("killed process group %d\n", pid);
    }
    exit(0);
}
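A variant of the same idea (a sketch only; wait_or_kill is an illustrative name): instead of sleeping unconditionally, the parent can poll waitpid() with WNOHANG and only kill the process group if the timeout expires first:
// Sketch: poll the child once per second; kill its process group only if
// it has not exited within the timeout. Assumes the child called setpgid()
// on itself, as in the example above.
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>

int wait_or_kill(pid_t pid, int timeout_seconds) {
    for (int elapsed = 0; elapsed < timeout_seconds; ++elapsed) {
        int status;
        pid_t r = waitpid(pid, &status, WNOHANG);
        if (r == pid)
            return 0;      // child exited on its own; status has the details
        if (r == -1)
            return -1;     // waitpid failed (e.g. ECHILD)
        sleep(1);          // r == 0: still running, check again in a second
    }
    kill(-pid, SIGKILL);   // timeout: kill the child's whole process group
    waitpid(pid, NULL, 0); // reap the zombie
    return 1;              // tells the caller the command was killed
}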
There is no standard, cross-platform system API. The hint is in the name: they are system APIs! We're actually "lucky" that we get system(), but we don't get anything beyond that.
You could try to find some third-party abstraction.
Check the C++ thread-based attempt below, for Linux (not tested).
#include <iostream>
#include <string>
#include <thread>
#include <chrono>
#include <stdio.h>

using namespace std;

// execute a system command and capture its output
// http://stackoverflow.com/questions/478898/how-to-execute-a-command-and-get-output-of-command-within-c
std::string exec(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[128];
    std::string result;
    while (fgets(buffer, 128, pipe) != NULL) {
        result += buffer;
    }
    pclose(pipe);
    return result;
}

void system_task(string& cmd) {
    exec(cmd.c_str());
}

int main() {
    // a system command that takes a long time
    string command = "find /";
    // run the command in a separate thread
    std::thread t1(system_task, std::ref(command));
    // give the system task some time to start
    std::this_thread::sleep_for(chrono::milliseconds(200));
    // get the process id of the system task
    // (-f matches against the full command line, not just the process name)
    string query_command = "pgrep -u $LOGNAME -f \"" + command + "\"";
    string process_id = exec(query_command.c_str());
    // kill the system task
    cout << "killing process " << process_id << "..." << endl;
    string kill_command = "kill " + process_id;
    exec(kill_command.c_str());
    if (t1.joinable())
        t1.join();
    cout << "continue work on main thread" << endl;
    return 0;
}
I had a similar problem in a Qt/QML development: I wanted to start a bash command while continuing to process events on the Qt loop, and to kill the bash command if it took too long.
I came up with the following class, which I'm sharing here (see below) in the hope it may be of some use to people with a similar problem.
Instead of calling a 'kill' command, I call a cleanupCommand supplied by the developer. Example: if I run myScript.sh and want to make sure it won't run for more than 10 seconds, I call it the following way:
SystemWithTimeout systemWithTimeout("myScript.sh", 10, "killall myScript.sh");
systemWithTimeout.start();
Code:
// requires #include <atomic> and <thread>, plus the usual Qt headers
class SystemWithTimeout {
private:
    std::atomic<bool> m_childFinished; // written by the worker thread, read by start(): must be atomic (or at least volatile)
    QString m_childCommand;
    int m_seconds;
    QString m_cleanupCmd;
    int m_period;
    void startChild(void) {
        int rc = system(m_childCommand.toUtf8().data());
        if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout startChild: system returned %d", rc);
        m_childFinished = true;
    }
public:
    SystemWithTimeout(QString cmd, int seconds, QString cleanupCmd)
        : m_childFinished {false}, m_childCommand {cmd}, m_seconds {seconds}, m_cleanupCmd {cleanupCmd}
    { m_period = 200; }
    void setPeriod(int period) { m_period = period; }
    void start(void);
};
void SystemWithTimeout::start(void)
{
    m_childFinished = false; // re-arm the flag for 2nd and later calls to 'start'
    qDebug() << "systemWithTimeout" << m_childCommand << m_seconds;
    QTime dieTime = QTime::currentTime().addSecs(m_seconds);
    std::thread child(&SystemWithTimeout::startChild, this);
    child.detach();
    while (!m_childFinished && QTime::currentTime() < dieTime)
    {
        QTime then = QTime::currentTime();
        QCoreApplication::processEvents(QEventLoop::AllEvents, m_period); // process events for up to m_period ms (default: 200 ms)
        QTime now = QTime::currentTime();
        int waitTime = m_period - then.msecsTo(now);
        if (waitTime > 0)
            QThread::msleep(waitTime); // wait out the remainder of the period before looping again
    }
    if (!m_childFinished)
    {
        SYSLOG(LOG_NOTICE, "Killing command <%s> after timeout reached (%d seconds)", m_childCommand.toUtf8().data(), m_seconds);
        int rc = system(m_cleanupCmd.toUtf8().data());
        if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout cleanup: system returned %d", rc);
        m_childFinished = true;
    }
}
I do not know of any portable way to do this in C or C++. As you ask for alternatives, I know it is possible in other languages. For example, in Python it is possible using the subprocess module.
import subprocess
cmd = subprocess.Popen("COMMAND", shell = True)
You can then test if COMMAND has ended with
if cmd.poll() is not None:
    # cmd has finished
and you can kill it with:
cmd.terminate()
Even if you prefer to use the C language, you should read the documentation for the subprocess module, because it explains that internally it uses CreateProcess on Windows and os.execvp on POSIX systems to start the command, and TerminateProcess on Windows and SIGTERM on POSIX to stop it.

Release version works unexpectedly - memory races and strange QThread behaviour

I wrote the whole application in debug mode and everything works fine in that mode. Unfortunately, when I now run the release build, two unexpected things happen.
Base information:
Qt 5.1.1
Qt Creator 2.8.1
Windows 7 x64
The application has a second thread that decapsulates data from a buffer which is updated in the main thread.
First problem - memory race:
In one of my methods a strange memory race occurs in the release version - in debug everything is OK. The method looks like this:
std::vector<double> dataVec;
std::string frameStr = "U+014-00300027950l";
std::vector<unsigned char> frame(frameStr.begin(), frameStr.end());
// EFrame_AccXPos == 1;
dataVec.push_back(decapsulateAcc(frame.begin() + EFrame_AccXPos));

double Deserializator::decapsulateAcc(std::vector<unsigned char>::iterator pos)
{
    const char frac[2] = {*(pos+2), *(pos+3)};
    const char integ[] = {*(pos+1)};
    double sign;
    if (*pos == consts::frame::plusSign) {
        sign = 1.0;
    } else {
        sign = -1.0;
    }
    double integer = (std::strtod(integ, 0));
    double fractial = (std::strtod(frac, 0)) / 100;
    qDebug() << QString::fromStdString(std::string(integ));
    // prints "014Rd??j?i", should be "0 ?s"
    qDebug() << QString::number(integer);
    // prints "14", should be "0"
    qDebug() << QString::number(fractial);
    // prints "0.14" - everything OK
    return sign*integer + sign*fractial;
}
What is wrong with this method?
Second problem:
In the additional thread I emit a signal to manage the data it decapsulates from the buffer. After the emit, the thread waits until a flag changes to false. When I add some qDebug prints it starts to work, but without them it blocks (even though the flag is already false). Code below:
void DataManager::sendPlottingRequest()
{
    numberOfMessurement++;
    if (numberOfMessurement == plotAfterEquals) {
        numberOfMessurement = consts::startsFromZero;
        isStillPlotting = true;
        emit requestPlotting(dataForChart);
        // blocks on the next line
        while (isStillPlotting);
        // it starts to work when written as:
        // int i = 0;
        // while (isStillPlotting) {
        //     i++;
        //     if (i == 10000) qDebug() << isStillPlotting;
        // }
    }
}

void DataManager::continueProcess()
{
    plottingState++;
    if (plottingState == consts::plottingFinished) {
        // the program reaches this point
        isStillPlotting = false;
        plottingState = consts::startsFromZero;
    }
}
while (isStillPlotting); gets optimized in release builds to if (isStillPlotting) while (true);, because the compiler assumes a plain bool cannot change behind the loop's back.
You should make isStillPlotting volatile, or better, use an atomic type (e.g. std::atomic<bool> or QAtomicInt) instead.
Alternatively, you can emit a signal plottingDone() from the if in continueProcess() and connect it to a slot that executes the code currently placed after the while.
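For instance, with std::atomic (a sketch using the flag name from the question; the surrounding class and the emit are omitted):
// Sketch: an atomic flag forces every iteration of the spin loop to
// actually read memory, so the release-mode optimizer cannot rewrite
// while (isStillPlotting); into an infinite loop.
#include <atomic>

std::atomic<bool> isStillPlotting {false};

// GUI-thread slot: clears the flag when plotting has finished
void continueProcess() {
    isStillPlotting.store(false);
}

// worker thread: sets the flag, then spins until the GUI thread clears it
void sendPlottingRequest() {
    isStillPlotting.store(true);
    // emit requestPlotting(dataForChart); // as in the question
    while (isStillPlotting.load()) {
        // each load is a real read, so the change becomes visible
    }
}
Even with an atomic, spinning burns a CPU core; the signal/slot suggestion above is the cleaner fix.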

waitpid/wexitstatus returning 0 instead of correct return code

I have the helper function below, used to execute a command and get its return value on POSIX systems. I used to use popen, but it is impossible to get the return code of an application with popen if it runs and exits before popen/pclose gets a chance to do its work.
The following helper function creates a child process with fork, uses execvp to run the desired external program, and then the parent uses waitpid to get the return code. I'm seeing odd cases where it refuses to work correctly.
When called with wait = true, waitpid should return the exit code of the application no matter what. However, I'm seeing stdout output that indicates the return code should be non-zero, yet the return code is zero. Testing the external process in a regular shell and then echoing $? returns non-zero, so it's not a problem with the external process not returning the right code. If it's of any help, the external process being run is mount(8) (yes, I know I can use mount(2), but that's beside the point).
I apologize in advance for a code dump. Most of it is debugging/logging:
inline int ForkAndRun(const std::string &command, const std::vector<std::string> &args, bool wait = false, std::string *output = NULL)
{
    std::string debug;
    std::vector<char*> argv;
    for (size_t i = 0; i < args.size(); ++i)
    {
        argv.push_back(const_cast<char*>(args[i].c_str()));
        debug += "\"";
        debug += args[i];
        debug += "\" ";
    }
    argv.push_back((char*)NULL);
    neosmart::logger.Debug("Executing %s", debug.c_str());
    int pipefd[2];
    if (pipe(pipefd) != 0)
    {
        neosmart::logger.Error("Failed to create pipe descriptor when trying to launch %s", debug.c_str());
        return EXIT_FAILURE;
    }
    pid_t pid = fork();
    if (pid == 0)
    {
        close(pipefd[STDIN_FILENO]); //child isn't going to be reading
        dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
        close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd
        dup2(pipefd[STDOUT_FILENO], STDERR_FILENO);
        if (execvp(command.c_str(), &argv[0]) != 0)
        {
            exit(EXIT_FAILURE);
        }
        return 0;
    }
    else if (pid < 0)
    {
        neosmart::logger.Error("Failed to fork when trying to launch %s", debug.c_str());
        return EXIT_FAILURE;
    }
    else
    {
        close(pipefd[STDOUT_FILENO]);
        int exitCode = 0;
        if (wait)
        {
            waitpid(pid, &exitCode, wait ? __WALL : (WNOHANG | WUNTRACED));
            std::string result;
            char buffer[128];
            ssize_t bytesRead;
            while ((bytesRead = read(pipefd[STDIN_FILENO], buffer, sizeof(buffer)-1)) != 0)
            {
                buffer[bytesRead] = '\0';
                result += buffer;
            }
            if (wait)
            {
                if ((WIFEXITED(exitCode)) == 0)
                {
                    neosmart::logger.Error("Failed to run command %s", debug.c_str());
                    neosmart::logger.Info("Output:\n%s", result.c_str());
                }
                else
                {
                    neosmart::logger.Debug("Output:\n%s", result.c_str());
                    exitCode = WEXITSTATUS(exitCode);
                    if (exitCode != 0)
                    {
                        neosmart::logger.Info("Return code %d", (exitCode));
                    }
                }
            }
            if (output)
            {
                result.swap(*output);
            }
        }
        close(pipefd[STDIN_FILENO]);
        return exitCode;
    }
}
Note that the command is run OK with the correct parameters, the function proceeds without any problems, and WIFEXITED returns TRUE. However, WEXITSTATUS returns 0, when it should be returning something else.
Probably isn't your main issue, but I think I see a small problem. In your child process, you have...
dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd
dup2(pipefd[STDOUT_FILENO], STDERR_FILENO); //but wait, this pipe is closed!
But I think what you want is:
dup2(pipefd[STDOUT_FILENO], STDOUT_FILENO);
dup2(pipefd[STDOUT_FILENO], STDERR_FILENO);
close(pipefd[STDOUT_FILENO]); //now that it's been dup2'd for both, can close
I don't have much experience with forks and pipes in Linux, but I did write a similar function pretty recently. You can take a look at the code to compare, if you'd like. I know that my function works.
execAndRedirect.cpp
I'm using the mongoose library, and grepping my code for SIGCHLD revealed that calling mg_start from mongoose sets SIGCHLD to SIG_IGN.
Per the waitpid man page, on Linux a SIGCHLD set to SIG_IGN means no zombie process is created, so waitpid fails if the process has already run and exited - but works OK if it hasn't yet. This was the cause of the sporadic failures in my code.
Simply re-setting SIGCHLD, after calling mg_start, to a void function that does absolutely nothing was enough to keep the zombie records from being immediately erased.
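For reference, a sketch of that fix (restore_sigchld_after_mg_start is an illustrative name):
// Sketch: replace the SIG_IGN disposition installed by mg_start() with a
// handler that does nothing. Under SIG_IGN the kernel reaps children
// immediately and waitpid() cannot retrieve their status; with a no-op
// handler the zombie entry survives until waitpid() collects it.
#include <signal.h>

static void noop_sigchld(int) {}

void restore_sigchld_after_mg_start() {
    struct sigaction sa = {};
    sa.sa_handler = noop_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART; // avoid spurious EINTR in blocking calls
    sigaction(SIGCHLD, &sa, NULL);
}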
Per @Geoff_Montee's advice, there was a bug in my redirect of STDERR, but it was not responsible for the problem: execvp does not store the return value in STDERR or even STDOUT, but rather in the kernel's record for the child process (the zombie entry).
@jilles' warning about non-contiguity of std::vector does not apply to C++03 and up (it was only a concern for C++98, and in practice most C++98 compilers used contiguous storage anyway), and was not related to this issue. However, the advice about reading from the pipe before blocking, and about checking the result of waitpid, is spot-on.
I've found that pclose does NOT block and wait for the process to end, contrary to the documentation (this is on CentOS 6). I've found that I need to call pclose and then call waitpid(pid,&status,0); to get the true return value.

building a shell - IO trouble

I am working on a shell for a systems programming class and have been having some trouble with file redirection. I just got output redirection working, e.g. "ls > a"; however, when I type a command like "cat < a" into my shell, it deletes everything in the file. I suspect the problem stems from the second if statement: fdin = open(_inputFile, 0777).
If that is the case, a link to a recommended tutorial or other examples would be much appreciated.
As a side note, I included the entire function; however, I have not tested anything past the point where it creates the pipe. I don't believe that part works properly either, but that may be due to a mistake in another file.
void Command::execute() {
    if (_numberOfSimpleCommands == 0) {
        prompt();
        return;
    }
    //save input/output
    int defaultin = dup(0);
    int defaultout = dup(1);
    //initial input
    int fdin;
    if (_inputFile) {
        fdin = open(_inputFile, 0777);
    } else {
        //use default input
        fdin = dup(defaultin);
    }
    //execution
    int pid;
    int fdout;
    for (int i = 0; i < _numberOfSimpleCommands; i++) {
        dup2(fdin, 0);
        close(fdin);
        //set output
        if (i == _numberOfSimpleCommands - 1) {
            if (_outFile) {
                fdout = creat(_outFile, 0666);
            } else {
                fdout = dup(defaultout);
            }
        } else {
            int fdpipe[2];
            pipe(fdpipe);
            fdout = fdpipe[0];
            fdin = fdpipe[1];
        }
        dup2(fdout, 1);
        close(fdout);
        //create child
        pid = fork();
        if (pid == 0) {
            execvp(_simpleCommands[0]->_arguments[0], _simpleCommands[0]->_arguments);
            perror("-myshell");
            _exit(1);
        }
    }
    //restore IO defaults
    dup2(defaultin, 0);
    dup2(defaultout, 1);
    close(defaultin);
    close(defaultout);
    if (!_background) {
        waitpid(pid, 0, 0);
    }
}
Your call open(_inputFile, 0777) is incorrect. The second argument to open is supposed to be a bitwise-OR'd combination of values that specify the access mode and file creation flags, among other things (O_RDONLY, O_WRONLY, etc.). Since you're passing 0777, that probably ends up containing flags such as O_CREAT and O_TRUNC, which causes _inputFile to be erased. You probably want open(_inputFile, O_RDONLY).
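In other words, something like this (a sketch using the names from the question; OpenInputOrDefault is an illustrative helper):
// Sketch: open the redirection source read-only. 0777 belongs in the third
// argument of open()/creat() (permissions), not the second (flags).
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int OpenInputOrDefault(const char* inputFile, int defaultin) {
    if (!inputFile)
        return dup(defaultin);          // no redirection: keep default stdin
    int fd = open(inputFile, O_RDONLY); // read-only access never truncates
    if (fd < 0)
        perror("-myshell: open");
    return fd;
}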