Simplest code:
void test()
{
    QProcess p;
    p.start("sleep 10");
    p.waitForBytesWritten();
    p.waitForFinished(1);
}
Of course, the process can't finish before the end of the function, so Qt prints a warning message:
QProcess: Destroyed while process ("sleep") is still running.
I don't want this message to be shown. I should shut the process down myself before the end of the function, but I can't find how to do this correctly: p.~QProcess(), p.terminate(), and p.kill() don't help.
NOTE: I don't want to wait for the process to finish; I just want to kill it myself while it is running.
You can kill or terminate the process explicitly, depending on what you want. That is, however, not enough on its own, because you also need to wait for the process to actually terminate: kill() sends the SIGKILL signal to the process on Unix, and even that takes a moment to take effect.
Therefore, you would be writing something like this:
main.cpp
#include <QProcess>
int main()
{
    QProcess p;
    p.start("sleep 10");
    p.waitForBytesWritten();
    if (!p.waitForFinished(1)) {
        p.kill();
        p.waitForFinished(1);
    }
    return 0;
}
main.pro
TEMPLATE = app
TARGET = main
QT = core
SOURCES += main.cpp
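If you want to give the child a chance to exit cleanly first, a possible variation (a sketch, not part of the original answer) is to try terminate() and only fall back to kill():

#include <QProcess>

int main()
{
    QProcess p;
    p.start("sleep 10");
    p.waitForStarted();

    p.terminate();                  // ask politely first (SIGTERM on Unix)
    if (!p.waitForFinished(1000)) {
        p.kill();                   // force it (SIGKILL on Unix)
        p.waitForFinished();        // reap the process so no warning is printed
    }
    return 0;
}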
I have a minimal example I am trying to get working. The end goal is to be able to communicate some information to a program that is waiting on a "cin" call, so I guess that means something to do with standard input.
I am trying to use some Qt objects to help me at this stage, although I am not using any other Qt stuff.
The example I am trying that gives me errors is:
#include <iostream>
#include <QtCore/QString>
#include <QtCore/QProcess>
#include <QtCore/QStringList>
int main() {
    QProcess process;
    QString prog = "test.exe";

    // Starting "test.exe":
    process.start(prog);
    bool started = process.waitForStarted();
    std::cout << started << std::endl;

    // test.exe is waiting for cin, so give "2":
    bool response = process.write("2\n");
    std::cout << response << std::endl;
}
Here are the error messages:
1
QObject::startTimer: Timers can only be used with threads started with QThread
1
QProcess: Destroyed while process ("test.exe") is still running.
In rare cases you will have a Qt app without a QApplication or QCoreApplication. These classes start the event loop, which is required for timers, events, and signals/slots.
A console XML parser could be that kind of event-loop-less application.
Take a look e.g. here for a minimal QCoreApplication app: How do I create a simple Qt console application in C++?
Start your process within a subclassed QWidget or QObject.
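A minimal sketch of that suggestion, assuming the same hypothetical test.exe that reads one number from cin: construct a QCoreApplication before using QProcess, and wait for the child to finish before the QProcess goes out of scope.

#include <iostream>
#include <QtCore/QCoreApplication>
#include <QtCore/QProcess>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);   // provides the event loop QProcess relies on

    QProcess process;
    process.start("test.exe");
    if (!process.waitForStarted())
        return 1;

    // Send "2" to the child's standard input and flush it.
    process.write("2\n");
    process.waitForBytesWritten();
    process.closeWriteChannel();        // signal EOF so the child stops reading

    process.waitForFinished();          // wait before QProcess is destroyed, so no warning is printed
    std::cout << process.readAllStandardOutput().toStdString() << std::endl;
    return 0;
}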
I am trying to debug a server application but I am running into some difficulties breaking where I need to. The application is broken up into two parts:
A server application, which spawns worker processes (not threads) to handle incoming requests. The server basically spawns off processes which will process incoming requests first-come first-served.
The server also loads plugins in the form of shared libraries. The shared library defines most of the services the server is able to process, so most of the actual processing is done here.
As an added nugget of joy, the worker processes "respawn" (i.e. exit and a new worker process is spawned) so the PIDs of the children change periodically. -_-'
Basically I need to debug a service that's called within the shared library but I don't know which process to attach to ahead of time since they grab requests ad-hoc. Attaching to the main process and setting a breakpoint hasn't seemed to work so far.
Is there a way to debug this shared library code without having to attach to a process in advance? Basically I'd want to debug the first process that called the function in question.
For the time being I'll probably try limiting the number of worker processes to 1 with no respawn, but it'd be good to know how to handle a scenario like this in the future, especially if I'd like to make sure it still works in the "release" configuration.
I'm running on a Linux platform attempting to debug this with DDD and GDB.
Edit: To help illustrate what I'm trying to accomplish, let me provide a brief proof of concept.
#include <iostream>
#include <stdlib.h>
#include <unistd.h>
using namespace std;
void important_function( const int child_id )
{
    cout << "IMPORTANT(" << child_id << ")" << endl;
}

void child_task( const int child_id )
{
    const int delay = 10 - child_id;
    cout << "Child " << child_id << " started. Waiting " << delay << " seconds..." << endl;
    sleep(delay);
    important_function(child_id);
    exit(0);
}

int main( void )
{
    const int children = 10;
    for (int i = 0; i < children; ++i)
    {
        pid_t pid = fork();
        if (pid < 0) cout << "Fork " << i << " failed." << endl;
        else if (pid == 0) child_task(i);
    }
    sleep(10);
    return 0;
}
This program will fork off 10 processes which will all sleep 10 - id seconds before calling important_function, the function I want to debug in the first child that calls it (which should, here, be the last one I fork).
Setting the follow-fork-mode to child will let me follow through to the first child forked, which is not what I'm looking for. I'm looking for the first child that calls the important function.
Setting detach-on-fork off doesn't help, because it halts the parent process until the forked child exits, so the other processes are forked one at a time, each after the previous one has exited.
In the real scenario, it is also important that I be able to attach to an already running server application that has already spawned its workers, and halt on the first of those that calls the function.
I'm not sure if any of this is possible since I've not seen much documentation on it. Basically I want to debug the first process to call this line of code, no matter where it's coming from. (While it's only my application processes that'll call the code, it seems like my problem may be more general: attaching to the first process that calls the code, no matter what its origin.)
You can set a breakpoint at fork(), and then issue "continue" commands until the main process's next step is to spawn the child process you want to debug. At that point, set a breakpoint at the function you want to debug, and then issue a "set follow-fork-mode child" command to gdb. When you continue, gdb should hook you into the child process at the function where the breakpoint is.
If you issue the command "set detach-on-fork off", gdb will continue debugging the child processes. The process that hits the breakpoint in the library should halt when it reaches that breakpoint. The problem is that when detach-on-fork is off, gdb halts all the child processes that are forked when they start. I don't know of a way to tell it to keep executing these processes after forking.
A solution to this I believe would be to write a gdb script to switch to each process and issue a continue command. The process that hits the function with the breakpoint should stop.
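For the proof of concept above, the first approach (breaking at fork() and switching follow-fork-mode) might look roughly like this; treat it as a sketch of the commands described, not a verified session:

(gdb) break fork
(gdb) run
(gdb) continue               # repeat until the next fork is the child you care about
(gdb) break important_function
(gdb) set follow-fork-mode child
(gdb) continue               # gdb follows that child and stops in important_function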
A colleague offered another solution to the problem of getting each child to continue. You can leave "detach-on-fork" on, insert a print statement in each child process's entry point that prints out its process id, and then give it a statement telling it to wait for the change in a variable, like so:
{
    volatile int foo = 1;
    printf("execute \"gdb -p %u\" in a new terminal\n", (unsigned)getpid());
    printf("once GDB is loaded, give it the following commands:\n");
    printf("    set variable foo = 0\n");
    printf("    c\n");
    while (foo == 1) __asm__ __volatile__ ("":::"memory");
}
Then, start up gdb, start the main process, and pipe the output to a file. With a bash script, you can read in the process IDs of the children, start up multiple instances of gdb, attach each instance to one of the different child processes, and signal each to continue by clearing the variable "foo".
I'm making a C++ GUI program in Qt using Qt Creator. It's not complete yet, but whenever I build and run to test it, it starts fine; then if I click a button that opens a file or writes something to a file, the button does that and the program freezes. Why does this happen? What am I doing wrong, or what's the issue?
It mainly freezes in these two functions:
void MainWindow::on_kmpOpenButton_clicked()
{
    QString kmplayerloc = "\"F:\\Program Files\\The KMPlayer\\KMPlayer.exe\"";
    QProcess::execute(kmplayerloc);
}

void MainWindow::on_nbopenbutton_clicked()
{
    // Remember that if you have to insert " in a string, escape it: \"...location of the file or anything you want to put...\"
    QString netbeansloc = "\"F:\\Program Files\\NetBeans 7.4\\bin\\netbeans.exe\"";
    QProcess::execute(netbeansloc);
}
From the documentation
Starts the program program [..] in a new
process, waits for it to finish, and then returns the exit code of the
process.
The calling thread freezes until the external process is finished. If you don't want that, use start() or startDetached() instead.
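A minimal sketch of the non-blocking variant, using the same path from the question (passing the path and an empty argument list separately means the extra escaped quotes are no longer needed):

void MainWindow::on_kmpOpenButton_clicked()
{
    // startDetached() launches the program and returns immediately,
    // so the GUI thread never blocks while KMPlayer runs.
    QString kmplayerloc = "F:\\Program Files\\The KMPlayer\\KMPlayer.exe";
    QProcess::startDetached(kmplayerloc, QStringList());
}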
How do I design a C/C++ program so that it can save some data after receiving an interrupt signal?
I have a long running program that I might need to kill (say, by pressing Ctrl-C) before it finished running. When killed (as opposed to running to conclusion) the program should be able to save some variables to disk. I have several big Linux books, but not very sure where to start. A cookbook recipe would be very helpful.
Thank you!
To do that, you need to make your program watch something, for example a global variable, that will tell it to stop what it is doing.
For example, supposing your long-running program executes a loop, you can do this:
g_shouldAbort = 0;

while (!finished)
{
    // (do some computing)
    if (g_shouldAbort)
    {
        // save variables and stuff
        break; // exit the loop
    }
}
with g_shouldAbort defined as a global volatile variable, like this:
static volatile int g_shouldAbort = 0;
(It is very important to declare it "volatile", or else the compiler, seeing that nothing writes to it inside the loop, may decide that if (g_shouldAbort) is always false and optimize it away.)
Then, using for example the signal API that other users suggested, you can do this:
void signal_handler(int sig_code)
{
    if (sig_code == SIGUSR1) // user-defined signal 1
        g_shouldAbort = 1;
}
(You need to register this handler of course, cf. here.)
signal(SIGUSR1, signal_handler);
Then, when you "send" the SIGUSR1 signal to your program (with the kill command for example), g_shouldAbort will be set to 1 and your program will stop its computing.
Hope this helps!
NOTE: this technique is easy but crude. Using signals and global variables makes it difficult to use multiple threads, of course, as other users have outlined.
What you want to do isn't trivial. You can start by installing a signal handler for SIGINT (C-c) using signal or sigaction but then the hard part starts.
The main problem is that in a signal handler you can only call async-signal-safe functions (or reentrant functions). Most library function can't be reliably considered reentrant. For instance, stdio functions, malloc, free and many others aren't reentrant.
So how do you handle this? Set a flag in your handler (set some global variable done to 1) and look out for EINTR errors. It should be safe to do the cleanup outside the handler.
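A minimal sketch of that flag-based approach, assuming the data to save is just a counter written to a hypothetical state.txt:

#include <signal.h>
#include <stdio.h>
#include <string.h>

static volatile sig_atomic_t g_done = 0;

static void on_sigint(int sig)
{
    (void)sig;
    g_done = 1;   /* the handler only sets the flag */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);

    long counter = 0;
    while (!g_done) {
        ++counter;   /* the long-running work; check g_done regularly */
    }

    /* Outside the handler it is safe to use stdio again. */
    FILE *f = fopen("state.txt", "w");
    if (f) {
        fprintf(f, "%ld\n", counter);
        fclose(f);
    }
    return 0;
}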
What you are trying to do falls under the rubric of checkpoint/restart.
There's several big problems with using a signal-driven scheme for checkpoint/restart. One is that signal handlers have to be very compact and very primitive. You cannot write the checkpoint inside your signal handler. Another problem is that your program can be anywhere in its execution state when the signal is sent. That random location almost certainly is not a safe point from which a checkpoint can be dropped. Yet another problem is that you need to outfit your program with some application-side checkpoint/restart capability.
Rather than rolling your own checkpoint/restart capability, I suggest you look into using a free one that already exists. gdb on linux provides a checkpoint/restart capability. Another is DMTCP, see http://dmtcp.sourceforge.net/index.html .
Use signal(2) or sigaction(2) to assign a function pointer to the SIGINT signal, and do your cleanups there.
Make sure you enter your save function only once:
// somewhere in main
signal( SIGTERM, signalHandler );
signal( SIGINT, signalHandler );

void saveMyData()
{
    // save some data here
}

void signalHandler( int signalNumber )
{
    static pthread_once_t semaphore = PTHREAD_ONCE_INIT;
    std::cout << "signal " << signalNumber << " received." << std::endl;
    pthread_once( &semaphore, saveMyData );
}
If your process gets two or more signals before you finish writing your file, you'll save weird data.
I am trying to execute another command line process in parallel with the current process. However, I realize that the command line program sometimes exits abnormally, and that kills my main program as well.
// MAIN PROGRAM
pid = fork();

char *argv[] = { stuff.. };

if (pid == 0) {
    int rc = execv("command line program...", argv);
}

// DO OTHER STUFF HERE.

if (pid > 0) {
    waitpid(pid, 0, 0);
}
Is there any way to keep my main program running after the command line program dies abnormally? Thanks!
[UPDATE]: Yes, the main process is writing to a file that the command line program is reading from, but it is a normal file, not a pipe. I receive a segfault.
It is extremely hard for me to reproduce the bug, since the child process does not crash very often, but it does happen. Randomly crashing is a known bug in the command line program, which is why I want to keep my main program alive even if it dies.
In your real code do you have an else here:
if (pid == 0) {
    int rc = execv("command line program...", argv);
    // possibly more child stuff
}
else {
    // parent stuff
}
It's always a good idea to post real code when asking questions here.
Use vfork rather than fork to avoid unnecessary process cloning.
Make sure you don't crash when SIGCHLD is received by parent process.
Use a proper if-then-else statement to make it clear what code executes in the parent process and what happens in the child process. For example, it is very likely that both the child and the parent will execute the code at the // DO OTHER STUFF HERE. comment if execv fails.
After all, use gdb. It will tell you where the crash occurs.
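Putting those points together, a rough sketch of how the fork/exec/wait sequence might look (the program path and arguments are placeholders standing in for the elided ones in the question):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {
        // Child: replace the process image; _exit() only runs if execv fails.
        char *argv[] = { "prog", NULL };       /* placeholder arguments */
        execv("/path/to/prog", argv);          /* placeholder path */
        perror("execv");
        _exit(127);
    }

    // Parent: do other work here; a crash in the child does not, by itself,
    // bring the parent down.

    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        fprintf(stderr, "child killed by signal %d\n", WTERMSIG(status));
    else if (WIFEXITED(status))
        fprintf(stderr, "child exited with code %d\n", WEXITSTATUS(status));
    return 0;
}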