With QProcess, is it necessary to call waitForReadyRead() after waitForFinished()?

I use the code below to capture the 'help' output from the standard output of a command-line utility. This code had worked without problems until this morning, when someone encountered an issue (the message box appeared saying that the command looks odd).
I can't seem to reproduce the problem, so I'm about to chalk it up to a system anomaly: the utility is located on a network share, and we have systems that are burdened with security processes which cause a lot of lag.
Would it be beneficial to add a waitForReadyRead() check, or is it redundant?
Any thoughts or suggestions would be appreciated.
QProcess cmd_process;
cmd_process.setWorkingDirectory("x:/working/directory");
cmd_process.start(R"(t:\bin\win\cmdlineutility.exe)", QStringList() << "/help");
if (cmd_process.waitForFinished())
{
    // TODO - should waitForReadyRead() go here?
    QByteArray ba = cmd_process.readAll();
    if (ba.contains("something good"))
    {
        // do stuff here
    }
    else
    {
        QMessageBox::information(0, "Something wrong", "cmdlineutility looks odd");
    }
}
else
{
    QMessageBox::information(0, "something wrong", "total fail");
}

At least in qprocess_win.cpp, both QProcessPrivate::waitForReadyRead(int msecs) and QProcessPrivate::waitForFinished(int msecs) perform the same actions: calling
stdoutChannel.reader && stdoutChannel.reader->waitForReadyRead(0)
and, when the process has finished, calling
drainOutputPipes()
So by the time waitForFinished() returns, all available data has already been read into the output buffers, and a separate waitForReadyRead() is redundant.
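In other words, reading right after waitForFinished() is sufficient. If you want to harden the question's code anyway, a minimal sketch would be an explicit timeout plus an exit-status check (the utility path and everything else are taken from the question):

QProcess cmd_process;
cmd_process.setWorkingDirectory("x:/working/directory");
cmd_process.start(R"(t:\bin\win\cmdlineutility.exe)", QStringList() << "/help");

// Allow more than the 30-second default; a laggy network share can be slow.
if (cmd_process.waitForFinished(120000)
    && cmd_process.exitStatus() == QProcess::NormalExit)
{
    // The pipes are already drained at this point, so readAll() returns
    // everything the process wrote; no waitForReadyRead() is needed.
    QByteArray ba = cmd_process.readAll();
    // ... check ba.contains("something good") as before ...
}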

Related

ubuntu server pipeline: stop process termination when the first process exits

The situation is: I have an external application, so I don't have the source code and I can't change it. While running, the application writes logs to stderr. The task is to write a program that checks its output and separates some part of it into another file. My solution is to start the app like
./externalApp 2>&1 | myApp
where myApp is a C++ app with the following source:
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

int main()
{
    string str;
    ofstream A;
    A.open("A.log");
    ofstream B;
    B.open("B.log");
    A << "test start" << endl;
    int i = 0;
    while (getline(cin, str))
    {
        if (str.find("asdasd") != string::npos)
        {
            A << str << endl;
        }
        else
        {
            B << str << endl;
        }
        ++i;
    }
    A << "test end: " << i << " lines" << endl;
    A.close();
    B.close();
    return 0;
}
The externalApp can crash or be terminated. At that moment myApp gets terminated too, so it doesn't write the last lines and doesn't close the files. The files can be 60 GB or larger, so saving the output and processing it afterwards is not an option.
Correction: my problem is that when externalApp crashes, it terminates myApp, which means any code after the while block never runs. So the question is: is there a way to keep myApp running even after externalApp has closed?
How can I do this task correctly? I'm interested in any other ideas for doing it.
There's nothing wrong with the shown code, and nothing in your question offers any evidence of anything being wrong with it. In particular, there is no evidence that your logging application ever actually received "the last lines" from the external application. Most likely the external application simply failed to write them to standard output or error before crashing.
The most likely explanation is that your external application checks whether its standard output or error is connected to an interactive terminal; if so, each logged line is followed by an explicit buffer flush. When its standard output is a pipe, no such flushing takes place, so the log messages are buffered up and flushed only when the application's internal output buffer fills. This is fairly common behavior. Because of it, when the external application crashes, its last logged lines are lost forever: your logger never received them, and it can't do anything about lines it never read.
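You can see this behavior for yourself with a small standalone demonstration (nothing to do with your externalApp; it uses POSIX isatty() to report what stdout is connected to):

#include <cstdio>
#include <cstdlib>
#include <unistd.h>

int main()
{
    // stdio is typically line-buffered on a terminal, fully buffered on a pipe.
    std::printf("stdout is %s\n", isatty(fileno(stdout)) ? "a tty" : "a pipe");
    std::printf("this line may never arrive if the process dies now\n");

    // Simulate a crash: whatever is still sitting in the stdio buffer is
    // lost. Compare running `./a.out` with `./a.out | cat`.
    std::abort();
}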
In your situation, the only available option is to set up and connect a pseudo-tty device to the external application's standard output and error, making it think it is connected to an interactive terminal, while its output is actually captured by your application.
You can't do this from the shell; you need to write some code to set it up. You can start by reading the pty(7) manual page, which explains the procedure to follow. You will end up with file descriptors that you can take and attach to your external application, roughly as sketched below.
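A minimal sketch of that approach on Linux, assuming forkpty() from libutil is available (link with -lutil); ./externalApp is the program from the question:

#include <pty.h>      // forkpty (Linux; link with -lutil)
#include <unistd.h>
#include <cstdio>

int main()
{
    int master = -1;
    pid_t pid = forkpty(&master, nullptr, nullptr, nullptr);
    if (pid < 0) { perror("forkpty"); return 1; }

    if (pid == 0)
    {
        // Child: stdin/stdout/stderr are now the pty slave, so the
        // external app believes it is talking to a terminal and
        // flushes its log output line by line.
        execl("./externalApp", "externalApp", (char *)nullptr);
        perror("execl");
        _exit(127);
    }

    // Parent: read the captured output and filter it, just like the
    // getline() loop in myApp.
    char buf[4096];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0)
    {
        fwrite(buf, 1, n, stdout);  // replace with the A.log/B.log filtering
    }
    return 0;
}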
If you want your program to deal cleanly with the external program crashing, you will probably need to handle SIGPIPE. The default behaviour of this signal is to terminate the process.
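For example, a minimal sketch (where exactly you put this depends on your program):

#include <csignal>

int main()
{
    // Ignore SIGPIPE so a write to a broken pipe fails with EPIPE
    // instead of terminating the process.
    std::signal(SIGPIPE, SIG_IGN);
    // ... rest of myApp ...
}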
So the problem was not that when the first element of the pipeline ended it terminated the second. The real problem was that the two piped apps were launched from a bash script, and when the bash script ended it terminated all of its child processes. I solved it using
signal(SIGHUP, SIG_IGN);
and that way my app runs to the end.
Thank you for all the answers; at the least I learned a lot about signals and pipes.

How to avoid waitForStarted with QProcess to stop GUI from freezing?

I am running wscript with QProcess to run a VB Script that converts Excel files to tab delimited text files. The script runs fine and everything, but the GUI freezes and the user is unable to interact with it for a significant amount of time. Here is the code:
/* Create txt files and store paths */
for (int i = 0; i < excelFilepaths.size(); ++i) {
    wscript->start("wscript.exe", QStringList() << vbs.fileName()
                                                << excelFilepaths.at(i)
                                                << newDir.absolutePath() + "/" + QString::number(i + 1));
    wscript->waitForFinished();
    payloadPaths.push_back(newDir.absolutePath() + "/" + QString::number(i + 1));
}
So what's going on is that I have multiple Excel file paths and a QProcess allocated on the heap. This QProcess runs the VB Script that converts the Excel files into text files and then stores the path of each new text file. This takes a long time (about 20 seconds for 4 Excel files), during which the GUI is frozen. I would like the user to be able to use parts of the GUI that don't interfere with the process.
Now I suspect that the cause of this issue is
QProcess::waitForFinished()
and I've read online about connecting the finished() and error() signals of QProcess to remove this problem. However, I've been having difficulty doing so. I'm running this code in a method of a class that inherits from QObject and contains the Q_OBJECT macro, so everything should be set. I just need some help putting the rest of the pieces together. How can I make it so my GUI does not freeze while QProcess is running? Please help.
To quote the documentation at the section called Synchronous Process API:
waitForStarted() blocks until the process has started.
waitForReadyRead() blocks until new data is available for reading on the current read channel.
waitForBytesWritten() blocks until one payload of data has been written to the process.
waitForFinished() blocks until the process has finished.
Calling these functions from the main thread (the thread that calls QApplication::exec()) may cause your user interface to freeze.
Keep that in mind. However, you can overcome this issue with something like this:
connect(process, static_cast<void (QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
        [=](int exitCode, QProcess::ExitStatus exitStatus) { /* ... */ });
Note that there are some more signals which may suit any desired purpose.
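Applied to your conversion loop, the signal-driven version might look like this sketch (MyClass, nextIndex and lastOutputPath are hypothetical stand-ins; wscript, vbs, newDir, excelFilepaths and payloadPaths are the variables from your code):

void MyClass::convertAll()
{
    // Connect once; each finished() pulls the next file off the list,
    // so the event loop (and the GUI) keeps running between conversions.
    connect(wscript,
            static_cast<void (QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
            this, &MyClass::startNextConversion);
    startNextConversion();  // kick off the first file
}

void MyClass::startNextConversion()
{
    if (nextIndex > 0)
        payloadPaths.push_back(lastOutputPath);  // record the file just produced
    if (nextIndex >= excelFilepaths.size())
        return;                                  // all conversions done

    lastOutputPath = newDir.absolutePath() + "/" + QString::number(nextIndex + 1);
    wscript->start("wscript.exe", QStringList() << vbs.fileName()
                                                << excelFilepaths.at(nextIndex)
                                                << lastOutputPath);
    ++nextIndex;
}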
I had the same problem, but with QSerialPort. I think the solution is the same, though. I couldn't find a way to call serial->waitForReadyRead() without freezing the GUI, so I implemented my own function:
void Research::WaitSerial(int MilliSecondsToWait)
{
    QTime DieTime = QTime::currentTime().addMSecs(MilliSecondsToWait);
    flag = 0;
    while (QTime::currentTime() < DieTime && !flag)
    {
        // Pump the event loop so the GUI stays responsive while waiting.
        QCoreApplication::processEvents(QEventLoop::AllEvents, 100);
        if (BufferSerial != "")
        {
            flag++;
        }
    }
}
Of course, your problem is similar but not the same. Just change the if to use your own stopping condition. Hope this helps.
EDIT: This was not originally my idea; I found it on a forum somewhere, so I can't take the credit.

c++ threads - why does mmsystem (using mciSendString) not play the sound file?

I want my game to play some sound effects. At the beginning, I open an mp3 file with mciSendString("open Muzle.mp3 alias Muzle");.
My problem is that mciSendString("play Muzle from 0"); still causes a little lag, and the game has to play sounds frequently.
In another question I read that using threads would solve the problem. I'm completely new to using threads, and the problem now is that the sound doesn't play :p . I verified that the thread runs properly by adding a cout at the end.
I have this function now:
void Shout(string SoundName)
{
    string FNstr;
    wstring FNwstr;
    FNstr = "play " + SoundName + " from 0";
    FNwstr.assign(FNstr.begin(), FNstr.end());
    mciSendString(FNwstr.c_str(), NULL, 0, NULL);
    Sleep(2000);
    cout << "Test woi\n";
}
(I tried without Sleep too. I wonder if I need it, because if the thread reaches the end it might get destroyed and the sound terminated... I'm not sure how threads or mmsystem work.)
If I simply call this Shout() function, it plays the sound, does the Sleep(2000), and then the cout. Everything works fine. But I have to use threads, so I tried:
thread(Shout, "Muzle");
and got the error: abort() has been called. (Destroying a joinable std::thread calls std::terminate, which is where the abort comes from.) I figured out that I may need to detach the thread:
thread t(Shout, "Muzle");
t.detach();
With this everything looked to work fine (after 2 seconds I see "Test woi" printed on the console), but no sound was played.
Hmm, so thanks for reading everything ^.^ . Do you know how to solve this problem?
You should probably have ONE permanent thread that:
1. Waits for the sound to finish before moving on (assuming that is the way you want it to work). You can probably just use the "wait" option to do that.
2. When not playing a sound, waits for a command to play the next sound - using a pipe to send messages to the thread would be one such solution, but you could use other methods. A sketch of this pattern follows.
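A minimal sketch of such a permanent worker thread, using a mutex-guarded queue instead of a pipe (the mciSendString call and the "wait" option follow the question; everything else is illustrative):

#include <windows.h>
#include <mmsystem.h>   // mciSendString; link with winmm.lib
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<std::wstring> pending;   // sound aliases waiting to be played

void SoundWorker()
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !pending.empty(); });
        std::wstring name = pending.front();
        pending.pop();
        lock.unlock();

        // "wait" makes MCI block this worker thread until playback
        // finishes, without ever blocking the game thread.
        std::wstring cmd = L"play " + name + L" from 0 wait";
        mciSendStringW(cmd.c_str(), NULL, 0, NULL);
    }
}

void Shout(const std::wstring &name)  // called from the game thread
{
    { std::lock_guard<std::mutex> lock(m); pending.push(name); }
    cv.notify_one();
}

// At startup: std::thread(SoundWorker).detach();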

Debug Assertion Failed: _CrtIsValidHeapPointer(pUserData)

Sometimes I get this "Debug Assertion Failed" error when running my Qt project in debug mode (image).
I don't know where I went wrong, because the compiler says nothing, and I don't know what to do to find my error.
I work under Windows Vista, using Qt Creator 2.4.1 and Qt 4.8.1.
My program has to read some information from a laser device and save it to a file, with code similar to this:
void runFunction()
{
    configure_Scanning(...);
    while(...)
    {
        // do something
        scanFunction();
        // do something
    }
}
and this is my "incriminated" function (where I think the problem is):
void scanFunction()
{
    file.open();
    data = getDataFromDevice();
    if (flag)
    {
        if (QString::compare(lineB, "") != 0)
        {
            QTextStream out(&file);
            out << lineB << endl;
            lineB = "";
        }
        lineA.append(data + "\t");
    }
    else
    {
        if (QString::compare(lineA, "") != 0)
        {
            QTextStream out(&file);
            out << lineA << endl;
            lineA = "";
        }
        lineB.prepend(data + "\t");
    }
    file.close();
}
Here lineA and lineB are initially two empty QStrings. The idea is that I do a bidirectional scan to save the readings in a 2D matrix (from -X to +X and vice versa, while Y moves toward a specified target): lineA stores the (-)-to-(+) reading, and lineB the (+)-to-(-) reading. When the scanning direction changes, I write lineA (or lineB) to the file and proceed with the scanning.
Does that make sense? Could you suggest a solution?
Thanks, and sorry for my English :P
A failing _CrtIsValidHeapPointer(pUserData) assertion means that you have heap corruption, which was noticed by the debug heap checker. Suspect any code that can write through a pointer to a deleted dynamic object.
And yes, you'll see the heap corruption not immediately when the overwrite occurs, but at the next heap check, which is performed on the next memory allocation or deallocation. In a single-threaded application, though, it should be simple to track down from the call stack.
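For illustration (not the questioner's code), the kind of bug this answer describes looks like the following; the debug CRT typically reports it not at the offending write but at a later heap operation:

#include <cstring>

int main()
{
    char *p = new char[16];
    delete[] p;

    std::strcpy(p, "oops");  // use-after-free: writes into a freed block

    // The corruption is only detected at the next heap check, e.g. on
    // this allocation/free pair - far from the line that caused it.
    char *q = new char[16];
    delete[] q;
}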
In our case, the program worked perfectly in DEBUG mode and crashed with a similar error trace in RELEASE mode.
In my case, the RELEASE configuration had msvscrtd.dll in the linker definition. We removed it and the issue was resolved.
Alternatively, adding /NODEFAULTLIB to the linker command-line arguments also resolved the issue.

Make main() "uncrashable"

I want to program a daemon-manager that makes sure all daemons keep running, like so (simplified pseudocode):
void watchMe(filename)
{
    while (true)
    {
        system(filename); // freezes as long as filename runs
        // oh, filename must be crashed. Nevermind, will be restarted
    }
}

int main()
{
    _beginThread(watchMe, "foo.exe");
    _beginThread(watchMe, "bar.exe");
}
This part is already working - but now I am facing the problem that when an observed application - say foo.exe - crashes, the corresponding system() call freezes until I confirm this beautiful message box:
This makes the daemon useless.
What I think might be a solution is to make the main() of the observed programs (which I control) "uncrashable", so they shut down gracefully without showing this ugly message box.
Like so:
try
{
    char *p = NULL;
    *p = 123; //nice null pointer exception
}
catch (...)
{
    cout << "Caught Exception. Terminating gracefully" << endl;
    return 0;
}
But this doesn't work as it still produces this error message:
("Untreated exception ... Write access violation ...")
I've tried SetUnhandledExceptionFilter and all other stuff, but without effect.
Any help would be highly appreciated.
Greets
This seems more like an SEH exception than a C++ exception, and it needs to be handled differently. Try the following code:
__try
{
    char *p = NULL;
    *p = 123; //nice null pointer exception
}
__except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION ?
          EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH)
{
    cout << "Caught Exception. Terminating gracefully" << endl;
    return 0;
}
But that's a remedy and not a cure; you might have better luck running the processes within a sandbox.
You can change the /EHsc flag to /EHa in your compiler command line (Properties / C/C++ / Code Generation / Enable C++ Exceptions).
See this similar question on SO.
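With /EHa (asynchronous exception handling), a plain catch (...) will also catch structured exceptions such as the access violation above. A minimal sketch:

// Compile with: cl /EHa catchall.cpp
#include <iostream>

int main()
{
    try
    {
        char *p = nullptr;
        *p = 123;  // access violation (an SEH exception, not a C++ one)
    }
    catch (...)    // with /EHa this handler also receives SEH exceptions
    {
        std::cout << "Caught exception. Terminating gracefully" << std::endl;
        return 0;
    }
}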
You can run the watched process asynchronously and use kernel objects to communicate with it. For instance, you can:
1. Create a named event.
2. Start the target process.
3. Wait on the created event.
4. In the target process, when the crash is encountered, open the named event and set it.
This way, your monitor will continue to run as soon as the crash is encountered in the watched process, even if the watched process has not ended yet - as sketched below.
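A sketch of that handshake (the event name and the foo.exe command line are made up for illustration):

// Monitor side: create the event, launch the process, and wait for
// either the crash event or normal process exit.
#include <windows.h>
#include <iostream>

int main()
{
    HANDLE crashEvent = CreateEventW(NULL, TRUE, FALSE, L"Global\\FooCrashed");

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"foo.exe";
    CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);

    HANDLE handles[2] = { crashEvent, pi.hProcess };
    DWORD r = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
    if (r == WAIT_OBJECT_0)
        std::cout << "watched process reported a crash\n";  // restart it here
    else
        std::cout << "watched process exited\n";
}

// Watched-process side, inside its crash/unhandled-exception handler:
//     HANDLE e = OpenEventW(EVENT_MODIFY_STATE, FALSE, L"Global\\FooCrashed");
//     if (e) SetEvent(e);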
BTW, you might be able to control the appearance of the first error message using drwtsn32 (or whatever is used in Win7), and I'm not sure, but the second error message might only appear in debug builds. Building in release mode might make things easier for you, though the most important thing, IMHO, is fixing the cause of the crashes in the first place - which will be easier in debug builds.
I did this a long time ago (in the 90s, on NT4). I don't expect the principles to have changed.
The basic approach is, once you have started the process, to inject a DLL that duplicates the functionality of UnhandledExceptionFilter() from KERNEL32.DLL. Rummaging around my old code, I see that I patched GetProcAddress, LoadLibraryA, LoadLibraryW, LoadLibraryExA, LoadLibraryExW and UnhandledExceptionFilter.
Hooking the LoadLibrary* functions made sure the patching was applied to all modules, and the revised GetProcAddress had to provide the addresses of the patched versions of the functions rather than the KERNEL32.DLL versions.
And, of course, the UnhandledExceptionFilter() replacement does what you want. For example, start a just in time debugger to take a process dump (core dumps are implemented in user mode on NT and successors) and then kill the process.
My implementation had the patched functions written with __declspec(naked), handling all the registers by hand, because the compiler can destroy the contents of some registers that callers from assembly might not expect to be destroyed.
Of course there was a bunch more detail, but that is the essential outline.