How to avoid getting mixed outputs with QProcess->readAllStandardOutput()? - c++

I have a QProcess in which I get data from the backend of my application. I have a simple connection to get the output string generated by QProcess. Right now it works well if I request a single command.
Now, I need to run two commands in a row one by one. The expected behavior is the following:
Send command 1
Wait for the output of command 1
Store the output of command 1 in a variable
Send command 2
Wait for the output of command 2
Store the output of command 2 in a variable
But I'm getting unexpected behavior. The two commands are sent to the backend, but sometimes I get a single mixed output for the two commands. I think it could be related to the time it takes the backend to return the first result. I need to wait for the first output before sending the second command. Any ideas on how to solve this problem?
If I use study->waitForFinished(); or study->waitForFinished(-1);, the app freezes and then crashes.
This is my code:
connect(study, &QProcess::readyReadStandardOutput, [=] {
    QString out = study->readAllStandardOutput();
    qDebug() << "Output= " << out;
});
void StudyClass::writeCommand(const QString& line) {
    study->write(line.toLocal8Bit());
}
If I write two commands as follows:
writeCommand("print_status;");
writeCommand("print_say_hello");
Sometimes I get the desired output (qDebug called in the connection):
Output= 0
Output= hello world
But sometimes I just get a mixed output:
Output= 0 hello world
This is the wrong behavior, because I need a separate result for each command instead of a single combined one.

In order to have separate outputs for the two commands you have to wait for the first output and only then send the second command. You seem to send both commands without a delay and so occasionally you get the output of both commands in one call to the signal handler.
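One non-blocking way to do that is to queue the commands and let the readyReadStandardOutput handler send the next command only after the previous output has arrived. A minimal sketch, assuming m_pending and m_results are members added to StudyClass (illustrative names, not from the original code) and assuming each command produces its whole output in a single burst:

m_pending = QStringList{ "print_status;", "print_say_hello" };

connect(study, &QProcess::readyReadStandardOutput, this, [this] {
    // store the result of the command that just finished
    m_results << QString::fromLocal8Bit(study->readAllStandardOutput());
    // only now send the next queued command
    if (!m_pending.isEmpty())
        study->write(m_pending.takeFirst().toLocal8Bit());
});

// send only the first command; the handler above sends the rest
study->write(m_pending.takeFirst().toLocal8Bit());

Keep in mind the opposite problem can also occur: a single command's output may arrive split across several readyReadStandardOutput emissions, so a robust version needs some delimiter from the backend to know when one result is complete.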

If I understand correctly, your QProcess is a long running process taking input commands through stdin, and then you want to capture the output of each of these commands. If that is the case, then calling study->waitForFinished() on the main thread blocks because the process is still running after each command.
Instead you could try waitForReadyRead() or waitForBytesWritten().
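If a blocking round-trip per command is acceptable, a sketch along these lines might work (runCommand is a hypothetical helper, not from the question; note that waitForReadyRead() returns as soon as some data is available, which may still be only part of the reply):

QString StudyClass::runCommand(const QString &line)
{
    study->write(line.toLocal8Bit());
    study->waitForBytesWritten();

    if (!study->waitForReadyRead(5000))     // wait up to 5 s for output
        return QString();                   // nothing arrived in time

    return QString::fromLocal8Bit(study->readAllStandardOutput());
}

Unlike waitForFinished(), these calls return once data has been written or read, so they do not block until the long-running backend process itself exits.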

Related

Monitor running qprocess and return value when qprocess is finished

I want to run a QProcess (running the program adb) and, when the process is finished, return the results to the calling function. However, there's every possibility that adb could find itself in a loop, printing error messages such as "ADB server didn't ACK" to stdout and never finishing. I need to trap these errors.
QProcess run_command;
connect(&run_command,SIGNAL(readyReadStandardOutput()),this,SLOT( dolog() ));
QString result=RunProcess("adb connect 192.168.1.100");
...
QString MainWindow::RunProcess(QString cstring)
{
    run_command.start(cstring);

    // keep gui active for lengthy processes.
    while (run_command.state() != QProcess::NotRunning)
        qApp->processEvents();

    QString command = run_command.readAll();
    return command; // returns nothing if slot is enabled.
}
void MainWindow::dolog()
{
    QString logstring = run_command.readAllStandardOutput();
    if (logstring.contains("error condition"))
        logfile("Logfile:" + logstring);
}
If I enable the signal/slot, dolog() prints stdout to a logfile, but RunProcess returns a blank string. If I disable the signal/slot, RunProcess() returns the qprocess output as expected.
First you need to identify which output stream the command in question is using for its errors.
It is very likely stderr, so you would need to connect to the readyReadStandardError() signal instead.
For the command itself, I would recommend splitting it into the program and its arguments and using the QProcess::start() overload that takes the program and a list of arguments.
That is more robust than relying on a single string being split correctly.
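Applied to the code in the question, both suggestions might look roughly like this (dolog_err is an illustrative slot name, not from the original code):

// also watch stderr, where adb most likely prints its error messages
connect(&run_command, SIGNAL(readyReadStandardError()), this, SLOT(dolog_err()));

// pass the program and its arguments separately instead of one string
run_command.start("adb", QStringList() << "connect" << "192.168.1.100");

where dolog_err() would call run_command.readAllStandardError() instead of readAllStandardOutput().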

QProcess doesn't show any output when it runs rsync

I start rsync in QProcess. My process runs fine (in its own terminal) if I use QProcess::startDetached(), but nothing happens if I start it with QProcess::start(). The problem seems to be that QProcess apparently can't read messages from rsync and write them to the output window.
I have connected this signal in the constructor.
MainWindow::MainWindow()
{
    process = new QProcess(this);
    connect(process, SIGNAL(readyReadStandardOutput()), this, SLOT(onReadyReadStandardOutput()));
}
Later on button clicked I call:
void MainWindow::onButton1Clicked()
{
    process->start("rsync -a root@10.0.0.1:/path/ /rsync_folder");
    //process->start("ping 10.0.0.01"); // this works for testing and I see output, but not the above.
}
When rsync starts, it prints a message and asks for a password. None of it is received by my QProcess, but the ping messages are received. What could possibly be wrong here?
The offending rsync line also works directly on the Windows 7 command line, but it just doesn't seem to show any progress in QProcess.
Update
Here is how I display the output.
void MainWindow::onReadyReadStandardOutput()
{
    qDebug() << process->readAllStandardOutput();
}
http://doc.qt.io/qt-5/qprocess.html#communicating-via-channels
Did you remember to also connect to and check the standard error channel?
http://doc.qt.io/qt-5/qprocess.html#readAllStandardError
That has fixed it for me in the past for some QProcesses I have started.
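Concretely, that means either connecting readyReadStandardError() as well, or merging the two channels so the existing slot sees everything. A rough sketch using the process member from the question (onReadyReadStandardError is an illustrative slot name):

// option 1: watch stderr explicitly
connect(process, SIGNAL(readyReadStandardError()), this, SLOT(onReadyReadStandardError()));

// option 2: merge stderr into stdout before starting, so onReadyReadStandardOutput() receives both
process->setProcessChannelMode(QProcess::MergedChannels);
process->start("rsync -a root@10.0.0.1:/path/ /rsync_folder");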
Another way I've done it, is to use QProcess::startDetached();
Hope that helps.
My research suggests that rsync probably behaves like scp, which, according to this answer, doesn't generate output when its output is redirected.

Program output is different when executed through Powershell remote session

I'm developing a Windows Console Application, using Visual Studio 2010, which prints progress percentage on screen such that the last number gets overwritten with the current number. I'm using carriage return to achieve this like so:
std::wcout
    << "[Running: "
    << std::setw(3)
    << std::setprecision(3)
    << percent
    << "%]"
    << '\r'
    ;
When I execute this program through PowerShell on a remote host using PSSesion:
Enter-PSSession WWW.XXX.YYY.ZZZ -credential <UserName>
[WWW.XXX.YYY.ZZZ]: PS C:\> .\test.exe
it behaves differently compared to when it's executed locally, either in PowerShell or in cmd.exe. I face the following three problems:
1) The console output of the program is different: instead of overwriting [Running xx%], it prints on a new line each time, like so:
[Running 1%]
[Running 2%]
[Running 3%]
[Running 4%]
...
This is similar to what happens when output of a program is redirected to a file (lone carriage returns or newlines are replaced with carriage returns+newlines combinations).
2) The output doesn't show up as and when the program writes something to cout; it all arrives at once at the end of execution. Does PowerShell cache the output of the remote program and send it to the caller only once? If there is a significant time difference between two lines (a line meaning all the output between two newlines), the first line does get printed, as if PowerShell waits for a while for another newline: if one is received it goes back to waiting, and if none is received within a certain time period (~500 ms) it sends the output up to the last newline (not all of the accumulated output) to the caller. As can be seen from my code, there is no newline, resulting in all the [Running xx%] output being printed at once at the end.
3) At a certain point in the program I need the user's confirmation, but cin.fail() returns true during remote execution. Is there any way to take user input in such an execution environment? Or is there any way to detect that the program is being executed remotely through PowerShell (e.g. some environment variable)?
Any help related to any point will be greatly appreciated. :)
With respect to 1), the WinRM protocol for PowerShell Remoting is really just an input/output stream for commands and their serialized output. It has no terminal emulation capabilities, so when you run things remotely, their output is fed back sequentially one line at a time. If you used SSH or Telnet to a remote shell of powershell.exe, you would likely see what you expect.
As for 2), the default output buffering mode appears to be "Block", which would explain the behaviour you're seeing. You can change this with the New-PSSessionOption cmdlet to create a configuration for a session created by New-PSSession, which can then be passed to the -Session parameter on remoting-aware cmdlets. The option is OutputBufferingMode. If you set it to None, you should get the desired behaviour.
# default output buffering mode is "none" for default options
$s = new-pssession localhost -sessionoption (new-pssessionoption)
enter-pssession $s
For 3), you should be able to accept input interactively IIRC. Looks like interactive stuff doesn't work right - I may have completely misremembered this.

Condor output file updating

I'm running several simulations using Condor and have coded the program so that it outputs a progress status in the console. This is done at the end of a loop where it simply prints the current time (this can also be percentage or elapsed time). The code looks something like this:
printf("START");
while (programNeedsToRum) {
// Run code repetitive code...
// Print program status update
printf("[%i:%i:%i]\r\n", hours, minutes, seconds);
}
printf("FINISH");
When executing normally (i.e. in the terminal/cmd/bash) this works fine, but the Condor nodes don't seem to printf() the status. Only once the simulation has finished are all the status updates written to the output file, by which point they are no longer of use. My *.sub file that I submit to Condor looks like this:
universe = vanilla
executable = program
output = out/out-$(Process)
error = out/err-$(Process)
queue 100
When submitted, the program executes (this is confirmed by condor_q) and the output files contain this:
START
Only once the program has finished running does its corresponding output file show (for example):
START
[0:3:4]
[0:8:13]
[0:12:57]
[0:18:44]
FINISH
While the program executes, the output file only contains the START text. So I came to the conclusion that the file is not updated while the node executing the program is busy. So my question is: is there a way of updating the output files manually, or of gathering information on the program's progress in a better way?
Thanks already
Max
What you want to do is use the streaming output options. See the stream_error and stream_output options you can pass to condor_submit as outlined here: http://research.cs.wisc.edu/htcondor/manual/current/condor_submit.html
By default, HTCondor stores stdout and stderr locally on the execute node and transfers them back to the submit node on job completion. Setting stream_output to TRUE will ask HTCondor to instead stream the output as it occurs back to the submit node. You can then inspect it as it happens.
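For example, adding the streaming options to the submit file from the question might look like this (an untested sketch):

universe      = vanilla
executable    = program
output        = out/out-$(Process)
error         = out/err-$(Process)
stream_output = TRUE
stream_error  = TRUE
queue 100

With that in place, each out/out-N file should grow while its job runs instead of appearing only at completion.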
Here's something I used a few years ago to solve this problem. It uses condor_chirp which is used to transfer files from the execute host to the submitter. I have a python script that executes the program I really want to run, and redirects its output to a file. Then, periodically, I send the output file back to the submit host.
Here's the Python wrapper, stream.py:
#!/usr/bin/python
import os, sys, time

os.environ['PATH'] += ':/bin:/usr/bin:/cygdrive/c/condor/bin'

# make sure the file exists
open(sys.argv[1], 'w').close()

pid = os.fork()
if pid == 0:
    # child: run the real command, redirecting its output to the file
    os.system('%s >%s' % (' '.join(sys.argv[2:]), sys.argv[1]))
else:
    # parent: ship the output file back to the submit host every 10 seconds
    # until the child has exited
    while True:
        time.sleep(10)
        os.system('condor_chirp put %s %s' % (sys.argv[1], sys.argv[1]))
        try:
            os.wait4(pid, os.WNOHANG)
        except OSError:
            break
And my submit script. The program ran sh hello.sh and redirected the output to myout.txt:
universe = vanilla
executable = C:\cygwin\bin\python.exe
requirements = Arch=="INTEL" && OpSys=="WINNT60" && HAS_CYGWIN==TRUE
should_transfer_files = YES
transfer_input_files = stream.py,hello.sh
arguments = stream.py myout.txt sh hello.sh
transfer_executable = false
It does send the output in its entirety, so take that into account if you have a lot of jobs running at once. Currently it sends the output every 10 seconds; you may want to adjust that.
With condor_tail you can view the output of a running job.
To see stdout, just add the job ID (and -f if you want to follow the output and see the updates immediately). Example:
condor_tail 314.0 -f

program stops after execvp( command.argv[0], command.argv)

I am writing a small shell program that takes a command and executes it. If the user enters an invalid command, the if statement returns -1. If the command is correct it executes the command; however, once it executes the command the program ends. What am I doing wrong, such that it does not execute the lines of code after it? I have tested execvp(command.argv[0], command.argv) with ls and cat commands, so I am pretty sure it works. Here is my code.
int shell(char *cmd_str)
{
    int commandLength = 0;
    cmd_t command;

    commandLength = make_cmd(cmd_str, command);
    cout << commandLength << endl;
    cout << command.argv[0] << endl;

    // if the command executes successfully, nothing after this call runs
    if (execvp(command.argv[0], command.argv) == -1)
    {
        commandLength = -1;
    }
    else
    {
        cout << "work" << endl;
    }

    cout << commandLength << endl;
    return commandLength;
}
From man page of execvp(3)
The exec() family of functions replaces the current process image with
a new process image
So your current process image is overwritten with the image of your command! Hence you always need to use a fork+exec combination, so that your command executes in the child process and your current process continues safely as the parent.
On a lighter note I want to illustrate the problem with a picture as a picture speaks a thousand words. No offence intended :) :)
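A sketch of that pattern applied to the code in the question (it needs <unistd.h> and <sys/wait.h>; command and commandLength are the variables from the posted shell() function):

pid_t pid = fork();
if (pid == 0)
{
    // child: on success its image is replaced by the command
    execvp(command.argv[0], command.argv);
    perror("execvp");                 // only reached if execvp failed
    _exit(127);
}
else if (pid > 0)
{
    int status = 0;
    waitpid(pid, &status, 0);         // parent: wait for the command to finish
    if (WIFEXITED(status) && WEXITSTATUS(status) == 127)
        commandLength = -1;           // the command could not be executed
}
else
{
    commandLength = -1;               // fork itself failed
}

This way only the child's image is replaced by execvp(), while the parent (your shell) keeps running and reaches the code after the call.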
From the documentation on exec
The exec() family of functions replaces the current process image with a new process image. The functions described in this manual page are front-ends for execve(2). (See the manual page for execve(2) for further details about the replacement of the current process image.)
If you want your process to continue, this is not the function you want to use.
@Pavan - Just for nit-pickers like myself, technically the statement "current process is gone" is not true. It's still the same process, with the same pid, just overwritten with a different image (code, data, etc.).