I'm developing a Windows console application, using Visual Studio 2010, which prints a progress percentage on screen such that the previous number gets overwritten by the current one. I'm using a carriage return to achieve this, like so:
std::wcout
<< "[Running: "
<< std::setw(3)
<< std::setprecision(3)
<< percent
<< "%]"
<< '\r'
;
When I execute this program through PowerShell on a remote host using a PSSession:
Enter-PSSession WWW.XXX.YYY.ZZZ -credential <UserName>
[WWW.XXX.YYY.ZZZ]: PS C:\> .\test.exe
it behaves differently compared to when it's executed locally, either in PowerShell or in cmd.exe. I face the following three problems:
1) The console output of the program is different: instead of overwriting [Running xx%], it prints on the next line, like so:
[Running 1%]
[Running 2%]
[Running 3%]
[Running 4%]
...
This is similar to what happens when the output of a program is redirected to a file (lone carriage returns or newlines are replaced with carriage return + newline combinations).
2) The output doesn't show up as and when the program writes something to cout; it all arrives at once at the end of execution. Does PowerShell cache the output of the remote program and send it to the caller only once? If there is a significant time difference between two lines (a line meaning all the output between two newlines), the first line gets printed as if PowerShell waits a while for another newline: if one is received it goes back to waiting, and if none arrives within a certain period (~500 ms) it sends the output up to the last newline (and not all the accumulated output) to the caller. As can be seen from my code, there is no newline, resulting in all the [Running xx%] output being printed at once at the end.
3) At a certain point in the program I need the user's confirmation, but cin.fail() returns true during remote execution. So, is there any way to take user input in such an execution environment? Or is there any way to detect that the program is being executed remotely through PowerShell (e.g. some environment variable)?
Any help related to any point will be greatly appreciated. :)
With respect to 1), the WinRM protocol for PowerShell Remoting is really just an input/output stream for commands and their serialized output. It has no terminal-emulation capabilities, so when you run things remotely, their output is fed back sequentially, one line at a time. If you used SSH or Telnet to a remote shell running powershell.exe, you would likely see what you expect.
As for 2), the default OutputBufferingMode appears to be "Block", which would explain the behaviour you're seeing. You can change this with the New-PSSessionOption cmdlet to create a configuration for a session created by New-PSSession, which can then be passed to the -Session parameter of remoting-aware cmdlets. The option is OutputBufferingMode; if you set it to None, you should get the desired behaviour.
# a session option object created with default settings already has
# OutputBufferingMode = "none"; it can also be set explicitly:
$s = new-pssession localhost -sessionoption (new-pssessionoption -outputbufferingmode None)
enter-pssession $s
For 3), I thought you could accept input interactively, but it looks like interactive input doesn't work correctly over remoting - I may have completely misremembered this.
Related
I have a QProcess through which I get data from the backend of my application. I have a simple connection to get the output string generated by the QProcess. Right now it works well if I send a single command.
Now, I need to run two commands in a row one by one. The expected behavior is the following:
Send command 1
Wait for the output of command 1
Store the output of command 1 in a variable
Send command 2
Wait for the output of command 2
Store the output of command 2 in a variable
But I'm seeing unexpected behavior. The two commands are sent to the backend, but sometimes I get the two outputs mixed together. I think it could be related to the time it takes for the backend to return the first result; I need to wait for the first output before sending the second command. Any ideas on how to solve this problem?
If I use study->waitForFinished(); or study->waitForFinished(-1);, the app freezes and then crashes.
This is my code:
connect(study, &QProcess::readyReadStandardOutput, [=] {
    QString out = study->readAllStandardOutput();
    qDebug() << "Output= " << out;
});

void StudyClass::writeCommand(const QString& line) {
    study->write(line.toLocal8Bit());
}
If I write two commands as follows:
writeCommand("print_status;");
writeCommand("print_say_hello");
Sometimes I get the desired output (qDebug called in the connection):
Output= 0
Output= hello world
But sometimes I just get a mixed output:
Output= 0 hello world
This is the wrong behavior, because I need a separate result for each command instead of just one.
In order to have separate outputs for the two commands, you have to wait for the first output and only then send the second command. You currently send both commands without any delay, so occasionally you get the output of both commands in one call to the signal handler.
If I understand correctly, your QProcess is a long running process taking input commands through stdin, and then you want to capture the output of each of these commands. If that is the case, then calling study->waitForFinished() on the main thread blocks because the process is still running after each command.
Instead you could try waitForReadyRead() or waitForBytesWritten().
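For example, here is a minimal sketch of how the second command could be held back until the first reply has arrived, using waitForReadyRead(). It assumes study is a running QProcess whose backend ends each reply with a newline; runCommand is a hypothetical helper, not part of the original code, and it competes with any connected readyReadStandardOutput handler for the same data, so use one approach or the other:
QString StudyClass::runCommand(const QString& line)
{
    study->write(line.toLocal8Bit());
    QString out;
    // Block (up to 3 s per chunk) until this command's complete reply arrives,
    // so the caller cannot send the next command too early.
    while (!out.endsWith('\n') && study->waitForReadyRead(3000))
        out += QString::fromLocal8Bit(study->readAllStandardOutput());
    return out;
}
The two writeCommand calls from the question would then become two sequential calls: first runCommand("print_status;"), then runCommand("print_say_hello").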
I have a C++ program which is called at startup via a cronjob (in crontab):
@reboot sudo /home/pi/CAN/RCR_datalogging/logfileControl
which does run logfileControl any time the Pi is booted, as it shows up in the list of running programs (ps -e). logfileControl contains two system calls to C++ programs related to SocketCAN (SocketCAN is part of the Linux kernel; it allows CAN data to be handled as network sockets). I want logfileControl to run on startup so that it can initialize the CAN socket (system call 1) and then start the first logfile (system call 2, candumpExternal - this is candump from SocketCAN with a minor modification to write the logfile to a specific location rather than wherever candump is run from, but using the original version had the same issue). The first system call seems to be working properly, as if I try to initialize the socket again it is busy, but the second system call doesn't appear to be happening, as no logfile is created at all. If I manually run logfileControl from the command line, it works as expected and creates the logfile, which has left me quite confused...
Does anyone have an insight as to what is going on here?
system("sudo /sbin/ip link set can0 up type can bitrate 500000");
// This is ran initially as logging should start as soon as the pi is on
system("/home/pi/CAN/RCR_datalogging/candumpExternal can0 -l -s 0"); // candump with the option to log(-l) as well as
// continue to output to console (-s 0)
std::cout <<"Setup Complete" << std:: endl;
while(true) { //sleeping indefinitely so that the program can stay open and wait for button presses
sleep(60);
}
Edit: I also tried adding a simple 5 second pause at the beginning of the program, but this didn't seem to make any difference.
I'm running several simulations using Condor and have coded the program so that it outputs a progress status to the console. This is done at the end of a loop where it simply prints the current time (this could equally be a percentage or the elapsed time). The code looks something like this:
printf("START");
while (programNeedsToRum) {
// Run code repetitive code...
// Print program status update
printf("[%i:%i:%i]\r\n", hours, minutes, seconds);
}
printf("FINISH");
When executing normally (i.e. in the terminal/cmd/bash) this works fine, but the Condor nodes don't seem to printf() the status. Only once the simulation has finished are all the status updates written to the output file, but by then they're no longer of use. My *.sub file that I submit to Condor looks like this:
universe = vanilla
executable = program
output = out/out-$(Process)
error = out/err-$(Process)
queue 100
When submitted, the program executes (this is confirmed in condor_q) and the output files contain only this:
START
Only once the program has finished running does its corresponding output file show this (example):
START
[0:3:4]
[0:8:13]
[0:12:57]
[0:18:44]
FINISH
Whilst the program executes, the output file only contains the START text. So I came to the conclusion that the file is not updated while the node executing the program is busy. My question is: is there a way of updating the output files manually, or of gathering information on the program's progress in a better way?
Thanks already
Max
What you want to do is use the streaming output options. See the stream_error and stream_output options you can pass to condor_submit as outlined here: http://research.cs.wisc.edu/htcondor/manual/current/condor_submit.html
By default, HTCondor stores stdout and stderr locally on the execute node and transfers them back to the submit node on job completion. Setting stream_output to TRUE will ask HTCondor to instead stream the output as it occurs back to the submit node. You can then inspect it as it happens.
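For example, adding the two options to the submit file from the question (a minimal sketch; everything else stays the same):
universe = vanilla
executable = program
output = out/out-$(Process)
error = out/err-$(Process)
stream_output = TRUE
stream_error = TRUE
queue 100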
Here's something I used a few years ago to solve this problem. It uses condor_chirp, which is used to transfer files from the execute host back to the submit host. I have a Python script that executes the program I really want to run and redirects its output to a file. Then, periodically, I send the output file back to the submit host.
Here's the Python wrapper, stream.py:
#!/usr/bin/python
import os, sys, time

os.environ['PATH'] += ':/bin:/usr/bin:/cygdrive/c/condor/bin'

# make sure the output file exists before the first chirp
open(sys.argv[1], 'w').close()

pid = os.fork()
if pid == 0:
    # child: run the real command, redirecting its output to the file
    os.system('%s >%s' % (' '.join(sys.argv[2:]), sys.argv[1]))
else:
    # parent: every 10 seconds, push the output file back to the submit host
    while True:
        time.sleep(10)
        os.system('condor_chirp put %s %s' % (sys.argv[1], sys.argv[1]))
        try:
            # non-blocking check; raises OSError once the child has been reaped
            os.wait4(pid, os.WNOHANG)
        except OSError:
            break
And my submit script. The job ran sh hello.sh and redirected the output to myout.txt:
universe = vanilla
executable = C:\cygwin\bin\python.exe
requirements = Arch=="INTEL" && OpSys=="WINNT60" && HAS_CYGWIN==TRUE
should_transfer_files = YES
transfer_input_files = stream.py,hello.sh
arguments = stream.py myout.txt sh hello.sh
transfer_executable = false
It does send the output in its entirety each time, so take that into account if you have a lot of jobs running at once. Currently it sends the output every 10 seconds; you may want to adjust that.
With condor_tail you can view the output of a running process.
To see stdout, just add the job ID (and -f if you want to follow the output and see the updates immediately). Example:
condor_tail 314.0 -f
Actually, I had trouble choosing a title for this post, because I don't know how to summarize my meaning in a professional way. But I'll explain my question below:
I'm running a program written in C++; the command is:
./VariationHunter_SC
It then prompts you to type in several parameters:
Please enter the minimum paired-end insert size:
Please enter the maximum paired-end insert size:
Please enter the pre-processing mapping prune probability:
Please enter the name of the input file:
Please enter the minimum support for a cluster:
Obviously I need to type in these parameters one by one to run the program; but I have thousands of such jobs and need to pre-assign the parameters in a script, then submit the script to the computer.
So how can I do that?
thx
Edit
So how can I make the parameter list?
Just like below?
140
160
0
mrfast.vh
1
It seems the program cannot recognize these numbers and distribute them to the individual prompts...
This depends on how the program actually reads the data that you type in - it's likely that it's reading stdin, so you could use a separate file with the parameters and pass them in via redirection: ./VariationHunter_SC < parameter-file
It's also possible that the program will accept parameters on the command line, but there's no way of really knowing that (or how) except from whatever documentation the program comes with (or by reading the source code, if it's available and there are no other accurate docs).
Simply use the piping character to pipe the contents of a file to your program.
For example, in a Windows command shell:
echo "asdf" | pause
This will pass "asdf" to the pause program. As a result, pause will print a "Press any key to continue" message, then immediately continue, because it receives the "asdf" string as a response.
So, overall, write or use a program that outputs the contents of your file. Call it, then pipe its output to the program that needs the input.
The Unix cat command is such a command: it writes the contents of a file to its output, which becomes the input of another executable if you pipe it.
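For example, assuming the parameter list from the question's edit is saved in a file named parameter-file (a hypothetical name):
cat parameter-file | ./VariationHunter_SC
This is equivalent to the input redirection shown in the other answer.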
I am trying to execute a DOS command from within my C++ program, but as soon as I add quotes to the output filepath (of a redirection) the command no longer gets executed and returns instantly. I've shown an example below of a path without spaces, but since paths may have spaces, and thus must be quoted for the shell to understand them properly, I need to solve this dilemma - and I'm trying to get the simplest case working first.
i.e.
The following WORKS:
sprintf(exec_cmd,"\"C:/MySQL Server 5.5/bin/mysqldump.exe\" -u%s -p%s %s > C:/backup.bak",user,password,db_name);
system(exec_cmd);
The following does NOT work (notice the quotes around the output):
sprintf(exec_cmd,"\"C:/MySQL Server 5.5/bin/mysqldump.exe\" -u%s -p%s %s > \"C:/backup.bak\"",user,password,db_name);
system(exec_cmd);
I'm guessing it is choking somewhere. I've tried the same "exec_cmd" in popen to no avail.
Any help/advice is greatly appreciated.
I don't think your shell (cmd.exe) allows redirection to a file name with spaces. I couldn't make my command.com from DOS 6.22 accept it (I don't have a cmd.exe nearby to test).
Anyway, you can use the --result-file option to pass the redirection to the command itself.
mysqldump ... --result-file="file name" ...
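Applied to the code from the question, that could look something like this (a sketch only, reusing the same exec_cmd buffer and variables; the quoted path now goes to --result-file instead of a shell redirection):
// mysqldump writes the dump file itself, so cmd.exe never has to
// parse a quoted redirection target
sprintf(exec_cmd, "\"C:/MySQL Server 5.5/bin/mysqldump.exe\" -u%s -p%s %s --result-file=\"C:/backup.bak\"", user, password, db_name);
system(exec_cmd);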