I have finally worked out how to get stdin and stdout to pipe between the main app and a process created with CreateProcess (win32) or exec (linux). Now I am interested in harnessing the piping nature of an app. The app I am running can be piped into:
eg: cat file.txt | grep "a"
If I want to run "grep", sending the contents of "file.txt" to it (which I have in a buffer in my C++ app), how do I do this? I assume I don't just pump it down stdin, or am I wrong? Is that what I do?
Yes, that's exactly what you do: read from stdin and write to stdout.
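On the Linux/exec side, that comes down to writing your buffer into the pipe connected to grep's stdin and then closing that pipe so grep sees end-of-file. Here is a minimal sketch of the idea, assuming POSIX APIs; the buffer contents and the grep pattern are placeholders and error handling is trimmed:
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    const std::string buffer = "a line\nanother line with a\n";  // stand-in for your file contents

    int inPipe[2];   // parent writes -> child's stdin
    int outPipe[2];  // child's stdout -> parent reads
    if (pipe(inPipe) == -1 || pipe(outPipe) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       // child: become grep
        dup2(inPipe[0], STDIN_FILENO);    // read end of inPipe becomes stdin
        dup2(outPipe[1], STDOUT_FILENO);  // write end of outPipe becomes stdout
        close(inPipe[0]);  close(inPipe[1]);
        close(outPipe[0]); close(outPipe[1]);
        execlp("grep", "grep", "a", (char*)nullptr);
        perror("execlp"); _exit(127);
    }

    // parent: feed the buffer to grep's stdin, then close it so grep sees EOF
    close(inPipe[0]);
    close(outPipe[1]);
    if (write(inPipe[1], buffer.data(), buffer.size()) == -1) perror("write");
    close(inPipe[1]);

    // collect whatever grep printed
    // (for very large buffers you would interleave writing and reading,
    //  or use a second thread, to avoid filling the pipe and deadlocking)
    char chunk[4096];
    ssize_t n;
    while ((n = read(outPipe[0], chunk, sizeof chunk)) > 0)
        fwrite(chunk, 1, n, stdout);
    close(outPipe[0]);
    waitpid(pid, nullptr, 0);
    return 0;
}
The CreateProcess version is the same shape: write the buffer to the write end of the pipe you attached to the child's hStdInput, then close that handle so the child sees end-of-file.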
One of the strokes of genius behind linux is the simplicity of redirecting input and output almost effortlessly, as long as your apps obey some very simple, basic rules. For example: send data to stdout and errors or informational messages to stderr. That makes it easy for a user to keep track of status, and you can still use your app to send data to a pipe.
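As a trivial illustration (the record lines here are made up), an app that obeys that rule in C++ is just a matter of choosing the right stream:
#include <iostream>

int main() {
    // data goes to stdout so it can be piped or redirected
    std::cout << "record-1,42\n";
    std::cout << "record-2,17\n";
    // status and diagnostics go to stderr so they don't pollute the data stream
    std::cerr << "processed 2 records\n";
    return 0;
}
Run it as ./myapp | sort and only the data lines reach sort; the status message still shows up on the terminal.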
You can also redirect data (from stdout) and messages (from stderr) independently:
myapp | tail -n 5 > myapp.data # Save the last 5 lines, display msgs
myapp 2> myapp.err | sort # Sort the output, send msgs to a file
myapp 2> /dev/null # Throw msgs away, display output
myapp > myapp.out 2>&1 # Send all output (incl. msgs) to a file
Redirection may be a bit confusing at first, but you'll find the time spent learning will be well worth it!
Related
I have a script where I launch a shell command. The problem is that the script doesn't wait until the command is finished and continues right away.
I have tried WAIT but it doesn't work as the shell command turns the source off and on (ignition off/on) and I get the error that WAIT cannot be executed because power is off.
Is there any command I can use to make the program wait until the shell command has finished executing?
My script looks like this:
OS.COMMAND echo OUTP OFF > COM1
OS.COMMAND echo OUTP ON > COM1
System.up
If I wanted to execute a shell command without redirecting, I would use OS.Area instead of OS.Command, because OS.Area is blocking and waits until the shell command has finished. However, OS.Area does not support redirecting, I think.
If I wanted to execute a shell command and redirect the output to a file, I would first delete the file and then wait until it becomes accessible, like this:
IF OS.FILE.EXIST("myfile.txt")
RM "myfile.txt"
OS.Command ECHO "Hello World" > "myfile.txt"
WAIT OS.FILE.readable("myfile.txt")
However, it looks like you want to write to a COM port on Windows via a shell command, and I don't think it is possible in TRACE32 to wait until that write to the COM port has completed when using OS.Command...
So I suggest doing this task with the TERM commands instead:
TERM.METHOD #1 COM COM1 115200. 8 NONE 1STOP NONE
TERM.view #1
TERM.Out #1 "OUTP OFF" 0x0A
TERM.Out #1 "OUTP ON" 0x0A
Of course you have to set the correct baud rate, bits, parity and stop bits. The 0x0A after each TERM.Out is simply the line-feed character.
Does your terminal show any output as a reaction to OUTP ON? If yes, you can also wait for this output with e.g. SCREEN.WAIT TERM.LINE(#1,-1)=="OUTP is now ON" 5.s
Otherwise I assume that a simple WAIT 50.ms before SYStem.Up will probably do the trick too.
I have a binary compiled in Cpp with the following code:
std::string input;
getline(std::cin, input);
std::cout << "Message given: " << input << std::endl;
If I execute this example and type "Hello world!" in the terminal, it works perfectly:
Message given: Hello world!
Now, I launch the executable with stdout redirected:
./basicsample >> output/test
If I try to inject input using the file descriptor:
echo "Hello world!" > /proc/${PID}/fd/0
The message appears in the terminal that launched the process:
[vgonisanz#foovar bash]$ ./basicsample >> output/test
Hello world!
But the message does not appear in the program's output. I expect the message to be processed by getline, but it is not detected! However, if I type directly in that bash session, the program gets the input. I'm trying to write a script that injects input into a background process, but it is not working.
How can I inject input into the process so that it is detected, without doing it manually?
UPDATE:
It seems that using expect this could work, but I would prefer to avoid dependencies like that. After several tries, the best way to do it without dependencies is to use a pipe, for example:
mkdir tmp; mkfifo tmp/input.pipe; nohup ./basicsample < tmp/input.pipe > tmp/user.out 2> tmp/nohup.err &
This runs the program reading from the input pipe, with the console output and errors redirected to files.
Then, just feed the pipe using:
echo "Hello world!" > tmp/input.pipe
The problem with this is that the pipe works only once. After getting one input, it never listens again. Maybe this is the way, but I don't know how to keep it listening.
I tried to redirect it in several ways, using files, etc., but it doesn't work. Thanks in advance.
The best way to do it without dependencies is to use a pipe, for example:
mkdir tmp
mkfifo tmp/input.pipe
(tail -f tmp/input.pipe) | ./basicsample > tmp/log.0 &
This runs the program with an input pipe and its output saved in a log file. The & launches it in the background so the console isn't blocked, and tail -f keeps the pipe open between writes, so the program doesn't see end-of-file as soon as the first echo closes the FIFO.
Then inject data using:
echo "YOUR_STRING" > tmp/input.pipe
It should work for your posed problem.
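Two details are worth spelling out here. Writing to /proc/${PID}/fd/0 only echoed on the terminal because, when fd 0 is a terminal device, writing to that file just prints on the screen; it does not become the process's input. And the FIFO appearing to work only once is partly the program's own doing: it calls getline exactly once and then exits. A hypothetical variant that keeps reading until end-of-file, combined with the tail -f trick above (which keeps the pipe open between writes), will pick up every line echoed into the FIFO:
#include <iostream>
#include <string>

int main() {
    std::string input;
    // keep reading until stdin is closed (EOF); each echo into the FIFO
    // delivers another line here as long as something keeps the pipe open
    while (std::getline(std::cin, input)) {
        std::cout << "Message given: " << input << std::endl;
    }
    return 0;
}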
I'm doing make all -d --trace
How do I get Gnu Make to output timestamps for every line it outputs?
More generally, how do I add a timestamp to every STDOUT and STDERR statement?
There is a solution for Linux/Bash but I'm on Windows.
I created a one-line batch file, add_ts.bat: echo %time% %1
I tried the following but I only got one timestamp (without the lines that were output):
make all --trace -d 2>&1 | add_ts.bat
To a first approximation you need a batch file like:
add_ts.bat
@for /F "usebackq delims==" %%i in (`%1`) do @echo %time% %%i
which you would run like:
add_ts.bat "make all -d --trace" > buildlog.txt
This however isn't good enough if you want to capture and
timestamp STDERR as well as STDOUT from the command passed as
%1, because the backticks operator around %1 will only capture STDOUT.
To fix this you'll need to capture STDERR as well as STDOUT within the backticks, by using redirection in there, which in turn means
you need to run a subshell to understand the redirection, and you need to
escape the redirection operators so they're not interpreted by the toplevel
shell. Which all comes to:
@for /F "usebackq delims==" %%i in (`cmd /C %1 2^>^&1`) do @echo %time% %%i
Run just the same way.
Later
what I don't get is why the | itself wasn't enough to send STDOUT and STDERR to STDIN of add_ts.bat
It is. I think you are labouring under the combination of two misconceptions.
One: You believe that a program's commandline arguments are the same as its standard
input, or that it gets its commandline arguments from its standard input. It doesn't.
A program's commandline arguments, if any, are passed to it as a fixed list
of strings in the program-startup protocol. Its standard input is an input stream made
available to it at the same time by the OS and connected by default to the console in which the program
starts. This default can be overridden in the shell by redirection operators. The contents of that input stream are not fixed in advance. It will feed to the
the program whatever is input to the console, or from its redirected proxy, as long as the program is running, as and when the program reads it. The program
can parse or ignore its commandline arguments and, quite independently of that, it can read or ignore its standard input.
Your program add_ts.bat is a program that parses the first of its commandline arguments
- it uses %1 - and ignores any more. And it ignores its standard input entirely.
Two: You believe that the effect of a pipeline, e.g.
a | b
is to start an a process and then, for each line that it writes to the standard output, start
a distinct b process which will automatically receive that one line written by a as
a single commandline argument (no matter how many words are in the line) and do its stuff
with that single commandline argument.
That's not what happens. The OS starts one a process and one b process, and connects the
standard output of the one a process to the standard input of the one b process. For the
pipeline to work at all, b has got to be a program that reads its standard input. Your
add_ts.bat is not such a program. It's only a program that parses its first commandline
argument: the | doesn't give it any, and the commandline:
make all --trace -d 2>&1 | add_ts.bat
doesn't give it any either. The commandline:
make all --trace -d 2>&1 | add_ts.bat "Hello World"
would give it one commandline argument and:
make all --trace -d 2>&1 | add_ts.bat Hello World
would give it two, not one, commandline arguments, the second being ignored. But in any case
it doesn't read its standard input so piping to it is futile.
The site ss64.com is perfectly good about CMD redirection and piping
but it assumes you know what a program has to do to be a pipeline-able command: To be an upstream command,
it has to write its standard output; to be a downstream command it has to read its standard input.
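For what it's worth, such a downstream filter is only a few lines of C++. The sketch below (the names and timestamp format are my own, not tested against every CMD quirk) reads its standard input line by line and writes each line back out prefixed with the current time:
#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <string>

int main() {
    std::string line;
    // read standard input line by line; prefix each line with a wall-clock timestamp
    while (std::getline(std::cin, line)) {
        auto now = std::chrono::system_clock::now();
        std::time_t t = std::chrono::system_clock::to_time_t(now);
        std::tm tm{};
#ifdef _WIN32
        localtime_s(&tm, &t);
#else
        localtime_r(&t, &tm);
#endif
        std::cout << std::put_time(&tm, "%H:%M:%S") << ' ' << line << '\n';
    }
    return 0;
}
Compiled as, say, add_ts_filter.exe (the name is arbitrary), it would sit at the end of the pipeline, as in make all --trace -d 2>&1 | add_ts_filter.exe, and it works there precisely because it reads its standard input.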
Using a batch file wrapper is a clever solution if you don't mind the extra overhead. Otherwise I think you'll have to modify GNU make itself to have it print out this data.
If that's not palatable for some reason, you can get that information by using ElectricMake, a GNU-make-compatible implementation of make that includes lots of enhancements, including annotated build logs that have microsecond-resolution timestamps for every job in the build. ElectricMake is part of ElectricAccelerator Huddle.
Here's a bit of the annotation for a trivial "echo Hello World!" job:
<job id="J00007fb820002000" thread="7fb82f7fe700" start="3" end="4" type="rule" name="all" file="Makefile" line="1">
<command line="2">
<argv>echo Hello, world!</argv>
<output src="prog">Hello, world!
</output>
</command>
<commitTimes start="0.291693" wait="0.296587" commit="0.296628" write="0.296680"/>
<timing invoked="0.291403" completed="0.296544" node="ecdroid3a-59"/>
</job>
Here, the <timing> tag shows the start time (0.291403 seconds) and end time (0.296544 seconds) of the job relative to the start of the build.
These annotated build logs can be viewed and analysed graphically with ElectricInsight, a companion tool for ElectricMake.
ElectricAccelerator Huddle is the freemium version of ElectricAccelerator -- usage is entirely free up to a point, with modest pay-as-you-go fees beyond that.
Disclaimer: I'm the architect of ElectricAccelerator.
When I execute this command (where fail.cpp is a simple program filled with compiler errors), the errors are not output directly on the screen, but, rather, within the fail.out file:
g++ fail.cpp > fail.out 2>&1
From my introductory understanding of bash, this makes sense: > redirects the program output (stdout, a.k.a. 1) to fail.out, while 2>&1 redirects stderr (a.k.a. 2) to this new place for stdout, which is the file. (?)
But changing the order of the command makes things happen differently:
g++ fail.cpp 2>&1 > fail.out
Now, the error messages go directly onto the screen, and fail.out is a blank file.
Why is this? It seems like the same idea as above: redirect the errors that this command will produce to stdout (2>&1), and redirect that, in turn, to the fail.out file. Is it an order of operations thing that I am missing?
2>&1 means "redirect stderr to where stdout is currently connected", and redirections are processed in order from left to right. So the first one does:
Redirect stdout to the fail.out file.
Redirect stderr to stdout's current connection, i.e. the fail.out file
The second one does:
Redirect stderr to stdout's current connection, i.e. the terminal.
Redirect stdout to the fail.out file.
I'm writing a GUI app to wrap wmic.exe, using C++ and the Win32 API.
When calling:
CreateProcess(.., "wmic.exe" , ..)
I'm passing handles to the input and output pipes that I've opened for that purpose, from which I'll later read the output (and to which I'll write the input).
The same code worked for every other Windows command-line utility that I've checked (net.exe, tree.exe, etc.); however, it doesn't work in the case of wmic.exe.
I've noticed that wmic.exe uses some functions of the Console family (http://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx), so I suspect that might be the reason, but I don't really know what's going on inside.
It should work. You can try using a pipe in cmd to call wmic:
echo CPU | wmic > test.log
and it works on my 64-bit Windows 8 computer.
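For reference, the usual CreatePipe/CreateProcess pattern for capturing a console program's output looks roughly like the sketch below (C++, error handling mostly omitted; the wmic query is just an example). The key points are making the child's pipe ends inheritable, setting STARTF_USESTDHANDLES, and closing the parent's copies of the child-side handles before reading. If wmic.exe still misbehaves with this setup, the console-API suspicion above is worth pursuing.
#include <windows.h>
#include <iostream>

int main() {
    SECURITY_ATTRIBUTES sa{};
    sa.nLength = sizeof(sa);
    sa.bInheritHandle = TRUE;                  // pipe handles must be inheritable

    HANDLE outRead = nullptr, outWrite = nullptr;
    HANDLE inRead = nullptr, inWrite = nullptr;
    CreatePipe(&outRead, &outWrite, &sa, 0);   // child's stdout -> parent reads outRead
    CreatePipe(&inRead, &inWrite, &sa, 0);     // parent writes inWrite -> child's stdin
    // the parent-side ends must NOT be inherited by the child
    SetHandleInformation(outRead, HANDLE_FLAG_INHERIT, 0);
    SetHandleInformation(inWrite, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si{};
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput = inRead;
    si.hStdOutput = outWrite;
    si.hStdError = outWrite;

    PROCESS_INFORMATION pi{};
    char cmd[] = "wmic.exe cpu get name";      // example query; command line must be writable
    if (!CreateProcessA(nullptr, cmd, nullptr, nullptr,
                        TRUE /* inherit handles */, 0, nullptr, nullptr, &si, &pi)) {
        std::cerr << "CreateProcess failed: " << GetLastError() << '\n';
        return 1;
    }

    // close the child-side ends in the parent, otherwise ReadFile never sees EOF
    CloseHandle(outWrite);
    CloseHandle(inRead);
    CloseHandle(inWrite);                      // nothing to send on stdin in this sketch

    // note: wmic tends to write UTF-16 text when redirected,
    // so the raw bytes may need conversion before display
    char buf[4096];
    DWORD n = 0;
    while (ReadFile(outRead, buf, sizeof(buf), &n, nullptr) && n > 0)
        std::cout.write(buf, n);

    CloseHandle(outRead);
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}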