When I execute this command (where fail.cpp is a simple program full of compiler errors), the errors are not output directly on the screen but instead end up in the fail.out file:
g++ fail.cpp > fail.out 2>&1
From my introductory understanding of bash, this makes sense: > redirects the program output (stdout, a.k.a. 1) to fail.out, while 2>&1 redirects stderr (a.k.a. 2) to this new place for stdout, which is the file. (?)
But changing the order of the command makes things happen differently:
g++ fail.cpp 2>&1 > fail.out
Now, the error messages go directly onto the screen, and fail.out is a blank file.
Why is this? It seems like the same idea as above: redirect the errors that this command will produce to stdout (2>&1), and redirect that, in turn, to the fail.out file. Is it an order of operations thing that I am missing?
2>&1 means "redirect stderr to wherever stdout is currently connected", and redirections are processed in order from left to right. So the first one does:
1. Redirect stdout to the fail.out file.
2. Redirect stderr to stdout's current connection, i.e. the fail.out file.
The second one does:
1. Redirect stderr to stdout's current connection, i.e. the terminal.
2. Redirect stdout to the fail.out file.
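For intuition, here is a minimal sketch (an illustration, not the shell's actual source) of roughly what a shell does with the POSIX dup2() call; running the two calls in the other order reproduces the second behaviour:

#include <fcntl.h>
#include <unistd.h>

int main() {
    // `> fail.out 2>&1`, processed left to right:
    int fd = open("fail.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    dup2(fd, 1);  // "> fail.out": fd 1 (stdout) now refers to the file
    dup2(1, 2);   // "2>&1": fd 2 (stderr) copies stdout's current target, the file
    // For `2>&1 > fail.out`, dup2(1, 2) runs first and copies the terminal
    // into stderr; the later dup2(fd, 1) moves only stdout, so the errors
    // stay on the screen.
    execlp("g++", "g++", "fail.cpp", (char*)nullptr);  // the command from the question
}

The key point is that dup2 takes a snapshot of where a descriptor points at that moment; 2>&1 is not a live link to stdout.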
I have a program that writes data to a file. Normally, the file is on disk, but I am experimenting with writing to /dev/stdout. Currently, when I do this, the program will not exit until I press Ctrl-C. Is there a way for the program to signal that the output is done?
Edit:
Currently, a disk file is opened via fopen(FILE_NAME), so I am now trying to pass in /dev/stdout as FILE_NAME.
Edit 2:
command line is
MY_PROGRAM -i foo -o /dev/stdout > dump.png
Edit 3:
It looks like the problem here is that stdout is already open, and I am opening it a second time.
The EOF condition on a FIFO (which, if you're piping from your program into something else, is what your stdout is) is set when no file handles are still open for write.
In C, the standard-library call is fclose(stdout), whereas the syscall interface is close(1) -- if you're using fopen(), you'll want to pair it with fclose().
If you're also doing a separate outFile = fopen("/dev/stdout", "w") or similar, then you'll need to close that copy as well: fclose(outFile) as well as fclose(stdout).
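A minimal sketch of that cleanup, assuming the second handle was opened via fopen("/dev/stdout", "w") as described in the edits above:

#include <cstdio>

int main() {
    std::FILE* outFile = std::fopen("/dev/stdout", "w");  // second handle on the output
    if (!outFile) return 1;
    std::fputs("the program's data\n", outFile);
    std::fclose(outFile);  // close the duplicate handle...
    std::fclose(stdout);   // ...and the original, so the reader at the far end sees EOF
    return 0;
}

Once no write handles remain open, whatever is reading the pipe sees end-of-file and the program can exit normally.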
I'm doing make all -d --trace
How do I get Gnu Make to output timestamps for every line it outputs?
More generally, how do I add a timestamp to every STDOUT and STDERR statement?
There is a solution for Linux/Bash but I'm on Windows.
I created a one-line batch file, add_ts.bat: echo %time% %1
I tried the following but I only got one timestamp (without the lines that were output):
make all --trace -d 2>&1 | add_ts.bat
To a first approximation you need a batch file like:
add_ts.bat
@for /F "usebackq delims=" %%i in (`%1`) do @echo %time% %%i
which you would run like:
add_ts.bat "make all -d --trace" > buildlog.txt
This however isn't good enough if you want to capture and
timestamp STDERR as well as STDOUT from the command passed as
%1, because the backticks operator around %1 will only capture STDOUT.
To fix this you'll need to capture STDERR as well as STDOUT within the backticks, by using redirection in there, which in turn means
you need to run a subshell to interpret the redirection, and you need to
escape the redirection operators so they're not interpreted by the top-level
shell. Which all comes to:
@for /F "usebackq delims=" %%i in (`cmd /C %1 2^>^&1`) do @echo %time% %%i
Run just the same way.
Later
What I don't get is why the | itself wasn't enough to send STDOUT and STDERR to STDIN of add_ts.bat.
It is. I think you are labouring under the combination of two misconceptions.
One: You believe that a program's commandline arguments are the same as its standard
input, or that it gets its commandline arguments from its standard input. It doesn't.
A program's commandline arguments, if any, are passed to it as a fixed list
of strings in the program-startup protocol. Its standard input is an input stream made
available to it at the same time by the OS and connected by default to the console in which the program
starts. This default can be overridden in the shell by redirection operators. The contents of that input stream are not fixed in advance: the stream feeds
the program whatever is input to the console, or to its redirected proxy, for as long as the program is running, as and when the program reads it. The program
can parse or ignore its commandline arguments and, quite independently of that, it can read or ignore its standard input.
Your program add_ts.bat is a program that parses the first of its commandline arguments
- it uses %1 - and ignores any more. And it ignores its standard input entirely.
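A tiny illustrative program (not from the original question) makes the distinction concrete:

#include <iostream>
#include <string>

int main(int argc, char* argv[]) {
    // The argument list is fixed at startup by the program-startup protocol.
    for (int i = 0; i < argc; ++i)
        std::cout << "argument " << i << ": " << argv[i] << "\n";
    // Standard input is a stream, read (or ignored) while the program runs.
    std::string line;
    while (std::getline(std::cin, line))
        std::cout << "read from stdin: " << line << "\n";
}

Run it as, say, demo one two < input.txt (calling the compiled binary demo here) and you will see the two channels reported separately.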
Two: You believe that the effect of a pipeline, e.g.
a | b
is to start an a process and then, for each line that it writes to the standard output, start
a distinct b process which will automatically receive that one line written by a as
a single commandline argument (no matter how many words are in the line) and do its stuff
with that single commandline argument.
That's not what happens. The OS starts one a process and one b process, and connects the
standard output of the one a process to the standard input of the one b process. For the
pipeline to work at all, b has got to be a program that reads its standard input. Your
add_ts.bat is not such a program. It's only a program that parses its first commandline
argument: the | doesn't give it any, and the commandline:
make all --trace -d 2>&1 | add_ts.bat
doesn't give it any either. The commandline:
make all --trace -d 2>&1 | add_ts.bat "Hello World"
would give it one commandline argument and:
make all --trace -d 2>&1 | add_ts.bat Hello World
would give it two, not one, commandline arguments, the second being ignored. But in any case
it doesn't read its standard input so piping to it is futile.
The site ss64.com is perfectly good about CMD redirection and piping
but it assumes you know what a program has to do to be a pipeline-able command: To be an upstream command,
it has to write its standard output; to be a downstream command it has to read its standard input.
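For contrast, here is a minimal sketch (illustrative, not part of the original answer) of a downstream command that does read its standard input: a timestamping filter in C++ that would do the job of add_ts.bat portably:

#include <chrono>
#include <iostream>
#include <string>

int main() {
    // Read standard input line by line -- the one thing a downstream
    // command in a pipeline must do -- and prefix each line with the
    // seconds elapsed since the filter started.
    auto start = std::chrono::steady_clock::now();
    std::string line;
    while (std::getline(std::cin, line)) {
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        std::cout << "[" << elapsed.count() << "s] " << line << "\n";
    }
}

Compiled to, say, add_ts.exe, the original pipeline then works as intended: make all --trace -d 2>&1 | add_ts > buildlog.txt.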
Using a batch file wrapper is a clever solution if you don't mind the extra overhead. Otherwise I think you'll have to modify GNU make itself to have it print out this data.
If that's not palatable for some reason, you can get that information by using ElectricMake, a GNU-make-compatible implementation of make that includes lots of enhancements, including annotated build logs that have microsecond-resolution timestamps for every job in the build. ElectricMake is part of ElectricAccelerator Huddle.
Here's a bit of the annotation for a trivial "echo Hello World!" job:
<job id="J00007fb820002000" thread="7fb82f7fe700" start="3" end="4" type="rule" name="all" file="Makefile" line="1">
<command line="2">
<argv>echo Hello, world!</argv>
<output src="prog">Hello, world!
</output>
</command>
<commitTimes start="0.291693" wait="0.296587" commit="0.296628" write="0.296680"/>
<timing invoked="0.291403" completed="0.296544" node="ecdroid3a-59"/>
</job>
Here, the <timing> tag shows the start time (0.291403 seconds) and end time (0.296544 seconds) of the job relative to the start of the build.
These annotated build logs can be viewed and analysed graphically with ElectricInsight, a companion tool for ElectricMake.
ElectricAccelerator Huddle is the freemium version of ElectricAccelerator -- usage is entirely free up to a point, with modest pay-as-you-go fees beyond that.
Disclaimer: I'm the architect of ElectricAccelerator.
I am manipulating characters from a file and sending them to another file using command line (which I am pretty new to).
a.out -d 5 < garbage01.txt > garbage02.txt
The characters are going to garbage02.txt through cout.put(char). If the command line arguments don't validate, I just want to print a simple message to the screen saying so, but everything goes to garbage02.txt. Changing the layout of the command is not an option.
I hope this is a pretty straightforward issue that I am just having difficulty finding a solution to.
It is common to write error messages to stderr and normal output to stdout. To print an error message to stderr do
std::cerr << "Something went wrong\n";
(You can also do this with fprintf, but that is usually not needed.)
Output written to stderr will not be redirected by
> someFile
but only by
2> someFile
so the user can choose where they want to see the "normal" and the "error" output separately.
std::cerr also has the nice property that it is not buffered (unlike std::cout), so the user sees the error message immediately, before the program continues past the output statement.
If you do not want this unbuffered behaviour, use std::clog.
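Putting it together, a minimal sketch of the usual pattern (the -d check below is hypothetical, standing in for whatever validation the program actually does):

#include <cstring>
#include <iostream>

int main(int argc, char* argv[]) {
    // Hypothetical validation, standing in for the real argument checks.
    if (argc < 3 || std::strcmp(argv[1], "-d") != 0) {
        std::cerr << "usage: a.out -d <n>\n";  // not captured by "> garbage02.txt"
        return 1;
    }
    char c;
    while (std::cin.get(c))
        std::cout.put(c);  // still redirected to garbage02.txt
    return 0;
}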
You can use the /dev/tty file for that; it is a special file representing the terminal of the current process.
#include <fstream>
std::ofstream screen("/dev/tty");
screen << "Your message" << std::endl;
Either use std::cerr to print to screen
std::cerr << "Some message" << std::endl;
or change your terminal command
a.out -d 5 < garbage01.txt 2> garbage02.txt # Redirect stderr stream only
During the compilation process many errors are printed to the screen. To start resolving them I need to scroll up 3 or 4 pages. I tried doing head on them but they still came to the terminal.
g++ -std=c++0x testCoverDownloader.cpp -I /usr/include/QtCore/ -I /usr/include/QtGui 2>&1 | head
How do I see the top errors first and then scroll down the page? The command above cuts the output to show only the top 10 lines. What I want is all the errors, but from the start, so that I don't need to scroll upwards.
As well as using 2>&1 to get the STDERR results to STDOUT, you might want to try tee in order to get the results into a file for later viewing.
If you use vim you could try <your compile statement> 2>&1 | vim -. That should pipe STDERR and STDOUT to vim for viewing.
EDIT:
Added in @joachim pilberg's comment to provide a more accurate answer:
The important part is the redirection part: Error from the compiler is
put on stderr. To pipe it to head, a viewer like more or less or even
an editor like vim, you need to redirect stderr to stdout. This is
what is done with the &2>1 (or more correctly 2>&1). See the manual
page of your shell for more information about redirection.
You can also add the -Wfatal-errors compiler option to stop compilation after the first error.
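For example, piping into a pager such as less shows all the errors from the top, scrollable at your own pace; this is just the command from the question with the pager swapped in:

g++ -std=c++0x testCoverDownloader.cpp -I /usr/include/QtCore/ -I /usr/include/QtGui 2>&1 | less

Combined with -Wfatal-errors, you get a single error at a time instead of pages of cascading ones.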
I have finally worked out how to get stdin and stdout to pipe between the main app and a process created with CreateProcess (win32) or exec (linux). Now I am interested in harnessing the piping nature of an app. The app I am running can be piped into:
e.g.: cat file.txt | grep "a"
If I want to run "grep", sending the contents of "file.txt" to it (which I have in a buffer in my C++ app), how do I do this? I assume I don't just pump it down stdin, or am I wrong? Is that what I do?
Yes, that's exactly what you do: read from stdin and write to stdout.
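On the spawning side, a minimal POSIX sketch of "pumping it down stdin" using popen() (the buffer contents here are made up; on Windows you would instead wire pipe handles into CreateProcess yourself):

#include <cstdio>
#include <string>

int main() {
    std::string buffer = "one apple\ntwo pears\nanother apple\n";  // your in-memory file contents
    std::FILE* grep = popen("grep a", "w");  // "w": we write to the child's stdin
    if (!grep) return 1;
    std::fwrite(buffer.data(), 1, buffer.size(), grep);
    pclose(grep);  // closing our end sends EOF; grep's matches appear on our stdout
    return 0;
}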
One of the strokes of genius behind Linux is the simplicity of redirecting input and output almost effortlessly, as long as your apps obey some very simple, basic rules. For example: send data to stdout and errors or informational messages to stderr. That makes it easy for a user to keep track of status, and you can still use your app to send data to a pipe.
You can also redirect data (from stdout) and messages (from stderr) independently:
myapp | tail -n 5 > myapp.data # Save the last 5 lines, display msgs
myapp 2> myapp.err | sort # Sort the output, send msgs to a file
myapp 2> /dev/null # Throw msgs away, display output
myapp > myapp.out 2>&1 # Send all output (incl. msgs) to a file
Redirection may be a bit confusing at first, but you'll find the time spent learning will be well worth it!