I'm working with the output of a program to which I have the C++ source code. The program sends output to stderr, and I need to know where/how the output is calculated in the source code.
I know that one way to send something to stderr is
std::cerr << "foo";
I use grep to see if this form is used, but I can't find it.
I know the output is written to stderr because when I run the program I can capture it like this:
./program 2> file-with-info.txt
Are there any other ways for output to be sent to stderr? Can anybody suggest patterns I might grep for to find where this output is being sent?
It's not
cerr < "foo"
but
cerr << "foo"
You can also try to grep for clog, which writes to the standard error stream too:
clog <<
You can also search for stderr and perror, which are the old C ways to output to standard error.
std::cerr, std::clog and stderr all three denote the standard error stream. The first two are the (unbuffered and buffered) C++ interfaces, the third is the old C stdio interface. perror also writes to standard error.
Depending on the platform, there may be more ways to output to standard error, such as writing to the file descriptor 2 on Unix. (If you're lucky, you can grep for the symbolic constant STDERR_FILENO.)
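To make the patterns concrete, here is a small sketch (messages and counts are made up, not from your program) showing the common ways a C++ program can end up writing to standard error, with the string you could grep for noted on each line:

#include <cstdio>
#include <iostream>
#include <unistd.h>   // write, STDERR_FILENO

int main() {
    std::cerr << "via std::cerr\n";                     // grep: cerr
    std::clog << "via std::clog\n";                     // grep: clog
    fprintf(stderr, "via fprintf(stderr, ...)\n");      // grep: stderr
    fputs("via fputs(..., stderr)\n", stderr);          // grep: stderr
    perror("via perror");                               // grep: perror
    write(STDERR_FILENO, "via STDERR_FILENO\n", 18);    // grep: STDERR_FILENO
    write(2, "via a bare fd 2\n", 16);                  // grep: write(2
    return 0;
}

Also keep in mind that the program may wrap any of these in its own logging function, so grepping for words from the actual message text is often the fastest route.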
The most reliable thing to do would be to hook the OS function that writes output and, when it is writing to the standard error stream, break or print the call stack. If you settle for anything else, there are a dozen ways the output can be produced without you ever finding that exact string.
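On Linux, one way to do that without attaching a debugger is an LD_PRELOAD interposer on write(); the sketch below is a rough illustration (the library name and build line are just examples), and it only catches calls that go through the dynamic write symbol, so writes made from deep inside libc may slip past it:

// build: g++ -shared -fPIC hook_write.cpp -o hook_write.so -ldl
// run:   LD_PRELOAD=./hook_write.so ./program
#include <dlfcn.h>      // dlsym, RTLD_NEXT (g++ defines _GNU_SOURCE by default)
#include <execinfo.h>   // backtrace, backtrace_symbols_fd
#include <unistd.h>

extern "C" ssize_t write(int fd, const void *buf, size_t count) {
    // Look up the real write() the first time we are called.
    static ssize_t (*real_write)(int, const void *, size_t) =
        (ssize_t (*)(int, const void *, size_t))dlsym(RTLD_NEXT, "write");

    if (fd == 2) {                            // 2 is standard error
        void *frames[32];
        int n = backtrace(frames, 32);        // capture the current call stack
        backtrace_symbols_fd(frames, n, 1);   // dump it to stdout so it stands out
    }
    return real_write(fd, buf, count);
}

The simpler variant of the same idea is to run the program under gdb, set a conditional breakpoint on write that triggers only when the descriptor is 2, and print the backtrace when it fires.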
Related
I have a C++ program that constantly prints out readings from a gyro. I want to write these values to a file, but the problem is that the program can be exited at any time (either the system powers down or the user presses Ctrl-C, etc.). What is a good way to safely write these values to a file as they are being read, without having to cleanly close the file afterwards? I am thinking of somehow using the bash >> operator.
...

while (1)
{
    printf("acc: %+5.3f", ax);
    // write the printed line to file...
}

...
To protect against the program being terminated with Ctrl-C, flush the buffer after each write:
fstream << "acc: " << ax << std::flush;
Note that if you end the output with std::endl, this writes a newline and also flushes the buffer.
Protecting against the system losing power is harder. There are OS-specific functions like fsync() on Unix, which force any kernel file buffers to be written to disk. But to use this you need the underlying Unix file descriptor, and there's no standard way to get that from a C++ fstream. See Retrieving file descriptor from a std::fstream
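As a minimal sketch of the flush-per-line approach (the file name is made up, and the reading is a placeholder rather than real gyro code):

#include <fstream>

int main() {
    std::ofstream log("gyro.txt", std::ios::app);   // append so a restart doesn't truncate old data
    while (true) {
        double ax = 0.0;                     // placeholder: the real program would read the gyro here
        log << "acc: " << ax << std::endl;   // endl writes '\n' and flushes the stream buffer
    }
}

If the process is killed with Ctrl-C after any iteration, everything written so far has already left the stream's buffer and is in the kernel's hands.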
In this case you have two points where synchronization issues are possible:
printf buffers
bash buffers and file open/close operations.
You can improve the 'printf' part by using the 'fflush(stdout)' call, but you have no control over the bash process.
The best solution would be to use your own file output with fprintf, followed by fflush and sync (or similar). The latter guarantees that system buffers are flushed as well.
In the C++ world you can use output streams to a file with 'endl' or 'flush' at the end. They flush the output buffers, though you might still need 'sync'.
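A rough sketch of that approach, assuming a POSIX system (the file name is made up); fsync() on the underlying descriptor plays the role of 'sync' here:

#include <cstdio>
#include <unistd.h>   // fsync

int main() {
    FILE *f = fopen("gyro.txt", "a");
    if (!f)
        return 1;
    double ax = 0.0;                       // placeholder for the real reading
    fprintf(f, "acc: %+5.3f\n", ax);       // format the value into stdio's buffer
    fflush(f);                             // push the user-space buffer to the kernel
    fsync(fileno(f));                      // ask the kernel to commit its buffers to disk
    fclose(f);
    return 0;
}

Calling fsync after every line is expensive, so in practice you would batch it or reserve it for values you really cannot afford to lose.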
I am using llvm::MemoryBuffer::getFileOrSTDIN("-") and, according to the specification, it should Open the specified file as a MemoryBuffer, or open stdin if the Filename is "-".
Now, in the following context:
auto Source = llvm::MemoryBuffer::getFileOrSTDIN(File);
if (std::error_code err = Source.getError()) {
    llvm::errs() << err.message();
} else {
    someFunction(std::move(*Source), File, makeOutputWriter(Format, llvm::outs()),
                 IdentifiersOnly, DumpAST);
}
it blocks on the first line (when File == "-"), as expected, since stdin never closes.
When a special character appears on stdin, let's say <END_CHAR>, I know that I am finished reading for a given task. How could I close stdin in this situation and move on to someFunction?
Thanks,
You can always close the stdin file descriptor using close, i.e. close(0). If you check llvm::MemoryBuffer's source, you'll see that getFileOrSTDIN() basically boils down to a call to llvm::MemoryBuffer::getMemoryBufferForStream() with the first argument (the file descriptor) set to 0.
Also, see this SO answer.
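If closing fd 0 at the right moment is awkward, an alternative sketch (the helper below is hypothetical, not part of the original code) is to read standard input yourself until the sentinel and wrap what you have read in a MemoryBuffer with llvm::MemoryBuffer::getMemBufferCopy():

#include "llvm/Support/MemoryBuffer.h"
#include <iostream>
#include <memory>
#include <string>

// Hypothetical helper: consume stdin line by line until the sentinel appears,
// then hand everything read so far to the rest of the pipeline.
std::unique_ptr<llvm::MemoryBuffer> readUntilSentinel(const std::string &sentinel) {
    std::string data, line;
    while (std::getline(std::cin, line)) {
        if (line == sentinel)
            break;                       // stop at <END_CHAR> instead of waiting for EOF
        data += line;
        data += '\n';
    }
    return llvm::MemoryBuffer::getMemBufferCopy(data, "<stdin>");
}

The resulting buffer can then be passed to someFunction in place of *Source.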
The key combination to close the standard input from the command line is Ctrl-D (on *nix at least); have a look here.
I've been trying to send data to stdin of a running process. Here is what I do:
In a terminal I've started a C++ program that simply reads a string and prints it. Code excerpt:
while (true) {
    cin >> s;
    cout << "I've just read " << s << endl;
}
I get the PID of the running program
I go to /proc/PID/fd/
I execute echo text > 0
Result: text appears in the terminal where the program is running. Note: not "I've just read text", but simply "text".
What am I doing wrong and what should I do to get this thing to print 'I've just read text'?
When you start your C++ program you need to make sure its input comes from a pipe, not from a terminal. You can use cat | myapp to do that. Once it's running, you can use the PID of your application for echo text > /proc/PID/fd/0.
It could be a matter of stdout not being properly flushed -- see "Unix Buffering". Or you could be in a different shell as some commentators have suggested.
Generally, it's more reliable to handle basic interprocess communication via FIFOs (named pipes, created with mkfifo or mknod); a minimal reader sketch follows the links below. (Or alternatively redirect stdout and/or stderr to a file and read from that with your C++ program.)
Here are some good resources on how to use them in both the terminal and C++.
"FIFO – Named pipes: mkfifo, mknod"
"Using Pipes in Linux Processes"
"Programming with FIFO: mkfifo(), mknod()"
FD 0 is connected to the terminal the program is running from, so when you write to FD 0 you are writing to that terminal. FD 0 is not required to be opened in read-only mode; in practice it seems to be opened read/write, so you can write to it. (I suspect this is because FDs 0, 1 and 2 all refer to the same file description.)
So echo text > /proc/PID/fd/0 just echoes text to the terminal.
To pipe input to the program, you would need to write to the other end of the pipe (actually a PTY, which mostly behaves like a pair of pipes). Most likely, whatever terminal emulator you're using (xterm, konsole, gnome-terminal) will have the other end open, so you could try writing to that.
I wish to print some text directly to a network printer from my C++ code (I am coding with Xcode 4). I know that everything on Unix is a file, and I believe it should be possible to redirect the text to the printer's device file using an fstream in C++. The only problem is I don't know the device file in /dev associated with my network printer.
Is it possible to achieve printing with the fstream approach? Something like
std::fstream printFile;
printFile.open("//PATH/TO/PRINTER/DEV", std::ios::out);
printFile << "This must go to printer" << std::endl;
printFile.close();
And, if so, how do I obtain the file in /dev corresponding to a particular printer?
Thanks in advance,
Nikhil
Opening and writing directly to a file used to be possible back in the days of serial printers; however, this is not the approach available today.
The CUPS daemon provides print queuing, scheduling, and administrative interfaces on OS X and many other Unix systems. You can use the lp(1) or lpr(1) commands to print files. (The different commands come from different versions of print spoolers available in Unix systems over the years; one was derived from the BSD-sources and the other derived from the AT&T sources. For compatibility, CUPS provides both programs.)
You can probably achieve something like you were after with popen(3). In shell, it'd be something like:
echo hello | lp -
The - says to print from standard input.
I haven't tested this, but the popen(3) equivalent would probably look like this:
FILE *f = popen("lp -", "w");
if (!f)
    exit(1);
fprintf(f, "output to the printer");
pclose(f);   /* close the pipe and wait for lp to finish */
I recommend testing some inputs at the shell first to make sure that CUPS is prepared to handle the formatting of the content you intend to send. You might need to terminate lines with CRLF rather than just \n, otherwise the printer may "stair-step" the output. Or, if you're sending PDF or PS or PCL data, it'd be worthwhile testing that in the cheapest possible manner to make sure the print system works as you expect.
I have the following code:
ifstream initFile;
initFile.open("D:\\InitTLM.csv");
if(initFile.is_open())
{
// Process file
}
The file is not opening. The file does exist on the D: drive. Is there a way to find out exactly why this file cannot be found? Like an "errno"?
You should be able to use your OS's underlying error reporting mechanism to get the reason (because the standard library is built on the OS primitives). The code won't be portable, but it should get you to the bottom of your issue.
Since you appear to be using Windows, you would use GetLastError to get the raw code and FormatMessage to convert it to a textual description.
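A sketch of that Windows-specific approach (assuming the C runtime leaves the Win32 error code from the failed open in place):

#include <windows.h>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream initFile("D:\\InitTLM.csv");
    if (!initFile.is_open()) {
        DWORD err = GetLastError();   // raw Win32 error code
        char msg[256] = {};
        FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       nullptr, err, 0, msg, sizeof(msg), nullptr);
        std::cerr << "open failed (" << err << "): " << msg;
    }
}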
Answered here I believe: Get std::fstream failure error messages and/or exceptions
The STL is not great at reporting errors. Here's the best you can do within the standard:
ifstream initFile;
initFile.exceptions(ifstream::eofbit | ifstream::failbit | ifstream::badbit);
try
{
    initFile.open("D:\\InitTLM.csv");
    // Process File
}
catch (const ifstream::failure& e)
{
    cout << "Exception opening file: " << e.what() << endl;
}
In my experience, the message returned by what() is usually useless.
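In practice, checking errno right after the failed open is often more informative than what(), although the standard does not guarantee that fstream sets it; a minimal sketch:

#include <cerrno>
#include <cstring>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream initFile("D:\\InitTLM.csv");
    if (!initFile.is_open())
        std::cout << "Cannot open file: " << std::strerror(errno) << std::endl;
}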
Check the permissions on the root of the D: drive. You may find that your compiled executable, or the service under which your debugger is running, does not have sufficient access privileges to open that file.
Try changing the permissions on the D:\ root directory temporarily to "Everyone --> Full Control", and see if that fixes the issue.