Standard output hangs after adding fprintf() statement with custom standard error - C++

I have a C++ class Archive with a member function extractData(). This function calls realExtractData(), which is implemented in a separate C library.
I want to pass the extractData() function a pair of FILE * instances that are usually stdout and stderr, but I want to provide the option of custom file pointers, as well:
class Archive {
public:
    ...
    int extractData(string id, FILE *customOut, FILE *customErr);
    ...
};

int
Archive::extractData(string id, FILE *customOut, FILE *customErr)
{
    if (realExtractData(id.c_str(), customOut) != EXIT_SUCCESS) {
        fprintf(stderr, "something went wrong...\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
If I call the above as listed, there is no delay in outputting data to standard output. All extracted data get sent to standard output (stdout) almost immediately:
FILE *outFp = stdout;
FILE *errFp = stderr;

Archive *archive = new Archive(inFilename);
if (archive->extractData(id, outFp, errFp) != EXIT_SUCCESS) {
    fprintf(errFp, "[error] - could not extract %s\n", archive->getInFnCStr());
    return EXIT_FAILURE;
}
If I change extractData() so that its fprintf() call uses customErr:
int
Archive::extractData(string id, FILE *customOut, FILE *customErr)
{
    if (realExtractData(id.c_str(), customOut) != EXIT_SUCCESS) {
        fprintf(customErr, "something went wrong...\n"); /* <-- changed this line */
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
...then when I run the binary, it seems to hang while processing input and printing to standard output.
If I change fprintf() back to using stderr and not customErr, things once again work properly, i.e., data are flushed to standard output (my customOut) immediately.
Is this a buffering issue? Is there a way to fix this?

"stderr and not customErr"
Standard error is unbuffered, which means it prints almost immediately. Other output streams are buffered unless you're using low-level OS calls, so their output takes longer to appear unless you flush the buffer with something like an endl, ::flush, or fflush().
If you want to go for the low-level OS calls and you're working with Unix, check this out:
http://www.annrich.com/cs590/notes/cs590_lecture_2.pdf
I haven't read the whole thing, but on scanning it, it looks to cover similar material to Stevens' excellent Advanced Programming in the UNIX Environment, which definitely talks through this.
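Applied to the question above, that means either flushing the custom error stream after each write or turning its buffering off entirely. A minimal sketch using plain stdio (the parameter names mirror the question's signature; this is an illustration, not the asker's actual code):

#include <cstdio>

// Option 1: flush explicitly after each message.
void reportError(FILE *customErr)
{
    fprintf(customErr, "something went wrong...\n");
    fflush(customErr); // push the message out of the stdio buffer immediately
}

// Option 2: disable buffering on the stream once, up front.
void makeUnbuffered(FILE *customOut)
{
    setvbuf(customOut, NULL, _IONBF, 0); // _IONBF = unbuffered, like stderr
}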

Related

Get the last string printed in C/C++

I am trying to create a tester using googletest. The problem is that the function I am testing returns void and prints a result instead. I want to get the last string printed to the console so I can test the output. The string may include \n.
So I have the function itself:
void f_sequence(char sequenceStr[])
{
    // logic...
    if (condition1)
        printf("something1");
    else if (condition2)
        printf("something2");
    // ...
}
and then the tester:
TEST(TesterGroup, TesterName)
{
    f_sequence("input");
    EXPECT_EQ("something1", /* how do I get the output? */);
}
Is it possible?
The functions I test are in C, while the test function itself (the tester) is in C++. The output is printed using printf. I cannot change the functions themselves. I am using the latest version of CLion.
Redirect the standard output to a buffer.
Live on Coliru
#include <stdio.h>
#include <unistd.h>

#define BUFFER_SIZE 1024

int stdoutSave;
char outputBuffer[BUFFER_SIZE];

void replaceStdout()
{
    fflush(stdout); // flush anything pending first
    stdoutSave = dup(STDOUT_FILENO); // save the stdout state
    freopen("NUL", "a", stdout); // redirect stdout to the null device ("NUL" on Windows; use "/dev/null" on POSIX)
    setvbuf(stdout, outputBuffer, _IOFBF, BUFFER_SIZE); // install our buffer on stdout
}

void restoreStdout()
{
    freopen("NUL", "a", stdout); // redirect stdout to the null device again
    dup2(stdoutSave, STDOUT_FILENO); // restore the previous state of stdout
    setvbuf(stdout, NULL, _IONBF, BUFFER_SIZE); // disable buffering so prints reach the screen instantly
}

void printHelloWorld()
{
    printf("hello\n");
    printf("world");
}

int main()
{
    replaceStdout();
    printHelloWorld();
    restoreStdout();
    // Use outputBuffer to test: EXPECT_EQ("something1", outputBuffer);
    printf("Fetched output: (%s)", outputBuffer);
    return 0;
}
References: http://kaskavalci.com/redirecting-stdout-to-array-and-restoring-it-back-in-c/
I don't know if it's possible to get what was last printed, but if you control the environment before your test function is called, you can redirect where standard output goes, which lets you write it to a file, which you can then check.
See this old answer, which IMO was neglected. The example from it is modified here (note that assigning to stdout is not guaranteed to work by the C standard, though it does on some implementations):
FILE *fp_old = stdout; // preserve the original stdout
stdout = fopen("/path/to/file/you/want.txt","w"); // redirect stdout to anywhere you can open
// CALL YOUR FUNCTION UNDER TEST HERE
fclose(stdout); // Close the file with the output contents
stdout=fp_old; // restore stdout to normal
// Re-open the file from above, and read it to make sure it contains what you expect.
Two ways:
If you're on a POSIX-compatible system, you can redirect the output of the program to a file using >, then read from the file later to confirm that the output is correct.
The other way is something like this:
freopen("output.txt", "w", stdout);
f_sequence();
freopen("/dev/tty", "w", stdout);
for POSIX-compatible systems. On other platforms you'd need to change the "/dev/tty" to something else. I'm not aware of a completely portable way to do this.
And then read from output.txt. What the above snippet does is change what stdout is, so that it prints to a file instead of the normal stdout.
One solution: You can write a separate program that executes the function, and in the unit test, you can execute that program as a sub process and inspect the output. This can be done with std::system, but be very careful to not pass any non-constant input to it. You don't want shell injection vulnerability even in a unit test. System specific functions exist that avoid the use of shell in the subprocess.
Another solution, which is possible at least on POSIX: replace the standard out/err streams with file descriptors, and read the files afterwards.
Googletest specific: there is testing::internal::CaptureStdout, which appears to implement the idea of replacing standard streams. But as the namespace implies, this is not official API, so it may change in the future.
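For illustration, a test using that unofficial API might look like the sketch below; since CaptureStdout lives in testing::internal, it can break with a future googletest release:

#include <gtest/gtest.h>
#include <string>

extern "C" void f_sequence(char sequenceStr[]); // the C function under test

TEST(TesterGroup, TesterName)
{
    testing::internal::CaptureStdout(); // start capturing everything printed to stdout
    char input[] = "input";             // f_sequence() takes a non-const char[]
    f_sequence(input);                  // prints via printf while capture is active
    std::string output = testing::internal::GetCapturedStdout();
    EXPECT_EQ("something1", output);
}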
There is a solution (in C) for an API call (cmd_rsp), with source code here, that when called in your program creates a separate process and pipes to both stdin and stdout, from which it accepts a command and returns the response via an auto-sizing buffer. It is similar in concept to popen(...).
A simple use case:
char *buf = NULL;

/// test cmd_rsp
buf = calloc(BUF_SIZE, 1);
if (!buf) return 0;

// Note: the first argument can be any legal command that can be sent
// via the CMD prompt in Windows, including a custom executable.
if (!cmd_rsp("dir /s", &buf, BUF_SIZE))
{
    printf("%s", buf);
}
else
{
    printf("failed to send command.\n");
}
free(buf);

Why does the buffering of std::ifstream "break" std::getline when using LLVM?

I have a simple C++ application which is supposed to read lines from a POSIX named pipe:
#include <iostream>
#include <string>
#include <fstream>

int main() {
    std::ifstream pipe;
    pipe.open("in");
    std::string line;
    while (true) {
        std::getline(pipe, line);
        if (pipe.eof()) {
            break;
        }
        std::cout << line << std::endl;
    }
}
Steps:
I create a named pipe: mkfifo in.
I compile & run the C++ code using g++ -std=c++11 test.cpp && ./a.out.
I feed data to the in pipe:
sleep infinity > in & # keep pipe open, avoid EOF
echo hey > in
echo cats > in
echo foo > in
kill %1 # this closes the pipe, C++ app stops on EOF
When doing this under Linux, the application successfully displays output after each echo command as expected (g++ 8.2.1).
When trying this whole process on macOS, output is only displayed after closing the pipe (i.e. after kill %1).
I started suspecting some sort of buffering issue, so I tried disabling it like so:
std::ifstream pipe;
pipe.rdbuf()->pubsetbuf(0, 0);
pipe.open("in");
With this change, the application outputs nothing after the first echo, then prints the first message ("hey") after the second echo, and keeps going like that: always lagging one message behind, displaying the previous echo's message instead of the latest one.
The last message is only displayed after closing the pipe.
I found out that on macOS g++ is basically clang++, as
g++ --version yields: "Apple LLVM version 10.0.1 (clang-1001.0.46.3)".
After installing the real g++ using Homebrew, the example program works, just like it did on Linux.
I am building a simple IPC library on top of named pipes for various reasons, so having this work correctly is pretty much a requirement for me at this point.
What is causing this weird behaviour when using LLVM? (update: this is caused by libc++)
Is this a bug?
Is the way this works on g++ guaranteed by the C++ standard in some way?
How could I make this code snippet work properly using clang++?
Update:
This seems to be caused by the libc++ implementation of getline().
Related links:
Why does libc++ getline block when reading from pipe, but libstdc++ getline does not?
https://bugs.llvm.org/show_bug.cgi?id=23078
The questions still stand though.
I have worked around this issue by wrapping POSIX getline() in a small C API and calling that from C++.
The code is something like this:
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

typedef struct pipe_reader {
    FILE *stream;
    char *line_buf;
    size_t buf_size;
} pipe_reader;

pipe_reader new_reader(const char *pipe_path) {
    pipe_reader preader;
    preader.stream = fopen(pipe_path, "r");
    preader.line_buf = NULL;
    preader.buf_size = 0;
    return preader;
}

bool check_reader(const pipe_reader *preader) {
    if (!preader || preader->stream == NULL) {
        return false;
    }
    return true;
}

const char *recv_msg(pipe_reader *preader) {
    if (!check_reader(preader)) {
        return NULL;
    }
    ssize_t read = getline(&preader->line_buf, &preader->buf_size, preader->stream);
    if (read > 0) {
        preader->line_buf[read - 1] = '\0'; // strip the trailing newline
        return preader->line_buf;
    }
    return NULL;
}

void close_reader(pipe_reader *preader) {
    if (!check_reader(preader)) {
        return;
    }
    fclose(preader->stream);
    preader->stream = NULL;
    if (preader->line_buf) {
        free(preader->line_buf);
        preader->line_buf = NULL;
    }
}
This works well against libc++ or libstdc++.
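For illustration, calling the wrapper from C++ might look like this sketch ("in" stands in for whatever FIFO path you use):

#include <iostream>

int main()
{
    pipe_reader reader = new_reader("in"); // open the FIFO from the question
    if (!check_reader(&reader)) {
        return 1;
    }
    const char *msg;
    while ((msg = recv_msg(&reader)) != NULL) { // blocks until a full line arrives
        std::cout << msg << std::endl;
    }
    close_reader(&reader); // also frees the buffer getline() allocated
    return 0;
}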
As discussed separately, a boost::asio solution would be best, but your question is specifically about how getline is blocking, so I will speak to that.
The problem here is that std::ifstream is not really made for a FIFO file type. In the case of getline(), it is trying to do a buffered read, so (in the initial case) it decides the buffer does not have enough data to reach the delimiter ('\n'), calls underflow() on the underlying streambuf, and that does a simple read for a buffer-length amount of data. This works great for regular files, because a file's length at a given point in time is knowable: the read can return EOF if there is not enough data to fill the buffer, or simply return with the filled buffer. With a FIFO, however, running out of data does not necessarily mean EOF, so the read doesn't return until the process that writes to it closes (this is your infinite sleep command that holds it open).
A more typical way to do this is for the writer to open and close the file as it writes. This is obviously a waste of effort when something more functional like poll()/epoll() is available, but I'm answering the question you're asking.
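For reference, a minimal poll() sketch of that more functional route (POSIX only; it copies raw bytes rather than splitting lines, and assumes the FIFO is named "in" as in the question):

#include <cstdio>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main()
{
    int fd = open("in", O_RDONLY); // blocks until a writer opens the FIFO
    if (fd < 0)
        return 1;

    struct pollfd pfd = {fd, POLLIN, 0};
    char buf[512];
    for (;;) {
        if (poll(&pfd, 1, -1) < 0)  // wait for data or hangup
            break;
        ssize_t n = read(fd, buf, sizeof buf);
        if (n <= 0)                 // 0 means all writers closed the FIFO (EOF)
            break;
        fwrite(buf, 1, static_cast<size_t>(n), stdout);
        fflush(stdout);             // show each message immediately
    }
    close(fd);
    return 0;
}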

Fail to write to file when I force quit

I have a program that looks like this. I need to continually write something into a text file, but I cannot predetermine when the program is going to end, so I force quit it.
FILE *f = fopen("text.txt", "w");
while (1) {
    fprintf(f, "something");
    sleep(1000);
}
The problem is that the text file would be empty. Can anyone give me some suggestions?
I am using XCode to do the job.
Use fflush(f) after fprintf(f); otherwise the data you printed is still in stdio's stream buffers and hasn't yet reached the filesystem.
Note that flushing very often may dramatically reduce your performance.
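A minimal sketch of that suggestion applied to the loop from the question:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("text.txt", "w");
    if (f == NULL)
        return 1;
    while (1) {
        fprintf(f, "something");
        fflush(f); /* push buffered data to the OS so it survives a force quit */
        sleep(1000);
    }
}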
Another way to write to files is to use append mode and close the file as soon as you no longer need it:
while (1) {
    FILE *f = fopen("text.txt", "a");
    if (f != NULL) {
        fprintf(f, "something");
        fclose(f);
    }
    sleep(1000);
}
In your case, if you force quit during a sleep phase, the file will not be open at that moment, so nothing is lost.

Writing popen() output to a file

I've been trying to call another program from C++ and save the stdout of that program to a text file. popen() seems to be the appropriate function, but saving it to a text file isn't working.
ofstream delaunayfile;
delaunayfile.open ("triangulation/delaunayedpoints.txt");
FILE *fp;
fp = popen("qdelaunay < triangulation/rawpoints.txt", "r");
delaunayfile << fp;
delaunayfile.close();
Any help? Thanks in advance!
You cannot write a FILE* directly into a stream. It will write a memory address instead of the actual file contents, therefore it will not give you the desired result.
The ideal solution would be to read from an ifstream and write to your ofstream, but there's no way to construct an ifstream from a FILE*.
However, we can extend the streambuf class, make it work over a FILE *, and then pass it to an istream instead. A quick search revealed someone has already implemented exactly that and properly named it popen_streambuf. See this specific answer.
Your code then would look like this:
std::ofstream output("triangulation/delaunayedpoints.txt");
popen_streambuf popen_buf;
if (popen_buf.open("qdelaunay < triangulation/rawpoints.txt", "r") == NULL) {
    std::cerr << "Failed to popen." << std::endl;
    return;
}

char buffer[256];
std::istream input(&popen_buf);
while (input) {
    input.read(buffer, sizeof buffer);
    output.write(buffer, input.gcount()); // write only the bytes actually read
}
output.close();
As pointed out by Simon Richter in the comments, there's an operator<< that accepts a streambuf and writes data to the ostream until EOF is reached. This way, the code is simplified to:
std::ofstream output("triangulation/delaunayedpoints.txt");
popen_streambuf popen_buf;
if (popen_buf.open("qdelaunay < triangulation/rawpoints.txt", "r") == NULL) {
    std::cerr << "Failed to popen." << std::endl;
    return;
}
output << &popen_buf;
output.close();
There are two ways to do this. The simple way:
int rc = system("qdelaunay < triangulation/rawpoints.txt >triangulation/delaunayedpoints.txt");
and the slightly more elaborate way, using fork(), dup2() and execve(), the latter working without a shell interpreter installed on the system. Given that this looks like you are doing computation work, I suspect this is not an embedded system, so you can assume a working shell.
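For reference, a sketch of that more elaborate route (POSIX calls, error handling trimmed; it assumes qdelaunay is found on the PATH):

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int run_qdelaunay()
{
    int in  = open("triangulation/rawpoints.txt", O_RDONLY);
    int out = open("triangulation/delaunayedpoints.txt",
                   O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                 // child: rewire stdin/stdout, then exec
        dup2(in, STDIN_FILENO);
        dup2(out, STDOUT_FILENO);
        close(in);
        close(out);
        execlp("qdelaunay", "qdelaunay", (char *)NULL);
        _exit(127);                 // only reached if exec failed
    }
    close(in);                      // parent: wait for the child to finish
    close(out);
    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}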
popen opens a pipe, but I am not aware that you can just stream it into delaunayfile that way.
Of course it would be very nice if you could, and have it read from the pipe until it was complete.
The normal way to check for data on the pipe is to use select(). I found a useful link, http://codenewbie.com/forums/threads/2908-Using-std-fstream-in-a-pipe, that integrates pipes with fstream, though, and it may help you achieve what you want.
In this instance, though, as all you want to do is write the output to a file, why not redirect the output of the process straight to that file rather than to a pipe? The purpose of a pipe is inter-process communication, but your process does not appear to use the data it receives from the other process for any practical purpose.
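That said, if you do stay with popen(), a plain stdio copy loop also sidesteps the streaming problem; a minimal sketch (POSIX, since popen() is a POSIX call):

#include <stdio.h>
#include <fstream>

int main()
{
    std::ofstream delaunayfile("triangulation/delaunayedpoints.txt");
    FILE *fp = popen("qdelaunay < triangulation/rawpoints.txt", "r");
    if (fp == NULL)
        return 1;

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        delaunayfile.write(buf, static_cast<std::streamsize>(n)); // raw byte copy

    pclose(fp);
    return 0;
}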

What's the correct way to do a 'catch all' error check on an fstream output operation?

What's the correct way to check for a general error when sending data to an fstream?
UPDATE: My main concern is what I've heard about a delay between output and the data being physically written to the hard disk. My assumption was that save_file_obj << save_str would only send the data to some kind of buffer, and that the following check, if (save_file_obj.bad()), would be of no use in determining whether there was an OS or hardware problem. I just wanted to know the definitive "catch all" way to send a string to a file and make certain it was written to the disk before carrying out any following actions, such as closing the program.
I have the following code...
int Saver::output()
{
    save_file_handle.open(file_name.c_str());
    if (save_file_handle.is_open())
    {
        save_file_handle << save_str.c_str();
        if (save_file_handle.bad())
        {
            x_message("Error - failed to save file");
            return 0;
        }
        save_file_handle.close();
        if (save_file_handle.bad())
        {
            x_message("Error - failed to save file");
            return 0;
        }
        return 1;
    }
    else
    {
        x_message("Error - couldn't open save file");
        return 0;
    }
}
A few points. Firstly:
save_file_handle
is a poor name for an instance of a C++ fstream. fstreams are not file handles and all this can do is confuse the reader.
Secondly, as Michael points out, there is no need to convert a C++ string to a C string. The only time you should really find yourself doing this is when interfacing with C-style APIs, and when using a few poorly designed C++ APIs, such as (unfortunately) fstream::open().
Thirdly, the canonical way to test whether a stream operation worked is to test the operation itself. Streams are testable in a boolean context (historically via a conversion to void *; since C++11, an explicit conversion to bool), which means you can write stuff like this:
if (save_file_handle << save_str) {
    // operation worked
}
else {
    // failed for some reason
}
Your code should always test stream operations, whether for input or output.
Everything except for the check after the close seems reasonable. That said, I would restructure things slightly differently and throw an exception or use a bool, but that is simply a matter of preference:
bool Saver::output()
{
    std::fstream out(_filename.c_str(), std::ios::out);
    if (!out.is_open()) {
        LOG4CXX_ERROR(_logger, "Could not open \"" << _filename << "\"");
        return false;
    }
    out << _savestr << std::endl;
    if (out.bad()) {
        LOG4CXX_ERROR(_logger, "Could not save to \"" << _filename << "\"");
        out.close();
        return false;
    }
    out.close();
    return true;
}
I should also point out that you don't need to use save_str.c_str(), since C++ iostreams (including fstream, ofstream, etc.) are all capable of outputting std::string objects. Also, if you construct the file stream object in the scope of the function, it will automatically be closed when it goes out of scope.
Are you absolutely sure that save_file_handle doesn't already have a file associated (open) with it? If it does then calling its open() method will fail and raise its ios::failbit error flag -- and any exceptions if set to do so.
The close() method can't fail unless the file isn't open, in which case the method will raise the ios::failbit error flag. At any rate, the destructor should close the file, and does so automatically if save_file_handle is a stack variable, as in your code.
int Saver::output()
{
    save_file_handle.open(file_name.c_str());
    if (save_file_handle.fail())
    {
        x_message("Error - file failed to previously close");
        return 0;
    }
    save_file_handle << save_str.c_str();
    if (save_file_handle.bad())
    {
        x_message("Error - failed to save file");
        return 0;
    }
    return 1;
}
Alternatively, you can separate the error checking from the file-saving logic if you use ios::exceptions().
int Saver::output()
{
    ios_base::iostate old = save_file_handle.exceptions();
    save_file_handle.exceptions(ios::failbit | ios::badbit);
    try
    {
        save_file_handle.open(file_name.c_str());
        save_file_handle << save_str.c_str();
    }
    catch (const ofstream::failure &e)
    {
        x_message("Error - couldn't save file");
        save_file_handle.exceptions(old);
        return 0;
    }
    save_file_handle.exceptions(old);
    return 1;
}
You might prefer to move the call to save_file_handle.exceptions(ios::failbit | ios::badbit) to the constructor(s). Then you can get rid of the statements that reset the exceptions flag.
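A sketch of that variant, with the exception mask set once in the constructor (other members omitted):

#include <fstream>

class Saver {
public:
    Saver()
    {
        // Fail loudly from any later stream operation; no per-call mask juggling.
        save_file_handle.exceptions(std::ios::failbit | std::ios::badbit);
    }
    // ...
private:
    std::ofstream save_file_handle;
};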