How to read from a library's standard error? - c++

I have a Qt/C++ application which uses a C++ library.
This library has a logging mechanism that writes string messages to standard error.
Now, I would like to be able to redirect those messages to a panel in my Qt tool.
I would like to avoid modifying the library because it is used by many other clients.
Any idea how to capture these messages at runtime?
If I did instead have the possibility of changing the library, what would be a good practice for carrying those messages up to the application?

That's very poor library design. However...
How does it write to standard error? If it is outputting to std::cerr,
then you can change the streambuf used by std::cerr, something like:
std::filebuf logStream;
if ( !logStream.open( "logfile.txt", std::ios_base::out ) ) {
    //  Error handling...
}
std::streambuf* originalCErrStream = std::cerr.rdbuf();
std::cerr.rdbuf( &logStream );
//  Processing here, with calls to the library
std::cerr.rdbuf( originalCErrStream );  // Using RAII would be better.
Just don't forget to restore the original streambuf; leaving std::cerr
pointing to a filebuf which has been destructed is not a good idea.
If they're using FILE*, there's a freopen function in C (and by
inclusion in C++) that you can use.
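For instance, a minimal sketch of the freopen approach (my own illustration, assuming the library writes through the C stderr stream):

#include <cstdio>

int main()
{
    // Reattach stderr to a log file; freopen returns NULL on failure.
    if (std::freopen("logfile.txt", "w", stderr) == NULL)
        return 1; // error handling...
    std::fprintf(stderr, "this now ends up in logfile.txt\n");
    return 0;
}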
If they're using system level output (write under Unix, WriteFile
under Windows), then you're going to have to use some system level code
to change the output. (open on the new file, close on fd
STDERR_FILENO, and dup2 to set STDERR_FILENO to use the newly
opened file under Unix. I'm not sure it's possible under
Windows—maybe something with ReOpenFile or some combination of
CloseHandle followed by CreateFile.)
EDIT:
I just noticed that you actually want to output to a Qt window. This
means that you probably need a string, rather than a file. If the
library is using std::cerr, you can use a std::stringbuf, instead of
a std::filebuf; you may, in fact, want to create your own streambuf,
to pick up calls to sync (which will normally be called after each
<< on std::cerr). If the library uses one of the other techniques,
the only thing I can think of is to periodically read the file, to see
if anything has been added. (I would use read() in Unix, ReadFile()
in Windows for this, in order to be sure of being able to distinguish a
read of zero bytes, due to nothing having been written since the last
read, and an error condition. FILE* and iostream functions treat a
read of zero bytes as end of file, and will not read further.)
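For illustration, a minimal sketch of such a streambuf (my own, not from the original answer): it derives from std::stringbuf and overrides sync(), handing each flushed chunk to a callback that could append to a Qt widget.

#include <functional>
#include <iostream>
#include <sstream>
#include <string>

// Collects everything written to the stream and hands each flushed
// chunk to a callback (e.g. one that appends to a Qt text widget).
class callback_streambuf : public std::stringbuf
{
public:
    explicit callback_streambuf(std::function<void(const std::string&)> cb)
        : callback_(cb) {}
protected:
    int sync() override
    {
        callback_(str()); // deliver the buffered text
        str("");          // clear the buffer for the next chunk
        return 0;
    }
private:
    std::function<void(const std::string&)> callback_;
};

int main()
{
    callback_streambuf buf([](const std::string& s) { std::cout << "captured: " << s; });
    std::streambuf* old = std::cerr.rdbuf(&buf);
    std::cerr << "hello" << std::endl; // endl flushes, which triggers sync()
    std::cerr.rdbuf(old);              // restore before buf is destroyed
}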

A write to stderr is actually a syscall:
write(2, "blahblah ...", 12);
you can redirect file descriptor number 2 to anything (a file, a pipe, a socket):
int redirect_target = open(...); // open a file where you want to redirect to
                                 // or use a pipe, socket, whatever you like
dup2(redirect_target, 2);        // make fd 2 refer to redirect_target
                                 // (dup2 closes the old fd 2 automatically)
close(redirect_target);
in your situation, you will need a pipe:
int pipefd[2];
pipe(pipefd);       // pipefd[0] is the read end, pipefd[1] the write end
dup2(pipefd[1], 2); // replace stderr with the pipe's write end
close(pipefd[1]);
then, everything written to stderr can be obtained by reading pipefd[0]:
read(pipefd[0], buffer, ...);
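Putting it together, a minimal self-contained sketch (my own assembly of the above; note that reading and writing on the same thread only works while the amount written stays below the pipe's buffer capacity, otherwise the writer blocks):

#include <cstdio>
#include <unistd.h>

int main()
{
    int pipefd[2];
    if (pipe(pipefd) != 0)
        return 1;
    dup2(pipefd[1], 2);   // fd 2 (stderr) now writes into the pipe
    close(pipefd[1]);

    fprintf(stderr, "hello from stderr\n"); // stderr is unbuffered by default

    char buffer[256];
    ssize_t n = read(pipefd[0], buffer, sizeof(buffer));
    if (n > 0)
        printf("captured: %.*s", (int)n, buffer);
    close(pipefd[0]);
    return 0;
}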

If they're using calls to std::cerr, you can redirect this to a std::ostringstream.
#include <iostream>
#include <sstream>

class cerr_redirector
{
public:
    cerr_redirector(std::ostream& os)
        : backup_(std::cerr.rdbuf())
        , sbuf_(os.rdbuf())
    {
        std::cerr.rdbuf(sbuf_);
    }

    ~cerr_redirector()
    {
        std::cerr.rdbuf(backup_);
    }

private:
    cerr_redirector();
    cerr_redirector(const cerr_redirector& copy);
    cerr_redirector& operator =(const cerr_redirector& assign);

    std::streambuf* backup_;
    std::streambuf* sbuf_;
};
You can capture the output using:
std::ostringstream os;
cerr_redirector red(os);
std::cerr << "This is written to the stream" << std::endl;
std::cout will be unaffected:
std::cout << "This is written to stdout" << std::endl;
So you can then test your capture is working:
std::cout << "and now: " << os.str() << std::endl;
Or just add the contents of os.str() to your Qt Window.
Demonstration at ideone.
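For instance, reusing the cerr_redirector above, handing the captured text to a display function might look like this (a sketch; appendToPanel is a hypothetical stand-in for something like QPlainTextEdit::appendPlainText):

#include <iostream>
#include <sstream>
#include <string>

// Hypothetical stand-in for the Qt side, e.g.
// logView->appendPlainText(QString::fromStdString(text));
void appendToPanel(const std::string& text)
{
    std::cout << "panel: " << text;
}

int main()
{
    std::ostringstream os;
    {
        cerr_redirector red(os);                     // class from the answer above
        std::cerr << "library message" << std::endl; // stands in for a library call
    }                                                // destructor restores std::cerr here
    appendToPanel(os.str());
}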

Here I found a complete implementation of what I needed...
Thanks everybody for the help! :)
Will dynamically loading a DLL reconcile its stderr with the main application? If so, then how...?


SHARING_VIOLATION with multi-threaded file IO on Windows

I have some code that resembles this minimal reproduction example (the real version generates some code and compiles it):
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <string>
#include <thread>
#include <vector>

void write(unsigned int thread)
{
    std::ofstream stream("test_" + std::to_string(thread) + ".txt");
    stream << "test" << std::endl;
    stream << "thread" << std::endl;
    stream << "bad" << std::endl;
}

void test(unsigned int thread)
{
    write(thread);
#ifdef _WIN32
    const std::string command = "rename test_" + std::to_string(thread) + ".txt test_renamed_" + std::to_string(thread) + ".txt";
#else
    const std::string command = "mv test_" + std::to_string(thread) + ".txt test_renamed_" + std::to_string(thread) + ".txt";
#endif
    system(command.c_str());
}

int main()
{
    std::vector<std::thread> threads;
    for(unsigned int i = 0; i < 5; i++) {
        // Remove the renamed file from a previous run
        std::remove(("test_renamed_" + std::to_string(i) + ".txt").c_str());
        threads.emplace_back(test, i);
    }
    // Join all threads
    for(auto &t : threads) {
        t.join();
    }
    return EXIT_SUCCESS;
}
My understanding is that std::ofstream should behave in a nice RAII manner and close and flush at the end of the write function. On Linux, it appears to do just this. However, on Windows 10 I get sporadic "The process cannot access the file because it is being used by another process" errors. I've dug into it with procmon, and it looks like the file isn't getting closed by the parent process (22224), resulting in the SHARING_VIOLATION which presumably causes the error.
Although the procmon trace makes it look like the problem is within my process, I have tried turning off the virus scanner. I have also tried using C-style fopen, fprintf, fclose, and also ensuring that the process I'm spawning with system isn't inheriting file handles somehow by clearing HANDLE_FLAG_INHERIT on the underlying file handle... which leaves me somewhat out of ideas! Any thoughts, SO?
We can rewrite the file writing using the Win32 API:
#include <windows.h>
#include <string>

void writeRaw(unsigned int thread)
{
    const auto str = "test_" + std::to_string(thread) + ".txt";
    auto hFile = CreateFileA(str.c_str(), GENERIC_WRITE,
        FILE_SHARE_WRITE, nullptr, CREATE_ALWAYS, 0, nullptr);
    DWORD ret{};
    WriteFile(hFile, str.data(), static_cast<DWORD>(str.size()), &ret, nullptr);
    CloseHandle(hFile);
}
Running the test still gives a file sharing violation, due to the way Windows works. When the last handle is closed, the filesystem driver performs an IRP_MJ_CLEANUP request to finish processing anything related to the file.
Antivirus software, for instance, would attempt to scan the file (and incidentally holds a lock on it =) ). Additionally, the MSDN documentation for IRP_MJ_CLEANUP states that:
It is important to note that when all handles to a file object have been closed, this does not necessarily mean that the file object is no longer being used. System components, such as the Cache Manager and the Memory Manager, might hold outstanding references to the file object. These components can still read to or write from a file, even after an IRP_MJ_CLEANUP request is received.
Conclusion: It is expected to receive a file sharing violation in Windows if a process tries to do something with the file shortly after closing the handle, as the underlying system components are still processing the file close request.
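If you have to work around this, one option (my own sketch, not from the answer) is to retry the rename a few times with a short delay:

#include <chrono>
#include <cstdio>
#include <string>
#include <thread>

// Retry a rename for a short while to ride out transient sharing
// violations; the retry count and delay here are arbitrary choices.
bool renameWithRetry(const std::string& from, const std::string& to)
{
    for (int attempt = 0; attempt < 10; ++attempt) {
        if (std::rename(from.c_str(), to.c_str()) == 0)
            return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
    return false;
}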
At least on VS 2017, I can confirm the file is closed by your snippet (in the destructor of ofstream, the code calls fclose on the handle).
I think, however, that the issue is not in the C++ code, but in the behavior of the OS.
With Windows, the act of removing a file which the OS thinks is open will be blocked. In Unix, the behavior of unlinking a file from a directory is to allow existing handles to continue operating on the orphaned file. So in Unix the operation could never be a sharing violation, as the act of unlinking a file is a different operation. Linux semantics can be opted into on recent Windows 10 builds.
procmon on Windows runs at a given altitude. That means that any operation performed by virus scanners may be hidden from procmon, and it would give a false answer.
A process can also duplicate a handle on the open file, and that would also cause this issue, but not show the handle being closed.
The most probable cause of the problem is that when you delete a file in Windows, it isn't immediately deleted (it's just flagged for deletion). It can/will take some milliseconds (up to seconds if you're very unlucky) for it to be actually deleted.
Source: Niall Douglas in "Better mutual exclusion on the filesystem using Boost.AFIO", at about 10m10s: https://www.youtube.com/watch?v=9l28ax3Zq0w

Moving std::clog in source to output file [duplicate]

This question already has answers here:
What is the point of clog?
(6 answers)
Closed 7 years ago.
I have a basic debug message in my code that prints a message as to what function is called.
#ifdef _DEBUG
std::clog << "message etc" << std::endl;
#endif
How do I redirect the output to send the message to a textfile?
You can set the buffer associated with clog to one that uses a file to save its data.
Here's a simple program that demonstrates the concept.
#include <iostream>
#include <fstream>

int main()
{
    std::ofstream out("test.txt");

    // Get the rdbuf of clog.
    // We need it to reset the value before exiting.
    auto old_rdbuf = std::clog.rdbuf();

    // Set the rdbuf of clog.
    std::clog.rdbuf(out.rdbuf());

    // Write to clog.
    // The output should go to test.txt.
    std::clog << "Test, Test, Test.\n";

    // Reset the rdbuf of clog.
    std::clog.rdbuf(old_rdbuf);

    return 0;
}
How do I redirect the output to send the message to a textfile?
As far as redirect means from outside the program code, it depends a bit on your shell syntax, actually. According to this reference, std::clog is usually bound to std::cerr:
The global objects std::clog and std::wclog control output to a stream buffer of implementation-defined type (derived from std::streambuf), associated with the standard C output stream stderr, but, unlike std::cerr/std::wcerr, these streams are not automatically flushed and not automatically tie()'d with cout.
So e.g. in bash you would do something like
$ program 2> Logs.txt
Regarding redirecting programmatically, you can do it as mentioned in R Sahu's answer, or explained in the currently marked duplicate.

Issue with Pointer to an fstream in C++

I have an issue with my code below. I am trying to use a pointer to a file stream to write some text into the file, but the code below does not write into the file. I have tried without a pointer to fstream, which worked fine, but with a pointer I can't see any changes in my text file, even though the code compiles successfully.
fstream *io = new fstream("FILE/myFile.txt", ios_base::in | ios_base::out);
if(!io->is_open()){
    cout << "Could not open file or file does not exist!" << endl;
    exit(1);
}
*io << "Hello World";
Streams buffer the output. If the stream isn't flushed, the output is never written. Since the string written is tiny it will be buffered. The destructor of the stream would flush the stream as would filling the buffer. As written, the pointer is leaked and the stream is never destroyed and, thus, not flushed.
The fix to your problem is, in order of preference:
1. Do not use pointers.
2. Use a smart pointer, e.g., std::unique_ptr<std::ofstream>, to hold the stream.
3. delete the stream object at the end of the program (this is easy to forget, and using automatic destruction is much preferable).
4. At least close() the stream using io->close(). Not deleting the stream would still be a resource leak.
5. Flush the stream using *io << std::flush. This should still write the buffer, but it would leak memory like the previous approach and additionally also leak a file descriptor.
Personally, I would go with approach 1. If I absolutely had to use pointers, which has never happened to me with streams, I would use approach 2. Everything else would technically work but is likely to result in resource leaks.
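A minimal sketch of approaches 1 and 2 (my own illustration):

#include <fstream>
#include <memory>

int main()
{
    // Approach 1: a plain automatic object; the destructor flushes and closes.
    {
        std::fstream io("FILE/myFile.txt", std::ios_base::in | std::ios_base::out);
        io << "Hello World"; // as in the question, the file must already exist
    }   // io goes out of scope here: buffer flushed, file closed

    // Approach 2: if a pointer is really needed, let a smart pointer own it.
    auto io2 = std::make_unique<std::fstream>(
        "FILE/myFile.txt", std::ios_base::in | std::ios_base::out);
    *io2 << "Hello World";
}   // io2's destructor deletes the stream, which flushes and closes it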
You need to close the file:
io->close();
If you don't close the file, it will not flush its data to the disk.
When you use an fstream object on the stack, the file is closed when the object goes out of scope (in the destructor).
This is probably only a buffering problem: you need to flush the stream in order to be sure everything is written to the file. The following snippet works fine:
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;

int main()
{
    ofstream* f = new ofstream("out.dat");
    if(!f->is_open())
    {
        cerr << "Impossible to open the file" << endl;
        exit(-1);
    }
    *f << "Hello, world!" << flush;
    f->close();
    delete f;
    return 0;
}
Do not forget that every new should be followed by a delete!

Redirecting console output to MFC screen output within the same solution

I have been messing around all day trying to get my MFC application to show the log output of a console application on the screen. Starting with what Visual Studio 2013's wizard gave me, I was able to modify their code to take a string as input and make a running log of application messages (below):
void COutputWnd::FillBuildWindow(std::string build_text)
{
    std::wstring wsTmp(build_text.begin(), build_text.end());
    std::wstring z = wsTmp;
    LPTSTR x = new TCHAR[z.size() + 1];
    _tcscpy(x, z.c_str());
    m_wndOutputBuild.AddString(x);
    delete[] x; // was free(x): memory from new[] must be released with delete[]
}
However, I cannot call this from outside the MFC function for a number of reasons. One is that the object is not visible globally, and two, I am using windows.h in the console parts of my application and it does not play nicely with MFC.
Much of my application is already written and I am trying to put a GUI around it and use the ribbon features. Is there any way to take the cout statements and pipe them to a message log display in my MFC app? I have googled a ton of things today and not found anything that is either straightforward or clearly for applications which have both MFC and console code as part of their solution. I am not invoking a separate executable or dll. This is all compiled as a single standalone exe.
I have no idea about MFC but I would derive a class from std::streambuf to redirect its output to an MFC class and install the resulting stream buffer into std::cout. The stream buffer deals with the output written to a stream and you can get hold of the written characters in its overflow() and sync() methods:
class windowbuf
    : public std::streambuf { // public: callers need the streambuf* conversion
    SomeHandle handle;
    char       buffer[256];
public:
    typedef std::char_traits<char> traits;
    windowbuf(SomeHandle handle): handle(handle) { this->setp(buffer, buffer + 255); }
    int overflow(int c) {
        if (!traits::eq_int_type(c, traits::eof())) {
            *this->pptr() = traits::to_char_type(c);
            this->pbump(1);
        }
        return this->sync() == -1? traits::eof(): traits::not_eof(c);
    }
    int sync() {
        writeToHandle(this->handle, this->pbase(), this->pptr() - this->pbase());
        this->setp(buffer, buffer + 255);
        return 0;
    }
};
The above is a simple stream buffer which transfers characters somewhere identified by handle when its buffer is full or when the stream is flushed. I think it should be doable to come up with some handle and a write function which transfers the characters to an MFC window although I don't know anything about that part.
To have std::cout send its characters to the above stream buffer, you just need to install this stream buffer into std::cout, e.g., using
int main() {
    SomeHandle handle = get_a_handle_from_somewhere();
    std::streambuf* cout_rdbuf = std::cout.rdbuf(new windowbuf(handle));
    // run your program here
    std::cout.rdbuf(cout_rdbuf); // this should really be restored using RAII approaches
}
I'd think something like the above approach should be able to bridge the gap between some code writing to std::cout and other parts of the code displaying information with some sort of GUI.

redirect standard output to a file using multiple threads

I tried to redirect standard output (cout) to a file, for debugging purposes:
std::ofstream traceFile;
traceFile.open("c:/path/file.txt");
std::streambuf* fileBuff = traceFile.rdbuf();
std::cout.rdbuf(fileBuff);
std::cout << std::unitbuf;
std::cout << "disk is written\n";
But calling cout from a new thread makes the code get stuck on a mutex (xmtx.c 39: _Mtxlock()).
Do you have an idea how I could solve it?
Thank you
This example works fine for me, whilst your test case doesn't. On my machine your code seemed to double-free the streambuf from the file, whereas this example swaps it back before the destructors are called.
Maybe you need to reset cout's streambuf to the original:
std::ofstream traceFile;
traceFile.open("c:/path/file.txt");
std::streambuf* fileBuff = traceFile.rdbuf(), *origBuf;
origBuf = std::cout.rdbuf();   // Save cout's streambuf pointer
std::cout.rdbuf(fileBuff);     // Set cout's streambuf to the file's streambuf
std::cout << std::unitbuf;
std::cout << "disk is written\n";
std::cout.rdbuf(origBuf);      // Reset cout's streambuf back to the original
Also, writing into the same file by multiple threads concurrently may not be allowed.
That may be the reason for the failure to acquire the mutex.