I have an issue with my code below. I am trying to use a pointer to a file stream to write some text into a file. The code below does not write into the file; without the pointer to fstream it worked fine, but with the pointer I can't see any changes in my text file, even though the code compiles successfully.
fstream *io = new fstream("FILE/myFile.txt", ios_base::in | ios_base::out);
if (!io->is_open()) {
    cout << "Could not open file or file does not exist!" << endl;
    exit(1);
}
*io << "Hello World";
Streams buffer their output. If the stream isn't flushed, the output is never written. Since the string being written is tiny, it will sit in the buffer. The destructor of the stream would flush it, as would filling the buffer. As written, the pointer is leaked, so the stream is never destroyed and, thus, never flushed.
The fix to your problem is, in order of preference:

1. Do not use pointers.
2. Use a smart pointer, e.g., std::unique_ptr<std::ofstream>, to hold the stream.
3. delete the stream object at the end of the program (this is easy to forget, and using automated destruction is much preferable).
4. At least close() the stream using io->close(). Not deleting the stream would still be a resource leak.
5. Flushing the stream using *io << std::flush would also write the buffer. This approach would leak memory like the previous one, but would additionally leak a file descriptor.

Personally, I would go with approach 1. If I absolutely had to use pointers (which has never happened to me with streams), I would use approach 2. Everything else would technically work but is likely to result in resource leaks.
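For illustration, here is a minimal sketch of approach 2, mirroring the question's code (the file name is the question's placeholder):

#include <fstream>
#include <memory>

int main()
{
    // The unique_ptr owns the stream, so the stream's destructor runs
    // automatically, which flushes the buffer and closes the file.
    auto io = std::make_unique<std::fstream>(
        "FILE/myFile.txt", std::ios_base::in | std::ios_base::out);
    if (!io->is_open())
        return 1;
    *io << "Hello World";
}   // stream destroyed (and flushed) here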
You need to close the file:
io->close();
If you don't close the file, it will not flush its data to the disk.
When you use an fstream object on the stack, the file is closed (in the destructor) when the object goes out of scope.
This is probably only a buffering problem: you need to flush the stream in order to be sure everything is written to the file. The following snippet works fine:
#include <iostream>
#include <fstream>
#include <cstdlib>

using namespace std;

int main()
{
    ofstream* f = new ofstream("out.dat");
    if (!f->is_open())
    {
        cerr << "Impossible to open the file" << endl;
        exit(-1);
    }
    *f << "Hello, world!" << flush;
    f->close();
    delete f;
    return 0;
}
Do not forget that every new should be followed by a delete!
Do I need to close a std::fstream? Below is the code for the case in question.
#include <iostream>
#include <fstream>

using namespace std;

int main() {
    ofstream myfile;
    myfile.open("example.txt");
    myfile << "Writing this to a file.\n";
    //myfile.close();
    return 0;
}
What will be the difference if I uncomment the myfile.close() line?
There is no difference. The file stream's destructor will close the file.
You can also rely on the constructor to open the file instead of calling open(). Your code can be reduced to this:
#include <fstream>

int main()
{
    std::ofstream myfile("example.txt");
    myfile << "Writing this to a file.\n";
}
To fortify juanchopanza's answer with some reference from the std::fstream documentation:
(destructor) (virtual public member function, implicitly declared): destructs the basic_fstream and the associated buffer, and closes the file.
In this case, nothing will happen and the difference in execution time is negligible.
However, if your code runs for a long time and continuously opens files without closing them, the program may crash at run time after a certain point.
When you open a file, the operating system creates an entry to represent that file and stores information about it. So if there are 100 files opened in your OS, there will be 100 entries in the OS (somewhere in the kernel). These entries are represented by integers like (...100, 101, 102...). This entry number is the file descriptor. So it is just an integer that uniquely represents an opened file in the operating system. If your process opens 10 files, your process table will have 10 entries for file descriptors.
This is also why you can run out of file descriptors if you open lots of files at once, which will prevent *nix systems from running, since they open descriptors to things in /proc all the time.
Something similar happens on every operating system.
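As a rough illustration (descriptor limits and failure behaviour vary by OS, and the file name here is just a placeholder), a process that keeps opening streams without closing them eventually cannot open anything more:

#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::ofstream*> leaked;   // deliberately never deleted
    for (int i = 0; ; ++i) {
        auto* f = new std::ofstream("leak_demo.txt", std::ios::app);
        if (!f->is_open()) {
            // The per-process file descriptor table is exhausted
            // (often after roughly 1024 opens on Linux defaults).
            std::cout << "open failed after " << i << " streams\n";
            break;
        }
        leaked.push_back(f);
    }
}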
Under normal conditions there is no difference.
BUT under exceptional conditions (with a slight change) the call to close can cause an exception.
#include <fstream>

int main()
{
    try
    {
        std::ofstream myfile;
        myfile.exceptions(std::ios::failbit | std::ios::badbit);
        myfile.open("example.txt");
        myfile << "Writing this to a file.\n";

        // If you call close, this could potentially cause an exception.
        myfile.close();

        // On the other hand, if you let the destructor call the close()
        // method, then the destructor will catch and discard (eat) the
        // exception.
    }
    catch (...)
    {
        // If you call close(), there is a potential to get here.
        // If you let the destructor call close, there is
        // no chance of getting here.
    }
}
The author presented this code under the title "A bus error on my platform":
#include <fstream>
#include <iostream>

int main()
{
    std::ofstream log("oops.log");
    std::cout.rdbuf(log.rdbuf());
    std::cout << "Oops!\n";
    return 0;
}
The string "Oops!\n" is printed to the file "oops.log". The code doesn't restore cout's streambuf, but VS2010 didn't report a runtime error.
Since log and std::cout share a buffer, that buffer will probably be freed twice (once when log goes out of scope, then once more when the program terminates).
This results in undefined behavior, so it's hard to tell the exact reason why it triggers a bus error on his machine but silently fails on yours.
Since the other answers don't mention what to do about this, I'll provide that here. You need to save and restore the buffer that cout is supposed to be managing. For example:
#include <fstream>
#include <iostream>

// RAII method of restoring a buffer
struct buffer_restorer {
    std::ios &m_s;
    std::streambuf *m_buf;

    buffer_restorer(std::ios &s, std::streambuf *buf) : m_s(s), m_buf(buf) {}
    ~buffer_restorer() { m_s.rdbuf(m_buf); }
};

int main()
{
    std::ofstream log("oops.log");
    buffer_restorer r(std::cout, std::cout.rdbuf(log.rdbuf()));

    std::cout << "Oops!\n";
    return 0;
}
Now cout's buffer is restored before cout is destroyed at the end of the program, so when cout destroys its buffer the correct thing happens.
For simply redirecting standard I/O, the environment generally already has the ability to do that for you (e.g., I/O redirection in the shell). Rather than the above code, I'd probably simply run the program as:
yourprogram > oops.log
Also, one thing to remember is that std::cout is a global variable, with all the same downsides as other global variables. Instead of modifying it, or even using it, you may prefer to use the usual techniques to avoid global variables altogether. For example, you might pass a std::ostream &log_output parameter around and use that instead of having code use cout directly.
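A minimal sketch of that approach (the function and parameter names are just illustrative):

#include <fstream>
#include <iostream>

// The caller decides where output goes; this code never touches
// the global std::cout.
void run(std::ostream& log_output)
{
    log_output << "Oops!\n";
}

int main()
{
    std::ofstream log("oops.log");
    run(log);         // write to a file...
    run(std::cout);   // ...or to standard output, with no rdbuf swapping
}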
Your program has Undefined Behavior.
The destructor of the global cout object will delete the stream buffer when going out of scope, and the same is true of log, which also owns that very same stream buffer. Thus, you are deleting the same object twice.
When a program has Undefined Behavior, anything could happen, from formatting your hard drive to terminating without any error.
On my platform, for instance, the program enters an infinite loop after returning from main().
I have a Qt/C++ application which uses a C++ library.
This library has a log mechanism that writes string messages to standard error.
Now, I would like to be able to redirect those messages toward a panel in my Qt tool.
I would like to avoid modifying the library because it is adopted by many other clients.
Any idea how to get these messages at runtime?
If I did have the possibility of changing it, what would be a good practice for carrying those messages up to the application?
That's very poor library design. However...
How does it write to standard error? If it is outputting to std::cerr,
then you can change the streambuf used by std::cerr, something like:
std::filebuf logStream;
if (!logStream.open("logfile.txt", std::ios_base::out)) {
    // Error handling...
}
std::streambuf* originalCErrStream = std::cerr.rdbuf();
std::cerr.rdbuf(&logStream);

// Processing here, with calls to the library...

std::cerr.rdbuf(originalCErrStream);  // Using RAII would be better.
Just don't forget to restore the original streambuf; leaving std::cerr
pointing to a filebuf which has been destructed is not a good idea.
If they're using FILE*, there's an freopen function in C (and by
inclusion in C++) that you can use.
If they're using system level output (write under Unix, WriteFile
under Windows), then you're going to have to use some system level code
to change the output. (open on the new file, close on fd
STDERR_FILENO, and dup2 to set STDERR_FILENO to use the newly
opened file under Unix. I'm not sure it's possible under
Windows—maybe something with ReOpenFile or some combination of
CloseHandle followed by CreateFile.)
EDIT:
I just noticed that you actually want to output to a Qt window. This
means that you probably need a string, rather than a file. If the
library is using std::cerr, you can use a std::stringbuf, instead of
a std::filebuf; you may, in fact, want to create your own streambuf,
to pick up calls to sync (which will normally be called after each
<< on std::cerr). If the library uses one of the other techniques,
the only thing I can think of is to periodically read the file, to see
if anything has been added. (I would use read() in Unix, ReadFile()
in Windows for this, in order to be sure of being able to distinguish a
read of zero bytes, due to nothing having been written since the last
read, and an error condition. FILE* and iostream functions treat a
read of zero bytes as end of file, and will not read further.)
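Here is a minimal sketch of such a streambuf. It is an assumption-laden illustration: a generic callback stands in for the Qt panel, and the class and function names are invented for the example:

#include <functional>
#include <iostream>
#include <streambuf>
#include <string>

// Collects characters and hands each flushed chunk to a callback.
class callback_streambuf : public std::streambuf {
public:
    explicit callback_streambuf(std::function<void(const std::string&)> cb)
        : cb_(std::move(cb)) {}

protected:
    int_type overflow(int_type ch) override {
        if (!traits_type::eq_int_type(ch, traits_type::eof()))
            buffer_ += traits_type::to_char_type(ch);
        return ch;
    }

    int sync() override {   // called on flush, e.g. by std::endl
        if (!buffer_.empty()) {
            cb_(buffer_);   // in the Qt case: append to the panel
            buffer_.clear();
        }
        return 0;
    }

private:
    std::function<void(const std::string&)> cb_;
    std::string buffer_;
};

int main()
{
    callback_streambuf buf([](const std::string& s) {
        std::cout << "captured: " << s;   // stand-in for the Qt panel
    });
    std::streambuf* old = std::cerr.rdbuf(&buf);
    std::cerr << "library message" << std::endl;   // triggers sync()
    std::cerr.rdbuf(old);   // restore before buf goes out of scope
}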
A write to stderr is actually a syscall:
write(2, "blahblah", 8);
you can redirect file descriptor number 2 to anything (file, pipe, socket):
close(2); // close old stderr
int redirect_target = open(...); // open a file where you want to redirect to
// or use pipe, socket whatever you like
dup2(redirect_target, 2); // copy the redirect_target fd to fd number 2
close(redirect_target);
in your situation, you will need a pipe.
int pipefd[2];
pipe(pipefd);         // or pipe2(pipefd, 0) on Linux
dup2(pipefd[1], 2);   // atomically closes the old fd 2 and replaces it
close(pipefd[1]);
then everything written to stderr can be obtained by reading pipefd[0]:
read(pipefd[0], buffer, ...);
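Put together, a minimal POSIX sketch (error handling omitted; in a real Qt tool the read would normally happen on a separate thread so the panel stays responsive):

#include <cstdio>
#include <unistd.h>

int main()
{
    int pipefd[2];
    pipe(pipefd);           // pipefd[0]: read end, pipefd[1]: write end
    dup2(pipefd[1], 2);     // fd 2 (stderr) now feeds the pipe
    close(pipefd[1]);

    fprintf(stderr, "hello from stderr\n");   // what the library would do

    char buffer[256];
    ssize_t n = read(pipefd[0], buffer, sizeof(buffer));
    if (n > 0)
        fwrite(buffer, 1, (size_t)n, stdout); // the captured message
}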
If they're using calls to std::cerr, you can redirect this to a std::ostringstream.
#include <iostream>
#include <sstream>
class cerr_redirector
{
public:
    cerr_redirector(std::ostream& os)
        : backup_(std::cerr.rdbuf())
        , sbuf_(os.rdbuf())
    {
        std::cerr.rdbuf(sbuf_);
    }

    ~cerr_redirector()
    {
        std::cerr.rdbuf(backup_);
    }

private:
    cerr_redirector();
    cerr_redirector(const cerr_redirector& copy);
    cerr_redirector& operator =(const cerr_redirector& assign);

    std::streambuf* backup_;
    std::streambuf* sbuf_;
};
You can capture the output using:
std::ostringstream os;
cerr_redirector red(os);
std::cerr << "This is written to the stream" << std::endl;
std::cout will be unaffected:
std::cout << "This is written to stdout" << std::endl;
So you can then test your capture is working:
std::cout << "and now: " << os.str() << std::endl;
Or just add the contents of os.str() to your Qt Window.
Demonstration at ideone.
Here I found a complete implementation of what I needed...
Thanks everybody for the help! :)
I tried to redirect standard output (cout) to a file for debugging purposes:
std::ofstream traceFile;
traceFile.open("c:/path/file.txt");
std::streambuf* fileBuff = traceFile.rdbuf();
std::cout.rdbuf(fileBuff);
std::cout << std::unitbuf;
std::cout << "disk is written\n";
But calling cout from a new thread makes the code get stuck on a mutex (xmtx.c 39: _Mtxlock()).
Do you have an idea how I could solve it?
Thank you
This example works fine for me, whilst your test case doesn't. On my machine your code seemed to double free the streambuf from the file, whereas this example swaps it back before the destructors are called.
Maybe you need to reset cout's streambuf back to the original.
std::ofstream traceFile;
traceFile.open("c:/path/file.txt");
std::streambuf* fileBuff = traceFile.rdbuf(), *origBuf;
origBuf = cout.rdbuf();        // Save cout's streambuf pointer
std::cout.rdbuf(fileBuff);     // Point cout at the file's streambuf
std::cout << std::unitbuf;
std::cout << "disk is written\n";
cout.rdbuf(origBuf);           // Reset cout's streambuf back to the original
Also, writing to the same file from multiple threads concurrently may not be allowed.
That may be the reason the mutex acquisition fails.
Do I need to manually call close() when I use a std::ifstream?
For example, in the code:
std::string readContentsOfFile(std::string fileName) {
    std::ifstream file(fileName.c_str());
    if (file.good()) {
        std::stringstream buffer;
        buffer << file.rdbuf();
        file.close();
        return buffer.str();
    }
    throw std::runtime_error("file not found");
}
Do I need to call file.close() manually? Shouldn't ifstream make use of RAII for closing files?
NO
This is what RAII is for; let the destructor do its job. There is no harm in closing it manually, but it's not the C++ way; it's programming in C with classes.
If you want to close the file before the end of a function, you can always use a nested scope.
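For instance, a small sketch of the nested-scope idea (the file name is just a placeholder):

#include <fstream>
#include <string>

int main()
{
    std::string line;
    {
        std::ifstream file("input.txt");
        std::getline(file, line);
    }   // file is destroyed, and therefore closed, right here

    // ... keep working with line; the file is already closed ...
}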
In the standard (27.8.1.5 Class template basic_ifstream), ifstream is to be implemented with a basic_filebuf member holding the actual file handle. It is held as a member so that when an ifstream object destructs, it also calls the destructor on basic_filebuf. And from the standard (27.8.1.2), that destructor closes the file:
virtual ~basic_filebuf();
Effects: Destroys an object of class basic_filebuf<charT,traits>. Calls close().
Do you need to close the file?
NO
Should you close the file?
Depends.
Do you care about the possible error conditions that could occur if the file fails to close correctly? Remember that close calls setstate(failbit) if it fails. The destructor will call close() for you automatically because of RAII, but will not leave you a way of testing the fail bit, as the object no longer exists.
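If you do care, here is a minimal sketch of checking the result yourself (the file name is illustrative):

#include <fstream>
#include <iostream>

int main()
{
    std::ofstream out("data.txt");
    out << "important data\n";

    out.close();                // flushes, then closes the file
    if (out.fail())             // close() sets failbit on failure
        std::cerr << "write or close failed; data may be lost\n";
}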
You can allow the destructor to do its job. But just like any RAII object, there may be times that calling close manually can make a difference. For example:
#include <fstream>

using std::ofstream;

int main() {
    ofstream ofs("hello.txt");
    ofs << "Hello world\n";
    return 0;
}
writes file contents. But:
#include <stdlib.h>
#include <fstream>

using std::ofstream;

int main() {
    ofstream ofs("hello.txt");
    ofs << "Hello world\n";
    exit(0);
}
doesn't. These are rare cases where a process exits suddenly; a crashing process could do something similar.
No, this is done automatically by the ifstream destructor. The only reason to call it manually is when the fstream instance has a large scope, for example when it is a member variable of a long-lived class instance.
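For example, a sketch of that long-lived-member case (the class, method, and file names are invented for illustration):

#include <fstream>
#include <string>

class Session {
public:
    explicit Session(const std::string& logPath) : log_(logPath) {}

    void record(const std::string& line) { log_ << line << '\n'; }

    // The Session may outlive its need for the log by a long time,
    // so release the file handle as soon as logging is finished
    // instead of waiting for the destructor.
    void finishLogging() { log_.close(); }

private:
    std::ofstream log_;
};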