Why doesn't closing a file automatically clear error state? - c++

When I use an ifstream to read a file, I loop over all lines in the file and then close it. When I then try opening a different file with the same ifstream object, it still reports the end-of-file state. I'm wondering why closing the file doesn't automatically clear the state for me; I have to call clear() explicitly after close().
Is there any reason why it was designed like this? To me, that's really painful if you want to reuse the same fstream object for different files.
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    ifstream input;
    input.open("c:\\input.txt");
    string line;
    while (!input.eof())
    {
        getline(input, line);
        cout << line << endl;
    }
    // OK, 1 is returned here, which means End-Of-File
    cout << input.rdstate() << endl;
    // Why doesn't this clear any error/state of the current file, i.e., EOF here?
    input.close();
    // Now I want to open a new file
    input.open("c:\\output.txt");
    // But I still get the EOF error
    cout << input.rdstate() << endl;
    while (!input.eof())
    {
        getline(input, line);
        cout << line << endl;
    }
}

Personally, I think close() should reset the flags, as I've been bitten by this in the past. Still, to mount my hobby-horse once more, your read code is wrong:
while (!input.eof())
{
    getline(input, line);
    cout << line << endl;
}
should be:
while (getline(input, line))
{
    cout << line << endl;
}
To see why, consider what happens if you try to read a completely empty file. The eof() call will return false (because although the file is empty, you have not yet read anything, and only reads set the eof bit) and you will output a line which does not exist.

The call to close may fail. When it does fail, it sets the failbit. If it reset the state of the stream, you wouldn't be able to check whether or not the call to close succeeded.
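For illustration, here is a minimal sketch (the file name is made up) of closing explicitly so that you still have an object whose state you can test:

#include <fstream>
#include <iostream>

int main()
{
    std::ofstream out("data.txt");  // hypothetical file name
    out << "some output\n";
    out.close();                    // on failure this sets failbit
    if (out.fail())
        std::cerr << "close (or a pending write) failed\n";
}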

Because the flags are associated with the stream, not the file.

This has been changed in C++11 (C++0x): not so that close() discards any errors detected, but a successful open() will call clear() for you.
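Applied to the question's code, a sketch of the reuse pattern that works both before and after that change (same paths as in the question, and with the corrected read loop):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream input("c:\\input.txt");
    std::string line;
    while (std::getline(input, line))
        std::cout << line << '\n';

    input.close();
    input.clear();                   // required pre-C++11; harmless in C++11 and later
    input.open("c:\\output.txt");    // in C++11, a successful open() calls clear() anyway

    while (std::getline(input, line))
        std::cout << line << '\n';
}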

Related

Keep boost.process alive outside of the function it was called from

I'm using Visual Studio 2019 on Windows 10 with the boost.process library. I'm trying to make a chess program, and I'm using the stockfish engine as a separate executable. I need the engine to run throughout the entirety of the game, as that's how it's designed to be used.
Currently I have in ChessGame.h
class ChessGame
{
public:
    void startStockFish();
    void beginGame();
    void parseCommand(std::string cmd);
private:
    boost::process::child c;
    boost::process::ipstream input;
    boost::process::opstream output;
};
And in ChessGame.cpp
#include "ChessGame.h"

void ChessGame::startStockFish()
{
    std::string exec = "stockfish_10_x32.exe";
    std::vector<std::string> args = { };
    boost::process::child c(exec, args, boost::process::std_out > input,
                            boost::process::std_in < output);
    //c.wait();
}
void ChessGame::beginGame()
{
    parseCommand("uci");
    parseCommand("ucinewgame");
    parseCommand("position startpos");
    parseCommand("go");
}

void ChessGame::parseCommand(std::string cmd)
{
    output << cmd << std::endl;
    std::string line;
    while (std::getline(input, line) && !line.empty())
    {
        std::cout << line << std::endl;
    }
}
And in main.cpp
ChessGame chessGame = ChessGame(isWhite); // isWhite is a boolean that controls who the player is, irrelevant to the question
//std::thread t(&ChessGame::startStockFish, chessGame);
chessGame.startStockFish();
chessGame.beginGame();
The problem, I believe, is that as soon as startStockFish() finishes it terminates c, since nothing is output to the terminal as described above, whereas if I call beginGame() from within startStockFish() it outputs as expected. Also, if I uncomment the c.wait() line so the function waits for stockfish to exit, it gets stuck, because stockfish never receives the exit command. If I instead try running startStockFish on a separate thread in main (as seen above) I get the following two errors:
the argument to a feature-test macro must be a simple identifier.
In file 'boost\system\detail\config.hpp' line 51
and
'std::tuple::tuple': no overloaded function takes 2 arguments.
In file 'memory' line 2042
Also, I don't want to use threads, as I can imagine that would bring its own issues with the input and output streams.
So is there a way for me to keep the process alive outside of this function, or do I need to reorganise my code some other way? I believe creating the process in main would work, but I really don't want to do that, as I want to keep all the chess-related code in ChessGame.cpp.
OK, I believe that adding c.detach(); after initialising the boost.process child in startStockFish() has done what I want, as the program no longer terminates c when the function ends. Input appears to work fine with a detached process: simply writing output << cmd << std::endl;, where cmd is the desired command as a std::string, causes no issues. However, output does have some issues. The usual method of
std::string line;
while (std::getline(input, line) && !line.empty())
{
    // Do something with line
}
somewhat works, but std::getline(input, line) gets stuck when there are no more lines to output. I couldn't find a direct solution to this, but I did find a workaround.
Firstly I changed the initialisation of the boost.process child to
boost::process::child c(exec, args, boost::process::std_out > "console.txt", boost::process::std_in < output);
And then changed input to a std::ifstream, a file reader stream. Then to get the output I used
input.open("console.txt");
std::string line;
while (std::getline(input, line))
{
// Do something with line
}
input.close();
I also added remove("console.txt"); to the beginning of startStockFish() to start from a fresh text file.
I'm not confident that this is the best solution, as I'm worried about what would happen if stockfish tried to write to console.txt while input was reading from it, but that either hasn't happened or hasn't been an issue, so for now it's an adequate solution.
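As an alternative to detaching (a sketch only, not tested against the original project): boost::process::child is movable, so the process can be constructed directly into the class member instead of a local variable, and it then lives as long as the ChessGame object does:

#include <boost/process.hpp>
#include <string>
#include <vector>

namespace bp = boost::process;

class ChessGame
{
public:
    void startStockFish()
    {
        std::string exec = "stockfish_10_x32.exe";
        std::vector<std::string> args = {};
        // Move-assign into the member instead of declaring a local child,
        // so the process is not destroyed when this function returns.
        c = bp::child(exec, args, bp::std_out > input, bp::std_in < output);
    }

private:
    bp::child c;
    bp::ipstream input;
    bp::opstream output;
};

With the member owning the process, other member functions can still call c.running(), c.wait(), or c.terminate() when the game ends.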

In c++, is stream.clear() necessary after a failure has occurred?

I have the following code:
string promptPlayerForFile(ifstream &infile, string prompt) {
    while (true) {
        string filename;
        cout << prompt;
        getline(cin, filename);
        infile.open(filename.c_str());
        if (!infile.fail()) return filename;
        infile.clear();
        cout << "Unable to open that file. Try again." << endl;
    }
}
The function works as expected: you enter file names until you give a correct one, in which case it associates a stream with the file and returns the filename string.
I then tried commenting out the line infile.clear() to see what happens. (I read that it needs to be included after a failure has occurred in order to reset the relevant bits of the stream.)
However, after commenting this out, the function behaves as before. If I first give a wrong filename and then a correct one, it still works, so somehow the failure bits get reset even without that line. Is infile.clear() then necessary, and what are its appropriate uses?
If you are using C++11 or higher, you don't need to call infile.clear(): if open() is successful, it calls clear() for you.
If you are using a pre-C++11 compiler, it is necessary to call infile.clear(); the language does not guarantee that the failbit(s) are cleared when open() is successful.
See https://en.cppreference.com/w/cpp/io/basic_ifstream/open for details about the call to clear().
infile.clear() is relevant if and only if you want to continue to interact with the stream (e.g. read from it). If your program ends anyway, you don't have to clear the error flags.
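To illustrate that last point with a generic sketch (not tied to the file-opening code above): after a failed formatted read, the stream refuses to do anything further until you clear() it:

#include <iostream>
#include <limits>

int main()
{
    int n;
    std::cout << "Please enter a number: ";
    while (!(std::cin >> n))        // extraction failed, failbit is now set
    {
        std::cin.clear();           // reset the error flags so the stream works again
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // discard the bad line
        std::cout << "Please enter a number: ";
    }
    std::cout << "You entered " << n << '\n';
}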

What happens if I never call `close` on an open file stream? [duplicate]

Below is the code for the case in question.
#include <iostream>
#include <fstream>
using namespace std;

int main () {
    ofstream myfile;
    myfile.open("example.txt");
    myfile << "Writing this to a file.\n";
    //myfile.close();
    return 0;
}
What will be the difference if I uncomment the myfile.close() line?
There is no difference. The file stream's destructor will close the file.
You can also rely on the constructor to open the file instead of calling open(). Your code can be reduced to this:
#include <fstream>

int main()
{
    std::ofstream myfile("example.txt");
    myfile << "Writing this to a file.\n";
}
To fortify juanchopanza's answer with some reference from the std::fstream documentation
(destructor)
[virtual](implicitly declared)
destructs the basic_fstream and the associated buffer, closes the file
(virtual public member function)
In this case, nothing bad will happen and the difference in execution time is negligible.
However, if your code runs for a long time and keeps opening files without closing them, it may eventually crash at run time.
When you open a file, the operating system creates an entry to represent that file and stores information about the opened file. If there are 100 files opened in your OS, there will be 100 entries in the OS (somewhere in the kernel). These entries are represented by integers (... 100, 101, 102 ...). This entry number is the file descriptor: just an integer that uniquely identifies an opened file in the operating system. If your process opens 10 files, your process table will have 10 entries for file descriptors.
Also, this is why you can run out of file descriptors if you open lots of files at once, which will keep *nix systems from working properly, since they open descriptors to things in /proc all the time.
Something similar happens on every operating system.
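A minimal sketch of that failure mode (the file name is made up, and the exact limit depends on the operating system): streams that are never destroyed never release their descriptors, so the per-process limit is eventually hit:

#include <fstream>
#include <iostream>
#include <vector>

int main() {
    std::vector<std::ifstream*> leaked;
    for (int i = 0; i < 1000000; ++i) {
        auto* f = new std::ifstream("example.txt"); // assumes example.txt exists
        if (!f->is_open()) {
            std::cout << "open failed after " << i << " streams\n"; // descriptor limit reached
            break;
        }
        leaked.push_back(f); // never deleted, so the file is never closed
    }
}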
Under normal conditions there is no difference.
BUT under exceptional conditions (with a slight change to the code) the call to close can cause an exception.
#include <fstream>
using std::ofstream;

int main()
{
    try
    {
        ofstream myfile;
        myfile.exceptions(std::ios::failbit | std::ios::badbit);
        myfile.open("example.txt");
        myfile << "Writing this to a file.\n";

        // If you call close(), this could potentially cause an exception.
        myfile.close();

        // On the other hand, if you let the destructor call close(),
        // then the destructor will catch and discard (eat) the exception.
    }
    catch (...)
    {
        // If you call close(), there is a potential to get here.
        // If you let the destructor call close(), then there is
        // no chance of getting here.
    }
}

Reuse a filestream in C++?

I have a program that takes multiple files as input. What I'm trying to do is reuse the same filestream for each of them. I keep getting an error when trying to open the stream with the second file. Why is my code not valid, and why does it produce an error at compile time? argv[2] is a const char*.
error: no match for call to '(std::ifstream) (char*&)'
ifstream fin(argv[1]);
//work with filestream
fin.close();
fin(argv[2]);
//work with filestream
fin.close();
The first line, ifstream fin(argv[1]);, is invoking ifstream's constructor, and the constructor can only be called once per object. Your code is trying to call it a second time. Try using open() instead:
fin.open(argv[2]);
As an aside, you may also want to call clear() before you reopen your ifstream. The reason for this is that if the first open() (or even close()) fails, error bits on the ifstream will be set, and they won't be cleared by close().
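Putting both suggestions together, a sketch of the corrected snippet (assuming argv[1] and argv[2] are valid paths):

#include <fstream>
using namespace std;

int main(int argc, char* argv[])
{
    ifstream fin(argv[1]);
    // work with filestream
    fin.close();
    fin.clear();          // reset any error bits before reuse (required pre-C++11)
    fin.open(argv[2]);    // reopen via open(), not the constructor
    // work with filestream
    fin.close();
}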
Use a local scope:
{
    ifstream fin(argv[1]);
    //work with filestream
}
{
    ifstream fin(argv[2]);
    //work with filestream
}
Note that you don't need to close the streams manually; this is handled automatically when they go out of scope.

Do I need to manually close an ifstream?

Do I need to manually call close() when I use a std::ifstream?
For example, in the code:
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

std::string readContentsOfFile(std::string fileName) {
    std::ifstream file(fileName.c_str());
    if (file.good()) {
        std::stringstream buffer;
        buffer << file.rdbuf();
        file.close();
        return buffer.str();
    }
    throw std::runtime_error("file not found");
}
Do I need to call file.close() manually? Shouldn't ifstream make use of RAII for closing files?
NO
This is what RAII is for, let the destructor do its job. There is no harm in closing it manually, but it's not the C++ way, it's programming in C with classes.
If you want to close the file before the end of a function you can always use a nested scope.
In the standard (27.8.1.5 Class template basic_ifstream), ifstream is to be implemented with a basic_filebuf member holding the actual file handle. It is held as a member so that when an ifstream object destructs, it also calls the destructor on basic_filebuf. And from the standard (27.8.1.2), that destructor closes the file:
virtual ~basic_filebuf();
Effects: Destroys an object of class basic_filebuf<charT,traits>. Calls close().
Do you need to close the file?
NO
Should you close the file?
Depends.
Do you care about the possible error conditions that could occur if the file fails to close correctly? Remember that close calls setstate(failbit) if it fails. The destructor will call close() for you automatically because of RAII but will not leave you a way of testing the fail bit as the object no longer exists.
You can allow the destructor to do its job. But just like any RAII object, there are times when calling close() manually can make a difference. For example:
#include <fstream>
using std::ofstream;

int main() {
    ofstream ofs("hello.txt");
    ofs << "Hello world\n";
    return 0;
}
writes file contents. But:
#include <stdlib.h>
#include <fstream>
using std::ofstream;

int main() {
    ofstream ofs("hello.txt");
    ofs << "Hello world\n";
    exit(0);
}
doesn't. These are rare cases where a process exits suddenly; a crashing process could do something similar.
No, this is done automatically by the ifstream destructor. The only reason to call it manually is when the fstream instance has a large scope, for example when it is a member variable of a long-lived class instance.
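For instance, a sketch of that long-lived-member case (the class and file name are made up): the owning object lives for most of the program, so close() is called explicitly as soon as logging is finished rather than waiting for the destructor:

#include <fstream>
#include <string>

class SessionLog {
public:
    SessionLog() : log_("session.log") {}            // opened for the lifetime of the object

    void write(const std::string& line) { log_ << line << '\n'; }

    void finish() {
        log_.close();   // release the file now instead of waiting for the destructor
    }

private:
    std::ofstream log_;
};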