I am having trouble writing through an ofstream pointer, which is quite perplexing, as I really don't see what is missing anymore. Note, this is a follow-up to this question:
C++ vector of ofstream, how to write to one particular element
My code is as follows:
#include <fstream>
#include <memory>
#include <string>
#include <vector>
using namespace std;

std::vector<shared_ptr<ofstream>> filelist;

int main()
{
    for (int ii = 0; ii < 10; ii++)
    {
        // int2string is my own int-to-string helper
        string filename = "/dev/shm/table_" + int2string(ii) + ".csv";
        filelist.push_back(make_shared<ofstream>(filename.c_str()));
    }
    *filelist[5] << "some string" << endl;
    filelist[5]->flush();
    exit(1);
}
This doesn't write anything to the output file, though it does create 10 empty files. Does anybody know what might possibly be wrong here?
EDIT: I ran some further tests. I let the code run to completion without exit(1), over all files, until all callbacks were finished. It turns out that some files are not empty, while others that should have data are empty.
There is plenty of disk space, and I know I have more file descriptors than are necessary for this. Any explanation for why some of the files would be written properly while others are not?
I'd try: (*filelist[5])<<"some string\n";.
I'd guess, however, that you probably meant to write to the files inside a loop -- as-is, you're writing to only one file.
Oh, and in C++, you don't want to use exit.
Edit: Here's a quick (tested) standalone demo:
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

std::vector<std::ofstream *> filelist;

int main() {
    for (int ii = 0; ii < 3; ii++)
    {
        const char *names[] = {"one", "two", "three"};
        std::string filename = "c:\\trash_";
        filename += names[ii];
        filename += ".txt";
        filelist.push_back(new std::ofstream(filename.c_str()));
    }
    for (std::size_t i = 0; i < filelist.size(); i++) {
        (*filelist[i]) << "some string\n";
        filelist[i]->close();
    }
}
Note, however, that the file name this generates is for Windows, whereas the original was (apparently) intended for something Unix-like. For a Unix-like OS, you'll need/want a different file name string.
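A small sketch of one way to pick the prefix per platform (the directories are illustrative, not from the original answer):

#include <string>

// Build a per-platform scratch path; the directory choices are only examples.
std::string make_filename(const std::string &stem)
{
#ifdef _WIN32
    return "c:\\trash_" + stem + ".txt";
#else
    return "/tmp/trash_" + stem + ".txt";
#endif
}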
Try closing the file before you call exit, with filelist[5]->close();. You've aborted a process with an open file, which means your write may not have made it to the OS buffer, or was discarded upon process exit. You could also remove the exit call; that would probably fix the problem. The results of IO in a process that is aborted are tricky to nail down, so it's best to avoid aborting with active IO, or to assume any active IO will fail upon abort.
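To make that concrete, here is a minimal sketch of the corrected program (untested against the original setup, and substituting std::to_string for the asker's int2string helper):

#include <fstream>
#include <memory>
#include <string>
#include <vector>

std::vector<std::shared_ptr<std::ofstream>> filelist;

int main()
{
    for (int ii = 0; ii < 10; ii++) {
        std::string filename = "/dev/shm/table_" + std::to_string(ii) + ".csv";
        filelist.push_back(std::make_shared<std::ofstream>(filename.c_str()));
    }

    *filelist[5] << "some string" << std::endl;

    // Close every stream explicitly so buffered data reaches the OS
    // before the process ends; aborting with exit() can discard it.
    for (auto &f : filelist)
        f->close();

    return 1; // report failure via the return value instead of exit(1)
}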
Related
I have some code that resembles this minimal reproduction example (the real version generates some code and compiles it):
#include <cstdio>   // std::remove
#include <cstdlib>  // system, EXIT_SUCCESS
#include <fstream>
#include <string>
#include <thread>
#include <vector>
void write(unsigned int thread)
{
    std::ofstream stream("test_" + std::to_string(thread) + ".txt");
    stream << "test" << std::endl;
    stream << "thread" << std::endl;
    stream << "bad" << std::endl;
}

void test(unsigned int thread)
{
    write(thread);
#ifdef _WIN32
    const std::string command = "rename test_" + std::to_string(thread) + ".txt test_renamed_" + std::to_string(thread) + ".txt";
#else
    const std::string command = "mv test_" + std::to_string(thread) + ".txt test_renamed_" + std::to_string(thread) + ".txt";
#endif
    system(command.c_str());
}

int main()
{
    std::vector<std::thread> threads;
    for(unsigned int i = 0; i < 5; i++) {
        // Remove renamed file
        std::remove(("test_renamed_" + std::to_string(i) + ".txt").c_str());
        threads.emplace_back(test, i);
    }
    // Join all threads
    for(auto &t : threads) {
        t.join();
    }
    return EXIT_SUCCESS;
}
My understanding is that std::ofstream should behave in a nice RAII manner, closing and flushing at the end of the write function. On Linux, it appears to do just this. However, on Windows 10 I get sporadic "The process cannot access the file because it is being used by another process" errors. I've dug into it with procmon, and it looks like the file isn't getting closed by the parent process (22224), resulting in the SHARING_VIOLATION that presumably causes the error.
Although the procmon trace makes it look like the problem is within my process, I have tried turning off the virus scanner. I have also tried using C-style fopen/fprintf/fclose, and ensuring that the process I'm spawning with system isn't inheriting file handles somehow by clearing HANDLE_FLAG_INHERIT on the underlying file handle... which leaves me somewhat out of ideas. Any thoughts, SO?
We can rewrite the file writing using the Win32 API:

#include <windows.h>

void writeRaw(unsigned int thread)
{
    const auto str = "test_" + std::to_string(thread) + ".txt";
    auto hFile = CreateFileA(str.c_str(), GENERIC_WRITE,
                             FILE_SHARE_WRITE, nullptr, CREATE_ALWAYS, 0, nullptr);
    DWORD ret{};
    WriteFile(hFile, str.data(), static_cast<DWORD>(str.size()), &ret, nullptr);
    CloseHandle(hFile);
}
Running the test still gives a sharing violation, due to the way Windows works. When the last handle is closed, the filesystem driver receives an IRP_MJ_CLEANUP request to finish processing anything related to the file.
Antivirus software, for instance, may attempt to scan the file at that point (and incidentally hold a lock on it =) ). Additionally, the MSDN documentation for IRP_MJ_CLEANUP states:
It is important to note that when all handles to a file object have been closed, this does not necessarily mean that the file object is no longer being used. System components, such as the Cache Manager and the Memory Manager, might hold outstanding references to the file object. These components can still read from or write to a file, even after an IRP_MJ_CLEANUP request is received.
Conclusion: it is expected to receive a sharing violation on Windows if a process tries to do something with a file shortly after closing the handle, as the underlying system components may still be processing the close request.
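If spawning a shell for the rename isn't essential, one mitigation (a sketch, not part of the original answer) is to call std::rename directly and retry briefly, since the violation is transient:

#include <chrono>
#include <cstdio>
#include <string>
#include <thread>

// Sketch of a retry loop: the sharing violation usually clears within
// milliseconds once the close has fully completed on the OS side.
bool renameWithRetry(const std::string &from, const std::string &to,
                     int attempts = 10)
{
    for (int i = 0; i < attempts; ++i) {
        if (std::rename(from.c_str(), to.c_str()) == 0)
            return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
    return false;
}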
At least on VS 2017, I can confirm the file is closed by your snippet (in the destructor of ofstream, the code calls fclose on the handle).
I think, however, that the issue is not in the C++ code but in the behavior of the OS.
On Windows, the act of removing a file which the OS thinks is open will be blocked. On Unix, unlinking a file from a directory allows existing handles to continue operating on the orphaned file, so the operation could never be a sharing violation there; unlinking is simply a different operation. Linux semantics can be opted into on recent Windows 10 builds, as sketched below.
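For reference, here is a sketch of opting into those semantics via SetFileInformationByHandle; it assumes a Windows 10 SDK recent enough to declare FILE_DISPOSITION_INFO_EX (roughly Windows 10 1607+), so treat it as illustrative rather than part of the original answer:

#include <windows.h>

// Sketch: delete a file with POSIX semantics (the name disappears at once,
// existing handles keep working). Requires DELETE access and a recent
// Windows 10; on older systems SetFileInformationByHandle fails.
bool deletePosix(const char *path)
{
    HANDLE h = CreateFileA(path, DELETE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;

    FILE_DISPOSITION_INFO_EX info{};
    info.Flags = FILE_DISPOSITION_FLAG_DELETE | FILE_DISPOSITION_FLAG_POSIX_SEMANTICS;
    BOOL ok = SetFileInformationByHandle(h, FileDispositionInfoEx, &info, sizeof(info));
    CloseHandle(h);
    return ok != FALSE;
}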
procmon on Windows runs at a given altitude. That means any operation performed by a virus scanner may be hidden from procmon, which would then give a false answer.
A process can also duplicate a handle to the open file, which would cause this issue too, without the handle ever showing as closed.
The most probable cause of the problem is that when you delete a file in Windows, it isn't immediately deleted (it's just flagged for deletion). It can/will take some milliseconds (up to seconds if you're very unlucky) for it to actually be deleted.
Source: Niall Douglas in "Better mutual exclusion on the filesystem using Boost.AFIO", at about 10:10: https://www.youtube.com/watch?v=9l28ax3Zq0w
I'm using Visual Studio 2019 on Windows 10, using the boost.process library. I'm trying to make chess, and I'm using the stockfish engine as a separate executable. I need the engine to run throughout the entirety of the game, as that's how it's designed to be used.
Currently I have in ChessGame.h
#include <boost/process.hpp>
#include <string>

class ChessGame
{
public:
    void startStockFish();
    void beginGame();
    void parseCommand(std::string cmd);
private:
    boost::process::child c;
    boost::process::ipstream input;
    boost::process::opstream output;
};
And in ChessGame.cpp
#include "ChessGame.h"
#include <iostream>
#include <vector>
void ChessGame::startStockFish()
{
    std::string exec = "stockfish_10_x32.exe";
    std::vector<std::string> args = { };
    boost::process::child c(exec, args, boost::process::std_out > input,
                            boost::process::std_in < output);
    //c.wait()
}

void ChessGame::beginGame()
{
    parseCommand("uci");
    parseCommand("ucinewgame");
    parseCommand("position startpos");
    parseCommand("go");
}

void ChessGame::parseCommand(std::string cmd)
{
    output << cmd << std::endl;
    std::string line;
    while (std::getline(input, line) && !line.empty())
    {
        std::cout << line << std::endl;
    }
}
And in main.cpp
ChessGame chessGame = ChessGame(isWhite); //isWhite is a boolean that controls who the player is, irrelevant to the question
//std::thread t(&ChessGame::startStockFish, chessGame);
chessGame.startStockFish();
chessGame.beginGame();
The problem is that, I believe, as soon as the function startStockFish finishes, it terminates c, as nothing is output to the terminal as described above; yet if I call beginGame() within startStockFish(), it outputs as expected. Also, if I uncomment the line c.wait() so the function waits for stockfish to exit, it gets stuck, as stockfish never gets the exit command. If I instead try running startStockFish on a separate thread in main (as seen above), I get the following two errors:
the argument to a feature-test macro must be a simple identifier.
In file 'boost\system\detail\config.hpp' line 51
and
'std::tuple::tuple': no overloaded function takes 2 arguments.
In file 'memory' line 2042
Also, I don't want to use threads as I can imagine that will have its own issues with the input and output streams.
So is there a way for me to keep the process alive out of this function, or do I need to reorganise my code some other way? I believe having the process being called in main would work, but I really don't want to do that as I want to keep all the chess-related code in ChessGame.cpp.
Ok, I believe that adding c.detach(); after initialising the boost.process child in startStockFish() has done what I want: the program no longer terminates c when the function ends. Input appears to work fine with a detached process; simply writing output << cmd << std::endl;, where cmd is the desired command as a std::string, causes no issues. However, output does have some issues. The usual method of
std::string line;
while (std::getline(input, line) && !line.empty())
{
    // Do something with line
}
somewhat works, but std::getline(input, line) will hang when there are no more lines to output. I couldn't find a direct solution to this, but I did find a workaround.
Firstly I changed the initialisation of the boost.process child to
boost::process::child c(exec, args, boost::process::std_out > "console.txt", boost::process::std_in < output);
And then changed input to a std::ifstream, a file reader stream. Then to get the output I used
input.open("console.txt");
std::string line;
while (std::getline(input, line))
{
    // Do something with line
}
input.close();
I also added remove("console.txt"); to the beginning of startStockFish() to start each run with a fresh text file.
I'm not confident that this is the best solution, as I am worried about what would happen if stockfish tried to write to console.txt while input was reading from it. But that hasn't seemed to occur, or hasn't been an issue if it has, so right now it is an adequate solution.
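For completeness, the lifetime problem itself can also be fixed without detaching: in the original startStockFish(), the line boost::process::child c(...) declares a new local child that shadows the member c and is destroyed when the function returns. A sketch of the alternative (untested, same members as above):

void ChessGame::startStockFish()
{
    std::string exec = "stockfish_10_x32.exe";
    std::vector<std::string> args = { };
    // Move-assign into the member instead of declaring a new local child,
    // so the process outlives this function and can still be waited on
    // or terminated later.
    c = boost::process::child(exec, args,
                              boost::process::std_out > input,
                              boost::process::std_in < output);
}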
Do I need to close a std::fstream? Below is the code for the case in question:
#include <iostream>
#include <fstream>
using namespace std;

int main () {
    ofstream myfile;
    myfile.open ("example.txt");
    myfile << "Writing this to a file.\n";
    //myfile.close();
    return 0;
}
What will be the difference if I uncomment the myfile.close() line?
There is no difference. The file stream's destructor will close the file.
You can also rely on the constructor to open the file instead of calling open(). Your code can be reduced to this:
#include <fstream>

int main()
{
    std::ofstream myfile("example.txt");
    myfile << "Writing this to a file.\n";
}
To fortify juanchopanza's answer with a reference from the std::fstream documentation:
(destructor) [virtual] (implicitly declared): destructs the basic_fstream and the associated buffer, and closes the file. (virtual public member function)
In this case, nothing bad will happen, and the difference in execution time is negligible.
However, if your code runs for a long time and continuously opens files without closing them, after a certain point the program may crash at run time.
When you open a file, the operating system creates an entry to represent that file and stores information about the opened file. So if there are 100 files opened in your OS, there will be 100 entries in the OS (somewhere in the kernel). These entries are represented by integers like (...100, 101, 102...). This entry number is the file descriptor: just an integer that uniquely represents an opened file in the operating system. If your process opens 10 files, your process table will have 10 entries for file descriptors.
Also, this is why you can run out of file descriptors if you open lots of files at once, which will prevent *nix systems from running properly, since they open descriptors to things in /proc all the time.
Something similar happens in all operating systems.
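As a quick illustration of descriptor exhaustion (a sketch; the exact limit and failure behavior depend on the OS and its configured limits), a program that never closes its streams will eventually fail to open new ones:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<std::ofstream> files;
    for (int i = 0; ; ++i) {
        files.emplace_back("file_" + std::to_string(i) + ".txt");
        if (!files.back().is_open()) {
            // On a typical Linux system this happens near the per-process
            // descriptor limit (often 1024), since nothing is ever closed.
            std::cout << "open failed after " << i << " files\n";
            break;
        }
    }
}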
Under normal conditions there is no difference.
BUT under exceptional conditions (with a slight change) the call to close can cause an exception.
#include <fstream>

int main()
{
    try
    {
        std::ofstream myfile;
        myfile.exceptions(std::ios::failbit | std::ios::badbit);
        myfile.open("example.txt");
        myfile << "Writing this to a file.\n";

        // If you call close, this could potentially throw an exception.
        myfile.close();

        // On the other hand, if you let the destructor call the close()
        // method, then the destructor will catch and discard (eat) the
        // exception.
    }
    catch(...)
    {
        // If you call close(), there is a potential to get here.
        // If you let the destructor call close, then there is
        // no chance of getting here.
    }
}
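A related sketch without exception flags: calling close() yourself also lets you check whether the final flush succeeded, which the destructor would swallow silently:

#include <fstream>
#include <iostream>

int main()
{
    std::ofstream myfile("example.txt");
    myfile << "Writing this to a file.\n";
    myfile.close();          // flush and close while we can still react
    if (myfile.fail()) {     // a failed flush/close is detectable here
        std::cerr << "write to example.txt failed\n";
        return 1;
    }
}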
I need to implement a class which holds a regular text file that will be valid for both read and write operations from multiple threads (say, "reader" threads and "writers").
I am working in Visual Studio 2010 and can use only the libraries available with it, so I chose to use the std::fstream class for the file operations, and the CreateThread function & CRITICAL_SECTION object from the <windows.h> header.
I'll start by saying that, at the beginning, I'm seeking a simple solution, just so it works. :)
My idea is as follows:
I created a File class that will hold the file and a "mutex" (CRITICAL_SECTION object) as private members.
In addition, this class (File class) provides a "public interface" to the "reader/writer" threads in order to perform a synchronized access to the file for both read and write operations.
See the header file of File class:
#include <fstream>
#include <string>
#include <windows.h>

class File {
private:
    std::fstream iofile;
    int size;
    CRITICAL_SECTION critical;
public:
    File(std::string fileName = " ");
    ~File();
    int getSize();
    // the public interface:
    void read();
    void write(std::string str);
};
Also see the source file:
#include "File.h"
File :: File(std::string fileName)
{
// create & open file for read write and append
// and write the first line of the file
iofile.open(fileName, std::fstream::in | std::fstream::out | std::fstream::app); // **1)**
if(!iofile.is_open()) {
std::cout << "fileName: " << fileName << " failed to open! " << std::endl;
}
// initialize class member variables
this->size = 0;
InitializeCriticalSection(&critical);
}
File :: ~File()
{
DeleteCriticalSection(&critical);
iofile.close(); // **2)**
}
void File :: read()
{
// lock "mutex" and move the file pointer to beginning of file
EnterCriticalSection(&critical);
iofile.seekg(0, std::ios::beg);
// read it line by line
while (iofile)
{
std::string str;
getline(iofile, str);
std::cout << str << std::endl;
}
// unlock mutex
LeaveCriticalSection(&critical);
// move the file pointer back to the beginning of file
iofile.seekg(0, std::ios::beg); // **3)**
}
void File :: write(std::string str)
{
// lock "mutex"
EnterCriticalSection(&critical);
// move the file pointer to the end of file
// and write the string str into the end of the file
iofile.seekg(0, std::ios::end); // **4)**
iofile << str;
// unlock mutex
LeaveCriticalSection(&critical);
}
So my questions are (see the numbers regarding the questions within the code):
1) Do I need to specify anything else for the read and write operations I wish to perform ?
2) Is there anything else I need to add in the destructor?
3) What do I need to add here in order that EVERY read operation will occur necessarily from the beginning of the file ?
4) What do I need to modify/add here in order that each write will take place at the end of the file (meaning I wish to append the str string into the end of the file)?
5) Any further comments would be great: another way to implement this, pros & cons of my implementation, points to watch out for, etc.
Thanks a lot in advance,
Guy.
You must handle exceptions (and errors in general).
No; your destructor even has something superfluous: it closes the underlying fstream, which the stream takes care of itself in its own destructor.
If you always want to start reading at the beginning of the file, just open it for reading and you automatically are at the beginning. Otherwise, you could seek to the beginning and start reading from there.
You already opened the file with ios::app, which causes every write operation to append to the end (including ignoring seek operations that set the write position, IIRC).
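A small sketch illustrating that append behavior (the file name is arbitrary):

#include <fstream>

int main()
{
    std::ofstream f("log.txt", std::ios::out | std::ios::app);
    f << "first\n";
    f.seekp(0, std::ios::beg); // try to move the put pointer to the start
    f << "second\n";           // still appended: app mode writes at the end
}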
There is a bunch that isn't going to work like you want it to...
Most importantly, you need to define what you need the class to behave like, i.e. what the public interface is. This includes guarantees about the content of the file on disk. For example, after creating an object without passing a filename, what should it write to? Should that really be a file whose name is a single space? Further, what if a thread wants to write two buffers that each contain 100 chars? The only chance of not getting interrupted is to first create a buffer combining the data; otherwise it could get interrupted by a different thread. It gets even more complicated concerning the guarantees that your class should fulfill while reading.
Why are you not using references when passing strings? Your tutorial should mention them.
You are invoking the code to enter and leave the critical section at the beginning and end of a function scope. This operation should be bound to the ctor and dtor of a class; check out the RAII idiom in C++ (see the sketch after this list).
When you are using a mutex, you should document what it is supposed to protect. In this case, I guess it's the iofile, right? You are accessing it outside the mutex-protected boundaries though...
What is getSize() supposed to do? What would a negative size indicate? In case you want to signal errors with that, that's what exceptions are for! Also, after opening an existing, possibly non-empty file, the size is zero, which sounds weird to me.
read() doesn't return any data, what is it supposed to do?
Using a while-loop to read something always has to have the form "while try-to-read { use data }"; yours has the form "while success { try-to-read; use data; }", i.e. it will use the data even after failing to read it.
Streams have state, and that state is sticky: once the failbit is set, it remains set until you explicitly call clear().
BTW: this looks like logging code or a file-backed message queue. Both can be created in a thread-friendly way, but in order to make suggestions, you would have to tell us what you are actually trying to do. This is also what you should put into a comment section on top of your class, so that any reader can understand the intention (and, more importantly now, so that YOU make up your mind about what it's supposed to be).
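Tying the RAII and read-loop points together, here is a sketch of how read() might look, assuming the File class from the question; CriticalSectionLock is a hypothetical helper, not a Windows API:

// Hypothetical RAII guard for a CRITICAL_SECTION.
class CriticalSectionLock {
    CRITICAL_SECTION &cs_;
public:
    explicit CriticalSectionLock(CRITICAL_SECTION &cs) : cs_(cs) {
        EnterCriticalSection(&cs_);
    }
    ~CriticalSectionLock() { LeaveCriticalSection(&cs_); }
};

void File::read()
{
    CriticalSectionLock lock(critical);  // released even on early return
    iofile.clear();                      // failbit/eofbit are sticky
    iofile.seekg(0, std::ios::beg);
    std::string str;
    while (std::getline(iofile, str)) {  // "while try-to-read { use data }"
        std::cout << str << std::endl;
    }
    iofile.clear();                      // clear eofbit before later writes
}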
The author presented this code under the title "A bus error on my platform":
#include <fstream>
#include <iostream>

int main()
{
    std::ofstream log("oops.log");
    std::cout.rdbuf(log.rdbuf());
    std::cout << "Oops!\n";
    return 0;
}
The string "Oops!\n" is printed to the file "oops.log". The code doesn't restore cout's streambuf, but VS2010 didn't report a runtime error.
Since log and std::cout share a buffer, that buffer will probably be freed twice (once when log goes out of scope, then once more when the program terminates).
This results in undefined behavior, so it's hard to tell the exact reason why it triggers a bus error on his machine but silently fails on yours.
Since the other answers don't mention what to do about this I'll provide that here. You need to save and restore the buffer that cout is supposed to be managing. For example:
#include <fstream>
#include <iostream>

// RAII method of restoring a buffer
struct buffer_restorer {
    std::ios &m_s;
    std::streambuf *m_buf;

    buffer_restorer(std::ios &s, std::streambuf *buf) : m_s(s), m_buf(buf) {}
    ~buffer_restorer() { m_s.rdbuf(m_buf); }
};

int main()
{
    std::ofstream log("oops.log");
    buffer_restorer r(std::cout, std::cout.rdbuf(log.rdbuf()));
    std::cout << "Oops!\n";
    return 0;
}
Now cout's buffer is restored before cout is destroyed at the end of the program, so when cout destroys its buffer the correct thing happens.
For simply redirecting standard IO, the environment generally already has the ability to do that for you (e.g., IO redirection in the shell). Rather than the above code, I'd probably simply run the program as:
yourprogram > oops.log
Also, one thing to remember is that std::cout is a global variable, with all the same downsides as other global variables. Instead of modifying it, or even using it, you may prefer the usual techniques for avoiding global variables altogether. For example, you might pass a std::ostream &log_output parameter around and use that instead of having code use cout directly.
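A minimal sketch of that approach (the report function name is illustrative):

#include <fstream>
#include <iostream>

// Code writes to whatever stream it is handed instead of touching
// std::cout directly; callers or tests can swap in a file easily.
void report(std::ostream &log_output)
{
    log_output << "Oops!\n";
}

int main()
{
    std::ofstream log("oops.log");
    report(log);          // write to the file...
    report(std::cout);    // ...or to standard output, no rdbuf games needed
}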
Your program has Undefined Behavior.
The destructor of the global cout object will delete the stream buffer when going out of scope, and the same is true of log, which also owns that very same stream buffer. Thus, you are deleting the same object twice.
When a program has Undefined Behavior, anything could happen, from formatting your hard drive to terminating without any error.
On my platform, for instance, the program enters an infinite loop after returning from main().