std::ofstream: Writing to a file using append and out flags - C++

Below is a simple class that attempts to write an integer to a file. The file is opened in append mode, so characters are added at the end of the file (in this mode, the file should be created if it doesn't exist).
#include <iostream>
#include <fstream>

class TestFileStream
{
private:
    std::ofstream* _myFileStream;
    bool isFileOpen;
public:
    TestFileStream() : isFileOpen(false)
    {
        _myFileStream = new std::ofstream("TestFile.txt", std::ios_base::out | std::ios_base::app);
        isFileOpen = _myFileStream->is_open();
        if (!isFileOpen)
        {
            std::cout << "Unable to open log file" << std::endl;
            std::cout << "Good State: " << _myFileStream->good() << std::endl;
            std::cout << "Eof State: " << _myFileStream->eof() << std::endl;
            std::cout << "Fail State: " << _myFileStream->fail() << std::endl;
            std::cout << "Bad State: " << _myFileStream->bad() << std::endl;
        }
        else
        {
            std::cout << "Opened log file" << std::endl;
        }
    }

    ~TestFileStream()
    {
        _myFileStream->close();
        delete _myFileStream;
        _myFileStream = nullptr;
    }

    void WriteFile(unsigned number)
    {
        if (isFileOpen)
        {
            (*_myFileStream) << "Number: " << number << std::endl;
        }
    }
};

int main()
{
    // Number of iterations can be multiple.
    // For testing purposes, only 1 loop iteration executes.
    for (unsigned iter = 1; iter != 2; ++iter)
    {
        TestFileStream fileWriteObj;
        fileWriteObj.WriteFile(100 + iter);
    }
    return 0;
}
When I execute the above code, I get the following log output:
Unable to open log file
Good State: 0
Eof State: 0
Fail State: 1
Bad State: 0
This seems like a trivial task, but I am not able to find out what's causing the failure. Note that this question is most likely related to the following question.

Just to summarize the comments: there is nothing wrong with the code you posted (apart from the rather unconventional heap-allocated ofstream ;) ).
Note, however, that opening a file may fail for a number of reasons (permissions, file in use, disk unavailable, the file does not exist, the file already exists, ...). That is why you must always test whether the open succeeded.
If you tried to run the above code in an online emulator, then chances are that file I/O is disabled, which would explain why the stream's failbit is set.
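As an aside, the heap allocation isn't needed. Here is a minimal sketch of the same class holding the stream by value and printing strerror(errno) as a hint about why the open failed (the standard library doesn't guarantee that errno is set on a failed open, so treat it as a hint only):
#include <cerrno>
#include <cstring>
#include <fstream>
#include <iostream>

class TestFileStream
{
    std::ofstream _myFileStream;   // held by value: no new/delete needed
public:
    TestFileStream()
        : _myFileStream("TestFile.txt", std::ios_base::out | std::ios_base::app)
    {
        if (!_myFileStream.is_open())
        {
            // errno is not guaranteed to be set by the stream library,
            // but on most platforms it gives a useful hint
            std::cout << "Unable to open log file: " << std::strerror(errno) << std::endl;
        }
    }

    void WriteFile(unsigned number)
    {
        if (_myFileStream.is_open())
            _myFileStream << "Number: " << number << std::endl;
    }
};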

Related

File closes automatically

I am encountering a problem when trying to write to a sysfs node.
In the code below I am trying to write to the trace_marker file. In the ftrace log, the first write is successful, but after that the writes fail: the file descriptor seemingly closes.
I do not want to open the file every time before writing, as writes are too frequent.
class Logger {
    int mFileFd;

    void logFromAnotherThread(std::string s) {
        std::unique_lock<std::mutex> ul(mLogMu);
        ...
        int count = write(mFileFd, s.c_str(), s.length());
        if (count > 0)
            std::cout << "Wrote n bytes: " << count << std::endl;
        else
            std::cout << "Errornum: " << strerror(errno) << std::endl;
        ...
    }

    Logger() {
        mFileFd = open(SYSFS_NODE_WRITE, O_WRONLY);
        ....
    }
}
The first write is successful. After that I get this output:
Errornum: Bad file descriptor
My expectation is that the file should be opened once, the file descriptor should remain open for the entire duration, and the file should be closed on exit.
Edit 1:
Thank you for the suggestions about the object getting destroyed, but I have ensured that the object is not being destroyed.
For debugging I removed the class/structure; logging is now done in plain C++ function calls, and the file descriptor is a global variable, initialized once in main.
It still does not work.
My confusion is whether this has something to do with the way write operations are performed on a sysfs node, or whether it can be because the number of writes is high (about 2-3 logs per 10 µs).
For now I am doing it as below, but this has the overhead of two added system calls per log.
#define TRACE_MARKER_FILE "/sys/kernel/debug/tracing/trace_marker"

void logdata(pid_t tid, std::string mystring) {
    if (useLogger) {
        std::stringstream ss;
        if (funcname.length() > 0)
            ss << LOGTAG << mystring;
        int tempfd = open(TRACE_MARKER_FILE, O_WRONLY);
        int count = write(tempfd, ss.str().c_str(), ss.str().length());
        if (count == 0) {
            std::cout << "Errornum: " << strerror(errno) << std::endl;
        }
        close(tempfd);
    }
}
I would suggest using std::ofstream, which allows for proper RAII; in other words, it will open the file in your constructor and automatically close it in your destructor:
class Logger {
    std::ofstream mFile;

    void logFromAnotherThread(std::string s) {
        std::unique_lock<std::mutex> ul(mLogMu);
        ...
        mFile << s;
        ...
    }

    // std::ofstream takes no O_WRONLY flag; it opens for writing by default
    Logger() : mFile(SYSFS_NODE_WRITE) {
        ....
    }
}
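For completeness, a self-contained version of that idea might look like the sketch below. The path is the one from the question's edit; the explicit flush is an assumption, added so each message is handed to the node promptly instead of sitting in the stream's buffer.
#include <fstream>
#include <iostream>
#include <mutex>
#include <string>

#define SYSFS_NODE_WRITE "/sys/kernel/debug/tracing/trace_marker"  // path taken from the question's edit

class Logger {
    std::ofstream mFile;
    std::mutex mLogMu;
public:
    Logger() : mFile(SYSFS_NODE_WRITE) {   // opened once; closed automatically in the destructor
        if (!mFile)
            std::cerr << "Failed to open " << SYSFS_NODE_WRITE << std::endl;
    }

    void logFromAnotherThread(const std::string& s) {
        std::lock_guard<std::mutex> lock(mLogMu);
        mFile << s << std::flush;          // flush so each message reaches the node promptly
    }
};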

Writing to and reading from a file at the same time

I have two processes. One writes to a file, and one has to read from it (at the same time). So there are two fstreams open for the file at any given time (although they may be in different processes).
I wrote a simple test function to crudely implement the sort of functionality I need:
#include <iostream>
#include <fstream>
#include <array>
#include <string>
#include <thread>
#include <chrono>
#include <cstring>
#include <boost/filesystem.hpp>

void test_file_access()
{
    try {
        std::string file_name = "/Users/xxxx/temp_test_folder/test_file.dat";
        std::ofstream out(file_name,
                          std::ios_base::out | std::ios_base::app | std::ios_base::binary);
        out.write("Hello\n", 7);
        std::this_thread::sleep_for(std::chrono::seconds(1));

        std::array<char, 4096> read_buf;
        std::ifstream in(file_name,
                         std::ios_base::in | std::ios_base::binary);
        if (in.fail()) {
            std::cout << "Error reading file" << std::endl;
            return;
        }
        in.exceptions(std::ifstream::failbit | std::ifstream::badbit);

        // Exception at the below line.
        in.read(read_buf.data(), read_buf.size());
        auto last_read_size = in.gcount();
        auto offset = in.tellg();
        std::cout << "Read [" << read_buf.data() << "] from file. read_size = " << last_read_size
                  << ", offset = " << offset << std::endl;

        out.write("World\n", 7);
        std::this_thread::sleep_for(std::chrono::seconds(1));

        // Do this so I can continue from the position I was at before?
        //in.clear();
        in.read(read_buf.data(), read_buf.size());
        last_read_size = in.gcount();
        offset = in.tellg();
        std::cout << "Read [" << read_buf.data() << "] from file. read_size = " << last_read_size
                  << ", offset = " << offset << std::endl;

        // Remove if you don't have boost.
        boost::filesystem::remove(file_name);
    }
    catch (std::ios_base::failure const& ex)
    {
        std::cout << "Error : " << ex.what() << std::endl;
        std::cout << "System error : " << strerror(errno) << std::endl;
    }
}

int main()
{
    test_file_access();
}
Run, and the output is like this:
Error : ios_base::clear: unspecified iostream_category error
System error : Operation timed out
So, two questions:
What is going wrong here? Why do I get an Operation timed out error?
Is this an incorrect attempt to do what I need to get done? If so, what are the problems here?
You write 7 bytes into this file, but then try to read 4096 bytes. So the in stream reads only those 7 bytes, hits end of file, and throws an exception as requested (you enabled exceptions on failbit). Note that if you catch this exception, the rest of the code will execute correctly; e.g. last_read_size will be 7 and you can access those 7 bytes in the buffer.
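If you want the reader to keep going after a short read (for example while the other process keeps appending), one option is to not enable exceptions for this read and instead rely on gcount() and clear(). A minimal sketch of that idea, under the assumption that short reads are expected:
#include <array>
#include <fstream>
#include <iostream>

// Read whatever is currently available from an already-open stream,
// tolerating short reads. Assumes exceptions are NOT enabled on failbit.
void read_available(std::ifstream& in)
{
    std::array<char, 4096> buf;
    in.read(buf.data(), buf.size());   // may stop short at end of file
    std::streamsize n = in.gcount();   // how many bytes actually arrived
    if (n > 0)
        std::cout.write(buf.data(), n);
    in.clear();                        // drop eofbit/failbit so the next read can pick up newly appended data
}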

Reading and writing to the same file fstream

I would like to update an existing JSON file.
This is an example JSON file:
{
    "Foo": 51.32,
    "Number": 100,
    "Test": "Test1"
}
Logs from program:
Operation successfully performed
100
"Test1"
51.32
46.32
Done
Looks like everything works as expected...
If I change the fstream to an ifstream for reading and later an ofstream for writing, it works...
I tried using the debugger, and as far as I can see I have wrong data in the basic_ostream object... but I don't know why; I am using the data from the string with the corrected (updated) data.
Any idea what is wrong :-) ?
You have a few problems here.
First, the statement json json_data(fs); reads to the end of the file, setting the EOF flag. The stream will stop working until that flag is cleared.
Second, the file pointer is then at the end of the file. If you want to overwrite the file, you need to seek back to the beginning:
if (fs.is_open())
{
    json json_data(fs); // reads to end of file
    fs.clear();         // clear the eof flag
    fs.seekg(0);        // move back to the beginning
Unfortunately that still doesn't fix everything, because if the data you write back is smaller than what you read in, some of the old data will be left tagged onto the end of the new data:
std::cout << "Operation successfully performed\n";
std::cout << json_data.at("Number") << std::endl;
std::cout << json_data.at("Test") << std::endl;
std::cout << json_data.at("Foo") << std::endl;
json_data.at("Foo") = 4.32; // what if new data is smaller?
JSON file:
{
    "Foo": 4.32,     // this number is smaller than before
    "Number": 100,
    "Test": "Test1"
}} // whoops, trailing character from the previous data!!
In this situation I would simply open one file for reading and then another for writing; it's much less error prone and expresses the intention to overwrite everything.
Something like:
#include "json.hpp"
#include <iostream>
#include <fstream>
#include <string>
using json = nlohmann::json;
void readAndWriteDataToFile(std::string fileName) {
json json_data;
// restrict scope of file object (auto-closing raii)
if(auto fs = std::ifstream(fileName))
{
json_data = json::parse(fs);
std::cout << "Operation successfully performed\n";
std::cout << json_data.at("Number") << std::endl;
std::cout << json_data.at("Test") << std::endl;
std::cout << json_data.at("Foo") << std::endl;
}
else
{
throw std::runtime_error(std::strerror(errno));
}
json_data.at("Foo") = 4.32;
std::cout << json_data.at("Foo") << std::endl;
std::string json_content = json_data.dump(3);
if(auto fs = std::ofstream(fileName))
{
fs.write(json_content.data(), json_content.size());
std::cout << "Done" << std::endl;
}
else
{
throw std::runtime_error(std::strerror(errno));
}
}
int main()
{
try
{
std::string fileName = "C:/new/json1.json";
readAndWriteDataToFile(fileName);
}
catch(std::exception const& e)
{
std::cerr << e.what() << '\n';
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}

std::ifstream setting fail() even when no error

Using GCC 4.7.3 on Cygwin 1.7.24. Compiler options include: -std=gnu++11 -Wall -Wextra
I am working on a command line application and I needed to be able to load and save a set of strings, so I wrote a quick wrapper class around std::set to add load and save methods.
// KeySet.h
#ifndef KEYSET_H
#define KEYSET_H

#include <cstdlib>
#include <sys/stat.h>
#include <cerrno>
#include <cstring>
#include <string>
#include <set>
#include <iostream>
#include <fstream>

inline bool file_exists (const std::string& filename)
{
    /*
      Utility routine to check the existence of a file. Returns true or false,
      prints an error and exits with status 2 on an error.
    */
    struct stat buffer;
    int error = stat(filename.c_str(), &buffer);
    if (error == 0) return true;
    if (errno == ENOENT) return false;
    std::cerr << "Error while checking for '" << filename << "': " << strerror(errno) << std::endl;
    exit (2);
}

class KeySet
{
private:
    std::string filename;
    std::set<std::string> keys;
public:
    KeySet() {}
    KeySet(const std::string Pfilename) : filename(Pfilename) {}
    void set_filename (const std::string Pfilename) {filename = Pfilename;}
    std::string get_filename () {return filename;}
    auto size () -> decltype(keys.size()) {return keys.size();}
    auto cbegin() -> decltype(keys.cbegin()) {return keys.cbegin();}
    auto cend() -> decltype(keys.cend()) {return keys.cend();}
    auto insert(const std::string key) -> decltype(keys.insert(key)) {return keys.insert(key);}
    void load ();
    void save ();
};

void KeySet::load ()
{
    if (file_exists(filename)) {
        errno = 0;
        std::ifstream in (filename, std::ios_base::in);
        if (in.fail()) {
            std::cerr << "Error opening '" << filename << "' for reading: " << strerror(errno) << std::endl;
            exit (2);
        }
        std::string token;
        if (token.capacity() < 32) token.reserve(32);
        while (in >> token) keys.insert(token);
        if (!in.eof()) {
            std::cerr << "Error reading '" << filename << "': " << strerror(errno) << std::endl;
            exit (2);
        }
        in.clear(); // need to clear flags before calling close
        in.close();
        if (in.fail()) {
            std::cerr << "Error closing '" << filename << "': " << strerror(errno) << std::endl;
            exit (2);
        }
    }
}

void KeySet::save ()
{
    errno = 0;
    std::ofstream out (filename, std::ios_base::out);
    if (out.fail()) {
        std::cerr << "Error opening '" << filename << "' for writing: " << strerror(errno) << std::endl;
        exit (2);
    }
    for (auto key = keys.cbegin(), end = keys.cend(); key != end; ++key) {
        out << *key << std::endl;
    }
    out.close();
    if (out.fail()) {
        std::cerr << "Error writing '" << filename << "': " << strerror(errno) << std::endl;
        exit (2);
    }
}

#endif // KEYSET_H
Here's a quick program to test the load method.
// ks_test.cpp
#include "KeySet.h"

int main()
{
    KeySet test;
    std::string filename = "foo.keys.txt";
    test.set_filename(filename);
    test.load();
    for (auto key = test.cbegin(), end = test.cend(); key != end; ++key) {
        std::cout << *key << std::endl;
    }
}
The data file just has "one two three" in it.
When I go to run the test program, I get the following error from my test program:
$ ./ks_test
Error closing 'foo.keys.txt': No error
Both cppreference.com and cplusplus.com say that the close method should set the fail bit on error. The save method works fine, and the load method works correctly if I comment out the error check after the close. Should this really work or have I misunderstood how close is supposed to work? Thanks in advance.
Edited to clarify, fix typos, and adjust code per Joachim Pileborg's and Konrad Rudolph's comments.
Edited to add solution to the code.
You have two errors here. The first is about how you do your reading, more specifically the reading loop: the eof flag will not be set until after you have tried to read and the read has failed. Instead you should do it like this:
while (in >> token) { ... }
Otherwise you will loop one time too many and try to read beyond the end of the file.
The second problem is the one you noticed, and it follows from the first: since you try to read beyond the end of the file, the stream sets failbit, causing in.fail() to return true even though there is no real error.
As it turns out, the close method for ifstream (and, I assume, all other IO objects) does NOT clear the error flags before closing the file. This means you need to add an explicit clear() call before closing the stream after reaching end of file, if you are checking for errors during the close. In my case, I added in.clear(); just before the in.close(); call and it now works as I expect.
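Putting the two fixes together outside the class, a minimal sketch of the read loop followed by the clear-before-close check might look like this (the file name is the one from the test program above):
#include <fstream>
#include <iostream>
#include <set>
#include <string>

int main()
{
    std::ifstream in("foo.keys.txt");
    std::set<std::string> keys;
    std::string token;

    while (in >> token)      // the loop condition performs the read and stops cleanly at EOF
        keys.insert(token);

    if (!in.eof())           // extraction stopped for some reason other than end of file
        std::cerr << "Read error\n";

    in.clear();              // close() does not reset the flags set when we hit EOF
    in.close();
    if (in.fail())
        std::cerr << "Close error\n";

    for (const auto& key : keys)
        std::cout << key << '\n';
}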

setxkbmap returns 65280 when executed from system call

I am running
std::string cmdStr = "setxkbmap us";
int res = system( cmdStr.c_str() );
and the result is
res: 65280
What can be the problem?
That value indicates that the child process exited normally with an exit status of 255: the status returned by system() encodes the exit status in its high byte, and 255 << 8 == 65280.
This could happen if:
/bin/sh couldn't find setxkbmap. (note: I might be wrong on this one. On my PC, /bin/sh returns 127 in that case.)
setxkbmap couldn't open the X server at $DISPLAY, including if DISPLAY is unset
I'm sure that there are many other possibilities. Check the command's output (stdout/stderr) for error messages.
When interpreting the return value from system on Linux, do this:
#include <sys/wait.h>

int res = system(foo);
if (WIFEXITED(res)) {
    std::cout << "Normal exit: " << WEXITSTATUS(res) << "\n";
} else {
    if (WIFSIGNALED(res)) {
        std::cout << "Killed by signal #" << WTERMSIG(res);
        if (WCOREDUMP(res)) {
            std::cout << " Core dumped";
        }
        std::cout << "\n";
    } else {
        std::cout << "Unknown failure\n";
    }
}
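Since an unset DISPLAY is one of the likely causes listed above, it may also be worth checking it before calling system(). A small sketch (the exact failure mode still depends on your setup):
#include <cstdlib>
#include <iostream>

int main()
{
    // An unset DISPLAY is one common reason for setxkbmap exiting with status 255.
    const char* display = std::getenv("DISPLAY");
    if (display == nullptr || *display == '\0') {
        std::cerr << "DISPLAY is not set; setxkbmap will likely fail\n";
        return 1;
    }

    int res = std::system("setxkbmap us");
    std::cout << "system() returned " << res << "\n";
    return 0;
}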