ifstream keeps reading even though the file no longer exists - C++

I'm facing a problem using std::ifstream to read a file. I have a zip file called "nice_folder.zip" (71.6MB) and the following code that reproduces the issue:
#include <filesystem>
#include <fstream>
#include <iostream>
#include <memory>
#include <unistd.h>
int main() {
    size_t read = 0;
    size_t f_size = std::filesystem::file_size("nice_folder.zip");
    std::shared_ptr<char[]> buffer{new char[4096]};
    std::ifstream file{"nice_folder.zip"};
    while (read < f_size) {
        size_t to_read = (f_size - read) > 4096 ? 4096 : (f_size - read);
        file.read(buffer.get(), to_read);
        read += to_read; // advance the counter so the loop terminates
        sleep(2);
        std::cout << "read: " << std::to_string(to_read) << "\n";
    }
}
The problem is the following: after some read cycles I delete the file from the folder, but the program keeps reading it anyway. How is that possible? How can I catch an error if a user deletes the file while I'm reading it with ifstream? I guess that ifstream loads the content of the file into memory before it starts reading, but I'm not sure.

If you're doing this on e.g. Linux, the OS will not actually delete the file until every file handle to it has been closed. So it might seem like the file is deleted, but its data is still stored on disk and your open stream keeps reading it. See e.g. What happens internally when deleting an opened file in Linux.
So if you're trying to detect the deletion to prevent wrong reads, don't worry: as far as your open stream is concerned, the file won't actually be deleted.
If you close the stream and then try to open the file again, you'll get an error. But that also means you won't be able to read from it...
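If you do want to notice that the file has been unlinked while you stream through it, one option is to poll the directory entry between reads. A minimal sketch, assuming the same "nice_folder.zip" as in the question (the open stream still reads fine either way; this only reports that the name is gone):
#include <filesystem>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream file{"nice_folder.zip", std::ios::binary};
    char buffer[4096];

    while (file.read(buffer, sizeof(buffer)) || file.gcount() > 0) {
        // The read above keeps succeeding even after deletion; this check
        // only detects that the directory entry has disappeared.
        if (!std::filesystem::exists("nice_folder.zip")) {
            std::cerr << "file was deleted while reading\n";
            break; // or keep going -- the data is still readable
        }
    }
}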

Related

fwrite() writes garbage at the end

I'm trying to write a function that executes an SQL file with Postgres. Postgres raises an exception but without specifying the error. So I tried to write back out what it read, and I discovered that the file has some garbage at the end:
stat("treebase.sql",&buf);
dbschema= new (std::nothrow) char[buf.st_size+1];
if(!dbschema)
{
wxMessageBox(_("Not Enough memory"));
return;
}
if( !(fl=fopen("treebase.sql","r")))
{
wxMessageBox(_("Can not open treebase.sql"));
delete []dbschema;
return;
};
fo=fopen("tbout.sql","w");
fread(dbschema,sizeof(char),buf.st_size,fl);
fclose(fl);
dbschema[buf.st_size]='\0';
fwrite(dbschema,sizeof(char),buf.st_size+1,fo);
fflush(fo);
fclose(fo);
and the result is
![screen shot][1]
The input file is 150473 bytes long, the output is 156010. I really cannot understand where the extra 5537 bytes come from.
Where is the bug?
[1]: https://i.stack.imgur.com/IXesz.png
You probably can't read buf.st_size bytes of data, because the fopen mode "r" opens the file in text mode. In text mode, fread and fwrite may perform conversions on what you read or write to match the environment's rules for text files, such as end-of-line translation. Use the "rb" and "wb" modes with fopen to read and write binary files as-is.
Also, I would rather use fseek and ftell to get the size of the file instead of stat.
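A minimal sketch of that suggestion, using the same file names as in the question: binary mode throughout, the size taken via fseek/ftell, and writing exactly the number of bytes that were read:
#include <cstdio>
#include <cstdlib>

int main() {
    std::FILE* fl = std::fopen("treebase.sql", "rb"); // binary read
    if (!fl) return 1;

    std::fseek(fl, 0, SEEK_END);
    long size = std::ftell(fl);   // file size in bytes
    std::fseek(fl, 0, SEEK_SET);

    char* dbschema = static_cast<char*>(std::malloc(size + 1));
    if (!dbschema) { std::fclose(fl); return 1; }

    size_t got = std::fread(dbschema, 1, size, fl);
    dbschema[got] = '\0'; // terminator for the C API, not written to the output
    std::fclose(fl);

    std::FILE* fo = std::fopen("tbout.sql", "wb"); // binary write
    if (fo) {
        std::fwrite(dbschema, 1, got, fo); // write only the bytes actually read
        std::fclose(fo);
    }
    std::free(dbschema);
}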
Here's an example of how you could read the content of the file into memory and then write an exact copy to another file. I added error checking too, to make it clear if anything goes wrong. There's also no need to use stat etc.; plain standard C++ will do.
#include <cerrno>
#include <cstring>
#include <fstream>
#include <iostream>
#include <iterator>
#include <stdexcept>
#include <string>

std::string get_file_as_string(const std::string& filename) {
    std::ifstream fl(filename, std::ios::binary); // binary mode
    if (!fl) throw std::runtime_error(std::strerror(errno));
    // return the content of the whole file as a std::string
    return {std::istreambuf_iterator<char>(fl),
            std::istreambuf_iterator<char>{}};
}

bool write_string_to_file(const std::string& str, const std::string& filename) {
    std::ofstream fo(filename, std::ios::binary);
    if (!fo) return false;
    // return true or false depending on if it succeeded writing the file:
    return static_cast<bool>(fo << str);
}

int main() {
    auto dbschema = get_file_as_string("treebase.sql");

    // use dbschema.c_str() if you need a `const char*`:
    postgres_c_function(dbschema.c_str());
    // use dbschema.data() if you need a `char*`:
    postgres_c_function(dbschema.data());

    if (write_string_to_file(dbschema, "tbout.sql")) {
        std::cout << "success\n";
    } else {
        std::cout << "failure: " << std::strerror(errno) << '\n';
    }
}

text file is not being opened for reading purposes

Before you flag this as a duplicate and refer me to posts on how to correctly open a text file and print it to the console: I have looked at numerous Stack Overflow posts about this topic and cannot find a solution for my case.
I am trying to open a text file I created (currently in the same project folder as my main.cpp), read the text, and print it to the console. I get through the "if file is open" check fine, but the while loop does not execute even once. I will post the function below. Please suggest any changes or ideas on how to correctly locate, open, and read the text file. (I would prefer not to hard-code the absolute path of the text file, e.g. C://example/textFile.txt/; though I have not tried that yet, I'd prefer to avoid it.)
Also, I am using the CLion IDE from JetBrains, C++17, and Ninja to build.
Function that prints the text file to the console:
#include <iostream>
#include <iomanip>
#include <fstream>
#include <string>
#include <stdlib.h>
#include "printTest.h"

void printTest::print() {
    std::string line;                                  // holds one line of the text file
    std::ifstream textFile("test.txt", std::ios::in);  // open the file
    if (textFile.is_open())                            // check whether the file was opened
    {
        while (std::getline(textFile, line))
        {
            std::cout << line << "\n";
        }
    } else { // this is always printing, i.e. the file is not being opened for reading
        std::cout << "Unable to open the text file..." << std::endl; // prints if the file was not opened
    }
    textFile.close();
}
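If it helps with diagnosing this, here is a small sketch (not from the question) that prints where the relative path "test.txt" is actually resolved from; with CLion and Ninja the working directory is often the build output folder rather than the source folder:
#include <filesystem>
#include <fstream>
#include <iostream>

int main() {
    // Show where relative paths are resolved from at runtime.
    std::cout << "working directory: "
              << std::filesystem::current_path() << "\n";

    std::ifstream textFile("test.txt");
    if (!textFile) {
        std::cout << "test.txt was not found relative to the directory above\n";
    }
}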

'Access violation' using fstream

#include <iostream>
#include <fstream>
#include <conio.h>
using namespace std;

int main()
{
    char r;
    fstream file1("text.txt", ios::in | ios::binary);
    fstream file2("text.txt", ios::out | ios::binary);
    r = 'r';
    for (int i = 0; i < 100; i++)
        file2.write((char*)r, sizeof(char));
    while (!file1.eof())
    {
        file1.read((char*)r, sizeof(char));
        cout << r << "\n";
    }
    file1.close();
    file2.close();
    getch();
}
When I run this in VC++ 2010, I get the following error at run time:
Unhandled exception at 0x55361f68 (msvcp100d.dll) in file io.exe: 0xC0000005: Access violation reading location 0x00000072.
What could be causing this error? It happens while executing the line:
file2.write((char*)r, sizeof(char));
Did I make a mistake? If yes, please point it out for me (thanks in advance).
Update: I am still not getting the expected output (after correcting (char*)r to (char*)&r). The output I get is just: r. Shouldn't I expect 100 characters to be displayed, starting from 'r'? If not, please tell me why; thanks in advance.
You need
file1.read((char*)&r, sizeof(char));
or
file1.read(&r, sizeof(char));
The same fix applies to the write: file2.write(&r, sizeof(char));. The cast (char*)r turns the value of r ('r', i.e. 0x72) into a pointer, which is exactly the "reading location 0x00000072" in your error message.
In addition to the other answer, there's another problem with your code.
Streams perform buffered I/O by default; when writing into file2, the contents that you've written probably haven't been output to the actual file yet. They are stored in a temporary buffer for efficiency. Writing to the actual file is deferred until an explicit flush(), until close() is called, or until the file stream goes out of scope and is destroyed.
The problem in your code is that directly after writing to the file stream, you perform input without ensuring that the output data was actually written to the file. If you assume the read succeeded anyway, you can end up working with stale or indeterminate data.
File streams that depend on each other should be synchronized. Namely, when a file stream is trying to read from the same file that you have written to, the output file stream must be flushed first. This can be facilitated by "tying" the streams together using tie():
file1.tie(&file2);
When file1 performs input, file2 will then be flushed, forcing the data in its buffer to be written to the file.
Another problem you have is that you don't check if the file streams were constructed correctly, or that you have successfully read from file1. You can use if() statements to do this:
std::fstream file1("text.txt", std::ios_base::in | std::ios_base::binary);
std::fstream file2("text.txt", std::ios_base::out | std::ios_base::binary);
char r('r');

if (file1 && file2)
{
    file1.tie(&file2);

    for (int i = 0; i < 100; ++i)
        file2.write(&r, sizeof(char));

    while (file1.read(&r, sizeof(char))) {
        std::cout << r << std::endl;
    }
}
You started reading from the file immediately after writing to it, without closing the writing file stream. Until you close (or flush) the writing stream, the writes are not committed to the file, so the reads will not behave as you expect.
Try the following code:
#include <iostream>
#include <fstream>
#include <conio.h>
using namespace std;

int main()
{
    const char* r = "r";
    fstream file2("text.txt", ios::out | ios::binary);
    for (int i = 0; i < 100; i++)
        file2.write(r, sizeof(char));
    file2.close(); // commit the writes before reading the file back

    fstream file1("text.txt", ios::in | ios::binary);
    char rr;
    while (file1.read(&rr, sizeof(char))) // loop on the read itself, not on eof()
    {
        cout << rr << "\n";
    }
    file1.close();
    getch();
}
You tried to cast a single char to char*, and you read into r without passing its address. That's why the problem occurs. Please look carefully at my code above; it will fix your issues.

Copy a certificate file to another location

I have a .crl file which I want to copy to another location. From all the posts I have seen so far, it can't be done without copying the contents. Is there any way I can transfer the file to another location in C++ without copying the contents myself?
I tried copying the contents using the usual fopen approach, but the data was not being written to the buffer. If there is no direct method, could anyone please tell me how to read the certificate file and write its contents to another file in a different location?
I have also tried the fstream approach:
std::ofstream dest("destination.crl", std::ios::trunc | std::ios::binary);
if (!dest.good())
{
    std::cerr << "error opening output file\n";
    //std::exit(2);
}

std::fstream src("sourcec.crl", std::ios::binary);
if (!src.good())
{
    std::cerr << "error opening input file\n";
    //std::exit(1);
}

dest << src.rdbuf();

if (!src.eof()) std::cerr << "reading from file failed\n";
if (!dest.good()) std::cerr << "writing to file failed\n";
But it displayed the errors:
error opening input file
reading from file failed
writing to file failed
Thank you in advance
I found this here.
This might help you. Since you do not want to read the file, I assume you do not want to perform the read and write operations yourself.
#include <algorithm>
#include <istream>
#include <iostream>
#include <fstream>
#include <iterator>
using namespace std;

int main()
{
    fstream f("source.crl", fstream::in | fstream::binary);
    f >> noskipws; // do not skip whitespace when extracting characters
    istream_iterator<unsigned char> begin(f);
    istream_iterator<unsigned char> end;

    fstream f2("destination.crl",
               fstream::out | fstream::trunc | fstream::binary);
    ostream_iterator<char> begin2(f2);

    copy(begin, end, begin2);
}
Based on your response, I am writing another answer.
For Windows, there is the function CopyFile. You can use this function to copy the file without reading the content yourself.
For Linux/Unix-based systems, I am unable to find a direct equivalent of CopyFile.
I believe that this question might help you.
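As a side note, if a C++17 compiler is available, std::filesystem::copy_file does this portably without you reading the contents yourself; a minimal sketch using the file names from the question:
#include <filesystem>
#include <iostream>
#include <system_error>

int main() {
    std::error_code ec;
    // Copy source.crl to destination.crl, overwriting any existing copy.
    std::filesystem::copy_file("source.crl", "destination.crl",
                               std::filesystem::copy_options::overwrite_existing,
                               ec);
    if (ec) {
        std::cerr << "copy failed: " << ec.message() << '\n';
        return 1;
    }
}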

When will ofstream::open fail?

I am trying out try, catch, and throw statements in C++ for file handling, and I have written dummy code to catch all errors. To check whether I have got these right, I need an error to actually occur. I can easily make infile.fail() return true by simply not creating a file with the required name in the directory. But how can I do the same for outfile.fail()? (outfile is an ofstream, whereas infile is an ifstream.) In which cases will outfile.fail() be true?
sample code [from comments on unapersson's answer, simplified to make issue clearer -zack]:
#include <fstream>
using std::ofstream;

int main()
{
    ofstream outfile;
    outfile.open("test.txt");
    if (outfile.fail()) {
        // do something......
    } else {
        // do something else.....
    }
    return 0;
}
The open(2) man page on Linux has about 30 conditions. Some interesting ones are:
If the file exists and you don't have permission to write to it.
If the file doesn't exist, and you don't have permission (on the directory) to create it.
If you don't have search permission on some parent directory.
If you pass in a bogus char* for the filename.
If, while opening a device file, you press CTRL-C.
If the kernel encountered too many symbolic links while resolving the name.
If you try to open a directory for writing.
If the pathname is too long.
If your process has too many files open already.
If the system has too many files open already.
If the pathname refers to a device file, and there is no such device in the system.
If the kernel has run out of memory.
If the filesystem is full.
If a component of the pathname is not a directory.
If the file is on a read-only filesystem.
If the file is an executable file which is currently being executed.
By default, and by design, C++ streams never throw exceptions on error. You should not try to write code that assumes they do, even though it is possible to get them to. Instead, in your application logic check every I/O operation for an error and deal with it, possibly throwing your own exception if that error cannot be dealt with at the specific place it occurs in your code.
The canonical way of testing streams and stream operations is not to test specific stream flags, unless you have to. Instead:
ifstream ifs("foo.txt");
if (ifs) {
    // ifs is good
}
else {
    // ifs is bad - deal with it
}
similarly for read operations:
int x;
while (cin >> x) {
    // do something with x
}

// at this point test the stream (if you must)
if (cin.eof()) {
    // cool - what we expected
}
else {
    // bad
}
To get ofstream::open to fail, you need to arrange for it to be impossible to create the named file. The easiest way to do this is to create a directory with the exact same name before running the program. Here's a nearly complete demo program; I leave arranging to reliably remove the test directory (if and only if you created it) as an exercise.
#include <iostream>
#include <fstream>
#include <sys/stat.h>
#include <cstring>
#include <cerrno>

using std::ofstream;
using std::strerror;
using std::cerr;

int main()
{
    ofstream outfile;

    // set up conditions so outfile.open will fail:
    if (mkdir("test.txt", 0700)) {
        cerr << "mkdir failed: " << strerror(errno) << '\n';
        return 2;
    }

    outfile.open("test.txt");

    if (outfile.fail()) {
        cerr << "open failure as expected: " << strerror(errno) << '\n';
        return 0;
    } else {
        cerr << "open success, not as expected\n";
        return 1;
    }
}
There is no good way to ensure that writing to an fstream fails. I would probably create a mock ostream that failed writes, if I needed to test that.
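A minimal sketch of that idea, assuming a custom streambuf is acceptable as the "mock": a streambuf whose overflow() always reports failure makes every write through the attached ostream set badbit, which is enough to exercise the error-handling paths.
#include <iostream>
#include <streambuf>

// A stream buffer that rejects every character written to it.
struct failing_streambuf : std::streambuf {
    int_type overflow(int_type) override {
        return traits_type::eof(); // report failure on every write
    }
};

int main() {
    failing_streambuf buf;
    std::ostream out(&buf);

    out << "this write will fail";
    if (out.fail()) {
        std::cout << "write failed, as intended for the test\n";
    }
}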