Saving other files in my own data store with fstream - C++

I am currently developing a small file store system, which should store
files like .pngs in it.
So I read the bytes from the .png into a char vector successfully; the size of the vector is the same as the size of the picture, so it should be OK.
Then I wanted to save those bytes in another .png.
The file is actually created successfully, but it is completely empty.
Here is the most important code, I guess:
void storedFile::saveData(char Path[]) {
    std::fstream file;
    file.open(Path, std::ios::trunc | std::ios::out | std::ios::binary);
    if (!file.is_open())
        std::cout << "Couldn't open saved File (In Func saveData())" << std::endl;
    file.write((char*)&Data, sizeof(char) * Data.size());
    file.close();
}
I think that I did it right, but it's not working.
Again, the bytes of the .png are stored in Data.
I checked after every open and read whether it succeeded, and everything worked fine (no error codes appeared).

This part looks strange:
file.write((char*)&Data,sizeof(char) * Data.size());
^^^^^^^^^^^^
Data.size() is a hint that Data is a std::vector, so &Data is actually wrong; it should be (char*)Data.data().
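A corrected sketch of the function (assuming, as the question suggests, that Data is a std::vector<char> member of storedFile) would be:

#include <fstream>
#include <iostream>

void storedFile::saveData(char Path[]) {
    std::ofstream file(Path, std::ios::trunc | std::ios::out | std::ios::binary);
    if (!file.is_open()) {
        std::cout << "Couldn't open saved File (In Func saveData())" << std::endl;
        return;  // don't try to write through a stream that failed to open
    }
    // Data.data() points at the vector's contiguous bytes;
    // &Data would point at the vector object itself, not its contents.
    file.write(Data.data(), static_cast<std::streamsize>(Data.size()));
}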


boost::binary_iarchive instead of boost::text_iarchive from SQLite3 Blob

I'm not an expert on streams and buffers, though I have learned an immense amount over the last month since I started tackling this problem.
This concerns boost::serialization; if I can get the binary archiving working, I'll save 50% of the storage space.
I've searched all over StackOverflow for the answer, and I've pieced together the following code that works, but only for text_iarchive. If I try to move to binary_iarchive, I get a segmentation fault with a message of "boost serialize allocate(size_t n) 'n' exceeds maximum supported size", or any number of other errors that make it obvious there is a disconnect between the input stream/buffer and what binary_iarchive is expecting.
Like I said earlier, this works perfectly with text_iarchive. I can text_oarchive to an SQLite3 Blob, verify it in the database, retrieve it, text_iarchive it back in to the complex object, and it works perfectly.
Is there something wrong with the way I set up the input stream and buffer?
To not confuse everyone, I am NOT posting the structure of the object I am serializing and deserializing. There are many vector<double>, an Eigen Matrix, and a couple of basic objects. They work perfectly and are not part of the problem! (And yes, I delete the database records between tests to guard against reading a text_oarchive into a binary_iarchive.)
Here is the output archive section. This appears to work perfectly for text_oarchive OR binary_oarchive. The Blob shows up in the database and appears to be of the proper binary structure.
// BinaryData is a Typedef for std::vector<char>
BinaryData serializedDataStream;
bio::stream<bio::back_insert_device<BinaryData>> outbuf {serializedDataStream};
// I change the text_oarchive to binary_oarchive and uncomment the std::ios::binary parameter.
// when I'm attempting to move from text to binary
boost::archive::text_oarchive outStream(outbuf); //, std::ios::binary);
outStream << ssInputDataAndBestModel_->theModel_;
outbuf.flush();
// have to convert to unsigned char since that is the way sqlite3 expects to see
// a Blob object type
std::vector<unsigned char> buffer(serializedDataStream.begin(),serializedDataStream.end());
I then pass "buffer" to the SQLite3 processing object to store it in the Blob.
Here is the input archive section. The Blobs look identical when storing and then retrieving from the DB, whether it's text or binary. (But a text archive doesn't look like a binary one, obviously.)
// this line is to get the blob out of SQLite3
currentModelDBRecPtr = cpp17::any_cast<dbo::ptr<Model>>(modelListModel);
if (!currentModelDBRecPtr->theModel.empty()) {
    // have to convert to char since that is the way boost::serialize expects to see
    // an archived object type (blob is vector of unsigned char)
    std::vector<char> blobBuffer(currentModelDBRecPtr->theModel.begin(), currentModelDBRecPtr->theModel.end());
    boost::iostreams::stream<boost::iostreams::array_source> membuf(blobBuffer.data(), blobBuffer.size());
    std::istream &input_stream = membuf;
    // Note: I change the following to binary_iarchive and uncomment the
    // std::ios::binary flag to try to move from text_iarchive to binary_iarchive
    boost::archive::text_iarchive input_archive(input_stream); //, std::ios::binary);
    TheModel inputArchiveModel;
    // it crashes on the next line, but it DOES successfully recreate half
    // of the object before it randomly crashes.
    input_archive >> inputArchiveModel;
}

C++ ofstream output to image file writes different data on Windows

I'm doing a simple thing: writing the data of an image file, which is stored as a string, back out into an image file.
std::ofstream f("image.jpeg");
f << image_data; // image_data was created using Python, copied over as hex, and turned back into ASCII
And yet, the unexpected happens: the written image comes out visibly corrupted compared to the original.
I cannot understand why this is happening.
When I use Python 2.7 to get the data from the original picture and write it to a new file, it works fine.
When I compile and run my program on Ubuntu, the picture comes out fine.
When I write a large text file (larger than the image) into a .txt, the file comes out fine.
It is only JPEGs on Windows that fail. The original image I tried was an image from a PGP key packet, which came out with half of the person's head clear and the other half messed up.
The compiled program doesn't mess up all of the data, since, as I said above, some of the original picture is shown. Also, the images are the same size, so the JPEG format was preserved at least.
What is happening? I am using MinGW 4.7.2 in Code::Blocks on Windows 7. Is Windows just being crazy?
You must open the file in binary mode:
std::ofstream f("image.jpeg", std::ios::out | std::ios::binary);
// ^^^^^^^^^^^^^^^^
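A minimal sketch of the whole write, assuming image_data is a std::string holding the raw bytes: the binary flag stops Windows from translating 0x0A bytes into CR/LF pairs, and write() copies the buffer verbatim with no formatting.

#include <fstream>
#include <string>

void writeImage(const std::string& image_data) {
    std::ofstream f("image.jpeg", std::ios::out | std::ios::binary);
    // write() emits the buffer byte-for-byte; operator<< would also work here,
    // but write() makes the intent (raw bytes, not text) explicit
    f.write(image_data.data(), static_cast<std::streamsize>(image_data.size()));
}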

How do I use QuaZip to extract multiple files?

I have the code below to move through a list of the folders and files in a zip archive, creating them as it goes (and also creating paths for files if they don't exist yet).
The application crashes when I use readData(char*, qint64) to extract an internal file's data and stream it into a QFile. I don't think this is the right function to use, but it's all I've seen (in a very loose example on this site), and I also had to change QuaZipFile.h to make the function public so I could use it (another hint that I shouldn't be using it).
It doesn't crash on the first file, which has no contents, but it does after that. Here is the relevant code (ask if you need to see more):
QFile newFile(fNames);
newFile.open(QIODevice::WriteOnly);
QTextStream outToFile(&newFile);
char * data;
int len = file.readData(data, 100000000);
if(len > 0) {
outToFile << data;
}
newFile.close();
It doesn't get past the int len line. What should I be using here?
Note that the variable file is defined earlier, pretty much like this:
QuaZip zip("zip.zip");
QuaZipFile file(&zip);
...
zip.goToFirstFile();
...
zip.goToNextFile();
And the int passed to readData is a random number for the max data size.
The reason for the crash is that you have not allocated any memory for your buffer, named data.
Solved.
I tried using different reads (readData, read, readLine) and found that this line works with no need for a data buffer:
outToFile << file.readAll();
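A sketch of a full extraction loop built on readAll(), assuming file is the QuaZipFile bound to the archive as in the question; writing the returned QByteArray straight to the QFile avoids the unallocated buffer entirely and keeps binary entries from going through QTextStream (the directory-creation logic from the question is omitted here):

#include <QFile>
#include <quazip/quazip.h>      // include paths may differ depending on how QuaZip is installed
#include <quazip/quazipfile.h>

void extractAll()
{
    QuaZip zip("zip.zip");
    zip.open(QuaZip::mdUnzip);
    QuaZipFile file(&zip);

    for (bool more = zip.goToFirstFile(); more; more = zip.goToNextFile()) {
        file.open(QIODevice::ReadOnly);              // open the current archive entry
        QFile newFile(file.getActualFileName());     // entry name used as the target path
        newFile.open(QIODevice::WriteOnly);
        newFile.write(file.readAll());               // readAll() returns a QByteArray
        newFile.close();
        file.close();
    }
    zip.close();
}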

Writing at the beginning of a file, keeping file contents

I have a text file that I want to write to, and I always want to keep the existing file content. I want the writes to follow a "FIFO" pattern (the last write always ends up on the top line of the file).
I tried using fout.open("filename") with the ate mode to keep the file content, and after that seekg(0) to move the cursor back to the beginning of the file. It didn't work.
The only way I found to do this seems very time-expensive: copy all the file content to a temporary file, write what I want to write, and then append the content of the temp file to the end of the target file.
Is there an easier way to do this?
Jorge, no matter what, you will have to rewrite the entire file. You cannot simply keep the file where it is and prepend data, especially since it's a simple text file (maybe if there were some form of metadata you could...).
Anyway, your best bet is to flush the old contents into a temporary location, write what you need, and append the old contents.
I'm not sure what you're asking for. If you want to add a line to the beginning of the file, the only way is to open a new, temporary file, write the line, copy the old file into it after the new line, then delete the old file and rename the temporary.
If the original line has a fixed length, and you want to replace it, then all you have to do is open the file with both ios_base::in and ios_base::out.
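A sketch of the temp-file approach described above; the prependLine helper name and the "path + .tmp" temporary location are chosen here purely for illustration. The new line is written first, the old contents are appended after it, and the temporary then replaces the original.

#include <cstdio>
#include <fstream>
#include <string>

void prependLine(const std::string& path, const std::string& newLine) {
    std::ifstream in(path);
    std::ofstream out(path + ".tmp");
    out << newLine << '\n';   // the new line goes first
    out << in.rdbuf();        // then the old contents follow
    in.close();
    out.close();
    std::remove(path.c_str());                            // delete the old file
    std::rename((path + ".tmp").c_str(), path.c_str());   // rename the temporary
}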
First, you should realize that files are historically streams, i.e. they can only be read and written in one direction. This comes from the times when files were stored on tapes, which (at that time) could only move in one direction.
However, if you only want to prepend, you can just store your file backwards. Sounds silly? Maybe, but it would work with just a little overhead.
Apart from that, with current OSes you will need to make a copy to prepend. While files are not streams anymore, and can be accessed randomly on a hard disk, they are still made to grow in one direction. Of course you could make a filesystem where files grow in both directions, but I have not heard of one.
With <fstream> you may use the filebuf class.
#include <fstream>
#include <iostream>
using namespace std;

int main()
{
    filebuf myfile;
    myfile.open("test.txt", ios::in | ios::out);
    if (!myfile.is_open()) cout << "cannot open" << endl;
    myfile.sputn("AAAA", 4);   // overwrites the first 4 bytes in place
    myfile.close();

    filebuf myfile2;
    myfile2.open("test.txt", ios::in | ios::out);
    if (!myfile2.is_open()) cout << "cannot open 2" << endl;
    myfile2.sputn("BB", 2);    // overwrites the first 2 bytes again
    myfile2.close();
    return 0;
}
Write to a string in the order you want, then flush it to the file.
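A sketch of that idea, assuming the whole file fits comfortably in memory and using "filename" as a placeholder path: the current contents are read into a std::string, the new line is prepended in memory, and the file is rewritten in one shot.

#include <fstream>
#include <sstream>
#include <string>

int main() {
    // read the current contents into memory
    std::ifstream in("filename");
    std::stringstream ss;
    ss << in.rdbuf();
    std::string contents = ss.str();
    in.close();

    // prepend the new line, then rewrite the whole file
    std::ofstream out("filename", std::ios::trunc);
    out << "newest line first\n" << contents;
    return 0;
}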

Copying contents of one file to another in C++

I am using the following program to try to copy the contents of a file, src, to another, dest, in C++. The simplified code is given below:
#include <fstream>
using namespace std;

int main()
{
    fstream src("c:\\tplat\test\\secClassMf19.txt", fstream::binary);
    ofstream dest("c:\\tplat\\test\\mf19b.txt", fstream::trunc | fstream::binary);
    dest << src.rdbuf();
    return 0;
}
When I built and executed the program using the Code::Blocks IDE with the GCC compiler on Windows, a new file named "....mf19.txt" was created, but no data was copied into it, and the file size was 0 KB. I am positive I have some data in "...secClassMf19.txt".
I experience the same problem when I compile the same program with Visual C++ 2008 on Windows.
Can anyone please help explain why I am getting this unexpected behaviour, and more importantly, how to solve the problem?
You need to check whether opening the files actually succeeds before using those streams. Also, it never hurts to check if everything went right afterwards. Change your code to this and report back:
#include <cstdlib>
#include <fstream>
#include <iostream>

int main()
{
    std::fstream src("c:\\tplat\test\\secClassMf19.txt", std::ios::binary);
    if(!src.good())
    {
        std::cerr << "error opening input file\n";
        std::exit(1);
    }
    std::ofstream dest("c:\\tplat\\test\\mf19b.txt", std::ios::trunc|std::ios::binary);
    if(!dest.good())
    {
        std::cerr << "error opening output file\n";
        std::exit(2);
    }
    dest << src.rdbuf();
    if(!src.eof())
        std::cerr << "reading from file failed\n";
    if(!dest.good())
        std::cerr << "writing to file failed\n";
    return 0;
}
I bet you will report that one of the first two checks hits.
If opening the input file fails, try opening it using std::ios::in|std::ios::binary instead of just std::ios::binary.
Do you have any reason not to use the CopyFile function?
As it is written, your src instance is a regular fstream, and you are not specifying an open mode for input. The simple solution is to make src an instance of ifstream, and your code works. (Just by adding one byte!)
If you had tested the input stream (as sbi suggests), you would have found that it was not opened correctly, which is why your destination file was of zero size. It was opened in write mode (since it was an ofstream) with the truncation option to make it zero, but writing the result of rdbuf() simply failed, with nothing written.
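A sketch of that one-byte fix (the path strings are kept exactly as they appear in the question): making src an ifstream supplies the missing std::ios::in mode, so the source file actually opens for reading.

#include <fstream>
using namespace std;

int main()
{
    ifstream src("c:\\tplat\test\\secClassMf19.txt", fstream::binary);
    ofstream dest("c:\\tplat\\test\\mf19b.txt", fstream::trunc | fstream::binary);
    dest << src.rdbuf();
    return 0;
}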
Another thing to note is that while this works fine for small files, it would be very inefficient for large files. As is, you are reading the entire contents of the source file into memory, then writing it out again in one big block. This wastes a lot of memory. You are better off reading in chunks (say 1MB for example, a reasonable size for a disk cache) and writing a chunk at a time, with the last one being the remainder of the size. To determine the source's size, you can seek to the end and query the file offset, then you know how many bytes you are processing.
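A sketch of that chunked approach, with the 1 MB buffer size chosen as an assumption about a reasonable trade-off between memory use and I/O overhead:

#include <fstream>
#include <vector>

bool copyFileChunked(const char* from, const char* to)
{
    std::ifstream src(from, std::ios::binary);
    std::ofstream dest(to, std::ios::trunc | std::ios::binary);
    if (!src || !dest)
        return false;

    std::vector<char> buffer(1024 * 1024);   // 1 MB chunk
    while (src) {
        src.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        std::streamsize got = src.gcount();  // the last read may be a partial chunk
        if (got > 0)
            dest.write(buffer.data(), got);
    }
    return src.eof() && dest.good();
}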
And you will probably find your OS is even more efficient at copying files if you use the native APIs, but then it becomes less portable. You may want to look at the Boost filesystem module for a portable solution.
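For reference, a portable copy with Boost.Filesystem reduces to a single call; this sketch assumes Boost.Filesystem is available and linked, and reuses the question's paths (with the backslashes properly escaped):

#include <boost/filesystem.hpp>

int main()
{
    boost::filesystem::copy_file("c:\\tplat\\test\\secClassMf19.txt",
                                 "c:\\tplat\\test\\mf19b.txt",
                                 boost::filesystem::copy_option::overwrite_if_exists);
    return 0;
}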