I have written two instances ck1, ck2 of a struct named Cookie and saved them to a binary file named "mydat" by calling this function:
bool s_cookie(Cookie myck,std::string fname) {
std::ofstream ofs(fname,std::ios::binary | std::ios::app);
if(!ofs) return false;
ofs.write((char *) &myck, sizeof(Cookie));
ofs.close();
return true;
}
Of course myck can be ck1, ck2, etc., and fname refers to the "mydat" binary file, so both structs have been saved to the same file.
Now I want to read them back into ck3 and ck4 respectively. How do I do that? Cookie looks like this:
struct Cookie {
std::string name;
std::string value;
unsigned short duration;
bool expired;
};
Thanks
Reading works much like writing, just in reverse, provided Cookie is a POD type:
std::ifstream ifs(fname,std::ios::binary);
Cookie ck3, ck4;
ifs.read((char *) &ck3, sizeof(Cookie));
ifs.read((char *) &ck4, sizeof(Cookie));
Also, you should check the result of each open and read operation and handle failures.
Update: After your update, seeing what Cookie contains, you cannot simply write it to a file: std::string members hold pointers to heap data, not the characters themselves. You should serialize it, or define a well-specified format for reading and writing the data.
A simple workaround is (read the comment):
// Assume name and value are no longer than 99 characters
// and you don't care about wasted space in the file
struct CookiePOD {
CookiePOD(const Cookie &p)
{
// I ignored bound checking !
std::copy(p.name.begin(), p.name.end(), name);
name[p.name.size()] = 0;
std::copy(p.value.begin(), p.value.end(), value);
value[p.value.size()] = 0;
duration = p.duration;
expired = p.expired;
}
char name[100];
char value[100];
unsigned short duration;
bool expired;
};
And then try to read/write CookiePOD instead of Cookie.
I'm reading a file in binary mode with std::ifstream. Here is the way I'm reading from the file:
enum class Resources : uint64_t
{
Resource_1 = 0xA97082C73B2BC4BB,
Resource_2 = 0xB89A596B420BB2E2,
};
struct ResourceHeader
{
Resources hash;
uint32_t size;
};
int main()
{
std::ifstream input(path, std::ios::binary);
while (true)
{
if (input.eof()) break;
ResourceHeader RCHeader{};
input.read((char*)&RCHeader, sizeof(ResourceHeader));
uint16_t length = 0;
input.read((char*)&length, sizeof(uint16_t));
std::string str;
str.resize(length);
input.read(&str[0], length); /* here I get a string from binary file */
/* and some other reading */
}
}
After reading, I want to change some of the data I read from the file, and once all the changes are done, write the changed data to a new file.
So I want to know: how can I store the edited data in a buffer (I don't know in advance the exact size of the buffer once edits are made)? I also need to be able to move back and forth in the newly created buffer and edit some of the data again.
How can I achieve this?
I am trying to update data in two JSON files, with the filenames provided at run time.
This is the updateToFile function, which writes the data stored in the JSON variable to two different files from two different threads.
void updateToFile()
{
while(runInternalThread)
{
std::unique_lock<std::recursive_mutex> invlock(mutex_NodeInvConf);
FILE * pFile;
std::string conff = NodeInvConfiguration.toStyledString();
pFile = fopen (filename.c_str(), "wb");
std::ifstream file(filename);
fwrite (conff.c_str() , sizeof(char), conff.length(), pFile);
fclose (pFile);
sync();
}
}
thread 1:
std::thread nt(&NodeList::updateToFile,this);
thread 2:
std::thread it(&InventoryList::updateToFile,this);
Right now it rewrites the files even if no data has changed since the previous execution. I want to update a file only if its content has changed compared to what was previously stored; if there is no change, it should print that the data is the same.
Can anyone please help with this?
Thanks.
You can check if it has changed before writing.
void updateToFile()
{
std::string previous;
while(runInternalThread)
{
std::unique_lock<std::recursive_mutex> invlock(mutex_NodeInvConf);
std::string conf = NodeInvConfiguration.toStyledString();
if (conf != previous)
{
// TODO: error handling missing like in OP
std::ofstream file(filename);
file.write (conf.c_str() , conf.length());
file.close();
previous = std::move(conf);
sync();
}
}
}
However, such constant polling in a loop is likely inefficient. You could add a sleep to make it less busy. Another option is to have NodeInvConfiguration itself track whether it has changed, and clear that flag when storing.
I have a simple function that edits an HTML file. All it does is just to replace some texts in the file. This is the code for the function:
void edit_file(char* data1, char* data1_token, char* data2, char* data2_token) {
std::ifstream filein("datafile.html");
std::ofstream fileout("temp.html");
std::string line;
//bool found = false;
while(std::getline(filein, line))
{
std::size_t for_data1 = line.find(data1_token);
std::size_t for_data2 = line.find(data2_token);
if (for_data1 != std::string::npos) {
line.replace(for_data1, 11, data1);
}
if (for_data2 != std::string::npos) {
line.replace(for_data2, 19, data2);
}
fileout<<line;
}
filein.close();
fileout.close();
}
void edit_file_and_copy_back(char* data1, char* data1_token, char* data2, char* data2_token)
{
edit_file(data1, data1_token, data2, data2_token);
MoveFileEx("temp.html", "datafile.html", MOVEFILE_REPLACE_EXISTING);
}
For some reason I will call this function multiple times, but it only works the first time; on later calls, getline stops somewhere in the middle of the file.
The replace calls work without any problems (they work the first time). The second time, however, the while loop ends after reading only some of the lines.
I have tried filein.close() and the file.seekg function, but neither fixes the problem. What causes the incorrect execution, and how do I solve it?
Buffering is biting you. Here's what you're doing:
Opening datafile.html for read and temp.html for write
Copying lines from datafile.html to temp.html
When you're done, without closing or flushing temp.html, you open a separate handle to temp.html for read (which won't share the buffer with the original handle, so unflushed data isn't seen)
You open a separate handle to datafile.html for write, and copying from the second temp.html handle to the new datafile.html handle
But the copy in steps 3 & 4 is missing the data still sitting in the buffer of the temp.html handle opened in step 1. And each time you call this, if the input and output buffer sizes don't match, or the iostream implementation you're using doesn't flush until you write buffer size + 1 bytes, you'll drop up to another buffer's worth of data.
Change your code so the scope of the original handles ends before you call the copy back function:
void edit_file(char* data1, char* data1_token, char* data2, char* data2_token) {
{ // New scope; when it ends, files are closed
ifstream filein("datafile.html");
ofstream fileout("temp.html");
string strTemp;
std::string line;
//bool found = false;
while(std::getline(filein, line))
{
std::size_t for_data1 = line.find(data1_token);
std::size_t for_data2 = line.find(data2_token);
if (for_data1 != std::string::npos) {
line.replace(for_data1, 11, data1);
}
if (for_data2 != std::string::npos) {
line.replace(for_data2, 19, data2);
}
fileout<<line;
}
} // End of new scope, files closed at this point
write_back_file();
}
void write_back_file() {
ifstream filein("temp.html");
ofstream fileout("datafile.html");
fileout<<filein.rdbuf();
}
Mind you, this still has potential errors. If both tokens are found and data1_token occurs before data2_token, the index for data2_token will be stale by the time you use it; you need to delay the scan for data2_token until after you scan and replace data1_token. (Alternatively, if the data1_token replacement might itself produce a data2_token that should not be replaced, compare the hit indices and perform the later replacement first, so the earlier index remains valid.)
Similarly, from a performance and atomicity perspective, you probably don't want to copy from temp.html back to datafile.html; other threads and processes would be able to see the incomplete datafile.html in that case, rather than seeing the old version atomically replaced with the new version. It also means you need to worry about removing temp.html at some point. Typically, you just move the temporary file over the original file:
rename("temp.html", "datafile.html");
If you're on Windows, that won't work atomically to replace an existing file; you'd need to use MoveFileEx to force replacing of existing files:
MoveFileEx("temp.html", "datafile.html", MOVEFILE_REPLACE_EXISTING);
void edit_file(char* data1, char* data1_token, char* data2, char* data2_token) {
ifstream filein("datafile.html");
ofstream fileout("temp.html");
// STUFF
// At this point the two streams are still open and
// may not have been flushed to the file system.
// You now call this function.
write_back_file();
}
void write_back_file() {
// You are opening files that are already open.
// Do not think there are any guarantees about the content at this point.
// So what is copied is questionable.
ifstream filein("temp.html");
ofstream fileout("datafile.html");
fileout<<filein.rdbuf();
}
Do not call write_back_file() from within edit_file(). Rather provide a wrapper that calls both.
void edit_file_and_copy_back(char* data1, char* data1_token, char* data2, char* data2_token)
{
edit_file(data1, data1_token, data2, data2_token);
write_back_file();
}
I'm doing an external merge sort for an assignment and I'm given two structs:
// This is the definition of a record of the input file. Contains three fields, recid, num and str
typedef struct {
unsigned int recid;
unsigned int num;
char str[STR_LENGTH];
bool valid; // if set, then this record is valid
int blockID; //The block the record belongs to -> Used only for minheap
} record_t;
// This is the definition of a block, which contains a number of fixed-sized records
typedef struct {
unsigned int blockid;
unsigned int nreserved; // how many reserved entries
record_t entries[MAX_RECORDS_PER_BLOCK]; // array of records
bool valid; // if set, then this block is valid
unsigned char misc;
unsigned int next_blockid;
unsigned int dummy;
} block_t;
Also I'm given this:
void MergeSort (char *infile, unsigned char field, block_t *buffer,
unsigned int nmem_blocks, char *outfile,
unsigned int *nsorted_segs, unsigned int *npasses,
unsigned int *nios)
Now, at phase 0 I'm allocating memory like this:
buffer = (block_t *) malloc (sizeof(block_t)*nmem_blocks);
//Allocate disc space for records in buffer
record_t *records = (record_t*)malloc(nmem_blocks*MAX_RECORDS_PER_BLOCK*sizeof(record_t));
Then, after I read the records from a binary file (that runs smoothly), I write them to multiple files (after sorting, of course, and some other steps) with this command:
outputfile = fopen(name.c_str(), "wb");
fwrite(records, recordsIndex, sizeof(record_t), outputfile);
and read like this:
fread(&buffer[b].entries[rec],sizeof(record_t),1,currentFiles[b])
And it works! Then I want to combine some of these files to produce a larger sorted file using a priority_queue turned to minheap (it's tested, it works), but when I try to write to files using this command:
outputfile = fopen(outputName.c_str(), "ab"); //Opens file for appending
fwrite(&buffer[nmem_blocks-1].entries, buffer[nmem_blocks-1].
nreserved, sizeof(record_t), outputfile);
It writes nonsense to the file, as if it were reading random parts of memory.
I know that the code is probably not nearly enough, but all of it is quite large.
I'm making sure I'm closing the output file before I open it again using a new name. Also I use memset() (and not free()) to clear the buffer before I fill it again.
In the end, the main problem was the way I was trying to open the file:
outputfile = fopen(outputName.c_str(), "ab"); //Opens file for appending
Instead I should have used again:
outputfile = fopen(outputName.c_str(), "wb"); //Opens file for writing, truncating any existing content
Because the file was never closed in the meantime, it was trying to open an already-open file for appending, and that didn't work well. But you couldn't have known that, since you didn't have the full code. Thank you for your help though! :)
I've finally figured out how to write some specifically formatted information to a binary file, but now my problem is reading it back and building it back the way it originally was.
Here is my function to write the data:
void save_disk(disk aDisk)
{
ofstream myfile("disk01", ios::out | ios::binary);
int32_t entries;
entries = (int32_t) aDisk.current_file.size();
char buffer[10];
sprintf(buffer, "%d",entries);
myfile.write(buffer, sizeof(int32_t));
std::for_each(aDisk.current_file.begin(), aDisk.current_file.end(), [&] (const file_node& aFile)
{
myfile.write(aFile.name, MAX_FILE_NAME);
myfile.write(aFile.data, BLOCK_SIZE - MAX_FILE_NAME);
});
}
and my structure that it originally was created with and what I want to load it back into is composed as follows.
struct file_node
{
char name[MAX_FILE_NAME];
char data[BLOCK_SIZE - MAX_FILE_NAME];
file_node(){};
};
struct disk
{
vector<file_node> current_file;
};
I don't really know how to read it back in so that it is arranged the same way, but here is my pathetic attempt anyway (I just tried to reverse what I did for saving):
void load_disk(disk aDisk)
{
ifstream myFile("disk01", ios::in | ios::binary);
char buffer[10];
myFile.read(buffer, sizeof(int32_t));
std::for_each(aDisk.current_file.begin(), aDisk.current_file.end(), [&] (file_node& aFile)
{
myFile.read(aFile.name, MAX_FILE_NAME);
myFile.read(aFile.data, BLOCK_SIZE - MAX_FILE_NAME);
});
}
^^ This is absolutely wrong. ^^
I understand the basic operations of the ifstream, but really all I know how to do with it is read in a file of text, anything more complicated than that I'm kind of lost.
Any suggestions on how I can read this in?
You're very close. You need to write and read the length as binary.
This part of your length-write is wrong:
char buffer[10];
sprintf(buffer, "%d",entries);
myfile.write(buffer, sizeof(int32_t));
It only writes the first four bytes of whatever the length is, but here those bytes are character data produced by a sprintf() call. You need to write the binary value of entries (the integer):
// writing your entry count.
uint32_t entries = (uint32_t)aDisk.current_file.size();
entries = htonl(entries);
myfile.write((char*)&entries, sizeof(entries));
Then on the read:
// reading the entry count
uint32_t entries = 0;
myFile.read((char*)&entries, sizeof(entries));
entries = ntohl(entries);
// Use this to resize your vector; for_each has places to stuff data now.
aDisk.current_file.resize(entries);
std::for_each(aDisk.current_file.begin(), aDisk.current_file.end(), [&] (file_node& aFile)
{
myFile.read(aFile.name, MAX_FILE_NAME);
myFile.read(aFile.data, BLOCK_SIZE - MAX_FILE_NAME);
});
Or something like that.
Note 1: this does NO error checking. The htonl/ntohl pair normalizes the entry count's byte order, but beyond that nothing accounts for portability between hosts of different endianness (a big-endian machine writing the file, a little-endian machine reading it). That's probably OK for your needs, but you should at least be aware of it.
Note 2: Pass your input disk parameter to load_disk() by reference:
void load_disk(disk& aDisk)
EDIT: Zeroing file_node contents on construction
struct file_node
{
char name[MAX_FILE_NAME];
char data[BLOCK_SIZE - MAX_FILE_NAME];
file_node()
{
memset(name, 0, sizeof(name));
memset(data, 0, sizeof(data));
}
};
If you are using a C++11-compliant compiler:
struct file_node
{
char name[MAX_FILE_NAME];
char data[BLOCK_SIZE - MAX_FILE_NAME];
file_node() : name(), data() {}
};