This is something the professor showed us in his scripts. I have not used this method in any code I have written.
Basically, we take a class, or struct, and reinterpret_cast it and save off the entire struct like so:
#include <fstream>
#include <string>

struct Account
{
Account()
{ }
Account(std::string one, std::string two)
: login_(one), pass_(two)
{ }
private:
std::string login_;
std::string pass_;
};
int main()
{
Account *acc = new Account("Christian", "abc123");
std::ofstream out("File.txt", std::ios::binary);
out.write(reinterpret_cast<char*>(acc), sizeof(Account));
out.close();
}
This produces the output (in the file)
ÍÍÍÍChristian ÍÍÍÍÍÍ ÍÍÍÍabc123 ÍÍÍÍÍÍÍÍÍ
I'm confused. Does this method actually work, or does it cause UB because magical things happen within classes and structs that are at the whims of individual compilers?
It doesn't actually work, but it also does not cause undefined behavior.
In C++ it is legal to reinterpret any object as an array of char, so there is no undefined behavior here.
The results, however, are usually only usable if the class is POD (effectively, if the class is a simple C-style struct) and self-contained (that is, the struct doesn't have pointer data members).
Here, Account is not POD because it has std::string members. The internals of std::string are implementation-defined, but it is not POD and it usually has pointers that refer to some heap-allocated block where the actual string is stored (in your specific example, the implementation is using a small-string optimization where the value of the string is stored in the std::string object itself).
There are a few issues:
You aren't always going to get the results you expect. If you had a longer string, the std::string would use a buffer allocated on the heap to store the string and so you will end up just serializing the pointer, not the pointed-to string.
You can't actually use the data you've serialized here. You can't just reinterpret the data as an Account and expect it to work, because the std::string constructors would not get called.
In short, you cannot use this approach for serializing complex data structures.
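For contrast, a minimal hand-rolled, member-wise approach might look like the following sketch (the save/load member functions and the length-prefixed format are illustrative choices, not anything from the question):

#include <cstdint>
#include <istream>
#include <ostream>
#include <string>

class Account
{
public:
    Account() = default;
    Account(std::string one, std::string two) : login_(one), pass_(two) {}

    // Write each string as a length prefix followed by its characters, so the
    // pointed-to data is stored rather than the string object's internals.
    void save(std::ostream& out) const
    {
        writeString(out, login_);
        writeString(out, pass_);
    }
    void load(std::istream& in)
    {
        login_ = readString(in);
        pass_ = readString(in);
    }

private:
    static void writeString(std::ostream& out, const std::string& s)
    {
        std::uint32_t len = static_cast<std::uint32_t>(s.size());
        out.write(reinterpret_cast<const char*>(&len), sizeof len);
        out.write(s.data(), len);
    }
    static std::string readString(std::istream& in)
    {
        std::uint32_t len = 0;
        in.read(reinterpret_cast<char*>(&len), sizeof len);
        std::string s(len, '\0');
        in.read(&s[0], len);
        return s;
    }

    std::string login_;
    std::string pass_;
};

Using it is then just acc.save(file) and acc.load(file) on a stream opened in binary mode.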
It's not undefined. Rather, it's platform-dependent, implementation-defined behavior. This is, in general, bad code, because differing versions of the same compiler, or even different switches on the same compiler, can break your save-file format.
This could work depending on the contents of the struct, and the platform on which the data is read back. This is a risky, non-portable hack which your teacher should not be propagating.
Do you have pointers or ints in the struct? Pointers will be invalid in the new process when read back, and int format is not the same on all machines (to name but two show-stopping problems with this approach). Anything that's pointed to as part of an object graph will not be handled. Structure packing could be different on the target machine (32-bit vs 64-bit) or even due to compiler options changing on the same hardware, making sizeof(Account) unreliable as a read back data size.
For a better solution, look at a serialization library which handles those issues for you. Boost.Serialization is a good example.
Here, we use the term "serialization" to mean the reversible deconstruction of an arbitrary set of C++ data structures to a sequence of bytes. Such a system can be used to reconstitute an equivalent structure in another program context. Depending on the context, this might be used to implement object persistence, remote parameter passing or other facility.
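For example, the Account from the question might look like this with Boost.Serialization (a sketch; the text archive is an arbitrary choice, and the non-intrusive form would work just as well):

#include <fstream>
#include <string>
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/string.hpp>

class Account
{
public:
    Account() = default;
    Account(std::string one, std::string two) : login_(one), pass_(two) {}

private:
    friend class boost::serialization::access;
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/)
    {
        ar & login_;   // the library knows how to (de)serialize std::string
        ar & pass_;
    }

    std::string login_;
    std::string pass_;
};

int main()
{
    std::ofstream out("account.txt");
    boost::archive::text_oarchive archive(out);
    Account acc("Christian", "abc123");
    archive << acc;   // deep, reversible serialization, unlike a raw byte dump
}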
Google Protocol Buffers also works well for simple object hierarchies.
Dumping the raw bytes is no substitute for proper serialization. Consider the case of any complex type that contains pointers - if you save the pointers to a file, when you load them up later, they're not going to point to anything meaningful.
Additionally, it's likely to break if the code changes, or even if it's recompiled with different compiler options.
So it's really only useful for short-term storage of simple types - and in doing so, it takes up way more space than necessary for that task.
This method, if it works at all, is far from robust. It is much better to decide on some "serialized" form, whether it is binary, text, XML, etc., and write that out.
The key here: you need code that reliably converts your class or struct to/from a series of bytes. reinterpret_cast does not do this, because the exact bytes used to represent the class or struct in memory can change due to padding, member order, and so on.
No.
In order for it to work, the structure must be a POD (plain old data: only simple data members and POD data members, no virtual functions... probably some other restrictions which I can't remember).
So if you wanted to do that, you'd need a struct like this:
struct Account {
char login[20];
char password[20];
};
Note that std::string's not a POD, so you'd need plain arrays.
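With that POD version, the raw dump can actually be read back - a sketch, assuming the same program, compiler and settings on both ends:

#include <cstring>
#include <fstream>
#include <iostream>

struct Account {
    char login[20];
    char password[20];
};

int main()
{
    Account a{};
    std::strncpy(a.login, "Christian", sizeof(a.login) - 1);
    std::strncpy(a.password, "abc123", sizeof(a.password) - 1);

    {   // write the raw bytes; for a self-contained POD this is meaningful
        std::ofstream out("account.bin", std::ios::binary);
        out.write(reinterpret_cast<const char*>(&a), sizeof(Account));
    }

    Account b{};
    {   // read them back into a fresh object
        std::ifstream in("account.bin", std::ios::binary);
        in.read(reinterpret_cast<char*>(&b), sizeof(Account));
    }
    std::cout << b.login << ' ' << b.password << '\n';   // prints: Christian abc123
}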
Still, not a good approach for you. Keyword: "serialization" :).
Some versions of std::string don't actually use dynamic memory when the string is small; instead they store the string inside the string object itself.
Think of this:
struct SimpleString
{
char* begin; // beginning of string
char* end; // end of string
char* allocEnd; // end of allocated buffer end <= allocEnd
int* shareCount; // strings are often copy-on-write, so you need
// to track the number of users of this buffer
};
Now, on a 64-bit system each pointer is 8 bytes, so a string of fewer than 32 bytes could fit into the same structure without allocating a separate buffer.
struct CompressedString
{
char buffer[sizeof(SimpleString)];
};
struct OptString
{
int type; // Normal / Compressed
union
{
SimpleString simple;
CompressedString compressed;
};
};
So this is what I believe is happening above.
A very efficient string implementation is being used, allowing you to dump the object to a file without worrying about pointers (as these std::string objects are not using pointers).
Obviously this is not portable as it depends on the implementation details of std::string.
So it is an interesting trick, but not portable (and liable to break easily without some compile-time checks).
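As a rough check of whether your library keeps short strings inline, a small probe like this can help (a sketch only; the numbers it prints are implementation details, not guarantees):

#include <iostream>
#include <string>

int main()
{
    std::string s;   // default-constructed, empty
    // The size of the string object itself, and the capacity it has before
    // any allocation, hint at the size of the small-string buffer.
    std::cout << "sizeof(std::string): " << sizeof(std::string) << '\n';
    std::cout << "capacity before any allocation: " << s.capacity() << '\n';
}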
Related
I've been doing some reading on immutable strings in general and in c++, here, here, and I think I have a decent understanding of how things work. However I have built a few assumptions that I would just like to run by some people for verification. Some of the assumptions are more general than the title would suggest:
1. While a const string in C++ is the closest thing to an immutable string in the STL, it is only locally immutable and therefore doesn't get the benefit of being a smaller object. So it has all the trimmings of a standard string object, but it can't access all of the member functions. Does this mean that it doesn't create any optimization in the program over non-const, but rather just protects the object from modification? I understand that this is an important attribute, but I'm simply looking to know what it means to use this.
2. I'm assuming that an object's member functions exist only once in read-only memory, and how is probably implementation-specific, so does a const object have a separate location in memory? Or are the member functions limited in another way? If there are only const string objects and no non-const strings in a code base, does the compiler leave out the inaccessible functions?
3. I recall hearing that each string literal is stored only once in read-only memory in C++, but I don't find anything on this here. In other words, if I use some string literal multiple times in the same program, each instance references the same location in memory. I'm going to assume no, but would two string objects initialized by the same string literal point to the same string until one is modified?
I apologize if I have included too many disjunct thoughts in the same post, they are all related to me as string representation and just learning how to code better.
As far as I know, std::string cannot assume that the input string is a read-only constant string from your data segment. Therefore, point (3) does not apply. It will most likely allocate a buffer and copy the string in the buffer.
Note that C++ (like C) has a const qualifier that is enforced at compile time. It is a good idea to use it for two reasons: (a) it will help you find bugs, since a statement such as a = 5; fails to compile if a is declared const; (b) the compiler may be able to optimize the code more easily (it may otherwise not be able to figure out that the object is constant).
However, C++ has a special cast to remove the const-ness of a variable. So our a variable can be cast and assigned a value as in const_cast<int&>(a) = 5;. An std::string can also get its const-ness removed. (Note that C does not have a special cast, but it offers the exact same behavior: * (int *) &a = 5)
Are all class members defined in the final binary?
No. std::string, like most of the STL, uses templates. Templates are compiled once per translation unit (your .o object files) and the linker removes the duplicates automatically. So if you look at the size of all the .o files and add them up, the final output will be a lot smaller.
That also means only the functions that are used in a unit are compiled and saved in the object file. Any other function "disappears". That being said, often function A calls function B, so B will be defined even if you did not explicitly call it.
On the other hand, because these are templates, very often the functions get inlined. But that is a choice by the compiler, not the language or the STL (although you can use the inline keyword for fun; the compiler has the right to ignore it anyway).
Smaller objects... No, in C++ an object has a very specific size that cannot change. Otherwise the sizeof(something) would vary from place to place and C/C++ would go berserk!
Static strings that are saved in read-only data sections, however, can be optimized. If the linker/compiler are good enough, they will be able to merge identical strings into a single location. These are just plain char * or wchar_t *, of course. The Microsoft compiler has been able to do that for a while now.
Yet, the const on a string does not always force your string to be put in a read-only data section. That will generally depend on your command-line options. C++ may have corrected that, but I think C still puts everything in a read/write section unless you use the correct command-line option. That's something you need to test to make sure (your compiler is likely to do it, but without testing you won't know).
Finally, although std::string may not use it, C++ offers a quite interesting keyword called mutable. If you have heard about it, you know that a member variable can be marked as mutable, which means even const functions can modify that member variable. There are two main reasons for using that keyword: (1) you are writing a multi-threaded program and the class has to be thread-safe, in which case you mark the mutex as mutable, very practical; (2) you want a buffer used to cache a costly computed value, and that buffer is only initialized when the value is requested so as not to waste time otherwise; that buffer is made mutable too.
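As a small illustration of the second use, here is a cached computation written into a mutable member from a const function (a generic sketch, not tied to std::string or any class discussed here):

class Circle
{
public:
    explicit Circle(double radius) : radius_(radius) {}

    double area() const   // logically const for callers...
    {
        if (!cached_) {
            area_ = 3.141592653589793 * radius_ * radius_;   // the "costly" computation stands in here
            cached_ = true;                                   // ...yet it may fill the cache
        }
        return area_;
    }

private:
    double radius_;
    mutable double area_ = 0.0;   // cache, writable even from const member functions
    mutable bool cached_ = false; // filled lazily on first request
};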
Therefore the "immutable" concept is only really something that you can count on at a higher level. In practice, reality is often quite different. For example, an std::string c_str() function may reallocate the buffer to add the necessary '\0' terminator, yet that function is marked as being a const:
const CharT* c_str() const;
Actually, an implementation is free to allocate a completely different buffer, copy its existing data to that buffer and return that bare pointer. That means internally the std::string could allocate many buffers to store large strings (instead of using realloc(), which can be costly).
One thing, though... when you copy string A into string B (B = A;) the string data does not get copied. Instead A and B will share the same data buffer. Only once you modify A or B does the data get copied. This means calling a function which accepts a string by copy does not waste that much time:
void func(std::string a)
{
...
if(some_test)
{
// deep copy only happens here
a += "?";
}
}
std::string b;
func(b);
The characters of string b do not get copied at the time func() gets called. And if func() never modifies 'a', the string data remains the same all along. This is often referred to as a shallow copy or copy-on-write.
Profiling of my application reveals that it is spending nearly 5% of CPU time in string allocation. In many, many places I am making C++ std::string objects from a 64MB char buffer. The thing is, the buffer never changes during the running of the program. My analysis of std::string(const char *buf, size_t buflen) calls is that the string is being copied because the buffer might change after the string is made. That isn't the problem here. Is there a way around this problem?
EDIT: I am working with binary data, so I can't just pass around char *s. Besides, then I would have a substantial overhead from always scanning for the NULL, which the std::string avoids.
If the string isn't going to change and if its lifetime is guaranteed to be longer than you are going to use the string, then don't use std::string.
Instead, consider a simple C string wrapper, like the proposed string_ref<T>.
Binary data? Stop using std::string and use std::vector<char>. But that won't fix your issue of it being copied. From your description, if this huge 64MB buffer will never change, you really shouldn't be using std::string or std::vector<char>; neither one is a good idea. You really ought to be passing around a const char* pointer (const uint8_t* would be more descriptive of binary data, but under the covers it's the same thing, sign issues aside). Pass around both the pointer and a size_t length, or pass the pointer together with an 'end' pointer. If you don't like passing around separate discrete variables (a pointer and the buffer's length), make a struct to describe the buffer and have everyone use that instead:
struct binbuf_desc {
uint8_t* addr;
size_t len;
binbuf_desc(uint8_t* addr, size_t len) : addr(addr), len(len) {}
};
You can always refer to your 64MB buffer (or any other buffer of any size) by using binbuf_desc objects. Note that binbuf_desc objects don’t own the buffer (or a copy of it), they’re just a descriptor of it, so you can just pass those around everywhere without having to worry about binbuf_desc’s making unnecessary copies of the buffer.
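A hypothetical usage sketch (the checksum function is invented for illustration, and the struct from above is repeated so the snippet is self-contained):

#include <cstddef>
#include <cstdint>

struct binbuf_desc {
    uint8_t* addr;
    size_t len;
    binbuf_desc(uint8_t* addr, size_t len) : addr(addr), len(len) {}
};

// Hypothetical consumer: it only ever receives the small descriptor
// (16 bytes on a typical 64-bit system), never a copy of the buffer itself.
uint64_t checksum(binbuf_desc buf) {
    uint64_t sum = 0;
    for (size_t i = 0; i < buf.len; ++i)
        sum += buf.addr[i];
    return sum;
}

int main() {
    static uint8_t big_buffer[64 * 1024]; // stand-in for the real 64MB buffer
    binbuf_desc desc(big_buffer, sizeof(big_buffer));
    uint64_t c = checksum(desc); // cheap: copies the descriptor, not the data
    (void)c;
}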
There is no portable solution. If you tell us what toolchain you're using, someone might know a trick specific to your library implementation. But for the most part, the std::string destructor (and assignment operator) is going to free the string content, and you can't free a string literal. (It's not impossible to have exceptions to this, and in fact the small string optimization is a common case that skips deallocation, but these are implementation details.)
A better approach is to not use std::string when you don't need/want dynamic allocation. const char* still works just fine in modern C++.
Since C++17, std::string_view may be the way to go. It can be initialized both from a bare C string (with or without a length) and from a std::string.
There is no constraint that the data() method returns a zero-terminated string though.
If you need this "zero-terminated on request" behaviour, there are alternatives such as str_view from Adam Sawicki that looks satisfying (https://github.com/sawickiap/str_view)
Seems that using const char * instead of std::string is the best way to go for you. But you should also consider how you are using strings: implicit conversions from char pointers to std::string objects could be happening without you noticing, for example during function calls.
Pointers cannot be persisted directly to file, because they point to absolute addresses. To address this issue I wrote a relative_ptr template that holds an offset instead of an absolute address.
Based on the fact that only trivially copyable types can be safely copied bit-by-bit, I made the assumption that this type needed to be trivially copyable to be safely persisted in a memory-mapped file and retrieved later on.
This restriction turned out to be a bit problematic, because the compiler-generated copy constructor does not behave in a meaningful way. I found nothing that forbade me from defaulting the copy constructor and making it private, so I made it private to avoid accidental copies that would lead to undefined behaviour.
Later on, I found boost::interprocess::offset_ptr whose creation was driven by the same needs. However, it turns out that offset_ptr is not trivially copyable because it implements its own custom copy constructor.
Is my assumption that the smart pointer needs to be trivially copyable to be persisted safely wrong?
If there's no such restriction, I wonder if I can safely do the following as well. If not, exactly what are the requirements a type must fulfill to be usable in the scenario I described above?
#include <iostream>
#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>

struct base {
int x;
virtual void f() = 0;
virtual ~base() {} // virtual members!
};
struct derived : virtual base {
int x;
void f() { std::cout << x; }
};
using namespace boost::interprocess;
void persist() {
file_mapping file("blah");
mapped_region region(file, read_write, 128, sizeof(derived));
// create object on a memory-mapped file
derived* d = new (region.get_address()) derived();
d->x = 42;
d->f();
region.flush();
}
void retrieve() {
file_mapping file("blah");
mapped_region region(file, read_write, 128, sizeof(derived));
derived* d = static_cast<derived*>(region.get_address());
d->f();
}
int main() {
persist();
retrieve();
}
Thanks to all those that provided alternatives. It's unlikely that I will be using something else any time soon, because as I explained, I already have a working solution. And as you can see from the use of question marks above, I'm really interested in knowing why Boost can get away without a trivially copyable type, and how far can you go with it: it's quite obvious that classes with virtual members will not work, but where do you draw the line?
To avoid confusion let me restate the problem.
You want to create an object in mapped memory in such a way that after the application is closed and reopened the file can be mapped once again and object used without further deserialization.
POD is kind of a red herring for what you are trying to do. You don't need to be binary copyable (what POD means); you need to be address-independent.
Address-independence requires you to:
avoid all absolute pointers.
only use offset pointers to addresses within the mapped memory.
There are a few corollaries that follow from these rules.
You can't use virtual anything. C++ virtual functions are implemented with a hidden vtable pointer in the class instance. The vtable pointer is an absolute pointer over which you don't have any control.
You need to be very careful about the other C++ objects your address-independent objects use. Basically everything in the standard library may break if you use it. Even if they don't use new, standard library types may use virtual functions internally, or simply store an absolute address in a pointer.
You can't store references in the address-independent objects. Reference members are just syntactic sugar over absolute pointers.
Inheritance is still possible but of limited usefulness since virtual is outlawed.
Any and all constructors / destructors are fine as long as the above rules are followed.
Even Boost.Interprocess isn't a perfect fit for what you're trying to do. Boost.Interprocess also needs to manage shared access to the objects, whereas you can assume that you're the only one messing with the memory.
In the end it may be simpler / saner to just use Google Protobufs and conventional serialization.
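To make the "offset pointers only" rule concrete, here is a minimal sketch of a self-relative pointer. It is an illustration under the assumptions above, not the asker's relative_ptr and not boost::interprocess::offset_ptr:

#include <cstddef>

// Stores the distance from the pointer object itself to the pointee. As long
// as pointer and pointee live in the same mapping, the offset stays valid no
// matter which base address the file is mapped at next time.
template <typename T>
struct self_relative_ptr {
    std::ptrdiff_t offset = 0; // 0 means "null" here, purely by convention

    void set(T* p) {
        offset = p ? reinterpret_cast<const char*>(p) -
                     reinterpret_cast<const char*>(this)
                   : 0;
    }
    T* get() const {
        if (offset == 0) return nullptr;
        const char* base = reinterpret_cast<const char*>(this);
        return reinterpret_cast<T*>(const_cast<char*>(base) + offset);
    }
    // Note: the compiler-generated copy constructor copies the raw offset,
    // which is only meaningful if source and destination sit at the same
    // relative position -- exactly the wrinkle the question runs into.
};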
Yes, but for reasons other than the ones that seem to concern you.
You've got virtual functions and a virtual base class. These lead to a host of pointers created behind your back by the compiler. You can't turn them into offsets or anything else.
If you want to do this style of persistence, you need to eschew 'virtual'. After that, it's all a matter of the semantics. Really, just pretend you were doing this in C.
Even PoD has pitfalls if you are interested in interoperating across different systems or across time.
You might look at Google Protocol Buffers for a way to do this in a portable fashion.
Not as much an answer as a comment that grew too big:
I think it's going to depend on how much safety you're willing to trade for speed/ease of usage. In the case where you have a struct like this:
struct S { char c; double d; };
You have to consider padding and the fact that some architectures might not allow you to access a double unless it is aligned on a proper memory address. Adding accessor functions and fixing the padding tackles this and the structure is still memcpy-able, but now we're entering territory where we're not really gaining much of a benefit from using a memory mapped file.
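For instance, a quick probe (a sketch, nothing more) makes the compiler-inserted padding visible:

#include <cstddef>
#include <iostream>

struct S { char c; double d; };

int main() {
    // On a typical 64-bit ABI this prints 16, not 9: the compiler pads
    // after 'c' so that 'd' lands on an 8-byte boundary.
    std::cout << "sizeof(S) = " << sizeof(S) << '\n';
    std::cout << "offsetof(S, d) = " << offsetof(S, d) << '\n';
    std::cout << "alignof(double) = " << alignof(double) << '\n';
}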
Since it seems like you'll only be using this locally and in a fixed setup, relaxing the requirements a little seems OK, so we're back to using the above struct normally. Now does the type have to be trivially copyable? I don't necessarily think so; consider this (probably broken) class:
#include <iostream>
#include <utility>

enum Endian { LittleEndian, BigEndian };
template<typename T, Endian e> struct PV {
    union {
        unsigned char b[sizeof(T)];
        T x;
    } val;

    template<Endian oe> PV& operator=(const PV<T,oe>& rhs) {
        val.x = rhs.val.x;
        if (e != oe) {
            for(size_t b = 0; b < sizeof(T) / 2; b++) {
                std::swap(val.b[sizeof(T)-1-b], val.b[b]);
            }
        }
        return *this;
    }
};
It's not trivially copyable and you can't just use memcpy to move it around in general, but I don't see anything immediately wrong with using a class like this in the context of a memory mapped file (especially not if the file matches the native byte order).
Update:
Where do you draw the line?
I think a decent rule of thumb is: if the equivalent C code is acceptable and C++ is just being used as a convenience, to enforce type-safety, or proper access it should be fine.
That would make boost::interprocess::offset_ptr OK since it's just a helpful wrapper around a ptrdiff_t with special semantic rules. In the same vein struct PV above would be OK as it's just meant to byte swap automatically, though like in C you have to be careful to keep track of the byte order and assume that the structure can be trivially copied. Virtual functions wouldn't be OK as the C equivalent, function pointers in the structure, wouldn't work. However something like the following (untested) code would again be OK:
struct Foo {
unsigned char obj_type; // index into a hand-rolled, per-process table of function pointers (not shown)
void vfunc1(int arg0) { vtables[obj_type].vfunc1(this, arg0); }
};
That is not going to work. Your class derived is not a POD, therefore it depends on the compiler how it lays out and compiles your code. In other words: do not do it.
By the way, where are you releasing your objects? I see you are creating your objects in place, but you are not calling the destructor.
Absolutely not. Serialisation is a well established functionality that is used in numerous of situations, and certainly does not require PODs. What it does require is that you specify a well defined serialisation binary interface (SBI).
Serialisation is needed anytime your objects leave the runtime environment, including shared memory, pipes, sockets, files, and many other persistence and communication mechanisms.
Where PODs help is where you know you are not leaving the processor architecture. If you will never be changing versions between writers of the object (serialisers) and readers (deserialisers) and you have no need for dynamically-sized data, then PODs allow easy memcpy based serialisers.
Commonly, though, you need to store things like strings. Then, you need a way to store and retrieve the dynamic information. Sometimes, 0 terminated strings are used, but that is pretty specific to strings, and doesn't work for vectors, maps, arrays, lists, etc. You will often see strings and other dynamic elements serialized as [size][element 1][element 2]… this is the Pascal array format. Additionally, when dealing with cross machine communications, the SBI must define integral formats to deal with potential endianness issues.
Now, pointers are usually implemented by IDs, not offsets. Each object that needs to be serialised can be given an incrementing number as an ID, and that can be the first field in the SBI. The reason you usually don't use offsets is that you may not be able to easily calculate future offsets without going through a sizing step or a second pass. IDs can be calculated inside the serialisation routine on the first pass.
Additional ways to serialize include text based serialisers using some syntax like XML or JSON. These are parsed using standard textual tools that are used to reconstruct the object. These keep the SBI simple at the cost of pessimising performance and bandwidth.
In the end, you typically build an architecture where you build serialisation streams that take your objects and translate them member by member to the format of your SBI. In the case of shared memory, it typically pushes the members directly on to the memory after acquiring the shared mutex.
This often looks like
void MyClass::Serialise(SerialisationStream & stream)
{
stream & member1;
stream & member2;
stream & member3;
// ...
}
where the & operator is overloaded for your different types. You may take a look at Boost.Serialization for more examples.
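A bare-bones sketch of such a stream (the class name and the format are invented for illustration; a real one would also handle endianness, versioning and the reading direction):

#include <cstdint>
#include <ostream>
#include <string>

// Writes members in a simple length-prefixed binary format via operator&.
class SerialisationStream {
public:
    explicit SerialisationStream(std::ostream& out) : out_(out) {}

    SerialisationStream& operator&(std::uint32_t v) {
        out_.write(reinterpret_cast<const char*>(&v), sizeof v);
        return *this;
    }
    SerialisationStream& operator&(const std::string& s) {
        *this & static_cast<std::uint32_t>(s.size()); // Pascal-style length prefix
        out_.write(s.data(), static_cast<std::streamsize>(s.size()));
        return *this;
    }

private:
    std::ostream& out_;
};

With something like this in place, a Serialise member function like the one above simply pushes each member through the overloaded & operator.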
I have a list of objects that I would like to store in a file as small as possible for later retrieval. I have been carefully reading this tutorial, and am beginning (I think) to understand, but have several questions. Here is the snippet I am working with:
static bool writeHistory(string fileName)
{
fstream historyFile;
historyFile.open(fileName.c_str(), ios::binary);
if (historyFile.good())
{
list<Referral>::iterator i;
for(i = AllReferrals.begin();
i != AllReferrals.end();
i++)
{
historyFile.write((char*)&(*i),sizeof(Referral));
}
return true;
} else return false;
}
Now, this is adapted from the snippet
file.write((char*)&object,sizeof(className));
taken from the tutorial. Now what I believe it is doing is converting the object to a pointer, taking the value and size and writing that to the file. But if it is doing this, why bother doing the conversions at all? Why not take the value from the beginning? And why does it need the size? Furthermore, from my understanding then, why does
historyFile.write((char*)i,sizeof(Referral));
not compile? i is an iterator (and isn't an iterator a pointer?). Or simply
historyFile.write(i,sizeof(Referral));
Why do I need to be messing around with addresses anyway? Aren't I storing the data in the file? If the addresses/values are persisting on their own, why can't I just store the addresses delimited in plain text and then take their values later?
And should I still be using the .txt extension? (Edit: what should I use instead then? I tried .dtb and was not able to create the file.) I actually can't even seem to get the file to open without errors with the ios::binary flag. I'm also having trouble passing the filename (as a string-class string converted back by c_str(); it compiles but gives an error).
Sorry for so many little questions, but it all basically sums up to how to efficiently store objects in a file?
What you are trying to do is called serialization. Boost has a very good library for doing this.
What you are trying to do can work, in some cases, with some very important conditions. It will only work for POD types. It is only guaranteed to work for code compiled with the same version of the compiler, and with the same arguments.
(char*)&(*i)
says to take the iterator i, dereference it to get your object, take the address of it and treat it as an array of characters. This is the start of what is being written to the file. sizeof(Referral) is the number of bytes that will be written out.
And no, an iterator is not necessarily a pointer, although pointers meet all the requirements for an iterator.
Question #1 why does ... not compile?
Answer: Because i is not a Referral* -- it's a list<Referral>::iterator; an iterator is an abstraction over a pointer, but it's not a pointer.
Question #2 should I still be using the .txt extension?
Answer: probably not. .txt is associated by many systems to the MIME type text/plain.
Unasked Question: does this work?
Answer: if a Referral has any pointers in it, NO. When you try to read the Referrals from the file, the pointers will be pointing to the location in memory where something used to live, but there is no guarantee that there is anything valid there anymore, least of all the thing the pointers were pointing to originally. Be careful.
isn't an iterator a pointer?
An iterator is something that acts like a pointer from the outside. In most (perhaps all) cases, it is actually some form of object instead of a bare pointer. An iterator might contain a pointer as an internal member variable that it uses to perform its job, but it just as well might contain something else or additional variables if necessary.
Furthermore, even if an iterator has a simple pointer inside of it, it might not point directly at the object you're interested in. It might point to some kind of bookkeeping component used by the container class which it can then use to get the actual object of interest. Fortunately, we don't need to care what those internal details actually are.
So with that in mind, here's what's going on in (char*)&(*i).
*i returns a reference to the object stored in the list.
& takes the address of that object, thus yielding a pointer to the object.
(char*) casts that object pointer into a char pointer.
That snippet of code would be the short form of doing something like this:
Referral& r = *i;
Referral* pr = &r;
char* pc = (char*)pr;
Why do I need to be messing around with addresses anyway?
And why does it need the size?
fstream::write is designed to write a series of bytes to a file. It doesn't know anything about what those bytes mean. You give it an address so that it can write the bytes that exist starting wherever that address points to. You give it a size so that it knows how many bytes to write.
So if I do:
MyClass ExampleObject;
file.write((char*)&ExampleObject, sizeof(ExampleObject));
Then it writes all the bytes that exist directly within ExampleObject to the file.
Note: As others have mentioned, if the object you want to write has members that dynamically allocate memory or otherwise make use of pointers, then the pointed to memory will not be written by a single simple fstream::write call.
will serialization give a significant boost in storage efficiency?
In theory, binary data can often be both smaller than plain-text and faster to read and write. In practice, unless you're dealing with very large amounts of data, you'll probably never notice the difference. Hard drives are large and processors are fast these days.
And efficiency isn't the only thing to consider:
Binary data is harder to examine, debug, and modify if necessary. At least without additional tools, but even then plain-text is still usually easier.
If your data files are going to persist between different versions of your program, then what happens if you need to change the layout of your objects? It can be irritating to write code so that a version 2 program can read objects in a version 1 file. Furthermore, unless you take action ahead of time (like by writing a version number in to the file) then a version 1 program reading a version 2 file is likely to have serious problems.
Will you ever need to validate the data? For instance, against corruption or against malicious changes. In a binary scheme like this, you'd need to write extra code. Whereas when using plain-text, the conversion routines can often help fill the role of validation.
Of course, a good serialization library can help out with some of these issues. And so could a good plain-text format library (for instance, a library for XML). If you're still learning, then I'd suggest trying out both ways to get a feel for how they work and what might do best for your purposes.
What you are trying to do (reading and writing raw memory to/from file) will invoke undefined behaviour, will break for anything that isn't a plain-old-data type, and the files that are generated will be platform dependent, compiler dependent and probably even dependent on compiler settings.
C++ doesn't have any built-in way of serializing complex data. However, there are libraries that you might find useful. For example:
http://www.boost.org/doc/libs/1_40_0/libs/serialization/doc/index.html
Have you already had a look at boost::serialization? It is robust, has good documentation, supports versioning, and if you want to switch to an XML format instead of a binary one, it'll be easier.
fstream::write simply writes raw data to a file. The first parameter is a pointer to the starting address of the data. The second parameter is the length (in bytes) of the object, so write knows how many bytes to write.
file.write((char*)&object,sizeof(className));
This line is converting the address of object to a char pointer.
historyFile.write((char*)i,sizeof(Referral));
This line is trying to convert an object (i) into a char pointer (not valid)
historyFile.write(i,sizeof(Referral));
This line is passing write an object, when it expects a char pointer.