Writing a struct into a stringstream in C++ - c++

I have some structs as:
struct dHeader
{
uint8_t blockID;
uint32_t blockLen;
uint32_t bodyNum;
};
struct dBody
{
char namestr[10];
uint8_t blk_version;
uint32_t reserved1;
};
and I have a stringstream as:
std::stringstream Buffer(std::iostream::in | std::iostream::out);
I want to write a dHeader and multiple dBody structs into Buffer with
Buffer << Hdr1;
Buffer << Body1;
Buffer << Body1;
I get the error:
error: no match for 'operator<<' in 'Buffer << Hdr1'
If I try it with:
Buffer.write(reinterpret_cast<char*>(&Hdr1), sizeof(Hdr1));
Buffer.write(reinterpret_cast<char*>(&Body1), sizeof(Body1));
Buffer.write(reinterpret_cast<char*>(&Body2), sizeof(Body2));
I get confused about the packing and memory alignment.
What is the best way to write a struct into a stringstream?
And read
the stringstream into a regular string?

For each of your structures, you need to define something similar to this:
struct dHeader
{
uint8_t blockID;
uint32_t blockLen;
uint32_t bodyNum;
};
std::ostream& operator<<(std::ostream& out, const dHeader& h)
{
// cast blockID: a uint8_t streams as a single character, not as a number
return out << static_cast<unsigned>(h.blockID) << " " << h.blockLen << " " << h.bodyNum;
}
std::istream& operator>>(std::istream& in, dHeader& h) // non-const h
{
dHeader values; // use extra instance, for setting result transactionally
unsigned blockID = 0; // read into a wider type, again because uint8_t would be read as a character
bool read_ok = static_cast<bool>(in >> blockID >> values.blockLen >> values.bodyNum);
values.blockID = static_cast<uint8_t>(blockID);
if(read_ok /* todo: add here any validation of data in values */)
h = std::move(values);
/* note: this part is only necessary if you add extra validation above
else
in.setstate(std::ios_base::failbit); */
return in;
}
(similar for the other structures).
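With operators like these defined for dHeader and dBody, the insertions from the question compile, and the whole buffer can be read back into a regular string with str(). A minimal usage sketch (Hdr1 and Body1 are the variables from the question):
std::stringstream Buffer(std::ios::in | std::ios::out);
Buffer << Hdr1 << "\n" << Body1 << "\n" << Body1 << "\n"; // separators keep the fields parseable on the way back
std::string serialized = Buffer.str();                    // the stream's contents as a regular std::string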
Edit: A raw (unformatted) read/write implementation has the following drawbacks:
it is not portable; this may not be an issue for a small utility application, if you control where it is compiled and run, but normally, if you take the serialized data and compile/run the app on a different architecture, you will have issues with endianness; you will also need to ensure the types you use are not architecture-dependent (i.e. keep using the uintXX_t types).
it is brittle; the implementation depends on the structures only containing POD types. If you add a char* to your structure later, your code will still compile the same, but expose undefined behavior.
it is obscure (clients of your code would expect either to see an interface defined for I/O, or to assume that your structures support no serialization). Normally, nobody thinks "maybe I can serialize, but using raw unformatted I/O" - at least not when being the client of a custom struct or class implementation.
These issues can be ameliorated by adding I/O stream operators implemented in terms of unformatted reads and writes.
Example code for the operators above:
std::ostream& operator<<(std::ostream& out, const dHeader& h)
{
out.write(reinterpret_cast<const char*>(&h), sizeof(dHeader)); // const char*, since h is const
return out;
}
std::istream& operator>>(std::istream& in, dHeader& h) // non-const h
{
dHeader values; // use extra instance, for setting result transactionally
bool read_ok = static_cast<bool>(in.read(reinterpret_cast<char*>(&values), sizeof(dHeader)));
if(read_ok /* todo: add here any validation of data in values */)
h = std::move(values);
/* note: this part is only necessary if you add extra validation above
else
in.setstate(std::ios_base::failbit); */
return in;
}
This centralizes the code behind an interface (i.e. if your class no longer supports unformatted writes, you will have to change code in one place), and makes your intent obvious (implement serialization for your structure). It is still brittle, but less so.

You can provide an overload for std::ostream::operator<< like
std::ostream& operator<<(std::ostream&, const dHeader&);
std::ostream& operator<<(std::ostream&, const dBody&);
For more information see this stackoverflow question.

Related

How to serialize / deserialize INTs in C++

So, I'd want to implement simple serialization for some int variables in C++ and I really don't know how...
My goal is the following:
I essentially want to be able to convert any integer to binary, preferably with a simple function call.
// Here´s some dummy code of what I essentially want to do
int TestVariable = 25;
String FilePath = "D:\dev\Test.txt";
Serialize(TestVariable, FilePath);
// [...]
// at some later point in the code, when I want to access the file
Deserialize(&TestVariable, FilePath);
I already heard of libraries like Boost, but I think that'd be a bit overkill when I just want to serialize simple variables.
Already thank you in advance for your answers. :D
First of all, there is a little "inconsistency": you're asking for binary serialization into something that looks like a text file. I will assume you really want binary output.
The only thing to take care about when serializing integers is the endianness of the machine (even though most machines are little-endian).
In C++17 or lower the easiest way is a runtime check like
inline bool littleEndian()
{
static const uint32_t test = 0x01020304;
return *((uint8_t *)&test) == 0x04;
}
C++20 introduces a compile-time check so you can rewrite the previous as
constexpr bool littleEndian()
{
return std::endian::native == std::endian::little;
}
At this point, what you want is to write all integers in a standard byte order.
Usually big-endian is used as the standard (network byte order).
template <typename T>
inline static T revert(T num)
{
T res;
for (std::size_t i = 0; i < sizeof(T); i++)
((uint8_t *)&res)[i] = ((uint8_t *)&num)[sizeof(T) - 1 - i];
return res;
}
At this point your serializer would be:
template <typename T>
void serialize(T TestVariable, std::string& FilePath)
{
static_assert(std::is_integral<T>::value); //check that T is of {char, int, ...} type
static_assert(!std::is_reference<T>::value); //check that T is not a reference
std::ofstream o(FilePath, std::ios::binary); // open in binary mode
if (littleEndian())
TestVariable = revert(TestVariable);
o.write((char *)&TestVariable, sizeof(T));
}
and your deserializer would be
template <typename T>
void deserialize(T *TestVariable, std::string FilePath)
{
static_assert(std::is_integral<T>::value);
std::ifstream i(FilePath, std::ios::binary); // open in binary mode
i.read((char *)TestVariable, sizeof(T));
if (littleEndian())
*TestVariable = revert(*TestVariable);
}
Notice: this code is just an example that works with your interface; you just have to include <fstream>, <string>, <cstdint> and <type_traits>, and if you're using the C++20 version, also <bit>.
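For illustration, a round trip with the interface from the question might look like this (the file name here is just an example):
int TestVariable = 25;
std::string FilePath = "test.bin";   // example path
serialize(TestVariable, FilePath);   // writes the value in big-endian byte order
int Restored = 0;
deserialize(&Restored, FilePath);    // Restored is 25 again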
First let me lay down the reasons not to do this:
It will not be safe to reuse the files on a different machine
Speed could be much slower than any library
Complex types like pointers, maps or structures are very difficult to implement right
But if you really want to do something custom made, you can simply use streams. Here is an example using stringstream (I always use stringstream in my unit tests because I want them to be quick), but you can easily modify it to use a file stream.
Please note, the type must be default constructible to be used by the deserialize template function. That may be a very stringent requirement for complex classes.
#include <sstream>
#include <iostream>
template<typename T>
void serialize(std::ostream& os, const T& value)
{
os << value << ' '; // trailing separator so consecutive values do not run together
}
template<typename T>
T deserialize(std::istream& is)
{
T value;
is >> value;
return value;
}
int main()
{
std::stringstream ss;
serialize(ss, 1353);
serialize(ss, std::string("foobar"));
std::cout << deserialize<int>(ss) << " " << deserialize<std::string>(ss) << std::endl;
return 0;
}

What's the most simple way to read and write data from a struct to and from a file in c++ without serialization library?

I am writing a program that regularly stores and reads structs in the form below.
struct Node {
int leftChild = 0;
int rightChild = 0;
std::string value;
int count = 1;
int balanceFactor = 0;
};
How would I read and write nodes to a file? I would like to use the fstream class with seekg and seekp to do the serialization manually, but I'm not sure how it works based on the documentation and am struggling to find decent examples.
[edit] Specified that I do not want to use a serialization library.
This problem is known as serialization. Use a serializing library like e.g. Google's Protocol Buffers or Flatbuffers.
To serialize objects, you will need to stick to the concept that the object is writing its members to the stream and reading members from the stream. Also, member objects should write themselves to the stream (as well as read).
I implemented a scheme using three member functions, and a buffer:
void load_from_buffer(uint8_t * & buffer_pointer);
void store_to_buffer(uint8_t * & buffer_pointer) const;
unsigned int size_on_stream() const;
The size_on_stream function would be called first in order to determine the buffer size for the object (or how much space it will occupy in the buffer).
The load_from_buffer function loads the object's members from a buffer using the given pointer. The function also increments the pointer appropriately.
The store_to_buffer function stores the objects's members to a buffer using the given pointer. The function also increments the pointer appropriately.
This can be applied to POD types by using templates and template specializations.
These functions also allow you to pack the output into the buffer, and load from a packed format.
The reason for I/O to the buffer is so you can use the more efficient block stream methods, such as write and read.
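As a rough sketch (the example struct and member layout are mine, not from the original scheme), the three functions might look like this for a trivial type:
#include <cstdint>
#include <cstring>

struct Point {
    int32_t x;
    int32_t y;

    unsigned int size_on_stream() const { return sizeof(x) + sizeof(y); }

    void store_to_buffer(uint8_t*& p) const {
        std::memcpy(p, &x, sizeof(x)); p += sizeof(x); // write member, advance the pointer
        std::memcpy(p, &y, sizeof(y)); p += sizeof(y);
    }

    void load_from_buffer(uint8_t*& p) {
        std::memcpy(&x, p, sizeof(x)); p += sizeof(x); // read member, advance the pointer
        std::memcpy(&y, p, sizeof(y)); p += sizeof(y);
    }
};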
Edit 1: Writing a node to a stream
The problem with writing or serializing a node (such as a linked list or tree node) is that pointers don't translate to a file. There is no guarantee that the OS will place your program in the same memory location or give you the same area of memory each time.
You have two options: 1) Only store the data. 2) Convert the pointers to file offsets. Option 2) is very complicated as it may require repositioning the file pointer because file offsets may not be known ahead of time.
Also, be aware of variable length records like strings. You can't directly write a string object to a file. Unless you use a fixed string width, the string size will change. You will either need to prefix the string with the string length (preferred) or use some kind of terminating character, such as '\0'. The string length first is preferred because you don't have to search for the end of the string; you can use a block read to read in the text.
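A small sketch of the length-prefixed approach (the helper names are illustrative):
#include <cstdint>
#include <istream>
#include <ostream>
#include <string>

void write_string(std::ostream& out, const std::string& s) {
    std::uint32_t len = static_cast<std::uint32_t>(s.size());
    out.write(reinterpret_cast<const char*>(&len), sizeof(len)); // length first
    out.write(s.data(), len);                                    // then the characters
}

void read_string(std::istream& in, std::string& s) {
    std::uint32_t len = 0;
    in.read(reinterpret_cast<char*>(&len), sizeof(len));
    s.resize(len);
    in.read(&s[0], len);                                         // block read straight into the string
}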
If you replace the std::string by a char buffer, you can use fwrite and fread to write/read your structure to and from disk as a fixed size block of information. Within a single program that should work ok.
The big bug-a-boo is the fact that compilers will insert padding between fields in order to keep the data aligned. That makes the code less portable: if a module is compiled with different alignment requirements, the structure can literally be a different size, throwing your fixed-size assumption out the door.
I would lean toward a well worn in serialization library of some sort.
Another approach would be to overload the operator<< and operator>> for the structure so that it knows how to save/load itself. That would reduce the problem to knowing where to read/write the node. In theory, your left and right child fields could be seek addresses to where the nodes actually reside, while a new field could hold the seek location of the current node.
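A sketch of that fixed-size-record idea (field widths and helper names are illustrative; keep the padding caveat above in mind):
#include <cstdio>

struct NodeRecord {          // fixed-size variant of Node: char buffer instead of std::string
    int  leftChild;
    int  rightChild;
    char value[32];
    int  count;
    int  balanceFactor;
};

// Read/write record number 'index'; assumes 'f' was opened in binary mode ("rb+" / "wb+")
void writeRecord(std::FILE* f, long index, const NodeRecord& n) {
    std::fseek(f, index * static_cast<long>(sizeof(NodeRecord)), SEEK_SET);
    std::fwrite(&n, sizeof(NodeRecord), 1, f);
}

void readRecord(std::FILE* f, long index, NodeRecord& n) {
    std::fseek(f, index * static_cast<long>(sizeof(NodeRecord)), SEEK_SET);
    std::fread(&n, sizeof(NodeRecord), 1, f);
}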
When implementing your own serialization method, the first decision you'll have to make is whether you want the data on disk to be in binary format or textual format.
I find it easier to implement the ability to save to a binary format. The number of functions needed to implement that is small. You need to implement functions that can write the fundamental types, arrays of known size at compile time, dynamic arrays and strings. Everything else can be built on top of those.
Here's something very close to what I recently put into production code.
#include <cstring>
#include <fstream>
#include <cstddef>
#include <stdexcept>
// Class to write to a stream
struct Writer
{
std::ostream& out_;
Writer(std::ostream& out) : out_(out) {}
// Write the fundamental types
template <typename T>
void write(T number)
{
out_.write(reinterpret_cast<char const*>(&number), sizeof(number));
if (!out_ )
{
throw std::runtime_error("Unable to write a number");
}
}
// Write arrays whose size is known at compile time
template <typename T, uint64_t N>
void write(T (&array)[N])
{
for(uint64_t i = 0; i < N; ++i )
{
write(array[i]);
}
}
// Write dynamic arrays
template <typename T>
void write(T array[], uint64_t size)
{
write(size);
for(uint64_t i = 0; i < size; ++i )
{
write(array[i]);
}
}
// Write strings
void write(std::string const& str)
{
write(str.c_str(), str.size());
}
void write(char const* str)
{
write(str, std::strlen(str));
}
};
// Class to read from a stream
struct Reader
{
std::ifstream& in_;
Reader(std::ifstream& in) : in_(in) {}
template <typename T>
void read(T& number)
{
in_.read(reinterpret_cast<char*>(&number), sizeof(number));
if (!in_ )
{
throw std::runtime_error("Unable to read a number.");
}
}
template <typename T, uint64_t N>
void read(T (&array)[N])
{
for(uint64_t i = 0; i < N; ++i )
{
read(array[i]);
}
}
template <typename T>
void read(T*& array)
{
uint64_t size;
read(size);
array = new T[size];
for(uint64_t i = 0; i < size; ++i )
{
read(array[i]);
}
}
void read(std::string& str)
{
// length-prefixed: read the size, then read the characters directly into the string
uint64_t size;
read(size);
str.resize(size);
in_.read(&str[0], size);
if (!in_ )
{
throw std::runtime_error("Unable to read a string.");
}
}
};
// Test the code.
#include <iostream>
void writeData(std::string const& file)
{
std::ofstream out(file, std::ios::binary);
Writer w(out);
w.write(10);
w.write(20.f);
w.write(200.456);
w.write("Test String");
}
void readData(std::string const& file)
{
std::ifstream in(file, std::ios::binary);
Reader r(in);
int i;
r.read(i);
std::cout << "i: " << i << std::endl;
float f;
r.read(f);
std::cout << "f: " << f << std::endl;
double d;
r.read(d);
std::cout << "d: " << d << std::endl;
std::string s;
r.read(s);
std::cout << "s: " << s << std::endl;
}
void testWriteAndRead(std::string const& file)
{
writeData(file);
readData(file);
}
int main()
{
testWriteAndRead("test.bin");
return 0;
}
Output:
i: 10
f: 20
d: 200.456
s: Test String
The ability to write and read a Node is very easily implemented.
void write(Writer& w, Node const& n)
{
w.write(n.leftChild);
w.write(n.rightChild);
w.write(n.value);
w.write(n.count);
w.write(n.balanceFactor);
}
void read(Reader& r, Node& n)
{
r.read(n.leftChild);
r.read(n.rightChild);
r.read(n.value);
r.read(n.count);
r.read(n.balanceFactor);
}
The process you are referring to is known as serialization. I'd recommend Cereal at http://uscilab.github.io/cereal/
It supports JSON, XML and binary serialization and is very easy to use (with good examples).
(Unfortunately it does not support my favourite format yaml)
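For comparison, a sketch of what the Node above might look like with cereal, going by the project's documentation (not verified here):
#include <cereal/archives/binary.hpp>
#include <cereal/types/string.hpp>
#include <fstream>
#include <string>

struct Node {
    int leftChild = 0;
    int rightChild = 0;
    std::string value;
    int count = 1;
    int balanceFactor = 0;

    template <class Archive>
    void serialize(Archive& ar)   // cereal calls this for both saving and loading
    {
        ar(leftChild, rightChild, value, count, balanceFactor);
    }
};

void saveNode(const Node& n, const std::string& file)
{
    std::ofstream os(file, std::ios::binary);
    cereal::BinaryOutputArchive archive(os);
    archive(n);
}

void loadNode(Node& n, const std::string& file)
{
    std::ifstream is(file, std::ios::binary);
    cereal::BinaryInputArchive archive(is);
    archive(n);
}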

Saving a game state using serialization C++

I have a class called Game which contains the following:
vector<shared_ptr<A>> attr; // attributes
D diff; // differences
vector<shared_ptr<C>> change; // change
My question is, how can I write these (save) to a file and read/load it up later?
I thought about using a struct with these in it, and simply saving the struct but I have no idea where to start.
This is my attempt so far, with just trying to save change. I've read up a lot on the issue, and my issue (well, one of them, anyway) here seems to be that I am storing pointers, which after closing the program would be invalid (compounded by the fact that I also free them before exiting).
/* Saves state to file */
void Game::saveGame(string toFile) {
ofstream ofs(toFile, ios::binary);
ofs.write((char *)&this->change, sizeof(C));
/* Free memory code here */
....
exit(0);
};
/* Loads game state from file */
void Game::loadGame(string fromFile) {
ifstream ifs(fromFile, ios::binary);
ifs.read((char *)&this->change, sizeof(C));
this->change.toString(); // display load results
};
Can anyone guide me in the right direction for serializing this data? I'd like to use only standard packages, so no boost.
Thanks.
I have no idea how classes A, C or D are implemented, but that is the first question: how to serialize an object of each of those classes. For the C case, you need to implement something like this:
std::ostream& operator <<(std::ostream& os, const C& c) {
// ... code to serialize c to an output stream
return os;
}
std::istream& operator >>(std::istream& is, C& c) {
// ... code to populate c contents from the input stream
return is;
}
or, if you prefer, create a write() and read() function for that class.
Well, if you want to serialize a vector<shared_ptr<C>>, it seems obvious you don't want to serialize the pointers, but their contents. So you need to dereference each of those pointers and serialize what they point to. If the size of the vector is not known before loading it (i.e., it is not always the same), you'll need to store that information too. Then, you can create a pair of functions to serialize the complete vector:
std::ostream& operator <<(std::ostream& os, const std::vector<std::shared_ptr<C>>& vc) {
os << vc.size() << '\n'; // serialize the size of the vector first
for (const auto& pc : vc)
os << *pc << '\n'; // store the element pointed to by the pointer, not the pointer
return os;
}
std::istream& operator >>(std::istream& is, std::vector<std::shared_ptr<C>>& vc) {
std::size_t size = 0;
is >> size; // read the size of the vector
vc.resize(size);
for (auto& pc : vc) {
C c; // temporary object
is >> c; // read the object stored in the stream
pc = std::make_shared<C>(c); // construct the shared pointer, assuming C has a copy constructor
}
return is;
}
And then,
/* Saves state to file */
void Game::saveGame(string toFile) {
ofstream ofs(toFile);
ofs << change;
....
};
/* Loads game state from file */
void Game::loadGame(string fromFile) {
ifstream ifs(fromFile);
ifs >> change;
};
I know there are a lot of things you still need to resolve. I suggest you to investigate to resolve them so you understand well how to solve your problem.
Not only are you saving pointers, you're trying to save a shared_ptr but using the wrong size.
You need to write serialization functions for all your classes, taking care to never just write the raw bits of a non-POD type. It's safest to always implement member-by-member serialization for everything, because you never know what the future will bring.
Then handling collections of them is just a matter of also storing how many there are.
Example for the Cs:
void Game::save(ofstream& stream, const C& data)
{
// Save data as appropriate...
}
void Game::saveGame(string toFile) {
ofstream ofs(toFile, ios::binary);
size_t count = change.size();
ofs.write((char *)&count, sizeof(count));
for (vector<shared_ptr<C>>::const_iterator c = change.begin(); c != change.end(); ++c)
{
save(ofs, **c);
}
};
shared_ptr<C> Game::loadC(ifstream& stream)
{
shared_ptr<C> data(new C);
// load the object...
return data;
}
void Game::loadGame(string fromFile) {
change.clear();
size_t count = 0;
ifstream ifs(fromFile, ios::binary);
ifs.read((char *)&count, sizeof(count));
change.reserve(count);
for (size_t i = 0; i < count; ++i)
{
change.push_back(loadC(ifs));
}
};
All the error handling is missing of course - you would need to add that.
It's actually a good idea to at least start with text storage (using << and >>) instead of binary. It's easier to find bugs, or mess around with the saved state, when you can just edit it in a text editor.
Writing your own serialization is quite a challenge. Even if you do not use boost serialization, I would recommend you learn how to use it and comprehend how it works rather than discovering it yourself.
When serializing, you end up with a buffer of data whose content you have only a vague idea of. You have to save everything you need to be able to restore it, and read it back chunk by chunk. Example (not compiled, not tested, and not stylish):
void save(ostream& out, const string& s)
{
out << s.size() << ' '; // separator so the length can be parsed back unambiguously
out.write(s.c_str(), s.size());
}
void load(istream& in, string& s)
{
unsigned len;
in >> len;
in.get(); // consume the separator written by save()
s.resize(len);
in.read(&s[0], len); // read directly into the string's storage
}
struct Game
{
void save(ostream& out)
{
player.save(out);
};
void load(istream& in)
{
player.load(in);
}
};
struct Player
{
void save(ostream& out)
{
// save in the same order as loading, serializing everything you need to read it back
save(out, name);
save(out, experience);
}
void load(istream& in)
{
load(in, name);
load(in, experience);
}
};
I do not know why you would do it to yourself instead of using boost, but these are some of the cases you should consider:
- type - you must figure out a way to know what "type of change" you actually have there.
- a string (vector, whatever) - size + data (then the first thing you read back from the string is the length, you resize it and copy the "length" number of characters)
- a pointer - save the data pointed by pointer, then upon deserialization you have to allocate it, construct it (usually default construct) and read back the data and reset the members to their respective values. Note: you have to avoid memory leakage.
- polymorphic pointer - ouch you have to know what type the pointer actually points to, you have to construct the derived type, save the values of the derived type... so you have to save type information
- null pointer... you have to distinguish null pointer so you know that you do not need to further read data from the stream.
- versioning - you have to be able to read a data after you added/removed a field
There is too much of it for you to get a complete answer.

C++ Binary file reading

I'm trying to keep objects, including vectors of objects, in a binary file.
Here's a bit of the load from file code:
template <class T> void read(T* obj,std::ifstream * file) {
file->read((char*)(obj),sizeof(*obj));
file->seekg(int(file->tellg())+sizeof(*obj));
}
void read_db(DB* obj,std::ifstream * file) {
read<DB>(obj,file);
for(int index = 0;index < obj->Arrays.size();index++) {
std::cin.get(); //debugging
obj->Arrays[0].Name = "hi"; //debugging
std::cin.get(); //debugging
std::cout << obj->Arrays[0].Name;
read<DB_ARRAY>(&obj->Arrays[index],file);
for(int row_index = 0;row_index < obj->Arrays[index].Rows.size();row_index++) {
read<DB_ROW>(&obj->Arrays[index].Rows[row_index],file);
for(int int_index = 0;int_index < obj->Arrays[index].Rows[row_index].i_Values.size();int_index++) {
read<DB_VALUE<int>>(&obj->Arrays[index].Rows[row_index].i_Values[int_index],file);
}
}
}
}
And here's the DB/DB_ARRAY classes
class DB {
public:
std::string Name;
std::vector<DB_ARRAY> Arrays;
DB_ARRAY * operator[](std::string);
DB_ARRAY * Create(std::string);
};
class DB_ARRAY {
public:
DB* Parent;
std::string Name;
std::vector<DB_ROW> Rows;
DB_ROW * operator[](int);
DB_ROW * Create();
DB_ARRAY(DB*,std::string);
DB_ARRAY();
};
So now the first argument to the read_db function has correct values, and the vector Arrays on the object has the correct size. However, if I index any value of any object from obj->Arrays, it throws an access violation exception.
std::cout << obj->Arrays[0].Name; // error
std::cout << &obj->Arrays[0]; // no error
The latter always prints the same address, so when I save an object cast to char*, does it save its address too?
As various commenters pointed out, you cannot simply serialize a (non-POD) object by saving / restoring its memory.
The usual way to implement serialization is to implement a serialization interface on the classes. Something like this:
struct ISerializable {
virtual std::ostream& save(std::ostream& os) const = 0;
virtual std::istream& load(std::istream& is) = 0;
};
You then implement this interface in your serializable classes, recursively calling save and load on any members referencing other serializable classes, and writing out any POD members. E.g.:
class DB_ARRAY : public ISerializable {
public:
DB* Parent;
std::string Name;
std::vector<DB_ROW> Rows;
DB_ROW * operator[](int);
DB_ROW * Create();
DB_ARRAY(DB*,std::string);
DB_ARRAY();
virtual std::ostream& save(std::ostream& os) const
{
// serialize out members
return os;
}
virtual std::istream& load(std::istream& is)
{
// unserialize members
return is;
}
};
As count0 pointed out, boost::serialization is also a great starting point.
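For a concrete (illustrative) idea of what the save/load bodies could contain, here is a sketch for DB_ARRAY, written out-of-class just for readability; it assumes DB_ROW implements the same interface, stores strings length-prefixed, and deliberately skips the Parent pointer (pointers have to be re-established after loading):
std::ostream& DB_ARRAY::save(std::ostream& os) const
{
    std::size_t len = Name.size();
    os.write(reinterpret_cast<const char*>(&len), sizeof(len)); // length-prefixed string
    os.write(Name.data(), len);

    std::size_t rows = Rows.size();
    os.write(reinterpret_cast<const char*>(&rows), sizeof(rows));
    for (const DB_ROW& row : Rows)
        row.save(os);                                           // recurse into member objects
    return os;
}

std::istream& DB_ARRAY::load(std::istream& is)
{
    std::size_t len = 0;
    is.read(reinterpret_cast<char*>(&len), sizeof(len));
    Name.resize(len);
    is.read(&Name[0], len);

    std::size_t rows = 0;
    is.read(reinterpret_cast<char*>(&rows), sizeof(rows));
    Rows.resize(rows);                                          // requires DB_ROW to be default constructible
    for (DB_ROW& row : Rows)
        row.load(is);
    return is;
}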
What is the format of the binary data in the file? Until you specify that, we can't tell you how to write it. Basically, you have to specify a format for all of your data types (except char), then write the code to write out that format, byte by byte (or generate it into a buffer); and on the other side, to read it in byte by byte, and reconstruct it.
The C++ standard says nothing (or very little) about the size and representation of the data types, except that sizeof(char) must be 1, and that unsigned char must be a pure binary representation over all of the bits. And on the machines I have access to today (Sun Sparc and PCs), only the character types have a common representation. As for the more complex types, the memory used in the value representation might not even be contiguous: the bitwise representation of an std::vector, for example, is usually three pointers, with the actual values in the vector being found somewhere else entirely.
The functions istream::read and ostream::write are designed for reading data into a buffer for manual parsing, and writing a pre-formatted buffer. The fact that you need to use a reinterpret_cast to use them otherwise should be a good indication that it won't work.
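To make the point concrete, here is a small sketch of writing and reading one type, a 32-bit unsigned integer, in an explicitly specified format (big-endian), byte by byte:
#include <cstdint>
#include <istream>
#include <ostream>

void write_u32(std::ostream& out, std::uint32_t v)
{
    char bytes[4] = {
        static_cast<char>((v >> 24) & 0xFF),  // most significant byte first
        static_cast<char>((v >> 16) & 0xFF),
        static_cast<char>((v >> 8) & 0xFF),
        static_cast<char>(v & 0xFF),
    };
    out.write(bytes, 4);
}

std::uint32_t read_u32(std::istream& in)
{
    unsigned char bytes[4];
    in.read(reinterpret_cast<char*>(bytes), 4);
    return (std::uint32_t(bytes[0]) << 24) | (std::uint32_t(bytes[1]) << 16)
         | (std::uint32_t(bytes[2]) << 8)  |  std::uint32_t(bytes[3]);
}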

Most performant way to inherit structs in C++?

Let's say I have the following:
#pragma pack(push,1)
struct HDR {
unsigned short msgType;
unsigned short msgLen;
};
struct Msg1 {
unsigned short msgType;
unsigned short msgLen;
char text[20];
};
struct Msg2 {
unsigned short msgType;
unsigned short msgLen;
uint32_t c1;
uint32_t c2;
};
.
.
.
I want to be able to reuse the HDR struct so I don't have to keep defining the two members: msgType and msgLen. I don't want to involve vtables for performance reasons but I do want to override operator<< for each of the structs. Based on this last requirement, I don't see how I could possibly use a union since the sizes are also different.
Any ideas on how this can best be handled for pure performance?
Composition seems most appropriate here:
struct Msg1
{
HDR hdr;
char text[20];
};
Whilst you could use C++ inheritance, it doesn't really make sense semantically in this case; a Msg1 is not a HDR.
Alternatively (and possibly preferentially), you could define an abstract Msg base type:
struct Msg
{
HDR hdr;
protected:
Msg() {}
};
and have all your concrete message classes derive from that.
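With composition, the stream operator the question asks about just forwards to the header and then handles the body members; a minimal sketch (assuming an operator<< for HDR exists and raw output of the text buffer is acceptable):
std::ostream& operator<<(std::ostream& out, const Msg1& msg)
{
    out << msg.hdr;                         // reuse the header's operator
    out.write(msg.text, sizeof(msg.text));  // then the Msg1-specific payload
    return out;
}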
What is wrong with normal C++ inheritance?
struct HDR { ... };
struct Msg1: HDR { ... };
Just don't declare any virtual member functions and you'll be all set.
Depending on what you're planning on doing with these structs, you can do this without incurring any vtable overhead like this:
struct HDR {
unsigned short msgType;
unsigned short msgLen;
};
struct Msg1: HDR {
char text[20];
friend ostream& operator<< (ostream& out, const Msg1& msg);
};
struct Msg2: HDR {
uint32_t c1;
uint32_t c2;
friend ostream& operator<< (ostream& out, const Msg2& msg);
};
Since the base class does not have any virtual functions in it, you won't get a vtable for these objects. However, you should be aware that this means that if you have a HDR pointer pointing at an arbitrary subclass, you won't be able to print it out, since it won't be clear which operator<< function to call.
I think, though, that there could be a more fundamental issue here. If you're trying to treat all of these objects uniformly through a base pointer but want to be able to print them all out, then you're going to have to take a memory hit to tag each object. You can either tag them implicitly with a vtable, or explicitly by adding your own type information. There really isn't a good way of getting around this.
If, on the other hand, you just want to simplify your logic by factoring the data members into the base class, then this approach should work.
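For the base-pointer case, one way to make such an explicit tag work without a vtable is a small dispatch on msgType; a sketch (the tag values are assumptions, and it relies on the Msg1/Msg2 : HDR definitions above):
// Assumes msgType == 1 identifies Msg1 and msgType == 2 identifies Msg2
std::ostream& print(std::ostream& out, const HDR* h)
{
    switch (h->msgType) {
    case 1: return out << *static_cast<const Msg1*>(h);
    case 2: return out << *static_cast<const Msg2*>(h);
    default: return out;
    }
}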
A common pattern to deal with binary network protocols is define a struct that contains an union:
struct Message {
Header hdr;
union {
Body1 msg1;
Body2 msg2;
Body3 msg3;
};
};
Semantically you are stating that a Message is composed of a Header and a body that can be one of Body1, Body2... Now, provide insertion and extraction operators for the header and each body separately. Then implement the same operators for the Message by calling them on the Header and, depending on the message type, on the body that makes sense.
Note that the elements of the union do not need to have the same size. The size of the union will be the maximum of its members' sizes. This approach allows for a compact binary representation that can be read/written from the network. Your read/write buffer will be the Message and you will read just the header and then the appropriate body.
// Define operators:
std::ostream& operator<<( std::ostream&, Header const & );
std::ostream& operator<<( std::ostream&, Body1 const & ); // and the rest
// Message operator in terms of the others
std::ostream& operator<<( std::ostream& o, Message const & m )
{
o << m.hdr;
switch ( m.hdr.type ) {
case TYPE1: o << m.msg1; break;
//...
};
return o;
}
// read and dump the contents to stdout
Message message;
read( socket, &message, sizeof message.hdr ); // swap the endianness, check size...
read( socket, &message.msg1, message.hdr.size ); // ...
std::cout << message << std::endl;