In the documentation of string-to-string databases I haven't seen any mention of the common-sense idea of converting an integer to network byte order and writing the resulting bytes as the indexable entity, as opposed to writing the string representation of the number.
Surely the size overhead of writing a 64-bit int as a string into a database must outweigh the trivial complexity of doing an ntohl-style conversion before copying the bytes back into an integer type.
I must therefore be missing something here: what are the downsides of using big-endian bytes rather than strings as indexable entities in string-to-string databases?
(C++/C tags as I am talking about writing bytes into the memory location of a programmatic type, BDB as that is the database I am using, could be kyotodb as well).
The advantage of big-endian in this case is that the strings would sort correctly in ascending order.
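For illustration, here is a minimal sketch of the encoding under discussion; the helper name is my own and nothing here is BDB-specific:

#include <array>
#include <cstdint>

// Encode a 64-bit integer as 8 big-endian bytes so that a bytewise
// (memcmp-style) key comparison matches numeric ordering.
std::array<unsigned char, 8> to_big_endian_key(std::uint64_t v)
{
    std::array<unsigned char, 8> key{};
    for (int i = 7; i >= 0; --i) {
        key[i] = static_cast<unsigned char>(v & 0xFF);
        v >>= 8;
    }
    return key;
}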
If the database architecture cannot natively store 64-bit integers, but you need to store them anyway, stringifying them this way is a way to do it.
Of course if you later upgrade the database to one that can store 64-bit integers natively, you will either be "stuck" with the implementation or have to go through a migration process.
If the database validates that the string data you send is valid in the expected encoding then you can't just give it any data you want. You'll only be able to send such integers as happen to look like a valid encoding. I don't know if BDB or kyotodb do such validation.
Also it seems to me like a hack to try to trick one data type into holding something else, and then rely on all clients to know the trick. Of course that applies whether you're using the string to hold an ASCII-decimal representation of the integer or using the string as a raw memory buffer to hold the integer. It seems to me that it'd be better to use a database that actually holds the types you want to hold, instead of just strings.
Related
We're operating a private blockchain using Quorum for a research project. We're storing large amounts of data in transactions as strings. We've hit the size limit (we think, open to thoughts there too) and have been able to span the data across multiple transactions, etc., so the ability to store everything we want isn't really the issue.
We are storing just a long string of hexadecimal characters, and what we're wondering is if instead of string, we could store them more efficiently as something else and maybe stretch the data field out by switching from string to a different datatype specialized for hexadecimal.
Any ideas? Like I said, we're operational, but I've been having a hard time figuring out whether this is possible, or even a good idea. The hexadecimal "0xab34ef6......26ef" would not resolve to a number or anything (it's a string of independent characters that just happen to be hex digits).
Thanks and take care,
Mark
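As an aside on the question above: a hex string spends two characters per byte, so packing it into raw bytes roughly halves the payload, assuming the platform lets you store arbitrary bytes. A minimal sketch, not Quorum-specific, with an invented function name:

#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Pack a string of hex characters ("ab34ef...") into raw bytes,
// roughly halving the storage compared to keeping it as text.
std::vector<std::uint8_t> hex_to_bytes(const std::string& hex)
{
    if (hex.size() % 2 != 0)
        throw std::invalid_argument("hex string must have even length");
    std::vector<std::uint8_t> bytes;
    bytes.reserve(hex.size() / 2);
    for (std::size_t i = 0; i < hex.size(); i += 2)
        bytes.push_back(static_cast<std::uint8_t>(
            std::stoul(hex.substr(i, 2), nullptr, 16)));
    return bytes;
}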
For a client/server application I need to send and receive C++ objects. I don't need the corresponding classes to do anything fancy, but I want maximal performance (regarding network traffic and computation). So I thought of simply transferring them as binary strings. Basically, I want to be able to do the following:
// Create the original object
MyClass oldObj;            // note: "MyClass oldObj();" would declare a function, not an object
// Save it to a char array
char* save = new char[sizeof(MyClass)];
memcpy(save, &oldObj, sizeof(MyClass));
// Somewhere here, of course, the transfer to the client/server would happen
// Read back from the char array
MyClass newObj;
memcpy(&newObj, save, sizeof(MyClass));
delete[] save;             // don't leak the buffer
My question: What does my class need to fulfill in order for this to work?
Naturally, pointers as members won't work when transferring to another application. But is it sufficient that my class is considered POD (in C++03 and/or C++11) and does not have any pointers or equivalents (like STL containers) as members?
Both machines need to:
Have the same endianness (for int)
Have the same floating-point representation (for double)
Have the same size for all types
Use the same compiler
Use the same flags to build the application
Pointers don't transfer well.
BUT the network is going to be the slowest part here.
The cost of serializing most objects is going to be irrelevant compared to the cost of transfer. Of course, the bigger your object, the higher the cost, but it takes a while before it is significant enough to make a dent.
The higher cost of maintenance is also something you should factor in.
What does my class need to fulfill in order for this to work?
It must not have pointer members, you already mention that.
It must not have members whose size is implementation-defined, like int.
It must not have integer members, due to different endianness.
It must not have floating point members, due to different representations.
...and probably more!
Basically, you cannot do that except in very particular, constrained scenarios. You will have to pick a protocol and make your data conform to it to send it through the network safely.
It's not a big deal, since performance will be bounded by network speed and latency, not by the operations needed to make your values conform to the protocol.
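To make "pick a protocol" concrete, here is a hedged sketch of writing each field with an explicit width and byte order instead of memcpy'ing the whole object; the struct and helper names are invented:

#include <cstdint>
#include <vector>

struct Point {
    std::int32_t x;
    std::int32_t y;
};

// Append a 32-bit value in big-endian ("network") byte order.
void put_u32_be(std::vector<std::uint8_t>& out, std::uint32_t v)
{
    out.push_back(static_cast<std::uint8_t>(v >> 24));
    out.push_back(static_cast<std::uint8_t>(v >> 16));
    out.push_back(static_cast<std::uint8_t>(v >> 8));
    out.push_back(static_cast<std::uint8_t>(v));
}

std::vector<std::uint8_t> serialize(const Point& p)
{
    std::vector<std::uint8_t> buf;
    put_u32_be(buf, static_cast<std::uint32_t>(p.x));
    put_u32_be(buf, static_cast<std::uint32_t>(p.y));
    return buf;  // 8 bytes with the same layout on every platform
}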
How much control do you have over the hardware/OS that this runs on? Are you writing code that is super-portable, or will it ONLY run on 32- and 64-bit x86 Windows [for example]?
To be fully "super-portable", as explained above, you can't have any form of "implementation defined" sized objects (such as int that can be 16, 18, 32, 36 or 64 bits, for example). Such items need to be stored as bytes of defined number and order to make sure it will not get cut off/re-ordered when transferring. Floating point can be even worse...
A lot of "super-portable" applications store their data as text. It's a little slower, but it makes the data trivially portable, since text is just a stream of bytes on whatever architecture you run it on, and it's ordered the same way whichever machine you use (as long as you stick to 0-9, A-Za-z, !?<>,.()*& and a few other characters - and beware of EBCDIC-encoded machines, though they tend to handle ASCII-to-EBCDIC conversion themselves). The other end just needs to convert the text back to strings/integers/floats/doubles, whatever you need.

A conversion from integer to a string of digits takes one divide per digit (using hex or base-36 makes that a bit better, but also much less human-readable - sometimes a good thing, sometimes a bad thing). This is clearly slower than storing 4 bytes. The other drawback is that, depending on the values involved, a number stored as text is often longer than as binary, so your network packets will be a little larger. This will have a greater impact than the conversion itself, as processors can do a lot of math in the time it takes to send 1KB through a 10Gbit network card.

And of course, you need a few extra bytes (spaces, commas, newlines or whatever it may be) so that you can tell the difference between one number 123456 and three numbers 12, 34, 56. [Of course, no need to use ", " between each.] And you need some code to parse the whole thing at the other end once it has arrived.
If you know that your system(s) always have 32-bit integers and IEEE-754 floating point numbers [these are extremely common!], then you may well get away with just worrying about byte order. And if you know that it's always going to be on "x86" or some such, you don't have to worry about byte order either. But you may then have to modify your code when you decide that "running my code on an iPhone would be a good idea". Of course, you could leave it to the iPhone side of things to conform to whatever the rest requires.
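For what it's worth, a minimal sketch of the text approach described above, with an arbitrary space-delimited format and invented function names:

#include <cstdio>
#include <string>

// Encode three integers as space-delimited decimal text.
std::string encode(int a, int b, int c)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%d %d %d", a, b, c);
    return buf;
}

// Parse them back on the receiving end.
bool decode(const std::string& s, int& a, int& b, int& c)
{
    return std::sscanf(s.c_str(), "%d %d %d", &a, &b, &c) == 3;
}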
Other answers have mentioned how it is possible to use a class for this purpose. Personally, I prefer to use a struct instead. In C++, a struct can have member methods/operators, constructors/destructors, supports inheritance, etc., just like a class does. However, a struct has a well-defined and predictable memory layout and can have that layout explicitly aligned via #pragma statements to add/remove the compiler's implicit padding (I have never tried aligning a class before, but I think it is supported). I always use a 1-byte-aligned (packed) struct for data that has to be exchanged outside of the app's process. For all intents and purposes, in modern compilers, a struct is basically identical to a class; just the default visibility of its members is public instead of private. But I like to keep struct and class separated for different purposes. A struct is just a raw container of data that you can freely manipulate, overwrite in memory, etc. A class is an object whose memory layout and padding are compiler-defined and should not be messed with.
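For reference, a small sketch of the explicit alignment the answer mentions; the pragma form below is accepted by MSVC, GCC and Clang, and the struct itself is invented:

#include <cstdint>

#pragma pack(push, 1)            // remove the compiler's implicit padding
struct WireHeader {
    std::uint8_t  type;
    std::uint32_t length;
    std::uint16_t flags;
};
#pragma pack(pop)

static_assert(sizeof(WireHeader) == 7, "no padding was inserted");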
I'm working with c++ and I am writing a budget program (I'm aware many are available--it's just a learning project).
I want to save what I call a book object that contains other objects such as 'pages'. Pages also contain cashflows and entries. The issue is that there can be any amount of entries or cashflows.
I have found a lot of information on saving data to text files but that is not what I want to do.
I have tried looking into using the boost library, as I've been told serialization might be the solution to this problem. I'm not entirely sure which functions in boost are going to help, or even what the proper ways are to use boost.
Most examples of binary files that I have seen involve objects with fixed-size members. For example, a point might contain an x value and a y value that are both doubles. This will always be the case, so it is simple to just use sizeof(Point).
So, I'm either looking for direct answers to this question or useful links to information on how to solve my problem. But please make sure your links are specific to the question.
I've also posted the same question on cplusplus
In general, there are two methods to store variable length records:
Store size integer first, followed by the data.
Store the data, append a sentinel character (or value) at the end.
C-style strings use the 2nd option.
For option one, the number contains the size of the data.
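A minimal sketch of option one, assuming a std::ostream/std::istream pair; note that the length is written in host byte order here, so convert it if the data crosses platforms:

#include <cstdint>
#include <istream>
#include <ostream>
#include <string>

// Option 1: store the size first, followed by the data.
void write_string(std::ostream& os, const std::string& s)
{
    std::uint32_t len = static_cast<std::uint32_t>(s.size());
    os.write(reinterpret_cast<const char*>(&len), sizeof(len));
    os.write(s.data(), len);
}

std::string read_string(std::istream& is)
{
    std::uint32_t len = 0;
    is.read(reinterpret_cast<char*>(&len), sizeof(len));
    std::string s(len, '\0');
    is.read(&s[0], len);
    return s;
}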
Optional Fields
If you're considering a relational database design for optional fields, you would have one table with the known or fixed records and another table containing the optional field together with the record ID.
A simpler route may be to go to something similar to XML: field labels.
Split your object into two sections: static fields and optional fields.
The static field section would be followed by an optional field section. The optional field section would contain the field name, followed by the field data. Read in the field name then the value.
I suggest you review your design to see if optional fields can be eliminated. Also, for complex fields, have them read in their own data.
Storing Binary Data
If the data is shared between platforms, consider using ASCII or textual representation.
Read up on endianness and also on bit sizes. For example, one platform could store its binary representation least significant byte first and use 32 bits (4 bytes). The receiving platform, 64-bit and most significant byte first, would have problems reading the data directly and would need to convert, thus losing any benefit from binary storage.
Similarly, floating point doesn't fare well in binary either. There is also the loss of precision when converting between floating point formats.
When using optional fields in binary, one would use a sentinel byte or number for the field ID rather than a textual name.
Also, data in textual format is much easier to debug than data in binary format.
Consider using a Database
See At what point is it worth using a database?
The boost::serialization documentation is here.
boost::serialization handles user-written classes as well as STL containers: std::deque, std::list, etc.
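A rough example of the usual pattern, with an invented class; the serialize member function and archive types are the ones the boost documentation describes:

#include <fstream>
#include <string>
#include <vector>
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/string.hpp>
#include <boost/serialization/vector.hpp>

struct Page {
    std::string title;
    std::vector<double> entries;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/)
    {
        ar & title;      // same function handles both saving and loading
        ar & entries;
    }
};

int main()
{
    {
        std::ofstream ofs("book.dat");
        boost::archive::text_oarchive oa(ofs);
        const Page p{"groceries", {12.5, 3.0}};
        oa << p;                         // write the object
    }
    std::ifstream ifs("book.dat");
    boost::archive::text_iarchive ia(ifs);
    Page restored;
    ia >> restored;                      // read it back
}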
My guess is that the data is scattered in physical memory (even though the data of a class object is sequential in virtual memory), so in order to send it correctly it needs to be reassembled, and, to be able to send it over the network, an additional step is the transformation from host byte order to network byte order. Is that correct?
Proper serialization can be used to send data to arbitrary systems, that might not work under the same architecture as the source host.
Even an object that consists only of native types can be troublesome to share between two systems because of the extra padding that might exist between and after members, among other things. Sharing raw memory dumps of objects between programs compiled for the same architecture but with different compiler versions can also turn into a big hassle. There is no guarantee of how a variable of type T is actually stored in memory.
If you are not working with pointers (references included), and the data is meant to be read by the same binary as it's dumped from, it's usually safe just to dump a raw struct to disk. But when sending data to another host... (drum roll) serialization is the way to go.
I've heard developers talking about ntohl / htonl / ntohs / htons as methods of serializing/deserializing integers, and when you think about it, saying that isn't far from the truth.
The word "serialization" is often used to describe this "complicated method of storing data in a generic way", but then again; your first programming assignment where you were asked to save information about Dogs to file (hopefully*) made use of serialization, in some way or another.
* "hopefully" meaning that you didn't dump the raw memory representation of your Dog object to disk
Pointers!
If you've allocated memory on the heap you'll just end up with a serialised pointer pointing to an arbitrary area of memory. If you just have a few ints and chars then yes you can just write it out directly to a file, but that then becomes platform dependent because of the byte ordering that you mentioned.
Pointers and data packing (alignment)
If you memcpy your object's memory, there is a danger of copying a wild pointer value instead of the data it points to. There is another risk: if the sender and receiver use different packing (alignment) rules, you will get rubbish after decoding.
Binary representations may be different between different architectures, compilers and even different versions of the same compiler. There's no guarantee that what system A sees as a signed integer will be seen as the same on system B. Byte ordering, word lengths, struct padding etc. will become hard-to-debug problems if you don't properly define the protocol or file format for exchanging the data.
A class (when we speak of C++) may also include virtual method pointers - and they must be reconstructed on the receiving end.
I do embedded software, but this isn't really an embedded question, I guess. I don't (can't for technical reasons) use a database like MySQL, just C or C++ structs.
Is there a generic philosophy of how to handle changes in the layout of these structs from version to version of the program?
Let's take an address book. From program version x to x+1, what if:
a field is deleted (seems simple enough) or added (ok if all can use some new default)?
a string gets longer or shorter? An int goes from 8 to 16 bits of signed / unsigned?
maybe I combine surname/forename, or split name into two fields?
These are just some simple examples; I am not looking for answers to those, but rather for a generic solution.
Obviously I need some hard coded logic to take care of each change.
What if someone doesn't upgrade from version x to x+1, but waits for x+2? Should I try to combine the changes, or just apply x -> x+1 followed by x+1 -> x+2?
What if version x+1 is buggy and we need to roll-back to a previous version of the s/w, but have already "upgraded" the data structures?
I am leaning towards TLV (http://en.wikipedia.org/wiki/Type-length-value) but can see a lot of potential headaches.
This is nothing new, so I just wondered how others do it....
I do have some code where a longer string is pieced together from two shorter segments if necessary. Yuck. Here's my experience after 12 years of keeping some data compatible:
Define your goals - there are two:
new versions should be able to read what old versions write
old versions should be able to read what new versions write (harder)
Add version support to release 0 - At the very least, write a version header. Together with keeping (potentially a lot of) old reader code around, that can solve the first case, if primitively. If you don't want to implement case 2, start rejecting new data right now!
If you need only case 1, and the expected changes over time are rather minor, you are set. Either way, these two things done before the first release can save you many headaches later.
Convert during serialization - at run time, keep the data in memory only in the "new format". Do the necessary conversions and tests at the persistence boundary (convert to the newest format when reading, implement backward compatibility when writing). This isolates version problems in one place, helping to avoid hard-to-track-down bugs.
Keep a set of test data from all versions around.
Store a subset of available types - limit the actually serialized data to a few data types, such as int, string, double. In most cases, the extra storage size is made up by reduced code size supporting changes in these types. (That's not always a tradeoff you can make on an embedded system, though).
e.g. don't store integers shorter than the native width. (you might need to do that when you need to store long integer arrays).
Add a breaker - store some key that allows you to intentionally make old code display an error message saying that this new data is incompatible. You can use a string that is part of the error message - then your old version could display an error message it doesn't know about - "you can import this data using the ConvertX tool from our web site" is not great in a localized application, but still better than "Ungültiges Format" (German for "invalid format").
Don't serialize structs directly - that's the logical/physical separation. We work with a mix of the two approaches below, both having their pros and cons. Neither can be implemented without some runtime overhead, which can pretty much limit your choices in an embedded environment. At any rate, don't use fixed array/string lengths during persistence; that alone should solve half of your troubles.
(A) A proper serialization mechanism - we use a binary serializer that allows you to start a "chunk" when storing, which has its own length header. When reading, extra data is skipped and missing data is default-initialized (which simplifies implementing "read old data" a lot in the serialization code). Chunks can be nested. That's all you need on the physical side, but it needs some sugar-coating for common tasks.
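This is not the serializer described above, just a sketch of the chunk idea under stated assumptions (a seekable std::ofstream and 32-bit length headers in host byte order):

#include <cstdint>
#include <fstream>

// Writer: reserve room for a length header, write the chunk body,
// then come back and patch in the real length.
std::streampos begin_chunk(std::ofstream& os)
{
    std::streampos header_pos = os.tellp();
    std::uint32_t placeholder = 0;
    os.write(reinterpret_cast<const char*>(&placeholder), sizeof(placeholder));
    return header_pos;
}

void end_chunk(std::ofstream& os, std::streampos header_pos)
{
    std::streampos end = os.tellp();
    std::uint32_t len = static_cast<std::uint32_t>(end - header_pos);
    os.seekp(header_pos);
    os.write(reinterpret_cast<const char*>(&len), sizeof(len));
    os.seekp(end);                       // continue writing after the chunk
}

// Reader side: an old version that doesn't understand the extra fields can
// simply seek to header_pos + len; fields it never reads keep their defaults.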
(B) Use a different in-memory representation - the in-memory representation could basically be a map<id, record> where id would likely be an integer, and record could be
empty (not stored)
a primitive type (string, integer, double - the less you use the easier it gets)
an array of primitive types
an array of records
I initially wrote that so the guys don't ask me about every format compatibility question, and while the implementation has many shortcomings (I wish I'd recognized the problem with the clarity I have today...) it can handle most of the change scenarios below.
Querying a non-existent value will by default return a default/zero-initialized value. Keeping that in mind when accessing the data and when adding new data helps a lot: imagine version 1 calculates "foo length" automatically, whereas in version 2 the user can override that setting. A value of zero - in the "calculation type" or "length" - should mean "calculate automatically", and you are set.
The following are "change" scenarios you can expect:
a flag (yes/no) is extended to an enum ("yes/no/auto")
a setting splits up into two settings (e.g. "add border" could be split into "add border on even days" / "add border on odd days".)
a setting is added, overriding (or worse, extending) an existing setting.
For implementing case 2, you also need to consider:
no value may ever be removed or replaced by another one (but in the new format, it could say "not supported", and a new item is added)
an enum may contain unknown values, and there may be other changes to valid ranges
Phew, that was a lot. But it's not as complicated as it seems.
There's a huge concept that the relational database people use.
It's called breaking the architecture into "Logical" and "Physical" layers.
Your structs are both a logical and a physical layer mashed together into a hard-to-change thing.
You want your program to depend on a logical layer. You want your logical layer to -- in turn -- map to physical storage. That allows you to make changes without breaking things.
You don't need to reinvent SQL to accomplish this.
If your data lives entirely in memory, then think about this. Divorce the physical file representation from the in-memory representation. Write the data in some "generic", flexible, easy-to-parse format (like JSON or YAML). This allows you to read in a generic format and build your highly version-specific in-memory structures.
If your data is synchronized onto a filesystem, you have more work to do. Again, look at the RDBMS design idea.
Don't code a simple brainless struct. Create a "record" which maps field names to field values. It's a linked list of name-value pairs. This is easily extensible to add new fields or change the data type of the value.
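A hedged sketch of such a record; a std::map stands in here for the linked list of name-value pairs, and the names are illustrative:

#include <map>
#include <string>

// A "record": field names mapped to field values. Adding a field in
// version x+1 is just inserting a new name; readers that don't know
// the name simply never look it up.
using Record = std::map<std::string, std::string>;

std::string get_field(const Record& r, const std::string& name,
                      const std::string& fallback = "")
{
    auto it = r.find(name);
    return it != r.end() ? it->second : fallback;
}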
Some simple guidelines if you're talking about a structure used as part of a C API:
Have a structure size field at the start of the struct - this way, code using the struct can always ensure it's dealing only with valid data (for example, many of the structures the Windows API uses start with a cbSize field, so those APIs can handle calls made by code compiled against older SDKs, or even against newer SDKs that have added fields).
Never remove a field. If you don't need to use it anymore, that's one thing, but to keep things sane for dealing with code that uses an older version of the structure, don't remove the field.
It may be wise to include a version number field, but often the size field can be used for that purpose.
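A minimal sketch of the size-field pattern those guidelines describe; the struct and field names are made up:

#include <cstdint>
#include <cstring>

// Version 1: the size field always comes first.
struct ConfigV1 {
    std::uint32_t cb;            // sizeof the struct the writer used
    std::uint32_t baud_rate;
};

// Version 2 only appends fields; nothing is removed or reordered.
struct ConfigV2 {
    std::uint32_t cb;
    std::uint32_t baud_rate;
    std::uint32_t retry_count;   // new in v2
};

void read_config(const void* blob)
{
    ConfigV2 cfg{};              // zero defaults for any missing fields
    std::uint32_t cb = 0;
    std::memcpy(&cb, blob, sizeof(cb));
    std::memcpy(&cfg, blob, cb < sizeof(cfg) ? cb : sizeof(cfg));
    // Code that only knows v1 blobs still works: retry_count stays 0.
}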
Here's an example - I have a bootloader that looks for a structure at a fixed offset in a program image for information about that image that may have been flashed into the device.
The loader has been revised, and it supports additional items in the struct for some enhancements. However, an older program image might be flashed, and that older image uses the old struct format. Since the rules above were followed from the start, the newer loader is fully able to deal with that. That's the easy part.
And if the struct is revised further and a new image uses the new struct format on a device with an older loader, that loader will be able to deal with it, too - it just won't do anything with the enhancements. But since no fields have been (or will be) removed, the older loader will be able to do whatever it was designed to do and do it with the newer image that has a configuration structure with newer information.
If you're talking about an actual database that has metadata about the fields, etc., then these guidelines don't really apply.
What you're looking for is forward-compatible data structures. There are several ways to do this. Here is the low-level approach.
struct address_book
{
    unsigned int length; // total length of this struct in bytes
    char items[0];       // variable-length payload follows (zero-length array, a common compiler extension)
};
where 'items' is a variable length array of a structure that describes its own size and type
struct item
{
    unsigned int size;   // how long data[] is
    unsigned int id;     // first name, phone number, picture, ...
    unsigned int type;   // string, integer, jpeg, ...
    char data[0];        // 'size' bytes of payload follow
};
In your code, you iterate through these items (address_book->length will tell you when you've hit the end) with some intelligent casting. If you hit an item whose ID you don't know or whose type you don't know how to handle, you just skip it by jumping over that data (from item->size) and continue on to the next one. That way, if someone invents a new data field in the next version or deletes one, your code is able to handle it. Your code should be able to handle conversions that make sense (if employee ID went from integer to string, it should probably handle it as a string), but you'll find that those cases are pretty rare and can often be handled with common code.
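For illustration, a sketch of that iteration over the structs above; it ignores alignment concerns for brevity and assumes item->size counts only the bytes in data[]:

// Walk every item in an address_book, letting the caller skip
// any id or type it doesn't recognize.
void for_each_item(const address_book* book, void (*handle)(const item*))
{
    const char* p   = book->items;
    const char* end = reinterpret_cast<const char*>(book) + book->length;
    while (p + sizeof(item) <= end) {
        const item* it = reinterpret_cast<const item*>(p);
        handle(it);                      // unknown items are simply ignored
        p += sizeof(item) + it->size;    // jump over data[] to the next item
    }
}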
I have handled this in the past, in systems with very limited resources, by doing the translation on the PC as a part of the s/w upgrade process. Can you extract the old values, translate to the new values and then update the in-place db?
For a simplified embedded db I usually don't reference any structs directly, but put a very lightweight API around any parameters. This allows you to change the physical structure below the API without impacting the higher-level application.
Lately I'm using bencoded data. It's the format that bittorrent uses. It's simple, you can easily inspect it visually, so it's easier to debug than binary data, and it's tightly packed. I borrowed some code from the high-quality C++ libtorrent. For your problem it's as simple as checking that the fields exist when you read them back. And, for a gzip-compressed file, it's as simple as doing:
ogzstream os(meta_path_new.c_str(), ios_base::out | ios_base::trunc);
Bencode map(Bencode::TYPE_MAP);
map.insert_key("url", url.get());
map.insert_key("http", http_code);
os << map;
os.close();
To read it back:
igzstream is(metaf, ios_base::in | ios_base::binary);
is.exceptions(ios::eofbit | ios::failbit | ios::badbit);
try {
    torrent::Bencode b;
    is >> b;
    if( b.has_key("url") )
        d->url = b["url"].as_string();
} catch(...) {
}
I have used Sun's XDR format in the past, but I prefer this now. Also it's much easier to read with other languages such as perl, python, etc.
Embed a version number in the struct, or do as Win32 does and use a size parameter.
If the passed struct is not the latest version, then fix it up.
About 10 years ago I wrote a similar system to the above for a computer game save-game system. I actually stored the class data in a separate class description file, and if I spotted a version number mismatch then I could run through the class description file, locate the class, and then upgrade the binary class data based on the description. This obviously required default values to be filled in for new class member entries. It worked really well and it could be used to auto-generate .h and .cpp files as well.
I agree with S.Lott in that the best solution is to separate the physical and logical layers of what you are trying to do. You are essentially combining your interface and your implementation into one object/struct, and in doing so you are missing out on some of the power of abstraction.
However if you must use a single struct for this, there are a few things you can do to help make things easier.
1) Some sort of version number field is practically required. If your structure is changing, you will need an easy way to look at it and know how to interpret it. Along these same lines, it is sometimes useful to have the total length of the struct stored in a structure field somewhere.
2) If you want to retain backwards compatibility, you will want to remember that code will internally reference structure fields as offsets from the structure's base address (from the "front" of the structure). If you want to avoid breaking old code, make sure to add all new fields to the back of the structure and leave all existing fields intact (even if you don't use them). That way, old code will be able to access the structure (but will be oblivious to the extra data at the end) and new code will have access to all of the data.
3) Since your structure may be changing sizes, don't rely on sizeof(struct myStruct) to always return accurate results. If you follow #2 above, then you can see that you must assume that a structure may grow larger in the future. Calls to sizeof() are calculated once (at compile time). Using a "structure length" field allows you to make sure that when you (for example) memcpy the struct you are copying the entire structure, including any extra fields at the end that you aren't aware of.
4) Never delete or shrink fields; if you don't need them, leave them blank. Don't change the size of an existing field; if you need more space, create a new field as a "long version" of the old field. This can lead to data duplication problems, so make sure to give your structure a lot of thought and try to plan fields so that they will be large enough to accommodate growth.
5) Don't store strings in the struct unless you know that it is safe to limit them to some fixed length. Instead, store only a pointer or array index and create a string storage object to hold the variable-length string data. This also helps protect against a string buffer overflow overwriting the rest of your structure's data.
Several embedded projects I have worked on have used this method to modify structures without breaking backwards/forwards compatibility. It works, but it is far from the most efficient method. Before long, you end up wasting space with obsolete/abandoned structure fields, duplicate data, data that is stored piecemeal (first word here, second word over there), etc etc. If you are forced to work within an existing framework then this might work for you. However, abstracting away your physical data representation using an interface will be much more powerful/flexible and less frustrating (if you have the design freedom to use such a technique).
You may want to take a look at how Boost Serialization library deals with that issue.