Looking around, I found many places explaining how the size of an object (class or struct) is determined. I read about padding, about the fact that a virtual function table influences the size, and that an object of an empty class (one with no data members) has a size of 1 byte. However, I could not find whether these are facts about a particular implementation or requirements of the C++ standard (at least I was not able to find all of them).
In particular, I am in the following situation: I'm working with some data encoded in objects. These objects hold no pointers to other data and do not inherit from any other class, but they have some (non-virtual) methods. I have to put these data into a buffer to send them over a socket. Following what I read above, I simply copy my objects into the send buffer and observe that the data are "serialized" correctly, i.e. each member of the object is copied, and the methods do not affect the byte layout.
I would like to know whether what I observe is merely a property of my compiler's implementation or is prescribed by the standard.
The memory layout of classes is not precisely specified by the C++ standard. Even the memory layout of scalar objects such as integers is not specified. It is up to the language implementation to decide, and generally depends on the underlying hardware. The standard does, however, specify restrictions that the implementation-specific layout must satisfy.
If a type is trivially copyable, then it can be "serialised" by copying its memory into a buffer, and it can be de-serialised back as you describe. However, such trivial serialisation only works when the process that de-serialises it uses the same memory layout. This cannot generally be assumed, since the other process may be running on entirely different hardware and may have been compiled with a different compiler (or a different version of the same compiler).
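A minimal sketch of that kind of trivial serialisation, with a compile-time guard (the Packet type here is hypothetical, not from the question):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <type_traits>
#include <vector>

// Hypothetical packet type: no pointers, no virtual functions.
struct Packet {
    std::int32_t id;
    double value;
};

// Compile-time guard: memcpy-based serialisation is only valid
// for trivially copyable types.
static_assert(std::is_trivially_copyable<Packet>::value,
              "Packet must be trivially copyable to serialise by memcpy");

std::vector<unsigned char> serialise(const Packet& p) {
    std::vector<unsigned char> buf(sizeof(Packet));
    std::memcpy(buf.data(), &p, sizeof(Packet));
    return buf;
}

Packet deserialise(const unsigned char* data) {
    Packet p;
    // Note: the reader is assumed to use the same layout as the writer.
    std::memcpy(&p, data, sizeof(Packet));
    return p;
}
```

The round trip only preserves the values because the same build does both halves; across different ABIs or compilers the bytes may not mean the same thing.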
You should use a POD (plain old data) type. A structure is POD if, among other restrictions, it has no virtual functions, no user-defined constructors, and no private non-static data members.
There is a guarantee that POD members are placed in memory in declaration order.
POD data is subject to alignment padding, and you should specify the alignment you need (it's your decision). See #pragma pack(push, n).
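As an illustration of that pragma, a sketch of how packing changes a struct's size (the type names here are made up; the unpacked size is implementation-defined, so only the packed size is asserted):

```cpp
#include <cassert>
#include <cstdint>

// Default alignment: the compiler may insert padding between members,
// e.g. typically 3 bytes after `tag` so `value` is 4-byte aligned.
struct Unpacked {
    std::uint8_t  tag;
    std::uint32_t value;
};

// Packed layout: no padding, at the cost of potentially slower
// (or, on some architectures, invalid) unaligned access.
// #pragma pack is widely supported (MSVC, GCC, Clang) but not standard.
#pragma pack(push, 1)
struct Packed {
    std::uint8_t  tag;
    std::uint32_t value;
};
#pragma pack(pop)

static_assert(sizeof(Packed) == 5, "packed: 1 + 4 bytes, no padding");
```

sizeof(Unpacked) is typically 8 on common platforms, but that is up to the implementation.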
Related
I have read this article and I encountered the following
A resource handle can be an opaque identifier, in which case it is
often an integer number (often an array index in an array or "table"
that is used to manage that type of resource), or it can be a pointer
that allows access to further information.
So a handle is either an opaque identifier or a pointer that allows access to further information. But from what I understand, these particular pointers are opaque pointers, so what exactly is the difference between these pointers (which are opaque pointers) and opaque identifiers?
One of the literal meanings of "opaque" is "not transparent".
In computer science, an opaque identifier or a handle is one that doesn't expose its inner details. This means we can only access information from it by using some defined interface, and can't otherwise access information about its value (if any) or internal structure.
As an example, a FILE in the C standard library (and available in C++ through <cstdio>) is an opaque type. We don't know if it is a data structure, an integer, or anything else. All we know is that functions like fopen() return a pointer to one (i.e. a FILE *) and other functions (fclose(), fprintf(), ...) accept a FILE * as an argument. If we have a FILE *, we can't reliably do anything with it (e.g. actually write to a file) except through those functions.
The advantage of that is it allows different implementations to use different ways of representing a file. As long as our code uses the supplied functions, we don't have to worry about the internal workings of a FILE, or of the I/O functions. Compiler vendors (or implementers of the standard library) worry about getting the internal details right. We simply use the opaque type FILE, and pointers to it, stick to using standard functions, and our code works with all implementations (compilers, standard library versions, host systems).
An opaque identifier can be of any type. It can be an integer, a pointer, even a pointer to a pointer, or a data structure. Integers and pointers are common choices, but not the only ones. The key is only using a defined set of operations (i.e. a specific interface) to interact with those identifiers, without getting our hands dirty by playing with internal details.
A handle is said to be "opaque" when client code doesn't know how to see what it references. It's simply some identifier that can be used to identify something. Often it will be a pointer to an incomplete type that's only defined within a library and whose definition isn't visible to client code. Or it could just be an integer that references some element in some data structure. The important thing is that the client doesn't know or care what the handle is. The client only cares that it uniquely identifies some resource.
Consider the following interface:
widget_handle create_widget();
void do_a_thing(widget_handle);
void destroy_widget(widget_handle);
Here, it doesn't actually matter to the calling code what a widget_handle is, how the library actually stores widgets, or how the library actually uses a widget_handle to find a particular widget. It could be a pointer to a widget, or it could be an index into some global array of widgets. The caller doesn't care. All that matters is that it somehow identifies a widget.
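One common way the library side might implement such an interface is a pointer to an incomplete type. This is only a sketch with hypothetical widget internals; both the "public" and "hidden" halves are shown in one file for brevity, whereas a real library would split them:

```cpp
#include <string>

// --- public header (what clients see) ---
struct widget;                  // incomplete type: clients cannot inspect it
using widget_handle = widget*;  // opaque pointer handle

widget_handle create_widget();
void do_a_thing(widget_handle);
void destroy_widget(widget_handle);

// --- library implementation (normally hidden from clients) ---
struct widget {                 // full definition lives only here
    int id;
    std::string name;
};

widget_handle create_widget() { return new widget{1, "demo"}; }
void do_a_thing(widget_handle h) { h->id += 1; }
void destroy_widget(widget_handle h) { delete h; }
```

Client code that only includes the public half can pass widget_handle values around but cannot dereference them, which is exactly the opacity being described.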
One possible difference is that an integer handle can have "special" values, while a pointer handle cannot.
For example, the file descriptors 0,1,2 are stdin, stdout, stderr. This would be harder to pull off if you have a pointer for a handle.
You really shouldn't care. It could be anything.
Suppose you buy a ticket from person A for an event. You must give this ticket to person B to access the event.
The nature of the ticket is irrelevant to you, it could be:
a paper ticket,
an alphanumerical code,
a barcode,
a QR-Code,
a photo,
a badge,
whatever.
You don't care. Only A and B care about the nature of the ticket; you are just carrying it around. Only B knows how to verify its validity, and only A knows how to issue a correct ticket.
An opaque pointer could be a memory location directly, while an integer could be an offset from a base address into a table. But how is that relevant to you, the client of the opaque handle?
In classic Mac OS memory management, handles were doubly indirected pointers. The handle pointed to a "master pointer" which was the address of the actual data. This allowed moving the actual storage of the object in memory. When a block was moved, its master pointer would be updated by the memory manager.
In order to use the data the handle ultimately referenced, the handle had to be locked, which would prevent it being moved. (There was little concurrency in the system so unless one was calling the operating system or libraries which might, one could also rely on memory not getting moved. Doing so was somewhat perilous however as code often evolved to call something that could move memory inside a place where that was not expected.)
In this design, the handle is a pointer but it is not an opaque type. A generic handle is a void ** in C, but often one had a typed handle. If you look here you'll find lots of handle types that are more concrete. E.g. StringHandle.
I have always used structs for packaging and receiving packets. Will I gain anything by converting them to classes inherited from a main packet class? Is there another, more "C++ish" way of packaging, and is there any performance gain from it?
This is very general, and various solutions are available. It relates to the topic of serialization: what you describe is a simple model of serialization in which packets contain structs that can be loaded directly into memory and vice versa. I think C and C++ are convenient here because they let you write something like a struct directly to a stream and read it back easily. In other languages, you have to handle byte alignment yourself, or serialize objects before you can write them to streams.
In some cases you need to read a text stream such as XML, SOAP, etc. In some applications you should use structs. In other cases you need to serialize your objects into a stream. It depends. But I think using structs and pointers is more straightforward than using object serialization.
In your case, I think you have two structures for each entity: a struct that travels along the wire or into a file, and a class that holds the entity instance in memory. If you use binary serialization for your objects, you can use a single class for sending, receiving, and keeping the instance.
Data modelling
Generally, your C++ classes should factor the redundancy in the data they model. So, if the packets share some common layout, then you can create a class that models that data and the operations on it. You may find it convenient to derive classes that add other data members reflecting the hierarchy of possible packet data layouts, but other times it may be equally convenient to have unrelated classes reflecting the different layouts of parts of the packet (especially if the length or order of parts of the message can vary).
To give a clearer example of the simplest case fitting in with your ideas - if you have a standard packet header containing say a record id, record size in bytes and sequence id, you might reasonably put those fields into a class, and publicly derive a class for each distinct record id. The base class might have member functions to read those values while converting from network byte order to the local byte order, check sequence ids are incrementing as needed etc. - all accessible to derived classes and their users.
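As a sketch of that idea (the field names and the 12-byte wire layout are assumptions for illustration, not from the question), a header class that decodes network-order fields explicitly rather than trusting the host layout:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical header: record id, record size, sequence id, each a
// 32-bit big-endian (network order) integer on the wire.
class PacketHeader {
public:
    explicit PacketHeader(const unsigned char* wire) {
        std::memcpy(raw_, wire, sizeof raw_);
    }
    std::uint32_t record_id()   const { return read_be(0); }
    std::uint32_t record_size() const { return read_be(4); }
    std::uint32_t sequence_id() const { return read_be(8); }
private:
    // Decode big-endian explicitly instead of relying on host byte order.
    std::uint32_t read_be(std::size_t off) const {
        return (std::uint32_t(raw_[off])     << 24) |
               (std::uint32_t(raw_[off + 1]) << 16) |
               (std::uint32_t(raw_[off + 2]) << 8)  |
                std::uint32_t(raw_[off + 3]);
    }
    unsigned char raw_[12];
};
```

A derived class per record id could then add accessors for that record's payload while reusing these header members.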
Runtime polymorphism
You should be wary of virtual members though - in almost all implementations they will introduce virtual dispatch pointers in your objects that will likely prevent them mirroring the data layout in the network packets. If there's a reason to want run-time polymorphism (and there can easily be, especially when reading packets), you may find it useful to have a polymorphic hierarchy of classes having 1:1 correspondences with the hierarchy of non-polymorphic data-layout classes, and just containing a pointer to the location of the data in memory.
Performance
Using a class or struct with layout deliberately mirroring your network packets potentially lets you operate on that memory in-place and very conveniently, trusting the compiler to create efficient code to do so. Compilers are normally pretty good at that.
The efficiency (speed) of that access should be totally unaffected by the hierarchy of classes you use to model the data. The data offsets involved and calls to non-virtual functions will all be resolved at compile-time.
You may see performance degradation if you introduce virtual functions, as they can prevent inlining and require an extra pointer indirection, but you should put that in context by considering how else, and how often, you'd have switched between the layout-specific operations you need to support (for example, using switch (record_id) all over the place, if (record_id == X), or explicit function pointers).
I'm attempting to implement a Save/Load feature into my small game. To accomplish this I have a central class that stores all the important variables of the game such as position, etc. I then save this class as binary data to a file. Then simply load it back for the loading function. This seems to work MOST of the time, but if I change certain things then try to do a save/load the program will crash with memory access violations. So, are classes guaranteed to have the same structure in memory on every run of the program or can the data be arranged at random like a struct?
Response to Jesus - I mean the data inside the class, so that if I save the class to disk, everything will fit back nicely when I load it.
Save
fout.write((char*) &game, sizeof(Game));
Load
fin.read((char*) &game, sizeof(Game));
Your approach is extremely fragile. With many restrictions, it can work. These restrictions are not worth subjecting your users (or yourself!) to in typical cases.
Some Restrictions:
Never refer to external memory (e.g. a pointer or reference)
Forbid ABI changes/differences. Common case: memory layout and natural alignment differ between 32-bit and 64-bit builds. The user will need a new save file for each ABI.
Not endian compatible.
Altering your type's layouts will break your game. Changing your compiler options can do this.
You're basically limited to POD data.
Use offsets instead of pointers to refer to internal data (This reference would be in contiguous memory).
Therefore, you can safely use this approach in extremely limited situations -- that typically applies only to components of a system, rather than the entire state of the game.
Since this is tagged C++, Boost.Serialization would be a good starting point. It's well tested and abstracts many of the complexities for you.
Even if this would work, just don't do it. Define a file format at the byte-level and write sensible 'convert to file format' and 'convert from file format' functions. You'll actually know the format of the file. You'll be able to extend it. Newer versions of the program will be able to read files from older versions. And you'll be able to update your platform, build tools, and classes without fear of causing your program to crash.
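A sketch of that approach, assuming a hypothetical GameState with two fields: each value is written in a fixed byte order, one field at a time, with a version number so the format can evolve:

```cpp
#include <cassert>
#include <cstdint>
#include <istream>
#include <ostream>
#include <sstream>

// Hypothetical save-game state.
struct GameState {
    std::int32_t level;
    std::int32_t score;
};

// Write one 32-bit value as little-endian, byte by byte, so the file
// format does not depend on host layout, padding, or endianness.
void write_i32(std::ostream& out, std::int32_t v) {
    for (int i = 0; i < 4; ++i)
        out.put(static_cast<char>((static_cast<std::uint32_t>(v) >> (8 * i)) & 0xFF));
}

std::int32_t read_i32(std::istream& in) {
    std::uint32_t v = 0;
    for (int i = 0; i < 4; ++i)
        v |= static_cast<std::uint32_t>(static_cast<unsigned char>(in.get())) << (8 * i);
    return static_cast<std::int32_t>(v);
}

void save(const GameState& g, std::ostream& out) {
    write_i32(out, 1);        // format version, so later code can evolve
    write_i32(out, g.level);
    write_i32(out, g.score);
}

GameState load(std::istream& in) {
    read_i32(in);             // version; a real loader would branch on it
    GameState g;
    g.level = read_i32(in);
    g.score = read_i32(in);
    return g;
}
```

Because the format is defined at the byte level, a newer build of the program (or a different platform) can still read old save files.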
Yes, classes and structures will have the same layout in memory every time your program runs, although I can't say whether the standard enforces this. The machine code generated by C++ compilers uses "hard-coded" offsets to access type fields, so they are fixed. Realistically, the layout will only change if you modify the C++ class definition (field sizes, order, virtual methods, etc.), compile with a different compiler, or change compiler options.
As long as the type is POD and has no pointer fields, it should be safe to simply dump it to a file and read it back with the exact same program. However, because of the above-mentioned concerns, this approach is quite inflexible with regard to versioning and interoperability.
[edit]
To respond to your own edit: do not do this with your "Game" object! It certainly has pointers to other objects, and those objects will no longer exist in memory, or will be elsewhere, when you reload your file.
You might want to take a look at this.
Classes are not guaranteed to have the same contents in memory: pointer members, for example, will hold different addresses each time the program runs.
However, without posting code it is difficult to say with certainty where the problem is.
I have several questions to ask that pertains to data position and alignment in C++. Do classes have the same memory placement and memory alignment format as structs?
More specifically, is data loaded into memory based on the order in which it's declared? Do functions affect memory alignment and data position, or are they allocated to another location? Generally speaking, I keep all of my alignment- and position-dependent stuff, like file headers and algorithmic data, within a struct. I'm just curious to know whether this is as intrinsic to classes as it is to structs, and whether it will translate well into classes if I choose to use that approach.
Edit: Thanks for all your answers. They've really helped a lot.
Do classes have the same memory placement and memory alignment format
as structs?
The memory placement/alignment of objects is not contingent on whether their type was declared as a class or a struct. The only difference between a class and a struct in C++ is that a class has private members by default while a struct has public members by default.
More specifically, is data loaded into memory based on the order in
which it's declared?
I'm not sure what you mean by "loaded into memory". Within an object however, the compiler is not allowed to rearrange variables. For example:
class Foo {
int a;
int b;
int c;
};
The variable c must be located after b, and b must be located after a, within a Foo object. The members are also constructed (initialized) in the order shown in the class declaration when a Foo is created, and destructed in the reverse order when a Foo is destroyed.
It's actually more complicated than this due to inheritance and access modifiers, but that is the basic idea.
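For a standard-layout type like the Foo above, this ordering can be observed with offsetof, as a small sketch:

```cpp
#include <cassert>
#include <cstddef>  // offsetof

// Members with the same access control are laid out in declaration
// order; offsetof makes that observable for a standard-layout type.
struct Foo {
    int a;
    int b;
    int c;
};

static_assert(offsetof(Foo, a) < offsetof(Foo, b), "a precedes b");
static_assert(offsetof(Foo, b) < offsetof(Foo, c), "b precedes c");
```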
Do functions affect memory alignment and data position or are they
allocated to another location?
Functions are not data, so alignment isn't a concern for them. In some executable file formats and/or architectures, function binary code does in fact occupy a separate area from data variables, but the C++ language is agnostic to that fact.
Generally speaking, I keep all of my memory alignment and position
dependent stuff like file headers and algorithmic data within a
struct. I'm just curious to know whether or not this is intrinsic to
classes as it is to structs and whether or not it will translate well
into classes if I chose to use that approach.
Memory alignment is something that's almost automatically taken care of for you by the compiler. It's more of an implementation detail than anything else. I say "almost automatically" since there are situations where it may matter (serialization, ABIs, etc) but within an application it shouldn't be a concern.
With respect with reading files (since you mention file headers), it sounds like you're reading files directly into the memory occupied by a struct. I can't recommend that approach since issues with padding and alignment may make your code work on one platform and not another. Instead you should read the raw bytes a couple at a time from the file and assign them into the structs with simple assignment.
Do classes have the same memory placement and memory alignment format as structs?
Yes. Technically there is no difference between a class and a struct; the only difference is the default member access specification, otherwise they are identical.
More specifically, is data loaded into memory based on the order in which it's declared?
Yes.
Do functions affect memory alignment and data position or are they allocated to another location?
No, they do not affect alignment. Member functions are compiled separately; the object does not contain any reference to them. (To those who say virtual tables do affect members: the answer is yes and no, but this is an implementation detail that does not change the relative offsets between members. The compiler is allowed to add implementation-specific data to the object.)
Generally speaking, I keep all of my memory alignment and position dependent stuff like file headers and algorithmic data within a struct.
OK. Not sure how that affects anything.
I'm just curious to know whether or not this is intrinsic to classes as it is to structs
Class/struct: different names for the same thing.
and whether or not it will translate well into classes if I chose to use that approach.
Choose what approach?
C++ classes essentially translate into structs containing all the instance variables as data, while the member functions are separated out and treated as free functions that accept those structs as an argument.
The exact way instance variables are stored depends on the compiler used, but they generally tend to be in order.
C++ classes do not automatically participate in "persistence" the way binary-mode structures do, and shouldn't have alignment directives attached to them. Keep the classes simple.
Attaching alignment directives to classes may hurt performance and may have other side effects too.
What are the pros and cons of using Plain Old Data (POD) structs/classes in C++? In what cases should one prefer them over non-PODs?
Specifically, do PODs have advantages when working with serialization frameworks? Perhaps when working cross-platform and cross-language?
If you have a gazillion small objects, ensuring that those objects are POD can be a huge advantage.
You can calloc() or malloc() a large chunk of them, saving you a gazillion calls to constructors.
For persistence, rather than streaming out the objects one at a time, you can use fwrite() and fread() on an entire chunk of them for speed.
The disadvantage is, you have to keep your mind flexible to handle non-OOP PODs in your code. PODs are a fallback from old-style C code where you know and care about the layout of your data. When that layout is well-defined, you may optimize by working chunks of memory rather than lots of little pieces.
Please note that what I describe above applies to trivially laid out structures. In other words, if you check the type trait std::is_trivially_copyable<T>::value for this type, you will get true. The requirements for POD are actually even stronger than those for trivially copyable structures. So what I just described applies to all PODs, and even to some non-PODs which happen to be trivially copyable.
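A sketch of the bulk-I/O idea for a trivially copyable type (the Point type and file name are made up for illustration; as noted above, this only round-trips within the same build):

```cpp
#include <cassert>
#include <cstdio>
#include <type_traits>
#include <vector>

// A small trivially copyable record: safe to bulk-copy as raw bytes.
struct Point {
    float x, y;
};
static_assert(std::is_trivially_copyable<Point>::value,
              "bulk fwrite/fread requires trivially copyable elements");

// Persist a whole chunk in one fwrite call instead of one element
// at a time.
bool save_points(const char* path, const std::vector<Point>& pts) {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::size_t n = std::fwrite(pts.data(), sizeof(Point), pts.size(), f);
    std::fclose(f);
    return n == pts.size();
}

std::vector<Point> load_points(const char* path, std::size_t count) {
    std::vector<Point> pts(count);
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return {};
    std::size_t n = std::fread(pts.data(), sizeof(Point), count, f);
    std::fclose(f);
    pts.resize(n);  // keep only the elements actually read
    return pts;
}
```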
There is one advantage of POD in conjunction with constants.
If you declare/define a constant and you use a POD type for it the whole POD is put into the (constant) data section of the executable/library and is available after loading.
If you use a non-POD, the constructor must run to initialize it. Since the order in which constructors of static objects run across translation units is unspecified in C++, you cannot access static A from static B's constructor, or from any code invoked from within static B's constructor.
So using PODs in this case is safe.
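A small sketch of the difference (the type names are made up): a constexpr POD constant needs no dynamic initialisation, while a type with a constructor does:

```cpp
#include <cassert>

// A POD constant can be fully baked into the binary's constant data
// section; no constructor runs at load time, so it is always safe to
// read, even from another static object's constructor.
struct Color { unsigned char r, g, b; };   // POD
constexpr Color kRed{255, 0, 0};           // no dynamic initialisation

// By contrast, a type with a user-defined constructor needs dynamic
// initialisation when used as a static object, and the cross-TU order
// of such initialisations is unspecified:
struct Logger {
    Logger() { /* runs at start-up; order vs. other statics unspecified */ }
};
// static Logger g_logger;  // risky to touch from another static's constructor
```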
PODs can be used in C interfaces, which means you can have a library written in C++ but with a C interface, which can be advantageous.
The disadvantage is that you can't use a constructor to put burden of initialization onto the type itself - the user code will have to take care of that.
PODs have a subtle advantage.
I don't know any portable way to calculate the memory size required by the new[] operator when the array elements are non-POD, so it is difficult to use placement new[] safely for such an array.
If a non-POD structure has a destructor, new[] needs extra space to store the array size, but this extra size is implementation-dependent (although usually it is sizeof(size_t), perhaps plus some padding to provide proper alignment).